The Hardware-Software Interface is where the Action Is

When I started out doing computer science “for real” way back, the emphasis and a lot of the fun was in the basics of algorithms: optimizing code, getting complex trees and sorts and hashes right and efficient. It was very much about computing defined as processor and memory (with maybe a bit of disk or printing or user interface accessed at a very high level, providing the data for the interesting stuff). However, as time has gone on, I have come to feel that this is almost too clean, too easy to abstract… and I have gone back to where I started with my first home computer: programming close to the metal.

As I dig deeper into operating systems and the hardware-software interface layer (mostly with the help of virtual platforms), I have come to appreciate just how hard and interesting that part of the computing stack is. I guess it is partially because that is the level where the nice thick layers of middleware and API software we use these days (and which, to be frank, I find fairly boring) break down and have to start dealing with the real world. For some reason, web servers and their programming feel barren and boring compared to dealing with interrupts, memory maps, and bit twiddling.

Several things I have read and heard recently touch on this subject in various ways. All of them point to the fact that hardware-software interface design is important, and that there are right and wrong ways of doing it… ways which are rarely taught in universities and rarely addressed in the computing literature.

First, Bryan Cantrill of Sun wrote a blog post blasting transactional memory in November of 2008, which I recently reread and got a bit of an epiphany from in this paragraph:

… Even if one assumes that writing a transaction is conceptually easier than acquiring a lock, and even if one further assumes that transaction-based pathologies like livelock are easier on the brain than lock-based pathologies like deadlock, there remains a fatal flaw with transactional memory: much system software can never be in a transaction because it does not merely operate on memory. That is, system software frequently takes action outside of its own memory, requesting services from software or hardware operating on a disjoint memory (the operating system kernel, an I/O device, a hypervisor, firmware, another process — or any of these on a remote machine). In much system software, the in-memory state that corresponds to these services is protected by a lock — and the manipulation of such state will never be representable in a transaction. So for me at least, transactional memory is an unacceptable solution to a non-problem.
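To make the “does not merely operate on memory” point concrete, here is a minimal sketch in C of the kind of lock-protected path Cantrill is talking about. The NIC registers, names, and addresses below are entirely made up, and a real kernel driver would use a spinlock and proper MMIO accessors rather than a pthread mutex and raw pointer casts; the point is only that the doorbell write is a side effect no transactional-memory runtime can roll back.

```c
#include <stdint.h>
#include <pthread.h>

/* Hypothetical memory-mapped NIC transmit registers. The names, offsets and
 * base address are made up for illustration, not taken from any real device. */
#define NIC_BASE        0xFE000000u
#define NIC_TX_ADDR     (*(volatile uint32_t *)(NIC_BASE + 0x00))
#define NIC_TX_LEN      (*(volatile uint32_t *)(NIC_BASE + 0x04))
#define NIC_TX_DOORBELL (*(volatile uint32_t *)(NIC_BASE + 0x08))

static pthread_mutex_t tx_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned tx_inflight;    /* in-memory state mirroring the device */

/* Queue one buffer for transmission. The in-memory counter and the device
 * registers have to stay consistent, which is why both updates sit under the
 * same lock. A memory transaction could roll back tx_inflight++ on abort,
 * but it cannot undo the doorbell write: the packet may already be on the
 * wire. That is the "does not merely operate on memory" problem. */
void nic_send(uint32_t dma_addr, uint32_t len)
{
    pthread_mutex_lock(&tx_lock);
    tx_inflight++;              /* memory: a transaction could undo this   */
    NIC_TX_ADDR     = dma_addr; /* I/O: visible to the device immediately  */
    NIC_TX_LEN      = len;
    NIC_TX_DOORBELL = 1;        /* I/O: starts DMA, cannot be rolled back  */
    pthread_mutex_unlock(&tx_lock);
}
```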

In the same style, Keith Adams at VMware picked up on Cantrill's post and applied it to the microkernel idea:

It’s interesting to me that, as with microkernels, one of the principle reasons TM will fail is the messy, messy reality of peripheral devices. One of the claims made by microkernel proponents is that, since microkernel drivers are “just user-level processes”, they’ll survive driver failures. And this is almost true, for some definition of “survive.” Suppose you’re a microkernel, and you restart a failed user-level driver; the new driver instance has no way of knowing what state the borked-out driver left the actual, physical hardware in. Sometimes, a blind reset procedure can safely be carried out, but sometimes it can’t. Also, the devices being driven are DMA masters, so they might very well have done something horrible to the kernel even though the buggy driver was “just a user-level app.” And if there were I/Os in flight at failure time, have they happened, or not? Remember, they might not be idempotent… I’m not saying that some best-effort way of dealing with many of these problems is impossible, just that it’s unclear that moving the driver into userspace has helped the situation at all.
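The “have they happened, or not?” question can be made concrete too. Below is a small, entirely hypothetical C sketch of one best-effort mitigation in the spirit Adams alludes to: the driver journals its intent in memory that survives a driver restart, so the replacement instance can at least tell where its predecessor died. The doorbell register, the journal layout, and the recovery policy are all mine, not from any real microkernel or device.

```c
#include <stdint.h>

/* Hypothetical doorbell register of a made-up DMA engine: writing 1 starts a
 * transfer of a previously programmed buffer. Not a real device. */
#define DMA_DOORBELL (*(volatile uint32_t *)0xFE100000u)

/* Journal kept in memory that outlives the driver process (for example, a
 * shared mapping owned by the microkernel), readable by a restarted instance. */
enum tx_phase { TX_IDLE = 0, TX_ARMED, TX_RUNG, TX_DONE };

struct tx_journal {
    volatile uint32_t phase;    /* holds one of enum tx_phase */
};

/* Record intent before touching the device, and record success afterwards. */
static void start_transfer(struct tx_journal *j)
{
    j->phase = TX_ARMED;        /* "about to ring the doorbell"    */
    DMA_DOORBELL = 1;           /* side effect no restart can undo */
    j->phase = TX_RUNG;         /* "the doorbell really was rung"  */
}

/* What a restarted driver instance can conclude from the journal. */
static const char *diagnose(const struct tx_journal *j)
{
    switch (j->phase) {
    case TX_IDLE:  return "no transfer pending; safe to proceed";
    case TX_RUNG:  return "transfer was started; poll the device for completion";
    case TX_ARMED: return "unknown: the crash hit between journal and doorbell";
    default:       return "previous transfer completed";
    }
}
```

The TX_ARMED case is the whole point: even with a journal, there is a window in which the new instance simply cannot know whether the I/O happened, which is Adams's non-idempotency argument in miniature.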

So what this shows is that the hardware-software interface is where the really hard and interesting problems start to pop up. I am a big fan of abstraction and layers of indirection as programming methodologies, and I am not a Steve Gibson who feels that programs are best written in assembly… but the abstractions do have to allow for the truth of the system underneath. Bad abstractions, or too-simple abstractions, make things more complex rather than less.

Moving on from the software side of things to the hardware design side, Gary Stringham is running a nice series of tips for hardware design. Here, too, there are lots of interesting issues to confront in order to make hardware easy and worthwhile to use. He recently ran a link to a 2004 Microsoft article on how hardware should be designed, based on the experience of the Windows driver team at Microsoft:

If every hardware engineer just understood that write-only registers make debugging almost impossible, our job would be a lot easier. Many products are designed with registers that can be written, but not read. This makes the hardware design easier, but it means there is no way to snapshot the current state of the hardware, or do a debug dump of the registers, or do read-modify-write operations. Now that virtually all hardware design is done in Verilog or VHDL, it takes only a tiny bit of additional effort to make the registers readable.

Another typical hardware trick is registers that automatically clear themselves when written. Although this is sometimes useful, it also makes debugging difficult when overused.
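This is exactly the pain that driver writers end up coding around. A common workaround, sketched below in C against a made-up control register (the device, names and bit layout are mine, not from the article), is to keep a shadow copy in the driver so that read-modify-write and debug dumps at least reflect what the driver believes it wrote, since the hardware itself cannot be asked.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical write-only control register of a made-up device. Reading it
 * returns garbage (or faults), so the driver has to remember what it wrote. */
#define DEV_CTRL        (*(volatile uint32_t *)0xFE200000u)
#define DEV_CTRL_ENABLE (1u << 0)
#define DEV_CTRL_IRQ_EN (1u << 3)

static uint32_t dev_ctrl_shadow;        /* last value written to DEV_CTRL */

/* Route all writes through one helper so the shadow never goes stale. */
static void dev_ctrl_write(uint32_t value)
{
    dev_ctrl_shadow = value;
    DEV_CTRL = value;
}

/* Read-modify-write against the shadow, since the register cannot be read. */
static void dev_ctrl_update(uint32_t clear, uint32_t set)
{
    dev_ctrl_write((dev_ctrl_shadow & ~clear) | set);
}

/* The "debug dump" the article wishes it could take from the hardware; with
 * a write-only register it can only show what the driver thinks is there. */
static void dev_ctrl_dump(void)
{
    printf("DEV_CTRL (shadow) = 0x%08x\n", (unsigned)dev_ctrl_shadow);
}
```

For example, dev_ctrl_update(0, DEV_CTRL_IRQ_EN) enables interrupts without disturbing other bits. The catch is that the shadow only reflects the driver's beliefs: if the device resets itself, or firmware or some other agent touches the register, the shadow silently lies, which is why making the register readable in the RTL remains the better fix.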

I guess it is kind of sad that, even five years later, these same issues still crop up in new products and merit volumes of venom from driver developers… On the other hand, some companies do seem to be getting it. To me, the Freescale designs of recent years seem fairly easy to configure and debug, and do not feature write-only bits in any large number.

The article about hardware acceleration for TCP/IP by Mike O'Dell that I discussed in a previous blog post is also relevant: when does the complexity of hardware interfacing negate any performance benefit from an accelerator?


To sum up, I think the interaction of hardware and software in the context of full operating systems and device-driver stacks is a really interesting topic that does not seem to have gotten very much academic coverage. I hope to be able to help remedy some of this once I get the Simics setup used in my experiments with hardware accelerators packaged and available for academia. Full-system virtual platforms make for a very good experimental system, especially when you use some third-party or standard operating system rather than just your own controlled code.

9 thoughts on “The Hardware-Software Interface is where the Action Is”

  1. Aloha!

    I really like the design guidelines presented by Gary and look forward to his new book. One issue I don’t see discussed very much, imho, is observability/debug access and security. In plain terms: adding a lot of good debug functionality, including observability and even manipulation of internal state, adds potential vectors for info leakage and attacks.

    Disabling/removing debug support in SW normally comes down to not compiling release candidates with debug flags enabled. But in HW this is generally not that easy. Compile-time defines can be used, but the resulting design will probably have a different size and different timings.

    Also, at least for high-volume products, being able to speedily track down major issues might require keeping debug functionality in the product. This makes it hard to remove or permanently disable debug support.

    Finally, as has been shown at conferences, disabling things like JTAG ports by burning internal fuses has been bypassed, both using FIBs and sometimes simply by using the chip in unexpected ways that lead to info leakage and access to what were assumed to be disabled paths.

  2. JoachimS: Aloha!
    I really like the design guidelines presented by Gary and look forward to his new book. One issue I don’t see discussed very much, imho, is observability/debug access and security.

    I think both Gary and I are too concerned with getting things to work and be debuggable, which is the opposite of buttoning things down for security. I wonder what can be done at the hardware-software interface apart from just not exposing certain internal states in any way at all? Maybe the idea of a small hidden trusted state is the best we can do, letting the rest of the system be “open for all to see”.

    The argument that security by obscurity is bad does not hold when the opponent has the device and its software physically in hand. Then hiding things is your only choice…

  3. Aloha!

    I think both Gary and I are too concerned with getting things to work and be debuggable, which is the opposite of buttoning things down for security. I wonder what can be done at the hardware-software interface apart from just not exposing certain internal states in any way at all? Maybe the idea of a small hidden trusted state is the best we can do, letting the rest of the system be “open for all to see”.

    I’m sorry, but I strongly disagree. The “too concerned with getting things to work and be debuggable” attitude is at the core of basically all security problems historically. Taking security into the design methodology from the start is the only way to get systems designed and implemented that meet both application usability and security.

    You can’t just tack it on as an afterthought and hope it all goes well. Security is (or should be) part of the requirement spec, used as one of several important drivers for the development work. Otherwise you will never get it right.

    I’m actually amazed that a heavily methodology-focused person can think like this in 2009.

    The argument that security by obscurity is bad does not hold when the opponent has the device and its software physically in hand. Then hiding things is your only choice…

    Yes and no. Once again, if you had taken security into account during the design phase, you would have been able to assess the minimal amount of things to keep secret and protect them, and to ensure that they are made up of information (IDs, keys, some parameters), not constructs and algorithms. Then you figure out how to protect them in different use cases (debug vs deployed), and what happens and what to do if there is a breach.

    If you had done this, you might have come to the conclusion that, due to the application use cases, a breach of the local device is OK or easily corrected.

    Security by obscurity on the device level is a weak security method that should be combined with other methods of protection as far as possible.

    A final thing about security by obscurity is trust. If you need to keep the internals secret, you will have a bigger problem instilling trust, simply because you can’t tell the customer (and their customers) how things work, for fear of losing the security.

  4. JoachimS: Aloha!

    I’m sorry, but I strongly disagree. The “too concerned with getting things to work and be debuggable” attitude is at the core of basically all security problems historically. Taking security into the design methodology from the start is the only way to get systems designed and implemented that meet both application usability and security.

    I totally agree, and I freely admit that most of the people I meet and serve with products and help are still in the “get it to work” camp. We have tried to discuss using Simics to help in securing things, but commercially it has been a dud so far…

    You can’t just tack it on as an afterthought and hope it all goes well. Security is (or should be) part of the requirement spec, used as one of several important drivers for the development work. Otherwise you will never get it right.

    I agree.

    you would have been able to assess the minimal amount of things to keep secret and protect them, and to ensure that they are made up of information (IDs, keys, some parameters), not constructs and algorithms.

    I believe I was saying the same thing, if less clearly: some kind of small trusted core. Even so, this small trusted core has to be hidden from a user with access to both the hardware and the software, and with no external activation needed, that amounts to security by obscurity: you have an essentially untrusted user.

    Then you figure out how to protect them in different use cases (debug vs deployed), and what happens and what to do if there is a breach.

    This is the interesting key question: how do we do this? It has to be possible to apply debug in some cases and not in others, without changing the hardware. Which to me sounds like you need some really clever design in the hardware itself.
