When I started out doing computer science “for real” way back, the emphasis, and a lot of the fun, was in the basics of algorithms: optimizing code, getting complex trees and sorts and hashes right and efficient. It was very much about computing defined as processor and memory (with maybe a bit of disk or printing or user interface, accessed at a very high level and providing the data for the interesting stuff). However, as time has gone on, I have come to feel that this is almost too clean, too easy to abstract… and I have gone back to where I started with my first home computer: programming close to the metal.
As I dig deeper into operating systems and the hardware-software interface layer (mostly with the help of virtual platforms), I have come to appreciate just how hard and interesting that part of the computing stack is. I guess it is partially because that is the level where the nice thick layers of middleware and API software we use these days (and which, to be frank, I find fairly boring) break down and have to start dealing with the real world. For some reason, web servers and their programming feel barren and boring compared to dealing with interrupts, memory maps, and bit twiddling.
Several things I have read and heard about recently touch on this subject in various ways. All of them point to the fact that hardware-software interface design is important, and that there are many right and wrong ways of doing it… which are rarely taught in universities and rarely covered in the computing literature.
First, Bryan Cantrill of Sun wrote a blog post blasting transactional memory in November of 2008, which I recently reread and got a bit of an epiphany from in this paragraph:
… Even if one assumes that writing a transaction is conceptually easier than acquiring a lock, and even if one further assumes that transaction-based pathologies like livelock are easier on the brain than lock-based pathologies like deadlock, there remains a fatal flaw with transactional memory: much system software can never be in a transaction because it does not merely operate on memory. That is, system software frequently takes action outside of its own memory, requesting services from software or hardware operating on a disjoint memory (the operating system kernel, an I/O device, a hypervisor, firmware, another process — or any of these on a remote machine). In much system software, the in-memory state that corresponds to these services is protected by a lock — and the manipulation of such state will never be representable in a transaction. So for me at least, transactional memory is an unacceptable solution to a non-problem.
In the same style, Keith Adams at VMware picked up on the above and applied it to the microkernel idea:
It’s interesting to me that, as with microkernels, one of the principle reasons TM will fail is the messy, messy reality of peripheral devices. One of the claims made by microkernel proponents is that, since microkernel drivers are “just user-level processes”, they’ll survive driver failures. And this is almost true, for some definition of “survive.” Suppose you’re a microkernel, and you restart a failed user-level driver; the new driver instance has no way of knowing what state the borked-out driver left the actual, physical hardware in. Sometimes, a blind reset procedure can safely be carried out, but sometimes it can’t. Also, the devices being driven are DMA masters, so they might very well have done something horrible to the kernel even though the buggy driver was “just a user-level app.” And if there were I/Os in flight at failure time, have they happened, or not? Remember, they might not be idempotent… I’m not saying that some best-effort way of dealing with many of these problems is impossible, just that it’s unclear that moving the driver into userspace has helped the situation at all.
So what this shows is that the hardware-software interface is where the really hard and interesting problems start to pop up. I am a big fan of abstraction and layers of indirection as programming methodologies; I am not a Steve Gibson who feels that programs are best written in assembly. But the abstractions do have to allow for the truth of the system underneath. Bad or overly simple abstractions make things more complex, rather than less.
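Cantrill's point can be made concrete with a small sketch. The routine, lock, and register names below are all invented for illustration: a driver holds a lock while updating in-memory state *and* ringing a device doorbell. A memory transaction could roll back the counter, but nothing can un-ring the doorbell once the device has seen it, which is why this critical section can never be a transaction:

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t tx_lock = PTHREAD_MUTEX_INITIALIZER;
static int tx_pending;                  /* in-memory state under the lock */
static volatile uint32_t doorbell;      /* stands in for a real MMIO register */

void submit_packet(void)
{
    pthread_mutex_lock(&tx_lock);
    tx_pending++;     /* pure memory: a transaction could undo this on abort */
    doorbell = 1;     /* device side effect: no rollback is possible */
    pthread_mutex_unlock(&tx_lock);
}

int pending(void)
{
    return tx_pending;
}
```

In a real driver the doorbell would be a store through a pointer obtained from something like `ioremap`, and the device may start DMA the instant the write lands, so the side effect escapes the transaction system entirely.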
Moving on from the software side of things to the hardware design side, Gary Stringham is running a nice series of tips for hardware design. There are lots of interesting issues to confront here as well in making hardware easy, or even worthwhile, to use. He recently linked to a 2004 Microsoft article on how hardware should be designed, based on the experience of the Windows driver team at Microsoft.
If every hardware engineer just understood that write-only registers make debugging almost impossible, our job would be a lot easier. Many products are designed with registers that can be written, but not read. This makes the hardware design easier, but it means there is no way to snapshot the current state of the hardware, or do a debug dump of the registers, or do read-modify-write operations. Now that virtually all hardware design is done in Verilog or VHDL, it takes only a tiny bit of additional effort to make the registers readable.
Another typical hardware trick is registers that automatically clear themselves when written. Although this is sometimes useful, it also makes debugging difficult when overused.
I guess it is kind of sad that even five years later, these same issues do seem to crop up in new products and merit volumes of venom from driver developers… On the other hand, some companies do seem to be getting it. To me, the Freescale designs of recent years seem fairly easy to configure and debug, and do not feature write-only bits in any great number.
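When hardware does ship with write-only registers, the usual driver-side workaround is a software shadow copy of the last value written, so that read-modify-write and debug dumps remain possible. A minimal sketch, with an invented register layout (the bit names and the fake hardware variable are assumptions, not any real device):

```c
#include <assert.h>
#include <stdint.h>

#define CTRL_ENABLE  (1u << 0)          /* hypothetical control bits */
#define CTRL_IRQ_EN  (1u << 3)

static uint32_t ctrl_shadow;            /* last value written to CTRL */
static volatile uint32_t fake_ctrl_hw;  /* stands in for the MMIO register */

static void ctrl_write(uint32_t val)
{
    ctrl_shadow = val;                  /* keep the shadow in sync first */
    fake_ctrl_hw = val;                 /* real driver: *CTRL_REG = val; */
}

/* Read-modify-write on a register we cannot read back: use the shadow. */
void ctrl_set_bits(uint32_t bits)
{
    ctrl_write(ctrl_shadow | bits);
}

void ctrl_clear_bits(uint32_t bits)
{
    ctrl_write(ctrl_shadow & ~bits);
}

/* What a debug dump reports, since the hardware itself cannot tell us. */
uint32_t ctrl_current(void)
{
    return ctrl_shadow;
}
```

The obvious weakness, and the reason readable registers are still better, is that the shadow is only as accurate as the software's model: after a device reset, a firmware intervention, or a bug, the shadow and the real register silently diverge, and there is no way to detect it.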
The article about hardware acceleration for TCP/IP by Mike O'Dell that I discussed in a previous blog post is also relevant: when does the complexity of hardware interfacing negate any performance benefit from an accelerator?
To sum up, I think the interaction of hardware and software in the context of full operating systems and device-driver stacks is a really interesting topic that seems to have gotten very little academic coverage. I hope to be able to help remedy some of this, once I get the Simics setup used in my experiments with hardware accelerators packaged and available for academia. Full-system virtual platforms make for a very good experimental system, especially when you use some third-party or standard operating system rather than just your own controlled code.