This post is a follow-up to the DAC panel discussion we had yesterday on how to conquer hardware-dependent software development. Most of the panel turned into a very useful dialogue on virtual platforms and how they are created, rather than on how to actually use them to ease low-level software development. We did get to software eventually though, and had another good dialogue with the audience. Thanks to the tough DAC participants who held out to the end of the last panel of the last day!
As is often the case, after the panel had ended I realized there were several good and important points that I never got around to making… and one of those struck me as worthy of a blog post in its own right. It is the issue of how high-level synthesis can help software design.
At the end of the panel, the last comment from Kees Vissers of Xilinx pointed out that high-level synthesis is a very powerful way to build hardware. I think his point was that hardware and software are not that different… but the remark also got me thinking. If it is the case that high-level synthesis currently makes hardware creation easier, can’t it also be applied to software creation? In particular, if you have a HLS description of a piece of hardware, can’t you also generate its driver software?
I think that makes eminent sense, since one of the hard parts of writing device drivers is simply getting the use of a device's programming registers right. The programming register interface is a really strange thing if you think about it: it is not really native to either software or hardware.
In hardware, devices communicate with each other using FIFOs, signals, or packet-based mechanisms, which do not in general look like programming register writes. You move data by sending a stream of it directly, not by addressing registers word by word. Similarly, on the software side, software units call each other using functions (or higher-level OS abstractions like signals or network packets). They do not put data into addressed registers…
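To make the mismatch concrete, here is a minimal C sketch of what the register-poking side of a driver typically looks like. The device, its register layout, and the control bit are all invented for illustration; a real memory-mapped device would also need the mapping set up correctly and memory barriers in the right places.

```c
#include <stdint.h>

/* Hypothetical register map for an imaginary DMA-style device.
 * The names, layout, and control bits are invented for illustration. */
typedef struct {
    volatile uint32_t src_addr; /* source address              */
    volatile uint32_t dst_addr; /* destination address         */
    volatile uint32_t length;   /* transfer length in bytes    */
    volatile uint32_t control;  /* bit 0 = start transfer      */
} dma_regs_t;

#define DMA_CTRL_START (1u << 0)

/* Much of a driver's job is exactly this: turning a natural software
 * call into the right sequence of register writes, in the right order. */
static void dma_copy(dma_regs_t *regs, uint32_t src, uint32_t dst, uint32_t len)
{
    regs->src_addr = src;
    regs->dst_addr = dst;
    regs->length   = len;
    regs->control  = DMA_CTRL_START; /* writing the start bit kicks off the transfer */
}
```

Neither side of this code is natural: the hardware would rather see a stream of work to do, and the software would rather just call a copy function.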
Today, high-level synthesis as practiced in industry involves describing the function of a device in pretty abstract terms, so that the compiler can make smart decisions on the implementation details. It also makes it easier to try different alternatives in the implementation, trading size, speed, and power consumption against each other. Different types of concurrency and pipelining can be explored.
However, once we get to the hardware-software interface, we get rudely dropped into a world of manually detailing an interface, with no tool support to explore or validate it. Why should that really be the case? I think the hardware-software interface requires just as much care as the internals of the device. After all, it is the external face of the device, and if it is too hard to use, users will not get the full benefit of the device. Here are some previous posts on the nature of interfaces and why they matter: 1, 2, 3, 4.
So I would propose a different take on this, where you apply synthesis at a higher level and generate the hardware internals, the programming register interface, and the software driver from the same source. The interface you design for a device would be a set of function calls expressed in software terms, and thus relatively easy to use from software. Let’s call this SHLS, Software High-Level Synthesis. Or maybe Software-Level Synthesis, SLS.
I am well aware that most operating systems do not expose device drivers to applications as sets of function calls, but rather through fairly non-semantic mechanisms like read/write/ioctl. However, that is easy to overcome by generating a user-level interface library in addition to the raw device driver. Obviously, a tool like this would need some adaptation for each operating system targeted, but that feels like something that could fairly easily be handled by a template system. Compared to the complexity of synthesizing hardware, it seems pretty basic.
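A sketch of what such a generated user-level library might look like, in C. Everything here is hypothetical: the device, the command number, and the argument struct are invented, and the raw OS entry point (an ioctl-style call on most systems) is abstracted as a function pointer to keep the sketch OS-neutral.

```c
#include <stdint.h>
#include <string.h>

/* The raw OS-level entry point, e.g. ioctl on a POSIX system.
 * Abstracted as a function pointer for this OS-neutral sketch. */
typedef int (*raw_cmd_fn)(int fd, unsigned long request, void *arg);

#define FILTERDEV_SET_COEFFS 0x40uL /* invented command number */

typedef struct {
    uint32_t coeffs[4];
} filterdev_coeffs_t;

/* Marshalling helper: packs the arguments the way the driver expects. */
static void filterdev_pack_coeffs(filterdev_coeffs_t *arg, const uint32_t c[4])
{
    memcpy(arg->coeffs, c, sizeof arg->coeffs);
}

/* Generated wrapper: software sees a semantic function call instead of
 * a raw ioctl with magic numbers and a hand-packed struct. */
static int filterdev_set_coeffs(raw_cmd_fn raw_cmd, int fd, const uint32_t c[4])
{
    filterdev_coeffs_t arg;
    filterdev_pack_coeffs(&arg, c);
    return raw_cmd(fd, FILTERDEV_SET_COEFFS, &arg);
}
```

The point is that the magic numbers and struct layouts stay hidden inside generated code, kept consistent with the hardware because both come from the same source.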
If you hide the software-hardware interface inside a black box like this, it can also be implemented in quite different and interesting ways. For example, you could imagine using a small memory-mapped buffer where you enter commands and data and then ask the hardware to “parse” it, rather than using discrete registers with immediate effects. Or you could optimize the encoding of the relevant hardware operations into a bit-compacted representation that no human would want to write by hand, but which is no problem for the machine.
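The command-buffer variant could be sketched like this. The buffer size, the doorbell mechanism, and the 4-bit-opcode/28-bit-operand encoding are all invented for illustration; the point is that a generator is free to pick any encoding, since no human ever writes it.

```c
#include <stdint.h>
#include <string.h>

/* Sketch of a command-buffer interface: instead of discrete registers
 * with immediate effects, software packs commands into a shared buffer
 * and then rings a doorbell to ask the hardware to "parse" it. */
typedef struct {
    uint8_t  buf[64];  /* shared command memory                 */
    uint32_t len;      /* bytes of commands written so far      */
    uint32_t doorbell; /* write 1 to hand the buffer to hardware */
} cmdq_t;

/* Bit-compacted encoding no human would want to write by hand:
 * 4-bit opcode in the top bits, 28-bit operand below it. */
static void cmdq_emit(cmdq_t *q, uint8_t op, uint32_t operand)
{
    uint32_t word = ((uint32_t)(op & 0xFu) << 28) | (operand & 0x0FFFFFFFu);
    memcpy(q->buf + q->len, &word, sizeof word);
    q->len += (uint32_t)sizeof word;
}

static void cmdq_kick(cmdq_t *q)
{
    q->doorbell = 1; /* hardware now parses buf[0..len) */
}
```

The generated driver and the generated hardware parser would agree on the encoding by construction, which is exactly what hand-written register interfaces struggle to guarantee.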
Another option is obviously to just stop at the hardware-software interface, but let the tool help the hardware designer build the programming register interface and explore various options for it. Not having to invent a register layout from scratch should make that job much easier.