This post is a belated comment on the FDL 2009 conference that I attended some months ago. I have had some things in mind for a while, but some recent podcast listening has brought the issues to the fore again. What was striking is the extent to which FDL was about languages only to a very small degree. Compared to programming-language conferences like PLDI, there was precious little innovation going on in input languages, and very little concern for the programming aspects of virtual platform design and hardware modeling.
Walking to and from the conference from my hotel, I listened to a FLOSS Weekly interview with David Heinemeier Hansson, the creator of Ruby on Rails. His approach to programming and languages was quite unlike what was on display at FDL. In his world, anything that is repeated in code should be put into the language or a library. In Ruby, that is easier than in many other languages, as the language can be extended arbitrarily without recompiling the VM. His focus on programmer productivity and convenience is in stark contrast to the FDL discussions, which mostly dealt with how to simulate things in a single language, SystemC. Quite boring from a programming-language perspective.
Another podcast that triggered thoughts on programming and how to improve it using languages was episode 73 of the Stack Overflow podcast. In the listener-questions section, the topic of language evolution came up. Joel and Jeff pointed out that C# is a glowing example of a language that evolves quickly and adds useful features, including ideas from the field of dynamic languages. Quite interesting. They made the crucial point that backwards compatibility in a language is not really needed, as long as you can link code compiled from the old and the new languages together. So if C# 3.0 won’t compile all C# 2.0 code, it is no big deal: you can still keep the old C# 2.0 compiler around and link its output with the new C# 3.0 code.
The key is linkability between modules, not the standard of the input language. Here, Microsoft’s .net system is starting to make a very impressive showing, I think. C#, VB, F#, Python, Ruby — a ton of languages all share the same common language runtime and the basic libraries of .net. After hearing a talk by Tim Harris of Microsoft UK at MCC 2009, I am even more impressed by what .net can do.
.net was also the topic of the FLOSS Weekly interview with the team behind IronPython. IronPython is Python on top of the .net framework, and the interview went into a lot of interesting details on how that has played out. The short answer is: very impressive, very smart, and very much the way things should be.
Note that even if the perspective is that “ESL languages describe a single hardware chip configuration, which is fixed”, having a more dynamic language still helps. Remember that modeling is programming, and anything that makes programming more efficient is a good thing. All you need is a “freeze” operation that says “this particular set of things is my design”. And you might get there by interactively adding and removing things at a command-line interface.
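As a minimal sketch of what I mean (all class and method names here are invented for illustration, not any real tool’s API), a “freeze” operation in a dynamic modeling language could look like this:

```python
# Hypothetical sketch: build a design interactively, then freeze it so
# that the current set of parts becomes the fixed configuration.
class Design:
    def __init__(self):
        self._parts = {}
        self._frozen = False

    def add(self, name, part):
        if self._frozen:
            raise RuntimeError("design is frozen")
        self._parts[name] = part

    def remove(self, name):
        if self._frozen:
            raise RuntimeError("design is frozen")
        del self._parts[name]

    def freeze(self):
        # From here on, "this particular set of things is my design".
        self._frozen = True

d = Design()
d.add("cpu", object())
d.add("uart", object())
d.remove("uart")            # experiment interactively...
d.add("uart16550", object())
d.freeze()                  # ...then fix the configuration
```

The point is that the dynamism is a convenience during construction; the frozen result is just as much a fixed design as one written down statically.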
Working in the OSCI CCI WG, I have come to realize just how useful reflection in languages like Python is (or as we implement it in Simics). When all you have is a statically compiled C++ binary, you cannot easily ask objects for their type and other metadata such as documentation, since that information simply is not there. In Python, by contrast, you can do such inspection, and also extend things at run time, which is very useful. If you want to add configuration hooks to a class, Python makes it dead easy, while C++ makes it a major pain.
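A quick illustration of the kind of inspection and run-time extension I mean (the Device class and the clock_mhz hook are invented for the example):

```python
# Ask an object for its type, documentation, and members, then extend
# the class at run time -- none of which a static C++ binary supports.
class Device:
    """A UART model."""
    def read(self, addr):
        return 0

d = Device()
print(type(d).__name__)   # -> Device
print(type(d).__doc__)    # -> A UART model.
print([m for m in dir(d) if not m.startswith("_")])  # -> ['read']

# Adding a configuration hook after the fact is one line:
Device.clock_mhz = 100
print(d.clock_mhz)        # -> 100
```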
The Dream ESL Language
Overall, what I take from all of this is that a sound design for an “ESL” language, had we started today, would be:
- Basic semantics given by a virtual machine, not an input language.
- Opportunities for several different input languages of potentially very different styles, all compiling and linking into the same VM. That would open the door to real innovation.
- Extensive reflection and introspection features.
- Dynamic reconfiguration during run-time, optionally frozen if the goal is to actually describe some hardware design for synthesis. But such synthesis would be from VM code, not some input language.
Essentially, taking the approach of providing a stable interoperability layer between languages in the form of a VM, and allowing languages to be anything anyone could care to invent.
I suspect that if Microsoft developed more hardware, and developed commercial EDA and ESL tools, we might see more innovation in languages in the EDA and ESL domain. Something about having the profits of what are still partial monopolies gives Microsoft a lot more resources to invest in a whole host of language approaches, as well as in fundamental research. Clearly, a lot of interesting results have emerged, and as Jakob says, it would be very interesting if there were similar moves in EDA/ESL towards better languages. However, even when the big EDA companies had some near-monopoly profits in various domains, they were not very innovative nor very visionary in really pushing design methods and fundamental tools and languages. Given the current state of EDA, I don’t see innovation coming from the largest (albeit, some shrinking) companies any time soon.
That is true to some extent, but there is also the clear appreciation that programmer productivity and ease of use matters far more than any adherence to standards beyond the basic platform ABI and API. What “EDA” could do quite easily is to stop discussing languages for standards and rather move to platform ABI and some particular APIs. An ESL virtual machine would be great… but I guess that won’t replace SystemC any day soon.
Still, to be fair, there are some really cool things that have come out of the EDA field. Things like Bluespec and “e” are pretty cool.
Yes, but interesting to note that neither Bluespec nor “e” came from the “Big 3”. The past many years, most of the ESL cool things have come from small companies. We’ll have to see if the future brings a better track record – all three of the Big 3 have some investments in this area and have done some new things.
True, but the very fact that these languages come from the smaller companies seems to indicate that being inventive is not a matter of money, resources, or manpower.
Getting a language accepted by a large community is another question altogether. And that’s my main rub.
We need to find a way to agree on a level of interoperability that is lower than language source code, so that language innovation can take place. If we are tied to source languages for interoperability, we will be very stuck for a long time.
It seems like SystemC is emerging as the key language for ESL design, being used for new IP design entry, architectural analysis, and virtual prototyping with links to implementation and verification. Read more at my blog at http://tinyurl.com/ydf8pjx
But SystemC is far from a good language, as it is based on the quite old-fashioned C++ core language. No reflection, no support for running on virtual machines, no automatic memory management, horribly intricate semantics, no dynamic linking and loading of modules at runtime, … so I feel we need something better for people to actually program in. It is just not modern enough to be a good language for actually programming models.
Jakob, thanks for referring to Bluespec as “cool”! I fully agree with your desideratum: “Basic semantics given by a virtual machine”. The semantics of BSV (Bluespec SystemVerilog) are explained and understood in terms of an abstract collection of atomic rewrite rules operating on abstract state in abstract discrete time. We have gone through at least three radically different surface syntaxes for the same basic semantic idea, initially a Haskell-like syntax before Bluespec, Inc. was founded (which today we fondly refer to as “Bluespec Classic”), and now BSV which builds on SystemVerilog notations. For a while we even had a syntax based on SystemC (called ESE).
Regarding reflection and introspection in BSV, rules, functions, modules, and interfaces are first-class data types and are completely manipulable using higher-order functions during static elaboration. The FSM embedded domain-specific language in BSV and our recently announced “PAClib” library (Pipeline Architecture Composers) make extensive use of this capability.
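I can’t reproduce BSV here, but the flavor of treating pipeline stages as first-class values and composing them with a higher-order function can be sketched in Python (the stages are invented toy functions, standing in for hardware modules):

```python
# Higher-order composition: stages are first-class values, and the
# composer builds the pipeline structure from them at "elaboration"
# time, analogous in spirit to composing modules in BSV.
def compose_pipeline(*stages):
    def pipeline(x):
        for stage in stages:
            x = stage(x)
        return x
    return pipeline

double = lambda x: 2 * x
inc = lambda x: x + 1

p = compose_pipeline(double, inc, double)
print(p(3))   # (3*2 + 1) * 2 = 14
```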
Another desideratum I’d add is “full synthesizability”, so that even very high-level models can be mapped easily to FPGA/emulation platforms, thereby exploiting their parallelism to provide, effectively, VERY fast simulation.
Synthesizability is an interesting requirement. I think there is a fundamental problem with requiring it, as it gears the language too much towards hardware and away from being a software language. You want to be able to stub things out, use arbitrary data structures, etc.
Chiming in a bit late here… SystemC is at best a clever hack – an unholy combination of C macros and C++ templates. Because C++ is so static and has no reflection capabilities, you often repeat yourself – needing to pass a string to the constructor for signal names, for example – which is very redundant. It violates the DRY principle – Don’t Repeat Yourself.
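For contrast, in a language with reflection the name need not be repeated at all. A sketch using Python’s `__set_name__` hook (Signal and Module are invented names, not any real framework’s API):

```python
# With reflection, the attribute name is picked up automatically at
# class creation time -- no redundant string literal in a constructor.
class Signal:
    def __set_name__(self, owner, name):
        self.name = name    # Python hands us the name; DRY preserved

class Module:
    clk = Signal()
    reset = Signal()

print(Module.clk.name)     # -> clk
print(Module.reset.name)   # -> reset
```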
And then there are the error messages that come out of SystemC. You almost need another parser just to parse them. Basically they are C++ template error messages. I really can’t figure out how hardware engineers are supposed to be able to deal with them. And I suspect that many hardware folks will balk when they see their first multi-page compilation error report. I’ve found that g++ 4.x is much pickier about things than g++ 3.x was – a lot of our code is broken under g++ 4.x (ambiguous operator overloading, ambiguous this and that…).
We definitely need something else. If SystemC is supposed to be the answer, then the question was not formulated correctly.