I just read an opinion-provoking piece, “Software developer attitudes: just get on with it” by Frank Schirrmeister, as well as the article he links to, “Life imitating art: Hardware development imitating software development” by Glenn Perry. Both articles touch on the long-standing question of who does development “best” in computing. I have heard these arguments many times: software developers think there is something mythical about hardware development that makes things work so much better, with far fewer bugs, while hardware people look at the speed of development and the fanciful fireworks of coding that software engineers can pull off. It could be a case of the grass always looking greener on the other side… but there are some concrete things that are relevant here.
Perry: OO and onwards!
As Glenn Perry notes, hardware languages have only just now discovered object-oriented programming, and he sees a gap of five to ten years from software development practice to hardware development practice. I think dating mainstream OO to 1998 is a bit late… I remember doing my first OO things in Object Pascal and HyperCard around 1990, and it was standard fare at that point in time. With this perspective, maybe there is some hope that hardware development one day takes up current practices like dynamic typing, explicitly threaded languages, and scripting-style languages (as I have argued before on this blog).
It also looks to me as if OO is really mostly applied to verification systems and test benches rather than the actual design. Which is not too surprising: in the end, hardware design is about creating a fixed hardware layout, and polymorphic objects and pointer chains map really poorly to transistors. Considering the incredible difficulty of statically optimizing C++ code, with fun things like escape analysis and very conservative assumptions, I have a hard time seeing synthesis from arbitrary OO code any time soon.
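To make that synthesis problem concrete, here is a tiny sketch in Python (all class names invented for illustration) of why polymorphism resists being mapped to fixed hardware: which concrete block a call dispatches to is only decided at run time, so no static wiring can be derived from the code alone.

```python
# Hypothetical signal-processing pipeline; all names are invented.
# The point: the object graph (the "datapath") is built from run-time
# data, so a synthesis tool cannot statically resolve which process()
# implementation each stage will call.
class Filter:
    def process(self, x):
        raise NotImplementedError

class LowPass(Filter):
    def process(self, x):
        return x // 2          # stand-in for a real computation

class HighPass(Filter):
    def process(self, x):
        return x - x // 2

def build_pipeline(config):
    # The pipeline structure depends on data only known at run time.
    return [LowPass() if s == "lp" else HighPass() for s in config]

def run(stages, x):
    for stage in stages:
        x = stage.process(x)   # virtual dispatch: target unknown statically
    return x

print(run(build_pipeline(["lp", "hp"]), 100))  # prints 25
```

A compiler with whole-program knowledge can sometimes devirtualize such calls, but in general the dispatch targets, and thus the hardware structure, cannot be pinned down at synthesis time.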
But the main use seems to be in test benches, and in that world there is no reason to stick to antiquated concepts like static typing and old-fashioned OO… it would seem quite feasible to move to an asynchronous message-passing parallel model with dynamic types and special support for validating hardware behavior. I think a limiting factor here is a user base with little computer science schooling and little exposure to non-procedural languages.
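As a sketch of what such a testbench style could look like, here is a minimal asynchronous message-passing arrangement in Python. The “DUT” is a trivial adder model, messages are dynamically typed plain dictionaries, and all names are invented, not any real verification framework’s API:

```python
import queue
import threading

# Hypothetical DUT model: consumes request messages, produces responses.
def adder_dut(req_q, resp_q):
    while True:
        msg = req_q.get()
        if msg is None:                  # sentinel: end of test
            resp_q.put(None)
            return
        resp_q.put({"sum": msg["a"] + msg["b"]})

def run_test(stimuli):
    # Driver and DUT communicate only through message queues.
    req_q, resp_q = queue.Queue(), queue.Queue()
    dut = threading.Thread(target=adder_dut, args=(req_q, resp_q))
    dut.start()
    mismatches = 0
    for a, b in stimuli:
        req_q.put({"a": a, "b": b})      # dynamic typing: messages are dicts
        resp = resp_q.get()
        if resp["sum"] != a + b:         # scoreboard check against a reference
            mismatches += 1
    req_q.put(None)                      # shut the DUT model down
    resp_q.get()
    dut.join()
    return mismatches

print(run_test([(1, 2), (10, -3), (0, 0)]))  # prints 0
```

The driver, the scoreboard, and the DUT model are decoupled by the queues, which is exactly the structure one would want for validating concurrent hardware behavior.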
Schirrmeister: The Sword of Damocles
Over to Schirrmeister. His assertion is really that the grass looks greener on the other side, but that this grass would be quite poisonous for those living on your own side:
While I agree with Glenn that the technology in software engineering may be more advanced, for example around languages, I am certain that the required methodologies are fundamentally incompatible. The reason? In hardware engineering the project team always has the “Sword of Damocles” hanging over their head. Mess up the tape-out and you will cost the company several millions in NRE (Non-Recurring Engineering, or, “N”ever “R”eturn “E”ver). In addition you have to consider the lost product revenue because of the several months the project is now delaying production.
In contrast, in software engineering there is always service pack 2. The requirement to get everything right is not as deadly as it is in hardware engineering. And as a result of all that Skip Hovsmith is perfectly right – “Just get on with it” is unfortunately often the approach taken in software engineering. It looks to me like things have to get a lot worse before this approach changes.
I think this makes some sense… but it is really a bit simplistic.
My Synthesis: It is a sliding scale…
I think that the argument that Frank puts forward has some merit. But it is not true that there is always a next patch in the software world. In my experience, we really have a sliding scale of software criticality and ease of patching.
At one extreme, we have hosted web applications where the code lives on the provider’s servers and can be, and is, updated all the time. Here, users usually do not even see version numbers; they just see problems being fixed. Wonderfully quick turnaround, and it is also very easy to just send something out as an eternal beta à la Google… Development can be very productive, since it is very easy to deploy new versions and customers are quite tolerant of issues as long as you can show that you fix things rapidly. Often, the code is really throw-away and not intended to live for more than a few months anyway, so you can live with glitches and bugs; they are not economically meaningful to patch.
Somewhere in the middle there is PC software that you need to update over the Internet (or on diskettes in the good old days). Here, patching is relatively cheap and easy, and most users happily update their software once a week or once a month, basically as often as you can release. Some enterprise customers tend to be slow to adopt the latest patches, as they want to check that things do not break before deployment.
Then you have embedded software, which is usually much harder to patch in the field. I have only updated my mobile phone a few times in two years. And if you go down to printers, routers, and similar devices, they only very rarely get updated. Here, you do need to be quite careful about testing, since the cost of fixing a bug goes up.
The most extreme software is safety-critical, life-determining items like radiation machines, car brakes, aircraft engine controls, missile guidance, and the like. The cost of developing such software is on par with hardware, since there is no second chance if a bug manifests itself. Patching is about as hard as replacing faulty chips, and methodologies have to be fairly heavyweight, just like in hardware design.
So software can be just as precisely engineered and costly and slow to develop as hardware.
The real question is how to reduce the drag induced by the tough requirements of hardware development, so that at least some parts of the development can be fast and furious and fun, and on par with modern software development.
Virtual platforms are where things meet
In my mind, one of the places for fast and fun development is virtual platforms. A virtual platform is initially a vehicle for exploration, experimentation, and iteration on what a design should be. That is an ideal place for software-like attitudes and modern software development. Coding quickly, testing with software, and iterating often is exactly what you want to do up front in a hardware or system design project. That the code is incomplete and not sufficient for synthesis does not matter: you want to get to the essential questions as quickly as possible, or provide a virtual platform for software developers as soon as possible.
The VP itself is not a hardware design; it is really a software program, and it can and should be developed as such. Modeling is programming, not hardware design. I do not see the initial VP or the software-development VP as being something that is (necessarily) to be converted into hardware. They are really design specifications and executable data sheets, which at some point are used to create a more detailed design that can actually be turned into hardware.
A model of a piece of hardware is usually something that a single programmer with good tools can put together in days, as long as it is kept at a software-timed transaction level. There is no need for heavy processes for these kinds of models, and they offer hardware designers a chance to have some fun, go off into software land, and quickly program things in a more relaxed and richer programming environment. Doing the initial work on a virtual platform is really like web development, where a new version can be put out very often to see what users think of it. There is no hardware cost or fab cost to dampen enthusiasm…
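As an illustration of how small such a model can be, here is a sketch of a software-timed transaction-level model of a hypothetical countdown timer in Python. The register map and all names are invented, and a real virtual-platform framework (SystemC/TLM, Simics, and so on) would supply the bus and scheduling plumbing around it:

```python
# Minimal sketch of a software-timed transaction-level device model:
# a hypothetical countdown timer with two memory-mapped registers.
class TimerModel:
    COUNT = 0x0   # current count (read/write)
    CTRL  = 0x4   # bit 0: enable

    def __init__(self):
        self.count = 0
        self.enabled = False

    def read(self, offset):
        # A read transaction completes in zero simulated time.
        if offset == self.COUNT:
            return self.count
        if offset == self.CTRL:
            return 1 if self.enabled else 0
        return 0  # unmapped offsets read as zero

    def write(self, offset, value):
        if offset == self.COUNT:
            self.count = value & 0xFFFFFFFF
        elif offset == self.CTRL:
            self.enabled = bool(value & 1)

    def tick(self):
        # Called by the simulation kernel once per timer clock.
        if self.enabled and self.count > 0:
            self.count -= 1

# The software's view: program the timer, let simulated time pass.
timer = TimerModel()
timer.write(TimerModel.COUNT, 3)
timer.write(TimerModel.CTRL, 1)
for _ in range(3):
    timer.tick()
print(timer.read(TimerModel.COUNT))  # prints 0
```

The model captures just what software can observe through the registers, nothing about gates or cycles, which is exactly why it can be written so quickly.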
Virtual platforms are uniquely interesting in that respect: they are where software and hardware meet, and they can be considered to belong on either side. In my mind, they should be considered software, since that is what makes it possible to develop them with the required speed.
3 thoughts on “Software, Hardware, and Development Methods”
Excellent augmentation and comments on my post, thanks a lot!
I fully agree with your assertion that there are different types of software with different types of requirements. For critical software I would argue that there also is a “Sword of Damocles” at delivery, just like in hardware. That’s why Jack Little was able to show in his keynote at DAC examples of flight systems fully automatically generated with model-based design technologies (http://www.synopsysoc.org/viewfromtop/?p=37), just as Kennedy Carter shows F16 code automatically generated from xUML (http://www.inf.u-szeged.hu/stf/slides/e19.ppt). In these mission-critical designs a failure in the software is so catastrophic that the design flow has the same requirements you will find in hardware – no bugs allowed at release.
But then again, there still are the counter examples too, like “F-22 Squadron Shot Down by the International Date Line” (http://www.defenseindustrydaily.com/f22-squadron-shot-down-by-the-international-date-line-03087/) or “Segways recalled because of software glitch” (http://www.msnbc.msn.com/id/14831843/).
Looks like we are not quite there yet.
You can never rule out errors, it seems… the F-22 sounded like a classic case of “did not test that corner case”. My understanding is that these things happen in hardware too, but not as publicly or as frequently (I hope). Old cases like the Pentium FDIV bug, or a processor that shipped with a quarter of its cache lines unused due to a logic bug, come to mind… or a memory-coherency issue in a recent AMD processor that limited its clock frequency.
But compared to general-purpose software it is sure solid — while writing this comment, my mouse disconnected from my laptop once due to some kind of USB issue with my mobile phone, and Outlook had to be restarted to get out of a crash caused by a corrupted local mail file. Not exactly solid stuff.
In defense of software I must say that the problems being solved in software are many times more difficult than those in hardware. Outlook is a software program using a few hundred megabytes of memory and containing tens of megabytes of code, manipulating totally irregular data on the order of gigabytes… the state space and the possible interactions are just so much bigger than for a typical hardware design.