Once upon a time, all programming was bare metal programming. You coded to the processor core, you took care of memory, and no operating system got in your way. Over time, as computer programmers, users, and designers got more sophisticated and as more clock cycles and memory bytes became available, more and more layers were added between the programmer and the computer. However, I have recently spotted what might seem like a trend away from ever-thicker software stacks, in the interest of performance and, in particular, latency.
Tag Archives: Communications of the ACM
It used to be that Microsoft was the big, boring, evil company that nobody felt was very inspiring. Today, with competition from Google and Apple as well as a strong internal research department, Microsoft feels very different. There are really interesting and innovative ideas and papers coming out of Microsoft today. It seems that their investments in research and software engineering are generating very sophisticated software tools (and good software products).
I have recently seen a number of examples of what Microsoft does with the user feedback data they collect from their massive installed base. I am not talking about Google-style personal information collection, but rather anonymous collection of user interface and error data, in a way that is designed more to build better products than to target ads.
Poul-Henning Kamp has written a series of columns for the ACM Queue and Communications of the ACM. He is pointed, always controversial, and often quite funny. One recent column was called “The Most Expensive One-Byte Mistake“, which discusses the bad design decision of using null-terminated strings (with the associated buffer overrun risks that would have been easily avoided with a length+data-style string format). Well worth a read. A key part of the article is the dual observation that compilers are starting to try to solve the efficiency problems of null-terminated strings – and that such heavily optimizing compilers are quite often very hard to use.
I just read an interview with Steve Furber, the original ARM designer, in the May 2011 issue of the Communications of the ACM. It is a good read about the early days of the home computing revolution in the UK. He not only designed the ARM processor, but also the BBC Micro and some other early machines.
In the June 2010 issue of Communications of the ACM, as well as the April 2010 edition of the ACM Queue magazine, George Phillips discusses the development of a simulator for the graphics system of the 1977 Tandy/Radio Shack TRS-80 home computer. It is a very interesting read for anyone interested in simulation, as well as a good example of just why this kind of old hardware is much harder to simulate than more recent machines.
I just finished reading the October 2010 issue of Communications of the ACM. It contained some very good articles on performance and parallel computing. In particular, I found the ACM Case Study on the parallelism of Photoshop a fascinating read. There was also the second installment of Cary Millsap’s article series “Thinking Clearly about Performance”.
In the February 2010 issue of the Communications of the ACM there is an article by the team behind the Coverity static analysis tool describing how they went from a research project to a commercial tool. It is quite interesting, and I recognize many of the effects that real customers have on a tool from my own experience at IAR and Virtutech (now part of Wind River).
In the April 2009 issue of Communications of the ACM, Mike Shapiro of Sun (or should we say Oracle now?) has an interesting technical article about what he calls “purpose-built languages“. The article was earlier published in ACM Queue. Essentially, it is about domain-specific languages. He describes how many of the most useful little languages in use for the development of large systems have grown up without formal design, a grammar, or even a name.