S4D 2010 Part 2

My previous post on S4D omitted some of my notes from the conference: in particular, the very entertaining and serious keynote by Barry Lock of Lauterbach, and some more philosophical observations on the nature of debugging.

Barry Lock

Barry Lock gave a very entertaining keynote from his viewpoint as, essentially, the champion of physical hardware debug. Lauterbach is clearly focused on debugging real systems using hardware assists, with little involvement in high-level programming or virtual platforms. Barry has been working with computers for longer than I have been alive, and has seen both the semiconductor and the software side of things.

His main message was that you have to take debuggability into account when buying chips for your embedded project. Saving a few cents by buying a chip with no or limited debug features will come back to haunt you, many times, in many nightmares. He has had grown men crying over the phone, asking for a miracle to save their projects after debugging had utterly failed for many months. He has seen startup companies go under, burning all their money chasing the last bug… and claimed that 75% of all product starts never get to market, blaming debug problems for a large proportion of that.

The most important debug feature is trace, which follows the theme of this being the S4D of trace. After trace, you want hardware breakpoints. Apparently, you need at least two hardware breakpoints to debug a system running an RTOS with virtual memory: one to keep watch on the MMU, and one to actually debug code. More are better, but it is rare to see silicon vendors include many more breakpoints.

Barry gave a number of examples of projects that failed because the team did not buy the right hardware. He put the blame both on buyers chasing a few cents of cost in the end product, and on the poor quality of silicon salespeople who sold on price rather than on quality.

He also noted that there seemed to be a positive correlation between industry leadership and buying debuggable hardware. Companies like Bosch, Ericsson, and Nokia always spend the extra money to get hardware that can be debugged, and have the results to show for it.

Philosophy of Debug

During our two panel discussions on debug, two ways of looking at debugging stood out from the crowd.

The first was the observation that debugging today is very much a craft. When things go really bad, you go to the proven expert. Debugging is a craft you learn by apprenticeship with a master, and master debuggers are incredibly valuable to their organizations. This reliance on masters indicates that general programming education to a large extent overlooks debugging as a crucial skill for programmers. It also means that, in the words of one member of the audience, debugging cannot scale: as problems become more complex, we still rely on single individuals, which limits our ability to tackle them.

The second observation was to liken debugging to the diagnosis of human diseases. As systems become more complex, their behavior gets so complicated and rich that it is hard to even precisely identify what a bug is. A simple crash or illegal operation is clear-cut – but what about when the results of a program are just a bit off? When control loops don’t quite do the right thing, but almost? When the quality of the picture on a television just feels wrong? In those cases, we might be looking at composite measurements of many different parameters and factors in a system, making a diagnosis of error based on the whole picture rather than on each factor in isolation.

Based on these observations, I can envision a somewhat weird future where we train computer doctors (as in medical doctors, not PhDs) to diagnose computer problems using holistic, systematic approaches. Such an education could be separate from the training of programmers and testers, as their specialty would be diagnosing system outputs against expected outcomes at a high level, rather than the details of code.

5 thoughts on “S4D 2010 Part 2”

  1. Aloha!

    The “debugging is a craft” notion is very interesting. How should one design an academic or trade course to move this from a “hand down from master to apprentice” approach to efficient education? The areas of SW and system testing/verification have evolved rapidly in the last few years, with lots of good tools, methodologies etc. generated. How can the same be achieved for debugging? Turn the black magic of debugging into efficient engineering.

    I haven’t got the answers (but I have some ideas), but I think it would be well worth discussing and pursuing. Especially by companies like yours. 😉

  2. Joachim Strömbergson: Aloha!
    The “debugging is a craft” notion is very interesting. How should one design an academic or trade course to move this from a “hand down from master to apprentice” approach to efficient education? The areas of SW and system testing/verification have evolved rapidly in the last few years, with lots of good tools, methodologies etc. generated. How can the same be achieved for debugging? Turn the black magic of debugging into efficient engineering.

    I think a good starting point would be to actually start introducing debugging in technology and engineering programs dealing with software!

    Debugging is most often seen as something you learn in the field, through practice. However, if you haven’t had the time and opportunity to learn the tools and methods in the controlled setting of mentored schooling, learning them on the job might not be automatic. This is especially true given the current state of things where debugging is seen as black magic in the software industry.

    Debugging is about validating invariants: you think your program state should be a certain way, or the outcome a certain result, but you must validate throughout the execution that your assumptions hold. Good methodology, sound logic, and the use of a source-level debugger allow one to do this in a straightforward iterative process, when test cases are present. Although these skills don’t come naturally, they can (and must) be taught and practiced.

    I think debugging is hardly black magic, but rather a blind spot: a lot of practitioners in the software (and hardware) industries only know a fraction of what they must know to debug efficaciously.

    I strongly suggest the reading of “The Art of Debugging (with GDB, DDD and Eclipse)” by Norman Matloff and Peter Jay Salzman, No Starch Press 2008 (http://nostarch.com/debugging.htm).

    That book is a good proof that debugging is far from a complex skill to master. All that is required is methodology, practice and knowledge of the tools.

    Surely, there are some issues, like those Jakob’s colleagues raised, that are beyond what can be done straightforwardly. However, a lot could be done with little effort (think about the 80-20 rule) to raise the level of debugging proficiency of new graduates, which would help, in time, tackle the more complex problems with a larger pool of competent people.

    The starting point is to actually teach it as a skill that is equally important to that of synthesizing the coded solution to a problem.
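The invariant-validation process described in this comment can be sketched in code. The following is a minimal, illustrative Python example (the sorted-insert routine and its invariants are my own construction, not code from the discussion): state the assumption about the incoming state as a precondition, and re-validate the invariant after the operation as a postcondition.

```python
def is_sorted(xs):
    """The invariant we care about: the list is in non-decreasing order."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def insert_sorted(xs, value):
    """Insert value into an already-sorted list, keeping it sorted."""
    # Validate the assumption about the incoming state (precondition).
    assert is_sorted(xs), "precondition violated: input list is not sorted"
    pos = 0
    while pos < len(xs) and xs[pos] < value:
        pos += 1
    xs.insert(pos, value)
    # Validate that the invariant still holds on the way out (postcondition).
    assert is_sorted(xs), "postcondition violated: result is not sorted"
    return xs
```

When a run trips one of these assertions, the failing assumption is localized immediately, which is exactly the straightforward iterative narrowing-down the comment describes.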

  3. I think a good starting point would be to actually start introducing debugging in technology and engineering programs dealing with software!

    I think debugging is hardly black magic, but rather a blind spot: a lot of practitioners in the software (and hardware) industries only know a fraction of what they must know to debug efficaciously.

    I strongly suggest the reading of “The Art of Debugging (with GDB, DDD and Eclipse)” by Norman Matloff and Peter Jay Salzman, No Starch Press 2008 (http://nostarch.com/debugging.htm).

    I couldn’t agree more, and that book would be a good starting point for an academic course – with some well-structured labs for hands-on experience. But I would also like to bring in tools like fuzzers to teach how to provoke invariant violations.

    I’ve been somewhat involved in developing a book and course that try to teach how to work in real SW projects. That is, projects where you have existing legacy code that needs to be debugged, extended, modified, etc. Most academic SW courses seem to assume a green field, but in reality that is rarely the case. There are mountains of (crappy) code with which your new code must interwork. And your code will become someone else’s (crappy) code to deal with.

    I believe that learning this fact, and the methodologies and tools for working in such a setting, would make new SW designers (and HW designers too; in this case there is NO difference) much better equipped and able to do a better job.

    Debugging skills (and especially methodologies) would fit very nicely into this.
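The idea raised earlier in the thread of using fuzzers to provoke invariant violations could be sketched like this (a hypothetical Python example; the deliberately buggy sort is an artificial target, not real code from any project): generate random inputs, run them through the function under test, and report any input whose output breaks the invariant.

```python
import random

def is_sorted(xs):
    """The invariant: output must be in non-decreasing order."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def buggy_sort(xs):
    # Deliberately broken: forgets to sort the first three elements.
    return xs[:3] + sorted(xs[3:])

def fuzz_for_violation(func, invariant, trials=1000, seed=1):
    """Throw random inputs at func until one makes the invariant fail."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 8))]
        if not invariant(func(list(xs))):
            return xs  # a concrete counterexample to hand to the debugger
    return None  # nothing provoked a violation in this run

counterexample = fuzz_for_violation(buggy_sort, is_sorted)
```

The payoff is exactly what the comment suggests: the fuzzer turns a vague "sometimes the output looks wrong" into a small, reproducible failing input that the usual source-level debugging process can then work on.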
