S4D 2010

Looks like S4D (and the co-located FDL) is becoming my most regular conference. S4D is a very interactive event. With some 20 to 30 people in the room, many of them also presenting papers at the conference, it turns into a workshop at its best. There was plenty of discussion going on during the sessions and the breaks, and I think we all got new insights and ideas.

S4D Talks, Themes, and Topics

More is available in “S4D part 2”.

Tracing and Instrumentation

The papers presented covered a wide range of topics from a variety of angles. Still, everybody felt that two topics kept coming back in various forms in a majority of the papers and discussions: tracing and instrumentation.

Code instrumentation is not a dirty word anymore. The traditional judgment that inserting probes into your software is plain bad does not apply anymore, at least not in the minds of the people at S4D. Instrumentation was applied to drivers, OS kernels, and regular user-level software. I think the key insight is that there is clear value in having the developers who write a piece of software also mark the points of interest in the code. When analyzing a trace of an execution, the information in the trace is then meaningful to the software developers, as it is at the right level of abstraction. Instrumentation naturally produces traces, which can be fed out using shared memory, networks, special-purpose hardware, and more.

One of the instrumentation trace solutions presented (the SVEN system from Intel Digital Home, presented by Pat Brouillette) actually leaves the instrumentation in place in shipping customer systems. In this way, you cannot really claim that instrumentation is intrusive – it is just part of the software, always. Customers can even activate the tracing in deployed systems and ship the traces back to the developers for analysis of bugs found in the field. It is another approach to record and replay that touches on my paper on transporting bugs with checkpoints.
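
To make this concrete, here is a minimal sketch of what such always-present, field-activatable instrumentation can look like. All the names (trace_event, g_trace_enabled, the record layout) are my own invention for illustration – real systems like SVEN define their own formats and transports.

    #include <atomic>
    #include <chrono>
    #include <cstdint>
    #include <cstdio>

    struct TraceRecord {
        uint64_t timestamp;   // global time stamp (see the timing discussion below)
        uint32_t event_id;    // identifies the instrumented point in the code
        uint64_t arg;         // one small, fixed-size payload value
    };

    // Runtime switch: the probes stay compiled into shipping code and can
    // be turned on in a deployed system, SVEN-style.
    std::atomic<bool> g_trace_enabled{false};

    inline void trace_event(uint32_t event_id, uint64_t arg) {
        if (!g_trace_enabled.load(std::memory_order_relaxed))
            return;  // near-zero cost when tracing is off
        TraceRecord rec{
            static_cast<uint64_t>(
                std::chrono::steady_clock::now().time_since_epoch().count()),
            event_id, arg};
        // A real system would append rec to a shared-memory ring buffer or
        // push it out over a network or trace port; printing is a stand-in.
        std::fprintf(stderr, "%llu event=%u arg=%llu\n",
                     static_cast<unsigned long long>(rec.timestamp),
                     rec.event_id,
                     static_cast<unsigned long long>(rec.arg));
    }

    // Usage at a developer-chosen point of interest:
    //   trace_event(42 /* "request queued" */, queue_length);

The point of the runtime switch is that the probes ship in the product at essentially zero cost when disabled, and can be activated in the field when a problem appears.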

The increased interest in instrumentation probably has something to do with the nature of the systems that are being addressed. For systems using shared memory multicore hardware and general-purpose operating systems, the cost of instrumentation is easier to take than for very small constrained embedded systems. Essentially, as systems get more complex, instrumentation becomes more tractable.

Instrumentation can also interact with hardware trace and debug functions, which is a neat way to build a system that is more powerful than a hardware-only or software-only solution would be on its own. Especially for software stacks involving hypervisors and multiple complex operating systems, that is likely necessary.

Once we have a trace, just like last year, we need tools for analyzing the tons of data that tracing a modern system produces. ST talked about a tracing system that generated hundreds of gigabytes of data.

One trace aspect that kept coming up was the need for time stamps on trace data. To reconcile multiple traces and understand how different concurrent units talk to each other, a global time stamping mechanism is crucial. There seems to be work underway on hardware support for this.
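
As a small illustration of why a global time base matters, here is a hedged sketch of merging two per-core trace streams into one global timeline; the record layout is invented for the example. With per-core clocks instead of a shared one, the comparison below would be meaningless.

    #include <algorithm>
    #include <cstdint>
    #include <iterator>
    #include <vector>

    struct TraceEntry {
        uint64_t timestamp;  // drawn from the shared global time base
        uint32_t core_id;
        uint32_t event_id;
    };

    // Merge two per-core streams (each already in time order) into a
    // single global timeline, ordered by the shared time stamps.
    std::vector<TraceEntry> merge_streams(const std::vector<TraceEntry>& a,
                                          const std::vector<TraceEntry>& b) {
        std::vector<TraceEntry> out;
        out.reserve(a.size() + b.size());
        std::merge(a.begin(), a.end(), b.begin(), b.end(),
                   std::back_inserter(out),
                   [](const TraceEntry& x, const TraceEntry& y) {
                       return x.timestamp < y.timestamp;
                   });
        return out;
    }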

Security, Secrecy, and Debug

I moderated a panel on hardware support for debug, and posed the question on how to balance security and the need to debug. This generated a number of interesting answers from the panel and the audience.

The conflict between debuggability and secrecy is real. Even from the same customer, you first hear “you have to make the internal state of the controller inaccessible and hidden to avoid customers modifying their engines”… and then, when a problem appears in the field, they ask for a way to analyze and trace that very same system. It is hard to support both requirements in a reasonable way.

A sophisticated solution to the debug-versus-security conflict, from companies like ARM, Infineon, and ST, is debug access that can be enabled using a key exchange. The chips are built with a “locked door” in place, but the keys to the door are kept well-guarded. In this way, the same chip can be used both in development and in the field.
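
A purely illustrative sketch of such a “locked door” is shown below, as a challenge-response exchange. The MAC is a toy placeholder and the whole protocol is my assumption of the general shape – real chips use proper cryptography and protected key storage, and the vendors did not detail their actual schemes at the panel.

    #include <cstdint>
    #include <random>

    // Toy keyed MAC placeholder -- NOT cryptographically secure, purely
    // to show the shape of the exchange.
    uint64_t toy_mac(uint64_t key, uint64_t challenge) {
        uint64_t x = key ^ challenge;
        x *= 0x9E3779B97F4A7C15ULL;  // one multiplicative mixing step
        return x ^ (x >> 31);
    }

    struct DebugPort {
        uint64_t secret_key;            // provisioned into the chip, well-guarded
        uint64_t pending_challenge = 0;
        bool unlocked = false;

        // Step 1: the chip emits a fresh challenge (nonce).
        uint64_t get_challenge() {
            std::random_device rd;
            pending_challenge = (static_cast<uint64_t>(rd()) << 32) | rd();
            return pending_challenge;
        }

        // Step 2: the debug tool, holding the key, answers the challenge;
        // only a correct response opens the locked door.
        bool present_response(uint64_t response) {
            unlocked = (response == toy_mac(secret_key, pending_challenge));
            return unlocked;
        }
    };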

To support debug of systems involving secure modes like ARM TrustZone, ARM has defined several levels of access in their CoreSight hardware modules. This makes it possible for a debugger to be restricted to just debugging user-level code, just OS and user-level code, or all of the software stack. To me, this sounds like it could allow mobile phone manufacturers to “securely” let their application developers use hardware-based debug, without compromising operating systems or secure boot modes.
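
A rough sketch of the idea of tiered debug access (the levels and the check are my own illustration, not the actual CoreSight programming interface):

    // Ordered so that each level includes everything below it:
    // user-level code < OS + user < the full stack including secure modes.
    enum class DebugAccess { None, UserOnly, UserAndOS, FullStack };

    // A debug request is allowed only if the level granted to the tool
    // covers the domain it is trying to touch.
    bool may_debug(DebugAccess granted, DebugAccess required) {
        return static_cast<int>(granted) >= static_cast<int>(required);
    }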

The classic technique of using fuses to turn off functions is also relevant, at least for systems with moderate security requirements. Fuses can certainly be overcome using special tools to peel off the top of the chip and reconnect them, but the panel seemed to think that that level of attack was in general not worth protecting against. However, the audience pointed out that this was actually being done to automotive engine controllers, and that there are people making a good living from such antics.

ESCUG Meeting

The ESCUG meeting was a mix of fairly slick commercial presentations from OVP/Imperas chief Simon Davidmann and SystemC guru John Aynsley, and research presentations of varying quality.

One thing that struck me was that the academics spent significant time in all their presentations on how their approaches were compatible with the existing SystemC structure, where they host their open-source efforts, and so on. I guess that is good in that it shows a certain concern for reality – but it is also a bit sad that they did not get time to actually talk that much about the core ideas they were bringing forward. I am personally much more interested in new ideas than in infrastructure and project management. It does not bode well for European research if this is what people are forced to produce in lieu of real innovation.

Thorsten Grötker’s Keynote

On Wednesday morning, Thorsten from Synopsys looked back over the history of SystemC, free from product pitching. He only mentioned Synopsys in his introduction, where the high-level message was that embedded software is really the key problem for industry today. I cannot disagree with that.

During the SystemC parts of his talk he did say a few things that I did not quite agree with… in particular that TLM was unknown prior to 1999. It was not called that, but it certainly existed in the field of full-system simulation. The main problem is that Thorsten only sees the EDA history of modeling, not the computer architecture and software-driven work that did simulations as far back as 1950 (the famous Gill paper), and fast simulation since at least 1967.

He also claimed that with SystemC you have a single language for both detailed and TLM models. That is true… but you still need multiple models, one at each level of abstraction. So yes, one language, but multiple models. And the ability to glue abstraction levels together in one language really comes with a performance and complexity cost; it makes it too easy to slip into bad modeling, even in TLM.
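
To illustrate the “one language, multiple models” point, here is a minimal SystemC sketch of the same trivial counter at two abstraction levels; the module names and the setup are mine, not from the talk.

    #include <systemc.h>

    // Pin-and-clock-accurate model: a new value appears on the output
    // port at every positive clock edge.
    SC_MODULE(CounterRTL) {
        sc_in<bool>      clk;
        sc_out<unsigned> value;
        unsigned count;

        void tick() { value.write(count++); }

        SC_CTOR(CounterRTL) : count(0) {
            SC_METHOD(tick);
            sensitive << clk.pos();
        }
    };

    // Transaction-level model of the same counter: no pins, no clock;
    // state changes only when a caller invokes the increment() transaction.
    struct CounterTLM {
        unsigned count;
        CounterTLM() : count(0) {}
        unsigned increment() { return count++; }
    };

    int sc_main(int, char*[]) {
        sc_clock clk("clk", 10, SC_NS);   // drives the detailed model
        sc_signal<unsigned> value;
        CounterRTL rtl("rtl");
        rtl.clk(clk);
        rtl.value(value);
        sc_start(100, SC_NS);             // 10 clock cycles of simulation

        CounterTLM tlm;                   // the TLM model needs no kernel time
        for (int i = 0; i < 10; ++i) tlm.increment();
        return 0;
    }

Note how the TLM variant involves neither pins, clocks, nor the simulation kernel – which is exactly where its speed comes from, and also why it has to be a separate model.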

An interesting theme that Thorsten picked up from John’s talk at ESCUG is the use of SystemC to model software and RTOSes, using the upcoming process control extensions. If you stretch that into the area of software synthesis, it means that SystemC is going to collide with the field of model-driven software development. Will you use SystemC, coming from the hardware world, or UML/MATLAB/domain-specific languages coming from the software world? Thorsten made the interesting point that in order to integrate with that world, SystemC will require some concepts from that world (just as pins and clocks enable its interaction with RTL). I am not sure that is necessarily true; I think you can just as well create point adaptors to the same effect.

Getting to Southampton

The University of Southampton hosted the event, and it took place in the university lecture halls. That meant we got free, very fast WiFi (unlike at any commercial conference venue I have ever seen). The university campus was full of services (unlike the desolate place that last year’s FDL/S4D chose). Housing in the Glen Eyre residential halls was a bit spartan but functional. It felt like being back in my student days, living in student housing.

The instructions from the conference about how to get there were a bit confusing and incomplete. In practice, it is very easy to get to Southampton from both Gatwick (direct train) and Heathrow (National Express bus 203). At Heathrow, I had a bit of luck with the bus to Southampton. The instructions on the National Express website led me to believe that I had to get from Terminal 5, where we landed, to the central bus station and then catch the bus at 15.00. As we landed 40 minutes late (14.40), this looked hopeless… until I found the National Express counter in the arrivals hall at Terminal 5, where they told me the bus would leave at 15.30. Nice, no stress. The bus to Southampton even had free WiFi on board!

Once in Southampton, you then had to take bus U1A out to the university campus, and finding the right bus stop for it was actually the most difficult part of the journey. Some of the buses from Heathrow stop at Southampton University.

See also “S4D Part 2” for a few more tidbits from S4D.

12 thoughts on “S4D 2010”

  1. Aloha!

    First a quick errata: The text “the classic judgment that inserting probes into your software is plain bad does apply anymore it seems” seems to be missing an important “not”.

    Interesting comments on the HW debug vs security panel. I happen to disagree with the notion that fuses are a good security measure and popping the package is not a risk worth considering.

    If one looks at the low ~1 kUSD cost charged by companies doing this commercially (for “educational purposes”), it’s obvious that if you use an MCU in your product, fuses are the mechanism protecting the product from being cloned, and the product’s market value is greater than that cost, then you have a problem. See for example: http://www.mikahk.com/

    Also, the debug vs security discussion can (and should) be extended to the SW instrumentation discussion you had, especially instrumentation/probes that are included and available in the shipped SW. Compare to (for example) DTrace and the hacks using it to circumvent DRM protection mechanisms.

    Instrumentation is very cool and has great potential, but the ideas used in, for example, CoreSight should probably be considered for SW systems too, with the added complexity of providing isolation and secure barriers in a shared SW environment.

  2. Joachim Strömbergson: Aloha!
    First a quick errata: The text “the classic judgment that inserting probes into your software is plain bad does apply anymore it seems” seems to be missing an important “not”.

    I think I got it right. We used to think that changing the code with instruments was bad – but now it appears that many people think it is OK.

  3. Joachim Strömbergson: Aloha!
    Interesting comments on the HW debug vs security panel. I happen to disagree with the notion that fuses are a good security measure and popping the package is not a risk worth considering.

    Instrumentation is very cool and has great potential, but the ideas used in, for example, CoreSight should probably be considered for SW systems too, with the added complexity of providing isolation and secure barriers in a shared SW environment.

    On the fuses: the reply surprised me too – but maybe the risk to a digital TV system is not that great. ARM said that we always need to weigh cost of implementation vs. the risk. So I guess it comes down to the nature of the market segment and product.

    Instrumentation at safety levels – maybe that’s what a Hypervisor could help do.

  4. @Jakob

    Jakob:

    I think I got it right. We used to think that changing the code with instruments was bad – but now it appears that many people think it is OK.

    Pardon my French, but if you want the text to mean what you say, then it should be “the classic judgment that inserting probes into your software is plain bad does *NOT* apply anymore it seems” (my emphasis added). As it is in the original text, it says that the old judgement still holds. And also the “does apply” should then be “still applies”.

  5. Jakob:

    On the fuses: the reply surprised me too – but maybe the risk to a digital TV system is not that great. ARM said that we always need to weigh cost of implementation vs. the risk. So I guess it comes down to the nature of the market segment and product.
    Instrumentation at safety levels – maybe that’s what a Hypervisor could help do.

    (1) Digital TV is actually a good example of a high-volume, pretty mature consumer market where SW functionality such as menus and codec formats are the differentiators that drive price/margin. In these cases, having someone rip your (fuse-protected) SW and thereby being able to create knock-off copies is where fuses *should be* considered inadequate protection.

    As I wrote, if the market value for a given product is >> the ~1 kUSD it costs to rip the firmware out of the fuse-protected MCU, and you don’t have any other protection mechanisms, then you are paddling upstream in pretty murky waters…

  6. Joachim Strömbergson:

    (1) Digital TV is actually a good example of a high-volume, pretty mature consumer market where SW functionality such as menus and codec formats are the differentiators that drive price/margin. In these cases, having someone rip your (fuse-protected) SW and thereby being able to create knock-off copies is where fuses *should be* considered inadequate protection.
    As I wrote, if the market value for a given product is >> the ~1 kUSD it costs to rip the firmware out of the fuse-protected MCU, and you don’t have any other protection mechanisms, then you are paddling upstream in pretty murky waters…

    Instrumentation and hypervisors: Yes! And a good, authenticated API. That is, the instrumentation authentication ensures that instrumentation access is granted only to those allowed to access a given set of instruments. (Note: there could be multiple sets with multiple roles/ACLs etc.) And the hypervisor adds the isolation needed to ensure that the authentication mechanism isn’t being circumvented…

    … Until someone succeeds in putting the hypervisor in a hypervisor and circumvents the protection from below. There are Blue Pills all the way down.

  7. … Until someone succeeds in putting the hypervisor in a hypervisor and circumvents the protection from below. There are Blue Pills all the way down.

    Have a look at what the Wind River SoftICE is doing… behaving like a hardware debug unit by running a piece of code hidden from the rest of the system by the VT-x extensions on the Intel architecture… essentially hypervising the hypervisor, as I understand it.

    It seems that a simple but crucial part of security is getting in there first during boot. I guess that’s why everyone is spending the effort on secure boot modes, TrustZone, and hard-to-get-to boot code.

  8. Aloha!

    Jakob:

    Have a look at what the Wind River SoftICE is doing… behaving like a hardware debug unit by running a piece of code hidden from the rest of the system by the VT-x extensions on the Intel architecture… essentially hypervising the hypervisor, as I understand it.
    It seems that a simple but crucial part of security is getting in there first during boot. I guess that’s why everyone is spending the effort on secure boot modes, TrustZone, and hard-to-get-to boot code.

    Exactly. Joanna Rutkowska did the pioneering work on hypervisor rootkits with the Blue Pill:

    http://theinvisiblethings.blogspot.com/2006/06/introducing-blue-pill.html
