The TimeSys Embedded Linux Podcast (also called LinuxLink Radio) is a nice listen about embedded computing using Linux. Sometimes they are a bit too open-source centric, though, and ignore very good tools that live in the classic commercial world. One such example is the recent episode 20 on debugging tools, where they totally ignore modern high-powered hardware-based debugging.
They do talk about the use of JTAG for debugging and the old ICE systems, but miss the modern trend towards much more powerful on-chip debug hardware. Especially interesting today is the use of three technologies:
- Better on-chip support, such as ARM’s Embedded Trace Macrocell and the more recent, quite advanced CoreSight system, which gives much better insight into system execution thanks to specialized buses and buffers for debug information.
- On-chip debug logic that makes it possible for the processors and logic on a chip to break on complex conditions and across different processor cores without involving the host debugger in the decision loop.
- Huge trace buffers that can capture several seconds’ worth of execution trace, and smart tools that take advantage of the data offline for performance analysis, debugging, and reverse debugging.
All of these are available from commercial vendors like ARM, Green Hills, and Wind River, but there is no really good open-source support. That is probably because the systems in question are fairly rare, and open source tends to provide good support for the mainstream use case and technology and very poor support for everything else. Also, the reverse debuggers are usually tied to a particular trace system or debug agent, since reversibility is not part of any standard debug protocol (yet; there are several different attempts to introduce it into gdb for various backends). Finally, if you buy a very expensive piece of debug hardware, the cost of the software to use it with does not really matter.
So thanks to the great power of hardware trace and large trace buffers, and contrary to the opinions in the podcast, I actually believe that cross-debugging using hardware support — or, even better, virtual hardware — is a very good tool for application-level debug once the application and the hardware platform get sufficiently complex. You really want that nice unintrusive debug experience, rather than perturbing your target machine with a debug agent or, even worse, running the debugger on the same machine at the same time as the code you are debugging.
You do need to have a debugger that is aware of virtual memory and the tasks running on the target system, but that is not that hard to do. Freescale’s CodeWarrior and WindRiver Workbench both do this for hardware-assisted debug of Linux and VxWorks targets. We at Virtutech have also done it using virtual hardware.
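For contrast, the intrusive software-agent approach argued against above typically looks like the classic gdbserver setup sketched below (the binary name, target IP address, and cross-toolchain prefix are hypothetical placeholders, not anything from the original post):

```shell
# On the target board: start a debug agent that runs alongside the
# application and serves the host debugger over TCP. This is the
# intrusive case: the agent steals CPU cycles and memory from the
# very system being debugged.
gdbserver :2345 ./myapp                  # ./myapp is a placeholder binary

# On the development host: run a cross-gdb built for the target
# architecture and attach over the network.
arm-none-linux-gnueabi-gdb ./myapp       # hypothetical toolchain prefix
(gdb) target remote 192.168.1.50:2345    # hypothetical target IP
(gdb) break main
(gdb) continue
```

With hardware-assisted debug (JTAG plus on-chip trace) or a virtual platform, the connection on the target side replaces the gdbserver agent entirely, so the target software runs undisturbed while the host debugger still gets OS-aware visibility into virtual memory and tasks.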
Just to clarify: as I have noted earlier, I still think that even the most ambitious hardware-debug approaches in the market today do not go far enough for multicore processors. For quickly getting to reasonable performance on a multicore platform, I think reducing peak performance by replacing performance-enhancing hardware with debug- and tuning-enhancing hardware makes perfect sense. But that is a tangent.
So what is the final take here?
- Hardware debug rocks.
- Virtual hardware debug also rocks, often much more.
- Remote/cross-debug tools should be used more, not less.
- Someone needs to package remote/cross-debug so that even PC types want to use it for their “native” applications.
- Commercial software development tools are often ahead of the open-source tools (but not always; Valgrind is a good example of an outstanding open-source solution).