In a report from FTF Paris 2007, InfoWorld makes some interesting comments on security and the locking-down of mobile devices. InfoWorld » Blog Archive » 'Flat IP' mobile networks face new security challenges:
Freescale demonstrated a hardware reference platform with a number of security features for future mobile devices, its i.MX31 and i.MX31L multimedia applications processors. Based on the ARM11 core designed by ARM Holdings, the chips have a run-time integrity checker that verifies the digital signature of code before executing it. This can help stop malware sneaking onto the device, although it could also be used to lock down a mobile device and prevent the installation of third-party applications, much as Apple has attempted to do with its iPhone.
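The verify-before-execute idea in that quote can be sketched in a few lines. This is only an illustration, not Freescale's actual mechanism: a real integrity checker would verify an asymmetric signature against a public key burned into the chip, whereas this sketch uses an HMAC with a hypothetical device secret as a stand-in, since the check-then-run control flow is the same.

```python
import hmac
import hashlib

# Hypothetical device secret, standing in for the chip's trust anchor.
# A real run-time integrity checker would use a public key in ROM/fuses
# and a proper digital signature (e.g. RSA or ECDSA) instead of an HMAC.
DEVICE_KEY = b"example-secret-provisioned-at-manufacture"

def sign_image(code: bytes) -> bytes:
    """What a trusted build system would attach to a firmware image."""
    return hmac.new(DEVICE_KEY, code, hashlib.sha256).digest()

def load_and_run(code: bytes, signature: bytes) -> bool:
    """Refuse to execute any code whose signature does not verify."""
    expected = hmac.new(DEVICE_KEY, code, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # reject tampered or unsigned code
    # ... hand the verified image over for execution here ...
    return True

firmware = b"\x90\x90\xc3"  # stand-in for a code image
sig = sign_image(firmware)
assert load_and_run(firmware, sig)                # genuine image runs
assert not load_and_run(firmware + b"\x00", sig)  # tampered image is rejected
```

Note that the same check that blocks malware also blocks any third-party code that lacks a valid signature, which is exactly the lock-down concern the article raises.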
Prototypes are often designed with additional standard circuitry to make it easier to observe their behavior under test. Probes applied to that circuitry, known as a JTAG interface, can even be used to issue debugging instructions to the microprocessor. The connections for the prototype's JTAG interface often survive, in different positions on the circuit board, right through to final production. Identifying where these points were located on Apple's iPhone became one of the goals of those trying to unlock the devices, since access to them might have allowed the unlockers to debug Apple's security code.
This is the same concern that Strombergson expressed in his comments on my post about hardware support for parallel programming: remnants of the debug support used during development can be exploited after deployment to hack into the device.
And that is pretty hard to get around if you assume you need to debug on the device itself, which I guess is true for almost all devices, if nothing else in order to analyze performance on the actual hardware. Otherwise, I do believe that virtual prototype platforms and simulators like Simics are a key technology for developing secure applications safely: using a virtual system for debug, you have no need for debug backdoors in the final hardware. The backdoor exists only in the virtual hardware, not in the physical manifestation of the hardware. Of course, it is then crucial that the virtual debugger cannot be used by bad guys to break into the software. I think that can be handled by making it impossible to pull a complete system image off a target system, which brings us back to physical security.
I think we need some kind of thinking akin to Kerckhoffs's principle, the key tenet of cryptography that information should remain safe even if all the algorithms and mechanisms involved in encrypting it are known; the secrecy of the key is all that is needed. In the same vein, we need to ensure that a piece of software running on a particular piece of hardware is protected against access and intrusion even if the attacker has the source code of the program and a simulator for the hardware, or even debug hardware. There has to be some kind of "key" mechanism that can be used to ensure this.
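One concrete shape such a "key" mechanism could take is a challenge-response gate on the debug port itself. The sketch below is a minimal illustration under assumed names (`DEVICE_KEY` as a hypothetical per-device secret fused in at manufacture): the protocol and algorithm are fully public, so an attacker with the source code and a simulator still cannot unlock a physical device's debug access without that one secret.

```python
import hmac
import hashlib
import secrets

# Hypothetical per-device secret, fused into the hardware at manufacture.
# Everything below is public knowledge (Kerckhoffs-style); only this key
# needs to stay secret for the scheme to hold.
DEVICE_KEY = b"per-device-fused-secret"

def debug_challenge() -> bytes:
    """Device side: issue a fresh random challenge to whoever asks for debug access."""
    return secrets.token_bytes(16)

def debug_response(key: bytes, challenge: bytes) -> bytes:
    """Tool side: prove knowledge of the key without ever transmitting it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def grant_debug_access(challenge: bytes, response: bytes) -> bool:
    """Device side: unlock the debug port only for a correct response."""
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

c = debug_challenge()
assert grant_debug_access(c, debug_response(DEVICE_KEY, c))  # authorized tool gets in
assert not grant_debug_access(c, debug_response(b"guess", c))  # attacker without the key does not
```

The fresh random challenge matters: it stops an eavesdropper from replaying an old valid response, so even recorded debug sessions do not leak a usable credential.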