The Security Now podcast number 497 dealt with the topic of vehicle hacking. It was fairly interesting, if a bit too light on the really interesting part: what actually went on in the vehicle hack that was apparently demonstrated on US national television earlier this year (I guess this CBS News transcript fits the description). It was still good to hear the guys from the Galois consulting firm (Lee Pike and Pat Hickey) talk about what they did. Sobering to realize just how little even a smart guy like Steve Gibson really knows about embedded systems and the reality of their programming. Embedded software really is pretty invisible, in both a good way and a bad way.
As I understood it from the discussion, the car hack in question was basically performed by breaking into the Internet-connected telematics unit in the car, and using that in turn to send bad messages on the car’s CAN bus. This allowed the hackers to honk the horn and compromise the brakes. They then went on to lament the lack of security in the highly resource-constrained CAN bus. I think that while technically true, the problem really lies elsewhere.
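Part of why such an attack works is visible in the shape of a CAN frame itself: a classic CAN 2.0 frame carries an identifier and up to eight payload bytes, but nothing that authenticates the sender, so receivers filter only on the ID. A minimal sketch of that frame layout follows; the struct and the IDs used with it are hypothetical, for illustration only, since real horn or brake message IDs are manufacturer-specific.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of a classic CAN 2.0 data frame. Note there is no
   sender field at all: any node on the bus can emit any identifier. */
struct can_frame_sketch {
    uint32_t id;      /* 11-bit identifier (29 bits for extended frames) */
    uint8_t  dlc;     /* data length code: 0..8 payload bytes */
    uint8_t  data[8]; /* payload */
};

/* Build a frame; returns 0 on success, -1 if the payload is too long. */
int build_frame(struct can_frame_sketch *f, uint32_t id,
                const uint8_t *payload, uint8_t len)
{
    if (len > 8)
        return -1;        /* classic CAN caps the payload at 8 bytes */
    memset(f, 0, sizeof *f);
    f->id  = id & 0x7FF;  /* mask to the 11-bit identifier space */
    f->dlc = len;
    memcpy(f->data, payload, len);
    return 0;
}
```

Since receiving nodes decide what to do purely from the identifier, a compromised telematics unit can impersonate any other ECU simply by emitting that ECU’s IDs.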
The CAN bus in a vehicle is indeed a shared, unprotected resource, but that in itself need not be a problem. I really do not see how any on-board bus could be reasonably secured against attacks from nodes already on the network. It makes little sense in practice to have the different nodes in the network try to validate each other — if a node can be taken over in some way, it can surely still sign messages with whatever private key it holds. Checks might work against adding new nodes to the network, but the telematics unit used in this hack was a designed-in, trusted member of the network. The nodes are built to work together to form a system, and what is really needed is to make sure the overall composition of the system is sound. I.e., security from boot and system start, validating that no nodes have been compromised. From what I understand, some level of such validation is already being done in cars, to make sure that the nodes are at least somewhat consistent with each other. Still, such checks cannot be too deep, since nobody would accept a boot time for a car on the order of minutes. We expect cars and other vehicles to turn on when we turn the ignition key or press the start button – not to start booting and force us to wait… so there are real issues in how to do this in a deep way.
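One way to picture such a boot-time consistency check is hashing each node’s firmware image against a stored reference. The sketch below is hypothetical: a real system would use a cryptographic hash or signature (e.g. SHA-256) and some tamper-resistant storage for the references, while FNV-1a stands in here purely to keep the example self-contained.

```c
#include <stdint.h>
#include <stddef.h>

/* FNV-1a, a simple non-cryptographic hash used here only as a
   placeholder for a real cryptographic digest. */
static uint64_t fnv1a(const uint8_t *buf, size_t len)
{
    uint64_t h = 1469598103934665603ULL; /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= buf[i];
        h *= 1099511628211ULL;           /* FNV prime */
    }
    return h;
}

/* Reference record for one node: its name and expected firmware hash. */
struct node_ref {
    const char *name;
    uint64_t    expected_hash;
};

/* Returns 1 if the node's current image matches its reference, else 0. */
int node_is_consistent(const struct node_ref *ref,
                       const uint8_t *image, size_t len)
{
    return fnv1a(image, len) == ref->expected_hash;
}
```

The tension noted above shows up directly here: hashing every firmware image in full at each key turn is exactly the kind of deep check that would blow the boot-time budget, which is why real checks tend to be shallower.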
It seems that the key is to make the car robust against external attacks and to secure the periphery. This brings up the now-standard IoT architecture where weak “sensor” devices are connected to, managed by, and secured by “gateways”. In the case of this hack, we should ideally have a separate firewall between the exposed telematics unit and the internal network. The same goes for the garage-accessible service port: that should not be a connector straight onto the on-board bus, but rather connect to some intermediary unit that makes sure that only approved and safe operations are performed. It is about using whitelisting as the security model, letting through only explicitly approved operations. For the interface to telematics, for example, you could imagine a limited-power API that allows the unit to query the car for certain values, but not to send any kind of control messages in.
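The gateway idea can be sketched in a few lines: a default-deny filter sitting between the telematics unit and the internal bus, forwarding only a whitelist of read-style query IDs. The IDs and structure here are hypothetical, just to show the shape of the whitelisting logic.

```c
#include <stdint.h>

#define MAX_ALLOWED 8

/* Hypothetical gateway between the telematics unit and the internal bus.
   Only the listed read-only query IDs are ever forwarded inward. */
struct gateway {
    uint32_t allowed_ids[MAX_ALLOWED];
    int      count;
};

/* Returns 1 if the frame ID is whitelisted and may be forwarded. */
int gateway_permits(const struct gateway *gw, uint32_t id)
{
    for (int i = 0; i < gw->count; i++)
        if (gw->allowed_ids[i] == id)
            return 1;
    return 0; /* default deny: anything not explicitly approved is dropped */
}
```

The design choice that matters is the default: the gateway drops everything it does not recognize, so a compromised telematics unit can still ask for its approved values but has no path for injecting control messages.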
This discussion never really happened in the podcast, though. What did happen was a long discussion about formal methods, which definitely was a valuable public service. Indeed, doing formal methods seems to be the main business of Galois, from what Lee and Pat said in the podcast. However, I felt it was somewhat disingenuous of the Galois people to talk about formal methods for algorithms (the Amazon data center example being brought up) and mix in security.
It makes perfect sense to validate and prove an algorithm or protocol using formal methods. It is also comparatively easy, since you are working at a higher, idealized level of abstraction compared to code. But even if the protocol is proven correct, security-related bugs can be introduced in the implementation. The Apple goto-fail bug is a nice example of this: the SSL/TLS protocol is robust, but if an implementation fails to be correct, it is open to exploitation. Proving an actual real-world implementation safe from security issues seems very far-fetched to me, honestly. I am not saying it cannot be done, but the semantics of a real machine and a real language make things vastly more difficult than verifying protocols. Verifying the absence of odd mistakes is not easy – the goto-fail bug would have been caught even by compiler warnings, but more subtle issues can go entirely undetected as they rely on undefined or vaguely defined properties of a runtime system or language.
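The control-flow shape of the goto-fail bug is worth seeing. The sketch below reconstructs it with stand-in check functions (not the real SSL code): a duplicated `goto fail;` line is always taken, so the final check never runs, and since `err` still holds 0 the “verification” reports success.

```c
/* Stand-ins for the real verification steps; only the last one fails. */
static int check_a(void)      { return 0; }
static int check_b(void)      { return 0; }
static int final_check(void)  { return -1; }

int verify_signature(void)
{
    int err = 0;
    if ((err = check_a()) != 0)
        goto fail;
        goto fail;  /* duplicated line: unconditional, skips everything below */
    if ((err = check_b()) != 0)
        goto fail;
    if ((err = final_check()) != 0)
        goto fail;
fail:
    return err;     /* returns 0 (success) even though final_check fails */
}
```

Note that the second `goto fail;` is perfectly legal C — the indentation merely suggests it belongs to the `if`. An unreachable-code warning would have flagged the dead checks below it, which is exactly the point about compiler warnings above.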
Overall, an interesting podcast, as always, but somewhat short on real substance.