The US Defense Advanced Research Projects Agency (DARPA) ran a “Cyber Grand Challenge” in 2016, where automated cyber-attack and cyber-defense systems were pitted against each other to drive progress in autonomous cyber-security. The competition was run on physical computers (obviously), but Simics was used in a parallel flow to check that competitors’ programs were not trying to undermine the infrastructure of the competition rather than compete fairly inside its rules.
This “vetting” system was explained in a paper from 2018, and I have a blog post up on my Intel Blog about how Simics was used. Overall, the system that was built was very impressive – following the execution of software in a detailed manner to detect if it was trying to break out of the operating system sandbox in which it ran.
The researchers also connected Simics to the Hex-Rays* IDA Pro* Debugger, including enabling Simics reverse execution from IDA. It is an excellent example of what you can do given extensible and programmable platforms like Simics (and IDA). In the presentation, the researchers end with a nice slide about “Bring Your Own Simics” 🙂
There have been quite a few security exploits and covert channels based on timing measurements in recent years. Some examples include Spectre and Meltdown, Etienne Martineau’s technique from Def Con 23, the technique by Maurice et al from NDSS 2017, and attacks on crypto algorithms by observing the timing of execution. There are many more examples, and it is clear that measuring time, in particular in order to tell cache hits and cache misses apart, is a very useful primitive. Thus, it seems to make sense to make it harder for software to measure time, by reducing the precision of or adding jitter to timing sources. But it seems such attempts are rather useless in practice.
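As a toy illustration of the timing primitive (not a real cache attack — the “slow” case here is just a sleep standing in for a cache miss), a high-resolution timer trivially separates a fast operation from a slow one, which is exactly what the mitigations try to prevent:

```python
import time

def timed_ns(fn):
    """Time one call to fn with the highest-resolution clock Python offers."""
    start = time.perf_counter_ns()
    fn()
    return time.perf_counter_ns() - start

# Stand-ins for the two cases an attacker wants to tell apart:
# a "cache hit" (fast) and a "cache miss" (slow).
fast = lambda: sum(range(10))
slow = lambda: time.sleep(0.001)   # ~1 ms, far above timer noise

# Take the minimum over many runs to suppress scheduling noise.
fast_t = min(timed_ns(fast) for _ in range(100))
slow_t = min(timed_ns(slow) for _ in range(20))
print(slow_t > fast_t)   # the timer cleanly separates the two cases
```

A real attack works with differences of tens of nanoseconds rather than a millisecond, which is why merely coarsening the timer tends not to help: the attacker can average over many measurements to recover the signal.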
The new Windows 10 Controlled Folder Access (CFA) feature is a great idea – prevent unknown programs from modifying your files, to stop ransomware in its tracks. It is so good that I forced an early update to Windows 10 Build 1709 (“Fall Creators Update”) on a couple of my home machines and enabled it. Now, I have quickly disabled it, as it is not possible to actually use it in a real environment. It just stops things a bit too hard.
Intel Software Guard Extensions (SGX) is a pretty cool piece of technology that aims to make it possible for user programs to hide secrets from other user programs and the operating system itself. It establishes enclaves in the system that hide the data being processed and the code processing it from all other software. The original application for SGX was to support client-machine features like DRM, to create a safe space on a client that a server can trust. Recently, the people behind the Signal messaging system have provided a really interesting example of an application that makes use of SGX “in reverse”, to make it possible for a client to trust a server.
I read some news (ExtremeTech, Techcrunch) about how “smart” wifi-connected locks sold by Lockstate got bricked by an automatic over-the-network update. This sounds bad – but it is bad for a good reason. I think the company should be lauded for actually having the update ability – and laughed at for royally botching it.
I have just published a piece about the Intel Excite project on my Software Evangelist blog at the Intel Developer Zone. The Excite project uses a combination of symbolic execution, fuzzing, and concrete testing to find vulnerabilities in UEFI code, in particular in SMM. By combining symbolic and concrete techniques plus fuzzing, Excite achieves better performance and effectiveness than either technique alone.
The recent news that a hacked version of Apple Xcode has been used to insert bad code into quite a few programs for Apple iOS was both a bad surprise and an example of something that has been hypothesized for a very long time. For the news, I recommend the coverage on ArsTechnica of the XcodeGhost issue. It is very interesting to see this actually being pulled off for real – I recall seeing this discussed as a scenario back in the 1990s, going back to Ken Thompson’s 1983 ACM Turing Award lecture.
I had the great honor to be on a panel discussing IoT Security at the DAC back in June. The panel was part of the Embedded Techcon event that took place essentially as a little embedded corner inside the DAC – it was held in a couple of conference rooms next to the regular DAC sessions, and attendees were also mostly attending the DAC in general. Not a bad idea for meshing embedded and hardware design. The panel was a great one, and David Kleidermacher from Blackberry gave me a great take-away: unless security is allowed to gate releases of products, it is hard to claim that you take security seriously.
I have read a few news items and blog posts recently about how various types of software running on top of virtual machines and emulators have managed to either break the emulators or at least detect their presence and self-destruct. This is a fascinating topic, as it touches on the deep principles of computing: just because a piece of software can be Turing-equivalent to a piece of hardware does not mean that software that goes looking for the differences won’t find any or won’t be able to behave differently on a simulator and on the real thing.
I am not in the computer security business really, but I find the topic very interesting. The recent wide coverage and analysis of the Flame malware has been fascinating to follow. It is incredibly scary to see a “well-resourced (probably Western) nation-state” develop this kind of spyware, following on the confirmation that Stuxnet was made in the US (and Israel).
In any case, regardless of the resources behind the creation of such malware, one wonders if it could not be a bit more contained with a different way to structure our operating systems. In particular, Flame’s use of microphones, webcams, bluetooth, and screenshots to spy on users should be containable. Basically, wouldn’t cell-phone style sandboxing and capabilities settings make sense for a desktop OS too?
Episodes 299 and 301 of the SecurityNow podcast deal with the problem of how to get randomness out of a computer. As usual, Steve Gibson does a good job of explaining things, but I felt that there was some more that needed to be said about computers and randomness, as well as the related ideas of predictability, observability, repeatability, and determinism. I have worked and wrangled with these concepts for almost 15 years now, from my research into timing prediction for embedded processors to my current work with the repeatable and reversible Simics simulator.
I previously blogged about the HAVEGE algorithm that is billed as extracting randomness from microarchitectural variations in modern processors. Since it was supposed to rely on hardware timing variations, I wondered what would happen if I ran it on Simics that does not model the processor pipeline, caches, and branch predictor. Wouldn’t that make the randomness of HAVEGE go away?
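A greatly simplified sketch of the HAVEGE idea (my own simplification for illustration, not the real algorithm): time a short piece of work over and over, and keep only the least-significant bit of each measurement, which is where the microarchitectural jitter ends up. On a simulator with perfectly deterministic timing, these bits would come out constant:

```python
import time

def jitter_bits(n_bits):
    """Collect bits from timing jitter: repeatedly time a short loop and
    keep the least-significant bit of each duration (HAVEGE-like idea,
    greatly simplified)."""
    bits = []
    while len(bits) < n_bits:
        t0 = time.perf_counter_ns()
        x = 0
        for i in range(50):            # short burst of work to time
            x = (x * 31 + i) & 0xFFFF
        dt = time.perf_counter_ns() - t0
        bits.append(dt & 1)            # LSB carries most of the jitter
    return bits

bits = jitter_bits(256)
```

On real hardware the caches, pipeline, and branch predictor perturb each measurement slightly; on a functional simulator that charges a fixed cost per instruction, `dt` is the same every iteration and the output degenerates – which was exactly the question I wanted to answer.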
When I was working on my PhD in WCET – Worst-Case Execution Time analysis – our goal was to predict, as precisely as possible, the number of cycles that a processor would take to execute a certain piece of code. We and other groups designed analyses for caches, pipelines, even branch predictors, and ways to take into account information about program flow and variable values.
The complexity of modern processors – even a decade ago – was such that predictability was very difficult to achieve in practice. We used to joke that a complex enough processor would be like a random number generator.
Funnily enough, it turns out that someone has been using processors just like that. Guess that proves the point, in some way.
Looks like S4D (and the co-located FDL) is becoming my most regular conference. S4D is a very interactive event. With some 20 to 30 people in the room, many of them also presenting papers at the conference, it turns into a workshop at its best. There was plenty of discussion going on during the sessions and the breaks, and I think we all got new insights and ideas.
There is a very interesting worm going around the world right now which is specifically targeting industrial control systems. According to Business Week, the worm is targeting a Siemens plant control system, probably with the intent to steal production secrets and maybe even information useful to create counterfeit products. This is the first instance I have seen of malware targeting the area of embedded systems. However, the actual systems targeted are not really embedded systems, but rather regular PCs running supervision and control software.
I have another blog up at Wind River. This one is about multicore bugs that cannot happen on single-core multithreaded systems, and is called True Concurrency is Truly Different (Again). It bounces from a recent interesting Windows security flaw into how Simics works with multicore systems.
Some recent developments among development environments for mobile phones have made me consider the hitherto unthinkable: that C might be on a decline as the universal programming language. Indeed, maybe there is even a chance that we will not have a universal low-level language in the future at all. What is happening is that the hitherto “given” role of C as the base language for a platform is being questioned. The reason appears to be security, which cannot be said to be a bad thing. However, a large-scale move away from C might hurt many of today’s higher-level languages and even model-based engineering.
We recently had a malfunction in our spam filters at work, so I had to go back and review the catch for possible false positives. I sort things into two bins using spamassassin, one for most likely spam and one for probable spam. When things started to go bad, the most-likely folder had reached more than 2 GB, and the probable folder some 500 MB.
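The two-bin scheme can be sketched like this (the threshold values are made up for illustration – spamassassin just attaches a score to each message, and the routing policy on top of it is mine):

```python
def route(score, probable=5.0, likely=10.0):
    """Route a message into a folder based on its spamassassin score.
    Thresholds are illustrative, not spamassassin defaults."""
    if score >= likely:
        return "most-likely-spam"
    if score >= probable:
        return "probable-spam"
    return "inbox"

# Example: a score of 12 lands in the high-confidence bin.
print(route(12.0))
```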
Now I have had my yubikey for about a week, and I have put it on my keychain. It really works extremely well! The only small issue is that I tend not to have my keys immediately within reach while at home or when traveling, so there is a step of “go retrieve the keys” before I can use it for login.
I have been listening to the SecurityNow! podcast raving about the coolness of the Yubikey, created by Swedish startup Yubico. It seems like the device has captured the imagination of quite a few people, and I have been kind of curious about it. So I was quite pleasantly surprised when I got one a few days ago, since we are testing it as a new way to authenticate to our VPN at work.
In a very roundabout way, I recently got to hear about a cool Sun server feature introduced sometime back in 2003 or 2004: the SCC System Configuration Card. This is a smart card that stores the system hostid and Ethernet MACs, along with other info, and which can be transferred from one server to another.
The Swedish national medical products agency is running a very cleverly marketed campaign right now to inform people about the perils of buying medicine over the Internet. They are running fake advertisement spots on television, mimicking the typical medical adverts found in the US (and the few other countries where such advertising is allowed for prescription medicine), with a trustworthy doctor talking about the benefits of this and that… and slowly going into weird land about how the products might not be what you think and maybe don’t contain the right stuff, etc. Finally, you are pointed to www.crimemedicine.com, a site set up for this campaign. All very clever. In fact, so clever that some people reported the spots to the consumer watchdog as being illegal advertisements… brilliant!
It is a week ago now, and sometimes it is good to let impressions sink in and get processed a bit before writing about an event like the SiCS Multicore Days. Overall, the event was serious fun; the speakers were very insightful, and the panel discussion and audience questions added even more information.
Everybody seems to think the launch of the Google Chrome browser is very important and cool. Probably because Google itself is considered important and cool. I am a bit more skeptical about the whole Google thing, as they seem to be building themselves into a pretty dangerous monopoly… but there are some interesting architectural and parallel computing aspects to Chrome — and Internet Explorer 8, it turns out.
In Episode 157 of Security Now, Steve Gibson and Leo Laporte discuss the recently discovered security issues with DNS — in particular, the cost of making a good fix in terms of bandwidth and computation capacity. Fundamentally, according to Steve, today’s DNS servers are running at a fairly high load, and there is no room to improve the security of DNS updates by, for example, sending extra UDP packets or switching to TCP/IP. As this theoretically means a doubling or tripling of the number of packets per query, I can believe that. The “real solution” to DNS problems should lie in the adoption of a truly secured protocol like DNSSEC. As this uses public key crypto (PKC), it would add a processing load that would kill the DNS servers on the CPU side instead…
It must have been Google Alerts that sent me a link to the HOTOS 2007 (Hot Topics in Operating Systems) paper by Tal Garfinkel, Keith Adams, Andrew Warfield, and Jason Franklin called Compatibility is not Transparency: VMM Detection Myths and Realities. This paper is slightly less than a year old today, so it is old by blog standards and quite recent by research paper standards. It deals with the interesting problem of whether a virtual machine can be made undetectable by software running on it — software that is actively trying to detect it. Their conclusion is that it is not feasible, and I agree with that. The reason WHY that is the case can use some more discussion, though… and here is my take on that issue from a Simics/embedded systems virtualization perspective.
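One of the simpler detection tricks, as a concrete example (this checks the Linux view of the CPU feature flags; the “hypervisor” flag is real, but a VMM intent on hiding could of course mask it — which is the paper’s whole point):

```python
def looks_virtualized():
    """Crude VMM check: on Linux, CPUID exposes a 'hypervisor' feature
    flag when running under a VMM, and the kernel reports it in
    /proc/cpuinfo. Timing-skew and device-name tricks are other classic
    approaches; all of them are arms races against the VMM."""
    try:
        with open("/proc/cpuinfo") as f:
            return "hypervisor" in f.read()
    except OSError:
        return False  # not Linux, or /proc unavailable

print(looks_virtualized())
```

The deeper detections the paper (and the post) discuss are harder to fool: differences in timing behavior, in undocumented instruction corner cases, and in resource limits all leak through even a carefully constructed VMM.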
If you are looking for a good popular introduction to what spam is and how it works, the BBC World Service Documentary Podcast has a very good documentary up right now. I cannot find a direct link, but go to the overview page and then download “Doc: Assignment – On the trail of spammers 17 Jan 2007”. Enjoy!
I just listened to Episode 103 of the Security Now podcast, where Leo Laporte and Steve Gibson talk to the head of security at PayPal. PayPal is doing the right thing right now, issuing their customers with RSA security keys. Which gives them two-factor authentication (password and security key passnumber).
But for some reason, they do not enforce the use of security keys on their customers. Even when you have obtained a security key (which is optional, weirdly enough) and said you are using it, you can still log in without it by answering some additional security questions. All in the name of convenience! Which basically reduces the added security to nothing, since you can still log in in a traditional fashion.