SecurityNow on Randomness

Episodes 299 and 301 of the SecurityNow podcast deal with the problem of how to get randomness out of a computer. As usual, Steve Gibson does a good job of explaining things, but I felt that there was some more that needed to be said about computers and randomness, as well as the related ideas of predictability, observability, repeatability, and determinism. I have worked and wrangled with these concepts for almost 15 years now, from my research into timing prediction for embedded processors to my current work with the repeatable and reversible Simics simulator.

Continue reading “SecurityNow on Randomness”

Evaluating HAVEGE Randomness

I previously blogged about the HAVEGE algorithm, which is billed as extracting randomness from microarchitectural variations in modern processors. Since it was supposed to rely on hardware timing variations, I wondered what would happen if I ran it on Simics, which does not model the processor pipeline, caches, or branch predictor. Wouldn't that make the randomness of HAVEGE go away?
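The basic idea can be illustrated with a toy sketch (this is not the actual HAVEGE algorithm, just the underlying intuition): time a small piece of work with a high-resolution clock and harvest the low-order bits of the measured duration, which wobble with cache state, branch prediction, and OS scheduling noise. All names here are my own; real entropy extraction would also need whitening and health tests.

```python
import time

def timing_jitter_bits(n_bits: int = 64) -> int:
    """Toy illustration: fold the low bit of successive
    high-resolution timing measurements into an integer.
    The jitter comes from microarchitectural and OS noise."""
    value = 0
    for _ in range(n_bits):
        t0 = time.perf_counter_ns()
        # A short busy loop whose duration varies slightly
        # from run to run on real hardware.
        s = 0
        for i in range(100):
            s += i * i
        dt = time.perf_counter_ns() - t0
        value = (value << 1) | (dt & 1)  # keep only the noisiest bit
    return value
```

On a deterministic simulator like Simics, the interesting question is whether `dt` still varies at all between iterations — which is exactly what the experiment in the post probes.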

Continue reading “Evaluating HAVEGE Randomness”

Execution Time is Random, How Useful

When I was working on my PhD in WCET – Worst-Case Execution Time analysis – our goal was to predict, as precisely as possible, the exact number of cycles that a processor would take to execute a certain piece of code. We and other groups designed analyses for caches, pipelines, even branch predictors, along with ways to take into account information about program flow and variable values.

The complexity of modern processors – even a decade ago – was such that predictability was very difficult to achieve in practice. We used to joke that a complex enough processor would be like a random number generator.
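That joke is easy to demonstrate. A minimal sketch (my own illustration, not from the post): time the exact same workload repeatedly on a modern machine and look at the spread of the measurements — the code is identical every run, yet the durations are not.

```python
import time
import statistics

def measure_runs(n_runs: int = 20) -> list[int]:
    """Time an identical workload n_runs times, in nanoseconds.
    On real hardware the durations differ run to run, due to
    caches, branch predictors, frequency scaling, and OS noise."""
    durations = []
    for _ in range(n_runs):
        t0 = time.perf_counter_ns()
        total = sum(i * i for i in range(10_000))  # fixed workload
        durations.append(time.perf_counter_ns() - t0)
    return durations

durations = measure_runs()
spread = max(durations) - min(durations)
stdev = statistics.stdev(durations)
```

A WCET analysis has to bound the worst of those runs safely; the observed `spread` is a hint of how much microarchitectural state the analysis must account for.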

Funnily enough, it turns out that someone has been using processors just like that. I guess that proves the point, in some way.

Continue reading “Execution Time is Random, How Useful”