BBC Documentary: On the Trail of Spammers

If you are looking for a good popular introduction to what spam is and how it works, the BBC World Service Documentary Podcast has an excellent one up right now. I cannot find a direct link, but go to the overview page and then download “Doc: Assignment – On the trail of spammers 17 Jan 2007”. Enjoy!

Brilliant Virtualization Comic

I had never seen the comics at xkcd.com before, but they are really quite brilliantly nerdy. Liking virtualization and simulation, I found number 350 at http://xkcd.com/350/ especially fun. Note that this is exactly what some serious researchers are doing: using virtual machines as active honey pots (“honey monkeys”) that go out and contract infections by actively browsing the web with machines at various stages of patching.
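
Purely to illustrate the idea, here is a sketch in C++ of the honey-monkey control loop: drive disposable virtual machines at different patch levels to suspect URLs, and see which configurations come back infected. The VM-control interface (Vm, restore, browse, is_infected) is entirely hypothetical and stubbed out for the example.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical interface to a VM controller; a real tool would talk to
// a hypervisor, drive a browser in the guest, and diff the disk image.
struct Vm {
    std::string snapshot;  // e.g. the patch level of the guest image
    void restore() { /* roll the VM back to its known-clean snapshot */ }
    void browse(const std::string& url) { /* open the URL in the guest */ }
    bool is_infected() const {
        return false;  // stub: diff file system/registry/process list
    }
};

int main() {
    // One disposable "monkey" per patch level of interest.
    std::vector<Vm> monkeys = {{"unpatched"}, {"sp1"}, {"fully-patched"}};
    std::vector<std::string> urls = {"http://example.com/suspect"};

    for (const auto& url : urls) {
        for (auto& vm : monkeys) {
            vm.restore();            // always start from a clean state
            vm.browse(url);          // let the page attack the browser
            if (vm.is_infected()) {  // any unexpected change => exploit
                std::printf("%s exploits a %s machine\n",
                            url.c_str(), vm.snapshot.c_str());
            }
        }
    }
    return 0;
}
```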

Dekker’s Algorithm Does not Work, as Expected

Sometimes it is very reassuring that certain things do not work when tested in practice, especially when you have been telling people as much for a long time. In my talks about Debugging Multicore Systems at the Embedded Systems Conference Silicon Valley in 2006 and 2007, I had a fairly long discussion of relaxed or weak memory consistency models and their effect on parallel software running on a truly concurrent machine. I used Dekker’s Algorithm as an example of code that works just fine on a single-processor machine with a multitasking operating system, but that fails to work on a dual-processor machine. Over Christmas, I finally did a practical test of just how easy it is to make it fail in reality, which turned out to showcase some interesting properties of various types and brands of hardware and software.
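
For reference, here is a minimal sketch in C++ of the kind of test involved (my own reconstruction, not the exact code from the experiments): Dekker's algorithm with deliberately plain shared variables, so that the hardware is free to reorder the critical store and load.

```cpp
#include <cstdio>
#include <thread>

// Shared state. volatile keeps the compiler from caching values in
// registers, but does nothing about hardware memory-order relaxations.
static volatile int flag[2] = {0, 0};  // flag[i]: thread i wants to enter
static volatile int turn = 0;          // who backs off under contention
static volatile int counter = 0;       // "protected" by the algorithm

static void lock(int self) {
    int other = 1 - self;
    flag[self] = 1;
    // On a multiprocessor, the store to flag[self] above may be reordered
    // after the load of flag[other] below (store buffering), so both
    // threads can slip past this check at the same time. A
    // std::atomic_thread_fence(std::memory_order_seq_cst) here would
    // restore mutual exclusion.
    while (flag[other] != 0) {
        if (turn != self) {
            flag[self] = 0;
            while (turn != self) { /* spin until it is our turn */ }
            flag[self] = 1;
        }
    }
}

static void unlock(int self) {
    turn = 1 - self;
    flag[self] = 0;
}

static void worker(int self) {
    for (int i = 0; i < 1000000; ++i) {
        lock(self);
        counter = counter + 1;  // the critical section
        unlock(self);
    }
}

int main() {
    std::thread t0(worker, 0), t1(worker, 1);
    t0.join();
    t1.join();
    // With correct mutual exclusion this prints 2000000; on a truly
    // concurrent machine, lost updates typically make it come up short.
    std::printf("counter = %d (expected 2000000)\n", counter);
    return 0;
}
```

Note that volatile only stops the compiler from optimizing the flag accesses away; it says nothing about the processor's store buffer, which is exactly the gap such an experiment probes.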

Multithreading Game AI

Over at an online publication called AI Game Dev, there is an elucidating post on how to multithread game AI code (posted in June 2007). Basically, the conclusion is that most of the CPU time in an AI system is spent on collision detection, path finding, and animation. This concentration of time in a few domain-given hot spots turns the problem of parallelizing the AI into one of parallelizing some core supporting algorithms, rather than trying to parallelize the decision making itself. The key is to make the decision-making part work asynchronously with those supporting algorithms, which is not trivial but still much easier than threading the decision making itself. The threading of the most time-consuming parts then becomes classic algorithm parallelization, which is more familiar and easier than threading a general-purpose large code base. A good read, basically, that taught me some more about parallelization in the games world.
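
As a concrete illustration of that asynchronous shape (my own sketch, not code from the article), here the decision-making code hands a pathfinding query to a worker thread via std::async and polls a future each tick instead of blocking on it. All names (find_path, Agent, think) are placeholders of mine.

```cpp
#include <chrono>
#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

// Stand-in for an expensive, self-contained query such as A* over a
// navigation mesh; this is the kind of hot spot that parallelizes well.
static std::vector<int> find_path(int from, int to) {
    std::vector<int> waypoints(16);
    std::iota(waypoints.begin(), waypoints.end(), from);
    waypoints.push_back(to);
    return waypoints;
}

struct Agent {
    int pos = 0;
    int goal = 0;
    std::future<std::vector<int>> pending;  // in-flight pathfinding job

    // Called once per game tick from the (single) decision-making thread.
    void think() {
        if (pending.valid()) {
            // Poll instead of blocking: the agent keeps making other
            // decisions until the worker thread delivers the path.
            if (pending.wait_for(std::chrono::seconds(0)) ==
                std::future_status::ready) {
                std::vector<int> path = pending.get();
                std::printf("path ready, %zu waypoints\n", path.size());
                pos = goal;  // pretend we followed the path
            }
        } else if (pos != goal) {
            // Hand the expensive query to another thread and move on.
            pending = std::async(std::launch::async, find_path, pos, goal);
        }
    }
};

int main() {
    Agent a;
    a.goal = 7;
    for (int tick = 0; tick < 1000 && a.pos != a.goal; ++tick) {
        a.think();
    }
    return 0;
}
```

The point of the design is that the decision logic never stalls on the hot-spot algorithms; it only submits work and consumes finished results, so the two sides can run on different cores.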