Montalvo: Heterogeneous x86 Multicore

CNET (of all places) has a short article on what Montalvo Systems is up to: Secret recipe inside Intel’s latest competitor | CNET News.com. The article is a bit short on details, but it sounds like we finally have a mainstream example of a same-ISA, different-powered-cores heterogeneous multicore device. The idea has a lot of merit, and it will be very interesting to see the final results once silicon ships. I really believe in heterogeneous designs.

To be critical, trying to compete with Intel might not be the best idea around… but it never hurts to try. Also, the name is not unique, there is already a montalvo.com that is not montalvosystems.com. I think the old name “Memorylogix” was more interesting and less prone to website name collisions (yes, it seems to be the same company that briefly surfaced with some stripped-down x86 processor back in 2002 — I have an MPR article to prove it).

Book review: “Fruits of War”

A review of the book “Fruits of War” by Michael White. The book discusses how war has accelerated technological progress and provided unexpected benefits to society. The author is a bit defensive about making clear that he is not professing that war is good in any way, which is obvious enough and does not really need to be an issue in reading the book. It is fairly straight reporting of facts, rather than an attempt to editorialize or present radical opinions.
Continue reading “Book review: “Fruits of War””

ESC Silicon Valley 2008: Class 410

I am scheduled to talk at the ESC SV 2008 in the technical program. In 2006 and 2007 my topic was Multicore Debugging, but this year I have changed to Using Simulation Tools for Embedded Software Development. The date is April 17, the time 8.30 to 10.00, and the place the San Jose Convention Center.

See you there!

VIA/Centaur Isaiah: Multicore when needed, not earlier than that

Scott Wasson at techreport.com has a short but fairly good write-up on the microarchitecture of VIA’s new ‘Isaiah’ processor core, developed by their subsidiary Centaur Technology. What is interesting is the part on multicore processing. Scott quotes Glenn Henry from Centaur:

He points out that most people don’t and shouldn’t care what type of CPU they have in their PCs, so long as it gets the job done. When Centaur started, Henry says, they had to develop engineers with a different mindset, not “faster is better.” He set a series of targets involving die size limits and a ship date, and then directed his people to make the processor fast enough within those constraints that people would want to buy it.

Indeed, once you’ve absorbed the Centaur mindset, Henry’s answers to questions become somewhat predictable. Will Isaiah go multi-core? It can; it’s built that way, and Henry thinks Intel’s approach of a shared L2 cache makes sense. But he scoffs at the notion that people need multiple cores in basic computing devices right now. Henry says Centaur will go to multiple cores if it needs that level of performance or if Intel convinces people they have to have it.

It is a very interesting and different way to go about processor design: aiming for “good enough for what people do now”. A Skoda, not a BMW, to use an old automotive analogy. Note that while in some markets “good enough” also means “bleak and boring”, that is not necessarily the case in personal computing.

In computers, the software and the system construction are what make one PC stand out from another, not the raw performance of the processor. And since “good enough” in the VIA case also means fairly radically low power, you can build some cool and compact solutions around something like the Isaiah, provided that you want it to run a commodity OS like Windows or standard x86 Linux (which makes sense for a home machine).

If you care to do a bit more customization and create true consumer electronics, you can easily get the same features at half the power budget using an ARM, MIPS, or embedded POWER core.

The only bit that strikes me as interesting is that Henry thinks of multicore as a way to increase performance, rather than as a way to decrease power. Maybe the additional die size of a second core has something to do with this, but other players in the market, most notably ARM, are using multicore as a way to reduce power consumption at a given level of performance. It could be that in the x86 world, anything slower than the current Isaiah design would simply be too slow single-threaded to be viable running Windows.

Off-Topic: Blog Spam Statistics

Seems like my blog has been picked up by some spamming machine — at least I hope it is a machine; what a waste of manpower it would be to manually send in spam, since I am filtering all comments and not letting anything remotely spam-like through. Anyway, it is kind of interesting to see what kinds of things are being pushed using blog spam. Read on for more, but suffice it to say that porn is dominant… Continue reading “Off-Topic: Blog Spam Statistics”

Off-Topic: Studying Malware Analysis at HUT.fi

The F-Secure weblog is one of my regular reads, and today they presented one of the coolest industry-academia items in a long time: F-Secure are teaching an entire course at the Helsinki University of Technology, called “Malware Analysis and Antivirus Technologies”. Kudos to F-Secure for the time and money that must have gone into doing that!

BBC Documentary: On the Trail of Spammers

If you are looking for a good popular introduction to what spam is and how it works, the BBC World Service Documentary Podcast has a very good documentary up right now. I cannot find a direct link, but go to the overview page and then download “Doc: Assignment – On the trail of spammers 17 Jan 2007”. Enjoy!

Brilliant Virtualization Comic

I had never seen the comics at xkcd.com before, but they are really quite brilliant nerdy comics. Liking virtualization and simulation, I found number 350 at http://xkcd.com/350/ especially fun. And note that this is what some serious researchers are doing: using virtual machines as active honey pots (“honey monkeys”) to go out and contract infections by actively searching the web with machines in various stages of patching.

Continue reading “Brilliant Virtualization Comic”

Dekker’s Algorithm Does not Work, as Expected

Sometimes it is very reassuring that certain things do not work when tested in practice, especially when you have been telling people that for a long time. In my talks about Debugging Multicore Systems at the Embedded Systems Conference Silicon Valley in 2006 and 2007, I had a fairly long discussion about relaxed or weak memory consistency models and their effect on parallel software when run on a truly concurrent machine. I used Dekker’s Algorithm as an example of code that works just fine on a single-processor machine with a multitasking operating system, but that fails to work on a dual-processor machine. Over Christmas, I finally did a practical test of just how easy it was to make it fail in reality, which turned out to showcase some interesting properties of various types and brands of hardware and software.
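For readers who have not seen it, here is a minimal sketch of Dekker’s Algorithm in Python. Note the hedge: CPython’s global interpreter lock makes thread interleaving effectively sequentially consistent, so this version works and the counter comes out exact — mirroring the single-processor case from the talk. Transcribe the same plain loads and stores to C on a real multiprocessor without fences, and store buffering can let both threads see the other’s flag as false and enter the critical section at once, which is exactly the failure discussed above.

```python
import threading

# Dekker's algorithm for two threads, using plain shared variables.
# Under CPython's GIL this is effectively sequentially consistent,
# so mutual exclusion holds; on truly concurrent hardware with a
# weak memory model, the same plain loads/stores are not enough.
wants = [False, False]   # wants[i]: thread i wants to enter
turn = 0                 # whose turn it is to back off
counter = 0              # the shared resource the lock protects
ITERS = 2000

def lock(me):
    other = 1 - me
    wants[me] = True
    while wants[other]:
        if turn != me:
            wants[me] = False
            while turn != me:   # busy-wait until it is our turn
                pass
            wants[me] = True

def unlock(me):
    global turn
    turn = 1 - me
    wants[me] = False

def worker(me):
    global counter
    for _ in range(ITERS):
        lock(me)
        counter += 1            # not atomic; relies on the lock
        unlock(me)

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 4000 == 2 * ITERS when mutual exclusion holds
```

On hardware, the fix is to insert full memory barriers (or use sequentially consistent atomics) between the store to one’s own flag and the load of the other’s — which is what the practical experiments described in the full post probe.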

Continue reading “Dekker’s Algorithm Does not Work, as Expected”

Multithreading Game AI

Over at an online publication called AI Game Dev, there is an elucidating post on how to do multithreading of game AI code (posted in June 2007). Basically, the conclusion is that most of the CPU time in an AI system is spent doing collision detection, path finding, and animation. This concentration of time in a few domain-specific hot spots turns the problem of parallelizing the AI into one of parallelizing some core supporting algorithms, rather than trying to parallelize the actual decision making itself. The key to achieving this is to make the decision-making part able to work asynchronously with the other algorithms, which is not trivial but still much easier than threading the decision making itself. The threading of the most time-consuming parts turns into classic algorithm parallelization, which is more familiar and easier to do than threading general-purpose large code bases. A good read, basically, that taught me some more about parallelization in the games world.
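The asynchronous decoupling described above can be sketched in a few lines. This is my own hypothetical illustration, not code from the article: the names, the trivial stand-in pathfinder, and the use of Python futures (a real engine would use its own job system and native threads) are all assumptions — the point is only the structure, where the decision layer never blocks on the expensive algorithm and just picks up results when they are ready.

```python
from concurrent.futures import ThreadPoolExecutor

def find_path(start, goal):
    # Stand-in for an expensive search such as A* over a nav mesh;
    # here it just returns start, midpoint, goal as waypoints.
    mid = ((start[0] + goal[0]) // 2, (start[1] + goal[1]) // 2)
    return [start, mid, goal]

class Agent:
    def __init__(self, pool):
        self.pool = pool
        self.path = []        # current plan, possibly stale
        self.pending = None   # in-flight pathfinding request

    def request_path(self, start, goal):
        if self.pending is None:   # at most one request in flight
            self.pending = self.pool.submit(find_path, start, goal)

    def update(self):
        # Decision making runs every frame; it never waits for the
        # pathfinder, it keeps acting on the old plan until the new
        # one is done and only then swaps it in.
        if self.pending is not None and self.pending.done():
            self.path = self.pending.result()
            self.pending = None
        # ...act on self.path (follow waypoints, replan, etc.)...

with ThreadPoolExecutor(max_workers=4) as pool:
    agent = Agent(pool)
    agent.request_path((0, 0), (10, 10))
    while agent.pending is not None:
        agent.update()             # game loop tick
    print(agent.path)              # [(0, 0), (5, 5), (10, 10)]
```

The same shape applies to collision detection and animation: the hot-spot algorithm becomes a self-contained parallel job, and the decision logic stays single-threaded and tolerant of slightly stale results.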

Wayne Wolf on “The Good News and the Bad News” of Embedded Multiprocessing

In a column called The Good News and the Bad News in IEEE Computer magazine (November 2007 issue), Prof. Wayne Wolf at Georgia Tech (and a regular columnist on embedded systems for Computer magazine) talks about the impact of multiprocessing systems (multicore, multichip) on embedded systems. In general, his tone is much more optimistic and upbeat than most pundits.

Continue reading “Wayne Wolf on “The Good News and the Bad News” of Embedded Multiprocessing”

SCDsource: Reality Check on Virtual Prototypes

Bill Murray of the “New Media Outlet” SCDsource has published one of the best articles that I have seen on the use of software simulators and virtual prototypes in industry. The examples in the article range from low-level code running on very accurate simulators all the way to very fast virtual systems that are used instead of actual hardware to train NASA operators. The article covers the end-user perspective and is not oriented towards any particular vendor. It offers some nice insights into the expected and unexpected benefits that various companies have obtained from using simulators of various kinds, as well as some glimpses into the underlying technologies they have chosen, developed, and adapted.

Highly recommended.

Book Review: Intel’s Multicore Programming Book

The book “Multicore Programming – Increasing Performance through Software Multithreading” by Shameem Akhter and Jason Roberts is part of a series of books put out by Intel in their multicore software push. In case you have not noticed, Intel currently has a huge market push where they give seminars, publish articles and books, and provide curricula to universities in order to get more parallel software in place. I read this book recently, and here is a short review.
Continue reading “Book Review: Intel’s Multicore Programming Book”

Mark Nelson’s Multicore Non-Panic and Embedded Systems

Via thinkingparallel.com I just found an interesting article from last summer about the actual non-imminence of the end of the computing world as we know it due to multicore. Written by Mark Nelson, the article makes some relevant and mostly correct claims, as long as we stay in the desktop land that he knows best. So here is a look at these claims in the context of embedded systems.
Continue reading “Mark Nelson’s Multicore Non-Panic and Embedded Systems”