The new version of Trango’s embedded “secure virtualizer” for the ARM Cortex-A9 MPCore is an interesting solution in that it directly applies virtualization technology to the issue of migrating solutions (complete software stacks) from single-core to multicore. The details of just how this is done are a bit sketchy; there is some hardware support in recent ARM architectures, and a little bit of adaptation of a guest OS using paravirtual techniques is likely not a blocker. It also touches on security, implemented using ARM’s TrustZone technology. All in all, I think this is a typical example of something that we are going to see much more of.
Parallel Processing Requires Parallel IO
One common use-case for multicore processing, on the desktop and elsewhere, is “doing many things at the same time”. You could be running many user-interface programs at once, like the “typical today’s teenager template” of tens of IM clients, web sessions, email conversations, music and video players, movie downloads, and so on. Or it could be more business-like background work: indexing hard drives, taking backups, downloading large business files, compiling software, updating source code repositories, and the like.
I have been doing both of these to some extent, and the main problem, at least on a PC, is that while the processors might be good at multitasking and sharing the CPU load, my IO system is annoyingly non-parallel.
Power.Org Dev Con: C Domination a Problem for Multicore
I just read an EETimes report from a panel at the Power.org Developers Conference (more accurately called the Power Architecture Developers Conference, or PADC) about programming multicore processors for the embedded market. Note that I was not there in person, so I can only take the few quotes in the article and comment on them. The main conclusions are that:
- C/C++ is going to be the dominant language for embedded for the near future. Nothing really surprising there.
- C/C++ being dominant means that parallelism in multicore processors, especially shared-memory systems, will be harder to exploit. That is certainly true (a classic shared-memory pitfall is sketched below).
- Tool vendors have no good idea about what to do next.
- You cannot expect to get traction with a new language.
In a sense, this blames the market for not having the good sense to adopt new tools to tackle multicore.
I don’t think things have to be that bleak.
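To make the shared-memory point concrete, here is a minimal C sketch (my own illustration, not anything shown at the panel) of the kind of bug that makes naive multicore programming in C so treacherous: two threads update a shared counter without any synchronization, and updates get silently lost.

```c
/* Compile with e.g.: gcc -pthread race.c
 * Two threads do unsynchronized read-modify-write on a shared counter. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;            /* shared, unprotected state */

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                  /* data race: load, add, store */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Usually prints less than 2000000, and the result varies run to run,
     * because concurrent increments overwrite each other. */
    printf("counter = %ld\n", counter);
    return 0;
}
```

The code compiles cleanly and looks perfectly reasonable to a sequential C programmer, which is exactly the problem: nothing in the language flags the race.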
Continue reading “Power.Org Dev Con: C Domination a Problem for Multicore”
Golf Games and Computer Simulations
In my work at Virtutech trying to explain Simics and its simulation philosophy, it is often a struggle to get people to accept that what seem like pretty brutal simplifications of the world actually work quite nicely. Recently, I found a nice analogy in a golf game/simulator: the type where you swing a real club and send a real golf ball through the air.
Continue reading “Golf Games and Computer Simulations”
Comment on Joel Spolsky and Programming to “Moore’s Law”
Joel Spolsky is always worth a read, and in his post Strategy Letter VI he has a lot of smart things to say about how to consider programming. His basic message is that if you optimize your code too much to work well and fit in the memory of a current machine, by the time that you are done, you find yourself run over by competitors that just assumed machines would be faster and used the same programming time to implement cooler products.
I just have to take issue with this.
Continue reading “Comment on Joel Spolsky and Programming to “Moore’s Law””
Applications that can make use of more compute power (e.g., iPod Video)
A question that pops up quite often when computer architects and representatives from firms like Intel encounter a crowd today is: “but just what do you need more computing power for?”. Most regular users are fairly happy with the speed at which they process words, surf the web, read email, make IP phone calls, crunch numbers in Excel, and do other common tasks. It is hard to perceive the need for more speed in everyday tasks, unlike a decade or two ago when you could definitely ask for improvement. I remember scrolling a page in PageMaker on a Mac SE (8 MHz 68000). You counted the clicks and waited for the screen to jump, redraw, jump, redraw, stabilize… quite a different experience from working with modern computers and far more complex software that still responds instantaneously to almost any action.
Continue reading “Applications that can make use of more compute power (e.g., iPod Video)”
Handbook of Real-Time and Embedded Systems
The “Handbook of Real-Time and Embedded Systems” (ToC, Amazon, CRC Press) is now out. My university research colleague and friend Andreas Ermedahl and I have written a chapter on worst-case execution time analysis. We talk some about the theories and techniques, but we mainly try to discuss practical experience from actual industrial use. Static, dynamic, and hybrid techniques are all covered.
I just got my personal copy, and my first impression of the book overall is very positive. The contents seem quite practical to a large extent, not as academic as one might have feared. Do check it out if you are into the field. It is not a collection of research papers, but rather a set of instructive chapters informed by solid research and written with applications in mind.
Off-Topic: Video in the iPod Nano (Update, updated once)
A short update to the previous posting on how to compress video for the nano.
It turns out that the “iPod video” profile of Nero Recode is partly aimed at showing video from your iPod on external devices. That is the only good reason for the “high” resolution. I typically got a video size of about 15 MB per minute with these settings, which quickly eats up even gigabytes of space.
The “iPod Video-AVC” profile, on the other hand, is optimized for viewing on the Nano itself rather than on an external device. The resolution drops to 320×200–240 depending on the source aspect ratio, and the resulting files are only about 5 MB per minute, much more manageable for carrying a large video library on an iPod. I cannot see any difference in the quality of the output…
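To put those numbers in perspective (rough back-of-the-envelope arithmetic, not measured on any particular file): at 15 MB per minute a 90-minute movie comes out around 1.3 GB, while at 5 MB per minute the same movie is roughly 450 MB, so an 8 GB Nano goes from holding a handful of movies to well over a dozen.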
Update (2007-September-23): The default iPod-AVC setting has some issues with rapid cross-fades between scenes. To get around this, I set the quality settings to “2-pass” and “highest quality” in the detailed settings available on the second screen, before moving on to actually encode things. This produced very nice looking video that had no problems even with the previously broken fades.
The cost was even more compute time. I think the current settings take some 5 to 10 hours of encoding per hour of material (on my Athlon XP 2700+, not exactly a screamer by current standards).
AMD three-core Phenom
Spotted at EETimes.com: “Odd move: AMD plans three-core CPU”. Interesting that someone in the mainstream finally breaks the 1-2-4-8 progression that seems to be the norm.
Off-Topic: Getting video onto an iPod Nano (3G/Video)
This is not in my self-assigned range of topics, but I like it when other people put up their helpful notes on how to accomplish some task that I am researching. Thus, I feel obliged to do the same when I have tested something reasonably new.
The task at hand here is “how to get video into an iPod Nano”.
Continue reading “Off-Topic: Getting video onto an iPod Nano (3G/Video)”
AMP vs Virtualization
It just dawned on me recently (and it surely must have been obvious to those working on configuring asymmetric multiprocessing (AMP) systems) that in an AMP setup, the operating systems involved actually know about each other and have to account for the fact that they are sharing a single processor chip with other operating systems. So you cannot just take two single-core operating system images from an existing multiple-processor (local memory) solution, put them on a single chip, and expect things to just work. You do need to prepare the boot process and find a way to nicely share the common I/O devices, timers, accelerator engines, and other resources on the chip. This is materially different from a virtualized setup.
SICS Multicore Day 2007 – More on Programming
Some more thoughts on how to program multicore machines that did not make it into my original posting from last week. Some of this was discussed at the multicore day, and some of it is what I have been thinking about for some time now.
One of the best ways to handle any hard problem is to make it “somebody else’s problem”. In computer science this is also known as abstraction, and it is a very useful principle for designing more productive programming languages and environments. Basically, the idea I am after is to let the programmer focus on the problem at hand, leaving somebody else to fill in the details and map the problem solution onto the execution substrate.
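As a tiny illustration of what I mean (my own sketch, not something presented at the multicore day), consider an OpenMP-style loop in C: the programmer only asserts that the iterations are independent, and mapping the work onto threads and cores becomes somebody else’s problem, namely the compiler’s and the runtime’s.

```c
/* Compile with e.g.: gcc -fopenmp vecadd.c
 * The pragma is the entire parallelization effort in the source code. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {   /* set up some input data */
        a[i] = i * 0.5;
        b[i] = i * 2.0;
    }

    /* The programmer states that iterations are independent; how they are
     * split across threads and cores is left to the OpenMP runtime. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f\n", c[N - 1]);
    return 0;
}
```

Whether the abstraction is a pragma, a parallel library, or a whole new language, the appeal is the same: the problem statement stays clean while the mapping work is pushed down the stack.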
Continue reading “SICS Multicore Day 2007 – More on Programming”
A logo from 1996 and simulation for archival purposes
Back in 1996, DVP celebrated its 15th anniversary. When looking through my digital and paper archive, I found this gem: The official badge and logo for the 1996 anniversary! We also produced some mouse pads with this logo on them, one of which I still use for my daily job. Pretty good quality I must say.
The picture shown here was saved as GIF for use on the web. But scarily enough, apart from a few more GIF files, I could not open or even understand the file type of most of the files from that time, only ten years ago. Our digital archives are not very robust — more on that below.
Continue reading “A logo from 1996 and simulation for archival purposes”
SICS Multicore Day August 31
The SICS Multicore Day August 31 was a really great event! We had some fantastic speakers presenting the latest industry research view on multicores and how to program them. Marc Tremblay did the first presentation in Europe of Sun’s upcoming Rock processor. Tim Mattson from Intel tried hard to provoke the crowd, and Vijay Saraswat of IBM presented their X10 language. Erik Hagersten from Uppsala University provided a short scene-setting talk about how multicore is becoming the norm.
Fastscale minimal virtual machines — beautiful simple idea
A company called Fastscale Technologies has a product that is simple in concept and yet very powerful. Instead of using complete installs of heavy operating systems like Linux or Windows to run applications in virtual machines, they offer tools that provide minimal operating system configurations tailored to the needs of a particular application. Since only that application is going to run on the virtual machine, this is sufficient. According to press reports, this means that you can run several times more virtual machines on a given host compared to default OS installs, and boot them an order of magnitude faster.
Continue reading “Fastscale minimal virtual machines — beautiful simple idea”