The 1970 rule strikes again: Virtual Platform Principles in 1967

Being a bit of a computer history buff, I am often struck by how most key concepts and ideas in computer science and computer architecture were invented in some form or another before 1970. And commonly by IBM. This goes for caches, virtual memory, pipelining, out-of-order execution, virtual machines, operating systems, multitasking, byte-code machines, etc. Even so, I have found a quite extraordinary example of this that surprised me with the range of modern techniques it employs. This is a follow-up to a previous post, written after actually digesting the paper I mentioned there.

Continue reading “The 1970 rule strikes again: Virtual Platform Principles in 1967”

Power Architecture Conference München 2008

On Tuesday next week, I will be presenting at the Power Architecture Conference (PAC) in München, Germany. The topics will be multicore debug using virtual hardware, and the new Simics Accelerator technology. Simics Accelerator in particular is pretty interesting technology.

It is a simple idea: use multiple host cores to run a virtual platform, with fairly amazing results. Using a single computer, we can now run simulations that were the realm of pure fantasy just a few years ago. We also got a nice new little box to demonstrate it with, an eight-core Dell with 16 GB of RAM. With 64-bit Linux, this thing makes my Core 2 Duo laptop with 32-bit Vista look like yesteryear’s snail… and it creates that giggling feeling that a really impressive new toy brings out in even the most grown-up boys. Booting a 16-machine network of PowerPC boards was so fast it was not demoworthy. I think we have to up the ante to some 100 target machines to make it interesting, and I have no doubt that a combination of multithreading and idle-loop optimization will keep that thing usefully interactive from the target command lines. There are many other wild things we could try on that demo box once it gets back from the Power Architecture Conferences tour.
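
To give a flavor of the core idea, here is a minimal sketch in C with POSIX threads: each simulated target machine runs on its own host thread, and all machines rendezvous at time-quantum boundaries so that virtual time stays roughly aligned. All names and structure here are invented for illustration; this is not how Simics is actually implemented.

```c
#include <pthread.h>
#include <stdio.h>

#define N_MACHINES 16       /* like the 16-machine PowerPC network demo */
#define QUANTUM    100000   /* simulated instructions between sync points */
#define N_QUANTA   10

static pthread_barrier_t sync_point;

/* One simulated target machine; a real simulator would hold CPU state,
 * memory, and device models here. */
typedef struct {
    int id;
    long long icount;   /* instructions simulated so far */
} machine_t;

static void *simulate_machine(void *arg)
{
    machine_t *m = arg;
    for (int q = 0; q < N_QUANTA; q++) {
        /* Run this machine for one time quantum. */
        for (int i = 0; i < QUANTUM; i++)
            m->icount++;            /* stand-in for interpret/execute */
        /* All machines rendezvous so simulated time stays aligned
         * closely enough for cross-machine communication. */
        pthread_barrier_wait(&sync_point);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[N_MACHINES];
    machine_t machines[N_MACHINES];

    pthread_barrier_init(&sync_point, NULL, N_MACHINES);
    for (int i = 0; i < N_MACHINES; i++) {
        machines[i] = (machine_t){ .id = i, .icount = 0 };
        pthread_create(&threads[i], NULL, simulate_machine, &machines[i]);
    }
    for (int i = 0; i < N_MACHINES; i++)
        pthread_join(threads[i], NULL);
    printf("simulated %d machines, %lld instructions each\n",
           N_MACHINES, machines[0].icount);
    return 0;
}
```

The interesting engineering trade-off is the quantum length: large enough to amortize the synchronization cost, small enough that machines stay close in virtual time when they talk to each other over a simulated network.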

Tri-core or Tricore or TriCore(tm)

I do find it kind of funny when marketing names go bad or collide in unexpected ways. There is this fairly old Infineon combined DSP/MCU core called TriCore (the name means it is at once a RISC, a DSP, and an MCU). It was a nice name, easy to recognize and easy to pronounce, unlike the competition at the time. Today, though, we are seeing multicore chips with three cores on the die. So what are these, if not tri-core chips, in analogy with single-, dual-, quad-, oct-, etc.? This makes the hyphen very necessary. For example, the press release for the recent Freescale StarCore 8113 chip with three cores is explicitly headed tri-core, with a hyphen. I guess marketing would have liked the more visually pleasing tricore moniker, along with dualcore, which looks fairly established.

Ah well, not to mention the fun Infineon will have if it launches a triple-core TriCore device. Maybe in a third-generation TriCore 3? The power of three, indeed. TriTriTriCore, possibly?

Real-time control when cores become free

A very interesting idea that has been bandied around for a while in manycore land is the notion that in the future, we will see a total inversion of today’s cost intuition for computers. Today, we are all versed in the idea that processor cores and processing time are quite precious, while memory is free. For best performance, you need to care about the cache system, but in the end, the goal is to keep those processor pipelines as busy as possible. Processors have traditionally been the most expensive part of a system, and ideas such as Integrated Modular Avionics were invented to make the best use of a resource perceived as rare and expensive…

But is that really always going to be true? Is it reasonable to think of CPU cores as being free but other resources as expensive? And what happens to program and system design then?
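
One hedged guess at an answer, sketched in C below: when cores are effectively free, a real-time control task no longer has to share one precious CPU with many other tasks under an RTOS. It can own a core outright and simply spin, polling its sensor and updating its actuator, which removes scheduler and interrupt jitter entirely. All device names here are invented for the example.

```c
#include <stdbool.h>

/* Stand-ins for memory-mapped device registers; in a real system these
 * would be fixed hardware addresses, not plain variables. */
static volatile int  sensor_value;
static volatile int  actuator_command;
static volatile bool shutdown_requested;

/* Entry point for a core wholly owned by this one control task: no
 * interrupts, no scheduler, no other tasks. Worst-case latency is
 * bounded by the loop body alone. */
static void control_loop(void)
{
    while (!shutdown_requested) {
        int reading = sensor_value;      /* poll; do not wait for an IRQ */
        actuator_command = reading / 2;  /* stand-in for the control law */
    }
}

int main(void)
{
    shutdown_requested = true;  /* let this example terminate immediately */
    control_loop();
    return 0;
}
```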

Continue reading “Real-time control when cores become free”

Virtual Platform by Virtualization Extensions — 1969

On a trip down virtualization history, I found a real gem: a 1969 paper called A program simulator by partial interpretation, by Kazuhiro Fuchi, Hozumi Tanaka, Yuriko Manago, and Toshitsugu Yuba of the Japanese government’s Electrotechnical Laboratory. It was published at the second Symposium on Operating Systems Principles (SOSP) in 1969. It describes a system where regular target instructions are executed directly, and any privileged instructions are trapped and simulated. Very similar to how VMware does it for x86, or any other modern virtualization solution.
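
As a toy illustration of the structure (in C, with an invented three-opcode machine; the 1969 system used real hardware traps, which a short sketch cannot reproduce), the dispatch loop below gives regular instructions a fast path standing in for direct execution, while privileged instructions trap out to a software model:

```c
#include <stdio.h>

enum opcode { OP_ADD, OP_HALT, OP_IO_PRIV };  /* OP_IO_PRIV is privileged */

struct vm {
    int pc;
    int acc;
};

/* Software model of a privileged operation the guest must not perform
 * directly; on the real machine, a hardware trap would land here. */
static void emulate_privileged(struct vm *vm, enum opcode op)
{
    if (op == OP_IO_PRIV)
        printf("emulated I/O: acc=%d\n", vm->acc);
}

int main(void)
{
    enum opcode program[] = { OP_ADD, OP_ADD, OP_IO_PRIV, OP_HALT };
    struct vm vm = { 0, 0 };

    for (;;) {
        enum opcode op = program[vm.pc++];
        if (op == OP_HALT)
            break;
        if (op == OP_IO_PRIV)
            emulate_privileged(&vm, op);  /* trap: simulate in software */
        else
            vm.acc++;                     /* "direct execution" fast path */
    }
    return 0;
}
```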

Continue reading “Virtual Platform by Virtualization Extensions — 1969”

Simon Kågström, PhD

Yesterday, I had the honor of being the opponent at the PhD defense of Simon Kågström at Blekinge Tekniska Högskola (BTH, Blekinge Institute of Technology in English). His PhD thesis deals mainly with the multiprocessor port of an industrial in-house operating system, with a secondary theme being the design of the Cibyl C-programs-to-JVM translator. All of his papers are very well written and a joy to read, and the engineering work behind them is very solid.

The most important data in the PhD thesis is really just how much work it is to do an SMP port of an OS kernel, and how hard it is to get performance up to good levels even after several years of work. It really emphasizes the point that hard work, perseverance, and just lots of calendar time are what it takes to create a good SMP OS. That’s why Solaris and AIX are still years ahead of Linux in this respect: you just need to hit the snags, fix them, retest, and hit the next snag. It takes time to polish, basically.

So, if you have any interest in multiprocessor operating systems, Simon’s work is well worth a read. Also check out his blog at http://simonkagstrom.livejournal.com/. And by the way, he did pass.

Worst-Case Execution Time Survey Article in TECS

I just got another article published! In the April 2008 issue of the ACM Transactions on Embedded Computing Systems (TECS), we have an article called “The worst-case execution-time problem – overview of methods and survey of tools”. “We” is kind of an understatement: the article has fifteen authors from three continents, and presents an overview of the state of the field of WCET (Worst-Case Execution Time) analysis. The article was started back in 2005, submitted in 2006, accepted in January 2007, and then it finally appeared in 2008. It is probably my last shot in the WCET area, where I did my PhD thesis (please see my list of publications for an idea of what all of that is about).

You can find the article at the ACM portal, or in the MRTC publications database in Västerås.

David Ditzel Interview at The Register/Semicoherent Computing

The Register has a few podcasts in addition to their website, and the one called “Semicoherent Computing” has turned into a very nice series of interviews with interesting people from the computer industry. I recently listened to their September 2007 interview with David Ditzel of Transmeta fame. He had a lot to say about the history of computing, as well as interesting things about where computing is going. Well worth a listen! Particularly interesting highlights…

Continue reading “David Ditzel Interview at The Register/Semicoherent Computing”

Grant Martin on Manycore Multicore MPSoC AMP SMP Multi-X…

Grant Martin is a nice fellow from Tensilica who has a blog at ChipDesignMag. In a recent post, he raises the question of nomenclature and taxonomy for multicore processor designs:

…the discussion, and the need to constantly define our terms (and redefine them, and discuss them when people disagree) makes me wish that the world of electronics, system and software design had some agreement on what the right terms are and what they mean…

I think this is a good idea, but we need to keep the core count out of it…

Continue reading “Grant Martin on Manycore Multicore MPSoC AMP SMP Multi-X…”