Being a bit of a computer history buff, I am often struck by how most key concepts and ideas in computer science and computer architecture were invented in some form or another before 1970. And commonly by IBM. This goes for caches, virtual memory, pipelining, out-of-order execution, virtual machines, operating systems, multitasking, byte-code machines, etc. Even so, I have found quite an extraordinary example of this that genuinely surprised me with its range of modern techniques. This is a follow-up to a previous post, written after actually digesting the paper I talked about earlier.
Continue reading “The 1970 rule strikes again: Virtual Platform Principles in 1967”
On Tuesday next week, I will be presenting at the
A very interesting idea that has been bandied around for a while in manycore land is the notion that in the future, we will see a total inversion of today’s cost intuition for computers. Today, we are all versed in the idea that processor cores and processing time are quite precious, while memory is free. For best performance, you need to care about the cache system, but in the end, the goal is to keep those processor pipelines as busy as possible. Processors have traditionally been the most expensive part of a system, and ideas such as
Yesterday, I had the honor of being the opponent at the PhD defense of
I just got another article published! In the April 2008 issue of the ACM Transactions on Embedded Computing Systems (TECS), we have an article called “The worst-case execution-time problem – overview of methods and survey of tools”. “We” is kind of an understatement: the article has fifteen authors from three continents, and presents an overview of the state of the field of WCET (Worst-Case Execution Time) analysis. The article was started back in 2005, submitted in 2006, accepted in January 2007, and finally appeared in 2008. It is probably my last shot in the WCET area, where I did my PhD thesis (please see my