More from the SiCS Multicore Days 2008.
There were some interesting comments on how to define efficiency in a world of plentiful cores. The theme from my previous blog post, “Real-Time Control when Cores Become Free”, came up several times during the talks, panels, and discussions. It seems that this year, everybody agreed that we are heading toward hundreds or thousands of cores on any self-respecting chip, and that at that kind of core count, it is not very important to keep them all busy at all times at any cost. As I stated earlier, cores and instructions are now free while other resources have become the bottlenecks, turning the classic optimization imperatives of computing on their heads.

Operating systems will become more about space-sharing than time-sharing, and it might make sense to dedicate processing cores to the sole job of impersonating peripheral units or doing polling work. Operating systems can also be simplified when the job of time-sharing is taken away, even if communication and resource management might well raise some new and interesting issues.
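To make the dedicated-polling-core idea concrete, here is a minimal sketch in plain C11 with pthreads. The names, and the flag standing in for a device status register, are my own hypothetical illustration, not any particular OS interface:

```c
/* Sketch: one "core" (here, a thread) whose only job is to busy-poll
 * a flag that emulates a peripheral's status register, instead of
 * sleeping on an interrupt. Burning a core on a spin loop is only
 * sane when cores are plentiful -- which is exactly the point. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int data_ready;   /* stands in for a device status register */
static int device_data;         /* stands in for a device data register   */

static void *device(void *arg) {            /* pretend peripheral */
    (void)arg;
    device_data = 42;
    atomic_store_explicit(&data_ready, 1, memory_order_release);
    return NULL;
}

static void *polling_core(void *arg) {      /* the dedicated core */
    (void)arg;
    while (!atomic_load_explicit(&data_ready, memory_order_acquire))
        ;                                   /* spin; never yield the core */
    printf("polled data: %d\n", device_data);
    return NULL;
}

int main(void) {
    pthread_t core, dev;
    pthread_create(&core, NULL, polling_core, NULL);
    pthread_create(&dev, NULL, device, NULL);
    pthread_join(dev, NULL);
    pthread_join(core, NULL);
    return 0;
}
```

On real hardware you would additionally pin the polling thread to its own core; the interesting shift is that wasting a whole core on a spin loop stops being a sin.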
So, what is efficiency in this kind of environment?
Only half an hour ago, the embargoes lifted. Freescale announced its new
A very interesting idea that has been bandied around for a while in manycore land is that in the future, we will see a total inversion of today’s cost intuition for computers. Today, we are all versed in the idea that processor cores and processor time are precious, while memory is free. For best performance, you need to care about the cache system, but in the end the goal is to keep those processor pipelines as busy as possible. Processors have traditionally been the most expensive part of a system, and ideas such as
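As a concrete illustration of the cache point above, the textbook example is that the same arithmetic can run several times slower when the access pattern strides through memory instead of walking it sequentially. A minimal, self-contained sketch (the array size, and the magnitude of the effect, are assumptions that vary by machine):

```c
/* Sum a large matrix twice: row-major order walks memory sequentially
 * (cache-friendly); column-major order strides by a full row, costing
 * roughly one cache miss per access on typical hardware. */
#include <stdio.h>
#include <time.h>

#define N 2048
static double a[N][N];

static double sum_row_major(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

static double sum_col_major(void) {
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void) {
    clock_t t0 = clock();
    double r = sum_row_major();
    clock_t t1 = clock();
    double c = sum_col_major();
    clock_t t2 = clock();
    printf("row-major %.3fs, col-major %.3fs (sums %g, %g)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, r, c);
    return 0;
}
```

(Compile without aggressive optimization, or the compiler may fold the loops away and hide the difference.)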
I just got another article published! In the April 2008 issue of the ACM Transactions on Embedded Computing Systems (TECS), we have an article called “The worst-case execution-time problem – overview of methods and survey of tools”. “We” is something of an understatement: the article has fifteen authors from three continents, and it presents an overview of the state of the field of WCET (Worst-Case Execution Time) analysis. The article was started back in 2005, submitted in 2006, accepted in January 2007, and finally appeared in 2008. It is probably my last shot in the WCET area, where I did my PhD thesis (please see my
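For readers who have not met the field: WCET analysis computes a safe upper bound on a program’s longest possible execution time. The bound-calculation step in many of the tools the article surveys uses the implicit path enumeration technique (IPET), which, in my own rough shorthand (not the article’s notation), casts the bound as an integer linear program over basic-block execution counts:

\[
\mathrm{WCET} \;\le\; \max \sum_{i \in \mathit{Blocks}} c_i \, x_i
\quad \text{s.t.} \quad
x_i \;=\; \sum_{e \in \mathrm{in}(i)} f_e \;=\; \sum_{e \in \mathrm{out}(i)} f_e,
\qquad
f_{\mathrm{back}(L)} \;\le\; n_L \cdot f_{\mathrm{entry}(L)} \ \text{for each loop } L,
\]

where \(c_i\) is a hardware-level bound on the execution time of basic block \(i\), \(x_i\) its execution count, \(f_e\) the count on control-flow edge \(e\), and \(n_L\) a loop bound supplied by flow analysis or annotation.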