Architecture Exploration by Free Market

This is a short, maybe heretical, post on the topic of architecture exploration.

It just struck me the other day that the idea prevalent in chip design that you want to explore the design space of a certain design with great detail and precision might be the wrong way to go about things. It is in some sense similar to the idea of planned economies: you decide a priori what is important, and try to optimize the design to do just that. If you are right, it is brilliant. If you are wrong, it can be very wrong. That is scary, to say the least. It seems to assume that you have a pretty good idea of what you need to achieve, and that this is not likely to change over the lifetime of the design.

Continue reading “Architecture Exploration by Free Market”

Off-Topic: Colder Weather and (Consumer) Electronics

The colder season is coming fast here in Uppsala, and it is time to bring out gloves and warmer jackets. Even if we have had some nice, sunny, pretty warm days (up to 15 degrees Celsius!), we are soon getting into October, a month that usually brings some day of freak snowfall.

Another sign that it is getting colder is the reaction of consumer electronics.

Continue reading “Off-Topic: Colder Weather and (Consumer) Electronics”

SiCS Multicore Days: The Debate Points

It has been a week now, and sometimes it is good to let impressions sink in and get processed a bit before writing about an event like the SiCS Multicore Days. Overall, the event was serious fun; I found the speakers very insightful, and the panel discussion and audience questions added even more information.

Continue reading “SiCS Multicore Days: The Debate Points”

Article on CPCI and ATCA Systems on Virtual System Development

The article/editorial “Using virtual platforms to improve AdvancedTCA software development practice” is now up at CompactPCI and AdvancedTCA Systems, an online and paper journal for the rack-based market. It is about our experience at Virtutech in using virtual platforms to drive system and software development for “pretty large” target systems, even those based on standard hardware.

And really, there is no such thing as a standard embedded system. Even if you use a standard backplane and buy off-the-shelf boards and cards to put in it, the combination of cards and added mezzanine cards makes each system quite unique. If you could use completely standard PC hardware for your system, with no custom additions or special IO units, the thing would in all likelihood not actually be an embedded system.

Continue reading “Article on CPCI and ATCA Systems on Virtual System Development”

What is Efficiency when Cores are Free?

More from the SiCS Multicore Days 2008.

There were some interesting comments on how to define efficiency in a world of plentiful cores. The theme from my previous blog post, “Real-Time Control when Cores Become Free”, came up several times during the talks, panels, and discussions. It seems that this year, everybody agreed that we are heading towards 100s or 1000s of “self-respecting” cores on a single chip, and that with that kind of core count, it is not too important to keep them all busy at all times at any cost. As I stated earlier, cores and instructions are now free, while other resources are the limiting factors, turning the classic optimization imperatives of computing on their heads. Operating systems will become more about space-sharing than time-sharing, and it might make sense to dedicate processing cores to the sole job of impersonating peripheral units or doing polling work. Operating systems can also be simplified when the job of time-sharing is taken away, even if communications and resource management might well bring in some new interesting issues.
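To make the space-sharing idea concrete, here is a minimal Scala sketch of a thread whose only job is to spin on a flag, standing in for a core dedicated to watching a device. All the names (PollingCore, deviceReady) are invented for the example; and note that actual core pinning on the JVM would have to be done externally (for instance with taskset) or through native calls. The point is just the structure: no interrupts, no yielding, one worker burning cycles on one job.

```scala
import java.util.concurrent.atomic.AtomicBoolean

object PollingCore {
  // Stand-in for a memory-mapped device-ready register.
  val deviceReady = new AtomicBoolean(false)

  def main(args: Array[String]): Unit = {
    val poller = new Thread(() => {
      // Busy-poll forever: on a dedicated core, burning cycles is the
      // point, since instructions are "free" and latency is what matters.
      while (true) {
        if (deviceReady.getAndSet(false)) {
          println("device event handled") // immediate response, no interrupt
        }
      }
    })
    poller.setDaemon(true) // let the JVM exit even though the poller spins
    poller.start()

    deviceReady.set(true)  // simulate the device raising its flag
    Thread.sleep(100)      // give the poller time to react, then exit
  }
}
```

In a time-shared world this loop would be an outrage; in a space-shared world, where the core has nothing else to do, it is arguably the simplest and lowest-latency design available.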

So, what is efficiency in this kind of environment?

Continue reading “What is Efficiency when Cores are Free?”

The JVM as Universal Parallel Glue?

The two days of the SiCS Multicore Days are now over, and it was a really fun event this year too. I will be writing a few things inspired by the event, and here is the first.

Kunle Olukotun’s presentation on the work of the Stanford Pervasive Parallelism Lab included a diagram showing a range of domain-specific languages (DSLs) being compiled to a universal implementation language. That language is currently Scala, and in the end all applications end up being compiled, via Scala, into JVM byte code, which is then optimized and dynamically reoptimized and executed on a particular hardware system based on the properties of that system. Fundamentally, the problem of creating and compiling a DSL, and combining program segments written in different DSLs, is solved by interposing a layer of indirection.
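To make the layering concrete, here is a minimal Scala sketch of what an embedded DSL on a common substrate can look like: a tiny arithmetic language whose surface syntax is plain Scala, and whose execution strategy lives in a separate stage. The names (Expr, Lit, eval) are invented for the example, not the Pervasive Parallelism Lab’s actual API; a real system would replace the trivial interpreter with a compiler stage emitting optimized, possibly parallel, code.

```scala
// A tiny expression language embedded in Scala.
sealed trait Expr
case class Lit(v: Double)        extends Expr
case class Add(a: Expr, b: Expr) extends Expr
case class Mul(a: Expr, b: Expr) extends Expr

object TinyDsl {
  // Operators give the DSL its surface syntax in ordinary Scala...
  implicit class ExprOps(a: Expr) {
    def +(b: Expr): Expr = Add(a, b)
    def *(b: Expr): Expr = Mul(a, b)
  }

  // ...while a separate stage decides how to run it. This interpreter is
  // the trivial case; the same Expr tree could instead be handed to a
  // back end that targets JVM byte code or parallel hardware.
  def eval(e: Expr): Double = e match {
    case Lit(v)    => v
    case Add(a, b) => eval(a) + eval(b)
    case Mul(a, b) => eval(a) * eval(b)
  }

  def main(args: Array[String]): Unit = {
    val program = Lit(2) * (Lit(3) + Lit(4)) // reads like the DSL, builds a tree
    println(eval(program))                   // 14.0
  }
}
```

The key design choice is that the DSL program is data (an Expr tree), so mixing segments written in different DSLs reduces to combining trees before handing them to the shared back end.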

But this idea got me thinking about what the best such intermediary might be for large-scale general deployment.

Continue reading “The JVM as Universal Parallel Glue?”

Google Chrome and Parallel Browsing

Everybody seems to think the launch of the Google Chrome browser is very important and cool. Probably because Google itself is considered important and cool. I am a bit more skeptical about the whole Google thing; they seem to be building themselves into a pretty dangerous monopoly… but there are some interesting architectural and parallel computing aspects to Chrome (and, it turns out, to Internet Explorer 8).

Continue reading “Google Chrome and Parallel Browsing”

Lego Racers Boardgame — and why Old is Better in Software (mostly)

This might appear to be a stretched analogy, but it struck me as obvious when I tried playing the Lego Racers boardgame with my 3-year-old this weekend. The game is ranked pretty low on Boardgamegeek, and deservedly so. The promise and the premise are great: use Lego cars to race around a track and pick up new pieces to modify the powers of your car… sounds like great fun, right? But it is not, and that’s where my analogy with the age of software comes in.

Continue reading “Lego Racers Boardgame — and why Old is Better in Software (mostly)”

Parallel Programming is Not Needed? I don’t quite agree…

This was a refreshingly different post: Too Many Cores, not Enough Brains:

More importantly, I believe the whole movement is misguided. Remember that we already know how to exploit multicore processors: with now-standard multithreading techniques. Multithreaded programming is notoriously difficult and error-prone, so the challenge is to invent techniques that will make it easier. But I just don’t see vast hordes of programmers needing to do multithreaded programming, and I don’t see large application domains where it is needed. Internet server apps are architected to scale across a CPU farm far beyond the limits of multicore. Likewise CGI rendering farms. Desktop apps don’t really need more CPU cycles: they just absorb them in lieu of performance tuning. It is mostly specialized performance-intensive domains that are truly in need of multithreading: like OS kernels and database engines and video codecs. Such code will continue to be written in C no matter what.

The argument, at its core, is that multicore is about performance, and performance optimization is generally something we do prematurely rather than focusing on how to solve the core problem in the best way. You have to respect Jonathan Edwards, and often this is true: programmers optimize themselves into a horrible design that is also slow.

Continue reading “Parallel Programming is Not Needed? I don’t quite agree…”