Parallel Programming is Not Needed? I don’t quite agree…

This was a refreshingly different post: Too Many Cores, not Enough Brains:

More importantly, I believe the whole movement is misguided. Remember that we already know how to exploit multicore processors: with now-standard multithreading techniques. Multithreaded programming is notoriously difficult and error-prone, so the challenge is to invent techniques that will make it easier. But I just don’t see vast hordes of programmers needing to do multithreaded programming, and I don’t see large application domains where it is needed. Internet server apps are architected to scale across a CPU farm far beyond the limits of multicore. Likewise CGI rendering farms. Desktop apps don’t really need more CPU cycles: they just absorb them in lieu of performance tuning. It is mostly specialized performance-intensive domains that are truly in need of multithreading: like OS kernels and database engines and video codecs. Such code will continue to be written in C no matter what.

The core argument is that multicore is about performance, and performance optimization is generally something we do prematurely rather than focusing on how best to solve the core problem. You have to respect Jonathan Edwards, and this is often true: programmers optimize themselves into a horrible design that is also slow.

But he is missing the picture in embedded computing, where the need to keep power low pushes designs toward many slow cores rather than a few fast ones. The only way to solve many computational problems in our infrastructure (LTE mobile telephony or Internet security scanning, for example) is to use multiple cores and multiple application-specific accelerators. In other words, there are more performance-critical areas where single-threaded programs will not run fast enough than Jonathan thinks.

Also, we do need to write efficient software: wasteful software uses more energy to drive the computers it runs on, contributing to our overuse of the Earth's resources and global warming and other fun. Programmers could do their part by making their programs run on less energy, which means fewer cycles and more parallelism. Optimization might never die, since there will always be a reason to do things more efficiently. The reason will vary, from a lack of instruction cycles (1980s) to reduced power (2000s), but efficiency is a fact of life in all other areas of human activity. So we programmers will have to get used to it too.

Also, I think that part of the "core problem" should be how to think in terms of the parallel parts of a problem. Some things are sequential, but most real-world problems contain plenty of parallelism in the domain, simply because the world the software interacts with is parallel in so many ways.

4 thoughts on “Parallel Programming is Not Needed? I don’t quite agree…”

  1. Thanks! However, he still seems to be thinking about latency, and about the desktop domain… and latency is a strange creature here: in my world, you usually increase throughput at the cost of increased latency when going parallel from a serial implementation, since the individual cores involved are a little slower.

  2. I read somewhere that parallel programming is hard because people's brains are wired sequentially. We have the potential to multi-task between thoughts, but can only focus on one thought at a time.

    I think it's an education issue. We are simply not used to writing parallel programs, just as we weren't used to programming at all when computers were new a couple of decades ago. Now that many computers are parallel and people are actually writing parallel software in practice, we will get the hang of it eventually.

    One thing that should help is to create new abstractions that look sequential while, at a lower level, actually doing the work in parallel (multiplying matrices, for example). However, I think parallel-programming thinking will be a very useful skill no matter which abstraction level you work on.

    Another thing that seems to be a hassle today is patterns for communication. Having a "communications processor" that can access the same memory as the "main processor" (essentially, the two share memory) would help make a fully distributed model (one with only message passing) look more like a shared-memory model.

    Obviously this is just reiterating stuff that people already know; I’m just saying that I think it’ll get there, through various technological improvements and more experience, just give it some time.
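The idea from comment 2 of an abstraction that looks sequential to its caller while running in parallel underneath can be sketched in Python. This is a minimal illustration using the standard `concurrent.futures` pool; `parallel_map` and `square` are hypothetical names chosen for the example, not anything from the post:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def parallel_map(fn, items, workers=4):
    # To the caller this reads like an ordinary, sequential map();
    # internally the work is fanned out across a pool of workers,
    # and pool.map returns results in the original order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))

print(parallel_map(square, range(8)))  # prints [0, 1, 4, 9, 16, 25, 36, 49]
```

The caller never sees threads, locks, or scheduling; swapping in a process pool or a distributed backend would not change the call site, which is exactly the kind of sequential-looking interface the comment describes.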

  3. Thanks for the comment! I do think that a key to parallel programming is to express it either as something sequential, OR as very separate parallel things. What is hard, I think, is when several parallel things interact deeply and often.

    About the communications-processor idea: I beg to differ… Programmers should not have to think about how data is actually transported, but rather deal with something a bit more abstract. Message passing is good in that respect, since it makes communication explicit while staying abstracted from the machine.

    Shared memory is the fundamental mistake in parallel programming; it is what makes things hard.
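The message-passing style the reply argues for can be sketched with Python's standard `queue` and `threading` modules: the two threads below never touch shared mutable state, only explicit messages. The `doubler` worker is a made-up example for illustration:

```python
import threading
import queue

def doubler(inbox, outbox):
    # The worker owns no shared state: it only receives messages
    # from its inbox and sends results to its outbox.
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel message: shut down
            break
        outbox.put(msg * 2)

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=doubler, args=(inbox, outbox))
worker.start()

for n in (1, 2, 3):
    inbox.put(n)                 # communication is explicit...
inbox.put(None)
worker.join()

results = [outbox.get() for _ in range(3)]
print(results)                   # prints [2, 4, 6]
```

Communication is visible at every `put` and `get`, yet nothing in the code says how the messages travel; the same structure would work across processes or machines, which is the "explicit but abstracted from the machine" property the reply values.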
