This was a refreshingly different post: Too Many Cores, not Enough Brains:
More importantly, I believe the whole movement is misguided. Remember that we already know how to exploit multicore processors: with now-standard multithreading techniques. Multithreaded programming is notoriously difficult and error-prone, so the challenge is to invent techniques that will make it easier. But I just don’t see vast hordes of programmers needing to do multithreaded programming, and I don’t see large application domains where it is needed. Internet server apps are architected to scale across a CPU farm far beyond the limits of multicore. Likewise CGI rendering farms. Desktop apps don’t really need more CPU cycles: they just absorb them in lieu of performance tuning. It is mostly specialized performance-intensive domains that are truly in need of multithreading: like OS kernels and database engines and video codecs. Such code will continue to be written in C no matter what.
The core of the argument is that multicore is about performance, and performance optimization is generally something we do prematurely, instead of focusing on how to solve the core problem in the best way. You have to respect Jonathan Edwards, and this is often true: programmers optimize themselves into a horrible design that is also slow.
But he is missing the picture painted by embedded computing, and the need to chase low power by using many slow cores rather than a few fast ones. The only way to solve many computational problems in our infrastructure (LTE mobile telephony or Internet security scanning, for example) is to use multiple cores and multiple application-specific accelerators. In other words, there are more performance-critical areas than Jonathan thinks, areas where single-threaded programs simply will not run fast enough.
Also, we do need to write efficient software, since wasteful software uses more energy to drive the computers it runs on, contributing to our overuse of the Earth’s resources, global warming, and other fun. Programmers could do their part by making their programs run on less energy, which means fewer cycles and more parallelism. Optimization might never die, since there will always be a reason to do things more efficiently. The reason will vary, from a lack of instruction cycles (1980s) to reduced power consumption (2000s), but efficiency is a fact of life in every other area of human activity. So we programmers will have to get used to it as well.
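To make the “fewer cycles, more parallelism” point a bit more concrete, here is a rough back-of-envelope calculation using the usual first-order model of dynamic CMOS power. The specific voltage and frequency numbers are made up for illustration, and leakage power is ignored:

```latex
P_{\text{dyn}} \approx C \, V^{2} f
\qquad
\frac{P_{\text{two slow cores}}}{P_{\text{one fast core}}}
\approx \frac{2 \cdot C \cdot (0.7\,V)^{2} \cdot (0.5\,f)}{C \cdot V^{2} \cdot f}
\approx 0.5
```

So two cores at half the frequency and a correspondingly reduced voltage can deliver roughly the same aggregate throughput for about half the power, provided the workload actually spreads across both cores. That is the economics driving the many-slow-cores designs in embedded systems.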
I also think that part of the “core problem” should be how to think in terms of the parallel parts of a problem. Some things are sequential, but most real-world problems have plenty of parallelism in the domain, simply because the world the software interacts with is parallel in so many ways.
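As a tiny illustration of parallelism that is already there in the domain, consider a security scanner inspecting network traffic: each packet (or flow) is an independent unit of work. The sketch below is just that, a sketch; the scan_packet() function is hypothetical, and OpenMP in C is used only to make the point, not to suggest a real packet-processing design.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-packet work; in a real scanner this would be
   pattern matching, protocol decoding, and so on. */
extern int scan_packet(const uint8_t *data, size_t len);

/* Each packet is independent of the others, so the loop can be
   spread across however many cores the hardware happens to have.
   The parallelism comes from the domain (many packets in flight),
   not from any clever restructuring of the algorithm. */
void scan_batch(const uint8_t *const *packets, const size_t *lens,
                int count, int *results)
{
    #pragma omp parallel for
    for (int i = 0; i < count; i++) {
        results[i] = scan_packet(packets[i], lens[i]);
    }
}
```

Built without OpenMP support the pragma is simply ignored and the loop runs serially; built with it (for example, -fopenmp with gcc), the same code uses all available cores. The point is that nothing about the problem had to be invented or twisted to find the parallelism, it was there in the domain all along.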