Heterogeneous vs homogeneous systems, revisited

I got another email from my friend with the thesis that processors will become ever more homogeneous as time goes on, while I believe in a relative heterogenization (is that a word?) of computer architecture, with many special-purpose accelerators and helper processors. My side of this argument is laid out in a previous blog post. In this round, the arguments for homogenization come from the gaming world.

In particular:

  • Alex St John interviewed by ExtremeTech, making the case that new Intel and AMD developments will create a new breakthrough for PC gaming, and that the breakthrough is putting the GPU on the CPU chip. This will certainly enable interesting optimizations in the computer architecture, but it is really an accelerator on the chip. The separate GPU chip goes away, but the GPU lives on as a distinct unit on the CPU die. It is still not the same kind of core as the CPU.
  • Nvidia scrapping the PhysX hardware and integrating the software into its CUDA GPU-driven offering. There goes the physics processing unit as a separate design. This is a homogenization of the GPU/gaming accelerator market for PCs, but it is really part of a general trend: GPUs will become more programmable over time, and they will gobble up other accelerators. I still think this is mostly because there is a large segment of computing that is served by regular algorithms and signal-processing-style codes (see the sketch right after this list). I do not see that fusing with a general-purpose x86 CPU any time soon. There is still a benefit to heterogeneity, since the target domains have very different characteristics.
  • Intel Larrabee, an interesting beast. It seems to be a chip containing a large number (unknown how many) of x86-style cores with extra vector processing extensions. Since Intel is aiming at many cores on a chip, the cores cannot be as aggressive as the current Core microarchitecture. If we assume, for the sake of argument, that they will indeed be compatible with current x86 code, then combining a Larrabee chip with a Core chip in the same system gives a system where all cores can in principle run the same code, but with very different performance characteristics, so code still needs to be targeted at the right core (or probably the right chip) to run well. So we see heterogeneity in performance, but homogeneity (to some extent) in the ISA. Compared to a current NVIDIA-style GPU, Larrabee is certainly more similar to a general-purpose CPU. But that is also where NVIDIA is going, adding more and more general programmability while never giving up raw performance. So yes, it is homogenization in the sense that GPUs become less extremely unlike a general-purpose x86 core.
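
To make the distinction concrete, here is a minimal C sketch of the two workload styles I have in mind; the function names and the FIR-filter example are my own illustrative picks, not code from any of the products above.

```c
#include <stddef.h>

/* Regular, signal-processing-style code: every output element is
 * computed with the same arithmetic, the iterations are independent,
 * and there are no data-dependent branches. This is the kind of loop
 * that maps well onto a GPU or a wide-vector accelerator. */
void fir_filter(const float *in, const float *coeff,
                float *out, size_t n, size_t taps)
{
    for (size_t i = 0; i + taps <= n; i++) {
        float acc = 0.0f;
        for (size_t k = 0; k < taps; k++)
            acc += in[i + k] * coeff[k];
        out[i] = acc;
    }
}

/* Irregular, control-heavy code: pointer chasing and unpredictable,
 * data-dependent branches. This is what a general-purpose CPU with
 * big caches and branch prediction is built for, and what a
 * throughput-oriented accelerator handles poorly. */
struct node { int key; int value; struct node *next; };

int lookup(const struct node *head, int key)
{
    for (const struct node *p = head; p != NULL; p = p->next) {
        if (p->key == key)
            return p->value;
    }
    return -1;
}
```

The first loop is the kind of code that is happy on a throughput engine; the second is the kind of code that will keep general-purpose cores around.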

To sum up, it would seem that in PC land, CPUs and GPUs will start to steal traits from each other in the coming years. This leads to less variation in programming models, but GPUs (and I include Larrabee there) and CPUs will still be quite different, since they are optimized for quite different types of workloads. I do not see general-purpose CPUs replacing GPUs. GPUs appeared for a reason: graphics can always use more performance per chip area than you can ever get from a general-purpose CPU that has to try to be good at everything.

3 thoughts on “Heterogeneous vs homogeneous systems, revisited”

  1. My post at http://jakob.engbloms.se/archives/116 has a short comment from David Ditzel of Transmeta fame about the renewed general interest in accelerator technologies in servers, which is an argument in favor of heterogeneity.

    He does temper it with the caveat that “you need a large volume of kit for a particular task” to make an accelerator worth its design cost. But I think that this is very often the case in embedded computing in particular, and quite often for general-purpose and server computing as well.
