‘Computronium’ is really ‘Unobtainium’

S. Gildert, January 2011

Computronium [1] is defined by some as a substance which approaches the theoretical limit of computational power that we can achieve through engineering of the matter around us. It would mean that every atom of a piece of matter was put to useful work doing computation. Such a system would reside at the ultimate limits of efficiency, wasting the smallest amount of energy possible through the generation of heat. Computronium crops up in science fiction a lot, usually as something that advanced civilizations have created, occasionally causing conflicts due to intensive harvesting of matter from their galaxy to further their processing power. The idea is also linked with advanced machine intelligence: a block of matter which does nothing other than compute would presumably be incredibly sought after by any artificial intelligence looking to get the most compact and powerful brain for its money!

Many fanciful ideas from science fiction stories of the past have indeed turned out to be part of our everyday lives today. And with a little further thought, computronium seems nowhere near as crazy as, say, flying cars, warp drives and lightsabres. In fact, most people would accept that transistors have been getting smaller and more efficient, marching to the beat of Moore’s law. We now take for granted that the processor of a cell phone would once have required an entire building and consumed several kilowatts of power. So what is wrong with taking this idea to its ultimate limit? Why can’t transistors become smaller and smaller until they are the size of atoms, perhaps even the size of sub-atomic bits of matter? Wouldn’t that be something at least very similar to computronium, if not the real deal itself?

Processing power

Let’s pick apart some of the statements above and think a little more carefully about them. To begin, consider the phrase ‘computational power’. This may seem easy to define. One could use FLOPS (floating-point operations per second) as a definition. Or perhaps we could use the ‘clock speed’ of the processor – just how fast can we get those bits of matter to change state?

When looking for good metrics for the computational power of a processor, we come across some difficulties. You can’t just use the ‘clock speed’ of your computer. Imagine, for example, that you split the processor into multiple cores. Some programs might now run faster, even though the ‘clock speed’ stayed the same!

Imagine that you required a processor to perform transforms on lots of pixels at once – say you wanted a game engine where the state of each pixel depended only on the state of that pixel in the previous frame (i.e. it didn’t care what all the other pixels were doing). A parallel processor would calculate the output pixel values very effectively, as it could assign each pixel its own core and compute them all at the same time. In fact, the existence of this type of problem is why we often find that games consoles have Graphics Processing Units (GPUs) built into them. They are parallel processors which are good at doing pixel transforms in this way.
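To make this concrete, here is a minimal sketch in Python of such an ‘embarrassingly parallel’ per-pixel update (the pixel rule and function names are invented purely for illustration):

    # Each output pixel depends only on the same pixel in the previous
    # frame, so the work can be split freely across cores.
    from multiprocessing import Pool

    def update_pixel(value):
        # Hypothetical per-pixel rule: darken the pixel by one step.
        return max(0, value - 1)

    def next_frame(frame):
        # 'frame' is a flat list of pixel values; a pool of worker
        # processes can transform every pixel at the same time.
        with Pool() as pool:
            return pool.map(update_pixel, frame)

    if __name__ == '__main__':
        print(next_frame([200, 128, 64, 1, 0]))  # [199, 127, 63, 0, 0]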

But imagine instead that you wanted to perform a calculation more like the following scenario: calculating the 100th value of the recursive equation z_{n+1} = z_n^2 + 3z_n + 1. There’s no easy way to parallelize this, because each time you calculate the value of z in the equation, you need to know the result from the previous calculation. Everything has to be done in order. So if you tried to run this on your thousand-core GPU, it would only really be able to make use of one of the cores, and then your highly parallel processor wouldn’t look so efficient after all.
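For comparison with the pixel sketch above, here is the same recurrence written out in Python. (The true 100th value is astronomically large, since its digit count roughly doubles at every step, so this sketch tracks it modulo a large prime purely to keep the example runnable; the serial structure is unchanged.)

    # z_{n+1} = z_n^2 + 3*z_n + 1: every step needs the previous
    # result, so extra cores cannot speed up this loop.
    M = 2**61 - 1          # large prime, used only to keep numbers small
    z = 0                  # hypothetical starting value z_0
    for n in range(100):
        z = (z * z + 3 * z + 1) % M
    print(z)               # the 100th iterate, modulo M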

In fact, the only reason why metrics such as clock speed and FLOPS have been useful up to now is that we have mostly been using our processors to do very similar tasks, such as arithmetic. But the kinds of things we want them to do in the future, such as natural language processing and complex fluid dynamics simulations, no longer rely on straightforward arithmetic operations.

We start to see that the computational power of a piece of matter really depends upon what you want it to do!

Arranging matter usefully

So although in the future we may be able to make atomic-size transistors, we still need to decide how to arrange those transistors. If we arrange them in a slightly different way, the programs that we run on them may suddenly become much more efficient. Constructing serial and parallel processors is just one way to think about rearranging your matter to compute differently. There are many other trade-offs, for example how the processor accesses memory, and whether it is analog or digital in nature.

Perhaps, then, to get around this problem, we could create a piece of matter that reprograms itself, rearranging its atoms depending upon what you want it to do. Then you could have a processor with one large core when it is running a serial algorithm (like the recursive equation), and many small cores when it is running a parallel algorithm (like the game engine). Aha! That would get around this problem. Then my computronium can compute anything once more, in the most efficient way possible for each particular task.

However, we find that you still cannot win, even with this method. The ‘reprogramming of the matter’ stage would require a program all of its own to be run on the matter. The cleverer your reprogramming program, the more time your processor would spend reconfiguring itself, and the less time it would spend actually solving problems! You also have to somehow know in advance how to write the program that reprograms your matter, which again requires knowledge of what problems you might want to solve.
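A toy model makes the trade-off explicit (all the numbers here are invented for illustration):

    # Total cost = time spent reconfiguring + time spent solving.
    # Cleverer reconfiguration buys a faster solve, but the overhead
    # can eat the gain.
    def total_time(reconfigure, solve_after):
        return reconfigure + solve_after

    print(total_time(0.0, 10.0))   # no tuning at all: 10.0
    print(total_time(3.0, 6.0))    # modest tuning pays off: 9.0
    print(total_time(9.0, 4.0))    # over-tuning loses: 13.0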

Computing limits

You may be wondering why I haven’t yet mentioned the idea of the Universal Turing Machine [2], which is a theoretical description of a computer able to run any program. Can we not just arrange our atoms to make a Turing Machine, which could then compute anything? It is certainly the case that you can run any classical digital program on a Turing Machine, but the theory says nothing about how efficient that computation would be. If you would like an analogy, a Turing Machine is to a real computer as an abacus is to Excel – there is no reason why you cannot sit down and do your weekly accounts using an abacus, but it might take you a very long time!
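To make the idea concrete, here is a minimal Turing machine simulator sketched in Python. The machine and its rule table are invented for this example; it simply flips every bit on its tape and then halts:

    # Rules map (state, symbol read) -> (symbol to write, move, next state).
    RULES = {
        ('start', '0'): ('1', 'R', 'start'),
        ('start', '1'): ('0', 'R', 'start'),
        ('start', '_'): ('_', 'R', 'halt'),   # '_' is the blank symbol
    }

    def run(tape_string, state='start'):
        tape = dict(enumerate(tape_string))   # sparse tape
        pos = 0
        while state != 'halt':
            write, move, state = RULES[(state, tape.get(pos, '_'))]
            tape[pos] = write
            pos += 1 if move == 'R' else -1
        return ''.join(tape[i] for i in sorted(tape))

    print(run('1011'))   # prints '0100_' (the trailing blank was visited)

Given a suitable rule table, a machine like this can in principle simulate any program – but notice how many steps even this trivial bit-flip takes compared with a hardware NOT gate acting on all the bits at once.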

When we try to build Turing machines in real life, we find that not everything is realistically computable. A Turing Machine in practice and a Turing Machine in principle are two very different beasts. This is because we are always limited by the resources that our real-world Turing machine has access to (in our analogy, there is obviously a limit to how quickly we can move the beads on the abacus). The efficiency of a computer is ALWAYS related to how you assemble it in the real world, what you are trying to do with it, and what resources you have available. One should be careful when dealing with models that assume no resource constraints. Just think how different the world would be if we had an unlimited supply of free, clean energy.

The delicate matter of computation

Let’s philosophise about matter and computation some more. Imagine that the thing you wanted to do was to compute the energy levels of a helium atom (after all, this is a physics blog). You could use a regular desktop computer and start to do some simulations of helium atoms using the quantum mechanical equations. Alternatively, you could take a few real helium atoms and measure the spectrum of light emitted from them when they are excited with a voltage inside an (otherwise evacuated) tube. The point is that a single helium atom is able to compute the spacing of its own energy levels using far fewer resources than our simulation (which may require several tens of grams of silicon processor).
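As a taste of what the ‘simulation’ route involves, here is a minimal sketch of the standard textbook variational estimate of helium’s ground-state energy, using a single effective-nuclear-charge parameter (just one crude method among many, shown for illustration):

    # One-parameter variational estimate for helium (in hartrees):
    # E(s) = s^2 - 2*Z*s + (5/8)*s, minimised over the effective
    # nuclear charge s seen by each electron.
    Z = 2                           # helium's nuclear charge

    def energy(s):
        return s * s - 2 * Z * s + (5.0 / 8.0) * s

    s_best = Z - 5.0 / 16.0         # the minimiser, found analytically
    HARTREE_IN_EV = 27.2114
    print(energy(s_best) * HARTREE_IN_EV)   # about -77.5 eV

    # The measured value is about -79.0 eV. Even this crude
    # 'simulation' takes more arithmetic than the atom itself needs.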

So as we see, atoms are already very busy computing things. In fact, you can think of the Universe as computing itself. So in a way, matter is already computronium, because it is very efficiently computing itself. But we don’t really want matter to do just that. We want it to compute the things that WE care about (like the probability of a particular stock going up in value in the next few days). But in order to make the Universe compute the things that we care about, we find that there is an overhead – a price to pay for making matter do something different from what it is meant to do. Or, to give a complementary way of thinking about this: The closer your computation is to what the original piece of matter would do naturally, the more efficient it will be.

So in conclusion, we find that we always have to cajole matter into computing the things we care about. We must invest extra resources and energy into the computation. We find that the best way of arranging computing elements depends upon what we want them to do. There is no magic ‘arrangement of matter’ which is all things to all people, no fabled ‘computronium’. We have to configure the matter in different ways to do different tasks, and the most efficient configuration varies vastly from task to task.

Afterthoughts: Some philosophical fun

The following amusing (if somewhat contrived) story serves to illustrate the points I have been making. Imagine I am sitting in my apartment, musing over how my laptop seems to generate a lot of heat from running this wordprocessing software. I dream of a computronium-like processor on which I could perform my wordprocessing vastly more efficiently.

Suddenly, the heating in my apartment fails. Shivering with the cold, I am now in no mood to wordprocess; however, I notice that my laptop processor has suddenly become very efficient at performing a task which is of much greater relevance to me (namely, warming up my legs). That processor is now much closer to being a nice little piece of computronium as far as solving my immediate problem goes…

[1] – Computronium – http://en.wikipedia.org/wiki/Computronium

[2] – Universal Turing Machine – http://en.wikipedia.org/wiki/Universal_turing_machine