‘Computronium’ is really ‘Unobtainium’

S. Gildert January 2011

Computronium [1] is defined by some as a substance which approaches the theoretical limit of computational power that we can achieve through engineering of the matter around us. It would mean that every atom of a piece of matter would be put to useful work doing computation. Such a system would reside at the ultimate limits of efficiency, and only the smallest possible amount of energy would be wasted through the generation of heat. Computronium crops up in science fiction a lot, usually as something that advanced civilizations have created, occasionally causing conflicts due to intensive harvesting of matter from their galaxy to further their processing power. The idea is also linked with advanced machine intelligence: a block of matter which does nothing other than compute would presumably be incredibly sought after by any artificial intelligence looking to get the most compact and powerful brain for its money!

Many fanciful ideas from science fiction stories of the past have indeed turned out to be part of our everyday lives today. And with a little further thought, computronium seems nowhere near as crazy as, say, flying cars, warp drives and lightsabres. In fact, most people would accept the idea that transistors have been getting smaller and more efficient, marching to the beat of Moore’s law. We now take for granted that the processor of a cell phone would once have required an entire building and consumed several kilowatts of power. So what is wrong with taking this idea to its ultimate limit? Why can’t transistors become smaller and smaller until they are the size of atoms, perhaps even the size of sub-atomic bits of matter? Wouldn’t that be something at least very similar to computronium, if not the real deal itself?

Processing power

Let’s pick apart some of the statements above and think a little more carefully about them. To begin, consider the phrase ‘computational power’. This may seem easy to define. One could use FLOPS (floating-point operations per second) as a metric. Or perhaps we could use the ‘clock speed’ of the processor – just how fast can we get those bits of matter to change state?

When looking for good metrics for the computational power of a processor, we come across some difficulties. You can’t just use the ‘clock speed’ of your computer. Imagine, for example, that you split the processor into multiple cores. Some programs might now be able to run faster, even if the ‘clock speed’ stayed the same!

Imagine that you required a processor to perform transforms on lots of pixels at once – say you wanted a game engine where the state of each pixel was only related to the state of that pixel on the previous frame (i.e. it didn’t care what any of the other pixels were doing). A parallel processor would calculate the output pixel values very effectively, as it could assign each pixel its own core and compute them all at the same time. In fact, the existence of this type of problem is why we often find that games consoles have Graphics Processing Units (GPUs) built into them. They are parallel processors which are good at doing pixel transforms in this way.
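
To make the contrast concrete, here is a minimal Python sketch of this kind of ‘embarrassingly parallel’ update (the function names and the per-pixel rule are purely illustrative, not taken from any real game engine):

    from multiprocessing import Pool

    def update_pixel(value):
        # Toy per-pixel rule: the new value depends ONLY on the old value
        # of the same pixel, so every pixel can be computed independently.
        return min(255, value * 2 + 1)

    def next_frame(frame, workers=4):
        # Because no pixel needs any other pixel's result, the work can be
        # split across as many cores (or GPU threads) as are available.
        with Pool(workers) as pool:
            return pool.map(update_pixel, frame)

    if __name__ == "__main__":
        old_frame = [10, 20, 30, 40, 50, 60, 70, 80]  # a tiny one-row 'image'
        print(next_frame(old_frame))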

But imagine instead that you wanted to perform a calculation which was more like the following scenario: calculating the 100th value of the recursive equation z_{n+1} = z_n^2 + 3z_n + 1. There’s not really an easy way to parallelize this, because each time you calculate the value of z in the equation, you need to know the result from the previous calculation. Everything has to be done in order. So if you tried to run this on your thousand-core GPU, it would only really be able to make use of one of the cores, and then your highly parallel processor wouldn’t look so efficient after all.
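
For comparison, here is the serial case in the same illustrative style (the starting value is arbitrary, and for almost any choice the numbers blow up to infinity well before the 100th step, but the chain of dependencies is the point):

    def recursive_z(n_steps=100, z0=1.0):
        # z_{n+1} = z_n^2 + 3*z_n + 1: each step needs the previous result,
        # so the loop is inherently serial and extra cores cannot help.
        # (The value itself quickly saturates at float('inf'); what matters
        # here is the dependency chain, not the number.)
        z = z0
        for _ in range(n_steps):
            z = z * z + 3 * z + 1
        return z

    print(recursive_z())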

In fact, the only reason metrics such as clock speed and FLOPS have been useful up to now is that we have mostly been using our processors to do very similar tasks, such as arithmetic. But the kinds of things we want them to do in the future, such as natural language processing and complex fluid dynamics simulations, no longer rely on straightforward arithmetic operations.

We start to see that the computational power of a piece of matter really depends upon what you want it to do!

Arranging matter usefully

So although in the future we may be able to make atomic-size transistors, we still need to decide how to arrange those transistors. If we arrange them in a slightly different way, the programs that we run on them may suddenly become much more efficient. Constructing serial and parallel processors is just one way to think about rearranging your matter to compute differently. There are many other trade-offs, for example how the processor accesses memory, and whether it is analog or digital in nature.

Perhaps, then, to get around this problem, we could create a piece of matter that reprograms itself, rearranging its atoms depending upon what you want it to do. Then you could have a processor with one large core if it is running a serial algorithm (like the recursive equation), and many small cores if it is running a parallel algorithm (like the game engine). Aha! That would get around the problem. Then my computronium can compute anything once more, in the most efficient way possible for each particular task.

However, we find that you still cannot win, even with this method. The ‘reprogramming of the matter’ stage would require a program all of its own to be run on the matter. The cleverer your reprogramming program, the more time your processor would spend reconfiguring itself, and the less time it would spend actually solving problems! You also have to somehow know in advance how to write the program that reprograms your matter, which again requires knowledge of what problems you might want to solve.
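
As a back-of-envelope illustration of the trade-off (the timings below are invented), reconfiguring only pays off when the time saved on the task itself outweighs the time spent rearranging the processor:

    def worth_reconfiguring(t_reconfigure, t_task_now, t_task_after):
        # Rearranging the 'matter' is only a win if reconfiguration time
        # plus the faster run time beats just running the task as-is.
        return t_reconfigure + t_task_after < t_task_now

    # Hypothetical timings in seconds: a clear win for a long-running task...
    print(worth_reconfiguring(5.0, t_task_now=60.0, t_task_after=20.0))  # True
    # ...but a net loss for a quick one.
    print(worth_reconfiguring(5.0, t_task_now=3.0, t_task_after=1.0))    # False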

Computing limits

You may be wondering why I haven’t yet mentioned the idea of the Universal Turing Machine [2], a theoretical description of a computer that is able to compute anything. Can we not just arrange our atoms to make a Turing Machine, which could then run any program? It is certainly the case that you can run any classical digital program on a Turing Machine, but the theory says nothing about how efficient its computation would be. If you would like an analogy, a Turing Machine is to a real computer program as an abacus is to Excel – there is no reason why you cannot sit down and do your weekly accounts using an abacus, but it might take you a very long time!

We find when we try to build Turing machines in real life that not everything is realistically computable. A Turing Machine in practice and a Turing Machine in principle are two very different beasts. This is because we are always limited by the resources that our real-world Turing machine has access to (it is obvious in our analogy that there is a limit to how quickly we can move the beads on the abacus). The efficiency of a computer is ALWAYS related to how you assemble it in the real world, what you are trying to do with it, and what resources you have available. One should be careful when dealing with models that assume no resource constraints. Just think how different the world would be if we had an unlimited supply of free, clean energy.

The delicate matter of computation

Let’s philosophise about matter and computation some more. Imagine that the thing you wanted to do was to compute the energy levels of a helium atom (after all, this is a Physics blog). You could use a regular desktop computer and start to do some simulations of helium atoms using the quantum mechanical equations. Alternatively, you could take a few real helium atoms and measure the spectrum of light emitted from them when they are excited with a voltage inside an (otherwise evacuated) tube. The point is that a single helium atom is able to compute the spacing of its own energy levels using far fewer resources than our simulation (which may require several tens of grams of silicon processor).
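
To give a flavour of the ‘desktop simulation’ side, here is a toy Python estimate of the helium ground-state energy using textbook first-order perturbation theory (my own illustrative sketch, not a calculation from this post): two hydrogen-like electrons plus an electron–electron repulsion correction of roughly (5/4)·Z·Ry.

    RYDBERG_EV = 13.6057   # hydrogen ground-state binding energy in eV
    Z = 2                  # nuclear charge of helium

    # Two independent electrons, each in a hydrogen-like 1s orbital:
    e_independent = -2 * Z**2 * RYDBERG_EV   # about -108.8 eV

    # First-order electron-electron repulsion correction: +(5/4)*Z*Ry
    e_repulsion = (5 / 4) * Z * RYDBERG_EV   # about +34.0 eV

    estimate = e_independent + e_repulsion   # about -74.8 eV
    print(f"Estimated He ground-state energy: {estimate:.1f} eV")
    print("Measured value is roughly -79.0 eV.")

Even this crude estimate is a few electron-volts off, and each extra digit of accuracy costs disproportionately more computation – while the helium atom itself ‘computes’ its levels for free.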

So as we see, atoms are already very busy computing things. In fact, you can think of the Universe as computing itself. So in a way, matter is already computronium, because it is very efficiently computing itself. But we don’t really want matter to do just that. We want it to compute the things that WE care about (like the probability of a particular stock going up in value in the next few days). But in order to make the Universe compute the things that we care about, we find that there is an overhead – a price to pay for making matter do something different from what it is meant to do. Or, to give a complementary way of thinking about this: The closer your computation is to what the original piece of matter would do naturally, the more efficient it will be.

So in conclusion, we find that we always have to cajole matter into computing the things we care about. We must invest extra resources and energy into the computation. We find that the best way of arranging computing elements depends upon what we want them to do. There is no magic ‘arrangement of matter’ which is all things to all people, no fabled ‘computronium’. We have to configure the matter in different ways to do different tasks, and the most efficient configuration varies vastly from task to task.

Afterthoughts: Some philosophical fun

The following amusing (if somewhat contrived) story serves to illustrate the points I have been making. Imagine I am sitting in my apartment, musing over how my laptop seems to generate a lot of heat from running this wordprocessing software. I dream of a computronium-like processor on which I could perform my wordprocessing vastly more efficiently.

Suddenly, the heating in my apartment fails. Shivering with the cold, I am now in no mood to wordprocess. However, I notice that my laptop processor has suddenly become very efficient at performing a task which is of much greater relevance to me (namely, warming up my legs). That processor is now much closer to being a nice little piece of computronium as far as solving my immediate problem goes…

[1] – Computronium – http://en.wikipedia.org/wiki/Computronium

[2] – Universal Turing Machine – http://en.wikipedia.org/wiki/Universal_turing_machine

24 thoughts on “‘Computronium’ is really ‘Unobtainium’”

  1. randalkoene says:

    Wonderful article, Suzanne. The everyday usage of the term computronium (is there such a thing as everyday usage of such a term?) is perhaps not as strict as what you mentioned, but it is in any case true that no one architecture fits all optimally. I find nothing to disagree with in this post! 🙂

  2. Tim Tyler says:

    IMO, computronium was not originally billed as a “magic ‘arrangement of matter’ which is all things to all people” in the first place. The idea was more that it is an efficient 3D programmable massively-parallel computer.

    In fact, the Toffoli/Margolus “programmable matter” paper that helped to kick off the whole idea explicitly listed a number of variables which such matter might still exhibit: number of dimensions, aspect ratio, interconnectivity, states-per-site, transition function, and serial/parallel ratio.

  3. Alexander Kruel says:

    Great post, thanks!

  4. Ben Zaiboc says:

    Pessimist!

    I’m with Tim on this. My conception of ‘computronium’ is more along the lines of utility fog or ‘dust’. A programmable form of matter that can be configured to do any number of things, computing being only one of them.

    I think the design spec. would have to be flexible enough to allow for many different chemical elements to be used, and none of them would be essential. I’d imagine that any actual computation would be done in a nanomechanical fashion, so the machinery could be made of carbon, silicon oxides, iron compounds, whatever.

    Imagine a library of designs, each optimised for a different mix of available elements and compounds, but each resulting in the same general type of ‘foglet’ capable of communicating with other foglets, storing information, performing a standard set of physical actions (including computation), and displaying a set of physical properties (basically a matter of controlling an electron cloud, like an artificial atom). The library could then be distributed among a set of these foglets, and they’d be able to use it (plus loads of other information, obviously) to manufacture more foglets from the material available to them.

    Now maybe I’m still talking about ‘unobtainium’ – I don’t know enough physics/chemistry to know. But it’s a certainty that almost all the matter in the universe is currently dumb, and I think it’s probable that it doesn’t have to be that way.

    Leaving aside all the hydrogen, that is. Probably.

    So maybe between 1 & 10% of all matter could be less dumb than it is now. That would be a huuuuge improvement.

    • physicsandcake says:

      Hey Ben, pessimism is a good thing! 😉

      Pessimism can also mean realism. If you have carefully considered the hard problems involved with doing something, you are much more likely to overcome them and succeed in making something happen.

      So I feel that being a singularity skeptic and really questioning some of the ideas around it is actually more likely to drive progress forward than to stymie it. In big projects, having more constraints often allows you to achieve more, as it focuses your effort.

      I also don’t agree with this idea of matter being ‘dumb’. Matter is pretty clever – it understands Physics much better than we do. It is just that it doesn’t spend its time doing things that WE want it to do.

      I believe that making a distinction between ‘clever’ and ‘dumb’ matter is a little naive.

      In order for matter to solve different types of problems it has to be rearranged in different ways – it can only be defined as ‘clever’ or ‘dumb’ with respect to a very narrow problem (which is kind of what the philosophical musing was attempting to illustrate).

      A chair isn’t dumb matter. It’s solving the problem of giving you somewhere to sit.

      Another example is if all the biomass of all plants on earth was rearranged into Pentium processors, it would probably be quite clever until we all suffocated 🙂

      • Ben Zaiboc says:

        “Pessimism is a good thing”

        Yeah, I know. The pessimist is often pleasantly surprised, the optimist rarely.

        Your chair is good at being a chair, yes, but it’s dumb compared to a chair which *in addition to being a chair*, can reorganise itself into about a hundred different chairs, sense your tension levels, adjust its softness, listen to what you’re saying, reply, give you a massage, clean itself, etc., etc.

        Self-organising vs. top-down commands:
        I know, a very interesting problem, and I trust more intelligent minds than mine are pondering it. I’m thinking there must be some happy medium, where the intelligence is distributed among different, overlapping populations of foglets (or snowstormlets!), for any specific task or domain, a bit like the way groups of people can display behaviour that individuals can’t, and this could have multiple hierarchical levels, in several different dimensions. A bit like the way an organisation can have links to other organisations, and subdivisions, and other organisational units that cut across them, all with different domains of expertise, acting at different levels.

        Continuing this analogy with people, each unit would be the same but different. The same physical architecture, but different algorithms running on different populations.

        Don’t ask me how this would be actually done, though!!

  5. Re: “But we don’t really want matter to do just that [computing the universe according to the laws of physics]. We want it to compute the things that WE care about…”

    Of course you are right here, and of course there is a price to pay if we want to force matter to compute what we want.

    There must be some sort of computational Bekenstein bound which is achieved only when matter does its own thing (computing the universe) and not even approached when matter does other things like running our computations.

    Yet, I guess the _useful_ computing power that we will be able to extract from matter is much, much larger than what we can achieve today, and I am confident that it will be sufficient for the things that we will want to do in practice (like computing billions of human minds in a few grains of sand).

  6. I think the philosophical fun (using your laptop as a heater) is very deep. It shows that utility and fitness for purpose are what really matter. The degree of computronium-ness of our machines can only be defined in terms of our actual requirements.

    I think we will be able to develop practical computronium beyond the wildest dreams of the most visionary science fiction writers. We may find some insurmountable limits when we want to do really god-like tasks, but I think we can have a few centuries or millennia of unlimited fun before that moment.

  7. physicsandcake says:

    Re: The ‘utility fog’ comments, I changed ‘is defined’ to ‘is defined by some’, as I have certainly mostly come across the idea of computronium in the sense of the matrioshka brains and Dyson spheres, but there may be other ways that people think about it.

    I also think there are a lot of interesting questions to be asked about the idea of utility fog. For example, to have programmable matter, you need components that are somewhere on a sliding scale between two extremes:

    a.) totally self-organizing, relying on nearest-neighbor interactions for the transfer of information

    or

    b.) all individually controlled by a central command system.

    In both extremes you run into tricky problems.

    If the system is self-organizing, how does it know what to ‘create’? For example, you may be able to build a utility fog that could self-organize into a chair, but it might not be very good at self-organizing into other things.

    On the other hand, if you have a top-down command system, you have to very carefully tell each component what you want it to do in this particular instance (for example: ‘Unit #23759, go to location x,y,z and execute configuration #391’). This would require a very large number of signals to be sent and a lot of computation to be done in the command center to work out what each of these signals should be and the order in which they are transmitted. It would also require each unit to be very versatile, able to receive signals and act on them accordingly.

    I wonder how small and efficient each component can be in each of these cases? I bet it is a function of where you are on this sliding scale. The more programmable, versatile and individually reconfigurable you make your pieces, the bigger they have to be, the more onboard processing power each unit will need, and the more energy they will consume. I suspect this prevents you from having nanometer-sized utility fog components that are anything more than self-organizing. You might be able to have micrometer-sized components with some reconfigurability. It would more likely then be a utility snowstorm than a utility fog 🙂

    Anyway, it is an interesting problem to consider, and worthy of a whole post in itself!

  8. kneemo says:

    The ultimate computronium is the quantum vacuum itself. Along these lines, in a string/M-theory context, invoking the correspondence between black holes and qubits is useful, giving hints as to how the vacuum might eventually serve as a computational substrate. See, for example:

    L. Borsten, M.J. Duff, A. Marrani, W. Rubens, On the Black-Hole/Qubit Correspondence.

    In M-theory, there exist stable non-perturbative states (BPS states) with mass equal to a fraction of the supersymmetry central charge. These states arise from configurations of two and five-dimensional branes, gravitational waves and monopoles. (Note there are no superstrings in M-theory. They arise from compactifications of M-branes in dimensional reduction from D=11 to D=10).

    The black hole/qubit correspondence so far has made use of toroidal compactifications of M-theory. That is, one begins with the full 11 dimensions of M-theory and starts to curl up dimensions so that, say, n of them form a higher-dimensional torus (doughnut shape), T^n. This then describes a lower-dimensional supergravity theory, in D-n dimensions.

    In the D-n dimensional supergravity theory, the BPS states arising from configurations in M-theory behave like microscopic black holes. These black holes are called extremal black holes, as they are the ground states of black holes undergoing Hawking radiation. These states have no analog in general relativity, but do exist in supergravity and M-theory, which consider quantum effects.

    So far what has been found is that in M-theory compactifications down to dimensions D=3,4,5,6, BPS black hole solutions behave like entangled qubits and qutrits. More precisely, the invariants used to classify black holes with different fractions of supersymmetry end up being the same invariants used to classify entanglement classes of qubits and qutrits. Even more, the black hole mathematical techniques classify qubits and qutrits not only over the real and complex numbers, but over higher-dimensional division algebras in four and eight dimensions. So string theory actually predicts new types of qubits and qutrits and classifies their entanglement classes in advance.

    Now, in practice, if M-theory is correct, the vacuum should be teeming with such microscopic black holes. They would, in a sense, serve as the qudits of an M-theoretical computronium. Specific types of transformations in M-theory called U-duality transformations, that map between BPS black hole solutions, would then serve as ‘quantum gates’ for these qudits.

    Hence, to tell the M-theory vacuum what we would like to do, amounts to the programming of microscopic black holes via U-duality machine code.

  9. Dr.Dom says:

    Great article Suz!

    As an alternative to getting matter to compute things that we care about, you could just go and find the right bit of matter.

    I like to think that somewhere in the depths of the universe, in a far away galaxy, there is a tiny piece of rock that just happens to be quite naturally simulating the entire New York stock exchange…

  10. Computronium is a goal. It won’t be reached, but there is a LOT of space between current chips and the optimal arrangement of available minerals.

    I propose a lesser scale of computronium – ‘computronide’ – a somewhat self-organizing, nanoid-driven computational substrate that is attainable given the available science and technology, safety constraints, energy constraints and material constraints. And supporting infrastructure.

    In metallic asteroids there will, a century from now, be certain types of optimal computronides. This material will operate in cold states, with limited available metals, lots of volatile components and a fair-to-modest state of technology and support infrastructure.

    Mercury will have breathtaking energy gradients and lots of heavy metals, but may have a somewhat iffy state of science and technology (it’s twice as hard to fly to Mercury as it is to Saturn). So Mercurial ‘servers’ will have rather ‘industrial’ computronite processes.

    My best bet for computronite, in terms of energy gradients (tectonic plates, atmospheric cooling), available support technology and materials, remains for a long time Terrestrial Computronite.

  11. Computronide – nonlocal, freeforming, self-organizing

    Computronite – local, grown, cultivated, requires TLC

    ?

  12. Mike g says:

    Very interesting article. I always hear computing power discussed in terms of information processing, which has some entropic costs. Yet, as you point out, the algorithm is very important as well, especially how easy it is to retool your computer to adopt the optimal algorithm for your desired computation. Nature is extremely efficient at solving the equations that govern its time evolution. The alternative brute-force algorithm, in which we use a computer to crunch through the time evolution equations, is completely impractical. Yet we can see how long it took us to figure out how to manipulate nature to the point that we have a program that can balance our checkbook.

  13. haig says:

    The concept of computronium is misleading because the concept of computation, the metaphor du jour as we colloquially associate it with current computing technology, is misleading as well. If instead of computation as defined by digital electronics (or equivalent technologies) we define it as information processing, then everything is computation because everything is information. This is far too broad to be useful, so we can limit it to just the information processing representing negentropic activity. What computronium was meant to represent was the future environment of this posthuman negentropic activity as it extends away from its earthly origin. When futurists imagine a future based on what we think humans will want, they envisage an environment that would provide the largest options and possibilities to transduce (from our physical universe) and generate (ab silico/cogito) information that can feed and modify our (and our posthuman descendants’) conscious patterns indefinitely (i.e. maximize desired experiences).

    We might yet be able to salvage the computronium idea for use in conceptual dialogue. As a thought experiment, imagine unraveling a bounding volume of space comprising earth into some manifold encompassing all the negentropic activity. As posthumanity becomes adept at extending this negentropic activity out into space, the bounding volume must be expanded to include it, and thus the manifold increases. Parts of this posthuman activity may in fact look surprisingly like a computer system, with distinct sections dedicated to memory, others to processing, others to bussing information around, others providing energy/matter, and yet others acting as the ‘display’ output, but in n-dimensional spacetime instead of on a 2D monitor (one could argue that is what our universe currently is!) and at varying scales and dimensions. The fractal nature of the universe may even guarantee that all information systems, regardless of scale, look similar eventually. The manifold that represents all of this can be, I think, rightfully called computronium. If you want to think of this as a reconfigurable computational substrate, go for it!

  14. Adrian says:

    Your article is nice, but I think you could gain a lot if you looked up elementary complexity theory. Computer scientists have spent a lot of energy on making “computation” and “efficiency” very precise words. For example, the theory says very much about how efficient an algorithm will be when run on a Turing machine and how to make the computation faster by constant factors. There is also rich theory about which problems can be solved in parallel and how much parallelization can be achieved.

    • physicsandcake says:

      Hi Adrian,

      I have two problems with complexity theory. They are both to do with practice versus principle, and my view on the matter is entirely subjective because I deal primarily in the ‘practice’ camp.

      It is true that theory says how efficient an algorithm will be when run on a Turing machine, but in the real world that Turing machine has to be built from something, and so your efficiency definition starts to be polluted (or, looking at it from the experimental physics point of view – needs to be expanded), as it starts to encompass notions such as how the Turing machine is physically built.

      My second gripe is that complexity theory deals in limits. Everything is in the limit of infinitely large N, so an algorithm that scales as N^2 and one that scales as N^500 are often treated the same in complexity theory – they are bucketed under the ‘polynomial’ umbrella. In practice it makes a LOT of difference whether your algorithm scales as N^2 or N^500. In fact, what often matters even more is the prefactor in front of the N term, because most people will throw out anything that doesn’t scale linearly in the number of variables and then just fuss over that prefactor.
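
      As a quick, purely numerical illustration (the prefactor here is invented), an N^2 algorithm with a small constant beats a ‘better’ linear algorithm with a constant of a million for every problem size up to the crossover:

          def quadratic_steps(n):
              # 'worse' asymptotic scaling, prefactor of 1
              return n * n

          def linear_steps(n, prefactor=1_000_000):
              # 'better' asymptotic scaling, huge prefactor
              return prefactor * n

          for n in (10, 1_000, 100_000, 10_000_000):
              cheaper = "N^2" if quadratic_steps(n) < linear_steps(n) else "linear"
              print(f"n = {n:>10,}: cheaper in practice -> {cheaper}")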

      There will always be friction between those who build computers and those that investigate idealized versions of them – it’s just an example of a case in science where two different ways of looking at the world come into conflict!

  15. There is either a limit to the efficiency/operations per second (or however you wish to define it/refer to it) imposed by reality, or there is not. That limit, once reached, would be what we’d call “Computronium.”

    Only if computational efficiency has literally no upper limit and can improve infinitely would an upper limit on the efficiency of computing – and thus Computronium – be unobtainable by definition.

  16. RepCores.com aims to turn common unrefined rock into something similar to ‘computronium’.
