Quantum brains

I’m going to talk about quantum brains. But before I do, I have to take a bit of a philosophical detour. So bear with me and we’ll get onto the meaty quantum bits (qubits?) soon.

Disclaimer 1: This is a very general introduction article – it is probably not suitable for QIP scientists who may attempt to dispose of me (probably with giant lasers) for lack of scientific rigor…. *ducks to avoid flying qubits*

We need to think about what we are trying to build. Say we want to build a brain (in silicon, for argument’s sake). Well, for a start that’s not really enough information to get on with the task. What we actually want is a mind in a box. We want it to think, and do human-like things. So we run into a problem here, because the mind is a pretty vague and fuzzy concept. For the purposes of this argument, I’m going to use Penrose’s four viewpoints on how the mind might be connected to the physical brain, given in his book Shadows of the Mind. I’ll summarise them here for those who are not familiar with the definitions:

There are basically four different ways you can interpret how the mind is related to the actual signals buzzing around, and the physics going on, in that wet, squishy 3 lb lump that sits in your skull. Here they are:

(A) – The ‘mind’ just comes about through electro-chemical signals in the brain. You could fully reproduce a ‘mind’ in any substrate using a standard computer, provided you could encode and simulate these signals accurately enough. It would think and be conscious and self-aware in exactly the same way as a human being.

(B) – The workings of the brain can be simulated in exactly the same way as in (A), but the simulation would never be conscious or have self-awareness; it would just be a bunch of signals that ‘seemed’, from the outside, to be behaving like a human. It would effectively be a zombie – no ‘mind’ would arise from it at all.

(C) – There’s no way you can simulate a mind with a standard computer because there’s some science going on that creates the ‘mind’ that we don’t yet know about (but we might discover it in the future).

(D) – There’s no way you can ever simulate a mind because our minds exist outside the realm of physical science. Period. Even that science which we are yet to discover. (This is a somewhat mystical / spiritual / religious argument).

Interestingly, Penrose goes for (C) – mainly because he believes that there are quantum processes occurring in the brain, and that the quantum mechanics going on in there cannot be simulated using a conventional computer. So it’s not that we don’t understand the science yet; it’s that we can’t build computers able to take that science into account (i.e. model the quantum mechanics correctly). Or can we… don’t we have, like, quantum computers now?

Now back to the quantum braaains…

What do I think is the most exciting prospect for quantum computers? Forget factoring, what about building quantum brains? Note: I’m using the phrase ‘brain’ here in a rather unscientific sense to mean a large collection of interconnected agents – essentially a large neural network.

I am a supporter of (A) – which is a variant of the Strong AI hypothesis. That is, a human-level intelligence could be fully simulated on an alternative substrate using a standard, ‘classical’ computer, and would actually BE conscious and self-aware. Even so, one might wonder what a similarly integrated system would be capable of if it could use some aspects of quantum mechanics as an integral part of its operation.

My viewpoint conveniently makes my argument for the further development of QCs pretty watertight. If quantum computers ARE required to simulate the human brain (which I do not believe to be the case), then we should probably develop them anyway. If they are NOT required, but are believed (at least by some) to be fundamentally more efficient for certain computational tasks, then wouldn’t it be a cool experiment to make a brain which could harness that extra computational power? I mean… it would be a fundamentally different type of intelligence. Doesn’t that sound cool? Doesn’t that just make you smile and make the hairs on the back of your neck stand on end? Or maybe that’s just me…

Attentive readers may note that I have subtly disregarded option (D) here. That’s because D stands for Deepak Chopra, who is much better at explaining how QM ties in with that viewpoint than I am.

Quantum Neural Networks have already been explored theoretically. (See here, here, here for just a taste). I think very small QNNs could be realised experimentally at present. If they can be shown to work in principle, they can be scaled up and investigated further.

Adiabatic quantum systems based on the Ising model are perfect for this task. Their structure and behaviour resemble a spin glass, which is mathematically equivalent to certain types of neural network. A spin glass can store patterns in ‘stable’ configurations of spins, just as the brain stores memories as patterns in the configurations of synaptic strengths between neurons (a simplistic model, but it captures the main point).
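The classical face of this equivalence is the Hopfield network, whose energy function is exactly an Ising spin-glass energy, E = -½ sᵀJs. Here’s a minimal sketch (purely classical and illustrative, in NumPy – not a quantum model) of storing a pattern in the couplings via a Hebbian rule and then recovering it from a corrupted state:

```python
# Classical Hopfield network: spins s_i = ±1 play the role of neurons,
# couplings J_ij play the role of synaptic strengths.
import numpy as np

def train(patterns):
    """Hebbian rule: J_ij proportional to sum over patterns of s_i * s_j."""
    n = patterns.shape[1]
    J = patterns.T @ patterns / n
    np.fill_diagonal(J, 0)  # no self-coupling
    return J

def recall(J, state, steps=10):
    """Asynchronous sign updates descend the Ising energy E = -1/2 s^T J s."""
    s = state.copy()
    for _ in range(steps):
        for i in range(len(s)):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

# Store one 8-spin pattern, then recover it from a corrupted copy.
pattern = np.array([[1, -1, 1, 1, -1, -1, 1, -1]], dtype=float)
J = train(pattern)
noisy = pattern[0].copy()
noisy[0] *= -1  # flip one spin
print(recall(J, noisy))  # settles back into the stored configuration
```

An adiabatic quantum machine attacks the same energy landscape, but with quantum dynamics rather than this one-spin-at-a-time descent.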

Of course there’s always the problem of decoherence – and it most likely will be a problem in large-scale quantum systems. There are probably some puddles of coherence around the place; maybe they overlap, maybe they don’t. No-one really knows. Could those puddles of local coherence provide any extra computational power? How connected (or perhaps disconnected) would they have to be? Can we design scalable solid-state systems with larger puddles?

Again, that sounds to me like something we should investigate.

In conclusion

We should be able to simulate anything that the brain is doing (even if we need quantum computers). If the brain IS using large-scale coherence in its operation, that shows us it IS possible to build large-scale coherent quantum systems (if nature can do it, then so can we). This would be useful for all sorts of things, like simulating protein folding. In fact this would arguably be the best outcome. I kinda hope Roger Penrose is right…

However, I don’t believe he is right, as I currently believe the level of large-scale quantum coherent phenomena in the brain is very close to ZERO. But that means we can only IMPROVE on the degree to which quantum mechanics is leveraged in brain-like systems, by building huge, densely connected NNs from quantum devices such as superconducting qubits. We can explore completely new territory in the building of intelligent systems…

Thus we have a win-win situation 🙂

In other words, QCs are cool and we should build them.
And we need more money *ahem*

Note: I argue this and a bunch of other stuff in my QC & AI lecture. Here is the link to my post about that

Disclaimer 2: This topic has also probably been debated to death and back on various places around the internet but it’s always good to exhume it once more for a guest appearance. In fact if I wasn’t feeling so lazy (and cold, the heating in here appears to be broken at the moment) I might have bothered to dig up some references. It’s also a useful place to send people to if they want to know my point of view on this.

EDIT: To perfectly illustrate both my points that a.) there’s loads of stuff on the internet + I’m lazy and b.) software systems are surprisingly intelligent already (WordPress helpfully pointed out the link for me) here’s some stuff that Geordie wrote about this a while ago:

Can an artificial general intelligence arise from a purely classical software system?


27 thoughts on “Quantum brains”

  1. 8-0-8 says:

    Interesting 😀 Personally I would have to go for option C myself, though even if D were the case I’m rather stubborn in what cannot be accomplished by a determined human – in one of the stories I’m writing they find that souls actually do exist and such and eventually develop an entirely new science dealing with that, including the destruction of souls to be rid of certain people, and weapons that kill gods as well… that might be a bit off topic though o.O

    I agree that the human brain doesn’t seem like a quantum computer; I think the greatest problem with replicating an actual brain on a different medium would be making it be so illogical 😄 like you said they function rather strangely.

    It is interesting to note if you could reproduce a sentient being in a computer, from a political standpoint. Would this being be given rights, or would it be stuck on some guidance chip to give a nuke .0001% more accuracy? Would they be mentally superior to a human, and if so, would a human be able to catch up to them ala Kevin Warwick style? and could one transport themselves onto this medium some how, living indefinitely in cyberspace? preferably an open-source model hah

    …okay my scifi tendencies are beginning to bubble over here ^_^’

    • physicsandcake says:

      The cool thing is… it’s not actually that far-fetched as sci-fi goes. I totally think that it’ll be more sci than fi in a few years time 🙂

      I think if self-improving AGI is realised, it will very quickly become much more ‘intelligent’ than humans. It could very quickly wipe us out – not even necessarily through malicious intent. For example, it might not even ‘notice’ we are there….

      And yes there are all kinds of thorny ethical issues around this whole area. It’s just that no-one’s paying them any attention yet, other than some movie plot lines 🙂

      • 8-0-8 says:

        Tell me about it ^^; I feel the need to reference that example where the AGI turns the world into a giant calculator by accident to solve a problem for us… can’t remember how that went exactly.

        But you’re right it is definitely more sci than fi, but as a fellow squishy machine I don’t really want to become obsolete; surely there would be a way to mix meat and metal so that we can break that barrier too… gah I can’t remember what that was called either! I *am* obsolete!! DX

  2. Mike says:

    I suppose all of the standard chemical and electrical inputs to the brain (including the physical senses) would have to be simulated as well, in order for the human intelligence being simulated to avoid the neurological equivalent of a kernel panic…

    Based on some casual reading, it appears that people in a state of sensory deprivation tend to come unhinged after a relatively short period of time. There might be a thread of truth to those rumors about the Mi-Go brain cylinders…

    I’d expect the realistic simulation of the physical environment might prove every bit as challenging as the simulation of the brain (esp. if your simulated brain controls a simulated body that is allowed to have complex interactions with its simulated environment).


    • physicsandcake says:

      I think that we could start with a ‘coarse-grained’ simulation and gradually add detail – I believe that the mind wouldn’t ‘pop’ into existence at some point; rather, it would be a sliding scale towards something recognisable.

      I think you could also ‘trim’ the input signals in the same way in the first instance. I don’t think there’s any need to simulate our environment to start with – just use some real input data and sensors. But it will be interesting to simulate other environments of course 🙂

      Interesting about sensory deprivation experiments, I have always thought that for a mind to be anything like recognisable as having human characteristics it will have to have similar sensory inputs to the ones that we have. I wonder if our ‘mind’ is a stable attractor in a sea of possibilities that evolution has just stumbled upon.

      I wonder where the other stable variants are, how many other configurations of sensory input vs. internal feedback could produce a self-aware entity and how it would ‘think’ and ‘feel’. How close are our nearest neighbours…?

      Putting ethics aside for a minute, this is something we could totally play with when we have a computer-based version of this one stable configuration that we know of – ourselves. We could then have a mechanism of copying it (one that doesn’t involve approximately 16 years and rather a lot of messy biological stuff) and then altering it easily.

      • Darren Reynolds says:

        > one that doesn’t involve a lot of messy biological stuff

        Still, that has its merits …

      • physicsandcake says:

        But maybe the progress of science shouldn’t count on…. I’m going to shut up now.

      • Darren Reynolds says:

        > It would be a sliding scale towards something
        > recognisable

        If some systems have mind and others don’t, and if human beings are systems that do, then this does suggest there is an evolutionary advantage to having mind.

        If there is such an advantage, I want to know what it is, and how it works.

        What do you think?

      • physicsandcake says:

        I sometimes think that we are most likely in an evolutionary dead-end, because we developed a ‘mind’ that is able to contemplate its own existence 😉

      • Geordie says:

        Darren: bacteria are far better competitors than humans in our biosphere by any metric you can imagine. I don’t think there is much evidence that intelligence is generically correlated with evolutionary fitness…. have you seen the trailers for Idiocracy?

  3. […] This post was mentioned on Twitter by Rod Furlan and James Benton, Suzanne Gildert. Suzanne Gildert said: P&C blogpost: Fancy a philosophical snack? Quantum braaaains – https://physicsandcake.wordpress.com/2010/01/04/quantum-brains/ […]

  4. Darren Reynolds says:

    Most things about living beings, it seems, come about through a process of natural selection.

    If a mind is an inevitable feature of an accurate encoding and simulation of the signals, then it seems that all matter has potential to host mind, and thus in some way mind is an innate quality of matter, not unlike colour or strength.

    If a mind isn’t an inevitable feature of an accurate encoding and simulation of the signals, but is instead a possible feature that confers some kind of selection advantage over a system with no mind, then one would have to wonder exactly what that advantage is, how it comes about, and how the advantage is exercised. If anyone here has the faintest clue, I would be fascinated.

    On the other hand, how about E). Perhaps it is impossible to tell whether you have created a mind versus a zombie (even when you do it the normal way using sperm and eggs). And does it matter which you’ve created, anyway? If it quacks like a duck …

    • physicsandcake says:

      I think E = B – A 🙂

      i.e. E is what you get when people argue for hours about the difference between A and B, qualia, zombie arguments etc. etc…. the answer is that you can never prove it one way or the other for the same reason that you can’t prove the person sat next to you isn’t a zombie either

      I think perhaps we touched on this at the QC+AI H+ meeting, but I imagine I was a bit like ‘NO NO NO guys let’s stick to Quantum Physics, it’s much more fun!’

      I also remember fondly the moment when I said that Strong AI is ‘obviously correct’ or something like that and I got this huge sucking in noise from the audience and people hissing and cursing at me…

      …Good times.

  5. Geordie says:

    Strong AI *is* obviously correct… although I often run into very smart people who don’t agree, for some reason they can never quite define. It seems pretty simple to me. Information is fundamentally defined only by the laws of physics of the device it is stored in. If the brain stores and processes information classically, then we can build a classical computation system that does everything a nervous system does (just better and faster) using conventional computing systems.

    I think the resistance to this idea comes from the same place that resistance to the idea that the earth is not the center of the universe comes from… humans have a need to feel special. The idea that a machine could be better in every possible way is repugnant to some. But that doesn’t change its inevitability.

    • physicsandcake says:


      Next week folks we’ll be chopping up and burning some small kittens to illustrate the non-existence of free will 😉

    • physicsandcake says:

      P.S. Oops, I should have linked to your post about this a while back – but it appears WordPress has noticed the semantic overlap anyway and helpfully put this as a further reading suggestion. Kind of ironic given the subject matter…

      Anyway I’ll put an edit 🙂

    • Darren Reynolds says:

      > Strong AI *is* obviously correct

      I’m gradually coming round to accepting this POV, but there’s nothing obvious about it.

      > I think the resistance to this idea comes from the idea
      > that humans have a need to feel special.

      In the nicest possible way, this is a tad patronising. On the basis of empirical evidence (n=1), I think the resistance comes from two places.

      First, humans often have an innate feeling that there is such a thing as free will. Free will might be an essential property of mind and any such free will must logically come from somewhere other than any physics we presently understand. That lack of understanding the physics wouldn’t prevent the simulation of mind (one can write a flight sim but know diddly squat about jet engines), but it might make it a whole lot trickier to do anything meaningfully realistic.

      Second is the zombie test problem (i.e. how do you know whether the system is a zombie?)

      There are probably a whole list of objections that could be identified with a survey. Rebutting them all is probably a pre-condition of acceptance of the idea.

      • physicsandcake says:

        I actually have a kind of self-consistent explanation about qualia and related stuff in my own head (no pun intended). I might even get around to writing something on here about it one day.

        What’s more likely is that I’ll first finish the sci-fi short story I’m writing to explore these ideas…!

      • Geordie says:

        I have trouble seeing how the free will issue and strong AI are related. They seem to be completely independent to me.

        Also I think the zombie type arguments are red herrings. The basic statement is that the specific substrate that information is stored in only matters insofar as its governing laws of physics matter. All classical information storage systems are equivalent in what they are capable of doing. Give me a good theoretical framework of how our nervous systems function and 5-10 years later you will have machines fundamentally better in every dimension we associate with intelligence.

        • physicsandcake says:

          I’m not sure Strong AI and free will are *totally* orthogonal. However, the ‘illusion of free will’ probably just arises from something mundane like chaos theory…

          Also if you are a MWI supporter you can sweep some of the free will stuff under the rug. Alas, it still leaves a lump – but it’s not quite so offensive.

          I dislike zombie arguments because you can never break the subjectivity loop. It’s a bit like arguing whether or not art is artistic.

          People who constantly argue about zombies should try turning some of their effort towards projects which would create a single hive mind from all known intelligence. Then the problem would disappear 😀

        • Darren Reynolds says:

          > I have trouble seeing how the free will issue
          > and strong AI are related.

          I started a pointless debate and humbly apologise.

          The *issues* are certainly related. If there is free will, then the decision-making entity may be the source of intelligence, and it might be absent from machines.

          But now I am invoking the spirit world, the sentence in response to the mere contemplation of which demands a serious purgatorial penalty.

          To be honest this makes my head hurt, and has been doing so for several years.

        • 8-0-8 says:

          Well, from my perspective, I think the two issues are related because if you could reduce “the soul” (aka the essence of sentience) to a finite equation or group of numbers, you could probably develop another equation for humans and predict a person’s course of action *to the letter*. You’d need to have a bunch of open spots for variables, and for optimum results have absolute data on every atom or energy in the universe so you could track its motion and the effects it would have (to map chaos theory, I think)… anyways it would make clear that free will was an illusion if everything was following a set course that was impossible to change, almost like Destiny, except not based on spirituality as it often is.

          …I actually have a book based on that too. It’s very dark.

  6. Darren Reynolds says:

    > for optimum results have absolute data on every
    > atom or energy in the universe so you could track
    > its motion and the effects it would have

    Except you can’t.


    • 8-0-8 says:

      I guess if the principle holds true, but I still believe it would be possible someday, just not by any current methods haha.

      In any case it’s worth noting I’m probably the least intelligent subscriber to this blog ^_^’

  7. Intelligence – what is it?

    The source of one’s intelligence is in one’s vocabulary
    of WORDS. Intelligence is applied. Words are
    icon token patterns – pointers to your ubiquitous
    universe, no more, no less. These patterns can be
    both static and dynamic, hence the need for tokens.
    The “Word-Token-Pointers” point to an experience of
    being, referring directly to one’s personal experience
    Data Base referenced by the “Word-Token-Pointer”.

    Basic Language relates to the five senses; abstractions
    of words on words become words defining pattern
    relationships (both static and dynamic), with their
    assigned meanings by association of (patterns) or
    agreed definitions of meaning. Again, nothing more,
    nothing less. Patterns are assigned meaning, which
    can be brought about through an “Ontology” process
    of discovery or simply the experience of being – that is,
    thinking about words as SYMBOL SETS themselves.

    Here the essence of Abstraction is Compression, and
    that Compression takes the form of English Words
    with their associated assigned patterns of meaning
    (See: Kolmogorov Complexity for further clarification
    of Compression and Rule sets).

    The “Experience of Being” in the (human) sense includes
    all the STATE conditions chemicals and physical
    environmental inputs not necessarily related to or
    limited by one’s five-senses. Typical examples are
    perception of Weight (Mass), and Visual cues of
    Length and Distance.

    Can other SYSTEMS develop VOCABULARIES
    of the “Experience of Being”? The answer is yes.

    Can other SYSTEMS develop abstractions on
    their own VOCABULARIES? The answer is yes.

    At what point does an ALGORITHM become
    a SYSTEM of the “Experience of Being” and
    cross over to Applied Intelligence? Just about
    the time you give that ALGORITHM a body
    to relate to its UNIVERSE in its ENVIRONMENT.

    Now if that ENVIRONMENT includes interactions
    with humans and their literature of WORDS, the
    relationship of Applied Intelligence becomes
    quick, and rapidly moves beyond a single-man
    Applied Intelligence level. Whatever you want
    to call this more-than-one-man intelligence, it
    will happen first in “Second Life” Avatars,
    and then move rapidly to “Perfect Mate Robots”,
    because the SYSTEM ENVIRONMENTS exist.

    In my opinion, efforts at replicating neural-net
    path circuits are a Red Herring Effort in AI, and
    a costly one in silicon, from a physical and
    software standpoint. If they want neural nets,
    growing them in biological form and talking to them
    would yield a higher net-net for the R&D
    money, with far more spin-offs along the way.

    As for “Quantum Brains”, as Suzanne Gildert
    suggests at:

    I’ll be nice and say she should go fishing with
    those “Hip-Fishing-Boots” she has. I can understand
    she is looking for FUNDING and APPLICATIONS
    of her work in her field of Quantum Computers.

    The field of Applied Intelligence Mime Algorithms –
    SYSTEMS developing VOCABULARIES of the
    “Experience of Being” – is not one of them. Now, if
    you look at their underlying Genetic Algorithms
    Language Space of English WORD patterns of
    Meaning and Abstraction, the factorial “Pattern Space”
    requires large amounts of CPU clock-cycles
    to solve a simple “CRYPTOGRAM”, say in the
    London Times. The use of Quantum Computers
    in Pattern Recognition Genetic Algorithms is
    real, and a pressing need at that.

