IET Turing Lecture 2010 with Chris Bishop

Last night I attended the IET Turing Lecture, which was given by Chris Bishop, Chief Research Scientist at Microsoft Research Cambridge. There was a great turnout, well over 400 people, and the event was fully booked! Some people may remember Chris Bishop from the 2008 Royal Institution Christmas Lecture series, where he talked about the potential and limitations of computer technology to an audience of young scientists-to-be.

Here is the Promo video:


And here is the actual lecture:


IET/BCS Turing Lecture 2010 – Embracing Uncertainty: The new machine intelligence

Professor Christopher Bishop, Chief Research Scientist, Microsoft Research Cambridge

From: The IET/BCS Turing Lecture, 2010-02-25

The lecture was interesting; it focused mainly on Bayesian inference techniques and how they can help us handle large data sets. Professor Bishop described how Microsoft have incorporated this research into a new tool called
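For anyone who hasn't met the basic idea, here is a minimal sketch of Bayesian updating – a textbook Beta-binomial coin example of my own, not anything from the lecture or from Microsoft's tool:

```python
# Minimal sketch of Bayesian inference: infer a coin's bias from data
# using a conjugate Beta prior. Purely illustrative.

def beta_update(alpha, beta, heads, tails):
    """Update a Beta(alpha, beta) prior with observed coin flips."""
    return alpha + heads, beta + tails

# Start from a uniform prior Beta(1, 1), then observe 7 heads, 3 tails.
alpha, beta = beta_update(1, 1, heads=7, tails=3)

# Posterior mean estimate of the coin's bias.
posterior_mean = alpha / (alpha + beta)
print(alpha, beta, posterior_mean)  # 8 4 0.666...
```

The nice thing, and the reason it scales to large data sets, is that the update is incremental: each new observation just nudges the posterior, which then serves as the prior for the next one.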

I spoke to Professor Bishop after the lecture; specifically, I asked him whether these techniques could benefit from massively parallel architectures. He said yes, they could. I then tried to ask whether some of these techniques (for example, the message-passing part of the algorithms – watch the video at around 18:20) could potentially be mapped onto, say, an optimization approach. There is a connection here with Hopfield networks, energy minimization and the like, but it’s not immediately obvious from the explanations given in the lecture. Unfortunately I wasn’t able to get very far with this discussion, as there were lots of other people asking questions too. But it is an interesting train of thought, and as I didn’t want to take up the speaker’s entire evening with this line of questioning, I thought I’d better buy his book and think it over a bit more instead 🙂
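To make the energy-minimization side of that train of thought concrete, here is a toy Hopfield network sketch: neurons flip to lower the energy E = −½ sᵀWs, and a stored pattern sits at an energy minimum. This is entirely illustrative – it is the optimization picture the lecture reminded me of, not the message-passing algorithm itself:

```python
import numpy as np

# Toy Hopfield network: one pattern stored via the Hebb rule; neuron
# updates greedily lower the energy E = -1/2 s^T W s, so a corrupted
# pattern relaxes back to the stored one.

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)   # Hebbian weights
np.fill_diagonal(W, 0.0)                       # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

s = pattern.copy()
s[:3] *= -1                                    # corrupt three bits
start_energy = energy(s)

for _ in range(5):                             # deterministic update sweeps
    for i in range(len(s)):
        s[i] = 1 if W[i] @ s >= 0 else -1

print(np.array_equal(s, pattern))   # True: the stored pattern is recovered
print(energy(s) < start_energy)     # True: the updates lowered the energy
```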

The all new ‘Resources’ tab

The regular reader(s) may have noticed that a shiny new tab, the ‘Resources’ tab, has been added to the blog:

Hopefully I’ll put up there any presentations, essays, slides and generally useful information about the stuff that I am interested in and researching, which is basically anything to do with Josephson junction technology, superconducting electronics, experimental quantum computing, quantum neural networks, artificial general intelligence and the brain. I’m currently in the process of getting some new videos edited too, so they will be going up there soon; I’ll put up a separate post about those. Until then, you can enjoy perusing slideshow PDFs of several presentations that I have given to a range of audiences.


Here’s some cake to enjoy with the slides:


physics and cake


(This one was from our regular Wednesday post-group-meeting ‘cake club’)

Josephson junction neurons

This is an interesting paper:

Josephson junction simulation of neurons

by Patrick Crotty, Daniel Schult, Ken Segall (Colgate University)

“With the goal of understanding the intricate behavior and dynamics of collections of neurons, we present superconducting circuits containing Josephson junctions that model biologically realistic neurons. These “Josephson junction neurons” reproduce many characteristic behaviors of biological neurons such as action potentials, refractory periods, and firing thresholds. They can be coupled together in ways that mimic electrical and chemical synapses. Using existing fabrication technologies, large interconnected networks of Josephson junction neurons would operate fully in parallel. They would be orders of magnitude faster than both traditional computer simulations and biological neural networks. Josephson junction neurons provide a new tool for exploring long-term large-scale dynamics for networks of neurons.”

Advantages of using RSFQ-style architectures include the non-linear response of the elements and the analogue processing capability, which means that you can mimic more ‘logical’ neurons with fewer ‘physical’ elements. I’m pretty sure that this is true. In addition, you can think of other wonderful ideas, such as using SQUIDs instead of single junctions (hmm, I wonder where this train of thought might lead) and then applying non-local (or global) magnetic fields to adjust the properties of the neural net. That might be a bit like adjusting the global level of a particular neurotransmitter.
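The basic reason a junction can act like a spiking neuron is worth a quick sketch. In the dimensionless RCSJ model, βc·φ″ + φ′ + sin φ = i_b: below the critical bias the phase sits still, while above it the phase slips by 2π over and over, each slip playing the role of an action potential. This is my own toy illustration, not the circuit from the paper:

```python
import math

# Toy RCSJ (resistively and capacitively shunted junction) model:
# beta_c * phi'' + phi' + sin(phi) = i_bias, integrated with a simple
# semi-implicit Euler scheme. Each 2*pi phase slip counts as a 'spike'.

def count_phase_slips(i_bias, beta_c=0.1, dt=0.01, steps=20000):
    phi, v = 0.0, 0.0
    slips = 0
    for _ in range(steps):
        a = (i_bias - math.sin(phi) - v) / beta_c
        v += a * dt
        phi += v * dt
        if phi > 2 * math.pi:      # a full phase slip = one 'action potential'
            phi -= 2 * math.pi
            slips += 1
    return slips

print(count_phase_slips(0.5))   # sub-critical bias: 0, the 'neuron' is quiet
print(count_phase_slips(1.5))   # super-critical bias: repetitive spiking
```

The firing threshold here is the junction's critical current, and the recovery of the phase after each slip gives something like a refractory period.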

I’m a bit worried about this approach though. Current superconducting technologies tend to have a low number of wiring layers (<5), and as such are pretty much a two-dimensional, planar technology. The maximum tiling connectivity you can get from a single-layer planar architecture is presumably a six-nearest-neighbour unit cell (hexagonal packing). The three-dimensional packing in a real brain gives you a higher intrinsic level of connectivity, even though the structure of the neocortex is only quasi-3-dimensional (it is more like 2D sheets crumpled up, but even these ‘2D’ sheets have a fair amount of 3D connectivity when you look closely). In a real brain, each neuron can have tens of thousands of differently weighted inputs (the fan-in problem). Try building that into your mostly-planar circuit 🙂

One good thing about using analogue methods is that not all the neurons need to be identical. In fact having a parameter spread in this massively parallel architecture probably doesn't hurt you at all (it might even help). Which is good, as current Josephson junction foundries have issues with parameter spreads in the resulting digital circuitry (they are nowhere near as closely controlled as semiconductor foundries).

The paper claims that the tens of thousands of neurons in a neocortical column might be simulable using this method. Personally, I think this is very optimistic with present LSI JJ technology… but even considering the connectivity, parameter-spread and fan-in problems, I think this is a very interesting area to investigate experimentally.

I’ve actually written a bit about this topic before:

Quantum Neural Networks 1 – the Superconducting Neuron model

In that blogpost there were some links to experiments performed on simple Josephson junction neuron circuits in the 1990s.

A nice preprint and another talk

Here is a nice preprint comparing some of the methods of realizing qubits, including neutral atoms, ions, superconducting circuits, etc.

Natural and artificial atoms for quantum computation

I’m about to give a short talk on this very topic to an undergraduate Computer Science class. The talk will serve two purposes: first, it will be an introduction to the myriad of different methods by which qubits and quantum computers can actually be realised, and second, it will give a nice insight into some of the things that experimentalists have to worry about when they are actually building quantum computers. Here is the talk overview:

Models of quantum computation
  – Ion traps – Optical photons / Neutral atoms – NMR – Superconducting circuits – Nanomechanical resonators
Example of operation
  – The Bloch sphere – The density matrix
Decoherence + limitations
  – The DiVincenzo criteria – Measuring T1 and T2 – Sources of decoherence
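For the ‘Bloch sphere / density matrix’ part of the outline, the key picture is that any single-qubit state ρ maps to a Bloch vector r = (Tr[ρX], Tr[ρY], Tr[ρZ]): pure states sit on the surface of the sphere (|r| = 1), and decoherence (e.g. T2 dephasing killing the off-diagonal terms of ρ) pulls the vector inside it. A quick numerical sketch of that mapping:

```python
import numpy as np

# Map a single-qubit density matrix to its Bloch vector via the Pauli
# matrices: r = (Tr[rho X], Tr[rho Y], Tr[rho Z]).

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(rho):
    return np.real([np.trace(rho @ P) for P in (X, Y, Z)])

# Pure superposition |+> = (|0> + |1>)/sqrt(2): Bloch vector (1, 0, 0).
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
print(bloch_vector(rho))           # [1. 0. 0.]

# Complete dephasing zeroes the off-diagonals: the vector shrinks to the
# centre of the sphere (the maximally mixed state).
rho_dephased = np.diag(np.diag(rho))
print(bloch_vector(rho_dephased))  # [0. 0. 0.]
```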

Here are the slides:

Unfortunately I won’t be recording this one so no videos this time. Boo.

Humanity+ UK 2010

This one-day conference will be the first of its kind, aiming to promote and encourage the (currently fast-growing) interest in future technologies and transhumanism in the UK and beyond.

Humanity+ UK 2010

Confirmed speakers include Rachel Armstrong, Nick Bostrom, Aubrey de Grey, Max More, David Orban, David Pearce, Anders Sandberg, Amon Twyman, and Natasha Vita-More. It’s a great opportunity for those who are curious about futurism, transhumanism and accelerating technological change to meet and talk to a wide range of people interested in these subjects.

There will be a conference dinner after the event at a nearby restaurant. Visit the website to find out more and register for the event.

Post-IOP-Talk thoughts

So I gave this talk last night, entitled: Quantum Computing: Is the end near for the Silicon chip? It was an interesting experience. I’ve given talks of this size before, but I don’t think I have ever tried to cover quite so many topics in one go, or to give so many demonstrations in the process. So with two radio microphones strapped to my waist and three cameras recording the talk, I proceeded to enthusiastically extol the future potential of superconducting electronics technology, and to warn about the limits of silicon technology. I gave an overview of superconductors for use in quantum computing, which culminated in a discussion of interesting applications in machine learning and brain emulation.

The main problem I had during the talk was that I wanted to stand in FRONT of the rather large podium/desk in order to talk to the audience, as I felt this would be a bit more personal (rather than ‘hiding’ behind the desk). However, the controls for the visualiser (a camera pointing at an illuminated surface, connected up to the projector so that the audience can look closely at objects you wish to show) were behind the desk, so I had to keep running backwards and forwards every few minutes to switch from the visualiser to the laptop output. This was most irritating, and is really poor design in a lecture theatre. The control for the projector output really should have been somewhat more mobile.

The other moment of complete fail was when the large piece of YBCO stubbornly refused to cool to below 90K when immersed in the liquid nitrogen. Stupid smug piece of perovskite. I stood there for what seemed like hours, with over 80 pairs of curious eyes fixated upon my failing experiment, eagerly anticipating some badass superconducting action. And the damn magnet wouldn’t levitate. There was just way too much thermal mass in the YBCO block and its metal/wood housing to cool it quickly enough. I eventually gave up and swapped to the smaller YBCO piece, making some passing comment about physics experiments never working.

Anyway, those gripes over, the talk seemed to attract a lot of questions relating to the last 30% of the material I covered, namely the part about simulating the human brain and potentially building quantum elements into such machine intelligences.

I hope it inspired some of the younger members of the audience to consider careers as scientists working in these areas.

I’ll try and get the talk edited and put up on the web soon 🙂

Non-Abelian geometric phases in ground state Josephson devices

Interesting arxiv paper today….

Non-Abelian geometric phases in ground state Josephson devices

It’s a shame that the proposed experimental scheme is based around charge qubits 🙂

EDIT: A couple of other interesting recent Physics stories and links:

Quantum photosynthesis
Spin qubits at Princeton
A Simple n-Dimensional Intrinsically Universal Quantum Cellular Automaton
Fixed-gap adiabatic quantum computation

‘The Quantum Brain’ by Jeffrey Satinover

I’ve recently finished reading this book. I was slightly put off by the title when I first saw the book (I figured it would be another variant of Orch-OR or something similar), but I eventually got around to reading it, and I must say I was very pleasantly surprised.

The Quantum Brain – AMAZON

Satinover explains how quantum processes may underpin the workings of the brain, not via the usual Penrose interpretation of microtubule activity leading to large-scale quantum coherence, but rather from a quantum chaos point of view. Satinover argues that quantum chaos can lead to enhanced pattern stability compared to classically chaotic systems, and that these patterns then persist up to larger scales.

The philosophical implication of these arguments is intriguing: because quantum mechanics is (to the best of our knowledge) a non-deterministic process, then if the brain acts as a quantum amplifier in some way, it too may take advantage of this non-determinism.

The idea of a ‘quantum amplifier’ is introduced via our old friend Bob, who cannot decide which of two women to marry. He is struck by the fact that if he is a fully deterministic, mechanistic being, then the person whom he will marry will, in some way, have been preordained. He dislikes this idea, and so bases his choice on the outcome of a quantum mechanical experiment, such as the beam-splitter experiment, whereby a photon has a 50-50 chance of either passing through a half silver mirror or being reflected from it.

In this way the outcome of a quantum process *can* affect the behaviour of a macroscopic system.
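Bob's quantum coin is easy to caricature in code. Note the big caveat: a classical pseudo-random generator stands in for the photon here, so this mimics only the 50-50 statistics, not the genuine quantum non-determinism that is the whole point of the thought experiment (and the names of Bob's two options are my invention, not Satinover's):

```python
import secrets

# Toy 'quantum amplifier': one microscopic 50-50 outcome (a photon at a
# half-silvered mirror) decides a macroscopic action. A classical RNG
# stands in for the quantum measurement -- illustration only.

def beam_splitter_photon():
    """Stand-in for a single photon hitting a 50-50 beam splitter."""
    return "transmitted" if secrets.randbits(1) else "reflected"

def bobs_choice():
    outcome = beam_splitter_photon()
    return "Alice" if outcome == "transmitted" else "Carol"

print(bobs_choice())  # one microscopic event, amplified to a life decision
```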

Of course this is all very interesting with regards to the question of simulating the human brain at a very low level (one where QM does start to come into play). Just how low is low? Your opinion may change slightly after reading this 🙂

These are just a few of the very interesting ideas explored in this book. I’d strongly recommend it.

Blue Brain project progress

Here is a documentary made by Noah Hutton about EPFL’s Blue Brain Project:

The Beautiful Brain – A Documentary

The Blue Brain project is an immense undertaking to simulate the neuronal activity within an entire human brain. The project is also looking at the brains of smaller mammals along the way, such as mice and cats.

I’m very excited about this project. Even if it doesn’t yield fruitful ‘simulated’ behaviour in the first instance, I think that it will be invaluable as a resource for other projects. I feel it’s a bit like a connectome project. The map will be useful, even though it still won’t tell us the best way to get from A to B.

The worry I have about systems like this is the absence of interaction with a real environment. I think a brain simulation would need a lot of information from an environment which is similar to that in humans for it to even have a chance at simulating ‘human like’ patterns of behaviour. Additionally, a large proportion of the brain is dedicated to all the regulatory biology of the human body, and processing of sensory input information. What happens to all these inputs, for example the hormone regulation control and the immune system? Are they just to be left open circuit?

The documentary talked about how the plan was to use the simulation to control a virtual mouse or rat body in an artificial environment. In addition to the obvious motor control and sensory inputs, what other parts of the rodent biology will have to go into this simulation?

I guess the question that arises is: Will Blue Brain be given a Blue Body? And what kind of body would be suitable? As far as the senses go, is it easier just to interface with the real world rather than try and simulate an entirely virtual world with exquisitely controlled feedback just to provide the correct inputs for all the I/O systems? In other words, perhaps the simulated brain should be built into a cybernetic organism.

Such an organism would have a completely different ‘biology’ (in fact it wouldn’t be biological at all, but the brain would still need to control complex systems in such an entity) and therefore the brain simulation would have to be grossly hacked to make it compatible. However, this is both more ethical and somewhat easier than, say, cloning an organism without a brain and developing an entire Brain/Body-Computer-Interface just so it can be controlled by the simulation.

The other main thought I have on this is that to simulate a real brain of any kind the structure would have to change in response to new inputs (i.e. learn and form new memories by growing new connections). I wonder if this capability could be built into the simulation. I imagine it must be, otherwise you’d just end up with a purely reactive system, something more akin to a lizard brain with very little adaptive neocortical component.
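In simulation terms, the simplest caricature of ‘growing new connections’ is a Hebbian rule: a synapse strengthens whenever its two neurons fire together, and a connection counts as ‘grown’ once the weight crosses a threshold. This sketch is entirely my own and vastly cruder than real structural plasticity (or anything Blue Brain does), but it shows the kind of mechanism that would have to be built in:

```python
import numpy as np

# Crude Hebbian 'structural plasticity' sketch: weights accumulate when
# neurons are co-active ('fire together, wire together'), and a synapse
# is considered to exist once its weight exceeds a growth threshold.

rng = np.random.default_rng(1)
n = 6
weights = np.zeros((n, n))
connected = np.zeros((n, n), dtype=bool)   # which synapses have 'grown'

LEARN_RATE = 0.2
GROWTH_THRESHOLD = 0.5

for _ in range(50):
    activity = rng.random(n) < 0.4         # random binary firing pattern
    co_active = np.outer(activity, activity).astype(float)
    np.fill_diagonal(co_active, 0.0)       # no self-synapses
    weights += LEARN_RATE * co_active      # Hebbian strengthening
    connected |= weights > GROWTH_THRESHOLD

print(int(connected.sum()), "connections grown")
```

Freeze the weight updates and you get exactly the ‘snapshot’ scenario below: a network that can still react, but can never wire in anything new.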

I wonder how an organism would function if a ‘snapshot’ of its brain (including a neocortical component) was run as a simulation, but it was not able to grow any new connections. How would the organism behave? Would it work at all? Is this similar to any existing disabling conditions in humans? Presumably it would not be able to learn, or form new memories.

Any thoughts appreciated.