IET Turing Lecture 2010 with Chris Bishop

Last night I attended the IET Turing lecture which was given by Chris Bishop, the Chief Research Scientist at Microsoft Research Cambridge. There was a great turnout, well over 400 people, and the event was fully booked! Some people may remember Chris Bishop from the 2008 Royal Institution Christmas lecture series, where he talked about the potential and limitations of computer technology to an audience of young scientists-to-be.

Here is the promo video:

[Promo video]

And here is the actual lecture:

[IET/BCS Turing Lecture 2010 – Embracing Uncertainty: The new machine intelligence. Professor Christopher Bishop, Chief Research Scientist, Microsoft Research Cambridge]
The lecture was interesting; it focused mainly on Bayesian inference techniques and how they can help us handle large data sets. Professor Bishop described how Microsoft have incorporated this research into a new tool called Infer.NET.
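To give a flavour of the sort of reasoning involved, here is a minimal Bayesian-updating sketch in Python. It's just my own toy coin-flip example with made-up data, nothing to do with Infer.NET itself, but it shows the basic idea of carrying a full distribution (rather than a single point estimate) through the calculation:

```python
# Toy Bayesian updating: a coin with unknown bias p, uniform Beta(1, 1) prior,
# updated one observation at a time. Purely illustrative, with made-up data.

alpha, beta = 1.0, 1.0                     # Beta prior pseudo-counts
observations = [1, 0, 1, 1, 0, 1, 1, 1]    # 1 = heads, 0 = tails

for x in observations:
    alpha += x        # conjugate update: add the heads...
    beta += 1 - x     # ...and the tails to the pseudo-counts

mean = alpha / (alpha + beta)              # posterior mean of the bias
print(f"posterior is Beta({alpha:.0f}, {beta:.0f}), posterior mean bias = {mean:.3f}")
```

The "embracing uncertainty" point is that the posterior distribution keeps track of how confident you should be, which matters a lot more once the models get big and the data get noisy.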

I spoke to Professor Bishop after the lecture; specifically, I asked him whether these techniques could benefit from massively parallel architectures. He said yes, they could. I then tried to ask whether some of these techniques (for example the message-passing part of the algorithms; watch the video at around 18:20) could potentially be mapped onto, say, an optimization approach. There seems to be a connection with Hopfield networks, energy minimization and the like, but it's not immediately obvious from the explanations given in the lecture. Unfortunately I wasn't able to get very far with this discussion, as there were lots of other people asking questions too. But it is an interesting train of thought, and as I didn't want to take up the speaker's whole evening with this line of questioning, I thought I'd probably better buy his book and think it over a bit more instead 🙂
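For anyone wondering what I was driving at with the Hopfield comparison, here's a rough sketch (entirely my own illustration, not anything from the lecture). In a Hopfield network, asynchronous updates never increase the energy E = -1/2 Σ_ij w_ij s_i s_j, so pattern recall is literally a descent towards a minimum of an energy function, i.e. an optimisation process:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one pattern with the Hebb rule, then let the network relax towards it.
pattern = rng.choice([-1, 1], size=16)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)                      # no self-connections

def energy(s):
    return -0.5 * s @ W @ s                   # Hopfield energy function

state = pattern.copy()
state[:6] *= -1                               # corrupt part of the pattern
print("initial energy:", energy(state))

for _ in range(5):                            # a few asynchronous sweeps
    for i in rng.permutation(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1   # each flip never raises E

print("final energy:  ", energy(state))
print("pattern recovered:", np.array_equal(state, pattern))
```

Bishop's message-passing algorithms propagate full probability distributions rather than binary states, so they are doing something richer than this, but the "relax into a low-energy / high-probability configuration" intuition is the connection I was trying to probe.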

The all-new ‘Resources’ tab

The regular reader(s) may have noticed that a shiny new tab, the ‘Resources’ tab, has been added to the blog.

Hopefully I’ll put up there any presentations, essays, slides and generally useful information about the stuff that I am interested in and researching, which is basically anything to do with Josephson junction technology, superconducting electronics, experimental quantum computing, quantum neural networks, artificial general intelligence and the brain. I’m currently in the process of getting some new videos edited too, so they will be going up there soon; I’ll write a separate post about those. Until then, you can enjoy perusing slideshow PDFs of several presentations that I have given to a range of audiences.

Here’s some cake to enjoy with the slides:

[physics and cake]

(This one was from our regular Wednesday post-group-meeting ‘cake club’.)

Josephson junction neurons

This is an interesting paper:

Josephson junction simulation of neurons

by Patrick Crotty, Daniel Schult, Ken Segall (Colgate University)

“With the goal of understanding the intricate behavior and dynamics of collections of neurons, we present superconducting circuits containing Josephson junctions that model biologically realistic neurons. These “Josephson junction neurons” reproduce many characteristic behaviors of biological neurons such as action potentials, refractory periods, and firing thresholds. They can be coupled together in ways that mimic electrical and chemical synapses. Using existing fabrication technologies, large interconnected networks of Josephson junction neurons would operate fully in parallel. They would be orders of magnitude faster than both traditional computer simulations and biological neural networks. Josephson junction neurons provide a new tool for exploring long-term large-scale dynamics for networks of neurons.”

Advantages of using RSFQ-style architectures include the non-linear response of the elements and the analogue processing capability, which means that you can mimic more ‘logical’ neurons with fewer ‘physical’ elements. I’m pretty sure that this is true. In addition, you can think of other wonderful ideas, such as using SQUIDs instead of single junctions (hmm, I wonder where this train of thought might lead) and then applying non-local (or global) magnetic fields to adjust the properties of the neural net, which might be a bit like adjusting the global level of a particular neurotransmitter.
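To give a feel for the non-linear response involved, here's a toy integration of the standard RCSJ (resistively and capacitively shunted junction) equation in dimensionless form, β_c φ'' + φ' + sin φ = i. This is my own sketch with made-up parameters, and a single junction rather than the two-junction loop the paper actually uses:

```python
import numpy as np

# Toy RCSJ dynamics in dimensionless units: beta_c * phi'' + phi' + sin(phi) = i_bias.
# Parameters are illustrative only, not taken from the Crotty/Schult/Segall circuit.
beta_c, i_bias = 0.8, 0.5
dt, steps = 0.01, 20000

phi, dphi = 0.0, 0.0
voltage = []                                  # junction voltage ~ dphi/dt

for n in range(steps):
    if n == steps // 2:
        i_bias = 1.2                          # push the bias above the critical current
    ddphi = (i_bias - np.sin(phi) - dphi) / beta_c
    dphi += ddphi * dt
    phi += dphi * dt
    voltage.append(dphi)

# Below the critical current the phase sits in a potential well (near-zero average
# voltage); above it the phase runs and the junction produces a train of voltage pulses.
half = steps // 2
print("mean voltage below threshold:", np.mean(voltage[:half]))
print("mean voltage above threshold:", np.mean(voltage[half:]))
```

Below the critical current the phase just sits in a minimum of the tilted-washboard potential; above it the phase runs, and each 2π slip releases a single flux quantum, which is the voltage pulse these circuits use in place of an action potential.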

I’m a bit worried about this approach, though. Current superconducting technologies tend to have a low number of wiring layers (<5), and as such are pretty much a two-dimensional, planar technology. The maximum tiling connectivity you can get from a single-layer planar architecture is presumably a six-nearest-neighbour unit cell (hexagonal packing). The three-dimensional packing in a real brain gives you a higher intrinsic level of connectivity, even though the structure of the neocortex is only quasi-3-dimensional (it is more like 2D sheets crumpled up, but even these '2D' sheets have a fair amount of 3D connectivity when you look closely). In a real brain, each neuron can have tens of thousands of differently weighted inputs (the fan-in problem). Try building that into your mostly-planar circuit 🙂

One good thing about using analogue methods is that not all the neurons need to be identical. In fact, having a parameter spread in this massively parallel architecture probably doesn't hurt you at all (it might even help), which is good, as current Josephson junction foundries have issues with parameter spreads in the resulting digital circuitry (they are nowhere near as tightly controlled as semiconductor foundries).
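As a quick (and very artificial) sanity check of that claim, here's a toy associative-memory recall in which every coupling weight is jittered by a random ~20%. All the numbers are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)

pattern = rng.choice([-1, 1], size=32)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# "Fabrication spread": every weight gets an independent ~20% multiplicative error.
W *= rng.normal(1.0, 0.2, size=W.shape)

state = pattern.copy()
state[:10] *= -1                              # corrupt roughly a third of the bits

for _ in range(5):                            # a few asynchronous recall sweeps
    for i in rng.permutation(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered despite the spread:", np.array_equal(state, pattern))
```

With a single stored pattern and this much corruption, the recall almost always still succeeds; the spread just adds a little noise to each neuron's input, which is swamped by the signal from the rest of the network.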

The paper claims that the tens of thousands of neurons in a neocortical column might be simulable using this method. Personally I think this is very optimistic with present LSI JJ technology… but even considering the connectivity, parameter-spread and fan-in problems, I think this is a very interesting area to investigate experimentally.

I’ve actually written a bit about this topic before:

Quantum Neural Networks 1 – the Superconducting Neuron model

In that blogpost there were some links to experiments performed on simple Josephson junction neuron circuits in the 1990s.

A nice preprint and another talk

Here is a nice preprint comparing some of the methods of realizing qubits, including neutral atoms, ions, superconducting circuits, etc.

Natural and artificial atoms for quantum computation

I’m about to give a short talk on this very topic to an undergraduate Computer Science class. The talk will serve two purposes: first, it will be an introduction to the myriad different methods by which qubits and quantum computers can actually be realised; second, it will give a nice insight into some of the things that experimentalists have to worry about when they are actually building quantum computers. Here is the talk overview:

Models of quantum computation
Implementations: ion traps, optical photons / neutral atoms, NMR, superconducting circuits, nanomechanical resonators
Example of operation: the Bloch sphere, the density matrix (the relevant formulas are sketched just after this list)
Decoherence + limitations: the DiVincenzo criteria, measuring T1 and T2, sources of decoherence
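For reference, here are the handful of standard formulas the ‘example of operation’ and decoherence parts lean on. This is just my own quick LaTeX summary of textbook material, not a slide from the talk:

```latex
% Pure single-qubit state as a point on the Bloch sphere
|\psi\rangle = \cos\tfrac{\theta}{2}\,|0\rangle + e^{i\varphi}\sin\tfrac{\theta}{2}\,|1\rangle

% Density matrix in terms of the Bloch vector \vec{r}
% (|\vec{r}| = 1 for pure states, |\vec{r}| < 1 for mixed states)
\rho = \tfrac{1}{2}\bigl( I + \vec{r}\cdot\vec{\sigma} \bigr)

% Decoherence: the longitudinal component of \vec{r} relaxes to equilibrium on the
% timescale T_1, the transverse components decay on T_2, with T_2 \le 2T_1
\langle\sigma_z\rangle(t) - \langle\sigma_z\rangle_{\mathrm{eq}} \propto e^{-t/T_1},
\qquad \langle\sigma_{x,y}\rangle(t) \propto e^{-t/T_2}
```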

Here are the slides:

Unfortunately I won’t be recording this one, so no videos this time. Boo.

Humanity+ UK 2010

This one-day conference will be the first of its kind, aiming to promote and encourage the (currently fast-growing) interest in future technologies and transhumanism in the UK and beyond.

Humanity+ UK 2010

Confirmed speakers include Rachel Armstrong, Nick Bostrom, Aubrey de Grey, Max More, David Orban, David Pearce, Anders Sandberg, Amon Twyman, and Natasha Vita-More. It’s a great opportunity for those who are curious about futurism, transhumanism and accelerating technological change to meet and talk to a wide range of people interested in these subjects.

There will be a conference dinner after the event at a nearby restaurant. Visit the website to find out more and register for the event.

Post-IOP-Talk thoughts

So I gave this talk last night, entitled ‘Quantum Computing: Is the end near for the Silicon chip?’. It was an interesting experience. I’ve given talks of this size before, but I don’t think I have ever tried to cover quite so many topics in one go, and give so many demonstrations in the process. So with two radio microphones strapped to my waist, and three cameras recording the talk, I proceeded to enthusiastically extol the future potential of superconducting electronics technology, and warn about the limits of silicon technology. I gave an overview of superconductors for use in quantum computing, which culminated in a discussion of interesting applications in machine learning and brain emulation.

The main problem I had during the talk was that I wanted to stand in FRONT of the rather large podium/desk in order to talk to the audience, as I felt this would be a bit more personal (rather than ‘hiding’ behind the desk). However, the controls for the visualiser (a camera pointing at an illuminated surface, connected up to the projector so that the audience can look closely at objects you wish to show) were behind the desk, so I had to keep running backwards and forwards every few minutes to switch between the visualiser and the laptop output. This was most irritating, and it is a really poor piece of design for a lecture theatre. The control for the projector output really should have been somewhat more mobile.

The other moment of complete fail was when the large piece of YBCO stubbornly refused to cool to below 90K when immersed in the liquid nitrogen. Stupid smug piece of perovskite. I stood there for what seemed like hours, with over 80 pairs of curious eyes fixated upon my failing experiment, eagerly anticipating some badass superconducting action. And the damn magnet wouldn’t levitate. There was just way too much thermal mass in the YBCO block and its metal/wood housing to cool it quickly enough. I eventually gave up and swapped to the smaller YBCO piece, making some passing comment about physics experiments never working.
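Just to get a feel for why it was so slow, here's a back-of-envelope heat budget. Every number is a guess on my part (I never weighed the demo block), and in reality the insulating film of nitrogen gas that boils off around a warm object slows the cooling far more than the raw numbers suggest:

```python
# Rough cooling budget for a YBCO demo block dunked in liquid nitrogen.
# Every number here is an assumed, order-of-magnitude value, not a measurement.
m_block = 0.5         # kg, guessed mass of the block plus its metal/wood housing
c_block = 450.0       # J/(kg K), rough room-temperature specific heat (it falls as T drops)
T_start, T_LN2 = 295.0, 77.0
L_vap_N2 = 199e3      # J/kg, latent heat of vaporisation of liquid nitrogen

heat_to_remove = m_block * c_block * (T_start - T_LN2)   # ~ m c dT, crude upper bound
ln2_boiled_off = heat_to_remove / L_vap_N2

print(f"heat to remove: ~{heat_to_remove/1e3:.0f} kJ")
print(f"liquid nitrogen boiled away: ~{ln2_boiled_off:.2f} kg")
```

So, with those guessed numbers, something like a quarter of a kilogram of nitrogen has to boil away just to chill that one lump of ceramic and its housing, and all of that boiling happens at the surface of the block, which is why it sat there steaming for so long.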

Anyway, those gripes over, the talk seemed to attract a lot of questions relating to the last 30% of the material I covered, namely the part about simulating the human brain and potentially building quantum elements into such machine intelligences.

I also hope it inspired some of the younger members of the audience to see working as scientists in these areas as an interesting career path.

I’ll try and get the talk edited and put up on the web soon 🙂