Experimental investigation of an eight-qubit unit cell in a superconducting optimization processor

Anyone who follows this blog and wants to get a real in-depth insight into the way that D-Wave’s processors are built, and how they solve problems, should definitely read this paper:

Phys. Rev. B 82, 024511 (2010), R. Harris et al.

The paper itself is quite long (15 pages) but it really gives a great description of how an 8-qubit ‘portion’ of the processor is designed, fabricated, fit to a physical (quantum mechanical) model, calibrated, and then used to solve problems.

If you don’t have access to the Phys Rev B journal, you can read a free preprint of the article here. And if you’ve never tried reading a journal paper before, why not give it a go! (This is an experimental paper, which means there are lots of pretty pictures to look at, even if the Physics gets hard to follow). For example, a microphotograph of the 8-qubit cell:

What is quantum co-tunneling and why is it cool?

You may have seen this cool new paper on the ArXiv:

Observation of Co-tunneling in Pairs of Coupled Flux Qubits

(I believe there is something called a ‘paper dance’ that I am supposed to be doing)….

Anyway, here I’ll try and write a little review article describing what this paper is all about. I’m assuming some knowledge of elementary quantum mechanics here. You can read up on the background QM needed here and here.

First of all, what is macroscopic resonant tunneling (MRT)?

I’ll start by introducing energy wells. These are very common in the analysis of quantum mechanical systems. When you solve the Schrodinger equation, you put into the equation an energy landscape (also known as a ‘potential’), and out pop the wavefunctions and their associated eigenvalues (the energies that the system is allowed to have). This is usually illustrated with a square well potential, or a harmonic oscillator (parabolic) potential, like this:

Well, the flux qubit (quantum bit), which is what we build, has an energy landscape that looks a bit like a double well. This is useful for quantum computation as you can call one of the wells ‘0’ and the other ‘1’. When you measure the system, you find that the state will be in one well or the other, and the value of your ‘bit’ will be 0 or 1. The double well potential as you might imagine also contains energy levels, and the neat thing is that these energy levels can see each other through the barrier, because the wavefunction ‘leaks’ a little bit from one well into the neighbouring one:

One can imagine tilting the two wells with respect to one another, so the system becomes asymmetric and the energy levels in each well move with respect to one another. In flux qubit-land, we ’tilt’ the wells by applying small magnetic fields to the superconducting loops which form the qubits. Very crudely, when energy levels ‘line up’ the two wells see each other, and you can get quantum tunneling between the two states.
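If you fancy playing with this yourself, here is a minimal numerical sketch of the idea (a toy 1D quartic double well in dimensionless units with made-up parameters, not the real flux-qubit potential): it builds a double-well potential with an adjustable tilt, diagonalises the finite-difference Schrödinger Hamiltonian using numpy, and prints the lowest few energy levels so you can watch them move as the tilt changes.

```python
import numpy as np

# Toy 1D double well: V(x) = x^4 - 2*x^2 + tilt*x (dimensionless units).
# All parameters here are illustrative only, not a real flux-qubit potential.
def energy_levels(tilt, n_levels=6, n_grid=800, x_max=3.0, hbar2_over_2m=0.05):
    x = np.linspace(-x_max, x_max, n_grid)
    dx = x[1] - x[0]
    V = x**4 - 2.0 * x**2 + tilt * x

    # Finite-difference kinetic term: -(hbar^2/2m) d^2/dx^2.
    main = hbar2_over_2m * 2.0 / dx**2 + V
    off = -hbar2_over_2m / dx**2 * np.ones(n_grid - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    return np.linalg.eigvalsh(H)[:n_levels]

for tilt in (0.0, 0.1, 0.3):
    levels = energy_levels(tilt)
    print(f"tilt = {tilt:4.2f} -> lowest levels: "
          + ", ".join(f"{E:6.3f}" for E in levels))
```

At zero tilt the lowest two levels form a nearly degenerate pair split only by tunnelling through the barrier; as the tilt grows, levels belonging to the left and right wells sweep past one another, and it is at these alignments that tunnelling between the wells switches on.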

This effect is known as macroscopic resonant tunneling. So how do you measure it? You start by initializing the system so that the state is localised in just one well (for example, by biasing the potential very hard in one direction so that there is effectively only one well), like this:

and then tilt the well-system back a little bit. At each tilt value you record which well the state ends up in, then return the system to the initialisation state and repeat lots and lots of times, doing this for a range of tilts. As mentioned before, when the energy levels line up you can get some tunneling, and you are more likely to find the system on the other side of the barrier:

In this way you can build up a picture of when the system is tunneling and when it isn’t, as a function of tilt. Classically, the particle would remain mostly in the state it started in, until the tilt gets so large that the particle can be thermally activated OVER the barrier. So classically the probability of the state being found on the right-hand side (‘state 1’) as a function of tilt looks something like this:

Quantum mechanically, as the energy levels ‘line up’, the particle can tunnel through the barrier – and so you get a little resonance in the probability of finding it on the other side (hence the name MRT). There are lots of energy levels in the wells, so as you tilt the system more and more, you encounter many such resonances. So the probability as a function of tilt now looks something like this:

This is a really cool result, as it demonstrates that your system is quantum mechanical. There’s just no way you can get these resonances classically, as there’s no way the particle can get through the barrier classically.
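To make the measurement procedure concrete, here is a little Monte Carlo cartoon (entirely a toy model of my own with invented numbers, not the real experiment or its lineshapes): at each tilt we assume some underlying probability of ending up in the right-hand well, a smooth thermal-activation step for the ‘classical’ case and the same step plus a few resonance bumps for the ‘quantum’ case, and then simulate preparing and measuring the system thousands of times, estimating the probability from the counts just as the experiment does.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_classical(tilt, width=0.2):
    # Smooth step: thermally activated escape over the barrier at large tilt.
    return 1.0 / (1.0 + np.exp(-(tilt - 1.2) / width))

def p_quantum(tilt, spacing=0.4, height=0.3, sigma=0.03):
    # Same step plus resonance bumps where left- and right-well levels line up.
    bumps = sum(height * np.exp(-(tilt - n * spacing) ** 2 / (2 * sigma**2))
                for n in range(1, 4))
    return np.clip(p_classical(tilt) + bumps, 0.0, 1.0)

shots = 5000  # initialise / tilt / measure repetitions per tilt value
for tilt in np.arange(0.0, 1.45, 0.1):
    pc = (rng.random(shots) < p_classical(tilt)).mean()
    pq = (rng.random(shots) < p_quantum(tilt)).mean()
    print(f"tilt {tilt:4.1f}:  classical P(right) = {pc:.3f}   quantum P(right) = {pq:.3f}")
```

The ‘quantum’ column picks up extra humps at the tilts where a resonance sits, while the ‘classical’ column is just a featureless step; that is the qualitative signature described above, although in the real experiment the peak positions, heights and lineshapes are measured rather than assumed.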

Note: This is slightly different from macroscopic quantum tunneling, when the state tunnels out of the well-system altogether, in the same way that an alpha particle ‘tunnels’ out of the nucleus during radioactive decay and flies off into the ether. But that is a topic for another post.

So what’s all this co-tunneling stuff?

It’s all very nice showing that a single qubit is behaving quantum mechanically. Big deal, that’s easy. But stacking them together like qubit lego and showing that the resulting structure is quantum mechanical is harder.

Anyway, that is what this paper is all about. Two flux qubits are locked together by magnetic coupling, and therefore the double well potential is now actually 4-dimensional. If you don’t like thinking in 4D, you can imagine two separate double-wells, which are locked together so that they mimic each other. Getting the double well potentials similar enough to be able to lock them together in the first place is also really hard with superconducting flux qubits. It’s actually easier with atoms or ions than superconducting loops, because nature gives you identical systems to start with. But flux qubits are more versatile for other reasons, so the effort that has to go into making them identical is worthwhile.

Once they are locked together, you can again start tilting the ‘two-qubit potential’. The spacing of the energy levels will now be different (think about a mass on the end of a spring – if you glue another mass to it, the resonant frequencies of the system will change, and the energy levels of the system along with them). We have sort of made our qubit ‘heavier’ by adding another one to it.
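One quick way to see the ‘heavier qubit’ effect is the usual two-level caricature of a flux qubit: a spin with tunnelling energy Delta, with two such spins locked together by a coupling J. The sketch below (a toy spin model with illustrative numbers, not the device Hamiltonian fitted in the paper) diagonalises the one- and two-qubit Hamiltonians with numpy; for J much larger than Delta, the splitting of the lowest two-qubit doublet collapses to roughly Delta^2/(2J), so the pair still tunnels coherently but much more sluggishly than a single qubit would.

```python
import numpy as np

# Pauli matrices and identity.
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
I2 = np.eye(2)

Delta = 0.1   # single-qubit tunnelling energy (illustrative units)
J = 1.0       # ferromagnetic coupling, J >> Delta

# One qubit at zero tilt: H = -(Delta/2) * sigma_x.
gap_1q = np.diff(np.sort(np.linalg.eigvalsh(-0.5 * Delta * sx)))[0]

# Two coupled qubits at zero tilt:
# H = -(Delta/2)(sx (x) I + I (x) sx) - J (sz (x) sz)
H2 = (-0.5 * Delta * (np.kron(sx, I2) + np.kron(I2, sx))
      - J * np.kron(sz, sz))
E = np.sort(np.linalg.eigvalsh(H2))
gap_2q = E[1] - E[0]   # splitting of the lowest (co-tunnelling) doublet

print(f"one-qubit splitting      : {gap_1q:.4f}  (should equal Delta = {Delta})")
print(f"two-qubit splitting      : {gap_2q:.6f}")
print(f"perturbative Delta^2/(2J): {Delta**2 / (2 * J):.6f}")
```

With Delta = 0.1 and J = 1 the two-qubit splitting comes out around twenty times smaller than the single-qubit Delta, which is why the co-tunnelling resonances are harder to resolve, and why they get harder still as more qubits are locked together.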

But we still see the resonant peaks! Which means that two qubits locked together still behave as a nice quantum mechanical object. The peaks don’t look quite as obvious as the ones I have drawn in my cartoon above. If you want to see what they really look like check out Figure 3 of the preprint. (Note that the figure shows MRT ‘rate’ rather than ‘probability’, but the two are very closely linked)

From the little resonant peaks that you see, you can extract Delta – which is a measure of the energy level spacing in the wells. In this particular flux-qubit system, the energy level spacing (and therefore Delta) can be tuned finely by using another superconducting loop attached to the main qubit loop. So you can make your qubit mass-on-a-spring effectively heavier or lighter by this method too. When the second tuning loop is adjusted, the resulting change in the energy level separation agrees well with theoretical predictions.

As you add more and more qubits, it gets harder to measure Delta, as the energy levels get very close together, and the peaks start to become washed out by noise. You can use the ‘tuning’ loop to make Delta bigger, but it can only help so much, as the tuning also has a side effect: It lowers the overall ‘signal’ level of the resonant peaks that you measure.

In summary:

Looking at the quantum properties of coupled qubits is very important, as it helps us experimentally characterise quantum computing systems.
Coupling qubits together makes them ‘heavier’ and their quantum energy levels become harder to measure.
Here two coupled qubits are still behaving quantum mechanically, so this is promising. It means that the quantum computation occurring on these chips involves at least two qubits interacting in a quantum mechanical way. Physicists call these ‘2-qubit processes’. There may be processes of much higher order happening too.
This is pretty impressive considering that these qubits are surrounded by lots of other qubits, and connected to many, many other elements in the circuitry. (Most other quantum computing devices explored so far are much more isolated from other nearby elements).

An adiabatic tragedy of advocates and sceptics

I think the recent musings around the blogosphere and beyond are completely missing a huge and fundamental point about why building AQC and AQO-based machines will be not only useful, but something that we will wonder how we lived without. I’m not going to talk about the specifics of implementing superconducting technology for cool applications (I can defer that to subsequent posts). Here I just want to explain a little about why we should do it, and how the main hurdle to progressing such technology is nothing to do with our ability to build and understand the technology itself. In fact, it is a rather sad story.

Let’s think for a second: Do we really have the right computational substrates for realising concepts such as highly connected neural networks, and thinking machines? Is the Von Neumann architecture really the best way to support such things? We are currently solving the problem of simulating intelligent systems by throwing more and more computational power at them. Though there’s something very odd about this, we have little choice, as we have become very good at designing processors that behave in a particular way, that talk to memory in a particular way, and that have a small number of cores. The problem with the commonly used architectures is that they just cannot embed things like neural nets, and other highly parallel structures, very efficiently.

Could adiabatic quantum processors be used for neural net and highly parallel processing purposes? Of course! The architectures are very compatible. AQO processors are very similar to Boltzmann machines, which underlie the operation of many pattern-recognising systems, such as our own brain. There are other delicious fruits of superconducting logic, for example the ease with which we can implement reversible logic, and the exceedingly low power dissipation of such circuits. These systems also exhibit macroscopic quantum effects, which may or may not turn out to be a great resource in computation. But even if they do not, we should not ignore the fact that actually building such machines is the only way to answer this, and many other questions.
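Just to give a flavour of why the architectures line up so naturally, here is a tiny sketch (purely illustrative, with made-up weights, and nothing to do with any real D-Wave interface): a Boltzmann machine is specified by exactly the same kind of energy function, couplings between binary units plus biases, as the Ising/QUBO problems an AQO processor is built to minimise, so finding low-energy configurations of one is finding low-energy configurations of the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny Boltzmann-machine / Ising energy:
#   E(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i,  with s_i = +/-1.
# The couplings and biases below are arbitrary illustrative numbers.
n = 6
J = np.triu(rng.normal(0.0, 1.0, (n, n)), k=1)
h = rng.normal(0.0, 0.5, n)

def energy(s):
    return -(s @ J @ s) - h @ s

def metropolis_sample(steps=5000, T=1.0):
    """Single-spin-flip Metropolis sampling of the Boltzmann distribution."""
    s = rng.choice([-1, 1], size=n)
    best, best_E = s.copy(), energy(s)
    for _ in range(steps):
        i = rng.integers(n)
        trial = s.copy()
        trial[i] *= -1
        dE = energy(trial) - energy(s)
        if dE < 0 or rng.random() < np.exp(-dE / T):
            s = trial
            if energy(s) < best_E:
                best, best_E = s.copy(), energy(s)
    return best, best_E

state, E = metropolis_sample()
print("low-energy configuration:", state, " energy:", round(E, 3))
```

An AQO processor is, loosely speaking, handed the J and h directly and asked to return low-energy spin configurations in hardware, which is the sense in which the two architectures map onto one another.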

I think that superconducting processors are offering us a wonderful gift here, and yet we refuse to take advantage of it. Why? The reasons are a little hard to stomach.

It seems that while we’d rather just get going building some cool things, we end up spending a large proportion of our time and effort debating issues like whether or not quantum computers can solve NP-Complete problems in polynomial time. What?! I hear you say. My answer: Exactly. Obscure as they are, these questions are the most important thing in the world to a vanishingly small fraction of the population. Yet these seemingly lofty theoretical debates are casting a shadow over the use of superconducting QC architectures in every area of research, including things like novel implementations of hardware-based neural nets, which may prove to be an even more fruitful avenue than optimization.

It will take a large amount of financing to commercialize general purpose superconducting processors, and an unimaginably large effort on behalf of the scientific researchers and engineers who devote their time to trying to progress this technology. The step from fundamental research to commercialisation cannot and will not work in an academic environment. Why? Because in order to fabricate integrated circuits of any kind you need investments of hundreds of millions of dollars into foundries. Robust and scalable technologies can only be realised in an industrial setting. Sure, small scale systems can be realised in many labs, but there are no industrial uses for devices produced in this setting: Anything that they demonstrate can be outperformed with a standard computing system. And it will stay that way until we realise that we need to take a risk as a society and really invest now in a technology of the future. I’ve seen distressing things happening at RSFQ roadmapping meetings. The conclusion of the roadmapping somewhat tragically boils down to ‘make another roadmap’ – because there are no applications beyond a few incredibly specialised circuits that can be realised within the scope of government research grants (~$10K-5M. Oh and by the way they aren’t very reproducible either). There is no funding on the necessary scale, and therefore whatever money is put into this distributed and broken field rather swiftly evaporates, even though if considered cumulatively it may have been enough to get a single, well focused effort going.

So, my plea to the academic world is to realise that there comes a point where quarrelling over things should be secondary to our solidarity as a community that really wants to see something achieved. I’d rather try and fail than sit smugly in my ivory tower proclaiming that something is 20 or 50 years away (read: therefore not my problem, guys!). Day after day, people with little or no knowledge of the field exclaim that these things will never work, that you are not building anything useful, that your results are fraudulent, that your devices are vapour-ware… etc., etc.

This is more than cautious scientific scepticism, this is sheer hatred. It seems very unusual, and from my point of view, very detrimental to the entire field. I’d first advise such people to read some of our peer-reviewed papers to get their facts straight. I’d then say the following:

If those working in the fields of superconducting electronics, flux qubits, and quantum information really cared about the future of a technology that they had helped show had huge theoretical promise, they should rally along with us rather than spitefully burning their own bridges. As a community, we should promote the potential of our research and try to convince whoever needs convincing that these endeavours, these ‘Manhattan’ style projects are the only way in which we can bring disruptive technologies to fruition. And even if such projects fail, even if there is no other possible spin off that comes out of them, think about this: Such projects cost about the same amount as a couple of ICBMs. I know which I’d rather have more of in this world.

AQC / AQO video talk

Here is a video lecture that I gave a while ago about Adiabatic Quantum Computing and Adiabatic Quantum Optimization (specifically describing some cool things that you can do with D-Wave hardware) to my former colleagues at the University of Birmingham. This is a slightly higher level talk than the previous ones I have posted. Thanks again to my kind colleague and good friend (soon to be Dr.) Dominic Walliman for editing and posting these videos!

The talk is entitled ‘Playing with adiabatic hardware: From designer potentials to quantum brains’ although it certainly isn’t quite as ‘brain’ focused as some of the previous talks I have given, heh 🙂


Here are the other parts (they should be linked from that one, but just in case people can’t find them):

AQC Part 2
AQC Part 3
AQC Part 4
AQC Part 5
AQC Part 6

P.S. I wasn’t trying to be mean to the gate model (or computer scientists for that matter) – it just kinda happened…

P.P.S. Some of the notation is a bit off – the J’s should be K’s to be consistent with the literature, I believe…

Watch my IOP talk – Building Quantum Computers – now on YouTube

You may remember a while back I mentioned that I’d put the video of my IOP talk up online. Well, here it is. Thanks go to my kind colleague Dom for editing and posting these videos. Here is the first installment. I have posted links to the other 6 parts below. The talk is aimed at a general audience. It was given to a class of about 80 pupils aged 14-18, and their teachers, although it is suitable for anyone who is interested in Physics, superconductors, superconducting processors and quantum computing. I apologise that the question and answer session (in parts 6 and 7) is a little difficult to hear, as the room was not set up to record audio in this way.

I’ll be putting a permanent link to this talk in the Resources section at some point soon. The slides are already available there if anyone wishes to look at them in more detail. Comments and feedback appreciated… Enjoy!

Part 2
Part 3
Part 4
Part 5
Part 6
Part 7

MQT Paper update… now in colour

Oh my goodness, this MQT paper is becoming a TOME….

So yesterday we had the red ink debacle which spurred me to write the Paper Algorithm:

1.) Write paper
2.) Give to colleague
3.) Get returned
4.) Shake off excess red ink.
5.) Rework.

Repeat steps 3-5 and hope for convergence.
Note this algorithm may have bad runtime scaling due to T(step 4) -T(step 3).

A friend of mine tried to suggest some further steps involving the journal submission process, but unfortunately those kind of delightful games are beyond my event horizon at the moment!

Here is a picture of the red ink debacle (which by the way looks worse now, as I’ve covered it in my own rebuttals of, and agreements with, the arguments – in black ink, I hasten to add).

Anyway, the new version of the document is better for the corrections, but I fear it may have to be streamlined, as it’s already packing 7 pages of Physics awesomeness… and I’m wondering about going further into the details of thermal escape from the washboard potential. Maybe I shouldn’t do that.

Superconducting processors get some competition?

EPFL and ETH (Switzerland) are undertaking a four-year project named CMOSAIC with the goal of extending Moore’s law into the third dimension:

The project page is here

And here’s an IBM write-up of the effort

Also see here for a nice schematic of the device

“Unlike current processors, the CMOSAIC project considers a 3D stack-architecture of multiple cores with a interconnect density from 100 to 10,000 connections per millimeter square. Researchers believe that these tiny connections and the use of hair-thin, liquid cooling microchannels measuring only 50 microns in diameter between the active chips are the missing links to achieving high-performance computing with future 3D chip stacks.”

Just my personal opinion of course… but…. this seems like a case of fixing the symptoms rather than finding a cure. Will bringing a microfluidic angle into Moore’s law really help us out?

Why do we put up with this kind of heating problem in the first place? One could, for example, consider an alternative investment in the development of reversible, low-dissipation superconducting electronics.

I guess the project will be interesting just from a point of view of 3D manufacturing and incorporation of fluidics into microchips – this kind of technology could be indispensable for progress in areas such as lab-on-a-chip technology. But as far as raw processing power goes, this approach seems a bit like ignoring the elephant in the room.

Josephson junction neurons

This is an interesting paper:

Josephson junction simulation of neurons

by Patrick Crotty, Daniel Schult, Ken Segall (Colgate University)

“With the goal of understanding the intricate behavior and dynamics of collections of neurons, we present superconducting circuits containing Josephson junctions that model biologically realistic neurons. These “Josephson junction neurons” reproduce many characteristic behaviors of biological neurons such as action potentials, refractory periods, and firing thresholds. They can be coupled together in ways that mimic electrical and chemical synapses. Using existing fabrication technologies, large interconnected networks of Josephson junction neurons would operate fully in parallel. They would be orders of magnitude faster than both traditional computer simulations and biological neural networks. Josephson junction neurons provide a new tool for exploring long-term large-scale dynamics for networks of neurons.”

Advantages of using RSFQ-style architectures include the non-linear response of the elements and the analogue processing capability, which means that you can mimic more ‘logical’ neurons with fewer ‘physical’ elements. I’m pretty sure that this is true. In addition, you can think of other wonderful ideas, such as using SQUIDs instead of single junctions (hmm, I wonder where this train of thought might lead) and then applying non-local (or global) magnetic fields to adjust the properties of the neural net. Which might be a bit like adjusting the global level of a particular neurotransmitter.
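As a toy illustration of that non-linear response, here is the textbook RCSJ model of a single current-biased junction in dimensionless units (note this is not the two-junction neuron circuit from the paper, just the simplest possible caricature): below the critical current the junction’s phase sits still, and just above it the phase slips by 2π over and over, each slip producing a voltage pulse that looks temptingly like a neuron firing.

```python
import numpy as np

def rcsj_phase_slips(i_bias, beta_c=0.1, t_max=400.0, dt=0.01):
    """Dimensionless RCSJ model: beta_c*phi'' + phi' + sin(phi) = i_bias.
    Returns the number of 2*pi phase slips ('spikes') during the run."""
    phi, dphi = 0.0, 0.0
    slips = 0
    last = 0.0
    for _ in range(int(t_max / dt)):
        ddphi = (i_bias - np.sin(phi) - dphi) / beta_c
        dphi += ddphi * dt
        phi += dphi * dt
        if phi - last > 2 * np.pi:   # one full phase slip = one voltage pulse
            slips += 1
            last += 2 * np.pi
    return slips

for i in (0.5, 0.9, 1.1, 1.5):
    print(f"bias i = {i:.1f} Ic  ->  phase slips: {rcsj_phase_slips(i)}")
```

Push the bias further past the critical current and the ‘firing rate’ goes up with the drive: a very cheap, very fast non-linear element, which is at least part of the intuition behind the (rather more carefully designed) circuits in the paper.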

I’m a bit worried about this approach though. Current superconducting technologies tend to have a low number of wiring layers (<5), and as such are pretty much a 2-dimensional, planar technology. The maximum tiling connectivity you can get from a single-layer planar architecture is presumably a 6-nearest-neighbour unit cell (hexagonal close packing). The three-dimensional packing in a real brain gives you a higher intrinsic level of connectivity, even though the structure of the neocortex is only quasi-3-dimensional (it is more like 2D sheets crumpled up, but even these ‘2D’ sheets have a fair amount of 3D connectivity when you look closely). In a real brain, each neuron can have tens of thousands of differently weighted inputs (the fan-in problem). Try building that into your mostly-planar circuit 🙂

One good thing about using analogue methods is that not all the neurons need to be identical. In fact having a parameter spread in this massively parallel architecture probably doesn't hurt you at all (it might even help). Which is good, as current Josephson junction foundries have issues with parameter spreads in the resulting digital circuitry (they are nowhere near as closely controlled as semiconductor foundries).

The paper claims that the tens of thousands of neurons in a neocortical column might be simulable using this method. Personally, I think this is very optimistic with present LSI JJ technology… but even considering the connectivity, parameter-spread and fan-in problems, I think this is a very interesting area to investigate experimentally.

I’ve actually written a bit about this topic before:

Quantum Neural Networks 1 – the Superconducting Neuron model

In that blogpost there were some links to experiments performed on simple Josephson junction neuron circuits in the 1990s.

A nice preprint and another talk

Here is a nice preprint comparing some of the methods of realizing qubits, including neutral atoms, ions, superconducting circuits, etc.

Natural and artificial atoms for quantum computation

I’m about to give a short talk on this very topic to an undergraduate Computer Science class. The talk will serve two purposes: first, it will be an introduction to the myriad of different methods by which qubits and quantum computers can actually be realised; and second, it will give a nice insight into some of the things that experimentalists have to worry about when they are actually building quantum computers. Here is the talk overview:

Models of quantum computation
Implementations
    Ion traps – Optical photons / Neutral atoms – NMR – Superconducting circuits – Nanomechanical resonators
Example of operation
    The Bloch sphere – The density matrix
Decoherence + limitations
    The DiVincenzo criteria – Measuring T1 and T2 – Sources of decoherence (a quick sketch of what measuring T1 and T2 involves follows below this list)
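For the ‘Measuring T1 and T2’ item in that outline, here is a minimal sketch of what those measurements boil down to (synthetic data with made-up values, purely for illustration): prepare the excited state, wait a variable delay, measure, and fit an exponential decay to get T1; for T2, fit the decaying Ramsey fringes.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

t = np.linspace(0, 10e-6, 60)                         # delay times (s), illustrative range
T1_true, T2_true, detuning = 2.0e-6, 1.2e-6, 0.5e6    # made-up 'true' values

# Synthetic measurement records with a little readout noise.
p_T1 = np.exp(-t / T1_true) + rng.normal(0, 0.02, t.size)
p_T2 = 0.5 * (1 + np.exp(-t / T2_true) * np.cos(2 * np.pi * detuning * t)) \
       + rng.normal(0, 0.02, t.size)

def decay(t, T1):                     # relaxation: excited-state population vs delay
    return np.exp(-t / T1)

def ramsey(t, T2):                    # Ramsey fringes at a known, fixed detuning
    return 0.5 * (1 + np.exp(-t / T2) * np.cos(2 * np.pi * detuning * t))

(T1_fit,), _ = curve_fit(decay, t, p_T1, p0=[1e-6])
(T2_fit,), _ = curve_fit(ramsey, t, p_T2, p0=[1e-6])

print(f"fitted T1 = {T1_fit * 1e6:.2f} us   (true {T1_true * 1e6:.2f} us)")
print(f"fitted T2 = {T2_fit * 1e6:.2f} us   (true {T2_true * 1e6:.2f} us)")
```

In a real lab you would average many repetitions at each delay and worry a great deal about readout fidelity and pulse calibration, but the fitting step at the end really is this simple.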

Here are the slides:

Unfortunately I won’t be recording this one so no videos this time. Boo.

Post-IOP-Talk thoughts

So I gave this talk last night, entitled: Quantum Computing: Is the end near for the Silicon chip? It was an interesting experience. I’ve given talks of this size before, but I don’t think I have ever tried to cover quite so many topics in one go, and give so many demonstrations in the process. So with two radio microphones strapped to my waist, and 3 cameras recording the talk, I proceeded to enthusiastically extol the future potential of superconducting electronics technology, and warn about the limits of silicon technology. I gave an overview of superconductors for use in quantum computing, which culminated in a discussion of interesting applications in machine learning and brain emulation.

The main problem I had during the talk was that I wanted to stand in FRONT of the rather large podium/desk in order to talk to the audience, as I felt this would be a bit more personal (rather than ‘hiding’ behind the desk). However, the controls for the visualiser (which is a camera pointing at an illuminated surface, connected up to the projector so that the audience can look closely at objects you wish to show) were behind the desk, so I had to keep running backwards and forwards every few minutes to switch from the visualiser to the laptop output. This was most irritating, and is really poor design in a lecture theatre. The control for the projector output really should have been somewhat more mobile.

The other moment of complete fail was when the large piece of YBCO stubbornly refused to cool to below 90K when immersed in the liquid nitrogen. Stupid smug piece of perovskite. I stood there for what seemed like hours, with over 80 pairs of curious eyes fixated upon my failing experiment, eagerly anticipating some badass superconducting action. And the damn magnet wouldn’t levitate. There was just way too much thermal mass in the YBCO block and its metal/wood housing to cool it quickly enough. I eventually gave up and swapped to the smaller YBCO piece, making some passing comment about physics experiments never working.

Anyway, those gripes over, the talk seemed to attract a lot of questions relating to the last 30% of the material I covered, namely the part about simulating the human brain and potentially building quantum elements into such machine intelligences.

I hope it also inspired some of the younger members of the audience to see working as scientists in these areas as an interesting career path.

I’ll try and get the talk edited and put up on the web soon 🙂