Experimental investigation of an eight-qubit unit cell in a superconducting optimization processor

Anyone who follows this blog and wants to get a real in-depth insight into the way that D-Wave’s processors are built, and how they solve problems, should definitely read this paper:

Phys. Rev. B 82, 024511 (2010), R. Harris et al.

The paper itself is quite long (15 pages) but it really gives a great description of how an 8-qubit ‘portion’ of the processor is designed, fabricated, fit to a physical (quantum mechanical) model, calibrated, and then used to solve problems.

If you don’t have access to the Phys Rev B journal, you can read a free preprint of the article here. And if you’ve never tried reading a journal paper before, why not give it a go! (This is an experimental paper, which means there are lots of pretty pictures to look at, even if the Physics gets hard to follow). For example, a microphotograph of the 8-qubit cell:

What is quantum co-tunneling and why is it cool?

You may have seen this cool new paper on the arXiv:

Observation of Co-tunneling in Pairs of Coupled Flux Qubits

(I believe there is something called a ‘paper dance’ that I am supposed to be doing)….

Anyway, here I’ll try and write a little review article describing what this paper is all about. I’m assuming some knowledge of elementary quantum mechanics; you can read up on the background QM needed here and here.

First of all, what is macroscopic resonant tunneling (MRT)?

I’ll start by introducing energy wells. These are very common in the analysis of quantum mechanical systems. When you solve the Schrödinger equation, you put into the equation an energy landscape (also known as a ‘potential’), and out pop the wavefunctions and their associated eigenvalues (the energies that the system is allowed to have). This is usually illustrated with a square well potential, or a harmonic oscillator (parabolic) potential, like this:

Well, the flux qubit (quantum bit), which is what we build, has an energy landscape that looks a bit like a double well. This is useful for quantum computation as you can call one of the wells ‘0’ and the other ‘1’. When you measure the system, you find that the state will be in one well or the other, and the value of your ‘bit’ will be 0 or 1. The double well potential as you might imagine also contains energy levels, and the neat thing is that these energy levels can see each other through the barrier, because the wavefunction ‘leaks’ a little bit from one well into the neighbouring one:
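If you want to play with this yourself, below is a minimal Python sketch – entirely my own toy model with made-up units, nothing taken from the papers – that solves the 1D Schrödinger equation by finite differences for a quartic double-well potential and prints the lowest few energy levels:

```python
# Toy model: 1D Schrodinger equation on a grid, quartic double-well potential.
# All units and parameters are illustrative, chosen only to show the physics.
import numpy as np

hbar, m = 1.0, 1.0                       # 'natural' units for the toy model
x = np.linspace(-4, 4, 1000)
dx = x[1] - x[0]

V = (x**2 - 2.0)**2                      # double well, minima at x = +/- sqrt(2)

# Finite-difference kinetic term: -(hbar^2 / 2m) * d^2/dx^2
lap = (np.diag(np.ones(len(x) - 1), -1)
       - 2.0 * np.diag(np.ones(len(x)))
       + np.diag(np.ones(len(x) - 1), 1)) / dx**2
H = -(hbar**2 / (2.0 * m)) * lap + np.diag(V)

print(np.linalg.eigvalsh(H)[:4])         # lowest four energy eigenvalues
```

The levels come out in near-degenerate pairs: the tiny splitting within each pair is exactly the through-the-barrier ‘leakage’ described above.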

One can imagine tilting the two wells with respect to one another, so the system becomes asymmetric and the energy levels in each well move with respect to one another. In flux qubit-land, we ’tilt’ the wells by applying small magnetic fields to the superconducting loops which form the qubits. Very crudely, when energy levels ‘line up’ the two wells see each other, and you can get quantum tunneling between the two states.
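To make the ‘lining up’ idea concrete, the standard two-level caricature of a tilted double well is the Hamiltonian H = −(ε σz + Δ σx)/2, where ε is the tilt and Δ the tunneling amplitude. A minimal sketch, with illustrative numbers of my own choosing:

```python
# Two-level caricature of the tilted double well: H = -(eps*sz + Delta*sx)/2.
# Delta (tunneling) and the tilt sweep range are illustrative, not real data.
import numpy as np

Delta = 0.1
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

for eps in np.linspace(-1.0, 1.0, 5):    # sweep the tilt through zero
    E = np.linalg.eigvalsh(-0.5 * (eps * sz + Delta * sx))
    print(f"tilt = {eps:+.2f}   gap = {E[1] - E[0]:.4f}")
```

The gap between the two levels reaches its minimum value Δ at zero tilt – the ‘lined up’ point where tunneling between the wells is strongest.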

This effect is known as macroscopic resonant tunneling. So how do you measure it? You start by initialising the system so that the state is localised in just one well (for example, by biasing the potential very hard in one direction so that there is effectively only one well), like this:

and then tilt the well-system back a little bit. At each tilt value, you measure which well the state ends up in, then return it to the initialisation state and repeat lots and lots of times for different levels of tilt. As mentioned before, when the energy levels line up, you can get some tunneling and you are more likely to find the system on the other side of the barrier:


In this way you can build up a picture of when the system is tunneling and when it isn’t as a function of tilt. Classically, the particle would remain mostly in the state it started in, until the tilt gets so large that the particle can be thermally activated OVER the barrier. So classically the probability of the state being found on the right hand side ‘state 1’ as a function of tilt looks something like this:

Quantum mechanically, as the energy levels ‘line up’, the particle can tunnel through the barrier – and so you get a little resonance in the probability of finding it on the other side (hence the name MRT). There are lots of energy levels in the wells, so as you tilt the system more and more, you encounter many such resonances. So the probability as a function of tilt now looks something like this:
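Here is a cartoon of those two curves in Python. Pure illustration: the positions, widths and heights of the resonances are invented, not taken from any measurement.

```python
# Cartoon of P(found in right-hand well) vs tilt. All numbers are invented.
import numpy as np

tilt = np.linspace(0.0, 10.0, 500)

# Classical: a smooth step once thermal activation OVER the barrier kicks in
p_classical = 1.0 / (1.0 + np.exp(-(tilt - 6.0) / 0.5))

# Quantum: add a Lorentzian resonance at each (made-up) level alignment
p_quantum = p_classical.copy()
for resonance in (1.5, 3.0, 4.5):
    p_quantum += 0.15 / (1.0 + ((tilt - resonance) / 0.2) ** 2)
p_quantum = np.clip(p_quantum, 0.0, 1.0)

# Plot tilt against p_classical and p_quantum to see the MRT peaks
print(p_quantum.round(2)[::50])
```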

This is a really cool result, as it demonstrates that your system is quantum mechanical. There’s just no way you can get these resonances classically, because there’s no way the particle can get through the barrier classically.

Note: This is slightly different from macroscopic quantum tunneling, when the state tunnels out of the well-system altogether, in the same way that an alpha particle ‘tunnels’ out of the nucleus during radioactive decay and flies off into the ether. But that is a topic for another post.

So what’s all this co-tunneling stuff?

It’s all very nice showing that a single qubit is behaving quantum mechanically. Big deal, that’s easy. But stacking them together like qubit lego and showing that the resulting structure is quantum mechanical is harder.

Anyway, that is what this paper is all about. Two flux qubits are locked together by magnetic coupling, and therefore the double well potential is now actually 4-dimensional. If you don’t like thinking in 4D, you can imagine two separate double-wells, which are locked together so that they mimic each other. Getting the double well potentials similar enough to be able to lock them together in the first place is also really hard with superconducting flux qubits. It’s actually easier with atoms or ions than superconducting loops, because nature gives you identical systems to start with. But flux qubits are more versatile for other reasons, so the effort that has to go into making them identical is worthwhile.

Once they are locked together, you can again start tilting the ‘two-qubit-potential’. The spacing of the energy levels will now be different (think about a mass on the end of a spring – if you glue another mass to it, the resonant frequencies of the system will change, and the energy levels of the system along with them). We have sort of made our qubit ‘heavier’ by adding another one to it.
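To see in miniature why the locked pair is ‘heavier’, one can diagonalise a toy two-qubit Hamiltonian – again with illustrative parameters of my own, not the paper’s – and compare the lowest energy splitting of one qubit against two qubits locked by a strong σz–σz coupling:

```python
# Toy comparison: energy splitting of one qubit vs two strongly coupled qubits.
# Delta (tunneling) and J (coupling) are illustrative energy scales.
import numpy as np

Delta, J = 0.1, 1.0
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

# Single qubit: H = -(Delta/2) * sx, splitting = Delta
E1 = np.linalg.eigvalsh(-0.5 * Delta * sx)

# Coupled pair: tunneling on each qubit plus ferromagnetic sz-sz locking
H2 = (-0.5 * Delta * (np.kron(sx, I2) + np.kron(I2, sx))
      - J * np.kron(sz, sz))
E2 = np.linalg.eigvalsh(H2)

print("one qubit:   ", E1[1] - E1[0])    # = Delta
print("locked pair: ", E2[1] - E2[0])    # ~ Delta^2 / (2*J), much smaller
```

The locked pair can only flip between its two aligned configurations by both qubits tunneling together – co-tunneling – so its effective splitting is much smaller, just as a heavier mass on a spring oscillates more slowly.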

But we still see the resonant peaks! Which means that two qubits locked together still behave as a nice quantum mechanical object. The peaks don’t look quite as obvious as the ones I have drawn in my cartoon above. If you want to see what they really look like check out Figure 3 of the preprint. (Note that the figure shows MRT ‘rate’ rather than ‘probability’, but the two are very closely linked)

From the little resonant peaks that you see, you can extract Delta – which is a measure of the energy level spacing in the wells. In this particular flux-qubit system, the energy level spacing (and therefore Delta) can be tuned finely by using another superconducting loop attached to the main qubit loop. So you can make your qubit mass-on-a-spring effectively heavier or lighter by this method too. When the second tuning loop is adjusted, the resulting change in the energy level separation agrees well with theoretical predictions.

As you add more and more qubits, it gets harder to measure Delta, as the energy levels get very close together, and the peaks start to become washed out by noise. You can use the ‘tuning’ loop to make Delta bigger, but it can only help so much, as the tuning also has a side effect: It lowers the overall ‘signal’ level of the resonant peaks that you measure.

In summary:

Looking at the quantum properties of coupled qubits is very important, as it helps us experimentally characterise quantum computing systems.
Coupling qubits together makes them ‘heavier’ and their quantum energy levels become harder to measure.
Here two coupled qubits are still behaving quantum mechanically, so this is promising. It means that the quantum computation occurring on these chips involves at least 2 qubits interacting in a quantum mechanical way. Physicists call these ‘2-qubit processes’. There may be processes of much higher order happening too.
This is pretty impressive considering that these qubits are surrounded by lots of other qubits, and connected to many, many other elements in the circuitry. (Most other quantum computing devices explored so far are much more isolated from other nearby elements).

An adiabatic tragedy of advocates and sceptics

I think the recent musings around the blogosphere and beyond completely miss a huge and fundamental point about why building AQC- and AQO-based machines will not only be useful, but will give us something we will wonder how we ever lived without. I’m not going to talk about the specifics of implementing superconducting technology for cool applications (I can defer that to subsequent posts). Here I just want to explain a little about why we should do it, and how the main hurdle to progressing such technology has nothing to do with our ability to build and understand the technology itself. In fact, it is a rather sad story.

Let’s think for a second: Do we really have the right computational substrates for realising concepts such as highly connected neural networks, and thinking machines? Is the Von Neumann architecture really the best way to support such things? We are currently solving the problem of simulating intelligent systems by throwing more and more computational power at them. Though there’s something very odd about this, we have little choice, as we have become very good at designing processors that behave in a particular way, that talk to memory in a particular way, and that have a small number of cores. The problem with the commonly used architectures is that they just cannot embed things like neural nets, and other highly parallel structures, very efficiently.

Could adiabatic quantum processors be used for neural net and highly parallel processing purposes? Of course! The architectures are very compatible. AQO processors are very similar to Boltzmann machines, which underlie the operation of many pattern-recognising systems, such as our own brain. There are other delicious fruits of superconducting logic, for example the ease with which we can implement reversible logic, and the exceedingly low power dissipation of such circuits. These systems also exhibit macroscopic quantum effects, which may or may not be a great resource in computation. But even if they are not, we should not ignore the fact that actually building such machines is the only way to answer this, and many other, questions.
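To make the Boltzmann-machine analogy a little more concrete: both pictures revolve around an Ising-style energy function over binary spins. A toy sketch (my own illustrative example – the biases and couplings are random numbers, not a real problem instance):

```python
# Ising energy E(s) = sum_i h_i*s_i + sum_{i<j} J_ij*s_i*s_j over s_i = +/-1.
# An AQO processor hunts for the minimum; a Boltzmann machine samples from
# the distribution exp(-E/T). Same energy function, two uses.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6
h = rng.normal(size=n)                   # random biases (illustrative)
J = np.triu(rng.normal(size=(n, n)), 1)  # random upper-triangular couplings

def energy(s):
    return h @ s + s @ J @ s

states = [np.array(s) for s in itertools.product([-1, 1], repeat=n)]

# Optimization view (AQO-like): the lowest-energy spin configuration
best = min(states, key=energy)
print("ground state:", best, "energy:", round(float(energy(best)), 3))

# Sampling view (Boltzmann-machine-like): Gibbs weights at temperature T
T = 1.0
w = np.exp([-energy(s) / T for s in states])
print("probability of the ground state:", round(float(w.max() / w.sum()), 3))
```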

I think that superconducting processors are offering us a wonderful gift here, and yet we refuse to take advantage of it. Why? The reasons are a little hard to stomach.

It seems that while we’d rather just get going building some cool things, we end up spending a large proportion of our time and effort debating issues like whether or not quantum computers can solve NP-Complete problems in polynomial time. What?! I hear you say. My answer: Exactly. Obscure as they are, these questions are the most important thing in the world to a vanishingly small fraction of the population. Yet these seemingly lofty theoretical debates are casting a shadow over the use of superconducting QC architectures in every area of research, including things like the novel implementations of hardware-based neural nets, which may prove to be an even more fruitful avenue than optimization.

It will take a large amount of financing to commercialise general purpose superconducting processors, and an unimaginably large effort on behalf of the scientific researchers and engineers who devote their time to trying to progress this technology. The step from fundamental research to commercialisation cannot and will not work in an academic environment. Why? Because in order to fabricate integrated circuits of any kind you need investments of hundreds of millions of dollars into foundries. Robust and scalable technologies can only be realised in an industrial setting. Sure, small scale systems can be realised in many labs, but there are no industrial uses for devices produced in this setting: anything they demonstrate can be outperformed by a standard computing system. And it will stay that way until we realise that we need to take a risk as a society and really invest now in a technology of the future.

I’ve seen distressing things happening at RSFQ roadmapping meetings. The conclusion of the roadmapping somewhat tragically boils down to ‘make another roadmap’ – because there are no applications beyond a few incredibly specialised circuits that can be realised within the scope of government research grants (~$10K-5M – and, by the way, the resulting devices aren’t very reproducible either). There is no funding on the necessary scale, and therefore whatever money is put into this distributed and broken field rather swiftly evaporates, even though, considered cumulatively, it may have been enough to get a single, well focused effort going.

So, my plea to the academic world is to realise that there comes a point where quarrelling over things should be secondary to our solidarity as a community who really want to see something achieved. I’d rather try and fail than sit smugly in my ivory tower proclaiming that something is 20 or 50 years away (read: therefore not my problem, guys!). Day after day, people with little or no knowledge of the field exclaim that these things will never work, you are not building anything useful, your results are fraudulent, your devices are vapourware… and so on.

This is more than cautious scientific scepticism, this is sheer hatred. It seems very unusual, and from my point of view, very detrimental to the entire field. I’d first advise such people to read some of our peer-reviewed papers to get their facts straight. I’d then say the following:

If those working in the fields of superconducting electronics, flux qubits, and quantum information really cared about the future of a technology whose huge theoretical promise they themselves helped to demonstrate, they should rally along with us rather than spitefully burning their own bridges. As a community, we should promote the potential of our research and try to convince whoever needs convincing that these endeavours, these ‘Manhattan’-style projects, are the only way in which we can bring disruptive technologies to fruition. And even if such projects fail, even if no other possible spin-off comes out of them, think about this: such projects cost about the same as a couple of ICBMs. I know which I’d rather have more of in this world.

AQC / AQO video talk

Here is a video lecture that I gave a while ago about Adiabatic Quantum Computing and Adiabatic Quantum Optimization (specifically describing some cool things that you can do with D-Wave hardware) to my former colleagues at the University of Birmingham. This is a slightly higher level talk than the previous ones I have posted. Thanks again to my kind colleague and good friend (soon to be Dr.) Dominic Walliman for editing and posting these videos!

The talk is entitled ‘Playing with adiabatic hardware: From designer potentials to quantum brains’ although it certainly isn’t quite as ‘brain’ focused as some of the previous talks I have given, heh 🙂


Here are the other parts (they should be linked from that one, but just in case people can’t find them):

AQC Part 2
AQC Part 3
AQC Part 4
AQC Part 5
AQC Part 6

P.S. I wasn’t trying to be mean to the gate model (or computer scientists for that matter) – it just kinda happened…

P.P.S. Some of the notation is a bit off – the J’s should be K’s to be consistent with the literature, I believe…

Watch my IOP talk – Building Quantum Computers – now on YouTube

You may remember a while back I mentioned that I’d put the video of my IOP talk up online. Well, here it is. Thanks go to my kind colleague Dom for editing and posting these videos. Here is the first instalment; I have posted links to the other 6 parts below. The talk is aimed at a general audience. It was given to a class of about 80 pupils aged 14-18, and their teachers, although it is suitable for anyone who is interested in Physics, superconductors, superconducting processors and quantum computing. I apologise that the question and answer session (in parts 6 and 7) is a little difficult to hear, as the room was not set up to record audio in this way.
I’ll be putting a permanent link to this talk in the Resources section at some point soon. The slides are already available there if anyone wishes to look at them in more detail. Comments and feedback appreciated… Enjoy!

Part 2
Part 3
Part 4
Part 5
Part 6
Part 7

MQT Paper update… now in colour

Oh my goodness, this MQT paper is becoming a TOME….

So yesterday we had the red ink debacle, which spurred me to write the Paper Algorithm:

1.) Write paper.
2.) Give to colleague.
3.) Get returned.
4.) Shake off excess red ink.
5.) Rework.

Repeat steps 3-5 and hope for convergence.
Note: this algorithm may have bad runtime scaling, due to T(step 4) - T(step 3).
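For fun, here it is as runnable Python (the red-ink decay factor is pure invention – real red ink is not guaranteed to decay geometrically):

```python
# The Paper Algorithm, tongue firmly in cheek. The 0.5 'rework factor' is
# invented; convergence is hoped for rather than guaranteed.
def paper_algorithm(red_ink=100.0, tolerance=1.0):
    passes = 0
    while red_ink > tolerance:    # repeat steps 3-5...
        red_ink *= 0.5            # step 4: shake off excess red ink (slow!)
        passes += 1               # step 5: rework
    return passes

print(paper_algorithm(), "passes until convergence")
```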

A friend of mine tried to suggest some further steps involving the journal submission process, but unfortunately those kinds of delightful games are beyond my event horizon at the moment!

Here is a picture of the red ink debacle (which, by the way, looks worse now, as I’ve covered it in my own rebuttals of, and agreements with, the arguments – in black ink, I hasten to add).

Anyway, the new version of the document is much better for the corrections, but I fear it may have to be streamlined, as it’s already packing 7 pages of Physics awesomeness… and I’m wondering about going further into the details of thermal escape from the washboard potential. Maybe I shouldn’t do that.

Superconducting processors get some competition?

EPFL and ETH (Switzerland) are undertaking a four year project named CMOSAIC with the goal of extending Moore’s law into the third dimension:

The project page is here

And here’s an IBM write-up of the effort

Also see here for a nice schematic of the device

“Unlike current processors, the CMOSAIC project considers a 3D stack-architecture of multiple cores with a[n] interconnect density from 100 to 10,000 connections per millimeter square. Researchers believe that these tiny connections and the use of hair-thin, liquid cooling microchannels measuring only 50 microns in diameter between the active chips are the missing links to achieving high-performance computing with future 3D chip stacks.”

Just my personal opinion of course… but…. this seems like a case of fixing the symptoms rather than finding a cure. Will bringing a microfluidic angle into Moore’s law really help us out?

Why do we put up with this kind of heating problem in the first place? One could, for example, consider an alternative investment in the development of reversible, low-dissipation superconducting electronics.

I guess the project will be interesting from the point of view of 3D manufacturing and the incorporation of fluidics into microchips – this kind of technology could be indispensable for progress in areas such as lab-on-a-chip technology. But as far as raw processing power goes, this approach seems a bit like ignoring the elephant in the room.