What is quantum co-tunneling and why is it cool?

You may have seen this cool new paper on the arXiv:

Observation of Co-tunneling in Pairs of Coupled Flux Qubits

(I believe there is something called a ‘paper dance’ that I am supposed to be doing)….

Anyway, here I’ll try and write a little review article describing what this paper is all about. I’m assuming some knowledge of elementary quantum mechanics here. You can read up about the background QM needed here, and here.

First of all, what is macroscopic resonant tunneling (MRT)?

I’ll start by introducing energy wells. These are very common in the analysis of quantum mechanical systems. When you solve the Schrodinger equation, you put into the equation an energy landscape (also known as a ‘potential’), and out pop the wavefunctions and their associated eigenvalues (the energies that the system is allowed to have). This is usually illustrated with a square well potential, or a harmonic oscillator (parabolic) potential, like this:
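If you like to see this concretely, here is a minimal numerical sketch (an illustrative quartic double well, not the real flux-qubit potential): discretise the Schrödinger equation on a grid, diagonalise, and out pop the allowed energies. Notice that the lowest levels come out in closely spaced pairs, one state per well, split slightly by tunneling through the barrier:

```python
import numpy as np

n = 400
x = np.linspace(-3.0, 3.0, n)
dx = x[1] - x[0]

V = 5.0 * (x**2 - 1.0)**2          # quartic double well, barrier height 5

# Hamiltonian H = -(1/2) d^2/dx^2 + V(x) with hbar = m = 1,
# using the standard 3-point finite-difference Laplacian
lap = (np.diag(np.full(n, -2.0))
       + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2
H = -0.5 * lap + np.diag(V)

energies = np.linalg.eigvalsh(H)   # eigenvalues, sorted ascending
print(energies[:4])                # lowest levels come in tunnel-split pairs
```

(All the numbers here are made up; the point is just the method and the pairing of the low-lying levels.)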

Well, the flux qubit (quantum bit), which is what we build, has an energy landscape that looks a bit like a double well. This is useful for quantum computation as you can call one of the wells ‘0’ and the other ‘1’. When you measure the system, you find that the state will be in one well or the other, and the value of your ‘bit’ will be 0 or 1. The double well potential as you might imagine also contains energy levels, and the neat thing is that these energy levels can see each other through the barrier, because the wavefunction ‘leaks’ a little bit from one well into the neighbouring one:

One can imagine tilting the two wells with respect to one another, so the system becomes asymmetric and the energy levels in each well move with respect to one another. In flux qubit-land, we ‘tilt’ the wells by applying small magnetic fields to the superconducting loops which form the qubits. Very crudely, when energy levels ‘line up’ the two wells see each other, and you can get quantum tunneling between the two states.
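The lowest level in each well can be modelled as a simple two-level system: the tilt biases one well relative to the other, and a tunneling amplitude (call it Delta) couples the wells through the barrier. A minimal sketch, in illustrative units rather than real device parameters:

```python
# |L> and |R> are the lowest states of the left and right wells.
# The tilt eps biases one well against the other; Delta is the tunneling
# amplitude through the barrier. The levels repel near eps = 0 (an
# "avoided crossing"), which is where resonant tunneling happens.
import numpy as np

Delta = 0.1
for eps in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    H = np.array([[eps / 2, Delta],
                  [Delta, -eps / 2]])
    lo, hi = np.linalg.eigvalsh(H)
    print(f"tilt={eps:+.1f}  gap={hi - lo:.3f}")
# the minimum gap, 2*Delta, occurs when the levels line up (eps = 0)
```

The gap works out to 2*sqrt((eps/2)^2 + Delta^2), smallest exactly where the levels line up.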

This effect is known as macroscopic resonant tunneling. So how do you measure it? You start by initializing the system so that the state is localised in just one well (for example, by biasing the potential very hard in one direction so that there is effectively only one well), like this:

and then tilt the well-system back a little bit. At each tilt value, you stochastically monitor which well the state ends up in, then return it to the initialisation state and repeat lots and lots of times for different levels of tilt. As mentioned before, when the energy levels line up, you can get some tunneling and you are more likely to find the system on the other side of the barrier:
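This repeat-lots-of-times protocol can be sketched as a toy Monte Carlo. Everything below (resonance positions, widths, peak heights) is made up purely for illustration; the point is just how repeated single-shot measurements build up a probability-versus-tilt curve:

```python
# Toy simulation of the MRT protocol: for each tilt value, prepare the
# state in the left well, let it evolve, record which well it ends up in,
# and repeat many times. Lorentzian resonances with fictitious parameters.
import numpy as np

rng = np.random.default_rng(0)

def p_tunnel(tilt):
    # tunneling probability peaks where levels line up (made-up values)
    resonances = [0.2, 0.5, 0.8]
    width = 0.02
    p = sum(0.4 * width**2 / ((tilt - r)**2 + width**2) for r in resonances)
    return min(p, 1.0)

shots = 2000
for tilt in [0.1, 0.2, 0.3, 0.5]:
    found_right = rng.random(shots) < p_tunnel(tilt)  # one Bernoulli trial per shot
    print(f"tilt={tilt:.2f}  P(right) ~ {found_right.mean():.3f}")
```

On resonance (tilt near 0.2 or 0.5 here) the estimated probability jumps up; off resonance it stays near zero, just like the cartoon curves below.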

In this way you can build up a picture of when the system is tunneling and when it isn’t as a function of tilt. Classically, the particle would remain mostly in the state it started in, until the tilt gets so large that the particle can be thermally activated OVER the barrier. So classically the probability of the state being found on the right hand side ‘state 1’ as a function of tilt looks something like this:

Quantum mechanically, as the energy levels ‘line up’, the particle can tunnel through the barrier – and so you get a little resonance in the probability of finding it on the other side (hence the name MRT). There are lots of energy levels in the wells, so as you tilt the system more and more, you encounter many such resonances. So the probability as a function of tilt now looks something like this:

This is a really cool result as it demonstrates that your system is quantum mechanical. There’s just no way you can get these resonances classically, as a classical particle simply cannot get through the barrier.

Note: This is slightly different from macroscopic quantum tunneling, when the state tunnels out of the well-system altogether, in the same way that an alpha particle ‘tunnels’ out of the nucleus during radioactive decay and flies off into the ether. But that is a topic for another post.

So what’s all this co-tunneling stuff?

It’s all very nice showing that a single qubit is behaving quantum mechanically. Big deal, that’s easy. But stacking them together like qubit lego and showing that the resulting structure is quantum mechanical is harder.

Anyway, that is what this paper is all about. Two flux qubits are locked together by magnetic coupling, and therefore the double well potential is now actually 4-dimensional. If you don’t like thinking in 4D, you can imagine two separate double-wells, which are locked together so that they mimic each other. Getting the double well potentials similar enough to be able to lock them together in the first place is also really hard with superconducting flux qubits. It’s actually easier with atoms or ions than superconducting loops, because nature gives you identical systems to start with. But flux qubits are more versatile for other reasons, so the effort that has to go into making them identical is worthwhile.

Once they are locked together, you can again start tilting the ‘two-qubit-potential’. The spacing of the energy levels will now be different (think about a mass on the end of a spring – if you glue another mass to it, the resonant frequencies of the system will change, and the energy levels of the system along with them). We have sort of made our qubit ‘heavier’ by adding another one to it.
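The mass-on-a-spring analogy is easy to make quantitative. For a harmonic oscillator the level spacing is proportional to omega = sqrt(k/m), so gluing on a second, equal mass shrinks the spacing by a factor of sqrt(2) (illustrative units, nothing to do with the actual device values):

```python
# Resonant frequency of a mass on a spring: omega = sqrt(k/m).
# The harmonic energy-level spacing is hbar*omega, so a heavier
# "qubit" has more closely spaced levels.
import math

k = 1.0                              # spring constant (arbitrary units)
m = 1.0                              # single "qubit" mass

omega_one = math.sqrt(k / m)
omega_two = math.sqrt(k / (2 * m))   # two masses glued together

print(omega_one, omega_two)          # spacing shrinks by a factor of sqrt(2)
```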

But we still see the resonant peaks! Which means that two qubits locked together still behave as a nice quantum mechanical object. The peaks don’t look quite as obvious as the ones I have drawn in my cartoon above. If you want to see what they really look like check out Figure 3 of the preprint. (Note that the figure shows MRT ‘rate’ rather than ‘probability’, but the two are very closely linked)

From the little resonant peaks that you see, you can extract Delta – which is a measure of the energy level spacing in the wells. In this particular flux-qubit system, the energy level spacing (and therefore Delta) can be tuned finely by using another superconducting loop attached to the main qubit loop. So you can make your qubit mass-on-a-spring effectively heavier or lighter by this method too. When the second tuning loop is adjusted, the resulting change in the energy level separation agrees well with theoretical predictions.

As you add more and more qubits, it gets harder to measure Delta, as the energy levels get very close together, and the peaks start to become washed out by noise. You can use the ‘tuning’ loop to make Delta bigger, but it can only help so much, as the tuning also has a side effect: It lowers the overall ‘signal’ level of the resonant peaks that you measure.

In summary:

Looking at the quantum properties of coupled qubits is very important, as it helps us experimentally characterise quantum computing systems.
Coupling qubits together makes them ‘heavier’ and their quantum energy levels become harder to measure.
Here two coupled qubits are still behaving quantum mechanically, so this is promising. This means that the quantum computation occurring on these chips involves at least two qubits interacting in a quantum mechanical way. Physicists call these ‘2-qubit processes’. There may be processes of much higher order happening too.
This is pretty impressive considering that these qubits are surrounded by lots of other qubits, and connected to many, many other elements in the circuitry. (Most other quantum computing devices explored so far are much more isolated from other nearby elements).

H+ Summit 2010 @ Harvard

I’m currently at Harvard listening to some pretty awesome talks at the H+ Summit. I always really enjoy attending these events, the atmosphere is truly awesome. So far we have had talks about brain preservation, DIY genomics, neural networks, robots on stage, AI, consciousness, synthetic biology, crowdsourcing scientific discovery, and lots lots more.

The talks are all being livestreamed, which is pretty cool too. I can’t really describe the conference in words, so here are some pictures from the conference so far:

Audience pic:

General overview:

Here is a picture of Geordie’s talk about D-Wave, quantum computing and Intelligence:

Here is a picture of me next to the Aiken-IBM Automatic Sequence Controlled Calculator Mark I. This thing is truly amazing, a real piece of computing history.

The MIT museum was also really cool 🙂 More news soon!

An adiabatic tragedy of advocates and sceptics

I think the recent musings around the blogosphere and beyond are completely missing a huge and fundamental point about why building AQC and AQO-based machines will be not only useful, but something that we will wonder how we lived without. I’m not going to talk about the specifics of implementing superconducting technology for cool applications (I can defer that to subsequent posts). Here I just want to explain a little about why we should do it, and how the main hurdle to progressing such technology is nothing to do with our ability to build and understand the technology itself. In fact, it is a rather sad story.

Let’s think for a second: Do we really have the right computational substrates for realising concepts such as highly connected neural networks, and thinking machines? Is the Von Neumann architecture really the best way to support such things? We are currently solving the problem of simulating intelligent systems by throwing more and more computational power at them. Though there’s something very odd about this, we have little choice, as we have become very good at designing processors that behave in a particular way, that talk to memory in a particular way, and that have a small number of cores. The problem with the commonly used architectures is that they just cannot embed things like neural nets, and other highly parallel structures, very efficiently.

Could adiabatic quantum processors be used for neural net and highly parallel processing purposes? Of course! The architectures are very compatible. AQO processors are very similar to Boltzmann machines, which underlie the operation of many pattern-recognising systems, such as our own brain. There are other delicious fruits of superconducting logic, for example the ease with which we can implement reversible logic, and the exceedingly low power dissipation of such circuits. These systems also exhibit macroscopic quantum effects, which may or may not be a great resource in computation. But even if they are not, we should not ignore the fact that actually building such machines is the only way to answer this, and many other, questions.
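To make the Boltzmann-machine connection concrete: both systems are organised around the same Ising-style energy function over binary variables, E(s) = sum_i h_i s_i + sum_(i<j) J_ij s_i s_j. A Boltzmann machine samples low-energy configurations thermally; an AQO processor tries to reach them by annealing. A minimal sketch with made-up biases and couplings:

```python
# Ising-style energy over spins s_i in {-1, +1}. The h's and J's below
# are purely illustrative; on real hardware they would be programmed in
# as local biases and coupler strengths.
import itertools

h = [0.5, -0.3, 0.1]                 # local biases
J = {(0, 1): -1.0, (1, 2): 0.7}      # pairwise couplings

def energy(s):
    e = sum(hi * si for hi, si in zip(h, s))
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

# exhaustive ground-state search (fine for 3 spins; the whole point of
# annealing hardware is that this brute force fails at scale)
ground = min(itertools.product([-1, 1], repeat=3), key=energy)
print(ground, energy(ground))
```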

I think that superconducting processors are offering us a wonderful gift here, and yet we refuse to take advantage of it. Why? The reasons are a little hard to stomach.

It seems that while we’d rather just get going building some cool things, we end up spending a large proportion of our time and effort debating issues like whether or not quantum computers can solve NP-Complete problems in polynomial time. What?! I hear you say. My answer: Exactly. Obscure as they are, these questions are the most important thing in the world to a vanishingly small fraction of the population. Yet these seemingly lofty theoretical debates are casting a shadow over the use of superconducting QC architectures in every area of research, including things like the novel implementations of hardware-based neural nets, which may prove to be an even more fruitful avenue than optimization.

It will take a large amount of financing to commercialize general purpose superconducting processors, and an unimaginably large effort on behalf of the scientific researchers and engineers who devote their time to trying to progress this technology. The step from fundamental research to commercialisation cannot and will not work in an academic environment. Why? Because in order to fabricate integrated circuits of any kind you need investments of hundreds of millions of dollars into foundries. Robust and scalable technologies can only be realised in an industrial setting. Sure, small scale systems can be realised in many labs, but there are no industrial uses for devices produced in this setting: Anything that they demonstrate can be outperformed with a standard computing system. And it will stay that way until we realise that we need to take a risk as a society and really invest now in a technology of the future. I’ve seen distressing things happening at RSFQ roadmapping meetings. The conclusion of the roadmapping somewhat tragically boils down to ‘make another roadmap’ – because there are no applications beyond a few incredibly specialised circuits that can be realised within the scope of government research grants (~$10K-5M. Oh and by the way they aren’t very reproducible either). There is no funding on the necessary scale, and therefore whatever money is put into this distributed and broken field rather swiftly evaporates, even though if considered cumulatively it may have been enough to get a single, well focused effort going.

So, my plea to the academic world is to realise that there comes a point where quarrelling over things should be secondary to our solidarity as a community who really want to see something achieved. I’d rather try and fail than sit smugly in my ivory tower proclaiming that something is 20, or 50 years away (read: therefore not my problem, guys!). Day after day, people (with little or no knowledge in the field) exclaim that these things will never work, you are not building anything useful, your results are fraudulent, your devices are vapour-ware… etc. etc.

This is more than cautious scientific scepticism, this is sheer hatred. It seems very unusual, and from my point of view, very detrimental to the entire field. I’d first advise such people to read some of our peer-reviewed papers to get their facts straight. I’d then say the following:

If those working in the fields of superconducting electronics, flux qubits, and quantum information really cared about the future of a technology that they had helped show had huge theoretical promise, they should rally along with us rather than spitefully burning their own bridges. As a community, we should promote the potential of our research and try to convince whoever needs convincing that these endeavours, these ‘Manhattan’ style projects are the only way in which we can bring disruptive technologies to fruition. And even if such projects fail, even if there is no other possible spin off that comes out of them, think about this: Such projects cost about the same amount as a couple of ICBMs. I know which I’d rather have more of in this world.

Responses to some post-singularity Physics comments

So the post I wrote on Post-Singularity Physics got linked a couple of times, and although people are very reluctant to comment on original source material these days, there were many, many discussions over at io9.

Here are some points:

svenhoek likes guns: “A ceiling? Wow, I think we have heard this kind of speak all through history. People that say this always are made to look stupid by history. plus, there will still be scientists needed to analyze the data and figures the machines bring up.”

But why shouldn’t there be a ceiling, just because there hasn’t been one up until now? 😉 To let someone else do the work for me, I think that this comment summarises nicely how I feel about this ‘ceiling’ argument:

Derek Pegritz: “I think one of the reasons there hasn’t been a major breakthrough in theoretical physics since the early days of quantum mechanics is simply that the baseline human mind is no longer capable of crunching the multidimensional data required to view the universe in its totality. As we are now, humans are perfectly capable of observing, analyzing, and describing the way in which the 4-dimensional submanifold our minds exist in primarily. Once you start adding dimensions and reduce physics to dealing with incredibly fine, emphemeral particle and field transactions, the 1.0 human mind is simply cannot envision the higher orders of logic which may obtain. This is not to say that we can’t see and understand certain *glimpses* of higher-order multidimensional physics–string theory, M theory, et. al. indicate that we can certainly comprehend it to *some* degree–but to really get to the heart of post-quantum physics, we’ll probably need a measure of intelligence amplification. Combining a machine-mind’s speed and data-crunching capacity with the human brain’s excellent pattern-matching faculties could very well lead to a true unified TOE.”

(There were several other comments along this line)
I do agree that humans mostly work by analogy and building up a mental picture of the world to whatever level of detail is necessary for us to survive. So the argument goes that we just aren’t evolved to be able to make logical arguments in N-dimensions – it doesn’t come naturally to us, so we use tools to help, but we are still at a disadvantage in that the links are no longer ‘obvious’ to us. How often have you heard the phrase ‘quantum mechanics is weird’ or ‘spooky’ – even Physicists don’t like QM because it is non-intuitive (it is also incomplete, but let’s not go there – it’s pretty good, and a better theory that closed some of those loopholes probably wouldn’t look *much* more ‘intuitive’ to us.)

Makidian: “I feel like only part of this will ever be true. I don’t think physicists will ever want to give up the aspect of their job that causes them to ask fundamental questions of everything. It would counterproductive and against what they have been working for to just have a computer do it for them. These futurists are so in love witht the idea of the singularity that they just start posing theories without considering that humans may want to do it and still work out the answer for themselves.
I’m all about making a better human to a certain degree, but not at the cost of losing my own humanity because isn’t that really the point!?”

I think the point here is not that they wouldn’t still want to do their job, but that they would HAVE to give it up if something could do it much better than them, as no-one would pay them (at a higher rate) to continue. Physicists don’t sit around pondering the great problems of the Universe for fun – most do it to earn a living. At the least they would have to relegate it to a hobby (in the same way that people still build radios from discrete components). If machines do something better than people, they will be replaced, unless something about our global socio-economic system changes radically in the next few decades (which I won’t rule out completely). This is also assuming that human-physicists remain the same throughout this entire process, which I also do not believe will happen. We are already starting to augment ourselves as a community (how many of us read arXiv on smart-phones on the train in the morning?) There are many ways in which we can continue this trend, hence my point in the original post about experimentalists possibly undergoing a human-machine merger just before their jobs get stolen 😉

artum: “Enough of this Singularity BS. It’s just another get-immortal-quick scheme for morons. Also research of any kind like this would be a horrible idea because there’d be no one to confirm it as with ordinary science, only other science-bots”

This comment made me smile. Gotta love that bio-Luddite ad-hominem attack touch. Anyway, I think that the machines would very much enjoy sharing the information with each other; if their hardwired goal was to discover a model with which to predict the behaviour of the Universe, it would make them very happy! Much happier than it would humans, who tend to think: ‘Damn, I’ve been scooped again by my competitor’.

RandomThought: “As for the rest of your article, I personally have come to the opinion, that in general more stays the same than changes. People have always thought they were heading towards the big everything-changing event and somehow life mostly just goes on. “

The Singularity meme is a double edged sword. It gives people an idea of what I am talking about without me having to describe it in minute detail, but then again I have to accept all the baggage that comes along with it. So yes, I admit, I was being lazy, and I wasn’t necessarily referring to the Singularity in all its ‘wonderful glory’ but just rather the part where AI technologies start to become better at doing Physics than we are. Personally, this really does radically affect my life, seeing as I am a professional Physicist for a living 😉

Just in case you missed the links:
Original post
Discussion over at io9