Experimental investigation of an eight-qubit unit cell in a superconducting optimization processor

Anyone who follows this blog and wants to get a real in-depth insight into the way that D-Wave’s processors are built, and how they solve problems, should definitely read this paper:

Phys. Rev. B 82, 024511 (2010), R. Harris et al.

The paper itself is quite long (15 pages) but it really gives a great description of how an 8-qubit ‘portion’ of the processor is designed, fabricated, fitted to a physical (quantum mechanical) model, calibrated, and then used to solve problems.

If you don’t have access to the Phys Rev B journal, you can read a free preprint of the article here. And if you’ve never tried reading a journal paper before, why not give it a go! (This is an experimental paper, which means there are lots of pretty pictures to look at, even if the Physics gets hard to follow). For example, a microphotograph of the 8-qubit cell:

An adiabatic tragedy of advocates and sceptics

I think the recent musings around the blogosphere and beyond are completely missing a huge and fundamental point about why building AQC- and AQO-based machines will be not only useful, but something we will wonder how we ever lived without. I’m not going to talk about the specifics of implementing superconducting technology for cool applications (I’ll defer that to subsequent posts). Here I just want to explain a little about why we should do it, and how the main hurdle to progressing such technology has nothing to do with our ability to build and understand the technology itself. In fact, it is a rather sad story.

Let’s think for a second: do we really have the right computational substrates for realising concepts such as highly connected neural networks and thinking machines? Is the Von Neumann architecture really the best way to support such things? We are currently solving the problem of simulating intelligent systems by throwing more and more computational power at them. Though there’s something very odd about this, we have little choice, as we have become very good at designing processors that behave in a particular way, that talk to memory in a particular way, and that have a small number of cores. The problem with the commonly used architectures is that they just cannot embed things like neural nets, and other highly parallel structures, very efficiently.

Could adiabatic quantum processors be used for neural net and highly parallel processing purposes? Of course! The architectures are very compatible. AQO processors are very similar to Boltzmann machines, which underlie the operation of many pattern-recognising systems, such as our own brain. There are other delicious fruits of superconducting logic, for example the ease with which we can implement reversible logic, and the exceedingly low power dissipation of such circuits. These systems also exhibit macroscopic quantum effects, which may or may not be a great resource in computation. But even if they are not, we should not ignore the fact that actually building such machines is the only way to answer this, and many other questions.
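To make the Boltzmann-machine analogy concrete: an AQO processor physically minimises an Ising energy E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j over spins s_i = ±1, which is exactly the energy function of a binary Hopfield/Boltzmann-style network. Below is a minimal classical sketch of that optimisation, using simulated annealing in place of any quantum hardware; the four-spin ring instance and all parameter values are purely illustrative.

import numpy as np

def ising_energy(s, h, J):
    """E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j (J symmetric, zero diagonal)."""
    return h @ s + s @ J @ s / 2.0

def anneal(h, J, steps=20000, T0=2.0, T1=0.01, seed=0):
    """Classical simulated annealing on the Ising energy: a software stand-in
    for the physical annealing an AQO processor performs."""
    rng = np.random.default_rng(seed)
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    for k in range(steps):
        T = T0 * (T1 / T0) ** (k / steps)      # geometric cooling schedule
        i = rng.integers(n)
        dE = -2.0 * s[i] * (h[i] + J[i] @ s)   # energy change from flipping spin i
        if dE < 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
    return s, ising_energy(s, h, J)

# Toy instance: four spins on a ring with antiferromagnetic couplings,
# whose ground states are the two alternating configurations.
n = 4
h = np.zeros(n)
J = np.zeros((n, n))
for i in range(n):
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = 1.0  # J > 0 favours anti-alignment
s, E = anneal(h, J)
print("spins:", s, "energy:", E)

Training a Boltzmann machine then amounts to tuning the h and J values so that low-energy states correspond to learned patterns, which is why the mapping onto this kind of hardware is so natural.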

I think that superconducting processors are offering us a wonderful gift here, and yet we refuse to take advantage of it. Why? The reasons are a little hard to stomach.

It seems that while we’d rather just get going building some cool things, we end up spending a large proportion of our time and effort debating issues like whether or not quantum computers can solve NP-complete problems in polynomial time. What?! I hear you say. My answer: exactly. Obscure as they are, these questions are the most important thing in the world to only a vanishingly small fraction of the population. Yet these seemingly lofty theoretical debates are casting a shadow over the use of superconducting QC architectures in every area of research, including novel implementations of hardware-based neural nets, which may prove to be an even more fruitful avenue than optimization.

It will take a large amount of financing to commercialise general-purpose superconducting processors, and an unimaginably large effort on the part of the scientific researchers and engineers who devote their time to progressing this technology. The step from fundamental research to commercialisation cannot and will not happen in an academic environment. Why? Because in order to fabricate integrated circuits of any kind you need investments of hundreds of millions of dollars in foundries. Robust and scalable technologies can only be realised in an industrial setting. Sure, small-scale systems can be realised in many labs, but there are no industrial uses for devices produced in this setting: anything they demonstrate can be outperformed by a standard computing system. And it will stay that way until we realise that we need to take a risk as a society and really invest now in a technology of the future. I’ve seen distressing things happening at RSFQ roadmapping meetings. The conclusion of the roadmapping somewhat tragically boils down to ‘make another roadmap’, because there are no applications beyond a few incredibly specialised circuits that can be realised within the scope of government research grants (~$10K-5M; and by the way, they aren’t very reproducible either). There is no funding on the necessary scale, and therefore whatever money is put into this distributed and broken field rather swiftly evaporates, even though, considered cumulatively, it may have been enough to get a single, well-focused effort going.

So, my plea to the academic world is to realise that there comes a point where quarrelling over things should be secondary to our solidarity as a community that really wants to see something achieved. I’d rather try and fail than sit smugly in my ivory tower proclaiming that something is 20 or 50 years away (read: therefore not my problem, guys!). Day after day, people with little or no knowledge of the field exclaim that these things will never work, that you are not building anything useful, that your results are fraudulent, that your devices are vapourware… etc., etc.

This is more than cautious scientific scepticism; this is sheer hatred. It seems very unusual and, from my point of view, very detrimental to the entire field. I’d first advise such people to read some of our peer-reviewed papers to get their facts straight. I’d then say the following:

If those working in the fields of superconducting electronics, flux qubits, and quantum information really care about the future of a technology that they themselves helped show to have huge theoretical promise, they should rally along with us rather than spitefully burning their own bridges. As a community, we should promote the potential of our research and try to convince whoever needs convincing that these endeavours, these ‘Manhattan’-style projects, are the only way in which we can bring disruptive technologies to fruition. And even if such projects fail, even if no other possible spin-off comes out of them, think about this: such projects cost about the same as a couple of ICBMs. I know which I’d rather have more of in this world.

AQC / AQO video talk

Here is a video lecture about Adiabatic Quantum Computing and Adiabatic Quantum Optimization (specifically describing some cool things that you can do with D-Wave hardware) that I gave a while ago to my former colleagues at the University of Birmingham. This is a slightly higher-level talk than the previous ones I have posted. Thanks again to my kind colleague and good friend (soon to be Dr.) Dominic Walliman for editing and posting these videos!

The talk is entitled ‘Playing with adiabatic hardware: From designer potentials to quantum brains’ although it certainly isn’t quite as ‘brain’ focused as some of the previous talks I have given, heh 🙂

Here are the other parts (they should be linked from that one, but just in case people can’t find them):

AQC Part 2
AQC Part 3
AQC Part 4
AQC Part 5
AQC Part 6

P.S. I wasn’t trying to be mean to the gate model (or computer scientists for that matter) – it just kinda happened…

P.P.S. Some of the notation is a bit off – the J’s should be K’s to be consistent with the literature, I believe…

Josephson junction neurons

This is an interesting paper:

Josephson junction simulation of neurons

by Patrick Crotty, Daniel Schult, Ken Segall (Colgate University)

“With the goal of understanding the intricate behavior and dynamics of collections of neurons, we present superconducting circuits containing Josephson junctions that model biologically realistic neurons. These “Josephson junction neurons” reproduce many characteristic behaviors of biological neurons such as action potentials, refractory periods, and firing thresholds. They can be coupled together in ways that mimic electrical and chemical synapses. Using existing fabrication technologies, large interconnected networks of Josephson junction neurons would operate fully in parallel. They would be orders of magnitude faster than both traditional computer simulations and biological neural networks. Josephson junction neurons provide a new tool for exploring long-term large-scale dynamics for networks of neurons.”

Advantages of using RSFQ-style architectures include the non-linear response of the elements and the analogue processing capability, which means you can mimic more ‘logical’ neurons with fewer ‘physical’ elements. I’m pretty sure that this is true. In addition, you can think of other wonderful ideas, such as using SQUIDs instead of single junctions (hmm, I wonder where this train of thought might lead) and then applying non-local (or global) magnetic fields to adjust the properties of the neural net. Which might be a bit like adjusting the global level of a particular neurotransmitter.
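To get a feel for the ‘junction as neuron’ idea, consider a single current-biased junction in the standard RCSJ (resistively and capacitively shunted junction) model: below its critical current it sits quietly, and when pushed above it the phase slips in 2*pi jumps, each emitting an SFQ voltage pulse, which is the action-potential-like event being exploited. The sketch below is not the two-junction circuit from the paper, just the dimensionless RCSJ equation integrated numerically, with all parameter values invented for illustration.

import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless RCSJ model of one current-biased Josephson junction:
#   beta_c * phi'' + phi' + sin(phi) = i_bias(t)
# The voltage is proportional to phi'; each 2*pi phase slip is one SFQ pulse,
# loosely analogous to a neuron's action potential.
BETA_C = 0.5  # Stewart-McCumber parameter (moderate damping, non-hysteretic)

def i_bias(t):
    # Sub-critical DC bias plus a brief super-critical "stimulus" pulse.
    return 0.9 + (0.6 if 20.0 < t < 50.0 else 0.0)

def rhs(t, y):
    phi, v = y
    return [v, (i_bias(t) - v - np.sin(phi)) / BETA_C]

sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0], max_step=0.05)
slips = int((sol.y[0][-1] - sol.y[0][0]) // (2 * np.pi))
print(f"phase slips (SFQ 'firings') during the stimulus: {slips}")

The junction even shows a refractory-like character for free: once the stimulus drops back below the critical current, a moderately damped junction retraps and stops pulsing.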

I’m a bit worried about this approach though. Current superconducting technologies tend to have a low number of wiring layers (<5), and as such are pretty much a two-dimensional, planar technology. The maximum tiling connectivity you can get from a single-layer planar architecture is presumably six nearest neighbours per unit cell (hexagonal close packing). The three-dimensional packing in a real brain gives you a higher intrinsic level of connectivity, even though the structure of the neocortex is only quasi-3-dimensional (it is more like 2D sheets crumpled up, but even these ‘2D’ sheets have a fair amount of 3D connectivity when you look closely). In a real brain, each neuron can have tens of thousands of differently weighted inputs (the fan-in problem). Try building that into your mostly-planar circuit 🙂

One good thing about using analogue methods is that not all the neurons need to be identical. In fact, having a parameter spread in this massively parallel architecture probably doesn't hurt you at all (it might even help). Which is good, as current Josephson junction foundries have issues with parameter spreads in the resulting digital circuitry (they are nowhere near as tightly controlled as semiconductor foundries).

The paper claims that the tens of thousands of neurons in a neocortical column might be simulable using this method. Personally, I think that is very optimistic with present LSI JJ technology… but even considering the connectivity, parameter-spread and fan-in problems, I think this is a very interesting area to investigate experimentally.

I’ve actually written a bit about this topic before:

Quantum Neural Networks 1 – the Superconducting Neuron model

In that blog post there are some links to experiments performed on simple Josephson junction neuron circuits in the 1990s.

Some nice D-Wave info

I suspect this may be redundant information, as my readership is probably a subset of D-Wave’s blog readership, but… for anyone who didn’t see it, there is a series of new posts over at Geordie’s blog about D-Wave’s technology, aims, results, fabrication, and mostly anything else you could wish to know about the company’s quantum computing efforts. There are several links to presentations containing plenty of data to mull over.

What D-Wave are trying to build

The posts are still being added at the moment, so make sure you check back often to look for updates. For my colleagues who read this blog (I know who you are…) you should definitely check it out.

Designing qubit circuits

It’s hard work being the only postdoc in the village. One day I’m fixing wiring on the fridge, the next I’m analysing the effect of spin-flip scattering on my superconductor-ferromagnet data. Today I’m being the local RSFQ/SQUID layout aficionado.

I’m designing some qubit circuits. Process design rules are a pain: there are about 10 layers in a standard niobium process, and you have to get all the holes and structures spaced correctly (do I hear a tiny violin?). Luckily I have several helpful guides, such as Ustinov’s group website, which contains information (mostly in the doctoral theses) on their structures, which were fabricated by VTT and HYPRES.

Here are some pictures of what I’m doing:

[Images: two preliminary qubit circuit layout screenshots]

They are very preliminary designs at the moment; I haven’t even got all the layers in there yet.
I also have to calculate the mutual inductances between the structures using finite element techniques, which lets you know how well your qubit couples to your readout circuitry (in this case, DC SQUIDs and microwave resonators, depending on the design). It’s quite fun to do circuit layout though. These circuits will probably be realised at the European FLUXONICS foundry at IPHT.
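For the simplest case of two filamentary loops there is a shortcut worth knowing: Neumann’s formula, M = (mu0/4pi) * (double closed line integral of) (dl1 . dl2)/|r1 - r2|, gives the mutual inductance directly. Below is a rough numerical sketch for two coaxial circular loops; real thin-film qubit and readout structures have finite linewidths and ground planes, which is why a proper field solver is used in practice, and the geometry here is purely illustrative.

import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def circle(radius, z, n):
    """Discretise a circular filament (radius, height z) into n straight segments."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    pts = np.stack([radius * np.cos(t), radius * np.sin(t), np.full(n, z)], axis=1)
    dl = np.roll(pts, -1, axis=0) - pts  # segment vectors
    mid = pts + 0.5 * dl                 # segment midpoints
    return mid, dl

def mutual_inductance(loop1, loop2):
    """Neumann's formula: M = (mu0/4pi) * sum_ij (dl_i . dl_j) / |r_i - r_j|."""
    (m1, d1), (m2, d2) = loop1, loop2
    r = np.linalg.norm(m1[:, None, :] - m2[None, :, :], axis=2)
    return MU0 / (4 * np.pi) * np.sum((d1 @ d2.T) / r)

# Two coaxial 10 um radius loops, 2 um apart (filament approximation).
M = mutual_inductance(circle(10e-6, 0.0, 720), circle(10e-6, 2e-6, 720))
print(f"M ~ {M * 1e12:.1f} pH")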

Quantum Neural Networks 1 – the Superconducting Neuron model

I’m interested in Quantum Neural Networks, specifically how to actually build the things. Any input would be greatly appreciated on this one. This is open notebook science in an extreme sense: I’m discussing here something I’d like to go into eventually, it may be several years down the line, but it’s worth thinking about it in the meantime.

The first point I’d like to address is the Superconducting Neuron model – an approach which attempts to build real-life, biologically inspired neural nets from superconducting hardware. I’ll discuss some other approaches to utilising the ‘quantum’ aspect of QNNs more efficiently in subsequent posts; for now the discussion is limited to this one hardware model.

Here are some papers I’ve been reading on the subject:

Mizugaki et al., IEEE Trans. Appl. Supercond., 4, (1), 1994

Rippert et al., IEEE Trans. Appl. Supercond., 7, (2), 1997

Hidaka et al., Supercond. Sci. Technol., 4, 654-657, 1991

There are several advantages to using SC hardware to build NNs. The RSFQ framework makes it much easier to implement, for example, fan-in and fan-out systems. Flux pulses can correspond directly to nerve firings. The circuit elements dissipate much less power than their silicon counterparts. And you could simulate factors such as neurotransmitter levels and polarity using flux couplers and bias leads, which (I believe) seems to be a much more natural way of doing things than trying to invent a way to mimic this in semiconductor technology.
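As a cartoon of what such a network looks like at the most abstract level, here is a toy integrate-and-fire sketch in which each ‘firing’ is a discrete pulse (one flux quantum, in the hardware picture) and the coupling matrix stands in for flux-coupler mutual inductances; the thresholds carry a ~5% spread to mimic junction tolerances (see problem 1 below). Every number in it is invented, and nothing here models real junction dynamics.

import numpy as np

rng = np.random.default_rng(1)

# Toy pulse-coupled integrate-and-fire network. A unit "fires" by emitting a
# discrete pulse; W stands in for the flux-coupler mutual inductances, and the
# thresholds get a ~5% spread to mimic junction parameter tolerances.
n = 50
W = rng.normal(0.0, 0.5, (n, n)) / np.sqrt(n)  # random pulse couplings
np.fill_diagonal(W, 0.0)                       # no self-coupling
theta = rng.normal(1.0, 0.05, n)               # firing thresholds, 5% spread
leak = 0.9                                     # leaky integration per step

v = np.zeros(n)
total = 0
for step in range(200):
    fired = v >= theta                            # emit a pulse at threshold
    total += int(fired.sum())
    external = 0.6 * (rng.random(n) < 0.1)        # sparse external input pulses
    v = leak * v * ~fired + W @ fired + external  # reset, couple, integrate
print(f"total firings over 200 steps: {total}")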

What I understand about this field so far: in the 1990s, a couple of Japanese groups tried to demonstrate the principles of superconducting neuron circuits. They built a few, and they even worked, up to a point. So what has happened to this research?

Four Problems

1.) Well, one school of thought is that the device tolerance is just not up to scratch. It is true that when you make Josephson junction circuits, the tolerances on the individual components tend not to be better than ~5%. However, is this really a problem? I can’t see that being the case; biological neurons surely aren’t matched to each other to anything like that tolerance either.

2.) Another potential problem is that research into neural networks has generally diminished (partly due to the so-called AI winter). If people using supercomputers can’t get their simulated neural networks to do anything *that* interesting, why bother building them in hardware? Such realisations would have far fewer neurons anyway! I guess the answer is that simulating superconducting circuits is still quite hard, and there could be some real advantages to building the things – similar to the reasons for building modern ASICs.

3.) A third problem is the device integration level. Even with the best fab facilities available, superconducting circuits can only be made to low-level VLSI (tens of thousands of junctions). Again my point is – well, why not try something on this scale? Unfortunately, cell libraries for RSFQ design probably don’t natively support the kind of realisations you need to build superconducting neurons. (For example, you need a great deal of FAN-IN and FAN-OUT.) So you’d probably have to go fully custom, but that’s just a design challenge.

4.) And then there’s a theoretical problem that has been bugging me for a while now. Although you can simulate any level of connectivity in highly abstracted models of NNs (given enough processing power and memory), if you actually want to build one, are you limited by the current 2-dimensional, planar nature of the fabrication process? In a 3-dimensionally interconnected system such as a real human brain, distant regions can be connected via UNIQUE, DIRECT links. In a 2D system, you are limited by the circuit layout and can (essentially) only make nearest-neighbour connections. I’m pretty sure there’s a graph theory proof pinging somewhere around the edge of my mind here about connectivity in different-dimensional systems. The question is: does this limitation mean that it is theoretically impossible to build biologically inspired neural networks in planar hardware?
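The graph-theory result being reached for here is presumably Kuratowski’s theorem: the complete graph K5 (five nodes, all-to-all) is already non-planar, so no single crossover-free wiring layer can realise even five fully interconnected neurons, let alone brain-like fan-in. A quick sanity check using the networkx graph library:

import networkx as nx

# Kuratowski's theorem: K5 and K3,3 are the minimal non-planar graphs, so a
# single planar wiring layer cannot provide all-to-all connectivity beyond
# four nodes without crossovers.
for n in range(3, 8):
    g = nx.complete_graph(n)
    planar, _ = nx.check_planarity(g)
    print(f"K{n} ({n} neurons, all-to-all): planar = {planar}")

Extra wiring layers act as permitted crossings, so the obstruction is not absolute; the practical cost just reappears as rapidly growing wiring area.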

The field of RSFQ / superconducting digital electronics is suffering from low funding at the moment, thanks to ‘lack of applications’ syndrome. The number of people investigating applications of RSFQ circuits and Josephson logic seems to be much lower than the number working on the fundamental Physics of the devices. It’s a problem with the way research is funded: no-one will fund mid-term technology development; it’s either fundamental Physics or applications breakthroughs.

There may well be research being done in this area that I am unaware of, and I would be most intrigued to learn of any progress, and whether there are problems in addition to the four presented here. However, if the research is not being done, why not? And would it be possible to get funding for projects in this area…