I’m interested in Quantum Neural Networks, specifically in how to actually build the things. Any input would be greatly appreciated. This is open notebook science in an extreme sense: I’m discussing something I’d like to go into eventually, perhaps several years down the line, but it’s worth thinking about in the meantime.

The first point I’d like to address is the Superconducting Neuron model – an approach which attempts to build real, biologically inspired neural nets from superconducting hardware. I’ll discuss other approaches to utilising the ‘quantum’ aspect of QNNs more efficiently in subsequent posts; for now the discussion is limited to this one hardware model.

Here are some papers I’ve been reading on the subject:

Mizugaki et al., IEEE Trans. Appl. Supercond. **4** (1), 1994

Rippert et al., IEEE Trans. Appl. Supercond. **7** (2), 1997

Hidaka et al., Supercond. Sci. Technol. **4**, 654–657, 1991

There are several advantages to using superconducting hardware to build NNs. The RSFQ framework makes it much easier to implement, for example, fan-in and fan-out systems. Flux pulses can correspond directly to nerve firings. The circuit elements dissipate much less power than their silicon counterparts. And you could simulate factors such as neurotransmitter levels and polarity using flux-couplers and bias leads, which seems to me a much more natural approach than trying to invent a way to mimic these effects in semiconductor technology.
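To make the analogy concrete, here is a crude classical sketch – not drawn from any of the papers above, and with every parameter invented – that treats each SFQ pulse as a unit impulse into a leaky integrate-and-fire neuron, with signed weights standing in for the excitatory/inhibitory role that flux-couplers and bias leads would play:

```python
# Crude classical sketch: each single-flux-quantum (SFQ) pulse is treated
# as a unit impulse into a leaky integrate-and-fire neuron. Threshold,
# leak rate and synaptic weights are illustrative only.

def run_neuron(pulse_trains, weights, threshold=1.0, leak=0.2, steps=20):
    """pulse_trains: one list of 0/1 pulses per input, per time step.
    weights: signed synaptic weights (excitatory > 0, inhibitory < 0).
    Returns the time steps at which the neuron fires."""
    potential = 0.0
    firings = []
    for t in range(steps):
        potential *= leak  # passive decay between pulses
        for train, w in zip(pulse_trains, weights):
            if t < len(train) and train[t]:
                potential += w  # incoming flux pulse kicks the potential
        if potential >= threshold:
            firings.append(t)  # emit an output pulse and reset
            potential = 0.0
    return firings

# Two excitatory inputs pulsing together cross threshold; one alone decays
# to a sub-threshold steady state and never fires.
both = run_neuron([[1] * 5, [1] * 5], weights=[0.6, 0.6])
one = run_neuron([[1] * 5, [0] * 5], weights=[0.6, 0.6])
```

This is only the classical caricature, of course; the point is that the pulse-based dynamics map onto RSFQ primitives rather directly.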

What I understand about this field so far: in the 1990s a couple of Japanese groups tried to demonstrate the principles of superconducting neuron circuits. They built a few, and they even worked up to a point. So what has happened to this research?

*Four Problems*

1.) One school of thought is that the device tolerance is just not up to scratch. It is true that when you make Josephson junction circuits, the tolerances on the individual components tend not to be better than ~5%. But is this really a problem? I can’t see that being the case; I doubt that the parameter matching between individual biological neurons is anywhere near that good.
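As a quick sanity check on the tolerance question, here is a toy Monte Carlo – nothing to do with any actual JJ circuit; the threshold-gate XOR network and the 5% spread are purely illustrative – showing that a network with reasonable decision margins shrugs off component variations of that size:

```python
import random

# Hand-picked threshold-gate weights for a 2-layer XOR network:
# h1 = OR(a, b), h2 = AND(a, b), out = h1 AND NOT h2.
def xor_net(a, b, w):
    step = lambda x: 1 if x >= 0 else 0
    h1 = step(w[0] * a + w[1] * b - 0.5)     # OR
    h2 = step(w[2] * a + w[3] * b - 1.5)     # AND
    return step(w[4] * h1 - w[5] * h2 - 0.5)  # h1 AND NOT h2

nominal = [1.0] * 6
trials, failures = 10_000, 0
for _ in range(trials):
    # Perturb every 'component' independently by up to +/-5%.
    w = [wi * random.uniform(0.95, 1.05) for wi in nominal]
    if any(xor_net(a, b, w) != (a ^ b) for a in (0, 1) for b in (0, 1)):
        failures += 1
```

The smallest decision margin in this little network is 0.4, so a 5% spread never flips an output – the interesting question is how the margins shrink as networks get larger and weights more finely tuned.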

2.) Another potential problem is that research into neural networks generally has diminished (partly due to the so-called AI winter). If people using supercomputers can’t get their simulated neural networks to do anything *that* interesting, why bother building the things in hardware? Such hardware realisations would have far fewer neurons anyway! I guess the answer is that simulating superconducting circuits is still quite hard, and there could be real advantages to building the things – similar to the reasons for building modern ASICs.

3.) A third problem is the device integration level. Even with the best fab facilities available, superconducting circuits can only be made to low-level VLSI (tens of thousands of junctions). Again my point is – well, why not try something on this scale? Unfortunately, cell libraries for RSFQ design probably don’t natively support the kind of realisations you need to build superconducting neurons (for example, you need a great deal of fan-in and fan-out). So you’d probably have to go fully custom, but that’s just a design challenge.

4.) And then there’s a theoretical problem that has been bugging me for a while now. Although you can simulate any level of connectivity in highly abstracted models of NNs (given enough processing power and memory), if you actually want to build one, are you limited by the essentially 2-dimensional, planar nature of the fabrication process? In a 3-dimensionally interconnected system such as a real human brain, distant regions can be connected via UNIQUE, DIRECT links. In a 2D system you are constrained by the circuit layout and can (essentially) only make nearest-neighbour connections. I’m pretty sure there’s a graph-theory result pinging around the edge of my mind here about connectivity in systems of different dimensionality. The question is: does this limitation make it theoretically impossible to build biologically inspired neural networks in planar hardware?
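The graph-theory result lurking here is probably Euler’s bound for planar graphs: a simple planar graph on v vertices can have at most 3v − 6 edges, whereas all-to-all connectivity needs v(v−1)/2 – indeed K5 already violates the bound, which is one half of Kuratowski’s characterisation of planarity. A quick illustration of how fast a strictly planar layout runs out of wiring room:

```python
# Euler's formula forces e <= 3v - 6 for any simple planar graph, while
# all-to-all connectivity needs v*(v-1)/2 links. K5 (v = 5) already
# violates the planar bound: 10 edges needed, at most 9 allowed.

def planar_edge_bound(v):
    """Maximum edges a simple planar graph on v >= 3 vertices can have."""
    return 3 * v - 6

def complete_edges(v):
    """Edges in the complete graph K_v."""
    return v * (v - 1) // 2

# With 10,000 neurons, a planar layout allows ~30k wires, while full
# connectivity would need ~50 million: a factor of well over a thousand.
ratio = complete_edges(10_000) / planar_edge_bound(10_000)
```

So a strictly planar circuit certainly cannot realise arbitrary connectivity directly – though this says nothing yet about what multiplexing, multiple wiring layers, or time-shared links buy you.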

The field of RSFQ / superconducting digital electronics is suffering low funding at the moment from ‘lack of applications’ syndrome. The number of people investigating applications of RSFQ circuits and Josephson logic seems to be much lower than the number of people working on the fundamental Physics of the devices. It’s a problem with the way research is funded: no-one will fund mid-term technology development; it’s either fundamental Physics or applications breakthroughs.

There may well be research being done in this area that I am unaware of, and I would be most intrigued to learn of any progress, and whether there are problems in addition to the four presented here. However, if the research is not being done, why not? And would it be possible to get funding for projects in this area…

I think Chris Altman (of cohærence fame) has done some work in this area. You should talk to him.

Yes, I spoke to him a little bit about this, but you are right, there is much more to be discussed. I’m just throwing it out there to see if anyone I don’t know about stumbles upon it 🙂

I don’t have any direct experience with QNNs, but I recently wrote a paper dealing with the convexity of parallel channels that might have some relevance (it’s on the arXiv).

Re neural circuits, you can build learning algorithms based on solving discrete optimization problems which naturally map to AQC. Industrial-scale learning algorithms are mostly based on boosting or support vector machines, which, like neural nets, are machine learning techniques for classification. Interestingly, it looks as though the global-optimization-based techniques give better accuracy and more compact sets of classifiers, and are faster (!) than the workhorse algorithms Google etc. use to train their classifiers, even with standard software solvers. If this turns out to be correct, it would be orders of magnitude more important than factoring as an application of QC/superconducting electronics. Would you like to take a look at where this is at?
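For readers unfamiliar with the mapping, here is a deliberately tiny sketch of the boosting-as-discrete-optimization idea – data, weak classifiers and the regularisation weight are all invented: choose a 0/1 vector selecting weak classifiers so as to minimise a squared training loss plus a sparsity penalty. Because the loss is quadratic in the binary variables, the objective is exactly a QUBO, the form an adiabatic machine anneals; here it is simply brute-forced classically:

```python
from itertools import product

# Toy boosting-as-QUBO: pick a 0/1 vector w selecting weak classifiers
# h_i to minimise squared training loss + sparsity penalty. The loss is
# quadratic in the binary w_i, so the objective is a QUBO. All data,
# classifiers and the penalty weight are invented for illustration.
samples = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
labels = [-1, 1, 1, 1]                         # an OR-like target
weak = [lambda x: x[0], lambda x: x[1], lambda x: -x[0]]

def qubo_cost(w, lam=0.1):
    loss = 0.0
    for x, y in zip(samples, labels):
        # Normalised vote of the selected weak classifiers.
        pred = sum(wi * h(x) for wi, h in zip(w, weak)) / len(weak)
        loss += (pred - y) ** 2
    return loss + lam * sum(w)  # sparsity penalty, linear in w

# Brute force stands in for the annealer at this size.
best = min(product((0, 1), repeat=len(weak)), key=qubo_cost)
```

At three binary variables this is trivial, but the search space doubles with every added weak classifier, which is where the annealer is supposed to earn its keep.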

I think that AQC is the most promising way forward.

I was going to write a separate post on AQC NNs 🙂 I’m trying to get a grasp of all the potential candidates for the broader picture of (superconducting) quantum neural networks – writing posts on the blog is a great way to make yourself learn about stuff! Specifically, AQC is the only architecture advanced enough to support such networks at the moment. I’m envisaging a new type of hardware configuration that combines AQC, RSFQ and perhaps gate-model elements. I certainly haven’t worked through any of the details yet, but I think a few gate-model qubits could be used to store topological information about neural nets in the temporally varying quantum states of fewer qubits than the number of neurons you are trying to simulate, as investigated theoretically by Behrman et al. (I’ll post some papers on this idea soon, as it’s difficult to explain.) It may be possible to combine these small but powerful ‘elements’ using RSFQ circuitry into a larger AQC framework. But certainly, before launching into any such crazy-complicated ideas, showing stuff working in AQC generally would be a really good way to get started.

In short: I was wondering whether to apply for some money for a research project based on this work. It would require extensive collaboration, but some of the more blue-sky elements (like the experimental gate-model bits) might be easier to fund from the research side than from the commercial side of QC. It would be interesting to hear your opinion 🙂

I agree with physicsandcake that AQC is the best way forward. However, my suggestion for a physical system approach is quite different. Instead of trying to BUILD the thing, exploit the emergent self-organizational properties of the correct auto-catalytic set.

Stuart Kauffman’s book Origins of Order demonstrates how and why a self-organizing neural network is an emergent property of certain complex systems held at the criticality point.

Stuart Kauffman, *The Origins of Order: Self-Organization and Selection in Evolution*, Oxford University Press, 1993. Technical monograph. ISBN 0-19-507951-5

A system composed of anyons (soliton waves) of many frequencies in a 2DEG could form such an autocatalytic set. The resultant emergent neural network would thus be a topological quantum neural network. I expect it would then take several years of evolutionary programming to get useful results. The result to strive for would be a winner-take-all style entanglement/teleportation recurrent topological quantum neural network.

Since a multi-layer neural network can compute XOR (and, built from threshold units, any Boolean function, making it computationally universal), a multi-layer TQNN would also be universal, and could emulate a CNOT gate. Thus, any sort of advanced QNN is also a working quantum computer. This is why progress with QNN technology would generally be classified.
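For what it’s worth, the XOR/CNOT connection only holds on basis states: classically, CNOT maps (c, t) → (c, t ⊕ c), so any network that computes XOR reproduces its truth table, though not its action on superpositions. A minimal threshold-unit sketch:

```python
# Classically, CNOT maps basis states (c, t) -> (c, t XOR c), so a network
# that can compute XOR reproduces CNOT's truth table (though not its
# action on superpositions). Weights here are the standard textbook choice.

def xor_gate(a, b):
    # Minimal 2-layer threshold-unit XOR: (a OR b) AND NOT (a AND b).
    step = lambda x: 1 if x >= 0 else 0
    h_or = step(a + b - 0.5)
    h_and = step(a + b - 1.5)
    return step(h_or - h_and - 0.5)

def cnot(c, t):
    """CNOT on classical basis states: control passes through, target flips."""
    return (c, xor_gate(c, t))

table = [cnot(c, t) for c in (0, 1) for t in (0, 1)]
```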

This author believes this project was already done by the Five Eyes nations, on a top secret basis, starting around 1990 and entering production circa 1996. Still, it would be an excellent academic project to replicate the process, since the results have never been published.

Oh, and the fab process isn’t strictly 2D – you get several layers of metal. There is a limit on connectivity, but if you don’t care about your neurons acting like qubits you can make it quite high, as the neurons can be very large loops which provide inductive ports to large numbers of couplers.

One word young woman: Bayesian

And here’s to you, Mrs. Robinson, Jesus loves you more than you will know, LA-LA-LA, LA-LA-LA

rr: If you tell me how to build a Quantum Bayesian Network out of Josephson junctions and what that would achieve, I’d be more than happy to do so. But I’m afraid I don’t quite understand how the word alone can help me.

Quantum Bayesian networks are a different way of writing qubit circuits, so let’s just talk here about qubit circuits and *classical* Bayesian nets. If you can build a qubit circuit (gate model) quantum computer with Josephson junctions, then I can tell you how to write a qubit circuit for any given *classical* Bayesian network, such that this qubit circuit allows you to sample that classical Bayesian network. Why is this useful? Sampling classical Bayesian networks is an important part of AI. For example, Google samples Bayesian Networks to correct your spelling, and Autonomy, the largest software company in the UK, uses Bayesian methods to do data searches. Of course, one could also use a classical computer made out of Josephson junctions to do Bayesian network calculations, but I believe that the quantum computer version will give you a time complexity advantage over the classical computer.
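For concreteness, here is the classical task being discussed – ancestral sampling of a (made-up, two-node) Bayesian network. The quantum version would encode these conditional probabilities into a qubit circuit instead of drawing pseudo-random numbers:

```python
import random

# Minimal classical Bayesian network: Rain -> WetGrass, sampled by
# ancestral sampling (parents first, children conditioned on them).
# All probabilities are invented for illustration.
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}  # P(WetGrass | Rain)

def sample():
    rain = random.random() < P_RAIN
    wet = random.random() < P_WET_GIVEN_RAIN[rain]
    return rain, wet

random.seed(0)
draws = [sample() for _ in range(100_000)]
p_wet = sum(wet for _, wet in draws) / len(draws)
# Exact marginal: 0.2 * 0.9 + 0.8 * 0.1 = 0.26
```

Real networks have many nodes and tangled dependencies, which is what makes sampling expensive enough to be worth accelerating.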

rr: Thanks for the clarification. How many gate-model qubits (minimum) do you think you would need to be able to implement something useful? What level of entanglement would you need between them?

Well, if you need a theorist, let me know. Otherwise I’m probably useless (as many theorists are 🙂 ).

Theorists are always useful, they come up with cool ideas for experiments! 😉 Seriously, I’ll let you know if I get more heavily involved in this area and get any projects running.

This blog entry was very high in the Google search results related to:

http://arxiv.org/abs/1002.2892

[…] Quantum Neural Networks 1 – the Superconducting Neuron model […]
