Josephson junction neurons

This is an interesting paper:

Josephson junction simulation of neurons

by Patrick Crotty, Daniel Schult, Ken Segall (Colgate University)

“With the goal of understanding the intricate behavior and dynamics of collections of neurons, we present superconducting circuits containing Josephson junctions that model biologically realistic neurons. These “Josephson junction neurons” reproduce many characteristic behaviors of biological neurons such as action potentials, refractory periods, and firing thresholds. They can be coupled together in ways that mimic electrical and chemical synapses. Using existing fabrication technologies, large interconnected networks of Josephson junction neurons would operate fully in parallel. They would be orders of magnitude faster than both traditional computer simulations and biological neural networks. Josephson junction neurons provide a new tool for exploring long-term large-scale dynamics for networks of neurons.”

Advantages of using RSFQ-style architectures include the non-linear response of the elements and their analogue processing capability, which means you can implement more ‘logical’ neurons with fewer ‘physical’ elements. I’m pretty sure that this is true. In addition, you can think of other wonderful ideas such as using SQUIDs instead of single junctions (hmm, I wonder where this train of thought might lead) and then applying non-local (or global) magnetic fields to adjust the properties of the neural net, which might be a bit like adjusting the global level of a particular neurotransmitter.
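As a very rough illustration of why even a single Josephson junction behaves like a spiking element, here is a minimal numerical sketch of the standard RCSJ (resistively and capacitively shunted junction) model in dimensionless units. Note this is not the two-junction circuit from the paper, and the bias, damping and drive values are invented purely for illustration:

    # Minimal sketch: a current-biased Josephson junction in the dimensionless RCSJ model,
    #   beta_c * d2(phi)/dt2 + d(phi)/dt + sin(phi) = i_bias + i_input.
    # Not the two-junction 'JJ neuron' circuit from the paper; parameter values are invented.
    import numpy as np

    def rcsj_step(phi, dphi, i_drive, beta_c=0.5, dt=0.01):
        """One Euler step of the dimensionless RCSJ equation."""
        d2phi = (i_drive - dphi - np.sin(phi)) / beta_c
        return phi + dphi * dt, dphi + d2phi * dt

    phi, dphi = 0.0, 0.0
    i_bias = 0.9                      # statically biased just below threshold (i = I/I_c = 1)
    voltages = []
    for n in range(20000):
        # a brief 'synaptic' input pushes the total drive above threshold
        i_in = 0.3 if 5000 <= n < 5200 else 0.0
        phi, dphi = rcsj_step(phi, dphi, i_bias + i_in)
        voltages.append(dphi)         # d(phi)/dt is proportional to the junction voltage

    # While driven above threshold the phase slips by 2*pi repeatedly, emitting a train
    # of voltage (SFQ-like) pulses: the action-potential analogue.
    print("peak dimensionless voltage:", max(voltages))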

I’m a bit worried about this approach though. Current superconducting technologies tend to have a low number of wiring layers (<5), and as such are pretty much a two-dimensional, planar technology. The maximum tiling connectivity you can get from a single-layer planar architecture is presumably a 6-nearest-neighbour unit cell (hexagonal tiling). The three-dimensional packing in a real brain gives you a higher intrinsic level of connectivity, even though the structure of the neocortex is only quasi-3-dimensional (it is more like 2D sheets crumpled up, but even these '2D' sheets have a fair amount of 3D connectivity when you look closely). In a real brain, each neuron can have tens of thousands of differently weighted inputs (the fan-in problem). Try building that into your mostly-planar circuit 🙂
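To put back-of-envelope numbers on the wiring problem (the neuron count and fan-in below are just representative figures, not taken from the paper):

    # Back-of-envelope wiring counts; all numbers are illustrative assumptions.
    n_neurons = 20_000              # roughly the RSFQ integration level discussed below

    # Planar nearest-neighbour tiling: each neuron wires to ~6 neighbours (hexagonal tiling),
    # so the number of distinct physical connections is about 6N/2.
    planar_connections = 6 * n_neurons // 2

    # Cortex-like connectivity: each neuron receives on the order of 10^4 inputs,
    # so a comparable network needs ~ N * 10^4 connections.
    bio_fan_in = 10_000
    biological_connections = n_neurons * bio_fan_in

    print(f"planar (6-neighbour) wires:     {planar_connections:.1e}")
    print(f"brain-like (fan-in 10^4) wires: {biological_connections:.1e}")
    print(f"shortfall factor: {biological_connections / planar_connections:.0f}x")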

One good thing about using analogue methods is that not all the neurons need to be identical. In fact, having a parameter spread in this massively parallel architecture probably doesn't hurt you at all (it might even help), which is good, as current Josephson junction foundries have issues with parameter spreads in the resulting digital circuitry (they are nowhere near as tightly controlled as semiconductor foundries).
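A crude way to see why a modest spread might be tolerable (the 3% critical-current spread and the bias point below are assumptions for illustration, not foundry data): in the single-junction picture a neuron fires when its total drive exceeds its critical current, so a small spread in critical currents merely smears the firing threshold.

    # Toy estimate of how a spread in junction critical currents shifts firing thresholds.
    # The 3% spread and the bias/pulse levels are illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_junctions = 20_000
    spread = 0.03                               # fractional 1-sigma spread in critical current
    i_c = rng.normal(1.0, spread, n_junctions)  # critical currents, normalised to nominal

    i_bias = 0.9        # static bias, in units of the *nominal* critical current
    i_pulse = 0.3       # extra drive delivered by an input pulse

    fires_on_input = np.mean(i_bias + i_pulse > i_c)   # should be ~1 (fires when stimulated)
    fires_at_rest = np.mean(i_bias > i_c)               # should be ~0 (quiet at rest)

    print(f"fraction firing on input: {fires_on_input:.4f}")
    print(f"fraction firing at rest:  {fires_at_rest:.4f}")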

The paper claims that the tens of thousands of neurons in a neocortical column might be simulable using this method. Personally, I think this is very optimistic with present large-scale-integration JJ technology… but even considering the connectivity, parameter-spread and fan-in problems, I think this is a very interesting area to investigate experimentally.

I’ve actually written a bit about this topic before:

Quantum Neural Networks 1 – the Superconducting Neuron model

In that blog post there were some links to experiments performed on simple Josephson junction neuron circuits in the 1990s.

10 thoughts on “Josephson junction neurons”

  1. quantummoxie says:

    I’ve actually met Ken. A few years ago I gave a talk at Colgate and spent some time talking to him about Josephson junctions. This is pretty cool.

  2. null says:

    In the paper the authors write:
    “We show speeds for [..] *dense (all neurons connected to all others)* networks ..

    Model | FLOPS/AP | N = 1 | N = 1000 (sparse) | N = 1000 (dense)
    JJ Neuron | — | 2.0×10^10 | 2.0×10^10 | 2.0×10^10

    Connections have no speed impact on [..] JJ Neuron models as they are naturally parallel”
    And
    “Based on similar circuits constructed for RSFQ circuits [6], network simulations of 20,000 *densely coupled* neurons are reasonable and could simulate one trillion APs for each of these neurons in a few minutes”

    How might the 2.0×10^10 for N=1000 (dense) or more be realized with current tech?

    And how does that reconcile with
    “The maximum tiling connectivity you can get from a single layer planar architecture is presumably 6 nearest neighbour unit cell” ?

    • physicsandcake says:

      The phrase ‘densely coupled’ is not really well defined: the paper doesn’t state the connectivity of the neural circuit that would be desired (note nothing has actually been built here). The 20,000 just comes from the current integration level of RSFQ circuits, i.e. the number of Josephson junction elements. You can increase connectivity locally by using more JJs per logical neuron, but I don’t see how you can do it over a large area.

      I suspect here ‘densely coupled’ means all nearest-neighbour connections are active, although I may be wrong.

      An alternative approach would be to ‘virtually’ connect the neurons by shuttling SFQ pulses (or whatever) around the chip and then loading them into a different part of the circuit which has flux storage and some form of integrator as a ‘pre-processing element’ before the final signal enters the neuron itself. At present this is probably the ONLY way you could virtually couple each neuron to 10,000 others (maybe).

      But this requires some serious circuit-complexity overheads, and you’d lose any speed-up you had envisaged very quickly. It also abstracts the computation away from the bio-inspired ‘1 physical connection = 1 logical connection’ picture which makes this approach so appealing.
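      As a very rough sketch of that overhead (every number below is an assumption for illustration, nothing is taken from the paper): if all the synaptic events of a densely coupled network have to be serialised through a shared on-chip link, the required event rate scales like N × fan-in × firing rate, and the nominal per-neuron speed disappears quickly.

        # Toy estimate of the cost of 'virtually' coupling neurons through a shared link.
        # The firing rate, link bandwidth and neuron count are illustrative assumptions.
        n_neurons = 1_000
        fan_in = n_neurons                  # all-to-all ('dense') coupling
        spike_rate_hz = 1e10                # nominal intrinsic JJ neuron firing rate
        link_events_per_s = 1e11            # assumed shared-bus event bandwidth

        # every spike must be delivered to fan_in targets through the shared link
        events_per_s_needed = n_neurons * spike_rate_hz * fan_in
        slowdown = events_per_s_needed / link_events_per_s

        print(f"events/s needed: {events_per_s_needed:.1e}")
        print(f"slowdown relative to native firing rate: {slowdown:.1e}x")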

      I think that trying to demonstrate highly connected circuits at the limit of the current level of RSFQ integration would be difficult and pointless. Instead, the small circuits – single neuron up to maybe tens of connected neurons – should be studied carefully and compared with in-vitro experiments, to see if there really IS any qualitative improvement over conventional simulations.

      Perhaps even more interesting would be getting the two experiments to talk to one another – but that’s another post entirely 😉

  3. rrtucci says:

    If this NN is going to work as a classical computer, then I don’t see why using JJs to model the neurons is necessary or advantageous.

    • physicsandcake says:

      Where is your sense of scientific curiosity??!! j/k.

      The answer I would give to this is along the lines of ‘it would be a special-purpose analogue computer’. If the JJs accurately model several features of (analogue) biological neurons natively, then they will be rather more computationally efficient than their universal digital counterparts. (Some neuron models have up to 21 variables, for example the Izhikevich or Hodgkin-Huxley models.) Keeping track of ~20 variables per neuron at arbitrary bit-precision is going to have a somewhat astronomical overhead.
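      To make the digital overhead concrete, here is the per-timestep update for the (two-variable) Izhikevich model, the simplest of the models mentioned above; a conductance-based model like Hodgkin-Huxley multiplies this work several times over. Every multiply-add here is explicit work a digital simulator must do per neuron per step, whereas the analogue circuit just evolves. (The parameter values are the standard published regular-spiking ones; the driving current and step count are arbitrary.)

        # Toy per-timestep update of the Izhikevich neuron model.
        def izhikevich_step(v, u, i_syn, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_syn)
            u += dt * a * (b * v - u)
            if v >= 30.0:           # spike: reset membrane potential and recovery variable
                v, u = c, u + d
            return v, u

        # e.g. one neuron driven by a constant input current for 1000 steps (~500 ms)
        v, u, spikes = -65.0, -13.0, 0
        for _ in range(1000):
            v, u = izhikevich_step(v, u, i_syn=10.0)
            if v == -65.0:          # the neuron was just reset, i.e. it spiked this step
                spikes += 1
        print("spikes in 1000 steps:", spikes)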

      Of course we might find that we don’t need such a complex model, but at the moment we just don’t know!

      BlueBrain may be able to simulate cortical columns at this level of neuron complexity, but just look at the computer it uses! 10,000 neurons and 10^8 synapses running on BlueGene uses ~120 kW of power. Because our brain does this *so* much more efficiently, I wonder if we should explore other computational and hardware possibilities rather than just throwing digital von Neumann machines (or variations on this theme) at everything.

      Reversible superconducting logic would be a start. And if projects like JJ-Neuron were properly financed, they could provide an incentive to drive JJ technology into a new regime where it might even become competitive with silicon-based systems 🙂

      And if the brain turns out to be using QM at a level that is relevant to its computation (unlikely but not impossible), then at least we’ll already have an architecture that is compatible with QC (or at least superconducting qubits).

    • deng says:

      It is advantageous because Josephson junctions oscillate at GHz-range frequencies. What would take months to simulate on a computer would take seconds or minutes to measure in the lab.

  4. null says:

    In the target paper the authors write: “Izhikevich 5.0×10^2 [N=1000 (dense); at 10^9 flops/sec and JJ at] 2.0×10^10 [N=1000 (dense)]”
    The 2009 ACM Gordon Bell Prize was given for a supercomputer simulation of a neural network at a scale of about 10^7 AP/sec (see “The Cat is Out of the Bag: Cortical Simulations with 10^9 Neurons, 10^13 Synapses”). Although the two are very different in many respects, one wonders what kind of modifications would be needed to bring them closer together, giving a 3 to 6 orders-of-magnitude increase in speed to the supercomputer simulation (qualitative changes might result at this scale!)

  5. Jasper says:

    what is diamagnetism?

    • physicsandcake says:

      Diamagnetism occurs when a material brought close to a magnet generates an opposite magnetic field inside and around it. This is different to a ferromagnet or a paramagnet, where the material tries to align with (have the same direction as) the nearby magnet.

      An (ideal) superconductor can be thought of as a perfect diamagnet as it can create a magnetic field which perfectly opposes one from a magnet nearby. This is one of the reasons why you can use superconductors to levitate magnets.

      Frogs and other small animals are actually slightly diamagnetic, so with a strong enough magnetic field you can get them to levitate too 🙂 The superconductor works much better though; you only need a small field nearby.
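      For a rough sense of what ‘strong enough’ means, here is the back-of-envelope levitation condition rho * g = (|chi| / mu_0) * B * dB/dz applied to water (a frog is mostly water); the susceptibility value is the standard textbook figure:

        # Field-gradient product needed to levitate water against gravity.
        # Levitation condition: rho * g = (|chi| / mu_0) * B * dB/dz
        mu_0 = 4e-7 * 3.141592653589793   # vacuum permeability, T*m/A
        chi_water = 9.0e-6                # magnitude of water's SI volume susceptibility
        rho = 1000.0                      # density of water, kg/m^3
        g = 9.81                          # gravitational acceleration, m/s^2

        b_dbdz = mu_0 * rho * g / chi_water
        print(f"required B * dB/dz ~ {b_dbdz:.0f} T^2/m")
        # ~1400 T^2/m, which is why the famous levitating-frog experiment needed a ~16 T magnet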

      http://en.wikipedia.org/wiki/Diamagnetism
