Bio-inspired massively parallel computation

I attended a Computer Science seminar yesterday given by Professor Steve Furber (see here also) of Manchester University. He helped design the BBC Micro and the ARM microprocessor, which can be found in many mobile phones and other portable devices.

The seminar was about taking inspiration from the architecture of the brain to help design new types of parallel processors. The brain is very efficient in terms of energy used per computation (better by a factor of about a million than current microprocessors). Furber argued that transistors are now so cheap that they can be considered an unlimited resource, and that the ‘cost’ of increasing processing power is now limited by the energy cost per computation. The brain is also exceptionally fault tolerant considering its number of neurons (~10^11–10^12); it is adaptive, and it uses no ‘high-performance’ parts – electrically, at least, everything operates below 100 Hz, with low transmission speeds between neurons.

The model that the team is working on, the SpiNNaker project, imitates the firing of a ‘data packet’ through a highly connected array of cores, in the same way that a firing neuron transmits a signal by alerting its nearest-neighbour neurons to the fact that it has fired. If a packet encounters a broken link, it can be rerouted. The model is similar to brain architecture in its high connectivity between a huge number of simple processing components. The SpiNNaker project hopes to simulate the behaviour of about a billion ‘neurons’ (1% of the human brain, or a whole rat brain!).
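The rerouting idea can be sketched in a few lines. This is purely illustrative (the real SpiNNaker router uses table-driven multicast routing in hardware, not a search like this): each core knows only its neighbours, and a packet finds an alternative path when a link is marked broken.

```python
from collections import deque

def route_packet(links, source, target, broken=frozenset()):
    """Breadth-first search for a path from source core to target core,
    skipping any links marked as broken. Illustrative sketch only --
    not how the actual SpiNNaker router works."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for neighbour in links.get(node, []):
            edge = frozenset((node, neighbour))
            if neighbour not in visited and edge not in broken:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no route left at all

# A tiny mesh of four cores; each entry lists a core's neighbours.
links = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(route_packet(links, 0, 3))                              # via core 1
print(route_packet(links, 0, 3, broken={frozenset((1, 3))}))  # rerouted via core 2
```

The point is the fault tolerance: losing the 1–3 link degrades nothing, the packet simply takes the 0–2–3 route instead.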

Another argument was that synchronous algorithms and architectures are not a natural way of computing solutions to asynchronous physical systems, e.g. interacting particles. In the SpiNNaker project, most of the signal transmission (‘neural spiking’) is done asynchronously. Admittedly, the team then does have a lot of problems getting their simulations to ‘talk’ to conventional buses, peripheral ROM, etc.

The brain

It has been found that the underlying architecture of the brain is similar across the whole cortical structure – there is no specialised architecture for different types of task, even though different ‘regions’ within the brain do tend to be adopted for different tasks. It seems over-engineered 🙂 This suggests that if a highly connected ‘building block’ can be scaled up enough, it should potentially mimic the brain, which is exceptionally good at performing high-level tasks, e.g. language processing, pattern matching, image recognition, edge detection, etc.

Massively parallel architectures such as the brain are good at these tasks, but (with a few exceptions) are not so good at raw data-crunching (multiply/accumulate, etc.). I wonder if this is because the architecture fundamentally does not support this kind of calculation very well, or just that the algorithms which convert input signals into computational tasks are not very well developed in the brain for this type of problem. We have no evolutionary need at present to perform large series sums or matrix multiplications to survive.

Ponderings…

I wonder if it is possible to reprogram the brain (artificially) so that it WOULD be good at these low-level problems? Would doing so be detrimental to the higher-level functionality? I guess the question can be summarised as: is the type of problem that can be solved efficiently dependent on the architecture in this case?

I don’t know much about how the brain performs numerical calculations, although I suspect the efficiency of the algorithms our brain uses for such calculations is very low. Conversely, we aren’t very good at writing algorithms to run on parallel architectures… but apparently the brain is!

In a nicely symbiotic way, maybe we can learn something about how the brain works by modelling this class of architecture, and by learning more about the brain, we can understand how to better build massively parallel processors (and how to program them).

I was slightly disappointed that the seminar made no mention of other models/methods of massively parallel computation (hint, quantum computing, hint), and there seemed to be some confusion amongst the audience between a fundamentally deterministic problem and a non-deterministic one in the sense of modelling real-world physics, but overall I really enjoyed the discussions. Exciting stuff 🙂 I did want to ask the speaker whether he believed that the brain behaves as a Turing machine (presumably he does, for his project to work), but I didn’t get the chance…

Anyway, as this isn’t really my area, I suggest reading more about the project here.
