Blue Brain project progress

Here is a documentary made by Noah Hutton about EPFL’s Blue Brain Project:

The Beautiful Brain – A Documentary

The Blue Brain project is an immense undertaking to simulate the neuronal activity within an entire human brain. The project is also looking at the brains of smaller mammals along the way, such as mice and cats.

I’m very excited about this project. Even if it doesn’t yield fruitful ‘simulated’ behaviour in the first instance, I think that it will be invaluable as a resource for other projects. I feel it’s a bit like a connectome project. The map will be useful, even though it still won’t tell us the best way to get from A to B.

The worry I have about systems like this is the absence of interaction with a real environment. I think a brain simulation would need a lot of information from an environment similar to our own for it to even have a chance of producing ‘human-like’ patterns of behaviour. Additionally, a large proportion of the brain is dedicated to the regulatory biology of the human body and to processing sensory input. What happens to all of these inputs, for example hormonal regulation and the immune system? Are they just to be left open circuit?

The documentary talked about how the plan was to use the simulation to control a virtual mouse or rat body in an artificial environment. In addition to the obvious motor control and sensory inputs, what other parts of the rodent biology will have to go into this simulation?

I guess the question that arises is: Will Blue Brain be given a Blue Body? And what kind of body would be suitable? As far as the senses go, is it easier just to interface with the real world rather than try and simulate an entirely virtual world with exquisitely controlled feedback just to provide the correct inputs for all the I/O systems? In other words, perhaps the simulated brain should be built into a cybernetic organism.

Such an organism would have a completely different ‘biology’ (in fact it wouldn’t be biological at all, but the brain would still need to control complex systems in such an entity) and therefore the brain simulation would have to be grossly hacked to make it compatible. However, this is both more ethical and somewhat easier than, say, cloning an organism without a brain and developing an entire Brain/Body-Computer-Interface just so it can be controlled by the simulation.

The other main thought I have on this is that to simulate a real brain of any kind the structure would have to change in response to new inputs (i.e. learn and form new memories by growing new connections). I wonder if this capability could be built into the simulation. I imagine it must be, otherwise you’d just end up with a purely reactive system, something more akin to a lizard brain with very little adaptive neocortical component.
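
As a rough illustration of the kind of mechanism I have in mind (nothing to do with Blue Brain’s actual code; the rule and numbers below are just a toy I made up), here is a minimal Hebbian-style sketch in which connections between units that are repeatedly active together get stronger, while unused connections slowly decay:

```python
import numpy as np

def hebbian_update(weights, activity, lr=0.01, decay=0.001):
    """Toy plasticity rule: strengthen connections between co-active
    units and let all weights slowly decay so unused ones fade away."""
    weights = weights + lr * np.outer(activity, activity)  # co-activity strengthens links
    weights = weights * (1.0 - decay)                       # gentle global decay (pruning)
    np.fill_diagonal(weights, 0.0)                          # no self-connections
    return weights

# Drive a 5-unit toy network with the same input pattern many times
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(5, 5))
pattern = np.array([1.0, 1.0, 0.0, 0.0, 1.0])
for _ in range(200):
    w = hebbian_update(w, pattern)

print(np.round(w, 2))  # connections among units 0, 1 and 4 end up strongest
```

Without something of this sort running continuously, the wiring is frozen, which is exactly the purely reactive, snapshot-like system I am worried about.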

I wonder how an organism would function if a ‘snapshot’ of its brain (including a neocortical component) was run as a simulation, but it was not able to grow any new connections. How would the organism behave? Would it work at all? Is this similar to any existing disabling conditions in humans? Presumably it would not be able to learn, or form new memories.

Any thoughts appreciated.


8 thoughts on “Blue Brain project progress”

  1. Ben Zaiboc says:

    I’ve always had some misgivings about this project, and I think you’ve hit the nail on the head talking about the interface to a body. I’ve never seen any reference to this issue. I’m not sure about the adaptiveness of the system, whether Blue Brain is incorporating mechanisms that mimic the formation and pruning of synaptic connections or whether it’s a static snapshot. Presumably the former, as it’s supposed to be aiming at a “simulation of the entire human brain, down to the molecular level”.

    I’d have thought it would be much more sensible and fruitful to simulate a simple organism’s CNS to start off with, something like a nematode, so you can more easily create a virtual or robotic body, and use that model to sort out the problems you mention, then once that works ok, move on up the phylogenetic tree. By the time you’ve got to reptile brains, I’d think you could then much more confidently start to address the higher cognitive machinery with a solid base of already-working limbic systems.

    Of course, maybe their brute-force approach will be quicker, but I can’t help but think, even if they have the body problem in hand, if they can do what they say within 10 years (5 years now), what they’ll get will be a baby’s brain.

    It’s not going to suddenly start talking about the pros and cons of manned space exploration, or the political subtext of Renoir’s paintings; it’s going to be saying “goo-goo”, and nobody will actually know if it’s worked or not.

    It’ll need another 16 years of nappy changing, keeping away from the stairs, learning why riding the doggy is not a good idea, asking if horses go to heaven, gazing out of the window during maths lessons, making itself sick with its first cigarette, and being told “I don’t care what your friends are wearing, you’re not going out in that dress, young lady” before anyone will know if it’s worked.

  2. Ben Zaiboc says:

    Just watched the video, and it’s 10 years from now, not from the start, and it’s meant to be a diagnostic and modelling tool, not an artificial mind.
    So most of my previous comment is irrelevant.

    • Ben, your previous comment IS relevant. The brain that they are building should eventually behave just like a normal human brain; it should have self-consciousness and feelings. It WILL be an artificial mind, with REAL artificial intelligence.

      OK, so they want to use it as a diagnostic and modelling tool, but as Markram himself has said more than once, if this brain starts feeling pain, or asks “Please don’t do these experiments on me, I don’t like it”, then they’ll have a serious ethical problem and will have to decide what to do about it.

      About the long learning process: actually, when the model is ready they should be able to load into it detailed data scanned from a grown-up person’s brain, so when they turn it on and run it, it should behave just like that grown-up person (and should have all of that person’s memories and personality).

      Check the links that I just posted to read more.

  3. Ben Zaiboc says:

    Aargh! This is what’s wrong with science journalism:

    (from Seed Magazine)
    “The simulation is getting to the point,” Schürmann says, “where it gives us better results than an actual experiment. We get the same data, but with less noise and human error.” The model, in other words, has exceeded its own inputs. The virtual neurons are more real than reality.

    Those last 2 sentences are not only superfluous, they’re actual lies. I understand the need to hype things up, but the hype too often takes on a life of its own and eclipses the facts, giving people a totally false impression.

    • Brain 2010 says:

      Hi Ben, I don’t know why you are so angry. If you look at the whole picture, I think they are doing a great job on the project; they are doing something that no one has done before them, certainly not at this level of detail and precision.

      I don’t know exactly what they meant and I don’t want to speak for them (you can send them an email and ask), but as far as I know, when you measure voltage in real life you get a lot of noise with the measurement: instead of 24.00V you will get 24.007543, or 23.9997842, and so on. But if you simulate the same electrical circuit on your PC, then at the same test point you will get a clean, sharp 24.00000V without any noise.

      If you want noise on this clean measurement, it’s very easy to add random noise on top of it, so you get the same kind of data that you got from the real circuit.
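
      Just to make that concrete, here is a toy sketch (not anything from the actual Blue Brain code, just an illustration of the point): a simulated measurement comes out perfectly clean, and Gaussian noise can be added afterwards so that it looks like a real instrument reading.

      ```python
      import numpy as np

      rng = np.random.default_rng(42)
      true_voltage = 24.0                          # what an ideal simulation reports

      # A "real" measurement: the same value plus instrument noise
      real = true_voltage + rng.normal(0.0, 0.005, size=5)
      print(real)                                  # e.g. 24.0015..., 23.9948..., never exactly 24.0

      # To make simulated output look like lab data, add the noise on top afterwards
      simulated = np.full(5, true_voltage)
      print(simulated + rng.normal(0.0, 0.005, size=5))
      ```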

      By the way, did you see the two lectures by Markram? I think they are really good.

  4. Mike says:

    It’s odd to think that many of the hardwired bits of human psychology and behaviour might someday be reducible to a set of connection-oriented invariants in the neural networks of our juicy brains (with most probably being aggregates). An intriguing consideration is whether the evolved graphs of the brain are optimal for the tasks they perform; a set of simpler, functionally equivalent (or moderately reduced) graphs might get the job done every bit as well, especially in non-biological systems where you could justify the reduction in redundant connectivity. Analyzing the modes of failure would certainly be interesting: would there be situations where breaking a single trace, or a random cosmic ray strike, results in a cascade that eliminates empathy, or turns your ambulatory easy-bake oven into a homicidal variant of the Bagger 288?

    Obligatory video of the Bagger 288 in action:

    All joking aside, I suppose a pre-flight checklist for a simulation could be relatively short, but I think almost every single requirement would lie beyond the capabilities of present knowledge and technology (again, only a matter of time, funding and effort).

    My rather amateurish checklist would run as follows:

    1) develop the ability to copy (or at least satisfactorily approximate) the topology and state of an extant human neural network (brain)

    2) exhaustively enumerate *all* of the standard inputs that our particular network has evolved to handle (sensory as well as biochemical) and the range over which those inputs vary (to the extent possible); see the toy sketch after this list

    3) exhaustively enumerate *all* of the relationships among inputs (with rigorous descriptions of state flow)

    4) supply (or empirically reverse-engineer) an all-systems-nominal set of minimal inputs that would allow us to boot the brain to a state of being conscious, but in sensory deprivation mode (to minimize our computational burden)

    5) later, develop a very basic five-senses plan to create a minimal, interactive environment for longer duration studies
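
    As a toy sketch of what items 2) and 4) might look like in practice (every name, unit and range below is an invented placeholder, not real physiology), each input channel could carry its normal range plus a quiet ‘nominal’ value to hold it at while booting in sensory-deprivation mode:

    ```python
    from dataclasses import dataclass

    @dataclass
    class InputChannel:
        """One input the simulated brain expects: units, the range it
        normally varies over, and a quiet value for deprivation-mode boot."""
        name: str
        units: str
        normal_range: tuple   # (low, high), illustrative numbers only
        nominal: float

    # A tiny, obviously incomplete slice of the full enumeration (item 2)
    channels = [
        InputChannel("core_temperature", "degC", (36.0, 38.0), 37.0),
        InputChannel("blood_glucose", "mmol/L", (4.0, 7.0), 5.0),
        InputChannel("ambient_light", "lux", (0.0, 100000.0), 0.0),   # darkness
        InputChannel("ambient_sound", "dB SPL", (0.0, 120.0), 0.0),   # silence
    ]

    # The all-systems-nominal boot set (item 4): everything held at its quiet value
    boot_inputs = {c.name: c.nominal for c in channels}
    print(boot_inputs)
    ```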

    For humans, I’ve often imagined that a nice, non-disruptive environment for a simulated person (that, for whatever reason, was not briefed beforehand that they were about to be placed in a simulation) would be to have them gradually awaken in a small, comfortable bedroom with some type of narrative calmly explaining where they are, why they are there, what a simulation is and what the limitations will be, etc. Of course, a multi-week briefing and a well-rehearsed set of protocols would be preferable, but it pays to be flexible. In addition, I think I would give all of the simulated people a small device with a single button on it, and explain that if they ever felt scared or out of control, they could press the button to automatically return to the (perceived) safety of the environment where they first awoke. Given that the behaviour of objects and environments in the simulation will vary considerably from the earlier cumulative experiences of the simulated person (accustomed to the fidelity and variability present in the real world), they will probably need quite a bit of reassurance that the simulated world generally functions according to their expectations.

    Some differences to consider (that will probably be immediately apparent to an individual within the sim):

    1) the way textures feel may not match their appearance (to limit the complexity of the environment, you may have to make use of haptic archetypes–e.g. one paper texture for all instances of paper, one or two cloth textures, one wood texture, one skin texture, etc.)

    2) real world objects do not have bounding boxes or view-dependent rendering artifacts; real object edges tend to be slightly irregular without geometrically precise straight lines (noise could be added, of course) and real curves appear infinitely smooth without tessellation…

    3) the heft of objects would likely seem skewed (density and materials distribution in real objects compared to a nicely-skinned polyhedron of some average density–for instance, a CRT television or a microwave oven)

    4) the absence of unique scents or tastes; most objects, by default, would probably not have a scent assigned to them (or generic defaults based on the material type); lol, there’s only one type of cake (in the simulated universe), I hope you like it… 🙂

    5) the simulated body would likely feel very, very different from a normal one–esp. if artificial limitations have been introduced for the sake of simulability (skeletal range of motion and speed of movement, internal force distribution and compressibility, binocular vision, directional hearing, etc.)

    I guess these comparisons could probably go on forever. It would be a very different regime, without a doubt. I wonder if the simulated people could eventually develop an experiment to detect whether you have externally paused their simulation (i.e. halted the passage of time, an event normally invisible to them). Good food for thought…

    Cheers,
    Mike

  5. con says:

    Responding to the original discussion (even if Blue Brain is nowadays more about modelling), I would think that the obvious way to experimentally test this (assuming now it would be an A.I. construct) would be to somehow replace a rat neocortical column with it and see how that works. Theoretically that should be doable, n’est-ce pas?

    This just leaves us with a baffling question: “is there some kind of interpretation going on in the brain, and how on earth does that happen?” Say you’d replace every single neuron in the brain with a device that would deal with the inputs and outputs of said neurons (neurotransmitters being biological components, I’m not quite sure myself what this would even mean); should we expect that the brain could still function?

    I’m just afraid that some day, when we’ve finally built what we think is a “sufficiently detailed copy of the human brain”, all we will gain is the realization that we have grossly underestimated what it takes for phenomena like consciousness to emerge. As long as we have no clue (really) how consciousness emerges, one might not want to start building things; otherwise you’ll just end up with a Frankenstein. I think we’d better just keep experimenting with the phenomenon to get an understanding of what it is we want to be able to reproduce.
