I’ll be blogging over at ‘Hack The Multiverse’!

So what’s with the hiatus?

My blogging life has been split in two! From now on I’ll mostly be blogging over at Hack The Multiverse – the official D-Wave blog (formerly rose.blog) – about the fun I’ve been having programming quantum computers. You might want to check out the latest posts there, as there will be a fair amount of activity going on! I’m going to reserve P&C for discussing other things (probably mostly to do with AI, AGI, etc.)

New scheduling for ‘Thinking about the hardware of thinking’

I was scheduled to give a live virtual seminar, streamed to the Transvision conference in Italy on October 23rd. Unfortunately I was not able to deliver the presentation due to technical problems at the conference venue.

But the good news is, I will be giving the talk this weekend instead!

Here is the abstract (slightly updated, as the talk will be a little longer than originally planned):

Thinking about the hardware of thinking:
Can disruptive technologies help us achieve uploading?

S. Gildert,
Teleplace, 28th November 2010
10am PST (1pm EST, 6pm UK, 7pm continental EU).

We are surrounded by devices that rely on general purpose silicon processors, which are mostly very similar in terms of their design. But is this the only possibility? As we begin to run larger and more brain-like emulations, will our current methods of simulating neural networks be enough, even in principle? Why does the brain, with 100 billion neurons, consume less than 30W of power, whilst our attempts to simulate tens of thousands of neurons (for example in the Blue Brain Project) consume tens of kW? As we wish to run computations faster and more efficiently, we might need to consider whether the design of the hardware that we all take for granted is optimal. In this presentation I will discuss the recent return to a focus upon co-design – that is, designing specialized software algorithms running on specialized hardware – and how this approach may help us create much more powerful applications in the future. As an example, I will discuss some possible ways of running AI algorithms on novel forms of computer hardware, such as superconducting quantum computing processors. These behave entirely differently to our current silicon chips, and help to emphasize just how important disruptive technologies may be to our attempts to build intelligent machines.
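As a rough aside, the power figures quoted in the abstract work out to an enormous gap per neuron. Here is a back-of-envelope sketch (in Python, using only the order-of-magnitude numbers above, so don’t take the exact output too seriously):

```python
# Back-of-envelope comparison of power consumed per neuron, using the
# order-of-magnitude figures from the abstract (illustrative only).

brain_neurons = 100e9   # ~100 billion neurons in the human brain
brain_power_w = 30.0    # the brain runs on less than ~30 W

sim_neurons = 5e4       # 'tens of thousands' of simulated neurons
sim_power_w = 5e4       # 'tens of kW' of supercomputer power

brain_per_neuron = brain_power_w / brain_neurons
sim_per_neuron = sim_power_w / sim_neurons

print(f"Brain:      {brain_per_neuron:.1e} W per neuron")
print(f"Simulation: {sim_per_neuron:.1e} W per neuron")
print(f"Gap:        {sim_per_neuron / brain_per_neuron:.0e}x")
```

That is a gap of roughly nine to ten orders of magnitude, which is exactly the sort of number that should make us suspect the hardware, and not just the software.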

Here is a link to the Teleplace announcement.

Hope to see you there!

The ‘observer with a hammer’ effect

Here is another short essay about quantum mechanics-related stuff. It’s a very high level essay, so any practising quantum physicists probably shouldn’t read it 😉 It is more aimed at a general audience (and news reporters!) and talks about the ‘spooky’ and ‘weird’ properties of superposition and decoherence that people seem to like to tie in with consciousness, cats, and ‘the observer effect’. It doesn’t really go into entanglement directly; I think that deserves a separate post! It is also a fun introduction to some of the issues that arise when trying to do experimental quantum computing, and quantum physics in general.

I’ve also put this essay in the Resources section as a permanent link.

The not-so-spooky-after-all ‘observer-with-a-hammer’ effect

S. Gildert, November 2010

I’m so sick of people using phrases like this:

“Looking at, nay, even thinking about a quantum computer will destroy its delicate computation. Even scientists do not understand this strange and counter-intuitive property of quantum mechanics”

or worse:

“The act of a conscious observer making a measurement on a quantum computer whilst it is performing a calculation causes the wavefunction to collapse. The spooky nature of these devices means that they just don’t work when we are looking at them!”

ARGGHHHHH!!!!!!!!

These kinds of phrases spread like viral memes because they are easy to remember and they pique people’s curiosity. People like the idea of anthropomorphizing inanimate systems; it makes them seem unusual and special. This misunderstanding – the idea that a quantum system somehow ‘cares’ about, or is emotionally sensitive to, what a human is doing – is actually what causes this meme to perpetuate.

So I’m going to put a new meme out there into the-internet-ether-blogosphere-tubes. Maybe someone will pick up on this analogy and it will become totally viral. It probably won’t, because it seems pretty dull in comparison to spooky ethereal all-seeing quantum systems, but if it flicks a light switch in the mind of but a single reader – if, on contemplating my words, someone’s conceptual picture of quantum mechanics as a mystical, ever-elusive resource is eroded even by the tiniest amount – then my work here will be done.

Memetic surgery

Let’s start by cutting the yukky tumorous part from this meme and dissecting it on our operating table:

“Looking at a quantum system changes it.”

Now I don’t necessarily disagree with this statement, but I think you need to define what you mean by ‘looking’….

Usually when physicists ‘look’ at things, they are trying to measure something to extract information from it. To measure something, you need to interact with it in some way or other. In fact, everything in the world interacts with many other things around it (that’s why Physics is interesting!). Everything one could ever wish to measure is actually sitting in a little bath of other things that are constantly interacting with it. Usually, we can ignore this and concentrate on the one thing we care about. But sometimes this interacting-background property can cause unwanted problems.

Measuring small things

Brownian motion can give us a nice example of a nasty background interaction. Imagine that a scientist wanted to investigate the repulsion (or attraction) of some tiny magnetic particles in a solution that had just precipitated out of an awesomely cool chemical reaction. (I don’t know why you’d want to do this, but scientists have some weird ideas.) So she starts to take measurements of the positions of the little magnetic particles over time, and finds that they are not obeying the laws of magnetism. How dare they! What could be wrong with the experiment? So our good scientist takes the solution in her beaker and starts to adjust various parameters to try and figure out what is going on. It turns out that when she cools the solution, the particles start to behave more in line with what is expected. She figures that Brownian motion – all the other molecules jostling and wiggling around near the magnetic particles – is actually kicking the experiment around, ruining the results. But by lowering the temperature, it is possible to stop the environment in which the particles sit from disturbing them as much.

In this example, the scientist was able to measure the positions of the particles with something like a ruler or a laser or some other cool technique, and it was fairly easy, even though the environment had become irritatingly convolved with the experiment. Once she had figured out how to stop the interaction with the environment, the experiment worked well.
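If you prefer to see this rather than just read about it, here is a minimal toy simulation of the idea (my own cartoon, not the real experiment): a particle feels a steady ‘magnetic’ drift plus random thermal kicks whose size grows with temperature.

```python
import numpy as np

rng = np.random.default_rng(42)

def trajectory(temperature, steps=1000, drift=0.01):
    # Random thermal kicks whose spread grows like sqrt(T)
    # (equipartition-style hand-waving; good enough for a cartoon).
    kicks = rng.normal(0.0, np.sqrt(temperature) * 0.05, size=steps)
    return np.cumsum(drift + kicks)

for T in (300.0, 30.0, 3.0):
    x = trajectory(T)
    print(f"T = {T:5.0f}: final position {x[-1]:7.2f} "
          f"(drift alone would give {0.01 * 1000:.2f})")
```

At the highest temperature the random kicks completely bury the drift we are trying to measure; at the lowest, the expected behaviour re-emerges – which is just what our scientist saw when she cooled her beaker.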

Quantum systems are small, and small things are delicate. But quantum systems are so small that the environment, the ‘background-interaction’ around them, is no longer something that they, or we, can ignore. It pushes them around. In order to have a chance at engineering quantum systems, researchers have to isolate them carefully from the environment (or at least the bits of the environment that kick them around). Scientists spend a lot of time trying to stop the environment from interacting with their qubits. For example, superconducting processors need to be operated at very cold temperatures, in extremely low magnetic field environments. But I won’t digress into the experimental details. The main idea is that no matter how you build your quantum computer, you will have to solve this problem in some way or other. And even after all this careful engineering, the darn things still interact with the environment to some degree.

It gets worse

But with quantum systems, there is an extra problem. The problem is not just the environment. To illustrate this problem, I’ll propose another little story of the striving scientists.

Imagine that our scientists have developed a technique to measure the diameter of bird eggs using a robotic arm. The arm has a hand that grasps the eggs, measures them, and then displays the diameter on a neat built-in display. (Alternatively, you can Bluetooth the results to your iPhone, so the scientists tell me.) Anyway, this robotic arm is so ridiculously precise that it can measure the diameter of eggs more accurately than any pair of vernier calipers, any laser-interferometer array, or any other cool way of measuring eggs that has ever existed. The National Standards laboratories are intrigued.

However, there is a slight problem. Every time the robot tries to measure an egg, it breaks the darn thing. There is no way to get around this. The scientific breakthrough relating to the accuracy of the new machine comes from the fact that the robot squeezes the egg slightly. Try and change the way that the measurement is performed, and you just can’t get good results anymore. It seems that we just cannot avoid breaking the eggs. The interaction of the robot with the egg is ruining our experiment.

Of course, a robot-egg measuring system like this sounds ridiculous, but this is exactly the problem that we have with quantum systems. The measuring apparatus is huge compared to the quantum system, and it interacts with it, just like the pesky environment does. It pushes and squeezes our quantum system. The result is that anything huge that we use to try to perform a delicate measurement will break it. And worse still, we can’t just try to ‘turn it off completely’ like we could with the environment surrounding the particles in the solution. By the very nature of what we are trying to do, we need the measurement apparatus to interact with the qubits, otherwise how can we measure them? What a pain. We end up measuring a kind of qubit-environment-combination mess, just like trying to measure the diameter of a broken egg whose contents are running all over our robotic measurement apparatus.

I can’t stress enough how comparatively big and clumsy quantum measurement apparatus is. Whilst scientists are trying to build better measurement techniques that don’t have such a bad effect on quantum systems, ultimately you just can’t get around this problem, because the large-scale things that we care about are just not compatible with the small-scale of the quantum world.

This doesn’t mean that quantum computers aren’t useful. It just means that the information we can extract from such systems is not neat, clean and unique to the thing we were trying to measure. We have to ‘reconstruct’ information from the inevitable conglomerate that we get out of a measurement. In some cases, this is enough to help us do useful computations.
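For the code-inclined, here is a tiny toy model of that ‘messy combination’ (a cartoon of measurement-induced dephasing, not a model of any particular real apparatus): a qubit sits in an equal superposition, and each interaction with the clumsy measuring machinery imprints a random phase kick whose spread grows with the coupling strength g.

```python
import numpy as np

rng = np.random.default_rng(0)

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> = (|0> + |1>)/sqrt(2)
rho0 = np.outer(plus, plus.conj())         # pure-state density matrix

def after_measurement(rho, g, samples=20000):
    # Each interaction imprints a random relative phase of spread g
    # between |0> and |1> (the 'hammer blow' from the apparatus);
    # averaging over many such kicks gives the post-measurement state.
    phases = rng.normal(0.0, g, size=samples)
    kicked = np.zeros_like(rho, dtype=complex)
    for phi in phases:
        u = np.diag([1.0, np.exp(1j * phi)])   # phase kick on |1>
        kicked += u @ rho @ u.conj().T
    return kicked / samples

for g in (0.1, 1.0, 3.0):
    rho = after_measurement(rho0, g)
    print(f"g = {g}: populations {np.real(np.diag(rho)).round(3)}, "
          f"|coherence| = {abs(rho[0, 1]):.3f}")
```

The populations (the ‘classical’ part of the state) survive, but the off-diagonal coherence – the part a quantum computation actually relies on – is washed out as the coupling grows. Notice that nothing in the model cares whether anyone is watching.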

Hammering the message home

Nowhere here does one need to invoke any spookiness, consciousness, roles of the observer, or animal cruelty involving cats and boxes. In fact, the so-called ‘observer’ effect could perhaps be more appropriately termed the ‘observer-with-a-hammer’ effect. We take for granted that we can measure large classical systems, like the 0 or 1 binary states of transistors, without affecting them too much. But measuring a quantum system is like trying to determine the voltage state of a single transistor by taking a hammer to the motherboard and counting the number of electrons that end up sticking to the end of it. It kind of upsets the computation that you were in the middle of. It’s not the observer that’s the problem here, it’s the hammer.

So, the perhaps-not-so-viral phraseology for one to take away from my relentless ranting is thus:

“When you try and measure a delicate quantum system with clumsy apparatus, you actually end up with a messy combination of both!”

Alternatively, you could say ‘you can’t make a quantum measurement without breaking a few eggs’ – but if that terrible pun sticks, then I will forever be embarrassed.

Building more intelligent machines: Can ‘co-design’ help?

Here is a little essay I wrote in response to an article on HPCwire about hardware-software co-design and how it relates to D-Wave’s processors. I’ve also put this essay in the Resources section as a permanent link.

Building more intelligent machines: Can ‘co-design’ help?

S. Gildert, November 2010

There are many challenges that we face as we consider the future of computer architectures, and as the type of problem that people require such architectures to solve changes in scale and complexity. A recent article written for HPCwire [1] on ‘co-design’ highlights some of these issues, and demonstrates that the High Performance Computing community is very interested in new visions of breakthrough system architectures. Simply scaling up the number of cores of current technologies seems to be getting more difficult, more expensive, and more energy-hungry. One might imagine that in the face of such diminishing returns, there could be innovations in architectures that are vastly different from anything currently in existence. It seems clear that people are becoming more open to the idea that something revolutionary in this area may be required to make the leap to ‘exascale’ machines and beyond. The desire for larger and more powerful machines is driving people to find more ‘clever’ ways of solving problems (algorithmic and software development), rather than just increasing the speed and sheer number of transistors doing the processing. Co-design is one example of a buzzword that is sneakily spreading these ‘clever computing’ memes into the HPC community.

Generalization and specialization

I will explain the idea of co-design by using a colorful biological analogy. Imagine trying to design a general purpose animal: our beast can fly, run, swim, dig tunnels and climb trees. It can survive in many different environments. However, anyone trying to design such an animal would soon discover that the large wings prevented it from digging tunnels effectively, and that the thick fur coat needed to survive the extreme cold did not make for a streamlined, fast swimmer. Any animal that was even slightly more specialized in one of these areas would quickly out-compete our general design. Indeed, for this very reason, natural selection causes specialization and therefore great diversity amongst the species that we see around us. Particular species are very good at surviving in particular environments.

How does this tie in with computer processing?

The problems that processors are designed to solve today are mostly very similar. One can view this as being a bit like the ‘environmental landscape’ that our general purpose creatures live in. If the problems that they encounter around their environment on a day-to-day basis are all of the same type, then there is no reason to diversify. Similarly, a large proportion of all computing resources today address some very similar problems, which can be solved quite well using general purpose architectures such as Intel Centrino chips. These include the calculations that underlie familiar everyday tasks such as word processing and displaying web pages. But there do exist problems that have previously been thought very difficult for computers to solve, problems which seem out of reach of conventional computing. Examples of such problems are face recognition, realistic speech synthesis, the discovery of patterns in large amounts of genetic data, and the extraction of ‘meaning’ from poetry or prose. These problems are like the trees and cliffs and oceans of our evolutionary landscape. The general purpose animals simply cannot exploit these features; they cannot solve these problems, so the problems are typically ignored or deemed ‘too hard’ for current computing platforms.

But there are companies and industries that do care about these problems. They require computing power to be harnessed for some very specific tasks. A few examples include extracting information from genetic data in biotechnology companies, improving patient diagnosis and the medical knowledge of expert systems in the healthcare sector, improving computer graphics for gaming experiences in entertainment businesses, and developing intelligent military tools for the defense industry. These fields all require data to be searched and sorted in parallel, and to be manipulated at a much more abstract level, for the computation to be efficient and worthwhile. This parallel operation and abstraction is something that general purpose processors are not very good at. They can attempt such a feat, but it takes the power of a supercomputer-size machine to tackle even very small instances of these specialized problems, using speed and brute force to overwhelm the difficulty. The result is very expensive, very inefficient, and does not scale well to larger problems of the same type.

It is this incorporation of variety and structure into our computational problems – the addition of trees, cliffs and oceans – that causes our general-purpose processors to be so inefficient at these tasks. So why not allow the processors to specialize and diversify, just as natural selection explores the problem environment defined by our biological needs?

Following nature’s example

Co-design attempts to address this problem. It tries to design solutions around the structure of the problem type, resulting in an ability to solve that one problem very well indeed. In practice this is done by meticulous crafting of both software and hardware in synchrony. This allows software which complements the hardware and utilizes subtleties in the construction of the processor to help speed things up, rather than software which runs on a general architecture and incurs a much larger overhead. The result is a blindingly fast and efficient special purpose architecture and algorithm that is extremely good at tackling a particular problem. Though the resulting processor may not be very good at certain tasks we take for granted using general-purpose processors, solving specialized problems instead can be just as valuable, and perhaps will be even more valuable in the future.
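The same principle can be illustrated at a very mundane scale. Here is a crude sketch (everyday Python and NumPy, nothing to do with any of the HPC systems discussed here): the same arithmetic written twice, once oblivious to the hardware, and once expressed so that the vector units and memory system of an ordinary processor can actually be exploited.

```python
import time
import numpy as np

n = 2_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Hardware-oblivious: one scalar multiply-add at a time.
t0 = time.perf_counter()
total = 0.0
for i in range(n):
    total += a[i] * b[i]
t1 = time.perf_counter()

# Hardware-aware: a vectorized dot product that lets the underlying
# BLAS routine use the processor's vector units and cache efficiently.
t2 = time.perf_counter()
total_vec = float(a @ b)
t3 = time.perf_counter()

print(f"loop: {t1 - t0:.2f} s, vectorized: {t3 - t2:.4f} s")
```

The arithmetic is identical; only the fit between the software and the hardware changes, and that alone typically buys a couple of orders of magnitude. Co-design pushes the same thinking in both directions at once, shaping the hardware as well as the software.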

A selection of processors which are starting to specialize are discussed in the HPCwire article. These include MDGRAPE-3, which calculates inter-atomic forces, and Anton, a system specifically designed to model the behaviour of molecules and proteins. More common names in the processor world are also beginning to explore possible specializations. Nvidia’s GPU based architectures are gaining in popularity, and FPGA and ASIC alternatives are now often considered for inclusion in HPC systems, such as some of Xilinx’s products. As better software and more special purpose algorithms are written to exploit these new architectures, they become cheaper and smaller than the brute-force general purpose alternatives. The size of the market for these products increases accordingly.

The quantum processors built by D-Wave Systems [2] are a perfect example of specialized animals, and give an insightful look into some of the ideas behind co-design. The D-Wave machines don’t look much like regular computers. They require complex refrigeration equipment and magnetic shielding. They use superconducting electronics rather than semiconducting transistors. They are, at first inspection, very unusual indeed. But they are carefully designed and built in a way that allows an intimate match between the hardware and the software algorithm that they run. As such they are very specialized, but this property allows them to tackle a particular class of problems, known as discrete optimization problems, very well. This class of problems may appear highly mathematical, but looks can be deceiving. It turns out that once you start looking, examples of these problems are found in many interesting areas of industry and research. Most importantly, optimization forms the basis of many of the problems mentioned earlier, such as pattern recognition, machine learning, and meaning analysis. These are exactly the problems which are deemed ‘too hard’ for most computer processors, and yet could be of incredible market value. In short, there are many, many trees, cliffs and oceans in our problem landscape, and a wealth of opportunity for specialized processors to exploit this wonderful evolutionary niche!
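To make ‘discrete optimization’ a little more concrete, here is a tiny instance of the problem class written in QUBO (quadratic unconstrained binary optimization) form and solved by brute force. To be clear, this is only a sketch of the problem shape – it is not D-Wave’s API or algorithm – and the brute-force search scales as 2^n, which is rather the point of wanting specialized machines.

```python
import itertools
import numpy as np

# A toy QUBO instance: minimize E(x) = x^T Q x over bit-strings x.
# Diagonal terms reward setting a bit to 1; off-diagonal terms
# penalize neighbouring bits being 1 at the same time.
Q = np.array([
    [-1.0,  2.0,  0.0,  0.0],
    [ 0.0, -1.0,  2.0,  0.0],
    [ 0.0,  0.0, -1.0,  2.0],
    [ 0.0,  0.0,  0.0, -1.0],
])

best_energy, best_bits = None, None
for bits in itertools.product((0, 1), repeat=4):   # all 2^4 assignments
    x = np.array(bits)
    energy = float(x @ Q @ x)
    if best_energy is None or energy < best_energy:
        best_energy, best_bits = energy, bits      # ties keep the first found

print(f"best assignment: {best_bits}, energy: {best_energy}")
# -> best assignment: (0, 1, 0, 1), energy: -2.0
```

Many pattern recognition and machine learning tasks can be massaged into exactly this form, which is what makes hardware that natively minimizes such energy functions so interesting.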

Co-design is an important idea in computing, and hopefully it will open people’s minds to the potential of new types of architecture that they may never have imagined before. I believe it will grow ever more important in the future, as we expect a larger and more complex variety of problems to be solved by our machines. The first time one sees footage of a tropical rainforest, one can but stare in awe at the wonders of never-before-seen species, each perfectly engineered to efficiently solve a particular biological problem. I hope that in the future, we will open our eyes to the possibility of an eco-sphere of computer architectures, populated by similarly diverse, beautiful and unusual creatures.

[1] http://www.hpcwire.com/features/Compilers-and-More-Hardware-Software-Codesign-106554093.html

[2] http://www.dwavesys.com/

Transvision2010 presentation: Thinking about the hardware of thinking

I will be giving a presentation at Transvision2010, which takes place next weekend. The talk will be about how we should consider novel computing substrates on which to develop AI and ASIM (advanced substrate-independent minds) technologies, rather than relying on conventional silicon processors. My main example will be that of developing learning applications on Quantum Computing processors (not entirely unpredictable!), but the method is generalisable to other technologies such as GPUs, biologically based computer architectures, etc…


The conference is physically located in Italy, but I unfortunately cannot make it in person, as I will be attending another workshop. I will therefore be giving the talk remotely via the teleconferencing software Teleplace.

Anyway, here is some information about the talk, kindly posted by Giulio Prisco:

Thinking about the hardware of thinking:
Can disruptive technologies help us achieve uploading?

Interesting news coverage of Teleplace QC talk

So I enjoyed giving my Teleplace talk on Quantum Computing on Saturday, and I received quite a lot of feedback about it (mostly good!).

My talk was reported on Slashdot via a Next Big Future writeup, which in turn linked to Giulio’s Teleplace blog! This level of coverage for a talk has been very interesting; I’ve never had anything linked from /. before. They unfortunately got my NAME WRONG, which was most irritating. Although I’m fairly impressed that if you now Google for ‘my name spelt incorrectly + quantum computing’, it does actually ask if you meant ‘my name spelt correctly + quantum computing’, which is a small but not insignificant victory 🙂 Note: I’m not actually going to write my name spelt incorrectly out here, as it would diminish the SNR!!

The talk also prompted this guest post written by Matt Swayne on the Quantum Bayesian Networks blog. Matt was present at the talk.

I’ve had a lot of people asking if I will post the slides online. Well here they are:

LINK TO SLIDES for QUANTUM COMPUTING: SEPARATING HOPE FROM HYPE
Teleplace seminar, S. Gildert, 04/09/10


Or rather, that’s a direct link to them. They are also available, along with the VIDEOS of the talk and a bunch of other lectures and stuff, on the Resources page. Here are the links to the VIDEOS of the talk, and look, you have so many choices!!

  • VIDEO 1: 600×400 resolution, 1h 32 min
  • VIDEO 2: 600×400 resolution, 1h 33 min, taken from a fixed point of view
  • VIDEO 3: 600×400 resolution, 2h 33 min, including the initial chat and introductions and the very interesting last hour of discussion, recorded by Jameson Dungan
  • VIDEO 4: 600×400 resolution, 2h 18 min, including the very interesting last hour of discussion, recorded by Antoine Van de Ven
Here are a couple of screenshots from the talk:


Online seminar on Quantum Computing

I’m giving a VIRTUAL seminar in Teleplace this Saturday…

I’m going to entitle the talk:

‘Quantum Computing: Separating Hope from Hype’
Saturday 4th September, 10am PST

“The talk will explain why quantum computers are useful, and also dispel some of the myths about what they can and cannot do. It will address some of the practical ways in which we can build quantum computers and give realistic timescales for how far away commercially useful systems might be.”

Here’s Giulio’s advertisement for the talk:
GIULIO’S BLOGPOST about quantum computing seminar, which is much more explanatory than the briefly thrown together blogpost you are being subjected to here.

Anyone wishing to watch the talk can obtain a Teleplace login by e-mailing Giulio Prisco (who can be contacted via the link above). Teleplace is a piece of software that is simple to download and quick to install on your computer, and has an interface a bit like Second Life. Now is a great time to get an account, as there will be many more interesting lectures and events hosted via this software as the community grows. Note the time – 10am PST Saturday morning (as in the West Coast U.S. time zone: California, Vancouver, etc.)

The seminar is also listed as a Facebook Event if you would like to register interest that way!

Experimental investigation of an eight-qubit unit cell in a superconducting optimization processor

Anyone who follows this blog and wants to get a real in-depth insight into the way that D-Wave’s processors are built, and how they solve problems, should definitely read this paper:

Phys. Rev. B 82, 024511 (2010), R. Harris et al.

The paper itself is quite long (15 pages), but it really gives a great description of how an 8-qubit ‘portion’ of the processor is designed, fabricated, fit to a physical (quantum mechanical) model, calibrated, and then used to solve problems.

If you don’t have access to the Phys. Rev. B journal, you can read a free preprint of the article here. And if you’ve never tried reading a journal paper before, why not give it a go! (This is an experimental paper, which means there are lots of pretty pictures to look at, even if the Physics gets hard to follow.) For example, a microphotograph of the 8-qubit cell:

Simulating Chemistry using Quantum Computers

Nice preprint from the Harvard group introducing quantum computing for chemical simulation, including a great deal about AQC and how to apply it to such systems, e.g. lattice protein folding and small molecules. Includes references to some experimental and simulation work done at D-Wave (write-up for that in progress).

Simulating Chemistry using Quantum Computers

Quantum Computing – cool new video!

Here’s a neat video made by my friend and colleague Dr. Dominic Walliman, which gives a great introduction for all those budding Quantum Computer Engineers of the future 🙂

Not only is this a Physics-based educational and entertainment extravaganza, but the video is interspersed with some cool shots of my old lab at Birmingham, and my old dilution refrigerator – I miss you, Frosty… *sniff*