Experimental investigation of an eight-qubit unit cell in a superconducting optimization processor

Anyone who follows this blog and wants to get a real in-depth insight into the way that D-Wave’s processors are built, and how they solve problems, should definitely read this paper:

Phys. Rev. B 82, 024511 (2010), R. Harris et al.

The paper itself is quite long (15 pages) but it really gives a great description of how an 8-qubit ‘portion’ of the processor is designed, fabricated, fit to a physical (quantum mechanical) model, calibrated, and then used to solve problems.
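The unit cell in the paper is built to minimize the energy of an Ising model: programmable local fields h_i and couplings J_ij are applied to the qubits, and the chip is asked to find the spin configuration of lowest energy. To get a feel for the kind of problem this is, here is a minimal brute-force sketch in Python; the h and J values are made-up illustrative numbers, not parameters from the paper:

```python
# Brute-force ground-state search for an 8-spin Ising problem, the kind of
# optimization the unit cell in the paper is designed to perform in hardware.
# The h and J values below are made up for illustration.
from itertools import product

N = 8
h = [0.5, -0.3, 0.2, -0.1, 0.4, -0.5, 0.1, -0.2]      # local fields h_i
J = {(0, 1): -1.0, (1, 2): 0.7, (2, 3): -0.4,         # couplings J_ij
     (4, 5): 0.9, (5, 6): -0.6, (0, 4): 0.3}

def energy(s):
    """E(s) = sum_i h_i*s_i + sum_(i,j) J_ij*s_i*s_j, with s_i in {-1, +1}."""
    e = sum(h[i] * s[i] for i in range(N))
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

# Enumerate all 2^8 = 256 spin configurations and keep the lowest-energy one.
ground = min(product((-1, 1), repeat=N), key=energy)
print("ground state:", ground, "energy:", round(energy(ground), 2))
```

Of course, brute force stops being an option long before you reach the problem sizes a hardware annealer is aimed at; the number of configurations doubles with every extra qubit.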

If you don’t have access to the Phys Rev B journal, you can read a free preprint of the article here. And if you’ve never tried reading a journal paper before, why not give it a go! (This is an experimental paper, which means there are lots of pretty pictures to look at, even if the Physics gets hard to follow.) For example, the paper includes a microphotograph of the 8-qubit cell.

Simulating Chemistry using Quantum Computers

Nice preprint from the Harvard group introducing quantum computing for chemical simulation, including a great deal about adiabatic quantum computation (AQC) and how to apply it to such systems, e.g. lattice protein folding and small molecules. It also includes references to some experimental and simulation work done at D-Wave (write-up for that in progress).


A particularly bad attack on the Singularity

Whilst I am not a raging advocate of ‘sitting back and waiting’ for the Singularity to happen (I prefer to get excited about the technologies that underlie the concept), I feel that I have a responsibility to defend the poor meme when an argument against it is actually very wrong, such as in this article from Science Not Fiction:

Genomics Has Bad News For The Singularity

The basic argument the article puts forward is that the cost of sequencing a human genome has fallen along a super-exponential trend over the past 10 years, and yet we do not have amazing breakthroughs in drug treatment or designer therapies. So how could we expect “genuine artificial intelligence, self-replicating nanorobots, and human-machine hybrids” from Moore’s law, when the cost of processing power is falling at a much slower rate than genome-sequencing costs?
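To put rough numbers on the comparison, here is a back-of-the-envelope sketch. The halving times are ballpark figures I have assumed for illustration (roughly 24 months for the cost of computation under Moore’s law, and roughly 9 months on average for sequencing costs over that decade); they are not taken from the article:

```python
# Compare how far two costs fall over a decade, given fixed halving times.
# Both halving times are illustrative assumptions, not data from the article.
years = 10

def cost_reduction(halving_time_months, years):
    """Factor by which a cost falls after `years` at a fixed halving time."""
    return 2 ** (12 * years / halving_time_months)

print(f"compute cost falls    ~{cost_reduction(24, years):,.0f}x")
print(f"sequencing cost falls ~{cost_reduction(9, years):,.0f}x")
```

Even with these rough numbers the gap is striking: a factor of about 32 for computation versus about 10,000 for sequencing over the same ten years.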

The article states:

“In less than a decade, genomics has shown that improvements in the cost and speed of a technology do not guarantee real-world applications or immediate paradigm shifts in how we live our day-to-day lives.”

I feel, however, that the article is comparing apples and oranges. I have two arguments against the comparison:

The first is that sequencing the genome just gives us data; there is no algorithmic component. We still have little idea of how most of that code is actually executed in the making of an organism – we don’t understand the ‘algorithms’ that proteins implement. It’s like having the source code for an AGI without a compiler. But we do have reasonable physical and algorithmic models for neurons (and even entire brains!); we just lack the computational power to simulate billions of neurons in a highly connected structure. We can simulate larger and larger neural networks as hardware increases in speed, connectivity, and efficiency. And given that the algorithm is ‘captured’ in the very structure of the neural net, the algorithm advances as the hardware improves. This is not the case in genome sequencing.
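To give a concrete sense of what I mean by a ‘reasonable physical and algorithmic model’ of a neuron, here is a minimal leaky integrate-and-fire simulation. The parameter values are generic textbook choices I have picked for illustration, not numbers from any particular brain-simulation project:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential V leaks
# toward its resting value, is driven by an input current, and emits a
# spike (then resets) whenever it crosses threshold. Parameters are
# generic textbook values chosen for illustration.
tau_m = 20.0                                      # membrane time constant (ms)
V_rest, V_thresh, V_reset = -65.0, -50.0, -65.0   # potentials (mV)
R, I_ext = 10.0, 1.8       # membrane resistance and constant input current
dt = 0.1                   # integration time step (ms)

V, spikes = V_rest, []
for step in range(int(200 / dt)):                 # simulate 200 ms
    t = step * dt
    dV = (-(V - V_rest) + R * I_ext) / tau_m      # leak + external drive
    V += dV * dt
    if V >= V_thresh:                             # threshold crossing
        spikes.append(t)
        V = V_reset                               # reset after the spike
print(f"{len(spikes)} spikes in 200 ms; first at t = {spikes[0]:.1f} ms")
```

A network is then just many of these units coupled together, which is exactly why the size of network we can simulate tracks the available hardware so directly.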

The second argument is that genome sequencing is not a process that can be bootstrapped. Knowing a genome sequence isn’t going to help us sequence genomes faster, or help us engineer designer drugs. But building smart AI systems – or “genuine artificial intelligence” as the article states – CAN bootstrap the process, as you will have access to copyable capital at almost no cost: intelligent machines which can be put to the task of designing more intelligent machines. If we can build AIs that pass a particular threshold of being able to design improved algorithmic versions of themselves, why should this be limited by hardware requirements at all? Moore’s law really just gives us an upper bound on the resources necessary to build intelligent systems if we approach the problem using a brute-force method.

We still need people working on the algorithmic side of AI – just as we need people working on how genes are actually expressed and give rise to characteristics in organisms. But in the case of AI we already have an existence proof for such an object – the human brain – and so even with no algorithmic advances we should be able to build one in silico. Applications for genomics do not have such a clearly defined goal based on something that exists naturally (though harnessing effects like the way in which cancer cells avoid apoptosis might be a good place to start).

I’d be interested in hearing people’s thoughts on this.

Essay: Language, learning, and social interaction – Insights and pitfalls on the road to AGI

This is the first in a series of essays that I’m going to make available through this site. I’ll put them on the resources page as I write them.

I’m doing this for a few reasons. The first is that I like writing. The second is that a blog is not the best format for ideas you need to refer to frequently; when multiple people ask the same question, it is better to point them to a permanent resource than to repeat yourself each time. The third is that I would like to add some ideas to the general thought-provoking mash-up of AGI memes around the internet. The fourth is that I think people enjoy reading short articles and opinion pieces somewhat more than entire books. The fifth is that (somewhat in contradiction to my previous reason) I’m hoping to eventually write a neat book about related topics, and whilst I have already started doing this, I need a lot more practice writing several-page documents before I can produce something 100+ pages long that I’m happy with. Note: the PhD thesis does not count!! 😉

So here we go. Click on the title to download the PDF.
You can also find a copy of the essay on the resources page.

ESSAY TITLE:
Language, learning, and social interaction
Insights and pitfalls on the road to AGI

Abstract:
Why is language important? What purpose does it serve on a fundamental level? How is it related to the ways in which we learn? Why did it evolve? In this short essay I’m going to take a Darwinian look at language, and explain why we must be careful when considering the role language plays in building intelligent machines that are not human-like.

Quantum Computing – cool new video!

Here’s a neat video made by my friend and colleague Dr. Dominic Walliman, which gives a great introduction for all those budding Quantum Computer Engineers of the future 🙂


Not only is this a Physics-based educational and entertainment extravaganza, but the video is interspersed with some cool shots of my old lab at Birmingham, and my old dilution refrigerator – I miss you, Frosty… *sniff*