Humanity+ Conference 2010 Caltech

I gave a presentation yesterday at the H+ conference at Caltech. The session in which I spoke was the ‘Redefining Artificial Intelligence’ session. I’ll try to get the video of the talk up here as soon as possible along with slides.

Other talks in this session were given by Randal Koene, Geordie Rose, Alex Peake, Paul Rosenbloom, Adrian Stoica, Moran Cerf and Ben Goertzel.

My talk was entitled ‘Pavlov’s AI: What do superintelligences really want?’ I discussed the foundations of AGI and what I believe to be a problem (or at least an interesting philosophical gold-seam) in the idea of building self-improving artificial intelligences. I’ll be writing a lot more on this topic in the future, hopefully in the form of essays, blogposts and papers. I think it is very important to assess what we are trying to do in the area of AI and what the overall objectives are; looking objectively at what we can build helps to frame our progress.

The conference was livestreamed, which was great. I think my talk had around 500 viewers. Add to that the 200 or so in the lecture hall, and 700 is a pretty big audience! Some of the talks had over 1,300 remote viewers. Livestreaming really is a great way to reach a much bigger audience than is possible with real-life events alone.

I didn’t get to see much of the Caltech campus, but the courtyard at the Beckman Institute where the conference was held was beautiful. I enjoyed the fact that coffee and lunch were served outside in the courtyard. It was very pleasant! Sitting around outside in L.A. in December was surprisingly similar to a British summer!

I got to talk to some great people. I enjoy transhumanism-focused conferences as the people you meet tend to have many diverse interests and multidisciplinary backgrounds.

I was very inspired to continue exploring and documenting my journey into the interesting world of AGI. One of the things I really love doing is looking into the fundamental science behind Singularity-focused technologies. I try to be impartial about this: giving an optimistic account of the promise of future technologies whilst maintaining a skeptical curiosity about whether such technologies are fundamentally possible, and about what roadmaps might lead to their successful implementation. So stay tuned for more Skepto-advocate Singularity fun!

New scheduling for ‘Thinking about the Hardware of thinking’

I was scheduled to give a live virtual seminar, streamed to the Transvision conference in Italy on October 23rd. Unfortunately I was not able to deliver the presentation due to technical problems at the conference venue.

But the good news is, I will be giving the talk this weekend instead!

Here is the abstract (slightly updated, as the talk will be a little longer than originally planned):

Thinking about the hardware of thinking:
Can disruptive technologies help us achieve uploading?

S. Gildert,
Teleplace, 28th November 2010
10am PST (1pm EST, 6pm UK, 7pm continental EU).

We are surrounded by devices that rely on general-purpose silicon processors, most of which are very similar in terms of their design. But is this the only possibility? As we begin to run larger and more brain-like emulations, will our current methods of simulating neural networks be enough, even in principle? Why does the brain, with 100 billion neurons, consume less than 30 W of power, whilst our attempts to simulate tens of thousands of neurons (for example in the Blue Brain project) consume tens of kW? As we wish to run computations faster and more efficiently, we may need to consider whether the hardware design that we all take for granted is optimal. In this presentation I will discuss the recent return to a focus upon co-design – that is, designing specialized software algorithms to run on specialized hardware – and how this approach may help us create much more powerful applications in the future. As an example, I will discuss some possible ways of running AI algorithms on novel forms of computer hardware, such as superconducting quantum computing processors. These behave entirely differently to our current silicon chips, and they help to emphasize just how important disruptive technologies may be to our attempts to build intelligent machines.
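The power gap quoted in the abstract can be made concrete with a quick back-of-envelope calculation. Here is a minimal sketch using round figures taken from the abstract (100 billion neurons at under 30 W for the brain; 10,000 neurons at 10 kW for a simulation) – these are illustrative assumptions, not measurements:

```python
# Back-of-envelope power-per-neuron comparison.
# Figures are round numbers from the abstract, not measurements.
brain_neurons = 100e9   # ~100 billion neurons
brain_watts = 30.0      # < 30 W total

sim_neurons = 10_000    # tens of thousands of simulated neurons
sim_watts = 10_000.0    # "tens of kW" -> take 10 kW

brain_w_per_neuron = brain_watts / brain_neurons
sim_w_per_neuron = sim_watts / sim_neurons

print(f"brain:      {brain_w_per_neuron:.1e} W per neuron")
print(f"simulation: {sim_w_per_neuron:.1e} W per neuron")
print(f"gap:        ~{sim_w_per_neuron / brain_w_per_neuron:.0e}x")
```

On these numbers the simulation spends somewhere near ten orders of magnitude more power per neuron than biology does, which is the scale of the gap that co-designed hardware would have to close.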

Here is a link to the Teleplace announcement.

Hope to see you there!

Interesting news coverage of Teleplace QC talk

So I enjoyed giving my Teleplace talk on Quantum Computing on Saturday, and I received quite a lot of feedback about it (mostly good!).

My talk was reported on Slashdot via a Next Big Future writeup, which in turn linked to Giulio’s Teleplace blog! This level of coverage for a talk has been very interesting; I’ve never had anything linked from /. before. Unfortunately they got my NAME WRONG, which was most irritating. Still, I’m fairly impressed that if you Google for ‘my name spelt incorrectly + quantum computing’, it actually asks if you meant ‘my name spelt correctly + quantum computing’, which is a small but not insignificant victory 🙂 Note: I’m not actually going to write my name spelt incorrectly here, as it would diminish the SNR!!

The talk also prompted this guest post written by Matt Swayne on the Quantum Bayesian Networks blog. Matt was present at the talk.

I’ve had a lot of people asking if I will post the slides online. Well here they are:

LINK TO SLIDES for QUANTUM COMPUTING: SEPARATING HOPE FROM HYPE
Teleplace seminar, S. Gildert, 04/09/10


Or rather, that’s a direct link to them. They are also available, along with the VIDEOS of the talk and a bunch of other lectures, on the Resources page. Here are the links to the VIDEOS of the talk – look, you have so many choices!!

  • VIDEO 1: 600×400 resolution, 1h 32 min
  • VIDEO 2: 600×400 resolution, 1h 33 min, taken from a fixed point of view
  • VIDEO 3: 600×400 resolution, 2h 33 min, including the initial chat and introductions and the very interesting last hour of discussion, recorded by Jameson Dungan
  • VIDEO 4: 600×400 resolution, 2h 18 min, including the very interesting last hour of discussion, recorded by Antoine Van de Ven
Here are a couple of screenshots from the talk:


Online seminar on Quantum Computing

I’m giving a VIRTUAL seminar in Teleplace this Saturday…

I’m going to entitle the talk:

‘Quantum Computing: Separating Hope from Hype’
Saturday 4th September, 10am PST

“The talk will explain why quantum computers are useful, and also dispel some of the myths about what they can and cannot do. It will address some of the practical ways in which we can build quantum computers and give realistic timescales for how far away commercially useful systems might be.”

Here’s Giulio’s advertisement for the talk:
GIULIO’S BLOGPOST about the quantum computing seminar, which is much more explanatory than the briefly thrown together blogpost you are being subjected to here.

Anyone wishing to watch the talk can obtain a Teleplace login by e-mailing Giulio Prisco (who can be contacted via the link above). Teleplace is a piece of software that is simple to download, quick to install, and has an interface a bit like Second Life. Now is a great time to get an account, as there will be many more interesting lectures and events hosted via this software as the community grows. Note the time – 10am PST on Saturday morning (as in the West Coast U.S. time zone: California, Vancouver, etc.)

The seminar is also listed as a Facebook Event if you would like to register interest that way!

A particularly bad attack on the Singularity

Whilst I am not a raging advocate of sitting back and waiting for the Singularity to happen (I prefer to get excited about the technologies that underlie the concept), I feel that I have a responsibility to defend the poor meme when an argument against it is actually very wrong, as in this article from Science Not Fiction:

Genomics Has Bad News For The Singularity

The basic argument that the article puts forward is that the cost of sequencing the human genome has fallen along a super-exponential trend over the past 10 years, and yet we do not have amazing breakthroughs in drug treatment and designer therapies. So how could we expect to have “genuine artificial intelligence, self-replicating nanorobots, and human-machine hybrids” just because Moore’s law is ensuring that the cost of processing power is falling – and falling at a much slower rate than genome sequencing costs?

The article states:

“In less than a decade, genomics has shown that improvements in the cost and speed of a technology do not guarantee real-world applications or immediate paradigm shifts in how we live our day-to-day lives.”

I feel, however, that the article is somewhat comparing apples and oranges. I have two arguments against the comparison.

The first is that sequencing the genome just gives us data. There’s no algorithmic component. We still have little idea of how most of the code is actually implemented in the making of an organism; we don’t have the protein algorithmics. It’s like having the source code for an AGI without a compiler. But we do have reasonable physical and algorithmic models for neurons (and even entire brains!) – we just lack the computational power to simulate billions of neurons in a highly connected structure. We can simulate larger and larger neural networks as hardware increases in speed, connectivity, and efficiency. And given that the algorithm is ‘captured’ in the very structure of the neural net, the algorithm advances as the hardware improves. This is not the case in genome sequencing.

The second argument is that sequencing genomes is not a process that can be bootstrapped. Knowing a genome sequence isn’t going to help us sequence genomes faster, or help you engineer designer drugs. But building smart AI systems – or “genuine artificial intelligence”, as the article states – CAN bootstrap the process, as you will have access to copyable capital for almost no cost: intelligent machines which can be put to the task of designing more intelligent machines. If we can build AIs that pass a particular threshold in terms of being able to design improved algorithmic versions of themselves, why should this be limited by hardware requirements at all? Moore’s law really just gives us an upper bound on the resources necessary to build intelligent systems if we approach the problem using a brute-force method.
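The difference between the two processes can be caricatured in a few lines of code. This is a toy illustration only – the rate and feedback numbers are invented – contrasting a process whose output never feeds back into its own rate with one that reinvests its output into its rate of production:

```python
def fixed_rate(steps, rate=1.0):
    """Output accumulates, but production never gets faster
    (sequencing genomes doesn't speed up genome sequencing)."""
    total = 0.0
    for _ in range(steps):
        total += rate
    return total

def bootstrapped(steps, rate=1.0, feedback=0.1):
    """Each round of output improves the rate of production
    (machines put to the task of designing better machines)."""
    total = 0.0
    for _ in range(steps):
        total += rate
        rate *= 1.0 + feedback  # output reinvested in capability
    return total

print(fixed_rate(50))    # linear: 50.0
print(bootstrapped(50))  # compound: over 1000
```

The point is not the particular numbers but the shape: any nonzero feedback term eventually dominates a fixed-rate process, which is exactly what the genome-sequencing trend lacks.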

We still need people working on the algorithmic side of things in AI – just as we need people working on how genes are actually expressed and give rise to characteristics in organisms. But in the case of AI, we already have an existence proof for such an object – the human brain – and so even with no algorithmic advances, we should be able to build one in silico. Applications for genomics do not have such a clearly defined goal based on something that exists naturally (though harnessing effects like the way in which cancer cells avoid apoptosis might be a good place to start).

I’d be interested in hearing people’s thoughts on this.

An adiabatic tragedy of advocates and sceptics

I think the recent musings around the blogosphere and beyond are completely missing a huge and fundamental point about why building AQC- and AQO-based machines will be not only useful, but something that we will wonder how we lived without. I’m not going to talk about the specifics of implementing superconducting technology for cool applications (I can defer that to subsequent posts). Here I just want to explain a little about why we should do it, and how the main hurdle to progressing such technology is nothing to do with our ability to build and understand the technology itself. In fact, it is a rather sad story.

Let’s think for a second: do we really have the right computational substrates for realising concepts such as highly connected neural networks and thinking machines? Is the von Neumann architecture really the best way to support such things? We are currently solving the problem of simulating intelligent systems by throwing more and more computational power at them. Though there’s something very odd about this, we have little choice, as we have become very good at designing processors that behave in a particular way, that talk to memory in a particular way, and that have a small number of cores. The problem with the commonly used architectures is that they just cannot embed things like neural nets, and other highly parallel structures, very efficiently.

Could adiabatic quantum processors be used for neural nets and highly parallel processing purposes? Of course! The architectures are very compatible. AQO processors are very similar to Boltzmann machines, which underlie the operation of many pattern-recognising systems, such as our own brain. There are other delicious fruits of superconducting logic, for example the ease with which we can implement reversible logic, and the exceedingly low power dissipation of such circuits. These systems also exhibit macroscopic quantum effects, which may or may not prove to be a great resource in computation. But even if they do not, we should not ignore the fact that actually building such machines is the only way to answer this, and many other, questions.
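The kinship between AQO processors and Boltzmann machines comes down to a shared energy function: both assign a quadratic energy to a set of binary spin variables, and both are in the business of finding or sampling its low-energy states. A minimal sketch, with made-up biases and couplings (not the connectivity of any real device):

```python
import itertools

# Ising-style energy shared by AQO problems and Boltzmann machines:
#   E(s) = sum_i h_i * s_i + sum_(i<j) J_ij * s_i * s_j,  s_i in {-1, +1}
# The biases h and couplings J below are invented for illustration.
h = [0.5, -0.2, 0.1]
J = {(0, 1): -1.0, (1, 2): 0.8}

def energy(s):
    e = sum(h[i] * s[i] for i in range(len(s)))
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

# Brute-force search over all 2^3 spin configurations -- the ground
# state is what an adiabatic optimizer is asked to find physically.
states = itertools.product([-1, 1], repeat=len(h))
ground = min(states, key=energy)
print(ground, energy(ground))  # ground state (-1, -1, 1), energy ≈ -2.0
```

In a Boltzmann machine the very same E(s) defines a sampling distribution proportional to exp(−E/T) rather than a single minimum, which is why the two pictures map onto each other so naturally.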

I think that superconducting processors are offering us a wonderful gift here, and yet we refuse to take advantage of it. Why? The reasons are a little hard to stomach.

It seems that while we’d rather just get going building some cool things, we end up spending a large proportion of our time and effort debating issues like whether or not quantum computers can solve NP-complete problems in polynomial time. What?! I hear you say. My answer: exactly. Obscure as they are, these questions are the most important thing in the world to a vanishingly small fraction of the population. Yet these seemingly lofty theoretical debates are casting a shadow over the use of superconducting QC architectures in every area of research, including things like novel implementations of hardware-based neural nets, which may prove to be an even more fruitful avenue than optimization.

It will take a large amount of financing to commercialize general-purpose superconducting processors, and an unimaginably large effort on behalf of the scientific researchers and engineers who devote their time to trying to progress this technology. The step from fundamental research to commercialisation cannot and will not work in an academic environment. Why? Because in order to fabricate integrated circuits of any kind you need investments of hundreds of millions of dollars in foundries. Robust and scalable technologies can only be realised in an industrial setting. Sure, small-scale systems can be realised in many labs, but there are no industrial uses for devices produced in this setting: anything they demonstrate can be outperformed by a standard computing system. And it will stay that way until we realise that we need to take a risk as a society and really invest now in a technology of the future. I’ve seen distressing things happening at RSFQ roadmapping meetings. The conclusion of the roadmapping somewhat tragically boils down to ‘make another roadmap’, because there are no applications beyond a few incredibly specialised circuits that can be realised within the scope of government research grants (~$10K–5M; and by the way, they aren’t very reproducible either). There is no funding on the necessary scale, and therefore whatever money is put into this distributed and broken field rather swiftly evaporates, even though, considered cumulatively, it may have been enough to get a single, well-focused effort going.

So, my plea to the academic world is to realise that there comes a point where quarrelling over things should be secondary to our solidarity as a community who really want to see something achieved. I’d rather try and fail than sit smugly in my ivory tower proclaiming that something is 20, or 50, years away (read: therefore not my problem, guys!). Day after day, people (with little or no knowledge of the field) exclaim that these things will never work, that we are not building anything useful, that our results are fraudulent, that our devices are vapour-ware… etc.

This is more than cautious scientific scepticism; this is sheer hatred. It seems very unusual and, from my point of view, very detrimental to the entire field. I’d first advise such people to read some of our peer-reviewed papers to get their facts straight. I’d then say the following:

If those working in the fields of superconducting electronics, flux qubits, and quantum information really cared about the future of a technology that they had helped show to have huge theoretical promise, they would rally along with us rather than spitefully burning their own bridges. As a community, we should promote the potential of our research and try to convince whoever needs convincing that these endeavours, these ‘Manhattan’-style projects, are the only way in which we can bring disruptive technologies to fruition. And even if such projects fail, even if no other possible spin-off comes out of them, think about this: such projects cost about the same as a couple of ICBMs. I know which I’d rather have more of in this world.

BANG! The Universe Verse

I was asked to review this rather cute book:

BANG! The Universe Verse (Book I). The book is a portrayal of how the laws of Physics as we know them today arose in the short period of time after the Big Bang. It also explains how matter forms, and how nuclear fusion and stellar activity play a significant role in explaining why the Universe appears as it does at present.

But the cool thing about the book is that it is presented in a comic-book format, with two cute characters guiding you through the science. Here is an excerpt:

“The proton in the centre may not be alone
As another has access to this VIP Zone
The neutron may not be quite as attractive
But it is quiet, well mannered, and rarely reactive”

This would be great to read to kids 🙂

You can read the PDF version online or support the author and buy the book.

AQC / AQO video talk

Here is a video lecture that I gave a while ago about Adiabatic Quantum Computing and Adiabatic Quantum Optimization (specifically describing some cool things that you can do with D-Wave hardware) to my former colleagues at the University of Birmingham. This is a slightly higher-level talk than the previous ones I have posted. Thanks again to my kind colleague and good friend (soon to be Dr.) Dominic Walliman for editing and posting these videos!

The talk is entitled ‘Playing with adiabatic hardware: From designer potentials to quantum brains’, although it certainly isn’t quite as ‘brain’-focused as some of the previous talks I have given, heh 🙂

Here are the other parts (they should be linked from that one, but just in case people can’t find them):

AQC Part 2
AQC Part 3
AQC Part 4
AQC Part 5
AQC Part 6

P.S. I wasn’t trying to be mean to the gate model (or computer scientists, for that matter) – it just kinda happened…

P.P.S. Some of the notation is a bit off – the J’s should be K’s to be consistent with the literature, I believe…

Pointers toward the future of Quantum Computing…

I got quoted in this PhysicsWorld article:

Single atoms go transparent – PhysicsWorld, April 21st 2010

The article was written in response to an arXiv’ed paper from Nakamura’s group, who investigate superconducting qubits (usually of the charge variety), in this case coupled to a microwave transmission line:

Electromagnetically induced transparency on a single artificial atom

From the article:

“Making an opaque material transparent might seem like magic. But for well over a decade, physicists have been able to do just that in atomic gases using the phenomenon of electromagnetically induced transparency (EIT). Now, however, this seemingly magical effect has been observed in single atoms – and in “artificial” atoms consisting of a superconducting loop – for the first time.”

I made the point in the article that combining flying qubits (basically entangled photons), which have the advantages of long decoherence times and easy transportation of information, with solid-state implementations such as superconducting loops (good at storage and high-fidelity readout) would be a great step forward in the generation of scalable architectures for quantum computing. Of course, I am slightly more biased towards the adiabatic QC approach at the moment (go figure), but there’s no reason why these developments couldn’t lead to some awesome hybrid quantum circuits involving a combination of gate-model solid state, AQC solid state and flying qubits. We just need to figure out how to put all the pieces together.

Watch my IOP talk – Building Quantum Computers – now on YouTube

You may remember a while back I mentioned that I’d put the video of my IOP talk up online. Well, here it is. Thanks go to my kind colleague Dom for editing and posting these videos. Here is the first installment; I have posted links to the other six parts below. The talk is aimed at a general audience. It was given to a class of about 80 pupils aged 14–18, and their teachers, although it is suitable for anyone who is interested in Physics, superconductors, superconducting processors and quantum computing. I apologise that the question-and-answer session (in parts 6 and 7) is a little difficult to hear, as the room was not set up to record audio in this way.

I’ll be putting a permanent link to this talk in the Resources section at some point soon. The slides are already available there if anyone wishes to look at them in more detail. Comments and feedback appreciated… Enjoy!

Part 2
Part 3
Part 4
Part 5
Part 6
Part 7