Transhumanism and objectivity: An introduction

I have been involved in the transhumanism community for a fair while now, and I have heard many arguments from both proponents and skeptics of the ‘movement’. However, many of these arguments seem to stem from instinctive reactions rather than critical thinking. Transhumanism proponents will sometimes dogmatically defend their assumptions without considering whether what they believe is actually physically possible. The reasoning behind this is fairly easy to understand: Transhumanism promises escape from some of humanity’s deepest built-in fears. However, the belief that something of value will arise if one’s assumptions are correct can often leave us afraid to question those assumptions.

I would currently class myself as neither a proponent nor a skeptic of the transhumanism movement. However, I do love to explore and investigate the subject, as it seems to dance close to the very limits of our understanding of what is possible in the Universe. Can we learn something from analyzing the assumptions upon which this philosophical movement is based? I would answer not only yes, but that doing so yields one of the most exciting applications of the scientific method that we have encountered as a society.

I find myself increasingly drawn toward talking about how we can explore transhumanism from a more rational and objective point of view. I think all transhumanists should be obliged to take this standpoint, to avoid falling into a trap of dogmatic delusion. By playing devil’s advocate and challenging some of the basic tenets and assumptions, I doubt any harm can be done. At the least, those tenets and assumptions will have to be rethought. Moreover, we may find that the lessons learned from encountering philosophical green lights and stop signs inform the way we steer our engineering of the future.

I’ve thus decided to shift the focus of this blog a little towards some of these ideas. In a way I have already begun this shift: I have written a couple of essays and posts on the subject before, but from now on, expect to see a lot more of this. A blog format is an excellent way of disseminating information on this subject: It is dynamic, and can in principle reach a large audience. I also think that it fits in well with the Physics and Cake ethos – applying the principles of Physics to this area will form a large part of the investigations. And, of course, everything should always be discussed over coffee and a slice of cake! Another advantage is that this is something that everyone can think about and contribute to. You don’t need an expensive lab or a PhD in theoretical Physics to muse over these issues. In most cases, curiosity, rationality, and the patience to follow an argument are all that is necessary.

Humanity+ Conference 2010 @ Caltech

I gave a presentation yesterday at the H+ conference at Caltech. The session in which I spoke was the ‘Redefining Artificial Intelligence’ session. I’ll try to get the video of the talk, along with the slides, up here as soon as possible.

Other talks in this session were given by Randal Koene, Geordie Rose, Alex Peake, Paul Rosenbloom, Adrian Stoica, Moran Cerf and Ben Goertzel.

My talk was entitled ‘Pavlov’s AI: What do superintelligences really want?’ I discussed the foundations of AGI, and what I believe to be a problem (or at least an interesting philosophical gold-seam) in the idea of building self-improving artificial intelligences. I’ll be writing a lot more on this topic in the future, hopefully in the form of essays, blog posts and papers. I think it is very important to assess what we are trying to do in the area of AI and what the overall objectives are; looking at what we can build from an objective point of view is helpful in framing our progress.

The conference was livestreamed, which was great. I think my talk had around 500 viewers. Add to that the 200 or so in the lecture hall, and 700 is a pretty big audience! Some of the talks had over 1300 remote viewers. Livestreaming really is a great way to reach a much bigger audience than is possible with real-life events alone.

I didn’t get to see much of the Caltech campus, but the courtyard at the Beckman Institute, where the conference was held, was beautiful. I enjoyed the fact that coffee and lunch were served outside in the courtyard. It was very pleasant! Sitting around outside in L.A. in December was surprisingly similar to a British summer!

I got to talk to some great people. I enjoy transhumanism-focused conferences as the people you meet tend to have many diverse interests and multidisciplinary backgrounds.

I was very inspired to continue exploring and documenting my journey into the interesting world of AGI. One of the things I really love doing is looking into the fundamental science behind Singularity-focused technologies. I try to be impartial about this, giving an optimistic account of the promise of future technologies whilst maintaining a skeptical curiosity about whether such technologies are fundamentally possible, and about what roadmaps might lead to their successful implementation. So stay tuned for more Skepto-advocate Singularity fun!

New scheduling for ‘Thinking about the hardware of thinking’

I was scheduled to give a live virtual seminar, streamed to the Transvision conference in Italy on October 23rd. Unfortunately I was not able to deliver the presentation due to technical problems at the conference venue.

But the good news is, I will be giving the talk this weekend instead!

Here is the abstract (slightly updated, as the talk will be a little longer than originally planned):

Thinking about the hardware of thinking:
Can disruptive technologies help us achieve uploading?

S. Gildert,
Teleplace, 28th November 2010
10am PST (1pm EST, 6pm UK, 7pm continental EU).

We are surrounded by devices that rely on general purpose silicon processors, which are mostly very similar in terms of their design. But is this the only possibility? As we begin to run larger and more brain-like emulations, will our current methods of simulating neural networks be enough, even in principle? Why does the brain, with 100 billion neurons, consume less than 30W of power, whilst our attempts to simulate tens of thousands of neurons (for example in the Blue Brain Project) consume tens of kW? As we wish to run computations faster and more efficiently, we might need to consider whether the design of the hardware that we all take for granted is optimal. In this presentation I will discuss the recent return to a focus upon co-design – that is, designing specialized software algorithms running on specialized hardware – and how this approach may help us create much more powerful applications in the future. As an example, I will discuss some possible ways of running AI algorithms on novel forms of computer hardware, such as superconducting quantum computing processors. These behave entirely differently to our current silicon chips, and help to emphasize just how important disruptive technologies may be to our attempts to build intelligent machines.
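(A small illustrative aside from me, not part of the talk abstract: one concrete way to picture this kind of co-design is that a superconducting annealing processor natively minimises a quadratic energy function over binary variables – a so-called QUBO. The software half of the co-design is then recasting your learning problem into that form. Here is a minimal, hypothetical Python sketch; the tiny QUBO instance and the brute-force ‘solver’ standing in for the real hardware are both invented purely for illustration.)

```python
import itertools

# Toy QUBO (Quadratic Unconstrained Binary Optimisation) instance.
# A QUBO asks for the binary vector x minimising
#     E(x) = sum over (i, j) of Q[i, j] * x_i * x_j,
# which is the kind of energy function a quantum annealing processor
# minimises natively. The weights below are arbitrary, invented
# purely for illustration.
Q = {
    (0, 0): -1.0,   # bias (linear term) on variable 0
    (1, 1): -1.0,   # bias (linear term) on variable 1
    (2, 2):  2.0,   # bias (linear term) on variable 2
    (0, 1):  0.5,   # coupling between variables 0 and 1
    (1, 2): -2.0,   # coupling between variables 1 and 2
}

def qubo_energy(bits, Q):
    """Evaluate E(x) for a candidate binary assignment."""
    return sum(w * bits[i] * bits[j] for (i, j), w in Q.items())

def brute_force_minimise(Q, n):
    """Stand-in for the specialised hardware: try all 2**n states."""
    return min(itertools.product((0, 1), repeat=n),
               key=lambda bits: qubo_energy(bits, Q))

if __name__ == "__main__":
    best = brute_force_minimise(Q, n=3)
    print("lowest-energy state:", best, "energy:", qubo_energy(best, Q))
```

On a real annealing chip, the weights in Q would be programmed into physical couplings between qubits, and the dynamics of the device, rather than an exhaustive search, would settle into low-energy states – which is exactly the hardware/software match that co-design is after.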

Here is a link to the Teleplace announcement.

Hope to see you there!

Transvision2010 presentation: Thinking about the hardware of thinking

I will be giving a presentation at Transvision2010, which takes place next weekend. The talk will be about how we should consider novel computing substrates on which to develop AI and ASIM (advanced substrate independent minds) technologies, rather than relying on conventional silicon processors. My main example will be that of developing learning applications on Quantum Computing processors (not entirely unpredictable!), but the method is generalisable to other technologies such as GPUs, biologically based computer architectures, and so on.


The conference is physically located in Italy, but unfortunately I cannot make it in person, as I will be attending another workshop. I will therefore be giving the talk remotely via the teleconferencing software Teleplace.

Anyway, here is some information about the talk, kindly posted by Giulio Prisco:

Thinking about the hardware of thinking:
Can disruptive technologies help us achieve uploading?

ASIM-2010 – not quite Singularity but close :)

So I’ll post something about the Singularity Summit soon, but first I just wanted to talk a little about the ASIM-2010 conference that I helped organise along with Randal Koene.

The main idea of the conference was to hold a satellite workshop to the Singularity Summit, with the purpose of sparking discussion around the topics of Substrate Independent Minds. See the carboncopies website for more information on that! Ah, I love the format of blogging. I’m explaining what happened at a workshop without having introduced the idea of what the workshop was trying to achieve or what our new organisation actually *is*. Well, I promise that I’ll get round to explaining it soon, but until then it will have to be a shadowy unknown. The carboncopies website is also in the process of being filled with content, so I apologise if it is a little skeletal at the moment!

One thing we tried with the workshop was to combine a real-life and a virtual-space component. It was an interesting experience trying to bring together VR and IRL. In a way it was very fitting for a workshop based around the idea of substrate-independent minds: here we were, part of the way to substrate-independent speakers! I am hoping that this will inspire more people to run workshops in this way, which will force the technology to improve.

I was very pleased to see so many people turning out. We had about 30 people in meatspace and another 15 or so in virtual space on both days. Giulio Prisco has some nice write-up material about the workshops, including PICTURES and VIDEOS! Here are the links to his posts:

General overview
First day in detail
Second day in detail

For a first attempt, I don’t think that things went too badly! The technology wasn’t perfect, but we gave it a good try. The main problem was with the audio. Teleplace, the conferencing software we were using, works well when everyone is online with a headset and mic; there are no feedback problems. However, when you try to include an entire room as one attendee, it becomes a little more tricky.

This could be improved in one of two ways: everyone in the room could have headsets and mics, with a mixer combining all the input into a single Teleplace attendee; or everyone in the room could individually log into Teleplace with their own headsets and mics. *Make* that Singularity happen, ooh yeah! (/sarcasm)

H+ Summit 2010 @ Harvard

I’m currently at Harvard listening to some pretty awesome talks at the H+ Summit. I always really enjoy attending these events; the atmosphere is truly awesome. So far we have had talks about brain preservation, DIY genomics, neural networks, robots on stage, AI, consciousness, synthetic biology, crowdsourcing scientific discovery, and lots lots more.

The talks are all being livestreamed, which is pretty cool too. I can’t really describe the conference in words, so here are some pictures from the event so far:

Audience pic:

General overview:

Here is a picture of Geordie’s talk about D-Wave, quantum computing and Intelligence:

Here is a picture of me next to the Aiken-IBM Automatic Sequence Controlled Calculator Mark I. This thing is truly amazing, a real piece of computing history.

The MIT museum was also really cool 🙂 More news soon!

Physics post-singularity…

As post-singularitarians, we should ask: what will happen to scientific discovery after the inevitable continued progress of automation? Will physicists be needed at all?

I think that there really won’t be much point in studying Physics post-singularity, as expert systems and AIs will be able to advance scientific development and understanding, put forward theories and perform experiments themselves. Or rather, there won’t be much point in studying it as humans. It would be like studying weaving, except there won’t be a market for quaint, slightly wobbly ‘hand-made theories’ and ‘hand-analysed data’ in the same way that people look for beads on strings at craft-fayres in country barns.

I’m afraid to say that the first guys out will be the theorists. Theorem-proving machines are already gaining traction, and I don’t think it will be long before these systems become commonplace, once Artificial General Intelligence (AGI) gets even a foothold in this area. These kinds of systems, which are given the chance to play about in a theoretical, platonic realm of logic and inference, constitute a nice step between AI and AGI, and they are likely to be exploited as such. Our beautiful theoretical physics will be demoted to a mere rung on the ladder of systems trying to become more ‘human’ and less like theoretical physicists!
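(To make ‘theorem-proving machines’ a little more concrete – this is my own toy illustration, not a reference to any particular system: such a machine mechanically checks every inference step of a formal proof. Here is what a trivial machine-checked statement looks like in a modern proof assistant, Lean; the theorem name is mine, and the proof simply reuses the library lemma Nat.add_comm.)

```lean
-- A toy machine-checked proof (illustrative only): the proof
-- assistant mechanically verifies that addition of natural numbers
-- is commutative, by reusing the library lemma Nat.add_comm.
-- No human checking of the inference steps is required.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Scaling from toy lemmas like this up to genuinely new theoretical physics is, of course, exactly the leap I am speculating about here.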

Coming a close second will be the computational physicists. I mean, they already need expert systems even to start! Their only hope is to cling on to the ability to test their simulations against the real world – which an automated system might find tricky.

I think that experimentalists will be the last to go, as they interact the most closely with the material world… These brave soldiers will hold out the longest in the trenches of this war, fighting the new breed of scientist with their ability to gracefully control and manipulate the most delicate of scientific instruments – instruments which are indeed the lower-sentience predecessors of Intelligent Scientific Equipment. In fact, experimentalists might even hang around long enough to see the merging of humans with machines, in which case they can probably keep their jobs throughout 🙂

I think that the process of writing papers could probably be automated sooner than we think. But if our scientific equipment really does start to out-investigate us, who will read the papers? The equipment will just quietly and democratically communicate information across a global network, with each piece of kit having an instantly updated database of the current ‘cutting edge’ results. The system will know everything. It will have the capability to search and analyse every single paper ever written on a scientific subject. It will have a deduction and inference engine that allows it to leapfrog any literature-savvy scientist in an instant, even with limited AGI capabilities. Such machines would not need to go to conferences; all necessary information can be communicated almost effortlessly over the network. Peer review will probably still happen (systems will compare results), but it will be done via distributed computing, and it will be democratic – there’s no need for one system to be ‘better’ than another, as these machines don’t care about getting first author on that Nature paper. They care about sharing the information and producing a model of the Universe that best describes it. This can be hardwired into their algorithms. It will be their very raison d’être. They can become perfect scientists within their experimental capabilities (which will improve over time too).

2050: Researchers carrying piles of paperwork from their offices. (Many never did go paperless. Most of the yellowing documents are still from the 1990s.)

Curious-2010-researcher: What’s happening? Why are you guys leaving? You’re eminent scientists! Pinnacles of wisdom and knowledge!

2050-research-group: Well, the University had to make cuts…

Curious-2010-researcher: How could you guys ever let this happen?

2050-research-group: You can’t get a grant these days unless you’ve got the latest Auto-Prof-300. Nothing can analyse data quite like it! It doesn’t take tea breaks or lunch. It doesn’t claim conference expenses. It uses less electricity than our computers did, not to mention the savings on pens and envelopes! It even writes our papers for us, heh! There’s just no use for us anymore. But it’s OK, we have a plan. We’re going to start a museum 🙂

What is the point of all this rambling? Well, I just thought I’d explain one of the reasons why I’m interested in AI and AGI. I think that we can develop AGI as a tool for improving the way we do Physics. As for the consequences of that, well, I am not in a position to judge. Technology is agnostic; it will provide advantages and disadvantages for the human race as we know it. But being a physicist, one is always looking for better ways to be a physicist, and to do better Physics. I feel that the best way to do Physics is to build machines to do Physics for us. After all, we’re good at building machines to do things for us. I also believe that there are fundamental reasons why we are no longer best placed as agents in this environment to do Physics. I feel that we are approaching somewhat of a ‘ceiling’ regarding our own ability to understand the way in which the universe operates.

Hopefully this lighthearted and somewhat tongue-in-cheek post will be the forerunner to some more posts about how machines such as Auto-Prof-300 can actually be realised. I’ll also talk a bit more about why I believe this ‘ceiling’ to our understanding exists.