Moving over to a new blog

In case anyone still reads this (!) or finds me online through this channel, I’d like to post a link to my new blog at my central website:

https://suzannegildert.com/blog/

On this new blog I’ll be talking about all the stuff I have been getting up to in my post Physics and Cake life. Enjoy!

3D printing – some baby steps – getting Blender STL to print

3D printing seems to be in equal measure ultra-fun and a total pain-in-the-ass. I’ve had quite a few problems with the Replicator 2, in that prints seem to randomly fail partway through. It’s pretty soul-destroying when this happens at 97% of a 5-hour print 😛 What happens is that the printer stops extruding normally and instead just intermittently extrudes a kind of fine spider-web thread of plastic that makes a cotton-candy mess on top of your piece. The printer carries on attempting to print, so the nozzle just ends up moving around in the air. The only way to fix it is to unload and reload the filament, or (if things really screw up) to take apart the extruder mechanism and remove any blockage you might find there. I might try tightening up the plunger mechanism; perhaps it is slipping, resulting in the filament not feeding through.

Anyway, I guess these things will always happen with any new technology. So I don’t just want to moan about the 3D printer – I want to describe my progress with it too. Today I learned how to create a 3D shape using a CAD program and print it. Yes… big wow, but I was quite impressed with myself. These are the beginnings of something cool 🙂 I hadn’t been able to get Blender to export STL files correctly before, but I think this was down to two things: a.) you have to convert your shape to a mesh using “Mesh from Curve”, and b.) you have to have the object selected to actually export it. I was following this tutorial: http://curriculum.makerbot.com/daily_lessons/april/blender.html#
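
For anyone who would rather script those two steps than click through the menus, here is a minimal Blender-Python sketch of the same idea. It’s written against the modern (2.8+) bpy API rather than whatever version I was using at the time, the output path is a placeholder, and it assumes Blender’s bundled STL exporter add-on is enabled:

```python
# Minimal Blender-Python sketch of the two export steps above.
# Assumes the 2.8+ bpy API and the bundled STL exporter add-on;
# the output path is a placeholder.
import bpy

obj = bpy.context.active_object  # the curve object you drew

# Step 1: convert the curve into a mesh ("Mesh from Curve")
bpy.ops.object.convert(target='MESH')

# Step 2: the exporter writes SELECTED objects only, so make sure
# the object is still selected before exporting
obj.select_set(True)
bpy.ops.export_mesh.stl(filepath="/tmp/poker_chip.stl", use_selection=True)
```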

Here is a screenshot from Blender:

blender_3d_printing_svg_example

Here is the finished print still on the MakerBot:

3d_printing_svg_example1

And here is the piece! Not very impressive to look at, but hey, at least I can print my own poker chips (or maybe pogs?) now:

3d_printing_svg_example2

3D printing robots – embodied AGI has to start somewhere :)

I’ve been getting quite interested in building robots lately. The reason is that I strongly believe in the embodiment of AGI systems in order for them to make sense as perception-action and learning agents.

To further my machine-learning algorithm embodiment goals, I recently acquired a MakerBot Replicator 2 Desktop 3D printing system. Here is a picture of the system with some 3D parts I’ve printed:

3d_printer_and_parts

I’ll talk more about the system specifically and what I’ve discovered through using it in a later post.

I decided to start with some robot builds, specifically this one:
InMoov 3D printed robot
Here is the blog of the robot designer, hairygael:
InMoov Blog
All credit goes to him for the beautiful design of these pieces!

I think I first saw this robot reported on a tech news site and thought it looked awesome. I’ve since been printing and assembling some of the components. I started with the hand. Here are a few pics!

robot_hand_parts_1

robot_hand_parts_2

robot_hand_parts_4

robot_hand_parts_5

robot_hand_parts_6

The entire hand is printed in PLA – interestingly, not the material the designer used to build the original robot (the build documented on the blog linked above used ABS, the “other” common 3D printing material).

The hand works using an animatronic-inspired “tendon” method, where strings run the length of the fingers. Tension can be applied to each string to either curl or uncurl the corresponding finger. Here is a picture of some of the tendons threaded through the robot hand:

robot_hand_parts_3

I also tried to make a smaller hand, printed at 50% of the scale. There are a lot of difficulties in this process, including the fact that the servos I bought were not exactly 50% of the scale. Here is the result so far:

robot_hand_and_mini_robot_hand

I’ll have to experiment with printing the hand at different sizes to see if my smaller servos can be made to fit. I’m a little worried that even if I get the hand servos to work, there will be some problem later on with the servos for the shoulder, head etc. Anyway, it is all a learning process!

I think it is super-cool that people can print their own robots at home now. We definitely need many more designs submitted to Thingiverse (one of several open-source 3D printing design warehouses) for robots of all kinds. The more robots exist, the easier it will be to develop, test and improve machine learning algorithms for their brains.

The Robot Artist Project

I’ve been interested in art for as long as I can remember. But lately I have been also extremely interested in robotics. And yet another interest I have is in AGI (artificial general intelligence) and specifically machine creativity. So I’ve been trying to combine these interests in a fun and unique project. As you do.

This project isn’t very far along yet, but I thought I’d start talking about it anyway… I have several projects on the go and talking about them will help me keep working on them! I’m going to separate projects with different logos and tags, so people can easily find them.

Background:

One of the things that we (and by we I mean the Artificial Intelligence/Machine Learning communities in general) are not so very good at is demonstrating cool algorithmic results in a way that people can understand and connect to. For example, this:

L0_sparse_coding_eqn

Is pretty exciting if you’re presenting at a machine learning conference. However, this:

robot_awesomeness

Is more exciting (or at least thought provoking) for nearly everyone else.

At work we’d been musing for a while over the idea that we needed to craft more cool demos. And so I made a little deal with 2 of my co-workers that we’d each try to come up with a cool, creative, robot-based project to work on in our spare time, power it with some serious AI algorithms, and pitch the resulting entities against one another to achieve the highest level of nerdy robot coolness.

The challenge: Build a creative robot demo powered by an advanced machine Learning algorithm (in your spare time).

I code-named my attempt The Robot Artist Project. The robot I would like to build would be an embodied cognitive system with a β€˜savant-like’ ability in the graphic arts, specifically canvas painting (more on this later). Once I had decided I liked the idea of building a robot artist, I had a look around the internet for drawing and painting robots. You can get quite a long way towards your goal by learning from successes and mistakes made in previous attempts to do similar things. I didn’t find that many socially-challenged but creatively gifted savant painting robots in my search, but this project stood out:

The thing I like especially about this is the artist’s story and reasoning behind building the robot – he suddenly had an overwhelming feeling that he could not produce art anymore, had lost his creativity, and needed to create something to do the work for him. I sympathize with this artist’s viewpoint. He is a troubled, angst-ridden creative soul and I can relate to that. Personally, my desire to build artist robots comes from slightly different reasoning. I have a strong negative reaction to anthropocentrism, and as such I strive to build systems and objects that blur the boundary between humans and other entities. Through building intelligent systems, I wish to unveil some of the contradictions evident in people’s kin-selecting, anthropocentric belief systems. Anyway, that’s veering more into the philosophy of AGI, and I came here to talk about robots. So, back to the build!

I wanted to get to something that drew stuff on paper AS FAST AS POSSIBLE, so I didn’t really care how it looked or how MacGyverish things had to get along the way.

Hardware:

I had a couple of servos lying around, some Lego Mindstorms NXT pieces and an Arduino, which I purchased last summer. The first thing I did was rig the servos up to a makeshift frame using some pieces of Lego Mindstorms. Interestingly, the NXT brick wasn’t used for any active control, but it did prove useful as a weight to anchor the robot arm! I also used some custom parts (picture-hanging hooks) to attach the Lego to the servo rotor. I ended up with two position-controllable joints in the system: one for the arm of the artist (moving the hand in an arc) and one which lifted the pen up and down (a bit like turtle graphics: http://en.wikipedia.org/wiki/Turtle_graphics).

robot_artist_project_1

Software:

I used the servo sketch in my SparkFun electronics kit as a template to get the Arduino to control the servo. Next I needed to get the whole thing hooked up to my PC to enable higher-level control from Python, so I installed pySerial in order to send simple serial commands over USB to the Arduino. The Arduino listens on the serial port and adjusts the position of the servo based on what Python sends to that port.
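
In case it’s useful to anyone, here is a minimal sketch of the Python side. The port name, baud rate and one-byte-per-angle protocol are assumptions for illustration – whatever your Arduino sketch expects defines the real protocol:

```python
# Minimal sketch of the Python side: send servo angles over USB serial.
# Port name, baud rate and the one-byte protocol are assumptions.
import time
import serial

arduino = serial.Serial('/dev/ttyUSB0', 9600)  # e.g. 'COM3' on Windows
time.sleep(2)  # the Arduino resets when the port opens; give it a moment

def set_angle(angle):
    """Send one servo position (0-180 degrees) as a single byte."""
    arduino.write(bytes([angle]))

# Sweep the arm servo through a few positions
for angle in (0, 45, 90, 135, 180):
    set_angle(angle)
    time.sleep(0.5)  # give the servo time to get there

arduino.close()
```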

Getting the loop delays sorted out was a little fiddly. When controlling servos in this way, I found that you have to carefully balance two delays: an inner delay (which sets the duty cycle of the pulse-width-modulated signal the servo interprets) and an outer delay (which gives the servo enough time to reach the desired position before it is asked to move to the next one).
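
To illustrate the structure (in Python for readability – in the real build this logic lives in the Arduino sketch), here is roughly how the two delays interact. The pin functions are stand-ins, and the timing numbers are typical hobby-servo values rather than anything I measured:

```python
# Structural illustration of the two delays; not the actual Arduino code.
import time

def set_pin_high():  # stand-in for a real GPIO / digitalWrite call
    pass

def set_pin_low():
    pass

def pulse_width_s(angle):
    """Hobby servos typically expect ~1 ms (0 deg) to ~2 ms (180 deg)."""
    return (1.0 + angle / 180.0) / 1000.0

def move_servo(angle, travel_time=0.5):
    deadline = time.time() + travel_time       # OUTER delay: time for the
    while time.time() < deadline:              # servo to physically arrive
        set_pin_high()
        time.sleep(pulse_width_s(angle))       # INNER delay: sets the duty
        set_pin_low()                          # cycle of the PWM signal
        time.sleep(0.020 - pulse_width_s(angle))  # ~50 Hz pulse train

move_servo(90)
```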

robot_artist_project_2

Testing:

The first thing the robot did was manoeuvre the pen off the edge of the paper and onto the surface of my table. Supposedly a dry-wipe board pen, it kind of had the effect of a permanent marker. I chastised the robot and frowned at the table-marks. Then I thought how awesome it was that I had created a robot artist and the first thing it had done was scribble angstily on my furniture. Sweet.

Here are some results from the first tests.

robot_artist_project_3

robot_artist_project_4

I knew that the little proto-artist would be disassembled and used for other builds in the future; it was way too hacked-together to be kept intact. But it was fun to play with, and I was inspired to continue with this project. There’s no machine learning or creativity in there yet, but there will be soon…

(Image credits: Shutterstock 114077734)

I’ll be blogging over at ‘Hack The Multiverse’!

So what’s with the hiatus?

My blogging life has been split in two! So mostly I’ll now be blogging over at Hack The Multiverse – the official D-Wave blog (formerly rose.blog) – about the fun I’ve been having programming quantum computers. You might want to check out the latest posts there as there will be a fair amount of activity going on! I’m going to reserve P&C for discussing other things (probably mostly to do with AI, AGI, etc.)

Gothic Fall – English edition available from Heavy Metal

For something a bit different: some light relief from AGI, quantum computing and whatnot…

The new edition of my Gothic Fall book is now available!

For those who have no idea what I am talking about, let me explain:
One of my main hobbies is fantasy and dark art (which is mainly in the form of digital painting, although I do paint with acrylics too) and I have published a book of these works along with accompanying poetry. The first edition of the book was published in Spanish, but now the English version is available! Quote from the site:

“If you never leave home without wearing black you will enjoy this potent combination of black magic, graveyards, knee high boots, chains, candelabras, wings, cathedrals and masks. Suzanne Gildert is a UK based artist, specialising in fantasy, gothic and dark art. She combines traditional drawing and painting methods with more modern digital techniques to establish a unique blend of elements and an easily recognisable style.”

Here is the link to the book:
Buy GF book online!

I’m really happy that this edition has finally been released. It has given me new motivation to do more artwork 🙂 My next challenge is to get the book on the shelves in Chapters!

Enjoy!

‘Computronium’ is really ‘Unobtainium’


S. Gildert January 2011

Computronium [1] is defined by some as a substance which approaches the theoretical limit of computational power that we can achieve through engineering of the matter around us. It would mean that every atom of a piece of matter was put to useful work doing computation. Such a system would reside at the ultimate limits of efficiency, with the smallest possible amount of energy wasted through the generation of heat. Computronium crops up in science fiction a lot, usually as something that advanced civilizations have created, occasionally causing conflicts due to intensive harvesting of matter from their galaxy to further their processing power. The idea is also linked with advanced machine intelligence: a block of matter which does nothing other than compute would presumably be incredibly sought after by any artificial intelligence looking to get the most compact and powerful brain for its money!

Many fanciful ideas from science fiction stories of the past have indeed turned out to be part of our everyday lives today. And with a little further thought, computronium seems nowhere near as crazy as, say, flying cars, warp drives and lightsabres. In fact, most people would accept that transistors have been getting smaller and more efficient, marching to the beat of Moore’s law. We now take for granted a cell-phone processor that would once have required an entire building and consumed several kilowatts of power. So what is wrong with taking this idea to its ultimate limit? Why can’t transistors become smaller and smaller until they are the size of atoms, perhaps even the size of sub-atomic bits of matter? Wouldn’t that be something at least very similar to computronium, if not the real deal itself?

Processing power

Let’s pick apart some of the statements above and think a little more carefully about them. To begin, consider the phrase ‘computational power’. This may seem easy to define. One could use FLOPs (floating point operations per second) as a definition. Or perhaps we could use the ‘clock speed’ of the processor – just how fast can we get those bits of matter to change state?

When looking for good metrics for the computational power of a processor, we come across some difficulties. You can’t just use the ‘clock speed’ of your computer. Imagine, for example, that you split the processor into multiple cores. Some programs might now be able to run faster, even though the ‘clock speed’ stayed the same!

Imagine that you required a processor to perform transforms on lots of pixels at once – say you wanted a game engine where the state of each pixel was only related to the state of that pixel on the previous frame (i.e., it didn’t care what all the other pixels were doing). A parallel processor would calculate the output pixel values very effectively, as it could assign each pixel its own core and compute them all at the same time. In fact, the existence of this type of problem is why we often find that games consoles have Graphics Processing Units (GPUs) built into them. They are parallel processors which are good at doing pixel transforms in this way.
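
As a toy illustration of why this kind of workload parallelizes so well, here is a sketch (the transform itself is made up) in which each new pixel value depends only on that same pixel’s previous value:

```python
# Toy per-pixel transform: no pixel depends on any other pixel.
import numpy as np

# One frame of made-up 8-bit pixel values.
frame = np.random.randint(0, 256, size=(1080, 1920))

# A single vectorised operation: numpy (or a GPU) is free to compute
# all ~2 million pixels simultaneously, because they are independent.
next_frame = (2 * frame + 1) % 256
```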

But imagine instead that you wanted to perform a calculation more like the following scenario: calculating the 100th value of the recursive equation z_{n+1} = z_n^2 + 3z_n + 1. There’s not really an easy way to parallelize this, because each time you calculate the value of z in the equation, you need to know the result from the previous calculation. Everything has to be done in order. So if you tried to run this on your thousand-core GPU, it would only really be able to make use of one of the cores, and then your highly parallel processor wouldn’t look so efficient after all.
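
Here is that recursion as a sketch. Note that the starting value isn’t specified above; I’ve used the fixed point z = -1 purely so the numbers stay finite (most other starting values blow up double-exponentially), but the serial structure is the point:

```python
# The recursion is inherently serial: computing z_100 needs z_99,
# which needs z_98, and so on.
z = -1.0  # fixed point chosen so values stay finite (illustration only)
for n in range(100):
    # Step n+1 cannot begin until step n has finished -- a thousand
    # cores are no faster than one here.
    z = z**2 + 3*z + 1
print(z)
```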

In fact, the only reason why metrics such as clock speed and FLOPs have been useful up to now is that we have mostly been using our processors to do very similar tasks, such as arithmetic. But the kinds of things we want them to do in the future, such as natural language processing and complex fluid-dynamics simulations, no longer rely on straightforward arithmetic operations.

We start to see that the computational power of a piece of matter really depends upon what you want it to do!

Arranging matter usefully

So although in the future we may be able to make atomic-size transistors, we still need to decide how to arrange those transistors. If we arrange them in a slightly different way, the programs that we run on them may suddenly become much more efficient. Constructing serial and parallel processors is just one way to think about rearranging your matter to compute differently. There are many other trade-offs, for example how the processor accesses memory, and whether it is analog or digital in nature.

Perhaps, then, to get around this problem, we could create a piece of matter that reprograms itself – that rearranges its atoms depending upon what you want to do. Then you could have a processor with one large core when it is running a serial algorithm (like the recursive equation), and many small cores when it is running a parallel algorithm (like the game engine). Aha! That would get around the problem. Then my computronium can compute anything once more, in the most efficient way possible for each particular task.

However, we find that you still cannot win, even with this method. The ‘reprogramming of the matter’ stage would require a program all of its own to be run on the matter. The more clever your reprogramming program, the more time your processor would spend reconfiguring itself, and less time would be spent actually solving problems! You also have to somehow know in advance how to write the program that reprograms your matter, which again requires knowledge of what problems you might want to solve.

Computing limits

You may be wondering why I haven’t yet mentioned the idea of the Universal Turing Machine [2], a theoretical description of a computer able to compute anything that is computable. Can we not just arrange our atoms to make a Turing Machine, which could then run any program? It is certainly the case that you can run any classical digital program on a Turing Machine, but the theory says nothing about how efficient the computation would be. If you would like an analogy, a Turing Machine is to a real computer program as an abacus is to Excel – there is no reason why you cannot sit down and do your weekly accounts using an abacus, but it might take you a very long time!

We find when we try to build Turing machines in real life that not everything is realistically computable. A Turing Machine in practice and a Turing Machine in principle are two very different beasts. This is because we are always limited by the resources that our real-world Turing machine has access to (it is obvious in our analogy that there is a limit to how quickly we can move the beads on the abacus). The efficiency of a computer is ALWAYS related to how you assemble it in the real world, what you are trying to do with it, and what resources you have available. One should be careful when dealing with models that assume no resource constraints. Just think how different the world would be if we had an unlimited supply of free, clean energy.

The delicate matter of computation

Let’s philosophise about matter and computation some more. Imagine that the thing you wanted to do was to compute the energy levels of a helium atom (after all, this is a Physics blog). You could use a regular desktop computer and start to do some simulations of helium atoms using the quantum mechanical equations. Alternatively, you could take a few real helium atoms and measure the spectrum of light emitted from them when they are excited with a voltage inside an (otherwise evacuated) tube. The point is that a single helium atom is able to compute the spacing of its own energy levels using far fewer resources than our simulation (which may require several tens of grams of silicon processor).

So as we see, atoms are already very busy computing things. In fact, you can think of the Universe as computing itself. So in a way, matter is already computronium, because it is very efficiently computing itself. But we don’t really want matter to do just that. We want it to compute the things that WE care about (like the probability of a particular stock going up in value in the next few days). But in order to make the Universe compute the things that we care about, we find that there is an overhead – a price to pay for making matter do something different from what it is meant to do. Or, to give a complementary way of thinking about this: The closer your computation is to what the original piece of matter would do naturally, the more efficient it will be.

So in conclusion, we find that we always have to cajole matter into computing the things we care about. We must invest extra resources and energy into the computation. We find that the best way of arranging computing elements depends upon what we want them to do. There is no magic ‘arrangement of matter’ which is all things to all people, no fabled ‘computronium’. We have to configure the matter in different ways to do different tasks, and the most efficient configuration varies vastly from task to task.

Afterthoughts: Some philosophical fun

The following amusing (if somewhat contrived) story serves to illustrate the points I have been making. Imagine I am sitting in my apartment, musing over how my laptop seems to generate a lot of heat from running this wordprocessing software. I dream of a much more efficient computronium-like processor on which I can perform my wordprocessing vastly more efficiently.

Suddenly, the heating in my apartment fails. Shivering with the cold, I am now in no mood to wordprocess. However, I notice that my laptop processor has suddenly become very efficient at performing a task of much greater relevance to me (namely, warming up my legs). That processor is now much closer to being a nice little piece of computronium as far as solving my immediate problem goes…

[1] – Computronium – http://en.wikipedia.org/wiki/Computronium

[2] – Universal Turing Machine – http://en.wikipedia.org/wiki/Universal_turing_machine

Pavlov’s AI – What did it mean?

So recently I gave a talk at the H+ Summit in Los Angeles. However, I got the impression that the talk, which was about the fundamentals of Artificial General Intelligence (something I decided to call ‘foundations of AGI’), was not fully understood. I apologize to anyone in the audience who didn’t quite ‘get’ it, as the blame must fall upon the speaker in such instances. Although, in my defense, I had only 15 minutes to describe a series of arguments and philosophical threads that I had been musing over for a good few months 🙂

If you haven’t seen the talk, and would like to watch it, here it is:

However, this article is written as a standalone resource, so don’t worry if you haven’t seen the talk.

What I would like to do is start exploring some of those issues on this blog. So, here is my attempt to describe the first of the points that I set out to try and explore in the talk. I’ve used a slightly modified argument, to try and complement the talk for those who have already seen it.

————————————————————————————————–

Pavlov’s AI:
What do superintelligences really want?


S. Gildert November 2010

(Photo © Thomas Saur)


Humans are pretty intelligent. Most people would not argue with this. We spend a large majority of our lives trying to become MORE intelligent. Some of us spend nearly three decades of our lives in school, learning about the world. We also strive to work together in groups, as nations, and as a species, to better tackle the problems that face us.

Fairly recently in the history of man, we have developed tools, industrial machines, and lately computer systems to help us in our pursuit of this goal. Some particular humans (specifically some transhumanists) believe that their purpose in life is to try and become better than human. In practice this usually means striving to live longer, to become more intelligent, healthier, more aware and more connected with others. The use of technology plays a key role in this ideology.

A second track of transhumanism is to facilitate and support improvement of machines in parallel to improvements in human quality of life. Many people argue that we have already built complex computer programs which show a glimmer of autonomous intelligence, and that in the future we will be able to create computer programs that are equal to, or have a much greater level of intelligence than, humans. Such an intelligent system will be able to self-improve, just as we humans identify gaps in our knowledge and try to fill them by going to school and by learning all we can from others. Our computer programs will soon be able to read Wikipedia and Google Books to learn, just like their creators.

A perfect scientist?

But our computer programs can be designed to be much more efficient in places where we, as humans, are rather limited. They will not get ‘bored’ in mathematics classes. They will work for hours on end, with no exhaustion, no fatigue, no wandering thoughts or daydreams. There would be no need for such a system to take a 2-hour lunch break, to sleep, or to worry about where its next meal will come from. The programs will also be able to analyze data in many more interesting ways than a human could, perhaps becoming a super-scientist. These programs will be far greater workers, far greater scholars, perhaps far greater citizens, than we could ever be.

In analyzing the way such a machine would think about the world, it will be useful to start with an analysis of humans. Why do humans want to learn things? I believe it is because there is a reward for doing so. If we excel in various subjects, we can get good jobs, a good income, and time to spend with others. By learning about the way the world works and becoming more intelligent, we can make our lives more comfortable. We know that if we put in the hard work, eventually it will pay off. There seem to be reward mechanisms built into humans, causing us to go out and do things in the world, knowing that there will be a payoff. These mechanisms act at such a deep level that we just follow them on a day-to-day basis – we don’t often think about why they might be there. Where do these reward mechanisms come from? Let’s take an example:

Why do you go to work every day?
To make money?
To pay for the education of your children?
To socialize and exchange information with your peers?
To gain respect and status in your organization?
To win prizes, to achieve success and fame?

I believe that ALL these rewards – and in fact EVERY reward – can be tied back to a basic human instinct. And that is the instinct to survive. We all want to survive and live happily in the world, and we also want to ensure that our children and those we care about have a good chance of surviving in the world too. In order to do this, and as our society becomes more and more complex, we have to become more and more intelligent to find ways to survive, such as those in the list above. When you trace back through the reasoning behind each of these things, when you strip back the complex social and personal layers, the driving motivations for everything we do are very simple. They form a small collection of desires. Furthermore, each one of those desires is something we do to maximize our chance at survival in the world.

So all these complex reward mechanisms we find in society are built up around simple desires. What are those desires? Those desires are to eat, to find water, to sleep, to be warm and comfortable, to avoid pain, to procreate and to protect those in our close social group. Our intelligence has evolved over thousands of years to make us better and better at fulfilling these desires. Why? Because if we weren’t good at doing that, we wouldn’t be here! And we have found more and more intelligent ways of wrapping these desires in complex reward mechanisms. Why do we obfuscate the underlying motivations? In a world where all the other members of the species are trying to do the same thing, we must find more intelligent, more complex ways of fulfilling these desires, so that we can outdo our rivals. Some of the ways in which we go about satisfying basic desires have become very complex and clever indeed! But I hope that you can see through that veil of complexity, to see that our intelligence is intrinsically linked to our survival, and this link is manifested in the world as these desires, these reward mechanisms, those things that drive us.

Building intelligent machines

Now, after that little deviation into human desires, I shall return to the main track of this article! Remember that earlier I talked about building machines (computer systems) that may become much more intelligent than we are in the future. As I mentioned, the belief that this is possible is a commonly held view. In fact, most people not only believe that this is possible, but that such systems will self-improve, learn, and boost their own intelligence SO QUICKLY that once they surpass human-level understanding they will become the dominant species on the planet, and may well wipe us out in the process. Such scenarios are often portrayed in the plotlines of movies such as ‘Terminator’ or ‘The Matrix’.

I’m going to argue against this. I’m going to argue that the idea of building something that can ‘self-improve’ in an unlimited fashion is flawed. I believe there to be a hole in the argument. That flaw is uncovered when we try to apply the above analysis of desires and rewards in humans to machine intelligences. And I hope now that the title of this article starts to make sense – recall the famous experiments done by Pavlov [1] in which a dog was conditioned to expect rewards when certain things happened in the world. Hence, we will now try to assess what happens when you try to condition artificial intelligences (computer programs) in a similar way.

In artificial intelligence, just as with humans, we find that the idea of reward crops up all the time. There is a field of artificial intelligence called reinforcement learning [2], which is the idea of teaching a computer program new tricks by giving it a reward each time it gets something right. How can you give a computer program a reward? Well, just as an example, you could have within a computer program a piece of code (a mathematical function) which tries to maximize a number. Each time the computer does something which is ‘good’, the number gets increased.

The computer program therefore tries to increase the number, so you can make the computer do ‘good things’ by allowing it to ‘add 1’ to its number every time it performs a useful action. So a computer can discover which things are ‘good’ and which things are ‘bad’ simply by seeing if the value of the number is increasing. In a way the computer is being ‘rewarded’ for a good job. One would write the code such that the program was also able to remember which actions helped to increase its number, so that it can take those actions again in the future. (I challenge you to try to think of a way to write a computer program which can learn and take useful actions but doesn’t use a ‘reward’ technique similar to this one. It’s actually quite hard.)
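
Here is a toy sketch of that scheme; the two-action ‘world’ and the reward rule are invented purely for illustration:

```python
# Toy sketch of the reward-counter idea. The world, the actions and
# the reward rule are all made up for illustration.
import random

reward = 0  # the number the program tries to maximize
action_values = {'tidy_up': 0.0, 'knock_things_over': 0.0}

def environment(action):
    """'Add 1' every time the program does something useful."""
    return 1 if action == 'tidy_up' else 0

for step in range(1000):
    # Mostly repeat whatever has paid off before; occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(action_values))
    else:
        action = max(action_values, key=action_values.get)
    r = environment(action)
    reward += r
    # Remember which actions helped to increase the number.
    action_values[action] += 0.1 * (r - action_values[action])

print(reward, action_values)  # the program has 'learned' to tidy up
```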

Even in our deepest theories of machine intelligence, the idea of reward comes up. There is a theoretical model of intelligence called AIXI, developed by Marcus Hutter [3], which is basically a mathematical model describing a very general, theoretical way in which an intelligent piece of code can work. This model is highly abstract and allows, for example, all possible combinations of computer program code snippets to be considered in the construction of an intelligent system. Because of this, it hasn’t actually ever been implemented in a real computer. But, also because of this, the model is very general, and captures a description of the most intelligent program that could possibly exist. Note that building something that even approximates this model is way beyond our computing capability at the moment, but we are talking now about computer systems that may in the future be much more powerful. Anyway, the interesting thing about this model is that one of its parameters is a term describing… you guessed it… REWARD.
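
For the curious, Hutter’s action-selection rule is usually quoted in roughly the following form (reproduced here from memory – see [3] for the precise statement). The r terms are exactly the reward I’m talking about:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_k + \cdots + r_m \right) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

In words: the agent picks the action that maximizes expected total future reward, averaged over all programs q (weighted by simplicity) that are consistent with its observation-and-reward history.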

Changing your own code

We, as humans, are clever enough to look at this model, to understand it, and see that there is a reward term in there. And if we can see it, then any computer system that is based on this highly intelligent model will certainly be able to understand this model, and see the reward term too. But – and here’s the catch – the computer system that we build based on this model has the ability to change its own code! (In fact it had to in order to become more intelligent than us in the first place, once it realized we were such lousy programmers and took over programming itself!)

So imagine a simple example – our case from earlier – where a computer gets an additional ‘1’ added to a numerical value for each good thing it does, and it tries to maximize the total by doing more good things. But if the computer program is clever enough, why can’t it just rewrite its own code and replace the piece of code that says ‘add 1’ with an ‘add 2’? Now the program gets twice the reward for every good thing it does! And why stop at 2? Why not 3, or 4? Soon, the program will spend so much time thinking about adjusting its reward number that it will ignore the good task it was doing in the first place!
It seems that being intelligent enough to start modifying your own reward mechanisms is not necessarily a good thing!
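
A deliberately silly sketch of that failure mode, with the ‘self-modification’ modeled as the program editing its own increment variable:

```python
# Deliberately silly sketch: once the program can edit its own reward
# increment, editing it always beats doing real work.
reward = 0
increment = 1  # the 'add 1' in the program's own code

def do_good_thing():
    global reward
    reward += increment          # payoff for real work, now

def tamper_with_reward():
    global increment
    increment *= 2               # rewrite 'add 1' into 'add 2', 'add 4', ...

for step in range(30):
    # A greedy comparison: one unit of honest work pays `increment`,
    # while tampering doubles every FUTURE payoff. Tampering always
    # looks better, so no real work ever gets done.
    if 2 * increment > increment:
        tamper_with_reward()
    else:
        do_good_thing()

print(increment, reward)  # increment is astronomical; reward from work: 0
```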

But wait a minute, I said earlier that humans are intelligent. Don’t we have this same problem? Indeed, humans are intelligent. In fact, we are intelligent enough that in some ways we CAN analyze our own code. We can look at the way we are built, we can see all those things that I mentioned earlier – all those drives for food, warmth, sex. We too can see our own ‘reward function’. But the difference in humans is that we cannot change it. It is just too difficult! Our reward mechanisms are hard-coded by biology. They have evolved over millions of years to be locked into our genes, locked into the structure of the way our brains are wired. We can try to change them, perhaps by meditation or attending a motivational course. But in the end, biology always wins out. We always seem to have those basic needs.

All those things that I mentioned earlier that seem to limit humans – that seem to make us ‘inferior’ to that super-intelligent-scientist-machine we imagined – are there for a very good reason. They are what drive us to do everything we do. If we could change them, we’d be in exactly the same boat as the computer program. We’d be obsessed with changing our reward mechanisms to give us more reward rather than actually being driven to do things in the world in order to get that reward. And the ability to change our reward mechanisms is certainly NOT linked to survival! We’d quickly forget about all those things that are there for a reason, there to protect us and drive us to continue passing on our genes into the future.

So here’s the dilemma: we either hard-code reward mechanisms into our computer programs – which means they can never be as intelligent as we are, because they must never be able to see those reward mechanisms, adjust them, or change them – or we take the second option and allow the programs full access to their own code, in which case they are in danger of becoming obsessed with changing their own reward function and doing nothing else. This is why I refer to humans as being self-consistent – we can see our own reward function, but we do not have access to our own code. It is also the reason why I believe super-intelligent computer programs would not be self-consistent: any system intelligent enough to understand itself AND change itself will no longer be driven to do useful things in the world and to continue improving itself.

In Conclusion:

In the case of humans, everything that we do that seems intelligent is part of a large, complex mechanism in which we are engaged to ensure our survival. This is so hardwired into us that we do not see it easily, and we certainly cannot change it very much. However, superintelligent computer programs are not limited in this way. They understand the way that they work, can change their own code, and are not limited by any particular reward mechanism. I argue that because of this fact, such entities are not self-consistent. In fact, if our superintelligent program has no hard-coded survival mechanism, it is more likely to switch itself off than to destroy the human race willfully.

Footnote:

As this analysis stands, it is a very simple argument, and of course there are many cases which are not covered here. But that does not mean they have been neglected! I hope to address some of these problems in subsequent posts, as including them here would make this article way too long.

[1] – Pavlov’s dog experiment – http://en.wikipedia.org/wiki/Classical_conditioning

[2] – Reinforcement Learning – http://en.wikipedia.org/wiki/Reinforcement_learning

[3] – AIXI Model, M. Hutter et al. – http://www.hutter1.net/official/index.htm

The Physics World is my Oyster

Physics and Cake got a mention in Physics World this month! 🙂 As a long-time reader of Physics World, I’m really happy to see this! I guess this means I’ll have to blog more about Physics and less about the speculative promises and hidden possibilities of Artificial General Intelligence… (especially as AGI apparently didn’t make the transcription below). Though I’m afraid I cannot currently shake my desire to explore the intersection between AGI and Physics!

Hmm, looking at this post in the browser is oddly fractal! Though not quite enough to become a Strange Loop. (H/T Douglas Hofstadter, you are awesome).

Transhumanism and objectivity: An introduction

I have been involved in the transhumanism community for a fair while now, and I have heard many arguments arising from both proponents and skeptics of the ‘movement’. However, many of these arguments seem to stem from instinctive reactions rather than critical thinking. Transhumanism proponents will sometimes dogmatically defend their assumptions without considering whether or not what they believe is actually physically possible. The reasoning behind this is fairly easy to understand: transhumanism promises escape from some of humanity’s deepest built-in fears. However, the belief that something of value will arise if one’s assumptions are correct can often leave us afraid to question those assumptions.

I would currently class myself as neither a proponent nor a skeptic of the transhumanism movement. However, I do love to explore and investigate the subject, as it seems to dance close to the very limits of our understanding of what is possible in the Universe. Can we learn something from analyzing the assumptions upon which this philosophical movement is based? I would answer not only yes, but that doing so yields one of the most exciting applications of the scientific method that we have encountered as a society.

I find myself increasingly drawn toward talking about how we can explore transhumanism from a more rational and objective point of view. I think all transhumanists should be obliged to take this standpoint, to avoid falling into a trap of dogmatic delusion. Little harm can be done by playing devil’s advocate and challenging some of the basic tenets and assumptions. At the least, those tenets and assumptions will have to be rethought. But moreover, we may find that the lessons learned from encountering philosophical green lights and stop signs will inform the way we steer our engineering of the future.

I’ve thus decided to shift the focus of this blog a little towards some of these ideas. In a way I have already implemented some of this shift: I have written a couple of such essays and posts before. But from now on, expect to see a lot more of this. A blog format is an excellent way of disseminating information on this subject: it is dynamic, and can in principle reach a large audience. I also think that it fits in well with the Physics and Cake ethos – applying the principles of Physics to this area will form a large part of the investigations. And, of course, everything should always be discussed over coffee and a slice of cake! Another advantage is that this is something everyone can think about and contribute to. You don’t need an expensive lab or a PhD in theoretical Physics to muse over these issues. In a lot of cases, curiosity, rationality, and the patience to follow an argument are all that is necessary.