AGI-10 Monday

Last day of the AGI-10 conference. As usual, the live-blogging attempt failed 🙂 but that was to be expected. I blame the tiny netbook keyboard, which makes it very hard to type. I also found myself taking quite a lot of notes instead.

So what have I learnt from this conference? (I’ll probably go into all these ideas in much more detail in subsequent posts, but for now I’m just putting down some thoughts.)

* AGI is a young field, with many disputes and disagreements, which makes the conference both interesting and useful.

* People seem very passionate about the subject, which manifests both as optimism about the field and as fierce debates over the problems anticipated, and those already being encountered.

* There is a wide range of people here with very diverse backgrounds. I’ve spoken to computer scientists, physicists, mathematicians, philosophers, neuroscientists, software programmers, entrepreneurs, and many others.

* There is an interesting split between the theoretical (understanding, defining and bounding what AGI is) and the experimental (building candidate systems). It actually strikes me as being similar to the QIP (quantum information processing) community, except that QIP has had about 20 extra years for the theory to race ahead of experimental verification. I worry that the same might happen to AGI.

* There is another split, a bit more subtle, between those who believe that bio/brain-inspired investigation can help push AGI forward, and those who believe it won’t – or worse, that it might push the field backward by ‘distracting’ researchers from other promising areas.

* The major problem is that people still can’t agree on a definition of intelligence, or even on whether such a definition exists or could exist.

* There is also a problem in that the people actually trying to build systems do not know which cognitive architectures will support full AGI, so lots of people are trying lots of different architectures – basically ‘stamp collecting’ – until more rigorous theories of cognitive architecture emerge. Most of the architectures currently in use are bio-inspired.

* There were a few presentations that I thought were much closer to narrow AI than AGI, especially on the more practical side. I guess this is to be expected, but I didn’t get the feeling that generalization of these techniques was being pursued with much vigour.


4 thoughts on “AGI-10 Monday”

  1. […] AGI-10, the third conference on artificial general intelligence, is over. Read about it here and here. […]

  2. Ben Zaiboc says:

    Any indication of anyone trying to emulate an evolutionary process on something really simple, like a model of a jellyfish or hydra nerve net, with a bunch of increasingly difficult fitness functions that echo a more and more challenging environment that includes other increasingly complex critters?

    That’s how biology did it, so it seems likely that we could fast-forward the same process in a virtual environment.

    • physicsandcake says:

      In short: not at this conference. The worry is that it wouldn’t be generalisable. Many people weren’t even convinced that an approach inspired by the biological human brain would help…

  3. Ben Zaiboc says:

    Yes, I’ve seen this before. I just don’t understand the attitude that studying the /only known example/ of a phenomenon won’t help us to understand it. It seems there are a lot of people keen to invent it from first principles, without actually knowing what it is they’re trying to invent.
