Life Logging – an urge to create a sparse but useful dataset?

I have a strange urge to life-log, which I am unable to explain. Since I was very young, I have always kept a journal of some form or another. More recently I have moved to a digital journal format.

In my very young days, I would keep a diary because I was told to. Later, in my high single-digit and early teen years, I would keep a diary because it was somewhere I could write private thoughts, fulfilling the role of what some might have thought of as an ‘imaginary friend’ – I very much talked to my journal in a ‘dear diary’ style, as though it understood my concerns about the world.

My recent reasons for logging have generally been that I’m very busy, I’m enjoying life, and I’m doing a lot of things. I find it great to read back over my journal entries and relive the experiences. I especially like comparing the anticipation of an event with the memories of how it went and what I learnt from it. It really reinforces the idea that events you’re nervous about are never as bad as you expect. It can be a really insightful thing to do.

Last weekend I spent a lot of time scanning old photographs into digital format. It’s amazing how each photograph opens up an entire set of memories, thoughts, and feelings. I’m also scanning my entire back archive of paper artwork (hundreds of A4 and A5 images). I like the idea of having all this stuff in a digital format so that it can eventually be tagged and given a proper semantic referencing system, once an appropriate framework for this kind of thing is developed.

However, I have a slightly more practical (and somewhat more controversial) reason for lifelogging, which I would like to explore in the next few years (or maybe decades).

Creating an upload from an extended lifelog

I like the idea of creating an AI that could take all this data and infer things from it. It could perhaps infer what kind of a person I was, and what kind of a person I am now. It might be a useful dataset for an AGI trying to understand human development, or developing itself.

An even more interesting idea is to create a virtual version of yourself by giving it access to all this information and a timeline. (You’d effectively be giving it memories).

One lifelogging technique currently in vogue is recording your entire set of experiences using an on-person video camera with built-in audio. However, I feel that this method has its flaws. The stream obviously only records external input. Ideally you would have a technique which also monitors streams such as internal reasoning, understanding, feelings, and personal thoughts. Some of this could be recorded automatically via secondary effects, for example heart rate, hormone levels, and blood sugar levels. But even those techniques just can’t capture that oh-so-elusive personal subjective experience.

Journal keeping is one way to get around this problem, but you have to learn to write your journal in a very specific way. Something like “I listened to some music today” is pretty information-lean, whereas “I listened to song X today and it made me feel rather melancholy, because it reminded me of the time when I first heard it: I was doing Y, and that inspired me to draw this piece of artwork, Z. Now every time I hear that piece of music I’m inspired to create more artwork” is far richer. In addition, I think that tagging stuff will be easier in text and image formats than in a video stream.
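To make that concrete, here’s a sketch (in Python, purely as an illustration) of what a machine-readable, taggable entry might look like. The schema is entirely hypothetical – no such framework exists yet:

```python
# A sketch of what a machine-readable journal entry might look like.
# The schema is entirely hypothetical -- no such framework exists yet.
journal_entry = {
    "date": "2010-03-08",
    "text": ("Listened to song X today. It made me feel melancholy, "
             "because it reminded me of doing Y when I first heard it, "
             "which inspired artwork Z."),
    "tags": ["music", "memory", "melancholy", "artwork"],
    "links": {"song": "X", "activity": "Y", "artwork": "Z"},
}

# Even a simple tag query becomes possible once entries are structured:
if "artwork" in journal_entry["tags"]:
    print(journal_entry["links"]["artwork"])
```

This kind of query is exactly what’s hard to do with a raw video stream.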

A dream diary can also contribute to the dataset, as it could give an AI more data about how subjective experience during sleep differs from normal waking experience.

In short, there’s no way to create an exhaustive dataset, but a sparse one may still be useful. I guess I’ll continue doing it as long as I find it fun.

MQT Paper update… now in colour

Oh my goodness, this MQT paper is becoming a TOME….

So yesterday we had the red ink debacle, which spurred me to write the Paper Algorithm:

1.) Write paper
2.) Give to colleague
3.) Get returned
4.) Shake off excess red ink
5.) Rework

Repeat steps 3-5 and hope for convergence.
Note this algorithm may have bad runtime scaling due to T(step 4) -T(step 3).
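Just for fun, here’s the same algorithm sketched in Python (everything here is invented, including the convergence criterion):

```python
import random

def paper_algorithm(max_iterations=10):
    """Steps 1-5 above, as code. Convergence is hoped for, not proven."""
    paper = "draft"                          # step 1: write paper
    for i in range(max_iterations):
        # steps 2-3: give to colleague, get it returned
        red_ink = random.random()            # litres of red ink received
        print(f"iteration {i}: shaking off {red_ink:.2f} litres of red ink")
        paper = "reworked " + paper          # steps 4-5: shake off, rework
        if red_ink < 0.05:                   # hope for convergence
            return paper
    raise RuntimeError("no convergence -- runtime dominated by step 4")

paper_algorithm()
```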

A friend of mine tried to suggest some further steps involving the journal submission process, but unfortunately those kinds of delightful games are beyond my event horizon at the moment!

Here is a picture of the red ink debacle (which, by the way, looks worse now as I’ve covered it in my own rebuttals of, and agreements with, the arguments – in black ink, I hasten to add).

Anyway, the new version of the document is much better for the corrections, but I fear it may have to be streamlined, as it’s already packing 7 pages of Physics awesomeness… and I’m wondering about going further into the details of thermal escape from the washboard potential. Maybe I shouldn’t do that.

Let there be cake…

Our weekly cake club has achieved elevated status. It is now possible to find ‘Cake seminars’ on the Talks@Bham page, which means our cake club agenda is not only publicly available, but anyone from around the University can drop in if they notice the event…

…which is a bad thing, as I don’t think I baked enough cakes for everyone in the Uni. Anyway, here are the ones I baked for the weekly gathering…

Just in case you hadn’t heard…

I should probably announce this ‘officially’ on here, although I doubt there’s anyone who doesn’t already know or at least have a clue:

I’m going to be leaving the delightful world of academia to go and work for the even more delightful world of D-Wave Systems very soon. I’ll be working as an Experimental Physicist.

I’m very excited about this new job 🙂

I feel that academia has been an interesting and enjoyable experience in many respects, but in order to work on really big-picture, far-reaching projects one needs to move outside the University research-grant-oriented framework. I think I’ve mentioned what I call the ‘funding gap’ on this blog before – there is a lack of grant support to push research from a fundamental level to a commercial level in many areas of science. University schemes exist to help spin-off companies start up, but there is not always a smooth transition between research and market. In particular, there are cases where there is an inescapable need to really ENGINEER a technology to a commercial level, perhaps for several years.

I would class superconducting electronics and quantum computing as two such cases.

Superconducting processors get some competition?

EPFL and ETH (Switzerland) are undertaking a four-year project named CMOSAIC with the goal of extending Moore’s law into the third dimension:

The project page is here

And here’s an IBM write-up of the effort

Also see here for a nice schematic of the device

“Unlike current processors, the CMOSAIC project considers a 3D stack-architecture of multiple cores with a interconnect density from 100 to 10,000 connections per millimeter square. Researchers believe that these tiny connections and the use of hair-thin, liquid cooling microchannels measuring only 50 microns in diameter between the active chips are the missing links to achieving high-performance computing with future 3D chip stacks.”

Just my personal opinion of course… but… this seems like a case of treating the symptoms rather than finding a cure. Will bringing a microfluidic angle into Moore’s law really help us out?

Why do we put up with this kind of heating problem in the first place? One could, for example, consider an alternative investment in the development of reversible, low-dissipation superconducting electronics.

I guess the project will be interesting just from the point of view of 3D manufacturing and the incorporation of fluidics into microchips – this kind of technology could be indispensable for progress in areas such as lab-on-a-chip technology. But as far as raw processing power goes, this approach seems a bit like ignoring the elephant in the room.

Totally cool IC video…

If you’ve ever wondered how silicon chips are made, watch this video. It’s really cool:

Similar processes are used to make quantum computing chips, albeit with different materials. These semiconductor industry techniques have been developed over decades into the complex processes you see in this video. So it should be obvious why it is so hard to make reproducible quantum processors – we need to develop similarly complicated and well-controlled processes for the new technologies.

H/T Kostas Hatalis

Helium!!!

There’s a new club night in Birmingham called ‘Helium’:

The DJs sport names such as ‘Diesel’, ‘Vermin’ and ‘Yumbolt’. Now if I were spinnin’ choonz at this club I would call myself DJ Supersolid, but then again maybe they haven’t been keeping up to date with the latest developments in condensed matter physics.

The club has an ‘elements’ theme in general, with an Oxygen room, a Nitrogen room and a Carbon room. Cute. No Niobium room though, booo…

Beautiful AQUA@Home simulation results

There are some lovely simulation results coming out of D-Wave’s AQUA@Home project:

The project is designed to simulate what D-Wave’s adiabatic quantum optimization chips are doing during their quantum annealing cycles. The chips consist of coupled qubits which behave like an array of interconnected spins, in an Ising-type configuration. The spin system can be highly frustrated depending upon the sign and magnitude of the coupling between the spins and on the bias of each spin site. All possible configurations of the spins have an associated energy, and the chips try to find the configuration of the spins which minimises this energy.
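To give a flavour of what ‘finding the configuration which minimises this energy’ means, here’s a tiny brute-force sketch in Python. The couplings and biases are randomly invented, and the chips obviously don’t work by exhaustive search – this just shows the optimisation problem itself:

```python
import itertools
import numpy as np

n = 4                                                  # number of spins (qubits)
rng = np.random.default_rng(0)
J = np.triu(rng.choice([-1., 1.], size=(n, n)), k=1)   # couplings J_ij (i < j)
h = rng.choice([-1., 1.], size=n)                      # on-site biases h_i

def energy(s):
    """Ising energy E(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i."""
    s = np.asarray(s)
    return s @ J @ s + h @ s

# Exhaustively check all 2^n spin configurations for the lowest energy.
best = min(itertools.product([-1, 1], repeat=n), key=energy)
print("ground state:", best, "with energy", energy(best))
```

The number of configurations doubles with every added spin, which is why finding the minimum gets hard so quickly for frustrated instances.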

The AQUA project simulates these spin systems by using a Quantum Monte Carlo technique to calculate the energy gap between the ground state of the system (the lowest energy ‘solution’ to the problem) and the first excited state (next best solution) for some ‘ultra hard’ problems (which you can think of as ones where the degree of frustration can be very high). You can look at the results for 8, 16, 32, 48, 72 and 96 qubit problems here:

AQUA@Home results so far

I love looking at these minimum gap curves; their statistical properties are very interesting. For example, you often get two minima regions in the energy gap as the system is annealed. There is also some very fine structure in the curves. I wonder if any generalizations can be made by analysing the similarities and differences of many such curves as the number of qubits increases.
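For a handful of qubits you can even compute a gap curve like this yourself using exact diagonalisation instead of Quantum Monte Carlo. Here’s a rough Python sketch on a tiny, random problem instance of my own invention – nothing to do with the actual AQUA@Home code:

```python
import numpy as np
from functools import reduce

n = 4                                               # number of qubits
rng = np.random.default_rng(1)
J = np.triu(rng.uniform(-1, 1, size=(n, n)), k=1)   # random couplings
h = rng.uniform(-1, 1, size=n)                      # random biases

sx = np.array([[0., 1.], [1., 0.]])                 # Pauli matrices
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def op(single, site):
    """Embed a single-qubit operator at `site` in the n-qubit space."""
    return reduce(np.kron, [single if k == site else I2 for k in range(n)])

# Driver (transverse field) and problem (Ising) Hamiltonians
H_driver = -sum(op(sx, i) for i in range(n))
H_problem = (sum(J[i, j] * op(sz, i) @ op(sz, j)
                 for i in range(n) for j in range(i + 1, n))
             + sum(h[i] * op(sz, i) for i in range(n)))

# Sweep the annealing parameter s from 0 to 1 and print the spectral gap
for s in np.linspace(0, 1, 11):
    evals = np.linalg.eigvalsh((1 - s) * H_driver + s * H_problem)
    print(f"s = {s:.1f}   gap = {evals[1] - evals[0]:.4f}")
```

Of course, exact diagonalisation scales as 2^n, which is exactly why AQUA@Home needs Quantum Monte Carlo (and your spare CPU cycles) for the larger problems.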

You can help contribute to this project by downloading BOINC and donating spare CPU cycles (the program runs as a background task). The information for how to do this can be found under the Join AQUA@Home section on this page. Go, crowdsourced quantum science, go!!

AGI-10 Monday

Last day of the AGI-10 conference. As usual the live-blogging attempt failed 🙂 but that was kind of to be expected. I blame the tiny netbook keyboard, which makes it very hard to type. Additionally, I found myself taking quite a lot of notes.

So what have I learnt from this conference? (I’ll probably go into all these ideas in much more detail in subsequent posts, but for now I’m just putting down some thoughts.)

* AGI is a young field, with many disputes and disagreements, which makes the conference both interesting and useful.

* People seem very passionate about the subject, which manifests both as optimism about the field and as fierce debates over the problems anticipated and already being encountered.

* There is a wide range of people here with very diverse backgrounds. I’ve spoken to computer scientists, physicists, mathematicians, philosophers, neuroscientists, software programmers, entrepreneurs, and many others.

* There is an interesting split between the theoretical (understanding, defining and bounding what AGI is) and the experimental (building candidate systems). It actually strikes me as being similar to the QIP community, except QIP has had about 20 extra years for the theory to race ahead of the experimental verification. I worry that the same might happen to AGI.

* There is another split, which is a bit more subtle, between those who believe that bio/brain-inspired investigation can help push AGI forward, and those who believe it won’t – or, even worse, that it might cause the field to go backward by ‘distracting’ researchers who could otherwise be working on other promising areas.

* The major problem is that people still can’t agree on a definition of intelligence, or even on whether there is, or can be, one.

* There is also a problem in that the people actually trying to build systems do not know what cognitive architectures will support full AGI, so lots of people are trying lots of different architectures, basically ‘stamp collecting’, until more rigorous theories of cognitive architecture emerge. Some (most) of the current architectures that are being used are bio-inspired.

* There were a few presentations that I thought were much closer to narrow AI than AGI, especially on the more practical side. I guess this is to be expected, but I didn’t get the feeling that the generalization of these techniques was being pursued with vigour.

AGI-10 Friday session

Brilliant conference so far, and it’s only the first day (*just* a workshop session). We’ve had four ‘tutorials’ today:

Marcus Hutter
Hutter described the AIXI model, a theoretical best-case version of a goal-driven intelligent system.

Moshe Looks
Looks discussed program learning and ways in which this can be implemented. Program learning involves a system ‘discovering’ a program via altering its own code.

Ben Goertzel
Goertzel talked about developments of AI agents interacting with virtual environments versus robots interacting with real-world environments, and discussed the advantages and disadvantages of both approaches.

Randal Koene
Koene talked about whether or not we can find correlations between some of the concepts encountered in AI/AGI and some of the neuronal mechanisms occurring in the brain. The specific concept under scrutiny was reinforcement learning. Koene demonstrated that collections of neurons can be modelled as cortical minicolumns, and that by connecting these together in large groups one can simulate the same behaviour as is seen in trials where monkeys are given a reward/no-reward mechanism for recognising certain images.

Another thing I love is the lecture theatre/conference venue (the USI). There are power sockets at every seat and there is free, easy-to-connect wifi. I don’t think I’ve been to a single conference before where both those criteria have been fulfilled. Lunch is provided on-site, and coffee and snack breaks are scheduled often. Those are really the only things I ask of a conference venue, to be honest. Power, wifi, coffee, lunch.

Yesterday I had a great meal at one of the local Lugano pizzerias, and talked about AGI and what it actually means.

Anyway, more soon, it’s buffet-networking time now…