Physics post-singularity….

As post-singularitarians, what will happen to scientific discovery after the inevitable continued progress of automation? Will physicists be needed at all?

I think that there really won’t be much point in studying Physics post-singularity, as expert systems and AIs will be able to advance scientific development and understanding, put forward theories and perform experiments themselves. Or rather, there won’t be much point in studying it as humans. It would be like studying weaving, except there won’t be a market for quaint, slightly wobbly ‘hand-made theories’ and ‘hand-analysed data’ in the same way that people look for beads on strings at craft-fayres in country barns.

I’m afraid to say that the first guys out will be the theorists. Theorem-proving machines are already gaining traction, and I don’t think it will be long before such systems become commonplace, once Artificial General Intelligence (AGI) gets even a foothold in this area. These kinds of systems, given the chance to play about in a theoretical, platonic realm of logic and inference, constitute a nice step between AI and AGI, and they are likely to be exploited as such. Our beautiful theoretical physics will be demoted to a mere rung on the ladder of systems trying to become more ‘human’ and less like theoretical physicists!

Coming a close second will be computational Physicists. I mean, they need expert systems already to even start! Their only hope is to cling on to the ability to test their simulations against the real world – which an automated system might find tricky.

I think that experimentalists will be the last to go, as they interact the most closely with the material world… These brave soldiers will hold out the longest in the trenches of this war, fighting the new breed of scientist with their ability to gracefully control and manipulate the most delicate of scientific instruments. Instruments which are indeed the lower sentience predecessors of Intelligent Scientific Equipment. In fact, experimentalists might even hang around long enough to see the merging of humans with machines, in which case they can probably keep their jobs throughout 🙂

I think that the process of writing papers could probably be automated sooner than we think. But if our scientific equipment really does start to out-investigate us, who will read the papers? The equipment will just quietly and democratically communicate information across a global network, with each piece of kit having an instantly updated database of the current ‘cutting edge’ results. The system will know everything. It will have the capability to search and analyse every single paper ever written on a scientific subject. It will have a deduction and inference engine that allows it to leapfrog any literature-savvy scientist in an instant, even with limited AGI capabilities. Such machines would not need to go to conferences; all necessary information can be communicated almost effortlessly over the network. Peer review will probably still happen (systems will compare results), but it will be done via distributed computing, and it will be democratic – there’s no need for one system to be ‘better’ than another, and these machines don’t care about getting first author on that Nature paper. They care about sharing the information and producing a model of the Universe that best describes it. This can be hardwired into their algorithms. It will be their very raison d’être. They can become perfect scientists within their experimental capabilities (which will improve over time too).

2050: Researchers carrying piles of paperwork from their offices. (Many never did go paperless; most of the yellowing documents are still from the 1990s.)

Curious-2010-researcher: What’s happening? Why are you guys leaving? You’re eminent scientists! Pinnacles of wisdom and knowledge!

2050-research-group: Well, the University had to make cuts…

Curious-2010-researcher: How could you guys ever let this happen?

2050-research-group: You can’t get a grant these days unless you’ve got the latest Auto-Prof-300. Nothing can analyse data quite like it! It doesn’t take tea breaks or lunch. It doesn’t claim conference expenses. It uses less electricity than our computers did, not to mention the savings on pens and envelopes! It even writes our papers for us, heh! There’s just no use for us anymore. But it’s OK, we have a plan. We’re going to start a museum 🙂

What is the point of all this rambling? Well, I just thought I’d explain one of the reasons why I’m interested in AI and AGI. I think that we can develop AGI as a tool for improving the way we do Physics. As for the consequences of that, well, I am not in a position to judge. Technology is agnostic: it will provide advantages and disadvantages for the human race as we know it. But being a Physicist, one is always looking for better ways to be a physicist, and to do better Physics. I feel that the best way to do Physics is to build machines to do Physics for us. After all, we’re good at building machines to do things for us. I also believe that there are fundamental reasons why we are no longer the agents best placed to do Physics in this environment. I feel that we are approaching something of a ‘ceiling’ in our own ability to understand the way in which the universe operates.

Hopefully this lighthearted and somewhat tongue-in-cheek post will be the forerunner to some more posts about how machines such as Auto-Prof-300 can actually be realised. I’ll also talk a bit more about why I believe this ‘ceiling’ to our understanding exists.

MQT Paper update… now in colour

Oh my goodness, this MQT paper is becoming a TOME….

So yesterday we had the red ink debacle which spurred me to write the Paper Algorithm:

1.) Write paper.
2.) Give to colleague.
3.) Get returned.
4.) Shake off excess red ink.
5.) Rework.

Repeat steps 3-5 and hope for convergence.
Note this algorithm may have bad runtime scaling, since T(step 4) ≫ T(step 3).
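For the programmatically inclined, the same loop might look something like this in Python – a toy sketch only, with a made-up ink-reduction factor and convergence threshold:

```python
import random

def paper_algorithm(max_iterations=20):
    """Toy model of the paper-revision loop. All quantities are made up."""
    red_ink = 1.0  # fraction of the manuscript covered in corrections
    # Step 1: write paper (assumed done before entering the loop)

    for iteration in range(1, max_iterations + 1):
        # Steps 2-3: give to colleague, get returned covered in corrections
        red_ink *= random.uniform(0.4, 0.9)  # hopefully less ink each round

        # Step 4: shake off excess red ink (the slow part)
        # Step 5: rework
        print(f"Iteration {iteration}: red ink fraction = {red_ink:.3f}")

        if red_ink < 0.05:  # arbitrary convergence threshold
            print("Converged! Proceed to journal submission...")
            return iteration

    print("Failed to converge. Consider starting a museum.")
    return None

paper_algorithm()
```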

A friend of mine tried to suggest some further steps involving the journal submission process, but unfortunately those kinds of delightful games are beyond my event horizon at the moment!

Here is a picture of the red ink debacle (which, by the way, looks worse now as I’ve covered it in my own rebuttals of, and agreements with, the arguments – in black ink, I hasten to add).

Anyway, the new version of the document is much better for the corrections, but I fear it may have to be streamlined as it’s already packing 7 pages of Physics awesomeness… and I’m wondering about going further into the details of thermal escape from the washboard potential. Maybe I shouldn’t do that.

QIP 2010 – Further thoughts and CAKE!

So I’m back from QIP now, and full of chocolate. One might say I am maximally satisfied. However I didn’t have time to post this final update so I’ll do it now.

I really enjoyed two talks in the Thursday afternoon session. The first was by Roderich Moessner and the second by Julia Kempe. They were entitled:

“Random quantum satisfiability: statistical mechanics of disordered quantum optimization” and “A quantum Lovasz Local Lemma” respectively.

I enjoyed these talks because they weren’t completely abstract, even though the titles made them sound like they might have been. In particular, I liked the way that random, average and typical instances were considered.

The talks explored how the ‘hardness’ of a SAT problem changes as you increase the number of clauses relative to the number of variables – going from almost always satisfiable (easy), to possibly satisfiable (hard), to almost certainly unsatisfiable (easy again) – and what kind of phase transitions occur along the way. Entanglement can make some of the possibly-satisfiable instances easier, so effectively utilising quantum mechanics allows you to tighten the boundaries of the ‘region of hardness’.
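For anyone who fancies seeing the classical side of this for themselves, here is a rough Python sketch (my own toy example, not anything from the talks) that estimates the fraction of satisfiable random 3-SAT instances as the clause-to-variable ratio is increased. With only 10 variables and brute-force checking the transition is heavily smeared out, but the drop in satisfiability around a ratio of ~4.3 should still be visible:

```python
import itertools
import random

def random_3sat(n_vars, n_clauses):
    """Random 3-SAT instance: each clause has 3 distinct literals (var, sign)."""
    clauses = []
    for _ in range(n_clauses):
        variables = random.sample(range(n_vars), 3)
        clauses.append([(v, random.choice([True, False])) for v in variables])
    return clauses

def is_satisfiable(n_vars, clauses):
    """Brute-force satisfiability check - only sensible for small n_vars."""
    for assignment in itertools.product([True, False], repeat=n_vars):
        if all(any(assignment[v] == sign for v, sign in clause) for clause in clauses):
            return True
    return False

n_vars, trials = 10, 50
for ratio in [1.0, 2.0, 3.0, 4.0, 4.3, 5.0, 6.0]:
    n_clauses = int(ratio * n_vars)
    sat_fraction = sum(
        is_satisfiable(n_vars, random_3sat(n_vars, n_clauses)) for _ in range(trials)
    ) / trials
    print(f"clauses/variables = {ratio:.1f}: satisfiable fraction = {sat_fraction:.2f}")
```

The hardest instances (for complete solvers, at least) cluster around that transition.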

One final thought I had about the conference: I think QIP people need to think about Physics a bit more. Physics seems to underlie all these processes and ties them to the real world in some way. I found that quite a few people were advocating the point of view that Computer Science underlies Physics, but I believe this is the wrong way of looking at the problem. Physics is all we are really given; it is fruitful to remember this, and considering it once in a while might help keep you a little more grounded in reality.

Anyway, enough Physics, let’s talk about cake. I mentioned in a previous post the cake shop I found in Zurich called Cakefriends. Well, now I have pictures.

The cake that I chose was a heterostructure of deliciously thick cream (almost cheesecake thick) with interstitial poppy-seed sponge layers. To complete the unit cell there was some raspberry sauce around the outside of each sponge layer. It was served in a glass:

Here is a picture of me enjoying said cake. And yes, there were Physics discussions throughout the cakey experience, which should always be the case.

And a photo from the Cakefriends menu:

Yes. Yes we do.

Also thanks to this conference I finally understand the meaning of the complexity class qpoly. Thanks QIP for clearing this one up for me.

QIP 2010 – Day 1

Hmmm. I think I AM the only experimental physicist here 🙂

Still, I’ve absorbed a fair amount. Interestingly quite a few of the conference participants seem to be Theoretical Computer Scientists with only very slight inclinations towards the quantum, although I haven’t sampled a large set of conversations yet.

I had lunch with Ed Farhi and several other people working in the area of AQO/AQC. It was really interesting to discuss some of the open questions regarding the adiabatic quantum algorithm.

I really enjoyed Daniel Gottesman’s (Perimeter Institute) talk as he discussed SAT problems in spin systems, which are things that we can actually make and play with, and see if they are behaving quantum mechanically, so I’ll have to talk to people a bit more about that. I’m not sure if there’s any way to distinguish between extremely similar complexity classes such as ‘QMA’ and ‘QMAEXP’ experimentally using such systems, but it might be something worth thinking about.

I also finally met Quantum Moxie.

I really like the way that they have put chocolate bars on all the seats in the lecture theatre…

Anyway, more later.

Fridge surgery

Take a look at this picture:

mwahaha

Yes I am hacksawing a dilution refrigerator….

One of the entry ports to the IVC has been hard-soldered with a stainless steel placeholder bush. We need to replace this with our custom-made copper bush with feedthroughs for coaxes and DC lines. It is virtually impossible to remove this part given the small space around it, and we decided that we don’t want to put power tools nearby, lest we accidentally buzz through the dilution unit – which would be a bit like putting a scalpel through the jugular.

So hacksaw it is.

I wonder how many low temperature physicists have wanted to saw their dilution fridges in half before. Today I got to indulge in that pleasure. The results weren’t pretty at times:

that's gotta hurt

Although halfway through I started thinking ‘I hope this is the right part I’m sawing…’

Herding quantum cats

Two interesting arXiv papers this week:

Adiabatic quantum computation along quasienergies

A potentially new model of Quantum Computation, which is a discretized variant of Adiabatic Quantum Computation (AQC). Is it equivalent to the standard model? Is it useful? No-one knows.

This paper also got me thinking:

Electronic structure of superposition states in flux qubits.

How do you measure the cattiness of a flux qubit? Cattiness being defined as the ability of a system to exhibit quantum properties as it approaches a classical limit in terms of mass, size, or some other measure. The name comes from the question of whether or not it is possible to put an entire ‘Schrodinger’s cat’ into a macroscopic superposition of states.

I have been wondering about this problem with regard to flux qubits for a while. You might think it is possible just to ‘count’ the number of electrons involved in the Josephson tunneling, giving around 10^10 particles. But wait: the electrons all form a macroscopic state – do you count the condensate as a single particle instead? This paper argues that the actual cat state is somewhere between these two extremes. This is good news, because although the upper bound would have been cooler in terms of Macroscopic Quantum Coherence, the superconducting flux qubit might still be the ‘cattiest thing in town’.

I’m also wondering about the cattiness of nanomechanical resonators coupled to optical or microwave cavities. This system can be put into a superposition of two mechanical states relating to the position and motion of the atoms in the bar. For example, the ground state can be thought of as the fundamental harmonic of the bar (think of it like a guitar string), with an antinode in the centre, whereas the first excited state has a node in the centre and two antinodes at 1/4 and 3/4 of the way along the bar. But here we find a similar problem to that of the flux qubit: does the number of atoms in the bar matter?

For fun let’s calculate the number of atoms in a Niobium nanomechanical resonator:

Let’s say the mechanical bar is 20nm x 20nm x 1um.
The volume of the bar is therefore 4e-22 m^3
The density of Nb is 8.57 g/cm^3 = 8570 kg/m^3
The mass of the bar is therefore ~3.4e-18 kg
The atomic mass of Niobium is 92.906 amu = 1.54e-25 kg
The number of atoms in the bar is ~2.2e7

To check that value:
The atomic radius of a Nb atom is 142.9 pm = 0.1429 nm, so the atomic diameter is ~0.286 nm.
Across 20 nm there are ~70 atoms,
and along 1 um there are ~3500 atoms.
Therefore the bar contains roughly 70 x 70 x 3500 ≈ 1.7e7 atoms,

which is roughly the same as by the previous method.
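Here is the same back-of-envelope calculation as a few lines of Python, in case anyone wants to check or tweak the numbers (same crude cuboid geometry and standard constants as above):

```python
# Back-of-envelope: number of Nb atoms in a 20 nm x 20 nm x 1 um beam.
AMU = 1.66054e-27                   # kg per atomic mass unit

volume = 20e-9 * 20e-9 * 1e-6       # m^3, = 4.0e-22
density_Nb = 8570.0                 # kg/m^3 (8.57 g/cm^3)
mass_bar = density_Nb * volume      # ~3.4e-18 kg

m_Nb_atom = 92.906 * AMU            # ~1.54e-25 kg
n_atoms_by_mass = mass_bar / m_Nb_atom

# Cross-check: treat the atoms as spheres of diameter ~0.286 nm on a cubic grid
spacing = 2 * 142.9e-12             # atomic diameter in metres
n_atoms_by_packing = (20e-9 / spacing) ** 2 * (1e-6 / spacing)

print(f"mass of bar        ~ {mass_bar:.2e} kg")
print(f"atoms (by mass)    ~ {n_atoms_by_mass:.1e}")      # ~2.2e7
print(f"atoms (by packing) ~ {n_atoms_by_packing:.1e}")   # ~1.7e7
```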

So does that mean the ‘cattiness of the bar’ has an upper bound of ~2e7? This would make it more catty than the flux qubit. Or do you have to assign more (or fewer) than one ‘quantum degree of freedom’ per atom? It’s not as simple as with tunneling electrons, where the quantum state is determined by the direction of current flow around the loop. If anyone has any thoughts on this, they would be appreciated. Just what exactly are the quantum degrees of freedom here?

The bar is obviously constrained by its end points, albeit not ideally. The displacement of the bar may therefore behave more classically near the ends, or the wavefunction may extend into the supporting structure. This may affect the actual number of atoms in the superposition. What fraction of the length of the bar is behaving quantum mechanically?

Note that the masses of both the electron condensate in the flux qubit AND the nanomechanical bar are much lower than Penrose’s quantum mass limit of about 1e-8 kg – so we can’t test that hypothesis in the lab yet. This relates to a post I wrote a while ago about electrons in a lump of superconductor: there are enough electrons in a bulk sample for the mass to be greater than the Penrose limit, but they aren’t doing any useful quantum computation; you can’t put them into a well-defined superposition of states, for example. We need to ENGINEER and CONTROL these cat states…
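Putting rough numbers on that comparison (using the ~10^10 electron figure and the bar mass from above):

```python
m_electron = 9.109e-31               # kg
penrose_limit = 1e-8                 # kg, order-of-magnitude figure quoted above

condensate_mass = 1e10 * m_electron  # ~9e-21 kg for the flux qubit's electrons
bar_mass = 3.4e-18                   # kg, from the nanomechanical bar estimate

for name, m in [("flux qubit condensate", condensate_mass),
                ("nanomechanical bar", bar_mass)]:
    print(f"{name}: {m:.1e} kg, i.e. {m / penrose_limit:.0e} of the Penrose limit")
```

Both fall short of the limit by at least nine orders of magnitude.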

Anyhow, after that complicated Physics we are definitely in need of some cake:

[Photo: Mr Kipling French Fancies]

We had this type of cake yesterday (amongst others) to celebrate a colleague passing his PhD viva 🙂

Quantum Neural Networks 1 – the Superconducting Neuron model

I’m interested in Quantum Neural Networks, specifically how to actually build the things. Any input would be greatly appreciated on this one. This is open notebook science in an extreme sense: I’m discussing here something I’d like to go into eventually, it may be several years down the line, but it’s worth thinking about it in the meantime.

The first point I’d like to address is the Superconducting Neuron model – an approach which attempts to build real-life, biologically inspired neural nets from superconducting hardware. I’ll discuss some other approaches to utilising the ‘quantum’ aspect of QNNs more efficiently in subsequent posts; for now the discussion is limited to this one hardware model.

Here are some papers I’ve been reading on the subject:

Mizugaki et al., IEEE Trans. Appl. Supercond., 4, (1), 1994

Rippert et al., IEEE Trans. Appl. Supercond., 7, (2), 1997

Hidaka et al., Supercond. Sci. Technol., 4 (654-657), 1991

There are several advantages to using SC hardware to build NNs. The RSFQ framework makes it much easier to implement, for example, fan-in and fan-out systems. Flux pulses can correspond directly to nerve firings. The circuit elements dissipate much less power than their silicon counterparts. And you could simulate factors such as neurotransmitter levels and polarity using flux couplers and bias leads, which (I believe) is a much more natural way of doing things than trying to mimic this in semiconductor technology.
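To make the analogy a little more concrete, here is a deliberately crude software sketch of the idea. This is not a circuit simulation – junction dynamics, thermal noise and RSFQ timing are all ignored – just a toy threshold ‘neuron’ that emits an output pulse once the weighted sum of incoming flux pulses pushes its stored current over a critical value:

```python
class ToyFluxNeuron:
    """Crude integrate-and-fire analogy for a superconducting neuron.

    Incoming flux pulses accumulate as a circulating current; when it
    exceeds a 'critical current' the neuron emits one output pulse and
    resets. Coupling strengths play the role of synaptic weights, and
    their sign plays the role of neurotransmitter polarity. Purely
    illustrative - nothing here models real junction physics.
    """

    def __init__(self, couplings, critical_current=1.0, leak=0.9):
        self.couplings = couplings            # one weight per input line
        self.critical_current = critical_current
        self.leak = leak                      # crude decay of the stored current
        self.current = 0.0

    def step(self, input_pulses):
        """input_pulses: list of 0/1 flags, one per input line."""
        self.current = self.leak * self.current + sum(
            w * p for w, p in zip(self.couplings, input_pulses)
        )
        if self.current >= self.critical_current:
            self.current = 0.0                # reset after firing
            return 1                          # emit an output flux pulse
        return 0

neuron = ToyFluxNeuron(couplings=[0.4, 0.3, -0.2])
for t, pulses in enumerate([[1, 0, 0], [1, 1, 0], [1, 1, 1], [0, 1, 0]]):
    print(f"t={t}: inputs={pulses} -> output pulse={neuron.step(pulses)}")
```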

What I understand about this field so far: in the 1990s a couple of Japanese groups tried to demonstrate the principles of superconducting neuron circuits. They built a few, and they even worked up to a point. So what has happened to this research?

Four Problems

1.) Well, one school of thought is that the device tolerance is just not up to scratch. It is true that when you make Josephson junction circuits, the tolerances on the individual components tend not to be better than ~5%. However, is this really a problem? I can’t see that being the case; I’m sure biological neurons aren’t matched to anything like that precision either.

2.) Another potential problem is that research into neural networks generally has diminished (partly due to the so-called AI winter). If people using supercomputers can’t get their simulated neural networks to do anything *that* interesting, why bother with building the things in hardware? Such realizations would have far fewer neurons anyway! I guess the answer is that simulating superconducting circuits is still quite hard, and there could be some real advantages to building the things – similar to the reasons for building modern ASICs.

3.) A third problem is the device integration level. Even with the best fab facilities available, superconducting circuits can only be made to low-level VLSI (tens of thousands of junctions). Again my point is: well, why not try something on this scale? Unfortunately, cell libraries for RSFQ design probably don’t natively support the kind of realisations you need to build superconducting neurons (for example, you need a great deal of FAN-IN and FAN-OUT). So you’d probably have to go fully custom, but that’s just a design challenge.

4.) And then there’s a theoretical problem that has been bugging me for a while now. Although you can simulate any level of connectivity in highly abstracted models of NNs (given enough processing power and memory), if you actually want to build one, are you limited by the current 2-dimensional planar nature of the fabrication process? In a 3-dimensional interconnected system such as a real human brain, you are able to connect distant regions via UNIQUE, DIRECT links. In a 2D system, you are limited by the circuit layout and can (essentially) only make nearest neighbour connections. I’m pretty sure there’s a graph theory proof pinging somewhere around the edge of my mind here about connectivity in different-dimensional systems. The question is, does this limitation mean that it is theoretically impossible to build biologically inspired neural networks in planar hardware?
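One relevant result here is the planar graph bound: by Euler’s formula, a simple planar graph on n vertices can have at most 3n - 6 edges, whereas all-to-all connectivity needs n(n - 1)/2 links, so a strictly single-layer layout with no crossovers runs out of wiring room very quickly. A quick comparison (a trivial sketch, nothing specific to any fabrication process):

```python
def planar_edge_bound(n):
    """Maximum number of edges of a simple planar graph on n >= 3 vertices."""
    return 3 * n - 6

def complete_graph_edges(n):
    """Edges needed to connect every neuron directly to every other one."""
    return n * (n - 1) // 2

for n in [5, 10, 100, 1000, 10000]:
    needed = complete_graph_edges(n)
    allowed = planar_edge_bound(n)
    print(f"n = {n:>5}: all-to-all needs {needed:>9} links, "
          f"a planar layout allows at most {allowed:>6}")
```

Of course, real processes have a handful of wiring layers and crossovers are allowed, so the true constraint is softer than strict planarity – but the gap between the quadratic demand and the roughly linear supply remains.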

The field of RSFQ / Superconducting digital electronics is suffering from low funding at the moment, thanks to ‘lack of applications’ syndrome. The number of people investigating applications of RSFQ circuits and Josephson logic seems to be much lower than the number working on the fundamental Physics of the devices. It’s a problem with the way research is funded: no-one will fund mid-term technology development; it’s either fundamental Physics or applications breakthroughs.

There may well be research being done in this area that I am unaware of, and I would be most intrigued to learn of any progress, and whether there are problems in addition to the four presented here. However, if the research is not being done, why not? And would it be possible to get funding for projects in this area…

Engines of Creation by K. Eric Drexler

This has to be one of my favourite books ever. I’m so embarrassed that I hadn’t read it before now.

The book concentrated on how nanosystems will be used to transform our lives, our bodies, and the environment. There was also a discussion on how we will control nanosystems such that they do not replicate uncontrollably. The book was a nice introduction to the topic, not too heavy, and written with a powerfully optimistic style. I felt that the chapters on government policy were the weakest point, although to be honest that’s probably just my personal taste. They were well written, just not quite as gripping as the discussion of the actual technology itself.

The entire book was great reading, although I felt Chapter 14 in particular had something important to say; a lesson to be learnt. The focus of this chapter, “A network of knowledge”, was the then-future technology of hypertext, linked media and general freedom-of-information and knowledge-aggregation techniques. It really stood out for me, because it is the only chapter whose technology has today not only been realised, but has exceeded Drexler’s foresight tenfold. Reading this chapter was so beautifully quaint, until a thought struck me…

…It could have been any of the chapters that had been fully realised.

Presumably all the chapters were written with a similar level of foresight; it just happened that the right set of factors converged to make Chapter 14 the first to be realised. I’m sure in due course more will follow, but this chapter sat somewhat uncomfortably, even laughably, in stark contrast to the rest of the (seemingly visionary) book. It humbly served to highlight how differently we behave towards incredible concepts that have already been realised, compared with those that still harbour engineering problems to be solved. Some would refer to the latter (rather derogatorily) as pure science fiction.

I really enjoyed the last chapter too. Drexler’s writing style seriously moved me. I hope that people find this book a call to arms, and that those who read it when it was first published (1986) will take the time to re-read it and realise that we are closer to these dreams now – close enough to really do something about it. In 10 years’ time I want to feel that quaint warmth when I read ALL the chapters, not just the one about hypertext.

I have the Nanosystems textbook waiting on my bookshelf for some more in-depth learning.

Ion trap quantum computing

On Thursday I attended a seminar by Simon Webster from the Ion trap QC group in Oxford. I didn’t realise that ion trap QC was so advanced. Having spent so much time in the happy world of LSI Josephson logic, I had a prehistoric picture of an ion trap being a large metallic cavity surrounded by huge electrodes similar to plasma confinement systems. But no, it can be done to micrometer precision on chip with ions trapped in tiny channels. You can shuttle individual ions, or little chains of them around the chip, allow them to interact and evolve to perform the computation, and then move them elsewhere, or read them out.

Photo © Oxford Ion trap group

Alas, it is still only possible to manipulate a couple of qubits at the moment, as is the case with most QC realizations. The qubits themselves are formed by manipulating transitions between energy levels of the ions – in this case, Ca+ ions. Entanglement can occur, for example, between an ion and the photon emitted during relaxation from an excited state. One advantage of ion trap QC is therefore that it natively handles static (ionic) and flying (photonic) qubits with the same technology, so quantum information can be transferred over long distances and on/off chip quite easily.

Exciting stuff. I’ve still got my money on Josephson junctions, but competition in experimental QC is healthy 🙂

Links:

Overview of the Oxford Ion trap
Nature paper on Ion trap QC
Info from MIT
PhysOrg report

Thoughts on meetings

In my experience, physicists – and academics in general – aren’t very good at meetings.

Yesterday was amazing – it’s the first time, I think, that we’ve had a meeting that was productive! We actually planned it. There was an agenda, several of the participants gave short talks, and I drafted an action plan and took minutes. The meeting kept to the time plan, was informative, and I think everyone gained from it.

So I raise the question: How is it possible to make physicists better at project management? My fine colleagues in engineering actually have courses on such things. Yet there seems to be little emphasis on business savvy, industrial collaboration or management skills in a typical undergraduate physics course. We learn by trial and error.

There is indeed a lot of material to cover in a physics course, especially as school leavers don’t seem to know mathematics to a suitable level anymore, and as such the more esoteric skills (arguably more useful in the real world) are pushed to the bottom of the priority list. I’m not sure I’d have taken an optional project management/industrial liaison/wider-research-impact course at the time of my undergraduate degree, but I sure as hell would now, with hindsight!

In fairness, we do run a Physics with Business Studies course, but that is a specific degree in itself. Maybe there should be something integral to pure Physics degrees too.

This actually goes against my usual view that ‘dilution’ of a subject is a bad thing, but nowadays interdisciplinary collaborations are so commonplace that an understanding of management and a wider scope are necessary skills if you wish to pursue an academic career (and of course if you don’t).