Quarantine by Greg Egan


I’ve just been reading Quarantine, a Science Fiction book by Greg Egan (1992). It was brilliant, I loved it! I’ll definitely be buying his other books.

— Warning – spoiler ahead! —

The first few chapters of the book introduce the idea that in the future Earth has been quarantined by an extra-terrestrial species, but offer no explanation as to why. The quarantine manifests as a large bubble surrounding the solar system, somewhat akin to the event horizon of a black hole. The story follows the exploits of Nick, the protagonist, a highly-trained, specialised security agent with various neural modifications, working for a shadowy hi-tech corporation whose scientific employees are performing research and experiments. Though employed only as a security guard, he discovers the true nature of the research: that humans are beginning to understand, manipulate and prevent the collapse of the quantum-mechanical wavefunction (which has been determined – in the near-future setting of the novel – to occur in the human brain itself). The ability to prevent the collapse results in a delicious cornucopia of logical contradictions, many-worlds scenarios and quantum metaphysical exploration, during which Nick (and indeed the reader) tries to retain some semblance of rational thought.

The story culminates in the corporation developing a neural mod which renders individuals unable to collapse the wavefunction, so that they exist in the many-worlds plane – removing their observer qualities and *possibly* their free will. This means that they can select a ridiculously low-probability eigenstate from their superposition – such as one in which they choose the correct factors of an extremely large number, or throw a 1 on a fair die hundreds of times in a row, repeatedly. In turn the reason behind the quarantine is revealed (but I won’t give everything away). Tasty book – read it! It’s measurement-problem-tastic.

I like the idea of shadowy corporations doing QM research and messing about with the nature of reality. For some reason that appeals to me 🙂

Friday Seminar – Nanomechanics and the Casimir force

We just had a rather cool seminar about the Casimir force and how it affects nanoelectromechanical (NEMS) systems. The talk was given by Ramin Golestanian from Sheffield University.

The Casimir force is usually described as the attractive force that arises between two metallic plates in a vacuum due to the quantization of the vacuum fluctuations between the two plates (virtual photons). The somewhat handwavey way I think of this is that the boundary conditions of the two plates only allow certain wavelengths or energies of photons between them, whereas outside the plates you are not constrained in this way. So you have more stuff ‘pushing’ on the outside than on the inside 🙂

Interestingly, I was told that the van der Waals attraction between molecules (the Lennard-Jones potential) is a solution of the same master equation as the one that predicts the Casimir force, just with different boundary/limiting conditions. Which I did not know – but it’s pretty obvious when you think about it. Maybe I’d just never thought about it 🙂

The Casimir force is quite small (of the order of piconewtons). However, with modern micro- and nano-engineering technologies, it is possible to make tiny gears, levers and other moving components on a scale where the force actually starts to become rather dominant in the system – more so than electrostatic interactions, for example.
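To get a feel for the piconewton figure, here’s a quick back-of-the-envelope Python sketch using the ideal parallel-plate formula F = π²ħcA/(240d⁴). The plate dimensions are purely illustrative, my own choice to land in the pN regime:

```python
from math import pi

hbar = 1.054571817e-34      # reduced Planck constant, J*s
c = 2.99792458e8            # speed of light, m/s

def casimir_force(area, d):
    """Ideal-plate Casimir force: F = pi^2 * hbar * c * A / (240 * d^4)."""
    return pi**2 * hbar * c * area / (240 * d**4)

# Two 100 um x 100 um plates separated by 1 um -- illustrative numbers only
F = casimir_force((100e-6)**2, 1e-6)
print(f"F ~ {F * 1e12:.0f} pN")
```

The d⁻⁴ dependence is the point: halve the gap and the force grows sixteen-fold, which is why it only matters once your machine parts are genuinely tiny and close together.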

Here’s a picture of a micro-mechanical device from Sandia labs


Not quite small enough yet to harness the Casimir force, but the technologies for producing these things are advancing all the time.

Anyway, the seminar focused on some theoretical and computational modelling work done on nanoscale rack and pinion mechanisms.

The Casimir effect can be a pain if it causes parts of your nano-system to ‘stick’ together unwantedly; however, it can also be used to eliminate the need for contact between, say, the rack and pinion, thus reducing wear. The teeth on the two parts are attracted to one another and can couple the motion from the rack to the pinion. With a Casimir-type coupling instead of a mechanical contact, the pinion can also be made to go at a different velocity to the rack, or even backwards, under specific conditions – which is difficult to imagine in the case where the two elements are in physical contact. The system is highly non-linear and chaotic, demonstrating bistability and even amplification under some circumstances.

It was all very interesting, I love these tiny NEMS and MEMS machines.

Physics versus Art

Can you be a physicist and an artist at the same time?

My answer is it depends on how you define ‘same time’. I’m pretty sure that different areas of the brain are actually used for the two subjects. ‘Areas of the brain’ can be interpreted as ‘ways of thinking’ etc., I don’t mean literally physical areas (although that might be the case too). I certainly feel predominantly in either a physics or an artistic ‘mood’ – and although there are crossovers, it’s pretty easy to tell them apart.

Crossovers can be useful. For example, getting the technical aspects of a piece of art correct can be an entirely scientific and logical process. Also, artistic and aesthetic tendencies are very useful when working with PowerPoint presentations, preparing figures and schematics for documents, or writing popular-science style articles, where you have to be interesting in addition to being factual. I also find fractals and graphical representations of functions (e.g. the Riemann Zeta function in the complex plane) very artistically pleasing. I put them on my desktop wallpaper sometimes.

The difficult bit is controlling which ‘mood’ to be in at any one time. To change over the period of a day is difficult. For example, it’s pretty hard to work on physics all day and then do some painting in the evening. There just isn’t enough time for the psychological shift. I find that ‘cycles’ come and go with a period of several weeks (although it can be months). At the peak of the cycle, the dominant ‘mood’ will be completely engrossing (to the point of neglecting to eat/sleep etc.) and I have no idea why I am even interested in the other one. Obviously the two cycles are asymmetric – I have to do more Physics than Art overall as that’s my main source of income 🙂

I have never completely given up on one in favour of the other, the restoring force always comes back eventually. It worries me a bit that I will never be completely engrossed in one subject (and therefore will probably not be able to master one thing, but rather I will be OK at several things). On the other hand it’s nice to have some variety.

Fun with Magnets

So today I made a magnet, MacGyver stylee. It consists mainly of paper tape and wire 🙂 I decided to use Cu wire rather than superconducting wire as I can probably get enough field from a normal magnet. It’s all very makeshift – I just need a magnet for the next fridge run. I could have had a superconducting one made professionally by the workshop, with a Tufnol former, superconducting heat switch, etc., but that probably would have taken 2 weeks or something. If the mock-up works I suppose I’ll get a proper one made for next time. My old superconducting magnet (which worked fine) was “removed” because the new fridge didn’t fit down the centre bore. Which was slightly irritating.

MacGyver magnet also fits neatly inside my mu-metal shield, although I need to do some field measurements to check the attenuation level that the mu-metal is providing (It’s tricky to calculate that). Mu metal essentially attenuates nearby fields by ‘sucking in’ magnetic flux. The purpose of the shield is to attenuate the earth’s magnetic field and to reduce magnetic noise from nearby equipment such as VDUs (does anyone even use the acronym VDU anymore?), power supplies, mechanical pumps etc. The downside is that it eats any desired fields too. Most people ‘nest’ at least three mu-metal/cryoperm/superconducting shields to get a good low noise environment inside the cryostat. I’ve only got one at the moment so it will have to do until I order some more.

Here’s a piccy of the mag on the vacuum can, the mu-metal shield is sitting next to it.


I’ve made 500 turns already on the MacGyver magnet, and I’m considering adding another 500.

Assuming the long-solenoid limit, B/I ≈ mu0*N/L, which for my coil works out to roughly 4 mT/A. So applying 100 mA gives B ≈ 0.4 mT.

My old superconducting magnet calibration revealed that you need about 0.37mT to get phi0 in the junction (to traverse the Fraunhofer period), so this should be OK.

The wire is about 3.5 Ohm per 40 turns, so 500 turns will be about 45 Ohm. But the resistance will probably be 10 times lower in the liquid helium, so 100 mA into 4.5 Ohm gives 45 mW dissipation, which should be OK.
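Here’s the arithmetic above as a quick Python sketch. One caveat: the winding length isn’t quoted anywhere above – the ~16 cm below is back-inferred from the 4 mT/A figure, so treat it as my assumption:

```python
from math import pi

mu0 = 4 * pi * 1e-7   # vacuum permeability, T*m/A
N = 500               # turns wound so far
L = 0.16              # winding length in m -- assumed, inferred from 4 mT/A
I = 0.1               # drive current, A

# Field per unit current in the long-solenoid limit, and field at 100 mA
B_per_I = mu0 * N / L
B = B_per_I * I
print(f"B/I ~ {B_per_I * 1e3:.1f} mT/A, B(100 mA) ~ {B * 1e3:.2f} mT")

# Current needed for one Fraunhofer period (0.37 mT from the old calibration)
I_phi0 = 0.37e-3 / B_per_I
print(f"I for phi0 ~ {I_phi0 * 1e3:.0f} mA")

# Resistance (3.5 ohm per 40 turns) and dissipation, assuming ~10x lower R at 4.2 K
R_room = 3.5 / 40 * N
R_cold = R_room / 10
P = I**2 * R_cold
print(f"R(room) ~ {R_room:.0f} ohm, R(cold) ~ {R_cold:.1f} ohm, P ~ {P * 1e3:.0f} mW")
```

So ~90-odd mA should traverse a full Fraunhofer period, comfortably within the 100 mA budget, and the cold dissipation stays in the tens-of-mW range.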

If 500 turns doesn’t sound like very many, well, it is 🙂 To put it in perspective, I got through 2 Delirium albums and a Blutengel album so far (I tend to listen to music whilst in the lab doing boring repetitive tasks.)

It didn’t help with my RSI.

Listening to Blutengel (a German vampiric ‘EBM-goth-eurosynth’ band) is also a bad idea – it makes me want to dress up and go bite people. Yum 😀

Kevin Warwick at Birmingham

Last Tuesday I went to a talk by Professor Kevin Warwick from Reading University entitled ‘Neural implants: A new medicine or the next evolutionary step’.

For anyone who does not know, Kevin Warwick works in the field of cybernetics, and is somewhat infamous for having several implants, including an RFID tag and a small array of electrodes (implanted into the median nerve of his arm), the latter of which was connected via the internet allowing remote control of several pieces of equipment by signals generated directly from the human nervous system. I’ve heard two of his lectures now (I went to one a couple of years ago) and I enjoyed both of them, I think that the work done is quite pioneering.

The talk was addressed as a public science lecture, so the technical content was minimal, but I still enjoyed hearing about some of the interesting aspects of the field. Approximately half of the talk was about said implants, and the other half was about developing autonomous robots using biological brains. The brains are grown from rat brain cells and are cultured in an incubating environment at 37°C. Whilst they are growing and ‘learning’ in their nice warm surroundings, they interface via Bluetooth to the robots, which are out exploring the real world (a square box in which they can move around freely) and send back signals from their sensory inputs (usually ultrasonic). If I understood the research correctly, the main result is that the robots ‘learn’ to avoid the walls without being given any prior instructions. They just instinctively decide that this behaviour is more useful to them. Interesting, huh?

Now there has indeed been a lot of bad press about this research, but I do sometimes think that any publicity is a good thing. News coverage of this type of work is always mostly hype, which is why people working in research usually get irritated with this kind of thing. However, because of the state of television broadcasting at present (a whole blog post in itself, for another time maybe) hype is the only thing that will get people even remotely interested in science these days. You can’t tell it how it actually is, or people would be even less interested.

I am very ‘pro’ the idea of human-machine interfacing, especially neural and nervous system interfaces. I very much understand that some people may find this slightly disturbing, but I wonder… Do they criticise the research from a scientific viewpoint, or just the slight discomfort that they may feel when instinctively pondering the ethical issues that may arise from such studies? I strongly suspect the latter, having read some of the review articles on this subject.

Some may see mass media coverage as detrimental or devaluing to the research, but at least getting the public interested in and provoked by research gives the chance for debate on the issues years before anything becomes a commercial reality. (Compare the still largely unresolved debates on GM foods).

I had a nice chat with Prof. Warwick after the talk about (amongst other things) how ethics often seems to get in the way of scientific progress. It’s nice to meet people who are so enthusiastic about their research. Of course, that’s just my opinion!

Coax of woe

Anyone who even vaguely knows me will know that I have certain issues with small coaxial cables. I’ve tried various types of coax and connector to wire the fridge for the low-temperature, low-noise experiments. However, I’ve always had failures due to these coaxial lines just being too complicated (too tiny, with too many filters and connectors running down the length of them). The previous coax I used was a 0.3 mm stainless steel variety, as I mentioned in an earlier post. I have also tried flexible braided stainless steel coax (which adds the difficulty of soldering to stainless), and twisted pair, in which case I connected the filters to the twisted pair using small pins – a mistake, as too many solder joints mean that again they are likely to fail.

Well I’ve just installed some new 0.5mm CuNi coaxes. These are much easier to solder to, and easier to work with in general (they don’t break every time you breathe on them) so we’ll see how they go, I’m going to cool them down today.

Quantum Control – Theory meets experiment…

Yesterday we had a visit and seminar from Sonia Schirmer of the Cambridge Centre for Quantum Computation. The seminar was entitled ‘Quantum Engineering – Control Paradigms, Algorithms and Applications’

Whilst it was a little difficult for some of us experimentalists to follow, there’s definitely something interesting that happens when experimentalists and theorists find a common discussion ground. I think the main problem is sometimes that theoretical work can be a bit too general for experimentalists to follow. Put it in terms of an example – a real-world system – and it’s much easier. The loss of generality isn’t too much of a sacrifice from the experimentalist’s point of view. Conversely, sometimes it is difficult for theorists to realise what is easy (and what is hard) to implement in the lab.

So let’s talk!

We got into a rather intense discussion about superconducting flux qubit (SCFQ) realisations, and feedback control of such systems. For example, to manipulate SCFQ systems, you need a minimum of one control line. You can do some really simple but neat experiments with this system. (Disclaimer: when I say things are ‘simple’ I mean that they are possible in the best-case scenario. The likelihood of an actual experiment working is the product of the probabilities of all its constituent parts working, which usually passes through my mind in the form Psuccess(experiment) = Psuccess(1)*Psuccess(2)*Psuccess(3)*….[1] See the end of this post for an example.[2] But let’s not get into the realm of half-empty glasses; for now we will assume that everything works.)

For example, arbitrary rotations on the Bloch sphere are pretty simple to implement. Just apply a microwave flux pulse to your qubit at the frequency corresponding to the energy level splitting E(|1>)-E(|0>) and the qubit will oscillate between |0> and |1> at the Rabi frequency, passing through a whole range of quantum superpositions α |0> + β |1> along the way. So usually experimentalists just apply a square-shaped pulse modulated at a microwave frequency for a certain amount of time to bring the qubit into a particular superposition of states. For example, you might turn the microwaves on for 0.5 ns to apply a π/2 pulse.
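As a sanity check on that last number, here’s a minimal Python sketch. The 500 MHz Rabi frequency is my assumption, chosen so that a π/2 pulse (a quarter of a Rabi period) takes exactly the 0.5 ns quoted above:

```python
import numpy as np

f_rabi = 500e6                   # assumed Rabi frequency, Hz
omega = 2 * np.pi * f_rabi       # angular Rabi frequency, rad/s

t = 0.5e-9                       # pulse length from the example above, s

# Probability of finding the qubit in |1> after a resonant drive of length t
p1 = np.sin(omega * t / 2) ** 2
print(p1)                        # 0.5: an equal superposition, i.e. a pi/2 pulse
```

Drive for twice as long (1 ns) and p1 goes to 1 – a π pulse, a full |0> to |1> flip.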

But you can also mess about with the shape of the pulse. Some people are already working on this (see for example THIS PAPER from MIT/NIST). This is where the control comes in. For the specific case of SCFQ, the control parameters would be the amplitude, phase, frequency, and envelope shape of the microwave flux pulse applied to drive your qubit’s Rabi oscillations.

Quantum control

Open loop control – where you adjust the control parameters and check the output ‘figure of merit’ – this is usually something like the fidelity of the gate in question. (Like how often a CNOT gate gives the correct answer when you feed it known inputs).

Closed loop control – where you feed back the aforementioned figure of merit through some kind of algorithm to adjust the original control parameter.


SCFQ suffer from decoherence, which is believed to be mainly due to fluctuating electrons (fluctuators) in the junction barrier being excited into resonance at the same time as your qubit. If they are closely coupled to your qubit, they will interact (swap energy) and therefore act as a loss mechanism. By addressing the qubit with specially shaped pulses, it may be possible to ‘tiptoe’ around the fluctuators without waking them up.

Unfortunately (it seems) the systems are complex enough that you can’t predict what pulse shapes will work. But you can find algorithms which will converge onto good pulse-shape solutions if you have a simulation which gives you feedback. The best simulation of all is to use the real-world version. Hook up your control algorithm to a real experiment, and watch it go.
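To give a flavour of what ‘hook up your control algorithm and watch it go’ might look like, here’s a toy closed-loop sketch in Python. Everything in it is made up: the fidelity function is a stand-in for the real experiment (with a secret optimal amplitude of 0.7), and real work uses far cleverer search algorithms than this simple hill-climb:

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable

def measured_fidelity(amplitude):
    # Stand-in for the real experiment: a noisy figure of merit that
    # peaks at an unknown optimal amplitude (0.7 here, purely made up).
    return 1.0 - (amplitude - 0.7) ** 2 + random.gauss(0, 0.01)

amp, best = 0.2, measured_fidelity(0.2)   # start far from the optimum
for _ in range(300):
    trial = amp + random.gauss(0, 0.05)   # perturb the control parameter
    f = measured_fidelity(trial)
    if f > best:                          # keep the change if fidelity improved
        amp, best = trial, f

print(f"converged amplitude ~ {amp:.2f}, best fidelity ~ {best:.3f}")
```

Even this crude loop finds the hidden optimum without ever knowing the underlying model – which is the appeal of driving the search with real measured data.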

The other (more subtle but perhaps more interesting) thing is that it’s a two-way system. Your algorithm will also tweak the Hamiltonian which you are using to simulate the system in order to make it fit the data better. Your Hamiltonian will actually converge on one that accurately describes the physical system…. you win on both fronts! Your qubit gets treated better thus you get better data, and you get a more accurate theoretical picture of the mechanisms that were causing the issues in the first place.

The only two remaining problems are:
What algorithm to use to do this optimally?
What on earth does the complex mess of a Hamiltonian that you get out correspond to physically?

While I’m certainly not an expert in this field, (I’ve only really come across the idea of quantum control recently) it does seem like an interesting approach, especially when you can apply it to experimental realisations. Real-world data can keep simulations realistic and help optimise quantum control algorithms.
Wow, that even sounds like a conclusion.


[1] I chose the notation Psuccess for ease of calculation, although it is not really the way you end up thinking about scientific experiments after a while; Pfail is a better paradigm to adopt. Doubt and you’ll usually be right, but pleasantly surprised if you are wrong.

[2] I thought I’d give an actual example here with realistic values. All numbers are determined experimentally 😉

P(experimental success) = Psuccess(Helium leak doesn’t occur)*Psuccess(fridge cools to base correctly)*Psuccess(wiring doesn’t fail)*Psuccess(junction works)*Psuccess(noise level not too high) ~ 0.95*0.95*0.25*0.9*0.5
(Technically wiring doesn’t fail and noise level not too high are not independent events, but we’ll ignore this for now)
~ 0.102. So you’ll have to run the experiment about 10 times to get a successful result. Each low temperature run takes about 2 weeks. And that’s why experiments take so long.
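For the curious, here’s the footnote’s arithmetic as a Python snippet (same made-up-but-realistic probabilities as above):

```python
# "Experimentally determined" per-stage success probabilities from the footnote
probs = {
    "no helium leak": 0.95,
    "fridge cools to base": 0.95,
    "wiring doesn't fail": 0.25,
    "junction works": 0.90,
    "noise level not too high": 0.50,
}

# Treating the stages as independent, the run succeeds only if every stage does
p_success = 1.0
for p in probs.values():
    p_success *= p

print(f"P(success) ~ {p_success:.3f}")
print(f"expected runs per success ~ {1 / p_success:.1f}")
```

At two weeks per cooldown, ten runs per success is the better part of five months – hence Pfail being the healthier mindset.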