The Penrose interpretation of quantum mechanics…

…states that the mass of a system affects the system’s ability to maintain quantum coherence. This is the basis for some theories of quantum gravity. Above the Planck mass, which is ~2e-8 kg, a system can no longer maintain coherence for any measurable time, due to the onset of gravitational interactions.
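For reference, the Planck mass quoted above follows from fundamental constants as m_P = sqrt(ħc/G); a quick sanity check in Python:

```python
import math

# Physical constants (CODATA values)
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
G = 6.6743e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2

# Planck mass: m_P = sqrt(hbar * c / G)
m_planck = math.sqrt(hbar * c / G)
print(f"Planck mass ≈ {m_planck:.2e} kg")  # ≈ 2.18e-08 kg
```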

This has been irritating me for a while. Let’s think about superconductors: does the effective ‘mass’ of the superconducting condensate affect the coherence time of its macroscopic wavefunction?

1 mol of a metal contains ~ 6e23 conduction electrons

which have a mass of ~ 5e-7 kg

which is greater than the Planck mass. But I don’t see any reason why a macroscopic superconducting wavefunction cannot be established in a large single crystal of a material such as Niobium/Lead/Aluminium. You can demonstrate the Meissner effect with a huge lump of superconductor.
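A back-of-envelope check of the numbers above (assuming roughly one conduction electron per atom, which is only an order-of-magnitude assumption):

```python
N_A = 6.022e23      # Avogadro's number, atoms per mole
m_e = 9.109e-31     # electron rest mass, kg
m_planck = 2.18e-8  # Planck mass, kg

# Assume ~1 conduction electron per atom (order-of-magnitude only)
condensate_mass = N_A * m_e
print(f"mass of 1 mol of conduction electrons ≈ {condensate_mass:.1e} kg")  # ≈ 5.5e-07 kg
print(f"that is ≈ {condensate_mass / m_planck:.0f} Planck masses")          # ≈ 25
```

So the electron mass in a mole-sized sample really does exceed the Planck mass by more than an order of magnitude.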

I haven’t been able to find much information about this, which may just mean I’m being dumb here. So I guess the question would be: is there a fundamental limit on the *size* of a superconducting macroscopic quantum wavefunction? Does the distribution of mass affect the wavefunction, i.e. are the gravitational effects seen by the QM wavefunction on average reduced by the distributed mass of the surrounding condensate? Does superconductivity, being a collective phenomenon, somehow negate the entire thing? I don’t know the answer to this problem so I thought I’d throw it out there 🙂



I’m not a big expert on this, but I believe that even in standard spontaneous collapse schemes, things like macroscopic superposition in SQUIDs don’t cause wavefunction collapse. In order to get an appreciable probability of collapse you need a superposition of two or more states in which a macroscopic number of particles have a very different position. This is because the collapse mechanism works on the position variable of the fundamental particles, not on any other fundamental variable, and not on any effective variable used in models of superconductivity. In other words, not all macroscopic superpositions are equal in collapse theories, some cause collapse and some don’t.

I imagine something similar would happen in Penrose’s proposed theory, but since there isn’t actually a fully worked out theory it is a little bit difficult to say anything useful (it would be better to call it the “Penrose vague idea” than the “Penrose interpretation” at this point). Obviously, any completion of the proposal that *does* rule out superconductivity would be falsified straight away.

There are experiments underway to test Penrose’s ideas more directly by using cavity QED to generate a macroscopic superposition of a mirror in two positions. My guess is that some version of this experiment will effectively rule out Penrose’s ideas, i.e. the constraints on the parameters will become too unpalatable, within the next 5 years or so.

Maybe Penrose is wrong?

Okay, I don’t know what I’m talking about, but here is some baloney.

My impression is that decoherence has nothing to do with mass directly. It has to do with the number of microscopic degrees of freedom needed to describe the system. Too many of those, and you get a propensity to decoherence due to a combinatorial explosion (more possible combinations). That’s why I’m not surprised that you can get quantum coherence in low temperature systems (in the case of systems for which you can define temperature at all). Because what is temperature, really? T = dQ/dS. Low temperature systems are hyper-sensitive to heat energy input: just a small heat input creates a unit change in entropy (entropy = disarrangement, the number of possible combinations). Similarly, quantum systems are really sensitive to heat energy input.
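The T = dQ/dS point can be made concrete with a toy calculation (the heat input and temperatures below are purely illustrative numbers):

```python
def entropy_change(dQ, T):
    """Reversible entropy change dS = dQ / T, in J/K."""
    return dQ / T

dQ = 1e-9  # 1 nJ of heat input (illustrative)
print(entropy_change(dQ, 0.01))   # at 10 mK: 1e-07 J/K
print(entropy_change(dQ, 300.0))  # at room temperature: ~3.3e-12 J/K
```

The same nanojoule disturbs a millikelvin-scale system’s entropy roughly four orders of magnitude more than a room-temperature one.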

Is “size” volume, area, or length? Length as such is inert for supercon solenoids – very long LHC supercon windings. Single crystal volume is not problematic in high temp ceramic supercon spectroscopies. Coherent states remain coupled over kilometers’ separation through fiberoptic re the Bell Inequality and the EPR paradox.

Gravitation qua gravitation is tricky. All small gap gravitation observations support 1/r^2 without apology. Two experiments:

1) Casimatter – nothing but 120 nm optical path Casimir etalons – via continuous bifilar vacuum deposition onto an inert rotating disk of 70 nm aluminum then 37 nm 60:40 MgF2:LiF as the CTE-matched optical spacer. Round and round. Slice out squares of 37 wt-% ZPF-depleted gaps. Does Casimatter go weird vs. a simple lump in an Eotvos experiment?

2) Probe wavefunction symmetry, gerade or ungerade. Load an Eotvos experiment with single crystals of quartz in enantiomorphic space groups P3(1)21 and P3(2)21. Do opposite geometric parity mass distributions output a net non-zero signal?

Matt: I agree, I think the problem I was having with this is that there just isn’t enough information to extract any kind of model or theory to test. I do like your wording of ‘the vague idea’. Superconductivity is a great test for any theory involving macroscopic quantum phenomena, as it’s a rather unusual case! I’ll have a look into those recent cavity QED experiments 🙂

Matt:

“In order to get an appreciable probability of collapse you need a superposition of two or more states in which a macroscopic number of particles have a very different position.”

Isn’t that only true for fermionic states?

Skip superconductivity. Think about superfluidity in 4He or 3He. You can easily get 1 kg of superfluid 4He to show macroscopic quantum coherence. Personally, I think Penrose is just wrong. It comes down to entanglement of many degrees of freedom with the environment rather than some gravity-related effect.

BTW, nice blog, physicsandcake!

Doug: Good point! I tend to ‘frame’ everything using superconductivity, I guess that is what floats around in my head most of the time. But you are right, the effect is even more distinct in superfluidity.

“Superconductivity is a great test for any theory involving macroscopic quantum phenomena, as it’s a rather unusual case!”

I was just thinking that it might be a great way to test Partovi’s recent claim: http://arxiv.org/abs/0708.2515. Perhaps superfluidity is another option. I would love to see a test of his theory, though, since it throws the whole notion of “quantumness” on its head.

I am not sure of the merits of Penrose’s specific attempt to find “objective collapse”, but his basic sense that “decoherence” by itself does not lead to loss of the alternative possible final state (like the dead cat if we see it alive) is spot on. (IOW, why we don’t see macroscopic superpositions – some would say, explaining the “appearance” of collapse.) The decollusion (my coinage) argument fails for many reasons. One is that it is circular. The probabilities from the squared moduli are entered into the density matrix as if already accounted for – but in order to explain how collapse of the wave function happens without some inexplicable intrusion, you would have to set up with amplitudes by themselves and then show how the selection of some bases but not others happens, to produce the probabilities we are trying to explain.

Furthermore, it isn’t even appropriate for a theorist producing a model to use a density matrix showing the chance of various wave functions being present. That is understandable if I have a specific physical system I am trying to represent, but the theorist gets to pick his own parameters to make a point. Then the wave functions would be selected states with no probabilities of their presence to confuse and conflate with the other probabilities that come from the squared moduli. Such a legitimate exposition would also make it harder to sow confusion by conflating the idea of the chance of this or that state with an actual “mixture” of multiple states or at different times. Don’t even ask about the nutty Many-Worlds Interpretation, but I can dish on that too if it comes up.

I think the point Penrose was making is this: let’s say you have a quantum mechanical two state system, where the two states have different distributions of mass relative to some (eg fixed classical) massive reference object. For example, let’s say your two qubit states were to somehow correspond to two distributions of mass where the first was centered at L microns above some reference point and the second was at L+dL above the same reference point. Such things have actually been built in superconducting electronics by Ustinov’s group at Erlangen (the “heart shaped qubit”). In the Ustinov qubit case, the two states correspond to a vortex trapped in two different locations in a long JJ ring.

In this case the difference in potential energy coming from gravity can be calculated. As far as I understand Penrose’s argument, he says that this energy sets a timescale via the energy-time uncertainty relation, so that dt ~ ħ/dE, where dE is the potential energy difference due to gravity.

This doesn’t seem obviously wrong to me. In the case of the heart shaped qubit you might be able to experimentally test this although I think that this effect is probably a lot smaller than other sources of decoherence.

If all parts of the experiment are constrained to follow the same path, a Beckman-Coulter Optima MAX ultracentrifuge delivers a million gees at the rim with a large radial gradient. If any of the experiment is unconstrained all it gets is a transverse Doppler shift. There will be mechanical vibration.

This is not a 3.2 cm Harvard Tower experiment. A photon emitted from the rim propagates through free space and does not “climb” up any gravitational field toward the hub. A high refractive index liquid disk (no stress birefringence; e.g., diphenyl sulfide, n_D = 1.6327, mp = -40 C) and optical emission will shift about a part-per-billion relative – if you believe it shifts at all.

[…] about 1e-8kg – so we can’t test that hypothesis in the lab yet. Note this relates to a post I wrote a while ago about electrons in a lump of superconductor – there are enough electrons in a bulk sample for […]