I think the recent musings around the blogosphere and beyond are completely missing a huge and fundamental point about why building AQC and AQO-based machines will be not only useful, but something we will wonder how we ever lived without. I’m not going to talk about the specifics of implementing superconducting technology for cool applications (I can defer that to subsequent posts). Here I just want to explain a little about why we should do it, and how the main hurdle to progressing this technology has nothing to do with our ability to build and understand the technology itself. In fact, it is a rather sad story.

Let’s think for a second: Do we really have the right computational substrates for realising concepts such as highly connected neural networks, and thinking machines? Is the Von Neumann architecture really the best way to support such things? We are currently solving the problem of simulating intelligent systems by throwing more and more computational power at them. Though there’s something very odd about this, we have little choice, as we have become very good at designing processors that behave in a particular way, that talk to memory in a particular way, and that have a small number of cores. The problem with the commonly used architectures is that they just cannot embed things like neural nets, and other highly parallel structures, very efficiently.

Could adiabatic quantum processors be used for neural net and highly parallel processing purposes? Of course! The architectures are very compatible. AQO processors are very similar to Boltzmann machines, which underlie the operation of many pattern-recognising systems, such as our own brain. There are other delicious fruits of superconducting logic, for example the ease with which we can implement reversible logic, and the exceedingly low power dissipation of such circuits. These systems also exhibit macroscopic quantum effects, which may or may not be a great resource in computation. But even if they are not, we should not ignore the fact that actually building such machines is the only way to answer this, and many other, questions.
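The Boltzmann-machine analogy can be made concrete: both a Boltzmann machine and an AQO processor work with an Ising-type energy function over spin variables. A minimal sketch in numpy, with entirely made-up couplings J and fields h (nothing here reflects any real device):

```python
import numpy as np
from itertools import product

# Toy Ising energy: E(s) = -sum_ij J_ij s_i s_j - sum_i h_i s_i.
# This is the same form of cost function a Boltzmann machine samples
# from, and that an AQO processor anneals towards the minimum of.
# J and h are illustrative values only.
J = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.3}
h = np.array([0.1, -0.2, 0.0])

def energy(s):
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()) - h @ s

# Brute-force the ground state (fine for 3 spins; annealing targets
# the large-n regime where this enumeration is hopeless).
states = [np.array(s) for s in product([-1, 1], repeat=3)]
ground = min(states, key=energy)
print(ground, energy(ground))
```

An annealer aims to reach this minimum directly; a Boltzmann machine instead samples configurations with probability proportional to exp(-E/kT), which is why the two pictures line up so naturally.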

I think that superconducting processors are offering us a wonderful gift here, and yet we refuse to take advantage of it. Why? The reasons are a little hard to stomach.

It seems that while we’d rather just get going building some cool things, we end up spending a large proportion of our time and effort debating issues like whether or not quantum computers can solve NP-Complete problems in polynomial time. What?! I hear you say. My answer: Exactly. Obscure as they are, these questions are the most important thing in the world to a vanishingly small fraction of the population. Yet these seemingly lofty theoretical debates are casting a shadow over the use of superconducting QC architectures in every area of research, including things like the novel implementations of hardware-based neural nets, which may prove to be an even more fruitful avenue than optimization.

It will take a large amount of financing to commercialize general purpose superconducting processors, and an unimaginably large effort on the part of the scientific researchers and engineers who devote their time to progressing this technology. The step from fundamental research to commercialisation cannot and will not work in an academic environment. Why? Because in order to fabricate integrated circuits of any kind you need investments of hundreds of millions of dollars into foundries. Robust and scalable technologies can only be realised in an industrial setting. Sure, small scale systems can be realised in many labs, but there are no industrial uses for devices produced in this setting: Anything that they demonstrate can be outperformed with a standard computing system. And it will stay that way until we realise that we need to take a risk as a society and really invest now in a technology of the future. I’ve seen distressing things happening at RSFQ roadmapping meetings. The conclusion of the roadmapping somewhat tragically boils down to ‘make another roadmap’ – because there are no applications beyond a few incredibly specialised circuits that can be realised within the scope of government research grants (~$10K-5M; oh, and by the way, they aren’t very reproducible either). There is no funding on the necessary scale, and therefore whatever money is put into this distributed and broken field rather swiftly evaporates, even though, considered cumulatively, it may have been enough to get a single, well focused effort going.

So, my plea to the academic world is to realise that there comes a point where quarrelling over things should be secondary to our solidarity as a community who really want to see something achieved. I’d rather try and fail than sit smugly in my ivory tower proclaiming that something is 20, or 50 years away (read: therefore not my problem, guys!). Day after day people (with little or no knowledge in the field) exclaim that these things will never work, you are not building anything useful, your results are fraudulent, your devices are vapour-ware… etc etc.

This is more than cautious scientific scepticism, this is sheer hatred. It seems very unusual, and from my point of view, very detrimental to the entire field. I’d first advise such people to read some of our peer-reviewed papers to get their facts straight. I’d then say the following:

If those working in the fields of superconducting electronics, flux qubits, and quantum information really cared about the future of a technology that they had helped show had huge theoretical promise, they should rally along with us rather than spitefully burning their own bridges. As a community, we should promote the potential of our research and try to convince whoever needs convincing that these endeavours, these ‘Manhattan’ style projects are the only way in which we can bring disruptive technologies to fruition. And even if such projects fail, **even** if there is *no other possible spin off* that comes out of them, think about this: Such projects cost about the same amount as a couple of ICBMs. I know which I’d rather have more of in this world.


Hi Suz,

I think perhaps there are a few issues that you are bundling together here. First off, the von Neumann architecture is not exactly equivalent to other architectures in terms of resource use, and certainly parallel architectures have some advantage. The important thing to note, however, is that the difference is only a factor of O(n). While this makes a difference in practice, it isn’t such a huge difference. We don’t need adiabatic QCs to exploit such parallelism. I’m not sure how much it costs DWave to produce a 128 qubit device, but I doubt it is less than the £400 it would cost me to buy a 512 core graphics card, which offers me far more parallelism (though of course this isn’t the only reason for pursuing AQC). I really don’t see DWave devices offering much of an advantage over cheaper conventional technology, unless you can gain a quantum speed-up.

From what I have heard about the coherence times in those devices, I honestly do not believe there is anything going on that will lead to speed-ups over classical systems, and I don’t see how DWave can compete with silicon technology without such a speed up. Superconducting electronics is simply so much harder to do.

On the other hand, I do admire the DWave approach of simply trying to build ambitious devices as quickly as possible, and I do believe that something like that is needed to produce a large scale QC. The academic efforts are different to an industrial R&D approach, in that academics are usually interested in doing interesting science along the way, which certainly slows things down. Additionally, the funding situation certainly favours more incremental approaches. If anyone wanted to give me $100m to build a QC as quickly as possible without slowing to explore interesting physics along the way, I’d jump at the challenge. I’d love to see a QC, and certainly have some ideas about how we should go about building one.

The problem is that I very strongly believe DWave has picked the wrong model. The qubits are far too noisy. Superconducting qubits, though, I think are a very good way to go, but you need qubits to be below threshold first (and some are already!). The adversarial stance to the academic community isn’t helpful either. We really aren’t the enemy (hey, you used to be one of us!). I guess many of us are simply aware that the DWave approach will probably fail, and are frustrated to see DWave thrash or discount so much of the academic research that is taking place. The most successful approaches so far are ions and quantum optics, and DWave has not demonstrated any ability to perform anything equivalent to the quantum algorithms demonstrated in these set-ups. The optimization tasks really aren’t much evidence because they degrade gracefully to a classical approach as the noise is increased.

So I have no animosity towards DWave, I am just convinced that the approach being taken at the moment won’t succeed in building a real quantum computer. Don’t get me wrong, I would seriously consider industry if I thought the approach would actually yield a QC, and support any efforts in that direction.

Lastly, as regards the whole NP question, this is actually an incredibly important question. If P=NP, then there is no public key crypto, among many other consequences. This is true also if BQP=NP, and we can actually build scalable QCs.

Anyway, please don’t take this as adversarial. I only meant it as a defense of my stance, and not a criticism of yours.

Joe,

Please support your assertion that D-Wave’s qubits are noisy, in light of:

http://arxiv.org/abs/0909.4321

[Physical Review B 81, 134510 (2010)]

http://arxiv.org/abs/0812.0378

[Phys. Rev. B 79 060509 (2009)]

http://arxiv.org/abs/0712.0838

[Phys. Rev. Lett. 101, 117003 (2008)]

http://arxiv.org/abs/1006.0028

etc.

Hi Joe,

Thanks for the comments. I appreciate your viewpoint and the work that you do in this area.

A couple of personal thoughts:

I think that superconducting processors could be very useful, even if we can’t use them to build QCs. For example, I can’t think of any way to build a truly 3-dimensional conventional processor without a serious overheating problem (if anyone has found a way around this I’d love to know!). Yet superconducting processors could be a good candidate. Just imagine the intrinsic connectivity gain you could get with a fully 3D processor block! 🙂

Superconducting electronics is only harder to do because the technology isn’t as mature. If we threw half as much money at it as we do at the semiconductor industry I think it could be very promising. Unfortunately the problem with disruptive technologies is that they have to not only prove their own worth, but so much so that they dislodge the competition. That is difficult, but I don’t think that it is hopeless. There could be many new avenues opened up by such a pursuit.

I didn’t mean that the NP question isn’t incredibly important, I just meant that sometimes it seems disproportionately represented in the blogosphere 😉 Compare with, let’s say, decoherence, which is also a pretty important topic that doesn’t get anywhere near as much limelight!

The qubits are actually rather quiet – and what’s more, they are reproducible and tunable, which is something that most research groups have only just started to realise is a requirement in multi-qubit experiments as they start to go towards more than a handful of devices. In fact, gate model efforts could learn about how to build better qubits by studying the designs and best-practices fabrication methods that are employed by D-Wave. There is a lot of good Physics that is being done (and published!) here that seems to be ignored on a regular basis.

I’d still love to read about any further results towards a fault-tolerance threshold theorem in AQC. I think I’m a bit out of date on this topic, would love to know how this is progressing. Maybe I missed something big!

Rumors: I didn’t think there was any denial that the qubits are extremely noisy. I’m just going on info about T2 times that I got directly from Geordie and Suz when they visited me. Maybe I am misremembering, but I am sure they said T2 was several orders of magnitude less than the annealing timescale. Perhaps I am wrong, or perhaps we are simply using noisy to mean different things. I mean phase noise, not depolarization/relaxation/bit flip errors.

Suz: The 3D block is an interesting idea, but I don’t think it is a realistic option for most users (including corporations). Superconductors require a cryogenics infrastructure that is not so easy to maintain for most potential customers. If we had room temperature superconductors, of course, this objection would disappear.

As regards decoherence, Scott has a post saying it is the main hurdle to scalable QCs, but I strongly disagree with this. There are a few implementations where decoherence is the dominant problem, but others such as ion traps are already below threshold, and the problems with scalability are of a different nature (cooling, controllability, etc).

I didn’t mean to imply that you weren’t doing any physics, just that doing all of the interesting physics along the way has the potential to slow any academic effort.

Hi Joe: the rf-squid qubits we’re building are not extremely noisy. They are among the lowest noise superconducting qubits ever built, and they achieve these noise levels in a real processor architecture that is actually scalable in practice.

I suppose you could argue that no solid state qubit can work. But you can’t argue that ours are somehow worse than other solid state qubits, that’s not correct.

T2 in the sigma-z basis is much shorter than the annealing time, yes. But then the computation isn’t done in the sigma-z basis…

Geordie: I think we are perhaps going in different directions here. I meant that on an absolute scale, in terms of the noise levels necessary to have a properly coherent system, the qubits would still seem to be way off. On a relative scale, as both you and Suz mention, these may well be regarded as low noise when compared to other such systems. Perhaps I have chosen unfortunate wording.

Suz: Saying the computation isn’t performed in that basis is a bit dodgy. All bases are important, unless you are using decoherence free subspaces, which you aren’t as far as I am aware. The argument that only the Hamiltonian basis is important is fundamentally wrong (I’m not attributing this to you, but it is something I have heard wheeled out in relation to DWave devices). Larger entangled intermediate states will have lower overlap with the actual state of the system as local decoherence is increased, because the distance between any separable (or close to separable) state and the necessary state will grow as the instance size increases for the problem. The short T2 rapidly erodes off-diagonal terms in the z-basis (independent of the Hamiltonian), and this will limit your ability to pass through entangled states.

Joe: Maybe you’ve hit on the key technical issue upon which we seem to be disagreeing. I’m going to challenge your statement that T2 erodes off diagonal terms in the z basis. Maybe we can try to pinpoint where exactly in the argument we don’t agree — here goes!

*Throughout the following, assume a weak coupling limit (we can make this precise later) to an environment in thermal equilibrium at temperature T.*

The dominant effect of coupling to an environment is the thermalization of the central quantum system. Forgetting about timescales for the moment, a closed quantum system with energy eigenvalues E_0, E_1, …, E_N with E_{j-1} < E_j, which is then weakly coupled (in a sense that can be made precise) to an environment in thermal equilibrium at temperature T, has an equilibrium occupation probability of the jth energy eigenstate of exp(-E_j/kT)/Z. In other words, if we write an open-system Hamiltonian as closed quantum system + thermal environment + coupling, eventually the quantum system reaches thermal equilibrium with the environment (2nd law of thermo). Do you agree with this?
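The occupation formula in that step is easy to sanity-check numerically. A toy sketch, with an invented four-level spectrum and k set to 1 (units are illustrative only):

```python
import numpy as np

# Equilibrium (Gibbs) occupation of each energy eigenstate:
# p_j = exp(-E_j / kT) / Z, as stated above.
# The spectrum and temperature are hypothetical, in matching units.
E = np.array([0.0, 0.5, 1.2, 3.0])   # E_0 < E_1 < E_2 < E_3
kT = 0.7

weights = np.exp(-E / kT)
Z = weights.sum()                    # partition function
p = weights / Z

print(p)   # populations sum to 1 and decrease with energy
```

Note that these populations say nothing about the eigenstates themselves, which, as the next step in the argument points out, can be highly entangled.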

These are equilibrium probability distributions. They do not decay in time. The eigenstates corresponding to these eigenvalues can be highly entangled. They are set by the closed quantum system's Hamiltonian. Correct?

T2 is the timescale for decay of off-diagonal elements of the density matrix (written in the energy eigenbasis of the closed quantum system). It describes the timescale over which superpositions of energy eigenstates dephase. It does NOT describe timescales over which superpositions of Z basis states in a particular energy eigenstate dephase. They don't dephase — they are protected by the Hamiltonian (by energy level quantization), as long as the physical effect leading to T2 doesn't cause broadening of the energy levels into bands that overlap (technically this is what sets the "weak coupling limit" referred to above). Do you agree with this so far?
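The claim in this step, that T2 governs decay of off-diagonal elements of the density matrix while leaving the populations alone, can be written down as a toy model. This assumes a simple exponential decay, which is a simplification of any real bath:

```python
import numpy as np

def dephase(rho, t, T2):
    """Pure dephasing in the basis rho is written in: off-diagonal
    elements decay as exp(-t/T2); diagonal populations are untouched.
    (A toy model only; real baths have spectral structure.)"""
    out = rho * np.exp(-t / T2)
    np.fill_diagonal(out, np.diag(rho))   # restore the populations
    return out

# Equal superposition of two energy eigenstates: rho = |+><+|
rho0 = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])
rho = dephase(rho0, t=5.0, T2=1.0)
print(rho)   # diagonals stay 0.5; off-diagonals shrink by e^-5
```

The whole dispute in the thread is about which basis this picture applies in: written in the energy eigenbasis it destroys superpositions of energy eigenstates, not superpositions of Z-basis states within a single eigenstate.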

Hi Geordie,

I believe the T2 we are talking about here is not the timescale for the decay of off-diagonal terms in the general system, but rather the single qubit dephasing rate. Am I not correct?

If so, this is more than simply a shift in relative energy levels, but rather a perturbation to the wavefunction of each eigenstate. Thus the ground state would have a reduced overlap with the target state. As more qubits are added to the system, you will find that you no longer have a significant overlap at all. So even if you start with an eigenstate of the Hamiltonian, just these T2 terms will lead to thermal excitations.

As the size of the system increases, the probability of being in the ground state will go to zero due to the vast number of low-lying excited states. The short T2 times will guarantee that these become populated.
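The scaling worry here can be illustrated with the Boltzmann weights from earlier in the thread: hold the gap fixed, grow the number of low-lying excited states, and the thermal ground-state occupation falls towards zero. The spectrum and temperature below are invented for illustration:

```python
import numpy as np

def ground_occupation(n_low, gap=0.1, kT=1.0):
    """Thermal occupation of the ground state when n_low excited
    states all sit a single small gap above it (toy spectrum)."""
    E = np.concatenate(([0.0], np.full(n_low, gap)))
    w = np.exp(-E / kT)
    return w[0] / w.sum()

for n in [1, 10, 100, 1000]:
    print(n, ground_occupation(n))   # falls towards zero as n grows
```

This is only the equilibrium picture; whether the system actually reaches those populations on the annealing timescale is a separate question, which is where the T2 argument comes in.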

Even though the eigenstates of the Hamiltonian can be highly entangled, the populating of these low-lying states will likely act to limit the amount of entanglement in the system.

@Joe: I’m certainly not a physicist, but you seem to be talking in terms of blaming a baseball for letting itself be hit by a bat instead of blaming the bat for hitting the ball.

T2 does not cause thermal noise. It’s thermal noise that could cause a T2 to exist. As such, saying that “these T2 terms will lead to thermal excitations” makes no sense whatsoever, even more so since T2 is not even close to a complete description of the noise in the system. Even MORE so, it’s not even the dominant impact of the thermal noise. The spectral densities and structure of the bath are what describe any thermal excitations in the system, and thermal excitations primarily involve transitions between diagonal elements of the density matrix.

In this sense, the basis does matter significantly, because the bath is unlikely to act non-negligibly on all bases of the Hamiltonian (at least the evidence in the peer-reviewed literature appears to suggest that it’s well-described by noise in a single basis). For example, having instantaneous decoherence in the energy basis (i.e. T2 of the energy basis = 0) means that all transitions are adiabatic if you ignore thermal excitations between diagonal elements of the density matrix (since Landau-Zener transitions are only caused by off-diagonal elements in the energy basis). From that perspective, T2=0 would be the optimal decoherence time; the problem is that T2 isn’t the dominant effect, so you’ll probably get tons of action between diagonal elements.

To summarize: T2 alone effectively means nothing. The spectral densities and structure of the bath are what really matter.

Neil: I know T2 is not a complete description of the system, but dephasing is the dominant form of decoherence in many many systems. And so it is particularly relevant. Single qubit dephasing can most definitely lead to thermal excitations. I stress I am not talking about the global energy basis, nor should I. The microscopic processes which lead to decoherence are predominantly local in nature.

I think perhaps you are misunderstanding my argument.

Joe: I think perhaps you are misunderstanding what “noise” is.

“dephasing is the dominant form of decoherence”

Depending on how you define “decoherence”, that is a tautology, and doesn’t contradict anything I said. What I stated is that decoherence is a very minor effect of the thermal noise compared to other effects which have nothing to do with decoherence.

For example, excitations due to diagonal-to-diagonal interactions, as I already stated, are much more significant and still don’t pose a significant problem for current scales of computation (if the temperature is ~20mK). For certain computations, you can even assume that there are ONLY diagonal elements in the energy basis and still see significant quantum effects, so those effects in those computations are clearly not impacted at all by decoherence of the energy basis. If they were affected by decoherence, setting the off-diagonal elements to zero would change the results, but it doesn’t. Therefore, there are cases where decoherence in the energy basis is irrelevant.

Whoops, that first sentence was supposed to be “…what “noise” is [present in real systems].” Sorry for unintentionally sounding like such a jerk there. 😦

Neil: What I said is certainly not a tautology. There is no reasonable definition of decoherence which is a synonym of dephasing. It totally neglects depolarization, relaxation, etc. I do not accept your argument at all.

As a really trivial example of what I am getting at, think of the ground state of a Hamiltonian H = XX + YY + ZZ. Clearly the ground state of this is the anti-symmetric (singlet) state. However if we have -only- dephasing on each of the qubits, this state will tend towards the mixed state 0.5(|01><01| + |10><10|) more quickly than the fully thermalized state. For Hamiltonians with entangled ground states, this can bring you out of the ground state very quickly.
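Joe’s singlet example can be checked in a few lines: applying a full local Z-dephasing channel to the singlet leaves an incoherent mixture of |01> and |10>, and the fidelity with the entangled ground state drops to 1/2. A minimal numpy sketch:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def z_dephase(rho, p, qubit):
    """Single-qubit phase-flip channel: rho -> (1-p) rho + p Z rho Z."""
    Zq = np.kron(Z, I2) if qubit == 0 else np.kron(I2, Z)
    return (1 - p) * rho + p * Zq @ rho @ Zq

# Singlet, the ground state of XX + YY + ZZ: (|01> - |10>)/sqrt(2)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

# Fully dephase each qubit (p = 1/2 kills all Z-basis coherence)
rho = z_dephase(z_dephase(rho, 0.5, 0), 0.5, 1)

print(np.round(rho, 3))   # diagonal: incoherent mixture of |01>, |10>
print(psi @ rho @ psi)    # fidelity with the singlet: 0.5
```

So purely local dephasing, with no energy exchange at all, is already enough to pull the state away from an entangled ground state, which is the crux of this side of the argument.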

“There is no reasonable definition of decoherence which is a synonym of dephasing. It totally neglects depolarization, relaxation, etc. I do not accept your argument at all.”

See, I think that’s where the disconnect is. By “decoherence”, you’re including relaxation between diagonal elements, which IS the dominant effect of the thermal noise in these computations, but is not included in T2. Therefore, by those definitions of “dephasing” and “decoherence”, I can present plenty of examples where “dephasing” has zero impact (to as much precision as matlab’s diagonalization can give), while relaxation between diagonal elements has a very measurable impact.

For example, annealing a single flux qubit in the bath that D-Wave has observed, with a final z bias smaller than the plasma frequency. Explicitly changing T2 (from zero to infinity and anywhere in between) doesn’t change the result, whereas changing the relaxation between diagonal elements does. The same occurs when looking at 8 qubits. I think that’s fairly clear evidence that T2 is comparatively irrelevant for small numbers of qubits.

That’s because the algorithm degrades gracefully to simulated annealing. Even with dephasing you still eventually get to the right answer with reasonable probability if you wait long enough. The question is how long this takes.

It doesn’t degrade into classical annealing in the cases I mentioned unless you raise the temperature. It is definitively and clearly dominated by quantum tunnelling in the annealing, because if you treat the system with classical dynamics (as I did in absurd detail using thousands of computers for 6 months), the results are not just quantitatively different, but qualitatively different, even if the classical model is run for milliseconds. The classical model results look *nothing* like quantum model results with T2=0 at 20mK.

i.e. Setting T2=0 makes it no more similar to classical than setting T2=infinity, because as I said, dephasing has *no impact* on the computation, even though it is still definitively a quantum, non-classical computation.

If you want to see these results for yourself, you can attend the presentations at Waterloo sometime around the 25th. Heck, if you still don’t believe it, you can run the simulations yourself. I’m fairly sure that once the paper on the experiment is peer-reviewed, there should be no problem listing an exhaustive set of simulation parameters we used (most will probably be in the supplementary material).

You seem completely unwilling to shake the false idea that “T2=0” implies “classical”, when all it takes to see otherwise is a fairly simple simulation.

Neil, if it is completely invariant to single qubit dephasing, then the density matrix in the Z basis must be diagonal (or at least behave equivalently to a diagonal matrix). This means that the state never contains superposition, or is at least equivalent to such a state. Do you agree with this, or are you refuting it?

Also, I feel like I have taken over Suz’s comment section. Sorry Suz, it was unintentional.

No no no, I’m talking about dephasing in the *energy* basis. The density matrix in the *energy* basis can be diagonal and still show significant quantum effects. 🙂

I haven’t checked, but I’m fairly certain that the dephasing and relaxation are most significant in the X basis (at least by orders of magnitude), which would be one factor contributing to why the system falls into the ground state so easily for the initial Hamiltonian. I don’t know exactly what the impact of dephasing in the Z basis is, but whatever dephasing the chips do have in the Z basis, it appears not to destroy the significant quantum, non-classical effects observed, even if the annealing is done over a whole millisecond.

Neil, I believe if you read through my previous comments you will see that I have gone to pains to point out that I am talking about the single qubit dephasing rate, which is not the same as dephasing in the energy basis.

The point remains that you haven’t provided any evidence, or idea for a feasible experiment to check for evidence, that single qubit dephasing rates are causing the computation to act classically. We’ve run both experiments and simulations showing strong evidence to the contrary, so the entire reason I’m continuing to reply here is that I keep hoping you’ll suggest something to either support your hypothesis or let the hypothesis be tested in a new way.

Quite frankly, I’d *love* to hear your thoughts on how to feasibly test your hypothesis, partly so that this guessing game can finally be put to rest instead of coming up every time Scott Aaronson makes a blog post.

Neil, are you claiming that the basis for the noise makes no difference? In that case it would follow that no type of noise could cause any problems, which is clearly nonsense. Clearly with single qubit dephasing, you must remain close to a product state during the computation, with the maximum distance from the product state determined by the dephasing rate. So single qubit dephasing can definitely break the algorithm.

It would be relatively easy to test how close you can get to the entangled state by choosing several adiabatic evolutions which should begin in the same way, and then diverge when you reach the entangled state. You could do this if you could gradually change the relative strengths of 3 terms in the Hamiltonian rather than two. But given the rate of dephasing it is pretty easy to calculate an upper bound on the entanglement of any bipartition of the system, and it is very very low. The Margolus-Levitin theorem will tell you how quickly you can increase the off-diagonal elements (in the Z basis representation), and the single qubit dephasing time will tell you approximately how quickly they decay.

Honestly, though, it is extremely annoying when you talk down to me. This isn’t random crap I’ve made up, it’s the same concern that almost everyone in the community raises: if your qubits decohere rapidly then you never achieve much (if any) entanglement, and there is no known way to achieve a super-polynomial speedup without entanglement.

My apologies for annoying you. I’m also not making things up, and I’m certainly not trying to talk down to you, but from my perspective your arguing by authority makes it sound like you think I’m so completely uninformed that I’m not even worth presenting your evidence to, so I’m feeling a bit offended. All I’m doing is looking at the evidence I have and the established physics and making hypotheses based on that.

Let’s take this one step at a time:

“Neil, are you claiming that the basis for the noise makes no difference?”

No, I’m claiming the exact opposite of that.

“Clearly with single qubit dephasing, you must remain close to a product state during the computation, with the maximum distance from the product state determined by the dephasing rate.”

That’s not in dispute. One thing in dispute is whether single qubit dephasing is non-negligible at all points during the computation. The initial Hamiltonian (i.e. strongly -X with large X decoherence), for example, forces an in-phase superposition, so it could stay coherent indefinitely. That’s not entangled, but it’s nonetheless important. A much more thorough example, showing little impact from single qubit dephasing time up to 20 qubits: http://arxiv.org/abs/0803.1196

“given the rate of dephasing”

Unfortunately, we don’t have super-fast control lines like in: http://iopscience.iop.org/1367-2630/12/4/043047/pdf/1367-2630_12_4_043047.pdf so measuring the rates of dephasing at different points during a computation may not be feasible using conventional approaches.

“achieve a super-polynomial speedup”

Nobody’s claiming any scaling speedup here, let alone super-polynomial. However, even if one had only incoherent quantum tunneling (which is what we have late in the computation), it might or might not provide a large constant factor speedup over classical.

“It would be relatively easy to test how close you can get to the entangled state by choosing several adiabatic evolutions which should begin in the same way, and then diverge when you reach the entangled state. You could do this if you could gradually change the relative strengths of 3 terms in the Hamiltonian rather than two.”

Well, hopefully you’ll be happy to know that we’ve done an experiment that sounds qualitatively similar to that over the past few months and the results will be out once the ton of supplementary material is compiled. They’ll also be presented at University of Waterloo on June 25th. I look forward to hearing your thoughts on it. 🙂

Neil, I’ve just realised that I had been mistaking you for someone else, and so may have made some unwarranted assumptions about your background, so if I have oversimplified things or something similar, then I am sorry.

I’m not really sure what I have said that can be interpreted as an argument from authority, but let me recap the reasons for my misgivings, to avoid any confusion:

1) Systems with relatively quick loss of coherence in any local basis do not support much entanglement, unless there is some process used to actively suppress this loss of coherence (i.e. fault-tolerant computation, distillation, etc.)

2) Without entanglement, a significant difference in the scaling of time with instance size between an adiabatic system and a classical computer is essentially impossible.

3) DWave does not seem to do anything to suppress the noise.

4) DWave’s qubits have significant local noise (this may sound like an argument from authority, but actually I have to fall back on what I have heard from people at DWave here)

5) Given the above, it seems essentially impossible to achieve a meaningful scaling advantage.

Now my argument is only with claims that you can achieve a significant scaling advantage for optimization problems, because this really doesn’t seem to gel with the above. But I notice in your last comment you step back from any claims of a scaling advantage, in which case I’m not really arguing with you. But in that case you wouldn’t really have something that could meaningfully be called a quantum computer, and I really can’t see the point of such a device. NVidia graphics cards give you a cost of about £1 per core, and that would be very, very hard to compete with without a scaling advantage.

I notice we seem to have completely hijacked Suz’s comment section, so unless she is actually interested in this, perhaps email would be a better way to continue the discussion.

Let me finish by saying that I really do respect some of the work being done at DWave both in terms of engineering and physics, it is simply that I cannot see how such a device can give a substantial advantage in solving large optimization instances, given the issue of noise.

Hi Joe,

At the risk of further hijacking this comments section, I just want to put out there that I don’t agree with many of the technical assertions you’re making above. Mohammad Amin has written quite extensively on these issues, and I think there is a conceptual point that isn’t getting across, related to this idea of a single-qubit T2 time.

As far as I understand your argument, you’re thinking that the single-qubit T2 time somehow inherently represents the dephasing time between Z-basis eigenstates (i.e. readout states). It doesn’t. It specifically represents dephasing between ENERGY eigenstates, always, regardless of the Hamiltonian of the single qubit. There are special bases (as Neil was saying above). When you go to multi-qubit systems, dephasing still has the exact same effect: it removes phase coherence between energy eigenstates.
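To make that distinction concrete, here’s a minimal single-qubit toy model (all rates and units are illustrative, nothing here is a DWave parameter): with H ∝ σx, a dephasing channel that commutes with the Hamiltonian kills the off-diagonal element in the energy eigenbasis while leaving the energy-level populations untouched — that is what T2 measures.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)

Delta = 1.0   # tunnelling energy (illustrative units, hbar = 1)
gamma = 0.2   # dephasing rate (illustrative)
H = 0.5 * Delta * sx          # energy eigenstates are |+x>, |-x>

# Start in |0> (a Z eigenstate): an equal superposition of energy eigenstates,
# i.e. maximal coherence in the energy basis.
rho = np.array([[1, 0], [0, 0]], dtype=complex)

# Euler integration of the Lindblad equation with jump operator L = sigma_x,
# which dephases between the ENERGY eigenstates (it commutes with H).
dt, steps = 0.001, 20000
for _ in range(steps):
    drho = -1j * (H @ rho - rho @ H) + gamma * (sx @ rho @ sx - rho)
    rho = rho + dt * drho

# Rotate into the energy eigenbasis (columns of V are eigenvectors of H)
_, V = np.linalg.eigh(H)
rho_E = V.conj().T @ rho @ V

print(abs(rho_E[0, 1]))                    # energy-basis coherence: decayed to ~0
print(rho_E[0, 0].real, rho_E[1, 1].real)  # energy populations: still ~0.5 each
```

The point of the sketch: the coherence that decays on the T2 timescale is the off-diagonal element in the basis set by the Hamiltonian, not automatically the readout (Z) basis.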

I think the mental picture you have is this, correct me if this is wrong: (a) T2 is short, therefore phase coherence in any basis is quickly lost; (b) therefore in multi-qubit systems, the system quickly “factorizes” into something like a product state.

Both of these are wrong — the quantitative arguments can be found in Mohammad’s work on these issues.

Have I framed this right? I just want to make sure that I get the point across that I’m arguing that your central argument is incorrect.

Hi Geordie,

So perhaps we are using slightly different definitions of T2. I certainly agree that dephasing in the energy basis is probably not much of a problem in an adiabatic computation, and I would like to stress that this is not what worries me. What does worry me is that many of the processes which give rise to noise are necessarily local on a microscopic scale. Any local noise in the system will reduce the entanglement achievable. I am assuming that the measured single-qubit T2 is of roughly the same order of magnitude as the worst of the inherent local noise. If I’m wrong by 3 or 4 orders of magnitude, then this argument won’t hold for your current systems and will only kick in at larger scales, but this is almost certainly not the case. Variation in local Hamiltonians independent of coupling terms will necessarily introduce local noise, rather than purely dephasing noise in the energy basis.
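As an illustration of the kind of effect I mean (a toy model with made-up rates, not a model of DWave’s actual qubits): a maximally entangled two-qubit pair subjected to independent local dephasing on each qubit loses its entanglement, as measured by Wootters’ concurrence.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Local dephasing jump operators, one acting on each qubit separately
L1 = np.kron(sz, I2)
L2 = np.kron(I2, sz)

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix (1 = maximally entangled)."""
    yy = np.kron(sy, sy)
    R = rho @ yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Start in the Bell state (|00> + |11>)/sqrt(2): concurrence = 1
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Euler integration of purely local dephasing (illustrative rate gamma)
gamma, dt, steps = 0.1, 0.001, 5000
for _ in range(steps):
    drho = gamma * (L1 @ rho @ L1 - rho) + gamma * (L2 @ rho @ L2 - rho)
    rho = rho + dt * drho

print(concurrence(rho))   # roughly e^(-2) ≈ 0.135; the pair started at C = 1
```

Nothing here requires the noise to look like dephasing in the joint energy basis: purely local channels are enough to strip the entanglement away, which is exactly my concern about scaling.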

By the way I totally don’t mind at all if people are hijacking my thread (as long as it is relevant of course – which is the case here). Part of the idea is that there should be open forums where people can debate and resolve misunderstandings. Actually I feel pretty bad for not having commented more on this thread!

Suz: From your latest post it looks like you have much more interesting things to be hearing about at the moment!

Hi Joe

The main reference talking about this stuff is http://arxiv.org/abs/0803.1196. It would be awesome if you could look it over and tell me what you think.

Thanks, will take a look.