MQT Paper update… now in colour

Oh my goodness, this MQT paper is becoming a TOME….

So yesterday we had the red ink debacle which spurred me to write the Paper Algorithm:

1.) Write paper
2.) Give to colleague
3.) Get returned
4.) Shake off excess red ink.
5.) Rework.

Repeat steps 3-5 and hope for convergence.
Note this algorithm may have bad runtime scaling, since T(step 4) ≫ T(step 3).

A friend of mine tried to suggest some further steps involving the journal submission process, but unfortunately those kinds of delightful games are beyond my event horizon at the moment!

Here is a picture of the red ink debacle (which, by the way, looks worse now as I’ve covered it in my own rebuttals of, and agreements with, the arguments – in black ink, I hasten to add).

Anyway, the new version of the document is much improved by the corrections, but I fear it may have to be streamlined as it’s already packing 7 pages of Physics awesomeness… and I’m wondering about going further into the details of thermal escape from the washboard potential. Maybe I shouldn’t do that.
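For anyone wondering what ‘thermal escape from the washboard potential’ actually means: in the usual tilted-washboard picture of a current-biased Josephson junction, the phase particle can be thermally activated over the barrier at roughly the textbook rate

\Gamma_{\mathrm{thermal}} \approx \frac{\omega_p}{2\pi} \exp\!\left( -\frac{\Delta U}{k_B T} \right)

where \omega_p is the junction plasma frequency and \Delta U is the barrier height. The MQT business is all about the regime where tunnelling through the barrier wins over being kicked over it. (That’s just the standard expression, not a preview of anything in the paper.)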

Post-IOP-Talk thoughts

So I gave this talk last night entitled: Quantum Computing: Is the end near for the Silicon chip? It was an interesting experience. I’ve given talks of this size before, but I don’t think I have ever tried to cover quite so many topics in one go, or give so many demonstrations in the process. So with two radio microphones strapped to my waist and 3 cameras recording the talk, I proceeded to enthusiastically extol the future potential of superconducting electronics technology and warn about the limits of silicon technology. I gave an overview of superconductors for use in quantum computing, which culminated in a discussion of interesting applications in machine learning and brain emulation.

The main problem I had during the talk was that I wanted to stand in FRONT of the rather large podium/desk in order to talk to the audience, as I felt this would be a bit more personal (rather than ‘hiding’ behind the desk). However, the controls for the visualiser (which is a camera pointing at an illuminated surface, connected to the projector so that the audience can look closely at objects you wish to show) were behind the desk, so I had to keep running backwards and forwards every few minutes to switch from visualiser -> laptop output. This was most irritating and is really poor design in a lecture theatre. The control for the projector output really should have been somewhat more mobile.

The other moment of complete fail was when the large piece of YBCO stubbornly refused to cool to below 90K when immersed in the liquid nitrogen. Stupid smug piece of perovskite. I stood there for what seemed like hours, with over 80 pairs of curious eyes fixated upon my failing experiment, eagerly anticipating some badass superconducting action. And the damn magnet wouldn’t levitate. There was just way too much thermal mass in the YBCO block and its metal/wood housing to cool it quickly enough. I eventually gave up and swapped to the smaller YBCO piece, making some passing comment about physics experiments never working.

Anyway, those gripes over, the talk seemed to attract a lot of questions relating to the last 30% of the material I covered, namely the part about simulating the human brain and potentially building quantum elements into such machine intelligences.

I also hope it inspired some of the younger members of the audience to see working as scientists in these areas as an interesting career path.

I’ll try and get the talk edited and put up on the web soon 🙂

Fridge surgery

Take a look at this picture:

mwahaha

Yes I am hacksawing a dilution refrigerator….

One of the entry ports to the IVC has been hard-soldered with a stainless steel placeholder bush. We need to replace this with our custom-made copper bush with feedthroughs for coaxes and DC lines. It is virtually impossible to remove this part given the small space around it, and we decided that we don’t want to put power tools nearby, lest we accidentally buzz through the dilution unit. Which would be a bit like putting a scalpel through the jugular.

So hacksaw it is.

I wonder how many low temperature physicists have wanted to saw their dilution fridges in half before. Today I got to indulge in that pleasure. The results weren’t pretty at times:

that's gotta hurt

Although halfway through I started thinking ‘I hope that this is the right part I’m sawing…’

World’s first on-chip quantum computer! Oh wait…

So I opened my freshly delivered copy of this month’s Physics World and began to mull over the articles whilst enjoying a KitKat. On the first page I found a piece about ion trap quantum computing proclaiming that “Researchers in the US claim to have created the first small scale device that can perform all the steps needed for large-scale quantum computation”, referring to NIST’s latest ion trap chip.

You can read about it here:
Scalable ion traps for quantum information processing

Here is the PhysicsWorld web version of the article:
Tiny device is first complete ‘quantum computer’

Haven’t superconducting flux/charge qubits been able to do this for ages – what’s all this ‘first’ business? Besides, didn’t Yale do something similar a few months back? Oh no, wait a minute, they actually ran an algorithm…
And that’s without even considering the advances in AQC, hmmm.

Anyway, that didn’t bother me too much. But then the article goes on to say:

“The implication was that quantum computers could operate at ultra-high speeds, which could be applied to solving complex problems like cracking some of today’s most widely used encryption codes.”

Is that the BEST application you can think of to run on a QC? Really? Come on… Get creative! Who cares about cracking RSA anyway? They’ll just keep adding more digits 😉

In fairness the magazine also had quite a nice article about the violation of Bell’s inequalities and some of the potential loopholes in those experiments. Which made me happier.

Quantum computing fail…again.

Courtesy of Quantum Bayesian Networks, an article entitled “The Quantum Leap of Quantum Computing” on Penny Sleuth. It’s great to see a wider business and market audience becoming interested in QC.

However, this is slightly irritating:

“This means computers would become exponentially more powerful because each “quantum bit” (qubit) could store a much greater range of numbers than the two that binary math restricts us to. Imagine a laptop with the computing power of the world’s 10 most powerful supercomputers. Then you begin to grasp the potential of quantum computing.”

In the spirit of a very popular television program:

talent_fail

Let me explain for any readers who are slightly confused at this point: Quantum computers will be very good at solving certain types of hard problems somewhat faster than classical computers. This should become some sort of mantra. (If anyone can think of a catchy version that would be cool).
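The root of the confusion, roughly speaking: an n-qubit register does take 2^n complex amplitudes to describe,

|\psi\rangle = \sum_{x \in \{0,1\}^n} \alpha_x |x\rangle , \qquad \sum_x |\alpha_x|^2 = 1 ,

but measuring it hands you back just one n-bit string x, with probability |\alpha_x|^2. The register doesn’t ‘store’ exponentially many numbers that you can simply read out; the whole art is in arranging the interference so that the string you do get is a useful one.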

They won’t be general purpose machines. The best way to think of a QC is more like a co-processor (say like a hardware graphics accelerator).
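To make the co-processor picture concrete, here’s a minimal sketch in Python. The QuantumCoprocessor class and its sample() method below are entirely made up for illustration (no real device API is being quoted); the point is just the shape of the workflow: classical pre-processing, one well-defined kernel offloaded to the quantum device, and classical post-processing of the measurement results.

import random

class QuantumCoprocessor:
    """Hypothetical stand-in for a quantum device: you submit one
    well-defined subproblem, it hands back classical bitstrings."""

    def sample(self, num_qubits, shots):
        # A real device would run a circuit and return measurement
        # outcomes; here we just fake some random bitstrings.
        return ["".join(random.choice("01") for _ in range(num_qubits))
                for _ in range(shots)]

def solve_problem(data):
    # 1. Classical pre-processing: boil the big problem down to the
    #    one kernel the co-processor is actually good at.
    subproblem_size = len(data)

    # 2. Offload just that kernel, like handing a rendering call to a
    #    graphics accelerator.
    qpu = QuantumCoprocessor()
    samples = qpu.sample(num_qubits=subproblem_size, shots=100)

    # 3. Classical post-processing: turn the measurement statistics
    #    back into an answer to the original question.
    return max(set(samples), key=samples.count)

print(solve_problem([1, 0, 1]))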

The types of problems that they will be good at solving are exciting and interesting in themselves. Quantum computers are cool enough without the overhype 🙂