There are some lovely simulation results coming out of D-Wave’s AQUA@Home project:

The project is designed to simulate what D-Wave’s adiabatic quantum optimization chips are doing during their quantum annealing cycles. The chips consist of coupled qubits which behave like an array of interconnected spins, in an Ising-type configuration. The spin system can be highly frustrated depending upon the sign and magnitude of the coupling between the spins and on the bias of each spin site. All possible configurations of the spins have an associated energy, and the chips try to find the configuration of the spins which minimises this energy.
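To make the energy-minimisation picture concrete, here is a minimal sketch of a classical Ising energy function and a brute-force ground-state search for a tiny spin system. The couplings `J` and biases `h` here are randomly invented for illustration; they are not taken from any actual D-Wave chip layout.

```python
import itertools
import numpy as np

# Hypothetical ±1 couplings J[i][j] (i < j) and biases h[i] for a 4-spin system.
# Energy of a configuration s (each s_i = ±1):
#   E(s) = sum_{i<j} J_ij * s_i * s_j + sum_i h_i * s_i
rng = np.random.default_rng(0)
n = 4
J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1)  # upper-triangular couplings
h = rng.choice([-1.0, 1.0], size=n)                     # per-site biases

def energy(s):
    s = np.asarray(s, dtype=float)
    return s @ J @ s + h @ s

# Brute-force search over all 2^n configurations for the lowest-energy one.
# This is exactly the search that becomes intractable classically as n grows,
# which is what the annealing hardware is trying to short-circuit.
best = min(itertools.product([-1, 1], repeat=n), key=energy)
print(best, energy(best))
```

With conflicting signs in `J` and `h`, not every spin can simultaneously satisfy all of its couplings and its bias — that is the frustration mentioned above.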

The AQUA project simulates these spin systems using a quantum Monte Carlo technique to calculate the energy gap between the ground state of the system (the lowest-energy ‘solution’ to the problem) and the first excited state (the next-best solution) for some ‘ultra hard’ problems (which you can think of as ones where the degree of frustration can be very high). You can look at the results for 8, 16, 32, 48, 72 and 96-qubit problems here:
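For intuition about what ‘the gap during the anneal’ means, here is a toy sketch — not AQUA’s quantum Monte Carlo method, just brute-force exact diagonalization, which only works for a handful of qubits. It interpolates between a transverse-field driver Hamiltonian and a randomly invented Ising problem Hamiltonian and tracks the gap between the two lowest eigenvalues along the annealing schedule.

```python
import numpy as np

# Toy annealing Hamiltonian:  H(s) = (1 - s) * H_driver + s * H_problem
# where H_driver = -sum_i X_i and H_problem is a random Ising Hamiltonian
# (couplings and biases invented here for illustration).
n = 4
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_at(op, i):
    """Tensor product placing single-qubit `op` at site i, identity elsewhere."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == i else I2)
    return out

rng = np.random.default_rng(1)
H_driver = -sum(op_at(X, i) for i in range(n))
H_problem = sum(rng.choice([-1.0, 1.0]) * op_at(Z, i) @ op_at(Z, j)
                for i in range(n) for j in range(i + 1, n))
H_problem = H_problem + sum(rng.choice([-1.0, 1.0]) * op_at(Z, i)
                            for i in range(n))

# Sweep the annealing parameter s from 0 to 1 and record the gap between
# the two lowest eigenvalues at each point.
gaps = []
for s in np.linspace(0.0, 1.0, 101):
    evals = np.linalg.eigvalsh((1 - s) * H_driver + s * H_problem)
    gaps.append(evals[1] - evals[0])
print(min(gaps))  # the minimum gap along this anneal
```

The resulting gap-versus-`s` curve is the kind of object the AQUA results pages plot, except that quantum Monte Carlo lets them reach system sizes far beyond what exact diagonalization can touch.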

AQUA@Home results so far

I love looking at these minimum-gap curves; their statistical properties are very interesting. For example, you often get two minima regions in the energy gap as the system is annealed. There is also some very fine structure in the curves. I wonder if any generalizations can be made by analysing the similarities and differences of many such curves as the number of qubits increases.

You can help contribute to this project by downloading BOINC and donating spare CPU cycles (the program runs as a background task). Instructions can be found under the Join AQUA@Home section on this page. Go, crowdsourced quantum science, go!!


The main thing I find fascinating is how fast the adiabatic timescale actually is when you use real units for superconducting AQCs. Usually people think of adiabatic as “slow”, but the median adiabatic times for these problems are many orders of magnitude shorter than anything you can achieve using COTS processors. I think the fascination with asymptotic scaling on worst-case problems obscures the fact that these things can be just ridiculously fast.

I guess some kind of metric for ‘average’ problems rather than ultra-hard ones could be useful. It comes back to this question of just what is the distribution and density of ultra-hard vs. less-hard instances in ‘constraint space’ for particular problems…

Maybe by running real world apps we can kind of judge the hardness-level to expect.

On a less practical note, I wonder if structure in the distributions of what typical instances are found in nature can tell us something profoundly deep about our universe 🙂

Well, these particular results are “typical case” (i.e. median values) on random problems generated from a “hard” instance class… like everything else in discrete optimization, saying anything meaningful in general seems to be very hard!