> One researcher did find D-Wave performed 3600 times faster than a classical device.
Well, that was done with the 128-qubit version, right? And now they have one that is 2^384 faster than that one, so we'll see what happens next.
If it was the 512-qubit one, then we'll need to wait for the 2048-qubit D-Wave Three that should come out in 2015 (they seem to double the qubits every year, but only release a new model every 2 years or so). That one should be 2^1536 faster than the current model.
Quantum computers don't get twice as fast for every qubit you add. You're confusing the state space required for a naive classical simulation with speed.
Yes, they should. Maybe not 2x with every qubit, but definitely exponentially - a larger "state space", as you call it. That's what makes them different from classical models.
Not generally. Only for certain problems. There are precious few problems that quantum computers are known to be asymptotically better at than classical computers, and one of them (Grover's search) is only a sqrt(N) speedup.
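To put rough, purely illustrative numbers on that quadratic speedup (not tied to any particular hardware or problem), something like:

```python
import math

# Rough query-count comparison for unstructured search over N items:
# a classical search needs on the order of N lookups, while Grover's
# algorithm needs on the order of sqrt(N) oracle calls -- a quadratic
# (not exponential) speedup.
for n in (10**3, 10**6, 10**9):
    print(f"N={n:>13,}  classical ~{n:,} queries  Grover ~{math.isqrt(n):,} oracle calls")
```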
Just speaking in terms of "quantum size" (and not computing power, which is not understood yet), it depends on the graph of possible entanglements among the qubits. If it were a complete graph (allowing arbitrary entanglement), then the size of the relevant Hilbert space would exactly double with each additional qubit (i.e. one would need twice as many complex numbers to describe any particular wave function in the space). But the D-Wave chips operate with a fixed topology that (I think) is far from fully connected (e.g. the 128-qubit chip used a "Chimera graph", which they describe in blog posts and publications). The growth would only be truly exponential with sufficient connectivity (e.g. a planar graph would mean sub-exponential growth).
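To make the fully-connected case concrete, here's a minimal sketch of the naive classical bookkeeping (my own illustration of the "doubling" point, nothing to do with how D-Wave hardware actually works):

```python
# Dense state-vector view of n fully-entangleable qubits: 2**n complex
# amplitudes (16 bytes each at double precision).  Each extra qubit
# doubles the storage a naive classical simulation needs -- that's the
# "state space" growth, not the machine itself getting 2x faster per qubit.
for n in (1, 2, 10, 20, 30):
    dim = 2 ** n
    mib = dim * 16 / 2**20
    print(f"{n:>2} qubits -> {dim:>13,} amplitudes (~{mib:,.1f} MiB)")
```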
I have the same feeling (I think): if you sacrifice "global" entanglement, then you're probably not doubling the "reachable" state/solution space with every new qubit.
The fact that D-Wave seems able to double its qubit count every year, while general-purpose quantum computer qubit counts seem to grow only linearly with time, is in my mind one reason to be skeptical of the whole approach.
It wasn't really shown to be faster; see http://www.scottaaronson.com/blog/?p=1400. To focus on the "3600 times" issue, I suggest searching for the strings " Ising" and "CPLEX". Don't miss the extremely thorough discussion in the comments, which includes comments from Cathy McGeoch (the author of the "3600 times" work), Peter Shor (as in Shor's factoring algorithm), Scott Aaronson, Greg Kuperberg, and many others.
That's a property of quantum computers (though, as people have already explained, it's not that simple), so why do you want classical computers to behave the same way?
Or, as a short answer: no, no relation at all.