bunchberry@lemmy.world | 1 point | 15 hours ago (last edited 14 hours ago)

Quantum computers are theoretically faster because of the non-separable nature of quantum systems.

Imagine a classical computer in which some logic gates flip bits randomly, and multi-bit gates flip them randomly but in a correlated way. Such machines exist and are called probabilistic computers: you can represent the statistics of all the bits with a single vector, and the logic gates with matrices called stochastic matrices.
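
Here is a minimal sketch of that picture in Python/NumPy (the gate probabilities are made up for illustration):

```python
import numpy as np

# A p-bit is described by a probability vector [P(0), P(1)].
p = np.array([1.0, 0.0])            # p-bit definitely in state 0

# A "noisy NOT" gate: flips the bit with probability 0.9 and leaves
# it alone with probability 0.1. Each column sums to 1, which is what
# makes the matrix stochastic (it maps probabilities to probabilities).
noisy_not = np.array([[0.1, 0.9],
                      [0.9, 0.1]])

p = noisy_not @ p                   # apply the gate
print(p)                            # [0.1 0.9]
print(p.sum())                      # 1.0 -- normalization is preserved
```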

This vector is necessarily non-separable, meaning you cannot get the right predictions by assigning a separate vector to each p-bit; you must assign a single vector to all the p-bits taken together. The reason is that the statistics can become correlated, i.e. the statistics of one p-bit can depend upon another, so describing the p-bits with separate vectors loses the information about the correlations between them.
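
A concrete illustration of that loss of information, for two perfectly correlated p-bits:

```python
import numpy as np

# Joint distribution over two p-bits, states ordered 00, 01, 10, 11.
# The bits are perfectly correlated: always 00 or 11, never 01 or 10.
joint = np.array([0.5, 0.0, 0.0, 0.5])

# Each p-bit's marginal on its own is just [P(0), P(1)] = [0.5, 0.5].
marg_a = np.array([joint[0] + joint[1], joint[2] + joint[3]])
marg_b = np.array([joint[0] + joint[2], joint[1] + joint[3]])

# Recombining the separate descriptions loses the correlation:
separate = np.kron(marg_a, marg_b)
print(separate)   # [0.25 0.25 0.25 0.25] -- not [0.5 0. 0. 0.5]
```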

The p-bit vector grows in complexity exponentially as you add more p-bits to the system (complexity = 2^N, where N is the number of p-bits), even though the actual physical state of all the p-bits only grows linearly (complexity = 2N: N p-bits, each with 2 possible values). The reason for this is purely epistemic. The physical system only grows in complexity linearly, but because we are ignorant of the actual state of the system (2N), we have to consider all possible configurations of the system (2^N) over an infinite number of experiments.

The exponential complexity arises from considering what physicists call an "ensemble" of individual systems. We are not describing the state of the physical system as it exists right now (which has a complexity of only 2N), precisely because we do not know the values of the p-bits. Instead, we describe a statistical distribution, which represents repeating the same experiment an infinite number of times and tallying the results. Over such an ensemble the system takes every possible path, so the ensemble has far more complexity (2^N).
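
You can see both counts directly (a sketch; N = 12 is chosen small just so it runs instantly):

```python
import numpy as np
rng = np.random.default_rng(0)

N = 12

# Separate description: one 2-entry vector per p-bit -> 2N numbers.
bits = [np.array([0.5, 0.5]) for _ in range(N)]
print(sum(b.size for b in bits))   # 24 = 2N

# Ensemble (joint) description: one entry per configuration -> 2^N numbers.
joint = bits[0]
for b in bits[1:]:
    joint = np.kron(joint, b)      # tensor the distributions together
print(joint.size)                  # 4096 = 2^N

# Any single run of the machine is still just N definite bit values:
print(rng.integers(0, 2, size=N))  # e.g. [0 1 1 0 ...]
```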

This is a classical computer with p-bits. What about a quantum computer with q-bits? It turns out that you can represent all of quantum mechanics simply by allowing probability theory to have negative numbers. If you introduce negative numbers, you get what are called quasi-probabilities, and this is enough to reproduce the logic of quantum mechanics.

You can imagine that a quantum computer consists of q-bits that can be either 0 or 1 and logic gates that randomly flip their states. But rather than describing a q-bit by its probability of being 0 or 1, you describe it with four numbers: the first two are associated with its probability of being 0 (summing them gives the real probability of 0), and the second two are associated with its probability of being 1 (summing them gives the real probability of 1).

As in normal probability theory, the numbers all have to add up to 1, i.e. 100%, but because two numbers are assigned to each state, some of the quasi-probabilities can be negative while the whole thing still adds up to 100%. (Note: we use two numbers instead of one per state because otherwise introducing negative numbers would break L1 normalization, which is a crucial feature of probability theory.)
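
One concrete construction that fits this description (my assumption about which representation is meant; the details could differ) is Wootters' discrete Wigner function for a single qubit: four quasi-probabilities that sum to 1, whose pairwise sums give the real measurement probabilities, and which go negative for some states:

```python
import numpy as np

# Pauli matrices.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Wootters phase-point operators for one qubit.
A = [0.5 * (I + Z + X + Y), 0.5 * (I + Z - X - Y),   # the two "0" numbers
     0.5 * (I - Z + X - Y), 0.5 * (I - Z - X + Y)]   # the two "1" numbers

# A state whose quasi-probabilities go negative (the "magic" T-state).
psi = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)], dtype=complex)
rho = np.outer(psi, psi.conj())

W = np.array([0.5 * np.trace(rho @ a).real for a in A])
print(W)            # approx [0.6036, 0.25, 0.25, -0.1036]
print(W.sum())      # 1.0 -- still normalized
print(W[0] + W[1])  # 0.8536 -- the real probability of measuring 0
print(W[2] + W[3])  # 0.1464 -- the real probability of measuring 1
```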

Indeed, with that simple modification, the rest of the theory just becomes normal probability theory, and you can do everything you would do in classical probability theory, such as building probability trees, to predict the behavior of the system.

However, this is where it gets interesting.

As we said before, the exponential complexity of classical probability is assumed to be merely epistemic, because we are considering an ensemble of systems even though the physical system in reality only has linear complexity. Yet it is possible to prove that the exponential complexity of a quasi-probabilistic system cannot be treated as epistemic: there is no classical system with linear complexity whose ensemble gives you quasi-probabilistic behavior.

As you add more q-bits to a quantum computer, its complexity grows exponentially in a way that is irreducible to linear complexity. For a classical computer to keep up, every additional q-bit requires growing the number of classical bits used in the simulation exponentially. At just 300 q-bits, the complexity is already 2^N = 2^300, which means the number of bits you would need to simulate it would exceed the number of atoms in the observable universe.
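
A quick sanity check on that figure (assuming the commonly quoted estimate of roughly 10^80 atoms in the observable universe):

```python
# 2^300 is roughly 2.04 * 10^90 -- a 91-digit number, comfortably
# beyond the ~10^80 atoms usually quoted for the observable universe.
print(len(str(2**300)))   # 91
print(2**300 > 10**80)    # True
```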

This is what I mean by quantum systems being inherently "non-separable." You cannot take an exponentially complex quantum system and imagine it as separable into an ensemble of many individual, linearly complex systems. Even if it turns out that quantum mechanics is not fundamental and there are deeper deterministic dynamics, those deeper dynamics must still have exponential complexity in the physical state of the system.

In practice, this increase in complexity does not mean you can always solve problems faster. The system may be more complex, but it takes clever algorithms to translate that complexity into actual problem solving, and currently only a handful of known algorithms get a significant speedup from quantum computers.

For reference: https://arxiv.org/abs/0711.4770