# A first look into quantum computing

Recently, the field of quantum computing started to attract the attention of some of my academic colleagues at EPITA, and I enthusiastically took part in a working group tasked with providing students with a first technical look at the field as well as a global view of the current state of the art.

> “Anyone who is not shocked by quantum theory has not understood it.”
>
> Niels Bohr

For a plain, simple, boring classical computer scientist like myself, the first logical task was to understand the basic underlying concepts and mathematical models. Fortunately, I had in my possession the excellent introductory book “Quantum Computing for Computer Scientists” by Noson S. Yanofsky and Mirco A. Mannucci, which Kevin D. Kissel recommended to me at his “Quantum Computing at Google and in the Cloud” talk during FOSDEM 2019 in Brussels.

Of course, many public resources also exist, such as Prof. Umesh Vazirani’s Berkeley CS191 lectures. Anastasios Kyrillidis’s notes on that book were extremely helpful in pinpointing the crucial parts to highlight in order to provide a clear and concise mathematical basis for the subject. You will find the first section of our review paper quasi-identical to those notes, simply because I couldn’t imagine a clearer and more concise approach to the subject.

Of course, at first I harbored the naive hope of being less confused about the nature of reality after starting to study quantum mechanics, and I can’t articulate how wrong I was.

Even setting aside the many-worlds interpretation of quantum mechanics, which might seem like a desperate attempt (and a very efficient way to start a fight between two theoretical physicists) to bring some sense to the superposition of possible states inside a quantum system’s wave function, and reducing the problem to a simple probabilistic technicality, it was very difficult to accept without damaging my personal intuition of reality. Consider the state of a single qubit:

$$\left | \psi \right > = c_{0} \left | 0 \right > + c_{1} \left | 1 \right >$$

with $$\left | c_{0} \right | ^{2} + \left | c_{1} \right | ^{2} = 1$$ and $$(c_{0}, c_{1}) \in \mathbb{C}^{2}$$. The squared moduli $$\left | c_{0} \right | ^{2}$$ and $$\left | c_{1} \right | ^{2}$$ represent the probabilities of each outcome state ($$\left | 0 \right >$$ and $$\left | 1 \right >$$) when the wave function collapses upon measurement.
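This can be sketched numerically. Below is a minimal, purely illustrative Python simulation (using NumPy; all names are mine, not from any quantum library) of a single qubit state and its measurement: the amplitudes live in a two-component complex vector, their squared moduli give the outcome probabilities, and “measurement” is just sampling from that distribution.

```python
import numpy as np

# A single qubit |psi> = c0|0> + c1|1>, stored as two complex amplitudes.
c0, c1 = 1 / np.sqrt(2), 1j / np.sqrt(2)   # equal superposition
psi = np.array([c0, c1], dtype=complex)

# Normalization: |c0|^2 + |c1|^2 must equal 1.
probs = np.abs(psi) ** 2
assert np.isclose(probs.sum(), 1.0)

# "Measurement": sample an outcome (0 or 1) according to the probabilities.
rng = np.random.default_rng()
outcome = rng.choice([0, 1], p=probs)
print(outcome)  # collapses to either 0 or 1, each with probability 1/2 here
```

This is of course a classical simulation of the mathematics, not of the physics: the point is only that the amplitudes are complex numbers while the measurement result is a plain classical bit.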

But Hugh Everett’s thesis, the many-worlds interpretation of quantum mechanics, states that instead of collapsing to only one outcome state, all the possible states, along with copies of the observer performing the measurement, continue to exist in branching “parallel universes”.

Thus, by exploiting this superposition effect and storing information in quantum bits (which can, for example, be implemented with polarized photons), a register of $$n$$ qubits is described by $$2^{n}$$ complex amplitudes, even though it collapses to just $$n$$ classical bits upon measurement.
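The same toy simulation extends to a register. A hedged sketch (again plain NumPy, with hypothetical variable names of my own choosing): $$n$$ qubits need a vector of $$2^{n}$$ amplitudes to describe, yet measuring them yields only $$n$$ classical bits.

```python
import numpy as np

n = 3
dim = 2 ** n   # 8 amplitudes for 3 qubits

# Uniform superposition over all 2**n basis states |000>, |001>, ..., |111>.
state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)

probs = np.abs(state) ** 2
assert np.isclose(probs.sum(), 1.0)

# Measurement collapses the register to a single basis state: n classical bits.
rng = np.random.default_rng()
index = rng.choice(dim, p=probs)
bits = format(index, f"0{n}b")
print(bits)  # e.g. "101" -- only n bits come out, despite 2**n amplitudes inside
```

Note the asymmetry this makes concrete: the state vector doubles in size with each added qubit, but the measured output stays linear in $$n$$.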

One fun, though neither rigorous nor universally popular, way to look at it is that through the many-worlds interpretation, we can consider that by exploiting quantum systems in superposition to perform computation, we are actually “performing distributed computation across parallel universes”. Which is, whatever theoretical physicists think about it, a pretty cool image for first introducing quantum computing to CS students.

Of course, this article is neither a quantum computing lecture nor an abstract of the state-of-the-art paper I mentioned above, but simply a reflection on how quantum mechanics piqued my interest. Attached to this article is our tentative state-of-the-art review of quantum computing, which is mostly academic press clipping on the subject, but with an effort of clarification to fit the introductory mathematical concepts presented in its first section.