# Susskind QM, Ch 1: Systems and Experiments

This is the first post in the Theoretical Minimum study series.

What is this chapter all about? First, a discussion of why you should be prepared for this to be a difficult subject. Okay, whatever. Lots to learn.

States and measurements are two different things, and the relationship between them is subtle and nonintuitive. (p3)

I think the problem is that we can learn the rules, sure, but you have to remember that the game is a different game than the one you're used to playing, and you can only get solid intuition for special examples. And, by the way, this *is* the world you're used to, just at a way lower level.

That would be the case for video-game characters investigating their own binary code too, I guess.

### 1.2 Spins and Qubits

I think we're getting at the idea of quantum numbers, these numbers you can attach to a particle that give it an identity.

Q. Why spin? I think the origin here is that people thought, based on how the particle behaved in a magnetic field, that it must be physically spinning, and that its angular momentum, pointing either up or down, caused its deflection. That exact picture was wrong, but the name stuck.

He doesn't really go into this; the message is that you'll be fine if you just swallow it and move on... but if that's the attitude, how can you develop an intuition?

### 1.3 An Experiment

Measurement is a violent process. He introduces the "apparatus", which is not the most intuitive example. I think it would have been better to present the actual experimental results, what people were seeing.

Which is this, I think, the Stern-Gerlach experiment: http://electron6.phys.utk.edu/phys250/modules/module%203/spin.htm

He also mentions electrons, which do have spin 1/2. Should have used the photon, maybe.

Okay, we have an example now that you could build yourself just using the intuition that:

• There is some "state" that points in a direction. Up, if you prepare the quantum state that way by making a filter and only letting particles that register "up" pass through.
• If you measure again using the apparatus in that up direction, you'll see +1; you'll see -1 if you flip it.
• If you rotate the device to an angle $\theta$, you'll see +1 with probability $\cos^2(\frac{\theta}{2})$, or -1 with probability $\sin^2(\frac{\theta}{2}) = 1 - \cos^2(\frac{\theta}{2})$. Project the vector onto the axis of the apparatus and interpret the length squared as a probability. (You use $\frac{\theta}{2}$ instead of $\theta$ so that, as $\theta$ runs from 0 to $\pi$, the probability of seeing +1 slides smoothly from 1 to 0, and the probability of seeing -1 from 0 to 1.)
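A minimal sketch of this prepare-then-measure game, assuming only the $\cos^2(\frac{\theta}{2})$ rule above (function names are mine):

```python
import math
import random

rng = random.Random(0)

def measure(theta):
    """One measurement of a spin prepared 'up', with the apparatus tilted by theta.
    Returns +1 with probability cos^2(theta/2), otherwise -1."""
    return 1 if rng.random() < math.cos(theta / 2) ** 2 else -1

# theta = 0 always gives +1; theta = pi always gives -1; in between, chance.
samples = [measure(math.pi / 3) for _ in range(100_000)]
frac_up = sum(s == 1 for s in samples) / len(samples)
print(round(frac_up, 2))  # ≈ cos^2(pi/6) = 0.75
```

The only physics in there is the one line computing `p_up`; everything else is bookkeeping.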

As Susskind notes,

Determinism has broken down, but in a particular way. (p9).

I think we're starting to see quantum weirdness. What does it mean to say there's some "chance"? Do we have a random number generator baked into reality? Is there some hidden variable behind the scenes??? Don't worry, keep reading.

### Expected Value

I had some trouble here with the fact that the expected value is $\cos(\theta)$. It makes sense at the special angles: +1 when aligned, 0 at 90 degrees, -1 when flipped. But how does that square with the probability interpretation?

I think the key is that when, later, we cover how to break the actual quantum state down in terms of those angles, we find that rotating the apparatus by $\theta$ gives a quantum state of $\cos(\frac{\theta}{2})\left| u \right> + \sin(\frac{\theta}{2})\left| d \right>$.
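A minimal sketch of that decomposition, with the state as a pair of real amplitudes (no complex phases needed yet; function names are mine):

```python
import math

def state(theta):
    # Amplitudes on |u> and |d> after rotating the apparatus by theta.
    return (math.cos(theta / 2), math.sin(theta / 2))

def probabilities(amp_u, amp_d):
    # Born rule: probability of each outcome is the amplitude squared
    # (modulus squared, once amplitudes become complex).
    return amp_u ** 2, amp_d ** 2

au, ad = state(math.pi / 2)
p_up, p_down = probabilities(au, ad)
print(round(p_up, 3), round(p_down, 3))  # at 90 degrees: 0.5 each, summing to 1
```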

So the expected value is $(+1) \cdot \cos^2 \frac{\theta}{2} + (-1) \cdot \sin^2 \frac{\theta}{2}$. Then, use the half-angle identities:

$$\sin \frac{\theta}{2} = \sqrt{\frac{1 - \cos \theta}{2}}$$

and

$$\cos \frac{\theta}{2} = \sqrt{\frac{1 + \cos \theta}{2}}$$

To get:

$$\cos^2 \frac{\theta}{2} - \sin^2 \frac{\theta}{2} = \frac{(1 + \cos \theta) - (1 - \cos \theta)}{2} = \cos \theta$$

As claimed.

That happens to also equal the dot product between a unit vector along the apparatus's current direction and a unit vector along the direction it pointed when it measured +1:

$$\left< \sigma \right> = \hat{n} \cdot \hat{m}$$
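Both equalities can be checked numerically; a small sketch, with the preparation direction $\hat{m}$ put on the z-axis and the apparatus direction $\hat{n}$ tilted by $\theta$ in the x-z plane (names are mine):

```python
import math

def expectation(theta):
    # <sigma> = (+1) * cos^2(theta/2) + (-1) * sin^2(theta/2)
    return math.cos(theta / 2) ** 2 - math.sin(theta / 2) ** 2

def dot(n, m):
    return sum(a * b for a, b in zip(n, m))

theta = 1.234
n_hat = (math.sin(theta), 0.0, math.cos(theta))  # apparatus direction
m_hat = (0.0, 0.0, 1.0)                          # preparation direction (z-axis)

print(math.isclose(expectation(theta), math.cos(theta)))    # True
print(math.isclose(expectation(theta), dot(n_hat, m_hat)))  # True
```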

### 1.4 Experiments are Never Gentle

Something strange is happening behind the scenes when you measure.

Classically, an ideal measuring apparatus has a vanishingly small effect on the system it is measuring. (p12)

I'm still suspicious of this. There has to be some level at which you can't get any more precision. The Planck length? And if that's true... are measured quantities really continuous?

A spiel follows about how you're affecting the system... but no one has a solid story about how this works, so it reads like a just-so story.

### 1.5 and 1.6

He lays out boolean logic... I think to introduce an algebra where it's possible to test for two things and make statements about multiple propositions at once. AND and OR gates.

Funny, this answers my question about why you have to make a quantum computing device reversible. Or, at least, why you can't build one that's NOT reversible: you can't rely on AND or OR gates, since, as Susskind demonstrates, these operations won't work the way they work in a classical system.
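A concrete way to see the irreversibility: AND sends distinct inputs to the same output, so no gate can undo it, while something like CNOT is a bijection. A small check (helper names are mine; CNOT is the standard reversible two-bit gate, not something built in this chapter):

```python
from itertools import product

def is_reversible(gate, inputs):
    """A gate is reversible iff distinct inputs always give distinct outputs."""
    outputs = [gate(*args) for args in inputs]
    return len(set(outputs)) == len(outputs)

bits = list(product([0, 1], repeat=2))  # all four two-bit inputs

# AND collapses three inputs onto 0, so it can't be inverted.
print(is_reversible(lambda a, b: a & b, bits))       # False
# CNOT (a, b) -> (a, a XOR b) permutes the four inputs.
print(is_reversible(lambda a, b: (a, a ^ b), bits))  # True
```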

Measurement, as we see later, mathematically looks like projection onto some operator's basis. This strictly loses information, and projections with respect to two different operators don't necessarily commute if you apply them in a row. The algebra is non-commutative.
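Non-commutativity is easy to see already with the Pauli matrices we'll meet later; a dependency-free sketch with a hand-rolled 2x2 multiply:

```python
def matmul(a, b):
    # 2x2 matrix product, no libraries needed.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

sigma_x = [[0, 1], [1, 0]]
sigma_z = [[1, 0], [0, -1]]

xz = matmul(sigma_x, sigma_z)
zx = matmul(sigma_z, sigma_x)
print(xz)         # [[0, -1], [1, 0]]
print(zx)         # [[0, 1], [-1, 0]]
print(xz == zx)   # False: in fact sigma_x sigma_z = -sigma_z sigma_x
```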

There is one claim I found a little strange.

The particle has position x and the particle has momentum p.

vs:

The particle has position x or the particle has momentum p.
However, in quantum physics, the first of these propositions is completely meaningless (not even wrong), and the second one means something quite different from what you might think.

I think I get the first claim. But what does the second claim mean? I think it may be a claim about probabilities.

### 1.8 Mathematical Interlude: Complex Numbers

This is really key, so key that I wrote this other post about these things. Link.

The most critical equation, IMO, is

$$z = re^{i\theta} = r(\cos \theta + i \sin \theta)$$

There is a special class of complex numbers that I'll call "phase-factors." A phase-factor is simply a complex number whose r-component is 1.

Think of the pointer rotating around the unit circle. If you stick a $t$ up in the exponent — $e^{it}$ — then you can imagine, as time proceeds, the point represented by that quantity rotates around in its own 2d complex plane.
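You can watch that pointer with the stdlib `cmath`; a small sketch:

```python
import cmath
import math

# e^{it} sweeps around the unit circle as t grows; its modulus never leaves 1.
for t in [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]:
    z = cmath.exp(1j * t)
    assert math.isclose(abs(z), 1.0)

print(cmath.exp(1j * math.pi))  # Euler: e^{i pi} ≈ -1 (up to rounding)
```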

### 1.9 Mathematical Interlude: Vector Spaces

I did a deep dive on this when I went through the Applied Quantum Computing book, and found the more turbo mathematical treatment helpful.

If you know anything about vectors, you're probably familiar with using vectors to represent geometric points. A 2d vector corresponds to a point in a flat plane:

$$\begin{bmatrix}x \\ y\end{bmatrix}$$

Adding entries gives you the ability to reference more spatial dimensions. This 3-vector references a point $(x, y, z)$ in 3d space:

$$\begin{bmatrix}x \\ y \\ z\end{bmatrix}$$

The important thing to note here is that this notation can also be used as an accounting device for a much more general idea. Think of the entries in the column vector as coefficients on some set of "basis elements" of a "vector space". Technically, a vector space is an abelian group $\mathbb{G}$ paired with a field $\mathbb{F}$: scalars come from $\mathbb{F}$ and scale elements of $\mathbb{G}$, in a way that's compatible with addition on both sides, so that linear combinations make sense... okay, more to dig into later.

BUT! Why make things so abstract? It's important because it lets us bring the machinery of linear algebra to bear on a number of very different objects:

• ordinary 3-vectors,
• vectors in a "Hilbert space" of potentially very many dimensions
• vectors where the basis elements are actually matrices - the Pauli matrices that we'll meet later. If they're "orthogonal" in some sense, and you can scale and combine them linearly, why not?
• Polynomials form a vector space too (add them termwise, scale by constants), so the machinery works there as well.
• He gives an example of "continuous complex-valued functions of a variable $x$" on page 27. You can go through the axioms on page 26 and convince yourself that these form a vector space ("adding" two functions means making a new function that adds the results of the two originals, like $h(x) = f(x) + g(x)$, and so on), but I can't see this sticking out at someone as a big "OH!" moment if they don't already get the point here.
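The function example above can be made concrete: pointwise addition and scaling are exactly the vector-space operations. A minimal sketch (helper names are mine):

```python
def vadd(f, g):
    # "Adding" two functions: a new function returning the sum of their results.
    return lambda x: f(x) + g(x)

def vscale(c, f):
    # Scaling a function by a number from the field.
    return lambda x: c * f(x)

f = lambda x: x ** 2
g = lambda x: 3 * x

h = vadd(f, vscale(2.0, g))  # the linear combination h(x) = x^2 + 6x
print(h(1.0))  # 7.0
```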

The goal of the abstraction here is to be able to rely on a huge number of other results, tests and implementations imported from linear algebra. That doesn't mean that every result about Hilbert spaces has physical meaning, I don't think.

Okay, phew. Next, Dirac notation, which I'm coming to love.

What's important to know?

• Inner Products