Lecture 7: Metrology, shot noise and Heisenberg limit, Part 2


Description: In this lecture, the professor first talked about "Ion trapped in a solid", then discussed metrology, shot noise and Heisenberg limit.

Instructor: Wolfgang Ketterle

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: Good afternoon. So it has been a little bit more than a week since we met the last time. Before I continue our discussion of metrology and interferometry, I just want to share something I saw on my visit to the Netherlands.

When I visited the University of Delft and the Kavli Institute, they had just accomplished the entanglement of two NV centers. And we had just talked in class about the entanglement of two ions.

So I'm sort of excited to show you that people have just taken the next step. And what often the next step means is that in atomic physics, we use pristine systems-- ions, neutral atoms in [INAUDIBLE] vacuum chamber. And we create new forms of entanglement. Or with quantum gases, new forms of quantum matter.

But ultimately, we hope that those concepts, those methods, and this knowledge translates to some room temperature materials or solid state materials which can be handled more easily, and are therefore much closer to applications.

So NV centers are kind of nature's natural ion trap, or nature's natural neutral atom trap. Let's not discuss whether this is neutral or ionized. It's a nitrogen atom next to a vacancy in diamond. And it has a spectrum which looks like an atom.

So you can say once you have such a defect in diamond, you have an atom, a single atom in an atom trap or an ion in an ion trap. And you don't have to create the vacuum. It's there. Every time you look at it, it's there.

And you can excite it with the laser. It has a spectrum similar to atoms. It has spin structure. So you have these vacancies in diamond. So these are little quantum dots.

But now you have two problems. One is you want to collect the light emitted by them. And what works best is to mill a lens right into the material. So this is the diamond material. A lens is milled. And that already gives some collimation of the light emitted by it.

But the big problem until recently has been, when you create those-- some people call them artificial atoms-- every atom is a little bit different, because it experiences a slightly different environment. A crystal has strain, so if you have seemingly two identical defects in a diamond crystal, the two defects will have resonance lines which differ by a few gigahertz.

You would say, well, maybe it's just a part in 10 to the 5, but it means the photons are distinguishable. So if you want to do entanglement by having two such artificial atoms emitting a photon onto a beam splitter, and then by performing a measurement we project the atoms into a Bell state-- I hope you all remember what we discussed for the trapped ions-- then you have to make sure that fundamentally, those photons cannot be distinguished.

And the trick here is that they put on some electrodes. And by adding an electric field, they can change the relative frequency. And therefore, within the frequency uncertainty given by Heisenberg's Uncertainty Relation, they can make the two photons identical.

So here is, let's say, the experiment. Well, you have two NV centers, defects in diamond. You can coherently manipulate the spin with microwaves. You need lasers for initialization and readout. But then, closer to what we want to discuss, you need laser beams which excite the NV center. And then the two NV centers emit photons.

And what you see now is exactly what we discussed schematically and in the context of the ions, that the two NV centers now emit photons. And by using polarization tricks and the beam splitter, you do a measurement after the beam splitter. And based on the outcome of the measurement, you have successfully projected the two NV centers into a Bell state.

OK, good. All right. Let me just summarize what our current discussion is about.

This section is called Quantum Metrology. It is a section where we want to apply the concepts we have learned to precision measurements. It's actually a chapter which I find nice. We're not really introducing new concepts. We're using concepts previously introduced. And now you see how powerful those concepts are, what they can be used for.

So we want to discuss the precision we can obtain in quantum measurement. We apply it here to an atom in the interferometer, to an optical interferometer. We could also discuss the precision of spectroscopic measurements.

A lot of precision measurements have many things in common. So what we discuss here is a generic example for a precision measurement: we send light through an interferometer. Here is a phase shift. And the question is, how accurately can we measure the phase shift when we use n photons as a resource?

And of course, you are all familiar with the fundamental limit of standard measurements, which is shot noise. And sort of as a warm up in our last class, a week ago Wednesday, I showed you that when we use coherent states of light at the input of the interferometer, we obtain the shot noise.

Well, it may not be surprising, because coherent light is as close as possible to classical light. But then we discussed single mode, single photon input. And by using the formalism of the Mach-Zehnder interferometer, we found that the phase uncertainty is again 1 over square root n.

So then the question is, how can we go to the Heisenberg limit, where we have an uncertainty in the phase of 1 over n? And just as a reminder, I find it very helpful. I told you that you can always envision, if you have n photons and you put the n photons together by multiplying the frequency by n, then you have one photon which oscillates with n times the frequency.

And it's clear if you do a measurement at n times the frequency, your precision in phase is n times better. So what we have to do is we have to sort of put the n photons together. And then we can get a precision of the measurement, which is not square root n, but n times better.

So this is what we want to continue today. This is the outline I gave you in the last class. If you have this optical interferometer, this Mach-Zehnder interferometer, and if you use coherent states or single photons as the input state, we obtain the shot noise.

Now we have to change something. And we can change the input state, we can change the beam splitter, or we can change the readout. So we have to change something where we entangle the n photons, and make sure in some sense they act as one giant photon, maybe with n times the frequency, but definitely with n times the sensitivity to phase shifts.

Any questions? Good. The first example, which we pretty much covered in the last class, was an entangled state interferometer. So instead of having the Mach-Zehnder interferometer as we had before, we have sort of a Bell state creation device. Then we provide the phase shift. And then we have a Bell analysis device.

And I want to use here the formalism and the symbols we had introduced earlier. So just as a reminder, what we need is I need two gates. One is the Hadamard gate, which in matrix representation has this matrix form.

And that means if you have a qubit which is either up or down and you apply the matrix to it, you put it in a superposition state of up and down. It's a single qubit rotation. You can say, on the Bloch sphere, it's a 90 degree rotation.

The second gate we need is the controlled NOT. And the controlled NOT we discussed previously can be implemented by having an interferometer and using a non-linear Kerr medium coupling another photon to the interferometer in such a way that if you have a photon in mode C, it creates a phase shift with the interferometer. If there is no photon in mode C, it does not create an additional phase shift. And as we have discussed, this can implement the controlled NOT.

So these are the ingredients. And at the end of the last class, I just showed you what those quantum gates can do for us. If you have two qubits at the input-- and I will assume they are both in logical 0-- the Hadamard gate makes the coherent superposition. And then, we have a controlled NOT where this is the target bit.

Well, if the control bit is 0, the target bit stays 0. So we get 0, 0. If the control bit is 1, the target bit is flipped to 1. So we get 1, 1.

So the result is that we have now created a state 0, 0 plus 1, 1. And if we apply a phase shift to all the photons coming out on the right hand side, we get a phase shift which is 2 phi. So we already get the idea.

If we take advantage of a state 1, 1, it has twice the phase sensitivity of a single photon. And we may therefore get the full benefit of the factor of 2, and not just square root 2. And this is what this discussion is about.
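This entangler stage-- a Hadamard on the first qubit followed by a CNOT-- can be checked with a small numerical sketch. This is just an illustration in plain NumPy; the basis ordering |00>, |01>, |10>, |11> with the first qubit as control is my assumption here.

```python
import numpy as np

# Hadamard gate and CNOT in the two-qubit computational basis
# |00>, |01>, |10>, |11>, with the first qubit as the control.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi0 = np.array([1.0, 0, 0, 0])            # both qubits in logical 0
psi = CNOT @ np.kron(H, np.eye(2)) @ psi0  # Hadamard on qubit 1, then CNOT

# Result: (|00> + |11>)/sqrt(2), the entangled state from the lecture
print(np.round(psi, 3))

# A per-photon phase shift phi multiplies |11> by exp(2i*phi),
# twice the phase picked up by a single photon.
phi = 0.3
phase = np.diag([1, np.exp(1j * phi), np.exp(1j * phi), np.exp(2j * phi)])
print(np.angle((phase @ psi)[3]))          # 2*phi = 0.6
```

The doubled phase on the 1, 1 component is exactly the factor of 2 in sensitivity discussed above.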

Questions? OK, so that's where we ended. We can now use another controlled NOT and bring in a third qubit. So what do we create here? Just to make a reference, this is where we start today.

So we create here the state in which either all bits are 0 or all bits are 1, and then the phase shift gives us three times the phase shift phi. And therefore, by bringing in more and more qubits-- I've shown you n equals 1, n equals 2, n equals 3, and now you can go to n-- we obtain states which have n times the phase sensitivity.

Let me just mention in passing that for n equals 3, the superposition of 0, 0, 0 and 1, 1, 1 goes by the name GHZ-- not gigahertz. Greenberger. The third one is Zeilinger. The second one is--

AUDIENCE: Horne.

PROFESSOR: Horne. Thank you. Greenberger-Horne-Zeilinger state. And those states play an important role in tests of Bell's inequality. Or more generally, states which are macroscopically distinct are also regarded as cat states or Schrödinger cat states.

If we apply a phase shift and go now through the entangler in reverse-- just a reverse sequence of CNOT gates and Hadamard gates-- then we have the n minus 1 qubits, all the qubits except for the first, reset to 0. But the first qubit is now in a superposition state where we have the phase factor to the power n.

And now, we can make a measurement. And P is now the probability to find a single photon in the first qubit. So in other words, our measurement is exactly the same as we had before where we had the normal Mach-Zehnder interferometer.

We put in one photon at a time. And we determine the probability that the photon appears at one of the outputs of the interferometer. But the only difference is that we have now a factor of n in the exponent for the phase shift. And I want to show you what is caused by this factor of n.

Let me first remind you how we analyzed the sensitivity of an interferometer for single photon input before. So for a moment, set n equals 1. Then you get what we discussed two weeks ago. The probability to find the photon at one output of the interferometer shows sinusoidal, cosinusoidal fringes with cosine phi.

If you do measurements with a probability P, that's a binomial distribution. Then the standard deviation, the square root of the variance of the binomial distribution, is the square root of P times 1 minus P. And by inserting P here, we get sine phi over 2.

The derivative of P, which is our signal, with respect to the phase is given by that. And then, just using error propagation, the uncertainty in the phase is the uncertainty in P divided by dP d phi. So then we repeat the whole experiment m times. For the binomial distribution, if we do m repetitions of our measurement with probability P, we get an m under the square root. And then we get 1 over square root m. And this was what I showed you two weeks ago, the standard shot noise limit.
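This error-propagation chain fits in a few lines of code. This is a sketch with idealized, full-contrast fringes P(phi) = (1 + cos phi)/2, which is my reading of the board work here.

```python
import numpy as np

# Single-photon Mach-Zehnder: P(phi) = (1 + cos phi)/2, measured m times.
# Error propagation: delta_phi = delta_P / |dP/dphi|, with binomial delta_P.
def delta_phi(phi, m):
    P = (1 + np.cos(phi)) / 2
    dP = np.sqrt(P * (1 - P) / m)       # binomial uncertainty of P
    dPdphi = abs(np.sin(phi)) / 2       # |dP/dphi| = |sin(phi)|/2
    return dP / dPdphi

# The phi dependence cancels, and the uncertainty is 1/sqrt(m):
# the shot noise limit.
for m in (100, 10000):
    print(m, delta_phi(1.0, m))         # 0.1, 0.01
```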

But now we have this additional factor of n here, which means that our probability has a cosine which goes with n phi. When we take the derivative dP d phi, we have to use the chain rule, which gives a factor of n. And therefore, we obtain now that the phase sensitivity goes as 1 over n. And therefore, we reach the Heisenberg limit.
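With the entangled state, the only change in the sketch is the factor of n inside the cosine; again, the idealized fringes P(phi) = (1 + cos(n*phi))/2 are an assumption.

```python
import numpy as np

# GHZ-type signal: P(phi) = (1 + cos(n*phi))/2. The chain rule puts a
# factor n into dP/dphi, so a single run gives delta_phi = 1/n.
def delta_phi(phi, n):
    P = (1 + np.cos(n * phi)) / 2
    dP = np.sqrt(P * (1 - P))           # single-shot binomial uncertainty
    return dP / abs(n * np.sin(n * phi) / 2)

for n in (1, 2, 10):
    print(n, delta_phi(0.4, n))         # 1.0, 0.5, 0.1 -- Heisenberg scaling
```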

So if you have now your n photon entangler-- the entanglement operation with these n qubits, where the beam splitter is replaced by the entangler-- we have now a sensitivity which goes as 1 over n. And then of course, if you want, you can repeat the experiment m times. And whenever you repeat an experiment m times, you gain in addition a factor of square root m.

Just give me one second. We have this. OK, so this is an example where we had n qubits entangled, like here. And the sensitivity of this interferometer scales now as the Heisenberg limit with 1 over n.

OK, I want to give you, because they're all nice and they're all special, three more examples of how you can reach the Heisenberg limit. The next one goes by the name super beam splitter method. It's actually a very fancy beam splitter where you have two ports of your beam splitter. One has 0 and one has n photons.

And behind the beam splitter, you have two options. Either all the n photons are in one state, or the n photons are in the other state. You never have any other mixture.

Of course, you know your standard half silvered mirror will not do that for you, because it will split the n photon state with some Poissonian statistics or whatnot into the two arms and such. And you can calculate exactly what a normal beam splitter does to that. It's very, very different. This here is a very, very special beam splitter.

Just to demystify this beam splitter a little bit, I want to show you that, at least conceptually, there is an easy implementation with atoms. I really like, in this section, to grab an example from atoms and an example from photons, because it really brings out that the language we develop applies equally to atoms and photons.

So if I take now an easy example, the example of a Bose-Einstein condensate with n atoms, let's assume we have a double well. But now we have attractive interactions. This is not your standard BEC. Your standard Bose-Einstein condensate has repulsive interactions. With attractive interactions, if you go beyond a certain size, a certain atom number, the condensate will collapse.

So let's assume we have a Bose-Einstein condensate with n atoms. There are some attractive interactions, but we stay within the stability diagram so that those atoms do not collapse. So now, if the interactions are attractive, the atoms all want to be together, because then they lower each other's energy. So if you have something with attractive interaction, you want to have n atoms together. But if your double well potential is absolutely symmetric, there is no symmetry breaking, and you have an equal amplitude for all the atoms to be in the other well.

So therefore, under the very idealized situation I described here, you will actually create a superposition state of n atoms in one well with n atoms in the other well. And this is exactly what we tried to accomplish with this beam splitter.

OK, so if we have this special beam splitter, which creates that state, then when we add a phase shift to one arm of the interferometer, we obtain-- which looks very promising-- a phase shift phi multiplied by n. And to read it out, we pass it through the other super beam splitter, and we create again a superposition of n0 and 0n.

But now, because of the phase shift, we get cosine and sine factors which involve n times the phase shift. For obvious reasons, those states also go by the name NOON states. That's the name which everybody uses for those states, because if you just read N00N, it gives noon.

So this is the famous NOON state. So it already smells right, because it has a phase shift of n phi. Let me just convince you that this is indeed the case.

So the probability of reading out 0 photons in one arm of the interferometer scales now with the cosine square. If you do one single measurement, the binomial distribution is the same as before, but with a factor of n. The derivative has those factors of n.

And that means that the uncertainty in the phase measurement is now 1 over n. If you want a reference, I put it down here. As far as I know, the experiment has been done with three photons, but not with larger photon numbers. Questions? Collin.

AUDIENCE: This picture with the double well, isn't there going to be-- all right, some people would say you spontaneously break the symmetry and [INAUDIBLE] either ends up in one side or the other. Like isn't this [INAUDIBLE]?

PROFESSOR: Well, Bose-Einstein condensation with attractive interactions was observed in 1995. And now, 18 years later, nobody has done this seemingly simple experiment. And what happens is really that you have to be very, very careful against any experimental imperfection. If the two wells of the double well potential are not exactly identical, the bosons always want to go to the lowest quantum state. Well, that's their job, so to speak. That's their job description.

And if you have a minuscule difference between the depths of the two wells, you will not get a superposition state. You will simply populate one state. And how to make it experimentally [INAUDIBLE], this is a really big challenge. [? Tino ?].

AUDIENCE: I have a question. Let's say we're somehow able to make the double well potential perfect. But if we didn't have attractive interactions, then wouldn't we just get a big product state of each atom being in either well?

PROFESSOR: OK, for a non-interacting system, the ground state of a double well potential is just the symmetric state, 1 plus 2 over square root 2. And for a non-interacting BEC, you figure out what is the ground state for one particle and then take it to the power n. This is the non-interacting BEC.

STUDENT: So then entanglement is because of the interaction, right?

PROFESSOR: If you have strong repulsive interactions, you have something which should remind you of the Mott insulator. You have n atoms. And n over 2 atoms go to one well. n over 2 go to the other well.

Because any form of number fluctuations would be costly. It will cost you additional repulsive interaction. So therefore, the condensate wants to break up into two equal parts.

So that's actually a way you could create another non-classical state, the dual Fock state of n over 2, n over 2 particles. And for attractive interactions, well, you would create the NOON state.

OK, so this was now a state n0 and 0n. There is another state, which you have encountered in your homework. And this is a superposition not of n0 and 0n.

It's a superposition of n minus 1, n and n, n minus 1. This state goes by the name of the person who, I think, invented it-- the Yurke state. And you showed in your homework that with that, you also reach Heisenberg-limited interferometry, where the phase uncertainty scales as 1 over n.

What I want to add here is how one can create such a Yurke state. And again, I want to use the example of an atomic Bose-Einstein condensate. And here's the reference where this was very nicely discussed.

So let's assume we can create two Bose-Einstein condensates. And they have exactly n atoms each. And I would actually refer to [? Tino's ?] question: you have 2n atoms in a trap, and then you deform the harmonic oscillator potential into a double well potential. Then for strong repulsive interactions, the condensate will symmetrically split into two Fock states, each of which has n atoms.

So now how can we create the Yurke state from that? Well, we simply leak out atoms. We leak atoms out of the trap. My group demonstrated an RF output coupler, where you can, in a very controlled way, rotate part of the cloud into a non-trapped state. And then atoms slowly leak out.

Well, you can measure, of course-- you can take an atom detector and measure that an atom has been out-coupled, that an atom has leaked out of the trap. If you don't like RF rotation, you can also think that there is a tunneling barrier, and atoms slowly leak out by whatever mechanism. And the moment you detect an atom, you project the state in the trap to n minus 1 atoms, because you've measured that one atom has come out.

But now you use a beam splitter. And the beam splitter could simply be a focused laser beam. And the atoms have a 50% probability of being reflected or of tunneling through.

So therefore, if you have now a detector which measures the atoms on one side and the atoms on the other side, then you don't know anymore, when the detector makes a click, from which atom trap the atom came. Or more formally, a beam splitter transforms the two input modes a, b into a plus b and a minus b, normalized by square root 2.

So therefore, if this detector clicks, then you project the remaining atoms into the symmetric state: here you detect one atom in the mode a plus b. And that means that the remaining atoms have been projected into the Yurke state. If the lower atom detector would click, well, you get a minus sign here.
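This projection argument can be verified numerically. The sketch below is illustrative (plain NumPy, truncated Fock space, small atom number chosen for convenience): detecting one atom in the symmetric mode means applying the annihilation operator (a + b)/sqrt(2) to the dual Fock state |n, n> and renormalizing.

```python
import numpy as np
from math import sqrt

n, d = 3, 5                                        # atoms per trap, Fock cutoff
a = np.diag([sqrt(k) for k in range(1, d)], k=1)   # annihilation operator
A = np.kron(a, np.eye(d))                          # acts on trap 1
B = np.kron(np.eye(d), a)                          # acts on trap 2

def fock(i, j):
    # product state |i> in trap 1, |j> in trap 2
    return np.kron(np.eye(d)[i], np.eye(d)[j])

# One atom detected in the symmetric output mode (a + b)/sqrt(2):
psi = (A + B) @ fock(n, n) / sqrt(2)
psi /= np.linalg.norm(psi)

# The remaining atoms are in the Yurke state (|n-1, n> + |n, n-1>)/sqrt(2)
yurke = (fock(n - 1, n) + fock(n, n - 1)) / sqrt(2)
print(np.allclose(psi, yurke))                     # True
```

Detecting the atom in the antisymmetric mode (a - b)/sqrt(2) would give the same state with a minus sign, as stated above.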

So that's one way, at least in a conceptually simple situation, you can prepare this highly non-classical state: by starting with a dual Fock state of Bose-Einstein condensates, and then using-- and this is an ongoing theme here-- a measurement, where the post-measurement state is the non-classical state you wanted to prepare. Question? Yes.

AUDIENCE: How can you ensure that only one atom leaves the [INAUDIBLE]?

PROFESSOR: The idea is that we leak atoms out very, very slowly. And then we have a detector, which we assume has 100% quantum efficiency. So therefore, we simply wait until the detector tells us that one atom has leaked out. And in an idealized experiment, we know the atoms either have been measured by the detector, or they are still in the trap.

AUDIENCE: And also, is there a property that holds the [INAUDIBLE] you detect atoms [INAUDIBLE]?

PROFESSOR: In principle yes, but the idea here is if you have a very slow leakage process, the probability that you detect two atoms at the same time is really zero. You leak them continuously and slowly, but then quantum mechanically, that means for most of the time, you measure nothing.

That means the leakage hasn't taken place. The quantum mechanical system has not developed yet. But the moment you perform a measurement, you project-- it's really the same if you say you have n radioactive atoms. You have a detector.

And when the detector makes a click, you know you have n minus 1 radioactive atoms left. It's just applied here to two atom traps. Other questions?

AUDIENCE: [INAUDIBLE]?

PROFESSOR: No. And maybe I'll tell you now why not. We have discussed the NOON state. And we have discussed here a highly non-classical superposition state. Let's just go back to the NOON state. We have n atoms here and zero here, or the reverse.

But now, assume a single atom is lost from your trap by some background gas collisions. And you have surrounded the trap with a detector.

So if you have the NOON state, the symmetric superposition state-- all atoms here and all atoms there-- but by your background process, by an inelastic collision, you lose one atom. And you detect it. You could set up your detectors such that you figure out from which trap the particle was lost.

So therefore, a single particle lost-- if you localize from which trap the particle is lost-- would immediately project the NOON state into a state where you know you have n minus 1 atoms in one well, and 0 in the other well. So I've already told you, with the attenuator: you can never assume an attenuator is just attenuating a beam.

An attenuator can always be regarded as a beam splitter. And you can do measurements at both arms of the beam splitter. Or you have to consider the vacuum noise which enters through the other port of the beam splitter.

And if you now add that those n atoms in a trap have some natural loss by inelastic collisions or background gas scattering, the loss acts like a beam splitter: particles don't stay in the trap, but go out through the other port. And then you can measure them.

So in other words, every loss process should be regarded as a possible measurement. And it doesn't matter whether you perform the measurement or not. And I think it's just obvious that for the NOON state, the moment one particle is lost and you use this particle to figure out whether the n particles are here or there, the whole superposition state is lost if you just lose one single atom out of n.

So the lifetime of a NOON state is then not your usual trap lifetime, where you lose half of the atoms, or 1 over e of the atoms. It is n times faster, because it is the first atom lost which already completely removes the entanglement of your state.

So to say it more specifically, the limitation is loss. When you have a fully entangled state, a maximally entangled state, usually the loss of one particle immediately removes the entanglement. We had the situa-- no, that's not a good example. But for the most entangled states, usually-- and for the NOON state, it's trivial to see-- a single particle lost allows you to measure on which side of the potential barrier all the atoms are, and all the beauty of the non-classical state is lost.

So if you assume that in a time window you have an infinitesimal loss, a loss probability epsilon per particle, what usually happens is, if you have an entangled state of n particles, the probability to keep the entanglement is 1 minus epsilon to the power n. If you expand 1 minus epsilon to the n, it becomes 1 minus n times epsilon.
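This expansion can be checked in one line (illustrative numbers):

```python
# If each of the n entangled particles survives a short interval with
# probability 1 - eps, the probability that the entangled state survives
# intact is (1 - eps)**n, approximately 1 - n*eps for small eps: the
# cat state decays n times faster than a single particle.
eps = 1e-4
for n in (1, 10, 100):
    exact = (1 - eps) ** n
    print(n, exact, 1 - n * eps)
```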

So that's one reason why people have not scaled up those schemes to a large number of photons, or a large number of atoms. Because the larger n is, the more sensitive you are to even very, very small losses. Questions?

AUDIENCE: [INAUDIBLE]

PROFESSOR: Yes.

AUDIENCE: Or is that physically?

PROFESSOR: The super beam splitter would create the NOON state. I've not explained to you what it physically would be for photons, but I gave you the example for atoms: a BEC in a double well potential with attractive interactions. So to start with the condensate in--

AUDIENCE: Start with a bigger [INAUDIBLE].

PROFESSOR: If you start with a double well potential, and you put n atoms in initially, and then you switch on tunnel coupling, then you would create the NOON state-- if the interactions are attractive, and if everything is idealized so that you have a completely symmetric double well potential.

OK, the last example is the squeezed light interferometer. I just want to mention it briefly, because we've talked so much about squeezed light. So now I want to show you that squeezed light can also be used to realize Heisenberg-limited interferometry.

So the idea is, when we plot the electric field versus time, and if you do squeezing in one quadrature, then for certain times, the electric field has lower noise, and at other times higher noise. We discussed that. And the idea is that if there is reduced noise at the zero crossing, this means we can determine the zero crossing of the light, and therefore the phase shift, with higher accuracy.

You may also argue: if you have these quasi-probabilities, and with squeezing we have squeezed the coherent light into an ellipse, and things propagate with e to the i omega t, then you can determine a phase shift-- which is sort of an angular sector in this diagram-- with higher precision if you have squeezed this circle into an ellipse like that.

So that's the idea. And well, it's fairly clear that squeezing, if done correctly, can provide a better phase measurement. And what I want to show you here in a few minutes is how you can think about it.

So we have discussed at length the optical interferometer where we have just a coherent state at the input. This is your standard laser interferometer. But of course, very importantly, the second input port has the vacuum state. And we discussed the importance of that.

So the one difference we want to do now is that we replace the vacuum at the second input by the squeezed vacuum where r is the parameter of the squeezed vacuum.

OK, so that's pretty much what we do. We take this state, we plug it into our equations, we use exactly the same formalism we have used for coherent states. And the question is, what is the result?

Well, the result will be that the squeezing factor appears. Just as a reminder, for our interferometer, we derived the sensitivity of the interferometer. We had the quantities x and y. And the noise is delta y squared, the variance of this y operator, divided by x. This was the result when we operate the interferometer at a phase shift of 90 degrees.

Just a reminder. That's what we have done. That's how we analyze the situation with a coherent state.

The signal x is now the number measurement a dagger a. And we have the input of the coherent state, and b dagger b. We have an input mode a and a mode b. They get split. And then we measure at the output. And we can now, at the output, have photons-- a dagger a, which come from the coherent state, and b dagger b, which come from the squeezed vacuum.

So this is now using the beam splitter formalism applied to the interferometer. So this is now the result we obtain. And in the strong local oscillator approximation, it is only the first part which contributes. And this is simply the number of photons in the coherent beam.

The expectation value of y is 0, because it involves single b and b dagger operators. And the squeezed vacuum, if you write it down in the n basis, in the Fock basis, has only even n. So if you change n by one, you lose overlap with the squeezed vacuum. So therefore, this expectation value is 0.

For the operator y squared, you take this and square it. And you get many, many terms, which I don't want to discuss. I use the strong local oscillator limit: a dagger and a can be replaced by the eigenvalue alpha of the coherent state. So therefore, I factor out alpha squared in the strong local oscillator limit. And then what is left is b plus b dagger squared. And since we have squeezed the vacuum, this gives us a factor e to the minus r.

So if we put all those results together, we find that the phase uncertainty is now what we obtained when we had a coherent state with the ordinary vacuum, except that, in the strong local oscillator limit, this term gets the exponential factor e to the minus r. And since we have taken a square root, it's e to the minus r over 2.

So that result would actually suggest that the more we squeeze, delta phi goes to 0. So it seems even better than the Heisenberg limit. However-- well, this is too good to be true-- what I've neglected here is just the following.

When you squeeze more and more, the more you squeeze the vacuum, the more photons are in the squeezed vacuum, because this ellipse stretches further and further out and has overlap with Fock states at higher and higher photon number. So therefore, when you go to the limit of infinite squeezing, you squeeze yourself out of the limit where you can regard the local oscillator as strong, because the squeezed vacuum has more photons than your local oscillator. And then you have to consider additional terms.

So let me just write that down. However, the squeezed vacuum has a non-zero average photon number. And the photon number of the squeezed vacuum is, of course-- apply b dagger b to the squeezed vacuum-- this gives us sinh squared r, a hyperbolic sine function. And we can call this the number of photons in the squeezed vacuum.
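This mean photon number can be checked directly from the Fock-basis expansion of the squeezed vacuum. A sketch: the cutoff mmax is just a numerical truncation, and r = 1 is an arbitrary example value.

```python
from math import factorial, sqrt, tanh, cosh, sinh

# Squeezed vacuum in the Fock basis (only even photon numbers):
# |0_r> = (cosh r)^(-1/2) * sum_m (-tanh r)^m sqrt((2m)!)/(2^m m!) |2m>
def mean_photons(r, mmax=60):
    total = 0.0
    for m in range(mmax):
        c = (-tanh(r)) ** m * sqrt(factorial(2 * m)) / (2 ** m * factorial(m))
        total += 2 * m * c * c          # |c|^2 times photon number 2m
    return total / cosh(r)              # normalization 1/cosh(r) for |c|^2

r = 1.0
print(mean_photons(r), sinh(r) ** 2)    # both close to 1.381
```

The sum over even Fock states also makes the earlier point explicit: the state has only even photon numbers, which is why the expectation value of y vanished.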

So we have to consider now this contribution to y squared. So we have to consider the quadrature of the ellipse, the long axis of the ellipse, the non-squeezed quadrature component. And we have to consider that when we calculate the expectation value of y squared.

And then we find additional terms, which I don't want to derive here. And the upshot is that if you squeeze too much, you lose. So there's an optimal amount of squeezing.

And for this optimal amount of squeezing, the phase uncertainty becomes approximately one over the number of photons in the coherent state plus the number of photons in your squeezed vacuum. So this is, again, very close to the Heisenberg limit. So the situation with squeezed light is less elegant, because if you squeeze too much, you have to consider additional terms.
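To make this trade-off concrete, here is a small numerical sketch. The noise model is a hedged simplification of my own, not the exact expression from the lecture: I take the phase variance to be a squeezed shot-noise term plus a term coming from the sinh squared r photons of the squeezed vacuum, in the common convention where the squeezed quadrature variance scales as e to the minus 2r.

```python
import math

# Toy model (an assumption for illustration, not the lecture's exact result):
#   dphi^2(r) = exp(-2r)/n + sinh(r)**2 / n**2
# The first term is the squeezed shot noise; the second models the extra
# noise once the sinh^2(r) photons of the squeezed vacuum matter.

def dphi(r, n):
    return math.sqrt(math.exp(-2 * r) / n + math.sinh(r) ** 2 / n ** 2)

n = 10_000  # photons in the coherent state
rs = [0.01 * i for i in range(1001)]  # scan squeezing parameter r in [0, 10]
r_opt = min(rs, key=lambda r: dphi(r, n))

shot_noise = 1 / math.sqrt(n)  # no squeezing: r = 0
heisenberg = 1 / n             # fundamental limit

print(r_opt, dphi(r_opt, n))
# Over-squeezing hurts: at very large r the phase uncertainty grows again.
```

For these numbers the optimum sits near r of about 2.6 and gives a phase uncertainty of about 10^-3: better than the shot-noise limit of 10^-2, but still above the Heisenberg limit of 10^-4, consistent with there being an optimal amount of squeezing.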

This is why I gave you the example of the squeezed light and the squeezed vacuum last. But again, the Heisenberg limit is very fundamental, as we discussed. And for an optimum arrangement of the squeezing, you can also use a squeezed vacuum input to the interferometer to realize the Heisenberg limit. Any questions?

Why is squeezing important? Well, squeezing caught the attention of the physics community when it was suggested in connection with the detection of gravitational waves. As you know, the laser interferometer-- the most advanced one is LIGO-- has a monumental task in detecting a very small signal. And pretty much everything which precision metrology can provide is being implemented for that purpose.

So you can say this is for precision measurement maybe what the trip to the moon was for aviation several decades ago. A lot of what is pushing the frontier of precision measurement is motivated by the precision needed for gravitational wave detection. And what I want to show you here is a diagram for what is called Advanced LIGO. LIGO is currently operating, but there is an upgrade to LIGO called Advanced LIGO.

And what you recognize here is we have a laser which goes into a Michelson interferometer. And this is how you want to detect gravitational waves. But now you realize that the addition here is a squeezed light source.

And what we are squeezing-- it should be clear to you by now-- is not the laser beam. This would be much, much harder, because many, many photons are involved. But it is sufficient to squeeze the vacuum and couple the squeezed vacuum into your gravitational wave detector.

If you wonder, it's a little bit more complicated because people want to recycle light and have put in other bells and whistles. But in essence, a squeezed vacuum source is a major addition to advanced LIGO. Yes.

AUDIENCE: Where is the squeeze actually coming into the system? I see where it's drawn, but where is it actually entering the interferometer? At that first beam splitter?

PROFESSOR: OK. We have to now-- there are more things added here. Ideally, you would think you have a beam splitter here, the laser comes in here, and you simply want to enter the squeezed vacuum here. And this is how we have explained it.

We have one beam splitter in our interferometer. There is an input port and an open port. But what is important here is also the measurement-- here you have a detector for reading out the interferometer.

And what is important is that the phase is balanced close to the point where no light is coming out. So you're measuring the zero crossing of a fringe. But that would mean most of the light would then exit the interferometer at the other port.

But high laser power is very important for keeping the shot noise down. So you want to work with the highest power possible. And therefore, you can't allow the light to exit. You want to recycle it. You want to use enhancement cavities.

And what I can tell you is that this setup here integrates, I think, the signal recycling and the measurement at the zero fringe. And those different parts are coupled in a way which I didn't prepare to explain to you. All I wanted you to do is pretty much recognize that a squeezed light generator is important. And this enters the interferometer as a squeezed vacuum.

What I find very interesting, and this is what I want to discuss now, is that when you have an interferometer like LIGO, a gravitational wave interferometer, and now you want to squeeze-- does it really help to squeeze? Does it always help to squeeze? Or what is the situation? And this is what I want to discuss with you.

So let's forget about signal recycling and enhancement cavities and things like this. Let's just discuss the basic gravitational wave detector, where we have an input, we have the two arms of the Michelson interferometer. And to have more sensitivity, the light bounces back and forth in an enhancement cavity.

You can say if the light bounces back and forth 100 times, it is as if you had an arm length which is 100 times larger. And now we put in squeezed vacuum at the open port of the interferometer. And here we have our photodiodes to perform the measurement.

So the goal is to measure a small length change. If a gravitational wave comes by-- gravitational waves have quadrupolar character-- the metric will be such that there's a quadrupolar perturbation in the metric of space. And that means that, in essence, one of the mirrors is slightly moving out. The other mirror is slightly moving in. So therefore, the interferometer needs a very, very high sensitivity to displacement of one of the mirrors by an amount delta z.

And if you normalize delta z to the arm length, or the arm length times the number of bounces, the task is to measure a fractional length displacement of 10 to the minus 21. That's one of the smallest numbers which has ever been measured. And therefore, it is clear that this interferometer should operate as close as possible to the quantum limit of measurement.

So what you want to measure here is, with the highest accuracy possible, the displacement of an object, delta z. However, your object fulfills an uncertainty relation: if you want to measure the position very accurately, you also have to consider that it has a momentum uncertainty. And this fulfills Heisenberg's uncertainty relation. You will say, well, why should I care about the momentum uncertainty if all I want to measure is the position?

Well, you should care, because momentum uncertainty after a time tau turns into position uncertainty. Momentum uncertainty means an uncertainty in velocity. And if I multiply that by the time tau it takes you to perform the measurement, you have now an uncertainty in position which comes from the original uncertainty in momentum.

So if I use the expression from Heisenberg's uncertainty relation, I find this. And now, what we have to minimize to get the highest precision is the total uncertainty in position, which is the original uncertainty plus the uncertainty due to the motion of the mirror during the measurement process.

So what we have here is we have delta z. We have a contribution which scales as 1 over delta z. And by just finding out what is the optimum choice of delta z, you find the result above. Or if you want to say that this delta z tau should be comparable to delta z, just set this equal to delta z, solve for delta z, and you find the quantum limit for the interferometer, which is given up there.

So this has nothing to do with squeezing. And you cannot improve on this quantum limit by squeezing. This is what you got. It only depends on the duration of the measurement. And it depends on the mass of the mirror.
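This order-of-magnitude argument fits in a few lines of code. The mirror mass and measurement time below are illustrative assumptions, not actual LIGO parameters.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def standard_quantum_limit(mass_kg, tau_s):
    """Free-mass standard quantum limit for a position measurement.

    Balancing the initial position uncertainty dz against the extra spread
    (hbar / (m * dz)) * tau that the momentum uncertainty causes during the
    measurement time tau gives dz ~ sqrt(hbar * tau / m), dropping factors
    of order unity as in the estimate above.
    """
    return math.sqrt(HBAR * tau_s / mass_kg)

# Assumed illustrative numbers: a 40 kg mirror, 10 ms measurement time.
dz = standard_quantum_limit(40.0, 0.01)
print(dz)  # on the order of 1e-19 m
```

Note that squeezing appears nowhere in this function: the limit depends only on the mirror mass and the measurement duration, as stated above.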

Now-- just get my notes ready-- there is a very influential and seminal paper by Caves-- the reference is given here-- which was really laying out the concepts and the theory for quantum limited measurements with such an interferometer, and the use of squeezed light. Let me just summarize the most important findings.

So this paper explains that you have two contributions to the noise which depend on the laser power you use for your measurement. The first one is the photon counting noise. If you use more and more laser power, you have a better and better signal, and your shot noise is reduced. So therefore, you have a better readout of the interferometer.

And this is given here. Alpha is the eigenvalue of the coherent state. But there is a second aspect which you may not have thought about, and this is the following. If you split a laser beam into two parts, you have fluctuations. The numbers of photons left and right are not identical.

You have a coherent beam and you split it into two coherent beams, and then you have Poissonian fluctuations on either side. But if you have now Poissonian fluctuations in the photon number, when those photons are reflected off a mirror, they transfer photon recoil to the mirror. And the mirror is pushed by the radiation pressure. And there is a differential motion of the two mirrors relative to each other due to the fluctuations in the photon number in the two arms of the interferometer.

So therefore, what happens is you have a delta z deviation, or variance, in the measurement of the mirror position, which comes from radiation pressure. It's a differential radiation pressure between the two arms. And what Caves showed in this paper is that the two effects which contribute to the precision of the measurement come from two different quadrature components.

For the photon counting, we always want to squeeze the light in such a way that we have the narrow part of the ellipse in the quadrature component of our coherent beam. We've discussed it several times. So therefore, you want to squeeze it by e to the minus r.

However, what has a good effect for the photon counting has a bad effect for the fluctuations due to radiation pressure. So therefore, what happens is-- let's forget squeezing for a moment. You have two contributions to the noise: one grows with alpha, the square root of the number of photons, and one goes inversely with the square root of the number of photons.

You will find out that even in the interferometer without squeezing, there is an optimum laser power which you want to use. Because if you use too low a power, you lose in photon counting. If you use too high a power, you lose in the fluctuations of the radiation pressure.

So even without squeezing, there is an optimum laser power. And what is shown in this paper is that whenever you choose the optimum power, which keeps a balance between photon counting and radiation pressure, then you reach the standard quantum limit of your interferometer.

But it turns out that for typical parameters, this optimum is 8,000 watts. So that's why people at LIGO work harder and harder to develop more and more powerful lasers, because more laser power brings them closer and closer to the optimum power. But once they reach the optimum power, additional squeezing will not help them, because they are already at the fundamental quantum limit.
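The power balance can be sketched numerically. The noise model and the coefficients A and B below are assumptions chosen purely for illustration; the point is only that there is a finite optimum power, and that squeezing the vacuum port shifts that optimum without changing the minimum noise.

```python
import math

# Assumed toy model of the two power-dependent noise contributions:
#   photon counting:    A * exp(-2r) / P   (shrinks with laser power P)
#   radiation pressure: B * exp(+2r) * P   (grows with laser power P)
# where r is the squeezing parameter of the vacuum port. A and B are
# arbitrary illustrative coefficients, not LIGO numbers.
A, B = 4.0, 1.0e-6

def noise_sq(P, r=0.0):
    return A * math.exp(-2 * r) / P + B * math.exp(2 * r) * P

def optimum_power(r=0.0):
    # Minimizing A e^{-2r}/P + B e^{2r} P over P gives
    # P_opt = sqrt(A / B) * e^{-2r}.
    return math.sqrt(A / B) * math.exp(-2 * r)

P0, P1 = optimum_power(0.0), optimum_power(1.0)
m0, m1 = noise_sq(P0, 0.0), noise_sq(P1, 1.0)
print(P0, P1)  # squeezing lowers the optimum power...
print(m0, m1)  # ...while the minimum noise, 2*sqrt(A*B), stays the same
```

In this toy model the minimum noise is 2 sqrt(A B) regardless of r, while the optimum power drops by e to the minus 2r: squeezing lets you reach the same quantum limit with a weaker laser, which is the lecture's point.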

So the one thing which squeezing does for you is that it changes the optimum power in your input beam by a squeezing factor. So therefore, if you have lasers which have maybe 100 watts and not 8,000 watts, then squeezing helps you to reach the fundamental quantum limit of your interferometer.

So that's pretty much all I wanted to say about precision measurements. I hope the last example-- it's too complex to go through the whole analysis-- gives you at least a feel that you have to keep your eye on both quadrature components. You can squeeze and get an improvement in one physical effect, but you have to be careful to consider what happens in the other quadrature component. And in the end, you have to keep the two of them balanced.

Any questions? Oops. OK. Well, we can get started with a very short chapter, which is about g2. The g2 measurement for light and atoms.

I don't think you will find the discussion I want to present to you in any textbook. It is about whether g2 is 1 or 2, whether we have fluctuations or not. And the discussion will be whether g2 of 2 and g2 of 1 are quantum effects or classical effects.

So I want to give you here in this discussion four different derivations of whether g2 is 1 or g2 is 2. And they look very, very different. Some are based on classical physics. Some are based on the concept of interference. And some are based on the quantum indistinguishability of particles. And once you have seen all those four different explanations, I think you'll see the whole picture. And I hope you understand something.

So again, it's a long story about factors of 2. But there are some factors of 2 which are purely calculational, and there are other factors of 2 which involve a hell of a lot of physics. I mean, there are people who say the g2 factor of 2 is really the difference between ordinary light and laser light. For light from a light bulb, g2 is 2. For light from a laser, g2 is 1. And this is the only fundamental difference between laser light and ordinary light.

So this factor of 2 is important. And I want to therefore have this additional discussion of the g2 function. So let me remind you that g2 of 0 is the normalized probability to detect two photons or two particles simultaneously.

And so far, we have discussed it for light. And the result we obtained by using our quantum formulation of light with creation and annihilation operators was that g2 is 2 in the situation where we had black body radiation, which we can call thermal light. It sometimes goes by the name chaotic light. Sometimes it's called classical light, but that may be a misnomer, because I regard the laser beam as a very classical form of light.

This is sometimes called bunching because 2 is larger than 1. So pairs of photons appear bunched up. You have a higher probability than you would naively expect of detecting two photons simultaneously. And then we had the situation of laser light and coherent light where the g2 function was 1.

And I want to shed some light on those two cases. We have discussed the extreme case of a single photon, where the probability of detecting two photons is 0 for trivial reasons. So you have a g2 function of 0. But this is not what I want to discuss here. I want to shed some light on when we get a g2 function of 1 and when we get a g2 function of 2.

And one question we want to address, when we have a g2 function of 2, is this a classical or quantum effect? Do you have an opinion? Who thinks-- question.

AUDIENCE: When we did the homework problem where we had the linear superposition, like alpha minus alpha [INAUDIBLE] plus alpha, we found that one can have a g2 greater than 1, and one can have a g2 less than 1. So maybe g2 isn't a great discriminator of whether it's very quantum or very classical.

PROFESSOR: Let's hold this thought, yeah. You may be right. Let's come back to that. I think that's one opinion. The g2 function may not be a discriminator, because we can have g2 of 1 and g2 of 2 purely classically. But why classical light behaves classically, maybe that's what we can understand here.

And maybe what I want to tell you is that a lot of classical properties of classical light can be traced back to the indistinguishability of bosons, which are photons. So in other words, we shouldn't be surprised that something which seems purely classical is deeply rooted in quantum physics. But I'm ahead of my agenda.

So let me start now. I want to offer you four different views. And the first one is that we have random intensity fluctuations. Think of a classical light source.

And we assume that if things are really random, they are described by a Gaussian distribution. So if you switch on a light bulb and measure what is the probability that the momentary intensity is I-- well, you have to normalize it to the average intensity-- what you get is pretty much an exponential distribution.

And this exponential distribution has a maximum at I equals 0. So the most probable intensity of all intensities when you switch on a light bulb is that you have 0 intensity at a given moment. But the average intensity is I average.

So for such a distribution, for such an exponential distribution, you can easily figure out what is the average of I to the power n. It is related to I average to the power n, but it has an n factorial: the average of I to the n is n factorial times I average to the n. And what is important for our discussion is the case n equals 2, where the average of the intensity squared is two times the average intensity squared.

And classically, g2 is the probability of detecting two photons simultaneously, which is proportional to I squared. We have to normalize it, and we normalize it by I average squared. And this gives 2.
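This classical argument is easy to verify with a short Monte Carlo simulation: draw intensities from an exponential distribution and compute the normalized second moment.

```python
import random

# Sample momentary intensities from an exponential distribution with
# average intensity 1 (chaotic light) and estimate g2 = <I^2> / <I>^2.
random.seed(0)  # fixed seed so the sketch is reproducible

N = 200_000
samples = [random.expovariate(1.0) for _ in range(N)]

mean_I = sum(samples) / N
mean_I2 = sum(I * I for I in samples) / N
g2 = mean_I2 / mean_I ** 2
print(g2)  # close to 2 for exponentially distributed intensities
```

The same sampling also confirms the moment relation above: for the exponential distribution, the average of I to the n is n factorial times the average intensity to the n.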

So light with simple Gaussian fluctuations would give rise to a g2 function of 2. Since these are random fluctuations, it's also called chaotic light. And the physical picture is the following. The light is fluctuating.

But whenever you detect the first photon, it is more probable that you detect it when the intensity happens to be high. And then, since the intensity is high, the probability for the next photon is higher than the average probability. So therefore, you necessarily get a g2 function of 2. So this is the physics of it.

So let me just write that down. The first photon is more likely to be detected when intensity fluctuations give high intensity. And then we get this result.

Yes. Let us discuss a second classical view, which I can call wave interference. This is really important. A lot of people get confused about it.

If you have light in only one mode, this would be the laser or, for atoms, the Bose-Einstein condensate. One mode means a single wave. So if we have plane waves, we can describe all the photons or all the atoms by this wave function.

So what is the g2 function for an object like this?

AUDIENCE: 1.

PROFESSOR: Trivially 1, because if something is a clean wave, a single wave, all correlation functions factorize. You have the situation that I to the power n average is I average to the power n. And that means that gn is 1 for all n.

OK, but let's now assume that we have two modes. We could also use more, but I want to restrict it to two. And two modes can interfere.

So let me apply to those two modes a simple model. And whether it's simple or not is relative. It goes as follows. If you have two modes, both of unit intensity, then the average intensity is 2.

But if you have interference, then the normalized intensity will vary between 0 and 4. Constructive interference means you get twice as much as average. Destructive means you get nothing.

So therefore, the intensity squared will vary between 0 and 16. So if I just use the two extremes, it works out well. You have an average of I squared which is 8, the average over constructive and destructive interference. And this is two times the square of the average intensity, which is 2 squared.

So therefore, if you simply allow fluctuations due to the interference of two modes, we find that the g2 function is 2. So this demonstrates that g2 of 2 has its deep origin in wave interference. And indeed, if you take a light bulb which emits photons, you have many, many atoms in your tungsten filament which can emit waves.

And since they have different positions, the waves arrive at your detector with random phases. And if you really write that down in a model-- this is nicely done in the book by Loudon-- you realize that random interference between waves results in an exponential distribution of intensity. So most people wouldn't make the connection, but there is a deep and fundamental relationship between random interference and the most random distribution, the exponential distribution of intensity, which characterizes thermal light or chaotic light.
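A minimal sketch of this model, assuming equal unit amplitudes and independent uniform random phases for the emitters: summing M such phasors gives an intensity whose statistics approach the exponential distribution, and whose g2 works out to 2 minus 1/M, approaching 2 for many emitters.

```python
import cmath
import math
import random

# Crude model of a thermal source: M emitters with random phases.
random.seed(1)
M = 50       # number of interfering emitters
N = 50_000   # number of intensity samples

def one_intensity():
    # Total field = sum of M unit phasors with independent random phases.
    field = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
                for _ in range(M))
    return abs(field) ** 2

samples = [one_intensity() for _ in range(N)]
mean_I = sum(samples) / N
mean_I2 = sum(I * I for I in samples) / N
g2 = mean_I2 / mean_I ** 2
print(mean_I, g2)  # mean intensity near M; g2 near 2 - 1/M
```

So random interference alone, with no photon concept at all, reproduces the chaotic-light value g2 of 2 in the many-emitter limit, which is exactly the connection to view number one.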

So let me just write that down. The Gaussian intensity distribution-- actually, it's an exponential intensity distribution. But if you write the intensity distribution as a distribution in the electric field-- intensity becomes E squared-- then it becomes a Gaussian [INAUDIBLE].

So the Gaussian or exponential intensity distribution in view number one is indeed the result of interference. Any questions? So these are the two classical views. I think we should stop now.

But on Wednesday, I will present to you alternative derivations, which are completely focused on quantum operators and quantum counting statistics. Just a reminder, we have class today and we have class this week on Friday.