Description: In this video, the professor reviewed second order perturbation theory and discussed beyond the quadratic Stark effect, field ionization, and atoms in oscillating electric fields.
Instructor: Wolfgang Ketterle
Lecture 11: Atoms in Extern...
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free.
To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Good afternoon. So we are still in a discussion of atoms in external fields. And we are working our way up from simple static fields to time dependent fields.
Last week and the week before, we covered external magnetic fields, Zeeman shifts, different coupling limits, strong field, and weak field.
Last class on Friday, we talked about what happens when you put atoms into DC electric fields. So what we did was simple: lowest order, which means in this case second order, perturbation theory.
And we derived an explicit expression for the polarizability, alpha. And this polarizability tells us how energy levels are shifted quadratically with the electric field.
I put some emphasis in talking about what is inside perturbation theory and identified with you that, yes, if you have an electric field, we have electrostatic energy.
But in order to polarize the atoms, we have to create an internal energy. In a concrete example, if we have an s state, we have to mix a p state to create a dipole moment.
And this costs energy. Exactly in the same way as when you have a spring and pull on the spring with gravity, you gain gravitational energy. But you have to pay exactly half of it to create internal energy in your spring.
And this is actually the reason for this factor of 1/2, as we discussed at great length. Question is, do you have any questions about that part? Because we want to go to the next level. Perturbation theory? DC polarizability?
So anyway, the menu for today is we have done perturbation theory here for weak electric fields. But the question came already up in class, is it really valid? Or for what regime is it invalid?
So today I want to talk to you briefly what happens when we go beyond perturbation theory. When we go beyond the quadratic Stark effect.
And that leads us to a discussion on stability of atoms in strong electric field and field ionization.
I like to sort of feature it because it allows me to tell you something about the peculiar properties of Rydberg atoms. And also the ionization of Rydberg atoms through electric field.
This is how people in our field create cold plasmas. And it's also a way to do a very sensitive detection of atoms. So what I'm telling you today is interesting for its own sake, but also because it's an important tool for manipulating atoms, creating plasmas, or sensitive detection.
So this will probably only take 10 or 20 minutes. And then we want to go from DC electric fields to AC electric fields. So we then discuss the AC polarizability. And, well, that will take us from the time independent perturbation theory we have done now for DC fields to time dependent perturbation theory.
So all the topics are rather basic aspects of quantum physics. But as usual, I try to give you some special perspectives from the atomic physics side.
So in perturbation theory, we have an admixture of other states. And this admixture is done with the matrix element.
And in perturbation theory, we always have an energy denominator, the energy difference between the intermediate state and the ground state. And if the electric field is small compared to the ratio of this energy denominator to the matrix element, then we have only a small admixture of other states into the ground state.
So usually when we estimate the validity of perturbation theory, we look for the state which is closest to the ground state. For which the energy denominator is smallest.
So therefore, i here is the nearest state, but, and this is important, of opposite parity.
Otherwise, because of the parity selection rule, the matrix element would be 0 and electric field does nothing. I made a comment on Friday, but let me do it again.
That, of course, means that when we apply an electric field to our favorite atoms, we don't have to worry about the other hyperfine states. The other hyperfine states have the same spatial wave function and, therefore, the same parity.
So we are really here talking about the first excited state. And a concrete example for those of you who work with alkalis with an s ground state, the relevant energy scale here is the excitation energy to the first p state.
So let's just estimate it for a single electron atom. And well, this is a hydrogenic estimate. The excitation energy to the first excited state.
1s to 2p is about 1 Rydberg. And the Rydberg, or rather twice the Rydberg, the Hartree, is nothing else than e squared over a0.
It's the electrostatic energy of two charges separated by a Bohr radius. And if we estimate that the matrix element for a strong transition is on the order of the Bohr radius, there are no other length scales in the problem, we find that the value for the electric field in atomic units is the charge divided by the Bohr radius squared.
And this is really high. It is on the order of 5 times 10 to the 9 volt per centimeter. And this is 1,000 times larger than laboratory electric fields.
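If you want to check the number just quoted, here is a minimal numerical sketch. The constants are hard-coded CODATA values, so no physics library is assumed.

```python
# Quick check of the atomic unit of electric field,
# E_at = e / (4*pi*eps0 * a0^2), quoted above as ~5e9 V/cm.

e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
a0 = 5.29177210903e-11     # Bohr radius, m
pi = 3.141592653589793

E_atomic = e / (4 * pi * eps0 * a0**2)   # V/m
E_atomic_V_per_cm = E_atomic / 100       # convert V/m -> V/cm

print(f"{E_atomic_V_per_cm:.3e} V/cm")
```

This gives about 5.14 times 10 to the 9 volts per centimeter, matching the estimate on the board.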
Those fields would just create sparking along the electrodes. You cannot apply such high electric fields in a laboratory. So, therefore, nothing to worry about.
If you have ground state atoms, the Stark effect in perturbation theory is all you need. Actually, to be a little bit more precise, when I used the Rydberg and a0, I made a bit of an overestimate.
So typically the critical electric field which would cause a breakdown of perturbation theory for the ground state is around 10 to the 9 volt per centimeter.
OK, so we are safe when we talk about ground states. But once we go to the excited states, we have degeneracy. p states have a threefold degeneracy, and there are nearby states with opposite parity.
So we can have mixing there. And actually, as we will see for excited states, we have a breakdown of perturbation theory already at very, very small electric fields.
So let us discuss hydrogenic orbits with principal quantum number, n. And if you estimate what is the size of the matrix element? Well, it's not just a0.
There is a scaling with n which is n square. The matrix element in higher and higher excited state scales with n square.
Well, how does the energy separation scale? Well, let's not discuss hydrogen here. Because in hydrogen, energy levels are degenerate. And we would immediately get a breakdown of perturbation theory.
Let's rather formulate it for general atoms. And we had this nice discussion about the quantum defect. So if we compare the energy of two l states, they scale as 1 over n square.
But for different l states, we have different quantum defects. Delta l plus 1. And here we have delta l.
So, therefore, doing an expansion in n, which we assume to be large, we find that the energy difference is proportional to the difference between the quantum defects for the two states we want to mix with the electric field. And then again, the scaling with the principal quantum number is n cube.
So, therefore, we find for the critical field using the criterion I mentioned above. We take the energy splitting. The energy the denominator which appears in perturbation theory.
We divide by the value of the matrix element. Well, we had the Rydberg constant, or two times the Rydberg constant, which is nothing else than e squared over a0.
The matrix element was n squared times a0. And then we have the difference between quantum defects. And now, and this makes it really so dramatic,
we had an n squared scaling of the matrix element. And we have an n to the minus 3 scaling of energy differences. So that means the critical field scales as 1 over n to the 5.
Go to an excited state with n equals 10. And the breakdown of perturbation theory happens 100,000 times earlier. So some of the scaling in atomic physics is very, very dramatic when you go to more highly excited state.
Add to this that quantum defects become very small once you go beyond s and p states. The higher l states just don't penetrate into the region near the nucleus.
So if l is larger than 2, if you have more complicated atoms, you may add the angular momentum of the core here.
But so if you put n to the 5 scaling and the small quantum defect together, you find that critical electric fields are smaller than one volt per centimeter already for principal quantum numbers as low as 7.
So that means bring a 1.5 volt battery close to your atom and you drive it crazy. You drive it out of perturbation theory.
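The n to the 5 scaling can be put into numbers with a short sketch. The critical field in atomic units is roughly the quantum-defect difference divided by n to the 5; the two quantum-defect differences used below are assumed illustrative values (penetrating low-l states versus nonpenetrating high-l states), not data for a particular atom.

```python
# Sketch of the 1/n^5 scaling for the breakdown of perturbation theory:
# E_crit ~ (energy splitting)/(matrix element) ~ delta_qd / n^5 in atomic units.

E_AT_V_PER_CM = 5.14e9      # atomic unit of electric field, in V/cm

def critical_field(n, delta_qd):
    """Rough critical field (V/cm) where l mixing sets in."""
    return delta_qd / n**5 * E_AT_V_PER_CM

# delta_qd ~ 1e-1: penetrating s/p-like states; ~1e-6: high-l states.
# Both values are illustrative assumptions.
for delta in (1e-1, 1e-6):
    print(delta, f"{critical_field(7, delta):.3g} V/cm")
```

For the nonpenetrating states at n equals 7, this lands below 1 volt per centimeter, consistent with the 1.5 volt battery remark.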
So what we have is the following. We have, of course, the structure of atoms in an electric field. Here is the electric field axis.
And let me just pick three n values: 18, 19, 20. And now, the criterion I gave you was actually the criterion for the applicability of perturbation theory.
If the energy splittings are smaller than the matrix element times the field, you have to rediagonalize between those levels. And that gives you then not a quadratic, but a linear effect.
So, therefore, the structure is here that you have a region where you have strong l mixing. So you have to use degenerate perturbation theory for the different l states.
But the manifolds in n, the principal quantum number, are still well separated. But then, eventually, when you go further, you have a region which is called n mixing.
So now the electric field is really completely rediagonalizing your states with different quantum numbers, n.
So the result of this discussion is that highly excited states of atoms behave very differently from ground state atoms. And the n to the 5 scaling means sensitivity to fields of a volt per centimeter.
Level mixing all over the place. And that's why for those highly excited states, people have coined the word Rydberg atoms or Rydberg matter. That means atoms with higher principal quantum numbers.
And the study of Rydberg atoms was pioneered, well, the early pioneering work was done by our own Dan Kleppner.
And then Herbert Walther in Munich, who happened to be my Ph.D. advisor. And finally, Serge Haroche, who was recognized with the last Nobel Prize together with Dave Wineland.
So this is not just theory. What I am showing to you here is spectroscopy done at MIT by Dan Kleppner.
So what is done here is from the ground state, they excite to an excited state. And whenever you hit an excited state, you see a signal.
Let's focus on the upper part. So if at a given electric field, you scan the laser, you get one of those traces. You find peaks, peaks, peaks.
And those peaks correspond to the different n manifolds with strong Stark mixing. And eventually, when you go to somewhat higher field, you have states all over the place. And this is the regime where you have n mixing.
So in the '70s and '80s, those experiments really obtained a clear understanding and description of atoms in, well, I would say high electric fields. But the fields were not so high.
It was just that the atoms were so sensitive that already at rather low electric fields, they reached what is regarded as the high-field limit.
Now the question is why, when the signal was recorded, the traces suddenly stop. And that means the electric field is now so high that the atom no longer has a stable state.
The electric field is so high that it literally rips the electron away from the atom. And if you go to higher states, the electric field where this happens is lower.
This is the process of field ionization and that's what we want to discuss next. Question?
AUDIENCE: Um, for what atoms?
PROFESSOR: Those studies were actually done for lithium. It's actually peculiar. Dan Kleppner really liked hydrogen. Dan Kleppner is the person who tried to do almost all experiments with hydrogen.
The famous BEC experiment. He also had the Rydberg experiment, which was done in building 26, where Vladan Vuletic now has his labs.
This is where spectroscopy of hydrogen was done with the goal of a precision measurement of the Rydberg constant.
So we excited hydrogen to some of those high levels. But as probably the experts know, the hydrogen atom is the hardest atom to work with. Because you need [INAUDIBLE] and alpha.
You have this huge gap to the first excited state. And that's why if you can get away, you try to work with other atoms. And in those experiments, those people worked with the lithium atom.
So the lithium atom has a nonzero quantum defect, in contrast to hydrogen where the quantum defect is 0. And this will actually be very, very important for field ionization, as I want to discuss now.
Other questions? OK, so at a given electric field, states, so to speak, just disappear. They're no longer stable.
And this is the process which is called field ionization. So the phenomenon is that sufficiently strong electric fields ionize the atom.
And whenever there is a simple model and I can give you an analytic answer, I try to do that. Because I feel a lot of our intuition is shaped by understanding simple models.
And the simplest model for field ionization is just the classical model of calculating where the saddle point is in the combined potential. The combined potential of the nucleus, which is a Coulomb potential, and the external electric field.
So many features of the experiment can be understood by this simple three line derivation. So we have a potential.
One part of it is the Coulomb potential. And focusing on one spatial direction here. And then, in addition, we apply an electric field.
And the electric field creates a linear potential. And if I take the sum of the two, well, at large distances, it's the electric field which dominates. Closer in, the Coulomb potential takes over.
So that's how it looks. So now we have the situation that if we put in atoms and look at the energy eigenvalues, at this point, this is the highest excited state in the atom which is still stable.
So what I want to derive for you is that what determines the stability is simply the saddle point. When the binding energy of the excited state, for which we use the Rydberg formula, is smaller than the depth of the potential at the saddle point, the atom becomes unstable and gets field ionized.
And we'll discuss a little bit later if this really applies to real atoms. The quick answer is for lithium and all the other atoms, it applies. For hydrogen, it doesn't. Because hydrogen has too many symmetries. Too many exact degeneracies.
OK, so the total potential is the Coulomb potential plus the electric potential.
What we need is the position of the saddle point, where we have a maximum in this one dimensional cut.
And by taking the derivative of the total potential, you immediately find its position. And now what we calculate next is the potential energy at this point.
And, well, this is just copied from the notes: it involves e to the 3/2. And now what we want to do is we want to postulate that, for field ionization, this should be equal to the binding energy of the electron, which is nothing else than the Rydberg constant divided by n squared.
OK, now here, we have the square root of the electric field from this calculation. So that means the critical electric field will scale as 1 over n to the 4.
And this is a famous scaling which can be found in many textbooks. The critical electric field for ionization equals, and now, that's the beauty of atomic units, 1 over 16 n to the 4.
A beautiful formula derived from the saddle point criterion. Of course, what I mean here is, if you do the derivation, it's in atomic units. Which means in units of the atomic unit of the electric field, which is e over the Bohr radius squared.
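The three line derivation can be checked numerically. In atomic units, the one dimensional cut of the potential along the downhill direction is V(z) = -1/z - E z; the sketch below locates the saddle and confirms that matching its depth to the binding energy -1/(2 n^2) reproduces 1/(16 n^4).

```python
# Saddle-point check of E_crit = 1/(16 n^4), all in atomic units.
# dV/dz = 1/z^2 - E = 0 gives the saddle at z0 = 1/sqrt(E),
# where V(z0) = -2*sqrt(E).

def saddle_energy(E):
    z0 = E ** -0.5                 # saddle position
    return -1.0 / z0 - E * z0      # potential at the saddle, = -2*sqrt(E)

def classical_ionization_field(n):
    return 1.0 / (16.0 * n**4)

n = 20
E_c = classical_ionization_field(n)
binding = -1.0 / (2.0 * n**2)      # Rydberg formula, atomic units
print(E_c, saddle_energy(E_c), binding)
```

At the critical field, the saddle depth -2*sqrt(1/(16 n^4)) = -1/(2 n^2) equals the binding energy, which is exactly the ionization criterion stated above.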
So it's a simple model. It's an analytic result. The question is, is it valid? Does it make any sense? And the answer is yes. But in a quantum mechanical treatment, you would actually solve Schroedinger's equation in such a potential.
And then the onset of field ionization comes when tunneling through this barrier becomes possible. But it is the nature of tunneling that if you're a little bit too low, tunneling is negligible.
You may have ionization rates of 1 per millisecond or so. And if you just go a little bit closer, it becomes exponentially larger. So, therefore, this scaling is very, very accurate.
Because the transition where you go from weak tunneling, to strong tunneling, to spilling over the barrier, it's a very narrow range of electric fields.
But, yes, people have looked at it in great details and have calculated corrections due to tunneling. So these are quantum corrections to the classical threshold which I just calculated.
But now in hydrogen, a lot of the l-mixing matrix elements, matrix elements due to the electric field between different l states, vanish.
Hydrogen is just too pure, too precise. There are actually parabolic quantum numbers in which you can exactly diagonalize hydrogen in electric fields.
And you find some stable states which do not decay. And they are above the classical threshold we have just calculated.
So as Dan Kleppner would have said, the simplest of all atoms is the most complicated when it comes to field ionization. Because it has a lot of stable states above the classical barrier.
So you can sort of envision that there will be orbits which are just confined to this region. And the electron never samples the saddle point.
And if you look at this diagram on the wiki, these are actually calculations for hydrogen which include ionization rates.
You will find that for the states which go down in energy, which are on the downhill side of the electric field, there is always an onset for ionization marked. And then you see a rapid increase in the ionization rate.
But you also see those hydrogenic states which go upward in energy, and they refuse to ionize because of the symmetry of parabolic coordinates and the things I've mentioned.
Anyway, it's too special a topic to spend more time on here in class, but I just think you should at least know qualitatively what is different for hydrogen.
OK, so that's what I wanted to tell you about high electric fields and field ionization in principle. Let me now briefly mention important applications of field ionization.
One is close to 100% detection efficiency for atoms. If you want to detect single atoms, for instance, you have a krypton sample and you want to find a rare isotope of krypton for dating the material.
You need an extremely high sensitivity and you may just have a few single atoms in the sample. One way to do it would be that you excite the atom maybe through an intermediate state, to Rydberg state.
And then by just applying a few volt per centimeter, you get an ion. And ions can be counted by particle detectors. You can accelerate the ion, smash it into a surface and count the particles with close to 100% efficiency.
And this is one of the most sensitive detection schemes. I remember in the aftermath of Chernobyl, there was an interest in detection schemes for radioactive strontium. And on the wiki, I give you a reference where some people developed this resonance ionization spectroscopy for some atomic isotopes.
Which unfortunately appeared more frequently after the Chernobyl disaster. And they developed a method based on excitation to Rydberg states. Which was more sensitive than other methods.
You may ask why don't you ionize it with a laser? Well, the fact is, you can photoionize it with a laser. It's another alternative, but it takes much more laser power.
Because if you excite into the continuum, the matrix element is much smaller. And often, if you want to have 100% ionization probability to go into the continuum, you need such high laser power that you may get some background of resonant ionization of other elements and such.
So Rydberg atoms are really the smart way to go. You go from an almost bound to an almost unbound electron. And then it's just the electric field which causes the final act of ionization.
I also want to briefly mention that the famous experiments on Rydberg atoms by Herbert Walther and Serge Haroche and collaborators would not have been possible without field ionization.
I give you this one reference, but I'm sure you'll find more in the actually very, very nicely written Nobel lecture of Serge Haroche. I just read it a few weeks ago and it's a delight to read how he exposes the field.
So they did QED experiments by having microwave transitions between atoms in two highly excited states. Let's say with principal quantum number 50 and 51. So that's now conveniently in the microwave regime.
Then those atoms are passed through a cavity. And in this cavity, single atoms interact with single photons. And they have done beautiful quantum non-demolition experiments of single photons. I mean that's really wonderful, state of the art experiments.
Back to Rydberg atoms and field ionization. Eventually, the read out of those experiments was you prepare atoms in the state 50 or 51.
And afterwards, if they have absorbed or emitted a photon, they should be in a different state. So you were interested in a very high detection efficiency which could distinguish between 50 and 51.
And of course there is a way to distinguish that. And this is because of n to the 4. You first apply an electric field which can only field ionize the 51 state. And then you allow the atoms to propagate into a slightly higher electric field. And then the 50 states are ionized.
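How well does n to the 4 separate adjacent states? A minimal sketch, using the classical 1/(16 n^4) formula and the atomic field unit from before (illustrative estimate, ignoring the quantum corrections mentioned above):

```python
# State-selective field ionization: compare the classical ionization
# fields for n = 50 and n = 51, converted from atomic units to V/cm.

E_AT_V_PER_CM = 5.14e9      # atomic unit of electric field, in V/cm

def ionization_field_V_per_cm(n):
    return E_AT_V_PER_CM / (16.0 * n**4)

f51 = ionization_field_V_per_cm(51)
f50 = ionization_field_V_per_cm(50)
print(f"n=51: {f51:.1f} V/cm, n=50: {f50:.1f} V/cm")
print(f"ratio f50/f51 = {f50/f51:.3f}")
```

The two fields are a few tens of volts per centimeter and differ by roughly 8%, which is why ramping the field lets you ionize 51 first and 50 a moment later.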
So the standard experiment is that you pass those atoms between two field plates. And by putting an angle between the plates, the field the atoms see is increasing along their path.
And then you have two little holes with channeltron particle detectors behind them. And the first detector will detect the 51 states, and the second detector will detect the lower lying states.
So you can detect every atom with high probability, but also in a state selective way.
So this way of doing state selective field ionization based on the discussion we had earlier, this is sort of the method of choice for experiments involving Rydberg atoms.
Any questions about atoms in electric fields? Well, then, let us add time dependence.
So what we should do next is atoms in oscillating electric fields. It's also a good way to review what we have done. Because the first thing to do now is we calculate the polarizability for AC fields.
We calculate the AC Stark effect. And, of course, if in the AC Stark effect we set omega to 0, we will retrieve the DC Stark effect. So in a way, what I'm doing for you now is I'm using time dependent perturbation theory to obtain a new result.
But it will reproduce the result of time independent perturbation theory which we have just discussed. All right, so atoms in oscillating electric fields.
Of course, the next step is, and this is where we are working towards, we will in the next few lectures, starting on Wednesday, well, then there is Spring Break, but in the next lectures, develop a deep understanding of what happens when atoms interact with light.
And oscillating electric fields is already pretty close to light. And I want to actually also show you that we capture already a lot of the phenomena which happen with light except for a full understanding of spontaneous emission.
So pretty much, when we use an oscillating electric field, we allow the atom to interact with just one mode of the electromagnetic field, which is filled with a coherent state.
And this is so classical that we don't even need field quantization. We just use a classical electric field. And this already gives us the interaction of atoms with light except for spontaneous emission which involves all the other modes.
So that's what we do later. But today, we just do the same classical description of an atom in an oscillating electric field. And this is the theory of the AC Stark effect.
So all we do is application of time dependent perturbation theory. So our electric field is now time dependent. It has a value epsilon, polarization e hat, and it oscillates.
Our perturbation Hamiltonian is exactly the same dipole Hamiltonian we had before for the DC Stark effect. But now it's a time dependent one.
And it will be useful to break up the oscillating term into e to the i omega t and e to the minus i omega t. I don't want to bore you with perturbation theory in quantum mechanics because you've all seen it in 8.05 or 8.06.
I just want to jump to the result. You find more details about it on the wiki. But all you do is you parametrize your wave function, you expand it into eigenstates with amplitude an.
And then you put it into the Schroedinger equation and assume that for short times the atom is in the ground state. The amplitude of the ground state is 1.
And the amplitude of the excited state is so infinitesimal that you can use lowest order perturbation theory. This immediately gives you the first order result for the amplitude of an excited state, k.
It only comes about because your initial state, the ground state, is coupled by the matrix element to the excited state. It's linear in the applied electric field.
So what you do is you take the Schroedinger equation and you integrate it from time 0 to time t. And since you have e to the i omega t and e to the minus i omega t, you get two time dependent terms.
So what appears now is we have the frequencies omega n of the excited state. And now when we couple the excited state, k, to the ground state, what appears is the frequency difference between the two. That's pretty much the excitation gap.
And we have a time dependent oscillation at omega. And then we have, of course, the same term where we flip the sign from omega to minus omega. So that's the second term.
By integrating an exponential function with respect to time, we get a frequency denominator, which is this one.
So this is really just the most straightforward, plain vanilla application of perturbation theory. The only thing I want to discuss, because it sometimes confuses people, is that we integrate from time 0 to the finite time t.
When you integrate, you get contributions from the upper integration limit and from the lower integration limit. And the contribution at the lower integration limit is actually a transient.
It is at a fixed frequency; it doesn't depend on the drive frequency omega. So it's a beat note between the ground and excited state. I could say at frequency omega k, but since it is together with the ground state, it's omega kg.
And this is because when you suddenly switch on a perturbation, it's like you suddenly switch on the drive of a harmonic oscillator. And you have some ringing. You have a transient at the natural frequency of the harmonic oscillator.
It has nothing to do with the drive. It's just a transient due to the sudden switch-on.
So like any transient, we haven't included damping in here. We don't have spontaneous emission. Everything is undamped. But eventually, all those transients will damp out with time.
And as we should have known also from the beginning, when we drive a system, when we switch on a perturbation, just think about an harmonic oscillator.
You have a response at the harmonic oscillator frequency, which is always transient in nature. And then you have a response at the drive frequency.
And we are interested, of course, only in the driven response. The driven response is at frequency plus minus omega. That's how we drive it.
But now I have to be careful. Since I'm looking at the amplitude, and I factor out of the time dependence of the wave function the time dependence of the eigenstates,
I'm now looking for drive terms in this expression for the amplitude which are at frequency omega, but modified by the frequency of the ground state.
But anyway, what I mean is the relevant term is the one which depends on omega. And in the following discussion, I simply drop the minus 1 because it's a transient. If you would switch on your time dependent electric field in a smoother way, this term would disappear.
OK, let's now be specific. Let's assume the electric field points in the z direction. For an isotropic medium, the dipole moment, the time dependent dipole moment which we induce is also pointing in the z direction.
And so we want to calculate now, what is the dipole moment which is created by the drive term? By the driving electric field.
And for that, we simply use the perturbation theory we have just applied. We take the ground state and its first order correction. And calculate the expectation value of the dipole moment.
In the line at the top, we have the first order correction to the ground state wave function. And so we just plug it in. And what we obtain is result where we have the matrix elements squared.
Remember, we do first order perturbation theory which is one occurrence of the matrix element, but now we take a second matrix element because we're interested in the dipole operator.
So this gives us now a sum over matrix element squared. We have e to the plus i omega t and e to the minus i omega t. This means we get 2 times the real part of this expression.
And the time dependent term is e to the i omega t. And then we have the term with plus and minus omega. Yes.
And so, most importantly, everything is driven by the electric field. Just to write the result in an easier way, we can combine the terms with omega and minus omega,
take the real part, and write that as 2 times omega kg over omega kg squared minus omega squared, times cosine omega t, times the electric field.
And now finally, we have the matrix element. We have integrated the e to the i omega t function and such.
So what we have here now is the time dependent electric field. And what we have here is the factor by which we multiply the electric field to obtain the dipole moment. And this is the definition of the now time dependent, or frequency dependent, polarizability.
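The expression just written on the board, alpha(omega) = 2 omega_kg |d_kg|^2 / (omega_kg^2 - omega^2) per excited state k, is easy to play with numerically. Here is a minimal two-level sketch in atomic-style units; omega_kg and d_kg are assumed illustrative values, not data for a particular atom.

```python
# Two-level sketch of the frequency-dependent (AC) polarizability:
# alpha(omega) = 2 * omega_kg * |d_kg|^2 / (omega_kg^2 - omega^2).

def alpha_ac(omega, omega_kg=1.0, d_kg=1.0):
    """AC polarizability of a two-level atom (illustrative units)."""
    return 2.0 * omega_kg * d_kg**2 / (omega_kg**2 - omega**2)

print(alpha_ac(0.0))    # omega -> 0 recovers the DC polarizability
print(alpha_ac(0.5))    # below resonance: enhanced, same sign as DC
print(alpha_ac(1.5))    # above resonance: the sign flips
```

Note the resonant denominator: the polarizability is enhanced as omega approaches omega_kg and changes sign above resonance, which anticipates the discussion of red- versus blue-detuned light later in the course.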
AUDIENCE: Excuse me.
PROFESSOR: Yes?
AUDIENCE: Why are we only getting the cosine and not [INAUDIBLE]? You only multiply by the other terms to get the [INAUDIBLE]? In terms of the cosine.
PROFESSOR: I haven't done the math yesterday when I prepared for the class. I did it a while ago when I wrote those notes, but you know, one comment, I know you don't want to hear it, but it's a following.
This system has no dissipation at all. And when I drive it, I will always get a response which is in phase with the drive. It can be cosine omega t, or minus cosine omega t. You cannot get a quadrature at this point.
So if you find I've made a mistake, and you would say, there is a sine, omega term, I've made a mistake.
I know for physical reasons, I cannot get a phase shift. You only get a phase shift in the response of a system to drive when you have dissipation.
AUDIENCE: OK. I'll have to talk with you.
PROFESSOR: But why don't-- I mean, I know the result is correct. And this is just, I hate to spend class time in trying to figure out if I've omitted one term.
AUDIENCE: The real part.
PROFESSOR: But the real part what I've probably done is--
AUDIENCE: I think that's probably what you did. Yeah. Thank you.
PROFESSOR: I think that's what I did. So let me, therefore, also say we have now included the real part of it. Yes. OK, thank you.
OK, this part here is how we often report it. And that's how you often find the result in textbooks: the frequency dependence of the AC polarizability.
But I like to rewrite it now in a different way which is identical. And it shows now that there are two contributions.
And I will discuss them in a moment. But those two contributions, one has in the denominator, let's assume we excite the system close to resonance. Omega is close to omega kg.
Then one denominator becomes much, much smaller, so that term corresponds to a near-resonant excitation. And from our discussion about rotating frames, we know that this near-resonant excitation corresponds to a corotating term.
And the other one corresponds to a counter rotating term. We've not assumed any rotating fields here, but we find those terms with the same mathematical signature. And I will discuss that little bit later.
But the physics is the distinction between the corotating and the counterrotating term. And it's the corotating term which is the term which [INAUDIBLE] the so-called rotating wave approximation.
I just want to identify those two terms, and let's hold that thought until we have the discussion. What I first want to discuss is the limiting case. We have not made any assumptions about frequency.
When we let omega go to 0, we obtain the DC result. It is important to point out that we only get the correct DC result because we have equal contributions from co- and counterrotating terms.
So that's sort of a question one could ask: what mistake do you make for the DC polarizability when you do the rotating wave approximation? Well, you miss out on exactly 50% of it. Because both terms become equally important.
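The two-term structure and the DC limit are easy to see numerically. This is a minimal sketch, not lecture code, in units where the prefactor e squared times the matrix element squared over h-bar is 1; omega_kg is the transition frequency and omega the drive frequency:

```python
# Sketch: frequency-dependent polarizability as a sum of corotating and
# counterrotating contributions (illustrative units, prefactor set to 1).

def alpha_full(omega, omega_kg=1.0):
    """Both terms: corotating 1/(omega_kg - omega) plus counterrotating 1/(omega_kg + omega)."""
    return 1.0 / (omega_kg - omega) + 1.0 / (omega_kg + omega)

def alpha_rwa(omega, omega_kg=1.0):
    """Rotating wave approximation: keep only the near-resonant (corotating) term."""
    return 1.0 / (omega_kg - omega)

# DC limit omega -> 0: both terms contribute equally, so the RWA misses exactly half.
print(alpha_full(0.0))                   # full DC polarizability (2 units)
print(alpha_rwa(0.0) / alpha_full(0.0))  # RWA recovers only 50% of it
```

At omega = 0 each denominator equals omega_kg, which is exactly the "equal contributions" statement above.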
I have deliberately focused here on the calculation of the dipole moment. I simply calculated the dipole moment as being proportional to the electric field.
And the coefficient in front of it is alpha, the polarizability. You may remember that when we calculated the effect of a static electric field, we looked at the DC Stark shift, the shift of energy levels.
We can now discuss also the AC Stark shift. Which is a shift of the energy levels due to the time dependent field. But I have to say you have to be a little bit careful.
And sometimes when I look at equations like this, there is a moment of confusion about what the question actually is. Because the wave function is now a time dependent wave function.
It's a driven system. It's no longer your time independent Schroedinger equation where you ask, what is the shift of the eigenvalues?
So the AC Stark shift here is now given by the frequency dependent polarizability. And then, and I know some textbooks do it right away, and at the end of the day it may confuse you, it uses an average of E squared. So in other words, if you have an electric field which is cosine omega t, and you calculate what the AC Stark shift is, you get another factor of 1/2.
Because cosine squared omega t, time averaged, is 1/2. So anyway, just think about that. It's one of those factors of 1/2 which is confusing. Will, you have a question?
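The bookkeeping of this factor of 1/2 can be checked in one line. This is a sketch, not from the lecture: with E(t) = E0 cos(omega t), the time average of E squared is E0 squared over 2, so the shift is minus one half alpha times the average of E squared, which is minus one quarter alpha E0 squared:

```python
# Numerical check: the time average of cos^2(omega t) over one period is 1/2.
import math

omega, n = 1.0, 100000
T = 2 * math.pi / omega   # one drive period
avg = sum(math.cos(omega * (k + 0.5) * T / n) ** 2 for k in range(n)) / n
print(avg)   # the confusing factor of 1/2
```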
AUDIENCE: So when we take omega goes to 0 from our previous results, are we still justified in neglecting the transient term?
PROFESSOR: Yes. But why? What will happen is the transient term is really a term which has time dependence.
And even if omega is 0, just the step function of switching it on creates an oscillation in the atom at a frequency which is omega excited state minus omega ground state.
You may think about it like this. I give you the more intuitive answer. Take an atom and put it in an electric field. If you gradually switch on the electric field, you create a dipole moment by mixing, at zero frequency, a p state into the s state. And that displaces the electron from the origin.
But if you suddenly switch on the electric field, you actually create a response of the atom which has a beat note between the excitation frequency of the p state and the excitation frequency of the s state.
And what you regard as the DC response of the atom is everything except for this transient term. However, and this tells you maybe something about the different formulations in quantum mechanics, remember what we did in time independent perturbation theory.
We never worried about the switch on because we just did time independent perturbation theory and we sort of assumed that the perturbation term had already existed from the beginning of the universe.
So it's not that we excluded the term. We formulated the theory in such a way that the term just didn't appear.
But if you switch on a DC field and you want an accurate description, you should actually do time dependent perturbation theory, and you get the transient term even for a DC field. And then you discuss it away the way I did.
OK, so if you want, these are the textbook results. We could stop here. But I want to add three points to the discussion.
You can also say that at this point we really understand the AC Stark shift as you find it in generic quantum mechanics textbooks. And now I want to give you a little bit of extra insight based on my knowledge of atomic physics.
So there are three points I would like to discuss here. The first one is the relation to the dressed-atom picture. The second one is I want to parametrize the results for the polarizability using the concept of oscillator strengths.
And thirdly, I want to tell you how you can, already at this point, take our result for the AC polarizability and calculate how do atoms absorb light and what is the dispersive phase shift which atoms generate when they're exposed to light.
Or in other words, I want to show you that based on this simple result, we have pretty much already all the information we need to understand how absorption imaging and dispersive imaging is done in the laboratory.
So these are the three directions I would like to take it. So let's start with number one. The relation to the dressed-atom.
So what I want to show you now is that a result we obtained in time dependent perturbation theory, we could've actually obtained in time independent perturbation theory by not using a coherent electric field which oscillates.
But just assuming that there are stationary Fock states of photons. And this is actually the dressed-atom picture. I know I'm throwing a lot of lingo at you now. It's actually very, very trivial.
But I want to show you that you have a result where if you just open your eyes, you see actually the dressed-atom shining through.
So what we had was an energy shift, with the polarizability which we derived. Let me just summarize what we have derived and rewrite the results from above.
So this is why I made the remark about the averaging of the electric field. If you combine it with, as you will see, Rabi frequencies and dressed-atom picture, you'll need the amplitude of the electric field.
And formulated in the amplitude of the electric field, you have a one quarter. And this is not a mistake. And you can really trace it down to the time average of the cosine term.
OK, so this is now-- I'm really copying from the previous page. We had the difference frequency between state 1 and state 2. So I'm simply assuming that we couple two states.
An s and a p state, if you want. We have a matrix element, which is the matrix element of the position operator z between state 1 and state 2, squared. And we have an energy denominator, which was this one.
And we have-- so I'm just rewriting the previous result. But now I usually hate matrix elements when they appear in an equation. I mean, who knows matrix elements.
What is relevant when we couple two different states is the Rabi frequency. Frequency units is what we want. So, therefore, I have prepared the formula so that I can combine the matrix element with the electric field.
And this is nothing else than the Rabi frequency squared. Or actually, one is measured in energy units, the other in frequency units, so there is an h-bar squared.
So, therefore, I have now written this result in what I think is a more physically insightful way by explicitly identifying the Rabi frequency which couples ground and excited states.
And I also want to separate, want to introduce the detuning of the time dependent oscillating electric field from resonance.
And then I obtain this result. Doesn't it look so much simpler than what we had before? And it has a lot of physics we can discuss now. One over delta is sort of like an AC Stark effect in one limit.
It's a far-off resonant case of an optical trapping potential. So this formula has a lot of insight which I want to provide now.
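The two-term formula in Rabi-frequency language can be sketched numerically. This is a sketch with hbar = 1; the sign convention delta = omega minus omega0 is my assumption, not necessarily the lecture's:

```python
# Sketch: ground-state AC Stark shift with Rabi frequency Omega and detuning
# delta = omega - omega0.  The first term is the corotating (RWA) piece; the
# second, suppressed by ~2*omega in its denominator, is the counterrotating
# (Bloch-Siegert) piece.

def stark_shift(Omega, delta, omega):
    corot = Omega**2 / (4 * delta)                   # rotating wave approximation
    counter = Omega**2 / (4 * (delta - 2 * omega))   # counterrotating correction
    return corot + counter

Omega, omega = 0.1, 5.0
for delta in (0.01, 0.1, 1.0):
    full = stark_shift(Omega, delta, omega)
    rwa = Omega**2 / (4 * delta)
    print(delta, rwa / full)   # near resonance the RWA captures almost everything
```

Close to resonance the 1/delta term dominates completely; far from resonance both terms matter, which is the far-off-resonant optical trapping regime mentioned above.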
For the second part, I give you the name now, and the interpretation will become obvious in a moment: it is the important Bloch-Siegert shift. It is the AC Stark shift due to the counterrotating term.
So what I'm motivating here is just don't get confused. What I write down is very simple. And I sometimes use advanced language for those of you who have heard those buzz words.
But what I really mean is what I want to discuss and you to follow are the simple steps we do here.
So anyway, what I've just done is I've rewritten the result from the previous page by just introducing what I suggest as more physically appealing symbols.
And now I want to remind you of this result for the AC Stark effect. Doesn't it look very similar to the standard result of time independent perturbation theory?
And, of course, you remember that in time independent perturbation theory, you get an energy shift which is the square of a matrix element divided by detuning.
So it seems when we inspect our result for the AC Stark effect which came from time dependent perturbation theory, that this result here has actually-- if we map it on time independent perturbation theory, it has two terms.
Both coupled by the Rabi frequency. But one has a detuning of delta. And the other has a detuning of minus 2 omega, minus delta.
So it seems that the result for the AC Stark shift can be completely understood by a mixture of not one, but two different states with different detunings.
And this is exactly what we will do in 8.422 in the dressed-atom picture when we have quantized the electromagnetic field. In other words, we have photons.
Because then what we have is the following. We have the ground state with n photons, n gamma.
Well, there are sort of n quanta in the system. n photons. But what we can do is we can now have one quantum of excitation with the atom. And n minus 1 photon.
So it's almost like absorbing a photon. And this state has a detuning of delta.
But then we can also consider an excited state. In other words, here, we connect to the excited state by absorbing a photon. But we can also, we talk about it more later, we can also connect to the excited state by emitting a photon.
So this state has now not one quantum of excitation; there are three quanta of excitation, one in the atom and two in the photon field.
So, therefore, its detuning is now much, much larger. Actually, if we were on resonance, the detuning would be just 2 omega. But if you are detuned, there is the delta.
So in other words, we could just do time independent perturbation theory. I'm not doing it here, and I leave all the beauty to when we discuss the dressed-atom picture in its full-fledged version.
But all I'm telling you is that the result for the AC Stark shift looks like time independent perturbation theory with those two detunings.
And I'm now offering you the physical picture behind it by saying, look, when we have the ground state with n photons, and we have those two other states, they have exactly the detuning which our results suggests.
And yes, indeed, if you look at the manifold of those three states and simply do time independent perturbation theory, you find exactly the same frequency shift, the AC Stark shift, as we just obtained in the time dependent picture.
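This equivalence can be checked directly. The following sketch is my own illustration (hbar = 1; the basis labels |g,n>, |e,n-1>, |e,n+1> and the convention delta = omega minus omega0 are assumptions): diagonalize the three dressed states coupled by Omega/2 and compare the ground-state shift with second-order time-independent perturbation theory.

```python
# Sketch: three dressed states |g,n>, |e,n-1>, |e,n+1>, energies measured
# relative to |g,n>, each excited state coupled to |g,n> by Omega/2.
import numpy as np

omega0, omega, Omega = 10.0, 9.0, 0.05
delta = omega - omega0   # here -1: red detuned

H = np.array([[0.0,      Omega/2,          Omega/2],
              [Omega/2, -delta,            0.0    ],
              [Omega/2,  0.0,    2*omega - delta  ]])

evals = np.linalg.eigvalsh(H)
shift_exact = evals[np.argmin(np.abs(evals))]   # perturbed |g,n> eigenvalue

# Second-order perturbation theory: matrix element squared over energy denominator.
shift_pt = (Omega/2)**2 / (0 - (-delta)) + (Omega/2)**2 / (0 - (2*omega - delta))
print(shift_exact, shift_pt)   # agree up to higher orders in Omega
```

The two energy denominators are exactly the two detunings of the AC Stark formula, which is the whole point of the dressed-atom correspondence.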
So in other words, what I'm telling you is there are two ways to obtain the AC Stark shift. One is you do time dependent perturbation theory assuming oscillating electric field.
And that would mean in the quantized language you assume the electromagnetic field is in a coherent state.
Alternatively, you can quantize the electric field and introduce Fock states. And Fock states are time independent; they are the eigenstates of the electromagnetic field in a cavity.
And now you obtain the same atomic level shift in time independent perturbation theory.
So in other words, we can have photon number states. And do time independent perturbation theory. Or alternatively, we can use a semi-classical electric field.
Which means we have a classical electric field, and then we treat it in time dependent perturbation theory.
All the textbooks generally use the latter approach, because it uses a semi-classical electric field. But I can tell you, I strongly prefer the first approach. Because in the first approach, you have no problems with questions like: what is the time dependent wave function? What is an energy level shift when you have a driven system?
In the time independent way, everything is just simple and right there. But these are two different physical regimes. Questions?
OK, second point of discussion is the concept of oscillator strengths. So what I'm teaching in the next five minutes is so old-fashioned that I sometimes wonder whether I should still teach it or not.
On the other hand, you find it in all the textbooks, and you also want to understand a little bit of the tradition. And at least I'm giving you some motivation to learn about it: I will parametrize the matrix element with an oscillator strength.
And most of your atoms, most of the alkali atoms, have an oscillator strength for the s to p transition, for the D-lines, which is unity.
You can actually write down what is the matrix element. What is the spontaneous lifetime of your atom without knowing anything about atomic structure. Just memorizing that f equals 1, the oscillator strength is 1 is pretty much all you have to know about your atom.
And the rest, the only other thing you have to know is what is the resonant frequency of your laser? 780 nanometer? 589 nanometer? 671 nanometer?
So the modern motivation for this old-fashioned concept is for simple atoms where the oscillator strength is close to 1, this is probably the parametrization you want to use because you can forget about the atomic structure.
But the derivation goes as follows. I want to compare our result for how an atom responds to an electric field to the result of a classical oscillator.
So compare our result for the AC polarizability to a classical harmonic oscillator.
So I assume this classical harmonic oscillator has a charge, a mass term, and a frequency. And both the atom and the classical harmonic oscillator are driven by the time dependent electric field. Which we have already parametrized by cosine omega t.
OK. If you look at the classical harmonic oscillator-- you drive it at omega and ask, what is its time dependent dipole moment?
It's driven by the cosine term. And what I mean, of course, is that the dipole moment of a classical harmonic oscillator is nothing else than charge times displacement.
And, well, if you spend one minute and solve the equation for the driven harmonic oscillator, you find that the response, the steady state amplitude [? zk ?] of the harmonic oscillator, is cosine omega t times a prefactor which I'm writing down now.
There is this resonant behavior. So that's the response of the classical harmonic oscillator. And yeah.
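That steady-state response can be verified by brute force. This is a sketch in illustrative units (q = m = E0 = 1, with a small added damping, my choice, so the transient dies out); the analytic prefactor being checked is (q E0 / m) / (omega0^2 - omega^2):

```python
# Sketch: integrate the driven oscillator z'' = -w0^2 z - gamma z' + cos(w t)
# with RK4, wait for the transient to decay, and compare the steady-state
# amplitude with the undamped analytic prefactor 1/(w0^2 - w^2).
import math

w0, w, gamma, dt = 2.0, 1.0, 0.2, 1e-3

def acc(z, v, t):
    return -w0**2 * z - gamma * v + math.cos(w * t)

z, v, t, zs = 0.0, 0.0, 0.0, []
while t < 200.0:
    k1z, k1v = v, acc(z, v, t)
    k2z, k2v = v + 0.5*dt*k1v, acc(z + 0.5*dt*k1z, v + 0.5*dt*k1v, t + 0.5*dt)
    k3z, k3v = v + 0.5*dt*k2v, acc(z + 0.5*dt*k2z, v + 0.5*dt*k2v, t + 0.5*dt)
    k4z, k4v = v + dt*k3v, acc(z + dt*k3z, v + dt*k3v, t + dt)
    z += dt/6 * (k1z + 2*k2z + 2*k3z + k4z)
    v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
    t += dt
    if t > 150.0:          # transient long dead by now
        zs.append(z)

amplitude = max(zs)
print(amplitude, 1.0 / (w0**2 - w**2))   # both close to 1/3; tiny difference from the weak damping
```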
So this is just classical harmonic oscillator physics. And I now want to define a quantity which I call the oscillator strength of the atom. So I'm just jumping now from the harmonic oscillator to the atom, and then I combine the two.
And the oscillator strength is nothing else than a parametrization of the matrix element between different states. But it's dimensionless. And it's made dimensionless by using the mass, by using h-bar, and by using the transition frequency.
So let me just make sure we keep track. This is the result for the classical harmonic oscillator. And this is now the result for the atom, which we have already found before.
And now I'm rewriting it simply by expressing the matrix element square by the oscillator strength. And this here is just another expression for the polarizability alpha.
Well, let's now compare the result of a quantum mechanical atom exactly described by time dependent perturbation theory to the result of a classical harmonic oscillator.
The frequency structure is the same. So suppose I now say we have an ensemble of harmonic oscillators, and the harmonic oscillators may have different frequencies and different charges.
Then I have made those formulas exactly equal. And I can now formulate it this way: the atom reacts to a time dependent electric field exactly as an ensemble of classical oscillators with effective charges.
If I would say I have an ensemble of oscillators with effective charge, then the response of the atom and the response of the ensemble of classical oscillators is absolutely identical.
So the atom responds as a set of classical oscillators with an effective charge which is given here.
So, therefore, you don't have to go further if you want to have any intuition how an atom reacts to light. The classical harmonic oscillator is not an approximation. It is exact.
So that result is relevant because it allows us to clearly formulate the classical correspondence.
The second thing is, as you can easily show with basic commutator algebra, there is the Thomas-Reiche-Kuhn sum rule, which is discussed in all quantum physics texts, and which says that the sum over all oscillator strengths is 1.
So, therefore, we know if we have transitions from the ground state to different states, the sum of all the oscillator strengths to all the states can only be 1.
And another advantage of the formulation with oscillator strengths is that it is dimensionless. It's a dimensionless parameter which tells us how the atom responds to an external electromagnetic field.
I just need two or three more minutes to show you what that means. If you have hydrogen, the 1s to 2p transition is the strongest transition. And it has a matrix element which corresponds to an oscillator strength of about 0.4.
So the rest comes from more highly excited states. However, for alkali atoms, the D-line, the s to p transition, has an oscillator strength-- I didn't write down the second digit, but it's with excellent approximation 0.98 or something.
So not just qualitatively, almost quantitatively, you capture the response of the atom by saying f equals 1.
So if you use f equals 1 for the alkali atoms, then simply the transition frequency of the D-line gives you the polarizability alpha. And as we will see later-- because we haven't introduced it yet-- it will also give you gamma, the natural linewidth.
Because all the coupling of the atom to an external electromagnetic field is really captured by saying what the matrix element is. And f equals 1 is nothing else than saying the matrix element is such and such.
And, indeed, if I now use the definition of the oscillator strength in reverse, the matrix element squared between two states is the oscillator strength times one half times the product of two lengths. If you just go back and look at the formula, you find that this is dimensionally consistent: because the left hand side is a length squared, we need two lengths.
One is the reduced Compton wavelength, which is h-bar over the electron mass times c. And lambda bar is the reduced transition wavelength, lambda divided by 2 pi.
So, therefore-- I haven't found it anywhere in textbooks, but this is my summary of it: if you have a strong transition, and strong means that the oscillator strength is close to 1, then the matrix element for the transition is approximately the geometric mean of the reduced Compton wavelength of the electron.
And the reduced wavelength of the resonant transition. So now, if you want to know the matrix element for the D-line of rubidium, take the wavelength of 780 nanometers.
Take the reduced Compton wavelength of the electron, and you get an accurate expression.
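Plugging in numbers makes the rule of thumb concrete. This is a sketch of that estimate: |z| = sqrt((f/2) lambdabar_C lambdabar) with f = 1; the constants are standard CODATA-style values, and 780 nm is rubidium's D2 line:

```python
# Sketch: matrix element of the Rb D line from the geometric-mean rule of thumb.
import math

hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
a0 = 5.29177210903e-11   # Bohr radius, m

lambdabar_C = hbar / (m_e * c)        # reduced Compton wavelength, ~3.86e-13 m
lambdabar = 780e-9 / (2 * math.pi)    # reduced wavelength of the 780 nm line

z = math.sqrt(0.5 * lambdabar_C * lambdabar)   # matrix element for f = 1
print(z, z / a0)   # ~1.5e-10 m, i.e. roughly 3 Bohr radii
```

So with no atomic-structure input at all, the estimate lands at a few Bohr radii, the right scale for a strong optical transition.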
I know time is over. Any questions? All right.