Description: In this lecture, the professor discussed the index of refraction and started to talk about atom-light interactions.
Instructor: Wolfgang Ketterle
Lecture 12: Atoms in Extern...
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Good afternoon. Yes, let's have an on-time departure. The topic of last class was the ac Stark effect. And what I was presenting to you was the [INAUDIBLE] of the ac Stark shift from perturbation theory. But then I had three points for discussion.
The first one was [INAUDIBLE] for the [INAUDIBLE] problem, [INAUDIBLE]. The second one was the oscillator strength, which we almost finished. And today, I want to say a few words about the index of refraction, and eventually how the ac Stark effect leads to the absorption coefficient, which many of you use [INAUDIBLE].
So I mentioned that the oscillator strength is actually one way to parametrize the matrix element. We'll actually talk most of the lecture today about matrix elements, but I come to that in a few minutes. The matrix element is nothing but [INAUDIBLE]. Sure, it has the dimension of a length, on the order of the Bohr radius, but we don't really think in terms of lengths.
There are two ways you may want to parametrize a matrix element. One is the matrix element squared, which is related to the natural linewidth, and the natural linewidth is a number you should know how to [INAUDIBLE].
Or another way to parametrize matrix elements is by a dimensionless number, the oscillator strength, which is what we will use in this class. And that's, of course, also related to semi-classical physics. And I mentioned in the last class that the oscillator strength allows us to connect the quantum mechanical treatment of an atom to the response of a classical harmonic oscillator, with the charge of the electron, to an external field.
So I used this expression at the end of last class. I used this expression for the matrix element in terms of the oscillator strength to give you a nice relationship for strong transitions, as in the alkali D lines, where the oscillator strength is close to 1. You see that the matrix element is 1 over square root of 2 times the geometric mean of the Compton wavelength of the electron and the wavelength of the transition.
So that's actually important for the dipole approximation, because the Compton wavelength of the electron is very, very small. So therefore, if the matrix element is the geometric mean of the two, that means that the matrix element is smaller than the optical wavelength. And therefore the matrix element, which is the relevant length scale of the atom for the transition, is smaller than the optical wavelength.
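[The relation being described here is presumably, in standard notation,
\[
|\langle b|\,\hat{\epsilon}\cdot\mathbf{r}\,|a\rangle| \;=\; \sqrt{\frac{\hbar\, f_{ab}}{2 m\, \omega_{ab}}} \;=\; \frac{1}{\sqrt{2}}\,\sqrt{f_{ab}\;\bar{\lambda}_C\,\bar{\lambda}}\,,
\qquad
\bar{\lambda}_C = \frac{\hbar}{mc}\,,\quad \bar{\lambda} = \frac{c}{\omega_{ab}}\,,
\]
where f_ab is the oscillator strength, so for f_ab close to 1 the matrix element is 1 over square root of 2 times the geometric mean of the reduced Compton wavelength and the reduced transition wavelength. The symbols here are a reconstruction, not necessarily the exact notation on the board.]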
And that means-- and we'll talk about it today-- that in the expression e to the ikr, kr is really small. So we can do the dipole approximation. Now I have a question for you. I ask it, frankly, because we don't have many questions to discuss today.
But I mentioned that there is a sum rule for the oscillator strengths, and the sum rule says they add up to 1. And if there's a really strong transition, it kind of exhausts the sum rule, and this is what I was talking about.
But the thought crossed my mind after last lecture that if I don't add something to it, you may actually use it as a proof-- which would be wrong-- that the matrix element can never be larger than what you see on the right-hand side. On the other hand, we discussed in the context of Rydberg atoms that Rydberg atoms can have huge matrix elements. Matrix elements which [INAUDIBLE], but they can really become huge.
So how do you reconcile the fact that Rydberg atoms in states with large principal quantum numbers can have huge matrix elements? How do you reconcile it with our discussion of oscillator strengths and sum rules? The answer is simple but subtle.
So is this a proof that huge matrix elements are impossible? Or, if big matrix elements, huge matrix elements, are possible, what property of the oscillator strength did I not emphasize?
I mean, keep watching. So first, are there atoms which have huge matrix elements? Yes, Rydberg atoms. So the left-hand side can be really huge. On the right-hand side, we have the wavelength of the transition, which can get large. You have to account for the wavelength. But then you have the oscillator strength.
And the oscillator strengths-- the sum of all oscillator strengths is 1. But you have to be careful: not all oscillator strengths have to be positive. The definition of the oscillator strength involves the frequency from one state to the next. And depending whether you go up or down, the frequency has a positive or negative sign.
So therefore we have a sum rule. For the ground state, all frequencies are positive, because you can only go up. So therefore, the sum rule is really only useful for the ground state, because all oscillator strengths are positive. And if one oscillator strength is 1, you know no other transition has any strength at all.
However, when you have an excited state and you find an oscillator strength which is 1, you could still have a lot of other strong transitions, because their oscillator strengths are positive and negative and they compensate each other in the sum rule. So just be careful about that.
For the ground state-- and this is what I've proven to you here, I think fairly rigorously-- if you are in the ground state, the transition matrix element cannot be larger than the geometric mean of the Compton wavelength and the transition wavelength. But for excited states, you have to be careful, because the oscillator strengths can have both signs. OK.
Any questions about that? Then let's talk quickly about the third point I wanted to discuss, and this is the relationship to the index of refraction. The polarizability here is responsible for the index of refraction. Well, the polarizability determines the dielectric constant, and you can use whatever relation you want, but for atomic physics purposes, this is sort of the useful relation.
This is the index of refraction. It's related to the polarizability. And assuming that the polarizability is small, you get this Taylor expansion. So roughly speaking, n minus 1-- the difference of the index of refraction from the vacuum-- is proportional to the atomic density times the polarizability.
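[In SI units, the relation presumably being referred to is
\[
n \;=\; \sqrt{1 + \frac{N\,\alpha_{\mathrm{pol}}}{\epsilon_0}} \;\approx\; 1 + \frac{N\,\alpha_{\mathrm{pol}}}{2\epsilon_0}\,,
\]
with N the atomic number density and \(\alpha_{\mathrm{pol}}\) the polarizability; in Gaussian units, \(n \approx 1 + 2\pi N \alpha_{\mathrm{pol}}\). The exact prefactor conventions on the board may differ.]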
OK. And we know-- this is what we derived from perturbation theory-- that the polarizability depends on the matrix element squared and 1 over the detuning. Now I want to sort of show you that by just using that concept, and putting in dissipation phenomenologically, we can get a full expression for what happens to a laser beam crossing an atomic medium, like a Bose-Einstein condensate. How much of the laser beam is absorbed? And what is the phase shift of the laser beam?
In order to get an interesting, or simple, formula, I want to now parametrize the matrix element by this quantity gamma. This will turn out to be the natural linewidth, but right now, since we haven't talked about spontaneous emission, just say I replace the matrix element by gamma. And then I can write the index of refraction in the following way, where the matrix element squared is now parametrized by gamma.
I have also introduced the quantity sigma 0, which will later be the resonant absorption cross-section for atoms. But again, here, just use it as a parametrization. I mean, I've not introduced any new concepts. I've just rewritten this expression in this way, involving sigma 0, which is a cross-section, and gamma, which is a natural linewidth.
OK. The polarizability depended-- that was perturbation theory-- on 1 over the energy denominator, or 1 over the detuning. Now we have used, with the oscillator strength, the analogy to a classical harmonic oscillator. And at some level, you know that every harmonic oscillator must have a little bit of damping. And so the same applies to atomic oscillators.
And we can account for the damping by taking the frequency denominator, 1 over delta, and giving it an imaginary part. So I'm just telling you here: yes, every oscillator has some damping, and I can phenomenologically account for the damping by making the resonance frequency, or the detuning, slightly complex by adding an imaginary part.
Later on, I want to express all detunings normalized to the linewidth, which is proportional to the imaginary part. OK. So we have the situation that this expression, gamma over delta-- this was the expression for the polarizability-- now acquires a small imaginary part, and that means it's a complex number. And I can now take this expression and separate it into an imaginary part and a real part.
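[With the replacement described here, taking the imaginary part of the detuning to be gamma over 2 as is standard, the 1 over delta factor separates as
\[
\frac{1}{\delta + i\,\Gamma/2} \;=\; \frac{\delta}{\delta^2 + \Gamma^2/4} \;-\; i\,\frac{\Gamma/2}{\delta^2 + \Gamma^2/4}\,,
\]
a dispersive real part and a Lorentzian imaginary part; the overall sign of the imaginary part depends on the sign convention chosen for the detuning.]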
So why do I do that? Well, the index of refraction appears in the propagation of a plane wave. If you have a plane wave, the k vector is no longer the k vector in vacuum; it's multiplied with the index of refraction, and that's what I do here. And now you see immediately that an imaginary part of the index of refraction leads to absorption, and the real part leads to a phase shift.
OK. So what we have now is this expression for the plane wave after it has propagated through the medium. And we see there is exponential absorption. Exponential absorption. On resonance, we have an optical density which is given by this expression. And the second term here gives us a phase shift.
And it's clear, since we have one medium which has a certain thickness, that the optical density and the phase shift are related. Yes, if you're on resonance, you have the maximum optical density and the maximum absorption. And the optical density, of course, falls off as 1 over the detuning squared. [INAUDIBLE].
Whereas for large detuning, the phase shift goes as 1 over delta. It's the dispersive scaling with the detuning, and this is also what we would have had if we had started out with the polarizability and not added any dissipation, because far away from resonance, you simply have a dispersive effect, and dissipation doesn't occur.
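[A common way to write the result being described, with the normalized detuning \(\tilde{\delta} = 2\delta/\Gamma\), atomic density N, thickness L, and resonant cross section \(\sigma_0\), is
\[
E(L) = E_0\, e^{i n k L}\,, \qquad
\frac{I(L)}{I_0} = \exp\!\left[-\frac{\mathrm{OD}}{1+\tilde{\delta}^2}\right]\,, \qquad
\mathrm{OD} = N\,\sigma_0\,L\,, \qquad
\phi = -\frac{\mathrm{OD}}{2}\,\frac{\tilde{\delta}}{1+\tilde{\delta}^2}\,,
\]
so the absorption is maximal on resonance and falls off as 1 over the detuning squared, while the phase shift falls off only as 1 over the detuning. The exact prefactors and signs depend on conventions and may differ from those on the board.]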
So anyway, I thought I would just show that to you, because this gives a full understanding of what happens to laser beams when they pass through an atomic medium. It's nothing else than the polarizability, or the harmonic oscillator response, of an atom to electromagnetic radiation.
The only thing which is not derived here-- and that's what we will discuss later in this course-- is that in order to get the final result, which you may want to use in analyzing your data, you have to set the damping of the harmonic oscillator equal to this gamma parameter. And that's something I cannot tell you without talking about spontaneous emission.
So just to remind you, I introduced big gamma just as a parametrization of the matrix element. Then I introduced, phenomenologically, little gamma as just the damping of the harmonic oscillator. That those two quantities, for a two-level [INAUDIBLE], are identical is something I cannot show you here, because this requires a discussion of spontaneous emission and the other modes of the electromagnetic field. This is not covered by a discussion of the ac polarizability.
OK. Any questions? But still-- I know I'm repeating myself-- I always find it interesting that the whole physics of absorption, how beams are absorbed, you can pretty much pull out of the response of a harmonic oscillator. It's only the damping rate of the harmonic oscillator where you need to know that it is simply the rate of spontaneous emission. That's the only point where you have to go beyond the ac Stark shift and bring in the quantum nature of the electromagnetic field and all the empty modes of the vacuum.
Any questions about waves, or in general about interactions of atoms with electric and magnetic fields? Well, then let's move on. We have talked about electric and magnetic fields separately, for very good reasons, because low-frequency fields are either electric or magnetic in nature. But ultimately, we want to understand what happens when atoms interact with light.
And this is our discussion for today. The outline of this chapter is as follows. Today, all I want to discuss is the coupling matrix element in an atom. In other words, when we have a Hamiltonian with states a and b, we have an off-diagonal matrix element.
And this off-diagonal matrix element causes transitions. Or, when we calculated the effect of electric fields-- the dc Stark effect or the ac Stark effect-- we assumed there was a quantity: state 1, state 2, and we put some operator in between.
So this was a matrix element which connects the two states. And so far, we have always assumed, using the classical multipole expansion of the electromagnetic field energy, that this matrix element has the form electric field times dipole matrix element. And therefore, the relevant matrix element was the position operator of the electron connecting the two states.
So in a way, we have used it all the time. But in any more advanced discussion of atomic physics, you have two states. And as long as they are coupled, you have something which you call H12, or you call it the matrix element, and you're not even asking where it comes from.
When two levels are coupled, they undergo Rabi oscillations. You have transitions. If you're in the excited state and you have a matrix element, you get spontaneous emission.
So a lot of the things we want to discuss in atomic physics-- all they require is a number, which is the matrix element between state 1 and state 2. And today, I want to talk to you about different mechanisms which can lead to a matrix element. We talked about the dipole operator, but we'll also talk about higher-order possibilities: magnetic dipole coupling, or electric quadrupole coupling.
And a little bit later-- this is an outlook-- we will talk about two-photon coupling. But in the end, even if it's a two-photon coupling, for a lot of phenomena, all you need is a coupling. A coupling means a Rabi frequency. A Rabi frequency means Rabi oscillations, and so on.
So today the focus is: what is the structure, what are the principles, behind this number, the relevant matrix element? OK. So that's today. Next week is spring break, but when we resume, we then want to talk about what the matrix element is doing.
And there are two cases which we have to consider. One we can call narrow-band, and the other broadband. This matrix element can couple just to one mode of the electromagnetic field. Everything is coherent. We have coherent Rabi oscillation.
Or you can couple to many modes, and then you have a broadband situation, which may be described by [INAUDIBLE] and [INAUDIBLE]. Very, very different behavior. So that's what we then want to discuss. When atoms interact with light, we have two very different limiting cases, narrow band and broadband. And eventually, we want to go beyond the semi-classical formulation of the electromagnetic field and talk about the quantized electromagnetic field.
OK. So let's talk about the coupling between atoms and electromagnetic fields. I want to give you a very brief derivation of the canonical coupling between electromagnetic fields and atoms. I know this is covered in many textbooks, and we have a very deep discussion of it using the full QED formalism in 8.422, but I feel this course, 8.421, would not be complete without a discussion like that.
And secondly-- this is so important that if you hear it twice, from two different angles, that's probably useful-- I just want to remind you that if you use classical electromagnetism and the Lagrangian formalism, you find the very elegant result that the coupling to the electromagnetic field can be introduced by modifying the momentum. The canonical momentum which appears in the Lagrangian formulation is no longer the mechanical momentum, which gives rise to the kinetic energy; it is modified by the vector potential.
So therefore the Hamiltonian-- at this point it's the classical Hamiltonian, but then we use the same expression in quantum mechanics-- the Hamiltonian has kinetic energy and potential energy. But the kinetic energy is no longer p squared over 2m, because by p, I mean now the canonical momentum.
We have to correct for the vector potential to go from the canonical momentum to the mechanical momentum, and if we square that, it gives the kinetic energy. If you worry about the sign here: I will assume that the electron has charge q equals minus e, so that's why we have a plus sign here and not a minus sign.
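[With the charge of the electron taken as q = minus e, the classical Hamiltonian being referred to is presumably, in SI units,
\[
H \;=\; \frac{\left(\mathbf{p} + e\mathbf{A}\right)^2}{2m} \;+\; V(\mathbf{r})\,,
\]
where p is the canonical momentum, p + eA is the mechanical momentum, and V includes the Coulomb potential; in Gaussian units the vector potential enters as eA/c.]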
OK. So we will use this fact, that we have to substitute the momentum operator by the canonical momentum, which is no longer the mechanical momentum. We use it also in our quantum mechanical Hamiltonian in the Schrodinger equation.
If you ask me, can you prove it? No. Nobody can prove it. We can never prove what a quantum mechanical equation is. We can just use physical understanding, physical analogies. And that's how quantum mechanics was developed in [INAUDIBLE]. Based on the classical-quantum correspondence, this seems to be a very reasonable assumption.
And ultimately, this very reasonable assumption has now withstood the test of time for many, many decades, almost 100 years. So that's why we assume that doing the same substitution in the Schrodinger equation for the momentum operator gives the right result, but there is no way you can rigorously derive fundamental equations in physics. You can observe nature, postulate them, and verify them.
So therefore, this line, which was the classical Hamiltonian, by interpreting p as an operator, is now our quantum mechanical Hamiltonian. And the one thing we have to consider now is that once we have a quantum mechanical operator, we have to be careful about operator ordering. Certain quantities do not commute, so it matters which one we put first.
What is convenient now is to use the Coulomb gauge. In the Coulomb gauge, the divergence of A is zero. And the del operator, the derivative operator, is the operator of the canonical momentum. And therefore, if you use the Coulomb gauge, then we have commutativity between the canonical momentum p and the vector potential. And that's sort of nice, because if you take this expression and square it out, we get p dot A plus A dot p, and they are the same in the Coulomb gauge.
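[The statement about commutativity is, explicitly,
\[
\mathbf{p}\cdot\mathbf{A} - \mathbf{A}\cdot\mathbf{p} \;=\; -\,i\hbar\,(\nabla\cdot\mathbf{A}) \;=\; 0 \quad \text{in the Coulomb gauge,}
\]
so p dot A and A dot p can be used interchangeably when the square is multiplied out.]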
So therefore our Hamiltonian has now the following terms. We call the first one H naught, and we assume that the Coulomb potential of the nucleus, which eventually gives the electronic structure of the atom, is included here. So this is the Hamiltonian part which describes the structure of the atom.
And then we have a term which couples to the electromagnetic field. This is p dot A, or A dot p-- it doesn't matter in the Coulomb gauge. We call this the interaction Hamiltonian. And then we get a second-order term, which is the A squared term. Well, since it's second order, we designate it with the symbol H2.
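[The decomposition being written down is presumably, in the same SI convention as above,
\[
H \;=\; \underbrace{\frac{p^2}{2m} + V(\mathbf{r})}_{H_0}
\;+\; \underbrace{\frac{e}{m}\,\mathbf{A}\cdot\mathbf{p}}_{H_{\mathrm{int}}}
\;+\; \underbrace{\frac{e^2 A^2}{2m}}_{H_2}\,,
\]
with the interaction term reading (e/mc) A dot p in Gaussian units.]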
In the following, we neglect the second-order term. The argument is that for weak fields it doesn't matter; it's not so important because it's higher order.
However, in 8.422, we look at it more carefully, and we can actually do a canonical transformation which eliminates the A squared term. So dropping it is not even necessary. You can actually show with a canonical transformation that what I'm doing is more exact than just saying it's small and neglecting it.
But anyway, I don't want to spend too much time on it. All I wanted was to remind you of the steps to obtain-- just give me one second-- to obtain the interaction Hamiltonian, and that's what I want to use for the remainder of this class.
We have the coupling of the momentum to the vector potential. We assume that the vector potential-- we are not quantizing the electromagnetic field; it's a semi-classical field, and therefore a classical vector. And we will investigate what the coupling is, through this Hamiltonian, between an atom and a plane wave of the electromagnetic field.
So therefore, we introduce for the vector potential the expression that the vector potential has an amplitude A naught, a polarization unit vector, and then the plane-wave factor e to the ikr minus i omega t. So this can be derived in a much more rigorous way, but this is now the starting point of our discussion. [INAUDIBLE]. OK. Any questions about that? Yes, Matt.
AUDIENCE: Isn't it a little bit weird that you're going to take the divergence of this particular A, and it's not 0? Since the divergence of e to the ik dot r just gives k dot A?
PROFESSOR: Sorry. You said the divergence of A-- so you're asking what is the divergence of A, if A is a plane wave e to the ikr?
AUDIENCE: [INAUDIBLE] the propagation of [INAUDIBLE]?
PROFESSOR: Yeah. What happens is, the polarization is in A, and if you take the divergence of the plane wave, you pull out the k vector, so you get the scalar product of the polarization with k. And for propagating electromagnetic waves, the polarization is perpendicular to the propagation. So ultimately what you find is, with the Coulomb gauge, the radiation field is transverse. There is only polarization perpendicular to the propagation.
OK. So we want to talk about the matrix element-- the matrix element between two states. Due to the interaction Hamiltonian, for one plane wave, it has the e to the minus i omega t dependence, which is trivial; we will assume that there is a monochromatic wave.
And what we want to discuss now is this time-independent factor, H ba. Let me get the prefactors, let me get the polarization. So the relevant operator which connects the two states a and b is the momentum operator.
And now, for the plane wave, we want to do an expansion in orders of kr. The leading term will be the dipole approximation, and the next-order term will give rise to magnetic dipole and electric quadrupole transitions. So first, the fact that I can do this expansion requires that k dot r is much smaller than 1.
So it's a long-wavelength approximation, which you can say is valid as long as the atom is smaller than the wavelength we are talking about. And I will show you later, when I discuss the next higher-order term, that for atoms in the ground state, this is actually also an expansion in alpha. So again we encounter the fine-structure constant.
So when we do this expansion of this plane wave exponential, every term is smaller than the previous term by the fine-structure constant alpha, which is 1 over 137. OK. So the leading term is the 1, and this gives rise to the dipole approximation.
So what I want to show you now in one or two minutes is that this is indeed the dipole approximation. It leads to a matrix element which is electric field times r, the dipole moment. What we have right now is A dot p, the vector potential, times the momentum. So in other words, I want to show you that the A dot p matrix element is equivalent to an e dot r matrix element.
This can be done with a canonical transformation, which we'll do in 8.422, but here let me just show you the elementary discussion. We want to replace the vector potential by the electric field. So therefore, we use the fact that the electric field is the time derivative of the vector potential.
And taking the derivative of the vector potential means, since we have an e to the minus i omega t dependence, that we simply multiply with a factor of omega, the frequency. And what we obtain here is then the amplitude of the electric field, E naught. So with that, we have a matrix element which now involves the electric field, but still the matrix element of the momentum operator.
But we can easily go from the momentum operator to the position operator by using the fact that the momentum operator is nothing else than the commutator of r with the Hamiltonian H naught. The kinetic energy in H naught is p squared, and the commutator of r with p squared is simply proportional to p. So this is what I'm using here. And these are the prefactors.
So therefore, if you take the momentum operator between two states, we have this relation between the momentum operator and the position operator. But we have not simply the momentum operator; we have the commutator. For the term r H naught: when H naught acts on the state a, it gives the energy of a.
For the other part of the commutator, which is H naught r, we can have H naught act on the left-hand side, on b, and this gives us the energy of b. After dividing by h bar, what we get is the energy difference, E b minus E a, or, in frequency units, omega ba.
So this is how we have implemented the commutator. By inserting this into this equation, we now finally obtain our result for the matrix element in the dipole approximation. We have E naught, the electric field amplitude, the polarization, the matrix element b r a, omega ba, and here we have a 1 over omega in the denominator, which came from the derivative of the vector potential, dA/dt.
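[Putting the steps together, the result is presumably, up to overall phase and sign conventions,
\[
\langle b|\mathbf{p}|a\rangle \;=\; \frac{i m}{\hbar}\,\langle b|\,[H_0,\mathbf{r}]\,|a\rangle \;=\; i\, m\,\omega_{ba}\,\langle b|\mathbf{r}|a\rangle\,,
\]
so that
\[
H_{ba} \;=\; \frac{e}{m}\,A_0\,\langle b|\,\hat{\epsilon}\cdot\mathbf{p}\,|a\rangle
\;=\; i\,e\,A_0\,\omega_{ba}\,\langle b|\,\hat{\epsilon}\cdot\mathbf{r}\,|a\rangle\,,
\]
and with \(E_0 = \omega A_0\) this has the magnitude \(e E_0\,(\omega_{ba}/\omega)\) times the dipole matrix element.]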
This here is the matrix element of the dipole operator. So in other words, I have derived for you that the interaction matrix element in the dipole approximation is the dipole operator times the electric field, as long as I make the approximation that this frequency factor is on the order of 1.
Now this is clearly the case near resonance, so you should be safe. However, if you do the more rigorous derivation of the dipole Hamiltonian using a canonical transformation, this factor of omega ba over omega does not appear.
So again, the dipole approximation is even better than I presented today. The two approximations-- dropping the A squared term and approximating this ratio of frequencies with 1-- are not necessary if you use the other method, using a canonical transformation. Any questions about that?
Let me make one comment. I've shown you in this derivation that the A dot p term, the A dot p interaction, within the dipole approximation, is identical to the E dot r interaction. So these are the same operators, and if you would exactly calculate them in an atomic structure calculation, you would get the same coupling, the same matrix element.
However, in practice, there are important differences. Because the E dot r operator, the r operator, has a lot of weight where the electron is far away from the nucleus because it's multiplied with r. Whereas the p operator emphasizes the derivative of the wave function, and this is usually strong at close distances.
So in the end, the results have to be the same. But if you make an approximation to your wave function, one formulation may be numerically much more accurate than the other one. But fundamentally, the two terms are the same, and they are related in the way I derived for you. Yes?
AUDIENCE: So when we say it's the dipole approximation, does it mean that it's the long-wavelength approximation, or that the field is small, or everything together? So what's the absolute fundamental assumption of this?
PROFESSOR: The absolute fundamental assumption of the dipole approximation is that we assume-- let me just scroll up.
In the equation above, we had an expression for the vector potential. And the dipole approximation means that we can neglect the position dependence of the vector potential, A of r, so we approximate A of r as being A evaluated at the origin of the atom. Over the extent, over the size, of the atom, we do not have to consider a spatial dependence of the vector potential. So this is sufficient.
The other things-- that A squared is small and that we're near resonance-- I had to make those assumptions because I wanted to use this elementary derivation. But with a canonical transformation, as is discussed in Atom-Photon Interactions, you do not need those additional assumptions. The only assumption behind the dipole approximation is that one, and it means that the extent of the atom is much smaller than the optical wavelength.
OK. But there are situations where the leading term vanishes. For instance, if two levels have the same parity, then the dipole matrix element between the two of them is 0. And then, if you want to consider a transition between those two levels, it will come from the next-order terms.
So let's now discuss higher-order radiation processes. The motivation why I want to discuss higher-order radiation processes is that in some basic courses, you only need the dipole approximation, and you may think the dipole coupling is the only coupling which exists in the world.
And by going to higher order, I want to show you that this is not the case. Also, I want to sort of give you an idea of what it means if the leading-order transition is forbidden. I want to show you how other terms come in which can couple two levels.
And also, actually, when you drive transitions within the hyperfine structure using radio frequency fields, you are not driving them in the electric dipole approximation. You're driving them with a magnetic dipole coupling, and this is actually the next order. So there are a number of reasons why I want to show you what the next-order terms are, and how they actually lead to a beautiful result: magnetic dipole and electric quadrupole transitions.
So, our coupling term-- let me just rewrite the equation. Let me now simplify the notation by assuming that the polarization is in the z direction and the propagation is in the x direction. So then the coupling has the dipole term, and the next-order term is ikx, and this is what we want to investigate now.
So the term kx, or kr, is small, and now I want to show you explicitly that it's smaller by alpha. The k vector of the photon is related to the frequency: it is h bar omega divided by h bar c.
For r, the relevant r when we integrate over the wave function will be the Bohr radius. The relevant frequency, h bar omega-- well, it's a Rydberg, and the Rydberg is e squared over the Bohr radius. So now, if I insert this, the Bohr radius cancels out.
And what we find is, well, kr is dimensionless. We have to find something which is dimensionless but expressed by the fundamental constants of atomic physics. And the only quantity available for that-- it's not a surprise-- is alpha, the fine-structure constant.
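[Explicitly, in Gaussian units,
\[
k\,r \;\sim\; \frac{\omega}{c}\,a_0 \;\sim\; \frac{e^2/(\hbar\,a_0)}{c}\,a_0 \;=\; \frac{e^2}{\hbar c} \;=\; \alpha \;\approx\; \frac{1}{137}\,,
\]
using h bar omega of order e squared over the Bohr radius (a Rydberg, up to a factor of 2) and r of order the Bohr radius a_0.]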
So therefore, the dipole approximation is actually the result of an expansion of the plane-wave factor e to the ikr in powers of the fine-structure constant. OK. We want to look now at the second term.
And, well, sometimes when we deal with a term, we first make it more complicated and then we simplify it. I want to symmetrize and anti-symmetrize it in the following way. This is p z x. So let me subtract z p x. But then, of course, I have to add it back. So now we have two terms: one has a minus sign, one has a plus sign.
And as we will see in just a minute, the first one is the magnetic dipole transition; the second one is the electric quadrupole transition. And we see that the first one can be regarded as follows: p z x minus z p x is like a component of p cross r. It's the y component of the vector product, and is therefore, up to a sign, the y component of the orbital angular momentum.
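[The identity being used is presumably (x and p_z commute, so the ordering does not matter):
\[
x\,p_z \;=\; \tfrac{1}{2}\,(x\,p_z - z\,p_x) \;+\; \tfrac{1}{2}\,(x\,p_z + z\,p_x)\,, \qquad x\,p_z - z\,p_x = -L_y\,,
\]
with \(L_y = z\,p_x - x\,p_z\); the antisymmetric piece gives the magnetic dipole (M1) term and the symmetric piece gives the electric quadrupole (E2) term.]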
OK. So let's focus for now on this part. The second term, which is the electric quadrupole term, we'll do in a few moments. The relevant matrix element is now the matrix element of the orbital angular momentum Ly.
The prefactor-- let me just collect all the constants: the imaginary unit, e, h bar, A naught, k, over 2mc. That looks complicated, but it immediately simplifies when we realize that this here is the Bohr magneton. And, well, we still have the vector potential, but the magnetic field is the curl of the vector potential. We have assumed that we have a vector potential propagating in x, polarized in z; that means that our magnetic field is along y.
So therefore, k A naught, which appears in our expression for the coupling, is just the magnetic field amplitude. So therefore, we find the result that the next-order term in the coupling of the atom to the electromagnetic field has a simple form: it is the magnetic field part of the electromagnetic wave, times the Bohr magneton, times the matrix element of the orbital angular momentum operator.
And actually, if you take the orbital angular momentum and multiply it with the Bohr magneton, this is actually the operator of the magnetic moment-- well, with a minus sign, because the electron is negatively charged. Remember, we had introduced the operator for the magnetic moment, and the magnetic moment was the g factor times the Bohr magneton times the orbital angular momentum. And the g factor for the orbital motion is 1.
So therefore, the interaction we are talking about is the Bohr magneton times the magnetic field. And, of course, what we realize-- maybe I should back up for a second and say what we actually realize-- is that the operator which couples to the electromagnetic field has actually the form mu dot B.
This is exactly what we used for the Zeeman effect in a DC magnetic field. But now the same form, mu dot B, appears for a time-dependent magnetic field. And time-dependent magnetic fields do not only create level shifts; they can also drive transitions through this matrix element.
So in other words, this whole exercise shows you that the form mu dot B, which appeared naturally in the formulation for DC magnetic fields, also applies to AC magnetic fields.
But with that, I can say: wait a moment. There are now two sources for the magnetic moment of the atom. One is due to orbital angular momentum, and the other one is due to spin angular momentum. But the spin angular momentum has a g factor which is different.
In the approximation of the Dirac equation, it is 2. So therefore-- I mean, we will never get spin out of a semi-classical discussion. Remember, we started with a classical canonical treatment of the electromagnetic field, the canonical momentum. And now we are running with it, and we find that there is a coupling between the magnetic field and the magnetic moment of the atom.
But, of course, we only get the magnetic moment to the extent that it comes from orbital motion. So in a semi-classical way, I'm waving my hands now and saying, well, what is valid for the coupling of the magnetic field to the magnetic moment of the orbital angular momentum also applies to the spin. And I'm simply adding the spin here.
And with that, I've derived for you the expression for the interaction matrix element called M1. M1 stands for magnetic dipole transitions. Let me just write it down here. So this is nothing else than mu dot B. Any questions?
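[Collecting the pieces, the M1 matrix element presumably takes the form, up to signs and phases that depend on conventions,
\[
H^{M1}_{ba} \;=\; \frac{\mu_B}{\hbar}\, B\,\langle b|\, L_y + g_s S_y \,|a\rangle\,, \qquad g_s \approx 2\,,
\]
i.e. minus mu dot B with the magnetic moment operator \(\boldsymbol{\mu} = -(\mu_B/\hbar)\,(\mathbf{L} + g_s\mathbf{S})\); here B is the magnetic field amplitude of the wave, which points along y for propagation along x and polarization along z.]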
Let me just summarize. You may find it sort of interesting: when we discussed static electric and static magnetic fields, for the static electric field, we had an electric field times the dipole. And we find this now as the term which can drive transitions for interactions with a time-dependent electromagnetic field, and we find it in the dipole approximation.
The magnetic part, mu dot B, we find when we go to the next order. We find it as a magnetic dipole term. But there are more terms, and I just want to illustrate that, and then I'll stop with the multipole expansion.
We had the second term, the kr term-- it was this one. But then we sort of anti-symmetrized and symmetrized it. The first part we could relate to orbital angular momentum and to the magnetic moment. And now I want to discuss the second part.
So that term involves a mixture of position and momentum operators. But we know already how we can get rid of momentum operators, namely by expressing momentum operators as commutators with the Hamiltonian. So this gives us the commutator of z with H naught, times x, plus z times the commutator of x with H naught.
OK. So we have two commutators. Each of them has two terms, so that means a total of four terms. And if you write them down, you see that two are equal but opposite and cancel. So therefore we are left with two terms, which are minus H naught zx, plus zx times H naught.
So this term has now the following contribution to the coupling between the levels a and b. So we have this coupling matrix element, with its prefactors.
OK. So, using the same approach we used for the dipole matrix element, the H naught can act on the state a on the right-hand side, and can act on the state b on the left-hand side. So this gives us simply the energy difference.
And what is left is now a matrix element which only involves position operators, but it involves a product of two of them. And now, again, we express the vector potential by the electric field, as we had done before. And therefore, we have now expressed our coupling by an electric field. And we assume, in the near-resonant approximation, as we had used for the dipole coupling, that omega ba is approximately the frequency omega.
OK. So what we are realizing is that we have one part of the interaction Hamiltonian which couples levels a and b-- we call it E2, electric quadrupole, because it involves elements of the quadrupole tensor, or products of coordinates: xy, yz, xz. And for the geometry which we assumed with our plane wave, it is zx. It couples to the electric field. And this is the prefactor.
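[Following the same steps as for the dipole term, the E2 coupling presumably takes the form, up to sign and phase conventions,
\[
H^{E2}_{ba} \;\sim\; \frac{i\,e\,k\,E_0}{2}\,\frac{\omega_{ba}}{\omega}\,\langle b|\,z\,x\,|a\rangle\,,
\]
i.e. the electric field times k times a matrix element of the product of coordinates zx, one component of the quadrupole tensor. Relative to the M1 term above, it carries an extra factor of i, which is the point made below about the absence of interference.]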
For a more general geometry of plane waves, with different polarizations going in different directions, we would have obtained different products of coordinates. So let me indicate that what we have picked out here is one specific component of a tensor-- the tensor formed by using the position vector r twice.
So in this derivation, we found when we go beyond the dipole approximation, when we take the kr term in leading order, that we have two contributions. One is M1, magnetic dipole. The other one was electric quadrupole.
We realized-- I tried to keep track of all the prefactors-- that the electric quadrupole matrix element is imaginary, whereas the magnetic dipole one was mu dot B; there was no imaginary unit, it's real.
That means that you will never have any interference effect between magnetic dipole and electric quadrupole. Or in other words, when we have processes like spontaneous emission where we take the square of the matrix element, the square of the matrix element will be the sum of the squares of the matrix element for magnetic dipole and electric quadrupole transitions.
So let me summarize. We have discussed three different ways in which we can have coupling matrix elements between two states: electric dipole, magnetic dipole, electric quadrupole-- E1, M1, E2. Here the operator was the electric dipole operator. Here it was the operator of the magnetic moment, which involves orbital angular momentum and spin angular momentum.
And for the quadrupole, it was a quadratic expression in the spatial coordinates. You can also ask, what is the parity? The electric dipole operator connects states with opposite parity, whereas both magnetic dipole and electric quadrupole connect states with the same parity.
Magnetic dipole and electric quadrupole transitions are often called forbidden transitions. Well, you would say it's a misnomer, because they are transitions, so they are allowed; they are just weak. But this is the language we use. Weak transitions are called forbidden, which simply means they don't appear in leading order; when you go to higher order, they are allowed.
You can, of course, say if they were completely forbidden, there would be no need to discuss them. But since they are only forbidden at a certain level, then, of course, it's interesting to discuss them. And a lot of narrow transitions which are relevant for atomic clocks are highly forbidden transitions.
The strength of those transitions, which usually scales with the square of the matrix element, is smaller by alpha squared, which is on the order of 5 times 10 to the minus 5. So those transitions are four or five orders of magnitude weaker than an allowed E1 transition. So that's what they are. Questions?
AUDIENCE: When people talk about highly forbidden transitions, does this mean that it's like an optical transition, or how do we distinguish it from just regularly forbidden?
PROFESSOR: Actually, I would say forbidden transitions are weaker by alpha to some power n. Here we have situations where the matrix element is just smaller by alpha. But, yes, sometimes-- yes, some transitions are highly forbidden.
For instance-- I try to remember-- if you have hydrogen, 1s and 2s. Because those are s states-- actually, you are asking a question about the next chapter, namely about selection rules. Let me give it in words.
If you connect two s states, they both have 0 angular momentum.
AUDIENCE: Yeah.
PROFESSOR: So you cannot have a quadrupole operator connecting the two. It would violate the triangle rule. You cannot have angular momentum of 0 and angular momentum of 2 and get angular momentum of 0.
So therefore you have a transition between two states of the same parity: there is no dipole operator, there is no quadrupole operator. So you soon run into a situation where it's highly forbidden. Sometimes you have the situation that something is forbidden in non-relativistic physics, but there is a relativistic term which makes it allowed.
Well, relativistic terms-- fine-structure terms-- are also an expansion in alpha squared. So the symmetry may allow it, but only in connection with relativistic terms. I'm not an expert on forbidden transitions, but usually you have transitions which are multiply forbidden.
They are forbidden by spatial symmetry. For instance, in the helium atom, singlet-triplet transitions are forbidden by spin symmetry. So if you have multiple layers of being forbidden, then you get extremely weak transitions.
And examples, actually, are the singlet-triplet transition in helium, or the 1s to 2s transition in hydrogen. They are not allowed by simply going to the next order in the multipole expansion. Yes?
AUDIENCE: So in ordinary situations, do you possibly have to go to even higher order [INAUDIBLE] for the other transitions?
PROFESSOR: I'm not an expert on that. I'm not sure if there is an atom which has a relevant transition which is a [INAUDIBLE] transition. I've not heard about that-- at least not for the relevant examples, which are the fundamental atoms, helium and hydrogen.
For hydrogen, for the 2s to 1s transition, the leading order is the emission, the simultaneous emission, of two photons. So you have not just one photon, you have two photons, which, of course, requires an intermediate step in the perturbation expansion.
We discuss two-photon transitions at the end of this course. So in that case, it's not a higher order in the single-photon multipole expansion; it becomes a multi-photon transition. So this is one relevant case.
And for helium, triplet to singlet, this involves relativistic physics. Actually, I mentioned it in the other class, on helium. I tried to look it up, and I wanted to show here: you have to go to this and this order to get a transition. But when I tried to look into the literature, I couldn't find a clear answer.
Ultimately, it was a relativistic term in the fully relativistic formulation of the coupling of electromagnetic fields to the atom. I'm not sure if you can put a label on it and say, this is this and this order term. It may actually involve-- and we know that this happens in the Dirac equation-- that spin and spatial degrees of freedom become treated together in the Dirac equation, and maybe it's one of those terms. Yes?
AUDIENCE: So what's the actual meaning of adding the different Hamiltonians? Like, you wrote the square of the Hamiltonians and then--
PROFESSOR: No. I meant actually the square of the matrix element.
AUDIENCE: Oh. Sorry.
PROFESSOR: If you have, for instance-- actually, I will talk about it in the next lecture when we discuss Fermi's golden rule. The transition strength in Fermi's golden rule is proportional to the square of the matrix element. But now you could ask the question, well, could we have some interference between the two different processes?
And I wanted to point out at this level that we don't, because one matrix element is imaginary and the other one is real. And if you take the complex matrix element between two states and calculate the square, the square of the complex matrix element is the square of the real part plus the square of the imaginary part, and there's no interference term between the two.
So in other words, if you have an atom which has a weak decay through M1 and a weak decay through the quadrupole, the two parts cannot destructively interfere, because one is real and one is imaginary. They add up in quadrature. That's what I wanted to say.
OK. So these were examples of higher order transitions. And as the questions have shown, this leads us to a discussion of selection rules.
Selection rules are nothing else than a classification of possible transitions according to symmetry. And it's a way of using, well, Clebsch-Gordan coefficients, angular momentum coupling-- or, you would say, symmetry-- to figure out if matrix elements are vanishing or non-vanishing.
And I gave you already one example, and I want to formalize it now. If you go between two s states which have 0 angular momentum, you cannot have an operator which is a quadrupole operator because-- and this is what I want to tell you now-- the quadrupole operator is a spherical tensor with two units of angular momentum.
And this would violate the triangle rule. This would violate "conservation" of angular momentum. So that's what I want to discuss now in the next chapter, or at least get started on for the next five minutes, by discussing selection rules.
So the introduction to selection rules is that we have forbidden transitions. Forbidden transitions are suppressed because we are forced to go to higher order, and this is usually higher order in alpha. So forbidden transitions are weaker by some power of alpha. And that means they require higher approximations.
And, of course, the comparison is always to the dipole transition. This is the dominant transition; this is the industrial-strength transition. And from there on, it can get weaker. It can get weaker by going to higher order in the multipole expansion; we've just discussed that.
It can get weaker because you have to have a cascade of dipole transitions-- this would be multi-photon processes, as we discuss later in the course. It can get weaker because the matrix element is exactly 0 in the non-relativistic approximation and requires relativistic effects.
The example we have encountered in this course is the singlet to triplet transition in helium. Or there are transitions which would not be allowed just for the electron, but if we invoke hyperfine interactions with a nucleus, then they become allowed.
So it's a rich subject, and I'm not an expert and cannot do full justice to it. But I want to at least give you some general rules for how we discuss matrix elements. So what is always a good quantum number, what is always a label for our atomic states, is angular momentum.
Because atoms [INAUDIBLE] through space, and there is rotational invariance. So we always categorize our atoms, our atomic states, with the angular momentum quantum numbers J, M. And we are asking: are there transitions between a state J, M and a state J prime, M prime?
And all other quantum numbers, we can now summarize with a label n. And now we have an operator. I gave you examples for the operator. The magnetic moment, the electric dipole, the quadrupole operator.
But in general, every operator can be written-- can be expanded-- as a sum of spherical tensors. So what is discussed in the classification of matrix elements is that the operators in our matrix elements are components of a spherical tensor.
So T is a spherical tensor of rank l. And if you want a simple definition of what is a spherical tensor, you try to write an operator-- like the position operator r-- you try to write it as a sum of terms, and each term transforms like a spherical harmonic Ylm.
So in other words, we can write every operator as a sum of spherical tensors. And a spherical tensor is characterized that it transforms under rotations exactly as the spherical harmonics Ylm. so every operator is now a sum of spherical tensors.
I don't want to get too much into symmetry classification, but the story is that, you know, the Ylm functions are a complete set of functions. Every function can be expanded in spherical harmonics. And similarly, if you have an operator, you can decompose it into objects which transform under rotations like the Ylms.
So you have parts which transform like a given Ylm. And Ylm is the classification of wave functions with angular momentum. And so, therefore, each operator may not have a specific angular momentum, but it can be written as a sum of operators, each of which has the same symmetry, the same transformation properties, as a state of angular momentum.
And I think I should stop here, but let me just give you the final message. By separating the operator into a sum of spherical tensors, we are actually now back to angular momentum.
If you have a matrix element-- just think how you calculate it in the Schrodinger representation: you have a wave function, an operator, and a wave function, and we integrate. So what you have is the product of three objects. And now we can classify them by angular momentum.
And this is what I will show you in the next class: that ultimately, the question whether this matrix element is 0 or not will boil down to the question whether J prime, M prime can be obtained by adding the angular momentum l, m to J, M-- whether there is overlap with the angular momentum J, M. So we are back to the rules for adding angular momenta. We get the triangle rule. We get the [INAUDIBLE], quote unquote, [INAUDIBLE].
OK. Enjoy the spring break, and we meet on Monday after the spring break.