PROFESSOR: Today, we're going to continue with the adiabatic subject. And our main topic is going to be Berry's phase. It's an interesting part of the phase that arises in an adiabatic process. And we want to understand what it is and why people care about it.
And then, we'll turn to another subject in which the adiabatic approximation is of interest. And that's the subject of molecules. So I don't think I'll manage to get through all of that today, but we'll make an effort.
So let me remind you of what we had so far. We imagine we have a Hamiltonian that depends on time-- maybe it has no time dependence before time equals 0, when it turns on. And it has no further variation after some time capital T. So the Hamiltonian changes like that.
And the adiabatic theorem states that if you have a state at time equals 0 which is a particular instantaneous eigenstate-- that is, the full wave function at time equals 0 coincides with that instantaneous eigenstate-- then at any time in this process, if the process is slow, the state of the system, the full wave function, psi of t, will tend to remain in that instantaneous eigenstate.
And the way it's stated precisely is that psi of t minus-- I'll write it like this-- psi prime n of t, the norm of this state, is of order 1/T for any t in between 0 and T. So I'm trying to state the adiabatic theorem in a way that is mathematically precise. And let me remind you: the norm of a wave function is you integrate the wave function squared and take the square root. It's the usual definition-- the norm of a vector is the square root of the inner product of the vector with itself. So that's the norm for a wave function.
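As a small aside, the norm being described can be sketched numerically. This is entirely my own illustration, not from the lecture: a wave function on a uniform grid, with the norm approximated by a Riemann sum. The Gaussian below is just an example state.

```python
import numpy as np

# ||psi|| = sqrt( integral |psi(x)|^2 dx ), approximated on a uniform grid.
# The Gaussian is an arbitrary example state (an assumption of this sketch).
x, dx = np.linspace(-10.0, 10.0, 2001, retstep=True)
psi = np.exp(-x**2 / 2).astype(complex)        # unnormalized Gaussian

norm = np.sqrt(np.sum(np.abs(psi)**2) * dx)    # square root of <psi|psi>
psi = psi / norm                               # now ||psi|| = 1
print(np.sum(np.abs(psi)**2) * dx)             # -> 1.0 (up to grid error)
```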
And here, what this means is that with some suitable choice of phase, the instantaneous eigenstate is very close to the true state. And the error is of order 1/T. So if the process is slow, meaning the change occurs over a long time T, this is a small number. And there is some instantaneous eigenstate with some peculiar phase-- that's why I put the prime-- for which this difference is very small.
And we calculated this phase, and we found the state: psi of t is roughly equal to e to the i theta n of t, e to the i gamma n of t, psi n of t. And in this statement, this product is what I would call the psi n prime of t. And that's why the real state is just approximately equal to that one.
And we have these phases, in which theta n of t is minus 1 over h bar, integral from 0 to t of E n of t prime, dt prime. That is a familiar kind of phase. If you had a normal energy eigenstate, a time-independent one, this would be minus E n times t over h bar, and with the i in the exponent, that would be the familiar phase that you put on an energy eigenstate.
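The dynamical phase just described can be checked with a short sketch. All names and numbers here are invented for illustration; the energy is taken constant so the integral collapses to the familiar answer.

```python
import numpy as np

# Hedged sketch of the dynamical phase
#   theta_n(t) = -(1/hbar) * integral_0^t E_n(t') dt'
# via a cumulative trapezoid rule. For constant E_n this reduces to
# the familiar -E_n * t / hbar.
hbar = 1.0
t = np.linspace(0.0, 10.0, 1001)
E_n = np.full_like(t, 2.0)                     # constant instantaneous energy

dt = np.diff(t)
theta = -np.concatenate(([0.0], np.cumsum((E_n[1:] + E_n[:-1]) / 2 * dt))) / hbar
print(theta[-1])                               # -> -20.0, i.e. -E_n * T / hbar
```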
Then comes gamma n of t, which is an integral from 0 to t of some nu n of t prime, dt prime. And this nu n of t is i times the inner product of psi n of t with psi n dot of t. I think I have it right.
So the second part of the phase is the integral of this nu function. And this nu function is real, because this inner product, we showed before, is purely imaginary, so with the i, this is real. And this second part, this gamma n, is called the geometric phase. This is the phase that has to do with Berry's phase. And it's a phase that we want to understand.
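A minimal numerical sketch of this geometric phase, with all names my own and not from the lecture: a spin-1/2 whose field direction is carried once around a cone at polar angle theta0. The gauge-invariant discretization gamma = minus the imaginary part of the log of the product of neighboring overlaps is a standard numerical stand-in for the integral of nu n dt over a closed path.

```python
import numpy as np

def spin_up(theta, phi):
    """Instantaneous 'up' eigenstate of (B-hat . sigma) for direction (theta, phi)."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

theta0 = np.pi / 3
phis = np.linspace(0.0, 2 * np.pi, 400)        # phi winds once: a closed loop
states = [spin_up(theta0, p) for p in phis]

# discrete geometric phase: gamma = -Im ln prod_k <psi_k | psi_(k+1)>
overlaps = [np.vdot(states[k], states[k + 1]) for k in range(len(states) - 1)]
gamma = -np.angle(np.prod(overlaps))

# Known analytic answer for this loop: -pi * (1 - cos theta0) = -pi/2
print(gamma)
```

Note that the overall phase convention chosen in `spin_up` drops out of the product of overlaps, which is why this discretization is a safe way to compute gamma.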
And it's called geometrical for one reason that we're going to show, a reason that makes it quite surprising and quite different from the phase theta. The phase theta is a little like a clock, because it runs with time. The more time you wait in an energy eigenstate, the more this phase changes.
What will happen with this geometric phase is that, properly viewed, it is somehow independent of the time it takes the adiabatic process to occur. So whether it takes a short time or a long time to produce this change of the system, the geometric phase will be essentially the same. That's very, very unusual.
So that's the main thing we want to understand about this geometric phase: that it depends only on the evolution of the state in that configuration space-- we'll make clear what that means-- and not on the time it takes this evolution to occur. It's a little more subtle, this phase, than the other phase.
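That time-independence can be seen numerically. In this hedged sketch (my own construction, not the lecture's), the same closed loop of field directions is traversed over a total time T = 1 and T = 100; because nu n dt = i times the inner product of psi n with d psi n / dt, times dt, only the displacement along the path matters, and the accumulated phase comes out the same.

```python
import numpy as np

def spin_up(theta, phi):
    # instantaneous 'up' eigenstate for field direction (theta, phi) -- assumed model
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def gamma_for_total_time(T, theta0=np.pi / 3, steps=2000):
    """Integrate nu_n(t) dt along one winding of phi, completed in total time T."""
    t = np.linspace(0.0, T, steps)
    phi = 2 * np.pi * t / T                    # phi winds once, however long T is
    g = 0.0
    for k in range(steps - 1):
        psi, psi_next = spin_up(theta0, phi[k]), spin_up(theta0, phi[k + 1])
        dpsi_dt = (psi_next - psi) / (t[k + 1] - t[k])
        nu = (1j * np.vdot(psi, dpsi_dt)).real  # nu_n = i <psi_n | psi_n dot>
        g += nu * (t[k + 1] - t[k])
    return g

# fast or slow traversal, same geometric phase (about -pi/2 for theta0 = pi/3)
print(gamma_for_total_time(1.0), gamma_for_total_time(100.0))
```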
So I want to introduce this idea of a configuration space. Basically-- let me forget about time dependence for one second and think of the Hamiltonian as a function of a set of coordinates, or parameters. So the Rs are some coordinates: R1, R2, maybe up to R capital N, some coordinates inside some vector space R^N.
So it has N components. And what does that mean? It means maybe that your Hamiltonian has capital N parameters. And those are these things.
So you buy this Hamiltonian. It comes with some parameters. You buy another one. It comes with another set of parameters. Those parameters can be changed. Or you construct them in the lab, your Hamiltonians with different parameters. Those are the parameters of the Hamiltonian.
And suppose you have learned to solve this Hamiltonian for all values of the parameters. That is, whatever the Rs are, you know how to find the energy eigenstates. So H of R acting on psi n of R equals E n of R times psi n of R. And n maybe is 1, 2, 3. And these are orthonormal states, those energy eigenstates.
So this equation says that you have been able to solve this Hamiltonian, whatever the values of the parameters are. And you have found all the states of the system, n equals 1, 2, 3, 4, 5, 6. All of them are known now.
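One way to read "you have learned to solve this Hamiltonian for all values of the parameters" is as a function that, handed any parameter vector R, returns the energies E n of R and eigenstates psi n of R. This sketch uses invented names and a spin-1/2 model, H of R equal to R dot sigma, purely as an assumed example.

```python
import numpy as np

# Pauli matrices; R = (Bx, By, Bz) parametrizes H(R) = R . sigma
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def solve(R):
    """Return E_n(R) and psi_n(R) for any parameter vector R."""
    H = sum(r * s for r, s in zip(R, sigma))
    E, psi = np.linalg.eigh(H)                 # columns of psi are psi_n(R)
    return E, psi

E, psi = solve([0.0, 0.0, 1.0])
print(E)                                       # -> [-1.  1.], i.e. -|R| and +|R|
# the eigenstates are orthonormal, as required of energy eigenstates:
print(np.allclose(psi.conj().T @ psi, np.eye(2)))  # -> True
```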
So this is a general situation. And now, we imagine that for some reason, these parameters begin to depend on time. So they become time dependent parameters. So that you now have the vector R of t. That is R1 of t up to R capital N of t.
So how do we represent this? Well, this is a Cartesian space of parameters. This is not our normal space. This is a space where one axis could be the magnetic field. Another axis could be the electric field. Another axis could be the spring constant. Those are abstract axes of the configuration space. Or the axes could be R1, R2, R3. And those are your axes.
And now, how do you represent in this configuration space the evolution of the system? What is the evolution of the system in this configuration space? How does it look? Is it a point? A line? A surface? What is it? Sorry?
STUDENT: A path.
PROFESSOR: It's a path. It's a line. Indeed, you look at your clock. And at time equals 0, well, the parameters take some values. And you find, OK, here it is at time equals 0. At time equals 1, the values change. There's one parameter, which is time. So this traces a path. As time goes by, the coordinates are changing in time, and this is a line parameterized by time-- so a path gamma parameterized by time.
And that represents the evolution of your system. At time equals 0, this point could be R at t equals 0. And maybe this point is R at t equals capital T, the final time. And the system is going like that.
You should imagine the system as traveling in that configuration space. That's what it does. That's why we introduce the configuration space.
And we now have-- not just a Hamiltonian-- a time dependent Hamiltonian, because while H was a function of R from the beginning, now R is a function of time. So this is your new Hamiltonian. And this is a time dependent Hamiltonian.
But now, the interesting thing is that the work you did before, finding the energy eigenstates for any position in this configuration space, is giving you the instantaneous energy eigenstates, because if this equation here holds for any value of R, it certainly holds for the values of R corresponding to some particular time.
So H of R of t acting on psi n of R of t is equal to E n of R of t times psi n of R of t. So it's an interesting interplay, in which the fact that you know your energy eigenstates everywhere in your configuration space allows you to find the time dependent energy eigenstates-- the instantaneous energy eigenstates are found this way.
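The interplay just described can be sketched directly. In this hedged example (invented names, a two-parameter spin model assumed for illustration), a solver that works for every R is simply evaluated at R of t, and the instantaneous eigenvalue equation holds at each sampled time.

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def solve(R):
    """Eigenstates for any point R = (R1, R2) of the configuration space."""
    E, psi = np.linalg.eigh(R[0] * sigma_x + R[1] * sigma_z)
    return E, psi

def R_of_t(t):
    return (np.cos(t), np.sin(t))              # a path in the (R1, R2) plane

for t in (0.0, 0.5, 1.0):
    E, psi = solve(R_of_t(t))                  # instantaneous eigenstates, for free
    H = R_of_t(t)[0] * sigma_x + R_of_t(t)[1] * sigma_z
    # H(R(t)) psi_n = E_n(R(t)) psi_n holds at every instant along the path
    assert np.allclose(H @ psi, psi @ np.diag(E))
print("instantaneous eigenvalue equation verified along the path")
```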
So what we want to do now is evaluate in this language the geometric phase, this phase. I want to understand what this phase is in this geometric language.