Lecture 1: Wave Mechanics



Description: In this lecture, the professor talked about "The Schrodinger Equation", "Stationary Solutions", etc.

Instructor: Barton Zwiebach

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: All right. So, we'll get started. And as I mentioned, to some degree this is going to be review, and a chance to set our notation and conventions so they're clear.

So, our first topic is the Schrodinger equation. So this Schrodinger equation is an equation that takes the following form. I h bar partial derivative of this object called the wave function that depends on x and t is equal to minus h squared over 2m second derivative with respect to x plus v of x and t Psi of x and t. And that's the full equation. That's the Schrodinger equation.
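
For reference, the equation just described, written out:

```latex
i\hbar\,\frac{\partial \Psi(x,t)}{\partial t}
  = \left[-\frac{\hbar^2}{2m}\,\frac{\partial^2}{\partial x^2} + V(x,t)\right]\Psi(x,t)
```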

Now actually, this is not the Schrodinger equation in its most general form, but it's the Schrodinger equation for the case that you have a potential that depends on x and t, and for the case that we are doing non-relativistic physics-- because this thing, you may remember, is p squared over 2m, the kinetic energy operator. So p squared over 2m is non-relativistic. That's a non-relativistic kinetic energy. So this is non-relativistic. Moreover, we have just one x here. That means it's a particle in one dimension.

So we've done a few things, but this is generally enough to illustrate our ideas. And the most important thing that should be said at this point is that Psi of x and t-- which is the wave function-- belongs to the complex numbers.

It's a complex number. And that's by necessity. If Psi would be real, this quantity-- the right hand side-- would be real. The potential is a real number. On the left hand side, on the other hand, if Psi is real, its derivative would be real, and this would be imaginary. So, it's just impossible to get the solution of this equation if Psi is real. So, Psi complex is really the fundamental thing that can be said about this wave function.

Now, you've used complex numbers in physics all the time, and even in electromagnetism, you use complex numbers. But you use them really in an auxiliary way only. You didn't use them in an absolutely necessary way.

So, for example, in E&M, you had an electric field-- say, for a circularly polarized wave. And you would write it as this. Let me put the z here. E zero, x hat plus i y hat-- those are unit vectors, and i is the imaginary unit, the square root of minus 1. Times e to the i kz minus omega t. You typically wrote things like that, but, in fact, you always meant the real part.

An electric field is a real quantity. And Maxwell's equations are real equations. This is a circularly polarized wave. And by the time you take the real part of this, all these complex numbers play absolutely no role. It's just a neat way of writing a complicated electric field in which the x component and the y component are out of phase, and you have a wave propagating in the z direction at the same time. So here, E is real, and all the i's are auxiliary. This is completely different from the case of the Schrodinger equation. The i there is fundamental. Psi is the dynamical variable, and it has to be complex.
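
For reference, the circularly polarized field described above, with the physical field understood to be the real part (the overall phase and handedness conventions may differ from the ones used on the board):

```latex
\vec{E}(z,t) = E_0\,(\hat{x} + i\,\hat{y})\,e^{i(kz - \omega t)},
\qquad
\mathrm{Re}\,\vec{E} = E_0\left[\hat{x}\cos(kz-\omega t) - \hat{y}\sin(kz-\omega t)\right].
```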

So, we make a few remarks about the Schrodinger equation to get started. First remark is that this is a first order differential equation in time. This has implications. There are two derivatives in space here-- for some funny Hamiltonians, you can have even more than two derivatives or more complicated things-- but definitely there's just one derivative in time.

So, what this means is that if you know the wave function all over space, you can calculate what it's going to be a little time later. Because if you know it all over space, you can calculate this right hand side and know what the time derivative is. And with the time derivative, you can figure out what it's going to be later. A first order differential equation in time is something for which, if you know the quantity at one time, the differential equation tells you what it's going to be later. So, that's really sufficient. Psi of x-- of all x's-- at some time t naught determines Psi at all times.
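
Schematically, the statement that one time derivative suffices looks like this-- knowing Psi everywhere at t naught, the equation hands you the time derivative, and hence the wave function an instant later (this is only the formal first-order step, not a practical integration scheme):

```latex
\Psi(x, t_0+\delta t) \;\approx\; \Psi(x,t_0) + \delta t\,\frac{\partial\Psi}{\partial t}(x,t_0)
= \Psi(x,t_0) - \frac{i\,\delta t}{\hbar}
\left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x,t_0)\right]\Psi(x,t_0).
```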

Second property, fundamental property. The equation is linear. So, if you have two solutions, you can form a third by superimposing them, and you can superimpose them with complex coefficients. So, if you have two solutions, Psi 1 and Psi 2, then a1 Psi 1 plus a2 Psi 2 is a solution. And here the a's belong to the complex numbers. So a1 and a2 are complex numbers.

As far as complex numbers are concerned, the first thing you need to know is the definition of the length of a complex number. So, you have z-- a typical name people use for a complex number-- having two components: a plus ib, where a and b are real.

There's the definition of the complex conjugate, which is a minus ib, and there's the definition of the length of the complex number, which is square root of a squared plus b squared, which is the square root of z times z star. So, that's for your complex number.
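
In symbols:

```latex
z = a + ib,\qquad z^{*} = a - ib,\qquad
|z| = \sqrt{a^2 + b^2} = \sqrt{z\,z^{*}},\qquad a,b \in \mathbb{R}.
```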

So, the property that makes this into a physical theory, and goes beyond math, is, as you know, the interpretation of the wave function as a probability. So, what do we construct? We construct P of x and t, sometimes called rho of x and t, as a density. And it's defined as Psi star of x and t.

Now, here the notation means this: Psi star-- we put the star here-- really means Psi of x and t complex conjugated. You complex conjugate the wave function, and you get that. We put the star here, and typically don't put the parentheses, unless you have to complex conjugate something that would otherwise be a little ambiguous.

So, Psi star of x and t times Psi of x and t. And this is called the probability density. Probability density. And the interpretation is that if you take p of x and t and multiply by little dx, this is the probability to find the particle in the interval x comma x plus dx at time t.

So, this is our probability density. It's a way to make physics out of the wave function. It's a postulate. And a consequence of this postulate, since we're describing just one particle, is that the particle must be somewhere. So, if we add the probabilities that the particle is somewhere all over space-- the probability that the particle is in each little dx, integrated-- that must be equal to 1. And this must hold for all times.
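
In symbols, the probability density and the normalization condition just stated are:

```latex
\rho(x,t) = \Psi^{*}(x,t)\,\Psi(x,t) = |\Psi(x,t)|^{2},
\qquad
\int_{-\infty}^{\infty} |\Psi(x,t)|^{2}\,dx = 1 \ \ \text{for all } t.
```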

In terms of things to notice here, maybe one thing you can notice is the units of Psi. The units of Psi must be 1 over square root of length, because when we square it, then we multiply it by length, we get one, which has no units.

Key property of the Schrodinger equation. We will revisit the Schrodinger equation later and derive it, sort of the way Dirac derives it in his textbook, as just a consequence of unitary time evolution. It will be a very neat derivation. It will give you a feeling that you really understand something deep about quantum mechanics. And it will be true, that feeling. But here, we're going to go the other way around.

Just simply ask the question-- suppose you have a wave function such that the integral of this quantity at some specific time is equal to one. Will this integral be equal to one for all times, given that it is one at some given time? Now, you say, well, why do you ask that?

I ask that because actually this could be a problem. We've said that if you know the wave function all over space at one time, it's determined everywhere, at any time later. Therefore, if I know that the wave function at time equal t zero is a good, normalized wave function, am I guaranteed that when I solve the Schrodinger equation, the wave function will still be normalized later? Yes, you are.

And there's a simple but interesting exercise-- we'll call it a quick calculation-- that I'll leave for you to do. Which is: show that d dt of this integral of Psi of x and t squared dx is equal to zero. So, basically, what this is saying-- think of this integral-- I'm sorry, I'm missing a dx here-- think of this integral for all times. It could be a function of time, because you put an arbitrary time here. The integral might depend on time.

So, it's a good question to think of that integral, which may be a function of time, and take its derivative. If its derivative is zero for all times, and the integral is equal to one at some time, it will be one forever. So, you must show that this is true. Now, this I think you've done one way or another, maybe several ways, in 804. But I ask you to do it again. So this is left for you as a way to warm up on this object.

And you will see actually that it's a little subtle. It's a little delicate, because how is it going to go? You're going to go in and take the derivative of Psi Psi star. You're going to take the derivative of Psi and you're going to use the Schrodinger equation. You're going to take the derivative of Psi star, and you're going to use the complex conjugate of the Schrodinger equation. It's going to be a little messy.

But then you're going to do integration by parts, and you're going to get zero, but only if you throw away the terms at infinity. And what gives you the right to throw them away? You will have to think. And the answer is that you will throw them away if the wave function goes to zero at infinity, which it must do. The wave function must go to zero at infinity, because if it didn't go to zero at infinity-- if it went to a constant at infinity-- it would pick up an un-normalizable contribution here. So the wave function definitely has to go to zero at infinity.

But that will also not be quite enough if you're careful about what you're doing. You will have to demand that the derivative of the wave function doesn't blow up. It's not asking too much, but it's asking something. A function could go to zero, presumably, and its derivative at the same time blow up, but it would be a very pathological function.
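
A sketch of that quick calculation, using the Schrodinger equation and its complex conjugate (with V real), and dropping the boundary terms because Psi goes to zero at infinity and Psi prime stays bounded:

```latex
\frac{d}{dt}\int_{-\infty}^{\infty}\Psi^{*}\Psi\,dx
= \int_{-\infty}^{\infty}\left(\frac{\partial\Psi^{*}}{\partial t}\Psi
  + \Psi^{*}\frac{\partial\Psi}{\partial t}\right)dx
= \frac{i\hbar}{2m}\int_{-\infty}^{\infty}
  \frac{\partial}{\partial x}\!\left(\Psi^{*}\frac{\partial\Psi}{\partial x}
  - \frac{\partial\Psi^{*}}{\partial x}\Psi\right)dx
= \frac{i\hbar}{2m}\left[\Psi^{*}\frac{\partial\Psi}{\partial x}
  - \frac{\partial\Psi^{*}}{\partial x}\Psi\right]_{-\infty}^{\infty} = 0.
```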

This will bring us to something that we said. We're going to try to be precise, but it's not so easy to be precise. When you try to be precise, you can exaggerate and go precise to a point that you're paralyzed with fear with every equation. We don't want to get that far. We want you to notice what happens and just look at it and state what you need.

Why can't we be precise? Because at the end of the day, this equation is extraordinarily complicated, and maybe crazy. The potential can be crazy enough. So, functions-- mathematicians can invent crazy functions, things like a function that is one for every rational number and zero for every irrational number. Put that in for a potential here, and who knows what one gets.

So, we're going to take mild functions. We're not going to make them very complicated, and we're going to be stating very soon what we need. So, what you need for this to work is that the function goes to zero and the derivative goes to zero. Yes.

AUDIENCE: The potential has to be real always?

PROFESSOR: The potential is real at this moment. Yes. For the discussion that we're doing here, v is also a real number.

AUDIENCE: So it can't be complex?

PROFESSOR: Sorry?

AUDIENCE: Can it be complex?

PROFESSOR: It could be in certain applications for particles in electromagnetic fields. You can have something that looks like a complex Hamiltonian. So we will not discuss that in this couple of lectures, but maybe later. Yes.

AUDIENCE: Are there any conditions that the potential has to be time-dependent?

PROFESSOR: Well, at this moment, I've put it time-dependent. Time-dependent potentials are more complicated, but they're sometimes necessary, and we will discuss some of them. We will have very simple time dependencies-- otherwise, it's difficult to solve this equation. But very soon-- in about five minutes, I will say-- let's consider time-independent things, to review the things that are a little more basic and important and that you should definitely remember well.

OK, so that's this part of the Schrodinger equation. I want to remind you of another concept called the current-- the probability current. What is it? It's a J of x and t-- which you will review in the homework-- given by h bar over m times the imaginary part of Psi star d Psi dx. So, it's a real quantity. And it's called a probability current. And it goes together with this probability density that we wrote over here. So it's the current associated to that density.
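
Written out, the current just defined is:

```latex
J(x,t) = \frac{\hbar}{m}\,\mathrm{Im}\!\left(\Psi^{*}\,\frac{\partial\Psi}{\partial x}\right).
```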

Let's think a second about what this means. In electromagnetism, you have currents and charge densities. So in E&M, you have a current-- it's a vector-- and a charge density. Now, this current could also be a vector. If you're working in more than one dimension, it would be a vector.

But if you have electromagnetism, the most famous thing associated to currents and charge densities is the so-called conservation law-- a differential equation satisfied by the current and the density: divergence of J plus d rho dt is equal to zero. That means charge conservation.

You may or may not remember that. If you don't, it's a good time to review it in E&M and check on that, discuss it in recitation. Think about it.

This means charge conservation as we usually understand, and the way to do it-- I'm saying just in words-- is you think of a volume, you can see how much charge is inside, and you see that the rate of change of the charge is proportional to the current that is escaping the volume. Which is to say, charge is never destroyed or created. It can escape a volume, because the charges are moving, but if it doesn't escape, well, the charge remains the same.

So, this is charge conservation. And this is the same thing. The divergence of J in this case reduces to dJ dx, plus d rho dt equals zero. It has a very similar interpretation. So perhaps, in equations, it's easier to see the interpretation.
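
In symbols, the conservation law and its one-dimensional version:

```latex
\nabla\cdot\vec{J} + \frac{\partial\rho}{\partial t} = 0
\qquad\longrightarrow\qquad
\frac{\partial J}{\partial x} + \frac{\partial\rho}{\partial t} = 0.
```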

Consider the real line and two points a and b, with a less than b. And define the probability P ab of t of finding the particle in this interval between a and b at any time. You should be able to show-- and it's again another thing to review.

You will use this differential equation, and things like that, to show that dP ab dt-- the rate at which the probability of finding the particle in this interval changes-- depends on what the current is doing here and what the current is doing there. So, it's actually given by J at a and t, minus J at b and t. You can show this, and please try to show it.
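
A sketch of what you are asked to show, using the one-dimensional conservation law above:

```latex
P_{ab}(t) = \int_{a}^{b}\rho(x,t)\,dx,
\qquad
\frac{dP_{ab}}{dt} = \int_{a}^{b}\frac{\partial\rho}{\partial t}\,dx
= -\int_{a}^{b}\frac{\partial J}{\partial x}\,dx
= J(a,t) - J(b,t).
```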

So, what does that mean? The particle can be anywhere at any time. But if you want to know how the probability is changing, you must see how it's leaking out at a or how it's leaking out at b.

Now J is defined, by convention, as positive to the right. So, if there's a bit of positive current at a, it increases the probability-- the particle is sort of moving into the interval. And a positive current at b decreases the probability. Finally, for wave functions, the last thing we say is that you want them normalized, but we can work with them and they're physically equivalent if they differ just by a constant. So Psi 1 and Psi 2 are said to be equivalent if Psi 1 of x and t is equal to some complex constant times Psi 2 of x and t.

Now, you would say, well, I don't like that. I like normalized wave functions, and you could have a point there. But even if these are normalized functions, they could differ by a phase. And they would still be physically equivalent.

This part of the definition of the theory-- the definition of the theory is that these wave functions are really physically equivalent and indistinguishable. And that puts a constraint on the way we define observables. Any observable should have this property that, whether we used this wave function or the other, they give you the same observables.

So, if your wave functions are normalized, this can only be a complex constant of length one. Then one being normalized implies the other is normalized. If they're not normalized, you can say, look, the only reason I'm not normalizing is because I don't gain all that much by normalizing, in fact. I can do almost everything without normalizing the wave function. So, why should I bother? And we'll explain that as well very soon. So, this is part of the physical interpretation that we should keep.

So, now we've reviewed the Schrodinger equation. Next thing we want to say is the most important solutions of the Schrodinger equations are those energy Eigenstates, stationary states. And let's just go through that subject and explain what it was. So, I'm going to start erasing here.

So we're going to look at-- whoops-- stationary solutions. Now, I've used this wave function with a capital Psi for a purpose, because I want to distinguish it from another psi that we're going to encounter very soon. So, stationary solutions. And we'll take it-- from now on, assume v is time-independent. This case is sufficiently important that we may as well do it.

So, in this case, the Schrodinger equation is written as I h bar d Psi dt, and we'll write it with something called an h hat acting on Psi. And h hat at this point is nothing else than minus h squared over 2m second derivative with respect to x plus v of x.
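
In symbols:

```latex
i\hbar\,\frac{\partial\Psi}{\partial t} = \hat{H}\,\Psi,
\qquad
\hat{H} = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}}{\partial x^{2}} + V(x).
```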

We say that h hat is an operator acting on the wave function Psi on the right. Operator acting on that-- what does that mean? Basically, when we say an operator acts on some space, we mean that it takes elements of that space and moves them around in the space.

So, you've got a wave function, which is ultimately a complex number that depends on x and t, and then you act with this thing, which involves taking derivatives and multiplying by v of x, and you still get some complex function of x and t. So, this is called the Hamiltonian operator, and it's written like that. This Hamiltonian operator is time-independent.

So, what is a stationary state? A stationary state-- the way it's defined is as follows. A stationary state of energy e-- which is a real number-- is a Psi of x and t of the following form. It's a simple form. It's a pure exponential in time times a function that just depends on x. So, it's a pretty simple object.

So what is it? We say that this is a stationary state: e to the minus i E t over h bar, times psi of x. And this psi is on purpose different from that Psi. It doesn't have the bar at the bottom, and that signals to you that it's the time-independent one. So this also belongs to the complex numbers, but it doesn't depend on time. It's called stationary because, as it turns out, when we compute expectation values of any observable on this state-- on this stationary state-- it will be time-independent.
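
Written out, the stationary-state ansatz is:

```latex
\Psi(x,t) = e^{-iEt/\hbar}\,\psi(x),
\qquad E \in \mathbb{R},\quad \psi(x)\in\mathbb{C}.
```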

In particular, one such observable is the probability density. And when you look at that, you have Psi star and Psi. Since E is real, this factor cancels-- it really is a phase, because E is real. Therefore, in Psi star Psi, the exponential cancels, and all the time dependence cancels and goes away.

Same thing here for the J. The x derivative over here doesn't do anything to that phase. Therefore, the phase e to the i E t over h bar cancels from there, too. And the current also has no time dependence.

So, this will be the case for any operator that is called a time-independent operator. It will have time-independent expectation values. So you can ask anything about some familiar operator-- energy operator, momentum operator, angular momentum operator-- all the famous operators of quantum mechanics, and it will have real expectation values.

So, as you know, you're supposed to now plug this into this equation. And it's a famous result. Let's just do it. Plug back into the top equation. So, we have i h bar. The d dt will only act on the phase, because the little psi has no time dependence. And on the other hand, on the right hand side, the H has nothing to do with time, and therefore it can slide through the exponential until it hits psi.

So here we have H-- well, I'll put the exponential in front-- H on little psi. So, we multiply here, and what do we get? Well, the h bars cancel. The i times minus i gives you one. You get that E in front. So you get E times this phase times psi of x. And the phase is supposed to be here, but I cancel it with this phase as well. And I get on the right hand side H psi, which I will put on the left: H psi equals E psi. And this is the time-independent Schrodinger equation.
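
Written out, plugging the stationary ansatz into the Schrodinger equation gives:

```latex
i\hbar\,\frac{\partial}{\partial t}\!\left(e^{-iEt/\hbar}\psi(x)\right)
= E\,e^{-iEt/\hbar}\,\psi(x)
= e^{-iEt/\hbar}\,\hat{H}\psi(x)
\quad\Longrightarrow\quad
\hat{H}\,\psi(x) = -\frac{\hbar^{2}}{2m}\,\frac{d^{2}\psi}{dx^{2}} + V(x)\,\psi(x) = E\,\psi(x).
```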

So far this is really a simple matter. We've written a solution that will represent the stationary state, but then this energy should be such that you can solve this equation. And as you've learned before, it's something not so easy to solve that equation. So what do we want to say about this equation? Well, we have a lot to say, and a few things will be pointed out now that are very important.

So, we have a differential equation now. This differential equation has second derivatives with respect to x. Now it has no time derivatives. The time has been factored out. Time is not a problem anymore. This equation, in fact, looks quite real, in that it seems that psi could even be real here. And in fact, yes, there's no problem with this psi being real. The total Psi just can't be real in general. But this one can be real, and we'll consider those cases as well.

So, one thing that we want to say is that this is a second order differential equation in space. You could write it here: the H operator has partial derivatives, but this time you might as well say that this is minus h squared over 2m d second psi dx squared, plus v of x times psi of x. Because psi only depends on x, you might as well write it with complete derivatives. So, a second order differential equation.

And therefore, the strategy for this equation is a little different than it was for the full Schrodinger equation. We said, for the Schrodinger equation, if you know the wave function everywhere at one time, you know it later. Here, if you know the wave function at one point, and you know its derivative at that one point, you have it everywhere.

Why is that? Because that's how you solve a differential equation. If you know the wave function and the derivative at the point, you go to the equation and say, I know the wave function and I know the first derivative, and I know the second derivative. So, a little later I can know what the first derivative is, and if I know what the first derivative is a little later, I can then know what the wave function is a little later, and you just integrate it numerically.
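
As a rough illustration of this point-- that psi and psi prime at one point determine the solution everywhere-- here is a minimal numerical sketch. The grid, the stepping scheme, and the harmonic-oscillator potential are just illustrative choices, not anything from the lecture:

```python
import numpy as np

def integrate_tise(E, psi0, dpsi0, x0=0.0, x1=4.0, n=10000,
                   m=1.0, hbar=1.0, V=lambda x: 0.5 * x**2):
    """Crude march of the time-independent Schrodinger equation
        psi'' = (2m/hbar^2) (V(x) - E) psi
    to the right, starting from psi(x0) and psi'(x0)."""
    dx = (x1 - x0) / n
    x, psi, dpsi = x0, psi0, dpsi0
    xs, psis = [x], [psi]
    for _ in range(n):
        d2psi = (2.0 * m / hbar**2) * (V(x) - E) * psi  # second derivative from the equation
        psi += dx * dpsi    # advance psi using its first derivative
        dpsi += dx * d2psi  # advance psi' using the second derivative
        x += dx
        xs.append(x)
        psis.append(psi)
    return np.array(xs), np.array(psis)

# Example: start at the origin with psi = 1, psi' = 0, and some trial energy.
xs, psis = integrate_tise(E=0.5, psi0=1.0, dpsi0=0.0)
```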

So, knowing the wave function Psi at x zero and Psi prime at x zero suffices for a solution when v is regular-- that is, when v is not too complicated or too strange, because you can always find exceptions. You have the infinite square well potential, and you say, oh, I know the wave function here is zero and its derivative is zero. Does that determine the solution? No, because the potential is infinite there. There's no room for the particle there, really, and you should work inside the well. So, basically, unless v is really pathological, Psi and Psi prime are enough to solve for everything.

And that actually means something very important: if Psi at x zero is equal to zero, and Psi prime at x zero is equal to zero, then under these regular conditions, Psi of x is zero for all x. Because you have a differential equation for which the initial value is zero and Psi prime is zero. And you go through the equation, and you see that every solution has to be zero. It's the only possibility here.

So what happens now is the following. You have a physical understanding that when your wave function approaches zero-- it may do it slowly, getting closer and closer to zero but never quite being zero-- but if it is zero at a point, it does it with Psi prime different from zero, so that the wave function is not zero all over. So, this is a pretty important fact that is useful many times when you try to understand the nature of solutions.

So what else do we have here? Well, we have energy Eigenstates and the spectrum. So, what is an energy Eigenstate? Well, it's a solution of this equation. So a solution psi of this equation is an energy Eigenstate. The set of values of E is the spectrum. And if there's a value of E that has more than one solution, we say the spectrum is degenerate. So a degenerate spectrum is more than one psi for a given E.

So, these are just definitions, but they're used all the time. So, our energy Eigenstates are the solutions of this. The funny thing about this equation is that sometimes the requirement that psi be normalized means that you can't always find a solution for any value of E. So, only specific values of E are allowed-- you know that for the harmonic oscillator, for example-- and therefore there's something called the spectrum, which is the set of allowed values. And many times you have degeneracies, and that makes for very interesting physics.

Let's say a couple more things about the nature of this wave function. So, what kind of potentials do we allow? We will allow potentials that can fail to be bounded. What do we allow? We allow failure of continuity. Certainly, we must allow that in our potentials that we consider, because you have even the finite square well. The potential is not continuous. You can allow as well failure to be bounded.

So, what is a typical example? The harmonic oscillator, the x squared potential. It's not bounded. It goes to infinity. So, we can fail to be continuous, but we can fail at one point, another point, but we shouldn't fail at infinitely many points, presumably.

So, it's piecewise continuous. It can fail to be bounded, and it can include delta functions. Which is pretty interesting, because a lot of physics uses delta functions, but a delta function is a complicated thing. We'll include delta functions but not derivatives of them, nor powers. So we won't take anything more strange than delta functions, collections of delta functions.

So, this is really how delicate your potentials will be. They will not be more complicated than that. But for that, we will assume, and it will be completely consistent to require the following for the wave functions. So Psi is continuous-- Psi of x-- is continuous and bounded. And its derivative is bounded. Psi prime is bounded.

AUDIENCE: What about Psi's behavior at infinity?

PROFESSOR: Sorry?

AUDIENCE: What kind of extra conditions do we have to impose of Psi's behavior at infinity?

PROFESSOR: Well, I will not impose any condition further than that, except the condition that they be normalizable. And even that we will be a little-- how would I say-- not too demanding about. Because there will be wave functions, like momentum Eigenstates, that can't be normalized. So, we'll leave it at that.

I think probably this is what you should really box, because for a momentum Eigenstate, e to the i p x over h bar-- this is a momentum Eigenstate-- this is continuous, it's bounded, the derivative is bounded. It is not normalizable, but it's so useful that we must include it in the list of things that we allow. So, there are bound states and non-bound states, and the non-bound states are things that are not normalizable. So, I don't put normalization.

Now, if you put normalization, then the wave function will go to zero at infinity. And that's all you would want to impose. Nothing else. So, really in some sense, this is it. You don't want more than that.

AUDIENCE: Is normalization sufficient to ensure the derivative also goes to zero at infinity?

PROFESSOR: Sorry?

AUDIENCE: Is normalization sufficient to ensure that the--

PROFESSOR: Not that I know. I don't think so.

AUDIENCE: Then why is integration by parts generically valid?

PROFESSOR: It's probably valid for restricted kinds of potentials. So you could not prove it in general. So, you know, there may be things that one can generalize and be a little more general, but I'm trying to be conservative. I know that for any decent potential-- and we definitely need Psi prime bounded. And wave functions that go to zero, the only ones I know that also have Psi prime going to zero. But I don't think it's easy to prove that's generic, unless you make more assumptions.

So, all right. So, this we'll have for our wave functions, and now I want to say a couple of things about properties of the Eigenstates. Now, we will calculate many of these Eigenstates, but we need to understand some of the basic properties that they have. And there's really two types of identities that I want you to be very aware that they play some sort of dual role-- a pretty interesting dual role-- that has to do with these wave functions.

So, the Eigenstates of-- Eigenstates of H hat-- these are the energy Eigenstates. You can consider them and make a list of them. So, you have energies E zero, less than or equal to E 1, less than or equal to E 2-- it just goes like that. And you have a psi zero, psi 1-- all these wave functions. And then H hat psi n is equal to E n psi n. You have a set of solutions.
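
In symbols, the list of energy Eigenstates is:

```latex
\hat{H}\,\psi_{n}(x) = E_{n}\,\psi_{n}(x),
\qquad E_{0} \le E_{1} \le E_{2} \le \cdots
```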

So, this is what will happen if you have a good problem. A reasonable potential, and nothing terribly strange going on. There would be a lot of solutions, and they can be chosen to be orthonormal. Now at first sight, it's a funny term to use-- orthonormal. This is a term that we use for vectors. Two vectors are orthogonal, and we say they're orthonormal if they have unit length, and things like that.

But what do we mean the two functions are orthonormal? Well, our function's vectors. Well, that's a little dubious. But the way we will think in quantum mechanics is that, in some sense, functions are vectors in an infinite dimensional space. So, they're just vectors, but not in three dimensions. Why? Think of it. If you have a function, you have to give values-- independent values-- at many points-- Infinitely many. And if you give all those values, you've got the function. If you have a vector, you have to give components, and you've got the vector.

So, in a sense, to give a function, I have to give a lot of numbers. And I can say the first component is the value of the function at zero. The second component is the value at 0.01, the next at 0.02, going on and on. You list them, and you have a vector of infinite dimensions.

You say, totally useless. [LAUGHTER] No, it's not totally useless. Actually, if you visualize that-- and we'll do it later more-- you will be able to understand many formulas as natural extensions.

So, what does it mean that these two functions are orthonormal? Well, orthogonality is to say that the dot product is zero. And the way we dot product two functions Psi m and Psi n of x is we take their values at the same point, with a star on the first one, and we integrate. And, if this is equal to delta mn, we say the functions are orthonormal.

So, ortho, for orthogonal, which says that if m is different from n, you get zero-- the Kronecker delta, that symbol, is equal to one if the two labels are the same, and zero otherwise. So if they're different, you get zero.

The inner product-- this left hand side is called the inner product-- is zero. On the other hand, if they are the same, if m is equal to n, it says that the integral of psi squared is one-- kind of like a wave function that is well normalized. So we say normal, for orthonormal. So these are orthonormal wave functions, and that's good. This is called orthonormality.
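
Written out, orthonormality is the statement:

```latex
\int_{-\infty}^{\infty}\psi_{m}^{*}(x)\,\psi_{n}(x)\,dx = \delta_{mn}
= \begin{cases} 1 & m = n,\\ 0 & m \neq n. \end{cases}
```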

But then there is a more subtle property, which is that this set of functions is enough to expand any function in the interval in which you're doing your quantum mechanics. So, if you have any reasonable function, it can be written as a superposition of these ones. So, this differential equation furnishes for you a collection of functions that are very useful. The first property was orthonormality.

This one is completeness, which is to say that any function can be written as a sum of these functions. So I will write it like this: psi of x-- an arbitrary psi of x-- can be written as the sum of bn psi n of x, for n equals zero to infinity, where the bn's are complex.
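
In symbols:

```latex
\psi(x) = \sum_{n=0}^{\infty} b_{n}\,\psi_{n}(x),
\qquad b_{n}\in\mathbb{C}.
```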

So, this is an assumption, but it's a very solid assumption. When you study differential equations of this type-- Sturm-Liouville problem-- this is one thing that mathematicians prove for you, and it's not all that easy. But the collection of wave functions is good in this sense. It provides you a complete set of things that any function can be written in terms of that.

I'm not saying this satisfies any particular equation. You see, this function satisfies very particular equations-- those equations-- but this is an arbitrary function. And it can be written as a sum of this. See, these equations have different energies for different Psi's. This Psi here satisfies no obvious equation.

But here is a problem that this is useful for. Suppose you're given a wave function at a given time, and you know what it looks like. So, here is your wave function, Psi. And you know that Psi at x and time equal to zero happens to be equal to this psi of x that we wrote above. So, it's equal to the sum of bn psi n of x.

Well, if you can calculate these coefficients-- the wave function at time equal zero is known, say, and it was given by this thing, which is then written in this form-- if you can write it in this form, you've solved the problem of time evolution. Because then Psi of x at any time is simply obtained by evolving each component, which is bn e to the minus i E n t over h bar, psi n of x. So this is the important result.
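
Written out, the time-evolution result is:

```latex
\Psi(x,0) = \sum_{n=0}^{\infty} b_{n}\,\psi_{n}(x)
\quad\Longrightarrow\quad
\Psi(x,t) = \sum_{n=0}^{\infty} b_{n}\,e^{-iE_{n}t/\hbar}\,\psi_{n}(x).
```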

Now, look what has happened. We have replaced each term. We added this exponential. Why? Because then each one of these is a solution of the full Schrodinger equation. And therefore a superposition with complex coefficients is still a solution of the Schrodinger equation.

Therefore, this thing I've put by hand is, you would say it's ad hoc. No, it's not. We've put it by hand, yes, but we've produced a solution of the Schrodinger equation, which has another virtue. When t is equal to zero, it becomes what you know the wave function is.

So, this solves the Schrodinger equation, and at time equal zero it gives you the right answer. And you remember that for the Schrodinger equation, if you know the wave function at time equal zero, that determines the wave function everywhere-- so this is the solution. It's not just a solution. It's the solution.

So, you've solved this equation, and it's a very nice thing. It all depends, of course, on having found the coefficients bn. Because typically at time equals zero, you may know what the wave function is, but you may not know how to write it in terms of these coefficients bn.

So, what do you do then? If you don't know those coefficients, you can calculate them. How do you calculate them? Well, you use orthonormality. So you actually take this and integrate against another Psi star. So you take a Psi star sub m and integrate-- multiply and integrate. And then the right hand side will get the Kronecker delta that will pick out one term.

So, I'm just saying in words a two line calculation that you should do if you don't see this as obvious. Because it's the kind of calculation that you do a few times in life-- then it becomes obvious and you never do it again. It's the integral from minus infinity to infinity of Psi m star of x times Psi of x, dx. So, bm-- or bn-- is given by this quantity. You obtain it from here plus orthonormality.
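
The two-line calculation being described: multiply the expansion by psi m star, integrate, and let orthonormality collapse the sum:

```latex
\int_{-\infty}^{\infty}\psi_{m}^{*}(x)\,\psi(x)\,dx
= \sum_{n=0}^{\infty} b_{n}\int_{-\infty}^{\infty}\psi_{m}^{*}(x)\,\psi_{n}(x)\,dx
= \sum_{n=0}^{\infty} b_{n}\,\delta_{mn} = b_{m}.
```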

So, once you have this bn, you can do something that may-- if you look at these things and say, well, I'm bored, what should I do? I say, well, you have bm. Plug it back. What happens then? You say, why would I plug it back? I don't need to plug it back.

And that's true, but it's not a crazy thing to do, because it somehow must lead to some identity. Because you solve an equation and then plug it back and try to see if somehow it makes sense. So either it makes sense, or you learned something new.

So, we were supposed to calculate the bn's. And now we have them, so I can plug this back here. So what do I get? Psi of x now is equal to the sum from n equals zero to infinity of bn-- but this bn is the integral of psi n star of x prime times psi of x prime, dx prime. I don't want to confuse the x's with the x primes, so I should put the x primes all over here. And then psi n of x.

Well, can I do the integral? No. So, have I gained anything? Well, you've gained something if you write it in a way that Psi is equal to something times Psi. That doesn't look all that simple, but we can at least organize it.

Let's assume things are convergent enough that you can change the order of sums and integrals. That's an assumption we always make. I'll write it like this: dx prime, and now I'll put the sum here, n equals zero to infinity, of psi n star of x prime. And I'll put the other psi here as well-- the psi n of x over here. I'll put the parentheses, and finally the psi of x prime here.

So, now it's put in a nice way. And it's a nice way because it allows you to learn something new and interesting about this. And what is that? That this thing in parentheses must be a very peculiar function, such that, integrated against psi, it gives you psi. And what could it be? Well, this is of the form, if you wish, integral dx prime of some function k of x and x prime, times psi of x prime. So, this k is this thing.

Well, you can try to think what this is. If you put the delta function here-- which may be a little bit of a cheat-- you will figure out the right answer. This must be a function that sort of picks out the value of the function at x by integrating. So it only cares about the value at x. So, it must be a delta function. So, in fact, this is a delta function, or should be a delta function.

And therefore the claim is that we now have a very curious identity that looks as follows. It looks like n equal zero to infinity, Psi n star of x prime Psi n of x is actually delta of x minus x prime.
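
Written out, the chain just described and the resulting completeness relation are:

```latex
\psi(x) = \int dx'\left(\sum_{n=0}^{\infty}\psi_{n}^{*}(x')\,\psi_{n}(x)\right)\psi(x')
\quad\Longrightarrow\quad
\sum_{n=0}^{\infty}\psi_{n}^{*}(x')\,\psi_{n}(x) = \delta(x - x').
```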

So, this must be true. If what we said at the beginning is true, that you can expand any function in terms of the Eigenfunctions, then, well, that's not such a trivial assumption. And therefore, it allows you to prove something fairly surprising, that this must be true, that this identity must be true.

And I want you to realize and compare and contrast with this identity here. One is completeness. One is orthonormality. There are two kinds of sums going on here. Here is sum over space, and you keep labels arbitrary-- label indices arbitrary. So, sum over space. These functions depend on space and on labels. Sum over space, and keep the labels, and you get sort of a unit matrix in this space, in the space of labels.

Here, you keep the positions arbitrary, but sum over labels. And now you get like a unit matrix in the space of positions. Something is one-- but actually infinite, but you couldn't do better-- when x is equal to x prime. So, if you think of it as a matrix, this function in x and x prime is a very strange matrix, with two indices, x and x prime. And when x is different from x prime, it's zero, but when x is equal to x prime, it's one. But it has to be a delta function, because continuous variables. But it's the same idea.

So, actually if you think of these two things, x and m as dual variables, this is a matrix variable, and then you're sort of keeping these two indices open and summing over the other index. Multiplying in one way you get a unit matrix. Here, you do the other way around. You have a matrix in m and n. This is a more familiar matrix, but then you sum over the other things.

So, they're dual, and two properties that look very different in the way you express them in words. One is that they're orthonormal. The other is that they're complete. And then suddenly then the mathematics tells you there's a nice duality between them.

So, the last thing I want to say today is about expectation values, which is another concept we have to review and recall. So let's give those ideas.

So, if we have a time-dependent operator-- no, a time independent-- we'll do a time-independent operator, I'm sorry. Time-Independent operator. And this operator will be called A hat. No time dependence on the operator. So, then we have the expectation value of this operator on a normalized state.

So what does that mean? The expectation value of this operator on a state-- on a wave function here. Now, this wave function is time-dependent. So this expectation value of this operator is expected to be a function of time.

And how is it defined? It's defined by doing the following integral. Again, from minus infinity to infinity, dx Psi star of x and t, and then the operator A acting on Psi of x and t. And Psi is supposed to be a normalized state.
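
Written out, the definition is:

```latex
\langle \hat{A}\rangle_{\Psi}(t)
= \int_{-\infty}^{\infty} dx\;\Psi^{*}(x,t)\,\big(\hat{A}\,\Psi\big)(x,t),
\qquad \Psi \text{ normalized}.
```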

So, notice the notation here. We put the Psi here because of the expectation-- whenever somebody asks you the expectation value for an operator, it has to be on a given state. So you put the state. Then you realize that this is a time-dependent wave function typically, so it could depend on time.

Now, we said about stationary states that if the state is stationary, there's a single time exponential here. There's just one term, e to the minus iEt over h bar. And if A, of course, is a time-independent operator, you won't care about the exponential. You will cancel this one, and there will not be a time dependence there.

But if this state is not stationary-- like most states are not stationary-- remember it's very important. If you have a stationary state, and you superimpose another stationary state, the result is not stationary. Stationary is a single exponential. More than one exponential is not stationary. So when you have this, you could have time dependence. So that's why we wrote it. Whenever you have a state that is not stationary, there is time dependence.

Now, you could do the following thing. So here is a simple but important calculation that should be done. And it's the expectation value of H. So what is the expectation value of the Hamiltonian at time t on this wave function Psi that we've computed there?

So, we would have to do that whole integral. And in fact, I ask you that you do it. It's not too hard. In fact, I will say it's relatively simple. And you have H on Psi of x and t, and then you must substitute this Psi equal the sum of bn Psi n.

And you have two sums. And the H acting on each psi n-- you know what it is. And then, with the two sums, you can do the integral using orthonormality. It's a relatively standard calculation. You should be able to do it. If you find it hard, you will see it, of course, in the notes. But it's the kind of thing that I want you to review.

So, what is the answer here? It's a famous answer. It's the sum over n of bn squared times En. So, you get the expected value of the energy. It's a weighted average over all of the stationary states that are involved in the state that you've been building. So your state has a little bit of psi zero, psi 1, psi 2, psi 3. And for each one, you square its component and multiply by En. And this is time-independent.
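
A sketch of the calculation being assigned, inserting the expansion of Psi, using H hat psi n equals En psi n, and doing the integral with orthonormality:

```latex
\langle\hat{H}\rangle_{\Psi}
= \int dx \sum_{m,n} b_{m}^{*}\,e^{iE_{m}t/\hbar}\,\psi_{m}^{*}(x)\;
  \hat{H}\;b_{n}\,e^{-iE_{n}t/\hbar}\,\psi_{n}(x)
= \sum_{m,n} b_{m}^{*}b_{n}E_{n}\,e^{i(E_{m}-E_{n})t/\hbar}\,\delta_{mn}
= \sum_{n}|b_{n}|^{2}E_{n}.
```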

And you say, well, you told me that only for stationary states, things are time-independent. Yes, only for stationary states, all operators are time-independent, but the Hamiltonian is a very special operator. It's an energy operator, and this is a time independent system. It's not being driven by something, so you would expect the energy to be conserved. And this is pretty much the statement of conservation of energy, the time-independence of this thing.

My last remark is technical about normalizations, and it's something you may find useful. If you have a wave function that is Psi, which is not normalized, you may say, OK, let's normalize it. So, what is the normalized wave function? The normalized wave function is Psi divided by the square root of the integral of Psi star Psi dx. You see, this is a number, and you take the square root of it. And this is the Psi of x and t. If the Psi is not normalized, this thing is normalized.

So, think of doing this here. Suppose you don't want to work too hard, and you want to normalize your wave function. So, your Psi is not normalized. Well, then this is definitely normalized. You should check that: square it and integrate it, and you'll see you get one. But then I can calculate the expectation value of A on that state, and wherever I see a Psi that should be normalized, I put this whole thing.

So what do I end up with? I end up with the integral from minus infinity to infinity of dx Psi star A hat Psi, divided by the integral from minus infinity to infinity of Psi star Psi dx. If you don't want to normalize a wave function, that's OK. You can still calculate its expectation value by working with the un-normalized wave function. So in this definition, Psi is not normalized, but you still get the right value.
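
In symbols, the normalized wave function and the expectation value computed with an un-normalized Psi are:

```latex
\Psi_{\text{norm}}(x,t) = \frac{\Psi(x,t)}{\sqrt{\displaystyle\int_{-\infty}^{\infty}\Psi^{*}\Psi\,dx}},
\qquad
\langle\hat{A}\rangle_{\Psi}
= \frac{\displaystyle\int_{-\infty}^{\infty}dx\;\Psi^{*}\,\hat{A}\,\Psi}
       {\displaystyle\int_{-\infty}^{\infty}dx\;\Psi^{*}\,\Psi}.
```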

OK, so that's it for today. Next time we'll do properties of the spectrum in one dimension and begin something new called the variational problem. All right.

[APPLAUSE]

Thank you, thank you.
