Lecture 5: Operators and the Schrödinger Equation

Description: In this lecture, Prof. Zwiebach gives a mathematical preliminary on operators. He then introduces postulates of quantum mechanics concerning observables and measurement. The last part of the lecture is devoted to the origins of the Schrödinger equation.

Instructor: Barton Zwiebach

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: Today we're going to just continue what Allan Adams was doing. He's away on a trip in Europe, so he asked me to give the lecture today. And we'll just follow what he told me to do. He was sort of sad to give me this lecture, because it's one of the most interesting ones. This is the one where you get to see the Schrodinger equation. But anyway, it had to be that way, so we'll do it here. He also told me to take off my shoes, but I won't do that.

So let's go ahead. What do we have so far? It's going to be a list of items. And what have we learned? We know that particles, or systems, are governed by wave functions, described by wave functions.

And those are psi of x at this moment. And these are complex numbers; they belong to the complex numbers. And they're continuous and normalizable. Their derivatives need not be continuous, but the wave function has to be continuous. It should also be normalizable. Those are two important properties of it: continuous and normalizable.

Second, there's a probability associated with this thing. And this probability is described by this p of x. And given that x is a continuous variable, the probability that the particle is exactly at one given point would in general be zero.

You have to ask, typically, what's the probability that I find it in a little range dx. For a continuous variable, that is postulated to be given by the square of the wave function times dx.
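In symbols, the postulate just described reads (together with the normalization condition mentioned above):

```latex
P\bigl(\text{particle in } [x,\,x+dx]\bigr) \;=\; |\psi(x)|^2\,dx,
\qquad
\int_{-\infty}^{\infty} |\psi(x)|^2\,dx \;=\; 1 .
```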

Third, there's superposition of allowed states. So particles can be in superpositions. A wave function that depends on x may quite generally be given as a superposition of two wave functions. And this is seen in many ways. You had those boxes: a particle was in a superposition, hard and soft. And a photon is in a superposition of being linearly polarized this way or that way. That's always possible.

Now, in addition to this, to motivate the next item, we talk about relations between operators. There was an abstraction going on in the previous lectures of this course in which the idea of the momentum of a particle became that of an operator on the wave function.

So, as an aside on operators and momentum: the momentum of a particle has been associated with an operator, h bar over i d dx. Now, two things here. My taste, I like to put h bar over i. Allan likes to put minus i h bar. That's exactly the same thing, but no minus sign is a good thing in my opinion. So I avoid them when possible.

Now, this d dx here is a partial derivative, and there seems to be no need for partial derivatives here. Why partial derivatives? I only see functions of x. Anybody? Why? Yes.

AUDIENCE: The complete wave function also depends on time, doesn't it?

PROFESSOR: The complete wave function depends on time as well. Yes, exactly. That's what we're going to get to today. This is the description of this particle at some instant. So in this description, the time here is implicit. It could be at some time, now, later, but that's all we know. So in physics you're allowed to ask the question: well, if I know the wave function at this time, and that seems to be what I need to describe the physics, what will it be later? A later time will come in, so therefore we'll stick to this partial d dx here.

All right, so how do we use that? We think of this operator as acting on the wave functions to give you roughly the momentum of the particle. And we've made it in such a way that when we talk about the expectation value of the momentum, the expected value of the momentum of the particle, we compute the following quantity.

We compute the integral from minus infinity to infinity dx. We put the conjugate of the wave function here. And we put the operator, h bar over i d dx acting on the wave function here. And that's supposed to be sort of like saying that this evaluates the momentum of the wave function.
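Written out, the quantity being computed is:

```latex
\langle \hat{p} \rangle \;=\; \int_{-\infty}^{\infty} dx\;\, \psi^*(x)\,\frac{\hbar}{i}\,\frac{\partial \psi(x)}{\partial x}\, .
```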

Why is that so? It is because a general wave function need not be a state in which the particle has definite momentum. So I cannot just say that the momentum of the particle is the value of this operator on the wave function, because if I act with this operator on the wave function, it might not return me the wave function.

In fact, we've seen that for special wave functions, wave functions of the form psi equal to a number times e to the ikx, then p hat, thought of as the operator acting on psi, would be h bar over i d dx on psi. Gives you what?

Well, in this h bar over i, the h bar remains there. When you differentiate with respect to x, the ik comes down, and the i cancels, so you get h bar k times the same wave function. And for this reason we say that this wave function is a wave function with momentum h bar k. Because if you act with the operator p on that wave function, it returns for you h bar k times it. So we think of this as having p equal to h bar k.
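The short computation just described, written out for a plane wave psi equal to A e to the ikx:

```latex
\hat{p}\,\psi \;=\; \frac{\hbar}{i}\,\frac{\partial}{\partial x}\,\bigl(A e^{ikx}\bigr)
\;=\; \frac{\hbar}{i}\,(ik)\,A e^{ikx}
\;=\; \hbar k\,\psi .
```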

So the thing that we want to do now is to make this a little more general. This is just talking about momentum, but in quantum mechanics we're going to have all kinds of operators. So we need to be more general. So this is going to be, as Allan calls it, a math interlude, based on the following question: what is an operator? And then the physics question: what do measurable things have to do with operators?

So, operators and measurable quantities. Now, your view of operators is going to evolve. It's going to evolve in this course, it's going to evolve in 8.05, and it probably will continue to evolve. So we need to think of what operators are. And there is a simple way of thinking of operators that is going to be the basis of much of the intuition. It's a mathematical way of thinking of operators, and we'll sometimes use it as a crutch. And the idea is that basically operators are things that act on objects and scramble them.

So whenever you have an operator, you really have to figure out what are the objects you're talking about. And then the operator is some instruction on how to scramble it, scramble this object. So for example, you have an operator. You must see what it does to any object here. The operator acts on the object and gives you another object.

So in a picture you have a set of objects. And the operators are things—you can have a list of operators—that come and move those objects, scramble them, do something to them. And we should distinguish them, because the objects are not operators, and the operators are not the objects.

The simplest example in mathematics of this thing is vectors and matrices. Simplest example: the objects are vectors, the operators are matrices. And how does that work? Well, take the two by two case, where you have a set of vectors with two components.

So, for example, a vector that has two components, v1 and v2. This is the object. And the operator is a matrix, a11, a12, a21, a22. A matrix on a vector is a vector: if you act with a matrix on a vector, this 2 by 2 matrix on this column vector, you get another vector. So that's the simplest example of operators acting on objects.
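A minimal numerical sketch of this matrix-on-vector picture (not from the lecture; NumPy and the particular numbers are illustrative choices):

```python
# The operator is a 2x2 matrix; the object is a two-component vector.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # the operator: a 2x2 matrix with entries a11, a12, a21, a22
v = np.array([5.0, 6.0])     # the object: a vector with components v1, v2

w = A @ v                    # the operator acting on the object gives another vector
print(w)                     # [17. 39.]
```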

In our case, we're going to have to begin, in quantum mechanics, with a more sophisticated example, in which the objects are going to be functions. In fact, I will write complex functions of x.

So let's see the list of operators. And what do the operators do? The operators act on the functions. So what is an operator? It's a rule on how to take any function and obtain from that function another function. So let's start with some examples. It's probably the easiest thing to do.

So an operator acts on the functions. An operator, for any function f of x, will give you another function of x. And here's the operator O; we put a hat sometimes for operators. So, the simplest operator: the operator one.

Mathematicians always love to begin with trivial examples. They hardly illustrate anything, and just kind of confuse you many times. But actually it's good to get them out of the way.

So what is the operator one? One possibility: it takes any function and gives you the number one. Do you think that's it? Who thinks that's it? Nobody? Very good. That definitely would not be a good thing to do, to give you the number one.

So this operator does the following. I will write it like that. The operator one takes f of x and gives you what?

AUDIENCE: f of x.

PROFESSOR: f of x. Correct. Good. So it's a very simple operator, but it's an operator. It's like what matrix? The identity matrix. Very good. There could be a zero operator that gives you nothing and would be the zero matrix.

So let's write a more interesting operator: the operator d dx. That's interesting. The derivative can be thought of as an operator, because if you start with f of x, it gives you another function, d dx of f of x. And that's a rule to get from one function to another. Therefore it qualifies as an operator.

Another operator that typically can confuse you is the operator x. x, an operator? What does that mean? Well, you just have to define it. At this moment, it could mean many things. But you will see that what we define is the only thing that probably makes some sense.

So what is the operator x? Well, it's the operator that, given f of x, gives you the function x times f of x. That's a reasonable thing to do. It's multiplying by x. It changes the function. You could define the operator x squared that multiplies by x squared. The only reasonable thing is to multiply by it. You could also divide by it, and you may need to divide by it as well: you could define the operator 1 over x that gives you the function times 1 over x. We will need that at some point, but not now.

Let's see another operator to which we give a name. It doesn't really have a name, because it's not all that useful in fact, but it's good for illustrating things. The operator S sub q, for square—S q, the first two letters of the word square. It takes f of x into f of x squared. That's another function. You could define more operators like that.

The operator P 42. That's another silly operator—well, certainly a lot more silly than this one; that one's not too bad. The P 42 takes f of x and gives you the constant function 42. So that's a function of x, a trivial function of x.
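A short sketch of these examples in Python (my own illustration, not the lecture's): each operator is a rule that takes a callable f and returns a new callable, and the derivative is approximated by a finite difference.

```python
import numpy as np

def one(f):                # the operator 1: returns the same function
    return lambda x: f(x)

def ddx(f, h=1e-6):        # the operator d/dx, approximated by a central difference
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

def x_hat(f):              # the operator x: multiply the function by x
    return lambda x: x * f(x)

def sq(f):                 # the operator S_q: f(x) -> f(x)^2
    return lambda x: f(x) ** 2

def p42(f):                # the operator P_42: any f -> the constant function 42
    return lambda x: 42.0

f = np.sin
print(ddx(f)(0.0))         # ~1.0, since (d/dx) sin x = cos x and cos 0 = 1
print(x_hat(f)(2.0))       # 2*sin(2)
print(p42(f)(7.0))         # 42.0
```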

Now enough examples. So you get the idea. Operators act on functions and give you functions. And we just need to define them, and then we know what we're talking about. Yes?

AUDIENCE: Is the Dirac delta an operator?

PROFESSOR: The Dirac delta—well, you can think of it as an operator. It all depends how you define things. So how could I define the Dirac delta function to be an operator? So, delta of x minus a. Can I call it the operator delta hat sub a? Well, I would have to tell you what it does on functions. And probably I would say: delta hat sub a on a function f of x is equal to delta of x minus a times the function f of x. And then I'd say it's an operator.

Now the question is, really, is it a useful operator? And sometimes it will be useful, in fact. This is a particular case of a more general operator that maybe I could define: O sub h, the operator that takes f of x into h of x times f of x. So that would be another operator.

Now, there are operators that are particularly nice: the so-called linear operators. So what is a linear operator? It's one that respects superposition. So a linear operator respects superposition.

So O hat is linear—O hat is a linear operator—if O hat on a times f of x plus b times g of x does what you would imagine it should do: it acts on the first, and then it acts on the second. Acting on the first, the number just goes out; the operator doesn't do anything to the number. That's part of the idea. And it gives you a times O on f of x plus b times O on g of x.

So your operator may be linear, or it may not be linear. And we have to just check them. And you would imagine that we can decide that for the list of operators that we have. Let's see: one, d dx, x hat, S q, P 42, and O sub h. Let's vote on each one, whether it's linear or not. A shouting match—whether I hear a stronger yes or no. OK? One is a linear operator, yes?

AUDIENCE: Yes.

PROFESSOR: No? Yes. All right. d dx linear. Yes?

AUDIENCE: Yes.

PROFESSOR: Good. That's strong enough. Don't need to hear the other one. x hat. Linear operator? Yes or no?

AUDIENCE: Yes.

PROFESSOR: Yes, good. Squaring, linear operator?

AUDIENCE: No.

PROFESSOR: No. No way it could be a linear operator. It just doesn't happen. If you have S q on f plus g, it would be f plus g squared, which is f squared plus g squared plus, unfortunately, 2 f g. And this term ruins it, because this would have to be S q of f plus S q of g. It's even worse than that. You put S q on a times f: by linearity it should be a times the operator acting on f. But when you square a times f, you get a squared f squared. So you don't even need two functions to see that it's not linear. So definitely no. How about P 42?

AUDIENCE: No.

PROFESSOR: No, of course not. Because if you add two functions, it still gives you 42. It doesn't give you 84, so no. How about O sub h?

AUDIENCE: Yes.

PROFESSOR: Yes, it does work. If you act with this operator on a sum of functions, the distributive law works. So this is linear. Good.
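The vote can be checked numerically. A sketch (same illustrative operators as in the block above, again my own example) that compares O of a f plus b g with a O f plus b O g at a sample point:

```python
import numpy as np

def ddx(f, h=1e-6):   return lambda x: (f(x + h) - f(x - h)) / (2 * h)
def x_hat(f):         return lambda x: x * f(x)
def sq(f):            return lambda x: f(x) ** 2
def p42(f):           return lambda x: 42.0

def is_linear(op, f=np.sin, g=np.cos, a=2.0, b=-3.0, x=0.7, tol=1e-4):
    combo = lambda t: a * f(t) + b * g(t)
    lhs = op(combo)(x)                      # O(a f + b g)
    rhs = a * op(f)(x) + b * op(g)(x)       # a O(f) + b O(g)
    return abs(lhs - rhs) < tol

for name, op in [("d/dx", ddx), ("x", x_hat), ("Sq", sq), ("P42", p42)]:
    print(name, is_linear(op))              # d/dx True, x True, Sq False, P42 False
```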

Linear operators are important to us because we have superposition of allowed states. So if this is a state and this is a state, this is also a good state. So if we want superposition to work well with our theory, we want linear operators. So that's good. So we have those linear operators.

And now, operators have another thing that makes them special. It is the idea that there are simpler objects they can act on. We don't assume you've studied linear algebra in this course, so whatever I'm going to say, take it as motivation to learn some linear algebra at some stage. You will see a little more linear algebra in 8.05. But at this moment, just the basic idea.

So whenever you have matrices, one thing that people do is to see if there are special vectors. An arbitrary vector, when you act on it with the matrix, is going to just jump and go somewhere else, point in another direction. But there are some special vectors: if you have a given matrix M, there are sometimes some funny vectors that, acted on by M, remain in the same direction. They may grow a little or become smaller, but they remain in the same direction. These are called eigenvectors. And the constant of proportionality, in the action of the operator on the vector, is called the eigenvalue.
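A small sketch of this finite-dimensional picture (NumPy and the sample symmetric matrix are my own illustrative choices):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(M)
print(eigenvalues)                            # [3. 1.]
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(M @ v, lam * v))        # True: M v = lambda v, same direction
```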

So these things have generalizations for our operators. Operators can have special functions, eigenfunctions. What are these eigenfunctions? Let's consider an operator A hat. It's some operator—I don't know which one of these—but we're going to talk about linear operators. So linear operators have eigenfunctions.

A hat. So, A hat. There may be functions that, when you act with the operator, give you back the same function up to possibly a constant a. So you may get a times the function. And that's a pretty unusual function, because on most functions any given operator is going to make a mess out of the function. But sometimes it does that.

So to label them better with respect to the operator, I would put a subscript a, which means that there is some special function, labeled by a parameter a, for which this operator gives you a times that special function. And that special function is called the eigenfunction, and this a is the eigenvalue. And the eigenvalue is a number; it belongs to the complex numbers, that C over there.

So these are special things. They don't necessarily always exist, but sometimes they do, and then they're pretty useful. And we have one example of them that is quite nice. For the operator A equal to p hat, we have eigenfunctions e to the ikx with eigenvalue h bar k.

So this is the connection to this whole thing. We wanted to make clear for you that what you saw here—that this operator acting on this function gives you something times this function—is a general fact about operators. Operators have eigenfunctions. So e to the ikx is an eigenfunction with eigenvalue h bar k, because indeed p hat on e to the ikx, as you see, is h bar k times e to the ikx. So here p hat plays the role of A hat, the label a is like k, and the eigenvalue is h bar k. But the main thing is: operator on the function gives a number times the function—it's an eigenfunction. Yes?

AUDIENCE: For a given operator, is the eigenvalue [INAUDIBLE]?

PROFESSOR: Well, for a given operator—good question—a is a list of values. So there may be many, many eigenfunctions, in many cases infinitely many eigenfunctions. In fact, here I can put for k any number I want, and I get a different function. So a belongs to C and may take many, or even infinitely many, values.

If you remember, for matrices, a nice n by n matrix may have n eigenvectors and eigenvalues. Eigenvectors are sometimes hard to generate, sometimes eigenvalues repeat, and things like that.

OK. Linearity: is the sum of two eigenvectors an eigenvector? Yes? No?

AUDIENCE: No.

PROFESSOR: No, correct. That's not necessarily true. If you have two eigenvectors, they may have different eigenvalues, so things don't necessarily work out. An eigenvector plus another eigenvector is in general not an eigenvector.

So you have here, for example, A f1 equals a1 f1, and A f2 equals a2 f2. Then A on f1 plus f2 would be a1 f1 plus a2 f2, and that's not equal to something times f1 plus f2. It would have to be something times f1 plus f2 to be an eigenvector. So this is not necessarily an eigenvector.

And it doesn't help to put a constant in front of here. Nothing helps. There's no way to construct an eigenvector from two eigenvectors by adding or subtracting.

The size of the eigenvector is not fixed either. If f is an eigenvector, then 3 times f is also an eigenvector. And we call it the same eigenvector. Nobody would call it a different eigenvector. It's really the same.

OK, so how does that relate to physics? Well, we've seen it here already: the one operator that we've learned to work with is the momentum operator, and it has those eigenfunctions.

So, back to physics. We have other operators. We have the p operator—that's good. We have the x operator—that's nice; it's multiplication by x. And why do we use it? Because, for example, you have the energy operator. And what is the energy operator? The energy operator is just the energy that you've always known, but thought of as an operator.

So how do we do that? Well, the energy of a particle we've written as p squared over 2m plus v of x—the momentum squared over 2m plus v of x. So the energy operator is: hat here, hat there.

And now it becomes an interesting object. This energy operator will be called E hat. It acts on functions. It is not a number. The energy is a number, but the energy operator is not a number—far from a number, in fact. The energy operator is minus h bar squared over 2m, d second dx squared.

Why that? Well, because p was h bar over i d dx as an operator. So this sort of arrow here is sort of the introduction, but after a while you just say p hat is h bar over i d dx, end of story. It's not like a double arrow; it's just what it is, that operator. That's what we call it. So when we square it, the i squares and gives the minus, the h bar squares, and d dx applied twice is the second derivative.

And here we get v of X hat, which is your good potential, whatever potential you're interested in, in which, whenever you see an x, you put an X hat. And now this is an operator. So you see, this is not a number, not a function—it's just an operator. The energy operator has this piece, the operator v of X hat.

Now, what is this v of x here as an operator? v of x as an operator is just multiplication by the function v of x. You see, you have here that the operator x on f of x works like that. I could have written the operator X hat to the n. What would it be? Well, if I act on the function, this is a lot of X hats acting on the function. Let the first one act: you get x times f of x. Then the second—that's another x, another x. So this is just x to the n times f of x. So lots of X hats—X hat to the 100th on a function is just x to the 100th times the function. So v of X hat on a function is just v of the number x times the function. It's just like this operator O sub h, in which you multiply by a function.

So please, I hope it is completely clear what this means as an operator. You take the wave function, take two derivatives, and add the product of the wave function times v of plain x. So I'll write it here, maybe. It's so important.

E hat on psi of x is therefore minus h bar squared over 2m, d second dx squared of psi of x, plus just plain v of x times psi of x. That's what it does. That's an operator.
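In symbols, the action of the energy operator just described is:

```latex
\hat{E}\,\psi(x) \;=\; -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi(x)}{\partial x^2} \;+\; V(x)\,\psi(x) .
```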

And so it goes for these operators in general. Math interlude—is it over? Not quite. Well, no—yes. Allan said at this moment it's over, once you introduce this. I'll say something more about it later, but it's going to be over now.

Our list of three continues then with four. Four: to each observable we have an associated operator. So for momentum, we have the operator p hat. For position we have the operator x hat. And for energy we have the operator E hat. And these are examples of operators; a generic operator A hat could be any of those.

And there may be more observables, depending on the system you're working with. If you have particles on a line, there are not too many more observables at this moment. If you have a particle in general, you can have angular momentum—that's an interesting observable. There can be others.

So for any of those, our definition is just like it was for momentum. The expectation value of the operator is computed by doing what you did for momentum: you act with the operator on the wave function here, multiply by the complex conjugate wave function, and integrate, just like you did for momentum.

This is going to be the value that you expect: after many trials on this wave function, you would expect the measured values to exhibit a distribution whose expectation value, the mean, is given by this.

Now, there are other definitions. One definition that has already been mentioned is the uncertainty of the operator on the state psi. The uncertainty is computed by taking the square root of the expectation value of A squared minus the expectation value of A, as a number, squared.

Now, for the expectation value of A squared: simply, here, instead of A you put A squared, so you've got A squared here. Unless the wave function is very special, this is very different from—in fact bigger than—the square of the expectation value of A. So this is a number, and it's called the uncertainty. And that's the uncertainty of the uncertainty principle.
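Written out, the uncertainty being defined is:

```latex
\Delta A \;=\; \sqrt{\,\langle \hat{A}^{\,2} \rangle \;-\; \langle \hat{A} \rangle^{2}\,}\; .
```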

So for operators we need another observation, one that comes from matrices and is going to be crucial for us: the observation that operators don't necessarily commute. And we'll do the most important example of that.

So we'll try to see if the operators associated with momentum and position commute. And what do we mean by commute or don't commute? Whether the order of multiplication matters.

Now, we talked about matrices at the beginning, and we said matrices act on vectors to give you vectors. So do they commute? Well, matrices don't commute in general. The order matters for matrix multiplication. So for these operators we're inventing here for physics, we should check whether the order matters as well.

So, commutation. Let's try to see: is the operator p hat times x hat equal to the operator x hat times p hat? This is a very good question. These are two operators that we've defined, and we just want to know if the order matters or if it doesn't matter.

So how can I check it? I cannot just check it like this, because for operators it's only clear what they do when they act on functions. So the only thing that I can do is test whether these two things acting on functions give the same result. So I'm going to act with this on the function f of x, and I'm going to act with this on the function f of x.

Now, what do I mean by acting with p hat times x hat on the function f of x? By definition, you act first with the operator that is next to the f, and then with the other. So this is p hat on the function x hat times f of x. And here I would have x hat on the function p hat f of x.

See, if you have a series of matrices, m1, m2, m3 acting on a vector, what do you mean? Act with this on the vector, then with this on the vector, then with this. That's multiplication. So we're doing that.

So let's evaluate. What is the x operator on f of x? It's x times f of x. So the first one is p hat on x times f of x—that's what the x operator on the function is. And here, what is this? It's x hat on what p hat gives, so I have here x hat on h bar over i d dx of f.

Let's go one more step. The first is h bar over i, d dx now of this function, x f of x. And here I have just the function x times this function: h bar over i, x df dx. Well, are these the same? No, because this d dx here is not acting only on f, like there; it's also acting on the x. So this gives you two terms: one extra term where the d dx acts on the x, and then one term that is equal to this. So you don't get the same.

So you get from the first one h bar over i, f of x—when you differentiate the x—plus h bar over i, x df dx. So you don't get the same. So when I subtract them—when I do x p minus p x acting on the function f of x—what do I get?

Well, I put them in this order, x before the p. It doesn't matter which one you take, but many people like this one. Well, these terms cancel, and I get minus this thing. So I get minus h bar over i, f of x—which is i h bar f of x. Wow.

You've got something very strange: x times p minus p times x gives you a number—an imaginary number, even worse—times f of x. So from this we write the following. We say, look, operators are defined by their action on functions, but for any function, the only effect of x p minus p x—which we call the commutator of x with p; this bracket of two things, A B, is defined to be A B minus B A, and it's called the commutator—acting on f of x is to give you i h bar times f of x. So our kind of silly operator that does nothing has appeared here. Because I can now say that the commutator of x hat with p hat is equal to i h bar times the unit operator.
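The whole computation, collected in one line for an arbitrary function f of x:

```latex
[\hat{x},\hat{p}]\,f
\;=\; \hat{x}\,\hat{p}\,f - \hat{p}\,\hat{x}\,f
\;=\; x\,\frac{\hbar}{i}\,\frac{df}{dx} \;-\; \frac{\hbar}{i}\,\frac{d}{dx}\bigl(x f\bigr)
\;=\; -\frac{\hbar}{i}\,f
\;=\; i\hbar\, f ,
\qquad\text{so}\qquad
[\hat{x},\hat{p}] \;=\; i\hbar\,\hat{1} .
```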

Apart from the Schrodinger equation, this is probably the most important equation in quantum mechanics. It's the statement that x and p are incompatible operators, as you will see later. They don't commute; their order matters. What that's going to mean is that when you measure one, you have difficulties measuring the other. They interfere. They cannot be measured simultaneously. All those things are encapsulated in a very lovely mathematical formula, which says that this is the way these operators work. Any questions? Yes?

AUDIENCE: When x-- the commutator of x and p is itself an operator, right?

PROFESSOR: Right.

AUDIENCE: So is that what we're saying? When we had the operators before, we can't simply just cancel the f of x. I mean, we're not really canceling it, but is it just because i h bar is the only eigenvalue of the operator?

PROFESSOR: Well, basically what we've shown by this calculation is that this operator, this combination, is really the same as i h bar times the identity operator. That's all we've shown—that this particular combination is proportional to the identity operator.

Now, this equation is very deep. In fact, that's the way Heisenberg invented quantum mechanics. He called it matrix mechanics, because he knew that operators were related to matrices. It's a beautiful story how he came up with this idea. It's very different from what we're doing today—we're going to follow Schrodinger today.

But basically his analysis led very quickly to this idea. And this is deep. Why is it deep? It depends who you ask. If you ask a mathematician, they would probably tell you this equation is not deep—it's a scary equation.

And why is it scary? Because whenever mathematicians see operators, they want to write matrices. So the mathematician—you show him this equation—will say, OK, let me try to figure out which matrices you're talking about. And this mathematician will start doing calculations with two by two matrices and will say, no, I can't find two by two matrices that behave like these operators. I can't find three by three matrices either. Or four by four. Or five by five. And he finds that no matrix really can do that, unless the matrix is infinite dimensional—infinite by infinite matrices. So that's why it's very hard for a mathematician.

This is the beginning of quantum mechanics. This looks like a trivial equation, and mathematicians get scared by it. Then you show them what, for physicists, is angular momentum. Those operators are like this, and there's a complicated structure: the three components of angular momentum have this commutation relation among themselves, with an h bar here as well.

Complicated—three operators, and they mix with each other. Show it to a mathematician, and he starts laughing at you. He says this one is the easy case; the simple-looking one is the complicated one. It's very strange. But the reason this one is easier is that the mathematician goes and, after five minutes, comes back to you with three by three matrices that satisfy these relations—and four by four that satisfy them, and five by five, and two by two, all of them; we can calculate all of them for you. And over here there weren't any. But that one requires infinite dimensional matrices, and it's very surprising, very interesting, and very deep.

All right, so we move on a little bit more to other observables. After this, we have more general observables, so let's talk a little about them. That's another postulate of quantum mechanics that continues this list: postulate number five.

So, upon measuring an observable A associated with the operator A hat, two things happen. You measure this quantity—it could be momentum, could be energy, could be position, you name it. The measured value must be a number, and it's one of the eigenvalues of A hat.

So, about those eigenvalues—remember the definition of the eigenvalues; it's there. I said there may be many, but whenever you measure, the only possibility is that you get one of these numbers. So if you measure the momentum, you must get some h bar k, for example. So for observables we have an associated operator, and the measured values are the eigenvalues.

Now these eigenvalues, in order to be observable, should be real numbers. And we said, oh, they can be complex. Well, we will limit the kind of observables to operators that have real eigenvalues; these are going to be called, later on, Hermitian operators. At this moment the notes don't mention them, so as not to get you confused.

So anyway, special operators that have real eigenvalues. So we mention here: they have to be real. And then the second thing, which is an even stranger thing that happens, is something you've already seen in examples: after you measure, the whole wave function goes into the state which is the eigenfunction of the operator.

So after measurement, the system collapses into psi a. The measured value is one of the eigenvalues a of A hat, and the system collapses into psi a. So psi a is such that A hat psi a is a psi a. So this is the eigenfunction with the eigenvalue a that you measured.

So after you measure the momentum and you find that it's h bar k, the wave function is the wave function of momentum h bar k. If at the beginning it was a superposition of many, as Fourier told you, then after measuring, if you get one component of momentum, that's all that is left of the wave function. It collapses.

This collapse is a very strange thing, and is something about quantum mechanics that people are a little uncomfortable with, and try to understand better, but surprisingly nobody has understood it better after 60 years of thinking about it. And it works very well.

It's a very strange thing. Because, for example, if you have a wave function that says your particle can be anywhere, after you measure where it is, the whole wave function becomes a delta function at the position that you measured. So everything in the wave function, when you do a measurement, basically collapses, as we'll see.

Now, for example, let's do an example: position. So you have a wave function psi of x. You measure and find the particle at x0. Measure what? I should be clear: measure position. So we said two things: the measured value is one of the eigenvalues of the operator, and after measurement the system collapses to the eigenfunction.

Now here we really need a little of your intuition. A position eigenstate is a particle localized at one place. What is the best function associated to a position eigenstate? It's a delta function—the function that says it's at some point and nowhere else. So the eigenfunction is delta of x minus x0. As a function of x, it peaks at x0 and is 0 everywhere else. And when you find the particle at x0, this is the wave function—the wave function must be proportional to this quantity.

Now you can't normalize this wave function. It's a small complication, but we shouldn't worry about it too much. Basically you really can't localize a particle perfectly, so that's the little problem with this wave function. You've studied how you can represent delta functions as limits and probably intuitively those limits are the best things.

But this is the wave function, so after you measure the system, you go into an eigenstate of the operator. Is this an eigenstate of the x operator? What a strange question. But it is.

Look, if you put the x operator on delta of x minus x0, what is it supposed to do? It's supposed to multiply by x. So it's x times delta of x minus x0. If you have a little experience with delta functions, you know that this function is 0 everywhere except when x is equal to x0, so this x can be turned into x0. It just never contributes at any other place.

This x really can be turned into x0, giving x0 times delta of x minus x0. Because delta functions are really used to do integrals, and if you do the integral of this function, you will see that it gives you the same value as the integral of that one.

So there you have it. The operator acting on the eigenfunction is a number times this. So these are indeed eigenfunctions of the x operator. And what you measured was an eigenvalue of the x operator. Eigenvalue of x. And this is an eigenfunction of x.
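In symbols, the statement just verified is:

```latex
\hat{x}\,\delta(x - x_0) \;=\; x\,\delta(x - x_0) \;=\; x_0\,\delta(x - x_0) .
```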

So we could do the same with momentum: eigenvalues and eigenfunctions; we've already seen those. Now we'll go to the sixth postulate. The last postulate that we'll want to talk about is the one about general operators and general eigenfunctions, and what happens with them.

So let's take now our operator A hat and the eigenfunctions that can be found for it. So, six: given an observable A hat and its eigenfunctions psi a of x, where a runs over many values.

OK, so let's consider this case. Now, eigenfunctions of an operator are very interesting objects. You see, the eigenfunctions of momentum were of this form, and they allow you to expand, via Fourier, any wave function as a superposition of these things. Fourier told you you can expand any function in eigenfunctions of the momentum, and the result is more general.

For observables in general, you can expand arbitrary functions in terms of the eigenfunctions. Now, for that, remember, an eigenfunction is only determined up to scale: you multiply it by three, and it's still an eigenfunction.

So people like to normalize the eigenfunctions nicely. You construct them like this, and you normalize them nicely. So how do you normalize them? You normalize them by saying that the integral over x of psi a star of x, psi b of x, is going to be what?

OK, basically what you want is that these eigenfunctions be orthogonal—each one orthogonal to the next. So you want this to be 0 when the two eigenfunctions are different. And when they are the same, you want them to be just like wave functions, with the total integral of psi squared equal to 1. So what you put here is delta a b.

Now this is something that you can always do with eigenfunctions. It's proven in mathematics books. It's not all that simple to prove, but this can always be done. And when we need examples, we'll do it ourselves.

So, given an operator for which you have eigenfunctions like that, two things happen. One: you can expand any arbitrary wave function psi of x as a sum—or sometimes an integral; some people like to write this and put an integral on top of the sum; you can write it whichever way you want, it doesn't matter—of coefficients times the eigenfunctions.

So just like any wave could be written as Fourier coefficients times Fourier functions, any state can be written as a superposition of these things. So that's one.

And two: the probability of measuring A hat and getting a—with a one of the particular values that you can get—is given by the square of this coefficient, C a squared. So this is P, the probability, to measure in psi and to get a. Actually, let's put an a0 there.

So here we go. So it's a very interesting thing. Basically you expand the wave function in terms of these eigenfunctions, and these coefficients give you the probabilities of measuring these numbers.
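Collecting the two statements (and using the orthonormality condition above to extract the coefficients):

```latex
\psi(x) \;=\; \sum_a C_a\,\psi_a(x),
\qquad
C_a \;=\; \int dx\;\psi_a^*(x)\,\psi(x),
\qquad
P(a_0) \;=\; |C_{a_0}|^2 .
```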

So we can illustrate that again with the delta functions, and we'll do it quickly, because we have to get to the punchline of this lecture, the Schrodinger equation. So what do we have? Well, let me take the operator x as the example. For the operator x, the eigenfunctions are delta of x minus x0, for all x0. These are the eigenfunctions.

And we'll write a sort of trivial equation, but it illustrates what's going on: psi of x as a superposition—an integral over x0—of psi of x0 times delta of x minus x0.

OK, first let's check that this makes sense. Here we're integrating over x0; x0 is the variable. This thing shoots and fires whenever x0 is equal to x. Therefore the whole result of that integral is psi of x. It's a little funny how it's written, because you have delta of x minus x0, which is the same as delta of x0 minus x—it's just the same thing. And you integrate over x0, and you get just psi of x.

But what have you achieved here? You've achieved the analogue of this equation, in which these delta functions are the psi a, these values psi of x0 are the coefficients C a, and this integral is the sum. So there you go: any wave function can be written as a sum of coefficients times the eigenfunctions of the operator.

And what is the probability to find the particle at x0? Well, it's from here: the coefficients squared, the C a squared. That's exactly what we had before. So this is giving us basically what we want.

So this brings us to the final stage of this lecture in which we have to get the time evolution finally. So how does it happen? Well it happens in a very interesting way. So maybe I'll call it seven, Schrodinger equation.

So, as with any fundamental equation in physics, there's experimental evidence, but eventually you have to take a conceptual leap. Experimental evidence doesn't tell you the equation; it suggests the equation. And it tells you that probably what you're doing is right.

So what we're going to do now is collect some of the evidence we had and look at an equation, and then just have a flash of inspiration, change something very little, and suddenly that's the Schrodinger equation. Allan told me, in fact, that students sometimes are disappointed that we don't derive the Schrodinger equation. But we can't quite derive it mathematically.

But you also don't derive Newton's equation, F equals ma. You have an inspiration, you get it—Newton got it—and then you use it and you see it makes sense. It has to be a sensible equation, and you can test very quickly whether your equation is sensible. But you can't quite derive it.

In 8.05 we come a little closer to deriving the Schrodinger equation, where we say that unitary time evolution—something that I haven't explained what it is—implies the Schrodinger equation. And that's a mathematical fact: you can begin with unitary time evolution, define it, and derive the Schrodinger equation.

But that's just saying that you've substituted for the Schrodinger equation the statement that there is unitary time evolution. The Schrodinger equation really comes from something a little deeper than that. Experimentally it comes from something else.

So how does it come? Well, you've studied some of the history of this subject, and you've seen that Planck postulated quantized energies in multiples of h bar omega. And then came Einstein and said, look, in fact the energy of a photon is h bar omega. And the momentum of the photon is h bar k.

So all these people, starting with Planck and then Einstein, understood what the photon is. For photons, you have E equal to h bar omega, and the momentum equal to h bar k. I write them as vectors, because the momentum is a vector, but we also write p equal h bar k, assuming you move in just one direction. And that's the way it's been written.

So this is the result of much work, beginning with Planck, Einstein, and Compton. You may recall Einstein said in 1905 that there seemed to be this quantum of light that carries energy h bar omega. Planck didn't quite like that, and people were not all that convinced. Experiments were done by Millikan in 1915, and people were still not quite convinced. And then came Compton and did Compton scattering, and then people said, yeah, these seem to be particles—no way out of that. And they satisfy such a relation.

Now, there was something about this that was quite nice: these photons are associated with waves. And that was not too surprising, because people understood that electromagnetic waves are waves that correspond to photons. So you can also see that this says that E and p together equal h bar times omega and k, as an equation between vectors. The E part is the first equation, and the p part is the second equation.

And this is actually a relativistic equation. It's a wonderful relativistic equation, because energy and momentum form what is called in relativity a four-vector—this is a little aside on relativity. The index mu runs from 0 to 3, just like the x mu, which are t and x: x0 is t, x1 is x, x2 is y, x3 is z. These are four-vectors. And E together with p is a four-vector, and omega together with k is a four-vector. This all seemed quite pretty, and this was associated to photons.

But then came De Broglie. And De Broglie had a very daring idea: that even though this was written for photons, it was true for particles as well—for any particle. De Broglie said: good for particles, all particles. And these particles are really waves. So he wrote: psi of x and t, equal to a wave associated to a matter particle, and it would be e to the i, kx minus omega t. That's a wave.

And you know that this wave has momentum p equal h bar k. If k is positive—look at this sign; if the sign is like that, then for positive k this is a wave that is moving to the right. So, p being h bar k: if k is positive, p is positive, it's moving to the right; this is a wave moving to the right, and it has this momentum.

So it should also have an energy. And this is relativistic—as Compton's results showed, it all comes from photons. So if the momentum is given by that, the energy must also be given by a similar relation.

In fact, he basically said: look, you must have the energy equal to h bar omega, and the momentum, therefore, equal to h bar k. And I will sometimes erase these things.

So what happens with this? Well, momentum equal to h bar k—we've already understood this, with the momentum operator being h bar over i d dx. The fact that these two must go together and be true for particles was De Broglie's insight, and the connection to relativity.

Now here we have this. So now we just have to try to figure out what we could do for the energy. Could we have an energy operator? What would the energy operator have to do? Well, if the energy operator is supposed to give us h bar omega, the only thing it could be is that the energy operator is i h bar d dt.

Why? Because you go again to the wave function and act with i h bar d dt, and what do you get? i h bar d dt on the wave function: you take the d dt, you get minus i omega times the whole wave function, and i times minus i is one. So this is equal to h bar omega times the wave function.

So here it is. This is the operator that realizes the energy, just like this is the operator that realizes the momentum. You could say these are the main relations that we have.
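The check just performed on the plane wave, written out:

```latex
i\hbar\,\frac{\partial}{\partial t}\, e^{i(kx - \omega t)}
\;=\; i\hbar\,(-i\omega)\, e^{i(kx - \omega t)}
\;=\; \hbar\omega\, e^{i(kx - \omega t)} .
```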

So if you have this wave function, it corresponds to a particle with momentum h bar k and energy h bar omega. So now we write this: for this psi that we have here, h bar over i d dx of psi of x and t is equal to the value of the momentum times psi of x and t. That is something we've seen.

But then there's a second one. For this psi, we also have that i h bar d dt of psi of x and t is equal to the energy of that particle times psi of x and t, because the energy of that particle is h bar omega.

And look, this is familiar. And here the t plays no role, but here the t plays a role. And this is prescribing how a wave function of energy E evolves in time. So you're almost there. You have something very deep in here. It's telling you, if you know the wave function and it has energy E, this is how it looks later. You can take this derivative and solve this differential equation.

Now this differential equation is kind of trivial because E is a number here. But if you know that you have a particle with energy E, that's how it evolves in time.

So then came Schrodinger and looked at this equation: i h bar d dt of psi of x and t equals E psi of x and t. This is true for any particle that has energy E. How can I make out of this a full equation? Because maybe I don't know what the energy E is—the energy E might be anything in general. What can I do?

Very simple. One single replacement in that equation. Done. It's over. That's the Schrodinger equation. It's the energy operator that we introduced before. Inspiration. Change E to E hat. This is the Schrodinger equation.

Now, what has really happened here? This equation was trivial for a wave function that represented a particle with energy E. If this is instead the energy operator, it's not so easy anymore. Because remember, the energy operator, for example, was p squared over 2m plus v of x, and this was minus h bar squared over 2m, d second dx squared, plus v of x, acting on wave functions.

So now you've got a really interesting equation, because you don't assume that the energy is a number, because you don't know it. In general, if the particle is moving in a complicated potential, you don't know what are the possible energies. But this is symbolically what must be happening, because if this particle has a definite energy, then this energy operator gives you the energy acting on the function, and then you recover what you know is true for a particle of a given energy.

So in general the Schrodinger equation is a complicated equation. Let's write it now completely. So this is the Schrodinger equation, and if we write it out completely, it will read: i h bar d psi dt is equal to minus h bar squared over 2m, d second dx squared of psi, plus v of x times psi, with psi of x and t everywhere.
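In symbols:

```latex
i\hbar\,\frac{\partial \psi(x,t)}{\partial t}
\;=\;
-\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi(x,t)}{\partial x^2}
\;+\; V(x)\,\psi(x,t) .
```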

So it's an equation—a differential equation. It's first order in time, and second order in space. So let me say three things about this equation and finish. First, it requires complex numbers. If psi were real, everything on the right-hand side would be real, but the i would spoil it. So complex numbers have to be there.

Second, it's a linear equation. It satisfies superposition. So if one wave function satisfies the Schrodinger equation, and another wave function does, the sum does.

Third, it's deterministic. If you know psi at x at time equals 0, you can calculate psi at any later time, because this is a first order differential equation in time. This equation will be the subject of all that we'll do in this course. So that's it for today. Thank you.
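A minimal numerical sketch of these three points (my own illustration, not part of the lecture): the wave function is complex, the evolution is linear, and psi at t = 0 determines psi later. NumPy/SciPy, the grid, the harmonic potential, and the units hbar = m = 1 are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

hbar, m = 1.0, 1.0
N, L = 200, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Energy operator E = -(hbar^2/2m) d^2/dx^2 + V(x) as a finite-difference matrix
lap = (np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
V = 0.5 * x**2                              # example potential: harmonic oscillator
H = -(hbar**2 / (2 * m)) * lap + np.diag(V)

# Initial wave function: a normalized Gaussian at t = 0
psi0 = np.exp(-x**2)
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0)**2) * dx)

# Deterministic, linear evolution: psi(t) = exp(-i H t / hbar) psi(0)
t = 1.0
U = expm(-1j * H * t / hbar)
psi_t = U @ psi0                            # complex wave function at time t

print(np.sum(np.abs(psi_t)**2) * dx)        # ~1.0: total probability is conserved
```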

[APPLAUSE]