Lecture 3: The Wave Function

Description: In this lecture, Prof. Adams introduces wavefunctions as the fundamental quantity in describing quantum systems. Basic properties of wavefunctions are covered. Uncertainty and superposition are reiterated in the language of wavefunctions.

Instructor: Allan Adams

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: So today, before we get into the meat of today's lecture, Matt has very kindly-- Professor Evans has very kindly agreed to do an experiment.

Yeah, so for those of you who are in recitations, both he and Barton talked about polarization in recitation last week. And Matt will pick it up from there.

MATTHEW EVANS: So back to the ancient past-- this was a week ago. We had our hyper-intelligent monkeys that were sorting things. It all seemed very theoretical. And in recitation, I said things about polarizers. And I said, look, if we use polarizers, we can do exactly the same thing as these monkeys. We just need to set up a little polarization experiment and the results are identical. You can use the one to figure out the other.

But I didn't have this or a nice polarizer ready then to give a demo, so here we go. What I'm going to show you is that, if we start with something polarized here with all white-- and right now I have all vertical polarization here-- and if I just put on this other box there, which is going to be another polarizer, if I put it the same way, this is all of our white electrons coming through all white. See? It doesn't really do much.

And if I look at the black output over here of the second color sorting box, that's the same as turning my polarizer 90 degrees, so nothing comes out black. So if we remove this guy from the middle, you have just exactly what you'd expect. You sort here, you have white, and you get all white out. Great. Everyone thought that was easy. We all had that figured out.

This box got thrown in the center here and it became sort of confusing, because you thought, well, they were white-- I'm going to throw my box in the middle here-- that's this guy at 45 degrees. And then if I throw this guy on the end again, the idea was, well, they were all white here, so maybe this guy just picked out the soft ones from among the white ones. And now we have white and soft. And it should still all be white, right?

So I put this guy on up here. They should all come out but they sort of don't. And if you say, well, are they black? Well, no, they're not really black, either. They're some sort of strange combination of the two.

All right, so that's this experiment done in polarizers. But let me just play the polarizer trick a little bit, because it's fun. So this is if I say, vertical polarization and how many of them come out horizontal? So here I'm saying, white, and how many of them will come out black? That's the analogy. The answer is none of them.

And strangely, if I take this thing, which seems to just attenuate-- this is our middle box here-- and I just stuff it in between them, I can get something to come out even though I still have crossed polarizers on the side. So you can see the middle region is now brighter and you can still see the dark corners there of the crossed polarizers. And as I turn this guy around, I can make that better or worse. The maximum is somewhere right there, and then it goes off again.

So this is a way of understanding our electron-sorting, hyper-intelligent monkeys in terms of polarizations. And here it's just a vector projected on another vector projected on another vector-- something everybody knows how to do. So here's the polarization analogy of the Stern-Gerlach experiment.

PROFESSOR: Awesome. So the polarization analogy for interference effects in quantum mechanics is a canonical one in the texts of quantum mechanics. So you'll find lots of books talking about this. It's a very useful analogy, and I encourage you to read more about it. We won't talk about it a whole lot more, but it's a useful one.

All right, before I get going, any questions from last lecture? Last lecture was pretty much self-contained. It was experimental results. No, nothing? All right.

The one thing that I want to add to the last lecture-- one last experimental observation. I glossed over something that's kind of important, which is the following. So we started off by saying, look, we know that if I have a ray of light, it's an electromagnetic wave, and it has some wavelength lambda.

And yet the photoelectric effect tells us that, in addition to having the wavelength lambda, it has a frequency as well, a frequency in time. And the photoelectric effect suggested that the energy is proportional to the frequency. And we write this as E equals h nu, where h bar is equal to h upon 2 pi and omega is equal to 2 pi nu. So this is just the angular frequency, rather than the number-per-time frequency. And h bar is the reduced Planck constant. So I'll typically write h bar omega rather than h nu, because these two pi's will just cause us endless pain if we don't use the bar.

Anyway, so to an electromagnetic wave, we have a wavelength and a frequency and the photoelectric effect led us to predict that the energy is linearly proportional to the frequency, with the linear proportionality coefficient h bar-- Planck constant-- and the momentum is equal to h upon lambda, also known as-- I'm going to write this as h bar k, which is equal to h upon lambda, where here, again, h bar is h upon 2 pi. And so k is equal to 2 pi upon lambda. So k is called the wave number and you should have seen this in 8.03.
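
[In standard notation, the relations just described are:]

```latex
\[
E = h\nu = \hbar\omega, \qquad
p = \frac{h}{\lambda} = \hbar k, \qquad
\hbar = \frac{h}{2\pi}, \quad
\omega = 2\pi\nu, \quad
k = \frac{2\pi}{\lambda} .
\]
```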

So these are our basic relations for light. We know that light, as an electromagnetic wave, has a frequency and a wavelength-- or a wave number, an inverse wavelength. And the claim of the photoelectric effect is that the energy and the momenta of that light are thus quantized, that light comes in chunks. So it has a wave-like aspect and it also has properties that are more familiar from particles.

Now, early on shortly after Einstein proposed this, a young French physicist named de Broglie said, well, look, OK, this is true of light. Light has both wave-like and particle-like properties. Why is it just light? The world would be much more parsimonious if this relation were true not just of light, but also of all particles.

I am thus conjecturing, with no evidence whatsoever, that, in fact, this relation holds not just for light, but for any object. Any object with momentum p has associated to it a wavelength or a wave number, which is p upon h bar. Every object that has energy E has associated with it a wave with frequency omega.

So for the electrons that we send through the Davisson-Germer apparatus, which are sent in with definite energy, there must be a frequency omega and a wavelength lambda associated with them. And what we saw from the Davisson-Germer experiment was experimental confirmation of that prediction-- that electrons have both particulate and wave-like features simultaneously.
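
[A rough numerical check, not part of the lecture itself: for electrons with kinetic energy of about 54 eV-- the value usually quoted for the Davisson-Germer experiment-- the de Broglie wavelength works out to]

```latex
\[
\lambda = \frac{h}{p} = \frac{h}{\sqrt{2 m_e E}}
= \frac{6.63\times 10^{-34}\ \mathrm{J\,s}}
       {\sqrt{2\,(9.11\times 10^{-31}\ \mathrm{kg})(54)(1.60\times 10^{-19}\ \mathrm{J})}}
\approx 1.7\times 10^{-10}\ \mathrm{m},
\]
```

comparable to atomic spacings in a crystal-- the reason diffraction off a crystal surface can reveal the wave-like behavior.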

So these relations are called the de Broglie relations or "de BROG-lee"-- I leave it up to you to decide how to pronounce that. And those relations are going to play an important role for us in the next few lectures. I just wanted to give them a name and a little context.

This is a good example of parsimony and elegance-- the theoretical elegance leading you to an idea that turns out to be true of the world. Now, that's a dangerous strategy for finding truth. Boy, wouldn't it be nice if--? Wouldn't it be nice if we didn't have to pay taxes but we also had Medicare?

So it's not a terribly useful guide all the time, but sometimes it really does lead you in the right direction. And this is a great example of physical intuition, wildly divorced from experiment, pushing you in the right direction. I'm making it sound a little more shocking than-- well, it was shocking. It was just shocking.

OK, so with that said, let me introduce the moves for the next few lectures. For the next several lectures, here's what we're going to do. I am not going to give you experimental motivation. I've given you experimental motivation. I'm going to give you a set of rules, a set of postulates.

These are going to be the rules of quantum mechanics. And what quantum mechanics is for us is a set of rules to allow us to make predictions about the world. And these rules will be awesome if their predictions are good. And if their predictions are bad, these rules will suck.

We will avoid bad rules to the degree possible. I'm going to give you what we've learned over the past 100 years-- wow-- of developing quantum mechanics. That is amazing. Wow. OK, yeah, over the past 100 years of developing quantum mechanics. And I'm going to give them to you as a series of postulates and then we're going to work through the consequences, and then we're going to spend the rest of the semester studying examples to develop an understanding for what the rules of quantum mechanics are giving you.

So we're just going to scrap classical mechanics and start over from scratch. So let me do that. And to begin, let me start with the definition of a system. And to understand that definition, I want to start with classical mechanics as a guide. So in classical mechanics-- let's think about the easiest classical system you can-- just a single particle sitting somewhere. In classical mechanics of a single particle, how do you specify the configuration, or the state-- just different words for the same thing-- how do you specify the configuration or state of the system?

AUDIENCE: By position and momentum.

PROFESSOR: Specify the position and momentum, exactly. So in classical mechanics, if you want to completely specify the configuration of a system, all you have to do is give me x and p for my particle. And if you tell me this, I know everything. If you know these numbers, you know everything.

In particular, what I mean by saying you know everything is that, if there's anything else you want to measure-- the energy, for example. The energy is just some function of the position and momentum. And if you know the position and momentum, you can unambiguously calculate the energy. Similarly, the angular momentum, which is a vector, you can calculate it if you know x and p-- which is just r cross p.

So this gives you complete knowledge of the system. There's nothing more to know if you know that data.

Now, there are certainly still questions that you can't answer given knowledge of x and p. For example, are there 14 invisible monkeys standing behind me? I'm here. I'm not moving. Are there 14 invisible monkeys standing behind me? You can't answer that. It's a stupid question, right?

OK, let me give you another example. The electron is at some position x with momentum p. Is it happy? Right, so there are still questions you can't answer. The point is, complete knowledge of the system means that for any physically observable question-- any question that could be meaningfully turned into an experiment-- the answer is contained in knowing the state of the system.

But this can't possibly be true in quantum mechanics, because, as you saw in the problem set and as we've discussed previously, there's an uncertainty relation which says that your knowledge-- or your uncertainty, rather, in the position of a particle, quantum mechanically-- I'm not even going to say quantum mechanically. I'm just going to say the real world.

So in the real world, our uncertainty in the position of our point-like object and our uncertainty in the momentum is always greater than or roughly equal to something that's proportional to Planck's constant. You can't be arbitrarily confident of the position and of the momentum simultaneously.

You worked through a good example of this on the problem set. We saw this in the two-slit experiment and the interference of electrons. This is something we're going to have to deal with.

So as a consequence, you can't possibly specify both the position and the momentum of a system with confidence. You can't do it. This was a myth. It was a good approximation-- turned out to be false.

So the first thing we need is to specify the state, the configuration of a system. So what specifies the configuration of a system?

And so this brings us to the first postulate. The configuration, or state, of a system-- and here again, just for simplicity, I'm going to talk about a single object-- of a quantum object is completely specified by a single function, a wave function, which I will denote generally psi of x, which is a complex function. The state of the quantum object is completely specified once you know the wave function of the system, which is a function of position.

Let me emphasize that this is a first pass at the postulates. What we're going to do is go through the basic postulates of quantum mechanics, then we'll go through them again and give them a little more generality. And then we'll go through them again and give them full generality. That last pass is 8.05.

So let me give you some examples. Let me just draw some characteristic wave functions. And these are going to turn out to be useful for us.

So for example, consider the following function. So here is 0 and we're plotting as a function of x. And then plotting the real part of psi of x. So first consider a very narrowly supported function. It's basically 0 everywhere, except it has some particular spot at what I'll call x1.

Here's another wave function-- 0. It's basically 0 except for some special spot at x2. And again, I'm plotting the real part of psi. And I'm plotting the real part of psi because A, psi is a complex function-- at every point it specifies a complex number. And B, I can't draw complex numbers. So to keep my head from exploding, I'm just plotting the real part of the wave function. But you should never forget that the wave function is complex.

So for the moment, I'm going to assume that the imaginary part is 0. I'm just going to draw the real parts. So let me draw a couple more examples. What else could be a good wave function? Well, those are fine. What about-- again, we want a function of x and I'm going to draw the real part. And another one. So this is going to be a perfectly good wave function.

And let me draw two more. So what else could be a reasonable wave function? Well-- this is harder than you'd think. Oh, God. OK, so that could be the wave function, I don't know. That is actually my signature. My wife calls it a little [INAUDIBLE].

OK, so here's the deal. Psi is a complex function. Psi also needs to not be a stupid function. OK, so you have to ask me-- look, could it be any function? Any arbitrary function? So this is going to be a job for us. We're going to define what it means to be a not-stupid function.

Well, this is a completely reasonable function-- it's fine. This is a reasonable function. Another reasonable function. Reasonable. That's a little weird, but it's not horrible. That's stupid.

So we're going to have to come up with a good definition of what not stupid means.

So fine, these are all functions. One of them is multivalued and that looks a little worrying, but they're all functions. So here's the problem. What does it mean?

So postulate 2-- The meaning of the wave function is that the probability that upon measurement the object is found at the position x is equal to the norm squared of psi of x. If you know the system is ascribed to the wave function psi, and you want to look at point x, you want to know with what probability will I find the particle there, the answer is psi squared.

Notice that this is a complex number, but absolute value squared, or norm squared, of a complex number is always a real, non-negative number. And that's important because we want our probabilities to be real, non-negative numbers. Could be 0, right? Could be 0 chance of something. Can't be negative 7 chance.

Incidentally, there also can't be probability 2. So that means that the total probability had better be normalized. So let me just say this in words, though, first.

So P, which is the norm squared of psi, determines the probability-- and, in particular, the probability density-- that the object in state psi, in the state given by the wave function psi of x, will be found at x. So there's the second postulate.

So in particular, when I say it's a probability density, what I mean is the probability that it is found between the position x and x plus dx is equal to P of x dx, which is equal to psi of x squared dx. Does that make sense? So the probability that it's found in this infinitesimal interval is equal to this density times dx or psi squared dx.

Now again, it's crucial that the wave function is in fact properly normalized. Because if I say, look, something could either be here or it could be here, what's the sum of the probability that it's here plus the probability that it's here? It had better be 1, or there's some other possibility. So probabilities have to sum to 1. Total probability that you find something somewhere must be 1.

So what that tells you is that total probability, which is equal to the integral over all possible values of x-- so if I sum over all possible values of P of x-- all values-- should be equal to 1. And we can write this as integral dx over all values of x. And I write "all" here rather than putting minus infinity to infinity because some systems will be defined from minus 1 to 1, some systems will be defined from minus infinity to infinity-- all just means integrate over all possible values-- hold on one sec-- of psi squared.
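
[Written as equations, the two statements above are:]

```latex
\[
\mathbb{P}\bigl(x \in [x,\, x+dx]\bigr) = P(x)\,dx = |\psi(x)|^2\,dx,
\qquad
\int_{\text{all }x} dx\, |\psi(x)|^2 = 1 .
\]
```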

AUDIENCE: Are you going to use different notation for probability density than probability?

PROFESSOR: I'm not going to. Probability density is going to have just one argument, and total probability is going to have an interval as an argument. So they're distinct and this is just the notation I like. Other questions?

Just as a side note, what are the dimensions of the wave function? So everyone think about this one for second. What are the dimensions?

AUDIENCE: Is it 1 over the square root of length?

PROFESSOR: Awesome. Yes. It's 1 over root length. The dimensions of psi are 1 over root length. And the way to see that is that this should be equal to 1. It's a total probability. This is an infinitesimal length, so this has dimensions of length. This has no dimension, so this must have dimensions of 1 over length. And so psi itself of x must have dimensions of 1 over root length.
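
[The dimensional argument, spelled out for one spatial dimension:]

```latex
\[
1 = \int dx\, |\psi(x)|^2
\;\Longrightarrow\;
\bigl[\,|\psi|^2\,\bigr] = \frac{1}{[\text{length}]}
\;\Longrightarrow\;
[\psi] = \frac{1}{\sqrt{[\text{length}]}} .
\]
```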

Now, something I want to emphasize, I'm going to emphasize, over and over in this class is dimensional analysis. You need to become comfortable with dimensional analysis. It's absolutely essential.

It's essential for two reasons. First off, it's essential because I'm going to be merciless in taking off points if you do write down a dimensionally false thing. If you write down something on a problem set or an exam that's like, a length is equal to a velocity-- ooh, not good.

But the second thing is, forget the fact that I'm going to take off points. Dimensional analysis is an incredibly powerful tool for you. You can check something that you've just calculated and, better yet, sometimes you can just avoid a calculation entirely by doing a dimensional analysis and seeing that there's only one possible way to build something of dimensions length in your system. So we'll do that over and over again.

But this is a question I want you guys to start asking yourselves at every step along the way of a calculation-- what are the dimensions of all the objects in my system? Something smells like smoke.

So with that said, if that's the meaning of the wave function, what physically can we take away from knowing these wave functions? Well, if this is the wave function, let's draw the probability distribution. What's the probability distribution? P of x. And the probability distribution here is really very simple. Again, 0 squared is still 0, so it's still just a big spike at x1, and this one is a big spike at x2. Everyone cool with that?

So what do you know when I tell you that this is the wave function describing your system? You know that with great confidence, you will find the particle to be sitting at x1 if you look. So what this is telling you is you expect x is roughly x1 and our uncertainty in x is small. Everyone cool with that? Similarly, here you see that the position is likely to be x2, and your uncertainty in your measurement-- your confidence in your prediction is another way to say it-- is quite good, so your uncertainty is small.

Now what about these guys? Well, now it's norm squared. I need to tell you what the wave function is. Here, the wave function that I want-- so here is 0-- is e to the i k1 x. And here the wave function is equal to e to the i k2 x.

And remember, I'm drawing the real part because of practical limitations. So the real part is just a sinusoid-- or, in fact, the cosine-- and similarly, here, the real part is a cosine. And I really should put 0 in the appropriate place, but-- that worked out well.

So now the question is, what's the probability distribution, P of x, associated to these wave functions? So what's the norm squared of e to the i k1 x? If I have a complex number of phase e to the i alpha, and I take its norm squared, what do I get? 1. Right? But remember complex numbers. If we have a complex number alpha-- or sorry, if we have a complex number beta, then beta norm squared is by definition beta complex conjugate times beta. So for e to the i alpha, the complex conjugate is e to the minus i alpha, and e to the i alpha times e to the minus i alpha-- they cancel out-- that's 1.
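
[The little computation being described:]

```latex
\[
|\beta|^2 \equiv \beta^{*}\beta,
\qquad
\bigl|e^{i k_1 x}\bigr|^{2} = e^{-i k_1 x}\, e^{+i k_1 x} = 1
\quad \text{for every } x .
\]
```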

So if this is the wave function, what's the probability distribution? Well, it's 1. It's independent of x. So from this we've learned two important things. The first is, this is not properly normalized. That's not so key. But the most important thing is, if this is our wave function, and we subsequently measure the position of the particle-- we look at it, we say ah, there's the particle-- where are we likely to find it? Yeah, it could be anywhere. So what's the value of x you expect-- typical x? I have no idea, no information whatsoever. None.

And correspondingly, what is our uncertainty in the position x that we'll measure? It's very large, exactly. Now, in order to tell you it's actually infinite, I need to stretch this off and tell you that it's actually constant off to infinity, and my arms aren't that long, so I'll just say large.

Similarly here, if our wave function is e to the i k2 x-- here k2 is larger, the wavelength is shorter-- what's the probability distribution? It's, again, constant. So-- this is 0, 0. So again, x-- we have no idea, and our uncertainty in the x is large. And in fact it's very large. Questions?

What about these guys? OK, this is the real challenge. OK, so if this is our wave function, and let's just say that it's real-- hard as it is to believe that-- then what's our probability distribution? Well, something like-- I don't know, something-- you get the point.

OK, so if this is our probability distribution, where are we likely to find the particle? Well, now it's a little more difficult, right? Because we're unlikely to find it here, while it's reasonably likely to find here, unlikely here, reasonably likely, unlikely, like-- you know, it's a mess.

So where is this? I'm not really sure. What's our uncertainty? Well, our uncertainty is not infinite because-- OK, my name ends at some point. So this is going to go to 0.

So whatever else we know, we know it's in this region. So it's not infinite, it's not small, we'll say. But it's not arbitrarily small-- it's not tiny.

Or sorry, it's not gigantic is what I meant. Our uncertainty is not gigantic. But it's still pretty nontrivial, because I can say with some confidence that it's more likely to be here than here, but I really don't know at which of those peaks it's going to be found.

OK, now what about this guy? What's the probability distribution? Well, now you see why this is a stupid wave function, because it's multiply valued. It has multiple different values at every value of x. So what's the probability? Well, it might be root 2, maybe it's 1 over root 3. I'm really not sure.

So this tells us an important lesson-- this is stupid. And what I mean by stupid is, it is multiply valued. So the wave function-- we just learned a lesson-- should be single valued.

And we will explore some more on your problem set, which will be posted immediately after lecture. There are problems that walk you through a variety of other potential pathologies of the wave function and guide you to some more intuition. For example, the wave function really needs to be continuous as well. You'll see why.

All right. Questions at this point? No? OK.

So these look like pretty useful wave functions, because they corresponded to the particle being at some definite spot. And I, for example, am at a reasonably definite spot.

These two wave functions, though, look pretty much useless, because they give us no information whatsoever about what the position is. Everyone agree with that? Except-- remember the de Broglie relations.

The de Broglie relations say that associated to a particle is also some wave. And the momentum of that particle is determined by the wavelength. It's inversely related to the wavelength. It's proportional to the wave number. And the energy is proportional to the frequency.

Now, look at those wave functions. Those wave functions give us no position information whatsoever, but they have very definite wavelengths. Those are periodic functions with definite wavelengths. In particular, this guy has a wavelength of from here to here. It has a wave number k1.

So that tells us that if we measure the momentum of this particle, we can be pretty confident, because it has a reasonably well-defined wavelength corresponding to some wave number k-- 2 pi upon the wavelength. It has some momentum, and if we measure it, we should be pretty confident that the momentum will be h-bar k1. Everybody agree with that?

Looks like a sine wave. And de Broglie tells us that if you have a wave of wavelength lambda, that corresponds to a particle having momentum p. Now, how confident can we be in that estimation of the momentum? Well, if I tell you it's e to the i k x, that's exactly a periodic function with wavelength lambda equals 2 pi upon k.

So how confident are we? Pretty confident. So our uncertainty in the momentum is tiny. Everyone agree?

Similarly, for this wave, again we have a wavelength-- it's a periodic function, but the wavelength is much shorter. If the wavelength is much shorter, then k is much larger-- the momentum is much larger. So the momentum we expect to measure, which is roughly h-bar k2, is going to be much larger.

What about our uncertainty? Again, it's a perfect periodic function so our uncertainty in the momentum is small. Everyone cool with that?

And that comes, again, from the de Broglie relations. So questions at this point? You guys are real quiet today. Questions?

AUDIENCE: So delta P is 0, basically?

PROFESSOR: It's pretty small. Now, again, I haven't drawn this off to infinity, but if it's exactly e to the i k x, then yeah, it turns out to be 0.

Now, an important thing, so let me rephrase your question slightly. So the question was, is delta P 0? Is it really 0? So here's a problem for us right now. We don't have a definition for delta P. So what is the definition of delta P? I haven't given you one.

So here, when I said delta P is small, what I mean is, intuitively, just by eyeball, our confidence in that momentum is pretty good, using the de Broglie relations. I have not given you a definition, and that will be part of my job over the next couple of lectures. Very good question. Yeah.

AUDIENCE: How do you code noise in that function?

PROFESSOR: Awesome.

AUDIENCE: Do you just have different wavelengths

PROFESSOR: Yeah

AUDIENCE: As you go along?

PROFESSOR: Awesome. So for example, this-- does it have a definite wavelength? Not so much. So hold that question and wait until you see the next examples that I put on this board, and if that doesn't answer your question, ask it again, because it's a very important question. OK.

AUDIENCE: When you talk about a photon, you always say a photon has a certain frequency. Doesn't that mean that it must be a wave because you have to fix the wave number k?

PROFESSOR: Awesome question. Does every wave packet of light that hits your eye, does it always have a single, unique frequency? No, you can take multiple frequency sources and superpose them. An interesting choice of words I used there.

All right, so the question is, since light has some wavelength, does every chunk of light have a definite-- this is the question, roughly. Yeah, so and the answer is, light doesn't always have a single-- You can have light coming at you that has many different wavelengths and put it in a prism and break it up into its various components. So you can have a superposition of different frequencies of light. We'll see the same effect happening for us.

OK, so again, de Broglie made this conjecture that E is h-bar omega and P is h-bar k. This was verified in the Davisson-Germer experiment that we ran. But here, one of the things that's sort of latent in this is, what he means is, look, associated to every particle with energy E and momentum p is a plane wave of the form e to the i kx minus omega t. And this, properly, in three dimensions should be k dot x.

But at this point, this is an important simplification. For the rest of 8.04, until otherwise specified, we are going to be doing one-dimensional quantum mechanics. So I'm going to remove arrow marks and dot products. There's going to be one spatial dimension and one time dimension.

We're always going to have just one time dimension, but sometimes we'll have more spatial dimensions. But it's going to be a while until we get there. So for now, we're just going to have kx.

So this is a general plane wave. And what de Broglie really was saying is that, somehow, associated to the particle with energy E and momentum P should be some wave, a plane wave, with wave number k and frequency omega. And that's the wave function associated to it.

The thing is, not every wave function is a plane wave. Some wave functions are well localized. Some of them are just complicated morasses. Some of them are just a mess.

So now is the most important postulate in quantum mechanics. I remember vividly, vividly, when I took the analog of this class. It was called Physics 143A at Harvard. And the professor at this point said-- I know him well now, he's a friend-- he said, this is what quantum mechanics is all about. And I was so psyched. And then he told me. And it was like, that's ridiculous. Seriously? That's what quantum mechanics is all about?

So I always felt like this is some weird thing, where old physicists go crazy. But it turns out I'm going to say exactly the same thing. This is the most important thing in all of quantum mechanics. It is all contained in the following proposition. Everything-- the two slit experiments, the box experiments, all the cool stuff in quantum mechanics, all the strange and counter intuitive stuff comes directly from the next postulate. So here it is. I love this.

Three-- put a star on it. Given two possible wave functions or states-- I'll say configurations-- of a quantum system-- I wish there was "Ride of the Valkyries" playing in the background-- corresponding to two distinct wave functions-- "fns" is going to be my notation for "functions" because I have to write it a lot-- psi1 and psi2, and I'll say, of x-- the system can also be in a superposition of psi1 and psi2: alpha psi1 plus beta psi2, where alpha and beta are complex numbers.

Given any two possible configurations of the system, there is also an allowed configuration of the system corresponding to being in an arbitrary superposition of them. If an electron can be hard and it can be soft, it can also be in an arbitrary superposition of being hard and soft. And what I mean by that is that hard corresponds to some particular wave function, soft will correspond to some particular wave function, and the superposition corresponds to a different wave function which is a linear combination of them.

AUDIENCE: [INAUDIBLE] combination also have to be normalized?

PROFESSOR: Yeah, OK, that's a very good question. So and alpha and beta are some complex numbers subject to the normalization condition. So indeed, this wave function should be properly normalized.
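
[Postulate three, together with the normalization condition just mentioned, written out:]

```latex
\[
\psi(x) = \alpha\,\psi_1(x) + \beta\,\psi_2(x),
\qquad
\int_{\text{all }x} dx\, \bigl|\alpha\,\psi_1(x) + \beta\,\psi_2(x)\bigr|^{2} = 1 .
\]
```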

Now, let me step back for second. There's an alternate way to phrase the probability distribution here, which goes like this, and I'm going to put it here. The alternate statement of the probability distribution is that the probability density at x is equal to psi of x norm squared divided by the integral over all x dx of psi squared.

So notice that, if we properly normalize the wave function, this denominator is equal to 1-- and so it's not there, right, and then it's equivalent. But if we haven't properly normalized it, then this probability distribution is automatically properly normalized. Because the denominator is a constant, so when we integrate the top over all x, that's equal to the bottom, and the whole thing integrates to 1.

So I prefer, personally, in thinking about this for the first pass to just require that we always be careful to choose some normalization. That won't always be easy, and so sometimes it's useful to forget about normalizing and just define the probability distribution that way. Is that cool?
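
[The alternate statement of the probability density, as an equation:]

```latex
\[
P(x) = \frac{|\psi(x)|^{2}}{\displaystyle \int_{\text{all }x'} dx'\, |\psi(x')|^{2}} ,
\qquad\text{so that}\qquad
\int_{\text{all }x} dx\, P(x) = 1
\]
```

automatically, whether or not psi itself was normalized.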

OK. This is the beating soul of quantum mechanics. Everything in quantum mechanics is in here. Everything in quantum mechanics is forced on us from these few principles and a couple of requirements of matching to reality.

AUDIENCE: When you do this-- a sum, a linear sum of two wave functions-- can you get interference?

PROFESSOR: Yes. Excellent. So the question is, when you have a sum of two wave functions, can you get some sort of interference effect? And the answer is, absolutely. And that's exactly what we're going to do next.

So in particular, let me look at a particular pair of superpositions. So let's swap these boards around so the parallelism is a little more obvious. So let's scrap these rather silly wave functions and come up with something that's a little more interesting.

So instead of using those as characteristic wave functions, I want to build superpositions. So in particular, I want to start by taking an arbitrary-- both of these wave functions have a simple interpretation. This corresponds to a particle being here. This corresponds to a particle being here. I want to take a superposition of them.

So here's my superposition. Oops, let's try that again. And my superposition-- so here is 0 and here is x1 and here is x2-- is going to be some amount times the first one plus some amount times the second one. There's a superposition.

Similarly, I could have taken a superposition of the two functions on the second chalkboard. And again I'm taking a superposition of the complex e to the i k1 x and e to the i k2 x and then taking the real part. So that's a particular superposition, a particular linear combination.

So now let's go back to this. This was a particle that was here. This is a particle that was there. When we take the superposition, what is the probability distribution?

Where is this particle? Well, there's some amplitude that it's here, and there's some amplitude that it's here. And there's rather more amplitude that it's over here, but there's still some probability that it's over here.

Where am I going to find the particle? I'm not so sure anymore. It's either going to be here or here, but I'm not positive. It's more likely to be here than it is to be here, but not a whole lot more.

So where am I going to find the particle? Well, now we have to define this-- where am I going to find the particle? Look, if I did this experiment a whole bunch of times, it'd be over here more than it would be over here. So the average will be somewhere around here-- it'll be in between the two. So x is somewhere in between. That's where we expect to find it, on average.

What's our uncertainty in the position? Well, it's not that small anymore. It's now of order x1 minus x2. Everyone agree with that?

Now, what about this guy? Well, does this thing have a single wavelength? No. This is like light that comes at you from the sun. It has many wavelengths. In this case, it has just two-- I've added those two together. So this is a plane wave which is psi is e to the i k1 x plus e to the i k2 x.

So in fact, it has two wavelengths associated with it-- lambda1 and lambda2. And so the probability distribution now, if we take the norm squared of this-- the probability distribution is the norm squared of this guy-- is no longer constant, but there's an interference term.

And let's just see how that works out. Let me be very explicit about this. Note that the probability in our superposition of psi1 plus psi2-- which I'll call e to the i k1 x plus e to the i k2 x-- is equal to the norm squared of the wave function, which is the superposition alpha psi1 plus beta psi2. And that is equal to alpha norm squared psi1 norm squared, plus beta norm squared psi2 norm squared, plus alpha star psi1 star-- actually, let me write this over here-- beta psi2, plus alpha psi1 beta star psi2 star, where star means complex conjugation.

But notice that this is equal to-- that first term is alpha norm squared times the first probability-- the probability of psi1, which is P1. This term, beta norm squared psi2 norm squared, is the probability that the second thing happens.

But these terms can't be understood in terms of the probabilities of psi1 or the probability of psi2 alone. They're interference terms. So the superposition principle, together with the interpretation of the probability as the norm squared of the wave function, gives us a correction to the classical addition of probabilities, which is these interference terms. Everyone happy with that?
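
[The expansion being described, written out:]

```latex
\[
\bigl|\alpha\psi_1 + \beta\psi_2\bigr|^{2}
= |\alpha|^{2}\,|\psi_1|^{2} + |\beta|^{2}\,|\psi_2|^{2}
+ \underbrace{\alpha^{*}\beta\,\psi_1^{*}\psi_2 + \alpha\beta^{*}\,\psi_1\psi_2^{*}}_{\text{interference terms}\; =\; 2\,\mathrm{Re}\left(\alpha^{*}\beta\,\psi_1^{*}\psi_2\right)} .
\]
```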

Now, here's something very important to keep in mind. These things are norms squared of complex numbers. That means they're real, but in particular, they're non-negative. So these two are both real and non-negative.

But what about this? This is not the norm squared of anything. However, this is its complex conjugate. When you take something and its complex conjugate and you add them together, you get something that's necessarily real. But it's not necessarily positive.

So this is a funny thing. The probability that something happens if we add together our two configurations, we superpose two configurations, has a positive probability term. But it's also got terms that don't have a definite sign, that could be negative. It's always real. And you can check that this quantity is always greater than or equal to 0. It's never negative, the total quantity.

So remember Bell's inequality that we talked about? Bell's inequality said, look, if we have the probability of one thing happening being P1, and the probability of the other thing happening being P2, the probability of one thing or the other happening is just P1 plus P2.

And here we see that, in quantum mechanics, probabilities don't add that way. The wave functions add-- and the probability is the norm squared of the wave function. The wave functions add, not the probabilities. And that is what underlies all of the interference effects we've seen. And it's going to be the heart of the rest of quantum mechanics.

So you're probably all going, in your head, more or less like I was when I took Intro Quantum, like-- yeah, but I mean, it's just, you know, you're adding complex numbers. But trust me on this one. This is where it's all starting.

OK so let's go back to this. Similarly, let's look at this example. We've taken the norm squared. And now we have an interference effect. And now, our probability distribution, instead of being totally trivial and containing no information, our probability distribution now contains some information about the position of the object. It's likely to be here. It is unlikely to be here, likely and unlikely. We now have some position information. We don't have enough to say where it is. But x is-- you have some information.

Now, our uncertainty is still gigantic. Delta x is still huge. But OK, we just added together two plane waves. Yeah?

AUDIENCE: Why is the probability not big, small, small, big, small, small?

PROFESSOR: Excellent. This was the real part of the wave function. And the wave function is a complex quantity. When you take e to the i k1 x plus e to the i k2 x-- and let me do this on the chalkboard, let me write this slightly differently-- e to the i a plus e to the i b-- and take its norm squared.

So this is equal to-- I'm going to write this in a slightly more suggestive way-- the norm squared of e to the i a times 1 plus e to the i b minus a parentheses norm squared.

So first off, the norm squared of a product of things is the product of the norm squareds. So I can do that. And this overall phase, the norm squared of a phase is just 1, so that's just 1.

So now we have the norm squared of 1 plus a complex number. And so the norm squared of 1 is going to give me 1. The norm squared of the complex number is going to give me 1. And the cross terms are going to give me twice the real part of e to the i b minus a, which is 2 cosine of b minus a. And so what you see here is that you have a single frequency in the probability distribution. So good, our uncertainty in position is still large.
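
[The computation just done on the chalkboard, with a = k1 x and b = k2 x:]

```latex
\[
\bigl|e^{ia} + e^{ib}\bigr|^{2}
= \bigl|e^{ia}\bigr|^{2}\,\bigl|1 + e^{i(b-a)}\bigr|^{2}
= 1 + 1 + 2\,\mathrm{Re}\, e^{i(b-a)}
= 2 + 2\cos(b - a),
\qquad\text{so}\qquad
P(x) \propto 2 + 2\cos\bigl((k_2 - k_1)x\bigr) .
\]
```

The norm squared oscillates at the single difference frequency k2 minus k1, which is why the probability pattern repeats with one period even though the real part of psi shows two wavelengths.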

So let's look at this second example in a little more detail. By superimposing two states with wavelength lambda1 and lambda2 or k1 and k2 we get something that, OK, it's still not well localized-- we don't know where the particle is going to be-- but it's better localized than it was before, right?

What happens if we superpose with three wavelengths, or four, or more? So for that, I want to pull out a Mathematica package. You guys should all have seen Fourier analysis in 18.03, but just in case, I'm putting on the web page, on the Stellar page, a notebook that walks you through the basics of Fourier analysis in Mathematica.

You should all be fluent in Mathematica. If you're not, you should probably come up to speed on it. That's not what we wanted. Let's try that again. There we go. Oh, that's awesome-- where by awesome, I mean not. It's coming. OK, good. I'm not even going to mess with the screens after last time.

So I'm not going to go through the details of this package, but what this does is walk you through the superposition of wave packets. So here I'm looking at the probability distribution coming from summing up a bunch of plane waves with some definite frequency. So here it's just one. That's one wave, so first we have-- let me make this bigger-- yes, stupid Mathematica tricks.

So here we have the wave function and here we have the probability distribution, the norm squared. And it's sort of badly normalized here. So that's for a single wave. And as you see, the probability distribution is constant. And that's not 0, that's 0.15, it's just that I arbitrarily normalized this.

So let's add two plane waves. And now what you see is the same effect as we had here. You see a slightly more localized wave function. Now you have a little bit of structure in the probability distribution. So there's the structure in the probability distribution. We have a little more information about where the particle is more likely to be here than it is to be here.

Let's add one more. And as we keep adding more and more plane waves to our superposition, the wave function and the probability distribution associated with it become more and more well-localized until, as we go to very high numbers of plane waves that we're superposing, we get an extremely narrow probability distribution-- and wave function, for that matter-- extremely narrow corresponding to a particle that's very likely to be here and unlikely to be anywhere else. Everyone cool with that?
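
[A minimal numerical sketch of what the notebook is doing, written here in Python with made-up parameters rather than the course's Mathematica code: summing plane waves with nearby wave numbers produces an increasingly localized probability density.]

```python
import numpy as np

# Illustrative values only -- not taken from the course notebook.
x = np.linspace(-20.0, 20.0, 4001)      # position grid
dx = x[1] - x[0]
ks = np.linspace(0.8, 1.2, 9)           # nine wave numbers clustered around k = 1

# Superpose unit-amplitude plane waves e^{i k x}.
psi = sum(np.exp(1j * k * x) for k in ks)

# Normalize so the discrete stand-in for the integral of |psi|^2 dx equals 1.
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

prob = np.abs(psi) ** 2                 # probability density P(x) = |psi(x)|^2
print("total probability ~", np.sum(prob) * dx)    # ~1 by construction
print("P(x) peaks at x =", x[np.argmax(prob)])     # packet centered near x = 0
```

Increasing the number of wave numbers in ks narrows the peak further, which is exactly the trade-off described here: a sharper position comes at the cost of a broader spread of momenta in the superposition.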

What's the expense? What have we lost in the process? Well, we know with great confidence now that the particle will be found here upon observation. But what will its momentum be? Yeah, now it's the superposition of a whole bunch of different momenta.

So if it's a superposition of a whole bunch of different momenta, here this is like superposition of a whole bunch of different positions-- likely to be here, likely to be here, likely to be here, likely to be here. What's our knowledge of its position? It's not very good.

Similarly, now that we have superposed many different momenta with comparable strength. In fact, here they were all with unit strength. We now have no information about what the momentum is anymore. It could be anything in that superposition. So now we're seeing quite sharply the uncertainty relation. And here it is.

So the uncertainty relation is now pretty clear from these guys. So that didn't work? And I'm going to leave it alone. This is enough for the Fourier analysis, but that Fourier package is available with extensive commentary on the Stellar web page.

AUDIENCE: Now is that sharp definition in the position caused by the interference between all those waves and all that--

PROFESSOR: That's exactly what it is. Precisely. It's precisely the interference between the different momentum modes that leads to certainty in the position. That's exactly right. Yeah.

AUDIENCE: So as we're certain of the position, we will not be certain of the momentum.

PROFESSOR: Exactly. And here we are. So in this example, we have no idea what the position is, but we're quite confident of the momentum. Here we have no idea what the position is, but we have great confidence in the momentum. Similarly here, we have less perfect confidence of the position, and here we have less perfect confidence in the momentum. It would be nice to be able to estimate what our uncertainty is in the momentum here and what our uncertainty is in the position here. So we're going to have to do that. That's going to be one of our next tasks. Other questions? Yeah.

AUDIENCE: In this half of the blackboard, you said, obviously, if we do it a bunch of times it'll have more in the x2 than in the x1.

PROFESSOR: Yes.

AUDIENCE: The average, it will never physically be at that--

PROFESSOR: Yeah, that's right, so, because it's a probability distribution, it won't be exactly at that point. But it'll be nearby. OK, so in order to be more precise-- and so, for example, for this, what we do-- here's a quick question. How well do you know the position of this particle? Pretty well, right? But how well do you know its momentum? Well, we'd all like to say not very, but tell me why. Why is your uncertainty in the momentum of the particle large?

AUDIENCE: Heisenberg's uncertainty principle.

PROFESSOR: Yeah, but that's a cheat because we haven't actually proved Heisenberg's uncertainty principle. It's just something we're inheriting.

AUDIENCE: I believe it.

PROFESSOR: I believe it, too. But I want a better argument because I believe all sorts of crazy stuff. So-- I really do. Black holes, fluids, I mean look, don't get me started. Yeah.

AUDIENCE: You can take the Fourier transform of it.

PROFESSOR: Yeah, excellent. OK, we'll get to that in just one sec. So before taking the Fourier transform, which is an excellent-- so the answer was, just take a Fourier transform, that's going to give you some information. We're going to do that in just a moment.

But before we do Fourier transform, just intuitively, why would de Broglie look at this and say, no, that doesn't have a definite momentum.

AUDIENCE: There's no clear wavelength.

PROFESSOR: Yeah, there's no wavelength, right? It's not periodic by any stretch of the imagination. It doesn't look like a thing with a definite wavelength. And de Broglie said, look, if you have a definite wavelength then you have a definite momentum. And if you have a definite momentum, you have a definite wavelength. This is not a wave with a definite wavelength, so it is not corresponding to the wave function for a particle with a definite momentum.

So our momentum is unknown-- so this is large. And similarly, here, our uncertainty in the momentum is large.

So to do better than this, we need to introduce the Fourier transform, and I want to do that now. So you should all have seen Fourier series in 8.03. Now we're going to do the Fourier transform. And I'm going to introduce this to you in 8.04 conventions in the following way.

And the theorem says the following-- we're not going to prove it by any stretch of the imagination, but the theorem says-- any function f of x that is sufficiently well-behaved-- it shouldn't be discontinuous, it shouldn't be singular-- any reasonably well-behaved, non-stupid function f of x can be built by superposing enough plane waves of the form e to the ikx. Enough may be infinite.

So any function f of x can be expressed as 1 over root 2 pi, and this root 2 pi is a choice of normalization-- everyone has their own conventions, and these are the ones we'll be using in 8.04 throughout-- minus infinity to infinity dk f tilde of k e to the ikx.

So here, what we're doing is, we're summing over plane waves of the form e to the ikx. These are modes with a definite wavelength 2 pi upon k. f tilde of k is telling us the amplitude of the wave with wavelength lambda, or wave number k. And we sum over all possible values.

And the claim is, any function can be expressed as a superposition of plane waves in this form. Cool? And this is for functions which are non-periodic on the real line, rather than periodic functions on the interval, which is what you should've seen in 8.03.

Now, conveniently, if you know f tilde of k, you can compute f of x by doing the sum. But suppose you know f of x and you want to determine what the coefficients are, the expansion coefficients. That's the inverse Fourier transform. And the statement for that is that f tilde of k is equal to 1 over root 2 pi integral from minus infinity to infinity dx f of x e to the minus ikx. OK, that's sometimes referred to as the inverse Fourier transform.
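
[The 8.04 conventions just stated, side by side:]

```latex
\[
f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} dk\, \tilde f(k)\, e^{ikx},
\qquad
\tilde f(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} dx\, f(x)\, e^{-ikx} .
\]
```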

And here's something absolutely essential. f of x and its Fourier transform f tilde of k are completely equivalent. If you know f of x, you can determine f tilde of k. And if you know f tilde of k, you can determine f of x by just doing a sum, by just adding them up.

So now here's the physical version of this-- oh, I can't slide that out-- I'm now going to put here. Oh. No, I'm not. I'm going to put that down here.

So the physical version of this is that any wave function psi of x can be expressed as the superposition in the form psi of x is equal to 1 over root 2 pi integral from minus infinity to infinity dk psi tilde of k e to the ikx of states, or wave functions, with a definite momentum p is equal to h bar k.

And so now, it's useful to sketch the Fourier transforms of each of these functions. In fact, we want this up here. So here we have the function and its probability distribution. Now I want to draw the Fourier transforms of these guys.

So here's psi tilde of k, a function of a different variable than of x, but nonetheless, it's illuminating to draw them next to each other. And again, I'm drawing the real part.

And here, x2-- had I had my druthers about me, I would have put x2 at a larger value. Good, so it's further off to the right there. I'm so loathe to erase the superposition principle. But fortunately, I'm not there yet.

Let's look at the Fourier transform of these guys. The Fourier transform of this guy-- this is k. Psi tilde of k-- well, that's something with a definite value of k. And its Fourier transform-- this is 0-- there's k1. And for this guy-- there's 0-- k2.

And now if we look at the Fourier transforms of these guys, see, this way I don't have to erase the superposition principle-- and the Fourier transform of this guy, so note that there's a sort of pleasing symmetry here.

If your wave function is well localized, corresponding to a reasonably well-defined position, then your Fourier transform is not well localized, corresponding to not having a definite momentum. On the other hand, if you have definite momentum, your position is not well defined, but the Fourier transform has a single peak at the value of k corresponding to the momentum of your wave function. Everyone cool with that?

So here's a question-- sorry, there was a raised hand. Yeah?

AUDIENCE: Are we going to learn in this class how to determine the Fourier transforms of these non-stupid functions?

PROFESSOR: Yes, that will be your homework. On your homework is an extensive list of functions for you to compute Fourier transforms of. And that will be the job of problem sets and recitation.

So Fourier series and computing-- yeah, you know what's coming-- Fourier series are assumed to have been covered for everyone in 8.03 and 18.03 in some linear combination thereof. And Fourier transforms--

[LAUGHTER AND GROANS]

I couldn't help it. So Fourier transforms are a slight embiggening of the space of Fourier series, because we're not looking at periodic functions.

AUDIENCE: So when we're doing the Fourier transforms of a wave function, we're basically writing it as a continuous set of different waves. Can we write it as a discrete set? So as a Fourier series?

PROFESSOR: Absolutely. So, however, what is true about Fourier series? When you use a discrete set of momenta, which are integer-related, which are-- It must be a periodic function, exactly. So here what we've done is, we've said, look, we're writing our wave function, our arbitrary wave function, as a continuous superposition over a continuum of possible momenta. This is absolutely correct. This is exactly what we're doing.

However, that's kind of annoying, because maybe you just want one momentum and two momenta and three momenta. What if you want a discrete series? So discrete is fine.

But if you make that discrete series integer-related to each other, which is what you do with Fourier series, you force the function f of x to be periodic. And we don't want that, in general, because life isn't periodic. Thank goodness, right? I mean, there's like one film in which it-- but so-- it's a good movie.

So that's the essential difference between Fourier series and Fourier transforms. Fourier transforms are continuous in k and do not assume periodicity of the function. Other questions? Yeah.

AUDIENCE: So basically, the Fourier transform associates an amplitude and a phase for each of the individual momenta.

PROFESSOR: Precisely. Precisely correct. So let me say that again. So the question was-- so a Fourier transform effectively associates a magnitude and a phase for each possible wave vector. And that's exactly right.

So here there's some amplitude and phase-- this is a complex number, because this is a complex function-- there's some complex number which is an amplitude and a phase associated to every possible momentum going into the superposition. That amplitude may be 0. There may be no contribution for a large number of momenta, or maybe insignificantly small.

But it is indeed doing precisely that. It is associating an amplitude and a phase for every plane wave, with every different value of momentum. And you can compute, before panicking, precisely what that amplitude and phase is by using the inverse Fourier transform. So there's no magic here. You just calculate. You can use your calculator, literally-- I hate that word.

OK so now, here's a natural question. So if this is the Fourier transform of our wave function, we already knew that this wave function corresponded to having a definite-- from de Broglie, we know that it has a definite momentum. We also see that its Fourier transform looks like this.

So that leads to a reasonable guess. What do you think the probability distribution P of k is-- the probability density to find the momentum to have wave vector h-bar k?

AUDIENCE: [INAUDIBLE]

PROFESSOR: Yeah, that's a pretty reasonable guess. So we're totally pulling this out of the dark-- psi tilde of k norm squared. OK, well, let's see if that works.

So psi tilde of k norm squared for this is going to give us a nice, well localized function. And so that makes a lot of sense. That's exactly what we expected, right? Definite value of P with very small uncertainty.

Similarly here. Definite value of P, with a very small uncertainty. Rock on.

However, let's look at this guy. What is the expected value of P if this is the Fourier transform? Well, remember, we have to take the norm squared, and psi tilde of k was e to the i k x1-- the Fourier transform. You will do much practice on taking Fourier transforms on the problem set. Where did my eraser go? There it is. Farewell, principle one.

So what does norm squared of psi tilde look like? Well, just like before, the norm squared is constant, because the norm squared of a phase is constant. And again, the norm squared-- this is psi tilde of k norm squared-- we believe, we're conjecturing this is P of k.

You will prove this relation on your problem set. You'll prove that it follows from what we said before. And similarly, this is constant-- e to the i k x2.
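
[A sketch of where the constant modulus comes from, idealizing the narrow bump at x1 as a delta function-- an idealization made here for illustration, not something done explicitly in lecture:]

```latex
\[
\psi(x) = \delta(x - x_1)
\;\Longrightarrow\;
\tilde\psi(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} dx\, \delta(x - x_1)\, e^{-ikx}
= \frac{e^{-i k x_1}}{\sqrt{2\pi}},
\qquad
|\tilde\psi(k)|^{2} = \frac{1}{2\pi}\ \ \text{(constant in } k\text{)} .
\]
```

The phase depends on x1, but the modulus does not-- so the conjectured P of k carries no momentum information at all for a perfectly localized state.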

So now we have no knowledge of the momentum. So that also fits. The momentum is-- we have no idea, and the uncertainty is large. And here, the momentum is-- we have no idea, and the uncertainty is large.

So in all these cases, we see that we satisfy quite nicely the uncertainty relation-- small position uncertainty, large momentum uncertainty. Large position uncertainty, we're allowed to have small momentum uncertainty.

And here, it's a little more complicated. We have a little bit of knowledge of position, and we have a little bit of knowledge of the momenta. We have a little bit of knowledge of position, and we have a little bit of knowledge of momenta.

So we'll walk through examples with superposition like this on the problem set. Last questions before we get going?

OK so I have two things to do before we're done. The first is, after lecture ends, I have clickers. And anyone who wants to borrow clickers, you're welcome to come down and pick them up on a first come first served basis. I will start using the clickers in the next lecture. So if you don't already have one, get one now.

But the second thing is-- don't get started yet. I have a demo to do. And last time I told you-- this is awesome. It's like I'm an experimentalist for a day. Last time I told you that one of the experimental facts of life is--

One of the experimental facts of life is that there is uncertainty in the world and that there is probability. There are unlikely events that happen with some probability, some finite probability. And a good example of the randomness of the real world involves radiation.

So hopefully you can hear this. Apparently, I'm not very radioactive. You'd be surprised at the things that are radioactive. Ah, got a little tick. Shh.

This is a plate sold at an Amish county fair. It's called vaseline ware and it's made of local clays.

[GEIGER COUNTER CLICKS]

It's got uranium in it. But I want to emphasize-- exactly when something goes click, it sounds pretty random. And it's actually a better random number generator than anything you'll find in Mathematica or C. In fact, for some purposes, the decay of radioactive isotopes is used as the perfect random number generator. Because it really is totally random, as far as anyone can tell. But here's my favorite. People used to eat off these.

[MUCH LOUDER, DENSER CLICKS]

See you next time.