Recitation 12

Instructor: Prof. Gilbert Strang

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR STRANG: OK, so review session on the first part of the Fourier chapter, these two topics that we've done and that homework is now coming on. Fourier series, the classical facts. Plus paying attention to the rate of decay of the Fourier coefficients, an aspect not always mentioned. And the energy equality is important. So it's not just here's the function, find the coefficients. That's part of it but not all of it. And then the discrete series we were doing today. OK, so those are the two topics for today, and then the next review session, which would be two weeks from now, would focus especially on convolution and Fourier integrals. OK, so I'm open to questions on the homework, or off the homework. Always fine. I didn't know how many questions to ask you on the homework. I wanted you to have enough practice doing this stuff, because the time for this Fourier part of the course is a little shorter. Thanksgiving comes into it, so we needed to do some exercises. And you've got a good question. Thanks.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Number 18 on the homework, OK. Ah yes, OK. Right, alright. So the idea of that problem, I'm really asking you to read the two pages, the last two pages of the section, that use Fourier series to solve the heat equation. So we've used, briefly, Fourier series to solve Laplace's equation. So that was, just to recall. So Fourier series to solve Laplace's equation was when the region was a circle. The function was given, the boundary values were given. It's 2pi periodic because it is a circle. And we solved Laplace inside. Because on the boundary, the perfect thing we needed was the Fourier series to match the boundary. Now I'm taking another classical, classical application too. The heat equation. So I made it the heat equation. So this direction is time, this direction is space. The heat equation is u_t = u_xx, if the coefficient-- Everybody here knows there would be a c in there, but let's take it to be one. Then what are the solutions, and how does a Fourier series help you to match the initial function? So I'm matching, I'm given u(x,0) here. OK. Along this line is time zero. So at time zero I have a bar, a conducting bar. And this is such a classical example that I didn't feel you could miss it completely. Even though we look beyond formulas. But here's one where the formula shows us something important. OK, so what are solutions to this equation?

You look for solutions, so the classical idea of separating variables. Look for solutions that are of the form some function of t, maybe I'll try to use the same letters as the text, times some function of x, OK? And the text uses, look for solutions u(x,t) that are of this form. Some function of t, and I didn't remember. Ah yes, it's A(x)B(t). OK, that's what I mean by separating. So those will be especially simple solutions. And when we go to match the initial condition, I'll just plug in t=0 and I'll see the A(x)'s. Well, what are they? In this case, so they're eigenfunctions-- oh, I have to tell you about the remaining boundary conditions, don't I? Because that will decide what the A(x) has to satisfy, and will decide what those eigenfunctions are. So, let's see. In this problem I think I picked free conditions. So I made the interval minus pi to pi, that's a change. Minus pi to pi just so we have nice Fourier series. And here this boundary is free: du/dx, u', at x=-pi, for all time, so up this line, is zero. And also u' at plus pi, for all time, is zero. So up those lines, heat's going out. That's what that means. Is that what that means, or does that mean heat can't go out? No, so what's happening? Heat's not going out. Is that right? The slope is zero, right? The slope is the temperature gradient, and we're requiring the temperature gradient to be zero. So the ends of the bar are insulated. So this is insulated. No passage through, right? Is that what that means? Yeah. It's like there's nobody, it's cut off there. The rod isn't extended for heat to go further.

OK, so that tells us what the x and the t, what the A(x) and B(t), are. So here's the point. You plug that hoped-for solution into the equation, right? So I plug it in here. What do I have on the left side, just the time derivative? So that's A(x)B'(t); you see, taking the time derivative doesn't touch A(x). On the right-hand side, I don't touch B(t), but I have a second derivative of the x part: A''(x)B(t). So far, so good. Now, a little trick. If I divide both sides by A and by B, I get B'/B equaling A''/A. Right? Just put the A under here, put the B under here. Now what? This is neat, because this function depends only on time. That function depends only on x. So both sides have to be constant. This one can't actually change with time, because that side is not changing with time. And that one couldn't actually change with x, because this side is not changing with x. So those are both constants. Let's see, I'll maybe just put a constant c. And various constants are possible. OK, so now you see the point here. Now I have two separate equations. I have an equation for the B part: dB/dt, B'. If I bring the B up there, I have B' equals the constant times B. And I know the solution to that. B(t) is, as everybody knows-- what's the solution to a first order constant coefficient equation? Just e^(ct)*B(0).

Good, we've got B. We've got a B(t) that works, and now what's the A that also works? That has A''. And I bring the A up, so now I have A''=cA, so the A that goes with it is? Oh, OK, I've used a c there, so what's a good choice? I want-- Two derivatives should bring down the constant. Let me change c to minus lambda squared. How about if I look ahead, change this constant to minus lambda squared, because I want something where two derivatives bring down minus lambda squared, and what will do that? Any amount of cos(lambda*x), right? Because two derivatives will bring down a minus lambda squared. And any amount of sin(lambda*x). And the B is now e to the minus lambda squared t. I'm doing this fast, but actually it's totally simple. The conclusion is that I now have a bunch of solutions of this special separated form, where B(t) could be that and A(x) could be either of those, or any combination of those. And I have to use the same lambda for each, so that the two equations will work in the original problem. Good. Now, so far, no boundary conditions. What I've got so far is just a lot of solutions, these times that, with any lambda. But of course the boundary conditions will tell me the lambdas, first of all. And how do they tell me? The only boundary condition I have is this free stuff. So it's free: the slope should be zero at pi, and zero at?

So that's the x direction. So that's going to tell me-- I've forgotten. Do I want cosines or sines? Cosines. I want the derivative to be zero at pi. Yeah, so I think I want cosines, good. And then the lambdas can't be anything at all. Should lambda be an integer or something like that? I think maybe lambda should be an integer, because I want to plug in pi. So let me take the second derivative: it's minus lambda squared cos(lambda*x). And then I want to plug in x=pi. And I want this to be zero. So lambda should be an integer, is that right? At multiples of pi, the cosine is zero, yes. Is it? No. Did I want sine? Maybe I wanted sine. Oh, it's the first derivative I'm looking at, thank you. Thank you. OK, good. OK, now I've got it. Thank you. And now I see lambda should be an integer. Lambda should be, zero is, yeah, zero's alright. That's the constant term, we need that. Zero, one, two, and so on. So do you see that I've now got-- I've got a lot of solutions, and I have a linear problem. So I can take any combination. So finally the solution is that u(x,t) is any combination, with coefficients I'm free to choose, of A(x), which is cos(nx), because lambda had to be an n. And n could be anywhere from zero on up. Times e to the minus n squared t, since lambda is n.
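
Collecting what was just derived in one display (a restatement of the lecture's formulas; the free conditions A'(-pi) = A'(pi) = 0 rule out the sines and force lambda = n = 0, 1, 2, ...):

$$\frac{B'(t)}{B(t)} = \frac{A''(x)}{A(x)} = -\lambda^2 \quad\Longrightarrow\quad u(x,t) = \sum_{n=0}^{\infty} c_n \cos(nx)\, e^{-n^2 t}.$$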

Did I get that? Let me draw a circle and step back. What's up?

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: I could have negative n's, they wouldn't give me anything new, right? I mean cos(nx) and cos(-nx) are just the same function. So these are the good guys. I've got a cosine series because I've got free ends, right. Because of the boundary conditions. Do you see that? That's pretty nice. There's my A(x), there's my B(t), I can take any combination. Usual stuff. You get to that solution. OK, and now I have to meet the initial conditions, right? The boundary conditions are now built in because I chose cosines. Or you did. Now, this will tell me what the c's are. I'm going to set t=0. At t=0, I'm given the initial condition u(x,0), and I have the same sum of c_n*cos(nx)*e^0. So this tells me the c's are the cosine coefficients of the given initial condition. So I expand; here's where Fourier series has paid off. Expand the initial function in a cosine series. And then go forward in time. This is just the old e^(lambda*t), only the lambda we're talking about here is minus n squared. And you see what's happening here? What's going to happen for large time? So this is a very physical problem. I think you cannot take 18.085 without seeing this problem. You can't learn about Fourier series without using it for the initial value, and then propagating in time with the usual exponential for each term. And now as n increases, what do I see? Faster and faster decay.

For large n, these are going to zero extremely fast. So that's what you see with a solid bar, which starts with the temperature u in some-- Oh, probably not negative unless it's a really cold bar. But, anyway, it starts with some initial temperature. That flattens out fast. The heat flows, to equilibrate. What I approach is the constant term, c_0. This approaches c_0. Because all these other terms, with positive n, go to zero. So the heat distributes itself equally. OK, and now I guess the particular u(x,0) in the problem was a delta. So the particular u(x,0) was from that really hot point. So we know the coefficients. We know the cosine coefficients for the delta function, we know these c's, and what were they? 1/(2pi) was c_0. And the other c's were 1/pi, I think. Is that right? Maybe they're all 1/(2pi). Maybe. Yeah. Whatever. They disappear fast. And this is what we approach. So that's the heat from the delta function, yeah. So is that everything the problem wanted? Yeah. I think we've done it. We'll put in the c_n's to complete that picture. Into here. And then c_0 is the one that survives over time. Yeah. Once I got rolling I couldn't stop, and that's u.
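
A minimal numerical sketch of that solution (my code, assuming NumPy; the truncation at 200 terms and the sample times are illustrative choices, not from the lecture). For the delta initial condition the cosine-series coefficients are in fact mixed: c_0 = 1/(2pi) and c_n = 1/pi for n >= 1, which settles the "maybe" above.

```python
import numpy as np

# u_t = u_xx on [-pi, pi] with insulated ends (u_x = 0 at x = +-pi).
# Solution: u(x,t) = sum_n c_n cos(n x) exp(-n^2 t), with c_n the cosine
# coefficients of u(x,0). For u(x,0) = delta(x): c_0 = 1/(2pi), c_n = 1/pi.

def heat_solution(x, t, n_terms=200):
    u = np.full_like(x, 1.0 / (2 * np.pi))       # constant term c_0
    for n in range(1, n_terms + 1):
        u += (1.0 / np.pi) * np.cos(n * x) * np.exp(-n**2 * t)
    return u

x = np.linspace(-np.pi, np.pi, 9)
for t in [0.01, 0.1, 1.0, 10.0]:
    print(f"t = {t:5.2f}:", np.round(heat_solution(x, t), 4))
```

As t grows, each term dies like e^(-n^2 t), and by t = 10 the printed temperatures have flattened to c_0 = 1/(2pi), about 0.159: the heat has distributed itself equally.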

For investing time this afternoon you get a fast look at this classical, classical problem of separating the variables, using the Fourier series for the initial function. And recognizing that we're doing this on a finite interval. If the bar were infinitely long, then we would be talking about Fourier integrals. And that's what's coming up a bit later. We would integrate instead of sum. Yeah. But the idea would not be different; if we had an infinite bar then we would not be restricted to n equals zero, one, two, three; any n, any cosine-- it wouldn't have to be an integer at all. Any number, any frequency would be allowed. And so we would have to integrate, yeah. And that's a classical problem too. It's come up in a modern way, in the famous Black-Scholes equation. So. The heat equation is really for 18.086. Here, we brought it up because we could solve it fast. The most important solution I could give you to the heat equation would be the one that starts from that point source of heat, but on the whole line. The one that would be integrals instead of sums. Yeah. So we came pretty close to solving the most important heat equation problem. But we're doing the periodic case, with just cosines, and the infinite line case would be the most famous of all. Yeah. And it has a beautiful form, and I was going to say that the heat equation's pretty classical.

But let's see, where can I write the magic words, Black-Scholes. Next to the heat equation, so that's the heat equation, but it's also-- do you know these names, Black and Scholes? Anybody in Mathematics of Finance? So these much-despised options, derivative options-- people on Wall Street, traders, will carry around a little calculator that solves the Black-Scholes equation so they can price the options that they're bidding to buy and sell, so they can price them fast. And that little calculator does a finite difference, or a Fourier series, solution to this Black-Scholes equation, which actually, if you change variables on it, is the heat equation. So what you see here is actually important on Wall Street, except it's probably not the right moment to mention it. Right? So you can blame the whole meltdown on mathematicians. But that wouldn't be entirely fair. They didn't mean it, anyway. But that's been the biggest source of employment, I guess, apart from teaching, in the last ten years. People who could work out the partial differential equations, and they get more complicated than the heat equation, you can be sure. And so the classical one, these guys are economists at MIT and Harvard, or they were. And I guess maybe the Nobel Prize in Economics came to part of that group. And also to Merton. Maybe Black, possibly Black died before the time of the Nobel Prize.

Anyway, they were the first. And it's a beautiful paper, beautiful paper too. Just to figure out how you price, what's the value of, an option that allows you to buy or sell a stock at a later time? Yeah. So, well of course you have to make assumptions on what's going to happen over that time, and that's where Wall Street came to grief, I guess. If you had to put it in a nutshell: the options, the standard straightforward options, those are fine, using Black-Scholes. And then what's happened is they now price more and more complicated things. To the point that the banks were buying and selling credit default swaps, insurance swaps, that practically nobody understood what they were. They just assumed that if there was always a market for them, like insurance, somehow it wouldn't happen. And you get on. But it happened. So now we're in trouble. OK, that's not 18.085, fortunately. Or math, but. Anyway, a lot of people got involved with things they didn't really know about, and then were selling them, as well as, of course, the mortgage problems. Anyway.

Ready for other questions on the homework, or these topics. Yeah. OK.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: OK.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: OK, right, yeah. Have an image of waves, so --

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Yeah. I suppose if I had to have a picture of the discrete thing: my picture of the function case was a bunch of sines and cosines somehow adding up to my function. And if time-- if I'm solving the heat equation, then those separate waves are maybe decaying in time. So that when I add them up at a later time I get something different. Or if it was the wave equation, which is probably your image, they're moving in time. So they add up to different things at different times, because they can move at different speeds. Yes, so a function is a sum of waves, right? Then what would the discrete guy be? I guess I would just have to imagine the function as only having those N values. And my wave would just be, a wave might be, just N values there. But still, if I have a time-dependent problem, maybe that thing is pulsing up and down. It's just that it's only got a fixed number of points, and I'm not looking at the whole wave. But I don't think it's essentially different. And of course the fast Fourier transform, in the discrete case, is used to approximate the continuous one. Yeah. You can look in Numerical Recipes for a discussion of approximating the Fourier series by the discrete Fourier series. I mean, that's an important question. Because of course Fourier series has got all these integrals. The coefficients come from an integral formula. We're not going to do those integrals exactly, so we have to do them some approximate way. And one way would be to use equally spaced points and do the DFT.
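
Here is a sketch of that last idea: approximate the Fourier coefficients by sampling at equally spaced points and taking the DFT. The test function, the size N = 64, and the exact-answer check (the coefficients of e^(cos x) are modified Bessel values, computed here via SciPy) are my additions, not from the lecture.

```python
import numpy as np
from scipy.special import iv   # modified Bessel I_k, for an exact check

# Approximate c_k = (1/2pi) * integral_0^{2pi} f(x) exp(-ikx) dx
# by sampling f at N equally spaced points; then fft(f)/N ~ c_k.
N = 64
x = 2 * np.pi * np.arange(N) / N
f = np.exp(np.cos(x))          # smooth periodic test function, c_k = I_k(1)
c = np.fft.fft(f) / N
for k in range(5):
    print(k, round(c[k].real, 10), round(float(iv(k, 1)), 10))
```

For a smooth periodic function like this one, the two columns agree essentially to machine precision; for a function with a jump, like the examples worked below, the approximation degrades for the same 1/k reason the coefficients decay slowly.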

Can you just remind me what that integral formula is? I don't want you to-- it was on the board today. What's the formula for the coefficient c_k in the Fourier series? I'm really just asking you this because I think you should have it in some memory cache, you know, in fast memory, rather than in the textbook. OK, so what's the formula for c_k, in the Fourier series case? Everybody think about it. It's going to be an integral, right? And I'll take it over zero to 2pi, I don't mind. Or minus pi to pi. And what do I integrate? These are the Fourier coefficients of the function f(x). So I take f(x), I remember to divide by 2pi, I'm doing the continuous one, and what do I multiply by to get the coefficients? e^(-ikx). So I've forgotten whether I assigned some of these very early questions. They just gave you the function and said: what's the coefficient? Let me just look at one or two. Suppose my function is f(x)=x. I guess in that problem, in Problem 1, I made it minus pi to pi. Suppose f(x)=x. So you have to integrate x times e^(-ikx). Well, you've got an integral to do. OK, it's doable but not instantly doable.

Let me ask you some questions and you tell me about it. So I draw the function. The function is x, from minus pi to pi. And tell me about the coefficients: how quickly do they decay? This is like some constant over k to some power p, and what's p? What rate of decay are you expecting for the coefficients? Well, you say to yourself, it looks like a pretty nice function, smooth as can be. But what's the answer here? That power will be? One. Because the function jumps. The function has a jump there, and the Fourier coefficients have got to deal with it. So the Fourier series for this is going to be-- if I took 100 terms it would be really close. And then it'll go down here, but it's got to get there, and got to start, and pick up below there. So it's got the same issue, the Gibbs phenomenon, that the square wave had. It's got that jump. So it'll go like 1/k. OK, coming back. Could you actually find those numbers? And do you remember how to do an integral like that? Well, look it up, I guess, is the best answer. But whoever did it the first time-- well, it's integration by parts, somehow. The derivative of this makes it really simple, and this we can integrate really easily, right? So we integrate that, take the derivative of that. We get a boundary term; I don't exactly remember the formula, but it'll have a couple of terms, not bad. And we'll see this 1/k. OK, and that's the formula.
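
The professor leaves the integration by parts to memory; writing it out (my computation, consistent with the 1/k decay just predicted):

$$c_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} x\,e^{-ikx}\,dx = \frac{1}{2\pi}\left[\frac{x\,e^{-ikx}}{-ik}\right]_{-\pi}^{\pi} + \frac{1}{2\pi ik}\int_{-\pi}^{\pi} e^{-ikx}\,dx = \frac{(-1)^k\,i}{k} \qquad (k \neq 0),$$

since e^(-ik*pi) = e^(+ik*pi) = (-1)^k and the leftover integral vanishes for nonzero integer k. So |c_k| = 1/k exactly: the jump at the endpoints shows up as first-power decay, just as predicted.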

If I changed x to something else, let's see. As long as I'm looking at number one, what if I took e^x? Oh, easy. Right? e^x, yeah, let's do e^x, because that's an easy one. Now, what's my graph of e^x? It's quite small here at minus pi, and pretty large, e^pi, at pi. What's the rate of decay of the Fourier coefficients for that guy? Same. It's got to jump again. Drops down here. So let's find it. Let's find those Fourier coefficients, that integral. Let's put down what I'm integrating here: I'm integrating e^x times e^(-ikx), which is e to the x(1-ik). Right? That's what I've got to integrate. The integral is the same thing divided by 1-ik. I plug in the endpoints, minus pi and pi, and I've got the answer. And I divide by 2pi. Yeah. So that's a totally doable one. Let's plug it in. 1/(2pi). And then the denominator is this 1-ik, yeah, well, there you see it already, right? You see already the 1/k in the denominator. That's giving us the slow decay rate. And now I'm plugging in the endpoints: e^(pi(1-ik)) minus e^(-pi(1-ik)). So as k gets big, this is slow decay. Now, what's happening here as k gets large? Oh, k is multiplied by i there. So this e^(pi*ik) is just sitting there; it may even be just one, or minus one, or whatever. Is it? Yeah. k is just an integer, this is k*pi*i, so that's just minus one to something. So all that numerator is minus one to the k-th power or something, times e^pi. So e^pi is there. And e^(-i*pi*k) is just (-1)^k. And this is e to the minus pi times the same kind of factor, e^(+i*pi*k).

I think that's minus one to the k. Well, wait a minute. Maybe they're not both, whatever. It's a number, it's a number of just this size. There's the big number, there's the small one. And divided by that 1-ik; that's the thing that shows us: yes, there is this jump. Right? OK, that's a couple of sets of Fourier coefficients. You could ask yourself, because on the quiz there'll probably be one, and I'll try to pick a function that's interesting. I mean, I don't plan to pick x*e^x or anything. Yeah. OK, good. Yes, thanks.
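
Cleaning up that last step (my computation; the "number" the professor mentions is exactly (-1)^k times 2 sinh(pi)):

$$c_k = \frac{1}{2\pi}\,\frac{e^{\pi(1-ik)} - e^{-\pi(1-ik)}}{1-ik} = \frac{(-1)^k\,(e^{\pi}-e^{-\pi})}{2\pi\,(1-ik)} = \frac{(-1)^k \sinh\pi}{\pi\,(1-ik)},$$

so |c_k| = sinh(pi) / (pi * sqrt(1+k^2)), which decays like a constant over k: the same first-order decay, coming from the jump at the endpoints.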

AUDIENCE: [INAUDIBLE] Fourier series has twice the energy of another, what does that mean?

PROFESSOR STRANG: I don't know. I guess. Hm. Maybe we're talking about power; if we were dealing with electronics, I guess I would interpret that energy in terms of power. So that's what I'd be seeing. I'm not thinking of a really good answer to say well, why is that energy equality, but it's really useful. You know, so much of signal processing, and we'll do a bit of signal processing, is simply based on that energy equality there, and on moving into frequency space. And convolution. Actually, they would call it filtering. So we'll call it filtering when we use convolution. But it's pure convolution. Pure linear algebra, for these special bases. So, OK. I could try to come up with a better answer, or a more focused answer, than just to say power, or to use an electric power word, which is just one step. Yeah, thanks. Yes.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: 4.3, eight or nine. Let's just have a look and see what they're about. OK, just some regular guys. Yeah so that, OK, let me look at, you want me to do 4.3 eight?

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Ah, yes. Good, good question. OK, so 4.3 eight gave a couple of vectors, a little bit like the ones I did this morning. I mean, here the c in 4.3 eight has a couple of ones, and this morning I just had a [1, 0, 0, 0]. But no big deal. Now, yeah. So if two vectors are orthogonal, are their transforms orthogonal? I think, yes. Yes. So maybe this is worth a moment here. Let me write down the letters for my question, and then answer the question. OK, so suppose I have two vectors, c and d. And they're orthogonal. And then I want to ask: if I multiply by the Fourier matrix, are Fc and Fd orthogonal? That's sort of the question. So here the vectors are in frequency space; here they are in physical space. I don't mind if you started in physical space and went to frequency with F inverse, same question. Does the Fourier matrix preserve angles? Do matrices, and F wouldn't be the only one with this property, do-- They preserve length, right? If you preserve length, do you preserve angles too? Preserve length, you're just looking at one vector. We know that we preserve length, and how do we know that? Let's just remember that. So here's the length question. And this'll be the angle question. So this is the energy equality, coming back to that key thing. This is c bar transpose c, and over here we have (Fc,Fc), and this is what I did this morning. So that's c bar transpose F bar transpose F c, right?

That's-- why the bar as well as the transpose? Because we're doing complex. OK, let me make a little more space. And what did we do this morning? We replaced that by, well, I wish it were the identity. But it's a multiple of the identity, and that's what matters. So this was N times the identity, giving N c bar transpose c. So the conclusion was that this thing is just N times this thing. But over here we got a zero, so my N is going to wash out. OK, let's just do the same thing for angle. This is the key energy equality for length. And all I want to say is, this Fourier matrix, like other orthogonal matrices, is just rotating the space. It's sort of amazing to think you have one space, physical space, and you rotate it. I'll use that word, because somehow that's what an orthogonal matrix does. Well, it's complex N-dimensional space. Sorry, so it's not so easy to visualize rotations in C^N, but it's a rotation. Angles don't change, and let's just see why. The inner product of this with this is c bar transpose, F bar transpose, because I have to transpose that, times Fd. What do I do now? This one was c bar transpose d, with zero. You see? You have it. Again, it's this fact that F bar transpose times F is the identity, apart from an N. So again, inner products are multiplied by N. Not only the length squared, which is the inner product with itself, but all inner products are just multiplied by N. So if they start at zero they end up at zero. Yeah. That was your question, right? Yep, right. So if I have a couple of vectors, as I guess that problem proposed-- it happened to propose two inputs that happen to be orthogonal-- then you should be able to see that the transforms are, too. Yeah. Yep.
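
A quick numerical confirmation of both facts, length and angle (a sketch assuming NumPy; the size N = 8 and the random test vectors are my choices):

```python
import numpy as np

# Fourier matrix F with entries w^(j*k), w = exp(2*pi*i/N).
N = 8
jj, kk = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(2j * np.pi * jj * kk / N)

# Length fact: F_bar^T F = N*I, the energy equality up to the factor N.
print(np.allclose(F.conj().T @ F, N * np.eye(N)))       # True

# Angle fact: every inner product is multiplied by the same N,
# so orthogonal vectors have orthogonal transforms.
rng = np.random.default_rng(0)
c = rng.standard_normal(N)
d = rng.standard_normal(N)
d -= (d @ c) / (c @ c) * c            # make d orthogonal to c
print(round(abs(np.vdot(F @ c, F @ d)), 10))            # 0.0
```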

Can I ask you about question two, was that on the homework? Problem two in Section 4.3? Was it? All I want to say is that's a simple fact. Let me just write that fact down so we're looking at it. This F matrix, it's so simple, but it leads to lots of good things; it's just a key fact. So if I look at my F matrix, am I looking at rows here? Yeah, I happened to look at rows; columns would be the same, it's symmetric. So here's row one-- I'm sorry, that's row zero. This is row one. And let me look at row N-1. This is w and this is w^(N-1); this was w^2 and this is w^(2(N-1)). And so on. Everybody's got the idea. And then all these in-between rows, two, three, et cetera. And the question asks: show that this and this are complex conjugates. That that row and that row are the same, except conjugates. So if we look at row number one in F, we'll be looking at row number N-1 in F bar. Now, why is that row the conjugate of that row? Why is this the conjugate of this? So I'm asking you to explain to me why the conjugate of w is w^(N-1). It's just one more neat fact about these important, crucial numbers w. And how do I see this? Well, this is one way to do it: w^(N-1) is w^N times w^(-1). And what's w^N? One. Everybody knows, w^N is one.

So I have w^(N-1) equal to w^(-1), and I'm trying to show that's w bar. Well, we know that this is true. There's w; here's its conjugate, and it's also the inverse. This w is some e^(i*theta), e to the i times some angle, so it's sitting on the unit circle. So its reciprocal is also on the unit circle. The reciprocal is e^(-i*theta), so it's just the conjugate. That's a great fact. That's a beautiful fact about all these w's and their powers: conjugate and inverse are the same. Right. So that was the key to problem two. I don't think I asked you to go through all the steps of problem three, but just in case you didn't read problem three, let me tell you in one tiny space what happens. In problem three you discover that the fourth power of F is the identity. Except there is an N squared, because from two F's we got an N, so from four F's-- So that's another fantastic fact about the Fourier matrix. Somehow the Fourier matrix is rotating, and its fourth power brings you back to the identity, apart from that N squared, because we just didn't normalize. That's pretty amazing. Pretty amazing. So if I had normalized it right, if I took F over the square root of N, that's the exact normalization to put the N's where they belong, you could say. Then the fourth power of that matrix is the identity. Yeah.
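
A check of that problem-three fact (a sketch, same NumPy conventions as above; N = 6 is an arbitrary choice):

```python
import numpy as np

N = 6
jj, kk = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(2j * np.pi * jj * kk / N)

# F^2 = N * (a reversal permutation), so F^4 = N^2 * I;
# the normalized matrix U = F/sqrt(N) has U^4 = I exactly.
print(np.allclose(np.linalg.matrix_power(F, 4), N**2 * np.eye(N)))        # True
print(np.allclose(np.linalg.matrix_power(F / np.sqrt(N), 4), np.eye(N)))  # True
```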

New math and new applications keep coming for these things. I'll tell you, actually-- you could tell me the eigenvalues of this matrix. Yeah, this is a good question. What are the eigenvalues of a matrix whose fourth power is the identity?

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Yeah, one. What else could the eigenvalue be? If the fourth power of the matrix is I, what are the possible eigenvalues? So let me call this matrix U, for the normalized matrix F over the square root of N. If U^4 is the identity, what are the eigenvalues? Well, of course U could be the identity, but it's not; it's the Fourier matrix. So the eigenvalues could be one; what else could they be? Minus one is possible, because minus the identity would be fine. i, and minus i. Four possible eigenvalues. If you, say, put the four by four Fourier matrix into MATLAB and see what you get for eigenvalues-- I've forgotten whether you get one of each of these, or whether somebody's repeated at level four. But then go up to five or six and you'll see these guys start showing up, with different multiplicities. Right? The 1,024 matrix has got these guys, some number of times, adding up to 1,024. Right, yeah. So it's quite neat. Now, here's a question, which I actually just learned a good answer to. What are the eigenvectors? You could give a sort of half-baked description, once you know the eigenvalues. But to really get a handle on the eigenvectors-- that has been a problem that was studied in IEEE Transactions papers, but without a really nice, specific description of the eigenvectors. And somebody was in my office this fall, a post-doc at Berkeley, who's seen the right way to look at that problem and describe the eigenvectors of the Fourier matrix. So that's, like, amazing to me that such a fundamental question was waiting, and turned out to involve quite important, deep ideas. OK, ready for one more for today? Anything? Or not. OK, so I hope you're getting the good stuff now on Fourier series and the discrete Fourier transform. Please come on Friday, because Friday will be the big day for convolution. And that's the essential thing that we still have to do. OK, see you Friday.
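
To settle the "I've forgotten" above, here is a hedged numerical experiment (my code, assuming NumPy; the sizes are arbitrary choices) counting how often each fourth root of unity appears as an eigenvalue of U = F/sqrt(N):

```python
import numpy as np

for N in [4, 5, 6, 8, 16]:
    jj, kk = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    U = np.exp(2j * np.pi * jj * kk / N) / np.sqrt(N)   # normalized Fourier matrix
    lam = np.linalg.eigvals(U)
    counts = {s: int(np.sum(np.abs(lam - v) < 1e-8))
              for s, v in [("+1", 1), ("-1", -1), ("+i", 1j), ("-i", -1j)]}
    print(N, counts)
```

For N = 4 the answer is not one of each: one root is repeated and another is absent, and as N grows the four multiplicities settle into a fixed pattern depending on N mod 4.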