Topics covered: Weierstrass M-test; using power series to evaluate definite integrals when we do not know the anti-derivative of the integrand.
Instructor/speaker: Prof. Herbert Gross
Ready to move on to Calculus II: Functions of Several Variables? Professor Gross has posted links to the next videos in the series on his Mathematics As A Second Language website.
Lecture 6: Uniform Convergence
Related Resources
This section contains documents that are inaccessible to screen reader software. A "#" symbol is used to denote such documents.
Part V, VI & VII Study Guide (PDF - 35MB)#
Supplementary Notes (PDF - 46MB)#
Blackboard Photos (PDF - 8MB)#
ANNOUNCER: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Hi. Today is the day we've all been waiting for. We've come to the last lecture of our course, and it's, I think, a rather satisfying lecture today, not just because it is the last one, but content-wise too. Today we're going to talk about uniform convergence of series. Remember, a series can be viewed as a sequence of partial sums, and consequently our discussion on uniform convergence applies here.
And what we're going to do-- you see, again, it's the same old story. Now that we know what the concept means, is there an easy way to tell when we have the property? And I figure again that, this being the last lecture, I should give you some big names to remember. And name-dropping-wise, I come to our first concept today, which I call-- I don't call it that. It's named the Weierstrass M-test. The Weierstrass M-test is a very, very convenient method for determining whether a given series converges uniformly or not.
By the way, let me just make one little aside. Instead of saying the sequence of partial sums converges uniformly to the infinite series, we usually abbreviate that simply by saying, the infinite series is uniformly convergent. Let me just say that one more time. If I say that the series is uniformly convergent, that's an abbreviation for saying that the sequence of partial sums converges uniformly to the series.
But at any rate, let me now go over the so-called Weierstrass M-test with you. It's a rather simple test. I will give you both the proof of the test and some applications of it. The test simply says this-- and this is where the name M-test comes from. See, because you put an 'M' in over here. If I put a 'C' over here, they probably would have called it the Weierstrass C-test. But the idea is this. Suppose that the series, summation 'n' goes from 1 to infinity, capital 'M sub n', is a positive convergent series-- this is going to be sort of like the comparison test-- and that the absolute value of ''f sub n' of x' is less than or equal to 'M sub n' for each 'n' and for each 'x' in some interval from 'a' to 'b'.
Then if this condition is obeyed, the sequence of partial sums-- some 'k' goes from 1 to 'n', of ''f sub k' of x'-- converges uniformly to its limit function, namely what we call the series 'n' goes from 1 to infinity, ''f sub n' of x', on the interval from 'a' to 'b'. See, in other words, if I can get this kind of a bound on what's going on, where these form the terms of a positive convergent series, then this series converges uniformly.
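Pulling the hypothesis and conclusion together, the statement the professor has just given can be transcribed in symbols (my notation, not the blackboard's):

```latex
\sum_{n=1}^{\infty} M_n \text{ converges},\ M_n > 0,
\quad |f_n(x)| \le M_n \ \text{for all } n \text{ and all } x \in [a,b]
\;\Longrightarrow\;
\sum_{k=1}^{n} f_k(x) \to \sum_{n=1}^{\infty} f_n(x)
\ \text{uniformly on } [a,b].
```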
Now, again, let me outline the proof. Once again, let me remind you all of these proofs are done rigorously and also with supplementary wording in the notes, so that you can pick this up again if you missed the high points in the lecture.
You see, to prove convergence in the first place, I have to show what? That the sequence of partial sums can be made to differ from this limit, which I call 'f of x', by as small an amount as I want, just by taking 'n' sufficiently large. And the way I work this thing is I say, OK, let's take a look at this difference. Well, remembering that another name for 'f of x' is the limit of this thing as 'k' goes from one to infinity, I just substitute this in place of 'f of x'. I then get this. And now you see, if I subtract the first 'n' terms from this infinite series, what's left is all the terms in the 'n plus first' on. In other words, that this is in turn equal to the absolute value of the summation, 'k' goes from 'n plus 1' to infinity, ''f sub k' of x'.
Again, don't be upset by the 'k's replacing the 'n'. Remember that 'k' is a dummy variable. The hardship would be that if I use the subscript 'n' over here, I would have a logical contradiction. How can you say 'n' goes from 'n plus 1' to infinity? 'n' can't equal 'n plus 1', so I just change the subscript here.
At any rate, if I now invoke the triangle inequality for absolute values-- namely, that the absolute value of a sum is less than or equal to the sum of the absolute values-- I then get that the thing that I'm investigating is less than or equal to the sum, 'k' goes from 'n plus 1' to infinity, absolute value ''f sub k' of x'.
Now, remember what the Weierstrass condition was. It was that for each 'k', the absolute value of ''f sub k' of x', for all 'x' in my interval, was less than or equal to 'M sub k'. So I can now go from here to here. But what do I know about the series summation, 'M sub k'? I know that that's a positive convergent series. In particular, because it's convergent, that means that the tail end-- meaning from 'n plus 1' to infinity-- that sum can be made smaller than any prescribed epsilon, smaller than any given epsilon, simply by picking 'n' sufficiently large. See, in other words, I can make this difference less than epsilon for 'n' sufficiently large.
And here's the key step. Since 'M sub k' does not depend on 'x', this inequality was done independently of the choice of 'x'. You see, in other words, not only do I have that this converges to this, but since this was done independently of the choice of 'x', this is uniform convergence. And maybe I should write that down, because that's important to remember, that this is independently of 'x'. That's what makes the convergence uniform.
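The chain of estimates in the proof just outlined, written out in one line (again my transcription):

```latex
\Bigl| f(x) - \sum_{k=1}^{n} f_k(x) \Bigr|
  = \Bigl| \sum_{k=n+1}^{\infty} f_k(x) \Bigr|
  \le \sum_{k=n+1}^{\infty} |f_k(x)|
  \le \sum_{k=n+1}^{\infty} M_k
  < \varepsilon
  \quad \text{for } n \ge N(\varepsilon),\ \text{independent of } x.
```

The last inequality is where the convergence of the comparison series enters; the independence of $x$ is what upgrades ordinary convergence to uniform convergence.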
Well, let's continue on-- I think an example might come in handy now. After you do any of these abstract things, I think a nice example shows what's coming off very nicely. And so I very creatively call this 'An Example'.
Let's look at the following series. 'cosine x' plus 'cosine '4x over 4' plus et cetera 'cosine ''n squared x' over 'n squared'', where 'n' is the number of the term over here. For example, the third term would be 'cosine 9x' over 9, et cetera. And I now want to look at the limit function here. See, is this a convergent series? Is it absolutely convergent? Is it uniformly convergent? What is it?
And here's how the Weierstrass M-test works. We look at the absolute value of the term making up this particular series. See, the absolute value of ''cosine 'n squared x'' over 'n squared''. Now, let's take a look at this. Since 'n' is an integer, 'n squared' is positive, so the absolute value of 'n squared' is still 'n squared'. Since the cosine has to be between minus 1 and 1, the magnitude of the cosine, regardless of the value of 'n squared x', is less than or equal to 1. In particular, then, the absolute value of ''cosine 'n squared x'' over 'n squared'' is less than or equal to '1 over 'n squared'' for each 'n'. And I might add, and all 'x', because no matter what 'x' is and no matter what 'n' is, 'cosine 'n squared x'' in magnitude cannot exceed 1.
On the other hand, we already know, in particular by the integral test, that summation 'n' goes from 1 to infinity of '1 over 'n squared'' is a positive convergent series. Consequently, by the Weierstrass M-test, this series here is uniformly convergent, which again means what? That the sequence of partial sums, namely sum 'k' goes from 1 to 'n', ''cosine 'k squared x'' over 'k squared'', converges uniformly to summation 'n' goes from 1 to infinity ''cosine 'n squared x'' over 'n squared''.
That's how the test works. But now, from an engineering point, the more important question-- what does it mean? How can we use it? Well, for example, let's suppose that in working some particular applied problem, one had to integrate this particular series, 'cosine x' plus ''cosine 4x' over 4' plus ''cosine 9x' over 9', et cetera, the infinite series from 0 to some value 't'.
Now here's the key point. And by the way, I just write it this way-- to get used to the ominous notation, notice that what looks a little bit ominous here is just another way of writing out, longhand, what the terms inside the parentheses are. The idea is this. Notice what this says-- this says first compute the sum and then integrate.
For example, we might not know what limit this sum approaches. Or if we do know it, it may be a particularly complicated thing to write down. You see, in other words, if we follow the problem in the order that the operations are given, we must first sum this series and then integrate it. But what have we already shown? We've already shown that this series is uniformly convergent, but for a uniform convergent series, we saw last time that you can interchange the order of summation and integration.
In other words, by uniform convergence, what I can now do is integrate this thing here, term by term. See, 'sine x' plus ''sine 4x' over 16'. Just the usual technique of integration. And now you see I can integrate first and then compute the limit. In other words, I can say, look, this integral is just 'sine t' plus ''sine 4t' over 16' plus et cetera, and so forth. Again, if you want to see this from the abstract point of view, what I'm saying is that by uniform convergence, I can switch these two symbols.
Now, the important point is that to integrate ''cosine 'n squared x'' over 'n squared''-- remember, 'n' is a constant here. This is a very easy thing to integrate. In fact, it comes out to be what? ''Sine 'n squared x'' over 'n to the fourth''. You see the 'n squared' comes down as an 'n to the fourth'. And when I evaluate that between 0 and 't', all I wind up with is summation 'n' goes from 1 to infinity, ''sine 'n squared t'' over 'n to the fourth''. And this is a perfectly good, bona fide convergent series.
And, in real life, you see I can approximate this to any degree of accuracy that I want just by going out far enough before I chop off the remaining terms. In other words, the beauty in this case with series is that when they converge uniformly and you have to integrate the limit function, you can integrate the individual member of the sequence first and then take the limit as n goes to infinity.
OK. So far so good. Let's apply this now, in particular, to power series. And again, let me keep reminding you, all of this stuff is done much more slowly in the supplementary notes. My main reason for repeating what's in the supplementary notes is that I think this material is both difficult enough and new enough that you should hear parts of it before you're sent out on your own to read it. Hopefully my words will sound familiar to you as you hear them repeated in the supplementary notes.
Let's suppose that a power series summation, 'a n', 'x to the n', as 'n' goes from 0 to infinity, converges for the absolute value of 'x' less than 'R'. And let me pick a couple of values of 'x', 'x sub 0' and 'x sub 1', such that 'x sub 0' in magnitude is less than 'x sub 1' in magnitude, which in turn is less than 'R' in magnitude.
To give you a hint as to what I'm going to be driving at, I'm going to try to prove that this series converges uniformly within 'R'. And I'm going to try to prove it by setting up the Weierstrass M-test. And I'm going to have to compare this with a positive convergent series, and the positive convergent series that I have in mind is going to make use of the fact that the magnitude of 'x0' divided by the magnitude of 'x1' is a positive constant less than 1. And I'm going to set this thing up so that I can use a geometric series on it. And if that sounds confusing, let me just go through this now in slow motion and show you what I'm driving at here.
See, first of all, since 'x1' is within the radius of convergence, that means in particular that summation 'a n ''x sub 1' to the n'', as 'n' goes from 0 to infinity, converges. Since this converges, in particular-- the very first test that we learned-- in particular, the n-th term must go to 0. Remember, for a convergent series, the n-th term goes to 0. Now, since the limit of 'a n 'x1 to the n-th'' is 0, that means that for n large enough, this will get smaller than any positive constant. Otherwise it couldn't converge to 0. All right?
So all I'm saying there, I guess, is if this is 0 and this is 'M', if I can't make this thing less than 'M', how can the limit ever be 0? In other words, if these things can't get past 'M', how could the limit be 0? So all I'm saying is that, given a positive 'M', I can always find an 'N' such that when I go out far enough-- in other words, when the subscript 'n' is greater than capital 'N'-- the magnitude of 'a n 'x1 to the n'' is less than capital 'M'.
Now the key idea is this. What I'm going to do is look at summation 'a n 'x0 to the n'' where 'x0' is as given over here. Now what I'm going to try to do is set this up for the Weierstrass M-test, and the way I'm going to do that is as follows. I look at the absolute value of 'a n 'x0 to the n''. Somehow or other, the thing I have control over is not 'a n 'x0 to the n'', but rather 'a n 'x1 to the n''. And now I use that very common mathematical trick that gets us out of all sorts of binds, and that is that, since I would like 'a n 'x1 to the n'' over here, I just put it in. And then I must cancel it out again. In other words, all I do is I rewrite 'a n 'x0 to the n'' as 'a n' times 'x0' over 'x1 to the n', times 'x1 to the n'. In other words, notice that this just cancels and I have what I started with over here.
The point is that the magnitude of 'a n 'x1 to the n'' is less than 'M'. We already saw that. So what I do is I split this thing up. Remember, the absolute value of a product is the product of the absolute values. So I write it as what? The magnitude of this times this, times the magnitude of ''x0 over x1' to the n'.
In other words, I guess what I should have over here-- this is still an equality. This is just rewriting. The absolute value of a product is a product of the absolute values. Consequently, this is the absolute value of 'a n 'x1 to the n'' times the absolute value of ''x0 over x1' to the n'. And the key point is that the absolute value of 'a n 'x1 to the n'' is less than 'M' for 'n' sufficiently large.
In other words, for 'n' sufficiently large, the magnitude of 'a n 'x0 to the n'' is less than or equal to this expression here. But what is this expression? My claim is that this is the n-th term of a convergent positive series, namely a convergent geometric series. I'm going to show you this in more detail. All I'm saying is observe that the magnitude of 'x0 over x1', since 'x0' is less than 'x1' in magnitude, is a positive constant less than 1.
Therefore, this is what? This is a geometric series whose ratio is 'x0 over x1'. In other words, as you pass from the n-th term to the 'n plus first' term, in each case you just multiply by 'x0 over x1', which is a constant. Therefore, by the Weierstrass M-test, this given series is uniformly convergent inside the interval of convergence. You see?
In other words, what this now means is that we can replace the condition that the series was absolutely convergent inside the radius of convergence by, it's uniformly convergent inside the radius of convergence. And now you see the question is, what does that help us do? And again, this is left in great detail for the supplementary notes and the exercises, but I thought that for a finale, we should do a rather nice practical application. In particular, a problem that we haven't tackled at all. We weren't able to tackle it, but a problem that we've used as a scapegoat many times to show, for example, why the First Fundamental Theorem of integral calculus was not the cure-all it was supposed to be, for example. Let me show you what example I have in mind.
I'm thinking of this example. Find the area of the region 'R', where 'R' is the region bounded above by our old friend 'y' equals 'e to the minus 'x squared'', below by the x-axis, on the left by the y-axis, and on the right by the line 'x' equals 1.
You see, what we used to say before was what? We know by the definition of a definite integral that the area of the region 'R' is the integral from 0 to 1, 'e to the minus 'x squared'', 'dx'. But what we do not know explicitly is a function, 'G of x', for which 'G prime of x' is 'e to the minus 'x squared''. Remember, we could solve this problem by trapezoidal approximations and things of this type. We could approximate it, but we couldn't get a good bound on the answer very conveniently. What I thought I'd like to show you now is how one uses power series to solve this kind of a problem. If nothing else, this one application should be enough impetus to show you the power of power series. They are a very, very important and powerful analytical technique.
Let's see how we can handle this. First of all, from our previous material, we already know that 'e to the u' is represented by '1 plus u plus ''u squared' over '2 factorial'' plus' et cetera, ''u to the n' over 'n factorial'', et cetera, for all real numbers 'u'. OK, we already know that. In particular, if I now think of-- see, since 'u' is generically just a name for any real number, since minus 'x squared' is a real number also, I can replace 'u' by minus 'x squared'. And replacing 'u' by minus 'x squared' leads to what? 'e to the minus 'x squared'' is 1, minus 'x squared'. Now minus 'x squared', squared, is 'x to the fourth', over 2 factorial.
And in general, we see when I replace 'u' by minus 'x squared', this becomes plus or minus 'x to the 2n' over 'n factorial'. And to handle the plus or minus, I simply put in my sign alternator-- that's minus 1 to the n-th power. And the reason I use 'n' here, rather than, say, 'n plus 1', is to keep in mind that in power series, this is called the 0-th term. In other words, when 'n' is 0, minus 1 to the 0 would be positive, and I want the first term to be positive.
At any rate, to make a long story short-- shorter-- another way of writing this is that 'e to the minus 'x squared'' is summation 'n' goes from 0 to infinity, minus '1 to the n', 'x to the 2n' over 'n factorial'. Consequently, since these two are synonyms, to compute the integral from 0 to 1, ''e to the minus 'x squared'' dx', I can replace 'e to the minus 'x squared'' by the power series which converges to it uniformly. Namely, I can now write this as what? Integral from 0 to 1, summation 'n' goes from 0 to infinity, minus '1 to the n', 'x to the 2n' over 'n factorial' times 'dx'.
Now notice what this says. This says, in the order of the given operations, that I must form the sum of this series first, which is not an easy job, and then integrate that from 0 to 1. But the beauty is that, by the Weierstrass M-test, I know that this converges uniformly wherever it converges absolutely. I already know that it does converge absolutely for all real 'x'. At any rate, then, I now know that it's uniformly convergent.
One of the beauties of the fact that this is uniformly convergent is I can interchange the order of summation and integration, which I do over here. But this is crucial. Let me make sure I underline that. You see, if there wasn't uniform convergence here, mechanically I can change the order, but I may get a different answer. But since this is uniformly convergent, these are still synonyms, and the beauty is, if I just look at the typical term that I'm trying to integrate here, that's a very easy thing to integrate. Namely, I just do what? Replace this by an exponent one larger-- that's '2n plus 1'. Divide through by '2n plus 1'.
In other words, the integral from 0 to 1, ''e to the minus 'x squared'' dx', is just this particular power series, namely summation 'n' goes from 0 to infinity, minus '1 to the n', 'x to the '2n plus 1'' over ''2n plus 1' times 'n factorial''. That must be evaluated between 0 and 1. Since the lower limit gives me 0 and the upper limit here just gives me a 1, rewriting this-- this is just what? Summation 'n' goes from 0 to infinity, minus '1 to the n' over ''2n plus 1' times 'n factorial''.
In other words-- let's take a look at this. When 'n' is 0, this term is 1. When 'n' is 1, this is minus 1 over what? 3 times 1 is 3. When 'n' is 2, this is 5 times 2 factorial, which is 10. When 'n' is 3, this is minus 1 to the third power, which is minus. This is 7 times 3 factorial. 7 times 3 factorial is what? 7 times 6, which is 42, et cetera. In other words, what we now know is that 'A sub R' is 1 minus 1/3 plus 1/10 minus 1/42. And if we keep on going this way, the next term is 1/216, which we can compute very easily.
What is this, by the way? This is a convergent alternating series. And as you recall from our learning exercises and some of the material in our supplementary notes, not only does this thing converge, but in such a case the error is never greater than the magnitude of the next term. So, you see, what I can do over here is just take 1, subtract 1/3, add 1/10, subtract 1/42, and know that whatever answer I get when I do this has to be exact to within no more of an error than 1/216. And in fact, if I now add on the 1/216, I think I get 0.748. And the next term is going to be something like 1/1,300, which means that this must be correct to within three decimal places. And this is the way the so-called error function, the table of error function, is actually computed.
Now I'm going to end the material of the course right at this particular point. In other words, I will leave the remaining applications for the exercises and the supplementary notes. What I would like to do is to become subjective for a moment or so, if I may. And even though this course is quite different from a regular classroom course, I would like to end it like a regular classroom course. And that is to make a genuine remark to the effect that this course has been a pleasure for me to teach. It has been very enlightening. It has been soul-searching to both plan the lectures and the supplementary notes and the learning exercises, and I myself, to coin a cliche, am a much better person for having gone through all of this.
Many other people worked hard with me on this also. And in particular, I would like to single out three of my colleagues for recognition. Professor Harold Mickley, who was the director of the center. My friend, John Fitch, who is the manager of the self-study projects and, in particular, was the director of the lectures, and also gave me invaluable advice in many places as to how to simplify the lectures and make them more meaningful. And finally, my friend Charles Patton, who is our electronics wizard here and kept everything going from a technological point of view, and gave me, also, many valuable pointers.
Now, from your point of view, I am hoping that you too can sit back and feel content, now that the course is over. I'm hoping that this long trip back through Calculus Revisited has given you new insight into sets, functions, circular functions, hyperbolic functions, differentiation and integration, power series, so that you can go back enriched, revitalized, and tackle the projects of your choice.
I hate to see the course come to an end, but in closing, it was my pleasure and I hope that we will have the pleasure of getting together again soon to tackle, together, Part Two of Calculus Revisited. And so, until that time, so long.
ANNOUNCER: Funding for the publication of this video was provided by the Gabriella and Paul Rosenbaum Foundation. Help OCW continue to provide free and open access to MIT courses by making a donation at ocw.mit.edu/donate.