Topics covered: The special case wherein each term in the series is non-negative; the concept of convergence; the comparison test; the ratio test; the integral test.
Instructor/speaker: Prof. Herbert Gross
Lecture 2: Positive Series
Related Resources
This section contains documents that are inaccessible to screen reader software. A "#" symbol is used to denote such documents.
Part V, VI & VII Study Guide (PDF - 35MB)#
Supplementary Notes (PDF - 46MB)#
Blackboard Photos (PDF - 8MB)#
MALE SPEAKER: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Hi. Last time, we had discussed series and sequences. Today, we're going to turn our attention to a rather special situation, a situation in which every term in our series is positive. For this reason, I have entitled today's lecture 'Positive Series'.
Before we can do full justice to positive series, however, there are a few topics that we must discuss as preliminaries. The first of these is simply the process of ordering. Now this is a rather strange situation. Because in the finite case, it turns out to be rather trivial.
By way of illustration, let's suppose the set 'S' consists of the numbers 11, 8, 9, 7, and 10. Now there are many ways in which we could order the set 'S'. We can arrange them so that any one of these five elements comes first, any one of the remaining four second, et cetera.
But suppose that we want to arrange these according to size. Observe that there is a rather straightforward, shall we say, binary technique, whereby we can order these elements. By binary, I mean, let's look at these two at a time.
We look at 11 and 8, and we throw away the larger of the two, which is 11. Then we compare 8 with 9, throw away 9 because that's bigger. Compare 8 and 7, throw away 8. Compare 7 and 10, throw away 10.
The survivor, being 7, is therefore the least member of our collection. In a similar way, we can delete 7 and start looking for the next number of our set. And in this way, eventually order the elements of 'S' according to size-- 7, 8, 9, 10, 11. OK. Clearly, 7 is the smallest element of 'S' and 11 is the largest element of S.
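The pairwise "throw away the larger" procedure just described can be sketched in a few lines of Python (a minimal sketch; the function names are my own):

```python
def smallest(items):
    """Find the least member by binary comparison: look at two at a
    time and throw away the larger of the pair."""
    survivor = items[0]
    for x in items[1:]:
        if x < survivor:
            survivor = x  # the old survivor was the larger; discard it
    return survivor

def order_by_size(items):
    """Repeatedly delete the survivor to order the whole set by size."""
    remaining = list(items)
    ordered = []
    while remaining:
        m = smallest(remaining)
        remaining.remove(m)
        ordered.append(m)
    return ordered

print(order_by_size([11, 8, 9, 7, 10]))  # [7, 8, 9, 10, 11]
```

Each pass finds one survivor by pairwise comparisons; repeating the pass orders the whole finite set, exactly as in the lecture.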
And in terms of nomenclature, we say that 7 is a lower bound for 'S', 11 is an upper bound for 'S'. You see, in terms of a picture, all we're saying is that when the elements of 'S' are ordered according to size, 7 is the furthest to the left, 11 is the furthest to the right.
We could even talk more about the nomenclature by saying-- and this sounds like a tongue twister. This is why I had you read this material first in the last assignment and the supplementary notes, so that part of this will at least seem like a review. Observe that 7 is called the greatest lower bound for 'S' because any number larger than 7 cannot be a lower bound for 'S'-- after all, 7 itself would be smaller than that particular number.
And in a similar way, 11 is called the least upper bound for 'S', because any number smaller than 11 would be exceeded by 11, and hence could not be an upper bound for 'S'. Now the interesting point is this. Hopefully, at this stage of the game, you listened to what I've had to say, and you say, my golly, this is trivial.
And the answer is, it is trivial. But remember what our main concern was in our last lecture when we introduced the concept of infinite series and sequences-- that many things that were trivial for the finite case became rather serious dilemmas for the infinite case.
In other words, my claim is that these results are far more subtle for infinite sets. And I think the best way to see this is by means of an example. Let 'S' be the set of numbers whose n-th member is 'n' over 'n plus 1'. In other words, the first member of 'S' will be 1/2, the second member, 2/3, the third member, 3/4, et cetera.
Now, let's take a look to see what the least upper bound for 'S' is. And sparing you the details, I think you can observe at this stage of the game, especially based on the homework of the last unit, that the limit of the sequence 'S' is 1. In fact, 1 is the smallest number which exceeds every member in this collection.
In other words, 1 is the least upper bound for 'S'. But observe the interesting thing here-- and by the way, notice the abbreviations that we use in our notes. 'LUB', least upper bound. 'GLB', greatest lower bound. But 1 is the least upper bound for 'S'. Yet the fact remains that the least upper bound is not a member of 'S' itself.
In other words, there is no number 'n' such that 'n' over 'n + 1' is equal to 1. You see, notice that as these numbers increase, they get bounded by 1, but 1 is not a member of 'S'. In other words, here's an example where the least upper bound of a set does not have to be a member of the set.
And a companion to this would be an example where the greatest lower bound is not a member of the set. And to this end, simply let the n-th member of 'S', arranged as a sequence, be '1/n'. The n-th member is '1/n'. Therefore, 'S' is what? The set consisting of 1, 1/2, 1/3, et cetera.
Observe that as the terms go further and further out, they get arbitrarily close to 0 in size-- yet every one of these terms, '1/n', no matter how big 'n' is, is greater than 0. In other words, then, observe that 0 will be the greatest lower bound for 'S', but 0 is not a member of 'S'.
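Both of these examples can be checked numerically (a quick sketch; the 1000-term cutoff is an arbitrary choice of mine):

```python
from fractions import Fraction

# First set: the n-th member is n/(n+1). Every term is below 1, yet the
# terms come arbitrarily close to 1: 1 is the least upper bound, but 1
# is not itself a member of the set.
ratios = [Fraction(n, n + 1) for n in range(1, 1001)]
assert all(t < 1 for t in ratios)
assert 1 - ratios[-1] == Fraction(1, 1001)   # already within 1/1001 of 1

# Second set: the n-th member is 1/n. Every term is above 0, yet the
# terms come arbitrarily close to 0: 0 is the greatest lower bound, but
# 0 is not itself a member of the set.
reciprocals = [Fraction(1, n) for n in range(1, 1001)]
assert all(t > 0 for t in reciprocals)
assert reciprocals[-1] == Fraction(1, 1000)
```

Exact rational arithmetic (`Fraction`) avoids any floating-point blur in the comparisons.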
In other words, it seems that things which are quite trivial for finite collections have certain degrees of sophistication for infinite collections. So what we're going to do now is to establish a few basic definitions. And we'll do it in a rather formal way.
Our first definition is the following. Given the set of numbers, 'S', 'M' is called an upper bound for 'S' if 'M' is greater than or equal to 'x' for all 'x' in 'S'. In other words, if 'M' is at least as great as every member of 'S', 'M' is called an upper bound for 'S'.
As I say, these definitions are very straightforward. The companion-- well, let's go one step further. By the way, observe that I have, in the interest of brevity, left out the corresponding definitions for lower bounds. But they are quite analogous. In other words, a lower bound for 'S' would be a number that was less than or equal to each member in 'S'.
At any rate, then, 'M' is called a least upper bound for 'S' if, first of all, 'M' is an upper bound for 'S'. And secondly, if 'L' is less than 'M', 'L' is not an upper bound for 'S'. In other words, least upper bound means what? Anything smaller cannot be an upper bound. Notice in terms of our previous remark that the least upper bound need not belong to 'S'.
The companion to this would be what? A greatest lower bound would be a number which is a lower bound such that anything greater could not be a lower bound. Again, all of this is easier to see in terms of a picture. Visualize 'S' as being this interval with little 'm' and capital 'M' being the endpoints of the interval.
Observe that capital 'M' is the least upper bound. Little 'm' is the greatest lower bound. Anything smaller than little 'm' will be a lower bound. Anything greater than capital 'M' will be an upper bound. And notice that nothing in the set 'S' itself can be either an upper bound or a lower bound. Because anything inside 'S' appears to the right of little 'm' and to the left of capital 'M'.
And one final definition as a preliminary. A set, 'S', is called bounded if it has both an upper and a lower bound. And the key property that we have to keep track of throughout this-- and we won't prove this. In other words, in more rigorous advanced math courses, this is proven as a theorem. For our purposes, it seems self-evident enough so that we're willing to accept it.
And so rather than belabor the point, let us just accept as a key property that every bounded set of numbers has a greatest lower bound and a least upper bound. In other words, if a set is bounded, we can find a smallest upper bound and a largest lower bound. And this completes the first portion of our preliminary material.
The next portion of our preliminary material before studying positive series involves what we mean by a monotonic non-decreasing sequence. A sequence is called monotonic non-decreasing-- and if you don't frighten at these words, it's almost self-evident what this thing means. It means that no term can be smaller than the one that came before.
In other words, the n-th term is less than or equal to the 'n plus first' term for each 'n'. And wording that more explicitly, it says what? 'a sub 1' is less than or equal to 'a sub 2' is less than or equal to 'a sub 3', et cetera. And I hope that it's clear by this time that it's not self-evident that a sequence has to have this property.
Remember, the subscripts simply tell you the order in which the terms appear. They have no bearing on the size of the terms. For example, in an arbitrary sequence, recall that the second term can be smaller in magnitude than the first term. However, if the terms are non-decreasing sequentially this way, the sequence is called monotonic non-decreasing.
And the problem that comes up is, or the important question that comes up is, what's so important about monotonic non-decreasing sequences? And the answer is that for such sequences, two possibilities exist.
In other words, either the terms can keep getting larger and larger without bound-- see? In other words, the sequence 'a sub n' has no upper bound. In which case, we say that the limit of 'a sub n' as 'n' approaches infinity is infinity. And for example, the ordinary counting sequence has this property. See, 1, 2, 3, 4, 5, et cetera, is a monotonic non-decreasing sequence.
In fact, it's monotonic increasing. Every member of the sequence is greater than the one that came before. But as you go further and further out, the terms increase without upper bound. OK? Now what is the other possibility for a monotonic non-decreasing sequence?
After all, the only other possibility is that it's false that the sequence has no upper bound-- in other words, the sequence must have an upper bound. That's the second case.
In other words, suppose the sequence has an upper bound. Then the interesting point is that the limit of this sequence exists. And not only does it exist, but the limit itself is the least upper bound of the sequence. In other words, in the case where the sequence is non-decreasing, if it's bounded-- if it's bounded-- the least upper bound will be the limit.
Now you see, here's where we use that key property. Namely, if a sequence is bounded, it must have a least upper bound. Let's call that least upper bound 'L'. And by the way, you'll notice I wrote the word proof in quotation marks. It's simply to indicate that I prefer to give you a geometric proof here rather than an analytic one.
But the analytic proof follows word for word from this picture. In other words, it just translates in the usual way. And we'll drill on this with the exercises and the notes and the textbook.
But at any rate, the idea is this. To prove that 'L' is a limit, what must I do? I must show that if I choose any epsilon greater than 0, that all the terms beyond a certain one lie between 'L minus epsilon' and 'L plus epsilon'.
Now here's the way this works. Let's see if we can just read the diagram and get this thing very quickly. First of all, at least one term in my sequence has to be between 'L minus epsilon' and 'L'. And the reason for that is simply this. Remember 'L' is a least upper bound. Because 'L' is a least upper bound, 'L minus epsilon' cannot be an upper bound.
Now if no term can get beyond 'L minus epsilon', then certainly 'L minus epsilon' would be an upper bound. The fact that 'L minus epsilon' isn't an upper bound, therefore, means that at least one 'a sub n', say 'a sub capital N', is in this interval here.
Notice also that because 'L' is an upper bound, no 'a sub n' can get beyond here. In other words, no 'a sub n' lies between 'L' and 'L plus epsilon'. Because in that particular case, if that were to happen, 'L' couldn't be an upper bound. OK, so far, so good. That follows just from the definition of the least upper bound. Now, we use the fact that the sequence is non-decreasing.
And that says what? That if little 'n' is greater than capital 'N', 'a sub little n' is greater than or equal to 'a sub capital N'. In other words, what this means is if you list the 'a sub n's along this line, as 'n' increases, the terms move progressively from left to right.
In other words, notice then that if 'n' is greater than or equal to capital 'N', 'L minus epsilon' is less than 'a sub capital N', which in turn is less than or equal to 'a sub little n', which in turn is less than or equal to 'L'. And that's just another geometric way of saying that 'a sub n' is within epsilon of 'L'. And that's exactly the definition that the 'a sub n's converge to the limit 'L'.
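As a numerical rendering of this argument, here is a sketch using the earlier sequence 'n' over 'n plus 1', whose least upper bound is 'L' equals 1 (the particular epsilon is an arbitrary choice of mine):

```python
# a_n = n/(n+1) is monotonic non-decreasing and bounded above by L = 1.
# The proof says: for any epsilon > 0 there is a capital N so that every
# term from a_N on lies in the interval (L - epsilon, L].
L = 1.0
epsilon = 1e-3

def a(n):
    return n / (n + 1)

# Since L - epsilon is not an upper bound, some a_N gets beyond it.
N = next(n for n in range(1, 10**6) if a(n) > L - epsilon)

# Because the sequence is non-decreasing and L is an upper bound,
# every later term stays trapped in the band (L - epsilon, L].
assert all(L - epsilon < a(n) <= L for n in range(N, N + 10_000))
```

The same check works for any smaller epsilon; only the capital 'N' moves further out.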
Now I went over this rather quickly simply because this is done in the textbook. And all I'm trying to do with this lecture is to give you a quick overview of what's going on. Well, you see, we're now roughly 15 minutes into our lecture and we haven't come to the main topic yet.
My claim is that the main topic works very, very smoothly once we understand these preliminaries. The main topic, you see, is called positive series. And the definition of a positive series is just as the name implies. If each term in the series is at least as great as 0-- in other words, if each term is non-negative-- then the series is called positive.
Now why are positive series important? And why do they tie in with our previous discussion? Well, let's answer the last question. The reason they tie in with our previous discussion is that we have already seen that by the sum of a series, we mean the limit of the sequence of partial sums.
To go from one partial sum to the next, you add on the next term in the series. If each of the 'a sub n's is positive, or at least non-negative, notice then that the sequence of partial sums is monotonic non-decreasing.
Why? Because to go from the n-th partial sum to the 'n plus first' partial sum, you add on 'a sub 'n plus 1''. And since 'a sub 'n plus 1'' is at least as big as 0, it means that the 'n plus first' partial sum can be no smaller than the n-th partial sum.
In other words, therefore, if the series summation 'n' goes from 1 to infinity, 'a sub n' is a positive series, it must either diverge to infinity, or else it converges to the limit 'L', where 'L' is the least upper bound for the sequence of partial sums. OK?
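In Python terms, the observation looks like this (the geometric series here is my stand-in example of a positive series):

```python
from itertools import accumulate

# For a positive series, the partial sums form a monotonic
# non-decreasing sequence, since each new partial sum adds on a term
# that is at least 0.
terms = [1 / 2**n for n in range(1, 30)]       # a sample positive series
partial_sums = list(accumulate(terms))
assert all(s <= t for s, t in zip(partial_sums, partial_sums[1:]))

# Here the partial sums are also bounded above (by 1), so by the
# monotone result the series converges, and its sum is the least upper
# bound of the partial sums.
assert all(s < 1 for s in partial_sums)
```

Replace the terms by those of the counting sequence and the partial sums still rise monotonically, but without bound: the first case of the dichotomy.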
Now qualitatively, that's the end of the story. In other words, we now know what? A positive series either diverges to infinity, or else it converges to a sum, a limit. And that limit is what? The least upper bound for the sequence of partial sums. The problem is that quantitatively, we would like to have some criteria for determining whether a positive series falls into the first category or the second category.
Notice, how can we tell whether the series converges or whether it diverges? And you see, the reading material in the text for this assignment focuses attention on three major tests. And these are the ones I'd like to go over with you once lightly, so to speak.
The first test is called the comparison test. And it almost sounds self-evident. I just want to outline how these proofs go. Because I think that once you hear these things spoken, as you read the material, the formal proofs will fit into a pattern much more nicely than if you haven't heard the stuff said out loud, you see.
The idea is this. Let's suppose that we know that summation 'n' goes from 1 to infinity, 'C sub n', is a convergent positive series. In other words, all of these are positive, and the series converges. Suppose I now have another sequence of numbers, 'u sub n', where each 'u sub n' is positive-- see, it's at least as big as 0-- but can be no bigger than 'C sub n' for each 'n'.
Then the statement is that the series formed by adding up the 'u sub n's must also converge. Notice what you're saying is, here's a bunch of positive terms that can't get too large. And these terms in magnitude are less than or equal to these. Therefore, what you're saying is that this sum can't get too large either.
And if you want to verbalize that so that it becomes more formal, the idea is this. Let 'T sub n' denote the n-th partial sums of the series involving the 'C sub n's. In other words, let 'T sub n' be 'C sub 1' plus et cetera, up to 'C sub n'.
And let 'S sub n' be the n-th term in the sequence of partial sums of the series involving the 'u's. In other words, let 'S sub n' be 'u1' plus et cetera up to 'u sub n'. Now the idea is-- lookit. We know that each of the 'u's is no greater than the corresponding 'C'. Consequently, the sum of the 'u's must be no greater than the sum of the 'C's.
In other words, the n-th partial sum, 'S sub n', is less than or equal to the partial sum 'T sub n' for each 'n'. Now, by our previous result, knowing that the series summation 'C sub n' converges, it converges to the least upper bound of its partial sums. In other words, the limit of 'T sub n' as 'n' approaches infinity is some number 'T', where 'T' is the least upper bound of the sequence of partial sums of the series.
Well in particular, then, since each 'S sub n' is less than or equal to 'T sub n', 'S sub n' itself certainly cannot exceed the least upper bound, namely 'T'. In other words, 'S sub n' can be no bigger than 'T' for each 'n'.
Well, what does this mean? It means, then, that the sequence 'S sub n' itself is bounded. Well, it's bounded. It's a monotonic non-decreasing sequence. Therefore, its limit exists. Not only does it exist, but it's the least upper bound of the sequence of partial sums.
In other words, this proof is given in the text. All I want you to see is that step by step, what this proof really does is compare magnitudes of terms. In other words, if one batch of terms can't get too large, and term by term another batch of terms is less than or equal to these, then that second sum can't get too large either. And this is just a formalization of that.
There are a few notes that we should make first of all. Namely, the condition that 'u sub n' be between 0 and 'C sub n' for all 'n' can be weakened to cover the case where this is true only beyond a certain point.
Look, let me show you what I mean by this. Let's suppose I look at the series 1 plus 1/2 plus 1/3-- let's do a different one. 1 plus 1/2 plus 1/4 plus 1/8 plus 1/16, et cetera. In other words, the geometric series whose common ratio is 1/2. This series, we know, converges.
Now what the comparison test says is, suppose you have something like this-- 1 plus 1/3 plus 1/5 plus 1/9 plus 1/17, et cetera. See, notice that each of these terms is less than the corresponding term here. Consequently, since these terms add up to a finite amount, these terms here must also add up to a finite amount.
But suppose for the sake of argument I decide to replace the first term here by 10 to the sixth. I'll make this a million. Now notice that this sum is going to become much larger. But the point is when I change this from a 1 to 10 to the sixth, even though I made the sum larger, I didn't change the finiteness of it.
In other words, if I replace the first four terms here by fantastically large numbers and then keep the rest of the series intact, sure, I've made the sum very, very large. But I've only changed it by a finite amount, which will, in effect, not change the convergence.
That's all I was saying over here, that the comparison test really hinges on what? Beyond a certain term, you could stop making the comparison.
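The two series in this example can be checked numerically; the 50-term cutoff is an arbitrary choice of mine:

```python
# c_n = 1/2**n gives the convergent geometric series 1 + 1/2 + 1/4 + ...
# u_n = 1/(2**n + 1) gives 1/2, 1/3, 1/5, 1/9, 1/17, ... -- each term
# positive and no bigger than the corresponding c_n.
c = [1 / 2**n for n in range(50)]
u = [1 / (2**n + 1) for n in range(50)]
assert all(0 < un <= cn for un, cn in zip(u, c))

# So the partial sums of the u's are trapped below those of the c's,
# which approach 2 -- the u-series must converge as well.
assert sum(u) < sum(c) < 2.0
```

Bumping the first few u-terms up to huge numbers raises the sum by a finite amount but leaves the comparison on the tail, and hence the convergence, untouched.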
And the second observation is the converse to what we're talking about. Namely, if we know that 'u sub n' is at least as big as 'd sub n' for each 'n', where the series summation 'n' goes from 1 to infinity, 'd sub n' is a positive divergent series, then this series, summation 'u sub n', must also diverge since its convergence would imply the convergence of this.
In other words, notice that by the comparison test, if summation 'u sub n' converged, the 'd sub n's, being less than or equal to the 'u sub n's, would mean that summation 'd sub n' would have to converge also. At any rate, these are the two portions of the comparison test. And this is what goes into the comparison test.
Now the interesting thing, or the drawback to the comparison test, is simply this-- that 99 times out of 100, if you can find a series to compare a given series with, you probably would have known whether the given series converged or diverged in the first place.
In other words, somehow or other, to find the right series to compare something with is a rather subtle thing if you didn't already know the right answer to the problem. Well, at any rate, what I'm trying to drive at is that one of the main uses of the comparison test is in proving a more interesting test-- a test that's far less intuitive-- called the 'ratio test'.
The ratio test says the following. Let's suppose again you're given a positive series. What you do now is form a sequence whereby each term in the sequence is the ratio between two consecutive terms in the series. In other words, what I do is I form the sequence by taking the second term divided by the first term, the third term divided by the second term, the fourth term divided by the third term. Now let's call that general term 'u sub n'.
If this seems a little bit abstract, let's look at a more tangible example. Suppose I take the series summation, 'n' goes from 1 to infinity, '10 to the n' over 'n factorial'. Notice that in this case, the n-th term is '10 to the n' over 'n factorial'. The 'n plus first' term is '10 to the 'n plus 1'' over ''n plus 1' factorial'. So 'u sub n' is the ratio of the 'n plus first' term to the n-th term.
In other words, '10 to the 'n plus 1'' over ''n plus 1' factorial' divided by '10 to the n' over 'n factorial'. '10 to the 'n plus 1'' divided by '10 to the n' is simply 10. And ''n plus 1' factorial' divided by 'n factorial' is simply 'n plus 1'. In other words, observe the structure of the factorials that you get from the n-th to the 'n plus first' simply by multiplying by 'n plus 1'. Again, the computational details will be left for the exercises.
At any rate, then, in this particular case, 'u sub n' would be '10 over 'n plus 1''. Now here's what the ratio test says. Assuming that the limit of 'u sub n' as 'n' approaches infinity exists, call it 'rho'. Then the series converges if rho is less than 1 and diverges if rho is greater than 1. And the test fails if rho equals 1.
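The computation for this example can be checked directly. The closed form 'e to the tenth' minus 1 for the sum comes from the standard exponential series and is used here only as a numerical check:

```python
from math import e, factorial

def a(n):
    # n-th term of the series: 10**n / n!
    return 10**n / factorial(n)

# The ratio a_{n+1}/a_n collapses to 10/(n+1), so rho = lim u_n = 0 < 1.
for n in range(1, 40):
    assert abs(a(n + 1) / a(n) - 10 / (n + 1)) < 1e-12

# Since rho < 1, the series converges; its partial sums settle down
# (to e**10 - 1, by the exponential series).
partial = sum(a(n) for n in range(1, 80))
assert abs(partial - (e**10 - 1)) < 1e-6
```

Notice the terms first grow (while 10/(n plus 1) exceeds 1) and only shrink beyond n = 9; the ratio test cares about the limiting ratio, not the first few terms.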
In other words, if the terms get progressively smaller, so that the ratio and the limit stays less than 1, then the series converges. If on the other hand, the ratio in the limit exceeds 1, that means the terms are getting big fast enough so that the series diverges.
Let me point out an important observation here. Notice the difference between the limit equaling 1 and each term of the sequence being less than 1. In other words, notice that even if 'u sub n' is less than 1 for every 'n', rho may still equal 1.
For example, look at the terms 'n' over 'n plus 1''. For each 'n', 'n' over 'n plus 1' is less than 1. Yet the limit as 'n' approaches infinity is exactly equal to 1.
Now again, the formal proof of this is given in the book. But I thought if I just take a few minutes to show you geometrically what's happening here, you'll get a better picture to understand what's happening in the text. Let's prove this in the case that rho is less than 1.
Pictorially, if rho is less than 1, that means there's a space between rho and 1. That means I can choose an epsilon such that rho plus epsilon, which I'll call 'r', is a positive number, but still less than 1.
Now, by definition of rho being the limit of the sequence 'a sub 'n plus 1'' over 'a sub n', that means I can find a capital 'N' for this given epsilon, such that whenever 'n' is greater than capital 'N', 'a sub 'n plus 1'' over 'a sub n' is less than rho plus epsilon. In other words, it is less than 'r', where 'r' is some fixed number now less than 1.
Now let me apply this to successive values of 'n'. In other words, looking at this thing here and taking 'n' to be capital 'N', I have that 'a sub capital 'N plus 1'' over 'a sub capital N' is less than 'r'.
In other words, 'a sub capital 'N plus 1'' is less than 'r' times 'a sub capital N'. Similarly, 'a sub capital 'N plus 2'' over 'a sub capital 'N plus 1'' is also less than 'r'. In other words, 'a sub capital 'N plus 2'' is less than 'r' times 'a sub capital 'N plus 1''. But 'a sub capital 'N plus 1'' in turn is less than 'r' times 'a sub capital N'. Putting this in here, 'a sub capital 'N plus 2'' is less than 'r squared' times 'a sub capital N'.
At any rate, if I now sum these inequalities, you see, what I get is what? The sum as we go from capital 'N plus 1' to infinity-- in other words, the tail end of this particular series-- is less than what?
Well, I can factor out the 'a sub capital N' over here. And what's left inside is what? 'r' plus 'r squared' plus 'r cubed', et cetera. But this particular series is a convergent geometric series. In other words, the tail of our series must converge, because the geometric series converges and dominates it term by term.
In other words, notice that the proof of the ratio test hinges on knowing two things, the comparison test and the convergence of a geometric series. Again, the reason I go through this as rapidly as I do is that every detail is done magnificently in the textbook. All I want you to see here is the overview of how these tests come about.
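Numerically, with the same series '10 to the n' over 'n factorial' and a choice of r = 1/2 (both the cutoff N and the r are my choices for illustration):

```python
from math import factorial

def a(n):
    return 10**n / factorial(n)

# Beyond N = 20 the ratio a_{n+1}/a_n = 10/(n+1) stays below r = 1/2,
# so each step shrinks the term by at least a factor of r.
N, r = 20, 0.5
assert all(a(n + 1) / a(n) < r for n in range(N, N + 200))

# Hence the tail beyond a_N is dominated term by term by the geometric
# series a_N * (r + r**2 + ...) = a_N * r / (1 - r), which is finite.
tail = sum(a(n) for n in range(N + 1, N + 200))
assert tail < a(N) * r / (1 - r)
```

This is exactly the comparison-test step inside the ratio-test proof: the tail is compared against a convergent geometric series.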
Finally, we have another powerful test called the 'integral test'. And the integral test essentially ties positive series to improper integrals. By the way, I have presented the material, so to speak, in the order of appearance in the textbook.
You see, the comparison test, ratio test, and integral test are given in that order in the text. And so, I kept the same order here. However, it's interesting to point out that the integral test, in a way, is a companion of the comparison test.
And let me show you first of all what the integral test says. I've written out the formal statement here, and I'll show you pictorially what this means in a moment. It says, suppose there is a decreasing continuous function, 'f of x'. Notice the word continuous in here. That guarantees that we can integrate 'f of x'. And suppose that 'f' evaluated at each integer value of 'x', say 'x' equals 'n', is 'u sub n', where 'u sub n' is the n-th term of the positive series 'u1' plus 'u2', et cetera.
Then, what the integral test says is that the series summation 'n' goes from 1 to infinity 'u sub n', and the integral 1 to infinity, ''f of x' dx', either converge together or diverge together.
In other words, we can test a particular series for convergence by knowing whether a particular improper integral converges. And to show you what this thing means pictorially, simply observe this. See, what we're saying is, suppose that when you plot the terms of the series-- so, 'u1', 'u2', 'u3', 'u4'-- these happen to be the values, at the integers, of a curve, 'y' equals 'f of x', which not only passes through these points, but is a continuous decreasing function, as happens here.
Then the statement is that the sum of these lengths converges if and only if the area under the curve is finite. And the proof goes something like this. It's a rather ingenious type of thing.
You see, notice that if this height is 'u sub 1', and the base of the rectangle is 1, notice that numerically, the area of the rectangle equals the height-- and this is quite general. If the base of a rectangle is 1, the area numerically equals the height, because the area is the base times the height. If the base is 1, the area equals the height.
So the idea is simply this. Lookit. Suppose, for example, that we look at this diagram over here. Notice that in this diagram, if we stop at n, the area under this curve is the integral from 1 to 'n', or 1 to 'n plus 1', because of how these lines are drawn.
See, notice that the first height stops at the number 2. The second base stops at number 3, et cetera. The idea is this, though, that the area under the curve in general, from 1 to 'n plus 1', is integral from 1 to 'n plus 1', ''f of x' dx'.
On the other hand, the area of the rectangles is what? 'u1' plus 'u2' plus 'u3', up to 'u sub n'.
In other words, for any given 'n', 'u1' plus et cetera up to 'u sub n' is greater than or equal to this particular integral. Consequently, taking the limit as 'n' goes to infinity, we get what? The summation of the 'u sub n's is greater than or equal to the integral from 1 to infinity, ''f of x' dx'.
Consequently, if this integral diverges, meaning that it gets very large, the fact that the sum can be no less than the integral means that the sum too must diverge. Correspondingly, if we now do the same problem, but draw the thing slightly differently, notice that now, in this particular picture, the area under the curve is the integral from 1 to 'n', ''f of x' dx'.
On the other hand, the area of the rectangles is what? It's 'u2' plus 'u3' up to 'u sub n'. But now, you see, the rectangles are inscribed under the curve. Consequently, 'u2' plus et cetera, up to 'u sub n', is less than this integral. Therefore, if I add 'u1' onto both sides, the sum 'u1', et cetera, up to 'u sub n', is less than 'u1' plus the integral from 1 to 'n', ''f of x' dx'.
If I now take the limit as 'n' goes to infinity, you see, this becomes the summation, 'n' goes from 1 to infinity, 'u sub n'. And this becomes the integral from 1 to infinity, ''f of x' dx'. And therefore, if this now converges, the sum on the right is finite.
Since the sum on the left cannot exceed the sum on the right, the sum on the left must also be finite. Consequently, the convergence of the integral implies the convergence of the series. Again, I apologize for doing this so rapidly. All I wanted you to do was to get an idea of what's happening here. Because as I say, the book is magnificent in this section. The proofs there are quite self-contained.
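To make the two-sided sandwich concrete, here is a check with the sample function 'f of x' equals '1 over x squared' (my choice of example; its antiderivative gives the integral in closed form):

```python
# Integral test sandwich for f(x) = 1/x**2, where u_n = 1/n**2.
# Circumscribed rectangles:  integral from 1 to n+1  <=  u_1 + ... + u_n
# Inscribed rectangles:      u_1 + ... + u_n  <=  u_1 + integral from 1 to n
n = 10_000
s = sum(1 / k**2 for k in range(1, n + 1))

def integral(t):
    # integral of 1/x**2 from 1 to t is 1 - 1/t  (antiderivative -1/x)
    return 1 - 1 / t

assert integral(n + 1) <= s <= 1 + integral(n)
# The integral stays below 1 as t -> infinity, so the partial sums are
# bounded and the series sum of 1/n**2 converges.
```

Running the same sandwich with f(x) = 1/x instead, the integral is log t, which grows without bound, so the harmonic series diverges by the other half of the test.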
At any rate, this gives us three rather powerful tests, which I will drill you on in the exercises for testing convergence of positive series. What we're going to do next time is to come to grips with a much more serious problem. And that is, what do you do if the series isn't positive? But that's another story. And so until next time, goodbye.
Funding for the publication of this video was provided by the Gabriella and Paul Rosenbaum Foundation. Help OCW continue to provide free and open access to MIT courses by making a donation at ocw.mit.edu/donate.