NARRATOR: The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu.

PROFESSOR: So this is the second 18.086 lecture, and it will be my second and last lecture on solving ordinary differential equations -- u prime equal f of u and t. Last time we got Euler's method -- ordinary, forward Euler -- which is this method for the particular equation. I'm using my model problem, as always: u prime equal a*u. That's the model. And I'm using small u for the true solution and capital U for the approximation, by whatever method we use; right now that method is forward Euler.

We looked at the idea of stability, but now let me follow up right away with the point of stability. Where does stability come in? I want to do that for this simplest of all methods, because you'll see how we get to an error equation, and we'll get to a similar error equation for much better methods. The accuracy of Euler is minimal, but the question of how stability enters, and how we show that the method converges, will be the same -- also in partial differential equations -- so this is the place to see it.

I'll start with that, and then after that topic will come better methods than Euler. The first step will be, of course, second-order methods, because Euler and backward Euler are only first order. And we'll see -- actually, that will be the point -- not only when they converge and why they converge, but also how fast they converge. What's the error? This estimate of the error is something that every code is going to use internally, somehow, to control the step size delta t.

So I'm just going to go with the model problem, u prime equal a*u, and the Euler method. Here is the evaluation -- a*u is f there, so it's really delta t multiplying f, which is just a*u in this easy case.
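A minimal sketch, in Python, of the forward Euler loop being described for the model problem u' = a*u (the values of a, the step size, and the final time are illustrative choices, not from the lecture):

```python
import numpy as np

def forward_euler(a, u0, dt, T):
    """Forward Euler for the model problem u' = a*u:
    U_{n+1} = U_n + dt * a * U_n = (1 + a*dt) * U_n."""
    steps = int(round(T / dt))
    U = np.empty(steps + 1)
    U[0] = u0
    for n in range(steps):
        U[n + 1] = U[n] + dt * a * U[n]   # growth factor 1 + a*dt
    return U

# Example: a = -1, so the true solution u0*exp(-t) decays.
U = forward_euler(a=-1.0, u0=1.0, dt=0.1, T=1.0)
print(U[-1], np.exp(-1.0))   # approximate vs. true value at T = 1
```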
Here's the true solution, and here is the key point: when I plug the true solution into the difference equation, it of course doesn't satisfy it exactly. There's some error term -- this I would call the discretization error, from making the problem discrete. This is, you could say, the error at a single time step -- the new error that's introduced at time step n.

The error, now, is going to be e equal u, the true one, minus the approximate U, so I just subtract to get an equation for the error at time step n plus 1. That subtraction gives (1 plus a delta t) times e_n, plus this inhomogeneous term, you could say -- the new error. So the question is, first, what size is that local error? Then the key question is: when I combine all these local errors, what's the global error?

First, the local error. You can see what this is, approximately, because the true solution would grow by e to the a delta t over one step. It would multiply by e to the a delta t, but instead of multiplying by that exponential, we're only multiplying by the first two terms in its series, 1 plus a delta t. So the error here is going to be something like one half delta t squared -- that's the key point, where we see the local error -- times a squared u_n. That a squared u_n would really be the second derivative, so I'll put it in as a second derivative, because that's what it would look like in a nonlinear case. I'm working the linear case for simplicity, but the point is that this is the Euler local error.

That's so typical that you really want to look at that local error and see what it looks like. There is some constant, and that has a certain effect on the quality of the method. There's a power of delta t, and the question of what that exponent is has a big effect. That exponent is the order plus 1, so for me, Euler's method has accuracy p equal 1.
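Written out, the error recursion and the one-step discretization error being described here are (a standard reconstruction of the board work, for the model problem):

```latex
% Forward Euler applied to u' = a*u, with e_n = u(t_n) - U_n:
\[
  e_{n+1} = (1 + a\,\Delta t)\, e_n + \mathrm{DE}_n,
  \qquad
  \mathrm{DE}_n = \bigl(e^{a\Delta t} - (1 + a\Delta t)\bigr)\, u(t_n)
  \approx \tfrac{1}{2}\,\Delta t^2\, a^2 u(t_n)
  = \tfrac{1}{2}\,\Delta t^2\, u''(t_n).
\]
```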
Over a single delta t step, the error is delta t squared, but we'll see that when I take 1 over delta t steps to get somewhere -- to get to some fixed time like 1 -- then I have 1 over delta t of these things, and the 2 comes back down to the 1, which is what I expect for Euler. And finally, this second-derivative term is out of our control, basically. That's coming from the differential equation. It tells us how hard or easy the equation is. If u double prime is really big, we're going to expect to have to reduce delta t to keep this error under control. So that's a typical term: a constant, delta t to the power p plus 1, and the (p plus 1)st derivative of the solution -- here the second derivative, matching that 2.

So now, if I think of that local error as the inhomogeneous term, the new source term, I just want to estimate e_n. I've done the local part, and now I'm interested in putting it all together. How do I look at e at some definite time n? I'm asking, really, for the solution of this error equation -- a totally typical error equation.

I think the way to look at it in words is: at each step, some new error is committed, and what happens to that error at later steps? At later steps, the error committed then gets multiplied by 1 plus a delta t at the next step, and by 1 plus a delta t again at the following step. So after n steps, we would have (1 plus a delta t) to the n-th power times the error that we made at the first step. You see what's happening here? Whatever error we commit grows or decays according to this growth factor. Then of course we made an error at step two, and we made an error at step k, and the error committed at step k will have n minus k steps still to grow.
So this is the discretization error DE at step k, multiplied by its growth factor, and all the way through -- I guess it would end with DE_N. DE_N would be the last term, the one that hasn't yet started to grow or decay. Okay, so that's our formula. It looks a little messy, but it shows everything. It shows the crucial importance of stability and how it enters.

Stability controls the growth. There is the error that we made way at the beginning, and this is the growth factor that gets applied n times. You see that if 1 plus a delta t has magnitude bigger than 1, what's going to happen? That will explode, and the computer will very quickly show totally out-of-bounds output. But if it's stable -- if this thing is less than or equal to 1 -- then that stability means the errors are not growing, and we get the bound that we want. So let me put it this way: if we have this stability, magnitude of 1 plus a delta t smaller than or equal to 1, can you see what the bound looks like?

We've got N terms, right? To get to step N, we've made N errors -- capital N errors -- and each growth factor contributes at most a 1. That's the critical role of stability. And each error is of this order, so I'm getting something like N of these, each about one half delta t squared times, let's say, a maximum value of the second derivative -- I'll use max of u double prime. That's a typical, good, satisfactory error estimate.

So what's up here? N times delta t is the time that we reached, right? We took N steps, delta t each step. So this is the same as -- and it's going to fit on this board -- T over 2, times one delta t, times the maximum of u double prime. N delta t is some time, capital T. We have the half; we still have one delta t -- that's the key -- and we have this maximum of u double prime, which is fixed. Anyway, we have first-order convergence. The error after N steps goes down like delta t. If I cut the time step in half, I cut the error in half. So that's not a great rate of convergence -- it's, like, the minimal rate of convergence, just one power of delta t -- and that's why we call Euler a first-order method.
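A short sketch checking that first-order claim numerically: halving delta t should roughly halve the error at the fixed time T. The test values here are illustrative.

```python
import numpy as np

def euler_error(a, u0, dt, T):
    """Error |u(T) - U_N| of forward Euler on u' = a*u."""
    N = int(round(T / dt))
    U = u0
    for _ in range(N):
        U = (1 + a * dt) * U
    return abs(u0 * np.exp(a * T) - U)

a, u0, T = -1.0, 1.0, 1.0
for dt in [0.1, 0.05, 0.025, 0.0125]:
    print(dt, euler_error(a, u0, dt, T))   # errors shrink roughly in proportion to dt
```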
I hope you've seen that this expression for the solution of the error equation is messy but meaningful. It's a little bit of pure algebra that shows how stability is used, where it enters.

So now I'm ready, if you are, to move to second-order methods. Now we're actually going to get methods that people would really use, and all of them are, I think, worth looking at. Can I mention that the notes on the web -- you may have spotted chapter five, section one, for example -- cover this material. After the lecture I always think back on what I said and improve those notes, so possibly later this afternoon a revised, improved version of this material will go up. And then next week starts what I think of as the real thing: the wave equation and the heat equation -- partial differential equations. But it's worth it -- here we saw how stability works; now we're going to see how accuracy can be increased, and in different ways.

So what do I notice about the trapezoidal method? Sometimes it's called Crank-Nicolson; in PDEs, the same idea was proposed by Crank and Nicolson. So what's their idea? Or what's the trapezoidal idea in the first place? Basically, we've recentered the method -- at what time position? You can go ahead. Instead of Euler, which started at time t_n, or backward Euler, which was focused on the new time t_(n+1), this combination is centered at time t_n plus a half delta t, right? Somehow we've symmetrized everything around the half point, and the point is that, centered around the midpoint, the accuracy jumps up to 2.

Let me try, always, u prime equal a*u, so f is a*u. Then what does this become? Can I, as you would have to do in coding it, separate the implicit part? This is implicit, of course.
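The formula on the board does not appear in the transcript; presumably it is the standard trapezoidal rule, which averages the slopes at the two ends of the step:

```latex
\[
  \frac{U_{n+1} - U_n}{\Delta t}
  = \tfrac{1}{2}\bigl(f(U_{n+1}, t_{n+1}) + f(U_n, t_n)\bigr),
  \qquad\text{i.e.}\qquad
  U_{n+1} = U_n + \tfrac{\Delta t}{2}\,(f_{n+1} + f_n).
\]
```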
f_(n+1) stands for f of the new unknown U at the new time, so it's implicit. So let me bring that over to the left side, because it's part of the unknown. I'll multiply through by delta t and I'll take f to be a*u -- I just want to rewrite that for our model problem. So it will be (1 minus a delta t over 2) times U_(n+1), when I bring that over. Is that what I get when I multiply through by delta t and bring the implicit part over? And now I bring the explicit part to the other side, so it's (1 plus a delta t over 2) times U_n. That's what we actually solve at every step.

First of all, I see that it's pretty well centered. I see that something has to be inverted -- that's because the method is implicit. If this were a matrix -- a system of equations, which is very typical; it could be a large system with a matrix, capital A -- then U would be a vector and we would be solving this system: matrix times vector equals a known right-hand side from time n. But here it's a scalar, so it's just a division.

So what does stability depend on? What does stability require? At every step, we don't want to grow. When we take a step, (1 plus a delta t over 2) divided by (1 minus a delta t over 2) is the growth factor -- the decay factor -- that multiplies U_n to give U_(n+1), so the stability question is: is this smaller than or equal to 1 in magnitude? That's stability for this trapezoidal method.

What's the story on that? Suppose a is very negative. That's what killed forward Euler, right? That's where we look for instability. If a delta t becomes too negative, we lost stability in forward Euler, and we'll lose stability in other, better methods too. But here, if a delta t is negative, that denominator is good -- it's bigger than the numerator, right? If a delta t is less than 0 -- if it's very much less than 0 -- then this ratio approaches maybe minus 1, but its magnitude stays below 1. So this method is A-stable.
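A minimal sketch of the trapezoidal (Crank-Nicolson) step for the scalar model problem, where the "implicit solve" is just the division mentioned above (the stiff value of a and the step size are illustrative):

```python
def trapezoidal(a, u0, dt, steps):
    """Trapezoidal rule for u' = a*u:
    (1 - a*dt/2) U_{n+1} = (1 + a*dt/2) U_n."""
    growth = (1 + a * dt / 2) / (1 - a * dt / 2)   # |growth| <= 1 whenever Re(a) <= 0
    U = [u0]
    for _ in range(steps):
        U.append(growth * U[-1])
    return U

# Even a very large step stays bounded (A-stability), though it is not accurate:
print(trapezoidal(a=-100.0, u0=1.0, dt=1.0, steps=5))
```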
Absolutely stable -- stable for all a less than 0. We're fine. Delta t can be, in principle, as large as we like. So we might ask: why don't I just jump in one step to the time that I want to reach and not bother taking a lot of small steps? What's the catch in that little paradox? It would be stable, but what's no good? If I jumped in one giant step, this growth factor would be about minus 1 or something, so U would be about minus 1 times u_0, and of course that's not the answer. So why do we still need a small delta t? Because of the discretization error. This isn't the exact differential equation, so we don't have a problem with stability, but we still have to get the local errors down to whatever level we want.

Now, what are those local errors for this method? Let's see -- can we actually see them? You know what I'm thinking, right? I'm thinking that the local error is of order delta t cubed. That's a second-order method for me: delta t cubed at each step, 1 over delta t steps, so delta t squared overall, and that's second order. But do we see why this is? Instinctively we say: it's centered, it should be. But let me expand this to see. How close is e to the a delta t to the growth factor, (1 plus a delta t over 2) divided by (1 minus a delta t over 2)? I just want to see that, sure enough, this is second order.

Of course, I have a denominator here that I want to multiply out. So there's 1 plus a delta t over 2, the explicit part, and then everybody knows the other piece -- there are really only two series to know, the truth is. We study infinite series a lot, but the two that everybody needs to know are the exponential series, which is for the left side, and the geometric series, for the right side: 1 over (1 minus x) is 1 plus x plus x squared plus so on.
So that's how the denominator gets up into the numerator, where I can multiply by the other term and get 1 plus a delta t, which is good. And what's the coefficient of a squared delta t squared? Can you see what it is, if I do that multiplication? Where do I get a delta t squared? I get a 1/4 from here and another 1/4 from there, so that's 2/4, which is 1/2. So it's great, right? It got the next term correct in the exponential series. That's why it's one order higher: it got the right number there. But it won't get the right number at the next term.

I'd have to work out the cube -- the coefficient of (a delta t) cubed. This would give me 1/8, and this would give me another 1/8, I think, so I'm getting two eighths, which is 1/4, and that's wrong. What's right? For the exponential series it should be 1 over 3 factorial -- it should be 1/6. So I got 1/4, not 1/6, and the error is the difference between them, which is 1/12. So 1/12 would be the constant here. The local error DE would be, like, that missing 1/12, times delta t to the third power now, times the third derivative.

That's what would come out if I patiently went through the estimate. I wanted to see it because, in these weeks, we'll be creating difference methods for partial differential equations, and we'll want to know what's controlling the error. This is the kind of calculation -- the beginning of a Taylor series -- that answers that question. So the main point is: local error delta t cubed, stable for every a delta t. So our method would move from Euler to trapezoidal, and it would give us a final error, small e, of delta t to -- what power? Squared. Delta t squared. Good.
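A quick way to confirm that 1/12: expand the trapezoidal growth factor and compare it with the exponential series. This is a sketch using sympy, with z standing for a*delta t.

```python
import sympy as sp

z = sp.symbols('z')                         # z stands for a*dt
growth = (1 + z/2) / (1 - z/2)              # trapezoidal growth factor
print(sp.series(growth, z, 0, 4))           # 1 + z + z**2/2 + z**3/4 + O(z**4)
print(sp.series(sp.exp(z), z, 0, 4))        # 1 + z + z**2/2 + z**3/6 + O(z**4)
print(sp.series(sp.exp(z) - growth, z, 0, 4))   # leading term -z**3/12
```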
The same kind of reasoning applies to what comes next, but that takes care of trapezoidal, which is a pretty good method. In MATLAB, it would be one of the ode solvers with a small 't' in the name for trapezoidal -- ode23t, I believe. It's used quite a bit.

Of course, we're going to get higher than second order, but second order is often a good balance in computational mathematics. Newton's method for solving a nonlinear equation is second order, and it doesn't pay to go higher. You could beat Newton by inventing some third-order method for solving a nonlinear equation, but in the end, Newton would be the favorite. So second order is pretty important.

So let me point to the next idea. The trapezoidal method was implicit, because it involved f_(n+1), but Adams had another idea, which seems like a great idea. Instead of being implicit and using f_(n+1), he used the previous value, f_(n-1), which we already know. We've already, at that previous step, substituted U_(n-1) into f. That might have been time consuming, but we had to do it once, and we only have to do it once with Adams -- we're reusing something we already know. f_n is the one new value we have to compute, and we get the new U explicitly. The numbers 3/2 and minus 1/2 were chosen to make it second order, and the notes check, with the series, that it comes out to give the correct second term. Good.

Let me mention backward differences. Now we're using just one f, and actually the implicit one, so I'm making it implicit. So why am I interested in method number 3 at all? Because it's going to have better stability. By being implicit, and by choosing these numbers -- 3/2, minus 4/2, and 1/2 -- I've upped the accuracy to p equal 2, and I've maintained high stability by making it implicit. So this one competes with that one for importance, for use. You're really seeing three pretty good methods that can be written down.
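A minimal sketch of the two-step Adams-Bashforth method just described, with the 3/2 and minus 1/2 weights. The first step has to come from somewhere else; here one forward Euler step is used as an illustrative choice.

```python
def adams_bashforth2(f, u0, t0, dt, steps):
    """Explicit 2-step Adams-Bashforth:
    U_{n+1} = U_n + dt*(3/2 f_n - 1/2 f_{n-1})."""
    U, t = [u0], t0
    f_prev = f(u0, t)                 # f_0
    U.append(u0 + dt * f_prev)        # one Euler step to get U_1 started
    for n in range(1, steps):
        t = t0 + n * dt
        f_curr = f(U[-1], t)
        U.append(U[-1] + dt * (1.5 * f_curr - 0.5 * f_prev))
        f_prev = f_curr
    return U

# Model problem u' = -u, compared with exp(-1):
print(adams_bashforth2(lambda u, t: -u, u0=1.0, t0=0.0, dt=0.1, steps=10)[-1])
```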
So these backward-difference numbers -- 3/2, minus 4/2, 1/2 -- were chosen to get that extra accuracy, which we didn't get from backward Euler. When those numbers were 1 and minus 1, backward Euler was only first order. Now this goes up to second order, and you can guess that with every extra term we allow ourselves, we can choose the coefficients well so that the order goes up by one more.

Now, if you looked at that, you might say: why not combine the idea of backward differences -- more terms on the left-hand side -- with the Adams-Bashforth idea of more terms on the right-hand side? Suppose I come back to Adams-Bashforth and create, like, an 18.086 method. It would be third order, but it would still only go back one step. It would use some combination like this on the left side -- I'm not writing this method down, for a good reason, of course -- and something like this on the right side, and that gives us enough freedom, enough coefficients to choose -- two coefficients there, three there -- to get third-order accuracy. In other words, the natural idea is: use both U_(n-1) and f_(n-1) in the formula to get that extra accuracy.

So why isn't that the most famous method of all? Why isn't it even on the board? You can guess: it's violently unstable. So, sadly, for the most accurate methods, which use both an old value of U and an old value of f of U, the numbers that show up are bad news. It's a fact of life, and that's why we have to make a choice about how we go backwards: we go backwards with U, or Adams goes backwards with f, or we think of some way to hype up the accuracy even further within a one-step method.
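A minimal sketch of the second-order backward-difference formula (BDF2) with the 3/2, minus 4/2, 1/2 coefficients, specialized to the model problem u' = a*u, where the implicit solve is again just a division. The start-up value here comes from one backward Euler step, an illustrative choice, and the stiff value of a is also illustrative.

```python
def bdf2(a, u0, dt, steps):
    """BDF2 for u' = a*u:
    (3/2) U_{n+1} - 2 U_n + (1/2) U_{n-1} = dt * a * U_{n+1}."""
    U = [u0, u0 / (1 - a * dt)]        # start-up value from one backward Euler step
    for _ in range(1, steps):
        new = (2 * U[-1] - 0.5 * U[-2]) / (1.5 - a * dt)
        U.append(new)
    return U

# A very stiff a: the iterates stay well behaved even with dt = 0.1.
print(bdf2(a=-1000.0, u0=1.0, dt=0.1, steps=10))
```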
So the notes give a table -- the beginning of a table. People have figured out the formulas for any order p, so the notes give the start of a table for p equal 1, which is Euler; p equal 2, which is this; and p equal 3 and p equal 4, which are Adams-Bashforth going further back and backward differences going further back. Those tables show the constants -- the 1/12 or the 1/2 or whatever it is -- and they show the stability limit, and of course that's critical. The stability limit is much better for backward differences than it is for Adams-Bashforth.

Actually, I only learned recently that Adams-Bashforth, which we all teach -- which all books teach, I should say -- isn't that much used. Maybe the astronomers, going way back, might use it, or they might use backward differences, which are more stable. So backward differences have an important role, and then one-step methods will have an important role. Backward differences are implicit, so those are great for stiff equations -- you turn that way for stiff equations -- and for nonstiff equations, let me show you in a moment what the workhorse method is.

So I have a number 4 here, whose name I'd better put up. Method 4 has two names: Runge-Kutta. That's a sort of one compound step. So what's the point? The multistep idea looked pretty efficient, but there are two aspects that I didn't really mention, and on those two aspects Runge-Kutta wins by being a single, self-contained step.

So what are the drawbacks of a multistep method like this or like this? You have to store the old value -- no problem -- but at the beginning, at t equals 0, what's the problem? You don't know the old value, so you have to get started somehow with a separate method. You can't use this at t equals 0, because it's calling for a value at minus delta t and you haven't got it. So it takes a separate method to start. You can deal with that -- use Runge-Kutta, for example, to get that first step.
But then there's another problem with these backward-step methods, which again can be resolved -- so this is a live method. The other problem is: if you want to change delta t -- suppose you want to cut delta t in half -- then this is looking for a value that's half a delta t back in time, and you haven't got that either. But you've got values one delta t back and two delta t back, and some interpolation process will produce that half-step value. So what am I saying? I'm just saying that getting started takes special attention and changing delta t takes a little special attention, but it's all doable. So this method, which I gave a name to last time -- I think it was ode15s, with the 's' for stiff -- is much used for stiff equations.

So now let me not only write down Runge-Kutta's name but also the other topics. Multistep I've spoken about. And I'm going to write this down to remind myself to say one word about it: DAE. Can I say one word even now? Then I'll say more later. That's a differential algebraic equation. That's a situation which comes up in chemical engineering and many other places, in which, out of our n given equations, some are differential equations, but others are ordinary algebraic equations -- there's no d by dt in them. Nevertheless, we have to solve them, and there is a d by dt in the others. So this is a whole little world of DAEs, which you may never meet -- I have never personally met it. But it has to be dealt with, and there are codes; I'll just mention the code DASSL by Petzold.

Let me finish saying the one word. A model problem here would be some matrix M times u prime equal f of u and t -- a sort of implicit differential equation, because I'm not given u prime, I'm only given M u prime.
If M is nonsingular, I could multiply by M inverse and make it a totally normal system. But if M -- which may depend on u and t -- goes singular at some time t, then this system is degenerating and we have to think about what to do. The DASSL code would be one of the good ones that does. So that's my note to myself to say something about DAEs before completing this topic of ordinary differential equations.

Now for Runge-Kutta. Let me take p equal to 2 first. The famous Runge-Kutta is the one with fourth-order accuracy; that one takes more of the little internal compounded steps than p equal to 2 takes, but p equal to 2 makes the point. So this will be simplified Runge-Kutta, p equal to 2. It'll be U_(n+1) minus U_n -- you see, it's just one step -- but what goes on the right-hand side? Do I remember? The notes will tell me; since this is going to be kept forever, I'd better get it right.

It involves f_n and f of f, so, Runge-Kutta, here we go. It has one half of f_n -- the usual Euler slope. But then the other half is the same f evaluated at the result of a forward Euler step: computing f_n allowed me to take a forward Euler step, U_n plus delta t times f_n, and now I put that forward Euler value into f at -- what's the right time? That value is like a U_(n+1), but we didn't have to be implicit, so I presume it's evaluated at time n plus 1.

So I've written in one line what the code would write in two lines. The first line of code would be: compute f at the old U_n; then take an Euler step -- multiply by delta t and add to U_n. That gives something that's like U_(n+1), but it didn't cost us anything extra. So can you compare that with trapezoidal? The comparison is: trapezoidal has the same 1/2 f_n, but its other half is 1/2 f_(n+1) at the new step, involving the new unknown U_(n+1), whereas this one uses something like it -- something we already knew from Euler. So that's the natural idea.
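A minimal sketch of this simplified second-order Runge-Kutta step, often called Heun's method or the improved Euler method; the function name and the test values are illustrative.

```python
def rk2_step(f, U, t, dt):
    """One simplified Runge-Kutta (Heun / improved Euler) step:
    U_{n+1} = U_n + dt*( 1/2 f(U_n, t_n) + 1/2 f(U_n + dt*f(U_n, t_n), t_n + dt) )."""
    slope1 = f(U, t)                 # Euler slope at the old point
    U_euler = U + dt * slope1        # provisional forward Euler value
    slope2 = f(U_euler, t + dt)      # slope at the predicted new point
    return U + dt * 0.5 * (slope1 + slope2)

# Model problem u' = -u:
U, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    U = rk2_step(lambda u, s: -u, U, t, dt)
    t += dt
print(U)   # compare with exp(-1) ~ 0.3679
```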
All these ideas would actually occur to all of us if we started thinking about them for a few weeks, but I'm not planning to spend a few weeks on that, so I'm just jumping ahead to what people have thought of. So that's, you could say, two-step Euler: there are two evaluations of f within a single step.

So what's p equal 4? The famous Runge-Kutta is four of those. You take an Euler step -- that's stage one. Then you stick that into something at step n plus 1/2 -- that's a second evaluation of f -- and that gives you some output. You plug that in, and that will actually, if it's correctly chosen, be another, better approximation at n plus 1/2. You put that into the fourth one, which gives you something close to U_(n+1), and then you take the right weights. Here they were 1/2 and 1/2.

Anyway, you can do it. The algebra begins to get messy beyond p equal 4, but you can go beyond p equal 4. There are books on Runge-Kutta methods, because the algebra of these f of f of f's gets complicated. If there's any construction in math that looks easy, it's just taking f of f of f of f of a function, of a number. That seems so simple, but actually it's extremely hard to understand what happens. Maybe you know that Mandelbrot's fractal sets and all these problems of chaos come exactly that way, from repeated composition -- what you would call f of f of f -- and it's extremely hard to see what's happening.

So Runge-Kutta stops at 4, and then there is actually an implicit Runge-Kutta that people work on, but I don't dare begin to write that down, and I won't even write the details of the fourth-order one, which are in our notes and in every set of notes. But what's the point? First, we get the accuracy, so what's the other thing we have to ask about? Stability. So what does our stability calculation give for Runge-Kutta?
The thing that you have to work with actually turns out to be quite nice: it's just the series for e to the a delta t, chopped off at whatever power you're going to. So here is Runge-Kutta stability for p equal to 2. If I take the model problem, f equal a*u, and substitute it in, the growth factor will be 1 plus a delta t plus one half (a delta t) squared. Stop. That's what this method gives, for the model problem, as the growth factor that multiplies each U to get the next U. What p equal 4 would give is the same thing plus the next terms -- it's just the exponential series chopped off after the fourth-power term.

So of course you see immediately what the discretization error is: it's delta t to the fifth. It's the first missing term that Runge-Kutta didn't capture, multiplied by 1 over 5 factorial, so you know right away that the constant is 1 over 120, which is pretty good.

But the question is stability. So what do we want to know here? Let me call a delta t by a single letter, z, because that same number keeps coming up. The stability test, for p equal to 2, is to look at the magnitude of 1 plus z plus 1/2 z squared, and we want it to be less than or equal to 1.

Now, this is something I hope you try this weekend: somehow get MATLAB to plot, in the complex plane, the region where that holds. At some point -- some negative value of z -- that magnitude is going to hit 1, and after that it's going to be unstable. Of course there has to be a point where it's unstable: this is an explicit method. Runge-Kutta is explicit. So if the problem is too stiff, we're going to be killed, but if it's an ordinary problem, this is a winner.
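A sketch of the plot being suggested here (in Python rather than MATLAB): shade the region of the complex z = a*delta t plane where the chopped-off exponential series has magnitude at most 1, for p = 2 and p = 4. The plotting ranges are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

# Grid of complex z = a*dt values.
x = np.linspace(-4, 2, 600)
y = np.linspace(-4, 4, 600)
X, Y = np.meshgrid(x, y)
Z = X + 1j * Y

R2 = 1 + Z + Z**2 / 2                 # Runge-Kutta p = 2 growth factor
R4 = R2 + Z**3 / 6 + Z**4 / 24        # Runge-Kutta p = 4 growth factor

plt.contourf(X, Y, np.abs(R4) <= 1, levels=[0.5, 1.5], colors=['lightblue'])
plt.contour(X, Y, np.abs(R2), levels=[1.0], colors='red')   # boundary of the p = 2 region
plt.axhline(0, color='gray', lw=0.5)
plt.axvline(0, color='gray', lw=0.5)
plt.xlabel('Re(z)')
plt.ylabel('Im(z)')
plt.title('|R(z)| <= 1: RK2 boundary (red) and RK4 region (shaded)')
plt.show()
```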
So for p equal 4, this is the code ode45, which combines fourth-order Runge-Kutta with a higher-order formula to get an estimate for the error in each step. I could make some comment on predictor-corrector methods, but I'll leave that to the notes.

So what does that region look like? And then the other one: what does the region where 1 plus z plus 1/2 z squared plus 1/6 z cubed plus 1/24 z to the fourth has magnitude less than or equal to 1 look like? If those regions are small, the method loses. If those regions are bigger than what we get from Adams-Bashforth or backward differences, then the methods in there are good, successful. I've forgotten exactly what the picture looks like.

Why am I in the complex plane? Because the number a -- z is a delta t -- can be complex. How can it be complex? I'm not really thinking that our model equation u prime equal a*u will have a complex a, but our model system would be u prime equals a matrix times u, and then it's the eigenvalues of that matrix that take the place of little a. So we have n different little a's for a system, and the eigenvalues of a totally real matrix can be nonreal -- they can be complex numbers.

Let's just think about when that happens, and then -- what does the figure look like? For p equal 4, I sort of remember that the region even captures a little bit of the right half-plane. Why should I mess up the board with a sketch? It's a pretty good-sized region -- it's probably not shaped like my sketch at all -- but the point is, maybe it reaches out along the negative real axis to about minus e. I think it does, somewhere around there; I don't know if that's accurate. And it reaches up the imaginary axis to around 2.

So in this last second, I was just going to say -- and we'll see it plenty of times -- where do we get nice, real, big systems, stiff systems with complex eigenvalues, in real calculations? By real calculations, I really mean PDEs.
So in a partial differential equation -- this is just a throwaway comment that we'll deal with again -- a u_xx term or a u_yy term, typically multiplied by a diffusion coefficient -- those are diffusion terms -- produces symmetric things, symmetric negative definite. But convection terms, advection terms -- velocity terms, the first derivatives -- are antisymmetric, and those produce the complex eigenvalues that we have to deal with.

So, everybody wants to solve convection-diffusion. Convection has the first-derivative terms, diffusion has the second-derivative terms. The diffusion coefficient may be very small, and that's the hard case, when the convection is serious. Convection is pushing us up the imaginary axis, where diffusion is bringing us out to the left, and we're going to spend time thinking of good methods for that.

So my homework suggestion: would you like to figure out these stability regions and make better pictures? Get MATLAB to do it, and try any one of these methods on the model problem. Try a method or two. See whether the theoretical stability limit is serious. I mean, there's a limit on a delta t -- as I'm driving to MIT, I sometimes exceed the speed limit -- so can we exceed the stability limit, and is that OK or not with finite differences? I'll see you Monday. All right, thanks. Have a good weekend.