The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

QIQI WANG: So today I prepared not quite a whole lecture, but a little bit short of that. So I really expect you to ask questions on this material. This is supposed to be a lecture that helps you review the material we have already covered and prepares you for the midterm. So instead of me just going mechanically through the material, I want you to bring up what you think is most confusing, or what you'd like me to clarify again, and things like that. If you feel something is confusing, it's probably confusing for the whole class. So please raise it so that I can spend more time on it.

OK, so before I do that, I first want to finish up the finite volume scheme we have been working on for the last two lectures. We have been discussing finite volume schemes in one dimension.
But applying the same concept in two or even three dimensions is surprisingly straightforward. So let me first write out what we did in 1D. In finite volume schemes, we started out with the integral form of the differential equation: d/dt of the integral of a conserved quantity rho inside a control volume, omega, is equal to minus the flux out of the control volume. The minus sign is there because the time derivative is really given by the flux into the control volume, but our normal, n, by convention, usually points out of the control volume. So we have a minus sign to reverse the normal so that it points into the control volume: minus the surface integral of n dotted with the flux F(rho), ds. So this is the integral form of the conservation law, right? And I can write the same thing in 1D, which is the specific case of omega being just an interval. There, it is the flux at the left minus the flux at the right -- really, the flux evaluated at rho at the left and rho at the right. Right, so in 1D, what we did was define the cell average, rho bar of k, over control volume k.
That is, the integral of rho over control volume k divided by the size of the control volume. And basically, by plugging this definition into the integral form of the conservation law, what we get is: the time derivative of rho bar k equals 1 over the size of the control volume times the flux at the left side of the control volume minus the flux at the right side -- I'm just going to write F of k minus 1/2 minus F of k plus 1/2. That is what we use to denote the fluxes at the cell interfaces.

Now, let's go to 2D, or the multi-dimensional form of the conservation law. We're going to define rho bar of a control volume, k, as the same thing. It is really the integral over the k-th control volume of rho dx, divided by -- in 2D -- the area of that control volume.

All right, let me draw a typical mesh in two dimensions. So let's say this is a triangular mesh in 2D, and this is the k-th control volume. Right, for example, this is part of a mesh in two dimensions. And in the finite volume scheme, we are tracking the average of the solution inside that control volume. So this is my omega k. That's the control volume.
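The 1D update just described -- the time derivative of rho bar k equals the flux at k minus 1/2 minus the flux at k plus 1/2, over the cell size -- can be sketched in a few lines. This is not from the lecture: it is a minimal illustration for linear advection, f(rho) = c*rho, with first-order upwind interface fluxes and periodic boundaries, and all names are made up.

```python
import numpy as np

def fv_rhs_1d(rho_bar, dx, c=1.0):
    """Finite volume RHS: d(rho_bar_k)/dt = (F_{k-1/2} - F_{k+1/2}) / dx.

    Linear advection flux f(rho) = c*rho, first-order upwind
    (c > 0, so each interface takes the left cell's average),
    periodic boundaries.
    """
    F = c * np.roll(rho_bar, 1)        # F[k] is the flux at interface k-1/2
    return (F - np.roll(F, -1)) / dx   # (F_{k-1/2} - F_{k+1/2}) / dx

# One Forward Euler step on a smooth initial condition.
n = 100
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
rho = np.sin(2 * np.pi * x)
rho_new = rho + 0.5 * dx * fv_rhs_1d(rho, dx)   # dt = 0.5*dx, CFL 0.5

# Each interface flux appears once with + and once with -, so the
# total of the cell averages (the conserved quantity) is unchanged.
print(np.allclose(rho_new.sum(), rho.sum()))
```

Because every interface flux cancels in the sum, the scheme is conservative by construction, which is the whole point of working with the integral form.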
And the area of that control volume is my Ak, and rho bar of k is the cell average there. So what is the time derivative of that cell average value? Because the area of the control volume does not change, it is equal to 1 over the area times the time derivative of the volume integral, right? And the time derivative of the volume integral is given by the integral form of the conservation law, which you can get by applying the divergence theorem. We did that last lecture -- applying the divergence theorem to the differential form of the equation to get this integral form. And just by plugging in, we get minus 1 over Ak times the integral over the boundary of the control volume of the outward normal dotted with the flux, ds.

Now, let's look at what this is on the control volume. So we have the integral over the boundary of the volume. If the control volume is this triangle, what are the boundaries? We have [INAUDIBLE], right? Now, we can express that as minus 1 over Ak times the summation of ni dotted with Fi times the length of each of the sides, where i ranges over the sides of k.
So Sk is the set of sides -- the three sides of the triangle, all the boundaries of this triangle, k. And ni is the normal of each edge. So for example, these are the ni's, pointing in this direction. And Fi is the flux on these edges, which we again approximate by using the cell averages of the two neighboring cells. It's the same thing as we did in 1D. In 1D, we approximate the flux at the cell interface by looking at the value of the solution in the two neighboring cells. In 2D, it's the same thing: we approximate the flux at the cell interface -- the edge between two cells -- by looking at the cell averages on the two sides of that edge. If we have to do [INAUDIBLE], we do [INAUDIBLE]: we look at which direction the [INAUDIBLE] over this edge is -- is it from this side or this side? -- and choose the value on the corresponding side of the edge.

If we go to three dimensions, it's the same.
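As an aside (not from the lecture), here is a small sketch of the geometric quantities in that edge sum: the edge lengths and outward unit normals of one counterclockwise triangle. Since the boundary is closed, the length-weighted normals sum to zero, which is why a spatially uniform flux produces no net change in the cell average. The vertex coordinates are arbitrary.

```python
import numpy as np

# Vertices of one triangular control volume, in counterclockwise order.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.8]])

normals, lengths = [], []
for a, b in zip(verts, np.roll(verts, -1, axis=0)):
    edge = b - a
    length = np.linalg.norm(edge)
    # Rotating a CCW edge by -90 degrees gives the outward normal.
    normals.append(np.array([edge[1], -edge[0]]) / length)
    lengths.append(length)
normals, lengths = np.array(normals), np.array(lengths)

# Closed boundary: the length-weighted normals sum to zero, so a
# uniform flux F contributes nothing to sum_i (n_i . F) * l_i.
print(np.allclose((normals * lengths[:, None]).sum(axis=0), 0.0))
```

This identity is also a cheap sanity check on mesh bookkeeping code: if the weighted normals of any cell do not sum to zero, an edge orientation is wrong.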
Instead of edges, we have faces between control volumes. That is the case where we are really looking at interfaces -- a face in three dimensions instead of an edge in 2D or a [INAUDIBLE].

Now, we approximate this using a numerical flux that is a function of rho bar of k and rho bar of -- I'm just going to say -- the neighbor. And then we're done, right? We have approximated the time derivative. So the time derivative of the cell average in cell k is a function of the cell average in k and the cell averages in the neighbors of the k-th control volume. And the other point is that the areas, the normals, and the lengths of the edges are all known quantities from the mesh. So this is a numerical flux, a function of the cell averages. So essentially, we have turned a PDE into an ODE.

What is that ODE? That ODE is: d/dt of rho bar 1, rho bar 2, et cetera, up to rho bar of the last control volume, is equal to some joint function of all of these rho bars.
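For a concrete (if simplified) picture of "some joint function of all these rho bars": in the linear 1D case with upwind fluxes and periodic boundaries, the semi-discrete system is just d(rho bar)/dt = A rho bar for a sparse matrix A. A hypothetical sketch, not the lecture's notation:

```python
import numpy as np

def upwind_matrix(n, dx, c=1.0):
    """Semi-discrete finite volume system for 1D linear advection
    (c > 0), first-order upwind fluxes, periodic boundaries:
        d(rho_bar)/dt = A @ rho_bar
    """
    A = np.zeros((n, n))
    for k in range(n):
        A[k, k] = -c / dx           # flux out through the right face
        A[k, (k - 1) % n] = c / dx  # flux in through the left face
    return A

A = upwind_matrix(8, 1.0 / 8)
# Every column sums to zero: whatever leaves one cell enters another,
# so the ODE system inherits the conservation property of the PDE.
print(np.allclose(A.sum(axis=0), 0.0))
```

In 2D the same assembly runs over mesh edges instead of a 1D index, which is where the bookkeeping comes in, but the result is still one large ODE system.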
OK, so this is the function that connects the time derivative of one cell average to the value of the cell average of that control volume and of its neighboring control volumes.

Is it clear how we do the same thing in 2D? There is going to be a lot of bookkeeping, just because of the mesh: for every cell, we need to be able to find its neighboring cells, and find which edge is connected to which cell, and who is whose neighbor. But conceptually, this is exactly what we do.

OK, so we have studied finite difference and finite volume, and all these methods are intended to turn a partial differential equation into an enormous ordinary differential equation, right? And once we turn a PDE into an ODE, we can apply exactly what we have been doing with ODEs to study these PDEs -- to discretize, solve, and study the behavior of these PDEs.

OK, so this is going back to the review part. I'm going to go through the material we have covered and put particular emphasis on several of the points. The first point is order of accuracy.
So the order of accuracy of an ODE scheme is found by looking at the equation du/dt = f(u). And we discretize this -- we discretize the d/dt into a finite difference, using a finite difference operator, some kind of delta over delta t operating on the u. Now, if you know the analytical solution -- if you know what u is; for example, if f(u) is equal to lambda u, then you know what u is, right? -- and you plug this into the finite difference operator, the result is not going to be exactly equal to 0. Right? This is not going to be equal to 0.

OK, so for example, if you're looking at Forward Euler: u of k plus 1 minus u of k, divided by delta t, minus f of u k is not going to be equal to 0. If instead you plug in the numerical solution, u k, you're going to get 0. But if you plug in the real, analytical solution, you're not going to get 0.
And the order of that approximation error is going to be the local order of accuracy. So if this is equal to O of delta t to the k-th power, then k is the local order of accuracy.

Now, the usual confusion that happens here is when you do the truncation error analysis the other way -- if you do the truncation error analysis by writing down tau equal to u of k plus 1 minus the right-hand side of the update scheme, which is u k plus delta t times f of u k. Then tau, for example, for Forward Euler, is O of delta t squared. For Forward Euler, that is because k is equal to 1: delta t squared is actually delta t to the k plus 1 power. So why is there a plus 1 when I compute the local order of accuracy by figuring out the truncation error of the update scheme, while I get k if I look at the approximation of the time derivative [INAUDIBLE]? Yeah?

AUDIENCE: [INAUDIBLE]

QIQI WANG: Right. We still need a manipulation.
You can relate the finite difference approximation of the time derivative and the update formula. So how do you go back and forth from this one to this one -- from the approximation of the ODE to the update scheme? You have to multiply by delta t: you need to multiply this by delta t to get the update scheme. In other words, the truncation error of the update scheme is equal to the approximation error of the time derivative multiplied by delta t. Therefore, of course I get one more delta t in the truncation error.

So that is the reason we ask you to subtract 1 when we figure out the local order of accuracy. If you figure out the order of the truncation error, it's for the single-step update. You can go from the truncation error of the single-step update to the truncation error of the time derivative: you divide the whole thing by delta t. And dividing O of delta t to the k plus 1 by delta t, you get O of delta t to the k, right?

AUDIENCE: [INAUDIBLE]

QIQI WANG: Yeah?
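That one-extra-power relationship can be checked numerically. A small sketch (mine, not the lecture's): take du/dt = lambda*u, take a single Forward Euler step starting from the exact solution, and watch the one-step truncation error shrink like delta t squared -- halving delta t divides it by roughly 4, i.e., order k + 1 with local order k = 1.

```python
import numpy as np

lam = -1.0
u0 = 1.0

def one_step_tau(dt):
    """One-step truncation error of Forward Euler on du/dt = lam*u,
    starting from the exact solution: tau = u(dt) - (u0 + dt*lam*u0)."""
    exact = u0 * np.exp(lam * dt)
    euler = u0 + dt * lam * u0
    return abs(exact - euler)

# Each ratio should be close to 4, i.e. tau = O(dt^2), one power of
# dt more than the local order of accuracy (which is 1).
ratios = [one_step_tau(dt) / one_step_tau(dt / 2) for dt in (0.1, 0.05, 0.025)]
print(ratios)
```

Dividing the same numbers by delta t recovers the order-1 behavior of the time derivative approximation, which is the "subtract 1" rule in action.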
AUDIENCE: [INAUDIBLE]

QIQI WANG: Oh, yeah, sorry. Thank you for that. Let me use p for the order of accuracy. It was not on purpose.

AUDIENCE: [INAUDIBLE]

QIQI WANG: Yeah.

AUDIENCE: [INAUDIBLE]

QIQI WANG: Ah, sorry. This is k. Those are k's. Right, I confused myself.

AUDIENCE: [INAUDIBLE]

QIQI WANG: All right. Yeah, so the k is the time step number, and p is the order of accuracy. Yeah, thank you.

AUDIENCE: [INAUDIBLE]

QIQI WANG: Yeah.

AUDIENCE: [INAUDIBLE]

QIQI WANG: Any other questions? So that one-order difference between the local order of accuracy and the order of the single-step truncation error is, I think, a big source of mistakes in local order of accuracy analysis. So try to remember that.

OK, so one more thing in the PDE context: if we approximate a PDE by a set of ODEs and solve it using an ODE [INAUDIBLE], what do you think is going to be the order of accuracy we get at the end?
How much error are we making in that approximation? All right, what determines the accuracy if I approximate a PDE by an ODE and solve the ODE using, let's say, Forward Euler?

AUDIENCE: [INAUDIBLE]

QIQI WANG: The accuracy of the ODE scheme?

AUDIENCE: [INAUDIBLE]

QIQI WANG: And the accuracy of the spatial discretization -- actually, both. Actually, both. This happens even to some advanced graduate students: when they look at the order of accuracy of a PDE discretization, for example, it's quite usual for them to have a plot of the error versus the grid spacing. And often what they find is: as I refine the grid further, I no longer improve my accuracy. So what happens?

AUDIENCE: [INAUDIBLE]

QIQI WANG: Yes -- because then I'm only reducing my grid spacing. If I'm not also decreasing my time step size, if I'm not decreasing my delta t, then even though my spatial discretization is extremely accurate -- has no error -- I still have a truncation error that is proportional to delta t. So if I don't decrease my delta t, I'm not going to reduce my discretization error further.
So the ODE discretization scheme makes an error, and that error is actually sometimes overlooked when you do PDE analysis. So remember, there is always going to be an ODE component, even when you look at PDEs -- time-dependent PDEs.

OK, so this is the local order of accuracy. How does that relate to the global order of accuracy? It's a very simple answer, so please answer me: how does the local order of accuracy relate to the global order of accuracy? Is global one more than local?

AUDIENCE: They're the same.

QIQI WANG: Or are they the same? If the scheme is consistent, yes. OK: global is equal to local if the scheme is zero stable. That's the Dahlquist Equivalence Theorem. Yes. Right, so the local and global order of accuracy are the same. There is no plus one or minus one. The plus or minus one happens over here, when you figure out the order of the one-step truncation error: to get the local order of accuracy, you need to subtract one from the power of delta t.
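That equality is easy to see numerically. A sketch (again mine, with arbitrary lambda and final time): Forward Euler has local order 1, and its global error at a fixed final time indeed drops by a factor of about 2 when delta t is halved.

```python
import numpy as np

lam, T, u0 = -1.0, 1.0, 1.0

def global_error(n_steps):
    """Integrate du/dt = lam*u to time T with Forward Euler and
    return the error at the final time."""
    dt = T / n_steps
    u = u0
    for _ in range(n_steps):
        u = u + dt * lam * u   # Forward Euler update
    return abs(u - u0 * np.exp(lam * T))

# Halving dt should halve the global error: global order 1, the
# same as the local order, with no plus or minus one.
errs = [global_error(n) for n in (50, 100, 200)]
print([errs[i] / errs[i + 1] for i in range(2)])   # ratios near 2
```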
But there is no plus or minus one between the global and local order of accuracy.

AUDIENCE: [INAUDIBLE] one or greater [INAUDIBLE]. You can't have, like, zero order of local accuracy?

QIQI WANG: Oh, yes -- this is provided we have a consistent scheme, provided we have p at least equal to 1 here. Right, yeah, thanks for that. So the whole thing is under the assumption that p is greater than or equal to 1. I mean, that's usually the case. We usually only look at schemes that are consistent, right?

AUDIENCE: [INAUDIBLE]

QIQI WANG: Yeah, the definition of consistency is local p greater than or equal to 1. That is a consistent scheme. Yes?

AUDIENCE: And the practical way of finding that is the sum of the--

QIQI WANG: The practical way of finding what?

AUDIENCE: If p is greater than or equal to 1 [INAUDIBLE] discrete surface [INAUDIBLE] equals 0.

QIQI WANG: A practical way of finding out if a scheme is consistent is by doing the truncation error analysis.
You have to look at the Taylor series expansion to find out if the scheme is consistent or not. I don't think there is an easier way to find out if a scheme is consistent or not.

AUDIENCE: There was something in the notes about the coefficients of the non-derivative terms summing to 0. Might be a little bit [INAUDIBLE].

QIQI WANG: Yeah, OK, so there is a condition that the coefficients of the non-derivative terms sum to 0. But I think that is a necessary but not sufficient condition.

AUDIENCE: [INAUDIBLE]

QIQI WANG: Right, right. So if I just pick coefficients [INAUDIBLE] that happen to sum to 0, that is not necessarily a consistent scheme.

AUDIENCE: [INAUDIBLE]

QIQI WANG: Yeah, the sum to 0 only means that you get a consistent approximation of the du/dt equal to 0 equation.

AUDIENCE: [INAUDIBLE]

QIQI WANG: OK -- again, if you look at a PDE, the global order of accuracy is going to be determined both by the spatial and the temporal discretization. So even if you look at how the error decays as you refine your grid, you still need to be careful with the time derivative term.
You need to refine your time step at the same time. And maybe you have to refine your time step even more, if your order of accuracy in time is less than your order of accuracy in space.

All right, any questions on accuracy? And on accuracy: if you need to figure out what a scheme is, the order of accuracy is a good way to check, because different schemes have different orders of accuracy. Forward Euler and Backward Euler are first-order accurate. The trapezoidal rule and midpoint are second-order accurate. And there are more advanced schemes; many of them have an even higher order of accuracy. So that's a good distinguisher of different schemes.

OK, eigenvalue stability. OK, can somebody tell me? We looked at zero stability, right? That is what makes a scheme have a global order of accuracy equal to its local order of accuracy. How is eigenvalue stability different from zero stability? Yes?

AUDIENCE: [INAUDIBLE]

QIQI WANG: Hm? In the general case--

AUDIENCE: [INAUDIBLE]

QIQI WANG: Right.
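"Different schemes have different orders of accuracy" suggests a practical test: measure the observed order and match it against the candidates. A sketch, not from the lecture -- the step functions and the test problem du/dt = -u are my own choices:

```python
import numpy as np

lam, u0 = -1.0, 1.0

def step_fe(u, dt):
    """One Forward Euler step."""
    return u + dt * lam * u

def step_rk2(u, dt):
    """One midpoint (RK2) step."""
    k1 = lam * u
    k2 = lam * (u + 0.5 * dt * k1)
    return u + dt * k2

def observed_order(step):
    """Estimate the global order from errors at T = 1 with dt and dt/2."""
    def err(n):
        u = u0
        for _ in range(n):
            u = step(u, 1.0 / n)
        return abs(u - u0 * np.exp(lam))
    return float(np.log2(err(100) / err(200)))

# Forward Euler measures near 1, midpoint/RK2 near 2.
print(observed_order(step_fe), observed_order(step_rk2))
```

An observed order near 1 points at Forward or Backward Euler; near 2 points at trapezoidal, midpoint, or RK2 -- the order alone narrows the field, though it does not always pin down a single scheme.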
So eigenvalue stability, in some sense, is an expanded version of zero stability. It basically says that my solution, v k, is bounded for the equation du/dt equal to lambda u. So that is eigenvalue stability, while zero stability says that the solution is bounded for du/dt equal to 0.

So when you talk about eigenvalue stability, you have to give me two things for me to determine if a scheme is eigenvalue stable or not. You have to give me the scheme. What scheme are you using -- are you talking Forward Euler, Backward Euler, midpoint? And you have to also give me something else for me to determine if I'm going to have eigenvalue stability or not. What is that?

AUDIENCE: The governing equation?

QIQI WANG: Hm?

AUDIENCE: The governing equation.

QIQI WANG: The governing equation -- or, more specifically, lambda times delta t. Right? You have to give me these two things in order to determine if a scheme is eigenvalue stable or not. And if you search for MIT math links, eigenvalue stability, the first thing you're going to get -- on Google, at least -- is this thing.
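The "two things" -- the scheme plus lambda times delta t -- map directly into code. A minimal sketch with my own naming: each one-step scheme applied to du/dt = lambda*u reduces to v_{k+1} = g(z) * v_k with z = lambda*delta t, and eigenvalue stability is just |g(z)| <= 1.

```python
def amplification(scheme, z):
    """Per-step amplification factor g(z), z = lambda*dt, so that
    v_{k+1} = g(z) * v_k. Eigenvalue stable iff |g(z)| <= 1."""
    if scheme == "forward_euler":
        return 1 + z              # v_{k+1} = v_k + dt*lam*v_k
    if scheme == "backward_euler":
        return 1 / (1 - z)        # v_{k+1} = v_k + dt*lam*v_{k+1}
    raise ValueError(scheme)

# Same lambda*dt, different schemes, different verdicts:
z = -2.5   # real lambda < 0 with a fairly large dt
print(abs(amplification("forward_euler", z)) <= 1,    # unstable
      abs(amplification("backward_euler", z)) <= 1)   # stable
```

Scanning z over a grid in the complex plane and shading where |g(z)| <= 1 reproduces exactly the stability-region plots the applet draws.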
444 00:29:21,835 --> 00:29:26,140 Always run, OK, run. 445 00:29:26,140 --> 00:29:29,518 You have to do a bunch of Java things. 446 00:29:29,518 --> 00:29:30,485 Come on. 447 00:29:36,430 --> 00:29:41,494 Yeah, that is going to give you the eigenvalue stability. 448 00:29:41,494 --> 00:29:42,930 So you need two things. 449 00:29:42,930 --> 00:29:45,560 One thing is the scheme, right? 450 00:29:45,560 --> 00:29:47,818 For different schemes, you are going 451 00:29:47,818 --> 00:29:51,720 to get a different plot of eigenvalue stability. 452 00:29:51,720 --> 00:29:59,070 So if I get Backward Euler, that is my stability region. 453 00:29:59,070 --> 00:30:02,970 And this plot is a plot [INAUDIBLE] 454 00:30:02,970 --> 00:30:07,096 depending on the distance lambda delta t, right? 455 00:30:07,096 --> 00:30:09,424 That's the second thing you need to tell me 456 00:30:09,424 --> 00:30:12,120 in order to [INAUDIBLE] eigenvalue stability. 457 00:30:12,120 --> 00:30:15,185 You also need to tell me lambda delta t, 458 00:30:15,185 --> 00:30:20,378 and that determines one point in the complex plane. 459 00:30:20,378 --> 00:30:27,376 And depending on [INAUDIBLE] and eigenvalue stable [INAUDIBLE] 460 00:30:27,376 --> 00:30:29,320 or the eigenvalue unstable [INAUDIBLE]. 461 00:30:35,152 --> 00:30:37,485 Right? 462 00:30:37,485 --> 00:30:38,068 Any questions? 463 00:30:41,470 --> 00:30:45,570 So questions over there? 464 00:30:45,570 --> 00:30:50,338 OK, so yes, tell me a scheme, and tell me lambda delta t. 465 00:30:50,338 --> 00:30:56,290 [INAUDIBLE] I'm going to tell you if it's stable or not. 466 00:30:56,290 --> 00:30:59,760 And one thing is for Backward Euler, 467 00:30:59,760 --> 00:31:04,810 as your delta t decreases, whatever 468 00:31:04,810 --> 00:31:06,420 is happening, for the same lambda, 469 00:31:06,420 --> 00:31:08,930 as my delta t decreases, I'm more and more 470 00:31:08,930 --> 00:31:13,570 zooming into small regions and in all regions.
471 00:31:13,570 --> 00:31:17,190 And the smaller I get, the more I'm 472 00:31:17,190 --> 00:31:23,860 approximating the stability of the true ODE. 473 00:31:23,860 --> 00:31:27,066 The value of the true ODE is the [INAUDIBLE]. 474 00:31:27,066 --> 00:31:30,332 If lambda has a [INAUDIBLE], then that's unstable behavior. 475 00:31:30,332 --> 00:31:35,120 If lambda has a [INAUDIBLE] value, then it's stable. 476 00:31:35,120 --> 00:31:38,330 Must be stable, right? 477 00:31:38,330 --> 00:31:41,479 If I zoom in [INAUDIBLE], that's going 478 00:31:41,479 --> 00:31:45,380 to be more and more of what I get. 479 00:31:45,380 --> 00:31:49,730 And same thing for Forward Euler, right? 480 00:31:49,730 --> 00:31:54,880 As I zoom more and more into the [INAUDIBLE], that's what I get. 481 00:31:54,880 --> 00:31:58,910 And the same thing for trapezoidal. 482 00:31:58,910 --> 00:32:02,160 It's really depending on if I zoom in or not, 483 00:32:02,160 --> 00:32:04,033 but the same thing for RK2. 484 00:32:04,033 --> 00:32:09,560 As I zoom in [INAUDIBLE], I get [INAUDIBLE] stable, right term 485 00:32:09,560 --> 00:32:10,930 being unstable. 486 00:32:10,930 --> 00:32:14,860 And same thing for [INAUDIBLE]. 487 00:32:14,860 --> 00:32:17,570 I get the behavior that on the left hand side 488 00:32:17,570 --> 00:32:20,221 is stable, on the right hand side is unstable. 489 00:32:20,221 --> 00:32:20,720 Yes? 490 00:32:20,720 --> 00:32:21,595 AUDIENCE: [INAUDIBLE] 491 00:32:27,654 --> 00:32:29,070 QIQI WANG: For the Backward Euler, 492 00:32:29,070 --> 00:32:31,245 I can easily [INAUDIBLE] delta t, 493 00:32:31,245 --> 00:32:36,030 and it actually is going to make an unstable equation stable, 494 00:32:36,030 --> 00:32:37,210 yes. 495 00:32:37,210 --> 00:32:41,080 That actually happens with some PDEs. 496 00:32:41,080 --> 00:32:45,530 Now, you can even sometimes, for example, [INAUDIBLE]. 497 00:32:48,100 --> 00:32:52,158 And a lot of the fluid flows are actually unstable. 
498 00:32:52,158 --> 00:32:54,850 So if you're looking at, for example, vortex shedding 499 00:32:54,850 --> 00:32:59,190 behind the [INAUDIBLE], what you see there is an unstable flow 500 00:32:59,190 --> 00:32:59,922 field. 501 00:32:59,922 --> 00:33:02,290 But you can actually get a stable flow field 502 00:33:02,290 --> 00:33:05,146 if you use Backward Euler. 503 00:33:05,146 --> 00:33:09,040 You can use Backward Euler with a really [INAUDIBLE]. 504 00:33:09,040 --> 00:33:11,960 If you do that, you can actually converge, 505 00:33:11,960 --> 00:33:15,930 force yourself to converge to an actually unstable 506 00:33:15,930 --> 00:33:17,234 closed solution. 507 00:33:17,234 --> 00:33:18,109 AUDIENCE: [INAUDIBLE] 508 00:33:21,560 --> 00:33:25,310 QIQI WANG: The question is, are you still consistent? 509 00:33:25,310 --> 00:33:28,890 The question of consistency is for ODEs. 510 00:33:28,890 --> 00:33:33,980 If we find that lambda delta t goes to zero, 511 00:33:33,980 --> 00:33:37,690 consistency means the [INAUDIBLE] behavior of the ODE 512 00:33:37,690 --> 00:33:41,510 as lambda delta t goes to 0, in this case, that's always 513 00:33:41,510 --> 00:33:45,812 consistent because if your lambda delta t goes to 0, 514 00:33:45,812 --> 00:33:49,890 you are approximating the true behavior [INAUDIBLE]. 515 00:33:49,890 --> 00:33:52,430 Consistency has nothing to do with 516 00:33:52,430 --> 00:33:56,310 if I give you a really big delta t. 517 00:33:56,310 --> 00:33:59,170 If I give it a really big delta t, 518 00:33:59,170 --> 00:34:02,118 then it has nothing to do with consistency. 519 00:34:02,118 --> 00:34:09,239 Because consistency is behavior of the linear [INAUDIBLE]. 520 00:34:09,239 --> 00:34:11,147 AUDIENCE: [INAUDIBLE] 521 00:34:11,147 --> 00:34:13,532 QIQI WANG: Oh, OK, yes. 522 00:34:13,532 --> 00:34:17,330 I said converges in a different sense.
523 00:34:17,330 --> 00:34:23,412 So the convergence we are talking about here, 524 00:34:23,412 --> 00:34:28,045 as I decrease my delta t, as the numerical solution approximates 525 00:34:28,045 --> 00:34:31,580 the analytical solution, what I mean over here 526 00:34:31,580 --> 00:34:41,265 is that when [INAUDIBLE] delta t is low, as I've [INAUDIBLE], 527 00:34:41,265 --> 00:34:45,530 grows closer and closer to a solution of the time 528 00:34:45,530 --> 00:34:46,488 independent equation. 529 00:34:49,841 --> 00:34:54,756 So that's what I was saying when I spoke about 530 00:34:54,756 --> 00:34:57,760 convergence earlier. 531 00:34:57,760 --> 00:35:06,218 It has a-- I think that this is not a very good equation, 532 00:35:06,218 --> 00:35:08,210 but convergence in [INAUDIBLE]. 533 00:35:11,170 --> 00:35:16,190 One thing is when I decrease my [INAUDIBLE] time and space, 534 00:35:16,190 --> 00:35:19,080 I might converge to the analytical solution. 535 00:35:19,080 --> 00:35:23,570 The second thing is in an iterative scheme, 536 00:35:23,570 --> 00:35:26,020 I guess you're going to learn more when you go 537 00:35:26,020 --> 00:35:28,950 through more advanced classes. 538 00:35:28,950 --> 00:35:32,310 When you apply an iterative scheme, 539 00:35:32,310 --> 00:35:37,070 trying to compute the solution to a steady state differential 540 00:35:37,070 --> 00:35:40,420 equation, these states are [INAUDIBLE]. 541 00:35:40,420 --> 00:35:42,710 The steady state never stops closing. 542 00:35:42,710 --> 00:35:47,233 And as I increase the iterations, 543 00:35:47,233 --> 00:35:54,250 do I get closer to the solution of the steady state equation? 544 00:35:54,250 --> 00:35:56,560 It's more like a convergence in the sense of we 545 00:35:56,560 --> 00:35:58,726 can apply Newton-Raphson. 546 00:35:58,726 --> 00:36:03,202 We can apply Newton-Raphson to solve the governing equations.
547 00:36:03,202 --> 00:36:04,660 That happens to be more [INAUDIBLE] 548 00:36:04,660 --> 00:36:07,720 of your iterative solution. 549 00:36:07,720 --> 00:36:11,630 Convergence means, as I do more Newton-Raphson steps, 550 00:36:11,630 --> 00:36:16,390 do I converge to the solution for the [INAUDIBLE] equation? 551 00:36:16,390 --> 00:36:19,110 So there are two completely different concepts 552 00:36:19,110 --> 00:36:20,475 of convergence. 553 00:36:20,475 --> 00:36:23,220 One is, as I decrease both delta t and delta x, 554 00:36:23,220 --> 00:36:24,510 I get closer to a solution. 555 00:36:24,510 --> 00:36:30,390 The second is, as I iterate more, 556 00:36:30,390 --> 00:36:33,619 do I converge to a solution? 557 00:36:33,619 --> 00:37:27,848 AUDIENCE: [INAUDIBLE] What about [INAUDIBLE] So what it's saying 558 00:37:27,848 --> 00:37:29,028 is that [INAUDIBLE]. 559 00:37:34,220 --> 00:37:34,958 QIQI WANG: Right. 560 00:37:34,958 --> 00:37:35,833 AUDIENCE: [INAUDIBLE] 561 00:38:21,760 --> 00:38:22,877 QIQI WANG: OK, I'm-- 562 00:38:22,877 --> 00:38:23,752 AUDIENCE: [INAUDIBLE] 563 00:38:27,736 --> 00:38:32,192 QIQI WANG: Yes, so I'm going to draw what [INAUDIBLE]. 564 00:38:32,192 --> 00:38:35,590 So if you look at one where there's [INAUDIBLE] over here, 565 00:38:35,590 --> 00:38:40,280 that means a lambda that is that type of property, 566 00:38:40,280 --> 00:38:43,550 the real part-- and if you look at the analytical solution, 567 00:38:43,550 --> 00:38:45,280 the analytical solution here is something 568 00:38:45,280 --> 00:38:48,470 that oscillates sinusoidally while growing 569 00:38:48,470 --> 00:38:50,580 in magnitude exponentially. 570 00:38:50,580 --> 00:38:51,440 Right? 571 00:38:51,440 --> 00:38:54,970 So that's an analytical solution.
572 00:38:54,970 --> 00:38:59,250 If you used Backward Euler with a small time step, 573 00:38:59,250 --> 00:39:04,290 so that is like when your lambda delta t is going to be 0.1, 574 00:39:04,290 --> 00:39:07,260 with a small delta t, you're [INAUDIBLE] lambda 575 00:39:07,260 --> 00:39:09,530 to somewhere close to the origin. 576 00:39:09,530 --> 00:39:12,530 So you may get a solution that 577 00:39:12,530 --> 00:39:16,160 doesn't have exactly the right behavior, 578 00:39:16,160 --> 00:39:19,580 unless your delta t is infinitely small. 579 00:39:19,580 --> 00:39:21,650 But you are also going to get something that 580 00:39:21,650 --> 00:39:24,200 grows exponentially larger. 581 00:39:24,200 --> 00:39:29,370 So that is when you have a small delta t. 582 00:39:29,370 --> 00:39:32,020 Now, if you use a large delta t, like you 583 00:39:32,020 --> 00:39:36,090 are scaling the lambda delta t to somewhere 584 00:39:36,090 --> 00:39:41,850 larger along the same line, because the delta t is real, 585 00:39:41,850 --> 00:39:46,500 what happens is that we get a stable solution. 586 00:39:46,500 --> 00:39:50,350 So although analytically the solution grows larger, 587 00:39:50,350 --> 00:39:52,770 you are expected to get a solution that 588 00:39:52,770 --> 00:39:56,230 looks more like this. 589 00:39:56,230 --> 00:40:00,030 So you are going to get the wrong answer, qualitatively 590 00:40:00,030 --> 00:40:00,530 even. 591 00:40:03,760 --> 00:40:05,350 And of course, this is because you're 592 00:40:05,350 --> 00:40:07,470 using a super large delta t. 593 00:40:07,470 --> 00:40:12,160 You're using a delta t that is actually much larger than 1 594 00:40:12,160 --> 00:40:16,150 over the magnitude of the eigenvalue. 595 00:40:19,126 --> 00:40:20,614 Right? 596 00:40:20,614 --> 00:40:23,600 Does that make sense? 597 00:40:23,600 --> 00:40:24,482 Yes? 598 00:40:24,482 --> 00:40:25,357 AUDIENCE: [INAUDIBLE] 599 00:40:56,340 --> 00:40:57,198 QIQI WANG: Yes.
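This qualitative point can be reproduced in a few lines. The sketch below uses my own model values, lambda = 0.1 + 1i, an analytically growing oscillation, and marches Backward Euler with a small and a large delta t to compare the solution magnitudes:

```python
def backward_euler_magnitudes(lam, dt, steps):
    """March du/dt = lambda*u with Backward Euler from u_0 = 1 and
    return |u_k| after each step: u_{k+1} = u_k / (1 - lambda*dt)."""
    u, mags = 1.0 + 0j, []
    for _ in range(steps):
        u = u / (1 - lam * dt)
        mags.append(abs(u))
    return mags

lam = 0.1 + 1.0j  # positive real part: the analytical solution grows

small = backward_euler_magnitudes(lam, dt=0.1, steps=50)
large = backward_euler_magnitudes(lam, dt=10.0, steps=50)

print(small[-1] > 1.0)  # True: small delta t reproduces the growth
print(large[-1] < 1.0)  # True: large delta t falsely damps the solution
```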
600 00:40:57,198 --> 00:40:58,073 AUDIENCE: [INAUDIBLE] 601 00:41:14,640 --> 00:41:17,064 QIQI WANG: So you're asking a very good question. 602 00:41:17,064 --> 00:41:20,640 So for a system that is analytically unstable, 603 00:41:20,640 --> 00:41:23,244 what is a good way of telling my numerical scheme is 604 00:41:23,244 --> 00:41:25,220 doing a good job or not? 605 00:41:25,220 --> 00:41:28,430 This is a much deeper question than I can answer. 606 00:41:28,430 --> 00:41:31,528 Yeah, there is a [INAUDIBLE]. 607 00:41:34,839 --> 00:41:38,920 If the solution analytically is unstable, 608 00:41:38,920 --> 00:41:43,000 that means to approximate using numerical methods is extremely 609 00:41:43,000 --> 00:41:48,966 difficult. If you make a small error in the beginning, 610 00:41:48,966 --> 00:41:50,674 even if you have a small error let's 611 00:41:50,674 --> 00:41:53,670 say in terms of [INAUDIBLE], even though we'll assume 612 00:41:53,670 --> 00:41:58,970 you are doing everything exactly starting from time step 0, 613 00:41:58,970 --> 00:42:01,350 you're still going to get a [INAUDIBLE] error 614 00:42:01,350 --> 00:42:03,800 as you come to here. 615 00:42:03,800 --> 00:42:06,414 Just because the equation itself is unstable. 616 00:42:06,414 --> 00:42:07,830 The equation itself being unstable 617 00:42:07,830 --> 00:42:11,590 means you can make a small perturbation over here. 618 00:42:11,590 --> 00:42:13,840 That perturbation will [INAUDIBLE] 619 00:42:13,840 --> 00:42:18,710 grow larger and larger as we integrate more and more. 620 00:42:18,710 --> 00:42:25,110 So treating a case like that numerically is very difficult. 621 00:42:25,110 --> 00:42:31,510 And if interested, let's talk more about that after class. 622 00:42:31,510 --> 00:42:36,750 All right, any other questions? 623 00:42:36,750 --> 00:42:39,540 OK. 624 00:42:39,540 --> 00:42:41,370 And by the way, that's something I'm 625 00:42:41,370 --> 00:42:45,360 actually looking at in my research right now. 
626 00:42:45,360 --> 00:42:47,235 So very good question. 627 00:42:47,235 --> 00:42:48,950 I'm very impressed. 628 00:42:48,950 --> 00:42:49,894 AUDIENCE: [INAUDIBLE] 629 00:42:49,894 --> 00:42:51,560 QIQI WANG: Yeah, that's why people can't 630 00:42:51,560 --> 00:42:52,720 predict the weather, right? 631 00:42:52,720 --> 00:42:58,300 I mean, they try to solve a PDE to get the weather seven days 632 00:42:58,300 --> 00:42:58,800 later. 633 00:42:58,800 --> 00:43:01,120 But you know it's going to be a not very 634 00:43:01,120 --> 00:43:02,725 good solution by experience. 635 00:43:05,400 --> 00:43:07,560 So that's exactly what they're trying to do, 636 00:43:07,560 --> 00:43:11,640 solve an unstable system forward in time 637 00:43:11,640 --> 00:43:13,300 for something like seven days. 638 00:43:15,630 --> 00:43:16,130 All right. 639 00:43:18,750 --> 00:43:20,250 OK. 640 00:43:20,250 --> 00:43:24,090 I got a question of, what is the advantage, what 641 00:43:24,090 --> 00:43:28,110 is the main advantage and disadvantage of explicit 642 00:43:28,110 --> 00:43:30,610 versus implicit methods? 643 00:43:30,610 --> 00:43:33,180 Let's do a comparison here. 644 00:43:41,130 --> 00:43:42,723 You all did the project. 645 00:43:42,723 --> 00:43:45,950 So you all kind of know the disadvantage 646 00:43:45,950 --> 00:43:49,110 of an implicit method. 647 00:43:49,110 --> 00:43:50,666 What is that? 648 00:43:50,666 --> 00:43:52,380 AUDIENCE: [INAUDIBLE] 649 00:43:52,380 --> 00:43:55,490 QIQI WANG: A lot of coding, right? 650 00:43:55,490 --> 00:43:56,680 Why lots of coding? 651 00:43:59,921 --> 00:44:02,710 AUDIENCE: [INAUDIBLE] 652 00:44:02,710 --> 00:44:05,210 QIQI WANG: You have to solve a non-linear equation. 653 00:44:05,210 --> 00:44:14,480 Solve non-linear equation, right, in every time step. 654 00:44:14,480 --> 00:44:17,310 And the way we solve it is using Newton-Raphson. 
655 00:44:17,310 --> 00:44:22,680 So we have to apply Newton-Raphson iteration 656 00:44:22,680 --> 00:44:25,160 within every time step. 657 00:44:25,160 --> 00:44:30,270 That means a nested loop within every time step. 658 00:44:30,270 --> 00:44:31,690 So you have an outer loop that is 659 00:44:31,690 --> 00:44:33,530 looping through the time step. 660 00:44:33,530 --> 00:44:37,090 Within the outer loop, you need to have the inner loop that 661 00:44:37,090 --> 00:44:39,200 does Newton-Raphson iteration. 662 00:44:39,200 --> 00:44:41,280 So of course, it's much more complicated 663 00:44:41,280 --> 00:44:44,180 than explicit schemes, right? 664 00:44:44,180 --> 00:44:51,910 Where you don't need to solve any non-linear equations. 665 00:44:51,910 --> 00:44:56,230 That's why it is explicit, right? 666 00:44:56,230 --> 00:45:03,850 OK, but now, what is the advantage of implicit schemes? 667 00:45:03,850 --> 00:45:05,870 AUDIENCE: [INAUDIBLE] 668 00:45:05,870 --> 00:45:08,470 QIQI WANG: It's way more accurate. 669 00:45:08,470 --> 00:45:11,740 But I wouldn't hold that as a rule. 670 00:45:11,740 --> 00:45:14,840 I mean, in the problem, yes. 671 00:45:14,840 --> 00:45:18,713 We used an implicit scheme that turns out 672 00:45:18,713 --> 00:45:20,345 to be way more accurate. 673 00:45:20,345 --> 00:45:24,511 But the reason may just be our implicit scheme 674 00:45:24,511 --> 00:45:27,740 is eigenvalue stable, right? 675 00:45:27,740 --> 00:45:31,230 The explicit scheme we were using was [INAUDIBLE]. 676 00:45:31,230 --> 00:45:34,380 That turns out to be eigenvalue stable only along 677 00:45:34,380 --> 00:45:35,790 the imaginary axis. 678 00:45:39,024 --> 00:45:45,470 So accuracy is actually not the main driver 679 00:45:45,470 --> 00:45:50,402 of people adopting implicit schemes over explicit schemes. 680 00:45:50,402 --> 00:45:52,390 AUDIENCE: [INAUDIBLE] 681 00:45:52,390 --> 00:45:55,820 QIQI WANG: Yes, the main driver is a larger stability region.
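The nested-loop structure, an outer loop over time steps with an inner Newton-Raphson loop, can be sketched as below. The model problem du/dt = -u^3 is my own choice, not from the lecture:

```python
def backward_euler_newton(f, dfdu, u0, dt, nsteps, tol=1e-12, maxit=50):
    """Backward Euler where each time step solves the scalar residual
    F(u) = (u - u_k)/dt - f(u) = 0 with an inner Newton-Raphson loop."""
    u = u0
    history = [u]
    for _ in range(nsteps):        # outer loop: time steps
        uk = u
        for _ in range(maxit):     # inner loop: Newton-Raphson
            F = (u - uk) / dt - f(u)
            dF = 1.0 / dt - dfdu(u)
            du = -F / dF
            u += du
            if abs(du) < tol:
                break
        history.append(u)
    return history

# Assumed model problem: du/dt = -u^3, starting from u(0) = 1.
hist = backward_euler_newton(f=lambda u: -u**3, dfdu=lambda u: -3 * u**2,
                             u0=1.0, dt=0.5, nsteps=20)
print(0 < hist[-1] < hist[0])  # True: the solution decays toward 0
```

For a discretized PDE, u would be a vector and the inner loop would solve a linear system with the Jacobian matrix instead of dividing by a scalar derivative.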
682 00:46:03,400 --> 00:46:06,980 Larger stability region, as we were just looking at. 683 00:46:06,980 --> 00:46:10,030 OK, just for example, compare Forward Euler 684 00:46:10,030 --> 00:46:12,890 with Backward Euler. 685 00:46:12,890 --> 00:46:17,720 Forward Euler, tiny, right? 686 00:46:17,720 --> 00:46:24,260 Backward Euler, the region where it's unstable is tiny, right? 687 00:46:24,260 --> 00:46:26,280 That's kind of an extreme comparison, 688 00:46:26,280 --> 00:46:28,880 but gets the point through. 689 00:46:28,880 --> 00:46:30,005 Yes? 690 00:46:30,005 --> 00:46:30,880 AUDIENCE: [INAUDIBLE] 691 00:46:42,380 --> 00:46:43,590 QIQI WANG: OK, good point. 692 00:46:43,590 --> 00:46:46,420 What happens if Newton-Raphson doesn't converge 693 00:46:46,420 --> 00:46:48,690 in the implicit method? 694 00:46:48,690 --> 00:46:52,120 And now, when you say converge, you don't mean as delta t 695 00:46:52,120 --> 00:46:53,760 and delta x go to 0, right? 696 00:46:53,760 --> 00:46:58,340 You mean as my iteration goes to infinity, doesn't converge? 697 00:46:58,340 --> 00:47:00,664 So it's a very good point. 698 00:47:00,664 --> 00:47:06,220 Because yes, if you have a very non-linear problem, 699 00:47:06,220 --> 00:47:09,330 if you use a super high delta t, it's 700 00:47:09,330 --> 00:47:11,540 quite possible your Newton-Raphson doesn't 701 00:47:11,540 --> 00:47:13,320 converge. 702 00:47:13,320 --> 00:47:19,980 So if you use implicit methods, there is actually an implicit 703 00:47:19,980 --> 00:47:25,950 restriction on delta t which you cannot get from this 704 00:47:25,950 --> 00:47:27,710 eigenvalue stability analysis. 705 00:47:27,710 --> 00:47:32,870 It is actually the delta t that is going 706 00:47:32,870 --> 00:47:38,280 to enable you to converge rapidly with Newton-Raphson iteration.
707 00:47:38,280 --> 00:47:42,860 So one thing I think you can-- by just a little bit 708 00:47:42,860 --> 00:47:49,070 of analysis you can find out is that as I decrease my delta t, 709 00:47:49,070 --> 00:47:52,372 my Newton-Raphson is going to have a much easier time 710 00:47:52,372 --> 00:47:54,370 to converge. 711 00:47:54,370 --> 00:47:59,810 In fact, if my delta t is very small, then my Newton-Raphson, 712 00:47:59,810 --> 00:48:03,900 I'm going to have a very good initial guess 713 00:48:03,900 --> 00:48:04,990 of my Newton-Raphson. 714 00:48:04,990 --> 00:48:07,510 Because if my delta t is very small, 715 00:48:07,510 --> 00:48:10,096 then my next step solution is going 716 00:48:10,096 --> 00:48:13,346 to be pretty close to my current time step solution. 717 00:48:13,346 --> 00:48:15,190 Right? 718 00:48:15,190 --> 00:48:17,324 The change of the state over the two steps 719 00:48:17,324 --> 00:48:18,940 wouldn't be that large. 720 00:48:18,940 --> 00:48:24,670 And Newton-Raphson will always converge if your initial guess 721 00:48:24,670 --> 00:48:28,362 is close enough. 722 00:48:28,362 --> 00:48:31,510 So that's the nature of Newton-Raphson 723 00:48:31,510 --> 00:48:33,770 because it uses a linear approximation 724 00:48:33,770 --> 00:48:38,050 to get the root of that linear approximation. 725 00:48:38,050 --> 00:48:40,320 I can talk more about that. 726 00:48:40,320 --> 00:48:43,420 But if you have a close enough initial guess, 727 00:48:43,420 --> 00:48:45,400 Newton-Raphson will always converge. 728 00:48:45,400 --> 00:48:49,230 Therefore, by decreasing, if Newton-Raphson 729 00:48:49,230 --> 00:48:52,881 doesn't converge, a very straightforward recipe 730 00:48:52,881 --> 00:48:55,240 is decrease your delta t. 
731 00:48:55,240 --> 00:48:57,460 And that is going to make the change between one 732 00:48:57,460 --> 00:49:00,190 state and the next time step state 733 00:49:00,190 --> 00:49:04,325 closer, and therefore give you a much better initial guess 734 00:49:04,325 --> 00:49:06,500 to Newton-Raphson. 735 00:49:06,500 --> 00:49:09,380 And that is going to allow you to converge much easier. 736 00:49:13,560 --> 00:49:15,570 So yes, Newton-Raphson actually can 737 00:49:15,570 --> 00:49:19,725 diverge if you have a super non-linear problem 738 00:49:19,725 --> 00:49:22,056 and you use a super large time step. 739 00:49:26,527 --> 00:49:27,402 AUDIENCE: [INAUDIBLE] 740 00:49:37,150 --> 00:49:38,010 QIQI WANG: Oh, yes. 741 00:49:38,010 --> 00:49:41,454 By the way, I can make the same argument for [INAUDIBLE]. 742 00:49:41,454 --> 00:49:44,120 If I get an unstable system, I can always 743 00:49:44,120 --> 00:49:48,166 decrease the time step so that I get into the stability 744 00:49:48,166 --> 00:49:51,030 region of the implicit scheme. 745 00:49:51,030 --> 00:49:56,720 That is still true, except the time step restriction 746 00:49:56,720 --> 00:50:00,050 for Newton-Raphson, as we said over here, 747 00:50:00,050 --> 00:50:04,320 is set by a different mechanism than the largest 748 00:50:04,320 --> 00:50:05,880 delta t I can handle up here. 749 00:50:05,880 --> 00:50:08,970 The largest delta t I can use in the explicit scheme 750 00:50:08,970 --> 00:50:12,550 is set by how large the largest eigenvalue is. 751 00:50:12,550 --> 00:50:14,977 It is really-- we can really obtain it 752 00:50:14,977 --> 00:50:16,670 from a linear analysis. 753 00:50:16,670 --> 00:50:19,135 And linearizing the equation [INAUDIBLE] 754 00:50:19,135 --> 00:50:21,180 the largest eigenvalue, I can find out 755 00:50:21,180 --> 00:50:25,030 what is the largest time step I can take here.
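The recipe of shrinking delta t to rescue Newton-Raphson can be tested directly. In this sketch (again on an assumed model problem, du/dt = -u^3), counting the Newton iterations needed for one implicit step shows the small delta t converging at least as fast, because the previous state is already a good initial guess:

```python
def newton_iterations(uk, dt, tol=1e-12, maxit=100):
    """Count Newton-Raphson iterations for one Backward Euler step of
    du/dt = -u^3, i.e. solve F(u) = (u - uk)/dt + u**3 = 0 from u = uk."""
    u = uk
    for it in range(1, maxit + 1):
        F = (u - uk) / dt + u**3
        dF = 1.0 / dt + 3 * u**2
        du = -F / dF
        u += du
        if abs(du) < tol:
            return it
    return maxit

few = newton_iterations(uk=1.0, dt=0.01)    # root is very close to uk
many = newton_iterations(uk=1.0, dt=100.0)  # root is far from uk
print(few <= many)  # True: the smaller step needs no more iterations
```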
756 00:50:25,030 --> 00:50:27,211 So [INAUDIBLE] the implicit method 757 00:50:27,211 --> 00:50:30,570 is governed by if Newton-Raphson converges. 758 00:50:30,570 --> 00:50:33,300 And Newton-Raphson will always converge in one step 759 00:50:33,300 --> 00:50:37,222 if I have a linear equation. 760 00:50:37,222 --> 00:50:40,530 Newton-Raphson will converge in one single step 761 00:50:40,530 --> 00:50:43,540 if I have a linear equation. 762 00:50:43,540 --> 00:50:45,422 So the time step restriction here 763 00:50:45,422 --> 00:50:49,040 is set by the convergence of Newton-Raphson, 764 00:50:49,040 --> 00:50:51,785 not by the eigenvalue stability, right? 765 00:50:51,785 --> 00:50:55,020 But by how non-linear the equation is. 766 00:50:55,020 --> 00:51:00,000 So there are some integration methods where people actually 767 00:51:00,000 --> 00:51:04,470 separate out the linear part that 768 00:51:04,470 --> 00:51:09,000 has super large eigenvalues and the non-linear part. 769 00:51:09,000 --> 00:51:11,838 So for a linear part that has super large eigenvalues, 770 00:51:11,838 --> 00:51:13,630 they do it implicitly. 771 00:51:13,630 --> 00:51:16,544 And for the non-linear part, they do it explicitly. 772 00:51:16,544 --> 00:51:21,590 And there's a class of methods for the [INAUDIBLE]. 773 00:51:21,590 --> 00:51:25,910 Fancy name, but short for implicit-explicit methods that 774 00:51:25,910 --> 00:51:30,250 treats different parts of the equation, either explicitly 775 00:51:30,250 --> 00:51:32,420 or implicitly. 776 00:51:32,420 --> 00:51:36,350 And as you can guess, the part they treat implicitly 777 00:51:36,350 --> 00:51:39,970 is going to be the linear part that 778 00:51:39,970 --> 00:51:42,940 has super large eigenvalues. 779 00:51:42,940 --> 00:51:44,794 So that you converge in one step 780 00:51:44,794 --> 00:51:46,210 when you do Newton-Raphson, and you 781 00:51:46,210 --> 00:51:50,462 avoid the time step restriction set by the large eigenvalues.
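A minimal implicit-explicit step can look like the sketch below. The split du/dt = lambda*u + g(u) is my own model: a stiff linear term lambda*u treated implicitly, and an assumed mild nonlinear term g(u) = u^2 treated explicitly, so each step solves in closed form and no Newton-Raphson loop is needed:

```python
def imex_euler_step(u, lam, g, dt):
    """One implicit-explicit Euler step for du/dt = lam*u + g(u):
        (u_next - u)/dt = lam*u_next + g(u)
    The stiff linear term is implicit, the nonlinear term explicit,
    so the update solves in closed form with no Newton-Raphson."""
    return (u + dt * g(u)) / (1 - dt * lam)

# Stiff linear part lam = -1000; assumed mild nonlinear part g(u) = u^2.
u, lam, dt = 1.0, -1000.0, 0.01  # dt is 5x Forward Euler's limit of 2/1000
for _ in range(100):
    u = imex_euler_step(u, lam, lambda v: v**2, dt)
print(abs(u) < 1e-6)  # True: stable far beyond the explicit limit
```

The design choice is exactly the one described above: the implicit part is linear, so its "Newton" solve is a single exact linear solve per step.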
782 00:51:54,420 --> 00:51:55,283 All right, OK. 783 00:51:58,390 --> 00:51:59,240 Right. 784 00:51:59,240 --> 00:52:05,495 So this is really useful especially in stiff problems. 785 00:52:09,560 --> 00:52:11,752 And what is a stiff problem? 786 00:52:11,752 --> 00:52:15,580 A stiff problem is actually defined 787 00:52:15,580 --> 00:52:21,330 in terms of if you use an explicit integration scheme. 788 00:52:21,330 --> 00:52:27,609 If you find yourself having to use a super small delta 789 00:52:27,609 --> 00:52:32,350 t not because you want super high accuracy 790 00:52:32,350 --> 00:52:34,730 but because it still goes unstable 791 00:52:34,730 --> 00:52:37,580 if you use a larger delta t, then you 792 00:52:37,580 --> 00:52:40,740 know you have a stiff problem. 793 00:52:40,740 --> 00:52:45,310 That is really the easiest way, I 794 00:52:45,310 --> 00:52:48,510 find, to define a stiff problem. 795 00:52:48,510 --> 00:52:51,320 That is like, you are forced to use a small delta 796 00:52:51,320 --> 00:52:59,090 t by the stability region, not for accuracy reasons, 797 00:52:59,090 --> 00:53:03,930 not for wanting higher accuracy, but simply trying 798 00:53:03,930 --> 00:53:08,210 to avoid [INAUDIBLE]. 799 00:53:08,210 --> 00:53:12,679 OK, so then you get a stiff problem. 800 00:53:12,679 --> 00:53:13,625 Any questions? 801 00:53:17,890 --> 00:53:20,770 Yeah, so any question, please raise it. 802 00:53:20,770 --> 00:53:25,120 Otherwise, I'm going to review Newton-Raphson a little bit, 803 00:53:25,120 --> 00:53:30,221 and there are no more things I'm planning to cover. 804 00:53:30,221 --> 00:53:32,551 So I'm kind of expecting questions from you guys. 805 00:53:32,551 --> 00:53:34,176 AUDIENCE: Finite difference [INAUDIBLE] 806 00:53:40,222 --> 00:53:41,680 QIQI WANG: Yeah, finite difference, 807 00:53:41,680 --> 00:53:44,650 and finite volumes I include in the scope of the exam.
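The stiffness definition above can be demonstrated on an assumed model equation, du/dt = -1000u: Forward Euler needs delta t below 2/1000 purely for stability, not for accuracy, which is exactly the symptom described:

```python
def forward_euler_final(lam, dt, T):
    """Integrate du/dt = lam*u from u(0) = 1 to time T with Forward Euler
    and return the final magnitude |u(T)|."""
    u, t = 1.0, 0.0
    while t < T:
        u += dt * lam * u
        t += dt
    return abs(u)

lam = -1000.0  # stiff: the exact solution decays essentially instantly

# Eigenvalue stability needs |1 + lam*dt| <= 1, i.e. dt <= 2/1000 here.
print(forward_euler_final(lam, dt=0.001, T=0.1) < 1.0)  # True: stable
print(forward_euler_final(lam, dt=0.01, T=0.1) > 1e6)   # True: blows up
```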
808 00:53:44,650 --> 00:53:49,570 As you saw from the previous year's questions I posted, 809 00:53:49,570 --> 00:53:52,070 sometimes we include it in the actual exam, 810 00:53:52,070 --> 00:53:54,320 sometimes we don't. 811 00:53:54,320 --> 00:54:01,993 So it is included in the scope, but because we just did it, 812 00:54:01,993 --> 00:54:04,400 I think it's a good idea to review 813 00:54:04,400 --> 00:54:09,420 some of the earlier stuff that you may have forgotten. 814 00:54:09,420 --> 00:54:15,990 And another benefit is that we are also 815 00:54:15,990 --> 00:54:19,680 going over this now that we have [INAUDIBLE] to give you 816 00:54:19,680 --> 00:54:22,410 a sense that all the stuff we learned 817 00:54:22,410 --> 00:54:27,030 applies to PDEs because what we did in PDEs 818 00:54:27,030 --> 00:54:32,552 is just to approximate the PDE using a big ODE. 819 00:54:32,552 --> 00:54:34,500 Right? 820 00:54:34,500 --> 00:54:37,995 And you can apply implicit methods on the PDE 821 00:54:37,995 --> 00:54:42,560 also, except for the Jacobian. 822 00:54:42,560 --> 00:54:45,800 The Jacobian you get is going to be something close 823 00:54:45,800 --> 00:54:49,283 to the matrix form of finite difference 824 00:54:49,283 --> 00:54:50,949 we did in the finite difference lectures. 825 00:54:53,943 --> 00:54:54,565 Yeah? 826 00:54:54,565 --> 00:54:55,440 AUDIENCE: [INAUDIBLE] 827 00:55:02,319 --> 00:55:03,860 QIQI WANG: Oh, OK, the exam format 828 00:55:03,860 --> 00:55:05,820 is that it's closed notes, right? 829 00:55:05,820 --> 00:55:11,520 It's closed everything in the period from when you 830 00:55:11,520 --> 00:55:16,316 get the exam to when you come to our office. 831 00:55:16,316 --> 00:55:20,756 So it's closed everything, closed computer, 832 00:55:20,756 --> 00:55:24,170 closed cell phone. 833 00:55:24,170 --> 00:55:25,170 Closed Wikipedia.
834 00:55:32,260 --> 00:55:35,980 We are actually letting you work out, really 835 00:55:35,980 --> 00:55:40,179 think about the problem during the time you 836 00:55:40,179 --> 00:55:42,808 are looking at the problem. 837 00:55:42,808 --> 00:55:49,960 But even if you don't get the answer, during our face 838 00:55:49,960 --> 00:55:53,800 time, we are going to also interact with you, 839 00:55:53,800 --> 00:55:57,520 and ask you questions that you may not have expected, 840 00:55:57,520 --> 00:56:01,380 or we may actually help you going through some of these. 841 00:56:01,380 --> 00:56:05,590 So that's kind of how [INAUDIBLE] goes. 842 00:56:05,590 --> 00:56:07,395 Do you have something else you want to ask? 843 00:56:07,395 --> 00:56:08,270 AUDIENCE: [INAUDIBLE] 844 00:56:17,800 --> 00:56:20,280 QIQI WANG: Yeah, right. 845 00:56:20,280 --> 00:56:23,145 You can write things down and bring whatever you 846 00:56:23,145 --> 00:56:25,428 have written into our office. 847 00:56:25,428 --> 00:56:28,152 And in the office, you're expected 848 00:56:28,152 --> 00:56:30,530 to use a whiteboard or blackboard 849 00:56:30,530 --> 00:56:36,310 and explain to us, like you're the professor 850 00:56:36,310 --> 00:56:38,950 and we're the other students, explain to us what you got. 851 00:56:46,170 --> 00:56:47,020 Any other questions? 852 00:56:54,080 --> 00:56:54,580 No? 853 00:56:54,580 --> 00:56:56,822 The last thing I want to go through 854 00:56:56,822 --> 00:56:58,760 is Newton-Raphson method. 855 00:56:58,760 --> 00:57:04,410 And it's another sort of confusing-- as I said, 856 00:57:04,410 --> 00:57:08,520 Newton-Raphson method is a method 857 00:57:08,520 --> 00:57:14,150 that simply solves regular non-linear equations. 858 00:57:14,150 --> 00:57:19,120 And a non-linear equation can appear in 16.90. 859 00:57:19,120 --> 00:57:27,110 It can appear anywhere else in your future career.
860 00:57:27,110 --> 00:57:31,470 So what you're learning about solving non-linear equations 861 00:57:31,470 --> 00:57:35,880 really goes very far, even if you 862 00:57:35,880 --> 00:57:39,360 don't deal with numerical methods later on. 863 00:57:39,360 --> 00:57:44,600 So as I said, it's a method of solving non-linear equations 864 00:57:44,600 --> 00:57:48,620 if the set of non-linear equations is relatively small. 865 00:57:48,620 --> 00:57:50,830 Say you have two equations or three equations, 866 00:57:50,830 --> 00:57:54,220 you can go to Matlab and use fsolve to get 867 00:57:54,220 --> 00:57:57,910 it solved by brute force. 868 00:57:57,910 --> 00:58:01,480 But if you have a large set of differential equations 869 00:58:01,480 --> 00:58:04,240 you have to solve simultaneously, 870 00:58:04,240 --> 00:58:09,587 like what we're going to have if you use an implicit time 871 00:58:09,587 --> 00:58:15,790 integration method, apply it to a discretized PDE, 872 00:58:15,790 --> 00:58:20,762 then you're going to get at least 100 ODEs 873 00:58:20,762 --> 00:58:23,951 you have to-- which, if you have to solve 874 00:58:23,951 --> 00:58:25,534 using an implicit scheme, you're going 875 00:58:25,534 --> 00:58:29,900 to get hundreds, maybe thousands, maybe millions, 876 00:58:29,900 --> 00:58:35,710 or maybe trillions of algebraic equations you need to solve. 877 00:58:38,660 --> 00:58:43,010 OK, so imagine you have to solve f of u equal to 0 878 00:58:43,010 --> 00:58:46,806 where u, instead of writing down a simplified version 879 00:58:46,806 --> 00:58:50,400 of what you would get in an implicit scheme, 880 00:58:50,400 --> 00:58:55,140 your implicit scheme you would get u minus u k, 881 00:58:55,140 --> 00:59:00,110 which is the stuff you already know, over delta t 882 00:59:00,110 --> 00:59:05,110 is equal to some right hand side of u, and uk, and maybe 883 00:59:05,110 --> 00:59:07,990 something else, right?
884 00:59:07,990 --> 00:59:10,750 And instead of writing the same f here, 885 00:59:10,750 --> 00:59:14,980 I'm just going to write a big F of u equal to 0. 886 00:59:14,980 --> 00:59:21,480 So that big F in this case is really defined as u minus u k 887 00:59:21,480 --> 00:59:27,350 over delta t minus f of u, u k, et cetera. 888 00:59:27,350 --> 00:59:32,190 So this is what happens if we solve, let's say, 889 00:59:32,190 --> 00:59:35,050 a discretized PDE [INAUDIBLE]. 890 00:59:35,050 --> 00:59:38,510 U is the next time-step solution we want to get. 891 00:59:38,510 --> 00:59:43,380 And u k is the previous time step solution you already know. 892 00:59:43,380 --> 00:59:46,590 Now, if F is non-linear, you have 893 00:59:46,590 --> 00:59:50,202 to find the root of this big F, which is essentially 894 00:59:50,202 --> 00:59:51,660 the left hand side minus the right hand 895 00:59:51,660 --> 00:59:54,875 side of this implicit update [INAUDIBLE]. 896 00:59:58,270 --> 00:59:59,240 Right? 897 00:59:59,240 --> 01:00:05,760 You need to find a u, which is a vector of the solution, that 898 01:00:05,760 --> 01:00:12,042 makes this F 0 for all the components of F. 899 01:00:12,042 --> 01:00:14,850 And F has the same dimension as u. 900 01:00:14,850 --> 01:00:19,769 If u is an N-dimensional vector, F is going to be an N-dimensional vector. 901 01:00:24,550 --> 01:00:27,590 And you need to find these N numbers that 902 01:00:27,590 --> 01:00:33,200 make all the components of F equal to 0 simultaneously. 903 01:00:33,200 --> 01:00:34,902 That's not an easy task. 904 01:00:38,520 --> 01:00:40,450 And I think fsolve is going to have 905 01:00:40,450 --> 01:00:47,360 a hard time dealing with this if u is a high dimensional vector. 906 01:00:52,440 --> 01:00:56,790 Now, what does Newton-Raphson do? 907 01:00:56,790 --> 01:01:00,020 Newton-Raphson starts with an initial guess. 
908 01:01:00,020 --> 01:01:05,776 So u, for example, I'm going to use parentheses to denote 909 01:01:05,776 --> 01:01:08,106 the iteration [INAUDIBLE]. 910 01:01:08,106 --> 01:01:10,220 Parentheses 0 is the initial guess. 911 01:01:10,220 --> 01:01:12,300 I'm going to set it to u k. 912 01:01:12,300 --> 01:01:15,640 I'm going to set it to the solution at the previous time 913 01:01:15,640 --> 01:01:17,200 step. 914 01:01:17,200 --> 01:01:20,390 Then, I'm going to approximate this non-linear function using 915 01:01:20,390 --> 01:01:22,802 a linear function. 916 01:01:22,802 --> 01:01:28,320 Can I approximate F of u by something linear? 917 01:01:31,330 --> 01:01:32,600 Let's do it one by one. 918 01:01:32,600 --> 01:01:35,132 Let's approximate the first component 919 01:01:35,132 --> 01:01:40,215 of F. Think of F having components. 920 01:01:40,215 --> 01:01:42,340 We're just going to approximate the first component 921 01:01:42,340 --> 01:01:45,350 and have all the other components follow. 922 01:01:45,350 --> 01:01:49,390 OK, so if you just approximate the first component, 923 01:01:49,390 --> 01:01:52,660 I'm going to use Taylor series. 924 01:01:52,660 --> 01:01:54,530 And I'm going to use a Taylor series 925 01:01:54,530 --> 01:02:00,420 of a function of the [INAUDIBLE] variables, all right? 926 01:02:00,420 --> 01:02:03,200 So the Taylor series of a function of a [INAUDIBLE] 927 01:02:03,200 --> 01:02:06,570 variable is pretty complicated. 928 01:02:06,570 --> 01:02:08,180 But the lucky thing is that I don't 929 01:02:08,180 --> 01:02:09,960 need to keep all the terms. 930 01:02:09,960 --> 01:02:14,267 I'm going to throw away anything that involves more 931 01:02:14,267 --> 01:02:15,350 than the first derivative. 
932 01:02:18,080 --> 01:02:21,400 In other words, I'm only going to keep the zeroth order 933 01:02:21,400 --> 01:02:24,820 term, which involves no derivatives, 934 01:02:24,820 --> 01:02:27,980 and the first order term, which involves only the first order 935 01:02:27,980 --> 01:02:30,100 derivatives. 936 01:02:30,100 --> 01:02:33,420 OK, now what is the zeroth order term? 937 01:02:33,420 --> 01:02:42,130 The zeroth order term is F1 at my initial guess, right? 938 01:02:42,130 --> 01:02:43,580 What is the first order term? 939 01:02:43,580 --> 01:02:46,460 Actually, in this case, because it's a multivariate function, 940 01:02:46,460 --> 01:02:51,250 I have more than one first order term. 941 01:02:51,250 --> 01:02:58,230 I have as many first order terms as there are u's, 942 01:02:58,230 --> 01:03:01,100 so I'm going to be summing, i goes from 1 943 01:03:01,100 --> 01:03:04,610 to the dimension of the system-- let me call it big 944 01:03:04,610 --> 01:03:16,190 N-- partial F1, partial u of the ith dimension, times ui 945 01:03:16,190 --> 01:03:17,400 minus u0i. 946 01:03:19,730 --> 01:03:20,230 Right? 947 01:03:22,912 --> 01:03:23,810 Does that make sense? 948 01:03:23,810 --> 01:03:30,580 That's the Taylor series expansion of F1. 949 01:03:30,580 --> 01:03:36,341 We are only keeping the zeroth and first order terms as opposed 950 01:03:36,341 --> 01:03:39,760 to all the terms. 951 01:03:39,760 --> 01:03:42,470 If I started to write all the higher order derivatives, 952 01:03:42,470 --> 01:03:43,955 it would get too complicated. 953 01:03:43,955 --> 01:03:48,110 So I'm not going to write them; I'm going to truncate them. 954 01:03:48,110 --> 01:03:52,800 And the approximation [INAUDIBLE] is a linear function. 955 01:03:52,800 --> 01:03:57,300 It's a linear function even though this F1 of u 956 01:03:57,300 --> 01:03:59,920 is probably not a linear function of u. 957 01:03:59,920 --> 01:04:01,550 But now, I'm truncating. 
958 01:04:01,550 --> 01:04:04,681 I'm removing everything that happens over here, 959 01:04:04,681 --> 01:04:07,500 so if it's quadratic, or cubic, or anything. 960 01:04:07,500 --> 01:04:10,990 I'm only keeping the constant part that is independent 961 01:04:10,990 --> 01:04:13,274 of u, and this part, which is linear in u. 962 01:04:16,580 --> 01:04:21,840 Now, I get a linear approximation of F. 963 01:04:21,840 --> 01:04:27,706 We all know how to find the root of a linear function, right? 964 01:04:27,706 --> 01:04:29,330 Even though it's a million dimensional? 965 01:04:33,210 --> 01:04:37,370 Right, that's the reason we have linear algebra, right? 966 01:04:37,370 --> 01:04:38,840 That's why we have matrices. 967 01:04:38,840 --> 01:04:40,907 That's why Matlab is called Matlab. 968 01:04:43,830 --> 01:04:48,704 That is because we can write functions like this, 969 01:04:48,704 --> 01:04:53,170 we can write linear functions in matrix form. 970 01:04:53,170 --> 01:04:59,050 And to solve linear equations like this, even ginormous ones, 971 01:04:59,050 --> 01:05:03,720 we just use linear algebra, right? 972 01:05:03,720 --> 01:05:07,230 OK, so we are also approximating all the components 973 01:05:07,230 --> 01:05:12,090 of F, all the way to the N-th component, using 974 01:05:12,090 --> 01:05:20,930 FN u0, plus summation, i goes from 1 to N, partial FN 975 01:05:20,930 --> 01:05:23,670 partial ui, ui minus ui0. 976 01:05:26,180 --> 01:05:29,460 So there are big N equations. 977 01:05:29,460 --> 01:05:31,388 We have big N of these variables. 978 01:05:34,762 --> 01:05:37,654 How to solve them? 979 01:05:37,654 --> 01:05:41,740 How to solve these N coupled linear equations? 980 01:05:46,960 --> 01:05:47,708 Yes? 981 01:05:47,708 --> 01:05:48,583 AUDIENCE: [INAUDIBLE] 982 01:05:51,810 --> 01:05:56,610 QIQI WANG: Yeah, just write it into a matrix form, right? 983 01:05:56,610 --> 01:05:58,520 First, write it into a matrix form. 
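[In standard notation, the N linearized equations being described here can be collected into one matrix equation-- setting every first-order Taylor approximation to zero at once:]

```latex
% Newton linearization written in matrix form: the Jacobian J times the
% update equals minus the residual at the current guess u^{(0)}.
\[
\underbrace{\begin{pmatrix}
\dfrac{\partial F_1}{\partial u_1} & \cdots & \dfrac{\partial F_1}{\partial u_N} \\
\vdots & \ddots & \vdots \\
\dfrac{\partial F_N}{\partial u_1} & \cdots & \dfrac{\partial F_N}{\partial u_N}
\end{pmatrix}}_{J\ \text{(Jacobian)}}
\begin{pmatrix} u_1 - u_1^{(0)} \\ \vdots \\ u_N - u_N^{(0)} \end{pmatrix}
= -
\begin{pmatrix} F_1\!\left(u^{(0)}\right) \\ \vdots \\ F_N\!\left(u^{(0)}\right) \end{pmatrix},
\qquad
u^{(1)} = u^{(0)} - J^{-1} F\!\left(u^{(0)}\right).
\]
```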
984 01:05:58,520 --> 01:06:01,140 So I want to try to say, OK, if I 985 01:06:01,140 --> 01:06:03,760 want to set all these things to 0, 986 01:06:03,760 --> 01:06:11,830 I'm trying to set like 0, 0, 0, 0, is equal to F1 at u0, 987 01:06:11,830 --> 01:06:13,415 et cetera, to FN of u0. 988 01:06:16,440 --> 01:06:18,610 This is this term. 989 01:06:18,610 --> 01:06:20,960 And how about this term? 990 01:06:20,960 --> 01:06:26,530 This term is just written as a matrix vector multiplication. 991 01:06:26,530 --> 01:06:31,930 The matrix is all the derivatives, which 992 01:06:31,930 --> 01:06:35,295 is called the Jacobian when I put them into matrix form. 993 01:06:38,440 --> 01:06:42,420 The first row is my partial F1, partial blah. 994 01:06:42,420 --> 01:06:47,840 The first column is my partial blah, partial u1. 995 01:06:47,840 --> 01:06:53,320 The last column is partial F blah, partial uN. 996 01:06:53,320 --> 01:06:57,870 So each row corresponds to one [INAUDIBLE] the residual. 997 01:06:57,870 --> 01:07:01,340 Each column corresponds to one thing in the independent 998 01:07:01,340 --> 01:07:03,990 variables, in the u's. 999 01:07:03,990 --> 01:07:09,670 And multiplying that by order u1 minus u1 0, 1000 01:07:09,670 --> 01:07:12,331 et cetera, to uN minus uN 0. 1001 01:07:15,940 --> 01:07:18,478 Do you all see that these linear equations 1002 01:07:18,478 --> 01:07:20,923 are the same as this matrix equation? 1003 01:07:31,670 --> 01:07:32,170 All right? 1004 01:07:32,170 --> 01:07:36,490 So what I did is the Taylor series expansion 1005 01:07:36,490 --> 01:07:38,714 for all these non-linear equations 1006 01:07:38,714 --> 01:07:43,880 and write them all into a matrix form. 1007 01:07:43,880 --> 01:07:47,100 And what I'm trying to do is I'm trying to solve for these u1 1008 01:07:47,100 --> 01:07:49,550 to uN. 1009 01:07:49,550 --> 01:07:50,360 How do I do that? 1010 01:07:53,510 --> 01:07:56,301 I just invert the whole Jacobian. 
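[As a concrete sketch of this "linearize, then solve the linear system" loop-- for a hypothetical 2-by-2 system I made up, not one from the lecture-- with the Jacobian written out by hand and the tiny 2x2 system inverted directly:]

```python
# Newton-Raphson for a hypothetical 2-by-2 nonlinear system:
#   F1(u1, u2) = u1^2 + u2^2 - 4 = 0
#   F2(u1, u2) = u1 - u2 = 0        (a root is u1 = u2 = sqrt(2))

def F(u1, u2):
    return (u1**2 + u2**2 - 4.0, u1 - u2)

def jacobian(u1, u2):
    # Rows are equations, columns are unknowns, as on the board.
    return ((2.0 * u1, 2.0 * u2),
            (1.0,      -1.0))

def newton(u1, u2, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f1, f2 = F(u1, u2)
        if max(abs(f1), abs(f2)) < tol:
            break
        (a, b), (c, d) = jacobian(u1, u2)
        det = a * d - b * c
        # du = J^{-1} F(u); for 2x2 we can invert J by hand.
        du1 = ( d * f1 - b * f2) / det
        du2 = (-c * f1 + a * f2) / det
        # Newton update: u_new = u_old - J^{-1} F(u_old)
        u1, u2 = u1 - du1, u2 - du2
    return u1, u2

u1, u2 = newton(1.0, 2.0)   # (1.0, 2.0) is the initial guess u^(0)
```

[For a million unknowns the idea is identical; only the hand-written 2x2 inverse gets replaced by a proper linear solver.]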
1011 01:07:56,301 --> 01:08:03,090 If I call this J, then my u1, et cetera, uN 1012 01:08:03,090 --> 01:08:13,340 is going to be equal to u1 0, uN 0 minus J inverse times 1013 01:08:13,340 --> 01:08:18,460 my F1, FN evaluated at the initial guess. 1014 01:08:25,400 --> 01:08:25,900 Right? 1015 01:08:25,900 --> 01:08:30,166 So this is the solution [INAUDIBLE]. 1016 01:08:30,166 --> 01:08:32,082 This is one step in Newton-Raphson, right? 1017 01:08:34,926 --> 01:08:42,147 In Newton-Raphson, we compute the residual [INAUDIBLE] 1018 01:08:42,147 --> 01:08:45,354 at the initial guess. 1019 01:08:45,354 --> 01:08:49,282 And we compute the Jacobian. 1020 01:08:49,282 --> 01:08:54,649 [INAUDIBLE] the Jacobian inverse times the residual. 1021 01:08:54,649 --> 01:09:00,000 Then, you subtract the result from the [INAUDIBLE] 1022 01:09:00,000 --> 01:09:03,590 and get your next step solution. 1023 01:09:06,165 --> 01:09:09,112 And what is the next step solution? 1024 01:09:09,112 --> 01:09:11,520 The next step solution is the zero 1025 01:09:11,520 --> 01:09:15,380 of the linear approximation. 1026 01:09:15,380 --> 01:09:17,304 All right? 1027 01:09:17,304 --> 01:09:22,437 Which hopefully gives you a better guess than u0. 1028 01:09:22,437 --> 01:09:27,575 And what you do next is you call this set of u's u 1029 01:09:27,575 --> 01:09:30,607 parentheses 1. 1030 01:09:30,607 --> 01:09:36,505 We call these u parentheses 1. 1031 01:09:36,505 --> 01:09:41,430 And then, you linearize again around these. 1032 01:09:41,430 --> 01:09:46,260 So the one dimensional analog is this. 1033 01:09:46,260 --> 01:09:49,130 You try to find the zero of F of u. 1034 01:09:49,130 --> 01:09:50,906 OK, you want to find the zero of F of u. 1035 01:09:50,906 --> 01:09:54,430 You start with an initial guess. 1036 01:09:54,430 --> 01:09:57,510 This is my u0. 1037 01:09:57,510 --> 01:10:03,170 I construct my first order Taylor series approximation, 1038 01:10:03,170 --> 01:10:04,340 which is what in this case? 
1039 01:10:07,480 --> 01:10:10,850 The zeroth order term is going to be this, 1040 01:10:10,850 --> 01:10:12,280 this is the zeroth order term. 1041 01:10:12,280 --> 01:10:13,460 It's a constant, right? 1042 01:10:13,460 --> 01:10:16,460 It's just F of u0. 1043 01:10:16,460 --> 01:10:18,930 The first order term is a derivative 1044 01:10:18,930 --> 01:10:23,103 at this point, which is going to give you this. 1045 01:10:23,103 --> 01:10:26,340 So this first order term is going 1046 01:10:26,340 --> 01:10:27,590 to give me the tangent line. 1047 01:10:30,640 --> 01:10:35,845 My u1 is the solution of the Taylor approximation 1048 01:10:35,845 --> 01:10:36,635 equal to 0. 1049 01:10:36,635 --> 01:10:40,410 So this is my u1. 1050 01:10:40,410 --> 01:10:42,950 And then, I'm going to linearize around this again. 1051 01:10:42,950 --> 01:10:46,050 I'm going to construct another first order approximation, 1052 01:10:46,050 --> 01:10:48,400 and go to this point, et cetera, et cetera, 1053 01:10:48,400 --> 01:10:53,720 until I converge to the zero of the blue line. 1054 01:10:53,720 --> 01:11:00,010 Now, it's hard to draw this picture when both u and F are 1055 01:11:00,010 --> 01:11:02,346 million dimensional. 1056 01:11:02,346 --> 01:11:05,664 But the concept is the same. 1057 01:11:05,664 --> 01:11:08,040 I construct a linear approximation. 1058 01:11:08,040 --> 01:11:11,640 And the linear approximation can be arranged into matrix form 1059 01:11:11,640 --> 01:11:13,230 using the Jacobian. 1060 01:11:13,230 --> 01:11:17,830 The Jacobian is just a matrix containing all the derivatives 1061 01:11:17,830 --> 01:11:20,140 of F with respect to the u's. 1062 01:11:20,140 --> 01:11:21,337 And I keep iterating. 1063 01:11:24,200 --> 01:11:26,840 I keep iterating. 
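[The one dimensional tangent-line picture just described can be sketched in a few lines of code-- using a hypothetical f(u) = u^3 - 2, not a function from the lecture:]

```python
# 1D Newton-Raphson as drawn on the board: replace f by its tangent
# line at the current guess, jump to the tangent line's zero, repeat.
# Hypothetical example: f(u) = u^3 - 2, whose root is 2**(1/3).

def f(u):
    return u**3 - 2.0

def dfdu(u):
    return 3.0 * u**2

u = 1.0                          # initial guess u^(0)
for _ in range(50):
    # u^(k+1) is the zero of the tangent line at u^(k)
    u_next = u - f(u) / dfdu(u)
    if abs(u_next - u) < 1e-14:  # stop once the iterates settle down
        u = u_next
        break
    u = u_next
```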
1064 01:11:26,840 --> 01:11:29,300 I solve for the zero of the linear approximation, 1065 01:11:29,300 --> 01:11:31,880 linearize again at that new point, 1066 01:11:31,880 --> 01:11:36,310 and then solve to get the zero of that linear approximation. 1067 01:11:36,310 --> 01:11:39,112 I linearize again at the new point, 1068 01:11:39,112 --> 01:11:41,704 I get the zero of that linear approximation, 1069 01:11:41,704 --> 01:11:44,164 and then linearize again at the new point. 1070 01:11:48,168 --> 01:11:49,084 Any questions on this? 1071 01:11:55,480 --> 01:11:59,300 It's a general technique for solving large systems 1072 01:11:59,300 --> 01:12:00,532 of non-linear equations, 1073 01:12:06,860 --> 01:12:08,400 using the fact that we are already 1074 01:12:08,400 --> 01:12:11,670 good at solving large systems of linear equations. 1075 01:12:16,060 --> 01:12:22,522 OK, now I'm all out of things. 1076 01:12:22,522 --> 01:12:23,980 Just time to answer your questions. 1077 01:12:28,800 --> 01:12:29,490 Yes? 1078 01:12:29,490 --> 01:12:32,980 AUDIENCE: Will we be expected to [INAUDIBLE] 1079 01:12:32,980 --> 01:12:35,230 QIQI WANG: No, you don't need to code anything. 1080 01:12:41,630 --> 01:12:44,200 You don't need to have your hands on the keyboard. 1081 01:12:56,840 --> 01:12:57,340 Yes? 1082 01:12:57,340 --> 01:12:58,215 AUDIENCE: [INAUDIBLE] 1083 01:13:00,920 --> 01:13:03,540 QIQI WANG: What does big F represent in this case? 1084 01:13:03,540 --> 01:13:11,520 Big F represents the function of u whose zero we want to get. 1085 01:13:17,750 --> 01:13:22,710 If I want to solve [INAUDIBLE], if I use an implicit scheme 1086 01:13:22,710 --> 01:13:27,140 and I want to evolve from this u k to the next u, 1087 01:13:27,140 --> 01:13:30,833 to the next time step, then my F of u 1088 01:13:30,833 --> 01:13:34,864 would be the left hand side of the scheme minus the right hand 1089 01:13:34,864 --> 01:13:39,240 side of the scheme, right? 
1090 01:13:39,240 --> 01:13:41,642 Imagine if I have Backward Euler. 1091 01:13:41,642 --> 01:13:44,606 Then my big F would be u minus u k divided 1092 01:13:44,606 --> 01:13:49,160 by delta t minus f of u. 1093 01:13:49,160 --> 01:13:51,620 That is the Backward Euler. 1094 01:13:51,620 --> 01:13:58,190 If I use trapezoidal, my big F would be u minus u k over delta t 1095 01:13:58,190 --> 01:14:01,994 minus half of f of u minus half of f of u k. 1096 01:14:05,151 --> 01:14:05,650 All right? 1097 01:14:05,650 --> 01:14:09,000 And the difference is different implicit schemes are 1098 01:14:09,000 --> 01:14:12,000 going to give a different big F. 1099 01:14:16,500 --> 01:14:20,183 With Backward Euler, [INAUDIBLE]; 1100 01:14:20,183 --> 01:14:22,678 with trapezoidal, half of f of u plus half of f of u k. 1101 01:14:22,678 --> 01:14:24,175 And that is in the derivative term. 1102 01:14:28,167 --> 01:14:32,270 Whatever the scheme is, big F is the thing 1103 01:14:32,270 --> 01:14:35,472 whose zero [INAUDIBLE] that we solve for [INAUDIBLE]. 1104 01:14:35,472 --> 01:14:36,430 AUDIENCE: [INAUDIBLE] 1105 01:14:36,430 --> 01:14:37,910 QIQI WANG: It's a [INAUDIBLE], yes. 1106 01:14:52,670 --> 01:14:54,638 Questions? 1107 01:14:54,638 --> 01:14:58,082 After this, we are going to [INAUDIBLE] the midterm. 1108 01:15:04,478 --> 01:15:07,096 AUDIENCE: [INAUDIBLE] 1109 01:15:07,096 --> 01:15:07,970 QIQI WANG: All right. 1110 01:15:07,970 --> 01:15:11,740 Anything else I can talk to you about? 1111 01:15:11,740 --> 01:15:13,120 All these materials? 1112 01:15:13,120 --> 01:15:17,580 Oh, and just a reminder, we are really 1113 01:15:17,580 --> 01:15:20,110 looking at the measurable outcomes. 1114 01:15:20,110 --> 01:15:29,536 And don't forget to go [INAUDIBLE] something 1115 01:15:29,536 --> 01:15:30,244 that [INAUDIBLE]. 1116 01:15:36,406 --> 01:15:39,744 Analytical solutions [INAUDIBLE] what is the solution of u 1117 01:15:39,744 --> 01:15:41,187 [INAUDIBLE]? 1118 01:15:41,187 --> 01:15:44,080 And things like that. 
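[The Backward Euler residual described above-- big F(u) = (u - uk)/dt - f(u)-- can be sketched for one time step of a scalar, hypothetical ODE du/dt = -u^2 (my own example), with Newton-Raphson driving the residual to zero:]

```python
# One Backward Euler step solved with Newton-Raphson, for the scalar
# (hypothetical) ODE du/dt = f(u) = -u**2.  The residual is the big F
# from the lecture: F(u) = (u - uk)/dt - f(u).

def f(u):
    return -u**2

def backward_euler_step(uk, dt, tol=1e-12):
    u = uk                              # initial guess: previous step
    for _ in range(50):
        big_F = (u - uk) / dt - f(u)    # residual of the implicit update
        dF = 1.0 / dt + 2.0 * u         # d(big F)/du, the 1x1 "Jacobian"
        if abs(big_F) < tol:
            break
        u -= big_F / dF                 # Newton update
    return u

u1 = backward_euler_step(1.0, 0.1)
```

[For trapezoidal, only the residual line changes, to (u - uk)/dt - 0.5*f(u) - 0.5*f(uk); the Newton machinery is identical.]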
1119 01:15:44,080 --> 01:15:46,250 And when the solution is stable, when it is unstable-- 1120 01:15:46,250 --> 01:15:49,350 you need to know that, and things like that. 1121 01:15:49,350 --> 01:15:50,340 So this is important. 1122 01:15:50,340 --> 01:15:51,317 Yeah? 1123 01:15:51,317 --> 01:15:52,192 AUDIENCE: [INAUDIBLE] 1124 01:15:56,370 --> 01:15:59,080 QIQI WANG: So du dt equal to lambda u? 1125 01:15:59,080 --> 01:16:01,018 You need to solve it analytically. 1126 01:16:01,018 --> 01:16:01,550 Huh? 1127 01:16:01,550 --> 01:16:02,425 AUDIENCE: [INAUDIBLE] 1128 01:16:07,684 --> 01:16:08,350 QIQI WANG: Yeah. 1129 01:16:08,350 --> 01:16:11,090 For du dt equal to lambda u, you 1130 01:16:11,090 --> 01:16:13,637 need to be able to solve it analytically, yeah. 1131 01:16:13,637 --> 01:16:14,512 AUDIENCE: [INAUDIBLE] 1132 01:16:19,452 --> 01:16:20,440 QIQI WANG: Huh? 1133 01:16:20,440 --> 01:16:22,920 AUDIENCE: [INAUDIBLE] 1134 01:16:22,920 --> 01:16:23,830 QIQI WANG: Yes, yes. 1135 01:16:23,830 --> 01:16:26,740 Du dt equal to A u, yes, right. 1136 01:16:26,740 --> 01:16:30,060 Any matrix A, right? 1137 01:16:30,060 --> 01:16:31,706 You need to be able to solve it. 1138 01:16:31,706 --> 01:16:32,618 AUDIENCE: [INAUDIBLE] 1139 01:16:32,618 --> 01:16:34,898 [LAUGHTER] 1140 01:16:34,898 --> 01:16:36,580 QIQI WANG: Yeah. 1141 01:16:36,580 --> 01:16:40,560 My A can be a trillion by a trillion. 1142 01:16:40,560 --> 01:16:44,500 And you need to know how to solve it. 1143 01:16:44,500 --> 01:16:47,480 You are not expected to-- 1144 01:16:47,480 --> 01:16:49,740 AUDIENCE: [INAUDIBLE] 1145 01:16:49,740 --> 01:16:50,865 QIQI WANG: Hm? 1146 01:16:50,865 --> 01:16:51,740 AUDIENCE: [INAUDIBLE] 1147 01:16:55,330 --> 01:16:58,210 QIQI WANG: Yeah, of course, you won't 1148 01:16:58,210 --> 01:17:02,836 be able to do the eigenvalue problem of A in your head 1149 01:17:02,836 --> 01:17:06,150 if A is a 1,000 by 1,000 matrix. 
1150 01:17:06,150 --> 01:17:09,950 But you need to know how to get the analytical solution 1151 01:17:09,950 --> 01:17:12,520 of the ODE. 1152 01:17:12,520 --> 01:17:14,480 AUDIENCE: [INAUDIBLE] 1153 01:17:14,480 --> 01:17:15,460 QIQI WANG: Hm? 1154 01:17:15,460 --> 01:17:16,930 AUDIENCE: [INAUDIBLE] 1155 01:17:16,930 --> 01:17:18,015 QIQI WANG: Yeah. 1156 01:17:18,015 --> 01:17:18,890 AUDIENCE: [INAUDIBLE] 1157 01:17:32,120 --> 01:17:33,130 QIQI WANG: All right. 1158 01:17:33,130 --> 01:17:35,250 Anything else? 1159 01:17:35,250 --> 01:17:37,054 And homework here. 1160 01:17:40,763 --> 01:17:42,350 If you didn't [INAUDIBLE] homework 1161 01:17:42,350 --> 01:17:45,800 for [INAUDIBLE], the previous homeworks, 1162 01:17:45,800 --> 01:17:49,450 you can always come to my office and figure it out.
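[For the du/dt = lambda u case mentioned above, the analytic solution is u(t) = u0 * exp(lambda * t). A quick numerical sanity check of that closed form-- with hypothetical values of lambda and u0 I picked for illustration-- compares a centered finite difference of u against lambda * u:]

```python
import math

# Analytic solution of du/dt = lambda * u with u(0) = u0 is
# u(t) = u0 * exp(lambda * t).  Check that its numerical derivative
# really does equal lambda * u(t).

lam, u0 = -0.7, 2.0              # hypothetical values for illustration

def u(t):
    return u0 * math.exp(lam * t)

t, h = 1.3, 1e-6
dudt = (u(t + h) - u(t - h)) / (2 * h)   # centered finite difference
# dudt should match lam * u(t) to within the finite-difference error
```

[For the system case du/dt = A u, the same closed form generalizes through the eigenvalue decomposition of A, which is the analytic solution the lecture is asking you to know.]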