The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

ALAN OPPENHEIM: Last time we introduced the notion of digital networks and the general topic of digital network theory. There are, of course, lots of directions that that discussion can proceed in, but during this set of lectures we won't be talking in any more detail about the general issues of digital network theory. In this lecture and the next lecture, I would like to consider, in particular, some of the more common structures that are used for implementing digital filters-- first for the case of infinite impulse response filters, which we'll discuss in this lecture, and then, in the next lecture, some of the more common structures for finite impulse response digital filters.
To begin the discussion, let's consider the most general form, once again, for the transfer function of a digital filter, where we are assuming that the system function H of z is a rational function in z to the minus 1. We recall from our previous discussions that a rational transfer function of this form corresponds to a linear constant-coefficient difference equation, and in particular, the difference equation corresponding to this system function is the one that I've indicated here. The coefficients in the numerator, corresponding to the polynomial that represents the zeros of the transfer function, are identical to the coefficients applied to the delayed values of the input, and the coefficients in the denominator, corresponding to the polynomial representing the poles, are the same coefficients which, in the difference equation, are the weights applied to delayed values of the output.
So this is the general form of a transfer function, assuming that the system is representable by a linear constant-coefficient difference equation, and the difference equation corresponding to this transfer function is as I've indicated here. Incidentally, let me stress, as I indicated in the last lecture, that our assumption throughout these discussions will be that we are considering a causal system. In other words, the region of convergence that we would associate with this transfer function is the one that would correspond to a causal system-- that is, the region of convergence is outside the outermost pole.

There are a variety of ways in which we can rewrite a transfer function of this form. One possible way of writing this transfer function is as I've indicated at the bottom-- that is, an expression corresponding to the product of two functions, the first representing the polynomial for the zeros, and the second representing the polynomial for the poles.
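The transfer function and difference equation being described can be put in code. The following is an illustrative sketch of my own, not part of the lecture, assuming the sign convention used here: y of n is the weighted sum of delayed outputs plus the weighted sum of delayed inputs, so H of z has numerator sum of b sub k z to the minus k over 1 minus the sum of a sub k z to the minus k.

```python
# Sketch (not from the lecture): evaluating the linear constant-coefficient
# difference equation
#     y[n] = sum_{k=1..N} a[k]*y[n-k] + sum_{k=0..M} b[k]*x[n-k]
# which corresponds to H(z) = (sum_k b[k] z^-k) / (1 - sum_k a[k] z^-k).

def difference_equation(b, a, x):
    """b[0..M]: weights on delayed inputs; a[1..N]: weights on delayed
    outputs (a[0] is a placeholder for the unit coefficient on y[n])."""
    M, N = len(b) - 1, len(a) - 1
    y = [0.0] * len(x)
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(M + 1) if n - k >= 0)
        acc += sum(a[k] * y[n - k] for k in range(1, N + 1) if n - k >= 0)
        y[n] = acc
    return y

# Causality is built in: y[n] depends only on present and past values.
# Example: a single pole at z = 0.5 and no zeros gives the impulse
# response 0.5**n.
h = difference_equation([1.0], [1.0, 0.5], [1.0, 0.0, 0.0, 0.0, 0.0])
```

The region-of-convergence discussion shows up here only implicitly: iterating forward in n is exactly the causal solution of the recursion.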
What this suggests is that we can imagine implementing this system by implementing a system which realizes this transfer function, and cascading that-- the cascade leading to the product of the system functions-- with a system that implements this second transfer function. What that corresponds to, in the implementation of the difference equation, is first implementing, in terms of multipliers, delays, and adders as we talked about last time, the linear combination of the weighted delayed input values, and then using that as the input to a system which implements the weighted delayed output values. So implementing this system function, with this function first in cascade with this one, corresponds to implementing this difference equation, where we can imagine denoting this first sum as x1 of n-- implementing the function x1 of n, and then using that as the input to a system which is represented by the difference equation y of n equals x1 of n plus this sum.
Carrying that out, the digital network that results is as I've indicated here, where we have, first of all, the linear combination of weighted delayed input values. So here is x of n, x of n minus 1, x of n minus 2, down through x of n minus capital N, where I'm assuming, in drawing this, that capital M is equal to capital N. This first block then implements this summation to form x1 of n, and then the second system, with which this is cascaded, has as an input x1 of n and, added to it, weighted delayed values of the output. So here is y of n, y of n minus 1, down through y of n minus capital N, and we see the coefficients a1, a2, through a sub capital N.

Clearly, in this implementation, this block corresponds to implementing the zeros of the system, and this block corresponds to implementing the poles of the system. So as we've factored the transfer function into the zeros followed by the poles, we have this system implementing the zeros followed by this system implementing the poles.
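The zeros-first, poles-first cascade just described-- x1 of n formed from the weighted delayed inputs, then fed to the feedback section-- can be sketched as follows (my own code, not from the lecture; note the two separate delay lines, one holding past inputs and one holding past outputs).

```python
# Sketch (not from the lecture): the "zeros first, then poles" network,
# with two separate delay lines -- one storing past inputs, one storing
# past outputs.  Assumes M = N, as in the lecture's drawing.

def direct_form_1(b, a, x):
    """b[0..N] weight the delayed inputs; a[1..N] weight the delayed outputs."""
    x_delay = [0.0] * (len(b) - 1)   # x[n-1], ..., x[n-N]
    y_delay = [0.0] * (len(a) - 1)   # y[n-1], ..., y[n-N]
    y = []
    for xn in x:
        # First block (the zeros): x1[n] = sum_k b[k] * x[n-k]
        x1 = b[0] * xn + sum(bk * xd for bk, xd in zip(b[1:], x_delay))
        # Second block (the poles): y[n] = x1[n] + sum_k a[k] * y[n-k]
        yn = x1 + sum(ak * yd for ak, yd in zip(a[1:], y_delay))
        y.append(yn)
        # Shift both delay lines -- 2N registers in all.
        x_delay = [xn] + x_delay[:-1]
        y_delay = [yn] + y_delay[:-1]
    return y
```

For example, with one zero (b = [1, 1]) and one pole at z = 0.5, an impulse input produces 1.0, 1.5, 0.75, 0.375, and so on.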
Well, this, of course, is one implementation of the difference equation, but in fact there are a variety of ways in which we can manipulate the transfer function-- or, equivalently, manipulate the difference equation-- which will lead to other structures for implementing the system besides the structure that I've indicated here.

Well, let's consider one simple way of manipulating this system to generate another structure. We recognize this as two systems in cascade. They both implement linear shift-invariant systems, and we know that two linear shift-invariant systems in cascade can be cascaded in either order without affecting the overall transfer function of the system. So we can imagine simply breaking the system at this point and interchanging the order in which these two systems are cascaded, and obviously what that leads to is a second implementation of the same difference equation.
In that implementation-- whereas this one has the zeros first, followed by the poles-- interchanging the order of those two will result in the poles implemented first, followed by the zeros, and the system that results is what I've indicated on this next view graph. So this system is identical to the other one. It's clearly identical in terms of the overall transfer function, and all that I've done is interchange the order in which the zeros and the poles are implemented.

Well, that manipulation-- that is, breaking the system and interchanging the order in which the systems are cascaded-- can be interpreted in terms of either a manipulation on the transfer function or a manipulation on the difference equation. To indicate what that corresponds to, let's return to the transfer function as we had it on the first view graph, where now, rather than cascading this system first and this system second, we've simply interchanged the order in which those two systems are cascaded. That's interpreting this operation in terms of the transfer function.
To interpret it in terms of the difference equation is slightly more involved, but basically, and very quickly, what it involves is first implementing the difference equation in which we consider that the input is just x of n, rather than a weighted sum of delayed x of n's, to produce the output y1 of n. And then, since the actual input is a linear combination of weighted delayed inputs, the corresponding output is the same linear combination of weighted delayed outputs. That essentially is derived from properties of linear shift-invariant systems that we talked about in some of the early lectures.

Well, returning to the network that resulted by interchanging the order of these two systems, one of the questions we can ask, of course, is whether there is any advantage to implementing this system rather than the first system that we derived. And in answering that, one thing that we notice about this system is that there are two parallel branches here with corresponding delays. Now, what does that mean?
Well, if we consider this output to be y1 of n-- it's the y1 of n that we had defined in the previous slide-- the value appearing here is y1 of n minus 1, and the value appearing here is also y1 of n minus 1. The value appearing here is y1 of n minus 2, and appearing here is y1 of n minus 2. And in fact, following this chain down, what we observe is that the output of this delay is exactly the same as the output of this delay.

Well, if that's the case, then, if we think about an implementation, there obviously is no reason to separately store this delayed output and this delayed output, since they're the same. In other words, we can collapse these delays together, and the network that results when we do that is the network that I indicate here, where all that I've done, in going from the previous network to this one, is simply to collapse the delays together, taking advantage of the fact that their outputs were identical.
Now, in drawing a network, of course, it doesn't particularly matter whether we conserve z to the minus 1's-- or, equivalently, whether we collapse a network when we can take advantage of the fact that the outputs of two delays are the same. But clearly, in terms of the implementation of a digital filter, either as a program or as special-purpose hardware, there is an advantage to reducing the number of delay registers that are required, because, you see, each z to the minus 1 that appears in the structure requires, in the implementation, a register to store the value-- in other words, to hold it for the next iteration.

So in this structure, as it's implemented here, we have N delay registers, where again I'm assuming that capital M is equal to capital N. There are N delay registers, whereas in the first network that we generated-- the network corresponding to the zeros first and then the poles-- there were 2N delay registers.
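The collapsed structure-- poles first, with a single shared delay line of N registers instead of 2N-- can be sketched as follows (my own code, not from the lecture, again with the convention that the a's are added).

```python
# Sketch (not from the lecture): the collapsed structure -- poles first,
# then zeros, with one shared delay line holding y1[n-1..n-N], so only
# N registers are needed instead of 2N.  Assumes M = N.

def direct_form_2(b, a, x):
    """b[0..N] weight the zeros; a[1..N] weight the poles (added, per the
    lecture's sign convention)."""
    w = [0.0] * (len(a) - 1)   # shared delay line: y1[n-1], ..., y1[n-N]
    y = []
    for xn in x:
        # Poles first: y1[n] = x[n] + sum_k a[k] * y1[n-k]
        y1 = xn + sum(ak * wk for ak, wk in zip(a[1:], w))
        # Zeros second, tapping the very same delay line:
        yn = b[0] * y1 + sum(bk * wk for bk, wk in zip(b[1:], w))
        y.append(yn)
        w = [y1] + w[:-1]
    return y
```

For the same coefficients, this produces exactly the same output sequence as the 2N-register, zeros-first structure; only the internal storage differs.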
In general, a digital filter structure that has the minimum number of delay registers-- and you can show that the minimum number required is the greater of M or N, or, since we're considering M equal to N, the minimum number is N-- the structure that has only that minimum number and no more is generally referred to as a canonic structure.

So the structure that I've indicated here is a canonic structure. It has the minimum number of delays, but in fact it's not the only canonic structure. There is a large variety of canonic structures, and in fact there's a canonic structure that is similar to the first structure that we derived, in the sense that it also is implemented with the zeros first, followed by the poles. Let me remind you again that this system, as it's implemented, is basically a cascade of the system poles-- those are the a's, or, the polynomial that this implements corresponds to the poles-- followed by an implementation of the zeros, and it's the b's that control the zeros.
Well, to generate another canonic structure, we can take advantage of a theorem that, in fact, is a very powerful theorem in dealing with filter structures-- the theorem which is referred to as the transposition theorem.

What the transposition theorem says is that, if we have a network that implements a transfer function, and if we simply reverse the direction of all of the branches in the network and interchange the input and the output, then the transfer function that results is exactly the same. So it says: take the network, reverse the direction of the branches, put the input where the output was, take the output where the input was, and what you find is that the transfer function of the system is exactly the same. Well, let me illustrate this theorem.
We won't, incidentally, prove the theorem, although in the notes and in the text at the end of the chapter there is, in fact, a proof of the transposition theorem. But let me illustrate the transposition theorem-- first with a simple example that makes it appear to be a trivial theorem, and then with another example that suggests that perhaps the theorem is less obvious than it would at first appear.

Well, to illustrate the transposition theorem, let's begin with a simple network-- just a simple first-order network: two coefficient branches and a delay branch. The transposition theorem says that we want, first of all, to reverse the direction of all of the branches. So this branch gets turned around, again with a gain of unity. This branch gets turned around with a gain of c. This branch gets turned around with a gain of a. The delay branch is turned around. This branch, which has a gain of unity, is turned around. Put the input where the output was, and take the output from where the input was.
So the transpose of this network is the network that I've indicated here, and now, of course, we can redraw this network by putting the input on the left-hand side and taking the output on the right-hand side-- that is, taking the same network and just flipping it over, flipping it over because we tend to have a convention that the input comes in from the left and the output goes out at the right. If we do that-- just taking this network and simply flipping it over-- we have x of n coming in through a unity gain. This delay has now ended up on the left-hand side, and you can verify in a straightforward way that these branches are now correct if we just take this network and flip it over.

And is it true that the transfer function of this network is identical to the transfer function of this network? Well, you should be able to see by inspection that in fact it is true. In fact, if you compare this network to this one, what's the only difference? The only difference is that this delay, instead of being here, ended up on the other side of the coefficient multiplier.
And obviously, since these two in cascade implement a times z to the minus 1, it doesn't matter whether I do the multiplication by a first and then delay, or the reverse. So applying the transposition theorem to this simple example, we see that obviously, for this example, the transposition theorem works.

Well, let's try it on a slightly more complicated example-- not to verify that it works, but just again to emphasize how the transposition is implemented. Here I have an example in which I have a canonic first-order system. This implements one zero and one pole. Here is the implementation of the pole and the implementation of the zero. This, in fact, is the first-order counterpart of the canonic structure that I showed several view graphs ago. And so it's one pole, and that's implemented through this loop, and one zero, and that's implemented through this loop. And these, of course, are unity gain, since I've put no gain on them.
And now, to apply the transposition theorem to this network, again we reverse the direction of all of the arrows, and you can see that I've done that in all of these branches. The delay is likewise reversed. The a is reversed, and the b is reversed. I put the input in where the output was, and I take the output out where the input was, and then the transposition theorem says that this first-order system implements exactly the same transfer function as this first-order system does.

Well, again, we can redraw this by taking the input at the left-hand side and the output at the right-hand side, which corresponds to just taking this and flipping it over-- in fact, I could do that by taking the view graph and just flipping it over-- and the result, then, is the system that I've indicated here: x of n in at the left-hand side, y of n out at the right-hand side. And now, in comparing these two, there are some changes that took place. In particular, we notice that the direction of the delay branch is reversed.
Furthermore, whereas this system implemented the pole first, followed by the zero, this system implements the zero first, followed by the pole. Is this still a canonic structure? Well, of course it's a canonic structure, because it only has one delay-- and obviously, in fact, transposing a network couldn't possibly affect the number of delays in the network, so that, if we begin with a canonic structure and apply the transposition theorem to it, we must end up with a canonic structure also.

Well, it shouldn't be obvious-- or, at least, it isn't obvious to me by inspection-- that this system and this system have the same transfer function, but in fact you can verify that in a very simple and straightforward way by simply calculating what the transfer functions of these two systems are.

Well, returning, then, to the general canonic structure that we had, we can generate a second canonic structure by applying the transposition theorem to this structure.
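One way to carry out that verification is numerically: realize both first-order networks and compare their outputs on the same input. A sketch of my own (the pole coefficient a and zero coefficient b are my labels for the two branch gains, not taken from the view graph):

```python
# Sketch (not from the lecture): the first-order canonic section --
# pole first, then zero, one delay register -- and its transpose --
# zero first, then pole, delay direction reversed.  Both realize
# H(z) = (1 + b z^-1) / (1 - a z^-1).

def canonic(a, b, x):
    w, y = 0.0, []
    for xn in x:
        y1 = xn + a * w          # the pole loop
        y.append(y1 + b * w)     # the zero, tapping the same register
        w = y1
    return y

def transposed(a, b, x):
    s, y = 0.0, []
    for xn in x:
        yn = xn + s              # delayed state feeds the output sum
        y.append(yn)
        s = b * xn + a * yn      # state collects the zero and pole taps
    return y

impulse = [1.0, 0.0, 0.0, 0.0, 0.0]
# The two networks produce identical impulse responses.
assert canonic(0.5, 2.0, impulse) == transposed(0.5, 2.0, impulse)
```

Both sections use exactly one delay register, consistent with the point that transposition cannot change the number of delays.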
That is, we reverse the directions of all of the arrows, put the input in here, and take the output out there, and then, to keep our convention of the input in at the left and the output out at the right, we flip that over to generate a second canonic structure, which is the transpose of this structure. And if we do that, the two changes to focus on are that the direction of the delay branches is reversed, and, furthermore, that the system will implement the zeros first, followed by the poles.

In fact, the network that results is what I've indicated here. This is, then, the transposed version of the structure that I just showed on the last view graph. The zeros are implemented here, the poles are implemented here, and the direction of the delay branches is reversed, but again, this is a canonic form structure. So some structures are canonic form, and some aren't. The first one, in fact, that we developed wasn't canonic form, in the sense that it had more delays than were absolutely necessary. The last two structures that we've shown are canonic form structures, in that they have the minimum number of delays.
All of these structures are referred to as direct form structures-- direct form because they involve as coefficients the same coefficients that are present in the difference equation describing the overall system. Recall again that these were the coefficients in the difference equation applied to the delayed values of the input, and these were the coefficients in the difference equation applied to delayed values of the output. This structure, and the other structures involving the coefficients in that form, are often referred to as direct form structures.

Well, these structures are fine for implementing the difference equation, although there are other structures-- actually, there is essentially an infinite variety of structures-- and some of them, in some situations, are better to use than the direct form structures. Two of the more common, which I'd like to introduce now, are the cascade structure and the parallel structure.
429 00:25:47,600 --> 00:25:53,360 The cascade structure is developed basically 430 00:25:53,360 --> 00:25:57,800 by factoring the transfer function of the system 431 00:25:57,800 --> 00:26:01,040 into a product of second order sections or second order 432 00:26:01,040 --> 00:26:02,150 factors. 433 00:26:02,150 --> 00:26:05,750 In particular, we have, again, the general form 434 00:26:05,750 --> 00:26:07,520 of the transfer function-- 435 00:26:07,520 --> 00:26:11,680 H of z has a numerator polynomial for the 0's, 436 00:26:11,680 --> 00:26:15,830 a denominator polynomial for the poles. 437 00:26:15,830 --> 00:26:18,460 We can factor the numerator polynomial 438 00:26:18,460 --> 00:26:22,330 into a product of first order polynomials and the denominator 439 00:26:22,330 --> 00:26:26,560 polynomial into a product of first order polynomials. 440 00:26:26,560 --> 00:26:30,160 In general, of course, these factors will be complex, 441 00:26:30,160 --> 00:26:33,250 and these factors will be complex. 442 00:26:33,250 --> 00:26:38,020 We can combine together the complex conjugate 0 pairs 443 00:26:38,020 --> 00:26:40,510 and the complex conjugate pole pairs 444 00:26:40,510 --> 00:26:44,200 so that in fact as a general cascade form 445 00:26:44,200 --> 00:26:48,700 it's often convenient to think of a factorization of each 446 00:26:48,700 --> 00:26:52,570 of these polynomials into second order polynomials rather 447 00:26:52,570 --> 00:26:55,420 than first order polynomials. 
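The grouping of complex conjugate roots into real second order factors can be sketched as follows; this is illustrative code (the function name and tolerance are my own), assuming the polynomial coefficients are given in powers of z to the minus 1 with a leading coefficient of unity.

```python
import numpy as np

def conjugate_pair_factors(coeffs):
    """Split a real polynomial 1 + c1 z^-1 + c2 z^-2 + ... into
    real factors: 1 - r z^-1 for each real root r, and
    1 - 2 Re(r) z^-1 + |r|^2 z^-2 for each complex conjugate pair."""
    roots = np.roots(coeffs)
    used = np.zeros(len(roots), dtype=bool)
    factors = []
    for i, r in enumerate(roots):
        if used[i]:
            continue
        used[i] = True
        if abs(r.imag) < 1e-12:             # real root -> first order factor
            factors.append(np.array([1.0, -r.real]))
        else:                               # consume the conjugate partner
            j = next(k for k in range(len(roots))
                     if not used[k] and np.isclose(roots[k], r.conjugate()))
            used[j] = True
            factors.append(np.array([1.0, -2.0 * r.real, abs(r) ** 2]))
    return factors
```

Convolving the factors back together recovers the original coefficients, which is a convenient sanity check on the factorization.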
448 00:26:55,420 --> 00:26:59,260 Carrying that out, we end up with a representation 449 00:26:59,260 --> 00:27:05,110 of the transfer function in a form as I've indicated here-- 450 00:27:05,110 --> 00:27:08,950 a second order numerator polynomial and a second order 451 00:27:08,950 --> 00:27:11,860 denominator polynomial, and, of course, 452 00:27:11,860 --> 00:27:14,590 it's the product of these that we 453 00:27:14,590 --> 00:27:17,140 use to implement this overall transfer 454 00:27:17,140 --> 00:27:20,380 function with some constant multiplier 455 00:27:20,380 --> 00:27:24,610 out in front, which is required essentially because we've 456 00:27:24,610 --> 00:27:27,580 normalized these polynomials to have a leading 457 00:27:27,580 --> 00:27:30,610 coefficient of unity. 458 00:27:30,610 --> 00:27:36,070 Well, first of all, why might we want to do this? 459 00:27:36,070 --> 00:27:40,270 There actually are a variety of reasons for perhaps wanting 460 00:27:40,270 --> 00:27:43,390 to consider an implementation of the transfer function 461 00:27:43,390 --> 00:27:47,080 in terms of a cascade of lower order 462 00:27:47,080 --> 00:27:51,040 systems than the general n-th order system. 463 00:27:51,040 --> 00:27:53,170 One of the more common reasons, which 464 00:27:53,170 --> 00:27:55,940 I'll have more to say about actually 465 00:27:55,940 --> 00:27:58,270 at the end of the next lecture, but I'd 466 00:27:58,270 --> 00:28:00,510 like to at least allude to it now, 467 00:28:00,510 --> 00:28:05,110 is the fact that any time we implement a system 468 00:28:05,110 --> 00:28:09,310 on a digital computer or with special purpose hardware we're 469 00:28:09,310 --> 00:28:14,020 faced with the problem that these coefficients can't 470 00:28:14,020 --> 00:28:16,420 be represented exactly. 
471 00:28:16,420 --> 00:28:19,870 If we have, let's say, an 18 bit fixed point register, 472 00:28:19,870 --> 00:28:23,260 we're restricted to truncating or rounding 473 00:28:23,260 --> 00:28:26,110 these coefficients to 18 bits. 474 00:28:26,110 --> 00:28:29,500 If we implement a filter in special purpose hardware, 475 00:28:29,500 --> 00:28:31,780 we might want the coefficient registers 476 00:28:31,780 --> 00:28:35,380 to be as low as 4, or 5, or 6, or 10 bits. 477 00:28:35,380 --> 00:28:39,010 Obviously, the more bits, the more expensive 478 00:28:39,010 --> 00:28:42,010 the hardware implementation is. 479 00:28:42,010 --> 00:28:46,030 And the statement which I'll make and justify 480 00:28:46,030 --> 00:28:49,360 in a little more detail at the end of the next lecture 481 00:28:49,360 --> 00:28:54,460 is that, if I implement the poles of the system 482 00:28:54,460 --> 00:29:00,670 through a high order polynomial, errors in the coefficients 483 00:29:00,670 --> 00:29:04,240 lead to large errors in the pole locations as 484 00:29:04,240 --> 00:29:08,530 compared with an implementation of the poles in terms 485 00:29:08,530 --> 00:29:10,420 of low order polynomials. 486 00:29:10,420 --> 00:29:12,430 That is, basically the sensitivity 487 00:29:12,430 --> 00:29:16,840 of pole locations to errors in the coefficients 488 00:29:16,840 --> 00:29:20,710 is higher the higher the order of the polynomial. 489 00:29:20,710 --> 00:29:25,120 Consequently, if that indeed is an issue for the filter 490 00:29:25,120 --> 00:29:29,470 implementation, then it is better 491 00:29:29,470 --> 00:29:34,000 to implement a system as a cascade of lower order 492 00:29:34,000 --> 00:29:36,580 systems or lower order polynomials 493 00:29:36,580 --> 00:29:41,080 than as one large high order polynomial. 
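The sensitivity argument can be illustrated numerically. The pole locations, the 8 bit word length, and the rounding scheme below are assumptions chosen for the illustration, not values from the lecture: we quantize the coefficients of one sixth order denominator with clustered poles, and separately quantize each of its quadratic factors, then compare how far the poles move in each case.

```python
import numpy as np

def quantize(c, bits):
    """Round coefficients to a fixed point grid with `bits` fractional bits."""
    scale = 2.0 ** bits
    return np.round(np.asarray(c) * scale) / scale

# Three conjugate pole pairs clustered near z = 1 (a narrow band example)
pairs = [np.array([1.0, -2.0 * r * np.cos(w), r * r])
         for r, w in [(0.98, 0.20), (0.97, 0.25), (0.96, 0.30)]]

direct = np.array([1.0])
for p in pairs:
    direct = np.convolve(direct, p)          # the sixth order polynomial

true_poles = np.sort_complex(np.roots(direct))
bits = 8

# quantize the high order polynomial as a whole
poles_direct = np.sort_complex(np.roots(quantize(direct, bits)))

# quantize each quadratic factor separately (the cascade form)
poles_cascade = np.sort_complex(np.concatenate(
    [np.roots(quantize(p, bits)) for p in pairs]))

err_direct = np.max(np.abs(poles_direct - true_poles))
err_cascade = np.max(np.abs(poles_cascade - true_poles))
print(err_direct, err_cascade)   # the cascade error is typically far smaller
```

For pole clusters of this kind, quantizing the quadratic factors individually perturbs each pole only slightly, whereas the same word length applied to the sixth order coefficients scatters the poles substantially, which is the point made above.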
494 00:29:41,080 --> 00:29:44,410 So factoring this transfer function 495 00:29:44,410 --> 00:29:50,230 into a cascade of second order sections, what this leads 496 00:29:50,230 --> 00:29:54,970 to is an implementation of the system as a cascade 497 00:29:54,970 --> 00:29:58,960 of second order systems, and we, again, 498 00:29:58,960 --> 00:30:01,780 have the choice of implementing each second order 499 00:30:01,780 --> 00:30:05,830 section in a variety of ways corresponding 500 00:30:05,830 --> 00:30:10,660 to the various direct forms that we've talked about previously. 501 00:30:10,660 --> 00:30:18,280 One implementation, which is a canonic direct form, 502 00:30:18,280 --> 00:30:21,760 is the implementation that I indicate here, 503 00:30:21,760 --> 00:30:28,030 where this is alpha 1,1, the coefficient alpha 1,1, 504 00:30:28,030 --> 00:30:30,040 the coefficient beta 1,1. 505 00:30:30,040 --> 00:30:32,860 It's a little hard to see where each of these coefficients 506 00:30:32,860 --> 00:30:38,350 goes, but this is a canonic form implementation of a second 507 00:30:38,350 --> 00:30:41,620 order section that has two poles and two 0's. 508 00:30:41,620 --> 00:30:46,180 So as I've implemented it, I've implemented it with the poles 509 00:30:46,180 --> 00:30:50,380 first, followed by the 0's, and then this 510 00:30:50,380 --> 00:30:55,570 is one second order piece that's in cascade with the next pole 0 511 00:30:55,570 --> 00:30:59,740 pair, with the next pole 0 pair, et cetera. 
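A cascade of canonic second order sections, each with the poles implemented first and then the 0's, might be sketched as follows. This is illustrative Python, not the lecture's notation, and the same minus-sign convention on the feedback coefficients is assumed as before.

```python
import numpy as np

def biquad_df2(x, b, a):
    """One canonic second order section:
    (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2),
    sharing a two element delay line between poles and zeros."""
    w1 = w2 = 0.0
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        w0 = xn - a[1] * w1 - a[2] * w2           # poles first (feedback)
        y[n] = b[0] * w0 + b[1] * w1 + b[2] * w2  # zeros second (feedforward)
        w1, w2 = w0, w1                           # shift the delay line
    return y

def cascade(x, sections):
    """Pass the signal through the second order sections one after another."""
    for b, a in sections:
        x = biquad_df2(x, b, a)
    return x
```

As a check, two identical one-pole sections in cascade should give the impulse response (n + 1) 0.5**n, the convolution of 0.5**n with itself.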
512 00:30:59,740 --> 00:31:03,970 So this is a cascade of second order sections, 513 00:31:03,970 --> 00:31:08,980 and, of course, I can generate other cascade forms, 514 00:31:08,980 --> 00:31:12,490 one possibility being simply to apply the transposition 515 00:31:12,490 --> 00:31:17,830 theorem to this cascade, and basically what would result 516 00:31:17,830 --> 00:31:20,710 is that each of these delay branches 517 00:31:20,710 --> 00:31:23,980 would be reversed in direction, and the 0's 518 00:31:23,980 --> 00:31:27,280 would be implemented first, followed by the poles. 519 00:31:27,280 --> 00:31:29,930 Generally what's meant though by the cascade structure, 520 00:31:29,930 --> 00:31:33,460 and you can see again that there are a variety of cascade 521 00:31:33,460 --> 00:31:35,260 structures depending on how you choose 522 00:31:35,260 --> 00:31:37,840 to implement the pole 0 pairs-- 523 00:31:37,840 --> 00:31:40,120 what is generally meant by the cascade structure 524 00:31:40,120 --> 00:31:44,410 or a cascade structure is an implementation of the transfer 525 00:31:44,410 --> 00:31:48,910 function as a cascade of second order sections 526 00:31:48,910 --> 00:31:51,130 where the second order sections can be implemented 527 00:31:51,130 --> 00:31:54,470 in a variety of ways. 528 00:31:54,470 --> 00:32:02,130 Another structure which is like the cascade structure in that 529 00:32:02,130 --> 00:32:06,250 it implements the poles in terms of low order sections, 530 00:32:06,250 --> 00:32:08,880 but is different in the way effectively 531 00:32:08,880 --> 00:32:13,470 that it realizes the zeros is the so-called parallel form 532 00:32:13,470 --> 00:32:15,840 structure. 
533 00:32:15,840 --> 00:32:22,404 And the parallel form structure 534 00:32:22,404 --> 00:32:28,170 can be derived by expanding the transfer 535 00:32:28,170 --> 00:32:31,800 function of the system in terms of a partial fraction 536 00:32:31,800 --> 00:32:33,960 expansion. 537 00:32:33,960 --> 00:32:39,960 That is, we can expand this transfer function in terms of-- 538 00:32:39,960 --> 00:32:42,750 and let's assume first of all that capital 539 00:32:42,750 --> 00:32:45,570 M is less than capital N. In that case, 540 00:32:45,570 --> 00:32:48,930 we can expand this simply 541 00:32:48,930 --> 00:32:55,920 as a sum of residues together with first order poles, 542 00:32:55,920 --> 00:32:59,350 or in general since the poles are complex, 543 00:32:59,350 --> 00:33:04,620 we can imagine factoring this in terms of first order terms 544 00:33:04,620 --> 00:33:09,740 corresponding to the real poles, and then second order 545 00:33:09,740 --> 00:33:13,220 terms where we combine together first order terms which 546 00:33:13,220 --> 00:33:16,550 are complex conjugates so that we have second order 547 00:33:16,550 --> 00:33:19,450 terms of this form. 548 00:33:19,450 --> 00:33:22,290 If capital M is less than capital N, 549 00:33:22,290 --> 00:33:24,600 those are the only two kinds of terms 550 00:33:24,600 --> 00:33:28,290 that would result in a partial fraction expansion. 551 00:33:28,290 --> 00:33:32,510 If capital M is greater than or equal to capital N, 552 00:33:32,510 --> 00:33:36,660 then we'll have additional terms corresponding simply 553 00:33:36,660 --> 00:33:40,932 to weighted powers of z to the minus 1. 
554 00:33:40,932 --> 00:33:45,900 Now, as in the cascade form, generally the parallel form 555 00:33:45,900 --> 00:33:48,450 structure is considered to be one 556 00:33:48,450 --> 00:33:53,310 where even if we have real poles we 557 00:33:53,310 --> 00:33:55,830 combine two of the real poles together 558 00:33:55,830 --> 00:33:59,250 to implement the system in terms of second order sections. 559 00:33:59,250 --> 00:34:03,690 If we do that, then the parallel form expansion for the transfer 560 00:34:03,690 --> 00:34:07,240 function is what I've indicated here, 561 00:34:07,240 --> 00:34:10,080 where combining two terms of this form together, 562 00:34:10,080 --> 00:34:15,670 or, equivalently, looking at expansions of this form, 563 00:34:15,670 --> 00:34:20,940 we have a numerator polynomial implementing a single 0 564 00:34:20,940 --> 00:34:25,290 and a denominator polynomial implementing two poles. 565 00:34:25,290 --> 00:34:28,500 And then depending on whether capital M is greater than 566 00:34:28,500 --> 00:34:30,659 or equal to capital N or not, there might be additional terms 567 00:34:30,659 --> 00:34:36,030 involving simply 568 00:34:36,030 --> 00:34:39,300 weighted powers of z to the minus 1. 569 00:34:39,300 --> 00:34:43,620 Let me stress that there are some differences between this 570 00:34:43,620 --> 00:34:45,960 and the cascade form obviously. 
571 00:34:45,960 --> 00:34:48,900 One of the differences is that the sections used 572 00:34:48,900 --> 00:34:53,159 for implementing the filter consist of one 0 573 00:34:53,159 --> 00:34:58,770 plus two poles, and then the output 574 00:34:58,770 --> 00:35:02,490 is formed not as a cascade of sections of that type, 575 00:35:02,490 --> 00:35:07,885 but as a sum of the outputs of sections of that type since H 576 00:35:07,885 --> 00:35:12,420 of z here is expressed as a sum of second order sections, 577 00:35:12,420 --> 00:35:14,130 whereas in the cascade form it is 578 00:35:14,130 --> 00:35:18,270 expressed as a product of second order sections. 579 00:35:18,270 --> 00:35:23,990 Well, the general filter structure that results I've 580 00:35:23,990 --> 00:35:26,540 indicated here for the case in which we 581 00:35:26,540 --> 00:35:29,640 have three second order sections, 582 00:35:29,640 --> 00:35:33,890 and again, I'm assuming that capital M is equal to capital N 583 00:35:33,890 --> 00:35:37,250 so that we have one branch, which is just simply 584 00:35:37,250 --> 00:35:38,900 a coefficient branch. 585 00:35:38,900 --> 00:35:41,990 If capital M were one more than capital N, 586 00:35:41,990 --> 00:35:45,570 we would have in addition to that one delay branch. 587 00:35:45,570 --> 00:35:49,580 And then we have second order sections 588 00:35:49,580 --> 00:35:52,160 as I've indicated here, but the second order 589 00:35:52,160 --> 00:35:57,590 sections implement only a single 0 and a pair of poles-- 590 00:35:57,590 --> 00:36:01,980 a single 0 and a pair of poles. 591 00:36:01,980 --> 00:36:06,060 Now, why might you want to use a parallel form 592 00:36:06,060 --> 00:36:11,730 implementation instead of a cascade form implementation? 593 00:36:11,730 --> 00:36:14,410 Well, there actually are several reasons. 
594 00:36:14,410 --> 00:36:17,520 One of the most common, in fact, though, 595 00:36:17,520 --> 00:36:22,950 is that sometimes in applying filter design techniques 596 00:36:22,950 --> 00:36:29,040 the filter design parameters are automatically 597 00:36:29,040 --> 00:36:31,260 generated in a parallel form. 598 00:36:31,260 --> 00:36:34,110 That is, rather than being generated 599 00:36:34,110 --> 00:36:38,010 either as a ratio of polynomials or as poles and 0's, they 600 00:36:38,010 --> 00:36:41,640 might be generated in terms of residues and poles. 601 00:36:41,640 --> 00:36:44,400 In that case, of course, it's very straightforward 602 00:36:44,400 --> 00:36:48,090 to go to a parallel form implementation rather than 603 00:36:48,090 --> 00:36:50,790 a cascade form implementation. 604 00:36:50,790 --> 00:36:53,550 Basically the difference between the two forms 605 00:36:53,550 --> 00:36:57,390 is that the cascade form implements 606 00:36:57,390 --> 00:37:02,640 in terms of low order sections poles and 0's of the system, 607 00:37:02,640 --> 00:37:08,340 whereas the parallel form implementation is effectively 608 00:37:08,340 --> 00:37:12,192 an implementation of the system in terms of the poles. 609 00:37:12,192 --> 00:37:14,400 The poles are controlled in the same way as they were 610 00:37:14,400 --> 00:37:18,960 in the cascade structure, but in terms of poles and residues 611 00:37:18,960 --> 00:37:21,690 rather than poles and 0's. 612 00:37:21,690 --> 00:37:23,730 Again, with the parallel form structure, 613 00:37:23,730 --> 00:37:26,190 we can generate other parallel form structures 614 00:37:26,190 --> 00:37:29,850 by considering other ways of implementing the second order 615 00:37:29,850 --> 00:37:30,510 sections. 
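The parallel form can be sketched in the same spirit: each section realizes a single 0 and a pair of poles, the section outputs are summed rather than cascaded, and an optional FIR branch covers the extra terms that appear when capital M is greater than or equal to capital N. The code below is an illustrative sketch (names and conventions are my own, not the lecture's).

```python
import numpy as np

def parallel_form(x, sections, fir_taps=()):
    """Sum of second order sections, each realizing
    (b0 + b1 z^-1) / (1 + a1 z^-1 + a2 z^-2),
    plus optional direct branches fir_taps[k] * x[n - k]."""
    y = np.zeros(len(x))
    for (b0, b1), (_, a1, a2) in sections:
        w1 = w2 = 0.0
        for n, xn in enumerate(x):
            w0 = xn - a1 * w1 - a2 * w2   # a pair of poles
            y[n] += b0 * w0 + b1 * w1     # a single zero
            w1, w2 = w0, w1
    for k, c in enumerate(fir_taps):      # the M >= N branch(es)
        if k < len(x):
            y[k:] += c * np.asarray(x[:len(x) - k], float)
    return y
```

As a check, combining the residue terms 1/(1 - 0.5 z^-1) + 1/(1 - 0.25 z^-1) into one such section should reproduce the impulse response 0.5**n + 0.25**n.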
616 00:37:30,510 --> 00:37:34,200 One possibility is to apply the transposition theorem, 617 00:37:34,200 --> 00:37:36,930 and in fact, there are other implementations of second order 618 00:37:36,930 --> 00:37:41,430 sections, and so we can talk about a variety 619 00:37:41,430 --> 00:37:44,610 of parallel form structures, but generally when 620 00:37:44,610 --> 00:37:48,840 we refer to a parallel form or the parallel form structures, 621 00:37:48,840 --> 00:37:51,360 we mean a parallel realization 622 00:37:51,360 --> 00:37:54,390 of poles in terms of second order sections 623 00:37:54,390 --> 00:37:57,060 and a weighting applied that corresponds to the residues. 624 00:37:59,590 --> 00:38:04,420 Now, the topic of digital filter structures 625 00:38:04,420 --> 00:38:08,320 is, in fact, a very complicated topic. 626 00:38:08,320 --> 00:38:11,710 There are a lot of other filter structures 627 00:38:11,710 --> 00:38:15,670 which can be used for implementing recursive filters, 628 00:38:15,670 --> 00:38:19,540 or non-recursive filters, or a finite impulse response, 629 00:38:19,540 --> 00:38:21,910 or infinite impulse response. 630 00:38:21,910 --> 00:38:25,180 There are structures referred to as continued fraction 631 00:38:25,180 --> 00:38:25,870 structures. 632 00:38:25,870 --> 00:38:29,320 There are structures referred to as interpolation structures. 633 00:38:29,320 --> 00:38:31,930 There are structures referred to as lattice structures, 634 00:38:31,930 --> 00:38:34,360 and ladder structures, et cetera. 
635 00:38:34,360 --> 00:38:38,060 There are a large variety of structures, and in fact, 636 00:38:38,060 --> 00:38:43,120 one of the very important issues currently is the design 637 00:38:43,120 --> 00:38:45,580 and development of structures and the comparison 638 00:38:45,580 --> 00:38:50,260 of structures in particular focusing on the trade-offs-- 639 00:38:50,260 --> 00:38:52,300 that is, the advantages and disadvantages-- 640 00:38:52,300 --> 00:38:54,720 between various structures. 641 00:38:54,720 --> 00:38:56,860 The structures that I've introduced here-- 642 00:38:56,860 --> 00:39:00,160 that is, the direct form, the canonic form, cascade, 643 00:39:00,160 --> 00:39:01,540 and parallel-- 644 00:39:01,540 --> 00:39:06,100 are the most common structures which are used. 645 00:39:06,100 --> 00:39:08,500 They, in fact, tend to hold up very well 646 00:39:08,500 --> 00:39:13,270 and seem to be for a variety of reasons 647 00:39:13,270 --> 00:39:16,600 some of the more advantageous structures. 648 00:39:16,600 --> 00:39:22,240 While I've presented this discussion from a general point 649 00:39:22,240 --> 00:39:27,100 of view and tended to focus on infinite impulse response 650 00:39:27,100 --> 00:39:30,790 transfer functions, obviously these structures 651 00:39:30,790 --> 00:39:35,620 can be applied to finite impulse response or infinite impulse 652 00:39:35,620 --> 00:39:37,600 response systems. 653 00:39:37,600 --> 00:39:40,180 In the next lecture, I'd like to continue 654 00:39:40,180 --> 00:39:45,550 the discussion of structures by focusing specifically 655 00:39:45,550 --> 00:39:48,370 on finite impulse response systems 656 00:39:48,370 --> 00:39:54,400 and directing our attention to some specific structures that 657 00:39:54,400 --> 00:39:57,040 apply only to finite impulse response systems 658 00:39:57,040 --> 00:40:00,460 and take advantage of some particular aspects 659 00:40:00,460 --> 00:40:02,640 of finite impulse response systems. 
660 00:40:02,640 --> 00:40:04,390 Thank you.