The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu

PROFESSOR: Let's review what random processes are a little bit now. And remember, in what we're doing here, we have pointed out that random processes in general have a mean. The mean is not important. The mean is not important for random variables or random vectors either. It's just something you add on when you're all done. So the best way to study random variables, random vectors, and random processes, particularly when you're dealing with things that are built on underlying Gaussian quantities, is to forget about the means, do everything you want to with the fluctuations, and then put the means in when you're all done. Which is why the notes do most of what they do in terms of zero-mean random variables, random vectors, and random processes, and simply put in the value of the mean at the end. OK, so some of this has the mean put in; some of it doesn't.

So to start out with, a random process is defined by its joint distribution at each finite set of epochs. OK, that's where we started with all of this. How do you define a random waveform? You really have to come to grips with that question before you can see how you might answer it. If you think it's obvious how to define a random process, then you really have to go back and think about it. Because a random process is a random waveform. It's an uncountably infinite number of random variables. So defining what it is, is not a trivial problem. And people have generally agreed that the test for whether you have defined it or not is whether you can find the joint distribution at each finite set of epochs. Fortunately, most processes of interest can be defined in a much, much simpler way, in terms of an orthonormal expansion.
When you define it in terms of an orthonormal expansion, you have the sum over k of a set of random variables, Z sub 1, Z sub 2, and so forth, Z sub k, times a set of orthonormal functions. So all the variation in t is stuck into the set of orthonormal functions. All the randomness is stuck into the sequence of random variables. So at this point, you have a sequence of random variables rather than a waveform.

Why is it so important to have something countable instead of something uncountable? Mathematicians understand, in a deep mathematical sense, why that's so important. For engineers the argument is a little bit different. I think for engineers the argument is: no matter how small an interval of a waveform you're looking at, even if you shrink it, no matter what you do with it, you can't approximate it in any way and get rid of that uncountably infinite number of random variables. As soon as you represent it in terms of an orthonormal expansion, though, at that point you have represented it as a countable sum. As soon as you represent it as a countable sum, you can approximate it by knocking off the tail of that sum. OK, and at that point you have a finite number of random variables instead of an infinite number of random variables. We all know how to deal with a finite set of random variables. You've been doing that since you were in 6.041, OK. So if you have a countable set of random variables, the way to deal with them is always the same. It's to hope that they're defined in such a way that when you look at enough of them, all the rest of them are unimportant. That, sort of, is the underlying meaning of what countable means. You can arrange them in a sequence, yes. But like when you count in school, after you name the first hundred numbers you get tired of it, and you realize that you understand the whole thing at that point. OK, in other words, you don't have to count up to infinity to understand what it means to have a countable set of integers.
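To make the tail-truncation idea concrete, here is a minimal numerical sketch (not from the lecture): it builds a process as a countable sum of Gaussian coefficients times orthonormal functions, and then knocks off the tail. The sinusoidal basis and the decaying coefficient variances are my own assumptions, chosen precisely so that "the rest of them are unimportant."

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0
t = np.linspace(0.0, T, 1000)

def phi(k, t):
    """Orthonormal sinusoids on [0, T] (one convenient choice of basis)."""
    return np.sqrt(2.0 / T) * np.sin(np.pi * (k + 1) * t / T)

K = 200
# Zero-mean Gaussian coefficients Z_k with variances decaying like 1/(k+1)^2,
# so that once you look at enough of them, the rest are unimportant.
Z = rng.standard_normal(K) / (np.arange(K) + 1)

full = sum(Z[k] * phi(k, t) for k in range(K))    # the (long) countable sum
short = sum(Z[k] * phi(k, t) for k in range(20))  # knock off the tail

rel_err = np.sum((full - short) ** 2) / np.sum(full ** 2)
print(f"relative energy in the discarded tail: {rel_err:.3f}")  # a few percent
```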
You all understand the integers. You understand them intuitively. And you understand that no matter how big an integer you choose, somebody else can always choose one bigger than you've chosen. But you also understand that that's not important. OK? So this is the way we usually define a stochastic process, a random process, to start off with.

The other thing that we have to do is this: given a random process like this, we very often want to go from it to other random processes which are defined in terms of it. So almost always we start out with a random process which is defined this way, and then we move from there to various other processes which we define in terms of this. So we never really have to deal with this uncountably infinite number of random variables. If we really had to deal with that, we would be in deep trouble. Because the only way we could deal with it would be to take five courses in measure theory. And after you take five courses in measure theory, it's not enough, because at that point you're living in measure theory. And most of the mathematicians that I know can't come back from measure theory to talk about real problems. So every time they write a paper, which looks like beautiful mathematics, it's very difficult to interpret whether it means anything about real problems. Because that's where things get hard.

OK, so the point is, if you define random processes in this way, then you always have some sort of grounding about what you're talking about. Because you have a countable number of random variables here. You know that if you're dealing with anything that makes any sense, only a finite number of them are going to be important. The thing you don't know is how many of them are important. OK, so that's why we take an infinite sum here: because you don't know ahead of time how many you'd need to deal with. OK, so we then started to talk about stationarity.
The process Z of t is stationary if Z of t sub 1 up to Z of t sub k, and Z of t sub 1 plus tau up to Z of t sub k plus tau, have the same joint distribution. OK, so you take any finite set of random variables in this process, you shift them all to some other place, and you ask whether they have the same distribution. How do you answer that question? God only knows. I mean, what you need to be able to answer that question is some easy way of finding those joint probabilities. And that's why you deal with examples. That's why we deal with jointly Gaussian random variables. Because once we're dealing with jointly Gaussian random variables, we can write down those joint distributions just in terms of covariance matrices. And means as well, of course, if you want to deal with things that are not zero mean. And after you learn how to deal with covariance matrices, at least you're dealing with a function of a finite number of variables. So it's not so bad.

OK, so the argument is, this is what you need to know if you're going to call it stationary. But we have easier ways of testing for that. And in fact the process is Wide Sense Stationary, we said, if the covariance function, namely the expected value of Z of t sub 1 times Z of t sub 2, is equal to some function of just t sub 1 minus t sub 2. In other words, the expected value of this value times this value is the same if you shift both over by some amount. OK, you see the difference between the two? The difference between these two things is that in the definition of stationarity, you need an arbitrarily large set of random variables here, you shift them over to some other point, and you need the same joint distribution over this arbitrarily large set of random variables. When you get to dealing with the covariance function, all you need for Wide Sense Stationarity is to deal with two random variables, and the shift of those two random variables. Which really comes down to a problem involving just the two epochs t sub 1 and t sub 2, and the fact that the answer is the same no matter where you shift t sub 1 to. It's only a function of the difference between t sub 1 and t sub 2.
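Written out (zero-mean case, with K tilde for the one-variable covariance function, as in the lecture), the two definitions being contrasted are:

```latex
% Stationarity: every finite collection of epochs, shifted by any tau,
% has the same joint distribution:
(Z(t_1),\dots,Z(t_k)) \;\overset{d}{=}\; (Z(t_1+\tau),\dots,Z(t_k+\tau))
\qquad \text{for all } k,\; t_1,\dots,t_k,\; \tau .

% Wide-sense stationarity: only the second moment must be shift-invariant:
\mathsf{E}\bigl[Z(t_1)\,Z(t_2)\bigr] \;=\; \tilde{K}_Z(t_1 - t_2)
\qquad \text{for all } t_1,\, t_2 .
```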
So it's a whole lot easier. And then we pointed out that since jointly Gaussian random variables are determined by their covariance matrix, a zero-mean Gaussian random process is completely determined by its covariance function. If you'll forgive me, I'm not going to keep coming back and saying "zero mean." I will just be thinking solely today in terms of zero-mean processes, and you should think solely in terms of zero-mean processes. You can sort out for yourselves whether you'd need a mean or not. That's sort of a trivial addition to all of this.

OK, so it's Wide Sense Stationary. Well, we already said that. Well, here I put in the mean. OK: Wide Sense Stationary implies stationary, for Gaussian processes. So in other words, this messy-looking question of asking whether a process is stationary or not, with all of these random variables here and all of these random variables here, becomes trivialized for the case of a Gaussian random process, where all you need to worry about is the covariance function. Because that's the thing that specifies the whole process. OK. That's part of what we did last time.

We set up an important example of this. And you don't quite know how important this is yet. But this is really important, because all of the Gaussian processes you want to talk about can be formed in terms of this process. So it's nice in that way. And the process is: Z of t is the sum of an orthonormal expansion, but the orthonormal functions now are just the time-shifted sinc functions. We know the time-shifted sinc functions are orthogonal. So we just have this set of random variables.
And we say, OK, what we're going to assume here is that these random variables, well, here I put in the mean; let's forget about the mean. The expected value of V sub k times V sub i, namely the expected value of the product of any two of these random variables, is equal to zero if k is unequal to i. OK, in other words, the random variables are in some sense orthogonal to each other. But let's save the word orthogonal for functions, and use the word correlated or uncorrelated for random variables. So these random variables are uncorrelated. So we're dealing with an expansion where the functions are orthogonal, and where the random variables are uncorrelated. And now, what we're really interested in is making these random variables Gaussian. And then we have a Gaussian random process with this whole sum here. We're interested in making these have the same variance in all cases. And therefore what we have is a process which is stationary. And its covariance function is just sigma squared times this sinc function here.

There's an error in the notes, in lecture 16, about this. It will be corrected on the web. This is in the notes: this quantity here is left out. So you can put that back in if you want to, or you can get the new notes off the web after a few hours or so. It's not on the web yet; in fact, I only noticed it this morning.
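As a concrete check (not from the lecture; the spacing T, the epochs, the shift, and the number of expansion terms are arbitrary choices of mine), here is a small simulation of the sinc process with IID zero-mean Gaussian coefficients. It estimates the covariance at a pair of epochs, and at the same pair shifted by tau, and compares both against sigma squared times the sinc of the difference:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1.0                      # spacing of the sinc expansion
ks = np.arange(-200, 201)    # enough terms that truncation error is tiny near t = 0
sigma = 1.0                  # common standard deviation of the IID V_k

def Z(t, V):
    """Sample function sum_k V_k sinc(t/T - k), evaluated at the epochs in t."""
    return np.sinc(np.subtract.outer(t / T, ks)) @ V

t1, t2, tau = 0.3, 1.0, 7.25             # two epochs and an arbitrary shift
p, p_shift, n_trials = 0.0, 0.0, 5000
for _ in range(n_trials):
    V = sigma * rng.standard_normal(ks.size)   # IID zero-mean Gaussian V_k
    p += Z(np.array([t1, t2]), V).prod()
    p_shift += Z(np.array([t1 + tau, t2 + tau]), V).prod()

print("E[Z(t1)Z(t2)]           ~", p / n_trials)
print("E[Z(t1+tau)Z(t2+tau)]   ~", p_shift / n_trials)
print("sigma^2 sinc((t1-t2)/T)  =", sigma**2 * np.sinc((t1 - t2) / T))
```

Both estimates should land near the same sinc value, which is exactly the wide-sense stationarity being claimed.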
OK: the sample functions of a Wide Sense Stationary non-zero process are not L sub 2. When I talk about a non-zero process, I'm not talking about the mean; I'm ruling out that peculiar random process which is zero everywhere. In other words, a random process which really doesn't deserve to be called a random process. But unfortunately, by the definitions, it is a random process, because with probability one, Z of t is equal to zero everywhere. And that just means you have a constant which is zero everywhere, so you're not interested in it.

If you take the sample functions of any non-trivial Wide Sense Stationary random process, they dribble on forever. No matter how far you go out in time, the V sub k's, the sample random variables of the process way out there, all have the same variance. So this is stationary because it just keeps going on forever. OK, and as we said last time, that runs into violent conflict with our whole idea here of trying to understand what L sub 2 functions are all about. Because what we would like is for the sample values of these processes to be L sub 2, to have the waveforms that we send be L sub 2. When you add up two L sub 2 functions, you get an L sub 2 function. And therefore all the sample values of all the things that happen with probability one would be L sub 2 functions. But as soon as you do the most natural and simple thing with random processes, you wind up with something which cannot be L sub 2 anymore. But it's not L sub 2 in only a trivial way. OK, in other words, it's not L sub 2 because it keeps going forever. It's not L sub 2 for the same reason that sine x is not L sub 2, or 1 is not L sub 2. OK, and we've decided not to look at those functions as reasonable functions for the waveforms that we transmit or receive. But for random processes, maybe they're not so bad. OK. But we don't know yet.

We started to talk about something called effectively Wide Sense Stationary, which the notes talk about quite a bit. Which is to say: OK, we will assume that the process is stationary over some very wide time interval. We don't know how wide that time interval is, but we'll assume that it's finite. In other words, we're going to take this random process, whatever it is; you can define a nice random process this way.
And then we're going to get our scissors, and we're going to cut it off over here, and we're going to cut it off over here. So it's then time-limited to this very broad interval. Why do we want to do this? Because again, it's like the integers, which are countable. You don't know how far out you have to go before you're not interested in something anymore. But you know that if you go out far enough, you can't be interested in it anymore. Because otherwise you can't take limits; you can't do anything interesting with sequences. OK, so here the idea is: if you go out far enough, you don't care.

This has to be the way that you model things, because any piece of equipment that you ever design is going to start being used at a certain time, and it's going to stop being used at a certain time. And about halfway after it stops being used, the people who make it will stop supporting it. And that's to hurry on the time at which you have to buy something new. I bought a new computer about six months ago. And I find that Microsoft is not supporting any of the stuff that it loaded into this damn computer anymore. Six months old. So you see why I'm angry about this matter of things not being supported. So in a sense they're not stationary after that. So you can only ask for things to be stationary over a period of six months or so. OK. But on electronic time scales, when you're sending data at kilobits per second or megabits per second, or gigabits per second as people like to do now, that's an awfully long time. You can send an awful lot of bits in that time. So stationary means stationary for a long time.

The notes do a lot of analysis of this. I'm going to do a little of that analysis here in class again. Not because it's so important to understand the details of it, but to understand what it really means to have a process be stationary, and to understand that it doesn't really destroy any of the L sub 2 theory that we built up. OK.
So the covariance function is L sub 2 in cases of physical relevance. That's one of the things we need. We want the sample functions also to be L sub 2 in cases of physical relevance. If you have a covariance function here, it's just a function of time at this point. There's nothing random about it. It's just a statistic of this random process. But it's a nice, well-defined function. We're talking about real random processes here. So what can you say about this function? Is it symmetric in time? How many people think it must be symmetric? I see a number of people. How many people think it's not symmetric? Well, it is symmetric. It has to be symmetric, because it's the expected value of Z of t sub 1 times Z of t sub 2, and if you flip the roles of those two, you have to get the same answer. So it is symmetric. And it is real.

We're going to talk about the Fourier transform of this before trying to give any relevance or physical meaning to this thing called spectral density. We'll just say this is the Fourier transform of the covariance function for a Wide Sense Stationary process. So we have some kind of Fourier transform. We'll assume this is L sub 2 and everything; we won't worry about any of that. So there is some function here which makes sense. Now if K is both real and symmetric, what can you say about its Fourier transform? If you have a real function, what property does its Fourier transform have?

AUDIENCE: [UNINTELLIGIBLE]

PROFESSOR: It's conjugate symmetric, yes. And furthermore, if K is symmetric itself, then you can go back from this, knowing you have a symmetric transform, and realize that this has to be real. OK. So spectral densities of real processes are always both real and symmetric.
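A quick numerical sanity check of that Fourier fact (my own illustration; the two-sided exponential covariance is just a convenient real, symmetric example):

```python
import numpy as np

dt = 0.01
t = dt * np.arange(-500, 501)        # symmetric time grid
K = np.exp(-np.abs(t))               # a real, symmetric covariance function

# Riemann-sum approximation of S(f) = integral K(tau) exp(-j 2 pi f tau) d tau
f = np.linspace(-2.0, 2.0, 9)        # a symmetric set of test frequencies
S = (K * np.exp(-2j * np.pi * np.outer(f, t))).sum(axis=1) * dt

print("max |Im S(f)|:", np.abs(S.imag).max())      # ~0: the transform is real
print("S(f) = S(-f)? ", np.allclose(S, S[::-1]))   # True: and it is symmetric
```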
We will define what we mean by spectral density later for complex processes. And spectral densities are always real. They aren't always symmetric if you have a complex process. But don't worry about that now. There are enough peculiarities that come in when we start dealing with complex processes that I don't want to worry about it at all. So the spectral density is real and symmetric. And so far it's just a definition.

OK, but the thing we found about this sinc process is that the sinc process, in fact, is a stationary process. It's a Wide Sense Stationary process for whatever variables you want to put in here. And if these variables are IID and they're Gaussian and zero mean, then this process is a zero-mean, stationary Gaussian random process. And it has a spectral density. The covariance turns out to be a sinc function, so its Fourier transform is a rectangular function. So this process has a spectral density which is constant out to a certain point, and then drops off to zero. So it's nice in that sense. Because you can make this be flat as far out as you want to, and then chop it off wherever you want to. And we'll see that when we start putting a process like this through filters, very nice things happen. OK. So we're familiar, relatively familiar, and conversant with at least one Gaussian random process.

OK, we talked about linear functionals last time, before the quiz. And we said a linear functional is a random variable, V, which is simply this integral here. We've talked about this integral a lot, where Z of t is a random process and g of t is a deterministic function. If the sample values of Z of t are L sub 2, then this integral is very well-defined. It turns out, also, that if g of t is L sub 2 and these sample values are bounded, then all of this works out very nicely also. And we'll find other ways to look at this as we move on.
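In symbols, the linear functional being discussed is the following (zero-mean case):

```latex
V \;=\; \int_{-\infty}^{\infty} Z(t)\, g(t)\, dt ,
\qquad
V(\omega) \;=\; \int_{-\infty}^{\infty} Z(t,\omega)\, g(t)\, dt
\quad \text{for each sample point } \omega .
```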
OK, but what this means is that for all sample points in this entire sample space (in other words, we're dealing with a probability space where we have a bunch of processes running around: we have data coming in, we're transmitting the data, we have data that gets received, we're doing crazy things with it, all of this stuff is random, and we might have other processes doing something else; a big, complicated sample space), for all of the elements in it, the sample value of the random variable V is just the integral of the sample value of the process. In other words, for each omega, this process is simply a waveform. So it's this waveform times the function g of t, which we think of like that.

So if g of t is time-limited and L sub 2, then as far as these sample functions are concerned, we don't give a fig about what Z of t is outside of that interval. And if we're going to deal only with linear operations on this process, where those linear operations are constrained to some interval, then we don't care what the process does outside of that interval. And since we don't care what the process does outside of that interval, we can simply define the process in terms of what it's doing from some large negative time to some large positive time. And that's all we care about.

And so we want to define effective stationarity as something which obeys the rules of stationarity over that interval. And we don't care what it does outside of that interval, OK? In other words, if Microsoft stops supporting things after six months, we don't care about it. Because everything we're going to do, we're going to get done within six months. And if we don't get it done within six months, we're going to have a terrible crash, and we're going to throw everything away and start over again. So it doesn't make any difference at that point. OK, so we have a process now that we'll assume is effectively stationary within two time limits.
And here is our definition. The covariance function is a function of two variables, t and tau, namely the expected value of Z of t times Z of tau. And our definition of effective stationarity is that this is equal to a function of t minus tau whenever t and tau are in this box. OK, and we drew figures last time for what that meant. We drew this box, bounded by T sub 0 over 2. And what this effective stationarity means is that on all these diagonal lines, the covariance function is constant. And that gives us the same effect, any time we're calculating the inner product of a sample value of the process with some function which is contained within those limits, as full stationarity would. All we need to know about is what the process is doing within minus T sub 0 over 2 to plus T sub 0 over 2. Nothing else matters. And therefore, something being effectively stationary within those limits means that we get the same answer here whether or not it's stationary.

Why is that important? It's important because the simple things that you can do with stationary processes are so simple that you'd like to remember them, and not remember all these formulas with capital T sub 0's floating around in them. Because having a capital T sub 0 in it is just as crazy as assuming that it's stationary, because you never know how long it's going to be until Microsoft stops supporting its software. I mean, if you knew, you'd never buy their products, right? So you can't possibly know. So you assume it's going to go on forever. But then, when we try to ask what that means, we say that what it really means is that over some finite limits, which are large compared to anything we're going to deal with, all of these results hold. So eventually we're trying to get to the point where we can leave the T sub 0 out of it.
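In symbols, the definition just described (zero-mean case) is:

```latex
K_Z(t,\tau) \;=\; \mathsf{E}\bigl[Z(t)\,Z(\tau)\bigr] \;=\; \tilde{K}_Z(t-\tau)
\qquad \text{whenever } -\tfrac{T_0}{2} \,\le\, t,\,\tau \,\le\, \tfrac{T_0}{2} ,
```

with nothing assumed about the covariance outside that box.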
But we're going through this so we can say, OK, there's nothing magical that happens in the limit as T sub 0 goes to infinity. Mainly we're trying to establish the fact that all these results hold over a finite interval of time. And therefore you don't care. OK.

So suppose this single-variable covariance function is finite at zero. What does that mean? What is the single-variable covariance function at zero, for a zero-mean process? It's the expected value of Z of t times Z of t. In other words, it's the variance of the process at t, for any t within minus T sub 0 over 2 to plus T sub 0 over 2. OK? So if that variance is finite, then you can just integrate it over the interval and you get something finite: the expected energy of a sample function over the interval is just T sub 0 times K tilde of zero. In other words, this is what you need for the sample functions to be L sub 2 with probability one. As soon as you have this, then you're in business with all of your L sub 2 theory. And what does your L sub 2 theory say? It really says you can ignore L sub 2 theory. OK, that was the nice thing about it. That was why we did it. OK, in other words, the whole thing we're doing in this course is going through complicated things, sometimes, so that you can know how far you can go with the simple things. And we always end up with the simple things when we're all done. OK, and that's the whole principle. I mean, we could just do the simple things, like most courses do. Except then you would never know when they apply and when they don't. So, OK. So there we are.

Let's talk about linear filtering of processes. The notes don't do quite enough of this. So when I get to the place where you need it (and some of it's on this slide), I'll tell you.
We're taking a random process and we're passing the sample waveforms of that random process through a linear filter, a linear time-invariant filter, so some other random process comes out. OK, and now the output at some epoch tau is just the convolution of the input Z of t with the filter: the integral of Z of t times the filter response h of tau minus t. And you can interpret this in terms of the sample functions, the way we did before. But now we already sort of understand that. So we're just interpreting it as a random variable, V, which is the value of the output process at epoch tau, and which is given in this way. It's the random variable you get when you pass the process through the filter.

OK: if Z of t is effectively Wide Sense Stationary with L sub 2 sample functions, and if h of t is non-zero only within finite limits, minus A to plus A, what's going to happen? When you pass the sample waveforms through a linear filter which is limited to minus A to plus A, that linear filter can't reach very far into the input. Namely, the output that comes out of there at some time can't depend on the input any more than A away from that time. OK, in other words, let me draw a picture of that. Here's some time where we're observing V of t. And V of t can depend on Z of t only in this region here: from Z of t minus A up to Z of t plus A. OK, that's what the convolution says. That's what those filter equations say. It says that this output, here at this time, depends on the input only over these finite limits. And again, remember, we don't care about realizability here, because the timing at the receiver is always different from the timing at the transmitter.

OK, so what does that say? It says that we know that Z of t is Wide Sense Stationary over these big limits, minus T sub 0 to plus T sub 0, from six months ago until six months from now. And this linear filter is non-zero only over one millisecond.
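In symbols, with the filter supported on minus A to plus A:

```latex
V(\tau) \;=\; \int_{-\infty}^{\infty} Z(t)\, h(\tau - t)\, dt
\;=\; \int_{\tau - A}^{\tau + A} Z(t)\, h(\tau - t)\, dt ,
```

so the output at tau depends on the input only through its values on the interval from tau minus A to tau plus A.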
It says that the process which comes out is going to be Wide Sense Stationary over minus six months plus one millisecond, to six months minus one millisecond. And that's just common sense, because the filter isn't moving the process any more than that little bit. OK. So V of t, then, is going to be Wide Sense Stationary, and L sub 2, within those slightly smaller limits. The covariance function is going to be the same thing that it would be if you had a completely stationary process, if you only worry about what's going on within those limits, minus T sub 0 plus A to plus T sub 0 minus A.

You can take that big mess there and view it more simply as a convolution. This part of it is a convolution of h of t with K tilde, which is the covariance as a function of one variable. And then it's convolved with, and here I put in the complex conjugate of h, because it's easier to do it here. Because at some point we want to start dealing with complex random processes, and for filters it's easy to do that; for linear functionals it's a little harder. So otherwise I want to stick to real processes. If we're dealing with filtering, I can simply define this covariance function as the expected value of Z of t times Z complex conjugate of tau. And when you do that, we're taking this integral, which is the convolution of h of t with K tilde, and then we're convolving it with h complex conjugate of minus t, because the t gets turned around in there. And when you take the Fourier transform of it, at least what I hope it says (I'm not very good at taking Fourier transforms, and some of you people are very good at it), we get the spectral density of the process Z of t, times the magnitude squared of h hat of f. Now this is the formula which is not in the notes, I just found out. And it's a very important formula. So you ought to write it down.
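Written down, the formula pair is (zero-mean, Wide Sense Stationary input; the star denotes complex conjugation):

```latex
\tilde{K}_V(\tau) \;=\; \bigl(h \,\ast\, \tilde{K}_Z \,\ast\, h^{*}(-\,\cdot\,)\bigr)(\tau) ,
\qquad
S_V(f) \;=\; S_Z(f)\,\bigl|\hat{h}(f)\bigr|^{2} .
```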
It says what the spectral density is at the output of the filter, if you know the spectral density at the input of the filter. And it's kind of a neat, simple formula. It's like the formula that you have when you take a waveform and pass it through a linear filter: it's easier to look at that in the frequency domain, where you multiply, than in the time domain, where you have to convolve. Here things become even simpler, because all we have to do is take the spectral density, which is now a function of frequency, multiply it by this magnitude squared, and suddenly we get the spectral density at the output.

Now what happens when you take this nice sinc Gaussian process that we have, which is flat over as large a bandwidth as you want to make it, and you pass that process through a linear filter? What do you get out? You get a process out which has any spectral density that you want, within those wide limits. OK, so just by understanding the sinc Gaussian process and by understanding this result, you can create a process which has any old spectral density that you want. And if you can create any old spectral density that you want, you know you can create any old covariance function that you want. And all of this is good for Wide Sense Stationarity, so long as your filter doesn't extend for too long. Which says all of our theory works: starting at some negative time, going to some positive time, taking filters that only exist for a relatively small time. If it's Wide Sense Stationary and it's Gaussian, then everything works beautifully. You can simply create any old spectral density that you want. If you have some waveform that's coming in, you can filter it and make the output process look like whatever you want it to look like, so long as you have the stationarity property.
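Here is a small discrete-time sketch of that formula (my own illustration; the FIR taps, lengths, and trial count are arbitrary choices). It passes white Gaussian noise, whose spectral density is flat, through a filter and checks that the averaged output periodogram is the input one times the magnitude squared of the frequency response:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 1024, 400
h = np.array([1.0, 0.5, 0.25])         # a short FIR filter (arbitrary taps)
H = np.fft.rfft(h, n)                  # its frequency response on the DFT grid

S_in = np.zeros(H.size)
S_out = np.zeros(H.size)
for _ in range(trials):
    z = rng.standard_normal(n)         # discrete-time white Gaussian noise
    v = np.convolve(z, h)[:n]          # the filtered process
    S_in += np.abs(np.fft.rfft(z)) ** 2 / n    # periodogram of the input
    S_out += np.abs(np.fft.rfft(v)) ** 2 / n   # periodogram of the output
S_in, S_out = S_in / trials, S_out / trials

# The estimated spectral densities should satisfy S_out = |H|^2 * S_in.
ratio = S_out / (np.abs(H) ** 2 * S_in)
print("mean of S_out / (|H|^2 S_in):", ratio.mean())   # close to 1
```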
Now you know the secret of why everybody deals with stationary processes. It's because of this formula. It's an incredibly simple formula; it's an incredibly simple idea. And now we know something else: we know that it also applies over finite, but large, time intervals. OK.

So just to spell out the conclusions once more: a Wide Sense Stationary process is Wide Sense Stationary after filtering. So if you start out with Wide Sense Stationary, it's Wide Sense Stationary when you get done. If it's effectively Wide Sense Stationary, it's effectively stationary with a reduced interval of effective stationarity after you filter. OK, in other words, if you start out with a process which is effectively stationary from minus T sub 0 to plus T sub 0, then after you filter it with any filter whose impulse response is limited in time, you get something which is effectively stationary within that six-months-minus-one-millisecond period of time. Yes?

AUDIENCE: [UNINTELLIGIBLE]

PROFESSOR: What?

AUDIENCE: Don't you differentiate [UNINTELLIGIBLE]

PROFESSOR: Effectively Wide Sense Stationary, thank you. I try to differentiate between them, but sometimes I'm thinking about Gaussian processes, where it doesn't make any difference. It is effectively Wide Sense Stationary, with a reduced interval, after filtering. OK, good.

The internal covariance, in other words the covariance function within these intervals, the spectral density, and the joint probability density aren't affected by this interval of effective stationarity. In other words, so long as you stay in this region of interest, you don't care what T sub 0 is. And since you don't care what T sub 0 is, you forget about it. And from now on, we will forget about it. But we know that whatever we're dealing with, it's effectively stationary within some time period, which is what makes the functions L sub 2.
Back to that interval of effective stationarity: we don't care how big it is. So we don't have to specify T sub 0. We don't have to worry about it. We just assume it's big enough and move on. And then if it turns out it's not big enough, if your system crashes when one of Microsoft's infernal errors comes up, then you worry about it then. But you don't worry about it before that.

OK, so now we can really define white noise and understand what white noise is. White noise is noise that is effectively Wide Sense Stationary over a large enough interval to include all time intervals of interest. The other part of the definition of white noise is that the spectral density is constant in f over all frequencies of interest. OK, in other words, white noise really has to be defined in terms of what you're interested in. And that's important to understand.

If you're dealing with wireless channels, you sometimes want to assume that the noise you're seeing is actually white noise. But sometimes you do things like jumping from one frequency band to another frequency band to transmit. And when you jump from one frequency band to another frequency band, the noise might in fact be quite different in this band than it is in that band. But you're going to stay in these bands for a relatively long time. So as far as any kind of analysis goes which deals with those intervals, you want to assume that the noise is white. As far as any overall analysis that looks at all of the intervals at the same time, in other words one which deals with some very long-term phenomenon, you can't assume that the noise is white.

OK, in other words, white noise is a modeling tool. It's not something that occurs in the real world. But it's a simple modeling tool. Because what do you do with it? You start out by assuming that the noise is white. And then you start to get some understanding of what the problem is about.
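To summarize the definition just given in symbols (my compact restatement; the lecture only says the spectral density is "constant," and the two-sided N sub 0 over 2 normalization for that constant is the usual convention, not something fixed here):

```latex
S_Z(f) = \frac{N_0}{2} \;\;\text{for } |f| \le W_0,
\qquad
Z(t) \text{ effectively WSS for } |t| \le T_0,
```

with W sub 0 and T sub 0 large enough to include all frequencies and time intervals of interest.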
And after you understand what the problem is about, you come back and say: is it really important for me that the noise has a constant spectral density over this interval of interest? And at that point you say yes or no. OK. It's important to always be aware that this doesn't apply for times and frequencies outside the interval of interest.

And if the process is also Gaussian, it's called white Gaussian noise. So white Gaussian noise is noise which has a flat spectral density over this effective stationarity period that we're interested in, over some very large period of time, minus T sub 0 to plus T sub 0. And over some very large frequency interval, minus W up to plus W, or even better over some limited frequency band in positive and negative frequencies, the spectral density is constant. You can't define the spectral density until after you look at this effective period of stationarity. But at that point, you can then define the spectral density and see whether it looks constant over the period you're interested in. You can either see whether it is, or, if you're an academic and you write papers, you assume that it is. OK. The nice thing about being an academic is you can assume whatever you want to assume.

OK, let's go back to linear functionals. The material about linear functionals is in fact in the notes. This will start to give us an idea of what spectral density really means. So if I have a Wide Sense Stationary process, a linear functional -- well, this is what a linear functional is in general. But for a Wide Sense Stationary process, we have the single-variable covariance function. The expected value of the product of two random variables of this sort is just this integral here. This is what we did before. If you don't remember this formula, that's fine. It's just what you get when you take the definition of V sub i, which is the integral of g sub i of t with Z of t.
And then you multiply it by V sub j. So you take Z of t and Z of tau, you take the expected value of the two, and then you integrate the whole thing. And it comes out to be that, if I've done it right.

You can express this in these terms, namely as the integral of g sub i of t times a convolution of this function with this function. You can then use Parseval's identity. Because this convolution gives you a function of t: when you integrate this over tau, this whole thing is just a function of t. OK. So what we're dealing with is the integral of a function of t times a function of t. And Parseval's relation says that the integral of a function of time times a function of time is equal to the integral of the Fourier transform times the Fourier transform. OK, you all know this. This is almost the same as the trick we used with linear filters, just slightly different.

So that says that this expected value is equal to this product here. And here I've used the complex conjugate for the Fourier transform of g sub j, because I really need it there if g is not a symmetric function.

OK, so if these Fourier transforms of g sub i of t and g sub j of t are non-overlapping in frequency, what does this integral go to? And this is absolutely wild. I mean, if this doesn't surprise you, you really ought to go back and think about what it is that's going on here. Because it's very astonishing. It's saying that stationarity says something you would never guess in a million years that it says. OK, it says that if this and this do not overlap in frequency, this integral is zero. In other words, if you take a linear functional over one band of frequencies, it is uncorrelated with every random variable that you can take in any other band of frequencies. So long as the process is Wide Sense Stationary, this uncorrelatedness holds. And if you're dealing with a Gaussian random process, it's also stationary.
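Spelled out, the chain of steps just described is the following (my reconstruction from the definitions, with V sub i equal to the integral of g sub i of t times Z of t, and K-tilde sub Z the single-variable covariance function whose Fourier transform is S sub Z):

```latex
E[V_i V_j]
  = \int\!\!\int g_i(t)\,\tilde{K}_Z(t-\tau)\,g_j(\tau)\,d\tau\,dt
  = \int \hat{g}_i(f)\, S_Z(f)\, \hat{g}_j^{\,*}(f)\, df .
```

If g-hat sub i and g-hat sub j are non-zero on disjoint sets of frequencies, the integrand on the right is identically zero, so the expected product is zero.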
Uncorrelated, for Gaussian random variables, means independent. And what happens then is that the expected value of V sub i times V sub j is zero, and V sub i and V sub j are statistically independent random variables. And it says that whatever is happening in one frequency band of a stationary Gaussian process is completely independent of what's happening in any other band.

Now, if any of you can give me an intuitive reason for why that is, I would love to hear it. Because, well, after you think about it for a long time, it starts to sort of make sense. If you read the appendix in the notes, the appendix interprets it a little bit more. If you take this process, limit it in time, and expand it in a Fourier series, what's going to happen to those coefficients in the Fourier series? If the process is stationary, then for each one of them, if you think of them as complex coefficients, the phase has to be random and uniformly distributed between zero and 2 pi. In other words, if the phase on any sinusoid is either deterministic or anything other than uniform, then that little slice of the process cannot be Wide Sense Stationary. And there's nothing that can happen in any other frequencies to make the process stationary again.

OK, in other words, when you expand it in a Fourier series, you have to get coefficients at every little frequency slice which have uniform phase. In other words, the Gaussian random variables that you're looking at are what are called proper complex Gaussian random variables, which have the same variance for the real part as for the imaginary part. When you look at their probability density, you find circles. In other words, you find circularly symmetric random variables at every little frequency interval. And that's, somehow, what stationarity is implying for us. So it's a pretty important thing.
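Here is a small simulation of that circular-symmetry picture, my own illustration rather than anything from the notes; the shaping filter and the bin choices are arbitrary. It shapes white Gaussian noise by a circular convolution, so the resulting sequence is exactly stationary under cyclic shifts, and then checks that two distinct DFT coefficients are uncorrelated and that each coefficient has equal variance in its real and imaginary parts:

```python
# Sketch: for a (circularly) stationary Gaussian sequence, distinct
# Fourier coefficients are uncorrelated, and each coefficient is
# circularly symmetric (proper complex Gaussian).
import numpy as np

rng = np.random.default_rng(1)
n, trials = 256, 20_000
h = np.array([0.5, 1.0, 0.5])          # arbitrary shaping filter

# Circular convolution of white noise with h: stationary on the circle.
Hf = np.fft.fft(h, n)
z = rng.standard_normal((trials, n))
x = np.fft.ifft(np.fft.fft(z, axis=1) * Hf, axis=1).real

X = np.fft.fft(x, axis=1)
k1, k2 = 20, 60                        # two distinct frequency bins
cross = np.mean(X[:, k1] * np.conj(X[:, k2]))
power = np.mean(np.abs(X[:, k1]) ** 2)
print(abs(cross) / power)              # ~0: different bands uncorrelated

re, im = X[:, k1].real, X[:, k1].imag
print(re.var() / im.var())             # ~1: equal variances, circular symmetry
```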
And this doesn't just apply to white noise. It applies to any old process with any old spectral density, so long as we've assumed that it's Wide Sense Stationary. OK. So different frequency bands are uncorrelated for Wide Sense Stationary processes, and for Gaussian Wide Sense Stationary processes, different frequency bands are completely independent of each other.

OK, as soon as you have stationary Gaussian noise, looking at one frequency band to try to get any idea of what's going on in another frequency band is an absolute loser. You can't tell anything from one band about what's going on in another band, so long as you're only looking at the Gaussian noise. If you're sending data which is correlated between the two bands, then of course it's worthwhile to look at both bands. But if you're sending data that's limited to one frequency band, then at that point there's no reason to look anywhere else. We're going to find that out when we start studying detection. It'll be a major feature of understanding detection at that point. OK, and one of the things you want to understand with detection is what things you can ignore and what things you can't ignore. And this is telling us something that can be ignored.

It's also starting to tell us what this spectral density means. But I'll interpret that in the next slide, I hope. OK, if you take this set of functions g sub j of t -- in the notes I now move from that set of functions to the V sub j's instead of the g sub j's. If I start out with that set of functions, these -- oh dear, this is a random variable -- V sub j is the component of the process in the expansion, in that orthonormal set. So in other words, you can take this process and expand it in an orthonormal expansion using these random variables and these orthonormal functions. If g sub j of t is narrow band, and S sub Z of f is constant within that band -- in other words, we don't need white noise here. All we need is a spectral density which changes smoothly.
And if it changes smoothly, we can look at a small enough interval. And then what happens? When we take the expected value of V sub j squared, what we get is the product of these three terms -- and that's supposed to be a j there. OK, so what's happening is: these are narrow band, and this is just a constant within that narrow band. So we can pull this out, and then all we have is the integral of g-hat sub j of f times g-hat sub j complex conjugate of f which, since these functions are orthonormal, is just one. So what we find is that the variance of that random variable is simply the value of the spectral density.

Now, we don't use that to figure out what these random variables are all about. We use it to understand what the spectral density is all about. Because the spectral density is really the energy per degree of freedom in the process at frequency f. In other words, if we use orthonormal functions which are tightly constrained in frequency, they just pick out the value of the spectral density at that frequency. f sub 0 here is just a central frequency in the band we're integrating over. So the interpretation of the spectral density is that it's the energy per degree of freedom in any kind of orthonormal expansion you want to use, so long as those functions are tightly limited in frequency.

So that's what spectral density means. It tells you how much energy there is in the noise at all these different frequencies. It tells you, for example, that if you start out with a sinc Gaussian process like we were talking about before and pass it through a filter, what the filter is going to do is change the spectral density at all the different frequencies. And the interpretation of that, which now makes perfect sense when you're going through a filter, is that the amount of energy in each of those frequency bands is going to be changed exactly by the amount that you filtered it. OK, so we have a nice thing there.
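In symbols, the computation just described is (again a reconstruction; g-hat sub j is assumed concentrated near plus and minus f sub 0, and since S sub Z is symmetric both pieces pick out S sub Z of f sub 0, while the integral of the magnitude squared is one by orthonormality and Parseval):

```latex
E[V_j^2] = \int S_Z(f)\,|\hat{g}_j(f)|^2\,df
         \approx S_Z(f_0)\int |\hat{g}_j(f)|^2\,df
         = S_Z(f_0).
```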
S sub Z of f is the energy per degree of freedom in the process at frequency f. For a Gaussian Wide Sense Stationary process, this is the energy per degree of freedom at f, and the noise at all different frequencies is independent. That's what we were saying before. And after filtering, this independence is maintained. In other words, you pass a Gaussian random process through a filter and you still have independence between all the different frequency bands. It's an absolutely remarkable property.

Just this property of the process acting the same at each time as it acts at each other time leads to this amazing property of things being independent from one frequency to another, and that is really worth thinking about. And it's one of the things that people use when they actually design systems. I mean, it's almost natural to use if you don't think about it. If you start thinking about it, then it becomes even more worthwhile.

OK, should I start detection? Let me just start to say a few things about detection; then I want to stop in time to pass the quiz back.

OK, let's talk about where detection fits in with our master plan of what we've been doing. We started the course talking about what was over on this side of the channel business, namely all the source coding and all of those things. We then started to talk about how you do signal encoding. In other words, how do you map from binary digits coming in, into signals. Well, we never talked about this problem. We only talked about this problem when you didn't have any noise. We carefully avoided dealing with this problem at all. Then we talked about baseband modulation for PAM and for QAM. For PAM it was real, for QAM it was complex. Then we talked about how you go from baseband frequencies up to passband frequencies. Now we've been talking about what happens when you add white Gaussian noise, or any other noise, at passband.
Then you go from passband down to baseband. When I do this, we're going to have to come back at some point and deal with the question of what happens when you take passband white Gaussian noise and convert it down to baseband. Because some kind of funny things happen there. And we have to deal with it with a little bit of care. But let's forget about that for the time being, and suppose everything works out all right. Then we go through our baseband demodulator. Finally something comes out of the baseband demodulator, which we hope is the same as what came out of the signal encoder.

But what we're trying to do at this point, when the noise is here, is to say: this is, in some sense, going to be what we put in plus noise. And for the time being, in terms of looking at detection, we will just assume that. We will assume that this thing down here is what went in, plus noise. And assume that all this process of passing white Gaussian noise through this junk gives us just a signal plus noise.

OK, so what a detector does is it observes a sample value of this random variable V, or it observes a vector, or it observes a random process. It observes whatever it's going to observe. And it guesses the value of another random variable. And we'll call the other random variable H. We had been calling it other things for the input. It's nice to call it H because statisticians always call this a hypothesis. And I think it's easier to think of what's going on if you think of it as a hypothesis-type random variable. Namely, the detector has to guess. If we have a binary input, it has to guess at a binary output. It has to flip a coin somehow and say: I think that what came in is a zero, or I think what came in is a one.

The synonyms for detection are hypothesis testing, decision making, and decoding. They all mean exactly the same thing. There's no difference whatsoever between them, except the fields that they happen to deal with.
And each field talks about it in a different way. The word detection really came mostly from the radar field, where people were trying to determine whether a target was there or not. And hypothesis testing has been around a great deal longer. Because all scientists, from time immemorial, have had to deal with the same question: you collect a bunch of data about something, and it's always noisy data; there's always a bunch of junk that comes into it; and you have to try to draw some conclusions out of that. And drawing conclusions out of it is easier if you know there's only a finite set of alternatives which might be true.

In a presidential election, after you know who the candidates are, it's a binary decision. The people who try to guess who people are going to vote for are really doing binary decision making. They don't have very good probability models, and it certainly isn't a stationary process. But in fact it is a detection problem, the same as the detection problem we have here. So all these problems really fall into the same category. We're fortunate here in having one of the cleanest probabilistic descriptions of detection that you will ever find. So if you want to study decision making, you're far better off learning about it in this context, and then using it in all those other contexts, than you are doing something else.

If you go back and look at old books about this, you will be amazed at the philosophical discussions that people would go through about what makes sense for hypothesis testing and what doesn't make sense. And all of these arguments were based on a fundamental misapprehension that scientists had until relatively recently. The misapprehension was that they refused to accept models as occupying an intermediate position: you have physical reality, then you have models, and then you have doing something with a model. And you do something with the model, instead of doing something with the physical reality.
And people didn't understand that originally. So they got the analysis of the model terribly confused with trying to understand what the physical reality was. In other words, statisticians and probabilists were both doing the same thing: both of them were trying simultaneously to determine what a reasonable model was and to determine what to do with the model. And because of that, they could never decide on anything. Because they could never do what we've been doing, which is: take a toy model, analyze the toy model, figure out something from it about the physical reality. Then go back and look at reality, and see what you need to know about reality in order to make these decisions. Which is the way people do it now, I think. Decision theory in particular is done that way now.

Because now we usually assume that we know what are called a priori probabilities for these inputs here. For the binary communication problem, we usually want to assume that the inputs are equiprobable. In other words, P sub 0 is equal to one half and P sub 1 is equal to one half. And that's the a priori probability of what's coming in.

And we then want to assume some probabilistic model on V. Usually what we do is figure out what V is in terms of a conditional probability density: the conditional probability density of V, conditional on either of the two inputs. You see, this model here is just made in heaven for us. Because this thing is an input plus a Gaussian random variable. So the conditional probability density of the output, which is signal plus noise, conditional on the signal, is just a Gaussian density shifted over by the input. OK. So we have these two probabilities: the input probability, and this thing, which statisticians call a likelihood, which is the probability of this conditional on this. And in terms of that, we try to make the right decision.
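For concreteness, here is what those two likelihoods look like if we take the binary inputs to be antipodal signals plus and minus a, and the noise to be Gaussian with variance sigma squared (the specific signal values are my assumption for illustration; the lecture hasn't fixed a constellation yet):

```latex
f_{V\mid H}(v \mid 0) = \frac{1}{\sqrt{2\pi\sigma^2}}
      \exp\!\Big(-\frac{(v-a)^2}{2\sigma^2}\Big),
\qquad
f_{V\mid H}(v \mid 1) = \frac{1}{\sqrt{2\pi\sigma^2}}
      \exp\!\Big(-\frac{(v+a)^2}{2\sigma^2}\Big),
```

and the detector's job is to compare p sub 0 times the first likelihood against p sub 1 times the second.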
Now, I think I'm going to stop so we can pass back the quizzes and so on. Before Wednesday, even if you don't read the notes before Wednesday, spend a little bit of time thinking about how you want to make the optimal decision for this problem as we've laid it out now. What's surprising is that it really is a trivial problem. Or you can read the notes and accept the fact that it's a trivial problem. But there's nothing hard about it. The only interesting things in detection theory are finding easy ways to actually solve this problem, which is conceptually trivial. OK.