The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: OK. I want to review zero-mean jointly Gaussian random variables. And I want to review a couple of the other things I did last time. Because when I get into questions of stationarity and things like that today, I think it will be helpful to have a little better sense of what all that was about, or it will just look like a big mess. And there are sort of a small number of critical things there that we have to understand. One of them is that if you have a non-singular covariance matrix, OK, you have a bunch of random variables. Each pair of these random variables has a covariance. Namely, z sub i and z sub j have the covariance given by the expected value of z sub i times z sub j. I'm just talking about zero mean here. Anytime I don't say whether there's a mean or not, I mean there isn't a mean.
But I think the notes are usually pretty careful. If the covariance matrix is non-singular, then the following things happen. The vector, Z, is jointly Gaussian. And our first definition was: you can represent it, all the components of it, each as linear combinations of IID normal Gaussian random variables, namely independent Gaussian random variables. So they're all just built up from this common set of independent Gaussian random variables. Which sort of matches our idea of why noise should be Gaussian. Namely, it comes from a collection of a whole bunch of independent things, which all get added up in various ways. And we're trying to look at processes, noise processes. The thing that's going to happen then is that, at each epoch in time, the noise is going to be some linear combination of all of these noise effects, but it's going to be filtered a little bit. And because it's being filtered a little bit, the noise at one time and the noise at another time are all going to depend on common sets of variables. So that's the intuitive idea here.
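That first definition is easy to check numerically. The sketch below (the matrix A and the sample count are my own choices, not from the lecture) builds Z = A W from IID standard normal W, so Z is jointly Gaussian with covariance A times A-transpose, and compares that to a sample covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Z = A @ W with W a vector of IID N(0, 1) variables makes Z jointly
# Gaussian (zero mean) with covariance K = A @ A.T.
A = np.array([[1.0, 0.5, 0.0],
              [0.3, 2.0, 0.7]])
n = 200_000
W = rng.standard_normal((3, n))    # each column is one realization of W
Z = A @ W                          # each column is one realization of Z

K_theory = A @ A.T
K_sample = Z @ Z.T / n             # sample covariance (zero mean assumed)
print(np.round(K_theory, 3))
print(np.round(K_sample, 3))
```

The sample covariance should match A @ A.T to within Monte Carlo error, which is the whole content of "built up from a common set of independent Gaussian random variables."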
Starting with that idea, last time we derived what the probability density had to be. And if you remember, this probability density came out rather simply. The only thing that was involved here was this idea of going from little cubes in the IID noise domain into parallelograms in the z domain. Which is what happens when you go through a linear transformation. And because of that, this is the density that has to come out of there. By knowing a little bit about matrices, which is covered in the appendix, or you can just take it on faith if you want to, you can break up this matrix into eigenvalues and eigenvectors. Whether the matrix is singular or nonsingular, if it's a covariance matrix, which is to say a non-negative definite matrix, then you can always break this matrix up into eigenvalues and eigenvectors. The eigenvectors span the space. The eigenvectors can be taken to be orthonormal to each other.
And then when you express this in those terms, this probability density becomes just a product of normal Gaussian random variable densities, where the particular Gaussian random variables are these inner products of the noise vector with these various eigenvectors. In other words, you're taking this set of random variables, which you're expressing in some reference system where z1 up to z sub k are the different components of the vector. When you express this in a different reference system, you're rotating that space around. What this says is, you can always rotate the space in such a way that what you will find is orthogonal random variables, which are independent of each other. So that this is the density that you wind up having. And what this means is, if you look at the region of equal probability density over this set of random variables, what you find is a bunch of ellipsoids. And the axes of the ellipsoids are simply these eigenvectors here. OK, so this is the third way. If you can represent the noise in this way, again, it has to be jointly Gaussian.
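That rotation can also be checked directly. The sketch below (the covariance matrix is a made-up example) diagonalizes K, samples jointly Gaussian vectors with covariance K, and rotates into the eigenvector coordinates, where the components come out uncorrelated with variances equal to the eigenvalues, the axes of the equal-density ellipses.

```python
import numpy as np

rng = np.random.default_rng(1)

# A (made-up) non-singular covariance matrix: symmetric, positive definite.
K = np.array([[2.0, 0.8],
              [0.8, 1.0]])

# K = Q diag(lam) Q^T, with orthonormal eigenvector columns in Q.
lam, Q = np.linalg.eigh(K)

# Sample zero-mean jointly Gaussian Z with covariance K, then express it
# in the rotated (eigenvector) reference system.
n = 200_000
Z = np.linalg.cholesky(K) @ rng.standard_normal((2, n))
Y = Q.T @ Z                        # coordinates in the rotated system

C = Y @ Y.T / n                    # sample covariance of the rotated vector
print(np.round(C, 3))              # approximately diag(lam)
```

In the rotated system the off-diagonal entries of C vanish (to within sampling error), which is the "orthogonal random variables, independent of each other" statement.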
Finally, if all linear combinations of this random vector are Gaussian: that's probably the simplest one. But it's the hardest one to verify, in a sense. And it's the hardest one to get all these other results from. But if all linear combinations of these random variables are Gaussian, then in fact, again, the variables have to be jointly Gaussian. Again, it's important to understand that jointly Gaussian means more than just individually Gaussian. It doesn't mean what you would think from the words jointly Gaussian, as saying that each of the variables is Gaussian. It means a whole lot more than that. In the problem set, what you've done in one of the problems is to create a couple of examples of two random variables which are each Gaussian random variables, but which jointly are not jointly Gaussian. OK, finally, if you have a singular covariance matrix, Z is jointly Gaussian if a basis of these zi's is jointly Gaussian. OK? In other words, you throw out the random variables which are just linear combinations of the others, because you don't even want to think of those as random variables in a sense.
I mean, technically they are random variables, because they're defined on the sample space. But they're all just defined in terms of other things, so who cares about them? So after you throw those out, the others have to be jointly Gaussian. So in fact, if you have two random variables, z1 and z2, and z2 equals z1, in other words the probability density is on a straight diagonal line, and z1 is Gaussian, then z1 and z2 are jointly Gaussian in that case. This is not an example like the ones you did in this problem set that you're handing in today, where you in fact have things like a probability density which doesn't exist because it's impulsive on this line, and also impulsive on this line. Or something which looks Gaussian in two of the quadrants and is zero in the other two quadrants, and really bizarre things like that. OK? So please try to understand what jointly Gaussian means. Because everything about noise that we do is based on that. It really is.
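The z2 = z1 case can be seen in two lines of arithmetic (a small sketch of my own, not from the lecture): the covariance matrix is all ones, which is singular with rank 1, so there is no joint density, and yet the pair is jointly Gaussian because the basis {z1} is Gaussian and every linear combination collapses to a multiple of z1.

```python
import numpy as np

# For z2 = z1 with z1 ~ N(0, 1), the covariance matrix is all ones:
# E[z1 z1] = E[z1 z2] = E[z2 z2] = 1.
K = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# Singular: rank 1, determinant 0, so no joint density exists on the
# plane.  Still jointly Gaussian, since any linear combination
# a*z1 + b*z2 = (a + b)*z1 is a scalar multiple of a Gaussian.
print(np.linalg.matrix_rank(K))    # 1
print(np.linalg.det(K))            # 0 (up to rounding)
```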
And if you don't know what jointly Gaussian means, you're not going to understand anything about noise detection, or anything from this point on. I know last year I kept telling people that, and in the final exam there were still four or five people who didn't have the foggiest idea what jointly Gaussian meant. And you know, you're not going to understand the rest of this, you're not going to understand why these things are happening, if you don't understand that. OK. So the next thing we said is Z of t. I use those little curly brackets around something just as a shorthand way of saying that what I'm interested in now is the random process, not the random variable at a particular value of t. In other words, if I say Z of t, what I'm usually talking about is a random variable, which is the random variable corresponding to one particular epoch of this random process. Here I'm talking about the whole process. And it's a Gaussian process if Z of t1 to Z of tk are jointly Gaussian for all k and for all sets of t sub i.
And if the process can be represented as a linear combination of just ordinary random variables multiplied by some set of orthonormal functions, and these Z sub i are independent, and if the variances of these random variables sum up to something less than infinity (so you have finite energy in these sample functions, effectively, is what this says), then the sample functions of Z of t are L2 with probability one. OK. You have almost proven that in the problem set. If you don't believe that you've almost proven it, you really have. And anyway, it's true. One of the things that we're trying to do when we're trying to deal with these waveforms that we transmit: the only way we can deal with them very carefully is to know that they're L2 functions. Which means they have Fourier transforms. We can do all this stuff we've been doing. And you don't need anything more mathematically than that. So it's important to be dealing with L2 functions. We're now getting into random processes, and it's important to know that the sample functions are L2.
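The finite-energy claim can be illustrated with one concrete expansion. In the sketch below (the sine basis, the cutoff, and the variances 1/k squared are all my own choices), a sample function is built from an orthonormal basis on [0, 1]; by orthonormality its energy equals the sum of the squared coefficients, which is finite with probability one when the variances sum to a finite value.

```python
import numpy as np

rng = np.random.default_rng(2)

# Z(t) = sum_k Z_k phi_k(t), with phi_k orthonormal on [0, 1] and
# Z_k independent N(0, sigma_k^2), sigma_k = 1/k, so sum sigma_k^2 < inf.
K = 50
t = np.linspace(0.0, 1.0, 20_001)
k = np.arange(1, K + 1)
phi = np.sqrt(2.0) * np.sin(np.outer(k, np.pi * t))   # orthonormal sines
sigma = 1.0 / k
Zk = sigma * rng.standard_normal(K)    # one sample of the coefficients
z = Zk @ phi                           # one sample function

# Energy by trapezoidal integration; by orthonormality it should equal
# the sum of the squared coefficients.
dt = t[1] - t[0]
energy = np.sum(0.5 * (z[:-1] ** 2 + z[1:] ** 2)) * dt
print(energy, np.sum(Zk ** 2))
```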
Because so long as the sample functions are L2, then you can do all of these things we've done before and just put it together and say, well, you take a big ensemble of these things. They're all well defined, they're all L2. We can take Fourier transforms, we can do everything else we want to do. OK, so that's the game we're playing. OK, I'm just going to assume that sample functions are L2 from now on, except in a couple of bizarre cases that we look at. Then a linear functional is a random variable given by: the random variable is equal to the integral of the noise process Z of t, multiplied by an ordinary function, g of t, dt. And we talked about that a lot last time. This is the convolution of a process -- well, it's not a convolution. It's just the inner product of a process with a function. And we interpreted this last time in terms of the sample functions of the process and the sample values of the random variable. And then since we could do that, we could really talk about the random variable here.
This means that for all of the sample points in the sample space, with probability one, the sample values of the random variable V are equal to the integral of the sample values of the process times g of t dt. OK, in other words, this isn't really something unusual and new. I mean, students have the capacity of looking at this in two ways, and I've seen it happen for years. The first time you see this, you say this is trivial. Because it just looks like the kind of integration you've been doing all your life. You work with it for a while, and then at some point you wake up and you say, oh, but my god, this is not a function here. This is a random process. And you say, what the heck does this mean? And suddenly you're way out in left field. Well, this says what it means. Once you know what it means, we go back to this and we use this from now on. OK?
If we have a zero-mean Gaussian process Z of t with L2 sample functions, like you have when you take a process and make it up as a linear combination of random variables times orthonormal functions, and if you have a bunch of different L2 functions, g1 up to g sub J, then each of these random variables, each of these linear functionals, is going to be Gaussian. And in fact the whole set of them together is jointly Gaussian. And we showed that last time. I'm not going to bother to show it again. And we also found what the covariance was between these random variables. And it was just this expression here. OK. And this follows just by taking vi, which is this kind of integral, times vj, and interchanging the order of integration, taking the expected value of Z of t times Z of tau, which gives you this quantity. And you have the g sub i of t and the g sub j of tau stuck in there with an integral around it. And it's hard to understand what those integrals mean and whether they really exist or not, but we'll get to that a little later. We then talked about linear filtering of processes.
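That double-integral covariance formula, the integral of g_i(t) K_Z(t, tau) g_j(tau) dt dtau, can be checked against a Monte Carlo estimate. Everything in the sketch below (the basis, the variances, and the two g functions) is my own toy setup, not the lecture's example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Z(t) = sum_k Z_k phi_k(t) on [0, 1], Z_k independent N(0, sigma_k^2),
# so K_Z(t, tau) = sum_k sigma_k^2 phi_k(t) phi_k(tau).
Kmax = 20
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
k = np.arange(1, Kmax + 1)
phi = np.sqrt(2.0) * np.sin(np.outer(k, np.pi * t))
sig2 = 1.0 / k ** 2
KZ = (phi * sig2[:, None]).T @ phi          # K_Z(t, tau) on the grid

g1 = np.exp(-t)                              # two arbitrary L2 functions
g2 = t * (1.0 - t)

# E[V1 V2] = integral integral g1(t) K_Z(t, tau) g2(tau) dt dtau.
cov_formula = g1 @ KZ @ g2 * dt * dt

# Monte Carlo: V_i = integral Z(t) g_i(t) dt = sum_k Z_k <phi_k, g_i>.
c1 = phi @ g1 * dt                           # <phi_k, g1>
c2 = phi @ g2 * dt                           # <phi_k, g2>
n = 100_000
Zk = np.sqrt(sig2)[:, None] * rng.standard_normal((Kmax, n))
V1 = c1 @ Zk
V2 = c2 @ Zk
cov_mc = np.mean(V1 * V2)
print(cov_formula, cov_mc)
```

The two numbers agree to within sampling error, which is the interchange-of-integration argument carried out numerically.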
You have a process coming into a filter, and the output is some other stochastic process. Or at least we hope it's a well-defined stochastic process. And we talked a little bit about this last time. And for every value of tau, namely for each random variable in this output random process, V of tau is just a linear functional. It's a linear functional corresponding to the particular function h of tau minus t. So the linear functional is a function of t here for the particular value of tau that you have over here. So this is just a linear functional like the ones we've been talking about. OK, if Z of t is a zero-mean Gaussian process, then you have a bunch of different linear functionals here for any set of times tau 1 up to tau sub k. And those are jointly Gaussian from what we just said. And the random process V of tau is a Gaussian random process if, for all k and all sets of epochs tau 1, tau 2, up to tau sub k, this set of random variables is jointly Gaussian. So that's what we have here.
So V of tau is actually a Gaussian process if Z of t is a Gaussian random process to start with. And the covariance function is just this quantity that we talked about before. OK, so we have a covariance function. We also have a covariance function for the process we started with. And if it's Gaussian, all you need to know is what the covariance function is. So that's all rather nice. OK, as we said, we're going to start talking about stationarity today. I really want to talk about two ideas of stationarity. One is the idea that you have probably seen as undergraduates one place or another, which is simple computationally, but is almost impossible to understand when you try to say something precise about it. And the other is something called effectively stationary, which we're going to talk about today. And I'll show you why that makes sense. So we say that a process Z of t is stationary if, for all integers k, all shifts tau, all epochs t1 to t sub k, and all values of z1 to zk, this joint density here is equal to the joint density shifted.
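In discrete time the filtering statement is easy to verify: pass IID Gaussian samples through a filter h, and the output covariance at lag m is the deterministic autocorrelation of the taps, sum over n of h[n] h[n + m]. The filter below is one I made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Discrete-time analogue: IID N(0, 1) input Z[n], output V = h * Z.
# Then K_V[m] = sum_n h[n] h[n + m], depending only on the lag m.
h = np.array([1.0, 0.6, 0.3, 0.1])
n = 400_000
Z = rng.standard_normal(n)
V = np.convolve(Z, h, mode="valid")

# Theory: autocorrelation of the filter taps.
K_theory = np.array([np.sum(h[: len(h) - m] * h[m:]) for m in range(len(h))])

# Monte Carlo estimate of E[V[i] V[i + m]] for each lag m.
K_mc = np.array([np.mean(V[: len(V) - m] * V[m:]) for m in range(len(h))])
print(np.round(K_theory, 3))
print(np.round(K_mc, 3))
```

Because the output covariance depends only on the lag, the filtered process is also a first example of wide-sense stationarity, the topic that comes next.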
In other words, what we're doing is we're taking a set of times over here, t1, t2, t3, t4, t5. We're looking at the joint distribution function for those random variables here, and then we're shifting it by tau. And we're saying that if the process is stationary, the joint distribution function here has to be the same as the joint distribution function there. You might think that all you need is for the distribution function here to be the same as the distribution function there. But you really want the same relationship to hold through here as there if you want to call the process stationary. OK. If we have a zero-mean Gaussian process, that's just equivalent to saying that the covariance function at ti and t sub j is the same as the covariance function at ti plus tau and tj plus tau. And that's true for t1 up to t sub k. Which really just means this has to be true for all tau, for all t sub i, and for all t sub j. So you don't need to worry about the k at all once you're dealing with a Gaussian process. Because all you need to worry about is the covariance function.
And the covariance function is only a function of two variables, the process at one epoch and the process at another epoch. And this is equivalent, even more simply, to saying that the covariance function at t1 and t2 has to be equal to the covariance function at t1 minus t2 and zero. Can you see why that is? If I start out with this, and I know that this is true, then the thing that I can do is take this and shift it by any amount that I want to. So if I shift this by adding tau here, then what I wind up with is K sub Z of t1 minus t2 plus tau, comma, tau is equal to K sub Z of t1 plus tau, t2 plus tau. And if I change variables around a little bit, I come to this. So this is the condition we need for a Gaussian process to be stationary. I would defy any of you to ever show in any way that a process satisfies all of these conditions. I mean, if you don't have nice structural properties in the process, like a Gaussian process, which says that all you need to define it is this, this is something that you just can't deal with very well. So we have this.
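The equivalence being walked through here, that shift invariance of the covariance is the same as depending only on the time difference, can be illustrated with any covariance of that form. The exponential covariance below is my own example, chosen only because it visibly depends on t1 minus t2.

```python
import numpy as np

rng = np.random.default_rng(5)

# A (made-up) stationary covariance: K_Z(t1, t2) = exp(-|t1 - t2|).
def KZ(t1, t2):
    return np.exp(-np.abs(t1 - t2))

t1 = rng.uniform(-5.0, 5.0, 1000)
t2 = rng.uniform(-5.0, 5.0, 1000)
tau = rng.uniform(-5.0, 5.0, 1000)

# Shift invariance: K_Z(t1 + tau, t2 + tau) == K_Z(t1, t2) ...
shift_ok = np.allclose(KZ(t1 + tau, t2 + tau), KZ(t1, t2))
# ... which is the same thing as depending only on the difference:
diff_ok = np.allclose(KZ(t1, t2), KZ(t1 - t2, 0.0))
print(shift_ok, diff_ok)
```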
And then we say, OK, this covariance is so easy to work with -- I mean, no, it's not easy to work with, but it's one hell of a lot easier to work with than that. And therefore if you want to start talking about processes, and you don't really want to go into all the detail of these joint distribution functions, you will say to yourselves that one thing I might ask for in a process is the question of whether the covariance function satisfies this property. I mean, for a Gaussian process this is all you need to make the process stationary. For other processes you need more, but at least it's an interesting question to ask. Is the process partly stationary, in the sense that the covariance function at least is stationary? Namely, the covariance function here looks the same as the covariance function there. So we say that a zero-mean process is wide-sense stationary if it satisfies this condition. So a Gaussian process then is stationary if and only if it's wide-sense stationary.
And a random process with a mean, or with a mean that isn't necessarily zero, is going to be stationary or wide-sense stationary if the mean is constant and the fluctuation is stationary. Or wide-sense stationary, as the case may be. So you want both of these properties there. So as before, we're just going to throw out the mean and not worry about it. Because if we're thinking of it as noise, it's not going to have a mean. Because if it has a mean, it's not part of the noise. It's just something we know, and we might as well remove it. OK, interesting example here. Let's look at a process, V of t, which is defined in this way here. It's the sum of a set of random variables, V sub k, times the sinc function for some sampling interval, capital T. And I'm assuming that the V sub k are zero-mean, and that, at least as far as second moments are concerned, they have this stationarity property between them. Namely, they are uncorrelated from one j to another k. The expected value of vj squared is sigma squared. The expected value of vj vk for k unequal to j is zero.
So they're uncorrelated; they all have the same variance. Then I claim that the process V of t is wide-sense stationary. And I claim that this covariance function is going to be sigma squared times sinc of t minus tau over T. Now for how many of you is this an obvious consequence of that? How many of you would guess this if you had to guess something? Well, if you wouldn't guess it, you should. Any time you don't know something, the best thing to do is to make a guess and try to see whether your guess is right or not. Because otherwise you never know what to do. So if you have a function which is defined in this way -- think of it this way. Suppose these V sub k's were not random variables, but instead suppose they were all just constants. Suppose they were all 1. Suppose you're looking at the function V of t, which is the sum of all of the sinc functions. And think of what you'd get when you add up a bunch of sinc functions which are all exactly the same. What do you get? I mean, you have to get a constant, right?
436 00:24:56,070 --> 00:24:58,450 You're just interpolating between all these points, and 437 00:24:58,450 --> 00:25:00,590 all the points are the same. 438 00:25:00,590 --> 00:25:02,980 So it'd be very bizarre if you didn't get 439 00:25:02,980 --> 00:25:06,180 something which was constant. 440 00:25:06,180 --> 00:25:09,250 Well the same thing happens here. 441 00:25:09,250 --> 00:25:14,100 The derivation of this is one of those derivations which is 442 00:25:14,100 --> 00:25:19,700 very, very simple and very, very slick and very elegant. 443 00:25:19,700 --> 00:25:23,000 And I was going to go through it in class and I thought, no. 444 00:25:23,000 --> 00:25:25,700 You just can't follow this in real time. 445 00:25:25,700 --> 00:25:27,710 It's something you have to sit down and think 446 00:25:27,710 --> 00:25:30,350 about for five minutes. 447 00:25:30,350 --> 00:25:34,960 So I urge you all to read this, because this is a trick 448 00:25:34,960 --> 00:25:37,950 that you need to use all the time. 449 00:25:37,950 --> 00:25:42,620 And you should understand it, because various problems as we 450 00:25:42,620 --> 00:25:47,290 go along are going to use this idea in various ways. 451 00:25:47,290 --> 00:25:50,540 But anyway, when you have a process defined this way, it 452 00:25:50,540 --> 00:25:52,460 has to be wide-sense stationary. 453 00:25:55,250 --> 00:26:01,290 And the covariance function is just this function here. 454 00:26:01,290 --> 00:26:10,320 Now if these V sub k up here are jointly Gaussian, and they 455 00:26:10,320 --> 00:26:15,290 all have the same variance, so they're all IID, then what 456 00:26:15,290 --> 00:26:19,510 this means is you have a Gaussian random process. 457 00:26:19,510 --> 00:26:23,340 And it's a stationary Gaussian random process. 
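[The two claims above -- that interpolating identical samples gives a constant, and that the covariance comes out as sigma squared times sinc of (t minus tau) over capital T -- can be checked numerically. Here is a minimal sketch, not from the notes: it assumes NumPy, whose np.sinc(x) is sin(pi x)/(pi x), and takes T = 1 and sigma = 1, truncating the doubly infinite sum.]

```python
import numpy as np

# T = 1 sampling interval; truncate the doubly infinite sum at |k| <= 2000
T = 1.0
k = np.arange(-2000, 2001)

def sinc_sum(t):
    # sum over k of sinc((t - kT)/T): interpolating the all-ones samples v_k = 1
    return np.sinc((t - k * T) / T).sum()

# adding up identical sinc pulses gives (essentially) the constant 1
print([round(sinc_sum(t), 4) for t in (0.0, 0.3, 0.5, 0.77)])

# and for uncorrelated unit-variance V_k, the covariance is
# K_V(t, tau) = sum over k of sinc((t-k)/T) sinc((tau-k)/T),
# which collapses to sinc((t - tau)/T)
t, tau = 0.9, 0.25
kv = (np.sinc((t - k * T) / T) * np.sinc((tau - k * T) / T)).sum()
print(round(kv, 4), round(float(np.sinc((t - tau) / T)), 4))
```

[The second check rests on the reproducing-kernel identity: the sum over k of sinc(t - k) sinc(tau - k) equals sinc(t - tau), which is the kind of identity the slick five-minute derivation turns on.]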
458 00:26:23,340 --> 00:26:26,910 In other words, it's a process where, if you look at the 459 00:26:26,910 --> 00:26:30,690 random variable v of tau at any given tau, you get the 460 00:26:30,690 --> 00:26:34,140 same thing for all tau. 461 00:26:34,140 --> 00:26:36,840 In other words, it's a little peculiar, in the sense that 462 00:26:36,840 --> 00:26:40,720 when you look at this, it looks like time 0 463 00:26:40,720 --> 00:26:42,370 was a little special. 464 00:26:42,370 --> 00:26:46,860 Because time 0 is where you specified the process. 465 00:26:46,860 --> 00:26:49,140 Time t you specified the process. 466 00:26:49,140 --> 00:26:51,810 Time 2t you specified the process. 467 00:26:51,810 --> 00:26:54,580 But in fact this is saying, no. 468 00:26:54,580 --> 00:26:56,770 Time 0 is not special at all. 469 00:26:56,770 --> 00:26:59,800 Which is why we say that the process is wide-sense 470 00:26:59,800 --> 00:27:00,640 stationary. 471 00:27:00,640 --> 00:27:03,180 All times look the same. 472 00:27:03,180 --> 00:27:08,020 On the other hand, if you take these V sub k here to be 473 00:27:08,020 --> 00:27:11,550 binary random variables which take on the value plus 1 or 474 00:27:11,550 --> 00:27:14,970 minus 1 with equal probability, 475 00:27:14,970 --> 00:27:18,270 what do you have then? 476 00:27:18,270 --> 00:27:22,900 You look at the sum here, and at the sample points -- namely 477 00:27:22,900 --> 00:27:27,240 at 0, at t, at 2t, and so forth -- what values can the 478 00:27:27,240 --> 00:27:28,490 process take on? 479 00:27:31,580 --> 00:27:33,670 At time 0? 480 00:27:33,670 --> 00:27:39,050 If v sub 0 is either plus 1 or minus 1, what 481 00:27:39,050 --> 00:27:41,780 is v of 0 over here? 482 00:27:41,780 --> 00:27:44,100 It's plus 1 or minus 1. 483 00:27:44,100 --> 00:27:46,970 It can only be those two things. 
484 00:27:46,970 --> 00:27:51,220 If you look at V of capital T over 2, namely if you look 485 00:27:51,220 --> 00:27:57,230 halfway between these two sample points, you then ask, 486 00:27:57,230 --> 00:28:01,720 what are the different values that v of T 487 00:28:01,720 --> 00:28:05,270 over 2 can take on? 488 00:28:05,270 --> 00:28:07,660 It's an awful mess. 489 00:28:07,660 --> 00:28:11,270 It's a very bizarre random variable. 490 00:28:11,270 --> 00:28:13,540 But anyway, it's not a random variable which is 491 00:28:13,540 --> 00:28:16,570 plus or minus 1. 492 00:28:16,570 --> 00:28:20,370 Because it's really a sum of an infinite number of terms. 493 00:28:20,370 --> 00:28:24,840 So you're taking an infinite number of binary random 494 00:28:24,840 --> 00:28:30,190 variables, each with arbitrary multipliers. 495 00:28:30,190 --> 00:28:32,100 So you're adding them all up. 496 00:28:32,100 --> 00:28:34,070 So you get something that's not differentiable. 497 00:28:34,070 --> 00:28:37,370 It's not anything nice. 498 00:28:37,370 --> 00:28:38,460 I don't care about that. 499 00:28:38,460 --> 00:28:42,060 The only thing that I care about is that v of capital T 500 00:28:42,060 --> 00:28:47,030 over 2 is not a binary random variable anymore. 501 00:28:47,030 --> 00:28:50,900 And therefore this process is not stationary anymore. 502 00:28:50,900 --> 00:28:54,500 Here's an example of a wide-sense stationary process 503 00:28:54,500 --> 00:28:57,600 which is not stationary. 504 00:28:57,600 --> 00:29:00,770 So it's a nice example of where you have wide-sense 505 00:29:00,770 --> 00:29:04,350 stationarity, but you don't have stationarity. 506 00:29:04,350 --> 00:29:08,200 So all kinds of questions about power and things like 507 00:29:08,200 --> 00:29:12,970 that, this process works very, very well. 508 00:29:12,970 --> 00:29:17,440 Because questions about power you answer only in terms of 509 00:29:17,440 --> 00:29:20,010 covariance functions. 
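[A quick Monte Carlo sketch of this binary example (assuming NumPy; T = 1, sigma = 1, the sum truncated at |k| <= 200, and all numbers illustrative): the second moment at t = T/2 comes out right, exactly as wide-sense stationarity promises, but the values at T/2 are nothing like plus or minus 1.]

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0
k = np.arange(-200, 201)
w_half = np.sinc((T / 2 - k * T) / T)     # sinc weights at t = T/2

# V_k = +/-1 with equal probability; V(T/2) = sum_k V_k sinc((T/2 - kT)/T)
vk = rng.choice([-1.0, 1.0], size=(20000, k.size))
v_half = vk @ w_half

# the second moment behaves: E[V(T/2)^2] is close to sigma squared = 1
print("std of V(T/2):", round(float(v_half.std()), 3))

# but the distribution is nothing like the +/-1 values seen at t = 0, T, 2T
dist_to_pm1 = np.minimum(abs(v_half - 1), abs(v_half + 1))
print("fraction of samples within 0.01 of +/-1:",
      float((dist_to_pm1 < 0.01).mean()))
```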
510 00:29:20,010 --> 00:29:23,420 Questions of individual possible values and 511 00:29:23,420 --> 00:29:26,520 probability distributions you can't answer very well. 512 00:29:30,080 --> 00:29:31,870 One more thing. 513 00:29:31,870 --> 00:29:35,540 If these variables are Gaussian, and if you actually 514 00:29:35,540 --> 00:29:39,390 believe me that this is a stationary Gaussian process, 515 00:29:39,390 --> 00:29:44,740 and it really is a stationary Gaussian process, what we have 516 00:29:44,740 --> 00:29:53,640 is a way of creating a broad category of random processes. 517 00:29:53,640 --> 00:29:56,750 Because if I look at the sample functions here of this 518 00:29:56,750 --> 00:30:01,710 process, each sample function is bandlimited. 519 00:30:01,710 --> 00:30:03,150 Baseband limited. 520 00:30:03,150 --> 00:30:08,760 And it's baseband limited to 1 over 2 times capital T. 1 over 521 00:30:08,760 --> 00:30:13,780 the quantity 2 times capital T. So by choosing capital T to 522 00:30:13,780 --> 00:30:18,250 be different things, I can make these sample functions 523 00:30:18,250 --> 00:30:21,760 have large bandwidth or small bandwidth so I can look at a 524 00:30:21,760 --> 00:30:24,340 large variety of different things. 525 00:30:24,340 --> 00:30:28,410 All of them have Fourier transforms, which look sort of 526 00:30:28,410 --> 00:30:31,320 flat in magnitude. 527 00:30:31,320 --> 00:30:34,270 And we'll talk about that as we go on. 528 00:30:34,270 --> 00:30:39,430 And when I pass these things through linear filters, what 529 00:30:39,430 --> 00:30:44,020 we're going to find is we can create any old Gaussian random 530 00:30:44,020 --> 00:30:46,120 process we want to. 531 00:30:46,120 --> 00:30:47,480 So that's why they're nice. 532 00:30:47,480 --> 00:30:47,750 Yes? 533 00:30:47,750 --> 00:30:50,280 AUDIENCE: [INAUDIBLE] 534 00:30:50,280 --> 00:30:53,970 PROFESSOR: Can we make V of t white noise? 535 00:30:53,970 --> 00:30:55,740 In a practical sense, yes. 
536 00:30:55,740 --> 00:30:58,200 In a mathematical sense, no. 537 00:30:58,200 --> 00:31:00,060 In a mathematical sense, you can't make 538 00:31:00,060 --> 00:31:01,800 anything white noise. 539 00:31:01,800 --> 00:31:04,130 In a mathematical sense, white noise is something 540 00:31:04,130 --> 00:31:06,580 which does not exist. 541 00:31:06,580 --> 00:31:08,890 I'm going to get to that later today. 542 00:31:08,890 --> 00:31:09,950 And it's a good question. 543 00:31:09,950 --> 00:31:15,140 But the answer is yes and no. 544 00:31:22,220 --> 00:31:26,430 OK, the trouble with stationary processes is that 545 00:31:26,430 --> 00:31:29,950 the sample functions aren't L2. 546 00:31:29,950 --> 00:31:33,940 That's not the serious problem with them, because all of us 547 00:31:33,940 --> 00:31:37,770 as engineers are willing to say, well, whether it's L2 or 548 00:31:37,770 --> 00:31:39,540 not I don't care. 549 00:31:39,540 --> 00:31:43,270 I'm just going to use it because it looks like a nice 550 00:31:43,270 --> 00:31:44,280 random process. 551 00:31:44,280 --> 00:31:46,870 It's not going to burn anything else or anything. 552 00:31:46,870 --> 00:31:48,950 So, so what? 553 00:31:48,950 --> 00:31:53,480 But the serious problem here is that it's very difficult to 554 00:31:53,480 --> 00:31:56,520 view stationary processes as 555 00:31:56,520 --> 00:32:00,020 approximations to real processes. 556 00:32:00,020 --> 00:32:03,980 I mean, we've already said that you can't have a random 557 00:32:03,980 --> 00:32:06,530 process which is running merrily away 558 00:32:06,530 --> 00:32:09,780 before the Big Bang. 559 00:32:09,780 --> 00:32:12,360 And you can't have something that's going to keep on 560 00:32:12,360 --> 00:32:17,650 running along merrily after we destroy ourselves, which is 561 00:32:17,650 --> 00:32:20,020 probably sooner in the future than the Big 562 00:32:20,020 --> 00:32:23,370 Bang was in the past. 
563 00:32:23,370 --> 00:32:29,530 But anyway, this definition of stationarity does not give us 564 00:32:29,530 --> 00:32:32,390 any way to approximate this. 565 00:32:32,390 --> 00:32:35,520 Namely, if a process is stationary, it is either 566 00:32:35,520 --> 00:32:40,140 identically zero, or every sample function with 567 00:32:40,140 --> 00:32:42,600 probability one has infinite energy in it. 568 00:32:45,610 --> 00:32:47,170 In other words, they keep running on forever. 569 00:32:47,170 --> 00:32:50,230 They keep building up energy as they go. 570 00:32:50,230 --> 00:32:53,670 And as far as the definitions we have are concerned, there's 571 00:32:53,670 --> 00:32:59,400 no way to say large negative times and large positive times 572 00:32:59,400 --> 00:33:01,840 are unimportant. 573 00:33:01,840 --> 00:33:04,680 If you're going to use mathematical things, though 574 00:33:04,680 --> 00:33:07,780 you're using them as approximations, you really 575 00:33:07,780 --> 00:33:11,780 have to consider what they're approximations to. 576 00:33:11,780 --> 00:33:14,130 So that's why I'm going to develop this idea of 577 00:33:14,130 --> 00:33:17,750 effectively wide-sense stationary or effectively 578 00:33:17,750 --> 00:33:20,120 stationary. 579 00:33:20,120 --> 00:33:20,890 OK. 580 00:33:20,890 --> 00:33:25,880 So a zero-mean process is effectively stationary or 581 00:33:25,880 --> 00:33:28,720 effectively wide-sense stationary, which is what I'm 582 00:33:28,720 --> 00:33:34,260 primarily interested in, within limits, within 583 00:33:34,260 --> 00:33:38,210 some wide interval of time, minus T0 to plus T0. 584 00:33:38,210 --> 00:33:42,440 I'm thinking here of choosing T0 to be enormous. 585 00:33:42,440 --> 00:33:47,310 Namely if you build a piece of equipment, you would have 586 00:33:47,310 --> 00:33:51,820 minus T0 to plus T0 include the amount of time that the 587 00:33:51,820 --> 00:33:54,400 equipment was running. 
588 00:33:54,400 --> 00:33:59,200 So we'll say it's effectively stationary within these limits 589 00:33:59,200 --> 00:34:02,960 if the joint probability assignment, or the covariance 590 00:34:02,960 --> 00:34:05,600 matrix if we're talking about effectively wide-sense 591 00:34:05,600 --> 00:34:11,690 stationary, for t1 up to t sub k is the same as that for t1 592 00:34:11,690 --> 00:34:15,410 plus tau up to tk plus tau. 593 00:34:15,410 --> 00:34:23,310 In other words I have this big interval, and I have a bunch 594 00:34:23,310 --> 00:34:24,560 of times here. 595 00:34:29,710 --> 00:34:30,440 t sub k. 596 00:34:30,440 --> 00:34:37,280 And I have a bunch of times over here, t1 plus tau up to 597 00:34:37,280 --> 00:34:41,230 tk plus tau. 598 00:34:41,230 --> 00:34:44,600 And this set of times don't have to be disjoint from that 599 00:34:44,600 --> 00:34:44,940 set of times. 600 00:34:44,940 --> 00:34:48,830 And I have this time over here, minus t0, and I have 601 00:34:48,830 --> 00:34:52,500 this time way out here which is plus t0. 602 00:34:52,500 --> 00:34:55,160 And what I'm saying is, I'm going to call this function 603 00:34:55,160 --> 00:34:59,460 wide-sense stationary if, when we truncate it to minus t0 to 604 00:34:59,460 --> 00:35:04,300 plus t0, it is stationary as far as all 605 00:35:04,300 --> 00:35:06,480 those times are concerned. 606 00:35:06,480 --> 00:35:09,790 In other words, we just want to ignore what happens before 607 00:35:09,790 --> 00:35:13,410 minus t0 and what happens after plus t0. 608 00:35:13,410 --> 00:35:14,980 You have to be able to do that. 609 00:35:14,980 --> 00:35:18,930 Because if you can't talk about the process in that way, 610 00:35:18,930 --> 00:35:22,510 you can't talk about the process at all. 611 00:35:22,510 --> 00:35:26,750 The only thing you can do is view the noise over some 612 00:35:26,750 --> 00:35:28,760 finite interval. 613 00:35:28,760 --> 00:35:30,090 OK. 
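[On a grid, the definition above says that shifting a whole set of times by tau, with everything staying inside minus T0 to plus T0, leaves the covariance matrix of those samples unchanged. A sketch, assuming NumPy; the covariance sinc(t minus tau) is a stand-in example, not from the lecture.]

```python
import numpy as np

def cov_matrix(times):
    # covariance matrix of Z at a set of times, for K_Z(t, tau) = sinc(t - tau)
    times = np.asarray(times)
    return np.sinc(times[:, None] - times[None, :])

t_set = [-3.0, -1.2, 0.4, 2.5]
shift = 1.7                       # both sets of times stay inside [-T0, T0]
K1 = cov_matrix(t_set)
K2 = cov_matrix([s + shift for s in t_set])

# for a (wide-sense) stationary process the two matrices agree
print(float(np.max(abs(K1 - K2))))    # essentially zero
```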
614 00:35:30,090 --> 00:35:33,880 And as far as the covariance matrix for wide-sense 615 00:35:33,880 --> 00:35:39,120 stationary, well, it's the same definition. 616 00:35:39,120 --> 00:35:42,590 So we're going to truncate the process and deal with that. 617 00:35:45,590 --> 00:35:53,800 For effectively stationary or effectively wide-sense 618 00:35:53,800 --> 00:35:57,690 stationary, I want to view the process as being truncated to 619 00:35:57,690 --> 00:35:59,740 minus t0 to plus t0. 620 00:35:59,740 --> 00:36:01,780 We have this process which might or might not be 621 00:36:01,780 --> 00:36:02,780 stationary. 622 00:36:02,780 --> 00:36:05,150 I just truncate it and I say I'm only going to look at this 623 00:36:05,150 --> 00:36:07,910 finite segment of it. 624 00:36:07,910 --> 00:36:13,410 I'm going to define this one variable covariance function 625 00:36:13,410 --> 00:36:18,260 as kz of t1 and t2. 626 00:36:18,260 --> 00:36:28,330 And kz of t1 and t2 is going to be -- blah. 627 00:36:28,330 --> 00:36:33,930 The single variable covariance function is defined as: kz tilde of 628 00:36:33,930 --> 00:36:38,460 t1 minus t2 is equal to the actual covariance function 629 00:36:38,460 --> 00:36:41,640 evaluated with one argument at t1 and the 630 00:36:41,640 --> 00:36:43,120 other argument at t2. 631 00:36:43,120 --> 00:36:46,080 And I want this to hold true for all t1 and 632 00:36:46,080 --> 00:36:48,650 all t2 in this interval. 633 00:36:48,650 --> 00:36:51,130 And this square here gives a picture of what 634 00:36:51,130 --> 00:36:53,200 we're talking about. 635 00:36:53,200 --> 00:36:56,310 If you look at the square in the notes, unfortunately all 636 00:36:56,310 --> 00:37:00,870 the diagonal lines do not appear because of the bizarre 637 00:37:00,870 --> 00:37:03,400 characteristics of LaTeX. 638 00:37:03,400 --> 00:37:07,600 LaTeX is better than most programs, but the graphics in 639 00:37:07,600 --> 00:37:09,670 it are awful. 
640 00:37:09,670 --> 00:37:13,790 But anyway, here it is drawn properly. 641 00:37:13,790 --> 00:37:20,200 And along this line, t minus tau is constant. 642 00:37:20,200 --> 00:37:24,860 So this is the line over which we're insisting that kz of t1 643 00:37:24,860 --> 00:37:26,960 and t2 be constant. 644 00:37:26,960 --> 00:37:33,420 So for a random process to be wide-sense stationary, what 645 00:37:33,420 --> 00:37:37,540 we're insisting is that within these limits, minus t0 to plus 646 00:37:37,540 --> 00:37:43,840 t0, the covariance function is constant along each of these 647 00:37:43,840 --> 00:37:45,970 lines along here. 648 00:37:45,970 --> 00:37:49,620 So it doesn't depend on both t1 and t2. 649 00:37:49,620 --> 00:37:52,560 It only depends on the difference between them. 650 00:37:52,560 --> 00:37:56,260 Which is what we require for stationarity to start with. 651 00:37:56,260 --> 00:37:58,630 If you don't like the idea of effectively wide-sense 652 00:37:58,630 --> 00:38:02,360 stationary, just truncate the process and say, well if it's 653 00:38:02,360 --> 00:38:05,100 stationary, this has to be satisfied. 654 00:38:05,100 --> 00:38:08,990 It has to be constant along these lines. 655 00:38:08,990 --> 00:38:12,600 The peculiar thing about this is that the single variable 656 00:38:12,600 --> 00:38:18,960 covariance function does not run from minus t0 to plus t0. 657 00:38:18,960 --> 00:38:24,350 It runs from minus 2T0 to plus 2T0. 658 00:38:24,350 --> 00:38:28,580 So that's a little bizarre and a little unpleasant. 659 00:38:28,580 --> 00:38:32,280 And it's unfortunately the way things are. 660 00:38:32,280 --> 00:38:36,270 And it also says that this one variable covariance function 661 00:38:36,270 --> 00:38:41,920 is not always the same as the covariance evaluated at t 662 00:38:41,920 --> 00:38:44,100 minus tau and 0. 
663 00:38:44,100 --> 00:38:47,960 Namely, it's not the same because t minus tau might be 664 00:38:47,960 --> 00:38:51,420 considerably larger than t0. 665 00:38:51,420 --> 00:38:55,470 And therefore, this covariance is zero, if we truncated the 666 00:38:55,470 --> 00:38:58,420 process, and this covariance is not zero. 667 00:38:58,420 --> 00:39:03,440 In other words, for these points up here, and in fact 668 00:39:03,440 --> 00:39:08,150 this point in particular -- well, I guess the best thing 669 00:39:08,150 --> 00:39:12,200 to look at is points along here, for example. 670 00:39:12,200 --> 00:39:21,560 Points along here, what we insist on is that, for fixed t minus tau, 671 00:39:21,560 --> 00:39:27,970 k sub z of t and tau is constant along this line. 672 00:39:27,970 --> 00:39:34,570 But we don't insist on kz of t minus tau, which is some 673 00:39:34,570 --> 00:39:41,910 quantity bigger than t0 along this line, being the same as 674 00:39:41,910 --> 00:39:44,220 this quantity here. 675 00:39:44,220 --> 00:39:49,120 OK, so aside from that little peculiarity, this is the same 676 00:39:49,120 --> 00:39:52,290 thing that you would expect from just taking a function 677 00:39:52,290 --> 00:39:54,040 and truncating it. 678 00:39:54,040 --> 00:39:57,660 There's nothing else that's peculiar going on there. 679 00:40:00,830 --> 00:40:05,180 OK, so let's see if we can do anything with this idea. 680 00:40:05,180 --> 00:40:08,150 And the first thing we want to do is to say, how about these 681 00:40:08,150 --> 00:40:11,810 linear functionals we've been talking about? 682 00:40:11,810 --> 00:40:13,850 Why do I keep talking about linear 683 00:40:13,850 --> 00:40:16,770 functionals and filtering? 684 00:40:16,770 --> 00:40:24,230 Because any time you take a noise process and you receive 685 00:40:24,230 --> 00:40:26,920 it, you're going to start filtering it. 
686 00:40:26,920 --> 00:40:28,410 You're going to start taking linear 687 00:40:28,410 --> 00:40:30,280 functionals of it. 688 00:40:30,280 --> 00:40:33,830 Namely all the processing you do is going to start with 689 00:40:33,830 --> 00:40:37,390 finding random variables in this particular way. 690 00:40:37,390 --> 00:40:39,650 And when you look at the covariance between two of 691 00:40:39,650 --> 00:40:45,660 them, what you find is the same thing we found last time. 692 00:40:45,660 --> 00:40:52,140 Expected value of Vi times Vj is equal to the integral of gi 693 00:40:52,140 --> 00:40:56,690 of t times the covariance function evaluated at t and 694 00:40:56,690 --> 00:41:00,460 tau, times gj of tau. 695 00:41:00,460 --> 00:41:03,620 All of this is for real processes 696 00:41:03,620 --> 00:41:05,510 and for real functions. 697 00:41:05,510 --> 00:41:11,060 Because noise random processes are in fact real. 698 00:41:11,060 --> 00:41:14,870 I keep trying to alternate between making all of this 699 00:41:14,870 --> 00:41:17,980 complex and making it real. 700 00:41:17,980 --> 00:41:19,600 Both have their advantages. 701 00:41:19,600 --> 00:41:23,020 But at least in the present version of the notes, 702 00:41:23,020 --> 00:41:27,590 everything is real, concerned with random processes. 703 00:41:27,590 --> 00:41:29,420 So I have this function here. 704 00:41:29,420 --> 00:41:35,625 If gj of t is zero for the magnitude of t greater than T0, what's going 705 00:41:35,625 --> 00:41:37,420 to happen here then? 706 00:41:37,420 --> 00:41:40,100 I'm evaluating this where this is a double 707 00:41:40,100 --> 00:41:45,310 integral over what region? 708 00:41:45,310 --> 00:41:49,810 It's a double integral over the region where t is 709 00:41:49,810 --> 00:41:54,550 constrained to minus T0 to plus T0, and tau is 710 00:41:54,550 --> 00:41:58,380 constrained to the interval minus T0 to plus T0. 
711 00:41:58,380 --> 00:42:02,700 In other words, I can evaluate this expected value, this 712 00:42:02,700 --> 00:42:07,060 covariance, simply by looking at this box here. 713 00:42:07,060 --> 00:42:09,860 If I know what the random process is doing within this 714 00:42:09,860 --> 00:42:14,480 box, I can evaluate this thing. 715 00:42:14,480 --> 00:42:18,120 So that if the process is effectively stationary within 716 00:42:18,120 --> 00:42:23,450 minus t0 to plus t0, I don't need to know anything else to 717 00:42:23,450 --> 00:42:27,740 evaluate all of these linear functionals. 718 00:42:27,740 --> 00:42:30,280 But that's exactly the way that we're choosing this 719 00:42:30,280 --> 00:42:32,680 quantity, T0. 720 00:42:32,680 --> 00:42:36,290 Namely, we're choosing T0 to be so large that everything 721 00:42:36,290 --> 00:42:39,670 we're interested in happens inside of there. 722 00:42:39,670 --> 00:42:42,410 So we're saying that all of these linear functionals for 723 00:42:42,410 --> 00:42:46,300 effective stationarity are just defined for what happens 724 00:42:46,300 --> 00:42:47,550 inside this interval. 725 00:42:51,970 --> 00:42:55,310 So if Z of t is effectively wide-sense stationary within 726 00:42:55,310 --> 00:42:58,400 this interval, you make the intervals large as you want or 727 00:42:58,400 --> 00:42:59,420 as small as you want. 728 00:42:59,420 --> 00:43:03,260 The real quantity is that these functions g have to be 729 00:43:03,260 --> 00:43:07,820 constrained within minus T0 to plus T0. 730 00:43:07,820 --> 00:43:12,340 Then Vi, Vj, are jointly Gaussian. 731 00:43:12,340 --> 00:43:17,470 In other words, you can talk about jointly Gaussian without 732 00:43:17,470 --> 00:43:20,850 talking at all about whether the process is really 733 00:43:20,850 --> 00:43:22,730 stationary or not. 
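[A small sketch of this point (assuming NumPy; the weights g1, g2 and the sinc covariance are made-up examples, not from the lecture): if the g's vanish outside minus T0 to plus T0, then throwing away everything the covariance does outside the box changes the linear-functional covariance not at all.]

```python
import numpy as np

dt = 0.05
t = np.arange(-20.0, 20.0, dt)
K = np.sinc(t[:, None] - t[None, :])      # stand-in covariance on the big grid

T0 = 5.0
inside = abs(t) <= T0

# two linear-functional weights, both zero outside [-T0, T0]
g1 = np.where(inside, np.exp(-t ** 2), 0.0)
g2 = np.where(inside, np.cos(t), 0.0)

def cov(gi, gj, Kmat):
    # E[Vi Vj] = double integral of gi(t) K(t, tau) gj(tau), as a Riemann sum
    return gi @ Kmat @ gj * dt * dt

# forget everything K does outside the box [-T0, T0] x [-T0, T0]...
K_boxed = K * np.outer(inside, inside)

# ...and the covariance of the linear functionals is unchanged
print(cov(g1, g2, K), cov(g1, g2, K_boxed))
```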
734 00:43:22,730 --> 00:43:25,770 You can evaluate this for everything within these 735 00:43:25,770 --> 00:43:29,220 limits, strictly in terms of what's going on 736 00:43:29,220 --> 00:43:30,470 within those limits. 737 00:43:37,190 --> 00:43:38,460 That one was easy. 738 00:43:38,460 --> 00:43:39,880 The next one is a little harder. 739 00:43:42,680 --> 00:43:45,050 We have our linear filter now. 740 00:43:45,050 --> 00:43:49,050 We have the noise process going into a linear filter. 741 00:43:49,050 --> 00:43:50,910 And we have some kind of process 742 00:43:50,910 --> 00:43:52,800 coming out of the filter. 743 00:43:52,800 --> 00:43:56,450 Again, we would like to say we couldn't care less about what 744 00:43:56,450 --> 00:44:01,950 this process is doing outside of these humongous limits. 745 00:44:01,950 --> 00:44:06,830 And the way we do that is to say -- 746 00:44:06,830 --> 00:44:10,580 well, let's forget about that for the time being. 747 00:44:10,580 --> 00:44:15,510 Let us do what you did as an undergraduate, before you 748 00:44:15,510 --> 00:44:20,810 acquired wisdom, and say let's just integrate and not worry 749 00:44:20,810 --> 00:44:24,170 at all about changing orders of 750 00:44:24,170 --> 00:44:26,370 integration or anything else. 751 00:44:26,370 --> 00:44:31,330 Let's just run along and do everything as 752 00:44:31,330 --> 00:44:34,530 carelessly as we want to. 753 00:44:34,530 --> 00:44:39,370 The covariance function for what comes out of this filter 754 00:44:39,370 --> 00:44:42,690 is something we already evaluated. 755 00:44:42,690 --> 00:44:47,650 Namely, what you want to look at is v of t and v of tau. v 756 00:44:47,650 --> 00:44:52,620 of t is this integral here. v of tau is this integral here 757 00:44:52,620 --> 00:44:55,870 with tau substituted for t. 
758 00:44:55,870 --> 00:44:58,360 So when you put both of these together and you take the 759 00:44:58,360 --> 00:45:02,220 expected value, and then you bring the expected value in 760 00:45:02,220 --> 00:45:04,410 through the integrals -- 761 00:45:04,410 --> 00:45:07,870 who knows whether you can do that or not, but we'll do it 762 00:45:07,870 --> 00:45:11,740 anyway -- what we're going to get is a double integral of 763 00:45:11,740 --> 00:45:16,350 the impulse response, evaluated at t minus t1, times 764 00:45:16,350 --> 00:45:21,410 the covariance function evaluated at t1 and t2, times 765 00:45:21,410 --> 00:45:25,970 the function that we're dealing with, h, evaluated at 766 00:45:25,970 --> 00:45:27,550 tau minus t2. 767 00:45:27,550 --> 00:45:31,710 This is what you got just by taking the expected value of v 768 00:45:31,710 --> 00:45:34,680 of t times v of tau. 769 00:45:34,680 --> 00:45:36,250 And you just write it out. 770 00:45:36,250 --> 00:45:39,580 You write out the expected value of these two integrals. 771 00:45:39,580 --> 00:45:41,240 It's what we did last time. 772 00:45:41,240 --> 00:45:44,250 It's what's in the notes if you don't see how to do it. 773 00:45:44,250 --> 00:45:47,220 You take the expected value inside, and 774 00:45:47,220 --> 00:45:49,350 this is what you get. 775 00:45:49,350 --> 00:45:52,510 Well now we start playing these games. 776 00:45:52,510 --> 00:45:57,620 And the first game to play is, we would like to use our 777 00:45:57,620 --> 00:46:00,000 notion of stationarity. 778 00:46:00,000 --> 00:46:04,790 So in place of kz of t1 and t2, we want to substitute the 779 00:46:04,790 --> 00:46:11,880 single variable form, kz tilde of t1 minus t2. 780 00:46:11,880 --> 00:46:15,520 But we don't like the t1 minus t2 in there, so we substitute 781 00:46:15,520 --> 00:46:18,060 phi for t1 minus t2. 782 00:46:18,060 --> 00:46:22,460 And then what we get is the double integral 783 00:46:22,460 --> 00:46:25,000 of kz of phi now. 
784 00:46:25,000 --> 00:46:29,760 And we have gotten rid of t1 by this substitution, so we 785 00:46:29,760 --> 00:46:33,530 wind up with the integral over phi and over t2. 786 00:46:36,900 --> 00:46:38,530 Next thing we want to do, we want to get 787 00:46:38,530 --> 00:46:40,850 rid of the t2 also. 788 00:46:40,850 --> 00:46:45,880 So we're going to let mu equal t2 minus tau. 789 00:46:45,880 --> 00:46:49,610 When we were starting here, we had four variables, t and tau 790 00:46:49,610 --> 00:46:52,460 and t1 and t2. 791 00:46:52,460 --> 00:46:54,840 We're not getting rid of the t and we're not getting rid of 792 00:46:54,840 --> 00:46:58,440 the tau, because that's what we're trying to calculate. 793 00:46:58,440 --> 00:47:01,290 But we can play around with the t1 and t2 as much as we 794 00:47:01,290 --> 00:47:06,180 want, and we can substitute variables of integration here. 795 00:47:06,180 --> 00:47:09,110 So mu is going to be t2 minus tau. 796 00:47:09,110 --> 00:47:11,640 That's going to let us get rid of the t2. 797 00:47:11,640 --> 00:47:14,670 And we wind up with this form here when we do that 798 00:47:14,670 --> 00:47:15,480 substitution. 799 00:47:15,480 --> 00:47:20,080 It's just that h of tau minus t2 becomes h of minus mu 800 00:47:20,080 --> 00:47:22,170 instead of h of plus mu. 801 00:47:22,170 --> 00:47:23,670 I don't know what that means. 802 00:47:23,670 --> 00:47:25,350 I don't care what it means. 803 00:47:25,350 --> 00:47:29,350 The only thing I'm interested in now is, you look at where 804 00:47:29,350 --> 00:47:32,950 the t's and the taus are, and the only place that t's and 805 00:47:32,950 --> 00:47:35,260 taus occur are here. 806 00:47:35,260 --> 00:47:37,360 And they occur together. 807 00:47:37,360 --> 00:47:41,530 And the only thing this is a function of is t minus tau. 808 00:47:41,530 --> 00:47:44,100 I don't know what kind of function it is, but the only 809 00:47:44,100 --> 00:47:46,200 thing we're dealing with is t minus tau. 
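[You can also check this conclusion numerically, without any changes of variables. A sketch (assuming NumPy, a made-up triangular impulse response supported on minus A to A, and K_Z = sinc as a stand-in covariance): discretize the double integral as a matrix product and compare entries along a diagonal, where t minus tau is fixed.]

```python
import numpy as np

dt = 0.05
t = np.arange(-10.0, 10.0, dt)                 # 400 grid points
Kz = np.sinc(t[:, None] - t[None, :])          # WSS input: K_Z = sinc(t1 - t2)

# finite impulse response, zero for |t| > A = 1 (a triangle, just for example)
A = 1.0
def h(x):
    return np.where(abs(x) <= A, 1.0 - abs(x) / A, 0.0)

H = h(t[:, None] - t[None, :]) * dt            # discretized filtering operator

# K_V(t, tau) = double integral of h(t - t1) K_Z(t1 - t2) h(tau - t2)
Kv = H @ Kz @ H.T

# away from the edges of the grid, K_V depends only on t minus tau:
print(Kv[200, 210], Kv[150, 160], Kv[100, 110])    # all (essentially) equal
```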
810 00:47:46,200 --> 00:47:48,780 I don't know whether these integrals exist or not. 811 00:47:48,780 --> 00:47:51,960 I can't make any very good argument that they can. 812 00:47:51,960 --> 00:47:55,780 But if they do exist, it's a function only of t minus tau. 813 00:47:55,780 --> 00:47:58,180 So aside from the pseudo-mathematics we've been 814 00:47:58,180 --> 00:48:02,670 using, V of t is wide-sense stationary. 815 00:48:02,670 --> 00:48:05,210 OK? 816 00:48:05,210 --> 00:48:07,620 Now if you didn't follow all this integration, 817 00:48:07,620 --> 00:48:09,680 don't worry about it. 818 00:48:09,680 --> 00:48:11,600 It's all just substituting integrals 819 00:48:11,600 --> 00:48:14,670 and integrating away. 820 00:48:14,670 --> 00:48:18,680 And it's something you can do. 821 00:48:18,680 --> 00:48:22,230 Some people just take five minutes to do it, some people 822 00:48:22,230 --> 00:48:23,480 10 minutes to do it. 823 00:48:26,240 --> 00:48:29,430 Now we want to put the effective stationarity in it. 824 00:48:29,430 --> 00:48:31,950 Because again, we don't know what this means. 825 00:48:31,950 --> 00:48:34,050 We can't interpret what it means in terms of 826 00:48:34,050 --> 00:48:36,390 stationarity. 827 00:48:36,390 --> 00:48:40,660 So we're going to assume that our filter, h of t, has a 828 00:48:40,660 --> 00:48:43,740 finite duration impulse response. 829 00:48:43,740 --> 00:48:48,340 So I'm going to assume that it's zero when the magnitude 830 00:48:48,340 --> 00:48:51,550 of t is greater than A. Again, I'm assuming a filter which 831 00:48:51,550 --> 00:48:55,680 starts before zero and runs until after zero. 832 00:48:55,680 --> 00:48:58,930 Because again, what I'm assuming is the receiver 833 00:48:58,930 --> 00:49:01,530 timing is different from the transmitter timing. 834 00:49:01,530 --> 00:49:03,850 And I've set the receiver timing at 835 00:49:03,850 --> 00:49:05,610 some convenient place. 
836 00:49:05,610 --> 00:49:09,460 So my filter starts doing things before anything hits 837 00:49:09,460 --> 00:49:13,190 it, just because of this change of time. 838 00:49:13,190 --> 00:49:17,630 So v of t, then, is equal to the integral of the process Z 839 00:49:17,630 --> 00:49:21,370 of t1 times h of t minus t1. 840 00:49:21,370 --> 00:49:25,360 This is a linear functional again. 841 00:49:25,360 --> 00:49:30,250 This depends only on what Z of t1 is for t minus A less than 842 00:49:30,250 --> 00:49:35,470 or equal to t1, less than or equal to t plus A. Because I'm 843 00:49:35,470 --> 00:49:40,070 doing this integration here, this function here is 0 844 00:49:40,070 --> 00:49:42,160 outside of that interval. 845 00:49:42,160 --> 00:49:45,400 I'm assuming that h of t is equal to 0 outside of this 846 00:49:45,400 --> 00:49:47,350 finite interval. 847 00:49:47,350 --> 00:49:52,870 And this is just a shift of t1 on it. 848 00:49:52,870 --> 00:49:58,510 So this integrand is going to be nonzero from when t1 849 00:49:58,510 --> 00:50:05,170 is equal to t minus A to when t1 is equal to t plus A. Every 850 00:50:05,170 --> 00:50:09,930 place else, this is equal to zero. 851 00:50:09,930 --> 00:50:17,580 So V of t, the whole process depends -- and if we only want 852 00:50:17,580 --> 00:50:22,780 to evaluate it within minus T0 plus A to T0 minus A -- in 853 00:50:22,780 --> 00:50:29,990 other words we look at 10 to the eighth years and we 854 00:50:29,990 --> 00:50:33,170 subtract off a microsecond from that region. 855 00:50:33,170 --> 00:50:36,890 So now we're saying we want to see whether this is wide-sense 856 00:50:36,890 --> 00:50:41,020 stationary within 10 to the eighth years minus a 857 00:50:41,020 --> 00:50:44,980 microsecond, minus that to plus that. 858 00:50:44,980 --> 00:50:48,420 So that's what we're dealing with here. 
859 00:50:48,420 --> 00:50:54,770 V of t in this region depends only on Z of t for t in the 860 00:50:54,770 --> 00:50:57,580 interval minus T0 to plus T0. 861 00:50:57,580 --> 00:51:00,520 In other words, when you're calculating V of t for any 862 00:51:00,520 --> 00:51:04,520 time in this interval, you're only interested in Z of t, 863 00:51:04,520 --> 00:51:08,900 which is diddling within plus or minus A of that. 864 00:51:08,900 --> 00:51:10,410 Because that's the only place where the 865 00:51:10,410 --> 00:51:13,820 filter is doing anything. 866 00:51:13,820 --> 00:51:17,840 So V of t depends only on Z of t. 867 00:51:17,840 --> 00:51:20,790 And here, I'm going to assume that Z of t is wide-sense 868 00:51:20,790 --> 00:51:23,700 stationary within those limits. 869 00:51:23,700 --> 00:51:26,340 And therefore I don't have to worry about what Z of t is 870 00:51:26,340 --> 00:51:28,190 doing outside of those limits. 871 00:51:32,090 --> 00:51:37,140 So if the sample functions of Z of t are L2 within minus T0 872 00:51:37,140 --> 00:51:42,030 to plus T0, then the sample functions of V of t are L2 873 00:51:42,030 --> 00:51:46,010 within minus T0 plus A to T0 minus A. 874 00:51:46,010 --> 00:51:48,610 Now this is not obvious. 875 00:51:48,610 --> 00:51:50,860 And there's a proof in the notes of this, which goes 876 00:51:50,860 --> 00:51:54,550 through all this L1 stuff and L2 stuff that you've been 877 00:51:54,550 --> 00:51:56,030 struggling with. 878 00:51:56,030 --> 00:52:00,370 It's not a difficult proof, but if you don't dig L2 879 00:52:00,370 --> 00:52:04,810 theory, be my guest and don't worry about it. 880 00:52:04,810 --> 00:52:08,120 If you do want to really understand exactly what's 881 00:52:08,120 --> 00:52:12,050 going on here, be my guest and do worry about it, and go 882 00:52:12,050 --> 00:52:12,910 through it. 883 00:52:12,910 --> 00:52:15,040 But it's not really an awful argument. 884 00:52:15,040 --> 00:52:18,110 But this is what happens. 
885 00:52:18,110 --> 00:52:21,470 So what this says is we can now view wide-sense 886 00:52:21,470 --> 00:52:27,750 stationarity and stationarity, at least for Gaussian 887 00:52:27,750 --> 00:52:30,960 processes, as a limit of effectively wide-sense 888 00:52:30,960 --> 00:52:33,920 stationary processes. 889 00:52:33,920 --> 00:52:38,000 In other words, so long as I deal with filters whose 890 00:52:38,000 --> 00:52:42,180 impulse response is constrained in time, the only 891 00:52:42,180 --> 00:52:46,550 effect of that filtering is to reduce the interval over which 892 00:52:46,550 --> 00:52:51,200 the process is wide-sense stationary by the quantity A. 893 00:52:51,200 --> 00:52:54,870 And the quantity A is going to be very much smaller than T0, 894 00:52:54,870 --> 00:52:57,230 because we're going to choose T0 to be as large as 895 00:52:57,230 --> 00:52:58,760 we want it to be. 896 00:52:58,760 --> 00:53:01,170 Namely, that's always the thing that we do when we try 897 00:53:01,170 --> 00:53:03,620 to go through limiting arguments. 898 00:53:03,620 --> 00:53:07,030 So what we're saying here is, by using effective 899 00:53:07,030 --> 00:53:12,320 stationarity, we have managed to find a tool that lets us 900 00:53:12,320 --> 00:53:16,690 look at stationarity as a limit of effectively stationary 901 00:53:16,690 --> 00:53:21,690 processes as you let T0 become larger and larger. 902 00:53:21,690 --> 00:53:23,560 And we have our cake and we eat it too, here. 903 00:53:23,560 --> 00:53:27,150 Because at this point, all these functions we're dealing 904 00:53:27,150 --> 00:53:29,420 with, we can assume that they're L2. 905 00:53:29,420 --> 00:53:33,590 Because they're L2 for every T0 that we want to look at. 906 00:53:33,590 --> 00:53:36,640 So we don't have to worry about that anymore. 907 00:53:36,640 --> 00:53:38,130 So we have these processes. 
908 00:53:38,130 --> 00:53:41,140 Now we will just call them stationary or wide-sense 909 00:53:41,140 --> 00:53:42,310 stationary. 910 00:53:42,310 --> 00:53:45,630 Because we know that what we're really talking about is 911 00:53:45,630 --> 00:53:49,040 a limit as T0 goes to infinity of things that 912 00:53:49,040 --> 00:53:50,290 make perfect sense. 913 00:53:54,750 --> 00:53:59,760 So suddenly the mystery, or the pseudo-mystery has been 914 00:53:59,760 --> 00:54:01,570 taken out of this, I hope. 915 00:54:05,100 --> 00:54:09,340 So you have a wide-sense stationary process with a 916 00:54:09,340 --> 00:54:13,480 covariance now of k sub z of tau. 917 00:54:13,480 --> 00:54:16,140 In other words, it's effectively stationary over 918 00:54:16,140 --> 00:54:20,680 some very large interval, and I now have this covariance 919 00:54:20,680 --> 00:54:24,100 function, which is a function of one variable. 920 00:54:24,100 --> 00:54:30,770 I want to define the spectral density of this process as a 921 00:54:30,770 --> 00:54:33,740 Fourier transform of k sub z. 922 00:54:33,740 --> 00:54:38,810 In other words, as soon as I get a one variable covariance 923 00:54:38,810 --> 00:54:42,460 function, I can talk about its Fourier transform. 924 00:54:42,460 --> 00:54:45,880 At least assuming that it's L2 or something. 925 00:54:45,880 --> 00:54:49,270 And let's forget about that for the time being, because 926 00:54:49,270 --> 00:54:50,360 that all works. 927 00:54:50,360 --> 00:54:52,860 We want to take the Fourier transform of that. 928 00:54:52,860 --> 00:54:56,230 I'm going to call that spectral density. 929 00:54:56,230 --> 00:55:00,235 Now if I call that spectral density, the thing you ought 930 00:55:00,235 --> 00:55:04,010 to want to know is, why am I calling that spectral density? 931 00:55:04,010 --> 00:55:06,680 What does it have to do with anything you might be 932 00:55:06,680 --> 00:55:08,700 interested in? 
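[Editor's note: the definition above -- spectral density as the Fourier transform of a one-variable covariance function -- can be made concrete with a standard textbook example, not from the lecture; all numbers are illustrative. For k_z(tau) = exp(-|tau|), the transform has the known closed form S_z(f) = 2 / (1 + (2*pi*f)^2), which a direct numerical integration reproduces.]

```python
import numpy as np

# Illustrative covariance k_z(tau) = exp(-|tau|); its Fourier transform is
# S_z(f) = 2 / (1 + (2*pi*f)^2).  Check one frequency numerically.
dtau = 0.01
tau = np.arange(-50.0, 50.0, dtau)
k = np.exp(-np.abs(tau))

fr = 0.3                                     # frequency at which to evaluate S_z
# k is real and even, so the transform reduces to a cosine integral.
S_num = (k * np.cos(2.0 * np.pi * fr * tau)).sum() * dtau
S_exact = 2.0 / (1.0 + (2.0 * np.pi * fr) ** 2)
print(S_num, S_exact)                        # the two agree closely
```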
933 00:55:08,700 --> 00:55:11,530 Well, we look at it and we say well, what might it have to do 934 00:55:11,530 --> 00:55:13,360 with anything? 935 00:55:13,360 --> 00:55:17,270 Well, the first thing we're going to try is to say, what 936 00:55:17,270 --> 00:55:19,460 happens to these linear functionals we've 937 00:55:19,460 --> 00:55:20,760 been talking about? 938 00:55:20,760 --> 00:55:24,460 We have a linear functional, V. It's an integral of g of t 939 00:55:24,460 --> 00:55:26,740 times Z of t dt. 940 00:55:26,740 --> 00:55:30,020 We would like to be able to talk about the expected value 941 00:55:30,020 --> 00:55:31,320 of V squared. 942 00:55:31,320 --> 00:55:33,700 We're talking about zero-mean things, so we don't care about 943 00:55:33,700 --> 00:55:35,370 the mean; it's zero. 944 00:55:35,370 --> 00:55:38,910 So we'll just talk about the variance of any linear 945 00:55:38,910 --> 00:55:41,380 functional of this form. 946 00:55:41,380 --> 00:55:43,460 If I can talk about the variance of any linear 947 00:55:43,460 --> 00:55:47,310 functional for any g of t that I'm interested in, I can 948 00:55:47,310 --> 00:55:51,810 certainly say a lot about what this process is doing. 949 00:55:51,810 --> 00:55:54,640 So I want to find that variance. 950 00:55:54,640 --> 00:55:57,300 If I write this out, it's the same thing we've been 951 00:55:57,300 --> 00:55:58,510 writing out all along. 952 00:55:58,510 --> 00:56:03,360 It's the integral of g of t times this one-variable 953 00:56:03,360 --> 00:56:10,230 covariance function now, times g of tau d tau dt. 954 00:56:10,230 --> 00:56:13,820 When you look at this, and you say OK, what is this? 955 00:56:13,820 --> 00:56:19,130 It's an integral of g of t times some function of t. 956 00:56:19,130 --> 00:56:23,010 And that function of t is really the 957 00:56:23,010 --> 00:56:24,830 convolution of k sub z. 958 00:56:24,830 --> 00:56:26,910 This is a function now. 
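[Editor's note: the double-integral formula for the variance can be sanity-checked with a small simulation. This is my sketch, not the lecture's; the covariance exp(-|t - tau|), the choice of g, and the grid sizes are all illustrative assumptions. Discretizing turns the double integral into the quadratic form g^T K g, which a Monte Carlo estimate of E[V^2] should reproduce.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete stand-in: grid t_1..t_n, stationary covariance k(t - tau) = exp(-|t - tau|).
t = np.linspace(-3.0, 3.0, 40)
dt = t[1] - t[0]
K = np.exp(-np.abs(t[:, None] - t[None, :]))    # K_ij = k(t_i - t_j)
g = np.exp(-t ** 2)                             # an arbitrary real g(t)

# E[V^2] = integral integral g(t) k(t - tau) g(tau) dtau dt  ->  g^T K g dt^2
var_formula = float(g @ K @ g) * dt * dt

# Monte Carlo: draw Z ~ N(0, K), form V = sum_i g(t_i) Z_i dt, average V^2.
L = np.linalg.cholesky(K + 1e-9 * np.eye(t.size))
Z = L @ rng.standard_normal((t.size, 100_000))
V = g @ Z * dt
print(var_formula, float((V ** 2).mean()))      # agree to within a percent or two
```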
959 00:56:26,910 --> 00:56:29,800 It's not a random process anymore; it's just a function. 960 00:56:29,800 --> 00:56:36,250 That's a convolution of the function k tilde with g. 961 00:56:36,250 --> 00:56:41,770 So I can rewrite this as the integral of g of t times this 962 00:56:41,770 --> 00:56:43,100 convolution of t. 963 00:56:43,100 --> 00:56:46,500 And then I'm going to call the convolution theta of t, just 964 00:56:46,500 --> 00:56:47,870 to give it a name. 965 00:56:47,870 --> 00:56:54,180 So this variance is equal to g of t times this function theta 966 00:56:54,180 --> 00:57:00,050 of t, which is just the convolution of k with g. 967 00:57:00,050 --> 00:57:02,820 Well now I say, OK, I can use 968 00:57:02,820 --> 00:57:05,620 Parseval's relation on that. 969 00:57:05,620 --> 00:57:08,770 And what I get is this integral, if I look in the 970 00:57:08,770 --> 00:57:13,350 frequency domain now, as the integral of the Fourier 971 00:57:13,350 --> 00:57:18,330 transform of g of t, times the complex conjugate, theta 972 00:57:18,330 --> 00:57:21,240 star of f, df. 973 00:57:21,240 --> 00:57:25,570 I've cheated you just a little bit there because what 974 00:57:25,570 --> 00:57:28,290 I really want to talk about is -- 975 00:57:28,290 --> 00:57:32,060 I mean, Parseval's theorem relates this quantity to this 976 00:57:32,060 --> 00:57:35,700 quantity, where what appears is theta complex conjugate. 977 00:57:35,700 --> 00:57:38,690 But fortunately, theta is real. 978 00:57:38,690 --> 00:57:42,380 Because g is real and k is real. 979 00:57:42,380 --> 00:57:45,205 Because we're only dealing with real processes here, and 980 00:57:45,205 --> 00:57:47,050 we're only dealing with real filters. 981 00:57:47,050 --> 00:57:48,860 So this is real, this is real. 982 00:57:48,860 --> 00:57:53,700 So this integral is equal to this in the frequency domain. 
983 00:57:53,700 --> 00:58:01,030 And now, what is the Fourier transform of -- 984 00:58:01,030 --> 00:58:04,950 OK, theta of t is this convolution. 985 00:58:04,950 --> 00:58:08,790 When I take the Fourier transform of theta, what I get 986 00:58:08,790 --> 00:58:12,940 is the product of the Fourier transform of k times the 987 00:58:12,940 --> 00:58:14,540 Fourier transform of g. 988 00:58:14,540 --> 00:58:20,040 So in fact what I get is g hat of f times theta star of f. 989 00:58:20,040 --> 00:58:26,780 Which is g hat of f times its own complex conjugate, times k 990 00:58:26,780 --> 00:58:30,420 complex conjugate -- and it doesn't make any difference 991 00:58:30,420 --> 00:58:32,210 whether it's a complex conjugate or not, 992 00:58:32,210 --> 00:58:34,490 because it's real. 993 00:58:34,490 --> 00:58:38,710 So I wind up with the integral of the magnitude squared of g 994 00:58:38,710 --> 00:58:41,740 hat times this spectral density. 995 00:58:41,740 --> 00:58:44,160 Well at this point, we can interpret all sorts 996 00:58:44,160 --> 00:58:46,420 of things from this. 997 00:58:46,420 --> 00:58:49,140 And we now know that we can interpret this also in terms 998 00:58:49,140 --> 00:58:52,460 of effectively stationary things. 999 00:58:52,460 --> 00:58:54,900 So we don't have any problem with things going to infinity 1000 00:58:54,900 --> 00:58:55,940 or anything. 1001 00:58:55,940 --> 00:58:59,270 I mean, you can go through this argument carefully for 1002 00:58:59,270 --> 00:59:03,880 effectively stationary things, and everything works out fine. 1003 00:59:03,880 --> 00:59:08,610 So long as g of t is constrained in time. 1004 00:59:08,610 --> 00:59:13,245 Well now, I can choose this function any 1005 00:59:13,245 --> 00:59:15,980 way that I want to. 1006 00:59:15,980 --> 00:59:18,550 In other words, I can choose my function g of t in 1007 00:59:18,550 --> 00:59:20,070 any way I want to. 1008 00:59:20,070 --> 00:59:22,990 I can choose this function in any way I want to. 
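[Editor's note: the chain of steps just described -- the time-domain quadratic form equals the integral of |g hat(f)|^2 times S_z(f) -- can be verified numerically for a case with known transforms. These are my illustrative choices, not the lecture's: k(tau) = exp(-|tau|) with S_z(f) = 2/(1 + (2*pi*f)^2), and g(t) = exp(-t^2) with |g hat(f)|^2 = pi * exp(-2*pi^2*f^2).]

```python
import numpy as np

# Time domain: E[V^2] = integral integral g(t) k(t - tau) g(tau) dtau dt.
dt = 0.02
t = np.arange(-8.0, 8.0, dt)
g = np.exp(-t ** 2)
K = np.exp(-np.abs(t[:, None] - t[None, :]))
time_domain = float(g @ K @ g) * dt * dt

# Frequency domain: E[V^2] = integral |g_hat(f)|^2 S_z(f) df, using the
# closed forms |g_hat(f)|^2 = pi * exp(-2 pi^2 f^2), S_z(f) = 2/(1+(2 pi f)^2).
df = 0.002
f = np.arange(-8.0, 8.0, df)
g_hat_sq = np.pi * np.exp(-2.0 * np.pi ** 2 * f ** 2)
S = 2.0 / (1.0 + (2.0 * np.pi * f) ** 2)
freq_domain = float((g_hat_sq * S).sum()) * df
print(time_domain, freq_domain)                 # the two sides match
```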
1009 00:59:22,990 --> 00:59:25,910 So long as its inverse transform is real. 1010 00:59:25,910 --> 00:59:32,110 Which means g hat of f has to be equal to g hat of minus f 1011 00:59:32,110 --> 00:59:34,480 complex conjugate. 1012 00:59:34,480 --> 00:59:38,340 But aside from that, I can choose this to be anything. 1013 00:59:38,340 --> 00:59:42,240 And if I choose this to be very, very narrow band, what 1014 00:59:42,240 --> 00:59:43,840 is this doing? 1015 00:59:43,840 --> 00:59:48,410 It's saying that if I take a very narrow band g of t, like 1016 00:59:48,410 --> 00:59:53,000 a sinusoid truncated out to some very wide region, I 1017 00:59:53,000 --> 00:59:59,420 multiply that by this process, I integrate it out, and I look 1018 00:59:59,420 --> 01:00:00,620 at the variance. 1019 01:00:00,620 --> 01:00:03,210 What am I doing when I'm doing that? 1020 01:00:03,210 --> 01:00:08,320 I'm effectively filtering my process with a very, very 1021 01:00:08,320 --> 01:00:12,470 narrow band filter, and I'm asking, what's the variance of 1022 01:00:12,470 --> 01:00:14,260 the output? 1023 01:00:14,260 --> 01:00:18,250 So the variance of the output really is related to the 1024 01:00:18,250 --> 01:00:23,110 amount of energy in this process at that frequency. 1025 01:00:26,110 --> 01:00:29,050 That's exactly what the mathematics says here. 1026 01:00:29,050 --> 01:00:36,670 It says that s sub z of f is in fact the amount of noise 1027 01:00:36,670 --> 01:00:41,280 power per unit bandwidth at this frequency. 1028 01:00:41,280 --> 01:00:44,790 It's the only way you can interpret it. 
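[Editor's note: the narrowband argument above can be made concrete. This is my sketch; the center frequency, bandwidth, and spectral density are illustrative assumptions. When |g hat(f)|^2 is concentrated near plus and minus f0 and S_z is smooth there, the integral of |g hat|^2 S_z is approximately S_z(f0) times the integral of |g hat|^2 -- the output variance reads off the spectral density at f0.]

```python
import numpy as np

# Narrowband |g_hat(f)|^2: two bumps at +-f0 of width bw (illustrative numbers).
df = 0.001
f = np.arange(-10.0, 10.0, df)
f0, bw = 2.0, 0.02
g_hat_sq = np.exp(-((f - f0) / bw) ** 2) + np.exp(-((f + f0) / bw) ** 2)

S = 2.0 / (1.0 + (2.0 * np.pi * f) ** 2)        # an example spectral density

exact = float((g_hat_sq * S).sum()) * df
S_at_f0 = S[np.argmin(np.abs(f - f0))]
approx = S_at_f0 * float(g_hat_sq.sum()) * df   # S_z(f0) times the filter "energy"
print(exact, approx)                            # nearly identical for narrow bw
```

Shrinking bw tightens the agreement, which is the sense in which S_z(f0) is "power per unit bandwidth at this frequency."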
1029 01:00:44,790 --> 01:00:49,470 Because I can make this as narrow as I want to, so that 1030 01:00:49,470 --> 01:00:53,800 if there is any interpretation of something in this process 1031 01:00:53,800 --> 01:00:57,365 as being at one frequency rather than another frequency, 1032 01:00:57,365 --> 01:01:00,330 the only way I can interpret that is by filtering 1033 01:01:00,330 --> 01:01:02,010 the process. 1034 01:01:02,010 --> 01:01:04,850 This is saying that when you filter the process, what you 1035 01:01:04,850 --> 01:01:10,070 see at that bandwidth is how much power there is in the 1036 01:01:10,070 --> 01:01:12,770 process at that bandwidth. 1037 01:01:12,770 --> 01:01:17,300 So this is simply giving us an interpretation of spectral 1038 01:01:17,300 --> 01:01:20,300 density as power per unit bandwidth. 1039 01:01:22,960 --> 01:01:32,390 OK, let's go on with this. 1040 01:01:32,390 --> 01:01:35,830 If this spectral density is constant over all frequencies 1041 01:01:35,830 --> 01:01:39,200 of interest, we say that it's white. 1042 01:01:39,200 --> 01:01:41,650 Here's the answer to your question: what is white 1043 01:01:41,650 --> 01:01:44,660 Gaussian noise? 1044 01:01:44,660 --> 01:01:50,410 White Gaussian noise is a Gaussian process which has a 1045 01:01:50,410 --> 01:01:53,720 constant spectral density over the frequencies that we're 1046 01:01:53,720 --> 01:01:55,530 interested in. 1047 01:01:55,530 --> 01:01:59,900 So we're now looking at it in a certain band of 1048 01:01:59,900 --> 01:02:02,520 frequencies. 1049 01:02:02,520 --> 01:02:06,330 You know, if you think about this in more or less practical 1050 01:02:06,330 --> 01:02:09,260 terms, suppose you're building a wireless network, and it's 1051 01:02:09,260 --> 01:02:14,100 going to operate, say, at five or six gigahertz. 1052 01:02:17,170 --> 01:02:20,380 And suppose your bandwidth is maybe 100 1053 01:02:20,380 --> 01:02:21,430 megahertz or something. 
1054 01:02:21,430 --> 01:02:24,870 Or maybe it's 10 megahertz, or maybe it's 1 megahertz. 1055 01:02:24,870 --> 01:02:29,920 In terms of this frequency of many gigahertz, you're talking 1056 01:02:29,920 --> 01:02:32,560 about very narrowband communication. 1057 01:02:32,560 --> 01:02:35,330 People might call it wideband communication, but it's really 1058 01:02:35,330 --> 01:02:36,580 pretty narrowband. 1059 01:02:38,980 --> 01:02:43,270 Now suppose you have Gaussian noise, which is caused by all 1060 01:02:43,270 --> 01:02:49,060 of these small tiny noise effects all over the place. 1061 01:02:49,060 --> 01:02:51,890 And you ask, is this going to be flat or is it 1062 01:02:51,890 --> 01:02:52,770 not going to be flat? 1063 01:02:52,770 --> 01:02:56,050 Well, you might look at it and say, well, if there's no noise 1064 01:02:56,050 --> 01:02:58,150 in this little band and there's a lot of noise in this 1065 01:02:58,150 --> 01:03:02,220 band, I'm going to use this little band here. 1066 01:03:02,220 --> 01:03:04,560 But after you get all done playing those games, what 1067 01:03:04,560 --> 01:03:06,330 you're saying is, well, this noise is sort of 1068 01:03:06,330 --> 01:03:08,710 uniform over this band. 1069 01:03:08,710 --> 01:03:11,810 And therefore if I'm only transmitting in that band, if 1070 01:03:11,810 --> 01:03:15,510 I never transmit anything outside of that band, there's 1071 01:03:15,510 --> 01:03:20,630 no way you can tell what the noise is outside of that band. 
1072 01:03:20,630 --> 01:03:28,980 You all know that the noise you experience if you're 1073 01:03:28,980 --> 01:03:32,930 dealing with a carrier frequency in the kilohertz 1074 01:03:32,930 --> 01:03:36,860 band is very different from what you see in the megahertz 1075 01:03:36,860 --> 01:03:40,140 band, is very different from what you see at 100 megahertz, 1076 01:03:40,140 --> 01:03:43,240 is very different from what you see in the gigahertz band, 1077 01:03:43,240 --> 01:03:46,090 is very different from what you see in the optical bands. 1078 01:03:46,090 --> 01:03:49,400 And all of that stuff. 1079 01:03:49,400 --> 01:03:55,330 So it doesn't make any sense to model the noise as having 1080 01:03:55,330 --> 01:04:00,300 uniform spectral density over all of that region. 1081 01:04:00,300 --> 01:04:03,690 So the only thing we would ever want to do is to model 1082 01:04:03,690 --> 01:04:08,390 the noise as having a uniform density over this narrow band 1083 01:04:08,390 --> 01:04:09,640 that we're interested in. 1084 01:04:12,040 --> 01:04:14,900 And that's how we define white Gaussian noise. 1085 01:04:14,900 --> 01:04:18,870 We say the noise is white Gaussian noise if, in fact, 1086 01:04:18,870 --> 01:04:23,150 when we go out and measure it we can measure it by passing 1087 01:04:23,150 --> 01:04:26,010 it through a filter and finding this variance, which is 1088 01:04:26,010 --> 01:04:30,170 exactly what we're talking about here, if what we get at 1089 01:04:30,170 --> 01:04:32,550 one frequency is the same as what we 1090 01:04:32,550 --> 01:04:33,840 get at another frequency. 1091 01:04:33,840 --> 01:04:37,300 So long as we're talking about the frequencies of interest, 1092 01:04:37,300 --> 01:04:40,370 then we say it's white. 1093 01:04:40,370 --> 01:04:43,580 Now how long do we have to make this measurement? 
1094 01:04:43,580 --> 01:04:46,460 Well, we don't make it forever, because all the 1095 01:04:46,460 --> 01:04:49,690 filters we're using and everything else are filters 1096 01:04:49,690 --> 01:04:54,420 which only have a duration over a certain amount of time. 1097 01:04:54,420 --> 01:04:57,090 We only use our device over a certain amount of time. 1098 01:04:57,090 --> 01:05:01,530 So what we're interested in is looking at the noise over some 1099 01:05:01,530 --> 01:05:06,130 large effective time from minus T0 to plus T0. 1100 01:05:06,130 --> 01:05:09,630 We want the noise to be effectively stationary within 1101 01:05:09,630 --> 01:05:13,950 minus T0 to plus T0. 1102 01:05:13,950 --> 01:05:16,030 And then what's the next step in the argument? 1103 01:05:16,030 --> 01:05:18,700 You want the noise to be effectively stationary between 1104 01:05:18,700 --> 01:05:21,610 these very broad limits. 1105 01:05:21,610 --> 01:05:23,610 And then we think about it for a little bit, 1106 01:05:23,610 --> 01:05:24,800 and we say, but listen. 1107 01:05:24,800 --> 01:05:28,990 I'm only using this thing in a very small fraction of that 1108 01:05:28,990 --> 01:05:30,860 time region. 1109 01:05:30,860 --> 01:05:34,410 And therefore as far as my model is concerned, I 1110 01:05:34,410 --> 01:05:37,850 shouldn't be bothered with T0 at all. 1111 01:05:37,850 --> 01:05:40,950 I should just say mathematically, this process 1112 01:05:40,950 --> 01:05:45,180 is going to be stationary, and I forget about the T0. 1113 01:05:45,180 --> 01:05:48,740 And I look at it in frequency, and I say I'm going to use 1114 01:05:48,740 --> 01:05:52,640 this over my 10 megahertz or 100 megahertz, or whatever 1115 01:05:52,640 --> 01:05:54,930 frequency band I'm interested in. 1116 01:05:54,930 --> 01:05:56,470 And I'm only interested in what the 1117 01:05:56,470 --> 01:05:58,430 noise is in that band. 1118 01:05:58,430 --> 01:06:02,050 I don't want to specify what the bandwidth is. 
1119 01:06:02,050 --> 01:06:05,470 And therefore I say, I will just model it as being uniform 1120 01:06:05,470 --> 01:06:08,450 over all frequencies. 1121 01:06:08,450 --> 01:06:11,450 So what white noise is, is you have effectively 1122 01:06:11,450 --> 01:06:13,340 gotten rid of the T0. 1123 01:06:13,340 --> 01:06:17,310 You've effectively gotten rid of the w0. 1124 01:06:17,310 --> 01:06:20,540 And after you've gotten rid of both of these things, you have 1125 01:06:20,540 --> 01:06:25,070 noise which has constant spectral density over all 1126 01:06:25,070 --> 01:06:30,100 frequencies, and noise which has constant 1127 01:06:30,100 --> 01:06:32,460 power over all time. 1128 01:06:32,460 --> 01:06:35,700 And you look at it, and what happens? 1129 01:06:35,700 --> 01:06:40,880 If you take the inverse Fourier transform of sz of f, 1130 01:06:40,880 --> 01:06:45,340 and you assume that sz of f is just non-zero within a certain 1131 01:06:45,340 --> 01:06:48,400 frequency band, what you get when you take that inverse 1132 01:06:48,400 --> 01:06:53,240 transform is a little kind of wiggle around zero. 1133 01:06:53,240 --> 01:06:55,760 And that's all very interesting. 1134 01:06:55,760 --> 01:06:59,100 If you then say well, I don't care about that. 1135 01:06:59,100 --> 01:07:01,950 I just like to assume that it's uniform over all 1136 01:07:01,950 --> 01:07:03,570 frequencies. 1137 01:07:03,570 --> 01:07:05,610 You then take the Fourier transform, and what you've got 1138 01:07:05,610 --> 01:07:06,860 is an impulse function. 1139 01:07:09,510 --> 01:07:13,190 And what the impulse function tells you is, when you pass 1140 01:07:13,190 --> 01:07:17,920 the noise through a filter, what comes out of the filter, 1141 01:07:17,920 --> 01:07:22,520 just like any time you deal with impulse functions -- 1142 01:07:22,520 --> 01:07:24,810 I mean, the impulse response is in fact the 1143 01:07:24,810 --> 01:07:27,640 response to an impulse. 
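[Editor's note: the "wiggle around zero" and its impulse limit can be sketched numerically. This is my illustration; N0 and the bandwidths W are arbitrary. A flat density N0/2 on |f| < W inverts to k(tau) = N0*W*sinc(2*W*tau): the peak k(0) = N0*W blows up as W grows, heading toward an impulse, while the integral of k stays pinned at S_z(0) = N0/2.]

```python
import numpy as np

# Flat spectral density N0/2 on |f| < W inverts to k(tau) = N0*W*sinc(2*W*tau)
# (numpy's sinc(x) = sin(pi x)/(pi x)).  As W grows, the peak k(0) = N0*W
# blows up -- heading toward an impulse -- while the integral of k stays N0/2.
N0 = 4.0
peaks, integrals = [], []
for W in (1.0, 10.0, 100.0):
    step = 1.0 / (20.0 * W)                  # fine grid relative to the wiggles
    tau = np.arange(-200.0, 200.0, step)
    k = N0 * W * np.sinc(2.0 * W * tau)
    peaks.append(k.max())
    integrals.append(k.sum() * step)
print(peaks)        # roughly N0*W: grows without bound
print(integrals)    # each close to N0/2 = 2.0
```

The growing peak with fixed integral is exactly the "covariance function evaluated at zero is the power" problem: the power diverges in the white-noise limit even though every filtered version stays well behaved.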
1144 01:07:27,640 --> 01:07:30,190 So that as far as the output from the filter goes, 1145 01:07:30,190 --> 01:07:31,470 it's all very fine. 1146 01:07:31,470 --> 01:07:37,650 It only cares about what the integral is of that pulse. 1147 01:07:37,650 --> 01:07:41,910 And the integral of the pulse doesn't depend very much on 1148 01:07:41,910 --> 01:07:45,650 what happens at enormously large frequencies or anything. 1149 01:07:45,650 --> 01:07:48,980 So all of that's well behaved, so long as you go through some 1150 01:07:48,980 --> 01:07:52,020 kind of filtering first. 1151 01:07:52,020 --> 01:07:56,180 Unfortunately, as soon as you start talking about a 1152 01:07:56,180 --> 01:07:59,140 covariance function which is an impulse, 1153 01:07:59,140 --> 01:08:00,250 you're in real trouble. 1154 01:08:00,250 --> 01:08:03,730 Because the covariance function evaluated at zero is 1155 01:08:03,730 --> 01:08:05,240 the power in the process. 1156 01:08:08,070 --> 01:08:10,810 And the power in the process is then infinite. 1157 01:08:10,810 --> 01:08:15,760 So you wind up with this process which is easy to work 1158 01:08:15,760 --> 01:08:18,230 with any time you filter it. 1159 01:08:18,230 --> 01:08:21,250 It's easy to work with because you don't have these constants 1160 01:08:21,250 --> 01:08:25,120 capital T0 and capital W0 stuck in them. 1161 01:08:25,120 --> 01:08:27,820 Which you don't really care about, because what you're 1162 01:08:27,820 --> 01:08:31,390 assuming is you can wander around as much as you want in 1163 01:08:31,390 --> 01:08:33,420 frequency, subject to the antennas and 1164 01:08:33,420 --> 01:08:35,660 so on that you have. 1165 01:08:35,660 --> 01:08:38,280 And you want to be able to wander around as much in time 1166 01:08:38,280 --> 01:08:41,650 as you want to and assume that things are uniform over all 1167 01:08:41,650 --> 01:08:44,390 that region. 
1168 01:08:44,390 --> 01:08:48,060 But you then have this problem that you have a noise process 1169 01:08:48,060 --> 01:08:51,010 which just doesn't make any sense at all. 1170 01:08:51,010 --> 01:08:53,400 Because it's infinite everywhere. 1171 01:08:53,400 --> 01:08:56,270 You look at any little frequency band of it and it 1172 01:08:56,270 --> 01:09:00,130 has infinite energy if you integrate over all time. 1173 01:09:00,130 --> 01:09:04,910 So you really want to somehow use these ideas of being 1174 01:09:04,910 --> 01:09:11,750 effectively stationary and of being effectively bandlimited, 1175 01:09:11,750 --> 01:09:14,380 and say what I want is noise which is 1176 01:09:14,380 --> 01:09:16,780 flat over those regions. 1177 01:09:16,780 --> 01:09:20,980 Now what we're going to do after the quiz, which is on 1178 01:09:20,980 --> 01:09:24,420 Wednesday, is we're going to start talking about how you 1179 01:09:24,420 --> 01:09:27,920 actually detect signals in the presence of noise. 1180 01:09:27,920 --> 01:09:30,960 And what we're going to find out is, when the noise is 1181 01:09:30,960 --> 01:09:34,710 white in the sense -- namely, when it behaves the same over 1182 01:09:34,710 --> 01:09:38,150 all the degrees of freedom that we're looking at -- then 1183 01:09:38,150 --> 01:09:41,040 it doesn't matter where you put your signal. 1184 01:09:41,040 --> 01:09:43,470 You can put your signal anywhere in this huge time 1185 01:09:43,470 --> 01:09:48,490 space, in this huge bandwidth that we're talking about. 1186 01:09:48,490 --> 01:09:50,500 And we somehow want to find out how to 1187 01:09:50,500 --> 01:09:52,150 detect signals there. 1188 01:09:52,150 --> 01:09:56,330 And we find out that the detection process is 1189 01:09:56,330 --> 01:10:00,300 independent of what time we're looking at and what frequency 1190 01:10:00,300 --> 01:10:01,320 we're looking at. 
1191 01:10:01,320 --> 01:10:04,340 So what we have to focus on is this relatively small interval 1192 01:10:04,340 --> 01:10:08,280 of time and relatively small interval of bandwidth. 1193 01:10:08,280 --> 01:10:09,730 So all of this works well. 1194 01:10:09,730 --> 01:10:13,260 Which is why we assume white Gaussian noise. 1195 01:10:13,260 --> 01:10:17,120 But the white Gaussian noise assumption really makes sense 1196 01:10:17,120 --> 01:10:20,330 when you're looking at what the noise looks like in these 1197 01:10:20,330 --> 01:10:22,030 various degrees of freedom. 1198 01:10:22,030 --> 01:10:25,070 Namely, what the noise looks like when you pass the noise 1199 01:10:25,070 --> 01:10:28,810 through a filter and look at the output at specific 1200 01:10:28,810 --> 01:10:30,580 instants of time. 1201 01:10:30,580 --> 01:10:34,000 And that's where the modeling assumptions come in. 1202 01:10:34,000 --> 01:10:38,950 So this is really a very sophisticated use of modeling. 1203 01:10:38,950 --> 01:10:41,680 Did the engineers who created this kind of modeling have 1204 01:10:41,680 --> 01:10:44,190 any idea of what they were doing? 1205 01:10:44,190 --> 01:10:44,430 No. 1206 01:10:44,430 --> 01:10:46,770 They didn't have the foggiest idea of what they were doing, 1207 01:10:46,770 --> 01:10:49,390 except they had common sense. 1208 01:10:49,390 --> 01:10:52,320 And they had enough common sense to realize that no 1209 01:10:52,320 --> 01:10:55,490 matter where they put their signals, this same noise was 1210 01:10:55,490 --> 01:10:58,630 going to be affecting them. 1211 01:10:58,630 --> 01:11:03,980 And because of that, what they did is they created some kind 1212 01:11:03,980 --> 01:11:08,410 of pseudo-theory, which said we have noise which looks the 1213 01:11:08,410 --> 01:11:09,930 same wherever it is. 
1214 01:11:09,930 --> 01:11:13,530 Mathematicians got a hold of it, went through all this 1215 01:11:13,530 --> 01:11:15,810 theory of generalized functions, 1216 01:11:15,810 --> 01:11:17,490 came back to the engineers. 1217 01:11:17,490 --> 01:11:19,640 The engineers couldn't understand any of that, and 1218 01:11:19,640 --> 01:11:23,510 it's been going back and forth forever. 1219 01:11:23,510 --> 01:11:29,460 Where we are now, I think, is we have a theory where we can 1220 01:11:29,460 --> 01:11:33,130 actually look at finite time intervals, finite frequency 1221 01:11:33,130 --> 01:11:37,180 intervals, see what's going on there, make mathematical sense 1222 01:11:37,180 --> 01:11:40,580 out of it, and then say that the results don't depend on 1223 01:11:40,580 --> 01:11:45,590 what T0 is or W0 is, so we can leave it out. 1224 01:11:45,590 --> 01:11:47,850 And at that point we really have our cake and 1225 01:11:47,850 --> 01:11:49,200 we can eat it too. 1226 01:11:49,200 --> 01:11:52,610 So we can do what the engineers have always been 1227 01:11:52,610 --> 01:11:56,310 doing, but we really understand it at this point. 1228 01:11:56,310 --> 01:11:57,470 OK. 1229 01:11:57,470 --> 01:11:59,440 I think I will stop at that point.