The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: OK. Let's start. So we are back to calculating partition functions for Ising models. So again, some general hypercubic lattice where you have variables sigma i being minus or plus 1 at each site, with a tendency to be parallel. And our task is to calculate the partition function, which for N sites amounts to summing over 2 to the N binary configurations of a weight that favors near neighbors to be in the same state, with a strength K.

OK. So the procedure that we are going to follow is this high temperature expansion, that is, writing this as hyperbolic cosine of K times 1 plus t sigma i sigma j, with t standing for tanh K. And then, as we discussed, actually this I would have to write as a product over all bonds. And then essentially to each bond on this lattice I would have to either assign 1 or t sigma i sigma j. The binary choice now moves to the bonds. And then we saw that, in order to make sure that following the summation over sigma these factors survive, we have to construct graphs where out of each site goes an even number of bonds. And we rewrote the whole thing as 2 to the N, cosh K to the number of bonds, which for a hypercubic lattice is dN. And then we had a sum over graphs where we had an even number of bonds per site. And the contribution of a graph was basically t to the power of the number of bonds. So I'm going to call this sum over here S. After all, the interesting part of the problem is captured in this factor S; these 2 to the N cosh K to the power of dN are perfectly well behaved analytical functions. We are looking for something that is singular.
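A brute-force check of this rewriting on a very small lattice may make the graph sum concrete. This is a minimal sketch of my own, not from the lecture; the 2x2 periodic lattice and the value of K are arbitrary illustrative choices.

```python
# Sketch: verify Z = 2^N (cosh K)^{Nb} * sum over even-degree bond subsets of t^{#bonds},
# with t = tanh K, on a tiny periodic square lattice.
import itertools
import numpy as np

K = 0.3
L = 2                                   # 2x2 lattice with periodic boundaries
sites = [(x, y) for x in range(L) for y in range(L)]
index = {s: i for i, s in enumerate(sites)}
bonds = set()
for (x, y) in sites:
    bonds.add(tuple(sorted((index[(x, y)], index[((x + 1) % L, y)]))))
    bonds.add(tuple(sorted((index[(x, y)], index[(x, (y + 1) % L)]))))
bonds = sorted(bonds)
N, Nb = len(sites), len(bonds)

# Direct sum over the 2^N spin configurations.
Z_direct = 0.0
for spins in itertools.product([-1, 1], repeat=N):
    E = sum(spins[i] * spins[j] for i, j in bonds)
    Z_direct += np.exp(K * E)

# Graph sum: keep only bond subsets where every site has even degree.
t = np.tanh(K)
S = 0.0
for subset in itertools.product([0, 1], repeat=Nb):
    degree = [0] * N
    for occupied, (i, j) in zip(subset, bonds):
        if occupied:
            degree[i] += 1
            degree[j] += 1
    if all(d % 2 == 0 for d in degree):
        S += t ** sum(subset)
Z_graph = 2 ** N * np.cosh(K) ** Nb * S

print(Z_direct, Z_graph)                # the two agree
```

Both numbers agree, because only bond subsets with an even number of bonds at every site survive the spin sum.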
So this S-- I can either pick the 1 from all of the factors, so to the lowest order in t I just get 1. And then we saw that the next order correction would be something like a square, would be this t to the 4th. And then objects that are more complicated versions. And I can write all of those as a sum over the kinds of graphs that I can draw on the lattice by going around and making a loop. But then I will have graphs that will be composed of two loops, for example, that are disconnected. Since this can be translated all over the place, this would have a factor of N for a lattice that is large, forgetting about boundary and edge effects. This would have a contribution, once I slide these two with respect to each other, that is of the order of N squared. And then I would have things that would be three loops, and so forth.

Now based on what we have seen before, it is very tempting to somehow exponentiate this sum and write it as the exponential of the sum over objects that have a single loop. And I will call this new sum actually S prime, for reasons to become apparent. Because if I start to expand this exponential, what do I get? I will certainly start to get 1, then I will have the sum over single loop graphs, plus one half of whatever is in the exponent over here, squared, plus 1 over 3 factorial of whatever is in the exponent, which is this sum, cubed, and so forth. And if I were to expand this thing that is something squared, I will certainly get terms in it that would correspond to two of the terms in the sum. Here I would get things that would correspond to three of the terms in the sum. And the combinatorial factors would certainly work out to get rid of the 2 and the 6. Let's say I have loops a, b, and c: I could pick a from the first sum here, b from the second, c from the third, or any permutation thereof. That amounts to 3 factorial arrangements, which cancels the 1 over 3 factorial, so I would have a single factor of abc.
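As a small worked version of the combinatorics being invoked (my own illustration with three distinct loop contributions a, b, c):

$$e^{a+b+c} = 1 + (a+b+c) + \frac{1}{2!}(a+b+c)^{2} + \frac{1}{3!}(a+b+c)^{3} + \cdots,$$

and the product $abc$ can be assembled from the cubic term in $3! = 6$ orderings, cancelling the $1/3!$, so each set of distinct loops appears exactly once, just as in $S$.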
But S definitely is not equal to S prime, because immediately we see that once I square this term, a plus b squared, in addition to 2ab I will also get a squared and b squared, which corresponds to objects such as this: one half of the same graph repeated twice. Right? It is the a squared term that is appearing in this sum. And this clearly has no analog here in my sum S. I emphasize that in the sum S each bond can occur either zero or one time. Whereas if I exponentiate this, you can see that S prime certainly contains things that are potentially repeated twice. As I go further I could have things repeated multiple times. Also, when I square this term I could potentially get one factor that is a loop from the first term, another factor, which is also a loop, from the second term-- the second in this product of two brackets-- which, once I multiply them, happen to overlap and share something.

OK? All right. So very roughly and hand-wavingly, the thing is that S has loops that avoid each other, don't intersect, whereas S prime has loops that intersect. So very naively, S is a sum over, if you like, a gas of non-intersecting loops, whereas S prime is a sum over a gas of what I could call phantom loops. They're kind of like ghosts. They can go through each other. OK?

All right. So that's one of a number of problems. Now in this lecture we are going to ignore this difference between S and S prime and calculate S prime, which therefore means we will not be doing the right job. And this is one example of where I'm not doing the right job. And as I go through the calculation you may want to figure out also other places where I don't correctly reproduce the sum S, in order to ultimately be able to make a calculation and see what it comes to. Now next lecture we will try to correct all of those errors, so you better keep track of all of those errors, make sure that I am doing everything right. Seems good-- all right. So let's go and take a look at this S prime.
So S prime-- actually, log of S prime-- is a sum over graphs that I can draw on the lattice. And since I could exponentiate that, there was no problem in the exponentiation involved over here. Essentially I have to, for calculating the log, just calculate these singly connected loops. And you can see that that will work out fine as far as extensivity is concerned, because I could translate each one of these figures of the loops over the lattice and get the factors of N. So this is clearly something that is OK.

Now since I'm already making this mistake of forgetting about the intersections between loops, I'm going to make another assumption, which is that in this sum of single loops I already include things where the single loop is allowed to intersect itself. For example, I'm going to allow as a single loop entity something that is like this, where this particular bond will give a factor of t squared. Clearly I should not include this in the correct sum. But since I'm ignoring intersections among the different loops and I'm making them phantom, let's make each loop also phantom with respect to itself, allow it to intersect itself [INAUDIBLE]. It's in the same spirit as the mistake that we are going to make.

So now what do we have? We can write that log of S prime is this sum over loops of length l, and then multiply it by t to the l. So basically I say that I can draw loops, let's say of size four, loops of size six. Each one of them I have to multiply by t to the l. Of course, I have to multiply with the number of loops of length l. OK? And this I'm going to write slightly differently. So I'm going to say that log of S prime is a sum over this length of the loop. All the loops of length l are going to contribute a factor of t to the l. And I'm going to count the loops of length l that start and end at the origin. And I'll give that a symbol, W sub l of (0, 0). Actually, very soon I will introduce a generalization of this.
So let me write the definition. I define a matrix that is indexed by two sites of the lattice and counts the number of walks that I can have from one to the other, from i to j, in l steps. So W sub l of (i, j) is the number of walks of length l from i to j. OK?

So what am I doing here? Since I am looking at single loop objects, I want to sum over, let's say, all terms that contribute, in this case, t to the 4. It's obvious because it's really just one shape. It's a square, but this square I could have started anywhere on the lattice. And this factor of N, which captures the extensivity, I'll take outside, because I expect this log of S to be extensive. It should be proportional to N. So one part of it is essentially where I start to draw this loop. So I say that I always start with the loops that I have at point zero. Then I want to come back to myself. So I indicate that the end point should also be zero. And if I want to get a term here-- this is a term that is t to the fourth-- I need to know how many such walks I have. Yes?

AUDIENCE: Are you allowing the loop to intersect itself in this case or not?

PROFESSOR: In this case, yes. Whenever I'm calculating anything to do with S prime I allow intersection. So if you are asking whether I'm allowing something like this, the answer is yes.

AUDIENCE: OK.

PROFESSOR: Yeah.

AUDIENCE: And are we assuming an infinitely large system so that--

PROFESSOR: Yes. That's right. So that the edge effects, you don't have to worry about. Or alternatively you can imagine that you have periodic boundary conditions. And with periodic boundary conditions we can still slide it all over the place. OK? But clearly then the maximal size of these loops, et cetera, will be determined potentially by the size of the lattice.
Now this is not entirely correct, because there is an over-counting. This one square that I have drawn over here, I could have started from this point, or this point, or this point. And essentially, for something that has length l, I would have had l possible starting points. So in order to avoid the over-counting I have to divide by l. And in fact, I could have started walking along this direction, or alternatively I could have gone in the clockwise direction. So there are two orientations to the walk that will take me from the origin back to the origin in l steps. And not to do the over-counting, I have to divide by 2l. Yes?

AUDIENCE: If we allow walking over ourselves, is it always a degeneracy of 2l?

PROFESSOR: Yes. You can go and do the calculation to convince yourself that even for something as convoluted as that, the factor is 2l there too.

OK. So this is what we want to calculate. Well, it turns out that this entity actually shows up somewhere else also. So let me tell you why I wanted to write a more general thing. Another quantity that I can try to calculate is the spin-spin correlation. I can pick spin zero here and, say, spin r here, some other location. And I want to calculate what is the correlation between these two spins. OK? So how do I have to do that for the Ising model? I have to essentially sum over all configurations, with an additional factor of sigma zero sigma r, of this weight, e to the K sum over ij of sigma i sigma j, appropriately normalized, of course, by the partition function. And I can make the same transformation that I have on the first line, of these exponential factors, to write this as a sum over the sigma i of sigma zero sigma r times the product over all bonds of these factors of 1 plus t sigma i sigma j. The factors of 2 to the N cosh K will cancel out between the numerator and the denominator. And basically I will get the same thing.
Now of course the denominator is the partition function. It is the sum S that we are after. But we can also-- and we've seen already how to do this-- express the sum in the numerator graphically. And the difference between the numerator and the denominator is that I have an additional sigma sitting here, an additional sigma sitting there, that, if left by themselves, will average out to 0. So I need to connect them by paths that are composed of factors of t sigma sigma, originating on one and ending on the other. Right?

So in the same sense that what is appearing in the denominator is a sum that involves these loops, the first term that appears in the numerator is a path that connects zero to r through some combination of these factors of t, and then I have to sum over all possible ways of doing that. But then I could certainly have graphs that involve the same thing and a loop-- there is nothing that is against that-- or the same thing and two loops, and so forth. And you can see that as long as, and only as long as, I treat these as phantom objects that can pass through each other, I can factor out this term, and the rest-- 1 plus one loop plus two loops-- is exactly what I have in the denominator.

So we see that, under the assumption of phantomness, this becomes really just the sum over all paths that go from 0 to r. And of course the contribution of each path is how many factors of t I have. Right? So I have to have a sum over the length of this path, l. Paths of length l will contribute a factor of t to the l. But there are potentially multiple ways to go from 0 to r in l steps. How many ways? That's precisely what I call this W sub l of (0, r). Yes?

AUDIENCE: Why does a graph that goes from 0 to r in three different ways have [INAUDIBLE]?

PROFESSOR: OK. So you want to go from 0 to r, you want to have a single path, and then you want that path to do something like this?
AUDIENCE: Yeah. That doesn't [INAUDIBLE].

PROFESSOR: That's fine. If I ignore the phantomness condition, this is the same as this multiplied by this, which is a term that appears in the denominator and cancels out.

AUDIENCE: But you're assuming that you have the phantom condition. So this is completely normal. It doesn't matter.

PROFESSOR: I'm not sure I understand your question. You say that even without the phantom condition this graph exists.

AUDIENCE: With the phantom condition--

PROFESSOR: Yes.

AUDIENCE: --this graph is perfectly normal.

PROFESSOR: Even without the phantom condition this is an acceptable graph. Yeah. OK. Yeah?

AUDIENCE: So what does phantomness mean? Why then can we simplify only a [INAUDIBLE]?

PROFESSOR: OK. Because let's say I were to take this as a check and multiply it by the denominator. The question is, would I generate the series that is in the numerator? OK? So if I take this object that I have said is the answer, I have to multiply it by this object and make sure that it reproduces correctly the numerator. The question is, when does it? I mean, certainly when I multiply this by this, I will get the possibility of having a graph such as this. And from here, I can have a loop such as this. And the two of them would share a bond such as that. So in the real Ising model, that is not allowed. So that's the phantomness condition that allows me to factor these things. OK?

All right. So we see that if I have this quantity that I have written in red, then I can calculate both correlation functions, as well as the free energy-- log of the partition function-- within this phantomness assumption. So the question is, can I calculate that?
And the answer is that calculating the number of random walks is one of the most basic things that one does in statistical physics, and it is easily accomplished as follows. Basically, I say that, OK, let's say that I start from 0-- actually let's do it with 0 and r-- and let's say that I have looked at all possible paths that have l steps and end up over here. So this is step one, step two, step number three. And the last one, step l, I have purposely drawn as a dotted line. Maybe I will pull this point further down to emphasize that this is the last one. This, at l minus 1, is the previous one.

So I can certainly state that the number of walks from 0 to r in l steps-- well, any walk that got to r in l steps, at the l minus first step had to be somewhere. OK? So what I do is I take the number of walks from 0 to r prime-- I'll call this point r prime-- in l minus one steps, and then times the number of ways, or number of walks, from r prime to r in one step. So before I reach my destination, at the previous step I had to have been somewhere. I sum over all the possible places where that somewhere could be, and then I have to make sure that I can reach from that somewhere in one step to my destination. That's all that sum is, OK?

Now I can convert that to mathematics. This quantity is: start from 0, take l steps, arrive at r. By definition, that's the number. And what it says is that this should be a sum over r prime: start from 0, take l minus 1 steps, arrive at r prime; start from r prime, take one step, arrive at your destination r; and sum over r prime. OK? Now these are N by N matrices that are labeled by l. Right? So, these being N by N matrices, this summation over r prime is clearly a matrix multiplication. So what that says is that summing over r prime tells you that W sub l is W sub 1 times W sub l minus 1. And that is true for any pair of elements, starting and final points.
So basically, quite generically, we see that W sub l, the matrix that corresponds to the count for l steps, is obtained from the matrix that corresponds to the count for one step multiplying that of l minus 1 steps. And clearly I can keep going. W sub l minus 1, I can write as W sub 1 times W sub l minus 2, and so forth. And ultimately the answer is none other than the entity that corresponds to one step raised to the l-th power. And just to make things easier on my writing, I will indicate this as T to the l, where T stands for this matrix for one step. OK?

This condition over here, that I said in words, that allows me to write this in this nice matrix form, is called the Markovian condition. The kind of walks that I have been telling you about are Markovian, in the sense that they only depend on where you came from at the last step. They don't have memory of where you had been before. And that's what enables us to do this. And that's why I had to impose the phantom condition, because if I really wanted to say that something like this has to be excluded, then the walk must keep memory of every place that it had been before. Right? And then it would be non-Markovian. Then I wouldn't have been able to do this nice calculation. That's why I had to make this phantomness assumption, so that I forget the memory of where my walk was previously. OK? Now... Yeah? Question? No?

So this matrix, where you can go in one step, is really the matrix of who is connected to whom. Right? So this tells you the connectivity. So for example, if I'm dealing with a 2D square lattice, the sites on my lattice are labeled by x and y. And I can ask, where can I go in one step if I start from x and y? And the answer is that either x stays the same and y shifts by one, or y stays the same and x shifts by one.
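A minimal numerical sketch of this statement (my own illustration, not from the lecture; the 8x8 periodic square lattice and the walk length are arbitrary choices): the one-step connectivity matrix raised to the l-th power reproduces a brute-force, step-by-step count of l-step walks.

```python
# Build the one-step connectivity matrix T of a small periodic square lattice
# and check that W_l = T^l counts the walks of length l.
import numpy as np

L = 8                                    # 8x8 square lattice, periodic boundaries
sites = [(x, y) for x in range(L) for y in range(L)]
index = {s: i for i, s in enumerate(sites)}
N = len(sites)

T = np.zeros((N, N), dtype=int)
for (x, y) in sites:
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        T[index[(x, y)], index[((x + dx) % L, (y + dy) % L)]] = 1

# Walks of length l between any two sites, from the matrix power.
l = 4
W_l = np.linalg.matrix_power(T, l)

# Brute-force enumeration of the same count (Markovian recursion over the last step).
def count_walks(start, end, steps):
    if steps == 0:
        return 1 if start == end else 0
    return sum(count_walks(nxt, end, steps - 1) for nxt in np.nonzero(T[start])[0])

origin = index[(0, 0)]
print(W_l[origin, origin], count_walks(origin, origin, l))   # the two agree
```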
These are the four non-zero elements of the matrix that allows you to go on the square lattice either up, down, right, or left; and there are the corresponding things that you would have for the cube or whatever lattice. OK?

Now you look at this and you can see that what I have imposed clearly is such that, for a lattice where every site looks like every other equivalent site, this is really a function only of the separation between the two points. It's an N by N matrix. It has N squared elements, but the elements really are essentially one column that gets shifted as you go further and further down, in a very specific way. And whenever you have a matrix such as this, translational symmetry implies that you can diagonalize it by Fourier transformation.

And what do I mean by that? I can define a vector labeled by q such that its various components are things like e to the i q dot r, in whatever dimension. Let's normalize it by square root of N. And I should check that that is an eigenvector. So basically my statement is that if I take the matrix T, act on q, then I should get some eigenvalue times the vector back. And let's check that for the case of the 2D system. So for the 2D system, if I say x y, T, qx qy, what is it? Well, that is x y, T, x prime y prime-- the entity that I have calculated here-- times x prime y prime, qx qy. And of course, I have a sum over x prime and y prime. That's the matrix product. And again, remember, this entity is simply e to the i, qx x prime plus qy y prime, divided by square root of N. And because this is a set of delta functions, what does it do? It basically sets x prime either to x plus 1 or x minus 1 (with y prime equal to y), or y prime either to y plus 1 or y minus 1 (with x prime equal to x). You can see that you always get back your e to the i, qx x plus qy y, with the factor of root N.
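A one-line check of this statement, continuing with the matrix T, sites, L, and N from the sketch above (again my own illustration, with an arbitrary allowed wavevector):

```python
# The plane wave e^{i q.r} / sqrt(N) is an eigenvector of T with eigenvalue
# T(q) = 2(cos qx + cos qy), for any wavevector allowed by the periodic box.
qx, qy = 2 * np.pi * 1 / L, 2 * np.pi * 2 / L
v = np.array([np.exp(1j * (qx * x + qy * y)) for (x, y) in sites]) / np.sqrt(N)
print(np.allclose(T @ v, 2 * (np.cos(qx) + np.cos(qy)) * v))    # True
```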
So essentially the delta functions just change the x primes to x, at the cost of the different shifts that you have to do over there, which means that you will get a factor of e to the i qx plus e to the minus i qx, with qy not changing, or e to the i qy plus e to the minus i qy, with the x component not changing. So this is the standard thing that you have seen; it is none other than 2 times cosine of qx plus cosine of qy. And so we can see that, quite generally, in the d-dimensional hypercube, my T of q is going to be twice the sum over all d components of these factors of cosine of q alpha. And that's about it, OK?

So why did I bother to do this diagonalization? The answer is that that now allows me to calculate everything that I want. So, for example, I know that this quantity that I'm interested in, sigma 0 sigma r, is going to be a sum over l of t to the l times the (0, r) element of W sub l, which is T to the l. Right? Now this small t I can take inside here and do it like this. And if I want, I can write this as the (0, r) component of a sum over l of tT raised to the power of l. So it's a new matrix, which is essentially the sum over l of small t times this connectivity matrix, to the l-th power. This is a geometric series. We can immediately do the geometric series. The answer is the (0, r) element of 1 over 1 minus tT. OK?

And the reason I did this diagonalization is so that I can calculate this matrix element. Because I don't really want to invert a whole matrix, but I can certainly invert the matrix when it is in the diagonal basis, because all I have to do is to invert pure numbers. So what I do is go to the Fourier basis-- rotate to the Fourier basis and calculate this there, where it is diagonal: I have 0 q, the diagonal element, and then q r. And so what is that? These are these exponentials here evaluated at the origin, so this one is just 1 over root N. This is 1 over root N. This is e to the i q dot r over root N.
This is just the eigenvalue that I have calculated over here. So this entity is none other than a sum over q of e to the i q dot r, divided by N-- two factors of square root of N-- times 1 over 1 minus t-- well, actually let's write it as 2t-- times the sum over alpha of cosine of q alpha. And then of course I'm interested in big systems. I replace the sum over q with an integral over q, with 2 pi to the d for the density of states. In going from there to there, there's a factor of volume; and the way that I have set the unit of length in my system, the volume is actually the number of particles that I have. So that factor of N disappears. And all I need to do is evaluate this factor of 1 over 1 minus 2t sum over alpha cosine of q alpha, integrated over q-- Fourier transformed. OK? Yes?

AUDIENCE: So I notice that you have a sum over q, but then you also have a sum over alpha of q alpha. Is there a relationship between the q and the q alpha or not?

PROFESSOR: OK. So that goes back here. So when I had two dimensions, I had qx and qy. Right? And so I label them, rather than x and y, as q1 and q2. So the index alpha just runs over the spatial dimensions. If you like, this is also dq1, dq2, up to dqd.

AUDIENCE: OK. [INAUDIBLE]

PROFESSOR: OK. All right. So we are down here. Let's proceed. So what is going to happen? So suppose I'm picking two sites, 0 and r, let's say both along the x direction, some particular distance apart-- let's say seven or eight apart. So in order to evaluate this I would have an integral, if this is the x direction, of something like e to the i qx times r. Now when I integrate over qx, the integral of e to the i qx times r would go to 0. The only way that it won't go to 0 is from the expansion of what is in the denominator.
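Before examining this r dependence, a quick numerical check of the formula just written (a minimal sketch of my own, not from the lecture; the lattice size, t, and separation are illustrative): on a finite periodic lattice, the phantom-walk correlation can be computed either as the matrix element of 1 over (1 minus tT), or as the lattice Fourier sum over q.

```python
# Correlation of phantom walks computed two ways on a periodic square lattice.
import numpy as np

L = 8
sites = [(x, y) for x in range(L) for y in range(L)]
index = {s: i for i, s in enumerate(sites)}
N = len(sites)
T = np.zeros((N, N))
for (x, y) in sites:
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        T[index[(x, y)], index[((x + dx) % L, (y + dy) % L)]] = 1

t = 0.15                                # below t_c = 1/(2d) = 0.25 in d = 2
r = (3, 0)                              # separation along the x direction

# Real-space resummation of the geometric series sum_l (tT)^l.
corr_real = np.linalg.inv(np.eye(N) - t * T)[index[(0, 0)], index[r]]

# Fourier-space version, using the eigenvalue T(q) = 2(cos qx + cos qy).
qs = 2 * np.pi * np.arange(L) / L
corr_fourier = sum(np.cos(qx * r[0] + qy * r[1]) / (1 - 2 * t * (np.cos(qx) + np.cos(qy)))
                   for qx in qs for qy in qs) / N

print(corr_real, corr_fourier)          # the two agree
```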
I should bring down enough factors of e to the minus i qx, which certainly exist in these cosine factors, to get rid of that. So essentially, the mathematical procedure that is going on over here is to bring in sufficient factors of e to the minus i q dot r to eliminate that. And the number of ways that you can do that is precisely another way of capturing this entity. Which means that, clearly, if I'm looking at something like this, and I am in the limit where t goes to be very, very small, so that the lowest order in t contributes, the lowest order in t would be the shortest path that joins these two points. So it is like connecting these two points with a string that is very taut. So what I am saying is that the limit as t goes to 0 of something like sigma 0 sigma r is going to be identical to t to the minimum distance between 0 and r. Actually, I should say proportional-- that is in fact more correct-- because there could be multiple shortest paths that go between two points. OK?

Now let's make sense of this. There's kind of an exponential decay here. Essentially I start with the high temperature limit, where the two spins don't know anything about each other. So sigma 0 sigma r is going to be 0. So anything that is beyond 0 has to come from somewhere in which the information about the state of this site was conveyed all the way over here. And it is done by passing over one bond at a time. And in some sense, the fidelity of each one of those transmissions is proportional to t. Now as t becomes larger, you are going to be willing to pay the cost of paths that go from 0 to r in a slightly more disordered way. So your string that was taut becomes kind of loose and floppy. And why does that become the case? Because now, although these paths are longer and carry more factors of this t that is small, there are just so many of them that the entropy-- the number of these paths-- starts to dominate.
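The small-t statement can be checked the same way, reusing T, index, and N from the sketch above (my own check; the separation r = (2, 1) is an arbitrary illustration): the leading behavior is t to the minimum distance, times the number of shortest lattice paths, which for (2, 1) is 3 choose 1 = 3.

```python
# For r = (2, 1) the minimum number of steps is 3 and there are 3 distinct
# shortest paths, so <sigma_0 sigma_r> / t^3 should approach 3 as t -> 0.
r, d_min = (2, 1), 3
for t_small in [1e-2, 1e-3, 1e-4]:
    corr = np.linalg.inv(np.eye(N) - t_small * T)[index[(0, 0)], index[r]]
    print(t_small, corr / t_small**d_min)        # tends to 3
```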
So very roughly, you can state the competition between them as follows: the contribution of a path of length l decays like t to the l, but the number of paths roughly grows like 2d to the power of l. And since we have this phantomness character, if I am sitting at a particular site, I can go up, down, right, left. So at each step I have, in two dimensions, a choice of four; in three dimensions, a choice of six; in d dimensions, a choice of 2d. So you can see that this is exponentially small, this is exponentially large. So they kind of balance each other. And the balance is something like e to the minus l over some typical l that will contribute. And clearly the typical l is going to be finite as long as 2dt is less than 1. So you can see that something strange has to happen at the value where tc is such that 2d tc is equal to 1. At that point, the cost of making your paths longer is more than made up for by increasing the number of paths that you can have-- the entropy starts to dominate.

And you can see that precisely that condition tells me whether or not this integral exists, right? Because the one point of this integral where the integrand is largest is where q goes to 0. And you can see that as q goes to 0, the value in the denominator is 1 minus 2td. So there is precisely a pole when this condition takes place. And if I'm interested in seeing what is happening when I'm in the vicinity of that transition, right before these paths become very large, what I can do is start exploring what is happening in the vicinity of that pole. So 1 minus 2t sum over alpha cosine of q alpha-- how does it behave? Each cosine I can start expanding around q going to 0 as 1 minus q squared over 2. So you can see that this is going to be 1 minus 2td, and then I would have plus t q squared, because I will have q1 squared plus q2 squared up to qd squared. So this q squared is the sum of the squares of all the q components. And then I do have higher order terms, order of q to the fourth, and so forth.
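Collecting the counting argument and this small-q expansion into formulas (my restatement, anticipating the identification of the correlation length made just below):

$$t^{l}(2d)^{l} = e^{\,l\ln(2dt)} \equiv e^{-l/\bar{l}}, \qquad \bar{l} = -\frac{1}{\ln(2dt)},$$

so loops of a typical finite length $\bar{l}$ contribute as long as $2dt < 1$, and $\bar{l}$ diverges as $t \to t_c = 1/(2d)$; while near $q = 0$,

$$1 - 2t\sum_{\alpha}\cos q_{\alpha} \simeq (1 - 2dt) + t\,q^{2} + O(q^{4}) = t\left(q^{2} + \xi^{-2}\right), \qquad \xi^{-2} = \frac{1 - 2dt}{t} \simeq \frac{t_c - t}{t_c^{2}},$$

giving $\xi \sim (t_c - t)^{-1/2}$.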
OK? So if I'm looking in the vicinity of t going to tc, this coefficient is roughly tc q squared. This other piece is something that goes to 0, and once I factor out the tc, I can define a length squared in this fashion-- an inverse length squared. OK? You can see that this inverse length squared is going to be 1 over tc times 1 minus 2dt which, since 2d is 1 over tc, is proportional to tc minus t. So you can see that this is none other than tc minus t, up to constants.

And if I'm looking at the vicinity of that point, I find that the correlation between sigma 0 and sigma r is approximately the integral of d d q over 2 pi to the d of the Fourier transform of the denominator, which I said is approximately 1 over tc times q squared plus xi to the minus 2. We've seen this before. You evaluated this Fourier transform when you were doing Landau-Ginzburg. So this is something that, when you are looking at distances that are much less than this correlation length, grows as the Coulomb power law. When you are looking at distances that are much larger than the correlation length, you get the exponential decay, with this r to the d minus 1 over 2 factor.

So what we find is that the correlation of these phantom loops is precisely the correlation that we had seen for the Gaussian model, in fact. It has a correlation length that diverges in precisely the same way that we had seen for the Gaussian model, with the square root singularity. So this is our usual nu equals 1/2 type of behavior that we've seen. And somehow, by all of these routes, we have reproduced some property of the Gaussian model. In fact, it's a little bit more than that, because we can go back and look at what we had here for the free energy. So let's erase the things that pertain to the correlation length and correlations, and focus on the calculation that we kind of left in the middle over here. So what do we have?
We have that log of S prime-- the intensive part-- is a sum over the lengths of these loops that start and end at the origin. And the contribution of a loop of length l is small t to the l. And since W sub l is the connectivity matrix to the l-th power, it's really like looking at the (0, 0) matrix element of this entity. And of course, there is this degeneracy factor of 2l. And I can write this as 1/2 of-- well, let's do it this way-- the (0, 0) element of the sum over l of tT to the l over l. And what is this? This is, in fact, the series expansion of minus log of 1 minus tT.

So I can, again, go to the Fourier basis, and write this as minus 1/2 sum over q of 0 q, log of 1 minus t times the eigenvalue T of q, and then q 0. Each one of these brackets is just a factor of 1 over square root of N. The sum over q goes over to N times the integral over q. So this simply becomes minus 1/2 integral over q, 2 pi to the d, of log of 1 minus t times this sum over alpha of cosine of q alpha times 2 that we had over here. And again, if I go to this limit where I am close to tc, the critical value of this t, and focus on the behavior as q goes to 0, this is going to be something that has this q squared plus xi to the minus 2 type of singularity. And again, this is the kind of integral that we saw in connection with the Gaussian model. And we know the kind of singularities it gives.

But why did we end up with the Gaussian model? Let's work backward. That is, typically, when we are doing some kind of a partition function of a Gaussian model-- let's say we have some integral over some variables phi i; let's say we put them on the sites of a lattice; and we have e to the minus phi i, some matrix M ij, phi j, over 2, with the sum over i and j implicit over there-- what was the answer? The answer was typically proportional to 1 over the determinant of this matrix to the 1/2, which, if I exponentiate it, would be the exponential of minus 1/2 the logarithm of the determinant of this matrix.
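Stated as a formula (my restatement of the standard Gaussian-integral result just quoted):

$$\int \prod_i d\phi_i \; e^{-\frac{1}{2}\sum_{ij}\phi_i M_{ij}\phi_j} \;\propto\; (\det M)^{-1/2} \;=\; e^{-\frac{1}{2}\operatorname{tr}\ln M} \;=\; e^{-\frac{1}{2}\sum_q \ln \lambda_q},$$

where the $\lambda_q$ are the eigenvalues of $M$.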
So that's the general result. And we see that the result for our log of S prime is, indeed, of the form of minus 1/2 of the log of something. And indeed, this sum over q corresponds to summing over the different eigenvalues. And if I were to express det M in terms of the product of its eigenvalues, it would be precisely that.

So you can see that actually, what we have calculated, by comparison of these two things, corresponds to a matrix M ij which is delta ij minus t times this single-step connectivity matrix that I had before. So indeed, the partition function that I calculated, that I called Z prime or S prime, corresponds to doing the following-- doing an integral over the phi i's, where from the delta ij, for each phi i, I would have a factor of e to the minus phi i squared over 2. So essentially, I have to do this. And then from here, once it's exponentiated, I will get a factor of e to the sum over neighboring pairs ij of t phi i phi j.

So you can see that I started by calculating with Ising variables on this lattice. The result that I calculated for these phantom walks is actually identical to what I would get if I were to replace the Ising variables with quantities that I integrate over all values, provided that I weigh them with this factor. So really, the difference between the Ising model and what I have done here can be captured by the weight that you put on the individual integration per site. So if I really want to do Ising, the weight that I want-- let's do it this way-- the weight for phi has to have a delta function at minus 1 and a delta function at plus 1. Rather than doing that, I have calculated a weight that corresponds to the Gaussian, where the weight for each phi is basically a Gaussian weight. And if I really wanted to do the Landau-Ginzburg, all I would need to do is to add here a phi to the 4th.

The problem with this Gaussian-- the phantom system that I have-- is the same problem that we had with the Gaussian model. It only gives me one side of the phase transition.
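To collect the comparison of weights being made here in one formula-- this is just a schematic restatement of the three cases named above, with u standing for whatever coefficient one puts on the quartic term:

```latex
% One partition function, three choices of single-site weight (schematic)
\[
  Z' \;=\; \int \prod_i d\phi_i \, W(\phi_i)\;
  \exp\!\Big( t \sum_{\langle ij \rangle} \phi_i \phi_j \Big),
\]
\[
  W_{\text{Ising}}(\phi) = \delta(\phi - 1) + \delta(\phi + 1), \qquad
  W_{\text{Gaussian}}(\phi) = e^{-\phi^2/2}, \qquad
  W_{\text{Landau--Ginzburg}}(\phi) = e^{-\phi^2/2 \, - \, u\,\phi^4}.
\]
```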
Because you see that I did all of these calculations, and all of these calculations were consistent as long as I was dealing with t that was less than tc. Once I go to t that is greater than tc, then this denominator that I had becomes negative. It just doesn't make sense-- negative correlations don't make sense. And the argument of the log that I have to calculate here, if t is larger than 1 over 2d, doesn't make sense either.

And of course, the reason the whole theory doesn't make sense is kind of related to the instability that we have in the Gaussian model. Essentially, in the Gaussian model also, when t becomes large enough, this phi squared is not enough to remove the instability that you have for the largest eigenvalue.

Physically, what that means is that we started with this taut string. And as we approached the transition, the string became more flexible. And in principle, what this instability is telling me is that once you go beyond the transition, to t greater than tc, the string becomes something that can go over and over itself as many times as it likes, and gain more and more entropy. So it will keep going forever. There is nothing to stop it. So the phantomness condition-- the cost that you pay for it-- is that once you go beyond the transition, you essentially overwhelm yourself. There's just so much that is going on. There is nothing that you can do.

So that's the story. Now, let's try to finally understand some of the things that we had before, like this upper critical dimension of 4. Where did it come from, et cetera? You are now in a position to do things and understand things. The first thing to note is, let's try to understand what this exponent nu equal to 1/2 means.
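As a tiny numerical illustration of this statement (nothing beyond what was just said; the dimension and the values of t are arbitrary choices), the q = 0 mode shows the sign change directly:

```python
import numpy as np

# The most unstable mode of M = 1 - t*lambda(q) is q = 0, where lambda = 2d.
d = 3
t_c = 1.0 / (2 * d)

for t in [0.5 * t_c, 0.9 * t_c, 1.1 * t_c]:
    m_min = 1.0 - 2 * d * t            # smallest eigenvalue of M
    status = ("stable: Gaussian weight normalizable" if m_min > 0
              else "unstable: weight no longer normalizable")
    print(f"t/t_c = {t / t_c:.2f}   1 - 2dt = {m_min:+.3f}   {status}")
```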
So we said that if I think about having information about my site at the origin, that has to propagate so that further and further neighbors start to know what the information was at site sigma 0-- that information can come through these paths that fluctuate, go different distances, and eventually, let's say, reach a boundary that is at size r. As we said, the contribution of each path decays exponentially, but the number of paths grows exponentially. And so for a particular t that is smaller than the critical value, I can roughly say that this falls off like this, so that there is a characteristic length, l bar.

This characteristic l bar is going to be minus 1 over log of 2dt. And 2dt I can write as 2d times (tc plus t minus tc). 2d tc is, by construction, 1. So this is minus 1 over the log of 1 plus 2d times (t minus tc); and since 2d is 1 over tc, that's 1 plus (t minus tc) over tc. Now, the log of 1 plus a small number-- so as my t approaches tc-- will behave like what I have over here. So you can see that this diverges as (t minus tc) to the minus 1 power. I want it, I guess, to be correct-- tc minus t, because t is less than tc. But the point is that the divergence is linear: as I approach tc, the length of these paths will grow inversely to how close I am.

Now what are these paths? I start from the origin, and I randomly take steps. And I've said that the typical paths that I will get will roughly have length l bar. How far have these paths carried the information? These are random walks, so the distance over which they have managed to carry the information, xi, is going to be like the square root of the length of these walks. And since the length of the walks grows like 1 over (tc minus t), this goes like (tc minus t) to the minus 1/2 power.

So the exponent nu of 1/2 that we have been thinking about is none other than the 1/2 that you have for random walks, once you realize that what is going on is that the length of the paths that carry information essentially diverges linearly on approaching the transition.
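Here is a minimal random-walk sketch of that last statement (again not from the lecture; the dimension, number of walks, and walk lengths are arbitrary choices): the root-mean-square end-to-end distance grows as the square root of the number of steps, which is all that the nu = 1/2 above relies on.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_walks = 3, 2000                       # assumed: 3 dimensions, 2000 walks

for n_steps in [100, 400, 1600]:
    axes = rng.integers(0, d, size=(n_walks, n_steps))    # axis of each step
    signs = rng.choice([-1, 1], size=(n_walks, n_steps))  # +/- along that axis
    # End-to-end displacement along each axis: sum the signed steps taken on it.
    end = np.stack([np.where(axes == a, signs, 0).sum(axis=1) for a in range(d)],
                   axis=1)
    r_rms = np.sqrt((end ** 2).sum(axis=1).mean())
    print(f"l = {n_steps:5d}   R_rms = {r_rms:6.2f}   "
          f"R_rms/sqrt(l) = {r_rms / np.sqrt(n_steps):.3f}")   # last column ~ constant
```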
So that's one understanding. Now, you would say that this is the Gaussian picture. Now I know that when we calculated things to order of epsilon, we found that nu was 1/2 plus something. It became larger. So what does that mean? Well, if you have these paths, and the paths cannot cross each other-- a path comes here, it has to go further away, because the walks are really not phantom-- then they will swell. So the exponent nu that you expect to get will be larger than 1/2. So that's what's captured in here.

Well, how can I really try to capture that more mathematically? Well, I say that in the calculations that I did-- let's say when I was calculating the correlation function sigma 0 sigma r-- in the approximation of phantomness, I included all paths that went from 0 to r. Among those there were paths that were crossing themselves. So I really have to subtract from that a path that comes and crosses itself. So I have to subtract that. I also had this condition that the numerator and denominator cancel each other, which really means that I have to subtract the possibility of my path intersecting with another loop that is over here.

And we can try to incorporate these as corrections. But we've already done that, because if I Fourier transform this object, I saw that it is this 1 over (q squared plus xi to the minus 2). And then we were calculating these perturbative corrections in u, and we had diagrams that kind of looked like this. Oops, I guess I want to first draw the other diagram. And then we had a diagram that was like this. You remember, when we were doing these phi to the 4th calculations, the corrections that we had for the propagator, which was related to the two-point correlation function, were precisely these diagrams, where we were essentially subtracting factors that were set by u. Of course, the value of u could be anything, and you can see that there is really a one-to-one correspondence.
Any of the diagrams that you had before really captures the picture of one of these paths trying to cross itself, which you have to subtract. And you can sort of set up a one-to-one mathematical correspondence between what is going on here. Yeah.

AUDIENCE: So why can't we have the path in the first correction you drew? Because aren't we allowed to have four bonds that attach to one site when we're doing the original expansion?

PROFESSOR: OK, so I told you at the beginning that you should keep track of all of my mistakes. And that's a very subtle thing. So what you are asking is, in the original Ising model, I can perfectly well draw a graph such as this, that has an intersection such as this. As we will show next time-- so bear with me-- in calculating things within the phantom condition, this is counted three times as much as it should be. So I have to subtract that, because a walk that comes here can either go forward, up, or down. There is some degeneracy there, so essentially this has done an over-counting that is important, and that I have to correct for when I do things more carefully next time around. Yes.

AUDIENCE: When you did the Gaussian model, we never had to put any sort of requirement on the lattice being a square lattice.

PROFESSOR: No.

AUDIENCE: Didn't we have to do that here when you did those random walks?

PROFESSOR: No, I only used the square condition, or hypercube condition, in order to be able to write this in general dimension. I could very well have done triangular, FCC, or any other lattice. The expression here would have been more complicated.

So finally, we can also ask-- we have a feel from renormalization group, et cetera, that the Gaussian exponents, like nu equals 1/2, are in fact good provided that you are in sufficiently high dimension-- if you are above four dimensions. Where did you see that occurring in this picture? The answer is as follows.
So basically, I have ignored the possibility of intersections. So let's see when that condition is roughly good. The kind of entities that I have, as I get closer and closer to tc in the phantom case, are these random walks. And we said that the characteristic of a random walk is that if I have something that carries l steps, the typical size in space to which it grows scales like l to the 1/2.

So we can recast this as a dimension. Basically, we are used to saying that linear objects have a mass that grows-- what do I want to do? Let's say that I have a hypercube of size L-- let's actually call it size R. Then the mass of this, or the number of elements that constitute this object, grows like R to the d. So if I take my random walk, and think of it as something in which every step has unit mass, you would say that l is proportional to the mass, so that the radius grows like the number of elements to the 1/2 power, or the mass to the 1/2 power. So you would say that for the random walk, if I want to force it into a relationship between mass and radius, the mass goes like the radius squared. So in that sense, you can say that the random walk has a fractal or Hausdorff dimension of 2.

So if you are very, very blind, you would say that this random walk is like a two-dimensional thing. It's like a page. So now the question is, if I have two geometrical entities, will they intersect? If I have a plane and a line in three dimensions, they will barely intersect. In four dimensions, they won't intersect. If I have two surfaces that are two dimensional, in three dimensions they intersect in a line. In four dimensions, they would intersect in a point. And in five dimensions, they won't generically intersect, just like two lines generically don't intersect in three dimensions.
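The counting behind these examples is just the standard rule that codimensions add under generic intersection-- a hedged aside, since the lecture only states the examples:

```latex
% Generic intersection of objects of dimensions d_1 and d_2 embedded in d dimensions:
\[
  \dim(\text{intersection}) \;=\; d_1 + d_2 - d ,
  \qquad \text{so a generic intersection exists only if } d_1 + d_2 \ge d .
\]
% For two random walks, d_1 = d_2 = d_f = 2, so generic intersections
% require d <= 4 -- the borderline dimension quoted next.
```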
So if you ask how bad it is that I ignored the intersections of objects that are inherently random walks, I would say that the answer, geometrically, is that intersection is generic if d is less than 2 df, which is 4. So we made a very drastic assumption. But as long as we are above four dimensions, it's OK. There's so much space around that, statistically, these intersections-- this non-phantomness-- are so entropically unlikely that they essentially never happen. You can ignore them, and the results are OK. But you go to four dimensions and below, and you can't ignore them, because generically these things will intersect with each other. That's why these diagrams are going to blow up on you, and give you some important contribution that makes the walks swell, and gives you a value of nu that is larger than the 1/2 that we have for random walks.

So that's the essence of where the Gaussian model was coming from-- why we get nu equal to 1/2, why we get nu's that are larger than 1/2, what the meaning of these diagrams is, and why four dimensions is special. All of it really just comes down to the central limit theorem, and knowing that the sum of a large number of variables has a square-root-of-n type of variance and fluctuations. And it's all captured by that.

But we wanted to really solve the model exactly. It turns out that the conditions that were very hard to implement in general dimensions can be made to work out correctly in two dimensions. And so the next lecture will show you what these mistakes are, how to avoid them, and how to get the exact value of this sum in two dimensions.