The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

ABBY NOYCE: So we've been talking this week about perception of language, how you take all of this information that's coming in, sort it out, and figure out the structure in it. Today we're going to talk about a slightly smaller subject, or at least one that has been studied in less depth than comprehension has, which is how you produce language. You can do the same thing in reverse: you can go from this kind of preverbal idea of something that you want to convey to your listeners. That's the message level, where you've got an idea, and from there you select the words you're going to use to convey it and you put together a sentence structure for them. And from there, once you have words in a sentence structure, you need to pull up the phonological information about the sentence, about all of the words that are going into it. And finally, you pass all of this information on to the motor control centers so that you can actually articulate your sentence.

So this is a fairly standard model. It shows four different levels of processing, and these don't necessarily happen sequentially -- it's not that you do all of your grammatical encoding before any of your phonological encoding. There's some reasonably good evidence that they interact a bit, and likewise with phonological encoding and articulation.

But first, just to get our brains working, we're going to have a moment of design-an-experiment. So you're a cognitive neuroscientist, or a psycholinguist, perhaps, and you believe that there are distinct levels involved in the production of language -- that when somebody produces a sentence, they have to go through different kinds of processing to do that.
How might you prove or disprove -- how might you test this hypothesis that there are distinct phases in language production? Think about it. You can talk to your classmates.

So, things in your brain that are involved in language. We talked earlier this week about how Wernicke had this idea that there were auditory images in Wernicke's area and motor images in Broca's area, and that the difference between Wernicke's aphasia and Broca's aphasia was which of these centers was harmed. There are a lot of flaws in that model. A researcher named Marsel Mesulam modified it a little bit and said that all language, again, depends on circuits, on activity between different language areas. And Mesulam's was one of the first models to point out that different kinds of language production require different kinds of processing.

The example he used is that if I ask you to do something like name all the months in the year. Ready? Go. Come on. January, February, March, April, May, June, July, August, September, October, November, December. So this is something that requires a lot less planning and a lot less structuring than actually producing a well-formed sentence does. And Mesulam hypothesized that rote recitation like this -- or reciting the alphabet, or your times tables, or any of these things that you learned by heart as a little kid -- really only requires premotor and motor areas up here in the frontal lobe. So our primary motor cortex is this strip right along here, and premotor is the blue stuff next to it.

He pointed out, though, that when you're hearing words, this activates primarily auditory cortex down here in the temporal lobe, in auditory association areas. That's unimodal -- auditory alone -- versus these kinds of crossmodal representations, when you're actually thinking about what you're hearing.
If I say a word -- I don't know, "baseball" -- you probably come up with not just an auditory response to that, but all of these other associations. You might have visual memories that you associate with it, or auditory memories that aren't just the sound of the word. So that would be a crossmodal association. But Mesulam says that when you're just listening to words but not acting on them, most of what's activated is just auditory cortex.

And Mesulam says that when you're producing words, you end up seeing activity all over the brain. You get motor stuff. You get word knowledge stuff. All of these different pieces are involved.

So here's an example -- this is some old-ish data at this point -- a PET scan. These are participants doing four different tasks while looking at a computer screen. In the first case, they would just show a word on the screen, and the participants didn't have to do anything but look at the word. And as you can see, most of what's most active in that condition is occipital. It's all that visual cortex. You can see there's a little bit in the frontal lobes, a little bit in the temporal lobes, but the areas that are most active are the visual cortex in the back of the brain there.

The second case was pretty similar, except instead of looking at words, this was auditory. So you'd have headphones on, look at the screen, and hear words being played into the headphones. And in this case it's mostly, as you might expect, temporal lobe stuff that's most active -- auditory cortex on the sides of your head there in the temporal lobe. And one of the things that's moderately interesting about that -- and I don't have a good story for why this is -- is that a lot of the frontal lobe activation you see in the looking-at-words case goes away in the auditory case.

AUDIENCE: [INAUDIBLE]?

ABBY NOYCE: They were just looking at a blank screen. I think it had a fixation point on it.
"Keep looking at the little dot on the screen."

In the third case, they showed them a word on the screen and participants had to read the word aloud. They had to pronounce it. And as you can see, at this point a lot of the activity is suddenly in the motor cortex. I'm sure you all are shocked. So this is one piece of evidence that articulation requires a different kind of activity than simply understanding or perceiving words -- that you do, in fact, need motor activation. So this is that strip between the frontal lobe and the parietal lobe, where motor cortex is.

And finally, they asked them to do a word generation task. So they would show them one word on the screen and ask the participant to say a word that was related. So if the word that was shown to them was "bike," they might say "ride" or "helmet" or "Cambridge." If the word on the screen was "boat," they might say "sail," and so on -- trying to find a word that's associated with the word on the screen. So this suddenly requires participants to actually use the semantics of the word in a way that they didn't have to on any of the previous levels. If you're just reading it, or just hearing it, or just reading it aloud, you don't really have to think about what it means. Whereas if you're trying to come up with a related word, you've got to tap into all of your semantic knowledge about what that word represents.

And what's interesting about this is that you get, again, some motor activation, and you get some auditory activation, but you also get a whole bunch of stuff going on in that premotor area -- the area that coordinates and plans utterances and other motor sequences, but in this case, utterances.

So this kind of imaging study, scanning participants on tasks that require them to think about language on different levels, is one piece of evidence showing that there probably are different phases of processing.
Another piece of evidence that there are different stages in processing is the kinds of mistakes people make when they talk. We all got used to making fun of President Bush because his command of the English language could be a little weak at times, but the fact of the matter is that if you followed anybody around with microphones, no matter how articulate they are, and wrote down everything they said, you'd find tons of errors. We all make speech errors anytime we're talking and not just reading aloud -- and sometimes even then.

So one of the most common types of speech errors are what are called exchange errors, and you'll see these all the time when people are talking fast. You'll get word exchange errors, like "I wrote a mother to my letter," and you'll also see sound exchange errors, phoneme exchange errors. These are called spoonerisms in honor of William Archibald Spooner, who was a don at New College, Oxford, in the early 20th century and was notorious for doing this. And there are many statements, such as the one there, that have been attributed to Spooner, and he has given his name to this kind of sound exchange error. "You've hissed all my mystery lectures." "You have tasted the whole worm."

AUDIENCE: What's it supposed to really be?

AUDIENCE: Yeah. "Wasted the whole term."

AUDIENCE: Oh.

ABBY NOYCE: There you go. So sound exchange errors are pretty common. I mean, I remember my eighth grade earth science teacher sent the entire class of us into about five minutes of uncontrollable giggling. We were talking about volcanoes, and he talked about how volcanoes would release clouds of "gash and ass." And of course, this is eighth grade, and [GASPS] he said "ass." And there was much giggling. And it's just a straight-up sound exchange error like people make all the time.
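If it helps to see the structure of a sound exchange error spelled out, here is a small, purely illustrative Python sketch -- not anything from the lecture or the reading -- that swaps the initial consonant clusters of two words the way a spoonerism does. The onset-splitting rule is a crude orthographic assumption rather than real phonology, and the output is spelling rather than sound, so "werm" stands in for "worm."

```python
# Illustrative sketch: a spoonerism exchanges the onsets (initial consonant
# clusters) of two words, leaving the rest of each word in place.
# The onset rule below is a rough orthographic approximation, not a real
# phonological analysis -- it's only meant to show the shape of the error.

VOWELS = set("aeiou")

def split_onset(word):
    """Split a word into (onset, rest) at the first vowel letter."""
    for i, ch in enumerate(word):
        if ch.lower() in VOWELS:
            return word[:i], word[i:]
    return word, ""  # no vowel found; treat the whole word as onset

def spoonerize(w1, w2):
    """Exchange the onsets of two words, e.g. 'missed', 'history' -> 'hissed', 'mistory'."""
    on1, rest1 = split_onset(w1)
    on2, rest2 = split_onset(w2)
    return on2 + rest1, on1 + rest2

if __name__ == "__main__":
    print(spoonerize("missed", "history"))  # ('hissed', 'mistory') -- "hissed all my mystery"
    print(spoonerize("wasted", "term"))     # ('tasted', 'werm') -- spelling of "worm" differs
```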
So let's look at some of these different levels of processing that have to happen in order to produce an utterance of some sort. The message level is probably pretty straightforward: you have some piece of information, some idea, that you want to convey to the people around you. How you move on from there is a little bit harder.

So one of the things that happens is this word selection stage. You're going from this raw meaning, this idea, and trying to find the words that are needed to represent it. Most likely, if we look at the sort of models that we've been looking at throughout this course, where you've got competing activations, you can think of it as the meaning you want to get across activating different word representations, depending on how strongly related to the meaning they are -- think of the cortical interconnections that we've considered all along.

And sometimes, every once in a while, you'll get errors that are a blend of two different words. When you have two words that are both strongly activated for a particular position in your sentence, you'll try to say both of them at once and get all tangled up. The example in the reading is: you've dropped a pen under the chair of somebody in front of you, and you're trying to ask them to pick it up for you, and you've got both the word "chair" and the word "seat" -- "My pen is under your chair," "My pen is under your seat," "Can you get it for me?" But "chair" and "seat" are both competing for that noun slot in the middle there. And if they're both about equally strongly activated, they can both get passed down to the phonological stage, and you can end up trying to pronounce both. You'll say things like "My pen is under your cheat -- under your sair -- under -- under your chair. Can I have it, please?" And I don't know if anyone here has ever experienced that kind of tangled-up language production, but it's fairly common if you start listening for this stuff.
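As a rough way to picture that competition, here is a tiny, purely illustrative simulation with made-up numbers, not a model taken from the lecture or the reading: two candidate words accumulate activation from the same intended meaning plus a little noise, whichever crosses a selection threshold first gets sent on to phonological encoding, and if both cross on the same step you have the ingredients for a blend like "cheat" or "sair."

```python
import random

# Purely illustrative sketch of lemma selection by competing activation.
# Two candidate words receive activation from the same intended meaning;
# whichever reaches threshold first is selected. If both reach threshold
# on the same step, both get passed downstream -- the situation that can
# surface as a blend error. All parameters are invented for illustration.

def select_lemma(candidates, threshold=1.0, noise=0.05, dt=0.1, seed=None):
    rng = random.Random(seed)
    activation = {word: 0.0 for word in candidates}
    t = 0.0
    while True:
        t += dt
        for word, strength in candidates.items():
            # activation grows with how strongly the meaning supports the word,
            # plus a bit of random noise on every time step
            activation[word] += strength * dt + rng.gauss(0.0, noise)
        winners = [w for w, a in activation.items() if a >= threshold]
        if winners:
            return winners, round(t, 2)

if __name__ == "__main__":
    # "chair" and "seat" are almost equally supported by the intended meaning
    candidates = {"chair": 0.50, "seat": 0.48}
    for trial in range(5):
        winners, t = select_lemma(candidates, seed=trial)
        outcome = winners[0] if len(winners) == 1 else "blend of " + " + ".join(winners)
        print(f"trial {trial}: selected {outcome} at t={t}")
```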
The other step in grammatical encoding, happening at about the same time as you're trying to find the words you need, is putting together the syntax. And what guides sentence syntax is, at least at first, hard to think about, because all you get to see of it is the output -- the sentences that finally come out. You don't get to see any of the intermediate processes.

And so somebody did a kind of clever study. They told participants they were doing a memory experiment. They showed them a string of simple images and said, hey, in order for you to remember these better, we want you to say one sentence about each image as it's presented to you. These were simple scenes. So you might have -- I don't know if I showed you -- here's a house and a tree. And for this scene, you could say either "The house is next to the tree," or you could say, "The tree is next to the house."

And what these researchers did is, before each picture was presented, they just put up a word and told subjects to read it out loud. And so for a scene like this, they might put up a word like, oh, I don't know, "building" versus a word like "pine." And what they were hypothesizing was that, if your priming word was a word related to the house part, then when you tried to develop a sentence about this picture, your ideas about houses would be faster to activate -- because they've been primed by the word -- than your ideas about trees. And so you would expect to see people say sentences like "The house is next to the tree": they'd pull up the house part first and then fill the rest of it in later. Whereas if the priming word was something like "pine" or "maple," then your tree representations would be more active and easier and faster to pull up.
And so you would start developing a sentence structure that lets the tree part go first and fill in later with the other half of the image, the house -- so that you would see people say "The tree is next to the house" if they were primed with one of those tree-related words.

And pretty much what they found is that the part of the scene that was primed generally came first in the sentences people spoke, and the part of the scene that was not related to the priming word came second. You'd also see people putting things into active versus passive voice depending on which part of the scene came first -- not so much for this image, but for an image that shows one object acting on another object.

So the idea is that word selection and sentence structure happen at about the same time, and how fast you can pull up the words you want affects how you structure your sentences. If you can get one word right away, then you're going to build the rest of the sentence to accommodate putting that word at the beginning, and fill in later with the other words that are a little bit slower to come up.
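One way to picture that "whatever comes up first goes first" idea is a toy sketch like the one below. The retrieval latencies, the size of the priming benefit, and the prime-to-concept mapping are all invented for illustration; none of these numbers come from the study itself.

```python
# Illustrative sketch of "whatever you can retrieve first goes first."
# Each concept in a scene takes some time to retrieve; a related prime
# speeds retrieval of its concept. The earlier-retrieved concept claims
# the sentence-initial slot and the frame is built around it.
# Retrieval times and the priming benefit are invented numbers.

BASE_RETRIEVAL_MS = {"house": 620, "tree": 600}   # assumed baseline latencies
PRIME_BENEFIT_MS = 150                            # assumed speed-up from a related prime
RELATED_PRIMES = {"building": "house", "pine": "tree", "maple": "tree"}

def plan_sentence(scene, prime=None):
    """scene: pair of concept names, e.g. ("house", "tree")."""
    latency = dict(BASE_RETRIEVAL_MS)
    primed = RELATED_PRIMES.get(prime)
    if primed in latency:
        latency[primed] -= PRIME_BENEFIT_MS
    # whichever concept comes up first becomes the sentence-initial noun
    first = min(scene, key=lambda c: latency[c])
    second = next(c for c in scene if c != first)
    return f"The {first} is next to the {second}."

if __name__ == "__main__":
    scene = ("house", "tree")
    print("prime 'building':", plan_sentence(scene, "building"))  # house goes first
    print("prime 'pine':    ", plan_sentence(scene, "pine"))      # tree goes first
    print("no prime:        ", plan_sentence(scene))              # whichever is faster anyway
```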
So back to these ideas about stages that are involved in language production. We talked about grammatical encoding earlier. Let's talk about this next stage down, phonological encoding. When people are speaking fluently, they usually produce about three words every second. And so for each of these words, the speaker's got to be able to pull up the phonological information about it -- what pattern of phonemes, what pattern of sounds, makes up that word, how you stress them, how it maybe changes when it's next to other words in a particular order, all of that information.

And one thing that you tend to see happening when people fail at pulling up this phonological information is what's called a "tip of the tongue" state. Somebody maybe reads you a definition and asks you, what's that word? And you know it. You know it's a word you know, and the word just won't come to you. Or the name: who's that person? Oh, I know them. They are -- and you just can't get it. Anyone ever have this happen to them?

AUDIENCE: All the time.

ABBY NOYCE: Right. So it's a "tip of the tongue" state. And in this model, the "tip of the tongue" scenario is explained as a case where you know all of the semantic stuff about the word. If it's a name you can't get, you know which person you're talking about. You can tell somebody all sorts of things about them, like whether they were in your history class last semester or whatever, but you can't come up with the name. And it's generally believed that this happens when the connection between the semantic representation and the phonological information is blocked for whatever reason, so that you can't go from the semantic step to the phonological step.

"Tip of the tongue" states are more common for moderately uncommon words. So for really common, everyday, short words, "tip of the tongue" states are pretty rare. For longer, more precise words, words that you don't see so often, they're more common. Usually when you have a "what's that word?" moment, it's for a word that's not one of the most common in the language.

And there have been a few cases of patients, usually following some kind of brain injury or brain trauma, who have this "tip of the tongue" phenomenon all the time. They have a great deal of difficulty with every word they try to come up with -- they know what they want to say, they've got the semantic representation right there, and they just can't find the phonological representation for it.

Yeah, tip of the tongue. So this is another example of a case where the sentence construction stage, the grammatical encoding stage, of producing language is working OK, and then this later phonological encoding is not working.
So that's another piece of evidence that these might, in fact, be separate phases in language processing -- language creation, language production, something.

So do these levels ever interact with each other? The easiest way of looking at this model is to say that sentences start out as abstract meanings, which are passed to this grammatical encoding level, which finds words and builds a structure, which then passes them to the phonological level, which finds sounds, which then passes them to this articulation machine. But it turns out there's actually at least some feedback going on between the levels, or there seems to be.

And the best evidence for this at the moment comes from those word exchange errors we talked about -- "I wrote a mother to my letter." If there were no interaction between the grammatical level and the phonological level, then you'd expect word exchange errors to happen equally often between pairs of words, regardless of whether they have any phonological similarity. But if you actually document what kinds of word exchange errors people tend to make, this isn't true. The word exchange errors people make occur more often between words that have a similar phoneme pattern. So "mother" and "letter" both have a stressed first syllable and then an unstressed last syllable with just that "-er" on it. And patterns like that are what you tend to see. So there's some amount of feedback going on between these levels.
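To give a feel for what "similar phoneme pattern" could mean when you code up a corpus of exchange errors, here is a crude, hand-rolled similarity check over a tiny invented lexicon. Real studies work from phonetic transcriptions, so treat this only as an illustration of the kind of comparison involved.

```python
# Toy check of phonological similarity between two words, standing in for
# the kind of coding used on corpora of exchange errors. This is a crude
# heuristic over a tiny hand-labeled dictionary, purely for illustration.

# invented mini-lexicon: (syllable count, stress pattern, final letters)
LEXICON = {
    "mother": (2, "Sw", "er"),   # S = stressed syllable, w = weak
    "letter": (2, "Sw", "er"),
    "pen":    (1, "S",  "en"),
    "chair":  (1, "S",  "air"),
}

def similarity(w1, w2):
    """Count how many of three coarse phonological features two words share."""
    s1, s2 = LEXICON[w1], LEXICON[w2]
    score = 0
    score += s1[0] == s2[0]   # same number of syllables
    score += s1[1] == s2[1]   # same stress pattern
    score += s1[2] == s2[2]   # same word ending
    return score

if __name__ == "__main__":
    print("mother/letter:", similarity("mother", "letter"))  # 3 -- a likelier exchange pair
    print("mother/pen:   ", similarity("mother", "pen"))     # 0 -- a less likely pair
```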
Articulation, of course, is this last stage. Articulation is mostly just a matter of motor control. It's saying, OK, I've got a phonological representation -- how do I turn that into actual, specific movements of my mouth, my tongue, my lips, all of this stuff that's actually doing the articulating? So here's our good old brain diagram. Again, primary motor cortex is here. And it's been really well documented at this point that laid out along that strip of motor cortex are regions that control different parts of the body: toes, ankles, hip, knee, trunk, arm.

Notice there's a lot of cortex devoted to your hands. That's probably not too shocking. Fine control over the hands is really important for humans; it's important for a lot of things that we do. There's also a lot of it devoted to the lips and jaw and tongue. Again, this ability to have really fine and precise control over these things is important. I'm always kind of startled at just how much motor cortex goes to swallowing and throat control, which makes sense, but it's not something that we think of as requiring a lot of control.

So motor cortex is happening right up there. Remember, if we go back to -- what was his name? The guy we were talking about at the beginning of class who had a model of how language depends on different parts of the motor cortex. And I'm totally blanking on his name. "Tip of the tongue" moment. Starts with an M. What was it?

AUDIENCE: Mesulam.

ABBY NOYCE: Mesulam, OK. Yeah, him. And he was saying that, depending on what you're doing, some kinds of articulation basically just require motor control; others have to be coordinated from further up.

So we've got our motor cortex. The primary motor cortex, the strip that's right along the central sulcus there, right up against the parietal lobe, controls mostly fine motor coordination, very fine control of movement. There are actually nerve fibers that run from primary motor cortex all the way down to the motor neurons driving the muscles -- this is like direct muscular control for fine motion.

And then next to it is this premotor area. And the premotor area seems to be most involved in setting up sequences of actions, especially in response to perceptual information. So for example, if I was to toss something across the room to a student. Sarah, catch. And Sarah -- very nice.
So Sarah is taking in all of this perceptual information -- she sees the pen coming toward her -- and coordinating a plan that involves moving her hands up at the right time and in the right place in order to be able to catch the pen when it gets to her. And setting up that sequence is coordinated in part through the premotor area and the cerebellum. The cerebellum has a lot to do with getting that kind of smooth coordination, so that our muscles are all working at the right time relative to one another and we aren't moving all jerkily.

And then there's this supplementary motor area that is involved in more self-directed action planning. So premotor is really working with responding to your immediate environment, taking that perception and building a sequence from there, while the supplementary motor area is involved in longer-term planning. If I said, I have a motor goal of getting to that window, that would be coordinated by the supplementary motor area, and then the premotor area would be more involved in helping me walk around obstacles and make sure I don't walk into anything on my way to getting there. So all of these are going to be involved in articulation.

Someone did a study in monkeys, training them to -- no, what am I thinking of? Yeah, this was a study in monkeys, but it wasn't a directly articulation-related one, because monkeys don't have language. They had either trained the monkeys to push a set of three buttons in a certain pattern, or trained them to watch how the buttons lit up and then repeat the pattern by pushing the buttons. So one of these was a task where the monkey had to know the plan in advance; the other was a task where the monkey had to respond to environmental input, to perceptual information. And this is one of the really solid pieces of evidence for the difference between supplementary motor and premotor in terms of what they do.
When the monkeys were working entirely from memory, it was mostly their supplementary motor area that was active. When they were working in response to what was shown to them right then, it was mostly premotor.

One other thing about articulation before we move on: because our face is right up here close to the brain, a lot of the nerves that control the muscles involved in articulation don't run through the spinal cord the way the nerves controlling, say, your hands do. A lot of it is direct cranial nerves that come right from the brain; they don't go out through the spinal cord and back. So the way the circuit works for controlling facial movement is different than for controlling a lot of the rest of your body.

Moving on very quickly. The other context in which we tend to produce language is writing things. How we produce spoken language is not well understood; how we produce written language is even less well understood. But there are some key differences between producing written language and producing spoken language that I just wanted to highlight for you guys.

Unlike when you talk -- where you're usually talking to and with people who are immediately around you -- when you're writing, it's usually just you, just your own stream of language being put down. You aren't building it into a conversation or anything like that. If you analyze the syntax that's used in people's writing versus the syntax that's used when they're speaking, the written syntax is more complex. You'll see more recursion, more subordinate clauses, more nested phrases, all of that. And that seems to be easier to follow when reading, too, so this makes sense on both sides. And when you're writing, you get to futz with it and change it: after you write something down, you can decide, no, that doesn't make sense, let me go back. You don't get to do that when you're talking. If you say something that doesn't make sense, you're kind of stuck with it being out there.