Description: Behavioral methods to study cognitive development in infants, probing infants’ evolving understanding of objects and their physical behaviors, and understanding of agents who engage in goal-directed activity and initiate social interactions.
Instructor: Liz Spelke
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
LIZ SPELKE: I want to talk about-- you asked about development of knowledge within infancy-- and I want to talk about one case where we've seen an interesting developmental change. One that I don't think we really fully understand. I'm sure we don't fully understand it. I'm not sure we understand it at all. But it seems to be there.
And it has to do with effects of both, maybe, inertial properties of object motion and also effects of gravity on object motion. Let me just tell you the result first. These are studies that were done by In-Kyeong Kim a long time ago, but recently enough that video was part of our toolkit, which wasn't always the case. They involved showing babies videotapes of events in which an object held by a hand was placed on an inclined plane, released, and started to move.
And in one study, babies either saw a plane that was inclined downward-- the hand released the object, and it rolled downward with the natural acceleration-- or the plane was inclined upward, the hand released it, and it rolled upward, decelerating. OK? We studied this in adults as well, and adults reported that it looked like the hand set the ball in motion. In fact, to control the motion better, there was a slingshot apparatus that actually set it in motion. But it underwent what looked like a natural deceleration from the point at which the hand released it as it rose up. Half the babies were bored with one event, and half were bored with the other.
And then, at test, we switched the orientation of the ramp, so that if babies had been seeing something move downward, now they were seeing events in which it moved upward. And in one of those two test events, it underwent the natural change in speed that adults would expect, which is to say, a change from the motion that the infants had previously seen. So these guys saw this thing rolling downward and accelerating, and here in the natural event, they're seeing something that's being propelled upward, and it's decelerating. OK? In the other event, they saw the same motion pattern that they saw before-- so if they saw speeding up here, they saw speeding up here-- contrary to effects of gravity on object motion. OK.
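For reference, the "natural" pattern here is just standard inclined-plane kinematics, idealizing away friction and the details of rolling: taking the direction up the incline as positive, an object on a plane tilted at angle $\theta$ from horizontal has acceleration

$$a = -g\sin\theta$$

so motion down the plane speeds up and motion up the plane slows down, both at a rate set by the tilt of the plane.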
So the question was, if the babies were sensitive to effects of gravity, they should look longer when the thing accelerates, you know, speeds up as it's moving upward than they do when it slows down as it's moving upward. On the other hand, if they're not sensitive to gravity, and they're just representing these events as speeding up or slowing down, they might show the opposite pattern. OK?
And that's actually what we found in both conditions of that study. At five months of age, babies showed the pattern opposite to what adults would expect, as if they thought an object that speeds up in the beginning is going to continue speeding up irrespective of the orientation of that plane, with respect to gravity, OK? So failure at five months, success at seven months. At seven months, we get a reversal, and they flip in their looking times here.
Well, we thought, maybe the 5-month-olds will succeed at a simpler task. Suppose we start out just showing them a flat surface and constant speed, so we're not engendering any expectation that this object is going to speed up over time or slow down over time. They're just seeing a constant motion, and then they get tested with events in which the object is placed either at the top or at the bottom of this ramp. It's released, and it speeds up in both cases. In that case, will they view this motion as more natural and look longer at that one? Or, you might ask, would they show the opposite pattern [AUDIO OUT] if these events are difficult for them, show the opposite pattern?
What the 5-month-olds actually showed in that case was absolutely no expectation. OK? They seemed utterly uncommitted as to whether it was more natural for the thing to be moving downward than to be moving upward after seeing it move on this flat surface. Then, at seven months, kids responded as adults would.
Then we thought, maybe there's just a problem with video. Maybe kids don't understand that vertical in a video corresponds to vertical in the real world. So we just ran a control experiment to see if that was true, by familiarizing kids with downward motion in the real world and then presenting those same two test events. And now they showed the pattern that adults would show. That's not showing any knowledge here, it's just saying, yeah, they can distinguish downward from upward, and they can relate real events to video. But they don't seem to have any prior expectation that objects will move downward when they're rolling on an inclined plane.
And all these things seem to develop between five and seven months, which I think provides us with an interesting window for asking what's developing here. One possibility is what's developing is something very local. But I don't think so, because there's another change also occurring between five and seven months in what looks like a very similar situation. These are experiments that were conducted in Renée Baillargeon's lab, where she did these very simple studies where you'd have a single object in an array, and then a hand would come in holding another object and place it on that array.
And in that top study, she compares infants' reactions when the hand places the object on top of the box, releases it, and it stays there, versus places the object on the side of the box, releases it, and it stays there. OK? Now, if you're a 3-month-old infant, you are equally happy with those two events. You do not look differentially at them. But by five months, infants do. They look longer when the object seems to be stably supported by the side of the box, than when it's stably supported by the top of the box.
But then she goes on and asks, do infants make distinctions among objects that are stably supported from below-- as, by the way, they were in all those studies with the rolling on the ramps, right? Do they make the kind of distinctions that we would make based on the mass of the object, the physical force on the object, and so forth? And to get at that, she did a second study where she has a box, and she either places it in the middle of the top of the box or way off to the side, where we would expect it to fall, OK, not to be stably supported. 7-month-olds get that right, 5-month-olds do not.
So across these different situations, it looks like we have this developmental change in what infants know. I mean, I think these studies raise a lot of questions that they don't answer about what infants are learning over this time period. I'm excited about that because I think they also give us a method for addressing those questions. OK. And Tomer has been working on chasing that [AUDIO OUT], which would be great.
OK. So I've been asked already by a bunch of you, what happens at the very beginning of visual experience? I do have some slides on that. I do want us to take a break, but let me go through them very quickly. There's been a little bit done with newborn infants, where they've been studied under conditions where we are confident that they're able to see what they're being presented with. There's evidence for some of these abilities in newborns, but most of the things I told you about have not been tested in newborn infants and would be really hard to test in them.
But fortunately, humans aren't the only animals that have to be able to find the objects in a scene and track them through time. Other animals do that as well. And many animals seem to succeed at representing objects under the conditions where infants succeed. The big problem we have with animal models is that in many cases, the animals are way more competent than the infants. And they succeed in cases where infants would fail. OK?
But at least in cases where they succeed where infants succeed, we can ask, what would happen with controlled rearing? Can we at least get an existence proof that abilities of the sort we see in infants can be performed by some nervous system on first encounters with visible objects and with the types of events that it's now being asked to reason about?
So this has been done with controlled-rearing experiments. I'm going to show you the results of just a few experiments that were done on chicks. Chicks are being used a lot lately because they grow up pretty quickly, they're easy to raise, and they show this innate behavior toward objects that gives us a nice indicator, which is in some ways a little bit like preferential looking, though opposite in sign.
They imprint: if a chick has been raised in isolation, so there are no other chicks or hens around in its environment, and you show it an object repeatedly-- the object is the only moving object it sees-- it will imprint to it, treat it like another-- behave as it would with its mom, were she there. And in particular, if you then take the chick and put it in a novel environment where it's a little bit stressed, it will tend to approach that object. So this has the same logic as looking longer at a novel thing. You have selective approach to a familiar thing, and you can now run, on chicks, the kinds of experiments that have been run on human infants.
So here is one-- these are experiments that were actually done a while ago, though in some cases they're much more recent. Imprint a chick to a triangle, and then present it with a triangle whose center is missing, either because it's occluded or because there's a gap there. Which one is mom in this case? The one with the occluded center, not the one with the gap at the center. OK? Consistent with findings with infants in one way-- these chicks are perceiving occlusion-- but they're doing better than the infants. This works when the objects move behind the occluder; if you move objects behind the occluder, you get all the same kinds of motion effects that I was talking about with infants. But it also works if the object is stationary. So the chicks are better than the infants in that case.
What about object permanence? Here's a task that babies can't solve until they're 12 months old-- well, eight months in this case, 12 months in this case-- that chicks solve the first time they're presented with an imprinted object that moves out of view. I should have said that on the previous slide: the chicks never saw occlusion until the imprinting test. Here, they're imprinted to mom on the first two days of life, but there are no other surfaces in the environment, so they never see her occluded by another surface. Maybe they see her occluded by their own wing or something, but not by other objects in the environment. OK.
There are two screens there. A chick is restrained in a Plexiglas box and sees mom disappear behind one of the screens. It will go and search behind that screen for her. Only at 12 months do human infants solve the following problem: mom has disappeared behind and been found behind this screen five times in a row; now she goes there-- where will chicks search? A baby, until 12 months, will search behind the first place where she was hidden, where they found her in the past. A chick is more like a 12-month-old baby and shows the more mature pattern: it goes where mom actually is.
AUDIENCE: Are these images or actual moms?
LIZ SPELKE: These are actual moms. This is imprinting to either a ping-pong ball in some of these studies or a cylinder in other studies. It dangles on a wire during the imprinting period, so you can see that it moves. It moves here, and it kind of dangles and moves behind one screen or another screen. And then the chick is released, and the question is where will the chick go? Yeah.
AUDIENCE: Are these other chicks?
LIZ SPELKE: I think they imprint them for two days post-hatching. So they're hatched in an incubator-- in the killer study I'm going to tell you about, the incubator's in the dark. They spend two days in-- I think they spend a day in the dark, getting used to just walking around. And then the visual experience starts, and it's controlled, so there's no occlusion. But they do see this one object that they get to imprint to, and then they later go toward the object. OK? So an existence proof that such a capacity could exist doesn't tell us that it does exist in a young infant, but it could.
Here's the one that was done recently, that's my favorite study in this series on solidity. OK, here's a study where the chick is hatched in the dark and spends most of the day for the first three days of its life in the dark. But during a certain period each day, it's put in a Plexiglas box with black walls, a black floor, black ceiling. And through the Plexiglas, there is a single object that dangles that they imprint to. OK?
Now, they can't touch the object, so they never get evidence about whether this object is solid or not. They can't peck at it, right? They can peck, but they can only peck at this black surface that they can't see, or at this transparent surface that they also can't see. So they might learn that there are objects in the environment, but those objects aren't going to have any visual characteristics the way this guy does. OK?
All right. So then after this experience, the chick stays in this box, and in that box, sees a series of events involving this object. In the first set of experiments, the object moves behind one screen and then is revealed there, or moves behind the other screen and then is revealed there. In this particular study, the chick does not get to go out to get to that object. They did a series of studies before this one where they did, but in this study, they don't.
Then they see a second set of events where all they see is mom starting to move toward the space between the two screens. Then a screen comes down, so they can't see what happens next. And when the screen comes up, she is no longer visible, but she then emerges from behind either one screen or the other-- basically teaching the chicks, in effect, that mom can go behind either of these two screens.
And then comes the critical test. In the critical test, mom starts moving toward the midpoint of the two screens. The screen comes down-- sorry, the big occluder-- vision is blocked, and then when that curtain rises again and the two screens are again visible, they've been rotated backward. And across a series of studies, they vary the size of mom and the degree of rotation of the screens. And the question is, will the chick go to the side where the screen's rotation is consistent with a solid object behind it? And they do.
I want to argue from that that knowledge of solidity-- well, knowledge in some sense-- representations that accord with solidity is innate in chicks. And what I mean by innate is simply that it's present and functional on first encounters with objects that exhibit that property. This chick has not had the opportunity to peck at these objects, or observe them bumping into each other, or anything like that. And the first time they see this degree of rotation, they make inferences that are consistent with that principle.
This doesn't tell us that human infants do it. To be convinced of that, I'd want to see evidence for this ability in an animal that's a much better model for human object cognition. Or I'd want to see more evidence from the chicks to convince me that, actually, the chick is a really good model of human object cognition. But I do think it should encourage us to start thinking about how nervous systems could be wired to exhibit this property in the absence of specific learning and experiences with it.
Here are some questions that we could talk about in the Q&A later. I can give you the one-liner on it. On the issue of compositionality, the question is, do babies have a laundry list of rules about how objects are going to behave, or are they building some kind of unitary model of the world that accords with certain general properties? I think the evidence, starting at least at eight months of age-- lower than that, we don't know-- but starting at that age, I think the evidence favors the second possibility.
It comes primarily from these beautiful studies that were conducted by Susan Carey and her students and collaborators where they presented infants with an event that violated cohesion. So this could be a sand pile that's poured onto a stage, or a cookie that's broken into two pieces before those pieces are put in boxes, or a block that falls apart when it's hit by another block.
And then, she asks, having seen that this object violated this property, do infants expect it to accord with the other properties? And the answer is no, they don't. OK? So for example, in that causality study where an object moves behind a screen and then another thing starts to move-- if instead of starting to move, the second thing falls apart-- it violates cohesion, falls apart-- infants no longer expect the first object to have hit it. OK? Maybe it fell apart on its own, OK?
They don't assume that it's going to behave like an object in other respects. They also don't assume that sand piles will move continuously, or that they won't pass through surfaces. And the same for a cookie that is broken in two: if you have two cookies in one box and one cookie in the other, the babies crawl to the two. But if you have one large cookie, and you break it in two and put it in one box, they're neutral between those two options. OK? So it looks like the system is acting like an interconnected whole. What the babies learn about one aspect of an object's behavior bears on the inferences they make about other aspects of its behavior.
The final thing I want to end with that I think makes this point is very recent work that was conducted by Lisa Feigenson-- that I think Laura Schulz may talk about as well. Josh mentioned it last Friday, but didn't really describe it-- that I think suggests that at least at the end of infancy-- and again, we don't know what's happening earlier-- infants' understanding of objects isn't only forming an interconnected whole, but it centers on abstract notions about the causes of object motions.
So these are studies that picked up on some old findings from a variety of labs. Here's one that Baillargeon had worked on, and I did studies on it as well, where infants see an object that's moving in an array that has two barriers, but the barriers are mostly hidden by a screen, and the object moves so that it's fully hidden by the screen. The question is, where will it come to rest?
Looking-time studies say babies expect it to come to rest at the first barrier in its path. They look longer if, when the screen is raised, they find it behind the second barrier, as if it had passed through the first barrier in its path-- a solidity violation. Here's another situation that Baillargeon and others have studied, showing that by three months of age, infants expect objects to fall in the absence of support. If you push a truck along a block, and it either stops when it gets to the end of the block or continues off the block and doesn't fall, infants look longer in the latter case.
And what Feigenson asked is, suppose we don't let infants look as long as they want. Suppose we limit their looking time to just a couple of seconds, so they've looked equally at these two outcomes. What is their internal state? Are they just bewildered in the case where the object did something impossible? Or are they seeking to understand what's happening in the world? Is this a learning signal for them?
So she did two experiments-- actually, more than two-- but asked two questions with these experiments. One question was, suppose after showing this event, or this event, or this one, or this one-- after showing an event where an object behaves naturally versus apparently unnaturally-- you now try to teach infants, in effect, some new property of the object. So you pick the object up and squeak it. And the question is, do the babies learn that the object makes that sound? And what she finds is that the infants learn much more consistently about the object whose previous behavior was unpredicted than about the object whose previous behavior was predictable. OK.
I think this is both good news and bad news about all the looking-time methods I've been telling you about. The good news is that looking time, when you let babies look for as long as they want, really does seem to track what they're seeking to learn about-- their exploration and learning. The bad news is that when you restrict looking time to just a very short amount of time, there are many, many things that could be going on. They could be attending a lot, or they could be attending a little. It's a very crude measure that's not telling us that. But here's a richer measure suggesting differential learning in these two cases.
The other study, I think, is even cooler. In the other studies, after showing infants a violation event, she handed them the object, allowed them to explore it, and looked to see what they would do. And what she found is that they do different things depending on the nature of the violation. So when there was a solidity violation, they take the object, and they bang it on the tray in front of them. OK? In the case of a support violation, they take the object, and they release it. OK? So at 11 months-- and we don't know what happens earlier-- they're specifically oriented to expected properties of the object that could be relevant to understanding the apparent violation that they saw.
To summarize, it looks like there is a system that is growing over the course of infancy, as is clear from all the questions, which we can continue to discuss. There's a lot we don't know about this system. But at no point in development do infants seem to be perceiving just a two-dimensional, disorganized world of sensory experiences. At no point do they seem oriented to events going on in the visual field, as opposed to the real world, when you test them in these situations where you're asking them questions about properties of the world. They seem to be oriented to properties of the world itself.
And they seem to start out already, as early as we can test them-- in a few cases, like the center-occluded objects, that means newborns-- with a system which, although it's radically different from ours-- most of the things that we know about, most of the information we can get about objects from a scene, they do not get-- nevertheless includes core skeletal abilities that we continue to use throughout life and that seem to be present to serve as a basis for learning.
There have been lots of studies in which infants have been presented with people or other self-propelled objects, using the same logic as the studies of naive physics-- what infants know about object motion-- to get at something like naive psychology in infancy. Most of them are on babies that are older than the babies from the object studies; those studies ran mostly up to about four months of age. These studies are starting later than that, but they give us a starting point.
And here's one slide that tries to capture just about everything that I think we know about how 6- to 12-month-old infants represent agents. First of all, they represent people's actions on objects as directed to goals. So this is a basic study-- one of a whole series of studies-- conducted by Amanda Woodward, who was the advisor for Jessica Sommerville, who will be here giving a talk on Thursday afternoon.
They're very simple studies in which she presents babies with two objects side by side, and then they see a hand reach out and grasp one of those two objects. And the question is, how do babies represent that action? Do they represent it as a motion to a position on the left, or do they represent it as an action with the goal of obtaining the ball? And to distinguish those two possibilities, she then reverses the positions of the two objects, presents the hand in alternation, taking a new trajectory to the old object versus the old trajectory to a new object.
And the babies look longer when the hand goes to the new object, suggesting not that they can't represent trajectories, but that they care more about the agent's goal-- that's the bigger news. When the goal changes, that's a bigger change for the infant than when the motion simply changes to a new path and a new endpoint. That's at as young as five months of age.
Then there have been studies showing that infants' goal attribution depends on what is visible from the perspective of the agent. So if two objects are present, and the agent consistently reaches for one of them, babies in some sense-- in some poorly-understood sense-- represent that agent as having a preference for the object that they've chosen to go for over the other object. But if the object they didn't go for is occluded, and there's no evidence that that agent ever saw it, they don't make that preference inference. OK?
Third, infants represent agents as acting efficiently. And this is true whenever they're given evidence for self-propulsion, whether the object that's moving in a self-propelled manner has the features of an agent or not. So in these classic studies by Gergely and his collaborators-- which continue to the present day, but started, now, I guess 20 years ago-- they present infants with two balls that engage in self-propelled motion and even a little kind of interaction. And then one of the balls jumps over a barrier to get to the other ball. And across a series of familiarization trials, the barrier varies in height, and the jump is appropriately adapted to it.
And the question is, what do infants infer this agent will do when the barrier is taken away? Will it engage in one of its familiar patterns of motion, or will it engage in a new pattern of motion that's more efficient to get to the object? And the finding is, they expect that new motion and look longer at the more superficially familiar but inefficient indirect action.
Here are my favorite studies. This is one that Shimon talked about yesterday afternoon; he talked about one version of this. The original study was conducted by Sabina Pauen, and Rebecca Saxe and Susan Carey did interesting extensions on it. It uses the simplest imaginable method. They present babies first with two objects, stationary, side by side-- one with animate features, a face and sort of a fuzzy, tail-like body; the other, an inanimate ball. They're both stationary, and the objects were chosen to be about equally interesting: the babies look about half the time at each one.
Then, they see a series of events in which these two objects are stuck together and undergo this very irregular pattern of motion. This was actually some kind of parlor-trick toy that was sold for a while; that's what got them started on this study. There's a mechanism inside the ball that's actually propelling the two objects around. But the question is, which of these two objects does the baby attribute the motion to?
And to find that out, they subsequently separate the two objects again, and ask again, which one will the infant look at more? And after seeing this motion, the infant looks more at the one that has the animate features. Although both objects underwent the same motion, they attribute that motion to this guy, not the other guy. It follows that they're perceiving this-- they're representing this guy as causing the other guy's motion. Right? The other guy is not seen as causing its own motion; the motion is being caused by this guy. This is at seven months of age.
Now, even stronger evidence that infants infer causal agents comes from these beautiful studies that Rebecca Saxe, Susan Carey, and Josh Tenenbaum conducted, 10 years ago now, which went back to this efficient-action situation where you see an object efficiently going over barriers of variable heights to get to a new position on the stage. The one thing they added is that the object is manifestly inanimate. It's a beanbag, and the kid had a chance to play with it before the study began. They can feel that this is not an animate object.
So if this isn't an animate object, and infants are actively trying to explain the motions of objects that they see, they're going to need another kind of explanation. And what Saxe, Tenenbaum, and Carey showed is that infants infer that there is an agent, off-screen, on this side of the screen, that set that object in motion. And they show that by their relative looking time to an event where a hand comes in on that side of the screen, which is consistent with that causal attribution, versus it comes in on the other side, which is inconsistent with it.
And it's really a pretty manipulation. If in fact babies are making causal attributions here, then you ought to be able to screen them off: make a simple change to the method and show evidence that actually, this object is animate. If it is animate, then you don't need to infer another cause. So they ran that study as well and showed that when you change from an inanimate object to an animate one, you no longer get this effect. OK? So it really looks like they're seeking to explain these events and doing so in accord with the principle that agents can cause not only their own motion, but also changes in the world-- motions and other changes in objects.
And a final study that shows this, I told you that if a truck goes behind a screen toward a box, and then the box subsequently collapses, babies do not infer that the truck hit the box. OK? Collapsing takes this outside the domain of physical reasoning for those young babies. OK? But suppose instead of a truck going behind that-- this is work by Susan Carey as well. Sorry, I should have put it on here, Muentener and Carey. If instead of a truck, a hand goes behind that screen, now they infer that the hand did contact the object. OK? So hands can make things move, and they not only can make things move, they can make things do all sorts of things that objects otherwise won't do, like fall apart, OK?
All right. So one problem, as I said, with all these studies is that they're all on older infants, many of them much older infants, like 8- and 10-month-olds, just about all of them 6 months old or older. And when you test younger infants, you get failures on some of these tasks. So these goal-attribution tasks work with 5-month-old infants, but they fail with 3-month-old infants. 3-month-old infants seem uncommitted as to where this hand is going to go once the two objects exchange locations.
And that raises the question-- to me, the most interesting question-- what kinds of representations of agents, if any, does a younger infant have before they're able to act on the world by manipulating things, which starts around five months of age? Let me tell you just quickly about two kinds of studies that I think get at this-- imperfectly, but they're getting there.
One is, again, going back to studies of controlled-reared animals-- chicks, again, in particular. There have been imprinting studies done where chicks are raised in the dark, and all they get to see are video screens on which two objects engage in a simple causal, billiard-ball-type event. One object starts to move, contacts a second object, and at the point of contact, it stops moving, and the other object starts to move. OK?
And they see that repeatedly. And now, OK, I told you that if chicks are raised in isolation, but there's a moving object there, they'll imprint to it. I was treating the imprinting object as just an example of an object. But of course, the imprinting object is mom, and she's an agent. So it should be a self-propelled object, really, right? So given a choice between these two, do they selectively imprint to one over the other?
And the finding is that they do. The two objects have different features, I think, different colors, maybe different shapes as well. If you now present one of them at one end of their cage and the other at the other end, they'll go to object A over object B as if they saw that as the object that set the other object in motion. But now you could ask, is that because they see A as causal, as causing its own motion? Or is it for other superficial reasons like A moved first, or A was moving at the time of the collision, and B was stationary?
So they tested for all of these things one by one, but this is their killer experiment, which tests for all of them at once. They present exactly the same events; the only difference is you don't see A initially at rest and starting to move. OK? You never get to see that. There are screens on the two ends of the display, so all you see is that A enters the display already in motion and contacts B. So you have no evidence as to whether A is self-propelled or not, but you see the two objects interacting otherwise in the same way. That makes the effect completely go away.
I think this is logically a little like the studies with infants where you show that the thing is animate or not, and that affects whether they expect that there's an agent there or not. It suggests that the chicks are representing A as causing its own motion and as causing B's motion, so A is a better object of imprinting than B is. And all of this can be abolished: if A isn't seen to have the first causal property, they're not inferring the second. So it's, again, an existence-proof kind of argument, saying you can get this kind of system working in an animal that hasn't had prior visual experiences in which it's seen objects in motion, and that hasn't had any other encounters with objects, so it hasn't been able to move things around.
So that's one way of trying to get at early representations. Here's another way, which I hope Sommerville will talk about on Thursday night, because she pioneered this method with Amanda Woodward and Amy Needham. You can give a 3-month-old infant, who otherwise wouldn't be reaching for things for another two months, the ability to pick up objects by equipping them with mittens that have Velcro on them and presenting them with Velcro-covered objects. So now they can make things move.
And when you do that, you see interesting changes in their representations of these events. So an infant who has played with an object while wearing a sticky mitten, who subsequently looks at events in which another person wearing that mitten reaches for that object or for another object, now they look like a 5-month-old and represent that reaching as goal-directed. But on the other hand, if they saw those same events without the mittens, they failed.
So that's saying that the infant's own action experience can elicit these action representations. And it raises, I think, all sorts of interesting questions about, do infants have to learn one by one what the properties of agents are? Or is it possible that once these representations of goal-directed actions are elicited, we'll see other knowledge of agents already present? So let me just give you one finding that suggests we might get the second.
This is a study that Amy Skerry performed rather recently. She was interested in this representation of efficient action before babies are able to reach for and pick up objects themselves. Do they already represent the actions of agents as directed to goals efficiently? Do they already expect that agents will move efficiently to achieve their goals? So to get at that, she gave 3-month-old infants sticky-mittens experience with an object, where they got to pick up the object, and then showed them events-- she split them into two different conditions, and in both conditions, they see events in which a person moves on an indirect path to an object.
In one of the series of events, there's a barrier in the way, so that motion is the best way to get there, probably the only obvious way to get there. In the other case, the barrier is in the display, but it's out of the way, so this is not an efficient action. And then at test, the barrier is gone in both conditions, and the person either moves efficiently or moves inefficiently.
Now, in this control condition, the baby does not expect efficient action. But in the condition where they previously had the sticky-mittens experience, which presumably told them that this is actually the person's goal-- that they're attempting to get there-- once they have that, what seems to follow immediately from it is this principle of efficient action, right, that you'll move to the goal on the most direct path possible, even though you've never seen that direct path. And even though, when the infant was playing with the object with the mittens on and picking up objects for the first time in their life, there were no barriers present-- they never had to do anything indirect; they could always get that object directly. Yet they're expecting direct action only when they've seen efficient action in another agent.
OK. So in summary, it looks like these abilities are all there quite early. They're unlearned-- at least some of them are unlearned-- in chicks. And they can be elicited-- at least some of them can be elicited-- before reaching develops in young infants. But infants'-- and this is important, I think, for what Alia's going to be talking about-- infants' understanding of agents is limited. It's radically limited, just like their understanding of inanimate objects is. And here are a few, I think, really interesting limits. Although infants are sensitive to what's perceptually accessible to an agent, they don't seem to represent agents as seeing objects.
What do I mean by that? Well, here's an experiment with exactly the structure of the Woodward reaching experiment. Two objects are present, and a person acts with respect to one of them. But instead of reaching for one object, she looks at one of the two objects. And now, you ask, after you exchange the two objects' positions, do babies find that she's doing something newer if she looks at the other object? Or do they find she's doing something newer if she looks in a different direction than she looked in before? OK?
At 12 months, infants view looking as object-directed, by this measure. Younger than 12 months, they do not. They don't seem to view the orientation of the person as a look that's directed to one of these two objects, by that measure. Now, the age at which they succeed is also the first age at which infants start to show this really interesting pattern in their communication with other people that Mike Tomasello has written about a lot.
It's the first age at which they'll start to alternately look at objects, and look at another person, and attempt to engage their attention to the object, by checking back and forth between looks at the person and looks at the object, or by pointing to the object, or by following another person's point to an object. All of that comes in at the end of the first year. It's not there earlier. Younger infants, younger than about 12 months of age, don't even seem to expect that if a person reaches for one of two objects, they'll tend to reach for the object that they were looking at. OK? We thought for sure babies would succeed at that task early. They don't reliably succeed at it until 12 or even 14 months of age.
And finally, babies attribute first-order but not second-order goals to agents. So if they see an agent pull on a rake and then reach for an object, they perceive the agent's goal as the rake, not the object that they're reaching for. So very limited representations. Also, as far as we know, none of these are unique to humans. I think this raises all sorts of questions that we might want to talk about in the Q&A later.
What's the relationship between infants' representations of agents and their representations of the things that agents act on? Clearly, these should be related in some interesting ways. Although, agents can do things that objects can't do, like move on their own, and formulate goals, and act on things that are visually accessible to them. Agents also are objects, right? And we're subject to all the constraints on objects. We can't walk through walls. If we want to make something move, we have to contact it. And babies are sensitive to those constraints.
And I think this raises all sorts of questions that to date, research on infants hasn't really answered. Is there some hierarchy of representations where you've got all objects, and then you've got these especially talented objects that are agents, right? Or I think one way Tomer has put it is maybe we're all agents, but objects are just really bad agents, right, that can't do very much. That's one possibility. It's also possible that these are just separate systems at the beginning, and they have to get linked together over time. I think these are all answerable questions.
I said that babies don't see looking as directed to objects, but they do from birth respond in a pro-social way to looking that's directed to them. So an infant will look longer at someone whose eyes are directed to them than someone whose eyes are looking away. That's true for human infants. It's also true for infant monkeys. If a monkey is presented with this display, and they're over here, they'll look longer than if they're over here. Right? When they're here, it looks like that guy is looking at them.
Human and monkey infants also engage in eye-to-eye contact with adults from birth. And human infants and monkey infants tend to imitate the gestures of other people-- interestingly, only when those people are looking at them, not when they close their eyes. Then they'll imitate them, as you saw in those beautiful films that Winrich showed from the Ferrari group: attentive to the person and then trying to reproduce the person's action. This has been shown with baby chimpanzees and baby monkeys as well as with human newborn infants.
And finally, I said infants don't follow gaze to objects. That's true, but it's also true that if they see a person who's looking directly at them, and then the person's gaze shifts to the side, they'll continue to look at the person, but their attention will undergo a momentary shift in the direction of the shift of the other person's gaze. OK? So this has been shown at two months of age and also with newborn infants. It's been shown with photographs of real faces and also with schematic faces.
The way to show that attention is shifting is to present infants with an event that could never happen in the real world. You have an image of a face, the eyes shift to the side, either left or right, and then the face disappears, and a probe appears either on the left or on the right. They'll get to the probe faster if it appears on the side to which the person shifted their gaze. And nice control studies show that it's not just any kind of low level motion in that direction that gets them there.
So this could be a sign that something like direct gaze, in infancy, engages something like a state of engagement with another person, in which evidence about the other person's state of attention-- possibly also emotion, if the questionable literature on early-emerging empathy is right-- can be automatically spread from one person to a social partner.
So the hypothesis is that infants are finding other potential social partners from the beginning of life: by looking at things like gaze direction, maybe also infant-directed speech, as a signal that someone else is engaging with them; by interpreting patterns of imitation as a communicative signal that somebody is tracking what they do and acting in kind with them; and by responding to shifts of attention by shifting their own mental states in the same direction.