Lecture 1.7: Matt Wilson - Hippocampus, Memory, & Sleep Part 2

Description: The role of the hippocampus in forming episodic and spatial memory and the connection between sleep and memory. Experimental methods to probe the neural encoding of temporal sequences, simultaneous recording of neurons in multiple brain areas.

Instructor: Matt Wilson

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

MATT WILSON: Now, why are we interested in sleep? So, we kind of think about this as two modes, online mode and offline mode. In the online mode, you're taking information in. In the offline mode, you're going back and evaluating information.

What's the purpose of evaluating information that you've taken in? Well, think about the general problem that's being solved, the problem of intelligence. The problem of intelligence is trying to understand and infer these sort of deep, generalizable relationships, these rules. You're trying to extract rules from instances. And you'd like those rules to be as generally applicable as possible.

And in order to do that, presumably, one has to go back and evaluate the many individual instances to try to extract some kind of statistical regularity, and then perhaps evaluate models that you have constructed in terms of their consistency with individual instances that you've already collected, or perhaps future instances that you haven't yet encountered.

So you have the raw material, you build a little model, and then you continually test that model against new information that comes in. And the question is, when can you do that? Now, you could do that when you're out and about in the world. As you saw and read in that paper, when animals are sitting quietly, they very quickly can switch into this kind of offline mode that looks a lot like sleep.

In fact, electrophysiologically, sleep and quiet wakefulness in the hippocampus are nearly indistinguishable. It's the same kind of offline mode. It says, OK, when the hippocampus is not being used to take new information in, I quickly switch into this offline evaluation mode. But during sleep, I no longer have the constraint of having to direct behavior. Behavior's shut off, inputs are shut off, and now I can switch into this purely internal introspective mode.

So what goes on during sleep? In this simple experiment, we look at the activity when the animal is performing behavioral tasks. And then we examine activity during sleep, both before and after, and ask, is there anything about behavioral experience that changes activity during sleep?

And what we found was that if you look at activity during behavior in which you think about these spatial sequences being expressed, and you look during sleep afterward, you find that the spatial sequences are expressed again. So the hippocampus replays the firing of these cell sequences. But it replays the firing of these cell sequences at a time scale that appears to be compressed relative to the behavioral timescale.

So these are eight place cells. The animal was walking from left to right. The ticks indicate the location of each cell's place field peak. So these place cells will fire one through eight as the animal moves along the track over about five seconds. That's how long it takes the animal to walk from left to right. The same sequence of 1, 2, 3, 4, 5, 6, 7, 8 gets replayed during sleep, but now over about 150 milliseconds. Same sequence, so you're preserving time order, not absolute time.
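
To make the "same sequence, compressed in time" point concrete, here is a minimal sketch of the kind of rank-order check that can be applied to a candidate replay event. It is illustrative only: the spike times, the number of cells, and the five-second run duration are stand-in values, not data from the lecture.

```python
# Sketch: does a candidate replay event preserve the behavioral firing order
# of the place cells, and by how much is it compressed in time?
# All numbers here are invented for illustration.
import numpy as np
from scipy.stats import spearmanr

# Behavioral order: cells indexed 0..7 by where their place field peaks lie
# along the track (cell 0 fires first as the animal walks left to right).
behavioral_order = np.arange(8)

# Hypothetical first-spike times (seconds) of the same 8 cells inside one
# sharp wave ripple event during sleep.
replay_spike_times = np.array([0.010, 0.028, 0.051, 0.064,
                               0.081, 0.102, 0.118, 0.140])

# Rank-order (Spearman) correlation between track order and replay order.
rho, p = spearmanr(behavioral_order, replay_spike_times)
print(f"rank-order correlation = {rho:.2f} (p = {p:.3g})")

# Compression: ~5 s of running replayed in ~150 ms preserves order, not time.
behavior_duration = 5.0
replay_duration = replay_spike_times.max() - replay_spike_times.min()
print(f"temporal compression ~ {behavior_duration / replay_duration:.0f}x")
```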

And when you ask, when these sequences are expressed, what's going on in the local field potential, in these oscillations? This is where you find these sharp wave ripple events, the ripple-like events that I was describing to you. So these sharp wave ripples are when these apparently compressed sequences are being expressed.

I could point out that a simple model for these sharp wave ripple reactivated sequences is the same model that I like to use to explain phase precession during the theta oscillation. That is, I give it an input, I sweep inhibition from high to low such that cells that are getting the strongest input fire earlier. And this will reactivate a sequence.

So the difference between a theta sequence and a reactivated sequence is really just, where's the input coming from. If the input's coming from what I'm actually experiencing right now, now it's a theta sequence. And as I move, the input changes systematically. And so, again, it looks like it's encoding information about space.

If I'm now offline, and this information is being delivered to the hippocampus from some other source, you can apply exactly the same operation, get the same kind of sequence. It's just now it's on information that is not tied to your immediate context or location. That's the only difference. Where is the input coming from?

Now further, if you think about this model, you can imagine that the depth of disinhibition could actually affect the length of the sequence. But that's just sort of a mechanistic thing. And if you actually look at the mechanisms that regulate sharp wave ripples and the theta oscillation, they basically come from the same structure.
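
Here is a minimal sketch of that sweep-inhibition idea, with purely illustrative parameters: each cell gets a static excitatory drive, a shared inhibition level falls over the course of the event, and a cell fires the moment its drive exceeds the inhibition. Strongly driven cells fire first, and the depth of the disinhibition sets how far into the sequence you get.

```python
# Sketch of the sweep-inhibition sequence model: a static input pattern is
# read out as a temporal sequence because cells with stronger drive cross
# the falling inhibition level earlier. Parameters are illustrative only.
import numpy as np

n_cells = 8
# Static excitatory drive per cell (strongest for the "current" place field,
# weaker for fields further away).
drive = np.linspace(1.0, 0.3, n_cells)

# Inhibition sweeps from high to low over one ~150 ms event.
t = np.linspace(0.0, 0.150, 1000)            # seconds
inhibition = np.linspace(1.2, 0.4, t.size)   # end value = depth of disinhibition

# A cell "fires" at the first moment its drive exceeds the inhibition level.
for i, d in enumerate(drive):
    crossing = np.nonzero(d > inhibition)[0]
    if crossing.size:
        print(f"cell {i}: fires at {t[crossing[0]] * 1000:.0f} ms")
    else:
        # Cells whose drive never clears the final inhibition level stay
        # silent, so deeper disinhibition yields longer sequences.
        print(f"cell {i}: silent")
```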

It's a structure that regulates acetylcholine, a neuromodulator that's associated with attention and memory, and that structure is called the medial septum. So the medial septum provides the drive, the cholinergic drive, to the hippocampus. Damage to the medial septum, loss of cholinergic tone, was one of the dominant models of the neurodegenerative cognitive and memory loss in Alzheimer's disease.

So what you find is one of the earliest indications of neurodegenerative damage in Alzheimer's is the loss of cholinergic tone. And the systems that begin to break down are the systems that actually fall along this limbic pathway starting with hippocampus-entorhinal cortex.

So it's as though the cholinergic system that regulates the expression of this oscillation in the hippocampus, when it breaks down, it leads to general memory loss and disruption. And it turns out the medial septum is also involved in regulating the expression of these sharp wave ripples. So, same system, different modes. One quite active, online. One inactive, offline. Same kind of modulation of excitability through inhibition.

But that's the idea. Simple model, sweep inhibition, that gives you the sequences. And then you control the input to control the content, and you control the inhibition to control the structure of the timing. Those are the two things.

So the question is, how could you control the input? Well, you have to think about what the input into the hippocampus is under these two conditions. And as I mentioned, for the hippocampus, you've got the entorhinal cortex. The entorhinal cortex gets information from across the brain, from these sensory association areas, visual cortex, auditory cortex. So all this information about the world converges on the hippocampus and then gets modulated.

And so let's look in, for instance, the visual cortex. If we simultaneously record the visual cortex and the hippocampus, we can see how these two structures communicate. And we do this in a simple task, this is like a little figure eight task, and I won't go into a lot of the details.

But one interesting thing that came out from doing this experiment, recording the visual cortex and the hippocampus, is that when you record in the visual cortex while an animal is moving in space, you find that cells in the visual cortex have spatial-like receptive fields. Similar to the hippocampus, though not as spatially tuned. But here, for instance, are eight visual cortical cells. And you can see that they fire in a sequence. They have different spatial receptive fields. This shows where this one visual cortical cell likes to fire. Different visual cortical cells will fire at different locations.

So when animals are moving in space, you actually see sequential activation of these visual cortical responses. And so if you have visual perceptual sequences and hippocampal spatial sequences, one question is, how do those sequences relate offline, for instance when the animal is asleep? So you have sequences during sleep in the hippocampus, sequences during sleep in the visual cortex. And it turns out that those two things are actually correlated.

So when hippocampus plays out a spatial sequence, the visual cortex plays out a visual sequence that corresponds to the visual responses at those locations. So hippocampus plays out a sequence of where it was, visual cortex expresses the visual stimuli that were present along that sequence.

One thing about these sequences when we're looking at hippocampal neocortical interactions is that the sequences are now at a much longer time scale. And in fact this time scale, which here is on the order of about half a second to a second, corresponds to another oscillation that is characteristic of sleep known as the slow oscillation.

So when you go to sleep, the brain rhythms, the dominant oscillation frequencies, start to slow down. And you'll get this oscillation in this one hertz or so frequency range. If you record activity from a bunch of cells, it looks like this. This is the activity of a whole bunch of cells in the visual cortex and in the hippocampus. And you see that the activity is flipping between lots of cells active and no cells active. These are the so-called up and down states of cells during sleep. Every half a second to a second or so many cells will become active, then they'll all be shut off, then they'll become active again.

So you are flipping between these up and down-like states. And if you look at these up and down states in the visual cortex, and you look at similar activity in the hippocampus, you see these up and down-like states in the hippocampus as well. But you'll notice that the up state in the neocortex leads the up state in the hippocampus. So neocortex first, then hippocampus, neocortex, hippocampus. Neocortex seems to lead.
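
One simple way to quantify that lead-lag relationship is to cross-correlate the binned population activity of the two areas and read off the lag of the peak. The sketch below runs on simulated data (a roughly 1 Hz up-down alternation with the hippocampus trailing the cortex by about 100 ms); it is not the actual analysis pipeline from these experiments.

```python
# Sketch: estimate which area's up states lead by cross-correlating binned
# multi-unit activity from visual cortex and hippocampus. Simulated data.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.010                                   # 10 ms bins
t = np.arange(0, 60, dt)                     # one minute of simulated sleep

# ~1 Hz up/down alternation in cortex; hippocampal up state follows ~100 ms later.
cortex_up = (np.sin(2 * np.pi * 1.0 * t) > 0).astype(float)
hippocampus_up = np.roll(cortex_up, int(0.100 / dt))

cortex_mua = rng.poisson(20 * cortex_up + 1.0)            # spikes per bin
hippocampus_mua = rng.poisson(20 * hippocampus_up + 1.0)

# Cross-correlate mean-subtracted rates; search for the peak within +/- 0.5 s.
x = cortex_mua - cortex_mua.mean()
y = hippocampus_mua - hippocampus_mua.mean()
xcorr = np.correlate(y, x, mode="full")
lags = np.arange(-len(x) + 1, len(x)) * dt
window = np.abs(lags) <= 0.5
peak_lag = lags[window][np.argmax(xcorr[window])]
print(f"hippocampus lags cortex by ~{peak_lag * 1000:.0f} ms (positive = cortex leads)")
```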

So again, this question of, who's providing the input? In that simple model where I'm sweeping inhibition during theta, during active behavior, that information is coming in from perception. During sleep, the question is, where is that information coming from? Well, this would say, this is where it's coming from. It's coming from these cortical areas. Let's say sensory, visual cortical areas turn on. They provide the information to the hippocampus. Now the hippocampus turns on, and the hippocampus responds to input coming in from the sensory cortex.

Now, if that's the case, could we manipulate hippocampal activity by manipulating the sensory cortex? In this simple experiment, the answer was yes. In this case we used the auditory system. One of the reasons to use the auditory system is that, unlike the visual system, the auditory system remains in a state of persistent vigilance, even during sleep. If you measure auditory cortical responses when the animal is asleep, they are essentially the same as when it's awake. So the auditory cortex stays vigilant during sleep, which has clear evolutionary value.

Animal's asleep, it's trying to minimize arousal by shutting off visual input, which may be of limited value anyway, given the circadian nocturnal nature of visual stimuli. So if you can't see anything, you might as well actually close your eyes. But you can still hear things. So you and other animals are still listening to pick up threats that might require that they actually wake up.

So anyway, taking advantage of that: auditory system on, so train an animal on a task where it learns to associate auditory cues with locations. Right sound means go over here to get food, left sound means go over there to get food. Very simple task. Then the animal goes to sleep, and you just continue to play the sounds. So it's learned something. And now you try to bias cortical activity during sleep and ask, what does that do?

And so this is the idea. When the animal is actually running, we can decode hippocampal activity. So we can tell, oh, here's the hippocampal pattern that corresponds to the left side or the right side. And in this case, when the animal's performing the task, play the right-hand sound, animal goes to the right-hand side, see this in the hippocampal response. Play the left-hand sound, animal goes to the left-hand side.

Again, the left-hand place fields when the animal's on the left, right-hand place fields on the right. So this is what the experiment and the behavior looks like from the standpoint of the hippocampus. Very clear, right? Right sound, right hippocampus, right place cells. Left sound, left place cells.

But now the animal's going to go to sleep, and we're going to do the same thing. Continue to play the right sounds and the left sounds. It's just that now the animal is not actually moving on the track. And so we ask, can we bias the reactivated sequences that you get? So there's the same thing now. The animal's awake, but now it's going to go to sleep, and we just keep playing the sounds. The little ticks there indicate the delivery of sounds every 10 seconds or so.

So we play a sound, and now we look at the response. The difference here is that the animal's not running, it's not behaving. Sound, response. Now we'll decode the activity. So here, for instance, we take a little short window here, about half a second. And what you see is this is this up-like state. Multi-unit activity, lots of cells firing. Decode activity. Now ask when you play the left sound, what does activity decode to? And here you see it's going to decode into the left-hand side.

So that's the basic hypothesis. Play the left sound, you get replay of the left side of the track. Play the right sound, you get reactivation of the right side of the track. And that's what you get. When you play the left-hand sound, it biases activation of place cells on the left side. The right-hand sound biases place cells on the right.
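
The decoding step can be illustrated with a standard memoryless Poisson (Bayesian) decoder: given place field tuning curves measured during behavior, compute the likelihood of the spike counts observed in a short window after a sound under each candidate position. The tuning curves, spike counts, and window size below are all invented for illustration; this is a sketch of the general approach, not the lab's actual code.

```python
# Sketch: Bayesian decoding of position from place-cell spike counts in a
# short window, used to ask whether a left or right sound biases replay.
import numpy as np

def decode_position(spike_counts, tuning_curves, window_s):
    """Posterior over position bins (flat prior, independent Poisson cells).

    spike_counts:  (n_cells,) spikes per cell in the window
    tuning_curves: (n_cells, n_positions) expected firing rate in Hz
    """
    expected = tuning_curves * window_s                    # expected spike counts
    log_like = (spike_counts[:, None] * np.log(expected + 1e-12) - expected).sum(axis=0)
    posterior = np.exp(log_like - log_like.max())
    return posterior / posterior.sum()

# Toy setup: 6 cells, position bins 0-4 = left arm, 5-9 = right arm.
rng = np.random.default_rng(1)
tuning = rng.uniform(0.1, 1.0, size=(6, 10))
tuning[:3, :5] += 15.0      # cells 0-2 have left-arm place fields
tuning[3:, 5:] += 15.0      # cells 3-5 have right-arm place fields

# Spikes observed in a 0.5 s window after playing the "left" sound during sleep.
counts = np.array([6, 4, 5, 0, 1, 0])
posterior = decode_position(counts, tuning, window_s=0.5)
p_left = posterior[:5].sum()
print(f"decoded side: {'left' if p_left > 0.5 else 'right'}  (P(left) = {p_left:.2f})")
```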

And you can do the same kind of psychophysical experiments, which have been done in humans with different sensory modalities. It's been done in olfaction, and it's also been done in audition. So there's an equivalent experiment using auditory cueing in humans, where you have people learn a simple task.

And this was done in Ken Paller's lab, where they do a simple spatial matching game. It's like, you know, where's the cat card, where's the teapot card? The variant here is that when they flipped over the cat card, they would play the cat sound, a meow. When they flipped over the teapot card, they would play a little associated auditory cue, a teapot whistle. And then people go to sleep. And during sleep, they would play either the cat sound or the teapot sound. And they found that when people woke up and did the task, if the cat sound had been played during sleep, they were better at remembering the location of the cat card than the teapot card.

So this says not only does sleep actually contribute to memory, but it's selective. And not only is it selective, but it can be influenced. It can be biased. You can direct the nature of memory processing. And then our experiments suggest that one of the consequences of this kind of sleep manipulation would be to bias the memory reactivation in the hippocampus.

And so the idea is simple. That is that cortex biases the state that the hippocampus gets. And then the hippocampus takes that state, sweeps inhibition, replays a sequence, and that sequence then gets played back to the cortex.

So the cortex sort of knows, it has these sort of discrete states, but it doesn't necessarily know what the causal correlations might be. What happens next? It doesn't necessarily know what happens next. It knows what happened. The hippocampus knows what happened next. It has lots of instances of that, though.

Well, I saw a red light, what happens next? You say, oh, you know, I saw a red light, and all the cars stopped. OK, that's great. If I'm just the cortex, that's what I learned: all the cars stop. If I'm the hippocampus, I'm a little bit smarter than that. I say, you know, last Tuesday I was there, there was a red light, and all the cars stopped, except for these cyclists. Man, they didn't stop, they just kept on going.

So wait a minute, there's a rule. Red light, cars stop, bicycles don't stop. So you have to refine what appear to be simple rules. That's often used as an example of how you use the prefrontal cortex: oh, red light means stop, green light means go. That's great, except in the real world, that rule is too simple. It has to be refined based upon your particular experience.

In fact, if you're really sophisticated, red light means cars, except for cabbies, will stop, right? If I see a cabbie, guaranteed that guy's not going to stop, he's going to accelerate, right? And that's the kind of information-- that's what the hippocampus has. And so you imagine that's what's going on. Neocortex, in each one of these slow oscillations, is saying red light. And the hippocampus says, oh, yeah, OK, cars stop. Red light again. Well, there was that bicycle thing, bicycles continue to go. Red light. Well, that was the cabbie incident.

So now you have all these sort of causal sequences that are being expressed back to the neocortex, presumably in order to establish this more comprehensive, consolidated model of real world traffic lights, rather than the cartoon traffic lights. And so that would be the idea of what's going on during sleep.

Now, during quiet wakefulness you can think of the same sort of idea. And that is that you're kind of evaluating multiple causal contingencies, each of which might be expressible as a simple rule, but might also be experienced as distinct variations of that rule. And the idea is, OK, do we use the rule, or do we use the exceptions, or how do we actually refine the rule based upon these different instances or exceptions?

And so you can think about that as refining the rule, that's like the learning side. Applying the rule, that's the memory or decision making side. So you can think about, during quiet wakefulness, this reactivation being used in the service of actual learning or in decision making.

And so again, you read the paper, and one of the interesting things that we had discovered about reactivation during quiet wakefulness was that when animals stop after running on a track, they do reactivate sequences. This is raw data, an animal running from left to right and then stopping for a long period of time. And you can see activity. There are these bursts of activity. You blow these things up, and these are the sharp wave ripples. They last about half a second or so.

So you see a burst of activity, you see these place cell sequences. In this case the sequence actually runs in time-reversed fashion. A time-reversed sequence doesn't seem like planning or decision making. It may be evaluation of temporal correlations that might be relevant to behavior and learning. But what would a reverse sequence actually have to do with spatial learning and memory?

The insight into how this might be used came from computational models. In fact, computational models that the post-doc who made this discovery, Dave Foster, had used in his doctoral work, where he was building reinforcement learning models of spatial navigation. And one of the problems in reinforcement learning is that reinforcement often comes after you've actually carried out the steps that lead up to it.

In other words, you walk from left to right, you get rewarded when you get here. What you want to know is not just, this is where the reward is. What you really want to know is, what were the things that actually lead up to that? In other words, I want to take credit, reward value, and I want to spread it backward in time to place value on the things that predict or lead up to reward. The so-called temporal credit assignment problem. How do I give credit to things that actually lead up to or predict reward?

And thinking about how you might solve the temporal credit assignment problem, this reverse reactivation actually has the capacity to solve that problem in one step. Imagine the animal gets to the end and gets some reward, and at that point I pair the delivery of a reward signal, which I will indicate here as a cartoon suggesting dopamine, a reward signal, with the reverse sequence.

And now you can think of the association. This is my current location. This is way back where I started from. The current location gets paired with strong reward. The remote location gets paired with low reward. So this association essentially takes this discrete reward impulse function and turns it into a continuous, graded, monotonic reward gradient function, translating one into the other essentially in one step.
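
Here is a minimal sketch of that intuition in reinforcement learning terms. It uses a TD-style one-step value backup applied along the reversed sequence; the discount factor, learning rate, and track length are arbitrary, and this is a reading of the idea rather than the lab's actual model. The point is that a single reverse sweep, paired with reward at the current location, spreads a graded value gradient over the whole path.

```python
# Sketch: why reverse replay paired with reward can solve temporal credit
# assignment in one event. A one-step value backup applied in reverse order
# turns a reward impulse at the goal into a graded gradient along the path.
import numpy as np

n_positions = 8                     # positions along the track; 7 = goal
values = np.zeros(n_positions)      # value estimate per position
gamma, alpha = 0.9, 1.0             # discount factor and learning rate (illustrative)

# The animal just ran 0 -> 7 and got reward at position 7. The hippocampus
# now replays the trajectory in reverse: 7, 6, 5, ..., 0.
values[7] = 1.0                     # reward signal paired with the current location
for pos in range(6, -1, -1):        # reverse order, as in reverse replay
    # Each position inherits the discounted value of its successor.
    values[pos] += alpha * (gamma * values[pos + 1] - values[pos])

print(np.round(values, 3))
# Monotonically increasing value gradient toward the goal, built in ONE sweep.
# A forward-ordered pass with the same rule moves value back only one position
# per traversal, so it would need many repetitions to do the same thing.
```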

So the thinking is, hey, animals are actually using this to learn, and they're using this to solve the temporal credit assignment problem in a way that would be important for general reinforcement learning. Now, I won't go into the details, but we actually did this experiment, recording from the reward area, the VTA, and the hippocampus. And, indeed, during these reactivation events, you see the pairing of this reward signaling.

And not only do you see the pairing of the reward signaling, but you find that the precise firing of reward signals maps onto the delivery of rewards at goal locations. So it's not just that certain sequences are good and others are bad. It's that there are certain locations along the sequence that have differential reward value. So there's a mapping of relative reward to location.

And if you look at this while animals are actually performing a task, you can see biases in these sequences that correspond to planning. This was some work by Dave Foster, where he looked at an animal that just has to forage and find a location in space. When animals are searching for a location, and you look at these reactivated sequences, it turns out that the reactivated sequences tend to be directed toward the locations where the animal thinks the goal might be. So it's sort of thinking about things that would lead to reward.

So both sides of this kind of sequential computation seem to be expressed in the hippocampus: co-expression with reward during evaluation or learning, and expression during reward-directed behavior in the service of planning and decision making.

And then finally, this is the paper that you had read. It's looking at the structure of these reactivated sequences on long tracks. And the most salient point that came out of this paper is that the phenomenon of sharp wave ripple sequential activation, corresponding to these theta sequences, when animals are sitting quietly takes the form of bursts of sharp wave ripples over the course of these longer up-state-like events, that is, sequences of these short sequences.

So there's this compositional notion that you form long sequences out of little short sequences. And the other question is, how do you actually put these short sequences together, and why would you have a representation that has this kind of compositional structure? It's the Lego block kind of idea, where I can put together sequences from these more elemental sequential units.

I won't show the movie, but if I actually zoom in on one of those sequences, in that particular instance where the animal stops, you can see there's a longer reactivated sequence that actually takes the form of four shorter sequences. That's how longer sequences are being evaluated.

And one interesting thing about this was that if you look at these different sequences of different lengths, shorter sequences and longer sequences, they all seem to have a fixed velocity. That is, there's this kind of fixed speed, what you might call the speed of thought. So there is some fixed constraint such that evaluating further out into the future simply takes more time.
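
A back-of-the-envelope sketch of that fixed-velocity constraint, with invented numbers: if replay sweeps through space at a roughly constant virtual speed, then the time needed to evaluate a path grows linearly with its length, which is what ties longer evaluations to longer, slower oscillation cycles.

```python
# Sketch: at a fixed replay speed, evaluating further ahead costs proportionally
# more time, so spatial reach is capped by the duration of the available cycle.
# The speed and path lengths are illustrative, not measured values.
replay_speed_m_per_s = 8.0                  # hypothetical fixed "speed of thought"
for path_length_m in (0.5, 1.0, 2.0, 4.0):
    duration_ms = 1000 * path_length_m / replay_speed_m_per_s
    fraction_of_cycle = duration_ms / 1000.0    # relative to a ~1 Hz slow oscillation
    print(f"{path_length_m:.1f} m of track -> ~{duration_ms:.0f} ms of replay "
          f"({fraction_of_cycle:.2f} of a 1 Hz cycle)")
```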

And so that suggests that there's an interesting constraint on the capacity for extended evaluation, and it comes in the form of these oscillatory modes of the slow oscillation. Slower frequencies, longer cycles, give you essentially more time and longer sequences. And potentially, if you want to go even longer, you might actually link sequences across subsequent cycles. Just as you can do that across successive sharp wave ripples, you might be able to do it across successive slow waves.

So the idea that you have these oscillations that you can link or couple sequences across different cycles of these oscillations, that you have oscillations at different frequencies, suggests that this is the mechanism. It's the compositional mechanism for sequences that are nested within oscillations. By combining these oscillations you can construct these sequences in ways that presumably contribute to cognition.

So we now kind of see how things are structured, and now the question is, can we actually manipulate it? And that's really the challenge. Having seen the compositional structure, and having demonstrated potential access, the ability to bias content, to bias the structure of when things occur and what actually occurs, I now think the capacity exists to essentially engineer the sequences themselves.

And that is that, during the sleep or quiet wakeful states, get animals to think about things that they have not actually experienced, and test the hypothesis that this is what animals are using to construct these models, to essentially tinker with the blocks of memory and cognition.

And then finally, just pointing out, it's interesting. These ripple sequences, same space and time scale as theta sequences. So again, it's suggesting this is the fundamental unit. It's not unique to the hippocampus. Essentially any brain system could express this kind of structure using that simple model. Where you see ramps and you see oscillations, you're going to get sequences. And this capacity is something that probably is broadly expressed and enjoyed by different brain areas. So, there you go.
