Description: David Rolnick and Ishita Dasgupta explore how Hopfield networks, commonly used to model static memories, can be extended to represent dynamically shifting memory states that capture stochastic sequences of events.
Speaker: David Rolnick and Ishita Dasgupta
[MUSIC PLAYING]
ISHITA DASGUPTA: I'm Ishita Dasgupta. I'm going into my third year of my PhD at Harvard in computational cognitive science.
DAVID ROLNICK: I'm David Rolnick. I'm just getting into my fourth year of my PhD at MIT, in the applied math department.
ISHITA DASGUPTA: So we're working with Hopfield networks, which is a model in which neurons are all connected together, and the way they update each other basically determines the state the network ends up in.
Hopfield networks have been used in the past to model memories. The idea is that there are certain states the neurons prefer to be in, given the way they're all connected together, and you can make the network fall into one of these states by initializing it at a nearby point. So they've been used to store memories before, but these are static memories: once you're in one of those memories, you just stay there.
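To make the attractor picture concrete, here is a minimal sketch of a classical Hopfield network in Python. Nothing here comes from the speakers' own code; the network size, the Hebbian storage rule, and the synchronous threshold updates are standard textbook choices used purely for illustration.

```python
import numpy as np

# Minimal classical Hopfield network: store two patterns with the
# Hebbian rule, then recover one of them from a corrupted cue.
rng = np.random.default_rng(0)
N = 100
patterns = rng.choice([-1, 1], size=(2, N))

# Hebbian weights: W = (1/N) * sum of outer(p, p) over stored patterns.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, steps=20):
    # Repeated threshold updates; the state settles into a nearby
    # stored memory (a fixed point of the dynamics) and stays there.
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Cue the network with pattern 0, but flip 20% of its bits.
cue = patterns[0].copy()
cue[rng.choice(N, size=20, replace=False)] *= -1

recovered = recall(cue)
print("overlap with stored memory:", recovered @ patterns[0] / N)  # close to 1.0
```

Starting from the corrupted cue, the updates pull the state into the nearest stored pattern, and once there the state no longer changes: exactly the static memories described above.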
So we were working with this kind of model, making changes so that you can go from one such memory to another and decide the probability of going to one memory versus another. Basically, we're adding stochastic dynamics to a Hopfield network.
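The speakers don't spell out their construction here, but a classical way to make a Hopfield network hop between its stored memories is to add an asymmetric term to the weights that maps each pattern onto a successor, in the spirit of sequence-generating networks in the literature. The sketch below shows the deterministic version of that idea; making the choice of successor probabilistic, as described above, additionally requires noise in the dynamics. All parameters are illustrative assumptions, not the speakers' values.

```python
import numpy as np

# Sketch: deterministic transitions between Hopfield memories via an
# asymmetric weight term that pushes each pattern toward its successor.
rng = np.random.default_rng(1)
N = 200
A, B, C = rng.choice([-1, 1], size=(3, N))
seq = [A, B, C]  # desired cycle A -> B -> C -> A

# Symmetric part stores each pattern as an attractor.
W_sym = sum(np.outer(p, p) for p in seq) / N
# Asymmetric part maps each pattern onto the next one in the cycle.
W_asym = sum(np.outer(seq[(i + 1) % 3], seq[i]) for i in range(3)) / N

lam = 1.5  # the successor drive must dominate for the hop to happen
W = W_sym + lam * W_asym
np.fill_diagonal(W, 0)

state = A.copy()
for step in range(6):
    state = np.where(W @ state >= 0, 1, -1)
    mA, mB, mC = (state @ p / N for p in seq)
    print(f"step {step}: overlaps A={mA:+.2f} B={mB:+.2f} C={mC:+.2f}")
# The largest overlap cycles through B, C, A: one hop per update.
```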
DAVID ROLNICK: Well, the idea is that there are many situations where the living brain is going to be faced with the task of reconstructing or simulating a stochastic sequence of actions.
So for instance, suppose you were simulating an event where you didn't know quite what the probabilities were that something was going to happen. You can imagine playing it out in your mind, realizing it one particular way, and each state in your mental sequence would be determined by the previous state. So if something's falling, its state while it's falling is determined by its state when it was upright.
And if we can understand how memory could be used to generate these sequences of patterns governed by stochastic rules, then we'd get a better sense of what kinds of connections between imagination and memory are possible, even in a very simple model of the brain.
And we're working with just about the simplest model of memory, but it still turns out to be extremely powerful: it can produce these stochastic sequences of patterns, which are Markov chains.
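As a toy illustration of a Markov chain over Hopfield memories, the sketch below drives the network through a random sequence of attractors with prescribed transition probabilities. For clarity, the probabilistic choice of the next memory is made outside the network here; the point of the research described in this conversation is to build that choice into the network's own stochastic dynamics, so this is emphatically not the speakers' mechanism.

```python
import numpy as np

# Toy Markov chain over Hopfield memories: sample the next memory from a
# transition matrix, cue the network with a noisy copy of it, and let the
# network settle. The settled attractor sequence then follows the chain.
rng = np.random.default_rng(2)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))

W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

# Desired transition probabilities between the three memories (assumed).
P = np.array([[0.0, 0.7, 0.3],
              [0.2, 0.0, 0.8],
              [0.5, 0.5, 0.0]])

def settle(state, steps=20):
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

current = 0
counts = np.zeros_like(P)
for _ in range(2000):
    nxt = rng.choice(3, p=P[current])          # probabilistic choice
    cue = patterns[nxt].copy()
    cue[rng.choice(N, size=15, replace=False)] *= -1  # noisy cue
    landed = int(np.argmax([settle(cue) @ p / N for p in patterns]))
    counts[current, landed] += 1
    current = landed

# Empirical transition frequencies should come out close to P.
print(np.round(counts / counts.sum(axis=1, keepdims=True), 2))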
ISHITA DASGUPTA: So far, we've just been computationally modeling what we think should happen. For us, there's a bit of theory work to figure out what kinds of connections we should put in so that it should work, and then we actually set up those connections and see if it does work. We're hoping at some point to tie it back to real-world situations in which this kind of stochastic sequence of events actually happens in the brain, but that's for the future. Right now, we're just making sure that we can model this kind of behavior in a computer.
DAVID ROLNICK: In some sense, it's a theoretical task followed by an engineering task: understanding what can be done in a system like this and then simply building it. We built it, and now we have to--
ISHITA DASGUPTA: Yes.
DAVID ROLNICK: --see how it works in practice.
ISHITA DASGUPTA: It becomes kind of like an experimental science: we're just changing parameters and seeing how things change, because these things are not entirely clear and predictable. You can't just say that because you built it, you should know how it works. There are too many degrees of freedom, so there are a lot of things to be tested to see how well it performs under different conditions.
[MUSIC PLAYING]