Danny Jeck: Impact of Attention on Cortical Models of Visual Recognition



Description: Danny Jeck explores how modulations in neural behavior in the early stages of visual processing, due to shifts in the focus of visual attention, impact the performance of later cortical areas engaged in object recognition.

Speaker: Danny Jeck

[MUSIC PLAYS]

DANNY JECK: Hi, I'm Danny Jeck. I'm a fifth-year grad student getting my PhD at Johns Hopkins in biomedical engineering. In my project, I'm trying to look into how attention is related to models of visual cortex.

The first lecture was by Jim DiCarlo. He gave a talk about how models of object recognition seem to match pretty well with the behavior of inferotemporal cortex in macaques. We also know that in macaques, earlier areas of visual processing are modulated by attention.

So the question is, well, OK. We have this model, let's say we add some modulation due to attention, what does that do downstream as that information propagates through the network?

I'm building a model in Python right now. And it's running. The main goal of the model is to see how some modulation in earlier cortex would propagate through a model like what we believe is happening in the brain already. A boring finding would be that a 10% modulation results in a 10% modulation downstream.

I'm expecting that that's not the case, because there's a whole bunch of nonlinearities and normalization in the network that should shape how the modulation propagates. The question is, what is the magnitude of that? And if the 10% modulation is not actually the right number, because of the measurements or the way I'm interpreting the measurements that have been made already, what would different numbers allow for? Or perhaps the modulations found downstream are all due to feedback from other areas, rather than a signal going back to the beginning and propagating all the way through.
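The experiment described above can be sketched in a few lines of NumPy. This is not the speaker's actual model; it is a minimal illustration, assuming a small feedforward network with ReLU units and divisive normalization at each stage, where an attentional gain multiplies the first layer's responses and we measure how much the final layer changes.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def divisive_normalize(x, sigma=1.0):
    # Divisive normalization: each unit's response is divided by
    # pooled activity across the layer (plus a constant).
    return x / (sigma + x.mean())

def forward(x, weights, gain=1.0):
    """Run the input through the layers. `gain` multiplies the first
    layer's responses, mimicking an attentional modulation in early
    visual cortex (e.g. gain=1.1 for a 10% modulation)."""
    a = x
    for i, W in enumerate(weights):
        a = relu(W @ a)
        if i == 0:
            a = a * gain  # attentional gain on the earliest layer only
        a = divisive_normalize(a)
    return a

rng = np.random.default_rng(0)
weights = [rng.standard_normal((50, 50)) / np.sqrt(50) for _ in range(4)]
x = rng.random(50)

base = forward(x, weights, gain=1.0)
modulated = forward(x, weights, gain=1.1)  # 10% modulation upstream

# Because of the nonlinearities and normalization at every stage, the
# downstream modulation is generally not 10%.
downstream = np.abs(modulated - base).mean() / base.mean()
print(f"mean downstream modulation: {downstream:.1%}")
```

Sweeping `gain` over a range of values would show how different assumed early-cortex modulations map onto downstream effects, which is the kind of question the project asks.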

So the idea came about from Ethan Meyers. He was originally interested in trying to do this kind of two passes through a network, one in which you sort of try and figure out the location of an object, and another in which you try to recognize it. I kind of took that in a different direction because I was more interested in the neurophysiology side of things.

In my current lab, I wouldn't have had time to do something like this because I wasn't planning on investing a lot of time understanding what deep networks were. So really, having the time to sort of work on a free project has been really nice.

[MUSIC PLAYS]