Monday, September 15, 12:00 – 1:30 pm, in 489 Minor Hall

Graduate Student Seminar

presented by

Michael Oliver, PhD Candidate

Jack Gallant Lab

Learning nonlinear feature sets from V1 and V2 neurons using self-regularizing deep neural networks

It is widely accepted that neurons in visual cortex encode a distributed representation of the visual world, i.e., each neuron is responsive to a limited set of features within a limited region of the visual field known as the receptive field. This view, which we will call the feature-coding paradigm, is intuitively appealing because it offers the possibility of understanding computation in visual cortex in terms of features that can be visualized. The feature-coding paradigm is most evident in V1, where neurons are maximally responsive to Gabor-wavelet-like features of specific orientations and spatial frequencies within small receptive fields. However, it has proved extraordinarily difficult to find equivalently general quantitative descriptions of the features to which neurons respond in areas anterior to V1. This difficulty is due to several factors, both experimental and statistical. One major source of difficulty is that the areal size of receptive fields increases greatly in areas anterior to V1: V2 receptive fields are larger on average by about a factor of 4 relative to V1. This increase in receptive field size vastly increases the number of pixel patterns that could fall within the receptive field, making it difficult to fully sample the space of stimuli that effectively drive a neuron. Furthermore, the stimulus-response mapping of V2 neurons is highly nonlinear. I will demonstrate that we can effectively learn the nonlinear stimulus-response mapping of V1 and V2 neurons using deep time-delay neural networks, and gain insight into neural computation by examining the features learned by the networks.
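The abstract does not specify the network architecture, but the general idea of a time-delay neural network for neural system identification can be sketched as follows: the stimulus movie is delay-embedded (each sample is the current frame plus a short history of preceding frames), and a small feedforward network maps that window to a nonnegative predicted firing rate. This is a minimal illustration, not the authors' model; the layer sizes, window length, and `TinyTDNN` name are all hypothetical, and only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def delay_embed(stimulus, n_delays):
    """Stack each frame with its n_delays - 1 predecessors.

    stimulus: (T, D) array of flattened movie frames.
    Returns a (T - n_delays + 1, n_delays * D) design matrix.
    """
    T, D = stimulus.shape
    rows = [stimulus[t - n_delays + 1 : t + 1].ravel()
            for t in range(n_delays - 1, T)]
    return np.stack(rows)

class TinyTDNN:
    """Hypothetical one-hidden-layer time-delay network.

    ReLU hidden units; a softplus output keeps predicted rates nonnegative,
    as firing rates must be.
    """
    def __init__(self, n_in, n_hidden, rng):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, n_hidden)
        self.b2 = 0.0

    def predict(self, X):
        h = np.maximum(X @ self.W1 + self.b1, 0.0)      # ReLU hidden layer
        return np.log1p(np.exp(h @ self.w2 + self.b2))  # softplus rate

# Toy usage: 100 frames of an 8x8 "movie" (64 pixels), 4-frame window.
stimulus = rng.normal(size=(100, 64))
X = delay_embed(stimulus, n_delays=4)   # shape (97, 256)
model = TinyTDNN(n_in=256, n_hidden=16, rng=rng)
rates = model.predict(X)                # one predicted rate per time bin
```

In a real fit the weights would be trained (e.g., by gradient descent on a Poisson or squared-error loss against recorded spike rates), with regularization playing the central role the talk title emphasizes.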

 

James Gao, PhD Candidate

Jack Gallant Lab

High speed MRI and the challenges of real time decoding

fMRI models of human brain activity have shown extremely promising ability to decode both visual experience and imagery. However, these models must be run on offline data and require entire contiguous runs at once for preprocessing. In addition, the recent development of high-speed parallel imaging sequences allows us to acquire data at a rate previously inaccessible to fMRI. In this talk, I will evaluate the advantage of these sequences for real-time decoding of sensory experience, and provide a preview of decoding results in spatial navigation.
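One reason whole runs are normally required is that standard preprocessing normalizes each voxel's time series using statistics of the entire run. A real-time pipeline must instead update those statistics incrementally as each volume (TR) arrives. As a hedged illustration only — the talk does not describe its pipeline, and the class name here is hypothetical — per-voxel z-scoring can be made incremental with Welford's running mean/variance update:

```python
import numpy as np

class OnlineVoxelNormalizer:
    """Incremental per-voxel z-scoring via Welford's algorithm.

    Offline preprocessing z-scores each voxel over the whole run; this
    version emits a normalized volume as soon as each TR arrives, so no
    future data is needed. Hypothetical sketch, not the authors' pipeline.
    """
    def __init__(self, n_voxels):
        self.n = 0
        self.mean = np.zeros(n_voxels)
        self.m2 = np.zeros(n_voxels)   # sum of squared deviations

    def update(self, volume):
        """Fold one TR into the running statistics; return its z-scores."""
        self.n += 1
        delta = volume - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (volume - self.mean)
        if self.n < 2:
            return np.zeros_like(volume)   # variance undefined at first TR
        std = np.sqrt(self.m2 / (self.n - 1))
        return (volume - self.mean) / np.where(std > 0, std, 1.0)

# Toy usage: a "run" of 50 TRs over 10 voxels, normalized as it streams in.
rng = np.random.default_rng(1)
run = rng.normal(2.0, 3.0, size=(50, 10))
norm = OnlineVoxelNormalizer(n_voxels=10)
for vol in run:
    z = norm.update(vol)   # available immediately, TR by TR
```

After the stream ends, the running statistics match what a batch pass over the full run would have computed, which is what makes this kind of incremental preprocessing compatible with real-time decoding.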
