Brian Cheung and Paul Cullen
Speakers
Brian Cheung and Paul Cullen
PhD Candidates
Date and Time
Monday, May 7, 2018
12pm - 1pm
Location
489 Minor Hall
Berkeley, CA
Abstracts
Paul Cullen's Talk Title: The Secret Lives of Retinal Astrocytes
The study of glia – the support cells of the central nervous system – has come a long way since Rudolf Virchow described a connective tissue of the brain that he termed ‘Nervenkitt’ in 1856. Far from being a passive scaffold for neurons (the word ‘glia’ means glue in Greek), these cells are responsible for a dizzying array of tasks in the central nervous system. Astrocytes in particular are a broad and heterogeneous group of glia, increasingly studied for their potential role in neurodegeneration. However, the tools to study these essential cells lag far behind those developed for their neuronal partners. Recent developments in sequencing technology have led to the widespread adoption of RNA-seq, a massively parallel approach for measuring the relative expression of genes within a population of cells. Although populations of brain astrocytes have been studied using this technology, to our knowledge those in the retina never have been. I will present an overview of this exciting technology, how we intend to use it to study the response of retinal astrocytes in a powerful in vivo model of ocular hypertension, and the challenges this approach presents.
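As a rough illustration of the kind of comparison RNA-seq enables, the sketch below normalizes read counts across samples and computes per-gene fold changes between two conditions. The gene names, counts, and two-samples-per-condition design are invented for illustration only; a real analysis would use a dedicated differential-expression pipeline such as DESeq2 or edgeR.

```python
import numpy as np

# Hypothetical raw read counts: rows = genes, columns = samples.
# Two control retinas vs. two ocular-hypertension retinas (numbers invented).
genes = ["Gfap", "Aqp4", "C3"]
counts = np.array([
    [120, 135, 410, 395],   # Gfap
    [300, 310, 290, 305],   # Aqp4
    [ 15,  12,  80,  95],   # C3
], dtype=float)

# Normalize each sample to counts per million (CPM) so that samples
# with different sequencing depths are comparable.
cpm = counts / counts.sum(axis=0) * 1e6

# Mean expression per condition and log2 fold change (hypertension vs. control).
ctrl = cpm[:, :2].mean(axis=1)
htn = cpm[:, 2:].mean(axis=1)
log2_fc = np.log2((htn + 1) / (ctrl + 1))  # +1 pseudocount avoids log(0)

for gene, fc in zip(genes, log2_fc):
    print(f"{gene}: log2 fold change = {fc:+.2f}")
```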
Brian Cheung's Talk Title: Unsupervised Learning in Biological Neural Networks
Supervised learning has proven extremely effective for many problems in machine learning where large amounts of labeled training data are available. However, the dependence on large labeled datasets and non-local updates makes it unclear how similar algorithms might function in the brain. Conversely, biological neural networks are extremely effective at building rich, high-utility representations of sensory input with little or no labeled training data. Unsupervised representation learning in artificial neural networks, however, lags far behind both biological networks and supervised artificial networks. One explanation for our failure to develop effective unsupervised learning rules is that the objective functions we propose are mismatched to the behaviorally relevant tasks for which we wish to use the learned representation. We optimize objectives such as log likelihood, sparsity, or reconstruction error, and then hope that a learned representation exposing high-level features of sensory data relevant to survival will result purely as a side effect. Rather than proposing a hand-designed update rule, in this work we use supervised training to play the role of evolution in discovering an update rule for biological neural networks. Specifically, we perform supervised training of the unsupervised learning rule, so that it leads to representations which maximize a biologically plausible utility function. Additionally, we parameterize the learned update rule itself in a biologically plausible way: we meta-learn a local learning rule that depends only on bottom-up input from the pre-synaptic neuron and top-down feedback from the post-synaptic neuron. By recasting unsupervised learning as meta-learning, we directly optimize an unsupervised learning rule with respect to its utility. We argue that this is a natural approach to unsupervised learning in the context of biology. Our work offers a preliminary investigation of unsupervised learning rules meta-learned from this novel perspective.
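As a minimal sketch of the setup described above (not the speaker's actual implementation), the code below parameterizes a local, Hebbian-style update rule by a small vector theta, applies it in an inner unsupervised phase, and searches over theta in an outer loop to maximize a downstream utility. The toy data, network sizes, least-squares readout, and random-search meta-optimizer are all assumptions standing in for the gradient-based meta-training in the work.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(W, pre, post, theta):
    # Parameterized local rule: each weight change depends only on the
    # pre-synaptic activity, the post-synaptic signal, and the current
    # weight -- a generalized Hebbian form with meta-parameters theta.
    a, b, c = theta
    return W + a * np.outer(post, pre) + b * W + c

def unsupervised_phase(theta, X, n_hidden=8, steps=50, lr_scale=0.01):
    # Inner loop: apply the learned rule to unlabeled data.
    W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    for _ in range(steps):
        x = X[rng.integers(len(X))]
        h = np.tanh(W @ x)                       # bottom-up activity
        W = local_update(W, x, h, theta * lr_scale)
    return W

def utility(theta, X, y):
    # Outer objective: how useful is the learned representation for a
    # downstream task? Here, a least-squares readout of labels y.
    W = unsupervised_phase(theta, X)
    H = np.tanh(X @ W.T)
    w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
    return -np.mean((H @ w_out - y) ** 2)        # higher is better

# Meta-learning loop: black-box random search over rule parameters,
# standing in for gradient-based meta-optimization.
X = rng.normal(size=(200, 16))
y = np.sign(X[:, 0] * X[:, 1])                   # toy nonlinear labels
best_theta, best_u = None, -np.inf
for _ in range(100):
    theta = rng.normal(size=3)
    u = utility(theta, X, y)
    if u > best_u:
        best_theta, best_u = theta, u
print("best rule params:", best_theta, "utility:", best_u)
```

The key property the sketch preserves is locality: the update for each weight uses only quantities available at that synapse, while the meta-level optimization is free to use any (non-local) search or gradient method.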