Joel Bowen and Justin Theiss

Speaker

Joel Bowen and Justin Theiss

Date and Time

Monday, October 21, 2019
11:10 am - 12:30 pm

Location

489 Minor Hall
Berkeley, CA

Joel Bowen's Talk

Feature interference within dynamic receptive field pooling arrays: Implications for visual crowding

Abstract: Visual crowding imposes severe perceptual limitations on object recognition in peripheral vision: target objects that are easily identified in isolation are much more difficult to identify when flanked by similar nearby objects. Most models of crowding postulate that the relatively large size and low density of cortical receptive fields in the periphery lead to an over-integration of features. Additionally, previous work with human subjects has shown that precueing attention to the target location diminishes the effects of crowding. In this talk, I will present a technique for modeling feature interference within an eccentricity-dependent pooling array, and I will share results on how that interference is influenced by dynamic changes in spatial resolution across the array. By the end of the talk, I hope to convince you that techniques like this are useful for 1) studying the effects of attention on visual crowding, both directly within single receptive fields and across multiple receptive fields, and 2) providing testable hypotheses for future visual crowding perceptual experiments.
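
For a flavor of what an eccentricity-dependent pooling array might look like, here is a minimal one-dimensional Python sketch. The linear receptive-field scaling, the averaging rule, and the attention-driven shrinkage (rf_size, pool_features, attn_gain) are illustrative assumptions written for this announcement, not the model presented in the talk.

```python
import numpy as np

# A minimal 1D sketch of an eccentricity-dependent pooling array.
# All parameters (the scaling constant, the attention shrinkage rule)
# are illustrative assumptions, not the talk's actual model.

def rf_size(eccentricity, scale=0.5, minimum=0.25):
    """Pooling-region radius grows linearly with eccentricity."""
    return max(scale * eccentricity, minimum)

def pool_features(positions, features, rf_centers,
                  attn_center=None, attn_gain=0.9):
    """Average the feature values that fall inside each pooling region.

    If attn_center is given, receptive fields near that location
    shrink, mimicking a precue that raises local spatial resolution.
    """
    pooled = []
    for c in rf_centers:
        radius = rf_size(abs(c))
        if attn_center is not None:
            # Shrink RFs near the attended location (illustrative rule).
            radius *= 1.0 - attn_gain * np.exp(-0.5 * (c - attn_center) ** 2)
        inside = np.abs(positions - c) <= radius
        pooled.append(features[inside].mean() if inside.any() else np.nan)
    return np.array(pooled)

# Target at 8 deg eccentricity, flankers 1 deg to either side;
# feature values are arbitrary scalars (e.g., orientations in deg).
positions = np.array([7.0, 8.0, 9.0])   # flanker, target, flanker
features = np.array([20.0, 90.0, 40.0])
centers = np.linspace(0.0, 12.0, 25)

uncued = pool_features(positions, features, centers)
cued = pool_features(positions, features, centers, attn_center=8.0)
# Uncued, the region centered on the target (radius 4 deg) averages
# target and flanker features together: interference. Cued, the same
# region shrinks below the 1 deg flanker spacing and isolates the target.
```

In this toy version, crowding falls out of the geometry: whenever a pooling region's radius exceeds the target-flanker spacing, the pooled response mixes their features, and shrinking the region at the cued location removes the mixing.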

Justin Theiss' Talk

Extending models of attention to complex visual tasks using a hierarchical generative model with dynamic pooling

Abstract: Visual perception in cluttered environments is a dynamic process facilitated by attention. Previous models of attention have described how attention can bias the processing of specific features and increase the sampling resolution of specific regions of the visual field. However, these models require a priori knowledge of the attended feature or location, which restricts their application to top-down attention tasks with explicit targets. Modeling attention during more complex tasks in which target features and locations vary (e.g., visual search) therefore requires the ability to infer the attentional priority of features and locations within a visual scene. In this talk, I will describe how these different aspects of attention can be modeled during a visual search task for digits among non-digit distracters, using a hierarchical generative model with dynamic pooling.
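
As a loose illustration of inferring attentional priority with a generative model and then dynamically refining pooling, consider the toy Python sketch below. The random 8x8 "digit" templates, the Gaussian likelihood score, and the two-level coarse-to-fine pass (gaussian_loglik, block_pool) are stand-in assumptions, not the hierarchical model described in the abstract.

```python
import numpy as np

# A toy sketch of inferring attentional priority with a generative
# (template-likelihood) model, then dynamically refining pooling at
# the winning location. Templates, scoring rule, and the two-level
# hierarchy are illustrative assumptions, not the talk's model.

rng = np.random.default_rng(0)

def gaussian_loglik(patch, template, sigma=1.0):
    """Log-likelihood of a patch under a Gaussian template model."""
    return -0.5 * np.sum((patch - template) ** 2) / sigma ** 2

def block_pool(patch, factor):
    """Downsample a square patch by block averaging (coarse pooling)."""
    n = patch.shape[0] // factor
    return patch[:n * factor, :n * factor].reshape(
        n, factor, n, factor).mean(axis=(1, 3))

# Hypothetical digit templates and a scene of candidate patches,
# one of which contains a noisy copy of "digit" 7.
templates = [rng.normal(size=(8, 8)) for _ in range(10)]
scene = [rng.normal(size=(8, 8)) for _ in range(5)]
scene[3] = templates[7] + 0.1 * rng.normal(size=(8, 8))

# Level 1: score coarsely pooled patches against all templates to
# build a priority map over locations; no target location or
# identity is assumed a priori.
priority = [max(gaussian_loglik(block_pool(p, 2), block_pool(t, 2))
                for t in templates) for p in scene]
focus = int(np.argmax(priority))   # attended location

# Level 2: rescore the attended patch at full resolution (finer
# pooling) to estimate the digit's identity.
identity = int(np.argmax([gaussian_loglik(scene[focus], t)
                          for t in templates]))
```

The design choice this sketch tries to convey is that priority is inferred from the generative model itself (which location best supports any digit hypothesis) rather than supplied in advance, and that "attending" amounts to dynamically allocating finer pooling to the inferred location.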