Research

Our research combines behavioral and psychophysical studies with mathematical and computational modeling. We focus primarily on visual cognition and theoretical neuroscience, but our work extends to high-level cognitive phenomena such as analogy making and problem solving.

Visual Analogy-Making and Its Neural Mechanisms

[Figure: Trial sequence with a Raven stimulus]

A new project in our lab — and one that new grad students are very welcome to join — investigates the neural mechanisms of analogy making. Human beings routinely and effortlessly “see” the shared relational structure between superficially dissimilar situations. For example, we have no problem understanding metaphorical sentences such as “Enron is the Titanic of the corporate world.” Arguably, the ability to represent and manipulate relational knowledge is at the core of human cognition, and yet explaining this ability remains one of the great unsolved problems in cognitive science. The sentence you just read consists of 31 words in a hierarchical arrangement of clauses. You had never encountered that particular sentence before, and yet you understood it without much difficulty. How does the brain handle such nested relational structures?

In our lab, we adopt a two-pronged approach to this problem. On the one hand, we collect behavioral data on how people solve visual analogies such as those used on the Raven Advanced Progressive Matrices (RAPM) Test of fluid intelligence. These data include accuracy, response time, eye tracking, and think-aloud protocols. On the other hand, we design biologically plausible neural-network models that can perform simplified versions of the same tasks. Alex’s interest in analogy making goes all the way back to his Ph.D. dissertation, “A dynamic emergent computational model of analogy-making based on decentralized representations.”

Dual-pathway Model of Perceptual Learning (Dimple)

[Figure: The Dimple model]

Much of the current research in the lab is related to a grant from the National Institutes of Health to develop a Dual-pathway Model of Perceptual Learning (Dimple).

Human performance in many perceptual tasks improves with practice. This perceptual learning (PL) is often specific to the particular stimuli used in training. For example, when people practice discriminating the orientation of near-vertical stimuli, their discrimination thresholds improve significantly for vertical but not for horizontal stimuli. This stimulus specificity is universally recognized as a key property of PL, but it is still poorly understood. Most PL theories (including the Selective Reweighting (SRW) Model summarized below) try to account for all patterns of specificity and transfer exclusively in terms of the overlap between the sensory representations activated during training and those activated during a subsequent generalization test. Although this idea clearly has merit, such representational overlap is not sufficient for a complete explanation of all cases of transfer of PL.

The perceptual categorization (PC) literature has identified mechanisms that could fill this gap. Converging evidence suggests that human category learning is mediated by multiple systems. One system is explicit; it involves verbal rules, logical reasoning, working memory, and executive attention, and it supports greater generalization to novel stimuli and tasks. The other system is implicit; it learns stimulus-response associations via reinforcement learning but generalizes relatively poorly.

The Dimple model integrates two influential theories: the SRW model of PL (Petrov, Dosher, & Lu, 2005) and the COVIS theory of PC (Ashby et al., 1998). The SRW model maps naturally onto the implicit system in COVIS. The innovation in Dimple lies in the explicit system, which operates on intermediate-level representations that give separate, controlled access to individual stimulus attributes such as orientation and spatial frequency. Dimple also has a working-memory layer that maintains and adjusts the current decision boundary. This interdisciplinary research project will be of interest both to students interested in modeling and to students interested in visual psychophysical experimentation.
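To make the dual-system idea concrete, here is a toy Matlab sketch of the explicit pathway. This is our own illustration, not the actual Dimple code: the update rule, the parameter values, and the variable names (crit, gain) are assumptions. The implicit pathway would be a reweighting learner along the lines of the SRW sketch in the next section.

    % Toy sketch of an explicit, rule-based pathway (our illustration,
    % NOT the actual Dimple code; parameters and update rule are assumed).
    nTrials = 500;
    crit    = 5;      % verbalizable criterion on orientation (deg), held in working memory
    gain    = 0.1;    % criterion-adjustment step (assumed)
    for t = 1:nTrials
        theta   = 10 * randn;           % stimulus orientation relative to vertical (deg)
        resp    = sign(theta - crit);   % explicit rule: "is it clockwise of the criterion?"
        correct = sign(theta);          % trial-by-trial feedback
        if resp ~= correct              % after an error, nudge the criterion
            crit = crit + gain * (theta - crit);  % toward the misclassified stimulus
        end
    end

Because the rule refers to an abstract attribute (orientation) rather than to concrete trained stimuli, it transfers readily to novel stimuli that share the attribute, which is exactly the kind of generalization the explicit system is meant to capture.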

Selective Reweighting Model of Perceptual Learning

[Figure: The selective reweighting model]

The selective reweighting (SRW) model instantiates the hypothesis that perceptual learning occurs at the interface between perception and cognition and is statistically driven. Specifically, learning in this model occurs at the read-out connections between a large population of sensory units and a single decision-making unit. The tuning curves and other properties of the sensory units never change. The idea of selective reweighting dates at least as far back as Rosenblatt’s (1958) Perceptron, but the SRW model was the first, and remains one of the very few, perceptual-learning models that can take actual images as input, perform the same discrimination task as the human participants, and produce learning curves that can be compared directly to the empirical data. Since its publication in 2005, converging evidence for selective reweighting has accumulated steadily, and this mechanism is now well established as central to perceptual learning. The Matlab software implementing the SRW model is freely available from Dr. Petrov’s website, which has contributed to the model’s considerable popularity and impact. It has been used in several independent modeling efforts and still plays a central role in the PL research in our lab: it forms the implicit pathway of the Dimple model (described above) and is routinely fit to behavioral data collected in our lab and elsewhere.
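To make the reweighting idea concrete, here is a minimal Matlab sketch. This is our own simplification, not the released SRW software: the Gaussian tuning curves, the tuning width, the learning rate, and the noise level are all assumptions.

    % Minimal selective-reweighting sketch (our simplification, not the
    % released SRW code). Only the read-out weights w are plastic; the
    % sensory tuning curves never change.
    nUnits = 20;
    prefs  = linspace(-90, 90, nUnits)';   % preferred orientations (deg)
    sigma  = 15;                           % tuning width (assumed)
    eta    = 0.05;                         % learning rate (assumed)
    w      = zeros(nUnits, 1);             % read-out weights
    for t = 1:2000
        theta  = sign(randn) * 5;                         % +/-5 deg off vertical
        r      = exp(-(theta - prefs).^2 / (2*sigma^2));  % fixed sensory responses
        y      = w' * r + 0.1 * randn;                    % noisy decision variable
        target = sign(theta);                             % feedback
        w      = w + eta * (target - y) * r;              % delta rule on read-out only
    end

Note that learning is stimulus specific by construction: the weights grow only for units tuned near the trained (vertical) orientation, so little improvement transfers to, say, horizontal discrimination.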

Specificity and Transfer of Perceptual Learning

[Figure: Sample perceptual-learning data]

We collect behavioral data on visual perceptual learning — that is, on the practice-induced improvement of performance in various visual discrimination tasks. The improvement can be measured as increased accuracy (percent correct and d′), decreased discrimination thresholds (as illustrated in the sample data set on the left), and/or decreased response times. In a typical experiment in our lab, human participants practice discriminating particular stimuli for a few training sessions and are then tested with different stimuli for one or more test sessions. This design allows us to measure the degree to which perceptual learning transfers from the trained condition to the test condition. These data provide important constraints on the models discussed above and on perceptual-learning theories more generally. All undergraduate and graduate students in the lab are involved in one or more such experiments.
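For concreteness, here is how a transfer measure could be computed from such data. The numbers, the variable names, and the particular specificity index below are our illustration, not the lab’s standard analysis pipeline.

    % Hypothetical numbers for illustration only.
    z = @(p) sqrt(2) * erfinv(2*p - 1);         % inverse standard normal CDF
    dprime = z(0.86) - z(0.22);                 % d' from hit and false-alarm rates

    thrTrainPre = 8.0;  thrTrainPost = 4.0;     % thresholds, trained condition
    thrTransPre = 8.0;  thrTransPost = 6.5;     % thresholds, transfer condition
    learnTrain  = (thrTrainPre - thrTrainPost) / thrTrainPre;  % proportional improvement
    learnTrans  = (thrTransPre - thrTransPost) / thrTransPre;
    specificity = 1 - learnTrans / learnTrain;  % 1 = fully specific, 0 = full transfer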

Pupillometry

The lab is equipped with an EyeLink 1000 eye tracker, which is used in all empirical projects listed above. For example, it records the order and frequency of eye fixations on various display elements during visual analogy making. The eye tracker also records the diameter of the pupil on a millisecond-by-millisecond basis. When the ambient light is held constant, pupil size covaries systematically with cognitive load and other information-processing variables. It can also be used as a non-invasive index of the release of the neurotransmitter norepinephrine by the locus coeruleus, a nucleus in the brain stem. Within the framework of the so-called Adaptive Gain Theory, pupil-size data can constrain the perceptual-learning models described above. Graduate student Taylor Hayes recently made an exciting discovery in this regard: he showed that the phasic pupil diameter decreases systematically with practice, as shown in the figure to the left. He is currently following up on this study and working out its theoretical implications. Taylor won the prestigious Presidential Fellowship for this excellent work.
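A typical first step in such analyses is to baseline-correct each trial’s pupil trace and extract the phasic (task-evoked) response. The Matlab sketch below illustrates the idea; the function name, argument layout, and window conventions are our assumptions, not the lab’s actual analysis code.

    function peakResp = phasicPupil(pupil, fs, pre)
    % PHASICPUPIL  Baseline-corrected peak pupil dilation on each trial.
    %   pupil : nTrials-by-nSamples matrix of pupil diameters, time-locked
    %           so that stimulus onset falls at sample round(pre*fs) + 1
    %   fs    : sampling rate in Hz (1000 for the EyeLink 1000)
    %   pre   : duration of the pre-stimulus baseline window in seconds
    nBase    = round(pre * fs);
    base     = mean(pupil(:, 1:nBase), 2);         % per-trial baseline diameter
    phasic   = pupil - base;                       % implicit expansion (R2016b+)
    peakResp = max(phasic(:, nBase+1:end), [], 2); % peak evoked dilation per trial
    end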

Other

While the projects above illustrate some of the ongoing research in the lab, the list is by no means exhaustive. Graduate students and postdocs are strongly encouraged to formulate research directions of their own. Last but not least, the publications page provides examples of past projects, many of which are still active. These include:

  • Visual object recognition and its philosophical implications
  • Memory-based scaling and categorization, prototypes
  • Cognitive architectures: Leabra, ACT-R, DUAL
  • Machine learning