Research Interests

The ability to perceive spoken language is a faculty that many of us take for granted, yet one we rely on almost every waking moment of every day. Speech perception happens so efficiently and seems so effortless that we rarely give much thought to the complex processes involved. We are interested in the sensory, cognitive, and neurobiological mechanisms that underlie the perception of speech. The elegance and complexity of this system are a marvel, and the goal of my program of research is to better understand how it works. Speech perception draws on a number of different neural and cognitive systems, from the basic sensory mechanisms required to transduce and encode sound itself to the higher-level cognitive mechanisms required to turn that sound into a complex mental representation. We investigate three fundamental questions in our lab: What is the relationship between the sensory encoding of speech and the percept it evokes (more succinctly, how do you get from sensation to perception)? To what degree are these mechanisms specific to speech and language, as opposed to domain-general perceptual abilities? And how malleable are these mechanisms, and how can they be enhanced? We examine these questions using techniques from multiple disciplines, including cognitive psychology, behavioral neuroscience, linguistics, and the speech and hearing sciences. Each informs our work and has provided me with a repertoire of skills that has been valuable both for pursuing a productive research program and for engaging, advising, and training student researchers in psychology, neuroscience, and linguistics.

Auditory Cognition

Our ability to hear is not purely audiological; it relies on an entire cognitive network dedicated to auditory processing. Much of the research in the speech and hearing sciences has focused on finding the source of hearing loss and treating it, and the broader cognitive systems that process lower-level auditory input are often overlooked. Similarly, in cognitive psychology and neuroscience, little attention has been paid to general auditory cognition, with studies favoring the higher-level systems for spoken language perception. Auditory cognitive abilities are important to study because they undergird domains such as language acquisition (first as well as second language learning) and may help explain general, nonspecific speech and hearing deficits such as central auditory processing disorders or some forms of dyslexia. Auditory cognition is also likely important for adapting to new environments, tolerating speech in noise, and localizing sounds effectively to navigate the world.

Auditory cognitive abilities may also play an important role in determining how strongly hearing loss affects an individual. These abilities likely act as coping strategies, helping some people stave off the effects of hearing loss for longer, and may well contribute to differences in when people seek hearing testing and when they opt for hearing aids or cochlear implants. Two individuals may present with the same degree of hearing loss but experience very different practical levels of impairment because of differences in auditory cognitive abilities, just as two individuals with similar etiologies and deficits may show different levels of adaptation to, and utilization of, their hearing aids or cochlear implants. Such variability is well known in the field but remains largely unexplained. Because comparatively little is known about the auditory cognitive processes that lie between audibility and language, the goal of this project is to address these gaps.

This project seeks to answer two research questions. RQ1: How does auditory cognition vary across individuals? What are the influences of previous language learning, real-world listening experience, noise exposure, age, and formal education on auditory cognition? RQ2: How does auditory cognition vary with hearing loss and the use of assistive hearing devices? To what extent does auditory cognition mitigate the impact of hearing loss and alter the timeline for seeking a formal diagnosis and adopting an assistive listening device? How does auditory cognition influence the successful use of such devices?

To address these questions, the project has three specific aims. Aim 1 is to finalize and deploy a mobile testing system that integrates five auditory cognitive tests (outlined below), collecting behavioral data (accuracy and reaction time) and subjective assessments of listening effort during performance. Simultaneously, we will use pupillometry as an objective measure of neurocognitive processing and effort. Together, these measures allow us to integrate self-assessment and task performance with a more objective index of cognitive load. Aim 2 is to standardize these tests across a large range of participants with a variety of hearing abilities; we aim to test approximately 800 individuals using a targeted recruiting strategy to achieve a representative sample within our home state. Aim 3 is to use statistical modeling and machine learning to develop a metric that distills baseline abilities and defines an expected range of functioning, from low to high, for specific age ranges and hearing abilities. This will allow us to understand how these auditory cognitive abilities contribute to healthy hearing across the lifespan.
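To make Aim 3 concrete, the following is a minimal sketch of one way the normative modeling could look: a composite score built from accuracy and reaction time, regressed on age and a standard hearing measure to yield an expected performance band. The column names, the pure-tone-average predictor, and the simple linear model are illustrative assumptions, not the project's actual analysis pipeline.

    # Hypothetical sketch of an Aim 3 normative model: combine accuracy and
    # reaction time into a composite auditory-cognition score, then estimate the
    # expected range of that score as a function of age and hearing ability.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    def composite_score(df: pd.DataFrame) -> pd.Series:
        """Average of z-scored accuracy and negated z-scored reaction time."""
        z_acc = (df["accuracy"] - df["accuracy"].mean()) / df["accuracy"].std()
        z_rt = (df["reaction_time"] - df["reaction_time"].mean()) / df["reaction_time"].std()
        return (z_acc - z_rt) / 2  # higher = better performance

    def normative_range(df: pd.DataFrame, age: float, pta_db: float):
        """Predict an approximate 10th-90th percentile score band for a given
        age and pure-tone average (dB HL), assuming roughly normal residuals."""
        X = df[["age", "pta_db"]].to_numpy()
        y = composite_score(df).to_numpy()
        model = LinearRegression().fit(X, y)
        resid_sd = np.std(y - model.predict(X))
        center = model.predict(np.array([[age, pta_db]]))[0]
        return center - 1.28 * resid_sd, center + 1.28 * resid_sd

Under this kind of model, an individual whose observed composite score falls outside the band predicted for their age and hearing level would be a candidate for the low-to-high classification of functioning described in Aim 3.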

Psychophysiology

We are also keenly interested in applying a variety of psychophysiological techniques to investigate the neural bases of speech perception. I have experience using EEG, ERP, EOG, eye tracking, and fNIR in my laboratory courses as well as in my own empirical research. All of these techniques are available at St. Olaf, and we are interested in applying them to both our normal-hearing listeners and cochlear implant users.

Perceptual Learning of Degraded Auditory Signals

A tremendous challenge for our perceptual systems is to create a stable and reliable percept from a highly variable input signal. Even when perceiving speech in our native language, we must contend with talker variability (accent, dialect, speech pathologies), environmental variability (competing talkers, background noise, room acoustics), and signal variability (fast or slow speaking rate, signal loss, spectral filtering and degradation). The ability of our perceptual systems to withstand such variability relies on rapid perceptual learning. We are interested in understanding how lower-level sensory mechanisms interact with higher-level cognitive mechanisms during perceptual learning, and in the cognitive and perceptual skills that develop as a consequence. To this end, one line of our research focuses on the perceptual learning of degraded speech by normal-hearing (NH) subjects listening to acoustic simulations of a cochlear implant.
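As a rough illustration of the kind of acoustic simulation referred to here, the sketch below implements a simple noise vocoder: speech is split into a few frequency bands, each band's amplitude envelope is extracted, and the envelopes modulate band-limited noise. The number of bands, band edges, filter orders, and envelope cutoff are illustrative choices, not the specific parameters used in our studies.

    # A minimal noise-vocoder sketch (envelope-modulated noise bands).
    # Assumes a sampling rate high enough for the chosen band edges (e.g., 16 kHz+).
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def noise_vocode(signal: np.ndarray, fs: int, n_bands: int = 8,
                     lo: float = 100.0, hi: float = 7000.0,
                     env_cutoff: float = 160.0) -> np.ndarray:
        rng = np.random.default_rng(0)
        edges = np.geomspace(lo, hi, n_bands + 1)  # log-spaced band edges
        env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
        out = np.zeros(len(signal), dtype=float)
        for f_lo, f_hi in zip(edges[:-1], edges[1:]):
            band_sos = butter(4, [f_lo, f_hi], btype="band", fs=fs, output="sos")
            band = sosfiltfilt(band_sos, signal)            # analysis band
            envelope = sosfiltfilt(env_sos, np.abs(band))   # rectify + low-pass
            carrier = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
            out += np.clip(envelope, 0, None) * carrier     # envelope-modulated noise
        return out / (np.max(np.abs(out)) + 1e-12)          # normalize to +/-1

Speech processed this way preserves the temporal envelope in each band but discards fine spectral detail, which is what makes it a useful normal-hearing stand-in for the signal a cochlear implant delivers.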

Cochlear Implants

An exciting (and perhaps unique) aspect of our research program is its applicability to clinical practice, namely to cochlear implant users themselves. Although the perceptual learning process is critical to developing accurate perceptual abilities following cochlear implantation, most adult cochlear implant users do not undergo any formal training or rehabilitation. As a result, there is significant variability across CI users in their proficiency with the device, and therefore in their psychological experience of and satisfaction with their implant. Over the past four years, we have developed and are currently testing a novel, multi-day, targeted training paradigm for new CI users. The paradigm targets both higher-level (contextual, syntactic) and lower-level (word identification, phoneme discrimination) linguistic aspects of speech, as well as paralinguistic aspects of speech (talker identification) and domain-general auditory abilities (environmental sound identification). Our goal is to help new CI users develop a standardized set of cognitive and perceptual abilities that enhances performance in real-world listening situations, such as listening to speech in noise, a difficult condition for CI users.

Et Cetera

As in any lab, much of what we do is driven by student interests. We have a variety of projects that were generated by students and continue in some capacity, including the link between musical mode and emotion, pitch training for cochlear implant users, the speech-to-song illusion, and the link between mathematical reasoning and language.