Kira Wegner-Clemens, PhD
cognitive neuroscientist
Hi! I'm a postdoctoral fellow at Georgetown University in the Right Hemisphere Emotion, Cognition, Recovery Lab, focusing on attention changes after right hemisphere stroke.

My primary research interest is attentional selection in multisensory contexts.

I completed my PhD at George Washington University in the Attention and Cognition Lab, focusing on semantic guidance of attention in audiovisual contexts.

Before grad school, I received my BA from Rice University and worked as a post-baccalaureate researcher at Baylor College of Medicine.
2026

Wegner-Clemens, K., Malcolm, G. L., Kravitz, D. J., Shomstein, S. (2026) Task irrelevant sounds influence visual attention through graded crossmodal semantic modulation. Psychonomic Bulletin & Review. In press.


2024

Wegner-Clemens, K., Malcolm, G. L., Shomstein, S. (2024) Predicting attention in real-world environments: the need to investigate crossmodal semantic guidance. WIREs Cognitive Science. (link)

2022

Wegner-Clemens, K., Malcolm, G. L., Shomstein, S. (2022) How much is a cow like a meow? A novel database of human judgements of audiovisual semantic relatedness. Attention, Perception, & Psychophysics. (link; preprint)


2020

Magnotti, J.F., Dzeda, K.B., Wegner-Clemens, K., Rennig, J., & Beauchamp, M.S. (2020). Weak observer-level correlation and strong stimulus-level correlation between the McGurk effect and audiovisual speech-in-noise: a causal inference explanation. Cortex. doi:10.1016/j.cortex.2020.10.002 (pdf; link)


Wegner-Clemens, K., Rennig, J., & Beauchamp, M.S. (2020) A relationship between Autism-Spectrum Quotient and face viewing behavior in 98 participants. PLoS ONE 15(4): e0230866. (pdf; link)


2019


Wegner-Clemens, K., Rennig, J., Magnotti, J.F., & Beauchamp, M.S. (2019) Using principal components analysis to characterize eye movement fixation patterns during face viewing. Journal of Vision, 19(13), 2. doi:10.1167/19.13.2 (pdf; link)


Rennig, J., Wegner-Clemens, K., & Beauchamp, M.S. (2019) Face Viewing Behavior Predicts Multisensory Gain During Speech Perception. Psychonomic Bulletin & Review, 27, 70–77 (2020). (pdf; link)


Convento, S., Wegner-Clemens, K. A., & Yau, J. M. (2019). Reciprocal Interactions Between Audition and Touch in Flutter Frequency Perception. Multisensory Research, 32(1), 67-85. (pdf; link)
Ongoing projects: My work currently focuses on understanding temporal attention and attention in complex, dynamic scenes after stroke.


Semantic guidance of audiovisual attention: Real-world environments are multimodal and semantically rich. Prior work has shown that semantic relatedness between visual objects guides what is attended in visual scenes automatically, rather than as a task strategy. In my dissertation work, we demonstrated that task-irrelevant semantic guidance extends across modalities (e.g., between sounds and images) and operates in a graded manner (i.e., scaled with degree of relatedness, not simply related vs. unrelated). (Now in press!)

This project leveraged online data collection to create a database of crossmodal semantic relatedness judgements (the Sight Sound Semantics database) and to investigate questions about attentional guidance.


Spatial attention & speech perception: Visual information from the face and mouth can drastically improve speech perception, particularly in noisy environments. However, how much visual information helps varies substantially between individuals. During post-bac research, we used eye tracking and found that where individuals look during easy-to-understand speech predicts how much they benefit from visual information during difficult-to-understand speech (published here).

In graduate work, we tested whether this effect could be explained by individual differences in covert attention by holding gaze constant and manipulating covert attention with cues toward the eyes or mouth (manuscript in preparation).