Daniel Levin

Professor of Psychology and Human Development

Research in the Levin lab is focused on the interface between knowledge and visual perception. To this end, we have been exploring the concepts associated with a variety of object categories, and the knowledge that drives visual selection during scene and event perception. Some of our research explores how knowledge and other basic cognitive constraints affect scene and event perception. For example, we are currently exploring how people perceive the sequence of natural visual events, and how they represent space while viewing movies. In related research, we are exploring how visual attention and concepts about agency affect event perception, human-computer interaction, learning from agent-based tutoring systems, and performance in nursing simulation training. I received my BA from Reed College in 1990 and my Ph.D. from Cornell University in 1997, then moved to a faculty position at Kent State University. Since 2003, I have been at Vanderbilt, where I am Professor of Psychology in Peabody College's Department of Psychology and Human Development.

Lab Website

Representative Publications

Levin, D.T., & Keliikuli, K. (2020). An empirical assessment of cinematic continuity. Psychology of Aesthetics, Creativity, and the Arts.

Jaeger, C.B., Brosnan, S.F., Levin, D.T., & Jones, O.B. (2020). Predicting variation in endowment effect magnitudes. Evolution and Human Behavior, 41(3), 253-259.

Jaeger, C.B., Little, J.W., & Levin, D.T. (2021). The prevalence and utility of formal features in YouTube screen-capture instructional videos. Technical Communication, 68(1), 56-72.

Salas, J.A., & Levin, D.T. (2021). Efficient calculations of NSS-based gaze similarity for time dependent stimuli. Behavior Research Methods. doi: 10.3758/s13428-021-01562-0

Levin, D. T., Salas, J. A., Wright, A. M., Seiffert, A. E., Carter, K. E., & Little, J. W. (2021). The incomplete tyranny of dynamic stimuli: gaze similarity predicts response similarity in screen-captured instructional videos. Cognitive Science, 45(6), e12984.

Levin, D.T., Baker, L.J., Wright, A., Little, J., & Jaeger, C. (2022). Perceiving vs. scrutinizing: Viewers do not default to awareness of small spatiotemporal inconsistencies in movie edits. Psychology of Aesthetics, Creativity, and the Arts.

Levin, D.T., Mattarella-Micke, A., Lee, M.J., Baker, L.J., Bezdek, M.A., & McCandliss, B.D. (2022). How movie events engage children's brains to combine visual attention with domain-specific processing involving number and theory of mind in a cinematic arena. Projections: The Journal for Movies and Mind.

Wright, A.M., Carter, K.E., Bibyk, S.A., Jaeger, C.B., Watson, D.G., & Levin, D.T. (2022). Video speeding can be efficient and learner preference can be improved by selective speeding. Journal of Experimental Psychology: Applied, 28(4), 916-930.

Xie, S., Xue, X., Mishra, S., Wright, A., Biswas, G., & Levin, D.T. (2022). A case study of prevalence and causes of eye tracking data loss in a middle school classroom. Educational Technology Research and Development, 70, 2017-2032.

Wright, A. M., Salas, J. A., Carter, K. E., & Levin, D. T. (2022). Eye Movement Modeling Examples guide viewer eye movements but do not improve learning. Learning and Instruction, 79, 101601.

Davalos, E., Vatral, C., Cohn, C., Fonteles, J., Biswas, G., Mohammad, N., Lee, M., & Levin, D.T. (2023). Identifying Gaze Behavior Evolution via Temporal Fully-Weighted Scanpath Graphs. 13th International Learning Analytics and Knowledge Conference.

Lee, M.J., Jaeger, C.B., & Levin, D.T. (in press). To search is to see: Reducing incidental change blindness via intentional search tasks. Journal of Experimental Psychology: Human Perception and Performance.

Kondyli, V., Bhatt, M., Levin, D.T., & Suchan, J. (in press). Using change detection and gaze on visual changes as indices of strategic attention during simulated driving. Cognitive Research: Principles and Implications.