LAURA MAI STOINSKI. Hi, I am a PhD student in the Max Planck Research Group "Vision and Computational Cognition." My co-supervisors are Prof. Martin Hebart and Prof. Gesa Hartwigsen.

I am interested in how we perceive and represent the world around us. More precisely, my research focuses on the nature of neural object representations (e.g., our mental representations of the concepts "dog," "peace," or "scientist").

Currently, my work is driven by the following questions: What are the core visual and semantic object properties that allow us to recognize and differentiate objects? How do semantic and purely visual representations differ? Where do they map onto the human cortex? And what are the distinct contributions of visual versus semantic dimensions in forming internal object representations?

I aim to address these questions in congenitally blind and sighted populations using a combination of neuroscientific, computational, and psychological approaches.

Beyond studying visuo-cognitive capacities, I enjoy putting my own to use for painting, reading, and art.

since 2022 Research Fellow in the group of Martin Hebart, Max Planck Institute for Human Cognitive & Brain Sciences, Leipzig
2019 - 2022 M.Sc. Psychology, University of Leipzig
2015 - 2019 B.Sc. Psychology, University of Konstanz

Stoinski, L. M., Perkuhn, J., & Hebart, M. N. (2023). THINGSplus: New norms and metadata for the THINGS database of 1,854 object concepts and 26,107 natural object images. Behavior Research Methods.

Sikström, S., Stoinski, L. M., Karlsson, K., Stille, L., & Willander, J. (2020). Weighting power by preference eliminates gender differences. PLOS ONE, 15(11).


Stoinski, L. M., Perkuhn, J., & Hebart, M. N. (2022, May). THINGS+: New norms and metadata for the THINGS database of 1,854 object concepts and 26,107 natural object images. Annual Meeting of the Vision Sciences Society, St. Pete Beach, FL, USA.

THINGS is a freely available, large-scale database of 1,854 systematically sampled object concepts and 26,107 high-quality naturalistic images of these concepts. We recently extended THINGS by adding concept- and image-specific norms, 53 higher-level category memberships, and one license-free image per concept.
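To give a flavor of how such concept-level metadata can be queried, here is a minimal sketch using a toy stand-in table. The column names and norm values below are illustrative assumptions, not the actual THINGSplus file schema:

```python
import pandas as pd

# Toy stand-in for THINGSplus concept-level metadata.
# Column names and values are hypothetical, for illustration only.
concepts = pd.DataFrame({
    "concept": ["dog", "peace sign", "microscope"],
    "category": ["animal", "symbol", "tool"],       # higher-level category membership
    "nameability": [0.97, 0.62, 0.81],              # hypothetical norm values
})

# Example query: select all concepts belonging to one higher-level category.
tools = concepts[concepts["category"] == "tool"]
print(tools["concept"].tolist())  # → ['microscope']
```

In practice, one would load the released metadata files instead of constructing a table by hand; the filtering pattern stays the same.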

With this project, we hope to provide researchers with a valuable resource of normed living and non-living object stimuli. Many laboratories around the world have started collecting data with THINGS and made their data publicly available for research purposes. Interested in using THINGS? Check it out here ➔