Making sense of the world around us is a vital ability of the human brain. Nearly effortlessly, we are able to recognize a seemingly endless number of object categories
and retrieve their key properties. For instance, we are able to recognize a mug when we see it, know that we can drink from it, and even anticipate how heavy it is
going to feel when we pick it up. Similarly, we can report visual attributes (e.g., “it is green”), conceptual knowledge (e.g., “it is artificial”), and properties of
other modalities (e.g., haptic: “it feels smooth”) when reading or hearing the word “mug.”
We can identify countless perceptual and semantic properties to describe objects. But which of these does the brain rely on to distinguish between objects, and which are the most important?
Decades of research have explored these questions, revealing several candidate dimensions, such as eccentricity, curvature, animacy, and size. However, both visual input and semantic knowledge
contribute to how objects are represented in the brain. Since these two factors are deeply intertwined, it remains challenging to disentangle the influence of dimensions rooted in visual input from those grounded in semantic knowledge.
For instance, visual dimensions often correlate with semantic properties or specific categories; e.g., most mammals (a semantic category)
have a curved shape (a mid-level visual feature). And even congenitally blind people may have a concept of visual features and associate, say, the color red with roses, cherries, and wine.
Also, some dimensions like size are represented at a lower visual level (i.e., the space an object occupies on the retina when we look at it)
as well as at a higher conceptual level (i.e., how large we know the object actually is).
You get the problem: visual and semantic dimensions are hard to pull apart.
To disentangle visual-perceptual, visual-semantic, and other semantic dimensions, I aim to identify the properties people use to differentiate object images vs. object words. I will then examine how these image-derived and word-derived dimensions are reflected in fMRI responses to the same images and words. Additionally, I am investigating object representations in both sighted and congenitally blind individuals. Through this work, I hope to uncover the dimensions shaping object representations when visual input or experience is present versus absent.
This page is where I'll post all my updates, so stay tuned!