Making sense of the world around us is a vital ability of the human brain. Nearly effortlessly, we are able to recognize a seemingly endless number of object categories
and retrieve their key properties. For instance, we are able to recognize a mug when we see it, know that we can drink from it, and even anticipate how heavy it is
going to feel when we pick it up. Similarly, we can report visual attributes (e.g., “it is green”), conceptual knowledge (e.g., “it is artificial”), and properties of
other modalities (e.g., haptic: “it feels smooth”) when reading or hearing the word “mug.”
Object recognition is supported by the ventral visual stream, a hierarchy of brain regions spanning
the occipitotemporal cortex. Along the ventral stream, object categories are represented by a distributed
pattern of activation. We think this pattern reflects a combination of object features or
dimensions of increasing complexity:
the represented information ranges from low-level visual features like eccentricity, color, or orientation of the object, through
mid-level properties such as structure and shape, to high-level
conceptual properties such as animacy and real-world size.
Take dinosaurs, for example. We can list countless perceptual and semantic properties to describe them.
But which of them does our brain use to represent the concept of "dinosaur"? And which are the most important ones?
Decades of research have addressed these questions and uncovered a number of important object dimensions, such as color, shape, size, or animacy.
However, both visual input and semantic knowledge contribute to our object representations. In addition, the two are tightly intertwined,
which makes it difficult to differentiate purely visual from higher-level semantic dimensions.
For instance, visual dimensions are often
correlated with semantic properties or specific categories; e.g., most mammals (a semantic category)
have a curved shape (a mid-level visual feature). And even congenitally blind people may have a concept of visual features and relate, say, the color red to roses, cherries, and wine.
Also, some dimensions, like size, are represented both at a lower visual level (i.e., the space an object occupies on the retina when we look at it)
and at a higher conceptual level (i.e., how large we know the object actually is).
Ok, you get the problem here...
To better differentiate between visual and semantic dimensions, I will compare the neural representations of object
images vs. words, the dimensions they are composed of, and how they transfer to behavior. Further,
I am studying object representations in sighted and congenitally blind participants. In doing so, I hope to
find out which information is represented only in sighted participants (and thus is probably "visual"), which in both groups
(and thus is probably semantic or from non-visual modalities), and which information is encoded more strongly in congenitally blind participants (a potential sign of plasticity).
This page is where I'll post all my updates, so stay tuned!