There are plenty of linguistic variables to consider when designing experiments on language processing, and the list of such factors is growing so swiftly that it’s difficult to keep up. We’ve recently conducted a neuroimaging pilot study looking at a relatively new variable – semantic size, i.e., the real-world size of the object to which a given word refers. Here is what drove us to do that.
In 2009, Sereno, O’Donnell and Sereno came up with the concept of semantic size, and initially examined its effect on word recognition in a lexical decision task. They compared RT and accuracy between ‘big’ words (e.g., ‘jungle’) and ‘small’ words (e.g., ‘needle’), and found that semantic size accounted for the behavioural responses over and above well-established variables (word length, frequency, etc.). Sereno et al. were the first to show that ‘big’ words (513 ms) were recognised faster than words denoting small entities (528 ms). However, soon after that study, Kang, Yap, Tse, and Kurby (2011) used the same stimuli and task, and found no latency advantage for bigger words. Not only was their sample larger than Sereno’s team’s (80 vs 24 participants), but their statistical power was also far higher (.81 vs .32). To support the null hypothesis, Kang et al. additionally analysed latency data from two word recognition megastudies. This is where the story ends.
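To get an intuition for why the larger sample matters here, one can sketch the power calculation with a normal approximation to the two-sample t-test. This is an illustration only – the effect size below (d = 0.5) is an assumption for the example, not a value reported by either study, and both studies actually used within-subject designs:

```python
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test
    using the normal approximation: Phi(d * sqrt(n/2) - z_{1 - alpha/2})."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5  # noncentrality parameter under H1
    return NormalDist().cdf(ncp - z_crit)

# Hypothetical effect size of d = 0.5; sample sizes from the two studies.
print(two_sample_power(0.5, 24))  # smaller sample -> lower power
print(two_sample_power(0.5, 80))  # larger sample -> higher power
```

The qualitative point survives any reasonable choice of d: at the same true effect size, the 80-participant study has a far better chance of detecting a real difference, which makes its null result more informative.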
It does make sense. Why would the size of a word’s referent facilitate lexical decision (distinguishing between words and pseudowords)? We, on the other hand, were interested in whether the brain processes semantic size at all. So we conducted a simple fMRI pilot study on five folks, in which we used the stimuli from Sereno et al. in a silent word reading paradigm (adding a fixation cross and verbs as low- and high-level baselines, respectively). The results were far more interesting than we had expected. Reading big nouns increased the BOLD signal in the left middle occipital gyrus [-36 -88 31], whilst small nouns correlated with a cluster in the right cuneus [3 -82 28]. A more lenient threshold for that contrast also revealed a four-voxel activation within the right lingual gyrus [15 -55 -2].
While the precise interpretation of the results awaits further scrutiny, we propose that semantic size can modulate brain activity through processes related to mental imagery. Given that much of our conceptual knowledge is represented in the perceptual system, it’s plausible that word recognition – not exclusively in the visual modality – can automatically initiate mental imagery of the perceptual properties (such as size) of the word’s referent. The neural mechanisms that seem to support this process are located in the primary visual system and nearby cortical regions (BA 17, 18, 19). For instance, BA 17 has been shown to be more active during mental imagery of letters (Kosslyn et al., 1993) and of object size (Kosslyn, Thompson, & Alpert, 1997). Activation within BA 19 (visual association cortex), on the other hand, correlates with mental imagery of shape (Knauff, Kassubek, Mulack, & Greenlee, 2000). Finally, a neuroimaging study by Ganis, Thompson, and Kosslyn (2004) demonstrated that BA 17, 18, and 19 all underlie not only visual perception, but also visual mental imagery.
Although all we’ve got is data from five subjects, possibly with some false positives, the results are quite exciting – semantic size may modulate brain activity in a manner conceptually similar to the way object size affects visual perception. When viewed at the same distance, bigger objects, compared to smaller ones, are transmitted far more quickly through the magnocellular pathway of the visual system. More importantly, a semantic size modulation would suggest that words trigger a perceptual representation of the object they refer to, thus somewhat bridging word meaning and object recognition. For now, we’ve stopped right there. We’ve decided to run a couple of behavioural studies first, to find out what is actually going on in the scanner. We hope to learn a bit more about the processing of semantic size through the Stroop effect, and later using dual-task methods (word processing vs mental imagery).