The neural basis of language about animates
Poster Session A, Friday, September 12, 11:00 am - 12:30 pm, Field House
Miriam Hauptman1, Giulia Elli1, Connor Lane1, Rashi Pant1,2, Marina Bedny1; 1Johns Hopkins University, 2Universität Hamburg
Starting early in life, humans are highly motivated to think and talk about animate entities (i.e., people, animals). Infants attend to faces from birth (Maurer, 1985; Johnson & Morton, 1991), and nouns referring to people and animals are among their first words (Nelson, 1973; Gentner & Boroditsky, 2008). Throughout life, we use language to acquire culturally-accumulated knowledge from other people, such as how plants grow and why people get sick (Harris & Koenig, 2006; Legare et al., 2012). Here we applied univariate and multivariate fMRI approaches to investigate the neural mechanisms that support thinking about animates during language comprehension. To provide broad insight into this question, we combined data across three fMRI experiments (total n=75) with stimuli ranging from single words (e.g., ‘sparrow’, ‘giraffe’) to two-word phrases (e.g., ‘princesses want’) and short stories (e.g., ‘Sam sunbathed on a sunny beach…’). Across conditions, linguistic stimuli were matched in word frequency, length, and grammatical complexity. Stories elicited causal inferences across sentences without explicitly mentioning additional people or animals, allowing us to disentangle conceptual retrieval from surface form. Some stories encouraged thinking about minds, others about bodies (i.e., illness). A subset of participants (n=32) additionally completed a language localizer task that identified language-responsive regions in individual brains (Fedorenko et al., 2010). Analyses used both univariate (whole-cortex, individual-subject fROI) and multivariate (MVPA, RSA) approaches. Across all experiments, understanding language about people and animals recruited a consistent temporoparietal ‘animacy network,’ including specific portions of the temporoparietal junction and precuneus. This network exhibits several signatures of sensitivity to animacy. First, it responds more to single words describing animates (animals) than inanimates (places).
Elevated responses to the mention of animates are also observed in story contexts, where the network’s activity scales with the number of animates (people) mentioned. Multivariate patterns within the animacy network distinguish between types of people (e.g., princesses vs. cowboys) described in two-word phrases (e.g., ‘cowboys believe’), regardless of the mental states the agents are engaged in (e.g., wanting vs. believing). Finally, the same temporoparietal animacy network shows an elevated response to causal inferences about animates (e.g., sneezing transmits illness) compared to causal inferences about inanimate objects (e.g., earthquakes damage buildings), as well as to causally unconnected stories. Neighboring regions within this network are sensitive to causal inferences about bodies (illness) and minds (beliefs). In contrast to the temporoparietal animacy network, visual areas associated with face processing (i.e., the fusiform face area) did not respond to language about animates. Individually localized frontotemporal language regions also did not track the number of animates in stories or prefer causal stories over non-causal stories, animate or otherwise. However, multivariate patterns in language regions did distinguish animate from inanimate causal stories. Together, our findings suggest that understanding language about animates depends on an amodal semantic network. We hypothesize that this network interacts with frontotemporal language regions during comprehension. More broadly, our results support the hypothesis that the frontotemporal language network interacts with a distributed collection of semantic systems representing different types of abstract conceptual knowledge.
Topic Areas: Meaning: Lexical Semantics, Meaning: Discourse and Pragmatics