Poster Presentation


Linguistic features affect the formation of our semantic networks

Poster Session B, Friday, September 12, 4:30 - 6:00 pm, Field House

Bek N.I. Hardy1, Dwight J. Kravitz1,2; 1The George Washington University, 2U.S. National Science Foundation

As we read, previously encountered text primes semantic networks, generating predictions about upcoming words that make reading more efficient. Cognitive accounts propose that this priming is primarily driven by semantic similarity, whereas linguistic accounts emphasize the roles of other features, such as hierarchy (super- vs. sub-ordinate categorical relationships). In the current study we explored the impact of semantic similarity and hierarchical relationships simultaneously. 576 participants completed a lexical decision task (real vs. pseudoword) across 144 independent and trial-unique stimuli (500ms). Pseudowords were constructed to precisely match the real words in letter content while remaining phonologically plausible. Trials were arranged in counterbalanced pairs, such that real words and pseudowords followed each other equally often. Unbeknownst to participants, prime-target pairs were embedded in the trial structure and could appear either with the superordinate (category head) followed by the subordinate (category member), or the subordinate followed by the superordinate. Stimuli were tightly controlled for frequency, string length, and semantic similarity. If the cognitive account of spreading activation driven by semantic similarity is accurate, the Superordinate-Subordinate and Subordinate-Superordinate conditions should yield equal response times, because similarity is the driving factor and is identical across conditions. Linguistic accounts, however, predict that the Subordinate-Superordinate condition should be faster, because subordinate terms carry more specific featural information, producing a larger priming effect. Contrary to both theories, the strongest priming effect was observed in the Superordinate-Subordinate condition. Thus, neither theory accurately predicted the outcome: hierarchy directly affected priming, but in the opposite direction from the linguistic prediction.
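The counterbalanced prime-target ordering described above can be sketched in a few lines of Python. This is a minimal illustration only: the word pairs, the `build_trials` helper, and the alternating assignment rule are hypothetical stand-ins for the actual stimulus set and counterbalancing scheme used in the study.

```python
import random

# Hypothetical superordinate-subordinate pairs (illustrative only).
pairs = [
    ("color", "blue"),
    ("animal", "dog"),
    ("fruit", "apple"),
    ("metal", "iron"),
]

def build_trials(pairs, seed=0):
    """Assign each pair to one of two hierarchical orders, equally often,
    then shuffle trial order so the structure is not apparent."""
    rng = random.Random(seed)
    trials = []
    for i, (sup, sub) in enumerate(pairs):
        # Alternate across pairs so both orders occur equally often.
        if i % 2 == 0:
            prime, target, order = sup, sub, "sup-sub"
        else:
            prime, target, order = sub, sup, "sub-sup"
        trials.append({"prime": prime, "target": target, "order": order})
    rng.shuffle(trials)
    return trials

trials = build_trials(pairs)
print(trials)
```

In practice, each real-word trial would also be paired with a length-matched, phonologically plausible pseudoword trial, which this sketch omits.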
However, we were able to explain this apparently hierarchical effect through a more thorough modeling of the semantic neighborhood of, and the presumptive spreading activation caused by, the prime words. We used GPT-4 to generate the 50 closest neighbors of each prime, and spaCy to measure the similarity of those neighbors to the target. Superordinate primes (e.g., color) tended to yield neighbors that broadly overlapped with the subordinate targets (e.g., blue), but not vice versa. Subordinate terms are anchored in specific contexts and thus co-occur with a wide range of domain-relevant superordinates, whereas superordinates appear in diffuse contexts and rarely co-occur with any single subordinate. This asymmetry, recoverable purely from co-occurrence statistics, shows that hierarchical structure can emerge naturally from a mechanism that simply tracks co-occurrence. The natural statistics of language create hierarchical priming with no need for a dedicated mechanism to capture it. Our findings suggest that cognitive and linguistic theories are both correct, but describe the language mechanism at different levels of analysis. Even simple statistical models, when trained on rich input, can recover complex effects like hierarchical priming due to the nature of language. This result has significant implications for computational modeling, language learning, and our understanding of related disorders. These results provide a conceptual framework for unifying experience-based learning with structured semantic knowledge.
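The neighborhood-overlap logic above can be illustrated with a small self-contained sketch. Toy co-occurrence vectors and plain cosine similarity stand in for the GPT-4-generated neighbors and spaCy vectors used in the actual analysis; the corpus, window size, and `neighborhood_overlap` helper are illustrative assumptions. The key point the sketch makes concrete is that the overlap measure, which averages the similarity of a prime's nearest neighbors to the target, is asymmetric even though cosine similarity itself is symmetric.

```python
from collections import Counter
from math import sqrt

# Toy corpus (hypothetical): the superordinate "color" appears in diffuse
# contexts, while subordinates ("blue", "red", ...) are anchored in
# specific ones.
corpus = [
    "the sky is blue today",
    "blue is my favorite color",
    "red is a warm color",
    "the red car sped past",
    "green is the color of grass",
    "what color is the dress",
    "the dress is green",
]

def cooc_vectors(sentences, window=4):
    """Within-sentence co-occurrence counts for each word."""
    vecs = {}
    for s in sentences:
        toks = s.split()
        for i, w in enumerate(toks):
            ctx = toks[max(0, i - window):i] + toks[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def neighborhood_overlap(vecs, prime, target, k=3):
    """Mean similarity of the prime's k nearest neighbors to the target:
    asymmetric in (prime, target) even though cosine is symmetric."""
    neighbors = sorted(
        (w for w in vecs if w not in (prime, target)),
        key=lambda w: cosine(vecs[w], vecs[prime]),
        reverse=True,
    )[:k]
    return sum(cosine(vecs[w], vecs[target]) for w in neighbors) / k

vecs = cooc_vectors(corpus)
sup_sub = neighborhood_overlap(vecs, "color", "blue")  # superordinate prime
sub_sup = neighborhood_overlap(vecs, "blue", "color")  # subordinate prime
print(f"color->blue overlap: {sup_sub:.3f}")
print(f"blue->color overlap: {sub_sup:.3f}")
```

With a realistic corpus and vector space, the abstract's claim is that the superordinate-to-subordinate overlap tends to exceed the reverse; this toy corpus is far too small to demonstrate that reliably and only shows the computation.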

Topic Areas: Meaning: Lexical Semantics
