
No frills: Simple regularities in language can go a long way in the development of word knowledge

Dev Sci. 2023 Jan 21:e13373. doi: 10.1111/desc.13373. Online ahead of print.

ABSTRACT

Recent years have seen a flourishing of Natural Language Processing models that can mimic many aspects of human language fluency. These models harness a simple, decades-old idea: it is possible to learn a lot about word meanings just from exposure to language, because words that are similar in meaning are used in similar ways. The successes of these models raise the intriguing possibility that exposure to word use in language also shapes the word knowledge that children amass during development. However, this possibility is strongly challenged by the fact that the models use language input and learning mechanisms that may be unavailable to children. Across three studies, we found that unrealistically complex input and learning mechanisms are unnecessary. Instead, simple regularities of word use in children’s language input, which children have the capacity to learn, can foster knowledge about word meanings. Thus, exposure to language may play a simple but powerful role in children’s growing word knowledge.

HIGHLIGHTS

- Natural Language Processing (NLP) models can learn that words are similar in meaning from higher-order statistical regularities of word use.
- Unlike NLP models, infants and children may primarily learn only simple co-occurrences between words.
- We show that infants’ and children’s language input is rich in simple co-occurrences that can support learning similarities in meaning between words.
- We find that simple co-occurrences can explain infants’ and children’s knowledge that words are similar in meaning.
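
To make the distributional idea concrete, here is a minimal Python sketch, not the authors’ models or data: the toy corpus, window size, and word choices are all hypothetical. It builds simple first-order co-occurrence vectors and compares words by cosine similarity, the kind of “simple co-occurrence” regularity the abstract describes:

    from collections import Counter
    from math import sqrt

    # Toy stand-in for child-directed speech (hypothetical sentences).
    corpus = [
        "the dog ate the food",
        "the cat ate the food",
        "the dog chased the ball",
        "the cat chased the ball",
        "we read the book",
    ]

    WINDOW = 2  # count co-occurrences within +/- 2 words (an assumed setting)

    # Build simple (first-order) co-occurrence counts for each word.
    vectors = {}
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            context = words[max(0, i - WINDOW):i] + words[i + 1:i + 1 + WINDOW]
            vectors.setdefault(w, Counter()).update(context)

    def cosine(u, v):
        # Cosine similarity between two sparse count vectors.
        dot = sum(u[k] * v[k] for k in u if k in v)
        norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
        return dot / norm if norm else 0.0

    # Words used in similar contexts end up with similar vectors:
    print(cosine(vectors["dog"], vectors["cat"]))   # high: many shared contexts
    print(cosine(vectors["dog"], vectors["book"]))  # lower: only "the" is shared

On this toy input, “dog” and “cat” come out nearly identical because they occur in the same frames, while “dog” and “book” share almost nothing. Nothing beyond counting nearby words is involved, which is the sense in which simple regularities can “go a long way.”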

PMID:36680539 | DOI:10.1111/desc.13373

By Nevin Manimala
