Scaling language model size yields diminishing returns for single-message political persuasion

Proc Natl Acad Sci U S A. 2025 Mar 11;122(10):e2413443122. doi: 10.1073/pnas.2413443122. Epub 2025 Mar 7.

ABSTRACT

Large language models can now generate political messages as persuasive as those written by humans, raising concerns about how far this persuasiveness may continue to increase with model size. Here, we generate 720 persuasive messages on 10 US political issues from 24 language models spanning several orders of magnitude in size. We then deploy these messages in a large-scale randomized survey experiment (N = 25,982) to estimate the persuasive capability of each model. Our findings are twofold. First, we find evidence that model persuasiveness is characterized by sharply diminishing returns, such that current frontier models are only slightly more persuasive than models smaller by an order of magnitude or more. Second, we find that the association between language model size and persuasiveness shrinks toward zero and is no longer statistically significant once we adjust for mere task completion (coherence, staying on topic), a pattern that highlights task completion as a potential mediator of larger models' persuasive advantage. Given that current frontier models are already at ceiling on this task-completion metric in our setting, our results suggest that further scaling of model size may not substantially increase the persuasiveness of static LLM-generated political messages.
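The mediation pattern described in the second finding can be illustrated with simulated data. The sketch below is hypothetical and is not the paper's data or analysis code: it assumes persuasiveness is driven by a task-completion score that saturates with model size, then fits ordinary least squares with and without that mediator to show how the size coefficient shrinks once task completion is adjusted for.

```python
# Hypothetical illustration (not the study's data or code): adjusting for a
# mediator (task completion) shrinks the apparent size-persuasion slope.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# log10 of parameter count, e.g. 1e8 .. 1e12 (assumed range for illustration)
log_size = rng.uniform(8, 12, n)

# Task completion saturates (hits a ceiling) as models grow: logistic in size.
completion = 1 / (1 + np.exp(-2.0 * (log_size - 9.5)))

# Persuasion is driven by completion, not directly by size, plus noise.
persuasion = 2.0 * completion + rng.normal(0, 0.3, n)

def ols_coefs(y, predictors):
    """Ordinary least squares with an intercept; returns coefficient vector."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_unadj = ols_coefs(persuasion, [log_size])              # size alone
b_adj = ols_coefs(persuasion, [log_size, completion])    # size + mediator

print(f"unadjusted size slope: {b_unadj[1]:.3f}")
print(f"adjusted size slope:   {b_adj[1]:.3f}")
```

Under these assumptions the unadjusted slope on log size is clearly positive, while the adjusted slope collapses toward zero, mirroring the qualitative pattern the abstract reports.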

PMID:40053360 | DOI:10.1073/pnas.2413443122
