
When AI sees hotter: Overestimation bias in large language model climate assessments

Public Underst Sci. 2025 Jul 13:9636625251351575. doi: 10.1177/09636625251351575. Online ahead of print.

ABSTRACT

Large language models (LLMs) have emerged as a novel form of media, capable of generating human-like text and facilitating interactive communication. However, these systems raise concerns about inherent bias, as training on vast text corpora may encode and amplify societal biases. This study investigates overestimation bias in LLM-generated climate assessments, in which the impacts of climate change are exaggerated relative to expert consensus. Using non-parametric statistical methods, the study compares expert ratings from the Intergovernmental Panel on Climate Change 2023 Synthesis Report with responses from GPT-family LLMs. Results indicate that LLMs systematically overestimate climate change impacts, and that this bias is more pronounced when the models are prompted in the role of a climate scientist. These findings underscore the need to align LLM-generated climate assessments with expert consensus to prevent misperception and foster informed public discourse.
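The abstract does not name the specific non-parametric test used, so as an illustration only, the sketch below runs an exact sign test (a simple non-parametric procedure for paired ratings) on hypothetical expert and LLM impact scores. All ratings, names, and numbers here are invented for illustration; they are not the study's data or method.

```python
# Illustrative sign test for paired ratings (hypothetical data only).
from math import comb

# Hypothetical paired impact ratings on a 1-5 scale.
expert_ratings = [3, 2, 4, 3, 2, 3, 4, 2, 3, 4]  # e.g., expert-consensus scores
llm_ratings    = [4, 3, 5, 5, 3, 4, 5, 3, 4, 5]  # LLM answers, consistently higher

diffs = [l - e for l, e in zip(llm_ratings, expert_ratings)]
n_nonzero = sum(d != 0 for d in diffs)    # ties are dropped in a sign test
n_positive = sum(d > 0 for d in diffs)    # items where the LLM rated higher

# One-sided exact binomial p-value under H0: P(LLM > expert) = 0.5.
p_value = sum(comb(n_nonzero, k)
              for k in range(n_positive, n_nonzero + 1)) / 2 ** n_nonzero
print(f"{n_positive}/{n_nonzero} positive differences, one-sided p = {p_value:.5f}")
```

With every difference positive, the test rejects the null of no systematic shift, which is the kind of one-directional pattern the abstract describes as overestimation bias.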

PMID:40652388 | DOI:10.1177/09636625251351575

By Nevin Manimala
