J Nurs Educ. 2026 Apr 1:1-6. doi: 10.3928/01484834-20260216-01. Online ahead of print.
ABSTRACT
BACKGROUND: As artificial intelligence (AI) becomes increasingly integrated into education, understanding student perceptions of AI-generated support is critical. This pilot study examined how Doctor of Nursing Practice (DNP) students evaluate statistical help from different sources.
METHOD: Seven DNP students submitted statistical questions related to their capstone projects and received blinded responses from three sources: a custom-trained large language model (LLM) chatbot, a graduate assistant, and a professor. Students rated each response on helpfulness, satisfaction, and likelihood of use on a 5-point scale (1 = worst, 5 = best) and guessed which response came from the chatbot.
RESULTS: The LLM chatbot received the highest average ratings for helpfulness and satisfaction. However, students consistently rated responses lower when they believed they were AI-generated.
CONCLUSION: Students preferred the LLM chatbot’s responses when blinded yet demonstrated a bias against AI when the source was suspected. This bias may influence AI adoption in academic support and warrants further study.
PMID:41915914 | DOI:10.3928/01484834-20260216-01