When AI and Humans Produce Partial Truths: Examining Acceptability of Perceived Error and Perceived Associated Harms
Dr. Isabelle Freiling, Dr. Sara Yeo, and Dr. Haoning Xue
Dr. Isabelle Freiling, Dr. Sara Yeo, and Dr. Haoning Xue have published new research examining how people accept partially true health information when it is produced by humans, generative artificial intelligence, or a combination of the two. Their article, “When AI and Humans Produce Partial Truths: Examining Acceptability of Perceived Error and Perceived Associated Harms,” appears in the journal Health Communication.
The study moves beyond traditional misinformation research by focusing on messages that contain both accurate and inaccurate information, rather than messages that are entirely false. Using vaccine-related content as a test case, the authors examine how audiences judge the acceptability of perceived errors depending on whether the source is presented as a scientist, generative AI, or a scientist using AI.
“Our results imply that when information consumers perceive a human to be involved in content creation, perceived error acceptability, at least when it comes to vaccine misinformation, is higher than when [they think the] content is created by generative AI alone,” said Freiling.
The research also examines how the acceptability of perceived error relates to the harms audiences associate with the message and its topic, and how these factors interact to predict intentions to engage with the message or to intervene. Overall, the study offers timely insights into trust, credibility, and accountability in health communication as generative AI becomes more widely used to produce public-facing information.
You can read more about their research and findings here: https://www.tandfonline.com/doi/full/10.1080/10410236.2025.2608202.