Generative AI in social media health communication: systematic review and meta-analysis of user engagement with implications for cancer prevention

Bibliographic Details
Main Authors: Merl, Nicolas B. (Author), Schramm, Franziska (Author), Wies, Christoph (Author), Winterstein, Jana Therés (Author), Brinker, Titus Josef (Author)
Format: Article (Journal)
Language: English
Published: January 2026
In: European journal of cancer
Year: 2026, Volume: 232, Pages: 1-14
ISSN: 1879-0852
DOI: 10.1016/j.ejca.2025.116114
Online Access: Publisher, free access, full text: https://doi.org/10.1016/j.ejca.2025.116114
Publisher, free access, full text: https://www.sciencedirect.com/science/article/pii/S0959804925010007
Author Notes: Nicolas B. Merl, Franziska Schramm, Christoph Wies, Jana T. Winterstein, Titus J. Brinker
Description
Summary:
Background: Artificial intelligence (AI) is increasingly used to generate digital health communication, but its impact on user engagement and message perception in oncology prevention remains unclear. Understanding how AI-generated content influences attitudes and behaviors is critical for designing effective and trustworthy prevention strategies.
Methods: We conducted a systematic review and meta-analysis following the PRISMA 2020 guidelines (PROSPERO CRD420251021036). PubMed, Scopus, Web of Science, and Google Scholar were searched for studies published between 2020 and 2025 that compared AI-generated and human-generated social media content regarding user attitudes, interaction, or health-related outcomes. Risk of bias was assessed using the RoB 2, ROBINS-I, AXIS, MMAT, and CASP tools. Random-effects meta-analyses pooled standardized effect sizes for user interaction and perceived quality.
Results: Thirty-three studies (28 quantitative, five qualitative) met the inclusion criteria across the health, marketing, political, and social media domains. AI-generated content significantly increased user interaction compared with human-generated content (pooled ratio = 1.12; 95% CI 1.04-1.20), while perceived quality showed a positive but nonsignificant trend. Credibility and emotional resonance consistently mediated user engagement across modalities.
Conclusions: AI-generated communication can expand the reach and personalization of cancer prevention messages but carries risks when transparency and factual accuracy are lacking. Ethical frameworks emphasizing disclosure, credibility cues, and expert verification are essential to ensure safe use. Integrating AI tools into oncology prevention strategies may strengthen engagement, trust, and adherence to evidence-based health communication.
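Note on the pooling method: the Methods describe random-effects meta-analysis of ratio-scale effect sizes reported with a 95% confidence interval. The sketch below illustrates one common way such a pooled ratio can be computed (a DerSimonian-Laird random-effects model on the log scale); the study-level values are hypothetical placeholders, not data or code from the review itself.

# Illustrative DerSimonian-Laird random-effects pooling of interaction
# ratios (AI vs. human content). Study values are hypothetical.
import math

# (ratio, standard error of the log-ratio) for each hypothetical study
studies = [(1.05, 0.06), (1.20, 0.09), (1.10, 0.05), (1.15, 0.08)]

# Work on the log scale so the ratio estimates are approximately normal.
y = [math.log(r) for r, _ in studies]
v = [se ** 2 for _, se in studies]

# Fixed-effect weights and Cochran's Q heterogeneity statistic
w = [1.0 / vi for vi in v]
y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))

# DerSimonian-Laird estimate of the between-study variance tau^2
k = len(studies)
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects weights, pooled log-ratio, and 95% confidence interval
w_re = [1.0 / (vi + tau2) for vi in v]
y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))
lo, hi = y_re - 1.96 * se_re, y_re + 1.96 * se_re

# Back-transform to the ratio scale for reporting
print(f"pooled ratio = {math.exp(y_re):.2f}, "
      f"95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f}")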
Item Description: Available online: 14 November 2025; version of record: 17 November 2025
Accessed: 26 January 2026
Physical Description: Online Resource