Project
This study examines whether personalized large language models (LLMs) can influence people's attitudes and beliefs about climate change. In a controlled online experiment, participants interact with AI systems that differ in personalization and content. By assessing shifts in climate concern, policy preferences, and behavioral intentions, the project investigates how conversational AI can support or hinder engagement with climate issues, informing the responsible use of AI for climate communication and education.
Participants are randomly assigned to one of three experimental conditions: (1) a personalized LLM, fine-tuned on climate arguments and informed by participants' climate belief profiles (based on the Six Americas segmentation); (2) a non-personalized control model (standard LLM); and (3) a mismatched-personalization model, in which personalization is applied but deliberately does not match the user's profile.
Changes in climate concern are measured with the Six Americas Short Scale (SASSY), alongside self-reported policy preferences, climate beliefs, and behavioral intentions toward pro-environmental actions. The findings will provide empirical evidence on the psychological effects of AI-mediated climate communication and may inform practical applications in education, online learning, and science communication.
Prof. Daniel Durstewitz, Central Institute of Mental Health (CIMH) Mannheim