What users choose to share: Self-disclosure in LLM-based chatbot conversations

Conversational agents powered by large language models (LLMs) are increasingly becoming part of everyday communication, enabling users to engage in personal and socially meaningful interactions with artificial intelligence. This project examines how different conversational styles influence users’ willingness to share personal information during interactions with chatbots. Specifically, the study compares an expert-like chatbot that communicates in a formal, informational manner with a partner-like chatbot that adopts a supportive and socially engaging style. It further investigates how the interaction context (emotional versus cognitive conversations) shapes users’ self-disclosure, perceived social support, and the perceived credibility of the chatbot. The study combines questionnaire responses with data from users’ chatbot conversations to examine both subjective experiences and actual language use. By investigating how conversational design shapes users’ openness and their perceptions of the interaction, the project seeks to improve our understanding of social processes in human–AI communication and to provide insights for the development of more responsible conversational systems.

Duration

09/2025 - 12/2026

Funding

IWM budget resources

Cooperation partners

  • Prof. Dr. Niels van Berkel, Aalborg University, Denmark

  • Dr. Samuel Rhys Cox, Aalborg University, Denmark

  • Jade Martin-Lise, Aalborg University, Denmark