News
7 Feb 2024
Artificial intelligence, through tools like ChatGPT or voice assistants like Alexa, simplifies many of our daily tasks. But our reliance on these innovations is often accompanied by a nagging concern: how much of our personal information should we willingly entrust to the powerful algorithms of tech giants like Google and Microsoft?
In her dissertation, which she successfully defended the week before last, Miriam Gieselmann explored the question of when people are willing to share their personal data when interacting with artificial intelligence. Among other things, she examined the extent to which the perceived capabilities of the technology influence whether people are willing to share personal data. The result: the more capable users perceive an AI tool to be, the more willing they are to share personal data with it. However, once the perceived capabilities of the AI exceed a critical qualitative threshold, for example when the AI appears able to make inferences across domains, the effect can reverse.
These reservations about powerful AI extend beyond individuals to decision-makers in professional settings. "We have also observed that decision-makers in the workplace are less inclined to invest in AI systems if the AI has very high capabilities," reports Miriam Gieselmann.