News
11 Jun 2021
During the pandemic, many decisions with far-reaching consequences had to be made, and algorithms could have helped in making them. Researchers at the IWM, in collaboration with the University of Freiburg, have now found that citizens generally do not object to those in responsible positions drawing on algorithm-based advice - as long as a human remains the final decision-maker.
The study participants were presented with various COVID-19-related decision scenarios. The distribution of ventilators among COVID-19 patients was considered the most morality-laden scenario, whereas decisions about financial support for struggling businesses or curfew rules for members of risk groups were perceived as less severe and therefore less morality-laden. The result: in general, people had positive attitudes towards human-algorithm teaming – as long as a human had the final say.
Another important insight arose from the perspectives the study participants were asked to take: they had to imagine themselves either in the role of the decision-maker or in that of the person affected by the decision. For the most morality-laden scenario, a clear effect emerged. “In the role of the physician, the study participants were more willing to get an initial assessment from an algorithm than when they imagined being the patient about whom the decision was being made,” said Prof. Dr. Sonja Utz. The IWM researcher heads the study as part of the Tübingen Excellence Cluster “Machine Learning: New Perspectives for the Sciences.”
For more information, please refer to our press release.