{"id":6004,"date":"2025-03-26T14:11:55","date_gmt":"2025-03-26T14:11:55","guid":{"rendered":"https:\/\/blogs.kent.ac.uk\/psychology\/?p=6004"},"modified":"2025-04-01T10:55:26","modified_gmt":"2025-04-01T09:55:26","slug":"peoples-trust-in-ai-systems-to-make-moral-decisions-is-still-some-way-off","status":"publish","type":"post","link":"https:\/\/blogs.kent.ac.uk\/psychology\/2025\/03\/26\/peoples-trust-in-ai-systems-to-make-moral-decisions-is-still-some-way-off\/","title":{"rendered":"People\u2019s trust in AI systems to make moral decisions is still some way off"},"content":{"rendered":"<p class=\"lead\">Psychologists warn that AI\u2019s perceived lack of human experience and genuine understanding may limit its acceptance to make higher-stakes moral decisions.<\/p>\n<p>Artificial moral advisors (AMAs) are systems based on artificial intelligence (AI) that are starting to be designed to assist humans in making moral decisions based on established ethical theories, principles, or guidelines. While prototypes are being developed, at present AMAs are not yet being used to offer consistent, bias-free recommendations and rational moral advice. As machines powered by artificial intelligence increase in their technological capacities and move into the moral domain it is critical that we understand how people think about such artificial moral advisors.<\/p>\n<p>Research led by the\u00a0<a href=\"https:\/\/www.kent.ac.uk\/school-of-psychology\">School of Psychology<\/a>\u00a0explored how people would perceive these advisors and if they would trust their judgement, in comparison with human advisors. It found that while artificial intelligence might have the potential to offer impartial and rational advice, people still do not fully trust it to make ethical decisions on moral dilemmas.<\/p>\n<p>Published in the journal\u00a0<em>Cognition<\/em>, the research shows that people have a significant aversion to AMAs (vs humans) giving moral advice even when the advice given is identical, while also showing that this is particularly the case when advisors \u2013 human and AI alike \u2013 gave advice based on utilitarian principles (actions that could positively impact the majority). Advisors who gave non-utilitarian advice (e.g. adhering to moral rules rather than maximising outcomes) were trusted more, especially in dilemmas involving direct harm. This suggests that people value advisors\u2014human or AI\u2014who align with principles that prioritise individuals over abstract outcomes.<\/p>\n<p>Even when participants agreed with the AMA\u2019s decision, they still anticipated disagreeing with AI in the future, indicating inherent scepticism.<\/p>\n<p><a href=\"https:\/\/www.kent.ac.uk\/school-of-psychology\/people\/2039\/everett-jim\">Dr Jim Everett<\/a>\u00a0who led the research at Kent said: \u2018Trust in moral AI isn\u2019t just about accuracy or consistency\u2014it\u2019s about aligning with human values and expectations. Our research highlights a critical challenge for the adoption of AMAs and how to design systems that people truly trust. 
As technology advances, we might see AMAs become more integrated into decision-making processes, from healthcare to legal systems, so there is a major need to understand how to bridge the gap between AI capabilities and human trust.'

The research paper 'People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors' (https://www.sciencedirect.com/science/article/pii/S0010027724003147) is published in Cognition (Everett, J., University of Kent; Myers, S., University of Warwick).