Emerging technologies powered by artificial intelligence (AI) are reshaping the world around us. Debates rage about the ethics of AI, how we should program ethical AI, and which ethical values we should prioritise. Yet machine morality is as much about human moral psychology as it is about the philosophical and practical challenges of building artificial agents. In this workstream we focus both on understanding the moral psychology of emerging technologies driven by AI, and on how emerging digital technologies powered by AI, such as Mixed and Virtual Reality and large language models (LLMs), might enhance our ability to study human morality. Tackling these questions through a distinctively social psychological lens, this workstream addresses both the moral psychology of AI and moral psychology with technology and AI.

We will consider questions such as:

  • How do people think about AI being used as moral agents, and what are the psychological and ethical consequences of this?
  • How do people (mis)understand the role that AI has in either exacerbating or reducing bias and unfairness?
  • How do people think about and assign blame and responsibility for the moral decisions of AI?
  • What ethical principles do people want AI to follow in moral dilemmas, and what are the psychological and ethical challenges of aligning AI with our values?
  • How do people think about the rights and moral standing of artificial intelligences, and what if anything does this tell us about why they grant moral concern to humans and other living beings?


Workstream Leaders

Jim A.C. Everett

Reader in Psychology at the University of Kent in the United Kingdom. With joint training in philosophy and psychology, Jim specialises in moral judgment, perceptions of moral character, and the moral psychology of artificial intelligence. He currently leads three grants on the moral psychology and experimental philosophy of AI, including a UKRI/ERC Starting Grant on trust in moral machines.

Kathryn Francis

Lecturer in Psychology at Keele University in the United Kingdom. Kathryn leads the Keele Augmented Virtual and Extended Reality Network (KAVERN) and is the lead for Digital Psychology in the Digital Society Institute at Keele University. She works at the intersection of social psychology and experimental philosophy to understand both how people think about technology like AI being used for moral purposes, and how technology like VR can shed light on the fault lines of our moral psychology.

Guest Teachers

Ethan Landes

Dr Ethan Landes is a Postdoctoral Research Associate in the School of Psychology, focusing on moral psychology. Ethan is a philosopher by training but a psychologist at heart. Before Kent, he was a postdoctoral researcher at the University of Zurich under Dr Kevin Reuter, studying dual character concepts and developing research methods for experimental conceptual engineering. He completed his PhD on the philosophy of philosophy at the University of St Andrews, and his work at Kent focuses on the concept and usage of trust in artificial agents.

Caoilte Ó Ciardha

Dr Caoilte Ó Ciardha is a Reader in Forensic Psychology at the University of Kent, specializing in the causes and prevention of sexual offending. His current work explores effective deterrence messaging for child sexual abuse material (CSAM) and the impact of AI-generated CSAM on the perpetration ecosystem.

Madeline “Gracie” Reinecke

Madeline G. Reinecke is a Postdoctoral Researcher in Collective Moral Development, jointly within the Uehiro Oxford Institute and the Psychiatry Department’s Neuroscience, Ethics, and Society team. She researches moral cognition in humans and artificial intelligence from an interdisciplinary perspective at the intersection of developmental psychology, moral philosophy, and computer science. Madeline holds a PhD, MSc, and MPhil from Yale University, as well as a BSc in Psychology and Philosophy from the University of Illinois at Urbana-Champaign. She also interned on the Ethics Research Team at Google DeepMind in 2022.