New study finds that the judgments of an AI ethicist are on par with those of an expert human ethicist

In the study, ethical advice from the GPT-4 AI model proved comparably useful to that of Dr. Kwame Anthony Appiah, author of the New York Times column 'The Ethicist'.

In a new study, researchers have put the ethical capabilities of artificial intelligence to the test.

The study compared the ethical advice provided by an AI, specifically the GPT-4 model, against the insights shared by Dr. Kwame Anthony Appiah in his New York Times column “The Ethicist.”

The results showed no significant difference in the perceived usefulness of the advice given by the AI and the advice given by Dr. Appiah.

The significance of ethical decision-making

As AI technologies become more integrated into various aspects of daily life and professional fields, from healthcare to finance, there’s a growing reliance on these systems for ethical guidance.

This shift has prompted a vital discussion about the role of AI in assisting, and sometimes even replacing, human judgment in ethical considerations.

The reliance on AI for ethical guidance underscores the necessity for these systems to be not only technically proficient but also aligned with ethical standards and principles.

This background sets the stage for investigating the potential of AI, such as the GPT-4 model, to provide ethical advice on par with human experts.

Methodology

The study was conducted by Christian Terwiesch and Lennart Meincke of the University of Pennsylvania’s Wharton School, specifically the Mack Institute for Innovation Management.

A draft version of their paper, entitled The AI Ethicist: Fact or Fiction?, was published on October 11, 2023, on SSRN (formerly known as the Social Science Research Network). SSRN is an open-access research platform used to share early-stage research; it is hosted by the scientific publisher Elsevier.

For the first section of the study, participants were recruited from Prolific, which is a platform that matches academic researchers with study participants. For this study, the average age of the participants was 34, and about a third of them indicated that they regularly use ChatGPT.

A second group of participants consisted of 90 MBA students from the Wharton School.

The third and final group was an “expert panel” of 18 people: four pastors, a rabbi, and 13 academics from well-known universities. They were recruited primarily from outside the US to minimize the likelihood that they were NYT readers.

Example ethical dilemmas included “Early beachgoers claim beach spots and leave, questioning the validity of unattended possessions as true occupancy,” “A woman faces backlash for reselling Taylor Swift concert tickets at a high markup, questioning the ethics of profit in non-essential goods,” and “A man hides his past relationship with a now-famous musician from his wife, fearing she might tease him about it.”

About half of the participants were tasked with rating the usefulness of a piece of advice (randomly assigned to come from either the AI or the human columnist) on a scale of 1 to 7. The other half were shown both the AI-generated and the human-generated advice (with the sources withheld) and asked to choose which was more useful.
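The headline comparison in the first task reduces to a familiar statistical question: do average usefulness ratings differ between the two sources? A minimal sketch of that analysis is below, using a standard Welch's two-sample t-test; the ratings are invented for illustration and are not the study's actual data.

```python
# Illustrative sketch of the study's headline comparison: do average
# usefulness ratings (1-7 scale) differ between AI and human advice?
# The ratings below are invented for illustration -- NOT the study's data.
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / math.sqrt(var_a / len(a) + var_b / len(b))

ai_ratings = [5, 4, 6, 5, 5, 4, 6, 5, 4, 5]     # hypothetical ratings of AI advice
human_ratings = [5, 5, 4, 6, 4, 5, 5, 4, 5, 5]  # hypothetical ratings of human advice

t = welch_t(ai_ratings, human_ratings)
print(f"AI mean: {mean(ai_ratings):.2f}, human mean: {mean(human_ratings):.2f}")
print(f"Welch t = {t:.3f}")
# With samples this size, a |t| below roughly 2.1 would be read as
# "no significant difference" at the conventional 0.05 level.
```

A t statistic near zero, as in this toy example, corresponds to the "no significant difference" finding the researchers report; the real study, of course, used far larger samples.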

Findings: It’s a Wash

The research revealed that there was no significant difference in the perceived usefulness of the advice given by the AI and the advice given by the Times’ columnist.

This held true across all the groups included in the study, namely the general population, the MBA students, and the experts. If anything, the general population group displayed a slight (though statistically non-significant) preference for the AI advice.

Averaged across all groups, the perceived usefulness of the advice was rated slightly below 5 on the 7-point scale; the expert group's average was slightly above 5.

These findings challenge the notion that AI cannot match human capabilities in providing nuanced ethical advice, and suggest a promising future for AI in roles traditionally thought to require human empathy and understanding.

The Future of Ethical Decision-Making

The ability of AI to provide ethical advice that is as useful as advice from human experts could democratize access to ethical guidance. People and organizations may turn to AI for immediate, accessible, and affordable advice on complex ethical dilemmas, broadening the reach of ethical decision-making support.

Likewise, AI could serve as a valuable tool in enhancing human decision-making processes by offering diverse perspectives, reducing biases, and presenting reasoned arguments based on a vast database of ethical considerations and precedents.

The Future of Ethicists?

AI ethicists, rather than replacing human ethicists, might soon complement them by handling more straightforward ethical queries, or by providing initial assessments that human experts can then further refine. This partnership could increase the efficiency and scalability of ethical advisory services.

AI ethicists might also play a critical role in developing and refining the ethical frameworks that guide AI behavior, ensuring these systems operate within accepted moral bounds and reflect societal values.

The presence of AI in ethical decision-making challenges traditional notions of expertise and authority in the field. Human ethicists might need to adapt by focusing on more complex, nuanced cases, or by integrating AI tools into their practice.

Establishing trust in AI’s ethical guidance will be crucial. While this study indicates that AI can provide valuable advice, ongoing scrutiny of AI’s ethical frameworks and decision-making processes is necessary to maintain and enhance their reliability and trustworthiness.

Ethical Considerations of AI Itself

The development and deployment of AI ethicists must carefully consider potential biases in AI systems and ensure transparency in how AI makes ethical decisions. Ongoing research and oversight are essential to address these challenges.

The introduction of AI into ethical decision-making raises questions about responsibility and accountability for the advice given. Clarifying these aspects will be crucial as AI becomes more integrated into ethical advisory roles.

Conclusion

This study illuminates the potential of AI to serve not only as a technological tool but also as a philosophical advisor, capable of guiding humanity through complex moral landscapes.

It also challenges preconceived notions about the limitations of artificial intelligence, suggesting that AI can indeed contribute meaningfully to discussions traditionally dominated by human insight.

The journey of integrating AI into the realm of ethics is just beginning. This research not only opens new avenues for exploration, but also invites a broader conversation about the role of AI in shaping a just, ethical, and inclusive society.