Philosophers Endorse New Test to Guide Moral Decisions in Driverless Cars
- ritambhara516
- Jun 23
- 2 min read

Photo credit: Samuele Errico Piccarini.
Researchers have validated a method for examining how individuals make moral choices while driving, with the aim of using the resulting data to help train the artificial intelligence systems in self-driving vehicles. To ensure the validity of their approach, they tested it on the most rigorous evaluators they could find: philosophers.
“Most people don’t intend to cause accidents or harm others on the road,” explains Veljko Dubljević, corresponding author of the study and a professor in North Carolina State University’s Science, Technology & Society program. “Crashes often result from seemingly minor decisions, like slightly speeding or not fully stopping at a stop sign. We wanted to understand how people make these decisions—and what makes them moral or not while driving.”
Dubljević emphasizes the need for measurable data to train AI in making ethical driving choices. “Once we had a method to gather this data, the next step was to prove its reliability. Philosophers, being particularly meticulous when it comes to moral reasoning, were the ideal group to help us assess its validity.”
The researchers’ method is grounded in the Agent-Deed-Consequence (ADC) model, which suggests that moral judgments are shaped by three key factors: the agent (the person’s intent or character), the deed (the action taken), and the consequence (the resulting outcome).
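The paper does not describe a computational formalization, but a minimal sketch can make the structure of the ADC model concrete. The snippet below, in Python, shows how a single traffic scenario might be rated on the three factors and combined into one moral-acceptability score; the field names, the rating scale, and the weighted-sum aggregation are illustrative assumptions, not the researchers' method.

```python
from dataclasses import dataclass

@dataclass
class TrafficVignette:
    """One driving scenario rated on the three ADC factors.

    Each factor is a rating from -1.0 (clearly negative) to +1.0
    (clearly positive); the scale and field names are illustrative,
    not taken from the published study.
    """
    agent: float        # intent/character of the driver (e.g. reckless vs. caring)
    deed: float         # the action itself (e.g. rolling a stop sign vs. stopping fully)
    consequence: float  # the outcome (e.g. near-miss vs. no harm)

def moral_acceptability(v: TrafficVignette,
                        weights: tuple[float, float, float] = (1.0, 1.0, 1.0)) -> float:
    """Combine the three ADC components into a single score.

    A weighted sum is only a placeholder for whatever aggregation
    a trained model would learn from participants' judgments.
    """
    wa, wd, wc = weights
    total = wa * v.agent + wd * v.deed + wc * v.consequence
    return total / (wa + wd + wc)  # normalize back to the [-1, 1] range

# Example: well-intentioned driver rolls through a stop sign, nothing bad happens.
vignette = TrafficVignette(agent=0.8, deed=-0.4, consequence=0.2)
print(f"moral acceptability: {moral_acceptability(vignette):+.2f}")
```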
To evaluate how people perceive the morality of driving decisions, the team presented participants with a range of traffic scenarios. Participants were then asked a series of questions to assess the moral acceptability of the drivers’ actions and various details within each situation.
For this validation study, the team worked with 274 participants who held advanced degrees in philosophy. These individuals reviewed the driving scenarios and evaluated the ethical nature of the drivers’ decisions. The researchers also used an established tool to determine the ethical perspectives of each participant.
“Philosophers often align with different moral theories,” explains Dubljević. “For instance, utilitarians focus on outcomes, while deontologists emphasize rule-following. Since these frameworks interpret morality in distinct ways, we expected the assessments of what counted as moral behavior to differ according to each participant’s philosophical approach.”
“What’s particularly exciting is that our results were consistent across all philosophical perspectives,” says Dubljević. “Whether participants identified as utilitarians, deontologists, or virtue ethicists, they all arrived at similar moral judgments in driving-related scenarios.
“This consistency allows us to generalize the findings,” he adds. “That makes the technique highly promising for training artificial intelligence systems — it marks a meaningful advancement in the field.”
Dubljević notes that the next phase involves expanding the study to include more diverse populations and languages. The aim is to evaluate how widely the approach can be applied, both within Western societies and globally.
The research paper, titled “Morality on the Road: The ADC Model in Low-Stakes Traffic Vignettes,” appears in Frontiers in Psychology. The lead author is Michael Pflanzer, a Ph.D. student at NC State. Co-authors include Dario Cecchini, a postdoctoral researcher at NC State, and Sam Cacace, an assistant professor of psychology at the University of North Carolina at Charlotte.