In March 2018, a self-driving vehicle operated by Uber struck and killed a pedestrian crossing a street at night. The onboard sensors registered the woman’s presence but failed to correctly identify her as a pedestrian, and no emergency braking was triggered. It was the first known fatal crash between a self-driving car and a pedestrian. The incident prompted international scrutiny of how self-driving systems make decisions in emergencies, and it reignited the ethical question of whether an autonomous vehicle should prioritize the safety of its passengers or that of pedestrians when a collision is unavoidable. “Should a self-driving car hit a pedestrian or crash into a wall?” This single question captures the ethical dilemma at the heart of autonomous vehicle design. Artificial intelligence now controls machines operating at highway speeds on real roads, with human lives at stake. The central question is whether a computer can make moral judgments, and whether it is reasonable to expect it to.

 

The Tesla Model Y can drive itself, but Tesla stresses that the driver must always stay attentive and responsible. /Photography by Hwang Ji-woo

 

Ethical decision-making in autonomous driving

   A self-driving car does far more than simply follow lane markings. Cameras, LiDAR, radar, and other sensors constantly scan the surroundings and send real-time data to an artificial intelligence system that manages acceleration, braking, and steering. These processes unfold in three stages: perception, decision, and control.
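
   To see how those three stages connect in software, consider the simplified Python sketch below. It is purely illustrative: the class names, the two-second braking rule, and the actuator values are hypothetical placeholders, not any manufacturer’s actual code.

    # Illustrative perception-decision-control loop (hypothetical, simplified).
    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        kind: str                # e.g. "pedestrian", "vehicle", "unknown"
        distance_m: float        # distance ahead of the vehicle, in meters
        closing_speed_ms: float  # how fast the gap is shrinking, in m/s

    def perceive(camera, lidar, radar) -> list[Obstacle]:
        """Perception: fuse raw sensor data into tracked obstacles.
        A real stack would run neural detectors and sensor fusion here."""
        return []  # placeholder

    def decide(obstacles: list[Obstacle]) -> str:
        """Decision: choose a maneuver from the fused world model."""
        for ob in obstacles:
            # Hypothetical rule: brake hard if time-to-collision is under 2 s.
            if ob.closing_speed_ms > 0 and ob.distance_m / ob.closing_speed_ms < 2.0:
                return "emergency_brake"
        return "keep_lane"

    def control(maneuver: str) -> None:
        """Control: translate the chosen maneuver into actuator commands."""
        throttle, brake = {"emergency_brake": (0.0, 1.0), "keep_lane": (0.3, 0.0)}[maneuver]
        # The drive-by-wire interface would apply these values here.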

   Ethical challenges arise during the decision phase. For example, a vehicle might need to choose between hitting a jaywalker or crashing into a concrete barrier. While a human driver reacts instinctively, shaped by fear and emotion, an autonomous vehicle relies on pre-programmed algorithms and probability-based calculations. The 2018 Moral Machine study by MIT revealed how significantly moral judgments vary among people. Analyzing over 40 million responses from participants in more than 200 countries and territories, the project showed that cultural background and individual values influence decisions in identical ethical dilemmas. This absence of a universal standard raises a deeper question about the criteria AI should use to evaluate the value of life. Kai-Fu Lee, an AI expert and venture capitalist, argues that ethical decision-making in self-driving cars transcends technical issues and requires societal consensus. He stresses the importance of transparent public discussions on who should define AI values.

   Corporate approaches differ widely. Tesla asserts that human drivers bear ultimate responsibility. According to the company's safety guidelines, drivers must always remain attentive, as they hold final control over the vehicle. In contrast, Waymo prioritizes preventing dangerous situations altogether, employing predictive algorithms designed to identify risks early and automatically avoid potential collisions. Global organizations have also issued guidance. The IEEE’s 2019 Ethically Aligned Design report recommends human-centered design, accountability, and transparency in AI systems. It advises clear human oversight for life-critical applications like autonomous vehicles. Likewise, UNESCO’s 2021 AI ethics framework emphasizes the protection of human rights, prevention of discrimination, and rigorous safety standards, calling for strict ethical guidelines in systems that might cause harm.

   Although no definitive framework for moral AI currently exists, experts agree that preventing accidents is more important than programming ethical decisions for crash scenarios. The ultimate goal is not to code decisions about who lives or dies but to design systems that avoid such dilemmas altogether.

 

The Renault XM3 in motion uses a combination of sensors such as cameras and radar to support its autonomous driving capabilities. /Photography by Hwang Ji-woo

 

Technological measures for ethical risk prevention

   Autonomous vehicle developers are focusing on improving the car’s perception abilities and reaction speeds. Advanced sensors such as LiDAR, radar, and high-resolution cameras work together to provide a comprehensive real-time understanding of the vehicle’s surroundings. These sensors detect potential hazards, including pedestrians, cyclists, and unexpected obstacles, and relay the information to the artificial intelligence system for early detection and prompt response. Experts emphasize that these sophisticated sensors, along with emergency braking and evasive maneuver systems, are essential in reducing the ethical dilemmas self-driving cars face. Regulatory bodies like the U.S. National Highway Traffic Safety Administration and the European New Car Assessment Programme (Euro NCAP) recommend that autonomous vehicle safety features be designed to minimize ethical conflict situations.
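
   The value of combining several sensor types can be shown with a short, hypothetical Python sketch: a hazard is confirmed only when multiple independent sensors agree, which cuts down false alarms from any single modality. The detection format and the two-sensor threshold below are assumptions made for illustration, not a real fusion algorithm.

    # Hypothetical multi-sensor agreement check for hazard detection.
    def fuse_detections(camera_hits, lidar_hits, radar_hits, min_sensors=2):
        """Flag an object as a hazard only when at least `min_sensors`
        independent modalities report it."""
        votes = {}
        for source in (camera_hits, lidar_hits, radar_hits):
            for object_id in source:
                votes[object_id] = votes.get(object_id, 0) + 1
        return [obj for obj, n in votes.items() if n >= min_sensors]

    # Example: camera and radar both see pedestrian "p1"; LiDAR misses it.
    print(fuse_detections({"p1"}, set(), {"p1", "v3"}))  # ['p1']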

   Fast reaction systems act immediately on the perceived information. Emergency braking and evasive maneuvers occur within milliseconds, far faster than human reflexes. Powerful processors such as NVIDIA’s DRIVE chips rapidly analyze massive amounts of data to execute complex driving commands. Beyond the vehicle itself, communication technologies also play a crucial role in ensuring safety. Vehicle-to-Everything (V2X) networks connect cars with traffic signals, road sensors, and other vehicles to rapidly share information about hazards or traffic changes that a vehicle’s own sensors might not detect. For example, a smart crosswalk can identify a pedestrian hidden from view and send a warning to an approaching autonomous car, and if a car ahead suddenly swerves, nearby vehicles are alerted instantly so they can avoid a collision.
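
   How a vehicle might act on such a broadcast can be sketched in a few lines of Python. The message format below is invented for illustration; production V2X stacks use standardized messages (such as SAE J2735 Basic Safety Messages) over DSRC or cellular C-V2X radios rather than ad-hoc JSON.

    import json

    # Hypothetical handler for an incoming V2X hazard broadcast.
    def handle_v2x_message(raw: bytes, own_position_m: float) -> str:
        msg = json.loads(raw)
        if msg["type"] == "pedestrian_warning":
            # A smart crosswalk reports a pedestrian the car's own sensors miss.
            if abs(msg["position_m"] - own_position_m) < 50:
                return "slow_down"
        if msg["type"] == "sudden_swerve":
            # A vehicle ahead broadcasts an evasive maneuver.
            return "increase_gap"
        return "no_action"

    alert = json.dumps({"type": "pedestrian_warning", "position_m": 120.0}).encode()
    print(handle_v2x_message(alert, own_position_m=100.0))  # slow_down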

   Industry experts agree that these technological safety measures are vital for reducing ethical dilemmas. Instead of forcing AI to make moral decisions during unavoidable accidents, the priority is designing systems that prevent such situations from occurring. Specialists stress that AI should not be asked to decide whom to save; it should instead focus on detecting and responding swiftly to unpredictable road conditions and hazards. Utilizing advanced sensors and communication technology to prevent potential accidents, they explain, is the key to achieving both safety and ethical responsibility.

   Meanwhile, as autonomous driving technology advances, regulators, manufacturers, and ethicists face the challenge of balancing innovation, safety, and societal values. Discussions about self-driving cars are expanding beyond technical issues to include trust, accountability, and defining acceptable risk levels for society. Ultimately, the most ethical autonomous vehicle will be one that avoids forcing impossible moral choices and instead is equipped with systems designed to detect risks early and evade danger, protecting all lives on the road.

 

   Engineering ethics is not merely about choosing the lesser evil. It is about designing systems that are so safe, accurate, and well-connected that ethical dilemmas rarely, if ever, occur. As autonomous vehicle technology progresses, the focus is shifting from whether machines can make moral decisions to how such decisions can be avoided altogether. Engineers are tasked with developing systems reliable enough to prevent classic ethical challenges like the trolley problem from arising. This requires ongoing improvements in sensor precision, faster data processing, and strong communication networks between vehicles and infrastructure. Technical advances alone are not enough. A broad and inclusive discussion involving engineers, ethicists, regulators, and the public is essential to establish clear ethical standards.

   True engineering ethics emphasizes building technology robust enough to protect everyone’s safety from the outset, rather than making artificial intelligence inherently moral. Only through collaboration can autonomous vehicle systems achieve not only technical excellence but also social acceptance and ethical responsibility. The real promise of autonomous vehicles lies in their ability to minimize human error and unpredictable behavior, significantly reducing accidents. Therefore, their ethical success will be judged not by how well they handle impossible moral choices, but by how effectively they prevent those situations from ever arising. A future that balances innovation with public trust envisions self-driving cars that avoid crashes entirely, rather than just making the right decisions during emergencies. Such progress will contribute to safer roads and strengthen society’s confidence in new technologies.

 

Copyright © Dongguk University Media Center. Unauthorized reproduction and redistribution prohibited.