Questioning the decision-making ability of driverless cars, experts have proposed model regulations for driverless cars to ensure the safety of passengers.
Artificial intelligence (AI) experts David Danks and Alex John London from Carnegie Mellon University in the US argued that current safety regulations do not account for autonomous systems and are ill-equipped to ensure that these systems would perform safely and reliably.
"Currently, we ensure safety on the roads by regulating the performance of the various mechanical systems of vehicles and by licensing drivers. When cars drive themselves we have no comparable system for evaluating the safety and reliability of their autonomous driving systems," said London.
In an opinion piece that appeared in the Institute of Electrical and Electronics Engineers' (IEEE) Intelligent Systems, Danks and London suggested creating a dynamic system that resembles the regulatory and approval process for drugs and medical devices, including a robust system for post-approval monitoring.
"Self-driving cars and autonomous systems are rapidly spreading so we, as a society, need to find new ways to monitor and guide the development and implementation of these autonomous systems," added Danks.
The proposed phased process would begin with "pre-clinical trials," or testing in simulated environments, such as self-driving cars navigating varied landscapes and climates.
This would provide information about how an autonomous system makes decisions in a wide range of contexts, so that we can understand how it might act in new situations in the future, the duo said.
When a vehicle passes this test, the system would move on to "in-human" studies through a limited introduction into real world environments with trained human "co-pilots."
Successful trials in these targeted environments would then lead to monitored, permit-based testing and further easing of restrictions as performance goals were met, the researchers noted.