MEPs have called for EU rules to be put in place to govern the fast-evolving field of robotics – ensuring that as AI develops, humans remain in control.
AI is developing at a startling rate: robot employees capable of self-learning, and even self-driving cars, are not far from becoming a reality. So fast, in fact, that some of the most eminent voices in the tech world have voiced their concerns, including Elon Musk, Bill Gates and Steve Wozniak. The most apocalyptic warning came from Professor Stephen Hawking, who said in a speech in Cambridge marking the opening of the Centre for the Future of Intelligence that:
“The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.”
Other speakers at the event voiced concern at some of the possible applications. In Japan, for example, there is enthusiasm for using robots to care for the elderly, which could have the effect of ‘dehumanising’ what should be a caring profession.
MEPs share these concerns. In a meeting of the Legal Affairs Committee earlier this week, Rapporteur Mady Delvaux (S&D, LU) said:
“A growing number of areas of our daily lives are increasingly affected by robotics. In order to address this reality and to ensure that robots are and will remain in the service of humans, we urgently need to create a robust European legal framework”.
Her report, approved by 17 votes to 2, with 2 abstentions, looks at robotics-related issues such as liability, safety and changes in the labour market. The report argues that the EU needs to take the lead on regulatory standards, so as not to be forced to follow those set by third countries.
MEPs agreed that EU-wide rules are needed to derive maximum benefit from the economic potential of robotics and artificial intelligence while guaranteeing a standard level of safety and security. They also urged the Commission to consider creating a European agency for robotics and artificial intelligence to supply public authorities with technical, ethical and regulatory expertise.
They also proposed a voluntary ethical code of conduct to establish who would be accountable for the social, environmental and human health impacts of robotics, and to ensure that robots operate in accordance with legal, safety and ethical standards. One recommendation, for example, is that robot designers include “kill” switches so that robots can be turned off in emergencies.
There are a number of issues with the ‘rise of the robots’:
MEPs also expressed concern that the development of robotics could result in big societal changes, including the creation and loss of jobs in certain fields. The report urges the Commission to follow these trends closely, including looking at new employment models and the viability of the current tax and social security system in an age of robotics.
Only this week, a study found that people were concerned that their jobs could be replaced by AI within five years. Self-driving cars could replace taxis, and drones are already being used to replace couriers in certain situations. Would AI make a better employee anyway? The problem with self-learning robots is best illustrated by Microsoft’s self-learning chatbot Tay, which, when exposed to Twitter, became racist, homophobic and antisemitic within a day.
Liability for Technology
One of the biggest concerns for MEPs was liability. This is a particular issue for self-driving cars, and MEPs called for an obligatory insurance scheme and a fund to ensure victims are fully compensated in accidents caused by driverless cars.
In the long term, MEPs suggest, the possibility of creating a specific legal status of “electronic persons” for the most sophisticated autonomous robots should also be considered, so as to clarify responsibility in cases of damage.
The challenges a Human-AI working environment could bring
Dr Sarah Fletcher of the Chartered Institute of Ergonomics and Human Factors believes technology needs to be developed to work with human characteristics, the main one being human unpredictability:
“In any given human-machine working situation, human factors need to be taken into consideration in order to ensure the design of the workflow system is effective and that humans can integrate with automated systems with little or no errors.
“Humans often bear the brunt of responsibility when errors arise in robotic environments. Therefore, the application of Human Factors in design is vital to ensure we are creating robots that can effectively work and interact with people. For example, humans come in all shapes and sizes and, unlike machines, bring levels of unpredictability in their responses and behaviour.
“Human variability has been a traditional problem for manufacturing system design and performance prediction, but the progressive trend towards more flexible and adaptable workforces means that differences between operators and their capabilities are being seen as more valuable in systems which require more frequent product and skills changes. So, as workforces become more mobile and diverse, Human Factors is needed to ensure inclusive design of robots and intelligent systems to improve their capability for interpreting and responding to human operator requirements.”