Robotics researchers have a duty to prevent autonomous weapons

Robotics is rapidly being transformed by advances in artificial intelligence.

And the benefits are widespread. We are seeing safer vehicles with the ability to automatically brake in an emergency, robotic arms transforming factory lines that were once offshored and new robots that can do everything from shopping for groceries to delivering prescription drugs to people who have trouble doing it themselves.

As with all technology, the range of future uses for our research is difficult to imagine. It’s even more challenging to forecast given how quickly this field is changing. Take, for example, the ability of a computer to identify objects in an image: in 2010, the state of the art was successful only about half of the time, and it was stuck there for years.

Today, though, the best published algorithms reach 86% accuracy. That advance alone allows autonomous robots to understand what they are seeing through their camera lenses. It also shows the rapid pace of progress over the past decade, driven by developments in AI.

Removing humans from the process fundamentally alters the assumptions that underpin decisions about privacy and security. For example, cameras on public streets may have raised privacy concerns 15 or 20 years ago, but adding accurate facial recognition technology dramatically alters those privacy implications.

When developing machines that can make their own decisions, typically called autonomous systems, the ethical questions that arise are arguably more concerning than those in object recognition. AI-enhanced autonomy is developing so rapidly that capabilities once limited to highly engineered systems are now available to anyone with a household toolbox and some computer experience.

People with no background in computer science can learn some of the most state-of-the-art artificial intelligence tools, and robots are more than willing to let you run your newly acquired machine learning techniques on them. There are online forums filled with people eager to help anyone learn how to do this.

Even with today's techniques, drones are already a threat. They have been caught dropping explosives on U.S. troops, shutting down airports and being used in an assassination attempt on Venezuelan leader Nicolas Maduro. The autonomous systems being developed right now could make staging such attacks easier and more devastating.

The researchers, companies and developers who write the papers and build the software and devices generally aren’t doing it to create weapons. However, they might inadvertently enable others, with minimal expertise, to create such weapons.

Research review boards would be a first step toward self-regulation and could flag projects that could be weaponized. A large number of researchers would fall under these boards’ purview: within the autonomous robotics research community, nearly every presenter at technical conferences is a member of an institution.

Many of my colleagues and I are excited to develop the next generation of autonomous systems. I feel that the potential for good is too promising to ignore.

But I am also concerned about the risks that new technologies pose, especially if they are exploited by malicious people. Yet with some careful organization and informed conversations today, I believe we can work toward achieving those benefits while limiting the potential for harm.


This article was originally published in The Conversation.