Here’s how German engineering student Britt Liv Ulrike Michelsen turned a Nerf gun into a robot. First, she connected the toy’s electric motor to a microcontroller and a laptop, and upgraded the firing spring so it would shoot faster and more accurately. Then she mounted the toy on a servo-driven stand, which allows it to swivel back and forth. Finally, she added a webcam and the open-source program Project Sentry Gun to make the device automatically track and attack anything that moves.
End result: an automated, foam-firing robot sentry gun.
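Project Sentry Gun itself is written for Processing and Arduino, but the core trick is simple: difference successive webcam frames to find what moved, then map the target’s horizontal position to a servo pan angle. Here is a rough, hypothetical Python sketch of that idea (the threshold and servo range are assumed values, not the project’s actual code), run against simulated frames rather than a live camera:

```python
import numpy as np

MOTION_THRESHOLD = 25                  # per-pixel change counted as "motion" (assumed)
SERVO_MIN_DEG, SERVO_MAX_DEG = 0, 180  # typical hobby-servo travel

def motion_centroid(prev_frame: np.ndarray, frame: np.ndarray):
    """Return the (x, y) centroid of changed pixels, or None if the scene is still."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > MOTION_THRESHOLD)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def pan_angle(x: float, frame_width: int) -> float:
    """Map a horizontal pixel position to a servo pan angle."""
    return SERVO_MIN_DEG + (x / (frame_width - 1)) * (SERVO_MAX_DEG - SERVO_MIN_DEG)

# Simulated 8-bit grayscale frames: a bright "target" appears on the right side.
prev = np.zeros((48, 64), dtype=np.uint8)
cur = prev.copy()
cur[20:30, 50:60] = 200  # the moving object

target = motion_centroid(prev, cur)
if target is not None:
    angle = pan_angle(target[0], cur.shape[1])  # turret swings well past centre
```

In the real build, the computed angle would be sent to the Arduino driving the servo stand; here it just demonstrates the frame-differencing step.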
Michelsen’s toy itself is harmless, but the idea behind it does raise some provocative questions. How easy is it to automate a real gun? What are the legal and ethical issues? (Spoiler: there are many.) And could we soon see automated weapons in the home?
For help, we asked Patrick Lin, a professor at California Polytechnic State University and a leading researcher on the ethics of robots, drones and unmanned machines. We also wanted to know what the Pentagon is doing with its own armed ground ‘bots.
“What seems to be newsworthy is that a Nerf gun is involved, and some people may be unnerved by the juxtaposition of childhood innocence and visions of Terminator-style AI,” Lin says. “But a Nerf gun isn’t usually lethal, though it could harm targets if they were hit in the eye, for instance. So a non-lethal home security robot could be permitted by law, depending on how harmful it is.”
A pepper-spraying home defense ‘bot might go too far. A lethal autonomous sentry gun is out of the question, not least because it can’t distinguish between an intruder and an innocent child. “Just like you can’t rig a shotgun to shoot whatever opens your door,” he adds. “The big problem here is that it would be an indiscriminate attack, even in states with ‘Stand Your Ground’ or ‘Castle’ laws that permit lethal defense of the home.”
The capability exists — though only for the military. Years ago, Samsung defense subsidiary Samsung Techwin developed a robotic machine gun, the SGR-A1, which is also equipped with infrared cameras. The ‘bot’s destination was reportedly the South Korean side of the Korean demilitarized zone, but these were never deployed or switched into fully autonomous mode.
The Pentagon has its own projects. A weaponized robot called SWORDS was deployed to Iraq — under Army control — until funding was yanked. A follow-up program by the Marines, called MAARS, can roll around with a swappable weapons system, including lethal bullets as well as non-lethal grenades and laser dazzlers. Its developer, defense contractor QinetiQ, describes the robot as “taking its place on the frontlines to keep warfighters at a safe distance from enemy fire while effectively executing security missions.” MAARS hasn’t been deployed and has not been made autonomous. The Pentagon is also very skeptical about taking its own personnel out of the decision-making process when it comes to lethal robots and drones.
“The important point, though, is that the capability already exists today,” Lin says. “If it were to be deployed, that would be an important and, to many, regrettable milestone in future warfare.”
But should we be worried about someone making a killbot at home and then automating it to do something terrible? Sort of. Maybe. It’s possible someone could do it. “It would seem so,” Lin says. “But there are a lot of terrible things people can make that thankfully don’t appear very much, such as privacy-infringing or bomb-carrying drones. We have laws that address most of those contingencies, and that seems to be enough to deter most rational people inclined to try them out.”
One thing to watch out for, he notes, is an engineer or activist developing a lethal sentry gun just to provoke a debate, create a backlash or force regulation. We’ve seen that happen with 3-D printed guns. But even criminals getting ahold of those is a big stretch. Lin says, “Luckily, criminals tend to be dumb — not so much the Lex Luthor type.”