The coupling of artificial intelligence and weapons is well under way. Any military strategist will tell you that a technological edge can be decisive on the battlefield: at Agincourt it was the longbow, and in World War II the nuclear weapon. Deciding not to take advantage of new technology might just be the last decision you make.

We have had cruise missiles capable of autonomous flight and terrain following for decades. But with the advent of GPS and low-cost electronics and sensors, fully autonomous flight systems can now be built for under $1,000. Open-source software and the wide availability of hardware have opened the field to many UAV startups.
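How low the barrier has become is easy to illustrate. The sketch below uses the open-source DroneKit-Python library, which speaks the MAVLink protocol used by hobbyist autopilots such as ArduPilot; the connection string, coordinates, and altitude are placeholders, not a recipe for any particular aircraft.

```python
# Minimal autonomous waypoint flight using open-source DroneKit-Python.
# The endpoint and coordinates are placeholders (here, a local SITL
# software simulator), for illustration only.
import time
from dronekit import connect, VehicleMode, LocationGlobalRelative

vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)

vehicle.mode = VehicleMode("GUIDED")   # accept direct waypoint commands
vehicle.armed = True
while not vehicle.armed:               # wait for the autopilot to arm
    time.sleep(1)

vehicle.simple_takeoff(30)             # climb to 30 m
time.sleep(20)                         # crude wait for altitude

# Fly autonomously to a GPS waypoint: latitude, longitude, altitude (m).
vehicle.simple_goto(LocationGlobalRelative(-35.363, 149.165, 30))
```

Everything above runs on a few hundred dollars of consumer hardware, or entirely in a free software-in-the-loop simulator.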

In January these technologies were used to launch an attack against Russian military bases in Syria. Ten small drones rigged with explosives attacked Hmeimim air base, while three more attacked a Russian naval facility in the nearby city of Tartus. Several of the drones were shot down by anti-aircraft missiles, while others were captured. The drones were of simple plywood construction and used consumer-grade parts.

Early last year the US Department of Defense tested UAV swarms. Three F/A-18 Super Hornets released 103 small autonomous UAVs over a target area. Once released, the drones established a local communications network to coordinate the operations of the swarm. In this exercise they were not armed; their mission was simply to detect and track targets. It is impossible to control so many individual UAVs directly, so they communicate and coordinate among themselves autonomously, with controllers providing only high-level instructions. While the drones in this test were unarmed, they could easily be armed to engage an enemy.
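The Pentagon has not published the swarm’s control laws, so the following is only a hypothetical sketch of the kind of local rules from which such coordination is typically built: each drone steers by what its nearby neighbours are doing plus a single shared goal, with no human controlling any individual aircraft. All names and weights here are invented for illustration.

```python
# Hypothetical decentralized swarm update (boids-style local rules).
# Each drone senses only nearby neighbours plus one broadcast goal.
import numpy as np

def swarm_step(pos, vel, goal, dt=0.1, radius=50.0):
    """Advance N drones one timestep. pos and vel are (N, 2) arrays."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (dist > 0) & (dist < radius)           # local neighbourhood
        cohesion = pos[nbrs].mean(axis=0) - pos[i] if nbrs.any() else 0.0
        align = vel[nbrs].mean(axis=0) - vel[i] if nbrs.any() else 0.0
        seek = goal - pos[i]                          # high-level instruction
        new_vel[i] += dt * (0.05 * cohesion + 0.1 * align + 0.02 * seek)
    return pos + dt * new_vel, new_vel

# 100 drones converge on a target area from random start positions.
rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, 500, (100, 2)), rng.normal(0, 1, (100, 2))
for _ in range(1000):
    pos, vel = swarm_step(pos, vel, goal=np.array([1000.0, 1000.0]))
```

The point is architectural: the operator supplies only `goal`; everything else emerges from drone-to-drone communication.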

The US Air Force now orders more drones than traditional manned aircraft, yet to date a human pilot has always been in control. How long will this remain the case?

Nick Ernest, a doctoral graduate of the University of Cincinnati, has developed an artificially intelligent pilot capable of beating the best human pilots in air-to-air combat. It took on retired U.S. Air Force Colonel Gene Lee, shooting him down in every simulated engagement. Lee called it “the most aggressive, responsive, dynamic and credible A.I. I’ve seen to date.” Dubbed ‘ALPHA’, the system radically outperforms both human pilots and other artificially intelligent systems.

Lee was surprised at how aware and reactive it was, commenting: “It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed.” While the technology was demonstrated in a simulation, it could easily be deployed in real combat aircraft.

The US Department of Defense is taking artificial intelligence seriously. Aerospace engineer and undersecretary of defense for research and engineering Mike Griffin recently asked: “Certainly, human-directed weapons systems can deal with one or two or a few drones if they see them coming, but can they deal with 103? If they can deal with 103, which I doubt, can they deal with 1,000?”

He called for more serious work by the Pentagon, saying, “There might be an artificial intelligence arms race, but we are not yet in it.” America’s adversaries, he said, “understand very well the possible utility of machine learning. I think it’s time we did as well.”

Google has been providing assistance to the Pentagon’s Project Maven, an effort to use artificial intelligence to analyze video feeds from UAVs and identify targets of interest. It is impossible for humans to view and process all the incoming data in real time, so the Pentagon is funding the development of systems capable of real-time visual processing, much as Google and Facebook currently recognize faces in photos.
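Maven’s actual models are not public, but the underlying capability is now commodity technology. As an illustration only, the sketch below runs an off-the-shelf pretrained detector from the open-source torchvision library over a single video frame; the 0.8 confidence threshold is an arbitrary choice for the example.

```python
# Illustration only: runs a pretrained, general-purpose object detector
# on one video frame, the commodity building block behind the kind of
# system described above.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(frame, threshold=0.8):
    """frame: an H x W x 3 uint8 image array taken from a video stream."""
    with torch.no_grad():
        out = model([to_tensor(frame)])[0]
    keep = out["scores"] > threshold              # drop low-confidence hits
    return out["boxes"][keep], out["labels"][keep]
```

Run frame by frame over a decoded video stream, this is the kind of “view everything, flag the interesting parts” workload the Pentagon wants automated.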

However, Google employees are protesting the company’s involvement. For a company founded on the motto ‘don’t be evil’, involvement in weapons systems is not a good look. These employees join thousands of AI researchers who have called for an international ban on autonomous weapon systems.

The tension between the need to stay ahead technologically and the need to avoid an artificial intelligence arms race, one that could create existential threats similar to those of nuclear weapons, is not an easy problem to solve. Unlike nuclear weapons, autonomous weapons are cheap to build and require no technology unavailable to the average citizen. And since the science of autonomous weapons can be advanced in software simulation, without building any hardware, placing meaningful restrictions or bans around its development may be futile, no matter how well meaning. While Russia has rejected an autonomous weapons ban, China has recently called for one.