Artificial Intelligence Will Forever Change The Battlefield


Most have by now heard that Google is facing an identity crisis over its links to the American military. To crudely summarise, Google chose not to renew its “Project Maven” contract to provide artificial intelligence (AI) capabilities to the U.S. Department of Defense after employee dissent reached a boiling point.

This is a problem for Google, as the “Don’t Be Evil” company is currently in an arm-wrestling match with Amazon and Microsoft for a variety of cloud and AI government contracts worth around $10bn. Rejecting such work would deprive Google of a potentially huge business; indeed, Amazon recently advertised its image recognition software Rekognition for defence purposes, and Microsoft has touted the fact that its cloud technology is currently used to handle classified information within every branch of the American military. Nevertheless, the nature of the company’s culture means that proceeding with big defence contracts could drive AI experts away from Google.

Though Project Maven is reportedly a “harmless”, non-offensive, non-weaponised, non-lethal tool used to identify vehicles in order to improve the targeting of drone strikes, its implementation raises some serious questions about the battlefield moving to data centres and the difficulty of separating civilian technologies from the actual business of war. Try as they may to forget it, the staff at Google know that the images analysed by their AI would be used for counter-insurgency and counter-terrorism strikes across the world, operations that are seldom victim-free.

Those worries echo those of the leaders of about 100 of the world’s top artificial intelligence and robotics companies (including potential James Bond villain Elon Musk), who last summer wrote an open letter to the UN warning that autonomous weapon systems able to identify targets and fire without a human operator could be wildly misused, even with the best intentions. And though Terminators, like full lethal autonomy, are far from reality, some worrying trends are emerging in the field.

This is, of course, no wonder, as the creation of self-directing weapons constitutes the third weaponry revolution, after gunpowder and the atomic bomb. For this reason, the US is in the midst of an AI arms race: military experts say China and Europe are already investing heavily in AI for defence. Russia, meanwhile, is also pursuing offensively-minded, weaponised AI; Putin was quoted last year as saying “artificial intelligence is the future not only of Russia but of all of mankind … Whoever becomes the leader in this sphere will become the ruler of the world”. Beyond its inflammatory nature, the statement is (partly) true: the comparative military, economic, and technological advantages that the first state to develop such AI would hold over others are almost too vast to predict.

The stakes are as high as they can get. In order to understand why, one needs to go back to the basics of military strategy.

The End Of MAD

Since the Cold War era, a foundation of military strategy has been the principle of Mutually Assured Destruction (MAD): any attacker that fails to fully destroy its target in a first strike will face retaliation and be destroyed in turn. Hence, countries have continued to seek first-strike capability to gain an edge, and may be tempted to use it if they see the balance of MAD begin to erode, on one side or the other.

This is where AI comes in: with mass surveillance and intelligent identification of patterns and potential targets, a first mover could strike virtually free of consequence. Belief in such a winner-takes-all scenario is what fuels arms races, which destabilise the world order in major ways: disadvantaged states with their backs against the wall are less likely to act rationally once MAD no longer protects them, and more likely to engage in what is called a pre-emptive war, one that aims to stop an adversary from gaining an edge such as a powerful military AI.

Autonomous Weapons don’t need to kill people to undermine stability and make catastrophic war more likely: an itchier trigger finger might just do.

Use It Or Lose It

Another real danger with any new powerful weapon is a use-it-or-lose-it mentality. Once an advantage has been gained, how does a nation convey that MAD is no longer relevant and that its demands ought to be taken seriously? History tells us what most nations would do if a weaponised AI were created: once the US developed the nuclear bomb, it eagerly used it, to test it, to make its billion-dollar investment seem worthwhile, and to show the Russians that it had a new weapon everyone ought to fear, changing the balance of power for the following decade. Human nature dictates that any country creating lethal AI will want to show it off, for political and strategic reasons.

The Dehumanisation of War

A common argument in favour of automated weapons claims that deploying robotic systems might be more attractive than “traditional warfare” because there would be fewer body bags coming home, since bots would be less prone to error, fatigue, or emotion than human combatants. That last part is a major issue. Emotions such as sadness and anger are a staple of humanity in both times of war and peace. Mercy is an inherent and vital part of war, and lethal autonomous weapon systems, incapable of it, threaten to dehumanise their victims. To quote Aleksandr Solzhenitsyn:

“the line separating good and evil passes not through states, nor between classes, nor between political parties either — but right through every human heart — and through all human hearts.”

Furthermore, developed countries may suffer fewer casualties, but what about the nations where superpowers wage their proxy wars? Most do not have the luxury of robotics industries of their own, and as such would continue to bear the human cost of war, as they have for the past century. Those countries, unable to retaliate on the battlefield, would be more likely to turn to international terrorism, as it would become the only way to hurt their adversaries.

Losing Control

Assuming new AI technology is a mix of machine learning, robotics and big-data analytics, it would produce systems and weapons with varying degrees of autonomy, from being able to work under constant human supervision to “thinking” for themselves. The most decisive factor on the battlefield of the future may then be the quality of each side’s algorithms, and combat may speed up so much that humans can no longer keep up.

Another related risk is the potential for an over-eager, under-experienced player to lose control of its military capabilities. The fuel for AI is data: feed a military bot the wrong data and it might identify all humans within range as potential targets, which highlights the need for rigorous and thorough testing.

“Dumb” Bots

The strong potential for losing control is why experts do not fear a smart AI; they fear a “dumb” one. Research into complex systems shows how behaviour can emerge that is far more unpredictable than the sum of individual actions. As such, it is unlikely the military will ever fully understand or control certain types of robots. Knowing that some may carry guns is a very good reason to demand more oversight on the matter.

The main problem the world faces today is not necessarily any of those outlined above, but the fact that the uninformed masses are framing the issue as Terminator versus Humanity, a science-fiction narrative that obscures the very real, very tough challenges that currently need to be addressed in the realms of law, policy and business. As Google showed with the publication of its vapid set of principles on the ethical use of artificial intelligence, companies cannot be trusted to always make the right choice. Society therefore needs a practical, ethical, and responsible framework, organised internationally by the relevant organisations and governments, to establish a set of accepted norms.

Automated defence systems can already make decisions based on an analysis of a threat and choose an appropriate response much faster than humans can. Yet, at the end of the day, the “off switch” to mitigate the harm that killer robots may cause is in each person, at every moment and in every decision, big and small.

This has always been the case.
