Autonomous weapons decide for themselves whom they attack. Some countries are pushing development in this field. But at what price? Peace researcher Niklas Schörnig knows what can go wrong.

This interview first appeared on chrismon.de.

chrismon: Germany is once again debating whether the Bundeswehr should arm its drones. As a peace researcher, do you think that makes sense?

Niklas Schörnig: The debate over armed drones for the Bundeswehr has been going on for more than a decade, and all the arguments have been exchanged. The question is whether the protection such remote-controlled weapon systems offer soldiers in the field is outweighed by the potential dangers. The fear is that politicians will opt for military action more quickly because ‘only’ drones are being sent and no casualties of their own need be feared. I am not a fan of armed drones, but given the Bundestag’s strict mandates, the potential risks are less pronounced in Germany than in other countries.

More than 30 countries now have armed drones or are acquiring them. Forgoing them would no longer send as strong a signal as it would have seven or eight years ago. I think the proponents will prevail this time. However, the federal government needs to clarify when the drones may be used and make clear that the U.S. practice of targeted killings is not merely rejected but considered a violation of international law.

Armed drones are often mentioned in the same breath as so-called autonomous weapons. Are they the same thing?

No, they are different systems. The armed drones currently under discussion are remotely controlled; humans still decide on the use of weapons. Today’s drones do have autonomous functions, for example they can fly routes on their own without anyone holding the control stick. But the crucial functions of target selection and engagement, as it is called in military jargon, remain with humans. An autonomous weapon would be a system that searches for targets, analyzes them and decides on the use of weapons without involving people. If such a weapon is directed against people, we speak of a lethal autonomous weapon system.

Critics fear that autonomous weapons could accidentally trigger wars or that killer robots will one day attack us.

Such weapons do not yet exist, with the exception of a few systems whose autonomy is debatable. For example, there are automatic weapons on warships or for the protection of military camps that can independently detect and engage incoming missiles. Autonomous weapons that are mobile and directed against people have not yet been fielded.

Niklas Schörnig: Born in 1972, he is a researcher in the International Security program area of the Hessian Peace and Conflict Research Foundation (HSFK). He conducts research on military robotics, the changing nature of war, targeted killings, arms control, and Australian foreign and security policy.

Many people think autonomous weapons already exist because we see similar technological advances in everyday life, such as autonomous driving, image recognition or decision aids when shopping online. Science fiction films also play a major role. But it is not that easy to combine these technologies safely and reliably in a single weapon. Many states are also reluctant to field systems when they do not know whether they comply with international law.

Does international law prevent states from acquiring autonomous weapon systems?

According to Article 36 of the First Additional Protocol to the Geneva Conventions, new weapons of war must meet certain criteria of international humanitarian law. They must not, for instance, cause unnecessary suffering. An autonomous weapon system would also have to be able to distinguish between combatants and civilians. Some computer scientists believe such weapons will never manage that.

Do autonomous weapons violate human dignity because they are, as it were, “merciless”?

Some experts argue along those lines. While a human soldier can show mercy, have a conscience and always ask whether there are non-lethal alternatives, the machine simply follows its algorithms. And who takes responsibility when a weapon kills on its own? The officer who deployed it? The programmer? The Secretary of Defense or the President? In the end, no one can be held responsible.

Niklas Schörnig’s research at the Hessian Peace and Conflict Research Foundation (HSFK) includes military robotics, the changing nature of war, targeted killings, and arms control. (Source: press photo HSFK)

Some colleagues therefore see a person’s dignity as massively violated when he or she is killed by a machine. You only have to think of the automated firing devices on the inner German border to understand the argument.

This argument usually does not convince soldiers. For them, the idea of lying badly wounded on the battlefield, gut-shot and slowly dying, is worse than being the target of a combat robot. And if you read accounts of historic battlefields, where dying men scream for their mothers, I don’t see much dignity there either, just suffering and misery. In this respect I am torn over whether autonomous weapons violate human dignity more than lawful killing in war already does.

Can machines make real decisions at all?

You can of course imagine a system with very complex if-then rules deciding what is a target and what is not. The system recognizes: there is a man on the battlefield in a uniform bearing the opponent’s insignia. He may be attacked. That would be a relatively clear decision that works through a checklist and then arrives at a result: target? Yes or no.

Philosophers can argue over whether that should be called a ‘real’ decision, but at least the system derives an action from the checklist. With sufficiently complex decision trees, some experts would already speak of artificial intelligence, others would not. But it is also conceivable that the ‘decision’ rests on various forms of machine learning. In the extreme case, a system is conceivable that learns for itself, from images, videos and other information, what it may attack.
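
To make the idea of such a rule checklist concrete, here is a minimal, purely illustrative sketch in Python; the attributes and rules are invented for this example and are not taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """An object detected on the battlefield (illustrative attributes only)."""
    is_person: bool
    wears_uniform: bool
    insignia: str          # e.g. "own", "opponent", "unknown"
    carries_weapon: bool

def checklist_decision(c: Contact) -> bool:
    """Work through a fixed list of if-then rules.

    Every rule is explicit, so the outcome can be traced and, if
    necessary, reconstructed after the fact.
    """
    if not c.is_person:
        return False
    if not c.wears_uniform or c.insignia != "opponent":
        return False
    if not c.carries_weapon:
        return False
    return True

# A uniformed, armed person with the opponent's insignia -> True
print(checklist_decision(Contact(True, True, "opponent", True)))
```

Whether one wants to call that a ‘real’ decision is the philosophical question raised above; the point here is only that every rule is fully explicit.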

What is the difference between a checklist and machine learning?

If I knew the decision tree in detail, I would know how the system responds: whom it attacks and whom it does not. Afterwards I can at least reconstruct how the system arrived at its decision. With machine learning, even the programmers often cannot say why a system categorizes something the way it does. The decision rests on statistics and factors that we humans do not have on our radar at all. It is troubling when we no longer know how a decision came about. With an autonomous weapon system it would be fatal if allies or civilians were suddenly attacked.
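
The contrast can be shown with a toy example, a sketch only, assuming scikit-learn is installed and using a tiny invented dataset: a decision tree can print out the rules it has learned, whereas a neural network trained on the same data offers no comparably readable trace.

```python
# Toy contrast between a transparent and an opaque classifier.
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

# Invented features: [wears_uniform, opponent_insignia, carries_weapon]
X = [[1, 1, 1], [1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 1, 1], [0, 0, 1]]
y = [1, 0, 0, 0, 0, 0]   # 1 = "target" in this toy example

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
# The tree's reasoning can be printed and audited rule by rule:
print(export_text(tree, feature_names=["uniform", "insignia", "weapon"]))

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000,
                    random_state=0).fit(X, y)
# The network may predict the same labels, but its "reasoning" sits in
# weight matrices that give no rule-by-rule explanation:
print(net.predict([[1, 1, 1]]), net.coefs_[0].shape)
```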

To what extent does the Bundeswehr use the technology?

The Bundeswehr has the “Mantis” anti-aircraft system, which could probably be switched to an autonomous mode. This is currently prohibited by the Bundeswehr’s rules of engagement. Another example: a modern Bundeswehr ship can observe around 400 flying objects simultaneously and assess whether they pose a threat. And then the system suddenly says: these three targets must be attacked now because they are dangerous! Many factors play a role in that assessment, and they cannot be examined in detail in a tense combat situation. If time is short, you have to rely on the system’s judgment. But that is still not an autonomous weapon system.

Because a person still has to approve the attack?

Yes. In the Bundeswehr a person must approve the use of weapons, but it would be theoretically conceivable and technically possible to omit this step. The question is how reliable and legally sound the system would have to be.

No pilot needed: crew members of the aircraft carrier USS George H.W. Bush watch an unmanned X-47B combat drone fly overhead. The aircraft can perform semi-autonomous operations. (Source: Mass Communication Specialist 2nd Class Timothy Walter / US Navy / Getty Images)

If this person has no additional information, how often will he really say, “This is too risky for me”? On what basis should he overrule a highly developed system?

There is a risk that people rely too heavily on the system. One example is the shooting down of an Iranian Airbus by an American warship in 1988 on the basis of a computer recommendation. It takes a lot of courage to trust your gut feeling and contradict the algorithms, especially in situations where hesitation can mean your own death.

But there are also good examples. In the early 1980s, a Soviet early-warning system mistakenly “recognized” a handful of incoming NATO missiles and raised the alarm. The question then went to the duty officer: would he pass it on and trigger a massive counterattack? But the officer had doubts. He reasoned that if NATO were really attacking, it would not do so with just a handful of missiles; that makes no sense strategically. The system, however, was not programmed to weigh strategic considerations. It was programmed to ask: could this be a missile, do the radar echo and the speed fit? And everything seemed to fit! The officer rightly deviated from the system. If he had not had the courage to do that, we might have had a nuclear war.

Could an autonomous weapon system turn against its own commander? Because the algorithm identifies him as an enemy?

I think such systems will never exist.

Why?

There is a big debate among computer scientists over whether a general artificial intelligence, that is, a system that sees the big picture and does not just handle specific tasks, will ever be possible. But it is conceivable that a system attacks targets for strategic or tactical reasons it considers so important that it disregards the restrictions it is supposed to observe. If there is an anti-aircraft gun next to a mosque, the destruction of the mosque, although a violation of international law, may count as acceptable damage in the machine’s complex calculation.

Small combat robot with a machine gun: research into devices like this has been going on for years, and they are increasingly being equipped with artificial intelligence. (Source: Reuters / Fred Greaves)

Would it be possible to program ethical guidelines into machines? For example: never attack children, under any circumstances?

The American roboticist Ron Arkin has been working for years on an ethical control instance for autonomous weapons. It is intended to ensure that a system always adheres to law and ethics and, if necessary, accepts its own destruction. But there are problems: what happens when a system with such a control instance encounters an enemy system that has none? Does it matter if another 10,000 or 20,000 lines of software code have to be run through first? I can imagine military situations in which even that minimal amount of time matters.

The second problem arises because international law is not always clear-cut but open to interpretation. It may then simply be programmed differently. And dictators, of course, can remove such an ethical safeguard altogether.
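
To give a sense of what such a control instance might look like in the simplest case, here is a hypothetical veto-layer sketch in Python; it is not Arkin’s actual architecture, and every rule in it is invented for illustration:

```python
def ethical_veto(target: dict, environment: dict) -> bool:
    """Hypothetical final check that can only forbid, never authorize.

    Returns True if the engagement must be aborted. The rules below are
    invented placeholders, not an implementation of any real guideline.
    """
    if target.get("is_child"):
        return True                      # never attack children
    if target.get("is_surrendering"):
        return True                      # hors de combat
    if environment.get("protected_site_nearby"):
        return True                      # e.g. hospital, place of worship
    return False

# The veto layer would sit between target selection and weapon release:
target = {"is_child": False, "is_surrendering": False}
environment = {"protected_site_nearby": True}
if ethical_veto(target, environment):
    print("engagement aborted by veto layer")
```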

So the problem is not so much technical feasibility as the fact that one side may not stick to the rules?

Trust between states is difficult to establish and easy to damage. But even if you look only at the technology, things get complicated. When I talk to engineers, I always get the answer: we can find a technical solution for that. But they only look at their own system. I fear that, despite the greatest care, such a system could still violate international law and commit atrocities.

We can test systems individually and still be surprised afterwards that everything turns out differently in a conflict. Technological progress does not necessarily solve strategic problems. And then there are unpredictable interactions: you know your own system and can predict how it will behave. But its behavior has consequences for another system whose software code you do not know. What result the two systems produce in their interaction is unpredictable. This is called emergence.

An example?

Third-party sellers offer books on Amazon and set their prices based on the prices of other sellers. In 2011 there was an incident with a biology book that normally costs about $40. Suddenly it was offered for just under $24 million because two pricing algorithms interacted and drove the price up. The interesting thing is that each seller’s own algorithm was completely predictable on its own. But because both reacted to each other very quickly, in a way nobody expected, the price climbed that high. By the time people noticed, the situation had already escalated. Imagine that with autonomous weapon systems from rival states!
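
As a rough sketch of how two individually predictable pricing rules can escalate together, here is a toy simulation in Python; the multipliers are only loosely inspired by published accounts of the 2011 incident and do not reproduce the sellers’ actual code:

```python
# Two sellers, each with a trivially simple and fully predictable rule.
price_a = 40.0   # seller A starts near the book's normal price
price_b = 45.0   # seller B starts slightly higher

for day in range(1, 61):           # one price update per seller per day
    price_a = 0.998 * price_b      # A undercuts B by a fraction
    price_b = 1.27 * price_a       # B prices well above A
    if day % 10 == 0:
        print(f"day {day:2d}: A = ${price_a:,.2f}   B = ${price_b:,.2f}")

# Each rule is harmless on its own, but in combination the prices grow
# exponentially and reach the millions within a couple of months.
```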

Then we would have the “accidental war.” Are you afraid that one day everything will blow up in our faces?

People in our field think in worst-case scenarios, and I am no longer immune to that. There is a real danger that the trend will move toward autonomous weapons that nobody really wants. What gives me hope, however, is that the problem of lethal autonomous weapons has been discussed intensively at UN level in Geneva for many years. It is not yet clear whether international law will impose a ban, as non-governmental organizations demand, or whether there will “only” be politically binding rules of conduct, as the German federal government proposes. I am confident that there will be at least some form of control.

Read more at chrismon.de:

The Man Who Saved the World: In 1983, a Soviet surveillance system reported an attack by U.S. nuclear missiles. The officer on duty, Stanislaw Yevgrafovich Petrov, reacted cautiously and possibly prevented a nuclear war.

Can robots also feel? Unimaginable, says Raúl Rojas, professor of computer science. Still, the visions need to be thought through to the end, says the philosopher Thea Dorn.

What if the robot gets cheeky? At some point, pulling the plug will no longer help, says Jürgen Schmidhuber, who is considered a leading developer of so-called artificial intelligence.
