Will military ethics principles make AI GRRET again?


U.S. Defense Secretary Mark Esper has announced the military’s five ethical principles for AI use. The devil will definitely be in the details because the guidelines are mostly a statement of values. But I already have concerns. Allow me to explain.

Can ethical guidelines make AI GRRET again?

I’ve acronymized the five principles as GRRET:

Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.

Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.

Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.

Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.

The Pentagon’s move reflects findings by the Defense Innovation Board, led by former Google CEO Eric Schmidt.


I get that the language is intentionally broad, but I’m going to split hairs anyway.

Under ‘Governable’, DoD (the Department of Defense) says its AI capabilities will be able to “detect and avoid unintended consequences.” But here’s the problem: people are often *surprised* by unintended side effects, having failed to foresee them. Or, if they do foresee them, they underestimate the likelihood that those effects will actually occur. Making matters worse, it often takes a while before an unintended consequence is detected at all, as with biased hiring practices.

Spelling out how to be ‘Responsible’ for AI, DoD says personnel will “exercise appropriate levels of judgment and care.” Good to know!

It’s the decisions that matter.

Some argue this is merely ethics-washing. Others claim the principles are toothless and do not ease fears about military uses of AI. Tech workers and other U.S. citizens are worried about the business of war, particularly the power and impact of AI products on the battlefield.

My biggest concern is that decision-making processes aren’t being directly addressed. Guidelines like these make it seem as though AI tools will simply plop down in the workplace, requiring people to learn a few new steps so things don’t go cattywampus before everything returns to steady state. AI is different. Not only is it rapidly evolving, it’s capable of changing how groups function, how they make decisions, and how they choose which decisions should be made in the first place.

GRRET does not address this. The intent to be ‘Reliable’ requires military AI capabilities to have “explicit, well-defined uses.” Surely those uses will include informing, or making, important decisions. Judgment calls, risk assessments, priorities, and decision weights will have to be determined, with policy implications beyond the scope of most technology developers. What’s more, someone has to define how AI decision quality will be established, measured, and sustained.

Recommended next steps.

As DoD moves forward on AI deployment, vendors will need to explain AI to the Pentagon, and military branches will need to explain it to stakeholders. My advice is to emphasize decision-making to show the audience where AI will influence important processes. Rather than describing the technical nuts and bolts, focus on how decisions are made now, and how that will change after AI adoption. I describe how to do this in a recent post about my before|after methodology.

Posted by Tracy Allison Altman on 26 Feb 2020.

Photo credit: 7th Army Training Command on Flickr.
