How to blame a robot for mistakes: Do your finger-pointing properly.

People and artificial intelligence work together, sometimes quite nicely, on insurance underwriting, medical diagnosis, and customer experience. (Plus autonomous vehicles, until some dude crashes his Tesla.) This is the Human+AI hybrid we often talk about at Museum of AI.

Humans have varying levels of awareness, input, or control over an AI system. Evidence suggests these factors influence how we apportion blame if something goes wrong.

When a Human+AI hybrid goes pear-shaped…

Emerging research examines how people assign blame when a robot makes a mistake. In Human Factors, Furlough and friends look at how people view imperfect Human+AI outcomes and apportion blame to humans, nonautonomous or autonomous robots, or environmental factors/conditions.

Hierarchy of blame! The authors “suggest that humans use a hierarchy of blame in which robots are seen as partial social actors, with the degree to which people view them as social actors depending on the degree of autonomy. Acceptance of robots by human co-workers will be a function of the attribution of blame when errors occur in the workplace. The present research suggests that greater autonomy for the robot will result in greater attribution of blame in work tasks.”

Others find evidence of the bias known as blame attribution asymmetry (Risk Analysis, January 2021). Writing about the world of Human+AI vehicles, Liu and Du explain a tendency to judge an “automation-caused crash more harshly, ascribe more blame and responsibility to automation and its creators, and think the victim in this crash should be compensated more.” Tread lightly, creators of factory automation and automotive tech – this echoes the knowledge base on assumed vs. imposed risks, two very different animals, as I learned working on energy projects.
I’d love to feature more evidence about nuances of introducing Human+AI hybrids. Please share.

Speaking of asymmetrical finger-pointing

In a white paper on AI governance, Google describes scenarios where people collaborate with AI, reflecting varying levels of awareness, input, and control. Human actors range from unaware end users to managers with decision-making responsibility.

Humans+AI driving together, 12 ways

Inspired by Google’s approach, I put together twelve Human+AI scenarios from the perspective of someone sharing control with a semiautonomous vehicle. View/download 12 Ways People & AI Work Together. The aforementioned levels of awareness, input, and control determine who sets the speed, plans the route, takes the wheel.
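For readers who like to tinker, here is a minimal Python sketch of one way to enumerate scenarios along those three dimensions. The specific levels (unaware/aware, no input/advisory input, no/shared/full control) are my own illustrative assumption – the actual twelve scenarios are spelled out in the linked PDF.

```python
from itertools import product

# Hypothetical levels for each dimension -- illustrative only, not the
# breakdown used in the "12 Ways People & AI Work Together" PDF.
AWARENESS = ["unaware of the AI", "aware of the AI"]
INPUT_LEVEL = ["no input", "advisory input"]
CONTROL = ["no control", "shared control", "full control"]

# 2 x 2 x 3 = 12 combinations of awareness, input, and control.
for i, (awareness, inp, control) in enumerate(
    product(AWARENESS, INPUT_LEVEL, CONTROL), start=1
):
    print(f"Scenario {i}: driver is {awareness}, gives {inp}, has {control}")
```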

Let the blame game begin. It stands to reason that human control over AI is inversely proportional to the blame we assign for perceived errors. Google’s framing offers insight into who gets permission to tweak (or distance themselves from) an AI system. System designers, business leaders, and policy makers should make these choices carefully to avoid triggering uninformed backlash against their AI applications.
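To make the “inversely proportional” intuition concrete, here is a toy heuristic – purely illustrative, not a model from any of the cited research – where the share of blame assigned to the AI shrinks as the human’s level of control grows.

```python
def ai_blame_share(human_control: float) -> float:
    """Toy heuristic: blame assigned to the AI falls as human control rises.

    human_control is in [0, 1]: 0 = fully autonomous AI, 1 = human fully in
    control. Purely illustrative, not drawn from the cited studies.
    """
    if not 0.0 <= human_control <= 1.0:
        raise ValueError("human_control must be between 0 and 1")
    return 1.0 - human_control

for level in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"human control {level:.2f} -> AI blame share {ai_blame_share(level):.2f}")
```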

Honig et al. reviewed the literature seeking insights from “human computer interaction (HCI), human factors engineering, cognitive engineering and experimental psychology”. The authors developed the Robot Failure Human Information Processing (RF-HIP) model to illustrate their findings. Its purpose is to describe how “people perceive, process, and act on failures in human robot interaction.” RF-HIP considers three main aspects of failure: communication; perception and comprehension; and solution. Understanding subconscious behavior is an important component.
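The paper itself doesn’t ship code, but if you wanted to log robot failures along the three aspects RF-HIP highlights, a structure like the following – my own hypothetical sketch, not something from Honig et al. – is one way to start.

```python
from dataclasses import dataclass
from enum import Enum


class FailureAspect(Enum):
    # The three main aspects of failure that RF-HIP considers.
    COMMUNICATION = "how the failure was communicated to the human"
    PERCEPTION_AND_COMPREHENSION = "how the human perceived and understood it"
    SOLUTION = "how the failure was (or was not) resolved"


@dataclass
class FailureReport:
    # Hypothetical record for one human-robot interaction failure.
    description: str
    aspect: FailureAspect
    notes: str = ""


report = FailureReport(
    description="Robot arm dropped a part; operator noticed only after a restart",
    aspect=FailureAspect.PERCEPTION_AND_COMPREHENSION,
    notes="No audible alert, so the operator was unaware until a downstream jam",
)
print(report.description, "--", report.aspect.name)
```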

Planning a decision workflow? Easy does it

It’s challenging for subject matter experts to delineate how artificial intelligence might make a series of decisions. The exercise is bittersweet, partly because people so rarely take the time to explain their own decision processes.

Avoid confusing a decision outcome with the quality of the underlying process.

Things might get weird. Close examination of an existing business process can surface questions, prompting a purposeful redesign – such as when a group of engineers realizes they each approach the same problem differently. Expect intense moments when people lay bare their decision methods, especially when experiencing unfamiliar technology.

Finger-pointing is easy; identifying the problem is hard. Doing a post-mortem on a suboptimal outcome? Avoid the temptation to conflate an outcome with the quality of the underlying decision process. A low-quality process can still produce evidence-based, algorithmic decisions, including ones with ‘I got lucky!’ positive outcomes.
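One small, hypothetical way to keep that separation explicit is a post-mortem template that records process quality and outcome as independent fields, so a lucky win can’t masquerade as a sound process. A sketch:

```python
from dataclasses import dataclass


@dataclass
class PostMortem:
    # Hypothetical template: judge the process and the outcome separately.
    decision: str
    process_quality: str  # how the decision was made, e.g. "low", "adequate", "high"
    outcome: str          # what actually happened, e.g. "positive", "negative"
    got_lucky: bool       # positive outcome despite a low-quality process?


review = PostMortem(
    decision="Approved the policy using the new underwriting model",
    process_quality="low",   # nobody checked the model's input assumptions
    outcome="positive",      # the claim never materialized
    got_lucky=True,
)
print(review)
```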

The takeaway: Who decides what?

Participants must clearly understand the capabilities/limitations of both the people and the AI involved – much easier said than done. Communicating back to the decision maker after a failure is particularly important.

When people perceive flaws in autonomous AI, they tend to blame machine creators, perhaps disproportionately. We are more patient with semiautonomous robots – but to build trust, human-in-the-loop AI must give people insight into the machine’s decision steps and let them exercise discretion over those steps. Then the humans must acknowledge their own flawed contributions, if any.
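As a closing sketch – again hypothetical, not drawn from any of the cited sources – “insight into the machine’s decision steps” plus “discretion over them” can be as simple as logging each step and offering a human override before the final action.

```python
def recommend_route(traffic_level: float) -> tuple[str, list[str]]:
    """Toy recommender that returns a decision plus the steps behind it."""
    steps = [f"observed traffic level {traffic_level:.2f}"]
    route = "highway" if traffic_level < 0.5 else "side streets"
    steps.append(f"chose '{route}' based on the 0.50 traffic threshold")
    return route, steps


def human_in_the_loop(traffic_level: float, override: str | None = None) -> str:
    route, steps = recommend_route(traffic_level)
    for step in steps:            # insight: expose each decision step
        print(f"  machine step: {step}")
    if override is not None:      # discretion: the human can overrule the machine
        print(f"  human override: taking '{override}' instead of '{route}'")
        return override
    return route


print("final choice:", human_in_the_loop(0.7, override="highway"))
```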

Posted by Tracy Allison Altman on 2 Aug 2021

References:
Altman, T. Museum of AI, 12 Ways People & AI Work Together, July 2021.

Furlough et al., Human Factors, Attributing Blame to Robots: I. The Influence of Robot Autonomy, June 2021.

Google, Perspectives on Issues in AI Governance, Box 11, February 2019.

Honig et al., Frontiers in Psychology, Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development, June 2018.

Liu et al., Risk Analysis, Blame Attribution Asymmetry in Human-Automation Cooperation, January 2021.

Photo credits: Anton Maksimov juvnsky, Charles Deluvio, Franco Antonio Giovanella, and Gustavo on Unsplash
