Struggling to explain AI? Try this before|after strategy.

To cut through AI complexity, focus on decisions.

Never has it been more important to explain complex concepts effectively. Technology influences most decision processes, and not always transparently. On the bumpy road toward explainable AI (XAI), we find plenty of communication options, from printed materials to state-of-the-art experiences. But where do you start, and how do you know when you're finished? How do you describe technology to people with varying levels of knowledge and interest? Focus on the decision(s) being made.

Experts weigh in on XAI from numerous perspectives. Previously I wrote about a wonderful philosophical analysis centered on Dennett’s intentional stance, and that logic is consistent with my method of framing conversations around the actions being influenced by the technology. In this post, I show how to achieve the dual purpose of creating engaging content and satisfying learning objectives.

Envision a human-in-the-loop future.

Emphasize before & after decisions to explain technology’s potential role in important processes. Rather than describe nuts and bolts, articulate how decision-making will evolve in the human-in-the-loop, AI-informed future. Use concrete language that’s meaningful to the audience – your executive team, a user group, a client, or the public.

[Diagram: content strategy for explainable AI]

Before: What decisions are being made now, and how (status quo before AI)? Which are machine-automated or machine-augmented?

After: How will key decisions differ after AI adoption? Does the frequency change? Which decisions are made by AI autonomously? Which continue to be made by humans? How will humans approve or collaborate on decisions alongside AI?

Quality, quality, quality. In both cases, address how decision quality will be evaluated and sustained. Establish characteristics of a ‘good’ AI decision and those of a ‘good’ human-led one.
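If your audience includes technical stakeholders, it can help to capture this before/after inventory in a structured form. Below is a minimal sketch in Python; the class and field names are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionBeforeAfter:
    """One row of a before/after decision inventory (hypothetical schema)."""
    decision: str                  # the decision being made
    before: str                    # how it is made today (status quo)
    after: str                     # how it will be made with AI in the loop
    decision_maker_after: str      # "human", "AI", or "human + AI"
    quality_criteria: list[str] = field(default_factory=list)

# Example entry, mirroring the underwriting table below
inventory = [
    DecisionBeforeAfter(
        decision="Policy pricing",
        before="Underwriter prices each policy from complex information",
        after="AI-informed, human-led pricing with more variables as criteria",
        decision_maker_after="human + AI",
        quality_criteria=[
            "consistency with corporate financial targets & ethical guidelines",
            "avoidance of systematic bias",
        ],
    ),
]

for row in inventory:
    print(f"{row.decision}: {row.before} -> {row.after} ({row.decision_maker_after})")
```

A spreadsheet works just as well; the point is to record, per decision, who decides, how, and against which quality criteria.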

Example 1: Insurance underwriting.

before AI | after AI
human-led / underwriter policy pricing | AI-informed, human-led pricing decisions
policy-writing opportunities | similar decision frequency
complex information | more variables as decision criteria
quality check: consistency with professional standards, expectations | quality check: consistency with corporate financial targets & ethical guidelines, avoidance of systematic bias
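For readers who want to see what ‘AI-informed, human-led’ could look like in practice, here is a minimal, hypothetical sketch of the approval loop. The function names and pricing factors are invented for illustration; this is not an actual underwriting model.

```python
# Minimal, hypothetical sketch of an AI-informed, human-led pricing decision.
# All names (suggest_price, underwriter_review) and factors are illustrative only.

def suggest_price(application: dict) -> tuple[float, dict]:
    """Stand-in for an AI model: returns a suggested premium plus the
    factors that drove it, so the underwriter can see the 'why'."""
    base = 500.0
    factors = {
        "vehicle_age": 1.10 if application.get("vehicle_age", 0) > 10 else 1.00,
        "prior_claims": 1.25 if application.get("prior_claims", 0) > 0 else 1.00,
    }
    premium = base
    for weight in factors.values():
        premium *= weight
    return round(premium, 2), factors

def underwriter_review(suggested: float, factors: dict) -> float:
    """The human stays in the loop: accept, adjust, or override the AI suggestion."""
    print(f"AI-suggested premium: {suggested} (factors: {factors})")
    decision = input("Accept [a], or enter an adjusted premium: ").strip()
    return suggested if decision.lower() == "a" else float(decision)

if __name__ == "__main__":
    app = {"vehicle_age": 12, "prior_claims": 1}
    suggested, factors = suggest_price(app)
    final_premium = underwriter_review(suggested, factors)
    print(f"Final (human-led) premium: {final_premium}")
```

The key design choice is that the AI surfaces its suggestion and the factors behind it, while the underwriter keeps the final say and can override it.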

Example 2: Driver dispatch.

before AI | after AI
taxi dispatcher assigns driver | AI approves passenger request, suggests rideshare driver(s)
human-led | granular data, many interim decisions (e.g., choosing a route)
quality check: consistency with standard practices | quality check: ‘good’ decisions improve customer experience, avoid red-zoning and regulatory violations
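A hedged sketch of the ‘after’ column for dispatch: the AI screens the request and ranks candidate drivers, while many interim decisions (such as routing) remain granular and human-led. All field names and scoring weights are hypothetical.

```python
# Hypothetical sketch of AI-suggested driver dispatch.
# Scoring heuristic and field names are invented for illustration.

def score_driver(driver: dict, pickup: tuple[float, float]) -> float:
    """Rank a candidate driver by proximity and rating (toy heuristic)."""
    dx = driver["location"][0] - pickup[0]
    dy = driver["location"][1] - pickup[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return driver["rating"] - distance  # closer and higher-rated scores better

def suggest_drivers(drivers: list[dict], pickup: tuple[float, float], top_n: int = 3) -> list[dict]:
    """AI 'after' step: approve the request and suggest the best candidates."""
    return sorted(drivers, key=lambda d: score_driver(d, pickup), reverse=True)[:top_n]

drivers = [
    {"id": "D1", "location": (0.2, 0.1), "rating": 4.9},
    {"id": "D2", "location": (2.5, 1.8), "rating": 4.6},
    {"id": "D3", "location": (0.4, 0.6), "rating": 4.2},
]
print([d["id"] for d in suggest_drivers(drivers, pickup=(0.0, 0.0))])
```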

Now for the explaining part. Once the before/after scenarios are identified, it’s time to explain specifically how humans and AI will arrive at decisions. This emphasis creates a clear point of view that drives meaningful content design. Options range from walking decision makers through a hands-on experience to presenting concise descriptions and visualizations.

Posted by Tracy Allison Altman on 29 Jan 2020.

Photo credit: Alexander Mueller on Flickr.
