Tag: analytics
1. Prior experience → More trust. In Trustworthy Data Analysis, Roger Peng gives an elegant description of how he evaluates analytics presentations and what factors influence his trust level. First, he imagines analytical work in three buckets: A (the material presented), B (work done but not presented), and C (analytical work not done). “We can...
1. Vigilance → Better algorithms. “Eliminating bias… requires constant vigilance on the part of not only data scientists but up and down the corporate ranks.” In an insightful InformationWeek commentary, James Kobielus (@jameskobielus) considers the importance of Debiasing Our Statistical Algorithms Down to Their Roots. “Rest assured that AI, machine learning, and other statistical...
1. Debiasing → Better decisions. Debiasing is hard work, requiring honest communication and the occasional stomach upset. But it gets easier and can become a habit, especially if people have a systematic way of checking their decisions for bias. In this podcast and interview transcript, Nobel laureate Richard Thaler explains several practical ways to debias decisions. First,...
1. Machines Gone Wild → Digital trust gap. Last year I spoke with the CEO of a smallish healthcare firm. He had not embraced sophisticated analytics or machine-made decision-making, having no comfort level for ‘what information he could believe’. He did, however, trust the CFO’s recommendations. Evidently, these sentiments are widely shared. — Tracy A...
1. Recognize bias → Create better algorithms. Can we humans better recognize our biases before we turn the machines loose, fully automating them? Here’s a sample of recent caveats about decision-making fails: While improving some lives, we’re making others worse. Yikes. From HBR, hiring algorithms are not neutral. If you set up your resume-screening algorithm to...
1. It’s tempting to think there’s a hierarchy for data: that evidence from high-quality experiments sits on top at Level 1, and other research findings follow thereafter. But even in healthcare – the gold standard for the “gold standard” – it’s not that simple, says NICE in The NICE Way: Lessons for Social Policy and...
When presenting findings, it’s essential to show their reliability and relevance. Today’s post discusses how to show your evidence is reproducible; next week in Part 2, we’ll cover how to show it’s relevant. Show that your insights are reproducible. With complexity on the rise, there’s no shortage of quality problems with traditional research: People are...
1. Magical thinking about ev-gen. Rachel E. Sherman, M.D., M.P.H., and Robert M. Califf, M.D. of the US FDA have described what is needed to develop an evidence generation system – and must be playing a really long game. “The result? Researchers will be able to distill the data into actionable evidence that can ultimately...

Museum musings.

Pondering the places where people interact with artificial intelligence: Collaboration on evidence-based decision-making, automation of data-driven processes, machine learning, things like that.

Recent Articles

19 August 2021
How to blame a robot for mistakes: Do your finger pointing properly.
12 August 2020
How Human-in-the-Loop AI is Like House Hunters
9 April 2020
Deciding while distancing: From data viz to the hard decisions.
26 February 2020
Will military ethics principles make AI GRRET again?
30 January 2020
Struggling to explain AI? Try this before|after strategy.