“Big E” vs. “little e” evidence, and probabilistic thinking.

1. It’s tempting to think there’s a hierarchy for data: that evidence from high-quality experiments sits on top at Level 1, with other research findings ranked below. But even in healthcare – the gold standard for the “gold standard” – it’s not that simple, says NICE in The NICE Way: Lessons for Social Policy and Practice from the National Institute for Health and Care Excellence. The report, produced with the Alliance for Useful Evidence (@A4UEvidence), cautions against “slavish adherence” to hierarchies of research: “The appropriateness of the evidence to the question is more important, not its place in any such hierarchy.” NICE explains how its decision model works and argues that it’s also relevant outside healthcare.

2. Don’t let “little e” evidence kill off good Big Ideas. Take note, lean startups + anyone new to validating ideas with evidence. In their should-be-considered-a-classic Is Decision-Based Evidence Making Necessarily Bad? (MIT Sloan Management Review), Tingling and Brydon examine the different uses of evidence in decision-making. As a predictive tool, evidence is sometimes flat wrong: Those iconic Aeron chairs in tech offices everywhere? Utterly rejected by Herman Miller’s market-research focus groups. It’s good to have a culture where “small” evidence isn’t just an excuse to avoid risk-taking. But it’s also good to look at “Big E” Evidence, assessing which research methods are predictive over time and replacing older ones (focus groups, perhaps).

3. 10+ years ago, Billy Beane famously discovered powerful analytic insights for managing the Oakland A’s baseball team, and as a reward was portrayed by Brad Pitt in Moneyball. Now a bipartisan group of U.S. federal leaders and advisors offers Moneyball for Government, intending to encourage use of data, evidence, and evaluation in policy and funding decisions (@Moneyball4gov).

4. We’ve barely scratched the surface on figuring out how to present data to decision makers. All Analytics did a web series, The Results Are In: Think Presentation From the Start. James Haight of Blue Hill Research (@James_Haight) compared this activity to Joseph Campbell’s hero’s journey.

5. We’re wired to seek certainty. But in Closing the Mind Gap: Making Smarter Decisions in a Hypercomplex World, Ted Cadsby argues the world is too complex for our simplified conclusions. He suggests probabilistic thinking to arrive at a “provisional truth” that we can test against evidence over time.
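One way to picture probabilistic thinking (our own sketch, not an example from Cadsby’s book) is a simple Bayesian update: hold a belief as a probability rather than a verdict, and revise it each time evidence arrives. The 0.8/0.4 likelihoods below are illustrative numbers, not anything from the source.

```python
# Illustrative sketch: a "provisional truth" held as a probability and
# updated with each new piece of evidence, instead of a fixed conclusion.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability after observing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start at 50/50, then observe three supportive results, each twice as
# likely if the hypothesis is true (0.8) as if it is false (0.4).
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, 0.8, 0.4)

print(round(belief, 3))  # belief strengthens toward, but never reaches, certainty
```

The point of the exercise: confidence grows with consistent evidence, yet the conclusion stays provisional; a contrary result would pull the probability back down.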

Posted by Tracy Allison Altman on 05-Nov-2017.

Photo Credit: Photo by marqquin on Unsplash
