Sometimes we fool ourselves into thinking that if people just had access to all the relevant data, then the right decision – and better outcomes – would surely follow.
Of course we know that’s not the case. A number of things block a clear path from evidence to decision to outcome. Evidence can’t speak for itself (and even if it could, human beings aren’t very good listeners).
It’s complicated. Big decisions require synthesizing lots of evidence arriving in myriad forms, much of it opaque, from diverse sources with varying agendas. Not only do decision makers need to resolve conflicting evidence, they must also balance competing values and priorities. (Which is why “evidence-based management” is a useful concept, but as a tangible process it is simply wishful thinking.)
If you’re providing evidence to influence a decision, what can you do? Transparency can move the ball forward substantially. Ideally, it’s a two-way street: transparency in the presentation of evidence, rewarded with transparency into the decision process. However, decision makers often avoid exposing their rationale for difficult decisions. It’s not always a good idea to publicly articulate preferences about values, risk assessments, and priorities when addressing a complex problem: You may get burned. And it’s an even worse idea to reveal proprietary methods for weighing evidence. Mission statements or checklists, yes, but not processes with strategic value.
The human touch. If decision-making were simply a matter of following the evidence, then we could automate it, right? In banking and insurance, firms have built impressive technology to automate approvals for routine decisions. But doing so first requires a very explicit weighing of the evidence and the design of business rules – something like the sketch below.
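To make that concrete, here is a minimal sketch of what explicit business rules for a routine approval might look like. The rules, thresholds, and field names are hypothetical illustrations, not any institution’s actual criteria:

```python
# Minimal sketch of rule-based approval in the spirit of the banking/insurance
# example above. Rule names and thresholds are hypothetical, not any
# institution's actual criteria.

def approve_loan(application: dict) -> tuple[bool, list[str]]:
    """Apply explicit, inspectable business rules; return decision and reasons."""
    reasons = []
    if application["credit_score"] < 650:                 # hypothetical cutoff
        reasons.append("credit score below 650")
    if application["debt_to_income"] > 0.40:              # hypothetical cutoff
        reasons.append("debt-to-income ratio above 40%")
    if application["requested_amount"] > 5 * application["annual_income"]:
        reasons.append("requested amount exceeds 5x annual income")
    return (len(reasons) == 0, reasons)

approved, reasons = approve_loan(
    {"credit_score": 700, "debt_to_income": 0.32,
     "requested_amount": 200_000, "annual_income": 60_000}
)
print(approved, reasons)  # True []
```

The point is that every rule is explicit and inspectable – the weighing of evidence is designed up front, not left to intuition in the moment.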
Where automation isn’t an option, decision makers use a combination of informal methods and highly sophisticated models – things like the Delphi method, efficient-frontier analysis, or multiple-criteria decision analysis (MCDA; a simple scoring sketch follows below). But let’s face it, there are still a lot of high-stakes beauty contests going on out there.
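For illustration, a weighted-sum score is the simplest form of MCDA. Everything below – criteria, weights, and option scores – is hypothetical, and a real MCDA exercise would also involve normalization and sensitivity analysis:

```python
# Minimal sketch of a weighted-sum score, the simplest form of MCDA.
# Criteria, weights, and option scores are hypothetical.

weights = {"cost": 0.40, "risk": 0.35, "strategic_fit": 0.25}  # sum to 1

options = {  # scores normalized to [0, 1]; higher is better
    "Option A": {"cost": 0.8, "risk": 0.5, "strategic_fit": 0.9},
    "Option B": {"cost": 0.6, "risk": 0.9, "strategic_fit": 0.4},
}

def weighted_score(scores: dict, weights: dict) -> float:
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in options.items():
    print(name, round(weighted_score(scores, weights), 3))
# Option A 0.72
# Option B 0.655
```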
What should define transparency? Participants can make their evidence transparent in several ways:
Level 1: Make the evidence accessible. Examples: Provide access to a report, along with the supporting data; publish a study in a conventional academic/scientific journal style.
Level 2: Show, don’t tell. Supplement lengthy narrative with visual cues. Provide data visualization and a synopsis. Demonstrate the dependencies and interactivity of the information. Example: Link to the comprehensive analysis, but first show the highlights in easily digestible form – including details of the analytical methods being applied.
Level 3: Make it actionable. Apply the “So what?” test. Demonstrate value. Show why the evidence matters. Example: Show how actions influence important outcomes. Action → Outcome
On the flip side, decision makers can add transparency by explaining how they view the evidence: What qualifies as evidence? Which evidence carries the most weight? Which actions are expected to influence desired outcomes? A structured decision framework requires this and more.
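On that note, here’s a hypothetical sketch of what the published skeleton of such a framework might look like – a simple declaration of what counts as evidence, how it’s weighted, and which actions are expected to drive which outcomes. All names and numbers are illustrative:

```python
# Hypothetical sketch: the published skeleton of a structured decision
# framework. All evidence types, weights, actions, and outcomes below are
# illustrative.

framework = {
    "admissible_evidence": [
        "peer-reviewed study", "internal A/B test", "customer survey",
    ],
    "evidence_weights": {          # which evidence carries the most weight
        "peer-reviewed study": 0.5,
        "internal A/B test": 0.3,
        "customer survey": 0.2,
    },
    "action_outcome_links": {      # which actions influence desired outcomes
        "reduce onboarding steps": "higher activation rate",
        "add self-serve reporting": "lower support volume",
    },
}

for action, outcome in framework["action_outcome_links"].items():
    print(f"{action} -> {outcome}")
```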
Posted by Tracy Allison Altman on 30-Apr-2017.
Great post – thank you. Data and evidence are one (key) component of what is often a multi-component decision-making process.