When presenting findings, it’s essential to show their reliability and relevance. Today’s post discusses how to show your evidence is reproducible; next week in Part 2, we’ll cover how to show it’s relevant.
Show that your insights are reproducible. With complexity on the rise, there's no shortage of quality problems in traditional research: Researchers are failing to replicate peer-reviewed, published findings, including high-profile results such as Amy Cuddy's power pose study. A recent examination of psychology evidence was particularly painful.
In a corporate setting, the problem is no less difficult. How do you know a data scientist's results can be replicated?* How can you be sure an analyst's Excel model is flawless? Much of this confusion could be avoided if people documented their work in a way that makes it transparent.
Demystify, demystify, demystify. To establish credibility, the audience needs to believe your numbers and your methods are reliable and reproducible. Numerous efforts are bringing transparency to academic research (@figshare, #openscience). Technologies such as self-serve business intelligence and data visualization have added traceability to corporate analyses. Data scientists are coming to grips with the need for replication, evidenced by the Johns Hopkins/Coursera class on reproducible research. At presentation time, include highlights of data collection and analysis so the audience clearly understands the source of your insights.
Make a list: What would you need to know? Imagine a colleague will be auditing or replicating your work – whether it's a straightforward business analysis, a data science project, or scientific research. Put together a list of the things they would need to do, and the data they would access, to arrive at your result. Work with your team to set expectations for how projects are completed and documented. No doubt this can be a burdensome task, but the more good habits people develop (e.g., no one-off spreadsheet tweaking), the less pain they'll experience when defending their insights.
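One way to make that list concrete is to capture it in code: record the data file (and a hash of it), the analysis parameters, and the environment, and save that record next to the results. The Python sketch below is illustrative only – the file names, fields, and helper function are assumptions, not a prescribed standard.

```python
# provenance.py - illustrative sketch: record what a colleague would need
# in order to replicate an analysis. Names and fields are hypothetical.
import hashlib
import json
import platform
import sys
from datetime import datetime, timezone
from pathlib import Path


def file_sha256(path: Path) -> str:
    """Hash the input data so a reviewer can confirm they have the same file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def write_provenance(data_file: Path, params: dict, out_file: Path) -> None:
    """Save a small JSON record alongside the results."""
    record = {
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "data_file": str(data_file),
        "data_sha256": file_sha256(data_file),
        "parameters": params,          # e.g., filters, model settings, random seed
        "python_version": sys.version,
        "platform": platform.platform(),
    }
    out_file.write_text(json.dumps(record, indent=2))


if __name__ == "__main__":
    # Hypothetical inputs: swap in your own data file and analysis parameters.
    write_provenance(
        data_file=Path("sales_2016_q3.csv"),
        params={"region": "EMEA", "random_seed": 42},
        out_file=Path("results_provenance.json"),
    )
```

A colleague who receives the results along with this JSON record can confirm they are looking at the same data and the same settings before attempting to reproduce the numbers.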
*What is a “reproducible” finding, anyway? Does this mean literally replicated, as in producing essentially the exact same result? Or does it mean a concept or research theory is supported? Is a finding replicated if effect size is different, but direction is the same? Sanjay Srivastava has an excellent explanation of the differences as they apply to psychology research in What counts as a successful or failed replication?
Posted by Tracy Allison Altman on 18-Oct-2016.