Stand up for science, evidence for surgery, and cognitive computing for execs.

1. Know someone who effectively promotes evidence?
Nominations are open for the 2016 John Maddox Prize for Standing up for Science, which recognizes an individual who promotes sound science and evidence on a matter of public interest while facing difficulty or hostility in doing so.

Researchers in any area of science or engineering, or those who work to address misleading information and bring evidence to the public, are eligible. Sense About Science (@senseaboutsci) explains that the winner will be someone who effectively promotes evidence despite challenge, difficulty, or adversity, and who takes responsibility for public discussion beyond what would be expected of someone in their position. Nominations are welcome until August 1.

2. Evidence to improve surgical outcomes.
Based in Oxford, UK, the IDEAL Collaboration is an initiative to improve the quality of research in surgery, radiotherapy, physiotherapy, and other complex interventions. The IDEAL model (@IDEALCollab) describes the stages of innovation in surgery: Idea, Development, Exploration, Assessment, and Long-Term Study. In addition to hosting an annual conference, the collaboration proposes and advocates for assessment frameworks, such as the recent IDEAL-D for assessing medical device safety and efficacy.

3. Could artificial intelligence replace executives?
In the MIT Sloan Management Review, Sam Ransbotham asks "Can Artificial Intelligence Replace Executive Decision Making?" ***insert joke here*** Most problems faced by executives are unique, poorly documented, and short on structured data, so there is little material with which to train an artificial intelligence system. More useful would be analogies and examples of similar decisions, rather than a search for concrete patterns. AI needs repetition, and most executive decisions don't lend themselves to A/B testing or other research methods. However, some routine, smaller issues could eventually be handled by cognitive computing.

4. Can data be labeled for quality?
Jim Harris (@ocdqblog) describes must-haves for data quality. His SAS blog post compares consuming data without knowing its quality to purchasing unlabeled food. Possible solution: A data-quality 'label' could be implemented as a series of yes/no or pass/fail flags appended to all data structures, indicating whether all critical fields were completed and whether specific fields were populated with a valid format and value (see the sketch below).
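As a rough illustration of the idea (not Harris's implementation), here is a minimal Python sketch that appends hypothetical pass/fail quality flags to a record; the field names and validity rules are assumptions made for the example.

```python
import re

# Hypothetical critical fields and a simple format rule, for illustration only.
CRITICAL_FIELDS = ["customer_id", "order_date", "email"]
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def quality_label(record: dict) -> dict:
    """Return a pass/fail 'label' summarizing basic data-quality checks."""
    # Flag 1: were all critical fields populated (non-empty)?
    critical_complete = all(record.get(f) not in (None, "") for f in CRITICAL_FIELDS)
    # Flag 2: does a specific field (email) have a valid format and value?
    email_valid = bool(EMAIL_PATTERN.match(record.get("email", "")))
    return {
        "critical_fields_complete": critical_complete,
        "email_format_valid": email_valid,
    }

record = {"customer_id": "C-1001", "order_date": "2016-07-06", "email": "jane@example.com"}
labeled = {**record, "quality_label": quality_label(record)}
print(labeled["quality_label"])  # {'critical_fields_complete': True, 'email_format_valid': True}
```

The point of the label is that it travels with the data, so a consumer can check the flags before use instead of re-auditing every field.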

Posted by Tracy Allison Altman on 6-Jul-2016.

 
