1. CircleUp uses an algorithm to evaluate consumer startups.
Recently we wrote about #fintech startups that are challenging traditional consumer lending models. CircleUp is doing something similar to connect investors with non-tech consumer startups (food, cosmetics, recreation). It’s not yet a robo adviser for automated investing, but the company does use machine learning to remove drudgery from the analysis of private companies. @CircleUp’s classifier selects emerging startups based on revenue, margins, distribution channels, etc., then makes its findings available to investors. CircleUp has also launched a secondary market where shareholders can sell their stakes twice annually, and has successfully raised Series C funding.
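CircleUp hasn’t published the details of its model, so treat the following as a purely hypothetical sketch of what screening consumer companies on fundamentals might look like: a small scikit-learn classifier trained on fabricated revenue, margin, and distribution-channel features, then used to rank unseen companies.

```python
# Hypothetical sketch of a startup-screening classifier. CircleUp's actual
# model, features, and training data are not public; everything below is
# fabricated for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fabricated features: annual revenue ($M), gross margin (fraction),
# and number of distribution channels.
X = np.column_stack([
    rng.lognormal(1.5, 1.0, 500),   # revenue
    rng.uniform(0.1, 0.7, 500),     # gross margin
    rng.integers(1, 10, 500),       # distribution channels
])
# Fabricated labels: companies that later hit a growth target.
y = (X[:, 0] * X[:, 1] + rng.normal(0, 1, 500) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank unseen companies by predicted probability of meeting the target.
scores = model.predict_proba(X_test)[:, 1]
print(f"Top candidate score: {scores.max():.3f}")
```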
2. Student decision-making competition.
In the 2016 @SABR case competition, college and university students analyzed and presented a baseball operations decision — the type of decision a team’s GM and staff face over the course of a season. Contestants were required to construct and defend a 2016 bullpen from scratch for any National League team, taking into account the quality of that team’s starting pitching, its defense, home ballpark, division opponents, and other factors. The Carnegie Mellon team from the Tepper School of Business won the graduate division.
3. For many, writing is an essential data science skill.
Matt Asay (@mjasay) reminds us that data science breaks down into two categories, depending on whether the output is intended for human or machine consumption. The human-oriented work often calls for straightforward analysis rather than complex models, so business communication skills are essential: besides manipulating data, successful professionals must excel at writing clear explanations and making business recommendations.
4. Chasing statistical wild geese.
The American Statistical Association has released a “Statement on p-Values: Context, Process, and Purpose,” and there’s been a flurry of discussion. If you find it all tl;dr, the bottom line = “P-values don’t draw bad conclusions, people do”. The ASA’s supplemental material presents alternative points of view – mostly exploring ways to improve research by supplementing p-values, using Bayesian methods, or simply applying p-values properly. Christie Aschwanden wrote on @FiveThirtyEight that “A common misconception among nonstatisticians is that p-values can tell you the probability that a result occurred by chance. This interpretation is dead wrong, but you see it again and again and again and again. The p-value only tells you something about the probability of seeing your results given a particular hypothetical explanation….” Hence ASA Principle No. 2: “P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.” Nor can a p-value tell you the size of an effect, the strength of the evidence, or the importance of a result. The problem is the way p-values are used, explains Deborah Mayo (@learnfromerror): “failing to adjust them for cherry picking, multiple testing, post-data subgroups and other biasing selection effects”.
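Those last two pitfalls are easy to see in simulation. As an illustrative sketch (my own, not from the ASA statement), the Python snippet below runs t-tests on data where the null hypothesis is true by construction: about 5% of single tests come out “significant” at p < 0.05 anyway, and cherry-picking the best of every 20 tests pushes that rate above 60%.

```python
# Simulation of two points from the p-value discussion (illustrative
# sketch, not taken from the ASA statement itself):
#  1. Under a true null, p < 0.05 still occurs about 5% of the time.
#  2. Cherry-picking the smallest p of 20 tests makes a "significant"
#     result likely even when every null hypothesis is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments, n_obs = 10_000, 30

# Two groups drawn from the SAME distribution: the null is true by design.
a = rng.normal(0, 1, (n_experiments, n_obs))
b = rng.normal(0, 1, (n_experiments, n_obs))
p = stats.ttest_ind(a, b, axis=1).pvalue

print(f"False positive rate at p < 0.05: {np.mean(p < 0.05):.3f}")  # ~0.05

# Cherry-pick: keep only the smallest p-value from each batch of 20 tests
# (10,000 divides evenly into 500 batches of 20).
best_of_20 = p.reshape(-1, 20).min(axis=1)
print(f"Best-of-20 'significant' rate: {np.mean(best_of_20 < 0.05):.3f}")  # ~0.64
```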
Posted by Tracy Allison Altman on 16-Mar-2017.
Photo credit: Around the campfire by Jason Pratt.