If you’re planning to launch a new social enterprise or scale a successful venture, you may be asked by investors and funders to demonstrate impact. But with product development and customer acquisition to worry about, you may not have the time or expertise to track down the latest rigorously peer-reviewed study proving that your value proposition will work. How can you be informed by impact evidence while building something new and innovative?
The 7 Levels of Evidence
It starts with an understanding of what “evidence” means. Evidence is defined as “the available body of facts or information indicating whether a belief or proposition is true or valid.” Researchers and evaluators use a simple framework to rank the rigor of studies; higher ranks correspond to stronger proof of causality, or impact.
The seven common levels of evidence, in order of increasing rigor, are:
- Expert Opinion
- Evidence from Single Study
- Systematic Review/Lit Review of Similar Studies
- Case-Control or Cohort Study
- Quasi-Experimental (using a similar matched population as the control group)
- Randomized Controlled Trials (known as the “gold standard” of evidence)
- Meta-Analysis (weighted averages of many similar randomized controlled trials)
As you move up the ladder, scientific rigor and sample sizes increase, which usually means stronger evidence of causality and greater confidence in the key findings.
The Cost of Confidence
Higher levels of evidence also tend to be associated with higher costs. Conducting a randomized controlled trial or extensive study can be expensive and time-consuming. That’s why it’s important to determine which level of evidence makes sense for your social enterprise, your budget, and where you are in the funding journey.
For a new idea, you probably don’t need to be 95% confident in your predicted outcome. If you can support your impact assumptions with a lower level of evidence, you can begin to shop your idea sooner and see direct results. Waiting for the highest level of evidence could delay your program’s launch by months or even years, while you spend significant money on research.
We’re calling this idea “the cost of confidence.” Getting to 95% confidence in an intervention might cost twice as much as getting to 70% confidence. That cost is often a barrier to new ideas launching, particularly in pay-for-success-type programs where the taxpayer funding may require a higher level of confidence of causality. The burden of proof ends up taking too much time and money, and the program never gets off the ground.
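To make the “twice as much” intuition concrete, here is a minimal sketch (with hypothetical numbers, not drawn from this article) showing how the required trial size, and therefore cost, grows with the confidence level you demand. It uses a standard two-arm sample-size formula for a z-test, which is one common way evaluators budget a randomized controlled trial:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(effect_size, alpha, power):
    """Approximate participants needed per arm in a two-group trial
    to detect a standardized effect (Cohen's d) with a two-sided
    z-test: n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# Hypothetical program effect of d = 0.3 (a modest effect), 80% power.
n_95 = sample_size_per_arm(0.3, alpha=0.05, power=0.80)  # 95% confidence
n_70 = sample_size_per_arm(0.3, alpha=0.30, power=0.80)  # 70% confidence

# The 95%-confidence trial needs roughly twice the participants of
# the 70%-confidence trial, so recruitment and delivery cost scale
# accordingly.
print(n_95, n_70)
```

Since recruiting and serving participants is usually the dominant cost of a trial, doubling the sample roughly doubles the budget, which is the tradeoff the “cost of confidence” describes.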
The Right Level of Evidence at the Right Time
To address that problem, we advocate using the levels of evidence “out front”: label where you found your sources and what that implies as a research baseline.
When considering the cost of confidence, meta-analysis could be the best benchmark for many social enterprises. Looking at meta-analyses of multiple studies not only results in a higher level of confidence — because it shows that results can be repeated under different circumstances — but it also tends to be quicker and less expensive. It bypasses the extensive costs of putting together a randomized controlled trial or other rigorous study, and offers strong evidence that an intervention will work.
There are more and more clearinghouses or sources of meta-analytic studies that are being used to inform public policy and grantmaking. These are a great place to start for social entrepreneurs.
Even if a meta-analysis is used for evidence up front, a randomized controlled trial may still be needed to show evidence of impact after launch. The level of evidence required often depends on the stage of the enterprise: Funders’ expectations and investments may increase as higher levels of evidence are possible. However, simply getting off the ground and attracting initial funding shouldn’t require a prohibitive cost of confidence.
As evaluation and data become more central to investments, public funding, and other decision-making, we think it’s important for social entrepreneurs to have a basic understanding of the cost of confidence and levels of impact evidence.
Get the basics:
- Evidence Based Practice Toolkit, Winona State University: http://libguides.winona.edu/c.php?g=11614&p=61584
- More insights on the cost of confidence and the right time to evaluate impact: “Ten Reasons Not to Measure Impact — and What to Do Instead,” Mary Kay Gugerty and Dean Karlan, Stanford Social Innovation Review: https://ssir.org/articles/entry/ten_reasons_not_to_measure_impact_and_what_to_do_instead