News
Correlation is not causation: a spectrum of impact measurement from SSIR

The Stanford Social Innovation Review (SSIR) has published an essay intended as a “Playbook for Designing Social Impact Measurement”. The authors begin by reiterating the importance of data in decision-making and resource allocation, before turning to the challenge of working out which data should be used, and how:
As basic as it might sound, one of the most important elements to understand about claims of social impact is the old adage “correlation doesn’t equal causation.” While correlation, which is simply a relationship between two things, can be a useful endpoint, it’s important to distinguish between a lightly informed decision and an evidence-based one, especially when vulnerable populations and billions of dollars hang in the balance.
The authors have developed a “spectrum of impact measurement” to aid such decisions:

They’re keen to emphasise that this is a framework rather than a roadmap, since the later tiers of the spectrum demand time and resources that not every organisation has access to:
It’s also worth noting that achieving the experimental methodologies at the very top of the spectrum need not always be the goal. If an organization isn’t pursuing a high level of confidence in its impact, it can gain many insights by simply tracking internal data.
They go on to explain the five tiers in greater detail, after first grouping them into two sections:
The first three points on the spectrum—logic model, KPI selection, and data collection and analysis—are the social sector equivalents of business analytics. They are not simply prerequisites to experimental evaluations; they are valuable in their own right. Developing these three areas helps organizations build cultures that emphasize the importance of data and information, make informed resource allocation decisions, drive performance through goal-setting frameworks and feedback loops, and ultimately use the information they produce as a strategic asset. […] The last two steps are evaluations and involve constructing a control group. Only when there is a control group, or a group similar to existing clients that doesn’t participate in a program, can organizations begin claiming causal inference—that is, confidently claiming that the program is responsible for the change in clients’ circumstance, not merely correlated with it.
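The control-group logic in that last passage can be sketched in a few lines (again, this is an illustration of the general technique, not code from the essay; the effect size and sample sizes are invented). Because assignment to the programme is random, the treated and control groups are similar on average, so the difference in mean outcomes recovers the programme's true effect:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 2.0  # hypothetical programme effect, chosen for illustration

# Baseline outcomes vary across individuals for reasons unrelated
# to the programme (income, health, location, ...).
baselines = [random.gauss(10, 3) for _ in range(10_000)]

# Random assignment: roughly half join the programme, half form the
# control group. Randomisation is what makes the groups comparable.
treated, control = [], []
for baseline in baselines:
    if random.random() < 0.5:
        treated.append(baseline + TRUE_EFFECT)  # receives the programme
    else:
        control.append(baseline)                # does not participate

# Difference in group means estimates the causal effect.
estimate = statistics.mean(treated) - statistics.mean(control)
print(f"estimated programme effect: {estimate:.2f}")
```

Without the randomised control group, comparing participants to whoever happened not to enrol would conflate the programme's effect with whatever drove enrolment in the first place, which is precisely the correlation-versus-causation trap the authors describe.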