
IS DATA NECESSARILY PROOF WHEN MEASURING THE HUMAN CONDITION?

INTELLIGENT PEOPLE CAN FOOL THEMSELVES WHEN IT COMES TO DATA

In the fifteen years from 1997 to 2012, the percentage of American homes with at least one computer with Internet access increased from 18% to 75%.  With this proliferation of access to information has come an increased demand for data to accompany almost every claim and request, at least in business settings.  This is not an inherently bad phenomenon, but statistics and other data in the wrong hands can be a bit like dynamite.

We live in a data-driven time.  All of the stakeholders want to “see the data.”  My mind travels back to my time in manufacturing.  If I had a machine that produced 100 widgets an hour, that would be the baseline.  Then, if someone showed me a machine that would produce 200 widgets an hour, that would get my attention.  The raw materials don’t change, and the same operator is in place.  The only variable is the machine.  Depending on its cost, I would have to consider buying that machine.  Cause and effect have been established.

It becomes much more complicated when analyzing data as it applies to the human condition in the nonprofit world.  Not everyone who is charged with making decisions based on data understands the difference between correlation and causality.  In fact, I would proffer that most people on both ends of this work in the human services arena are amateurs when it comes to carrying out this responsibility.  The human service professionals are social workers, case managers, and other good people who just want to help those in need.  The evaluators and decision makers, boards and foundations, are often well-intentioned volunteers trying to make a difference in their spare time.
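Here is what that confusion looks like in miniature.  The following is a minimal sketch in Python, with every number and variable invented for illustration: a hidden confounder, the local economy, drives both tutoring funding and graduation rates, so the two correlate strongly even though neither causes the other.

```python
import random
from statistics import correlation  # requires Python 3.10+

# Hypothetical illustration: correlation without causation.
# The local economy (a hidden confounder) drives both variables.
random.seed(42)

economy = [random.random() for _ in range(1000)]           # hidden confounder
tutoring = [e + random.gauss(0, 0.1) for e in economy]     # funded more in good times
graduation = [e + random.gauss(0, 0.1) for e in economy]   # also rises in good times

r = correlation(tutoring, graduation)
print(f"correlation(tutoring, graduation) = {r:.2f}")  # strongly positive, about 0.9
```

A funder looking only at that number might credit tutoring for the graduation gains.  The data is real; the causal story is not.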

Foundations and others who hold grant money to give to the most effective programs also want to see data.  Outcome measurements, they call them.  I support measuring results as best we can.  However, in measuring outcomes related to humans, we must be careful not to confuse correlation with proven cause and effect.

Let’s say a school system graduates, within four years, 70% of the students who enter ninth grade.  It sets a goal of increasing that rate to 85% within five years.  The planned solution is to pay teachers who volunteer to stay two extra hours a day to tutor students who want that service.  Foundation X thinks that is a logical plan, so it grants a half-million dollars to implement the program for five years and promises to consider renewing the grant if the schools reach their goal.

In five years, the graduation rate is still 70%.  Obviously, the idea failed.  Or did it?  Within those five years the following events occurred:

  • Cuts in government funding to that school system forced the schools to lay off 20% of their teachers, which caused the average classroom size to increase from 30 to 37 students.
  • There was a major downsizing by the town’s largest employer, causing many of the parents to lose their jobs.  Many of the high-school-age students had to take minimum-wage jobs after school to help out or, worse yet, had to drop out of school to become the breadwinner.  Or maybe a parent’s loss of a job simply put so much stress on the home that the student couldn’t concentrate on her studies.
  • Funding for law enforcement in the city was reduced, and enforcement and punishment for domestic violence offenders and deadbeat dads were no longer priorities.

Each of these events was a factor that caused some number of students to drop out of school.  We don’t know what the tipping point was for each of those students, but it is possible that, without the tutoring program, the graduation rate would have dropped to 60% or lower.
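To make that concrete, here is a minimal sketch of the same counterfactual arithmetic, using the hypothetical rates from this scenario.  The 60% counterfactual rate is an assumption; in real evaluations it is the hardest number to estimate.

```python
# Hypothetical numbers from the scenario above.
baseline_rate = 0.70        # graduation rate when the program began
observed_rate = 0.70        # graduation rate measured five years later
counterfactual_rate = 0.60  # assumed rate had the program never existed

# Naive evaluation: compare the observed rate to the old baseline.
naive_effect = observed_rate - baseline_rate
print(f"Naive effect: {naive_effect:+.0%}")          # +0%: looks like failure

# Counterfactual evaluation: compare to what would have happened anyway.
true_effect = observed_rate - counterfactual_rate
print(f"Counterfactual effect: {true_effect:+.0%}")  # +10%: a real gain
```

The program’s apparent failure and its actual effect are two different numbers, and only the first one shows up in the grant report.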

The same logic can be applied in reverse to the measurement of programs that appear to be successful.  And yet, those with the money stand together and applaud and pour more money into the apparent successes, and they stand together and denounce the apparent failures.  All the while, they often disregard the collateral circumstances that keep the appearance from matching reality.

Nonprofit organizations learn what turns funders on and how to present information accordingly.  I’m not saying they cheat, but they do learn how to play the game.  Those who play the game best usually get the money.
