In April I had the opportunity to participate in a panel on Pay For Success (PFS) programs at the annual Early Childhood Social Impact Performance Advisors Conference in San Diego.  Jointly sponsored by the Institute for Child Success and ReadyNation, the conference is the only one to focus specifically on early childhood PFS social impact financing.

Sessions at this year’s conference covered areas including investing, risk management, contracting, capacity building, early childhood program features, and global developments.  My panel, titled “The Data You Need And How To Get It,” focused on the methodological aspects of Pay for Success.  I was fortunate to present with two co-panelists, Mark Innocenti from Utah State University and Christina Altamayer of the Children & Families Commission of Orange County, who provided in-depth illustrations of the challenges associated with data collection and analysis for PFS programs.

My presentation focused on identifying which programs are appropriate for PFS.  A few key points:

1.  Data collection should be built into the program model

High-quality data are essential to successful PFS programs.  PFS programs are high-stakes operations:  millions of dollars may change hands based on their results.  Accurate and complete data on student characteristics and outcomes are therefore critical.  People considering PFS programs should ensure that adequate data will be available and that data-sharing agreements are in place among programs and evaluators.

2.  Randomized controlled trials are better for identifying effective programs than for assessing PFS outcomes

Randomized controlled trials (RCTs) are considered the “gold standard” for identifying program impact.  When implemented well, they are the most powerful methodological tool available for isolating the effect on outcomes attributable to the program model itself.  For this reason, selection criteria for programs considered as part of a PFS should include evidence from RCTs.

RCTs are not necessarily the best methodological tool for determining the success of a PFS, however.  They are costly, and they often require longer timeframes than are possible in a PFS context, in which investors expect a short-term return.  For this reason, other methodologies, such as quasi-experimental designs (QEDs), are often more appropriate for gauging progress within a PFS.
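To make the QED idea concrete, here is a minimal sketch in plain Python of a difference-in-differences estimate, one common quasi-experimental design.  All scores and group sizes below are hypothetical, invented purely for illustration; a real PFS evaluation would use the program’s actual outcome data and a carefully constructed comparison group.

```python
# Difference-in-differences: a common quasi-experimental design (QED).
# Instead of randomizing children into groups (as an RCT would), we compare
# the change over time in the program group against the change over time in
# a non-randomized comparison group, netting out shared background trends.

def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(treat_pre, treat_post, comp_pre, comp_post):
    """Estimate program impact as the change in the treated group
    minus the change in the comparison group."""
    treated_change = mean(treat_post) - mean(treat_pre)
    comparison_change = mean(comp_post) - mean(comp_pre)
    return treated_change - comparison_change

# Hypothetical kindergarten-readiness scores (0-100 scale).
treat_pre  = [52, 48, 55, 50]   # program children, before the program
treat_post = [68, 65, 70, 66]   # program children, after the program
comp_pre   = [51, 49, 53, 50]   # comparison children, same timepoints
comp_post  = [58, 56, 60, 57]

effect = diff_in_diff(treat_pre, treat_post, comp_pre, comp_post)
print(round(effect, 2))  # prints 9.0: a 9-point gain beyond the comparison trend
```

The appeal in a PFS context is visible in the sketch: the estimate needs only pre- and post-program measurements, which can be collected on the timeline investors expect, rather than the multi-year follow-up a rigorous RCT may require.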

3.  Close attention must be paid to context and scale when identifying programs for PFS

Even programs with a rigorous body of research evidence based on RCTs may not be good fits for PFS programs.   The likelihood of success in translating an existing program model to a PFS context is highly dependent on contextual factors.  These might include the scale of the program, the policy context, the demographics of the children served, and local costs.