The Promise of Administrative Data and Performance Management: Reflections on Recommendations from the Commission on Evidence-Based Policymaking
The primary goals of the Final Report of the Commission on Evidence-Based Policymaking are laudable: to “[improve] both the privacy protections afforded to the American public and the availability of rigorous evidence to inform policymaking.” Government has wrestled with these issues for decades, and while significant challenges remain, the Commission’s recommendations provide a unique bipartisan opportunity for government to dramatically improve the way it delivers services by focusing on data and program impact.
One of the important questions following the submission of the Commission’s final report is how the government will operationalize and implement its recommendations. How exactly does the Federal government use an ever-expanding trove of administrative data to gain timely, actionable insights about its programs? Even with improved access to and protection of administrative data, conducting program evaluations can be a time-intensive process that may or may not produce a robust evidence base across interventions.
Historically, randomized controlled trials (RCTs) have been the gold standard for evaluating various types of government programs. By randomly assigning individuals to two groups, one that receives an intervention and one that does not, these studies minimize bias and test the effectiveness of the intervention. Along with the opportunity to rigorously examine program impact, RCTs and other experimental evaluation designs come with several challenges:
- They can be costly and resource-intensive, even with improved access to administrative data;
- They can require long observation windows and large sample sizes to be statistically meaningful;
- They demand fidelity to a specific intervention during the evaluation, even as circumstances change; and
- They can produce “false negatives” that incorrectly characterize programs as ineffective, conflating the conclusion that a study did not detect a program’s impact with the conclusion that the program in fact has no impact.
Results of RCTs in one geography, at a particular point in time, have also been used to dictate standardized intervention models across other communities that have vastly different needs. An intervention that shows promise in one location may not be effective in another years later.
In addition to conducting rigorous evaluations, government must expand the use of administrative data to establish a data feedback loop in which government and service providers can see, in near real time, administrative data describing the outcomes and needs of program participants. These feedback systems allow providers of government services to make adjustments as they analyze program data.
While adjusting a program model mid-stream to improve outcomes is antithetical to the classic RCT approach, this type of data sharing and performance management is how the private sector makes evidence-based decisions every day, and it is embraced and enshrined in the Pay For Success (PFS) model. In PFS, service providers at the community level not only track performance metrics such as enrollment, completion, and attrition, but also have access to project outcomes via administrative data sources on a monthly or quarterly basis. We’re seeing the value of this collaborative ecosystem in Massachusetts’ PFS initiatives and elsewhere, where government securely and confidentially shares administrative data on employment, wages, academic performance, and recidivism with service providers, allowing them to continuously see what’s working and what’s not.
As access to administrative data becomes more seamless and privacy protections strengthen, government and service providers must focus not only on RCTs, but also on improved access to data for ongoing performance management and rapid-cycle program improvements at the community level.
Using data and evidence effectively demands innovative approaches that address unique conditions in communities around the country. While building an evidence base at the Federal level will be crucially important, establishing ecosystems that provide ongoing access to data at the local level will allow government and providers to tailor services effectively to meet the specific needs of their communities.