The National Institute for Early Education Research (NIEER) is proud to partner with the Early Childhood Development Action Network (ECDAN) on this blog series exploring issues highlighted in the recent Annals of the New York Academy of Sciences special issue, ‘Implementation Research and Practice for Early Childhood Development.’
Nurturing care is necessary for children’s healthy development, yet little is understood about how best to deliver nurturing care interventions across the full range of existing systems and in a wide diversity of settings. Implementation research is central to understanding what, why, and how interventions work in real-world settings and to testing approaches to improve them.
A question commonly asked by policy makers about programs that promote early child development (ECD) is: what program elements, policies, and family and community circumstances contribute to positive results, and under what conditions? (Lombardi, ANYAS, 2018)
To answer this question, it is not enough to collect data only on the impact of interventions on intended outcomes (e.g., ECD); it is also important to collect data on the implementation context, inputs, and processes that contribute to the outcomes we expect to achieve for young children, their caregivers, and their communities. In other words, an implementation evaluation can provide evidence on what makes successful programs work, why less successful programs fall short, and what features of a program can be improved to make it more effective on the intended outcomes.
Before designing an implementation evaluation for a program promoting ECD, it is critical to develop a Theory of Change (TOC) showing how the inputs and actions lead to the outputs and outcomes. The TOC is a road map that can provide guidance on what data evaluators should collect to assess:
- Program Fidelity: Whether components of the program were implemented as intended; for example, the number of days of training delivered to ECD workers.
- Program Quality: Whether the program components were implemented well enough to be effective on intended outcomes; for example, the knowledge and skills of ECD workers to deliver the curriculum.
- Program Improvements: Whether components of the program need to be improved to strengthen effectiveness; for example, integrating booster training sessions to improve ECD workers’ competencies in delivering the curriculum.
Key implementation data to collect include dosage (i.e., the duration, frequency, and intensity of the program), content, the competencies of the delivery personnel, and how well the program is accepted by the recipients, including local demand. These data may be both quantitative and qualitative. The “C.A.R.E.” (Consolidated Advice on Reporting ECD Implementation Research) guidelines were developed by a group of global ECD experts to support program implementers and researchers in systematically collecting data on the implementation of ECD programs (Yousafzai et al., ANYAS, 2018).
Implementation data should be collected whether the program is a small pilot project or a large-scale national service. Key principles for the implementation evaluation include:
- Transparency: Reporting successful and unsuccessful results is equally important, and both should be widely disseminated so the global ECD community can learn from one another.
- Utilization of Data: Fidelity is a core construct in implementation research, but it is also recognized that during implementation, adaptations or modifications may be desirable or necessary in response to real-world contexts (e.g., overcoming barriers to implementation, responding to a changing policy environment, making program quality improvements). In other words, data need to be made available to implementers in real time so they can make decisions about how to improve the quality of the program.
The use of a common set of guidelines to collect data on implementation inputs and processes will enable the global community to reach consensus on the features that make ECD programs more or less successful in differing policy environments and socio-cultural contexts. Sharing these lessons through accessible evaluation reports will foster greater learning and effective partnerships among policy makers, implementers, and researchers who share a common vision of ensuring that all children thrive.
The special series on ‘Implementation Research and Practice’ published in the Annals of the New York Academy of Sciences is intended to advance evidence on implementation research and practice, including improved reporting of systems and processes when implementing early childhood development programs. This blog series will feature lessons learned and evidence on implementation from the special series and from ongoing work in the field, to foster learning and debate on issues of implementation.
Lombardi J. What policy makers need from evaluations of early childhood development programs? Annals of the New York Academy of Sciences 2018; 1419(1): 17-19.
Yousafzai AK, Aboud FE, Nores M, Kaur R. Reporting guidelines for implementation research on nurturing care interventions designed to promote early childhood development. Annals of the New York Academy of Sciences 2018; 1419(1): 26-37.
Dr. Yousafzai is an Associate Professor of Global Health in the Department of Global Health and Population at the Harvard T.H. Chan School of Public Health and a Visiting Faculty at the Department of Paediatrics and Child Health, Aga Khan University. She has 15 years of field research experience and established an early childhood focused field research team in rural Sindh, Pakistan. Dr. Yousafzai has extensive experience in evaluating early childhood interventions in South Asia, East Africa, and Central and Eastern Europe. Her research has also focused on the inclusion of children and adolescents with disabilities in global child health services. She also serves on a number of advisory groups on early child development for international organizations, including the Interim Executive Group of the Early Childhood Development Action Network (ECDAN).