THE Study Guide: Intro to Evaluation Studies
The Data for Life Prize aims to support scientific evaluation of efforts to save children’s lives in order to maximize effectiveness.
For this prize, we ask applicants to design a scientific study to validate the success of their child mortality intervention in the most convincing way possible: cold, hard data. Although evaluation tends to come at the end of a project, it is important that evaluation be planned for in the initial stages so that the necessary information can be collected. Evaluation is especially relevant for innovative programs, where it is crucial to test operational alternatives and learn during implementation in order to show that the intervention makes a significant difference before scaling up. Impact evaluations increase donor confidence and ensure that funds are allocated to interventions that are proven to be effective.
Evaluation must take place before, during, and after the project in order to monitor changes throughout the study. Applicants should aim to measure the health effects of interventions as well as assess the efficacy of the project’s strategy. A good study will not only tell whether an intervention works, but also how and why it works. Here are some tips to help craft an evaluation study:
Before: Baseline Evaluation
In order to determine the impact of the proposal, applicants must evaluate the initial conditions of their study participants. It is best to start with the problem: what is the problem and how does the intervention solve it? Set goals and expectations for the project’s design as well as its outcome. Determine what factors will be important to validate the study at its end and make sure to collect that information from the start.
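To make “collect that information from the start” concrete, a baseline record can be defined up front so every participant is measured on the same indicators before the intervention begins. The sketch below is illustrative only; the field names are hypothetical examples, and each study should choose indicators that match its own goals.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BaselineRecord:
    """One participant's initial conditions, captured at enrollment,
    before the intervention begins. Field names here are illustrative
    placeholders, not a required schema."""
    participant_id: str
    enrollment_date: date
    age_months: int
    weight_kg: float
    vaccinations_complete: bool
    household_size: int

# Example: enrolling one participant with hypothetical baseline values.
record = BaselineRecord("P-0001", date(2024, 1, 15), 18, 9.6, False, 5)
```

Defining the record before enrollment starts forces the study team to decide, in the design stage, exactly which factors the final evaluation will depend on.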
During: Process Evaluation
This is frequently referred to as monitoring and will be unique to each proposal. In this stage, changes occurring as a result of the project must be tracked. Bookkeeping and data collection are key! Applicants should be monitoring both the health of participants and the successes and failures of their project design. Taking qualitative, observational notes throughout the duration of the intervention can help future quantitative analysis.
It is most effective to monitor the intervention group and a comparison (control) group simultaneously, since doing so reduces the influence of confounding variables. Additionally, ensure that the participant group is large and varied enough that results can be generalized beyond the sample.
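“Large enough” can be estimated before enrollment. The sketch below uses the standard normal approximation for comparing two proportions to estimate how many participants each group needs in order to detect a given change in a mortality rate; the rates, significance level, and power shown are illustrative assumptions, not values prescribed by the prize.

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate participants needed in EACH group to detect a change
    from rate p1 to rate p2, using the normal approximation for a
    two-sided test comparing two proportions."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, ~1.96 for alpha=0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Example: detecting a drop in an under-five mortality indicator
# from 8% to 5% (hypothetical rates) needs roughly a thousand
# participants per group.
print(sample_size_per_group(0.08, 0.05))
```

The takeaway for applicants: small expected effects require large samples, and the calculation should be done before recruitment, not after.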
After: Impact Evaluation
Once the project comes to an end, it is essential to evaluate the effects and influence of the intervention. Some projects will see results in the short run while others develop over time, so it is important to note the time scale of the evaluation. Compare the initial conditions with the final conditions and calculate how much of the change can be attributed to the implementation of the project.
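One common way to attribute change to the intervention, when both an intervention group and a comparison group were monitored, is a difference-in-differences calculation: the change observed where the intervention ran, minus the change that happened anyway in the comparison group. This is a minimal sketch with hypothetical numbers, and it assumes both groups would otherwise have followed the same trend.

```python
def difference_in_differences(treat_before, treat_after,
                              control_before, control_after):
    """Change in the intervention group minus change in the comparison
    group: an estimate of the change attributable to the intervention,
    under the assumption of parallel trends."""
    return (treat_after - treat_before) - (control_after - control_before)

# Example (hypothetical): mortality per 1,000 live births fell from
# 62 to 48 where the intervention ran, but also fell from 61 to 55
# in the comparison area over the same period.
effect = difference_in_differences(62, 48, 61, 55)
print(effect)  # -8: eight fewer deaths per 1,000 attributable to the intervention
```

Comparing raw before/after numbers alone would have credited the intervention with all 14 deaths averted; subtracting the comparison group’s trend gives the more honest figure.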
A successful scientific study will include but is not limited to the following points:
- background description of the problem;
- methodology that describes the study design, with the intervention and comparison (control) groups clearly outlined, along with participants, sample size, sampling method, etc.;
- instrument development and data collection plan;
- plan for ongoing monitoring;
- plan for study implementation;
- description of data entry and analysis;
- discussion of potential risks and possible mitigation strategies.
During each stage of the study, it is up to each applicant to determine what monitoring devices and data collection methods are needed in order to accurately represent the success of the project. These tools and their implementation are what is submitted for consideration to the Data for Life Prize administrators.
For more information, please refer to the reference links below or contact us at firstname.lastname@example.org.
References and Resources