Description
The aim of the Mind the Gap impact evaluation was to establish broadly whether its learner self-instruction materials made a difference to Grade 12 learner performance in the high-stakes NSC examination, and to consider taking them to scale if they did. It asked three questions: did the materials improve learner performance; was the improvement seen in all four subjects; and which learners benefited most from the intervention?
Adopting an impact/RCT design and using secondary data sourced from a website publishing NSC results, the evaluators found that the materials did improve performance, that performance improved in two of the four subjects, and that more able learners benefited most from using the materials. The findings were not statistically significant, however, and so need to be treated cautiously, especially in any discussion of taking the materials to scale. In this regard, the authors should also have noted the usual practice in RCTs of establishing statistical significance and testing for reliability, among other checks, before discussing use. As it stands, the evaluation appears to be a rigorous and carefully implemented impact evaluation, but its claims may overstate what can be done with its findings. Because impact evaluation is an approach recognised by government and in the evaluation literature, it has the advantage of being open to debate, unlike approaches which are not. It should also be noted that this application of impact evaluation is unusual in relying on secondary sources of data rather than, as in more typical applications, on test development, on-site testing, and the like. The assessment score of 3.30 appears about right for the evaluation as it stands, which is likely to have benefited from critical comment from peers external to the DBE.