Matching grants are among the most common policy instruments used by developing country governments to foster technological upgrading, innovation, exports, use of business development services, and other activities leading to firm growth. However, because they subsidize firms, the risk is that they crowd out private investment, subsidize activities that firms were planning to undertake anyway, or generate purely private gains rather than the public gains that justify government intervention. Rigorous evaluation of the effects of such programs is therefore important. The authors attempted to implement randomized experiments to evaluate the impact of seven matching grant programs offered in six African countries, but in each case were unable to complete an experimental evaluation. One critique of randomized experiments is publication bias, whereby only those experiments with "interesting" results get published. The hope is to mitigate this bias by learning from the experiments that never happened. This paper describes the three main proximate reasons for lack of implementation: continued project delays, politicians unwilling to allow random assignment, and low program take-up; it then delves into the underlying causes of these failures. Political economy, overly stringent eligibility criteria that do not take account of where value added may be highest, a lack of attention to detail in "last mile" issues, the incentives facing project implementation staff, and the way impact evaluations are funded all help explain the failure of randomization. Lessons are drawn from these experiences for both the implementation and the possible evaluation of future projects.
Campos, F.; Coville, A.; Fernandes, A.M.; Goldstein, M.; McKenzie, D. Learning from the experiments that never happened: lessons from trying to conduct randomized evaluations of matching grant programs in Africa. The World Bank, Washington DC, USA (2012), 2 + 34 pp.