- Ethical and rigorous experimentation identifies and attributes the impacts of interventions in a robust, unbiased way, generating the evidence needed to design and deliver products, services and programs that improve outcomes.
- It is less risky and less costly to know what works and what doesn’t on a smaller scale, before investing in scaling up products, services or programs.
- Experiments are repeatable by design and can be used to inform ongoing iteration and improvements to products, services and program delivery.
- Larger experiments with multiple treatment groups allow for the comparison of multiple options, giving decision-makers more informed choices from which to select.
- Lessons learned can also be generalized and applied in areas unrelated to the original experiment.
- The flexibility of experimental methods allows them to be adapted to unique situations and challenges.
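The comparative logic described above — participants randomly assigned to treatment and control groups, with impact estimated as the difference in outcomes between them — can be sketched in a few lines. Everything below is a hypothetical illustration: the group sizes, outcome scale, and assumed effect are invented, and a real trial would use observed data and a formal statistical test rather than a raw difference in means.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical trial: 200 participants, half randomly assigned to treatment.
participants = list(range(200))
random.shuffle(participants)
treatment_ids = set(participants[:100])

def simulated_outcome(pid: int) -> float:
    """Simulate an outcome on a 0-100 scale; the treatment is
    assumed (purely for illustration) to add about 2 points."""
    base = random.gauss(50, 10)
    lift = 2.0 if pid in treatment_ids else 0.0
    return base + lift

outcomes = {pid: simulated_outcome(pid) for pid in participants}
treated = [outcomes[p] for p in participants if p in treatment_ids]
control = [outcomes[p] for p in participants if p not in treatment_ids]

# Because assignment was random, the difference in group means is an
# unbiased estimate of the intervention's impact.
impact = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated impact: {impact:.2f} points")
```

With more than two arms, the same comparison is simply repeated against the control group, which is what lets decision-makers weigh several options at once.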
- Not everything is amenable to experimentation for a variety of ethical, methodological, and logistical reasons.
- Experimentation can take time to get right, both in generating meaningful results and in designing an approach that can actually test the hypothesis that was formulated. Measuring what works can also be hindered by a lack of baseline data.
- A lack of capacity, support, and resources can easily frustrate attempts at rigorous experimentation. Without investment or a culture that accepts and learns from failure, experimentation may not meet its potential. To remedy this, organizations should devote a percentage of program funds to supporting and conducting regular experimentation.
- Experimentation provides an established method for testing new approaches to programs and policies, generating data on a particular problem or challenge.
- The comparative nature of experimentation helps inform whether a particular intervention is optimized for a specific problem or challenge.
- Experimentation supports the policy development cycle by connecting program and service design, implementation and evaluation.
- Experiments enhance program value for money by helping decision-makers allocate resources towards what works and away from what does not.
- Experimentation has the potential to introduce features of research and development (R&D) into the social sector, an approach that has generated significant value and breakthroughs in the private sector.
- Experiments use impact indicators to support clear and transparent reporting of program impact and results to program managers, senior executives, Parliament, and the public - an essential component for evidence-based decision making.
- The term “experimentation” has negative connotations in some policy areas, so alternative language may be required, e.g. “investigating what works.”
- The quality of an experiment’s results relies upon a well-planned experimental design that includes a clear hypothesis and an appropriate means to measure impact.
- Factors external to the experiment may affect whether solutions can be effectively scaled up.
- The capacity and skills required to design and run experiments vary across the Government of Canada; you may need to secure expertise from an outside party.
- Experimentation in government requires a culture that accepts risk-taking and failure as necessary parts of the policy development and learning process.
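One concrete aspect of the well-planned design called for above is choosing a sample large enough to detect the expected impact. A common rule of thumb is the normal-approximation formula for a two-arm trial, sketched below. The σ (outcome variability) and δ (smallest effect worth detecting) values in the example are hypothetical.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(sigma: float, delta: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants needed per arm to detect a mean
    difference of `delta` when the outcome's standard deviation is
    `sigma`, using n ≈ 2 * (z_{1-α/2} + z_{power})² * σ² / δ²."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # e.g. 1.96 for a two-sided 5% test
    z_beta = z.inv_cdf(power)           # e.g. 0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Hypothetical numbers: outcome std. dev. of 10 points, and we want
# to reliably detect a 2-point improvement.
print(sample_size_per_arm(sigma=10, delta=2))  # → 393 per arm
```

The formula makes the trade-off explicit: halving the detectable effect size quadruples the required sample, which is one reason experiments take time and resources to get right.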
Government of Canada
- In 2010, ESDC funded the UPSKILL project to measure the impact of essential skills training on workers’ skills and job performance. The project demonstrated positive returns for workers, employers, and government.
- In 2014, the Public Health Agency of Canada leveraged the UPSKILL research through a sub-study of the impact of essential skills training on health, and connections between health and job performance (UPSKILL Health). UPSKILL Health found that workplace essential skills training has considerable potential to achieve health outcomes at a population level.
- Natural Resources Canada Energy Efficiency Rewards Pilot tested whether a points-based rewards mobile application could increase Canadians’ energy efficiency behaviours. This is an example of using a specific non-experimental approach (in this case, a pre-/post-test) to gather insights about potential impact.
Best in Class
- The Innovation Growth Lab is a global collaboration that aims to enable, support, undertake and disseminate high impact research that uses randomised controlled trials to develop and test different approaches to support innovation, entrepreneurship and growth.
- The What Works Network in the UK aims to improve the way government and other organizations generate and use evidence for decision-making to improve public services.
- The Laura and John Arnold Foundation (LJAF) in the US helps governments build an evidence-base for social interventions through initiatives such as low-cost randomized controlled trials to drive effective social spending.
- The Abdul Latif Jameel Poverty Action Lab (J-PAL) is a network of 136 affiliated professors from over forty universities with a mission to reduce poverty by ensuring that policy is informed by scientific evidence.
- Social Research and Demonstration Corporation is a Canadian-based research organization that develops, field tests, and rigorously evaluates social programs at scale and in multiple locations before they become policy and are implemented on a broader basis.
- GoogleX’s mission is to invent, test, and launch new technologies using experiments.
- Nesta - Better Public Services through Experimental Government
- Innovation Growth Labs - Experimental Innovation and Growth Policy: Why do we need it?
- Mowat Centre - Better Outcomes for Public Services
- Nesta - Running Randomised Controlled Trials in Innovation, Entrepreneurship and Growth: An Introductory Guide
- Haynes, Service, Goldacre, and Torgerson, “Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials.”
- What Works Network - What Works? Evidence for Decision Makers
- Nesta - Using Research Evidence: A Practice Guide
- University of California, San Diego - Designing, Running and Analyzing Experiments