AI is a suite of tools with the potential to help the Government of Canada deliver services more effectively, design policy more responsively, and enable an entirely new set of capabilities. As the set of applications is diverse, its potential impact on the public sector is wide-ranging. Institutions have been examining applications that can be organized into three interdependent themes:
- Applying AI to the internal services of government e.g. information management, automated content generation, people management, and security and access management.
- Applying AI to the delivery of services to the public e.g. smarter searches, chatbots, automated decision support.
- Applying AI to help design policy and respond to risk e.g. identifying patterns in data that humans could not previously detect, predicting the impact of our work with greater precision, and understanding future pressures on programs.
With so many potential use cases promising to improve policy and services, enthusiasm for AI in government has been high. Unfortunately, improper application of this technology can lead to negative outcomes for users, from frustrating service experiences to mistaken denials of benefit eligibility and biased profiling.
As AI systems grow to operate in increasingly sophisticated spaces, they act on behalf of the Crown and should be subject to values, ethics, and laws similar to those governing public servants, as well as adherence to international human rights obligations. In addition, the following considerations, which are more fully explored in the white paper, should be taken into account:
- Ensuring High-Quality Data - AI applications are only as effective as the quality and quantity of their input data.
- Prevention of Data Bias - AI systems are not neutral; they will learn the biases of their programmers and of the datasets used to train them.
- Data for Insights and Privacy Rights - While many of the privacy risks brought by AI are not fundamentally new, the magnitude of data collected, the ability to manipulate this data beyond what a human is capable of, and the ability to craft programs that can subtly manipulate human behaviour (e.g. targeted ads) bring a new dimension to these risks.
- Accounting for the Actions of AI: The “Black Box” Problem - Much like the human brain, it is difficult to understand in detail the entire decision-making process of an AI system (particularly in the case of artificial neural networks). This is known as the “black box” problem.
- Social Acceptability - If AI is going to make decisions, recommendations, or help design policy, there needs to be a sufficient level of social trust that these systems work, and work to the population’s benefit. If that trust and support do not exist, then these tools will fail.
Existing analytical models have already given the Government of Canada the ability to better understand certain social or environmental outcomes to policy, but new methods may be able to identify patterns in data that humans previously could not. This could enable us to make more precise and informed predictions than ever before.
Governments work with big problems. We work in an environment often marked by complex, interdependent systems, where small policy changes can result in massive impacts for a population or the economy. If we can use data to predict the impact of our work with greater precision, or to understand future pressures on social or economic programs, then we can respond more efficiently, act proactively, and ensure that regulatory resources are focused on the highest-risk elements of the industries they oversee.
There are some limitations to this approach. Predictions are extrapolations of patterns that appeared in the past, but the past is not necessarily an indicator of the future. Like all AI systems, the right quantity and quality of data will need to be accessible to make accurate predictions. There is also a risk that predictions are made using data that has been collected in a way that is biased or not fully representative of the world that we live in.
Already, many federal institutions use a method to describe and compare the degree of risk involved with providing a service to a user. This “risk scoring” technique can be an efficient method to associate an administrative action with risk. To date, this has most often been accomplished using methods that require institutions to create precise, domain-specific operational definitions of risk. These “rule-based” algorithms, while not AI, are a form of automation that has been shown to enable service by reducing the compliance and enforcement burden on lower-risk users.
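The rule-based approach described above can be sketched as follows. The rules, weights, and threshold here are illustrative assumptions only, not any department’s actual criteria; the point is that each rule is an explicit, human-written operational definition of risk rather than a learned model.

```python
# A minimal sketch of rule-based risk scoring: each hand-written rule adds a
# fixed weight to a user's score, and low scores qualify for lighter review.
# Rules, weights, and threshold are illustrative only.

def risk_score(user: dict) -> int:
    score = 0
    if user.get("first_time_applicant"):     # unknown history adds some risk
        score += 2
    if user.get("incomplete_documents"):     # missing paperwork adds risk
        score += 3
    if user.get("prior_compliance_issues"):  # past issues add the most risk
        score += 5
    return score

LOW_RISK_THRESHOLD = 4  # scores below this get streamlined processing

user = {"first_time_applicant": True,
        "incomplete_documents": False,
        "prior_compliance_issues": False}
score = risk_score(user)
print("low-risk" if score < LOW_RISK_THRESHOLD else "full review")  # → low-risk
```

Because every rule and weight is explicit, this kind of system avoids the “black box” problem noted earlier, at the cost of requiring institutions to define and maintain the rules themselves.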
Examples in the Government of Canada:
- Global Public Health Intelligence Network uses machine learning to detect outbreaks
- Transport Canada conducts automated risk scoring of air cargo
- Canada Border Services Agency and Immigration, Refugees and Citizenship Canada use automated decision support
- Public Services and Procurement Canada is analyzing contracts for language precision
- Statistics Canada applies machine learning to census data to fill in missing information based on the “best combination of respondent characteristics”
- Various departments are conducting semantic and sentiment analysis of consultations
- The Public Health Agency of Canada has awarded a contract for an Artificial Intelligence (AI) pilot project for surveillance of suicide-related behaviours using social media
- Treasury Board Secretariat will be conducting analysis of regulations using semantic analysis
- Translation Bureau is experimenting with machine translation
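The census example above involves imputing missing values from similar respondents. A toy sketch of one such donor-based (“hot deck”) approach appears below; the fields, similarity measure, and data are invented for illustration and are not Statistics Canada’s actual method.

```python
# A toy sketch of donor-based ("hot deck") imputation: a record with a missing
# field borrows the value from the most similar complete record.
# Fields, similarity measure, and data are illustrative only.

RECORDS = [
    {"age": 34, "region": "ON", "income": 52000},
    {"age": 36, "region": "ON", "income": 55000},
    {"age": 61, "region": "BC", "income": 48000},
]

def impute_income(incomplete: dict) -> int:
    """Fill a missing income from the closest matching complete record."""
    def similarity(donor: dict) -> int:
        s = 0
        if donor["region"] == incomplete["region"]:
            s += 10                                  # same region: strong match
        s -= abs(donor["age"] - incomplete["age"])   # closer age: better match
        return s
    best_donor = max(RECORDS, key=similarity)
    return best_donor["income"]

print(impute_income({"age": 36, "region": "ON"}))  # → 55000
```

Real imputation methods weigh many more respondent characteristics and are carefully validated, but the structure is the same: define similarity, find the best donor, and copy its value.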