Artificial intelligence (AI) is a term used to describe a suite of related technologies intended to simulate and enhance human cognitive capabilities, such as pattern recognition, judgment, vision, or hearing. Its evolution can be divided into three stages:
AI, as it currently exists (weak AI), can be classified as having narrow intelligence in that it equals or exceeds human intelligence or efficiency, but only in a specific area. We are now at a point where weak AI can not only replicate many human tasks; it can come close to surpassing our effectiveness at certain tasks, such as recognizing the subjects of images or reading lips.
Artificial general intelligence (strong AI), in which a machine could successfully perform any intellectual task that a human can, has yet to arrive, and experts do not agree on how long it will be before it does.
AI superintelligence (sometimes called the singularity) is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. This is currently the AI of dystopian (and sometimes utopian) fiction.
Since AI is a term used to describe a suite of related technologies, it is important to parse some of these out:
- Machine learning is a method by which algorithms can be trained to recognize patterns within information and the ways in which data interrelate.
- Deep learning is a branch of machine learning. While many machine learning algorithms use labeled data, deep learning brought the capability of using unstructured data, such as audio or visual data, allowing the system to extract features from the information on its own.
- Artificial neural networks, inspired by the human brain, are composed of artificial neurons, which receive data individually and calculate outputs independently, allowing a complex problem to be broken down into millions of simple problems and then reassembled as one answer. As the network is provided more data, it can identify new and complex relationships in data, much like how the human brain forms synapses.
- Natural language processing allows computers to parse meaning and context out of written text.
- Machine vision and hearing provide machines with the capability of structuring and using typically unstructured data, such as imagery or sound.
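The machine learning idea above, algorithms trained to recognize patterns in data, can be sketched in a few lines. The following is a toy illustration only (a one-nearest-neighbour classifier with invented points and labels), not an example drawn from any Government of Canada system:

```python
import math

# Toy sketch of "learning patterns from data": a 1-nearest-neighbour
# classifier. The training points and labels are invented for
# illustration; real systems train on far larger datasets.
train = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

def classify(point):
    """Label a new point with the label of its closest training example."""
    nearest = min(train, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

print(classify((1.1, 0.9)))  # a point near the "cat" cluster
print(classify((5.1, 4.9)))  # a point near the "dog" cluster
```

The "training" here is simply storing examples; more sophisticated methods instead fit a model to the examples, but the principle of generalizing from past data to new inputs is the same.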
This tile will explore how weak AI may be used in policy making.
AI is a suite of tools with the potential to help the Government of Canada deliver services more effectively, design policy more responsively, and enable an entirely new suite of capabilities. As the set of applications is diverse, its potential impact on the public sector is wide-ranging. Institutions have been examining applications that can be organized into three interdependent themes:
- Applying AI to the internal services of government e.g. information management, automated content generation, people management, and security and access management
- Applying AI to the delivery of services to the public e.g. smarter searches, chatbots, automated decision support
- Applying AI to help design policy and respond to risk e.g. identifying patterns in data that humans were previously incapable of detecting; predicting the impact of our work with greater precision; understanding future pressures on programs
With all of these potential use cases promising to improve policy and services, enthusiasm for AI in government has been high. Unfortunately, improper application of this technology can lead to negative outcomes for users, from frustrating service experiences, to being mistakenly denied eligibility for benefits, to biased profiling of people.
As AI systems grow to operate in increasingly sophisticated spaces, they act on behalf of the Crown and should be subject to the same values, ethics, and laws as public servants, as well as to international human rights obligations. In addition, the following considerations, which are more fully explored in the white paper, should be taken into account:
- Ensuring High-Quality Data - AI applications are only as effective as the quality and quantity of their input data.
- Prevention of Data Bias - AI systems are not neutral; they will learn the biases of their programmers and of the datasets used to train them.
- Data for Insights and Privacy Rights - While many of the privacy risks brought by AI are not fundamentally new, the magnitude of data collected, the ability to manipulate this data beyond what a human is capable of, and the ability to craft programs that can subtly manipulate human behaviour (e.g. targeted ads) bring a new dimension to these risks.
- Accounting for the Actions of AI: The “Black Box” Problem - Much like with the human brain, it is difficult to understand in detail the entire decision-making process of an AI system (particularly in the case of artificial neural networks). This is known as the “black box” problem.
- Social Acceptability - If AI is going to make decisions, make recommendations, or help design policy, there needs to be a sufficient level of social trust that these systems work, and work to the population’s benefit. If that trust and support do not exist, then these tools will fail.
Existing analytical models have already given the Government of Canada the ability to better understand certain social or environmental outcomes of policy, but new methods may be able to identify patterns in data that humans previously could not. This could enable us to make more precise and informed predictions than ever before.
Governments work with big problems. We work in an environment often marked by complex, interdependent systems, where small policy changes can result in massive impacts for a population or the economy. If we can use data to predict the impact of our work with greater precision, or to understand future pressures on social or economic programs, then we can respond more efficiently, act proactively, and ensure that regulatory resources are focused on the highest-risk elements of their industries.
There are some limitations to this approach. Predictions are extrapolations of patterns that appeared in the past, but the past is not necessarily an indicator of the future. As with all AI systems, the right quantity and quality of data will need to be accessible to make accurate predictions. There is also a risk that predictions are made using data that has been collected in a way that is biased or not fully representative of the world that we live in.
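The point that predictions are extrapolations of past patterns can be made concrete with a small sketch. The following Python snippet fits an ordinary least-squares line to invented yearly caseload figures and projects it one year forward; the numbers and the program context are hypothetical, and the projection is only as trustworthy as the assumption that the past trend continues:

```python
# Sketch of prediction-as-extrapolation: fit a straight line to past
# observations and project it forward. The yearly figures are invented.
years = [2015, 2016, 2017, 2018]
caseload = [100, 110, 121, 130]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(caseload) / n
# Ordinary least-squares slope and intercept.
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(years, caseload)) / \
        sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

def predict(year):
    """Extrapolate the fitted trend; only as good as the past pattern."""
    return slope * year + intercept

print(round(predict(2019), 1))  # projected caseload for the next year
```

If the underlying process changes (a policy reform, an economic shock), the fitted line still confidently extrapolates the old pattern, which is precisely the limitation described above.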
Already, many federal institutions use a method to describe and compare the degree of risk involved with providing a service to a user. This “risk scoring” technique can be an efficient method to associate an administrative action with risk. To date, this has most often been accomplished using methods that require institutions to create precise, domain-specific operational definitions of risk. These “rule-based” algorithms, while not AI, are a form of automation that has been shown to be service-enabling by reducing the compliance and enforcement burden on lower-risk users.
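A rule-based risk-scoring algorithm of the kind described above can be sketched as follows. The rules, point values, and threshold here are invented for illustration and are not drawn from any real federal program:

```python
# Sketch of a "rule-based" risk-scoring algorithm: precise,
# domain-specific rules each add points to a score, and the total
# drives the level of scrutiny. All rule names, point values, and the
# threshold are hypothetical.
RULES = [
    ("first_time_applicant", 10),
    ("incomplete_documents", 25),
    ("prior_compliance_issue", 40),
]

def risk_score(flags):
    """Sum the points of every rule whose flag applies to this file."""
    return sum(points for name, points in RULES if name in flags)

def triage(flags):
    """Route low-risk files to an expedited stream, the rest to review."""
    return "expedited" if risk_score(flags) < 30 else "manual review"

print(triage({"first_time_applicant"}))
print(triage({"incomplete_documents", "prior_compliance_issue"}))
```

Because every rule and threshold is written out explicitly, such a system is fully auditable, which is the key contrast with the “black box” problem of learned models discussed earlier.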
Government of Canada
- Global Public Health Intelligence Network uses machine learning to detect outbreaks
- Transport Canada conducts automated risk scoring of air cargo
- Canada Border Services Agency and Immigration, Refugees and Citizenship Canada use automated decision support
- Public Services and Procurement Canada is analyzing contracts for language precision
- Statistics Canada applies machine learning to census data to fill in missing information based on the “best combination of respondent characteristics”
- Various departments are conducting semantic and sentiment analysis of consultations
- The Public Health Agency of Canada has awarded a contract for an Artificial Intelligence (AI) pilot project for surveillance of suicide-related behaviours using social media
- Treasury Board Secretariat will conduct semantic analysis of regulations
- Translation Bureau is experimenting with machine translation
Interview with Gayemarie Brown at the 2018 Policy Community Conference
Google Duplex (Artificial Intelligence)