Future of AI & Robotics 2022

Tracking AI incidents can help protect citizens from the risks of AI

Karine Perset

Head of OECD.AI

While AI provides tremendous benefits, it can also cause harm, leading to negative incidents and generating risks that fuel public anxiety.


Some algorithms incorporate biases that discriminate against people for their gender, race or socioeconomic condition. Others have manipulated individuals by influencing their choices of what to believe or how to vote. Some AI-powered autonomous vehicles have even caused fatal accidents. These negative outcomes — or AI incidents — are as diverse in nature as the environments and industries where they happen.

Regulation and risk dominate the debate

Governments around the world are addressing these risks as they look to protect citizens’ rights and democracies with safeguards. They want to ensure that AI is trustworthy and that it benefits people and the planet. Most are converging around risk-based AI policies.

The trick is to strike the right balance. Heavy regulation could be hard to enforce and stifle innovation, while light control could allow AI risks to continue and even make some situations worse. What's more, rapid developments in AI make it hard to design policies and regulations that stand the test of time.

AI-related legislative proposals are gaining traction in many countries, with particularly strong momentum in heavyweights like the European Union, the United States, China and Brazil. The UK has proposed a regulatory framework for AI systems that is "proportionate, light-touch and forward-looking", according to Nadine Dorries, the Secretary of State for Digital, Culture, Media and Sport.

An international database for AI incidents

Much like climate action, AI knows no borders. This means no single country or economic actor can tackle AI-related risks alone. National governments must think about designing policies that are interoperable on an international scale.

Effective legislation and government policies will need facts about AI incidents, and where they are materialising, to guide sound decision-making. For the moment, governments are having a hard time agreeing on what constitutes an AI incident. While such a definition is not the end goal, it is an important first step.

Allowing AI to develop effectively

Luckily, governments and all stakeholders are coming together under the OECD AI Policy Observatory to develop standards and mechanisms for tracking AI incidents and risks of all types and origins. This will give governments a shared evidence base to help make informed decisions that protect citizens and democracies while allowing AI to flourish.
