Policy Monitor

United Kingdom - AI Safety Institute

The UK government has established an AI Safety Institute focused on advanced AI safety in the public interest. Its mission is to minimize surprise to the UK and humanity from rapid and unexpected advances in AI. The institute will develop the sociotechnical infrastructure needed to understand the risks of advanced AI and enable its governance.

What: Administrative decision

Impact score: 2

For whom: Government, businesses, researchers

URL: https://www.gov.uk/government/publications/ai-safety-institute-overview

Key takeaways for Flanders:

The institute's establishment can be relevant to the Flemish AI Research Program for several reasons:

  • Shared goals: both organisations aim to advance AI technology while ensuring its safe and beneficial use.
  • Safety: the work of the AI Safety Institute can complement the Flanders AI Research Program's work on Responsible AI.
  • Collaboration opportunities: collaborating with established institutes like the UK AI Safety Institute could enhance the Flemish program's research capabilities and broaden its impact.

The UK AI Safety Institute's stated mission is to “minimize surprise to the UK and humanity from rapid and unexpected advances in AI”, which it pursues by “developing the sociotechnical infrastructure needed to understand the risks of advanced AI and enable its governance”. The institute is not a regulator; rather, it provides foundational insights into the UK's governance regime and will play a leading role in ensuring an evidence-based, proportionate response to regulating the risks of AI.

The institute has three main functions:

  • Develop and conduct evaluations of advanced AI systems: This includes evaluating dual-use capabilities, societal impacts, system safety and security, and potential loss of control.
  • Drive foundational AI safety research: The institute recognizes that system evaluations alone cannot ensure safe and beneficial AI, so it is committed to foundational safety research that deepens understanding of the risks posed by advanced AI systems and develops the tools needed for effective AI governance. This research supports both short- and long-term governance, ranging from the rapid development of tools to inform governance to exploratory AI safety research, and includes building products for AI governance, improving the science of evaluations, and pursuing novel approaches to safer AI systems. The institute will collaborate with a range of organisations, including international ones, and focus on research areas insufficiently explored by academia or industry, aiming to forge a scientific consensus on the state of AI and its associated risks. A first study examines the evaluation process for assessing the capabilities of the next generation of advanced AI systems.
  • Facilitate information exchange: This includes mitigating “insight gaps” between industry, governments, academia, and the public through incident reporting for harms and vulnerabilities, sharing usage data, and providing technical support to the rest of the government.

In terms of partnerships, the Safety Institute has already formed international collaborations with the new US AI Safety Institute and the Government of Singapore to work on AI safety testing. Additionally, the institute will work closely with academia, civil society, the national security community, and industry to further its mission.