policy monitor

United Kingdom - FCA Digital Sandbox

The UK Financial Conduct Authority (FCA) has been piloting a Digital Sandbox project. Valuable lessons on how to conceive and operationalize the regulatory sandboxes envisioned by the AI Act can be drawn from this initiative.

What: pilot project for a digital sandbox

Impact score: 2

For whom: regulators, technology companies, public authorities

URL: https://www.fca.org.uk/firms/i...

Key take-away for Flanders: Flanders has significant experience with testing facilities such as imec’s Living Labs. Incorporating their capabilities into a digital sandbox could boost its attractiveness to innovators.

Summary

Background

The ongoing discussions around the proposed AI Act touch upon many issues related to human rights protection, technological transparency, sovereignty and the facilitation of innovation. Boosting innovation is a central topic in the debate around regulatory sandboxes for AI, and it inspired the significant evolution of their legal status in the latest draft of the AI Act. The ambition to turn regulatory sandboxes into innovation facilitators is met with challenges such as the large number of stakeholders, the lack of experience and the inherent limitations of regulatory sandboxes. These challenges have cast doubt on the aptness of regulatory sandboxes to positively and substantially influence AI innovation in the EU.

In the UK, the birthplace of regulatory sandboxes, the discussions revolve not so much around regulatory sandboxes for AI as around the evolution and improvement of the traditional governance model. Accordingly, since 2020 the UK FCA has run two Digital Sandbox pilots. The idea was to evaluate its capabilities, acquire know-how and potentially adapt the approach for a future in which digital and traditional regulatory sandboxes co-exist. The results of the two pilots showed the opportunities created by the digital sandbox, but also highlighted the weaknesses of the model and the need for a customized approach.

Digital Sandboxes

In order to put these conclusions into context, we first need to explain what a digital sandbox is and how it differs from a traditional regulatory sandbox. As a point of reference, we use the second pilot, which took place between November 2021 and March 2022. This second pilot focused on sustainability by testing and developing innovative products and services in the field of environmental, social and governance (ESG) data and disclosure. The pilot relied on three market use cases:

  1. using technology to enable transparency in disclosure and sustainability reporting, with a special focus on corporate assets and the full profile of the supply chain;
  2. using technology to improve consumers’ understanding of the products and providers they engage with and to provide them with alternatives; and
  3. using technology to automatically validate ESG data and ESG-labelled corporate bond issuance.

Like a traditional regulatory sandbox, the digital sandbox includes an application process based on transparent entrance criteria. These criteria are set in advance by the regulator. In this case, that regulator was the FCA, whose mandate highlights the importance of ESG data for investors and hedge funds and its impact on financial regulation.

This digital sandbox emphasizes the testing and support capabilities of the process, as well as the tools provided to that end. In contrast, this appears to be a secondary concern for regulatory sandboxes for AI, as conceived under the AI Act, which seem to focus on compliance activities.

The novel testing element builds on the lessons learnt from the first pilot, which showed that one of the major obstacles for innovators continues to be the lack of data. The second pilot therefore explored the opportunities created by providing sandbox participants with synthetic data relevant to their product or service. This was achieved by involving additional stakeholders, platformizing the process and providing access to new collaborative tools such as an API marketplace. The regulator’s participation ensured the data was used in a legally compliant way. Its involvement also created a channel for further work on elements of legal compliance related to the FCA’s mandate.

To summarize, the digital sandbox aims to provide a collaborative testing environment for novel technological solutions at different stages of their development, and it endeavours to overcome the scalability issue of traditional regulatory sandboxes. Rather than focusing narrowly on legal compliance, the digital sandbox facilitates innovation by promoting and enabling participants’ access to reliable and suitable data collected or generated in accordance with the relevant legal requirements. In contrast, the Spanish regulatory sandbox pilot envisions a web-based tool for self-assessment of the compliance of the participating ‘systems’. That approach limits the sandbox’s scope to the testing of AI products and services at much higher technology readiness levels.

The digital sandbox pilot also showed some weaknesses. For example, due to a lack of time it was not always able to cater to the specific data needs of all participants. Some use cases were overly broad, while others were confronted with the unavailability of particular datasets. A possible way to mitigate such weaknesses would be to involve existing organisational structures and stakeholders, such as innovation hubs and/or federations of innovation hubs, which could facilitate access to infrastructure and to reliable and suitable data for the participants in the digital sandbox.

The analysis of the participants and the technologies tested showed that in almost half of the cases the applications involved some form of machine learning, followed by distributed ledger technologies, natural language processing, web scraping and Big Data, and privacy-enhancing technologies (PETs). The digital sandbox can therefore be successfully utilized for AI systems, either independently or as a complement to traditional regulatory sandboxing.

Conclusion

The EU legal landscape is a constellation of rules operating at many different levels. Traditional regulatory sandboxing incentives, such as waivers of legal rules, may be hard or impossible to apply meaningfully in such a complex regulatory environment. Regulatory sandboxes for AI in the EU, however, currently focus on compliance purposes, which may prove ineffective as an innovation facilitator. The EU should therefore invest more in enhancing the testing capabilities of sandboxes and less in compliance activities that may come dangerously close to the unauthorized practice of law.

This post was written by Katerina Yordanova, researcher at CiTiP-KU Leuven.