With AI taking over news feeds and conversations around the world, the tech is beginning to attract the attention of legislators. As Britain and other countries race to become key players in AI regulation, we examine how businesses could be affected.
Britain as the home of AI regulation
On a recent visit to the US, Rishi Sunak expressed his view that Britain has a pivotal role to play when it comes to AI. “You would be hard-pressed to find many other countries other than the US in the western world with more expertise and talent in AI. We are the natural place to lead the conversation,” he said.
Britain will be hosting the first global summit on AI regulation in London this autumn, which Sunak hopes will cement the country’s role as a key player in AI. In line with this, the UK government has allocated £100 million to a Foundation Model Taskforce focused on AI safety. Inspired by the UK’s successful Covid-19 vaccine taskforce, the group will focus on the safe development of foundation models: AI models trained on vast swathes of data which have broad capabilities across a range of tasks, and which power tools like ChatGPT and Google’s Bard.
In a positive sign for the UK, OpenAI, the company behind ChatGPT, recently chose London as the location of its first international office. But can Britain really become the centre of AI legislation? Post-Brexit, it makes sense that the UK would want to develop itself as an AI hub. Dr Mateja Durovic, Director at the Centre of Digital Law at King’s College London and Co-Director of the Centre for Technology, Ethics, Law and Society, says that “London remains one of the commercial capitals of the world”. To strengthen that position, Durovic explains that the UK needs to “demonstrate innovation and support new technologies, with AI being the most important and biggest development”. Having proper legislative processes in place proves that a country is serious about embracing new technology, but legislation must strike a balance between ensuring safety and nurturing innovation.
Predicting the shape of any legislation is difficult, because the state of AI is changing rapidly. “You don’t want to impose any kind of disproportionate regulatory obstacle”, warns Durovic. Equally, though, the regulations must not be easily circumvented or become outdated – it’s a delicate balancing act.
The worldwide regulation race
The EU is another key player in AI regulation, having long been concerned with digitisation, data protection and other tech issues. The bloc is preparing an AI Act, which it hopes will set a global standard for the technology.
The regulation will apply a four-tier risk framework to AI systems based on the risk they pose to “health, safety and/or fundamental rights”, ranging from “minimal or no risk” to “unacceptable risk”. The European Parliament moved the Act a step closer to reality in June when it approved its draft text, but it could still be a few years before the legislation comes into force. It will be interesting to see how the UK and the EU diverge in their approaches post-Brexit, says Durovic.
Brazil has also been proactive in tackling AI: earlier this year its Senate President, Rodrigo Pacheco, presented a proposal to create a “civil rights framework” around AI. However, the proposal was drafted before the 2023 AI boom, and critics have suggested it may already be out of date and ineffectual in tackling the challenges posed by ChatGPT and similar technologies.
In the US, where many industry-leading AI tech companies are based, the White House has published a blueprint for an AI bill of rights. Compliance is voluntary for now, however, and experts believe legislation may still be some way off. Taking a slower approach at a federal level could allow the US to learn from other legislators’ mistakes, but also risks a patchwork of incompatible regulations emerging at state and local level.
The impact of regulation on businesses
Many companies globally have already incorporated AI into their business structures and operations. Others have taken a more cautious approach amid concerns about the legality of tools like Midjourney, the AI image generator that is subject to legal action by a group of artists who argue their work was used to train it without their consent.
According to The Guardian, “almost 60% of people would like to see the UK government regulate the use of generative AI technologies”. In the absence of rules, businesses are left to make ethical and privacy decisions on their own, leaving room for inconsistency and missteps.
However, rushing into regulation and getting it wrong could hinder some of AI’s benefits to businesses, for example its ability to boost productivity. Regulators must manage a delicate balance between safety and innovation, and most importantly, not get left behind.