
Navigating the legal pitfalls of artificial intelligence

Whether we like it or not, it seems certain that Artificial Intelligence (AI) is here to stay, and its use is only going to increase. There is currently no specific legislation governing AI in the UK; the government proposes to regulate it using existing law and sector-specific regulatory guidance. This approach is outlined in a white paper called A Pro-Innovation Approach to AI Regulation.

According to government research from 2022, approximately 15% of small businesses, around 432,000 companies, have adopted AI technology. The motivation for businesses to use AI is to save time and costs and to become more efficient. However, its use carries numerous risks and various legal implications.

The risk of inaccuracy

A primary concern about the use of AI in a legal context is its accuracy, and one assumes that some types of AI may be more useful and accurate than others. If you use a lawyer to draft a contract, you know that they have undertaken the appropriate studies and gained the required qualifications, and that in itself should give you peace of mind. However, it’s equally possible that, over time, AI could turn out to be less prone to error than human beings, no matter how qualified they are.

As well as drafting contracts, it’s also increasingly common for people to use AI to prepare or draft written submissions in court proceedings, or to help with the court process generally. However, AI is only as good as the parties who created it, and, after all, its output is neither created nor checked by a lawyer.

Linked to accuracy is the issue of liability. If you use a lawyer, not only can you assume that the content is correct, but you also know who is likely to be liable if it is not. The law generally requires a “legal personality” to bear legal responsibility, which means that ultimate responsibility for the acts and omissions of an AI system lies with its human or corporate creators, suppliers and users.

After all, you cannot sue a robot or a computer. A lawyer in New York is being sued because he used ChatGPT to research precedent case law, and six out of the seven cases he cited had been completely made up by the AI.

“A lawyer in New York is being sued because he used ChatGPT to research precedent case law, and six out of the seven cases he cited had been completely made up by the AI”

Where businesses are contracting for an AI system, the terms and conditions will be vital in apportioning risks and liability. However, when the end user is a consumer (someone acting outside of a business, i.e., a member of the general public), it will prove much more difficult for the AI provider to “contract out” of liability, because there is legislation which prohibits this.

Beware AI bias and discrimination

Clearly, there are also major ethical concerns about the use of AI. Its use, and the results it produces, could be unexpected or unfair. For example, the data used to “train” the AI could contain bias, which would then be replicated in the content it produces. It goes without saying that there is also the future impact that AI is likely to have on the job market and on society in general.

The use of AI-driven algorithms to recruit new staff and screen CVs has increased in recent years. The Equality Act 2010 prohibits discrimination by employers (and also service providers) on the grounds of certain protected characteristics, such as age, sex or race. However, as numerous real-life examples have shown, whilst AI is data-driven and may therefore reduce human bias, it is typically trained on historical or narrow data sets, and that data may itself be flawed, inaccurate or biased. This may then lead to unintended unlawful discrimination in recruitment and other employment decision-making processes.
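By way of illustration only, a business auditing an AI screening tool might compare shortlisting rates across a protected characteristic. The sketch below is a minimal, hypothetical Python example; the data, group labels and the 0.8 “review” threshold are assumptions made for illustration and are not requirements of the Equality Act 2010.

```python
# Illustrative sketch only: compare an AI CV-screening tool's shortlisting rates
# across a hypothetical protected characteristic. The 0.8 threshold is a common
# rule of thumb for flagging possible adverse impact, not a legal test.
from collections import defaultdict

# Hypothetical tool output: (candidate's recorded group, shortlisted?)
screening_results = [
    ("female", True), ("female", False), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False), ("male", False),
]

def selection_rates(results):
    """Return the proportion of candidates shortlisted within each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [shortlisted, total]
    for group, shortlisted in results:
        counts[group][0] += int(shortlisted)
        counts[group][1] += 1
    return {group: sel / total for group, (sel, total) in counts.items()}

rates = selection_rates(screening_results)
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: shortlisted {rate:.0%}, ratio to highest group {ratio:.2f} [{flag}]")
```

A check of this kind does not of itself demonstrate compliance, but a large disparity is a prompt to investigate the training data and the decision criteria before relying on the tool.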

Who owns the intellectual property?

A significant area of law being impacted by AI is intellectual property (IP). IP owners can enforce their rights to prevent others from benefitting from their creations, by seeking injunctions and/or damages/compensation, but what is the position regarding work created by AI? For example, in copyright law, can works created using AI meet the "author's own intellectual creation" originality test, and if so, who has the benefit and ownership of the work?

“In relation to patent law, a 2021 Court of Appeal case ruled that a machine is not an inventor for the purposes of UK law”

With this in mind, Google may be facing a court claim from the Daily Mail in relation to alleged copyright infringement, as it has allegedly used thousands of the Mail’s articles to help “train” its own AI. It will be interesting to see how this case, and the law itself, develops.

In relation to patent law, a 2021 Court of Appeal case ruled that a machine is not an inventor for the purposes of UK law. However, this has been appealed by the creator of the AI, who asserts that the law does not require a human inventor and that he can therefore be granted patents. A judgment is expected later in 2023, and the case serves as yet another example of the fluid nature of, and uncertainty surrounding, AI and the law.

AI and data protection

AI must comply with existing data protection laws. The relevant law is contained in the Data Protection Act 2018 (DPA 2018) and the General Data Protection Regulation, now retained in domestic law as the UK GDPR, whilst the new Data Protection and Digital Information (No 2) Bill is still making its way through Parliament. Data controllers can, in principle, use AI, so long as they comply with the general data protection principles and have a lawful basis for processing.

The principle of fair and transparent processing lies at the heart of the UK GDPR. For processing to be fair and transparent, the data controller must provide data subjects with concise, transparent, intelligible and easily accessible information about its data processing activities, including any profiling and the existence of any automated decision-making.

Automated decision-making is the making of decisions about an individual based solely on automated means, without any human involvement, whilst profiling is any form of automated processing of personal data used to evaluate certain personal aspects of an individual.

To help achieve compliance, a business should have a privacy notice/policy that tells data subjects about the business, what it will do with their data, the grounds for processing that data, who it is shared with, and who to contact if there is a problem. It’s a good way of showing that the business cares about people’s data and is transparent about what it does with it. Privacy notices should be written in plain English, flagged up in a prominent position (e.g., on a website), and should outline any use of AI, where appropriate.

Facial recognition and deepfakes

Other hot topics include facial recognition technology, and how compatible it is with the subject’s human rights and data protection rights, and deepfakes: fake videos, pictures and sound recordings created to look like realistic depictions of people doing or saying things that they have never done or said.

Clearly, these could lead, amongst other things, to potential defamation claims by the person whose identity has been used without their consent to create misleading content. The ‘victim’ will, of course, be able to deny liability for anything they are supposed to have said or done but actually did not.

It very much remains a case of “watch this space” as this fascinating area of the law continues to develop.