
What does AI mean for cyber-security?

Artificial intelligence is arming scammers with new technology that could be used to trick employees and to break into vital systems. So how can you protect your organisation from this new threat?

The past 12 months have seen artificial intelligence technology make huge leaps in sophistication. While we once might have been impressed that a chatbot could handle a simple banking query or recommend a useful video for a work task, generative AI tools such as ChatGPT can now do everything from writing reports to designing a marketing poster. Employees are increasingly embracing such tools: a study by the Institute for the Future of Work recently found that more than two-thirds of workers surveyed thought AI could improve the quality of their jobs.

But with every new digital leap forward comes an increased risk of a cyber-security breach, according to Tom Hebbron, principal security consultant at Savanti. He describes generative AI as a “force multiplier” which can give enormous efficiency boosts to many kinds of knowledge work, unfortunately including launching cyber attacks. “In the last 12 months, with ChatGPT and other generative AI tools, there’s been a step change in terms of what AI can do and it is widely available,” he says. “What previously required some time investment, such as a highly personalised phishing email using information scoured from LinkedIn, can now be automated easily and at scale.”

A phishing scam could lead an employee to click on a link that introduces malware into the system, or enable ‘bad actors’ to gain access to key systems such as banking apps, or to sensitive data (which can be sold to other hackers on the dark web). Many businesses run multiple systems that ‘talk’ to each other and are linked by the same passwords to make things easier for users, but this can also make the impact of a breach much worse.
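
To make the shared-password risk concrete, here is a minimal sketch (with entirely hypothetical system names and passwords) of how a single phished credential exposes every system that reuses it:

```python
# A minimal sketch of why shared passwords magnify a breach: one phished
# credential unlocks every system that reuses it. All names and passwords
# here are hypothetical.
SYSTEM_PASSWORDS = {
    "email":       "Spring2023!",
    "banking-app": "Spring2023!",   # reused
    "hr-portal":   "Spring2023!",   # reused
    "crm":         "x9#unique#pw",  # unique, so unaffected
}

phished = "Spring2023!"  # captured by one convincing phishing email

exposed = [name for name, pw in SYSTEM_PASSWORDS.items() if pw == phished]
print("Systems exposed by a single phish:", exposed)
# -> ['email', 'banking-app', 'hr-portal']
```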

Security risk is the key concern

Research by BrightHR finds that, while businesses are keen to embrace the possibilities offered by AI, three in 10 employers say a security risk is their key concern when using it. Thea Watson, chief international growth and marketing officer at BrightHR, says: “As with any new tool you bring in, the more you have, the more susceptible you are to a data breach or phishing scam. Often with newer AI tools, they’ll sit on top of other technologies and will be sourcing data from many places, and this is when error and increased risk can come in.”

"As with any new tool you bring in, the more you have, the more susceptible you are to a data breach or phishing scam"

There is a range of other risks, too. Inputting confidential data into AI platforms could place your company in breach of General Data Protection Regulation (GDPR) rules, while employees could inadvertently breach copyright, because it’s not always clear what the source of ChatGPT’s output is or to whom it belongs. Accuracy is another issue: employees may use AI to ask questions or perform research, saving a lot of time in the process, but it can be dangerous to assume that asking a tool to write a legal policy, for example, will create something that is 100% watertight.
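
One practical precaution against the GDPR risk is to strip obvious personal data from text before it leaves the business. The sketch below is illustrative only, with deliberately simple patterns; a real deployment would need a proper data-loss-prevention tool rather than a handful of regular expressions:

```python
# A minimal sketch of scrubbing obvious personal data from text before it
# is pasted into an external AI tool. Patterns are deliberately simple
# and illustrative, not production-grade.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),     # email addresses
    (re.compile(r"\+?\d[\d\s-]{8,}\d"), "[PHONE]"),          # phone-like numbers
    (re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"), "[NI-NUMBER]"),  # UK NI number shape
]

def redact(text: str) -> str:
    """Replace each matched pattern with a neutral placeholder."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarise this complaint from jane.doe@example.com, tel +44 7700 900123."
print(redact(prompt))
# -> "Summarise this complaint from [EMAIL], tel [PHONE]."
```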

At its most sophisticated, AI can now convincingly impersonate a colleague, customer or client by replicating someone’s written style and tone, or even their voice. Hebbron adds: “This makes our job much harder, because requests to change account details, transfer funds, grant access or install software really do appear to come from a trusted person.” At the same time, generative AI “democratises” hacking tools and techniques, he says, acting as a smart ‘copilot’ that makes them more accessible than ever to non-expert users.

Review your policies around AI

A good first step to protecting against such attacks is to define who needs to use an AI tool and for what purpose, and to align this with internal policies on good practice, advises Watson. Reviewing policies regularly is crucial, as new AI tools emerge all the time. Don’t be afraid to ask your firewall or other security suppliers what they’re doing to enhance protection against emerging scams, she says.
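
As a rough illustration of what defining who may use an AI tool, and for what, might look like in practice, the sketch below encodes a usage policy as data so it can be checked consistently. The roles, tools and purposes are invented for the example:

```python
# A minimal sketch of encoding an internal AI-usage policy as data, so it
# can be checked consistently. Roles, tools and purposes are hypothetical.
APPROVED_AI_USE = {
    "marketing":   {"ChatGPT": {"copywriting drafts"}},
    "engineering": {"GitHub Copilot": {"code suggestions"}},
    # HR deliberately has no approved tools: its data is too sensitive.
}

def is_permitted(role: str, tool: str, purpose: str) -> bool:
    """True only if this role may use this tool for this purpose."""
    return purpose in APPROVED_AI_USE.get(role, {}).get(tool, set())

print(is_permitted("marketing", "ChatGPT", "copywriting drafts"))     # True
print(is_permitted("marketing", "ChatGPT", "customer data analysis")) # False
```

Keeping the policy in one structured place, rather than scattered across documents, also makes the regular reviews Watson recommends much easier to carry out.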

Hebbron advocates beating the hackers at their own game by using the same “force multiplier”: ChatGPT-type tools can refresh staff training and policies quickly, remembering that the output always needs to be vetted by a human expert. Cyber-security tools are also integrating generative AI in the ‘copilot’ model, helping overloaded human analysts identify and investigate potential attacks.
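
As a sketch of that human-in-the-loop approach, the example below asks a ChatGPT-style model to draft a staff security reminder but treats the output strictly as a draft for expert review. It assumes the official OpenAI Python client (version 1.0 or later), an OPENAI_API_KEY environment variable, and an illustrative model name:

```python
# A minimal sketch of using an LLM to draft security-awareness content,
# with a mandatory human-review step before anything is published.
# Assumes the official OpenAI Python client (openai>=1.0) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_training_note(topic: str) -> str:
    """Ask the model for a first draft; never publish this directly."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative choice
        messages=[
            {"role": "system",
             "content": "You write short, plain-English security reminders for staff."},
            {"role": "user",
             "content": f"Draft a 100-word staff reminder about: {topic}"},
        ],
    )
    return response.choices[0].message.content

draft = draft_training_note("spotting AI-generated phishing emails")
print("DRAFT - requires vetting by a human security expert before sending:")
print(draft)
```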

Training for employees is essential

Whilst generative AI will help defenders as well as attackers, protection still comes down to good risk management and governance. Understand your risks, put technical controls in place and monitor their effectiveness. It may be worth looking into cyber and data risk cover, too, to insure against an attack, theft or loss of data and the reputational and financial consequences that could ensue when technical controls aren't enough.

From a practical perspective, ‘segmenting’ your digital environment (by teams of employees or by groups of systems) can limit the ‘blast radius’ of a breach: if a hacker does gain access, the damage is contained rather than bringing down the entire company. Ultimately, it’s all about good governance, concludes Hebbron: “Take a zero-trust approach, accepting that an attack is likely, and plan for how you will catch it and recover as quickly as possible,” he says. “Think about your back-ups, your testing, and how you would rebuild in the event it does happen.”
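
A minimal sketch of the ‘blast radius’ idea, with hypothetical segments and systems: under segmentation a breached account reaches only its own segment, whereas on a flat network it reaches everything:

```python
# A minimal sketch of why segmentation limits the 'blast radius' of a
# breach: a compromised account can only reach systems inside its own
# segment. Segment and system names are hypothetical.
SEGMENTS = {
    "finance": {"banking-app", "payroll"},
    "sales":   {"crm", "email-marketing"},
    "it":      {"build-server", "monitoring"},
}

def blast_radius(compromised_segment: str) -> set:
    """Systems reachable from one breached account under segmentation."""
    return SEGMENTS.get(compromised_segment, set())

# Without segmentation, every system is effectively linked to every other.
flat_network = set().union(*SEGMENTS.values())

print("Segmented:", blast_radius("sales"))  # only 2 systems exposed
print("Flat:     ", flat_network)           # all 6 systems exposed
```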


Published on October 10, 2023