Turnkey Consulting | Key View

AI’s Influence in Human Risk: Common Scenarios and How to Prepare

Written by Becky Gelder | 12 August 2024

The promise of increased efficiency, automation of repetitive tasks, and streamlined processes has made AI hard to resist, and it holds great power and potential for businesses. Equally, however, AI’s strengths give hostile parties an alternative vector of attack, creating risks that, if realised, could outweigh its benefits and disrupt business as usual.

Balancing the risks and benefits of AI requires knowledge of both the internal and external threats it poses as well as the best controls to implement across the people, process, and technology triad. In this article, we explore two common scenarios where AI is influencing human behaviour and how you can create a human risk strategy that empowers employees and limits exposure. 

 

Scenario 1: Phishing threats 

One of the most well-known human-centric risks is phishing: a form of social engineering where attackers deceive victims into sharing sensitive information or installing malicious content. 

The sophistication and effectiveness of a phishing attack vary with the attacker’s ability to make a message appear authentic and personalised. Historically, many phishing emails included tell-tale signs that they were not legitimate – poor spelling and grammar, an odd tone, or low-quality graphics, to name a few. But with the help of AI, scammers are now able to work around these issues and increase the speed, complexity, and believability of their social engineering efforts.

Other AI-enabled social engineering techniques, including video and voice spoofing – more commonly known as ‘deepfakes’ – add an extra layer of misdirection and lull recipients into a false sense of security. These visual and audio elements heighten the sense of urgency and increase the likelihood that recipients will interact with the threat actor.

 

How to respond to phishing threats 

On all fronts, the use of AI is making phishing attempts look more authentic and harder for humans to distinguish from legitimate communications. So how can organisations mitigate the risks they pose?

From a technology perspective, combatting AI-enhanced phishing threats can take several forms. One approach is using – you guessed it – AI to identify and filter out such attacks. Access controls such as Multi-Factor Authentication (MFA) and/or Mobile Device Management (MDM) solutions are also strong options for ensuring policy is enforced across your ecosystem.
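To make the filtering idea concrete, the sketch below trains a toy text classifier on a handful of hypothetical example messages and scores an incoming email for phishing likelihood. It is purely illustrative – the sample messages, labels, and model choice are assumptions – and a production filter would rely on far larger datasets and dedicated email security tooling.

```python
# Minimal, illustrative sketch of an AI-assisted phishing filter.
# The training examples and labels below are hypothetical; a real deployment
# would use much larger datasets and a dedicated email-security product.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labelled example messages (1 = phishing, 0 = legitimate)
messages = [
    "Your account is locked. Verify your password here immediately",
    "Urgent: the CEO needs gift cards purchased before end of day",
    "Minutes from yesterday's project meeting are attached",
    "Reminder: team lunch is booked for Friday at 12:30",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a simple logistic regression classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

incoming = "Please verify your password urgently to avoid account suspension"
print(model.predict_proba([incoming])[0][1])  # estimated probability of phishing
```

In practice the score would feed an email gateway’s quarantine or warning-banner decision rather than being printed, but the principle – a model learning the linguistic fingerprints of phishing – is the same.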

These technologies might not be the best option for every organisation due to budget, scope, or impact on business operations. Additionally, an over-reliance on technology for breach prevention means that employees may become less capable of spotting, responding to, and reporting potential threats. A lot of this technology also requires a level of human judgement to ensure its efficacy. 

Human risk strategies can bolster your technical controls. When it comes to phishing and other human-centred attacks, we recommend two key efforts that encourage critical thinking to drive workforce agency and resilience: 

  • Improving training – Do your employees see cyber security training as an annual tick-box exercise or as relevant to their roles? We recommend a considered, tailored approach: use the data produced by awareness and phishing campaigns to enhance future training, and target the areas of greatest vulnerability or the user groups facing the greatest risk. By paying attention to the different risk profiles of individuals and departments across the organisation, you can make training more relevant to each user’s role.

  • Reassessing culture – Moving too fast? Users who feel pressured into a mentality of ‘I have to do this right now to avoid reproach’ won’t stop to think about the potential consequences of clicking on a link or sharing sensitive information. Similarly, if the organisation takes a hardline approach to mistakes, users may not share information about errors or breaches for fear of repercussions. It is important to foster a culture of openness and questioning, empowering employees to speak up if they spot something and to report mistakes as soon as they are identified.

 

Scenario 2: Data Leakage 

Employees are increasingly turning to AI tools to assist in their day-to-day roles, with Large Language Models (LLMs) and generative AI tools such as ChatGPT helping with report writing, programming problems, and general knowledge queries.

But what happens when an employee feeds an AI tool sensitive data, such as your company source code, PII, or intellectual property? Many users don’t realise that the content they enter as prompts may be retained and used to train and improve the AI, and could therefore be leaked if another user asks the right questions.

Losing control of data can have several repercussions depending on the sensitivity, content, and volume of the data lost: regulatory fines, loss of business, reputational damage, erosion of competitive advantage, and further, more significant breaches that exploit the leaked data.

 

How to respond to data leakage 

As with phishing, avoiding data leakage via employee AI usage requires a multi-faceted approach. Data loss prevention (DLP) controls can flag potential issues before they become a reality, but they cannot control employees using AI on unmonitored devices or in unexpected ways. Likewise, an AI-use policy can provide guidance that discourages inappropriate use, but its effectiveness depends on tailoring it to the right level for your organisation and communicating it effectively to the wider business.
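As an illustration of that kind of control, the short sketch below checks an outgoing prompt against a few example patterns – an email address, a UK National Insurance number, a payment card number, and a hypothetical internal project codename – before it would reach an external AI tool. The patterns and names are assumptions for illustration only; a real DLP product relies on vendor-maintained detectors and data classification labels rather than a short regex list.

```python
import re

# Minimal, illustrative sketch of a DLP-style check on prompts bound for an AI tool.
# The patterns below are simplified, hypothetical examples.
SENSITIVE_PATTERNS = {
    "email address": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "UK National Insurance number": r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b",
    "payment card number": r"\b(?:\d[ -]?){13,16}\b",
    "internal project codename": r"\bPROJECT-[A-Z0-9]+\b",  # hypothetical naming scheme
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outgoing prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if re.search(pattern, prompt, re.IGNORECASE)]

prompt = "Summarise this contract for PROJECT-ATLAS and email jane.doe@example.com"
findings = flag_sensitive(prompt)
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))
```

A check like this could sit in a browser extension, proxy, or approved internal chatbot front end; the point is to intervene, or at least warn, before sensitive data leaves the organisation’s control.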

To create a more security-engaged culture and mitigate data leakage via AI, we recommend: 

  • Balancing risk and reward – Limiting the use of AI or implementing overly strict policies may simply drive usage underground. Understanding how employees currently use and benefit from AI will help shape more accessible and workable controls, making compliance easier for teams. This should be an ongoing effort, with regular reviews of AI app activity and data to adapt to new risks or usage trends.

  • Increasing understanding and trust – If users do not know about the threats AI poses to their organisation, how can we expect them to avoid missteps? Ensure that training is available and that regular reminders cover the dangers of data loss. When organisations provide clear and reasonable guidance for use, employees feel prepared and trusted to reap the benefits of AI in a more secure manner.

  • Encouraging clear communication – If you have a policy on AI use, make it readily available in an easily consumable format. Very few employees will read a 20-page policy paper, but they are far more likely to engage with shorter bulletins or a succinct video. Likewise, as the threat landscape changes and new AI trends emerge, keep employees up to date on policy developments or changes to acceptable use.

 

Conclusion 

Effective human risk strategies start by recognising the current and common behaviours of the workforce and the numerous external drivers that impact their interactions with technological solutions. This includes acknowledging both the value and the risks associated with AI. 

Phishing attacks and AI-related data leaks can be limited by empowering the workforce to spot threats, report them to the relevant teams, and respond promptly. That is why it is so important to encourage a culture in which critical thinking is valued and supported, backed by a strong learning foundation tailored to the risk profile of each employee.

Explore further insights on how to build a secure culture throughout your organisation.