Mike Britton

As AI becomes commonplace in professional settings, companies must ensure that the fast-moving tech works in tandem with their digital transformation goals. In this interview, Mike Britton, CISO at Abnormal Security, reflects on how AI shapes the current professional landscape and its impact on digital transformation strategies.  

What are the key challenges that C-suite executives face when implementing digital transformation initiatives, particularly concerning cybersecurity and AI integration?

As more security teams look to bring AI into their security technology stacks, one of their biggest obstacles will be ensuring sufficient data volume and quality. Investing in data infrastructure that provides access to complete, accurate, and reliable data, and that can respond dynamically to changes in the IT environment, is key for AI models to effectively recognise patterns and detect anomalies. It is only by truly understanding the full scope of the data that AI can be effectively used to mitigate attacks. 

For example, Abnormal Security uses an API-based architecture to ingest data from across an organisation’s email and SaaS environment. This allows the platform to baseline the known-good behaviour of every employee and vendor in the organisation based on each user’s communication patterns, sign-in events, and thousands of other attributes. The platform can then apply advanced AI models, including natural language processing (NLP) and behavioural analytics, to detect abnormalities in email behaviour that indicate a potential attack, ultimately preventing those attacks from reaching end users. 
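The baselining-and-deviation approach described above can be sketched in a few lines. This is an illustrative toy, not Abnormal's actual pipeline; the attribute names, data shapes, and scoring rule are invented for the example.

```python
from collections import Counter

def build_baseline(events):
    """Build a per-user profile of 'known-good' attribute values
    (e.g. sender domains, sign-in countries) from historical events."""
    baseline = {"sender_domain": Counter(), "sign_in_country": Counter()}
    for event in events:
        for key in baseline:
            baseline[key][event[key]] += 1
    return baseline

def anomaly_score(baseline, event):
    """Fraction of the event's attributes never seen for this user:
    0.0 means fully consistent with the baseline, 1.0 fully novel."""
    unseen = sum(1 for key, counts in baseline.items() if counts[event[key]] == 0)
    return unseen / len(baseline)

history = [
    {"sender_domain": "partner.com", "sign_in_country": "GB"},
    {"sender_domain": "partner.com", "sign_in_country": "GB"},
    {"sender_domain": "supplier.io", "sign_in_country": "GB"},
]
profile = build_baseline(history)
print(anomaly_score(profile, {"sender_domain": "partner.com", "sign_in_country": "GB"}))   # 0.0
print(anomaly_score(profile, {"sender_domain": "evil.example", "sign_in_country": "RU"}))  # 1.0
```

A production system would track thousands of attributes and use learned models rather than simple counts, but the principle is the same: score each event by how far it deviates from that user's own history.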

How do you see the role of AI evolving in digital transformation strategies within organisations, especially concerning email security and threat detection?

As email attacks become more sophisticated and security teams simultaneously grapple with reduced budgets and cyber-skills shortages, more security teams are turning to AI as a tool that not only helps improve defences, but also helps regain productivity. There are a few areas in security – and especially in email security – where introducing AI could help security teams see improved efficiency. 

One example is using behavioural AI to detect advanced email attacks, including text-based social engineering attacks. Socially engineered emails typically lack the traditional indicators of compromise (things like malicious links or blocked senders) that traditional secure email gateways and other rule-based defences look for. By understanding what normal user behaviours look like, AI-based tools can then detect deviations from the norm that could signal malicious activity. 

Another area where AI could be applied to transform email security is in automating workflows around user reporting of phishing emails. Manually triaging and responding to user-reported emails can consume hours of skilled analyst time, even though the majority of user-reported phishing emails are ultimately deemed safe. Using AI to inspect and evaluate these emails (and automatically remove those that are part of a larger campaign) can accelerate this process and free up valuable security analyst time for more strategic tasks. 
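As a rough illustration of the triage workflow described above, the sketch below scores reported emails against a watch-list and pulls confirmed-malicious copies from every mailbox. Everything here (the patterns, the data shapes) is hypothetical; a real system would use trained models and the mail provider's APIs rather than regexes.

```python
import re

# Hypothetical phrases a triage step might flag; a real system uses ML models.
SUSPICIOUS_PATTERNS = [r"verify your account", r"urgent wire transfer", r"password expires"]

def triage(report):
    """Classify a user-reported email as 'malicious' or 'safe' (toy heuristic)."""
    body = report["body"].lower()
    hits = sum(1 for p in SUSPICIOUS_PATTERNS if re.search(p, body))
    return "malicious" if hits else "safe"

def remediate_campaign(mailboxes, subject):
    """If a reported email is part of a wider campaign, remove every copy
    with the same subject from all mailboxes."""
    return {user: [m for m in msgs if m["subject"] != subject]
            for user, msgs in mailboxes.items()}

print(triage({"body": "Your password expires today - verify your account"}))  # malicious
print(triage({"body": "Minutes from Monday's meeting attached"}))             # safe
```

The value is in the loop this closes: most reported emails are auto-dispositioned as safe, and the few confirmed threats are removed everywhere at once, instead of consuming analyst hours one report at a time.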

From your experience, what are the most common misconceptions among C-suite leaders regarding the adoption and deployment of AI technologies in their digital transformation agendas?

When it comes to AI adoption for digital transformation initiatives within a business, there is a common misconception among C-suite leaders that AI solutions can simply be plugged in and start delivering results immediately. 

AI is an extremely powerful technology, but there are a number of factors to take into account before you can begin to reap its benefits. AI solutions work best when applied to specific problems and well-defined use cases, so making sure your vendors deeply understand your security challenges is a first step. AI also needs sufficient quality data to operate effectively, so ensuring tight integration with existing data sources is another key step. 

There is another common misconception that AI will replace human workers. In reality, the most effective use of AI today is to augment human security analysts, helping them to automate or speed up more of their manual processes, so that they can focus on the more strategic aspects of their jobs. 

Can you provide examples of successful AI applications in cybersecurity that have significantly enhanced organisational resilience and efficiency, particularly in mitigating email-based threats?

Due to its widespread use and vulnerability, email has long been a primary target for cyberattacks. Despite significant investments in email security solutions to combat threats like spam, ransomware, and credential phishing, losses continue to rise. AI is proving to be an extremely valuable tool in countering these threats. 

The application of behavioural AI in email security – that is, understanding human behaviour to detect anomalous (malicious) activity including phishing, social engineering, and account takeover – has been shown to help organisations improve their defences, while also reducing costs and boosting productivity. 

For example, according to Forrester, the average 10,000-employee organisation that deploys Abnormal’s AI-native human behaviour platform achieves a total ROI of 278 per cent over three years, with a payback period of less than six months. During that period, the average organisation displaces a legacy secure email gateway solution, prevents $4 million in losses due to business email compromise, and reduces security analyst hours spent on email security tasks by 95 per cent. 

According to one customer, Robert Woods, Cloud Computing Director, Kroenke Sports & Entertainment: “Abnormal has delivered significant time savings. I’ve gained 75 per cent of my time back that I used to spend on email and now I can focus on other aspects of cloud security.”  

What are the critical factors that C-suite executives should consider when evaluating AI-powered solutions for their cybersecurity needs amidst their digital transformation journeys?

The rise of generative AI, with tools like ChatGPT and Gemini, has been incredibly rapid, having a transformative impact in just a couple of years. These tools are popular for their efficiency and legitimate uses, but cybercriminals exploit them, too. As AI technology evolves, organisations must prioritise AI-powered solutions that use defensive AI to counter this malicious AI. Integrating AI into their strategies will be crucial for C-suite execs looking to stay ahead of increasingly sophisticated threats and ensure robust protection against email-based attacks.

However, before choosing security tools, companies must understand that those tools need to “think” like security teams, which AI makes possible. Rather than flagging every unusual event as a threat, the most effective email security platforms evaluate the message and correlate events to determine the likelihood of a threat. This process is similar to how humans make decisions. For instance, if a drop of water lands on your head, you don’t immediately assume it’s raining. You consider the context: maybe it rained yesterday, but the sky is clear today, making rain unlikely.  

AI can take this further by analysing historical and current inputs to predict the most likely cause of an event, utilising thousands of signals to determine whether something is simply unusual or truly malicious. Abnormal Security uses various AI techniques, including behavioural analysis and natural language processing, to detect threats. Our systems quickly identify anomalies, not just as deviations from a baseline but also as inconsistencies relative to other anomalies in a user’s activity. This sophisticated approach ensures improved protection for users, with more accurate threat detection, ultimately leading to fewer false positives than other platforms. 
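The idea of correlating many weak signals rather than alerting on any single anomaly can be sketched as a simple weighted aggregation. The signal names, weights, bias, and threshold below are invented for illustration and are not Abnormal's model.

```python
import math

# Invented example weights: each signal nudges the score up, but the
# negative bias means no single weak signal crosses the alert threshold.
WEIGHTS = {"new_sender": 1.2, "urgent_language": 0.9,
           "unusual_login": 1.5, "payment_request": 1.8}
BIAS = -3.0
ALERT_THRESHOLD = 0.5

def threat_score(signals):
    """Logistic aggregation of observed signals into a 0..1 threat score."""
    z = BIAS + sum(WEIGHTS[s] for s in signals)
    return 1 / (1 + math.exp(-z))

# One unusual event alone stays below the threshold (like the raindrop
# example above); several correlated signals together trigger an alert.
print(threat_score({"new_sender"}) > ALERT_THRESHOLD)                                        # False
print(threat_score({"new_sender", "urgent_language", "payment_request"}) > ALERT_THRESHOLD)  # True
```

The design choice this illustrates is the one in the text: the system reasons about the combination and context of anomalies, so isolated oddities produce fewer false positives while genuinely correlated ones still alert.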

How do you recommend that organisations balance innovation and risk management when integrating AI technologies into their digital transformation roadmaps, especially within sensitive areas like email security?

AI has incredible potential to transform a number of business functions, including security. And while organisations should be thinking about AI as part of their innovation strategies, they should also be cautious of the risks it presents. 

Security teams should be careful not to over-rely on AI. It’s not a silver bullet, but rather a tool that can be layered with other defences for stronger overall security, while improving efficiency and elevating security team members in their roles. This means that security leaders should not rely exclusively on AI-based email security technology to protect their organisations, and they should continue to implement security awareness training and other foundational security measures like multi-factor authentication and password management. 

Another common concern surrounding AI is that end users tend to have little visibility into how it operates and makes decisions. Any company that uses AI in its solutions also has a duty to prioritise transparency as much as possible, with assurances around how the AI operates and how it manages users’ data privacy. 

In your view, what are the emerging trends or advancements in AI that C-suite leaders should closely monitor to stay ahead in their digital transformation strategies, particularly within the realm of email security and threat detection?

Cybersecurity leaders should be conscious of how threat actors’ attack tactics are evolving, so they can adapt their defences accordingly. 

One of the biggest trends we’re seeing now is attackers using generative AI to write greater volumes of highly sophisticated email attacks. Now, even inexperienced petty criminals can leverage widely accessible tools like ChatGPT to craft perfectly written, personalised, and seemingly realistic emails, making them exponentially more difficult for employees (and legacy email security tools) to identify. 

The rise of AI-generated threats underscores the importance of having AI-based defences that can detect the subtlest changes in email behaviour indicative of a potential attack. In fact, a recent study found that 82 per cent of IT decision-makers plan to invest in AI-driven cybersecurity, and over 94 per cent of security leaders believe that AI will significantly impact their cybersecurity strategies within the next two years.  

Could you share insights or case studies illustrating the tangible business benefits that organisations have realised through the strategic adoption of AI-driven solutions in their digital transformation initiatives, specifically in the context of enhancing email security and minimising cyber risks? 

Many of our clients have seen significant benefits from AI-driven solutions. For instance, Mace, a leading commercial construction and consultancy firm, faced challenges with advanced email threats bypassing its signature-based defences, and it needed a shift in its security mindset. Traditionally, construction hasn’t been overly focused on cybersecurity, but Mace’s high-value projects require robust data protection.  

Vendor email compromise was a major issue for Mace. Its vendors were being compromised and used as attack vectors, with attackers using legitimate accounts to send attacks to Mace employees. Since trusted emails lower users’ guards, this strategy often results in more effective attacks. Mace needed a smarter defence layer, choosing Abnormal due to the behavioural and language-based AI used to detect deviations in email content and intent, even from known vendors and other trusted sources.  

Similarly, Softcat, an IT services and consulting company, aimed to enhance security for its employees and customers. The growing company faced evolving email threats despite using Microsoft Exchange Online and a secure email gateway (SEG). Softcat’s Head of Information Security noted they received around 100,000 inbound emails daily, with a few account-takeover emails slipping through.  

Softcat explored API-based vendors and quickly set up proofs of concept, ultimately choosing Abnormal for the ability to detect advanced attacks and reduce false positives. Abnormal also automates responses to user reports, freeing the security team for other tasks. 

 

Executive Profile 

Mike Britton

Mike Britton is the CISO of Abnormal Security, where he leads the information security and privacy programmes. He is integral in building and maintaining the customer trust programme, performing vendor risk analysis, and protecting the workforce with proactive monitoring of the multi-cloud infrastructure. He also works closely with the Abnormal product and engineering teams to ensure platform security and serves as the voice of the customer for feature development. 

Prior to Abnormal, Mike spent six years as the CSO and Chief Privacy Officer for Alliance Data and previously worked for IBM and VF Corporation. He brings 25 years of information security, privacy, compliance, and IT experience from multiple Fortune 500 global companies. Mike holds an MBA from the University of Dallas and a BA in Political Science from the University of Mary Washington. 
