
By Robin Campbell-Burt

AI has changed the world forever. But what can we expect next? In this article, Robin Campbell-Burt investigates the potential benefits and risks of AI in 2024 and asks cybersecurity experts what they expect.

It’s safe to say 2023 has been the year of Artificial Intelligence (AI). Tools such as ChatGPT mean that advanced AI is now widely available at minimal cost, and businesses across the world are trying to figure out how to integrate AI into their operations to make them more efficient and profitable.

However, there is a yin and yang effect when it comes to AI: alongside plenty of opportunities, there are plenty of risks. So, what does the future hold for organisations when it comes to AI? Where are the opportunities, and where are the risks?

Andy Patel, Senior Researcher at WithSecure, likens the rush to add AI technology to business processes to the implementation of Internet of Things (IoT) devices.

“AI-powered services and products will be rushed to market as competition amongst startups and established corporations continues to heat up,” said Patel. “Not having AI functionality in your product will mean the difference between it being viable and useless. And that means little-to-no attention paid to security, just as we saw with the first IoT devices. ‘If it’s smart, it’s vulnerable’ is about to take on a whole new meaning.”

This sentiment is echoed by Michael Adjei, Senior Systems Engineer at Illumio: “Risks will be exacerbated in situations where the technology in question is free for public use but not explicitly bound by the appropriate internal corporate security limits of compliance, confidentiality, and non-disclosure agreements, especially if the users of the technology work with critical information in research and development, intellectual property, or sensitive data.”

Whilst 2023 was seen as an experimental year for AI, 2024 will be slightly different. Kev Breen, Director of Cyber Threat Research at Immersive Labs, said there will be more focus on AI functionality.

“In the year ahead, we’ll hopefully see the hype around AI die down and become more of the norm so that we can focus on the many benefits of using these tools to do work more efficiently and effectively,” said Breen. “A handful of organisations are dedicating ample time and resources to the actual use cases of this technology, and we can expect more businesses to follow suit.”

John Pritchard, Chief Product Officer at Radiant Logic, believes the pressure on organisations when it comes to AI will be accurate testing.

“The challenge in natural language processing is to ensure the AI models provide accurate and reliable information without engaging in chat hallucination,” said Pritchard. “This will put pressure on companies to assess and test the accuracy, appropriateness, and actual usefulness before being accepted.”
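Pritchard’s point about testing lends itself to a concrete illustration. The following is a minimal sketch of how an organisation might regression-test an LLM-backed service against a set of reference answers before accepting it; the ask_model function is a hypothetical placeholder for whatever model API is under evaluation, and the keyword check is a deliberate simplification of real scoring methods.

# Minimal sketch of an accuracy gate for an LLM-backed service.
# ask_model() is a hypothetical placeholder, not a real vendor API.
def ask_model(question: str) -> str:
    # Replace this stub with a call to the model under evaluation.
    return "stub answer"

# A tiny golden set; real suites would be far larger and scored with
# semantic similarity rather than a crude keyword match.
GOLDEN_SET = [
    ("What port does HTTPS use by default?", "443"),
    ("What does the acronym IoT stand for?", "internet of things"),
]

def run_eval(threshold: float = 0.9) -> bool:
    passed = sum(
        1 for question, expected in GOLDEN_SET
        if expected.lower() in ask_model(question).lower()
    )
    accuracy = passed / len(GOLDEN_SET)
    print(f"accuracy: {accuracy:.0%}")
    return accuracy >= threshold  # gate acceptance on a minimum accuracy

if __name__ == "__main__":
    print("accept" if run_eval() else "hold back")

Run against the stub, the gate correctly refuses to accept the service; pointed at a real model, the same harness turns Pritchard’s “assess and test” into an automated check.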

The challenge for organisations when it comes to AI isn’t just implementing or using it, but also how it is used by cybercriminals. As Merium Khalid, Director of SOC Offensive Security at Barracuda, put it: “Attackers are leveraging advanced AI algorithms to automate their attack processes, making them more efficient, scalable, and difficult to detect.”

Erez Yalon, VP Security Research at Checkmarx, agrees and believes that AI offers “a complete ‘green field’ of new opportunities, from attacking AI frameworks and LLM users all the way to AI-supported exploits.”

The aim for cybercriminals when developing new techniques is to avoid detection. Patrick Ragaru, CEO at Hackuity, highlights evasiveness as one of the key reasons AI will be used in cyberattacks.

“As AI continues its relentless march toward greater sophistication, especially regarding generative AI, it stands on the brink of an impending surge in AI-powered attacks. These attacks encompass a wide spectrum, ranging from deepfake attempts that craft smarter and highly personalised phishing strategies, to malware that ingeniously adapts to evade detection, along with automated attack path discovery and exploitation.”

Tyson Whitten, Vice President Global Marketing at Jscrambler, argues that organisations will have to update their cybersecurity strategies to deal with AI-powered attacks.

“Companies will be driven to evolve their security strategies and implement measures such as JavaScript code protection that mitigates LLM-powered threats from leveraging early automated learning steps.”

AI not only advances current attack techniques but also introduces completely new threats. Sabrina Gross, Regional Director of Strategic Partners at Veridas, believes that with upcoming elections, the threat posed by deepfakes will dramatically increase.

“In 2024, deepfake abuse is going to significantly increase. This will become particularly prevalent on social media, especially with elections in the US and EU and potentially in the UK. It will become a popular technique among cyber criminals for financial crime, with voice deepfakes being used for phone fraud. 

“As a result, over the next year, customers will expect organisations to have processes in place to prevent fraud and to ensure they are actively investing resources which combat deepfakes.”

However, as mentioned before, AI is yin and yang. Whilst AI can be used to attack organisations’ networks, it can also be used to defend them. Like many other sectors, the cybersecurity industry has been able to enhance resilience and improve operations through AI.

As Joseph Carson, Chief Security Scientist & Advisory CISO at Delinea, said: “Cybercriminals will increasingly use artificial intelligence (AI) to automate and enhance their attacks. In response, cybersecurity defences will rely more on AI and machine learning for threat detection and automated incident response, creating a continuous battle of algorithms.”
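On the defensive side of that battle, the building blocks are often ordinary machine learning. As a minimal sketch, assuming a toy two-feature view of login telemetry rather than any real detection pipeline, unsupervised outlier detection with scikit-learn’s IsolationForest looks like this:

# Minimal sketch: flagging anomalous logins with an Isolation Forest.
# The two features (login hour, MB transferred) and the synthetic data
# are illustrative assumptions, not a real telemetry schema.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline: office-hours logins with modest data transfer.
normal = np.column_stack([
    rng.normal(13, 2, 500),   # login hour, clustered around early afternoon
    rng.normal(50, 15, 500),  # MB transferred per session
])
# A few suspicious events: small-hours logins moving large volumes of data.
suspicious = np.array([[3, 900], [2, 750], [4, 820]])
events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
labels = model.predict(events)  # -1 marks outliers, 1 marks inliers

for hour, transfer in events[labels == -1]:
    print(f"flag for review: hour={hour:.0f}, transfer={transfer:.0f} MB")

The model learns what “normal” looks like without labelled attacks, which is why this family of techniques features so heavily in the automated detection and response the experts describe.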

Rob Bolton, VP EMEA at Versa Networks, echoes this sentiment: “In 2024, the race between security teams fixing vulnerabilities and threat actors exploiting them will continue; however, it will be done by AI instead and at a much faster pace.”

AI can assist organisations and security teams in various ways, from creating a proactive stance to addressing current skills gaps. One area Yaniv Vardi, CEO at Claroty, believes will particularly benefit is the resilience of cyber-physical systems.

“With the rapid increase of IoT devices, there’s an abundance of data, and generative AI will help harness this data for better security and operational insights,” said Vardi.

Sabeen Malik, VP Global Government Affairs and Public Policy at Rapid7, mentions that “with AI coming and more advanced automation techniques, the majority of detection and remediation or prevention work will occur automatically.”

Ultimately, AI will completely change the cybersecurity industry, for better and for worse. As Versa Networks’ Bolton says: “Success among security teams will be measured by having applications that can surface anomalies hidden in the telemetry details to solve security issues, rather than just asking your teams to work harder.”

About the Author

Robin Campbell-Burt is the CEO of Code Red. He has deep experience in campaign strategy, reputation management, brand building and media relations for B2B companies. He now brings this experience to the fore, leading a team of 20 people in a specialist cybersecurity public relations agency and working with some of the biggest names in the sector, as well as upcoming innovators entering the space for the first time.
