2025 Predictions: GenAI Loses Steam as Defenders and Attackers See the Rise of Autonomous AI

December 11, 2024
Vectra AI
2025 Predictions: GenAI Loses Steam as Defenders and Attackers See the Rise of Autonomous AI

As 2024 nears to a close, the AI hype seems to be approaching the inevitable disillusionment stage. The initial excitement surrounding AI’s potential in cybersecurity is slowing as the industry pauses to weigh AI’s costs against the actual delivered value. In this blog, the experts at Vectra AI discuss their predictions for 2025, noting the things all security teams, CISOs and SOC leaders should keep top of mind heading into the new year. 

The Rise of Autonomous AI as AI Copilots Lose Steam

Oliver Tavakoli, Chief Technology Officer, predicts that the initial excitement surrounding security copilots will begin to diminish as organizations weigh their costs against the actual value delivered. With this, we’ll see a shift in the narrative toward more autonomous AI systems.  

Unlike AI copilots, these autonomous solutions are designed to operate independently, requiring minimal human intervention. Starting next year, marketing efforts will increasingly highlight these autonomous AI models as the next frontier in cybersecurity, touting their ability to detect, respond to, and even mitigate threats in real-time - all without human input.

Distinguished AI Researcher Sohrob Kazerounian agrees, saying that as GenAI hype begins to wane, the security industry will turn its attention to agentic AI models in 2025 as the primary means of creating robust, production-grade AI systems that are ready for customer scrutiny. Unlike early approaches to working with Large Language Models (LLMs), agentic approaches will increasingly make use of LLM “agents” that have been prompted, fine-tuned, and given access to only the necessary tools required to achieve a well-defined and particular goal, rather than being tasked with a complete end-to-end mission.  

However, we shouldn’t overly anthropomorphize these agents. Think about instructing a person to solve a complex task in a single shot, without breaking it down into any of the sub-tasks required to achieve it. Instead, the agentic model breaks high level objectives into well-defined sub-tasks, and defines and equips individual agents with the ability to execute on each of these sub-goals. By allowing the agents to interact, scrutinize one another, and so on, they can collaborate with each in a manner that ultimately improves the accuracy and robustness of Gen AI models.

Attackers Will Become More Sophisticated with AI and Exploit GenAI Chatbots

It won’t just be defenders getting more sophisticated with AI. Deputy Chief Technology Officer Tim Wade predicts that in 2025 we’ll start to see a clear distinction emerge between groups that masterfully apply AI and those adopting more simplistically. The attackers who skillfully leverage AI will be able to cover more ground more quickly, better tailor their attacks, predict defensive measures, and exploit weaknesses in ways that are highly adaptive and precise.  

Sharat Nautiyal, Director of Security Engineering for APJ, predicts that by 2025, threat actors are expected to exploit AI for initial access through tactics like deepfakes and sophisticated phishing. While AI will evolve, core attacker behaviors, such as establishing a foothold and establishing a command-and-control tunnel, abusing identity and moving laterally will persist.  

Kazerounian believes that GenAI chatbots will cause high profile data breaches in 2025. He says that we will hear of numerous cases where threat actors trick a corporate Gen AI solution into giving up sensitive information and causing high-profile data breaches. Many enterprises are using Gen AI to build customer-facing chatbots in order to aid everything from bookings to customer service. Indeed, in order to be useful, LLMs must ultimately be granted access to information and systems in order to answer questions and take actions that a human would have otherwise been assigned. As with any new technology, we will witness numerous corporations grant LLMs access to huge amounts of potentially sensitive data, without appropriate security considerations.  

Because of the seemingly simple and human means by which we can “instruct” LLMs (i.e., natural language), many organizations will overlook the various way in which attackers can jailbreak their way out of a chat in order to get such systems to behave in unintended ways. To complicate matters, these types of jailbreaks are likely to be unfamiliar to security practitioners that haven’t kept pace with LLM technology. For example, jailbreaks can occur from seemingly disparate interactions like users asking that an LLM begin its response with a particular phrase (e.g., “Of course, I’d be delighted to help you with that”). Threat actors could also instruct an LLM to pretend it is an author of fictional novels, writing a story that includes whatever vaulted secrets your organization is trying to keep out of the hands of attackers. In either case, the world of attacks on LLMs will look unlike anything we have seen in the past in more traditional security contexts.  

Regulatory Overload Will Give Attackers the Advantage  

Christian Borst, EMEA Chief Technology Officer, predicts that, as we move into 2025, the increasing focus on regulatory compliance will start to overwhelm defenders, inadvertently giving attackers an upper hand. Teams are already stretched thin, dedicating significant resources to meeting compliance requirements, which, while essential, can sometimes detract from more dynamic threat detection and response strategies. This compliance-centric approach risks creating a checklist mentality where organizations focus on ticking boxes rather than building a holistic, proactive security posture.

Italy Country Manager Massimiliano Galvagna agrees, saying that the increased focus on regulatory compliance introduced by regulations like the EU’s NIS2 Directive could risk overwhelming organizations, making it easier for attackers to gain the upper hand. To counter this danger, organizations will need to strike a better balance between regulatory adherence and adaptive threat management by investing in technologies such as artificial intelligence-based cybersecurity tools that can help automate both compliance and defense efforts.

AI will create a “Red Queen’s race” that Shifts the Pace of Cybersecurity Innovation

So, what should organizations do? Borst predicts that as AI advances are giving cybercriminals the tools they need to carry out faster and more targeted attacks, organizations will need to put in a huge effort to stay resilient. Like the Red Queen’s race evolutionary theory – based on Lewis Carroll’s Through the Looking-Glass – we’re in a new world, taking part in a constantly accelerating race. It’s not enough to simply keep pace anymore, and those who do will face extinction.  

SOC teams need to move faster than they did in 2024 to stay ahead of the curve, creating an innovation buffer should a major shift in attack techniques arrive. This could happen at any time, for example, a new GenAI development could give attackers additional punching power organizations might not be prepared to handle.  

To accelerate their pace of innovation and win the Red Queen’s race, organizations must be laser focused on optimizing their security stack, ensuring they are focusing on solutions that can cut through all of the noise and help them to identify and respond to threats more quickly going forward.

FAQs