In 2023, generative AI hit the mainstream, with more individuals across regions, industries, and seniority levels leveraging the technology. In McKinsey’s “The state of AI in 2023: Generative AI’s breakout year,” 79% of respondents say they’ve had some exposure to generative AI either for work or outside of work, and 22% say they are regularly using it in their own work.
While generative AI adoption and investment are on the rise, McKinsey’s report showed that many organizations are not addressing the technology's potential risks. Less than a third of respondents said they have measures in place to mitigate the cybersecurity risks stemming from their use of advanced AI technologies.
The Rise of Generative AI in Business and Cybersecurity
In 2024, generative AI is projected to play an even more pivotal role in business and society. Recent studies forecast that adoption will continue to accelerate, with some projecting a 30% increase in use across industries. As these AI and machine learning-powered systems become more ingrained in our daily lives, the cybersecurity landscape is poised for both challenges and innovations.
Below, the experts at Vectra AI have compiled their generative AI predictions for cybersecurity in 2024, highlighting what all security teams, CISOs, and SOC leaders should keep top of mind heading into the new year.
Generative AI: A Tool for Innovation and a Challenge for Cybersecurity
Preparing for the Future: Cybersecurity Strategies in the AI Era
Oliver Tavakoli, Chief Technology Officer, predicts that generative AI will erode the effectiveness of email security. “In addition, the utilization of generative AI in social engineering attacks will serve as a wake-up call that current lines of security defense are inadequate. Consequently, I expect organizations will pivot towards downstream security approaches such as Zero Trust, micro-segmentation, and detection and response mechanisms.”
Political Implications: AI's Role in Elections and Information Integrity
Sohrob Kazerounian, Distinguished AI Researcher, believes that in the lead-up to the 2024 election, generative AI will fuel election distrust and disinformation. “The broad availability of generative AI models, and their relative ease of use, will have far-reaching effects given that the U.S. is currently in an election year. Disinformation at scale, with quality content (e.g., faked audio and video of candidates, mass-produced fiction masquerading as news, etc.) will become easier than ever before. The inability to trust our senses could lead to distrust and paranoia, further breaking down social and political relations between people.”
Shift in AI Trends: LLMs Decline, Deep Fakes Rise
Christian Borst, EMEA CTO, believes that widespread LLM usage will fade away, but deep fakes will skyrocket. “Many organizations are exploring ways to use Large Language Models (LLMs) following the initial wave of hype this year. But when you scratch beneath the surface, it’s clear the novelty factor will soon evaporate. LLMs are typically quite difficult to use because they are unable to understand context or provide reliable outputs, which restricts their wider practical use. Next year we will therefore see businesses scale back their use of LLMs as they wait for these tools to become more functional and user-friendly.
“Threat actors will face the same issues with using LLMs, so we likely won’t see much complex activity like AI generating malicious code. But, we can expect cybercriminals to harness generative AI to create more realistic and sophisticated deep fakes. This will give them a better chance of tricking users into giving up sensitive data or clicking on something malicious through more convincing audio or visual phishing lures.”
Regulatory Landscape: Balancing AI Innovation and Control
And when it comes to generative AI (and AI more generally), regulation and policy will play a large role in 2024. According to Hitesh Sheth, Vectra AI CEO, AI’s future will hinge on regulatory decisions. “In 2024, I predict we will witness monumental progress in AI regulation and policy. Building on President Biden’s executive order on artificial intelligence, decision-makers across governmental bodies will evaluate and put into place more concrete regulations to curb AI’s risk and harness its benefits. As AI continues to evolve, it will be important for these developing regulations to strike a balance between advocating for transparency and promoting the continued innovation that’s taking place at a rapid pace.”
Sohrob Kazerounian, Distinguished AI Researcher, agrees, predicting that AI's ethical, legal, and political dilemmas will drive litigation. “Ethical, legal, and socio-political questions regarding AI will only get thornier. Given general political paralysis in the U.S., it is unlikely that robust regulatory and legal frameworks around regulating AI will emerge as fast as they are needed. Lawsuits regarding copyrighted material being used to train generative AI models will increase in number.”
Conclusion: Embracing AI with Caution and Responsibility
There is no doubt that generative AI will transform the cybersecurity landscape in significant ways. It has the potential to advance security tools, improve threat intelligence, and transform security operations centers. As we embrace these opportunities, let us tread cautiously, armed with knowledge, resilience, and a commitment to stay one step ahead in the intricate dance between defenders and attackers.
Generative AI presents a new and exciting opportunity to use technology to aid in the fight against cyberattacks, which is Vectra AI’s founding mission. To learn more about the AI behind Vectra Attack Signal Intelligence, download our latest whitepaper today.