Social engineering explained: the human attack vector reshaping cybersecurity

Key insights

  • Social engineering is the leading initial access vector, responsible for 36% of incidents (Unit 42, 2025) and a factor in 60% of breaches (Verizon 2025 DBIR).
  • AI has industrialized social engineering, with 82.6% of phishing emails now AI-generated (2025), ClickFix campaigns surging 517%, and deepfake files growing from 500,000 to over eight million in two years.
  • Real-world losses are accelerating, including the $1.5 billion Bybit heist (2025), $300 million in Scattered Spider retail attacks, and 12.4 million records stolen from CarGurus via a single vishing call (2026).
  • Prevention alone is insufficient. Modern defense requires an assume-compromise mindset with behavioral analytics, identity monitoring, and post-compromise detection.
  • Phishing-resistant authentication (FIDO2/passkeys) is the only effective defense against coordinated vishing and adversary-in-the-middle combinations.

In 2025, a single social engineering attack against a cryptocurrency exchange resulted in $1.5 billion in losses — the largest theft in crypto history. The attacker did not exploit a software vulnerability. They impersonated a trusted open-source contributor, earned a developer's trust, and walked through the front door. This is the reality of modern social engineering. According to Unit 42's 2025 Global Incident Response Report, 36% of all incidents began with social engineering as the initial access vector. The Verizon 2025 Data Breach Investigations Report found that 60% of breaches involved the human element. Attackers are not breaking in — they are being let in.

This guide covers what social engineering is, how it works, the AI-powered techniques redefining the threat in 2025 and 2026, real-world case studies, and how organizations can detect and respond to attacks that bypass every technical control.

What is social engineering?

Social engineering is the use of psychological manipulation to trick people into performing actions or divulging confidential information, exploiting human trust rather than technical vulnerabilities. In cybersecurity, the term encompasses every attack that targets human decision-making — from a phishing email impersonating an executive to a phone call from a fake IT help desk. The NIST glossary defines it as "an attempt to trick someone into revealing information that can be used to attack systems or networks."

What makes social engineering uniquely dangerous is its position as an umbrella category. It includes digital attacks like phishing, voice-based vishing, text-based smishing, and physical techniques like tailgating. It also includes hybrid attacks that chain multiple channels together — an approach that has become the norm in 2025 and 2026.

The numbers are stark. Unit 42's 2025 incident response data shows that social engineering was the initial access vector in 36% of all incidents they investigated. Verizon's 2025 DBIR found the human element present in 60% of breaches. These are not legacy statistics from a decade ago. They reflect the current threat landscape, where even organizations with mature security programs remain vulnerable because their people are the attack surface.

Social engineering is distinct from hacking in an important way. Hacking exploits technical vulnerabilities in systems and software. Social engineering exploits trust, authority, fear, and urgency in people. In practice, most modern attacks combine both. An attacker uses social engineering to obtain credentials, then uses technical exploitation to move laterally. This is why organizations need both prevention and detection — and why insider threats are deeply intertwined with social engineering defense.

Why social engineering works: the psychology of manipulation

Social engineering attacks succeed because they exploit well-documented psychological principles. Robert Cialdini's six principles of influence provide a useful framework for understanding why these attacks work.

  • Authority. Attackers impersonate executives, IT administrators, or government officials. People comply with perceived authority figures without questioning legitimacy.
  • Urgency. Messages create artificial time pressure. "Your account will be locked in 15 minutes" forces quick, unreflective action.
  • Social proof. Claims like "your colleagues have already completed this security update" leverage conformity.
  • Scarcity. Limited-time offers or exclusive access create fear of missing out.
  • Reciprocity. Providing something of apparent value (a free tool, a helpful document) creates an obligation to reciprocate.
  • Liking. Attackers build rapport and familiarity before making their request, especially in long-running pretexting operations.

These triggers bypass rational decision-making. The Verizon 2025 DBIR found that the median time from a phishing email landing in an inbox to a user clicking the malicious link is just 21 seconds — with data entry beginning 28 seconds later. Awareness training alone cannot compensate for decisions made in under half a minute; technical controls and post-compromise detection have to backstop them.

How social engineering works

Social engineering attacks follow a predictable lifecycle. Understanding this lifecycle is critical for defenders because it reveals multiple points where attacks can be detected and interrupted.

The social engineering attack lifecycle

  1. Research and reconnaissance. The attacker gathers intelligence about the target organization and individuals using open-source intelligence (OSINT), social media, corporate websites, and data from previous breaches.
  2. Pretext development. The attacker creates a believable scenario and identity — an IT support technician, a vendor, a new employee, a trusted open-source contributor.
  3. Engagement and hook. The attacker initiates contact through the chosen channel (email, phone, SMS, in-person) and establishes trust or creates urgency.
  4. Exploitation. The victim performs the desired action — clicking a link, entering credentials, transferring funds, granting access, or running a command.
  5. Lateral movement and escalation. With initial access secured, the attacker expands their foothold through credential theft, privilege escalation, and movement across the network.
  6. Exit and cover. The attacker extracts data, deploys ransomware, or otherwise achieves their objective, then removes traces of their activity.

A six-stage linear flow diagram showing the social engineering attack lifecycle from reconnaissance through exit, with labeled nodes for each stage and example activities beneath each.

Consider how the Scattered Spider group operationalized this lifecycle in their 2025 retail campaign. During reconnaissance, they identified IT help desk procedures at major UK retailers including M&S, Co-op, and Harrods. They developed pretexts as employees needing password resets. They engaged help desk staff by phone, using employee details scraped from LinkedIn and corporate directories. The exploitation phase involved obtaining password resets and MFA enrollment changes. From there, they moved laterally through corporate networks, ultimately deploying ransomware with an estimated $300 million in combined impact.

Social engineering tactics and psychological triggers

Beyond the lifecycle, attackers employ specific tactical patterns that defenders should recognize.

Authority exploitation remains the most effective trigger. Attackers impersonate C-suite executives, IT departments, legal counsel, and regulatory bodies. Urgency creation follows closely — fabricated deadlines, fake security alerts, and time-limited offers all force targets into rapid action without verification.

Fear appeals have grown more sophisticated. Rather than crude threats, modern attackers reference real security incidents, genuine compliance deadlines, or actual organizational changes to make their scenarios believable.

AI has fundamentally amplified every one of these triggers. Where a human attacker could craft a few dozen personalized pretexts per day, AI-powered tools generate thousands of contextually relevant, grammatically perfect messages in minutes. This shift from craft to industrial scale is the defining change in the 2025–2026 threat landscape.

Types of social engineering attacks

Social engineering encompasses more than a dozen distinct attack types. Each exploits different trust vectors and delivery channels. The following catalog covers the major categories, with links to dedicated deep-dive pages where available.

Phishing is the most prevalent form of social engineering. It uses deceptive emails, messages, or websites to trick victims into revealing credentials or installing malware. For a comprehensive breakdown, see phishing.

Spear phishing targets specific individuals or organizations with personalized content derived from reconnaissance. See spear phishing for detailed coverage.

Vishing (voice phishing) uses phone calls to manipulate targets. Attackers impersonate IT help desks, bank representatives, or executives to extract credentials or authorize actions. The 2026 CarGurus breach demonstrated vishing's potency — a single voice call yielded SSO credentials that led to 12.4 million records being exfiltrated. Vishing has been professionalized through organized groups recruiting callers at $500 to $1,000 per call (The Hacker News, 2026). For more on this attack type, see vishing.

Smishing (SMS phishing) delivers social engineering via text messages containing malicious links or urgent prompts. The smaller screens on mobile devices make URL inspection difficult, which increases click rates. See smishing for a deeper look.

Pretexting involves creating a fabricated scenario to trick a target into providing information or access. Unlike phishing, which often relies on a single message, pretexting typically involves sustained interaction and relationship building. The Bybit heist (2025) was a pretexting operation where the attacker spent 20 days posing as a trusted contributor before executing the theft.

Baiting offers something enticing — a free USB drive, a download, exclusive content — to lure victims into compromising their systems. Digital baiting now includes fake software updates and AI tool installers targeting developers.

Tailgating and piggybacking are physical social engineering techniques where an unauthorized person follows an authorized individual through a secure entrance. These remain relevant in corporate environments, particularly data centers and secure facilities.

Quid pro quo attacks offer a service or benefit in exchange for information. A common example involves attackers posing as technical support offering to fix a problem in exchange for login credentials.

Watering hole attacks compromise websites frequently visited by the target group, turning trusted resources into infection vectors.

Business email compromise (BEC) involves impersonating or compromising business email accounts to authorize fraudulent transfers or redirect payments. The FBI IC3 recorded $2.8 billion in BEC losses and 193,407 phishing and spoofing complaints in 2024 alone.

Scareware uses fake security alerts to convince victims their systems are infected, driving them to install malicious software or pay for unnecessary services.

Comparison table: social engineering attack types

Caption: Common social engineering attack types and how to identify them.

Type | How it works | Delivery channel | Example indicator
Phishing | Deceptive messages impersonating trusted entities | Email, web | Mismatched sender domain, generic greeting
Vishing | Voice calls impersonating authority figures | Phone | Unsolicited call requesting credentials or MFA codes
Smishing | Urgent text messages with malicious links | SMS | Shortened URLs, pressure to act immediately
Pretexting | Fabricated scenario with sustained interaction | Any channel | Elaborate backstory, requests for unusual access
Baiting | Enticing offer to deliver malware | USB, downloads | Free software, unsolicited attachments
BEC | Compromised or spoofed business email | Email | Wire transfer requests, changes to payment details
Tailgating | Following authorized person through secure entry | Physical | Unknown person without visible badge
Watering hole | Compromised trusted website | Web | Unexpected downloads from familiar sites

AI-powered social engineering: the 2025–2026 threat landscape

AI has transformed social engineering from a craft practiced by skilled individuals into a scalable industry. This section covers the AI-driven techniques that security teams need to understand right now.

AI-generated phishing at scale. Research indicates that 82.6% of phishing emails now incorporate AI-generated content (2025). AI eliminates the grammatical errors and awkward phrasing that once served as reliable detection signals. The Anti-Phishing Working Group recorded over one million phishing attacks in Q1 2025 alone. AI-driven phishing is no longer an emerging trend — it is the baseline.

Phishing-as-a-Service (PhaaS). Subscription platforms costing approximately $200 per month provide AI-generated templates, real-time credential interception via adversary-in-the-middle (AiTM) techniques, and custom phishing kits that sync with live voice calls to bypass multi-factor authentication. Only phishing-resistant authentication methods (FIDO2/passkeys) are effective against these coordinated attacks.

ClickFix and InstallFix: the fastest-growing social engineering technique

ClickFix campaigns surged 517% in 2025, making them one of the fastest-growing social engineering techniques in the current landscape (Cloud Range Cyber, 2026). The technique tricks users into copying and executing malicious commands, typically by displaying fake browser error messages or update prompts.

In 2026, ClickFix evolved to use DNS-based payload delivery (The Hacker News, 2026), making detection significantly harder. A developer-targeting variant called InstallFix mimics AI tool installers, with at least 20 campaigns targeting AI tools observed in February and March 2026.

From a defensive perspective, organizations should monitor for anomalous DNS TXT record queries and implement endpoint behavioral analysis that detects clipboard-to-command-line execution patterns. The key detection signal is not the social engineering itself but the post-compromise behavior that follows.
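As an illustrative sketch of that detection idea, the heuristic below flags command lines that resemble ClickFix-style paste-and-run payloads, including DNS TXT lookups used for payload retrieval. The patterns are illustrative examples only, not a vetted rule set; a real deployment would operate on EDR process-creation telemetry and correlate with clipboard activity.

```python
import re

# Illustrative ClickFix-style patterns (assumptions, not a product rule set):
# encoded PowerShell, mshta fetching a remote HTA, curl piped to a shell,
# and DNS TXT lookups consistent with DNS-based payload delivery.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell(\.exe)?\s+.*-enc(odedcommand)?\s", re.I),
    re.compile(r"mshta(\.exe)?\s+https?://", re.I),
    re.compile(r"curl\s+.*\|\s*(ba)?sh", re.I),
    re.compile(r"nslookup\s+-type=txt\s", re.I),
]

def looks_like_clickfix(cmdline: str) -> bool:
    """Return True if a command line matches a ClickFix-style pattern."""
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)
```

A heuristic like this is noisy on its own; its value comes from pairing the match with context, such as the parent process being a browser or the command appearing in the clipboard moments before execution.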

Deepfake-as-a-service and voice cloning

Deepfake files grew from 500,000 in 2023 to over eight million in 2025 (Cloud Range Cyber, 2026). Voice cloning technology now requires just three seconds of audio to produce a convincing replica, and research indicates that 70% of people cannot distinguish cloned voices from real ones. Industry projections estimate deepfake-related losses will reach $40 billion by 2027.

The Arup $25 million deepfake case (2024) illustrates the threat. Attackers created deepfake video representations of multiple executives during a live video conference call, convincing a finance employee to authorize wire transfers. Video and voice are no longer reliable identity confirmation methods for high-value transactions.

AI scams surged 1,210% in 2025. The vishing-as-a-service model has professionalized these attacks further. The SLH supergroup — formed from the merger of Scattered Spider, Lapsus$, and ShinyHunters — actively recruits vishers at $500 to $1,000 per call (The Hacker News, 2026). These callers use custom phishing kits synced with live conversations to intercept MFA tokens in real time.

Agentic AI social engineering represents the next frontier. Security researchers predict that autonomous AI agents will run full phishing campaigns — from target selection through credential harvesting — without human input by late 2026. The SecurityWeek Cyber Insights 2026 analysis details how these autonomous capabilities are expected to reshape the threat landscape.

A side-by-side comparison diagram showing the traditional social engineering attack chain (manual research, single-channel delivery, limited scale) versus the AI-powered attack chain (automated OSINT, multi-channel delivery, thousands of personalized attacks), with labeled stages and time estimates for each.

Social engineering in practice: real-world case studies

The following case studies from 2024 through 2026 demonstrate how social engineering techniques translate into real-world impact. Each incident carries specific defensive lessons.

Bybit cryptocurrency heist — $1.5 billion (February 2025). North Korea's Lazarus Group socially engineered a Safe{Wallet} developer by posing as a trusted open-source contributor (SecurityWeek, 2025). The attacker maintained access for 20 days before manipulating a multisignature wallet transaction. Chainalysis confirmed this as the largest cryptocurrency theft in history. The lesson: supply chain trust must be continuously verified, and contributor access requires behavioral monitoring.

Scattered Spider / SLH retail campaign — ~$300 million (2025). The group targeted M&S, Co-op, and Harrods through IT help desk impersonation, obtaining password resets and MFA changes that led to ransomware deployment (CmdZero, 2025). The FBI issued warnings about the group expanding to target airlines (The Hacker News, 2025). The lesson: help desk procedures need out-of-band identity verification for all password resets and MFA changes, as recommended by CISA advisory AA23-320A.

CarGurus vishing breach — 12.4 million records (January 2026). ShinyHunters used voice phishing to obtain SSO credentials from a CarGurus employee, exfiltrating 12.4 million customer records (BleepingComputer, 2026). The lesson: a single compromised credential from a vishing call can cascade into a massive data breach.

Coinbase insider bribery (2025). Criminals bribed overseas support staff to leak customer data — demonstrating that social engineering extends beyond deception to include financial inducement. The lesson: insider threat monitoring and access controls must cover outsourced and offshore teams.

Signal and WhatsApp diplomatic targeting (2026). Russia-linked actors compromised secure messaging accounts of diplomats and journalists, exploiting trust in encrypted platforms. The lesson: even secure channels are vulnerable when account access relies on social engineering.

Lessons learned from 2024–2026 incidents

The pattern across these incidents is clear. Help desk procedures need out-of-band identity verification. Video and voice are no longer reliable identity confirmation methods. Insider threat detection is part of the social engineering defense model. And supply chain trust must be continuously verified — not assumed.

The financial scale is unprecedented. The United States lost $16.6 billion to social engineering in 2024, a 33% increase year-over-year. The average global cost of a data breach reached $4.88 million in 2024 (Ponemon Institute). BEC alone caused $2.8 billion in reported losses in 2024 (FBI IC3).

Caption: High-profile social engineering attacks and their defensive takeaways.

Incident | Date | Technique | Impact | Key lesson
Bybit heist | Feb 2025 | Developer pretexting via open-source impersonation | $1.5B stolen | Verify supply chain contributor identities continuously
Scattered Spider retail | 2025 | Help desk vishing and impersonation | ~$300M, ransomware | Require out-of-band verification for all help desk changes
CarGurus breach | Jan 2026 | Vishing for SSO credentials | 12.4M records exfiltrated | Single credential compromise can cascade to full breach
Coinbase insider | 2025 | Bribery of overseas support staff | Customer data leaked | Monitor insider access including outsourced teams
Arup deepfake | 2024 | Deepfake video conference impersonation | $25M wire transfer | Video/voice alone cannot verify identity for high-value actions

Detecting and responding to social engineering

Most cybersecurity content on social engineering focuses exclusively on prevention — awareness training, email filters, and policies. Prevention matters, but it is insufficient. The assume-compromise philosophy recognizes that skilled attackers will eventually succeed in manipulating someone. The question becomes: how quickly can you detect and contain the post-compromise activity?

Detection signals and red flags

For employees, social engineering red flags include unexpected urgency, authority claims from unknown contacts, unusual requests that bypass normal procedures, and resistance to verification. Training people to recognize these signals has value, but the data on effectiveness is mixed. Training vendors claim that security awareness programs can reduce the phish-prone rate from approximately 30% to under 5%. However, the Verizon 2025 DBIR — an independent, multi-source study — found that phishing click rates remained "unaffected by training." The reality likely sits between these positions. Training is one layer in a defense-in-depth strategy, not a standalone solution.

For security teams, the critical detection signals come after a successful social engineering attack. The Verizon 2025 DBIR found that 85% of social engineering breaches result in credential theft. This means the post-compromise indicators that matter most include anomalous access patterns, unusual identity threat detection and response signals, impossible travel between locations, abnormal privilege escalation, and unexpected lateral movement across the network.
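One of those indicators, impossible travel, is straightforward to compute: if two logins for the same account imply a travel speed faster than a commercial airliner, the credential was likely stolen. A minimal sketch follows; the `Login` record and the 900 km/h threshold are illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    when: datetime
    lat: float
    lon: float

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(a: Login, b: Login, max_kmh: float = 900.0) -> bool:
    """Flag two logins whose implied travel speed exceeds max_kmh."""
    hours = abs((b.when - a.when).total_seconds()) / 3600
    if hours == 0:
        # Simultaneous logins from two different places are also suspicious.
        return (a.lat, a.lon) != (b.lat, b.lon)
    return haversine_km(a.lat, a.lon, b.lat, b.lon) / hours > max_kmh
```

For example, a London login followed one hour later by a New York login implies roughly 5,500 km/h of travel and would be flagged, while the same pair nine hours apart would not. Production systems add geolocation uncertainty, VPN egress handling, and per-user baselines on top of this core check.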

Building a social engineering detection program

  1. Implement out-of-band verification for all sensitive requests including password resets, wire transfers, MFA changes, and access modifications. This directly addresses the help desk attack vector used by Scattered Spider, as recommended in CISA advisory AA23-320A.
  2. Deploy behavioral analytics to detect anomalous identity usage, credential misuse, and lateral movement that follows successful social engineering.
  3. Adopt phishing-resistant authentication (FIDO2/passkeys) as the primary defense. This is the only effective countermeasure against coordinated vishing and AiTM combinations.
  4. Combine simulation-based training with technical controls. Train for recognition and reporting, but do not rely on human vigilance as the primary defense layer.
  5. Build a reporting culture without blame. Employees who report suspicious interactions — even ones they fell for — provide critical early detection signals.
  6. Establish incident response procedures specifically for social engineering scenarios, including credential reset protocols, session revocation, and forensic investigation workflows.
  7. Implement zero trust architecture to limit the blast radius when social engineering succeeds, ensuring that a single compromised credential does not grant broad access.

Organizations seeking additional guidance should review CISA's guidance on avoiding social engineering and phishing attacks.

Social engineering and compliance

Social engineering maps to specific controls across major compliance and security frameworks. GRC teams can use these mappings to structure their programs and provide audit evidence.

Framework mapping table

Caption: Social engineering controls across major compliance frameworks.

Framework | Control ID | How social engineering maps | Evidence link
MITRE ATT&CK | T1566 (Phishing), T1598 (Phishing for Information), T1204 (User Execution) | Maps initial access and execution stages of social engineering attacks | MITRE ATT&CK framework
NIST SP 800-53 Rev 5 | AT-2(3) Social Engineering and Mining | Requires training on recognizing and reporting SE attempts | NIST SP 800-53
CIS Controls v8.1 | Control 14 (Security Awareness), Control 9 (Email and Web Protections) | Training, phishing simulations, and technical controls for email-borne SE | CIS Controls
ISO 27001:2022 | Clause 7.3 (Awareness), Annex A 6.3 (Information Security Training) | Requires SE awareness in training programs and evidence of competency | ISO 27001
NIS2 (EU) | Risk management, incident detection, supply chain security | Implicit coverage through mandatory risk management and incident reporting obligations | NIS2 Directive

Modern approaches to social engineering defense

The cybersecurity industry is converging on a multi-layered approach to social engineering defense. Current solutions include behavioral analytics platforms that detect post-compromise activity, ITDR tools that monitor for credential misuse, zero trust architectures that limit blast radius, and phishing-resistant authentication that eliminates the credential theft vector entirely.

Emerging trends from RSAC 2026 point toward behavioral science integration — applying psychological research to improve both training and detection. Humanix, an Innovation Sandbox finalist, demonstrated a people-oriented approach to social engineering detection that treats human behavior as a data source rather than a weakness (SecurityWeek, 2026). The signal-over-noise imperative is also gaining traction: organizations are moving from alert fatigue to actionable threat signals, prioritizing the behavioral indicators that reveal real attacks rather than flooding analysts with low-fidelity alerts.

How Vectra AI thinks about social engineering defense

Social engineering attacks that succeed result in anomalous identity behavior, lateral movement, and privilege escalation — exactly the post-compromise signals that Attack Signal Intelligence is designed to surface. Vectra AI's assume-compromise philosophy treats successful social engineering as inevitable and focuses on reducing dwell time through behavioral detection across identity, network, and cloud surfaces. The goal is not to prevent every social engineering attempt but to detect the attacker activity that follows within minutes rather than months.

Future trends and emerging considerations

The social engineering threat landscape is evolving faster than at any point in cybersecurity history. Over the next 12 to 24 months, organizations should prepare for several critical developments.

Agentic AI will automate full attack chains. Security researchers predict that by late 2026, autonomous AI agents will execute complete social engineering campaigns — from target selection and OSINT gathering through credential harvesting and initial exploitation — without human involvement. This represents a fundamental shift from tool-assisted human attacks to fully autonomous operations.

Deepfake capabilities will become commodity tools. With projected losses reaching $40 billion by 2027 and deepfake files already exceeding eight million (2025), the technology is rapidly democratizing. Organizations should implement multi-channel verification for any transaction involving video or voice confirmation, and invest in detection tools that analyze media authenticity.

Regulatory pressure will intensify. NIS2 enforcement across the European Union is creating new incident reporting obligations that directly affect social engineering response timelines. Germany's BSI registration deadline of March 2026 signals broader compliance expectations. Organizations should map their social engineering defenses to framework controls now rather than scrambling to comply later.

The vishing-as-a-service economy will mature. The SLH supergroup's recruitment model demonstrates that social engineering is following the same as-a-service trajectory as ransomware. Expect professionalized call centers, specialized phishing kit developers, and tiered service offerings to become the norm. Help desk hardening and out-of-band verification procedures are the most direct countermeasures.

Identity will become the primary battleground. With 85% of social engineering breaches resulting in stolen credentials (Verizon 2025 DBIR), the post-compromise identity layer is where detection matters most. Organizations should prioritize ITDR capabilities, behavioral analytics, and phishing-resistant authentication as their top social engineering defense investments for 2026 and 2027.

Conclusion

Social engineering is not a new problem, but it is a fundamentally transformed one. AI has industrialized deception, making it faster, cheaper, and harder to distinguish from legitimate communication. The case studies from 2024 through 2026 demonstrate that social engineering now causes billion-dollar losses, targeting everyone from help desk staff to C-suite executives to open-source developers.

Prevention remains important — phishing-resistant authentication, out-of-band verification, and awareness training all reduce the attack surface. But the organizations best positioned to survive social engineering attacks in 2025 and 2026 are those that have embraced the assume-compromise mindset. They invest in behavioral analytics, identity monitoring, and post-compromise detection because they understand that someone will eventually be tricked.

The question is not whether social engineering will target your organization. It is whether your detection and response capabilities will find the attacker before they find what they came for.

To learn how behavioral detection and Attack Signal Intelligence surface the post-compromise signals that follow social engineering attacks, explore the Vectra AI platform.

FAQs

Is social engineering a cyber attack?

What is the difference between social engineering and hacking?

Is phishing a form of social engineering?

What is a commonly used type of social engineering?

What is social engineering in simple words?

How much does social engineering cost organizations?

What do social engineering attacks rely on?