Forget hacking systems. Today’s most dangerous threat actors hack people — and with artificial intelligence (AI) in their arsenal, they’re better at it than ever. The 2025 Cyber Risk Report¹ from Resilience confirms that AI is “supercharging” social engineering attacks, making them faster, more convincing and harder to spot.

The formula is deceptively simple: Exploit two powerful human instincts — the desire to be helpful and the impulse to act fast under pressure. Bad actors manufacture urgency, trigger panic and count on victims to react before they think. It works. A lot.

According to the Microsoft Digital Defense Report 2025,² phishing attacks grew in both scale and efficiency over the past year. The numbers tell a sobering story:

  • AI-automated phishing emails have surged 4.5 times, with a 54% success rate — compared to just 12% for standard attempts.
  • AI automation could increase phishing profitability by up to 50 times at minimal cost to bad actors.

That return on investment is a powerful incentive — and it’s driving rapid AI adoption among cybercriminals.

How AI is powering the threat

AI gives bad actors capabilities that were once out of reach:

  • Hyper-personalized attacks. AI analyzes public profiles, social media and breached data to craft messages that reference real names, roles and relationships.
  • Flawless communication. Advanced language models produce grammatically correct, contextually appropriate messages, eliminating the typos and awkward phrasing that once signaled a scam.
  • Scale and speed. What once required significant manual effort can now be automated and deployed at a massive scale.

Technical controls alone can’t stop this. Effective defense requires advanced detection tools, continuous employee education, identity verification processes and a clear-eyed understanding of how AI is reshaping the threat landscape.

Two attack scenarios to know

Business email compromise (BEC) and wire/ACH fraud. Bad actors infiltrate email accounts through phishing or by tricking employees into revealing login credentials. Once inside, they quietly redirect emails, impersonate the account holder and manipulate clients or customers into sending payments to fraudulent accounts. Both parties end up as victims: The account holder falls prey to BEC, and their client is socially engineered into paying the wrong party.

Phone and text impersonation scams. Posing as bank fraud or security representatives, bad actors contact victims about “suspicious transactions” requiring immediate verification. When victims hand over a security code sent via text, bad actors gain full account access — then transfer funds to offshore accounts while claiming they’re running “security updates.”

Strengthening your defenses

  • Implement a wire fraud incident response plan. The first 24 hours after a fraudulent transfer are critical to any recovery effort.
  • Add a footer to outgoing emails such as, “We will never request funds via text message or a change in banking information via email.”
  • Create a blame-free reporting culture where employees feel safe flagging suspicious activity — even when requests appear to come from senior leadership.
  • Update phishing training to reflect AI-enhanced threats, including realistic deepfake scenarios that emphasize behavioral cues over technical indicators.

The most effective defense combines informed employees, disciplined processes and adaptive technology.

How insurance can help

Two policy types may respond to social engineering losses:

  • Cyber policies can provide breach response resources and indemnity coverage for monetary losses. Social engineering coverage — once sub-limited — is increasingly available at higher limits, with excess carriers willing to drop down over this coverage as well.
  • Crime policies can cover direct monetary loss.

When both policies apply to the same loss, carriers typically share it on a pro rata basis, depending on policy language. Review both policies before an incident to understand how they respond. 
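As a rough illustration of pro rata sharing, the sketch below assumes allocation in proportion to policy limits — one common method, though the actual apportionment turns on each policy’s “other insurance” clause and any sublimits. The dollar figures and the `pro_rata_split` helper are hypothetical, not drawn from any specific policy.

```python
def pro_rata_split(loss, limits):
    """Allocate a shared loss across policies in proportion to their limits.

    This models one common pro rata method (allocation by limits).
    Real-world sharing depends on each policy's "other insurance"
    clause, sublimits and deductibles.
    """
    total = sum(limits.values())
    return {name: loss * limit / total for name, limit in limits.items()}

# Hypothetical example: a $300,000 social engineering loss shared
# between a cyber policy ($1M limit) and a crime policy ($500K limit).
shares = pro_rata_split(300_000, {"cyber": 1_000_000, "crime": 500_000})
# cyber bears 2/3 of the loss ($200,000), crime bears 1/3 ($100,000)
```

Even a simple model like this makes the point of the review: knowing in advance how the two policies interact determines who pays what when both respond to the same loss.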

Final thoughts

The threat landscape keeps evolving — and your defenses need to keep pace. Combining awareness, discipline, the right technology and smart insurance coverage gives your organization a fighting chance against even the most sophisticated attacks.

For more insights into developing a cyber risk management strategy, contact a HUB ProEx Specialist today. View more articles in HUB’s ProEx Advocate Articles & Insights Directory.

NOTICE OF DISCLAIMER
Neither HUB International Limited nor any of its affiliated companies is a law firm and therefore cannot provide legal advice. The information herein is provided for general information only and is not intended to constitute legal advice as to an organization’s or individual’s specific circumstances. It is based on HUB International’s understanding of the law as it exists on the date of this publication. Subsequent developments may render this information outdated or incorrect, and HUB International has no obligation to update it. You should consult an attorney or other legal professional regarding the application of the general information provided here to your organization’s specific situation and particular needs. 

1. Resilience, “The 2025 Cyber Risk Report: How cybercriminals changed the game,” February 25, 2026.
2. Microsoft, “Microsoft Digital Defense Report 2025,” accessed March 13, 2026.