Threat actors are integrating artificial intelligence (AI) into their social engineering attacks, making these schemes more sophisticated and creating new challenges for the insurance industry.
98% of cyber incidents involve some form of social engineering1. Historically, threat actors have manipulated trust, exploited fear, and created a sense of urgency to deceive victims. Their techniques include, but are not limited to, phishing, pretexting, baiting, and tailgating, and they often involve impersonation and psychological manipulation to extract sensitive information or induce actions that compromise security and/or cause financial losses. The only thing that has changed over the years is the method of attack.
The integration of AI into social engineering marks a significant shift. AI makes these attacks more effective by generating convincing phishing emails, conducting advanced reconnaissance, and simulating human-like interactions. For example, AI can improve the grammar and spelling of phishing emails, eliminating many of the traditional red flags and making the messages appear more legitimate and far harder to detect. AI also allows threat actors to automate and scale their attacks, increasing the speed at which they are carried out.
One of the most concerning developments is the use of deepfake technology and synthetic identities, which combine machine learning and media manipulation to impersonate individuals more convincingly. Attackers can now run highly sophisticated social engineering schemes built around virtual identities that appear real but are entirely fabricated2.
These top threats will likely continue to affect organizations in 2025:
Chief among them are social engineering attacks such as forgery and voice exploits. Threat actors have focused intensely on social engineering to trigger phishing-linked malware and business email compromise (BEC) attacks. These types of claims have increased by 100% over last year.
In one AI voice scam, an Insured issued payment to an unauthorized individual after receiving a phone call from someone posing as a legitimate customer. The Insured updated the bank information over the phone and transferred significant amounts to the unauthorized individual. The insurer paid the financial loss under a Commercial Crime Policy.
Employees at a software company were targeted with fraudulent texts from someone claiming to be part of the company's IT team. The employees were instructed to click what appeared to be a legitimate link to fix a payroll issue. One employee clicked the link and was taken to a fraudulent landing page requesting credentials. The unauthorized individual then used AI to clone the actual IT worker's voice in order to obtain the multifactor authentication code needed to gain access. The unauthorized individual gained access to customer accounts and stole $15 million.
Carriers report that BEC claims increased in frequency and severity in 2023 under both Cyber and Crime policies5. BEC was also documented in the recent FBI IC3 report as the second-costliest type of crime, accounting for $2.9 billion in reported losses last year6. At McGriff, we are seeing the same trend: more than 63% of our Crime and Cyber claims in 2024 are the result of BEC, with losses typically ranging from $150,000 to $2.5 million. Nearly 95% of the claims reported during 1H2024 have already been paid by the Insurers.
In 2024, ransomware attacks surged in both frequency and sophistication. Unlike in 2022, ransomware attacks now incorporate AI, which makes them faster and easier for threat actors to execute7. At McGriff, we have certainly seen more ransomware claims than we did last year, although not as many as in 2022. On average, we are seeing two per month in 2024, with initial demands ranging from $5 million to $10 million. Clients that have paid the ransom have been able to negotiate the extortion amount down, paying somewhere between 10% and 25% of the initial demand.
A recent report by Coalition reveals a staggering 68% increase in ransomware claims severity.
In the past several years, courts have issued notable rulings on coverage for losses resulting from social engineering fraud, among other computer-related claims. Because those decisions show mixed results, it is crucial that you review your policy's terms and conditions. In addition to mixed court rulings, confusing terminology in a Crime or Cyber Policy creates insurance coverage challenges: if given options to purchase coverage for (a) “computer fraud,” (b) “funds transfer fraud,” (c) “fraudulent instruction,” (d) “social engineering,” or (e) “impersonation,” to mention just a few, would you know which one insures against a fraudulent invoice your company received from a spoofed vendor email? What about the opposite scenario, where your customer is deceived by an email purporting to be from your organization?
One example is a claim McGriff received this year in which an unauthorized individual, posing as a current sub-contractor (vendor) of an organization, emailed the Insured requesting an update to the vendor's ACH information. Once the information was updated, funds were transferred and the organization suffered a financial loss. The organization had a Crime Policy that provided coverage for Funds Transfer Fraud9, Social Engineering, and Computer Theft. The Insurer argued this was a Social Engineering loss (subject to a lower sublimit in the Policy) rather than Computer Fraud or Funds Transfer Fraud, which carried higher limits that would have allowed the Insured to recover more of the lost funds.
The Computer Fraud insuring agreement in the Policy read:

We will pay for loss of or damage to money, securities and other property resulting directly from the use of any computer to fraudulently cause a transfer of that property from inside the premises or banking premises:
1. To a person (other than a messenger) outside those premises; or
2. To a place outside those premises.
This insuring agreement provides coverage for both internal and external fraud.
The Social Engineering Fraud coverage, by contrast, addressed fraudulent transfer instructions. For a transfer instruction purportedly issued by an employee, the Policy read:

We will pay for loss resulting directly from your having, in good faith, transferred, paid, or delivered money, securities or other property in reliance upon a transfer instruction purportedly issued by an employee, or any of your partners, members, managers, owners, officers, directors or trustees, but which transfer instruction proves to have been fraudulently issued by an imposter without the knowledge or consent of your employee.
And for a transfer instruction purportedly issued by a customer or vendor:

We will pay for loss resulting directly from your having, in good faith, transferred, paid, or delivered money, securities or other property in reliance upon a transfer instruction purportedly issued by your customer or vendor, but which transfer instruction proves to have been fraudulently issued by an imposter without the knowledge or consent of your employee.
“Transfer instruction” means an instruction directing you to transfer, pay, or deliver money, securities or other property.
In August 2024, the Insurer offered to pay $100,000 under the Social Engineering Fraud coverage (which had a limit of $100,000); the loss amount was around $500,000. The Insurer also advised that the Funds Transfer Fraud coverage, which carried a $5,000 deductible and a $1 million Policy Limit, did not apply to the submitted Loss. After several conversations, the Insurer reversed its position and paid the full loss under the Computer Fraud coverage. McGriff was able to argue that, in the jurisdiction where the incident occurred, the Policy must clearly state that a voluntary payment constitutes Social Engineering, regardless of where and how the fraud was committed using a computer. Because the Insurer agreed the language was not clear, it paid the claim.
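To put the stakes of that coverage dispute in concrete terms, a rough sketch of the recovery math follows, using the figures from this claim. Treating the deductible as a simple subtraction from the covered loss is an assumption for illustration, not policy language:

```python
# Illustrative recovery comparison for the disputed claim described above.
loss = 500_000           # approximate financial loss
se_sublimit = 100_000    # Social Engineering Fraud sublimit
cf_limit = 1_000_000     # Funds Transfer Fraud / Computer Fraud limit
deductible = 5_000       # deductible under the broader coverages

# Recovery if the claim is adjusted under the Social Engineering sublimit.
se_recovery = min(loss, se_sublimit)                    # $100,000

# Recovery if adjusted as Computer Fraud / Funds Transfer Fraud
# (assumes the deductible is simply subtracted from the covered loss).
cf_recovery = max(0, min(loss, cf_limit) - deductible)  # $495,000

print(f"Social Engineering recovery: ${se_recovery:,}")
print(f"Computer / Funds Transfer recovery: ${cf_recovery:,}")
```

The difference, roughly $395,000 on a $500,000 loss, is why the characterization of the same fraudulent transfer mattered so much to the Insured.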
Imagine that an employee of a public company is tricked into transferring funds to a fraudulent account after a phone call that sounds like the employee's boss. After the call, the employee receives an email with updated bank information and makes the ACH payment. The employee calls back to confirm the instructions, and the unauthorized person confirms them. It is later discovered that both the call and the email came from an unauthorized person10. During the investigation, it is alleged that the Directors and Officers implemented the wrong AI strategy, causing significant financial loss and, among other things, leading to a drop in the firm's stock price. Following the alleged stock drop, the company and certain of its D&Os are sued in a securities class action alleging that the defendants committed securities fraud by making false and misleading statements about the company's AI capabilities11.
The above scenario reflects a growing trend: clients transferring funds to an alleged “new” or “updated” bank account based on what seems to be a legitimate call. Because the information is confirmed by phone, clients believe the transaction is legitimate. The scenario also illustrates the broader implications of AI in social engineering attacks: organizations could be looking to crime, cyber, and D&O insurance for coverage. And while the scope of coverage will depend on the specific terms of each insurance policy, a number of policies could apply to AI-related claims and losses.
Crime policies can cover monetary loss caused by a range of criminal acts such as social engineering, computer crime, wire transfer fraud, business email compromise, impersonation fraud, invoice manipulation, and others. Given the different coverages a policy could offer, McGriff recommends that you review the policy's terms and conditions closely and discuss with your McGriff Broker how your insurance could provide coverage and what the policy requires.
Cyber policies are generally designed to respond to a security or privacy incident and the related expenses. To the extent social engineering or related methods12 cause a privacy or security violation, the first-party coverage could be triggered. Cyber liability policies frequently offer social engineering coverage as an optional insuring agreement, typically subject to a sublimit.
Directors and officers can be sued over a social engineering claim, particularly if it is found that they failed to implement reasonable security measures to protect against such incidents, which could lead to significant financial losses for the company and potential shareholder lawsuits alleging negligence in their oversight duties.
Organizations should take a proactive approach when it comes to AI, as various types of policies could provide coverage. The key to successful claim handling and resolution is understanding how each policy in your insurance program could apply, along with each policy's requirements. Work with your McGriff Insurance Broker to review your AI-related exposures, the coverages provided in your current insurance program, and additional options.