The cyber market continues to be stable so far in 2025. That means customers in almost every industry class and across all revenue bands can purchase affordable cyber insurance coverage with sufficient policy limits to address many of the foreseeable incidents their companies may encounter.
Part of this market stability and favorable pricing is due largely to relatively strong loss ratios and competitive pressure from new market entrants, creating more options for cyber risk transfer.
This trend is expected to continue for now despite ongoing ransomware incidents and more litigation against cyber policyholders alleging privacy violations related to website tracking technologies.
Business email compromise and social engineering incidents resulting in the theft of funds also continue to plague many companies, especially smaller businesses that may lack more robust authentication methods. Since cyber policies can cover all of these threats as well as the first-party losses that accompany many attacks, we’ve seen no shortage of covered cyber insurance claims. The good news is that cyber insurers are setting premiums and achieving sufficient growth targets to cover developing losses, at least in the near term.
According to a recent report from Chainalysis, extortion payments to threat actor groups dropped 35% in a 2024 year-over-year comparison. Much of this decline was attributed to refusal to pay, even though the number of victim companies increased.
“Commenting on the research, Lizzie Cookson, Senior Director of Incident Response at ransomware recovery specialist Coveware, argued that improved cyber resiliency is enabling many victims to resist demands and explore multiple options to recover from an attack. ‘They may ultimately determine that a decryption tool is their best option and negotiate to reduce the final payment, but more often, they find that restoring from recent backups is the faster and more cost-effective path,’ she explained.”1
Experts such as Kivu Consulting Incident Response Director Dan Saunders report that approximately 30% of negotiations result in the victim paying an extortion demand.
“Generally, these decisions are made based on the perceived value of data that’s specifically been compromised,” Saunders said.1
Another contributing factor to the decline in ransom payments may be law enforcement action against major threat actor rings. The takeover of computer infrastructure used by criminals and the prosecution of certain LockBit and BlackCat associates has had a serious dampening effect on the threat actor community.
“Cookson noted: ‘The current ransomware ecosystem is infused with a lot of newcomers who tend to focus efforts on the small to mid-size markets, which in turn are associated with more modest ransom demands.’”1
It is too early to tell if this downward trend in extortion payments will continue, or if we are in a season of threat actor re-tooling and reconstruction of their affiliate networks across their own criminal ecosystem.
Cyber insurance underwriters still expect companies of all sizes and operations to emphasize cyber security best practices, drill and plan for potential attacks, and take advantage of the network of approved vendors that brokers and insurers can recommend to help prevent, detect, defend and recover from costly and disruptive cyber attacks.
A recent breach of large and small telecommunications companies across the U.S. may very well turn out to be the most momentous hack of all time. The Chinese-linked Salt Typhoon cyberespionage operation was a massive series of attacks to capture the meta data behind calls and messaging.
“What we found particularly remarkable in our investigation is the gigantic and seemingly indiscriminate collection of call records and data about American people, like your friends, your family, people in your community,” said Cynthia Kaiser, FBI deputy assistant director in the bureau’s cyber division, at the 2025 Zero Trust Summit in February. “The impact of the breach could last forever.”2
Salt Typhoon could cast a long shadow, since the stolen data included call records on a huge swathe of Americans, including minors. How the Chinese plan to monetize or weaponize this type of information is unknown, but experts are concerned over this heightened “level of insidiousness from Bejing,”² as Kaiser characterized it. This reflects [Chinese] “ambition and reckless aggression in cyberspace.”2
Originally, Salt Typhoon was suspected to be directed at high-level targets like U.S. politicians and executives of major corporations. But the fact that the attack cast such a wide net and that the data could be used for a myriad of nefarious efforts, the impact of this breach could last decades.
Besides U.S. Treasury Department sanctions in January on an alleged hacker and cybersecurity company with ties to a Chinese intelligence agency, we’ve yet to see the full U.S. government response to the Salt Typhoon hacks. But given the breadth of the attacks, it is plausible to expect an appropriate and proportionate response from the Trump administration.
This major breach could lead to future breaches or other targeted efforts to disrupt U.S. businesses or compromise the public trust. Given the hack’s novelty and the vastness of the data collection, it’s critical that corporate IT experts remain vigilant in protecting cyber resources and deploying layered security to insulate their companies from sophisticated attacks.
Artificial intelligence has been used in several cyber security methodologies as well as various forms of business analysis and risk modeling. While AI itself is not new, novel applications and the potential for broader use cases that can be scaled exponentially will change how data is harvested and digested for a myriad of purposes, both good and nefarious.
Machine learning, predictive analytics and other AI methods have been in place for quite some time for business purposes as well as for endpoint detection and other observation tools that detect malicious code or other anomalous activity as part of robust cyber security. But what is meant for good can always be twisted to conduct cybercrimes, improve the believability of phishing campaigns, and to scale attacks so that more victims can be ensnared with less effort.
To understand the good and bad that may come from increased dependency on AI models, it helps to understand the landscape. The following chart helps describe the types of AI methods and some of their distinguishing attributes.
Uses artificial neural networks to recognize patterns and make decisions in a way that mimics the human brain. This type of AI tends to be reliable and efficient and highly accurate. An example would be using AI on mobile phones to quickly and reliably identify malware.
Creates brand new content based on the structure and pattern of existing data. This type includes ChatGPT and other image generators. These models can be predictive despite the error rate and “AI noise” or “AI hallucinations” (made-up answers) that might present.
Assists users in performing a wide range of tasks. They are trained on vast quantities of data that are publicly available. This model pulls volumes of data which uses lots of energy and computing power. An example would include OpenAI.
Designed and built for a specific, focused-use case. They are typically trained on proprietary data sets. These models distill the data to specific data sets and are not as resource intensive. An example would be DeepSeek.
Cyber insurance underwriters have just recently amended policy language to address the increased adoption of new AI technologies. For the most part, these endorsements are designed to extend coverage in anticipation of these emerging use cases. Though we haven’t seen markets exclude AI per se, many underwriters are asking the insured to explain how they’re training employees in AI application usage and what guardrails they’re using to ensure that non-public information remains confidential and protected from AI misuse.
Below are a few excerpts from two cyber markets that have amended specific definitions.
Fraudulent Instruction means the transfer, payment or delivery of money or securities by, or on behalf of, an Insured as a result of fraudulent written, electronic, telegraphic, cable, teletype or telephone instructions provided by a third party, including any fraudulent instructions resulting from the use of deep fake technology, synthetic media, or any other technology enabled by the use of artificial intelligence, that are intended to mislead an Insured through the misrepresentation of a material fact which is relied upon in good faith by such Insured.
AI security event means the failure of security of computer systems caused by any artificial intelligence technology, including through the use of machine learning or prompt injection exploits.
Artificial intelligence in and of itself is not a separate exposure and its use is already so widespread it cannot be carved out from insurable operation risk. But as more organizations adopt these technologies, it will be important to monitor any new loss developments.
As of now, AI can be largely characterized as teaching machines to perform autonomous functions without humans but with expert eyes on inputs and outcomes such that inaccuracies can be identified and dispensed with expeditiously so that bad data does not inform the large language model and perpetuate flawed assumptions.
Suzanne Gladle
ERA Cyber Practice Leader
Newsletters
March 2025 Employee Benefits NewsletterRead More