Join professionals from Troutman Pepper, Keiter and McGriff to learn more about the growing threat of cybercrimes. Organizations of all sizes know that no one is exempt from cyberthreats. During this session, we will discuss insurance challenges, litigation, cyber hygiene, best practices, recent incidents, and more.
Key Takeaways:
Cybersecurity challenges
2025 state of cyber risk
Developing risk trends
State of cyber insurance
Privacy law overview
Regulatory red flags
Hello everyone.
Organizations of all sizes know that no one is exempt from cyber threats and a cyber attack can happen to anyone at any time.
On behalf of our panel, thank you to each of you for joining us today as we discuss cyber: the good, the bad, and the ugly.
My name is Lesonya Wilder and I will be your host for the call today.
There are just a couple of housekeeping points that I would like to share with you before we get started.
This is a live session and you are in listen mode.
Yeah.
Thanks, Lesonya.
Good afternoon, everybody.
And this is our disclaimer for today's session.
I'm Alan Delahunty, a producer with McGriff in Richmond, VA, and I appreciate everybody joining today.
Here are today's panelists.
We have Chris Michella and Jean Fishel.
Chris, if you want to introduce yourself, please.
Sure.
My name is Chris Michella.
I'm a Senior Manager at Keiter here in Richmond, and I work in our Risk Advisory Services group.
I focus on cybersecurity, mostly in the form of SOC 2 examinations, CMMC consulting (and soon assessments), penetration testing, and cyber risk assessments, and I'm an active software developer as well.
So I'm able to leverage that experience in the services I provide to clients. And Jean?
Sure.
Thanks, Alan.
Great to be here.
My name is Jean Fishel.
I am counsel at the law firm of Troutman Pepper Locke.
I assist companies and organizations with all things cyber, privacy, and AI: assisting with compliance and reviewing policies and operations to ensure they're complying with all the various laws, some of which we're going to be talking about today.
I also represent companies in federal and state regulatory investigations.
And prior to coming over to Troutman a few years ago, I was 20 years at the Virginia Attorney General’s Office, where I headed up privacy enforcement over there for most of that time and was also a cyber prosecutor both on the federal and state side.
So great to be here today.
Yeah, Thanks, Jean.
And here’s today’s agenda talking about cybersecurity trends, privacy law overview, regulatory enforcement actions and regulatory red flags.
Without further ado, over to you, Chris.
Awesome.
Thank you.
You know, there’s always a lot of activity in the cybersecurity space.
So in our limited time today, I wanted to share some thoughts on three of the larger trends that I’ve noticed over the last year or so.
Broadly speaking, those are the rise in supply chain attacks, the impact of AI on cybersecurity, and some of the more advanced techniques that we’re seeing being used to target end users.
Next slide.
So over the past year or so, I’ve noticed a rise in news reports related to various types of supply chain attacks.
And apparently I wasn't the only one, because IBM, in their annual Cost of a Data Breach report, is reporting for the first time that supply chain attacks are a prominent initial attack vector, accounting for 15% of all data breaches, second only to phishing attacks.
So to start, I guess what is a supply chain attack?
Well, in short, it's when an attacker targets a third party of some sort instead of targeting the victim company itself directly.
And what makes these attacks especially pernicious is that the attackers can often, not always, but often, use the compromised third party to attack and extort many other companies all at once.
IBM's report also indicates that of all attack types, supply chain attacks took the longest to both identify and contain.
Next slide.
OK, so on this slide and several of the following slides, I’ll be reviewing some of the supply chain attacks that have occurred recently to hopefully illustrate some of the forms that these attacks may take.
Starting with two of the more famous ones that you may not have known were supply chain attacks.
The famous Target payment card data breach of 2013 started when an attacker stole the login credentials that an HVAC vendor used to interact with Target systems.
Then in 2020, attackers compromised the source code of a prominent commercial network monitoring tool called Orion from the company SolarWinds.
After the source code was compromised, when the automatic updates went out to customers with Orion deployed on their networks, the malicious code that was added by the attackers into the source code provided the attackers then with remote access to roughly 18,000 victim companies.
And even though that news is a bit old, recently the SEC fined several companies, or rather those companies agreed to pay penalties, for essentially underreporting the impact of the Orion breach on those companies.
And this really goes to show that there are, you know, regulatory impacts and considerations to these attacks as well.
And, you know, just because it was a supplier that was initially compromised doesn't necessarily let the victim company off the hook from a regulatory perspective.
Next slide.
OK, so from here on out, all the examples will be more recent.
And I’ll emphasize that, you know, these are just the ones I picked for this presentation.
There are plenty of others that got left on the cutting room floor on account of time.
OK, so to start with Clorox.
In July, Reuters obtained a copy of Clorox’s lawsuit against Cognizant, which is an IT provider that provided Clorox with outsourced IT helpdesk services.
The lawsuit alleges that an attacker simply called the Cognizant Managed IT Helpdesk, claimed to be a user requesting credentials to access the network, and the Cognizant helpdesk employee just gave that person login credentials.
This allegedly led to a ransomware attack on Clorox that apparently so greatly interrupted their operations that they were unable to ship their products to retailers for an extended period of time, which accounted for about $330 million of the $380 million in total damages.
So while the Clorox hack was simple in that it targeted their outsourced IT help desk personnel, this next one was actually extremely sophisticated.
This attack targeted a piece of open source code that is deeply embedded in Linux operating systems, and that piece of code is called XZ Utils.
And just for reference, Microsoft dominates the desktop operating system market share, and just as they do that, Linux dominates the market share for enterprise servers, which is why this is an important story.
So in 2024, a database developer noticed that his attempts to access servers over a common remote system administration protocol called SSH were taking a bit longer than usual, about a half second longer than usual, and decided to investigate why.
Well, he dove in and eventually uncovered a remote access Trojan, also called a RAT, that provided backdoor access to whoever had a secret key.
And so the attack basically gave the attacker a master key to access any infected machines that were exposed to the Internet.
And I want to emphasize that this was just an absolutely absurdly sophisticated attack.
It required the malicious developer to build an online reputation over a period of years as an open source contributor, gain contribution rights to the XZ Utils source code, and develop an ingenious way to embed and hide that master key in the software.
And as a result of all that, most analysts think that it was the work of a nation state.
This infected package thankfully had not yet made it into production Linux distributions, but it was already in the testing versions of many Linux distros, and it was literally weeks away from being released into the long term support version of Ubuntu, which is the most popular Linux distribution.
So officially, I guess we can categorize this one as a near miss, but it’s truly astonishing just how narrow a miss it was and how catastrophic the result would have been had it not been caught by a curious developer.
And it kind of also makes you wonder if there have been other similar attacks that have been successful that we just haven’t noticed yet.
Thank you.
All right, so before we get into the polyfill story here, a bit of background, right?
So whenever you visit a website, there is, you know, pretty much a guarantee that your web browser is downloading JavaScript code.
And it’s highly likely that some of that code is served by a third party content delivery network or CDN, both of which are totally normal.
JavaScript is the language of the web.
It is what provides all the interactivity that we enjoy on all the websites and many of the desktop applications that we use as well; Microsoft Teams, for example, is a JavaScript application.
Well, polyfill.io was one such CDN, and in short, the goal of polyfill was to fill the gaps for older browsers that lacked the native ability to run modern JavaScript functions.
So with polyfill, developers were able to write code using the more modern and developer friendly functions without breaking the website for users using older browsers.
Extremely helpful, right?
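Just to make that concrete, here's a minimal sketch of the polyfill pattern itself, illustrative only and not polyfill.io's actual code: check whether a modern function exists, and if it doesn't, define a fallback built from older APIs.

```typescript
// Minimal, illustrative polyfill sketch: not the actual polyfill.io code.
// If the browser lacks the modern String.prototype.startsWith function,
// define a fallback built from older APIs so newer code still runs.
if (typeof String.prototype.startsWith !== "function") {
  // Cast to any because we are patching a built-in prototype at runtime.
  (String.prototype as any).startsWith = function (this: string, search: string): boolean {
    return this.indexOf(search) === 0;
  };
}

// Developers then write against the modern API everywhere:
console.log("cybersecurity".startsWith("cyber")); // true, even on old browsers
```

The same property that makes this so convenient, that whatever script the CDN serves runs with full access to the page, is exactly what made the takeover that follows so damaging.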
Well, in 2024, a Chinese company purchased the polyfill.io domain and modified the JavaScript code that was delivered over the CDN.
So when a visitor went to a site, the malicious JavaScript code was downloaded from the CDN into the user’s browser, which was then used to steal user browsing data, like, you know, data that was entered into online forms as an example.
And it redirected users as well to scam websites.
So we can add malicious corporate takeovers to the list of the way these things can happen.
And believe it or not, there’s actually a lot of examples of the same type of thing happening with popular web browser extensions as well.
Recently we had a massive data breach of Salesforce customer data as well.
This breach occurred when hackers penetrated Salesloft, which is a third party provider of an AI chatbot called Drift that integrates with customers' Salesforce instances.
We don't know exactly how the hackers got into Salesloft, but we know that they eventually stole something called OAuth tokens.
So what is an OAuth token?
Well, whenever you give a third party application access to your data, that application is provided an OAuth token, right?
The token is a long, cryptographically unique string of text that the application uses to authenticate itself to the system and then access your data to do whatever it is that the application does.
So an example might be if you use a scheduling service that integrates with your calendar, that scheduling service needs to be able to read your calendar, see availability, modify your calendar so that appointments booked through the service automatically appear in your calendar.
Well, you know, companies who were using the Drift AI chatbot had to give it access to their Salesforce data so that the chatbot could retrieve the relevant information when responding to customer requests.
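To make the mechanics concrete, here's a rough sketch of how an integration typically uses a stored OAuth token; the URL, environment variable, and endpoint below are made up for illustration and are not the actual Salesforce or Drift APIs.

```typescript
// Illustrative only: a hypothetical integration calling a CRM API with a stored
// OAuth token. The URL and environment variable name are invented for this sketch.
const OAUTH_TOKEN = process.env.CRM_OAUTH_TOKEN ?? "";

async function fetchCustomerRecord(customerId: string): Promise<unknown> {
  const response = await fetch(`https://crm.example.com/api/customers/${customerId}`, {
    // The bearer token is the only proof of identity the server asks for,
    // so anyone who steals the token can make this exact same call.
    headers: { Authorization: `Bearer ${OAUTH_TOKEN}` },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```

Possession of that string effectively is the authentication, which is why stolen tokens are so valuable to attackers.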
So you can see where this one is going.
The attackers used the OAuth tokens to steal vast amounts of data from around 760 organizations.
And very recent reports indicate that the attackers are attempting to extort Salesforce and that Salesforce is refusing.
So even though it wasn’t Salesforce’s fault, the attackers are still going after Salesforce.
But like I said, Salesforce is refusing, and I expect the hackers to start moving on to some of the other victim organizations and to start leaking the information if they haven't already.
Next slide.
OK, so the last two examples here relate to something called NPM, so I’ll provide a bit of background on that.
NPM stands for the Node Package Manager, and an NPM package is a bit of bundled code that provides some sort of convenience for software developers.
The code is typically JavaScript and it can run in a web browser or on a Node JS server.
So, for example, an NPM package might make it easier for developers to deal with dates and times within an application, which is a notorious pain.
It could provide data validation when you know users input data into forms.
It could provide functions that are available on the server so that developers can read and write from a database.
That sort of thing.
And using third party libraries in application development is a standard and expected part of software development, and it's especially prevalent in web development.
So npmjs.com is an online repository where NPM packages are published and where developers go to get and use NPM packages.
And it is the largest open source package repository in the world.
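As a hypothetical illustration of how deeply this convenience is woven into everyday development: date-fns below is a real, popular date utility library used purely as an example of the pattern, not one of the compromised packages.

```typescript
// Illustrative only: typical application code leaning on an npm package.
// Assumes "date-fns" has been installed via npm; it is used here as a generic
// example of the pattern, not as one of the packages involved in the attacks.
import { addDays, format } from "date-fns";

// One line of convenient, well-tested date handling...
const dueDate = format(addDays(new Date(), 30), "yyyy-MM-dd");
console.log(`Invoice due: ${dueDate}`);

// ...but installing that one package also pulls in whatever packages it depends
// on, and so on down the tree. Compromise anything in that tree and every
// application that picks up the update can be affected.
```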
Now, again, on account of time, I won't get into all the details of these two, but in short, in two separate attacks, both in September, attackers were able to compromise hundreds of popular NPM packages that together accounted for billions of downloads per week, used by companies in their software development all across the world.
The ultimate effect of the attacks is that they were used both to infect live websites to steal cryptocurrency from users as they were transmitting it, and to steal quote-unquote secrets.
An example of a secret might be an OAuth token like I described before, but secrets could also be things like credentials to access cloud infrastructure platforms like AWS, Google Cloud Platform, and Microsoft Azure, and then from there potentially access sensitive data.
So we're still only beginning, I think, to see the ultimate impact of those two attacks.
Next slide.
OK, so as we’ve seen in these examples, right?
These attacks can take many shapes.
Service provider personnel can be targeted.
Nation state attacks on deeply embedded server code can happen.
Malicious corporate takeovers; compromised source code delivered through automatic updates; compromised source code delivered through web browsers; compromised source code spreading to more source code; and service providers failing to protect the secrets that are used to integrate their systems with our systems.
So what are the key takeaways from this?
I think the traditional vendor risk management processes that we've sort of been used to, where we vet vendors and document our risk assessments, are still necessary, but they're clearly not enough.
And I think the reality is that whenever we adopt a new digital tool of some sort, we're not just inheriting the risk of that one vendor, we're inheriting the risk of their entire supply chain as well, and their supply chain's supply chains.
So I think the reality is also then that there’s a certain amount of risk that is, you know, completely uncontrollable and completely unpredictable, and that most organizations probably underestimate that risk.
And it’s underestimated, I think, because with each new vendor that we add, the risk really scales exponentially.
It's not additive, right?
You know, there's only so many third party service providers and vendors out there, and a certain percentage of them will get hacked, right?
It's just a fact.
So, you know, as organizations integrate more third party technology vendors and service providers, you know, the cumulative likelihood that one of them is going to get hacked increases.
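To put rough numbers behind that intuition (the 5% per-vendor figure below is an assumption for illustration, not a statistic): if each vendor independently has some small chance of being breached in a year, the chance that at least one of them is breached climbs quickly as vendors are added.

```typescript
// Illustrative math only: the per-vendor risk figure is assumed, not measured.
// If each vendor independently has probability p of being breached in a year,
// the chance that AT LEAST ONE of n vendors is breached is 1 - (1 - p)^n.
function chanceOfAtLeastOneBreach(perVendorRisk: number, vendorCount: number): number {
  return 1 - Math.pow(1 - perVendorRisk, vendorCount);
}

const p = 0.05; // assumed 5% annual breach risk per vendor
for (const n of [1, 5, 10, 25, 50]) {
  const pct = (chanceOfAtLeastOneBreach(p, n) * 100).toFixed(0);
  console.log(`${n} vendors -> ${pct}% chance at least one is breached this year`);
}
// Roughly: 1 -> 5%, 5 -> 23%, 10 -> 40%, 25 -> 72%, 50 -> 92%
```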
Said more plainly, and it's kind of a blunt metaphor, it's like playing Russian roulette, but each time you pull the trigger, you're adding another bullet into the cylinder.
So at the end of the day, I think organizations need to, you know, better understand and respect that risk.
And recognize that it can't just be documented away.
Every tool and every vendor really needs to be scrutinized deeply as usual.
But also, I think organizations need to honestly ask the question, is this tool that has access to our sensitive data really necessary?
Do we really need that chat bot?
Do we really need that AI note taking service to be in all our online meetings?
You know, sometimes the answer will be yes, right?
And after all, we do need digital tools to perform digital work.
But I suspect that with, you know, proper weight given to that, you know, uncontrollable and unpredictable risk, the answer should probably be no more often than it is.
Next slide.
OK, you know, another mega trend from the past year or more, to no one’s surprise of course, is artificial intelligence.
And, you know, like any tool, AI can be used for good and it can be used for ill. According once again to IBM, in roughly 16% of data breaches, attackers used AI to enhance their attacks.
So how are attackers using AI?
Well, by far the most prevalent use from what we’ve seen is that they’re basically using it to write better social engineering and phishing emails.
You know, long gone at this point are the days of relying on the telltale signs of poor diction and bad grammar in an email.
But as we'll see on the next slide in a minute, it goes far beyond that.
AI is also being used by attackers to lower the technical barriers to create functioning exploits.
So just as you know, regular software engineers can use AI to assist in writing code, hackers can use the same tools to, you know, research software exploits and take advantage of known vulnerabilities.
And in short, it means more hackers can write more exploits.
You know, defenders can and are using AI as well.
Large language models, I think, are actually quite good at providing good, or at least serviceable, answers to basic security questions to help fill knowledge gaps within companies, replacing hours of research with a single prompt.
So that’s a good thing, right?
It's also being incorporated into the tools that are being used, such as to identify vulnerabilities in software code before that code is released.
And it's also being used to assist in threat detection and response, where an AI system can better correlate and identify high risk events on a network.
And then administrators can configure the systems to take automated actions based on that risk level.
So, you know, something that’s a very high risk might result in, you know, the associated user being disconnected from the corporate network to stop whatever is happening from continuing to happen or to spread.
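As a bare-bones sketch of that idea, with score ranges, thresholds, and actions assumed for illustration rather than taken from any particular product:

```typescript
// Illustrative sketch of risk-based automated response. The scores, thresholds,
// and actions are assumptions for this example, not any vendor's actual logic.
interface SecurityEvent {
  user: string;
  description: string;
  riskScore: number; // e.g. 0-100, as assigned by an AI/analytics engine
}

function automatedResponse(event: SecurityEvent): string {
  if (event.riskScore >= 90) {
    // Very high risk: contain first, investigate second.
    return `Disconnect ${event.user} from the corporate network and page the security team`;
  }
  if (event.riskScore >= 60) {
    return `Force ${event.user} to re-authenticate and open an investigation ticket`;
  }
  return "Log the event for correlation with future activity";
}

console.log(
  automatedResponse({ user: "jdoe", description: "Mass file downloads at 3 a.m.", riskScore: 95 })
);
```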
We've also seen that the AI models themselves are being attacked.
We won't get into the details of all these things, but prompt injection, model inversion, model evasion, and data poisoning are all different methods being used to attack AI systems.
If you want to know what those items are, you could probably ask an AI and it would probably give you a pretty good answer.
OK, on to my last slide here, Alan.
OK, so also to no one’s surprise, end users are still the single largest initial attack vector for successful data breaches, again according to IBM.
And they say in their report that roughly 36% of all successful attacks start with an end user.
And then as noted on the previous slide, with AI, the attacks that end users are encountering are becoming more and more sophisticated.
The standout story that I came across in 2025 was a finance employee at a major international engineering company who made 15 fraudulent wire transfers after being in an online meeting with AI deepfakes made to look and sound like his colleagues.
So he was seeing video and audio and interacting with people he thought he worked with, but they were actually digital replicas.
So, you know, year after year we see the same thing.
You know, end users are continuously reported to be the most common initial attack vector in successful attacks.
And year after year we’ve been kind of preaching the same thing, right?
End users must be trained to identify these attacks.
All of that is still true: end user security training is critical, but I think it's clearly not enough considering that 36% figure, right?
The data is clear on that.
So I think companies really must assume that eventually someone will fall for a phishing or social engineering attack.
And the thing about these attacks is that you can’t predict who, and you can’t predict when it will happen.
So what’s the answer?
Well, there's a cybersecurity marketing buzzword called zero trust, but I personally find that term to be a little sterile and nondescript. You know, what do you do when you can't predict who and you can't predict when?
Well, I think what it really means is that we have to assume that every user's computer is hostile at all times, and we have to design our internal processes, our systems, and our networks accordingly.
That’s what zero trust really means, and doing so can limit the damage done when a user does fall for some type of social engineering or phishing attack.
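One small, concrete way to picture that design posture; the checks below are placeholders sketched for illustration, not a full zero trust architecture or any product's API.

```typescript
// Illustrative "never trust, always verify" sketch: every request is checked,
// and network location is deliberately not part of the decision. The check
// functions are placeholders, not a real implementation.
interface AccessRequest {
  userToken: string;
  deviceId: string;
  resource: string;
}

function hasValidCredentials(token: string): boolean {
  return token.length > 0; // placeholder: verify signature, expiry, MFA, etc.
}

function deviceMeetsPolicy(deviceId: string): boolean {
  return deviceId.length > 0; // placeholder: check patch level, EDR status, etc.
}

function isAuthorizedFor(token: string, resource: string): boolean {
  return resource.length > 0; // placeholder: least-privilege policy lookup
}

function handleRequest(req: AccessRequest): string {
  // Being "inside" the corporate network earns no trust here.
  if (!hasValidCredentials(req.userToken)) return "Deny: invalid or expired credentials";
  if (!deviceMeetsPolicy(req.deviceId)) return "Deny: device fails health policy";
  if (!isAuthorizedFor(req.userToken, req.resource)) return "Deny: not authorized for this resource";
  return "Allow: access granted, logged, and scoped to this one resource";
}

console.log(handleRequest({ userToken: "jwt-from-sso", deviceId: "laptop-042", resource: "hr-records" }));
```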
And thus concludes Chris’s cybersecurity horror show.
And I will turn it over to Jean to take us through the legal horror show.
Thank you, Chris.
That was a great overview of the current threat landscape facing organizations.
What we want to pivot to in the second part is the privacy/cyber legal landscape, so that you can think about your own organization's policies, procedures, and operations and potentially adjust them relative to the pertinent laws that are out there.
And also so you can bring together all of the stakeholders who need to be brought together to think about the legal exposure that exists.
That includes your executives, your C-suite folks, your IT folks, even your vendors, and of course, unfortunately, also your attorneys, including your in-house or outside counsel.
So let's go on to this next slide here, and we're going to take a look at the current privacy landscape as it relates to the states.
Just to kick off here, many of you I'm sure are aware there is no overarching federal privacy law that governs the handling of data, or even related cybersecurity measures.
It's unlikely there is going to be some sort of overarching federal law that supersedes state law anytime soon, because no one on the federal side can agree on anything.
And frankly, we've seen that to be the case over the last 17 or 18 years as it relates to data breach notification laws.
So of course, if you suffer a data breach, you potentially have to report it to all 50 states now, plus US territories, to regulators in those states, and also to affected consumers.
Those state laws have been on the books beginning back in 2007, 2008.
And while there's been federal legislation over the years attempting to create one data breach notification law, it's never passed.
And so that just shows you right there: for the last two decades, folks haven't been able to agree on how to regulate privacy.
But what we have here on the slide is really the current landscape of what I'm going to focus on, which are the comprehensive consumer privacy laws.
These are major privacy laws affecting private organizations that started over five years ago with California enacting the CCPA; then Virginia followed suit.
And currently we have 20 states that have enacted these comprehensive consumer privacy laws.
And you see on the map on the screen it says last updated in July of this year; it's been virtually unchanged since then, so this really represents the current landscape.
The states in green are the states that have actually enacted these comprehensive privacy laws.
Some of them are in effect; you can see on the left those states where the laws are currently in effect, and most of the others are going to take effect mostly in 2026.
But you can also see from this that a few other states are currently considering legislation.
Similar legislation has been introduced in other states, which you can see from the darker grey, that just didn't make it anywhere, and they're still haggling over how these get passed.
I think at some point over the next decade you're going to have most states with some form of comprehensive consumer privacy law passed.
What should you be aware of with these laws?
I'm going to talk about some common features of them, and what the laws do, in the next couple of slides.
But as they relate to these privacy laws, some legislatures have authorized state agencies or state attorneys general offices to make rules going forward associated with these laws.
So there's rulemaking authority in some states, like for example California with their CPPA agency.
Again, I'll talk a little bit more about this.
And also a state like Colorado, whose Department of Law and AG are authorized to make rules related to their Consumer Privacy Act, just within the past week came out with some new rules that are currently in the approval process with the AG, and that focus on ensuring greater protections for minors in certain circumstances.
So I just generally raise this because in some of these states, once the law is passed and the statute is on the books, that's not necessarily the end of the story.
There could be additional rules that are going to come out, again depending on the state and the rulemaking authority in those particular states.
But anyway, this is sort of the current landscape with these comprehensive consumer privacy laws.
So what generally do these laws do?
What do you need to be aware of?
Well, let's move on to the next slide here.
And what I’ve done here is I’ve listed some common elements of these comprehensive privacy laws.
Now generally speaking, these laws apply to controllers and processors of personal information, consumer personal information.
And in the next slide, we’re going to talk about, well, what is personal information?
What’s the scope of that?
We’re going to talk about it here in a second.
And just to jump ahead a little bit, it encompasses, at this point, most anything where you can identify someone by a data point.
But anyway, if your organization is a controller or a processor of data, you need to be aware of the requirements that are popping up in the states.
And certainly if you operate on a national scale or in more than one state, you need to be aware of the privacy/cyber landscape that's out there.
These laws generally address 2 broad categories.
They give consumers more control of their data and grant rights to those consumers relating to how a particular company or organization is handling the data.
And in the other category, they basically impose cybersecurity requirements and requirements on how companies handle the data.
Those are the 2 broad categories of these laws.
Now generally speaking, these laws do vary among each other.
Some are stricter than others; California, for example, has very strict privacy laws.
Some are looser and don't have as many requirements.
But generally speaking, what you're seeing on the screen applies to almost all of them.
In the left-hand column there, under consumer control, controllers now must provide notice to consumers of how their data is being handled and used.
And then consumers have a right to confirm that their data is being processed and to access that data.
They have a right to delete that data from a company’s databases.
They have a right to obtain a copy of their data.
They have a right to correct the data if there are errors, and they also have a right to opt out of the sale of that data in many circumstances.
And again, the laws get specific on when consumers are allowed to opt out, but generally speaking, they can opt out of the sale of the data, out of targeted advertising, and out of certain kinds of profiling that produces legal effects.
And because they have the right to opt out, these laws require that companies employ opt-out mechanisms.
Some of these mechanisms have to be pre-approved by state agencies, again depending on the state.
But just generally speaking, there has to be a mechanism for the consumer to opt out regarding these uses of the data.
In the other column, there are provisions in these laws that govern data handling and how companies have to handle the personal information they hold.
They require limits on the collection of certain data, meaning the data that a company is collecting has to be limited in its purpose and relevant to the purpose for which your organization is using the data.
And your process of collecting and handling the data must be consistent with the notice that you're providing to the consumer.
So for example, if you're collecting the data for some medical purpose, the notice has to say you're collecting it for a medical purpose, and you can't turn around and use it for something like advertising if the notice doesn't say that.
And then of course, these laws require reasonable physical, technical, and administrative data security practices.
And "reasonable" is usually what the laws actually say.
So you have to implement basically reasonable cybersecurity if you're controlling or processing this data; certainly if you're following the NIST standards from the National Institute of Standards and Technology, or approaching that, a lot of times that's going to pass muster.
But these different states' regulatory bodies have in a lot of cases opined on what they consider to be, quote, reasonable cybersecurity safeguards.
Almost all of these laws also have a separate category of personal information that they've deemed sensitive data.
And so there are often heightened requirements for processing this kind of data.
Like I just mentioned, in Colorado, with the proposed rule coming out related to their privacy law, companies have to get parental consent if they believe they're targeting minors, for anyone under the age of 13.
And if it's someone from 13 up to the age of 18, they have to get affirmative consent from the user.
So children’s data is very important.
It’s almost always singled out in these laws, but also any other type of sensitive data that includes religious beliefs, sexual orientation, mental health information, immigration status, and even geolocation data.
These are categories of sensitive data that you need to be aware of.
If you're processing or controlling this kind of data, you need to be aware of these categories in the specific state laws and take the appropriate measures.
So let's go to the next slide here.
We're going to talk about, well, what is personal information?
Let's use California as an example.
It's generally the broadest, but frankly, as you see in this first sentence up here, many of these consumer privacy laws have a similar definition.
Personal information is information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.
And so you can see it includes real name, alias, postal address, IP addresses, e-mail addresses, Social Security numbers, driver’s license numbers, passport numbers, even browsing history and Internet activity, and of course, geolocation data, like I said.
So if you're controlling or processing this kind of data, you're going to fall within the purview of certainly California's consumer privacy law, but of many of these other laws as well.
Now, some of the state laws, and of course we don't have enough time to dive into every specific state law, have a narrower definition of personal information.
But I put California's up here as it's the broadest, and certainly if you're doing business in California, or if you're just handling any of this sort of information, you really need to double check that you're in compliance with these comprehensive consumer privacy laws.
Also, while this presentation is focusing on these comprehensive laws, which are the biggest and really most important privacy laws, I'd be remiss not to mention that if you utilize AI, artificial intelligence, there are special problems that arise with AI systems in complying with the laws we're talking about right here.
Specifically, things like effectuating data requests.
So if you're using personal information, personal data, in an AI system and a consumer requests that the information be deleted, well, AI systems have a hard time forgetting in a lot of instances.
So how are you effectuating these data requests if you're using AI and your AI is touching any of this personal information?
Also, we have a few states that have passed AI-specific laws, including California, Colorado, and Utah, and Texas just this year passed a fairly comprehensive AI law.
So AI specific laws are popping up and we could spend an entire afternoon talking about AI specific laws also.
But just be aware that AI and personal information touching AI presents special problems that you’re going to need to work through.
All right, let's go on to the next slide here, where we're going to talk about the enforcement mechanisms that generally exist in these comprehensive consumer privacy laws, and also some specific actions that have taken place recently.
So I put up here Virginia as an example.
Virginia was the second state to pass a comprehensive consumer privacy law.
The enforcement provisions, terms, and statutory penalties in Virginia are fairly common across these laws; this is a fairly standard example of penalties and enforcement as they relate to these comprehensive laws.
Generally speaking, almost all of these laws are enforceable by that state's attorney general's office.
And that’s the case here in Virginia.
There are a couple of sections in the Virginia AG's office that look into potential violations of consumer privacy: the consumer protection section and the computer crime section.
And most states now have either integrated enforcement into already existing sections, as Virginia has, or they've created entire units, like Texas.
Texas, over the past couple of years, has created an entire enforcement unit just for their consumer privacy law, and they're devoting a lot of resources to the enforcement of those laws.
And some states have more resources devoted to an enforcement investigation under these laws than others.
But as far as penalties go, you can see here that under the Virginia CDPA, the Consumer Data Protection Act, that's what that stands for, the AG can seek up to $7,500 per violation of the statute.
Virginia allows for a 30-day cure period.
So the AG has to give notice and a chance to cure the violation, if it can be cured, before they file suit.
In every state, there's injunctive relief available, which means simply that the AG can potentially seek, and a court can potentially order, actual changes to how your company operates, forcing you to make certain changes in your operations.
So it's not just the monetary penalty; it's potentially much more expensive mandates or requirements that you change your practices.
In Virginia, there's no private right of action under their consumer privacy law, which means an individual consumer cannot sue under the law.
And that's not the case in every state.
Most states do not have a private right of action in their consumer privacy laws, but some do.
California does; in certain situations, there's a private right of action.
And I just want to point out, as I mentioned earlier, even though we're focusing on the consumer privacy laws, don't forget that there are data breach notification laws, where sometimes the penalties are even greater.
Every state has a data breach notification law under which you potentially have to notify a state regulator, and also the affected consumers, if you've suffered a breach of personal information.
And of course those have been on the books for almost two decades now.
But anyway, just be aware all of this is out there.
Let’s go to the next slide.
I want to show the example of California, which dedicates probably more resources than any other state to the enforcement of privacy related matters.
Now, under their consumer privacy law, the CCPA, both the Attorney General and the California Privacy Protection Agency have enforcement authority.
The CPPA, the California Privacy Protection Agency, was created shortly after the law was passed.
They have an entire state agency dedicated to this: not only do they make rules associated with the law, they also have investigators and enforcers of the law.
They're dedicated privacy regulators with subpoena power, enforcement power, and again, rulemaking power.
California, like Virginia, has a $7,500 penalty per violation and can also seek injunctive relief.
There used to be a mandatory cure period after the law was first passed, but that no longer exists.
And as I mentioned, there is a limited private right of action under the law if there's a breach following a failure to implement reasonable security.
But I point out California because it's the most extreme example of resources being dedicated to enforcement.
Of course, California has been very active in the enforcement of their law, maybe as active as any other state.
I would point out that the CPPA recently passed new rules under the CCPA, the Consumer Privacy Act, that go into effect on January 1st of 2026.
And that includes now mandating an annual cybersecurity audit that has to be filed with the CPPA, which didn't exist in the previous version of the rules.
Companies that fall under the purview of this law also have to file a risk assessment with the agency by, I think, April of 2028.
And now they've started addressing AI use in the rules they're promulgating under the consumer privacy laws.
Basically, beginning next year, that's going to require companies to provide notice of the use of AI in their operations and the ability to opt out of AI being used regarding a consumer's personal data.
So these new rules just came out this past week.
In fact, I think it was yesterday from the CPPA.
And so it kind of highlights my earlier statement that you really have to be on top of what's going on with this rulemaking authority in these states.
As far as enforcement goes, just this year the AG had the largest-ever settlement under the CCPA, in June I think it was: a one and a half million dollar settlement against Healthline, which they claim was the largest monetary penalty they've obtained in a settlement.
Basically, Healthline was sharing data of consumers who had previously opted out of allowing them to share the data.
So it's a fairly egregious case.
The company's vendor contracts also weren't up to snuff.
The CCPA requires that companies hiring vendors who touch this personal data ensure those vendors maintain certain cybersecurity requirements and safeguards, and they weren't doing that.
And just within the past couple of weeks, the CPPA, the agency, settled its biggest case ever, for $1.35 million, against the Tractor Supply Company.
Basically, the allegations there were that the Tractor Supply Company didn't have an effective opt-out mechanism on its site.
It just wasn't processing the requests; apparently the mechanism wasn't working on the site, and it also wasn't recognizing the opt-out request if a user was using a certain browser.
They also had not updated their privacy disclosures, the required disclosures under the law.
So two big settlements just happened, one within the past couple of weeks and one earlier in June.
But you can see here that the AG's office over the last two to three years has conducted what they've called investigative sweeps under the CCPA, and they've done it against online advertising and sales.
They've done it to make sure opt-out mechanisms are working properly, as I just spoke about with these settlements, and they've also looked at streaming services to make sure they're handling data appropriately, providing the proper notices, and employing the proper opt-out mechanisms.
I listed a few examples of settlements here.
These are from over the past three years, against Sephora, DoorDash, and Glow, under the CCPA, for various amounts: mostly for failure to provide opt-outs and also for failure to employ proper cybersecurity.
So just a few examples of what's been going on.
I again highlight California in particular; they are very active.
If we go to the next slide: Texas just this past year announced a lawsuit, and Texas has been very active.
I mentioned they created a specific unit just for investigating and enforcing under the Texas Data Privacy and Security Act, and they sued an insurance company over the past year for allegedly unlawful collection, use, and sale of geolocation data regarding the movement of Texas drivers.
So this was data that certain cars were collecting through their movement and sending back to insurance companies.
One of the subsidiaries was acting as a data broker, and there are also data broker laws associated with these privacy laws, where data brokers have to register with the state.
That did not happen here with the subsidiary of the insurance company.
And again, they didn't obtain the proper consent for sharing this geolocation data, allegedly.
That's according to what the lawsuit says.
And if we go to the next slide, over in Michigan, earlier this year the Michigan AG filed suit against Roku, which is of course a streaming service with a bunch of streaming apps on it.
In this suit, the Michigan AG is asserting that Roku was violating the Children's Online Privacy Protection Act, COPPA, which is actually a federal law but empowers state AGs to ensure companies are obtaining proper consent in collecting children's data; companies are also required under COPPA to screen for underage users and then obtain parental consent after they screen for them.
Michigan is alleging that Roku improperly collected, used, and retained children's data in violation of those provisions under COPPA, and also alleging that they were misrepresenting their privacy practices.
So whatever notice they had, according to the Michigan AG, wasn't sufficient in describing how they were using the data.
Interestingly, in the last two days, the Florida AG filed an almost identical suit against Roku.
So now there are two lawsuits against Roku over its handling of children's data.
And I highlight this because Michigan and Florida don't yet have the comprehensive consumer privacy laws that many other states like Texas and California do, but there are other mechanisms for AGs to operate under, some under federal law.
I also highlight this because children's data has been a particular concern of state AGs and enforcement.
And in fact, let's move a couple of slides ahead.
We're going to talk about some of these red flags you'll see here.
If you're handling data for particular consumer demographics, like the elderly, or children's data, you need to be aware that you're going to come under greater scrutiny.
Sensitive data like Social Security numbers and health data is a concern of AGs, and also how you're storing it.
These are all red flags for regulators, and they will take a harder look at your practices pursuant to these consumer privacy laws and even the data breach notification laws.
Go to the next slide here.
Real quick.
Other regulatory red flags you want to be aware of: if you've suffered a breach, whether there's a bad actor involved versus an accidental breach matters, because with a bad actor there's a higher risk of identity theft and fraud.
So regulators are going to take notice, certainly if the data ends up on the dark web; even things like media attention will garner the attention and scrutiny of regulators, as will your legal obligations under both federal and state law.
And finally, on my last slide here, in the wake of an incident, if you've unfortunately suffered one, regulators are going to look at your organization's response: whether there was a significant delay, whether you're cooperating with the regulators when they come knocking, and whether you alerted law enforcement; they look favorably on it if you've got law enforcement involved.
And also, of course, there are political considerations for regulators: most state AGs in particular are elected officials, not all, but most.
They want to make a splash in the area of privacy, and particularly if you have any of these sensitive categories we spoke about, you're going to have a higher risk of exposure.
So that’s just the general overview of what you’re potentially facing here on the regulatory side and the legal landscape.
And hopefully that helps shed some light.
Yeah, Thanks, Jean.
Thanks, Chris.
There have been a couple of questions come in, but we don't have time for those since we're up against the hour.
We'll make sure Chris and Jean respond to everybody with those answers, and everybody will receive a copy of the PowerPoint.
Appreciate everybody’s time today and hope you have a good afternoon.