CDPA: Untangling personal, public and sensitive data
A close look at Virginia's data privacy law reveals potential loopholes and regulatory uncertainty
The newest comprehensive data-privacy law in the US broadly protects personal data — with substantial caveats.
On March 8, the Consumer Data Protection Act (CDPA) officially became law in Virginia. The upshot: At the start of 2023, a lot of people and companies touching Virginians’ personal data are going to have to do things differently or risk paying the price (up to $7,500 per violation plus costs and attorney fees).
CDPA specifically exempts nonprofits, post-secondary schools in Virginia, Virginia state and local government agencies, and businesses/entities subject to HIPAA or the Gramm-Leach-Bliley Act (so as to avoid a federal preemption issue). Otherwise, CDPA applies to all Virginia-operating or Virginia-targeting businesses and entities that:
- Control or process the personal data of at least 100,000 Virginia consumers in a calendar year (regardless of revenue); OR
- Control or process the personal data of 25,000+ Virginia consumers in a calendar year AND derive more than 50% of gross revenue from the sale of personal data.
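The two applicability prongs above boil down to a simple conditional. Here is a minimal sketch of that logic — illustrative only, not legal advice; the function name and parameters are invented for this example, and it assumes the entity already operates in or targets Virginia and is not otherwise exempt:

```python
def cdpa_applies(va_consumers_processed: int,
                 pct_revenue_from_data_sales: float) -> bool:
    """Rough check of CDPA's two applicability thresholds.

    Assumes a Virginia-operating or Virginia-targeting entity that is
    not otherwise exempt (nonprofit, HIPAA/GLB entity, etc.).
    """
    # Prong 1: 100,000+ Virginia consumers' data, regardless of revenue
    if va_consumers_processed >= 100_000:
        return True
    # Prong 2: 25,000+ consumers AND >50% of gross revenue from data sales
    return va_consumers_processed >= 25_000 and pct_revenue_from_data_sales > 50.0

print(cdpa_applies(120_000, 0.0))   # True  (prong 1)
print(cdpa_applies(30_000, 60.0))   # True  (prong 2)
print(cdpa_applies(30_000, 10.0))   # False (misses the revenue condition)
```

Note that prong 1 ignores revenue entirely; the revenue condition only matters for entities in the 25,000–99,999 range.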
All of the above demands an answer to one last question: What makes data “personal”?
CDPA defines personal data as “any information that is linked or reasonably linkable to an identified or identifiable natural person.” (There are a couple of exceptions; we’ll get to those later.)
CDPA exempts data already regulated by certain federal laws (including HIPAA, FCRA, FERPA, the Driver’s Privacy Protection Act, and the Farm Credit Act), “emergency contact data,” and employers handling employees’ and contractors’ data in the ordinary course of employment. Otherwise, CDPA’s dominion over Virginia consumers’ “personal data” is purposely very broad. In general, if a company is subject to CDPA’s requirements, then CDPA allows Virginia consumers to opt out of having that company process their personal data.
But what about the really personal stuff?
Sensitive data under CDPA: broad and uncertain
While CDPA is generally an “opt out” data-privacy law, collection of certain data requires an opt in. Specifically, CDPA categorizes some personal data as “sensitive data”; CDPA-subject companies can process sensitive data only with the consumer’s opt in.
CDPA defines “sensitive data” as “a category of personal data that includes” (exact language follows):
- Personal data revealing racial or ethnic origin, religious beliefs, mental or physical health diagnosis, sexual orientation, or citizenship or immigration status;
- The processing of genetic or biometric data for the purpose of uniquely identifying a natural person;
- The personal data collected from a known child; or
- Precise geolocation data.
“Sensitive data” might be even broader than this list because of the clever use of the word “includes” prefacing it; the Virginia Attorney General appears free to treat categories that Virginia legislators didn’t enumerate as sensitive. Expect considerable regulatory uncertainty as a result.
Still, another critical word drastically limits what can be considered sensitive data: “personal”.
Public data isn’t personal data
As broad as CDPA’s definition of “personal data” is, there are limits. CDPA’s definition of “personal data” specifically excludes (1) de-identified data and (2) “publicly available information.”
The former exclusion probably goes without saying (if it’s de-identified, it’s probably not linked or linkable to an identified or identifiable person). The latter exclusion, however, is a big one; if it’s public, it’s not personal — and if it’s not personal, it’s not sensitive.
That’s a big limit on CDPA’s reach (read: “loophole”). It gets even bigger when you look at how CDPA specifically defines “publicly available information” more broadly than it might otherwise be interpreted.
Naturally, that definition specifically includes data “lawfully made available through federal, state, or local government records”. If the information doesn’t fall into that subcategory, then it is still “publicly available” if the business has merely “a reasonable basis to believe [that the information is] lawfully made publicly available to the general public through widely distributed media, by the consumer, or by a person to whom the consumer has disclosed the information, unless the consumer has restricted the information to a specific audience.”
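The statutory definition just quoted amounts to a two-prong test with an exception on the second prong. As a rough sketch — again illustrative only, not legal advice, with all names invented for this example:

```python
def is_publicly_available(from_government_records: bool,
                          reasonable_basis_lawfully_public: bool,
                          restricted_to_specific_audience: bool) -> bool:
    """Paraphrase of CDPA's "publicly available information" definition."""
    # Prong 1: lawfully made available through federal, state, or
    # local government records
    if from_government_records:
        return True
    # Prong 2: a reasonable basis to believe the information was lawfully
    # made public (by widely distributed media, by the consumer, or by a
    # person the consumer disclosed it to) -- unless the consumer
    # restricted it to a specific audience
    return reasonable_basis_lawfully_public and not restricted_to_specific_audience

print(is_publicly_available(False, True, False))  # True  (publicly posted)
print(is_publicly_available(False, True, True))   # False (restricted audience)
```

The key design point is that the “specific audience” restriction defeats only the second prong; government-records data is publicly available regardless.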
To wit, if a Virginia consumer makes an unrestricted, public post containing otherwise sensitive data on a major social app (or, for that matter, if someone else to whom said Virginia resident had disclosed that information were to publicly post that information in a lawful manner), that would generally seem to fall into this very large loophole for data controllers and data processors. After all, that data cannot possibly be “sensitive” under CDPA because it fails CDPA’s “personal” requirement by virtue of public availability. (Prediction: A spike in sales of software that scrapes publicly available information from social-media profiles and news sites.) Ditto for non-anonymized information in many public news sources, most public government records, or in any other public medium or public forum.
Other laws and regulations may overlap in this area depending upon the circumstances (and those other laws and regulations are outside the scope of this article), but if a data controller can point to their reasonable belief of the “publicly available” nature of the information they process, CDPA itself does not appear to require any reporting, correcting, or deletion duties to a consumer regarding this data.
Moreover, in any event, CDPA contains a catch-all “please don’t strike this down as unconstitutional, Your Honor” clause stating that CDPA should not be interpreted to encroach upon First Amendment protections.
This article originally appeared on MarTech Today.