ANALYSIS: As AI Meets Privacy, States’ Answers Raise Questions

While artificial intelligence may evoke visions of the future, it is already part of many lawyers’ day-to-day practice. And in 2023, companies doing business in four states — California, Virginia, Colorado, and Connecticut — must comply with comprehensive consumer privacy laws that govern AI-driven processing of personal data. These states’ regulatory approaches to AI under their privacy laws already raise questions that are likely to persist long after the laws take effect.

There are major similarities among these laws’ AI-related requirements, including mandatory risk assessments and individual rights to object to certain automated decisions. But there are also major gaps – particularly in the remedies available for harmful outcomes – as well as various inconsistencies among the laws.

Given these gaps and a subject as complex as AI, significant ambiguity is likely over the next year.

AI invasion of privacy law

For those new to AI, the capability known as “machine learning” enables analysis of data at scale to predict outcomes. Numerous industries already use this technology for beneficial purposes ranging from defending against cyberattacks to developing safer scooters.
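To make that concrete for non-technical readers, here is a minimal, hypothetical sketch in Python using scikit-learn: a model learns from labeled historical examples, then predicts outcomes for a batch of new cases in a single call. The cyberattack-flagging scenario, feature names, and all values are invented for illustration.

```python
# Minimal sketch of "machine learning" as used here: a model learns
# patterns from historical examples, then predicts outcomes for new data.
# The scenario (flagging suspicious logins) and all values are invented.
from sklearn.tree import DecisionTreeClassifier

# Historical examples: [failed_attempts, new_device (0/1)] -> 1 = suspicious
X = [[0, 0], [1, 0], [8, 1], [6, 1], [0, 1], [7, 0]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Predicting at scale: score a whole batch of new login events in one call.
new_logins = [[9, 1], [0, 0], [5, 1]]
print(model.predict(new_logins))  # e.g., [1 0 1]
```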

The rapid adoption of AI has also prompted closer scrutiny of its risks, particularly around discrimination and harms on social media. One area of concern for privacy regulators is the vast pools of personal data that machine learning often requires.

Businesses that have been subject to the EU General Data Protection Regulation (GDPR) since it took effect in May 2018 should already be familiar with that law’s AI-related requirements, for which EU regulators issued guidelines around five years ago.

The GDPR refers to the automated processing of personal data for predictive purposes as “profiling”. Additional GDPR provisions govern “automated decision-making” that may result from profiling or other processing methods.

In the United States, these terms and related provisions have been partially incorporated into the four states’ comprehensive consumer privacy laws taking effect over the next year. The chart below compares the AI-related requirements of the GDPR and the privacy laws of California, Virginia, Colorado, and Connecticut.

[Chart: AI-related requirements under the GDPR and the privacy laws of California, Virginia, Colorado, and Connecticut]

Some (non-algorithmic) predictions

As privacy enforcement priorities take shape over the next year, businesses and privacy advocates should seek answers to three questions in particular.

1. How should companies explain the logic behind automated decisions?

Once the statutory rulemaking processes currently underway in California and Colorado are complete, both states will require companies to explain their automated decision-making logic to individuals. These requirements are clearly inspired by the GDPR’s mandate to provide “meaningful information” about such logic.

However, some privacy scholars question whether explanations of how AI works would actually be meaningful to US consumers. In a January 2022 article, the Stanford Institute for Human-Centered Artificial Intelligence recommended that California instead require companies to provide details about the content and sources of the data used for automated decision-making.

Colorado’s proposed rules are a bit more forward-thinking in this regard, as they would require companies to tell individuals what types of personal information are used to make automated decisions and provide an “understandable explanation” of the logic. Still, businesses will likely need further guidance on how to meet this new requirement while minimizing consumer confusion.
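For a sense of what an “understandable explanation” might be built from, consider this hypothetical sketch. It assumes a simple, interpretable scoring model; the feature names, data, and ranking approach are invented and do not reflect Colorado’s (or any state’s) prescribed methodology.

```python
# Hypothetical sketch: deriving a plain-language explanation of an
# automated decision from a simple, interpretable scoring model.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_to_debt_ratio", "years_at_address", "late_payments"]

# Toy training data: rows are applicants, columns match feature_names.
X = np.array([
    [3.0, 5, 0],
    [0.8, 1, 4],
    [2.5, 3, 1],
    [0.5, 0, 6],
    [4.0, 8, 0],
    [1.0, 2, 3],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

def explain_decision(applicant):
    """Return the decision plus each feature's signed contribution
    to the model's linear score, ranked by magnitude of impact."""
    decision = "approved" if model.predict([applicant])[0] == 1 else "denied"
    contributions = model.coef_[0] * np.asarray(applicant)
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda item: abs(item[1]), reverse=True)
    return decision, ranked

decision, ranked = explain_decision([0.9, 1, 5])
print(f"Decision: {decision}")
for name, contribution in ranked:
    direction = "raised" if contribution > 0 else "lowered"
    print(f"  {name} {direction} the score by {abs(contribution):.2f}")
```

With an opaque model, producing even this much is considerably harder, which is one reason businesses are likely to press regulators for guidance.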

2. What remedies do individuals have when automated decisions cause harm?

Colorado, Connecticut, and Virginia will require companies to let individuals opt out of having their personal information used for automated decision-making. Each state’s law expressly limits this right to decisions with significant consequences, including those affecting employment and lending. The CCPA directs California regulators to adopt rules providing a similar opt-out right, although it is currently unclear whether that right will likewise be limited to certain categories of decisions.

However, these laws all fail to specify what recourse individuals have when they are harmed by automated decision-making. In contrast, the GDPR and the national data protection laws of Brazil, China, and South Africa each grant individuals some form of remedy, such as the right to contest an automated decision or otherwise obtain human review. The Blueprint for an AI Bill of Rights recently released by the White House similarly promotes a right to human consideration in “high-risk” matters.

Granted, individuals can often challenge automated decisions under other applicable laws, such as the Fair Credit Reporting Act or the Americans with Disabilities Act. But for consequential decisions that neither affect creditworthiness nor result in unlawful discrimination, companies will likely need more clarity to assess the risk of complaints over AI logic gone wrong.

3. How will states enforce the right to delete personal data from algorithms?

In addition to the right to opt out of certain processing of personal information, each state grants individuals the right to have their personal information deleted. But none of these states’ privacy laws – nor the GDPR – explicitly addresses how the right to deletion applies to personal data used to develop AI algorithms.

The Stanford article suggested that companies could address some deletion requests by creating synthetic data to essentially replace an individual’s personal information, thereby avoiding the cost of retraining an algorithm to operate without that information. Of course, it would be very helpful for companies if state regulators signaled their approval of such a practice as a valid means of satisfying deletion requests.
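As a purely illustrative sketch of that idea (not a practice any regulator has endorsed), the snippet below drops one person’s record from a toy training set and substitutes a synthetic record sampled from the remaining population’s per-column statistics. The column names, data, and sampling strategy are all assumptions.

```python
# Illustrative sketch only: replacing a deleted individual's record with a
# synthetic stand-in sampled from the remaining population's distribution,
# so the training set's shape is preserved without that person's data.
# Column names and the substitution strategy are invented assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

training_data = pd.DataFrame({
    "user_id": [101, 102, 103, 104],
    "age": [34, 51, 27, 45],
    "monthly_spend": [220.0, 480.5, 150.25, 390.0],
})

def erase_with_synthetic(df, user_id, rng):
    """Drop one user's row and append a synthetic record drawn from
    per-column normal approximations of the remaining rows."""
    remaining = df[df["user_id"] != user_id]
    numeric = remaining.drop(columns="user_id")
    synthetic = {
        col: rng.normal(numeric[col].mean(), numeric[col].std(ddof=0))
        for col in numeric.columns
    }
    synthetic["user_id"] = -1  # sentinel: not a real person
    return pd.concat([remaining, pd.DataFrame([synthetic])],
                     ignore_index=True)

print(erase_with_synthetic(training_data, user_id=102, rng=rng))
```

Whether such a substitution would actually satisfy a statutory deletion request is exactly the open question the Stanford authors raise.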

Complicating matters further, the Federal Trade Commission has begun requiring the outright deletion of algorithms that rely on unlawfully collected personal information. Organizations should account for this novel enforcement approach no matter where they do business.

States may also choose to analyze the public comments submitted in the FTC’s ongoing commercial surveillance rulemaking — which covers automated decision-making, among numerous other issues — to shape their own guidance in this evolving area. Even if the FTC doesn’t achieve its lofty goal of adopting a comprehensive federal privacy rule, states are well positioned to carry the baton of AI regulation.
