By now, you have probably heard about OpenAI’s ChatGPT, an artificially intelligent chatbot, and similar chatbots that have launched in its wake. (Chris Deubert and I have previously written about it here.)
Since its launch, ChatGPT is estimated to have reached more than 100 million users worldwide. It is remarkably advanced, although not infallible. ChatGPT can write essays on complex topics, resumes, cover letters, songs, and fiction, and even pass law school exams. ChatGPT is making waves throughout various sectors and industries and raising important questions about ethics, art, education, employment, intellectual property, and cybersecurity.
From a data privacy perspective, ChatGPT has the potential to challenge and transform privacy frameworks. For example, the European Union and the United Kingdom grant data subjects the "right not to be subject to a decision based solely on automated processing," with certain exceptions. Those rights appear in the EU's General Data Protection Regulation and in the U.K. GDPR, as supplemented by the Data Protection Act 2018. Automated decision-making is a serious concern as advances in technology enable ever more efficient processing of personal data.
Because the United States does not have comprehensive federal privacy legislation, California has taken the lead in advancing privacy rights for consumers, and several other U.S. states have followed suit. California's privacy regulator, the California Privacy Protection Agency, has established a subcommittee to advise on automated decision-making, so it is possible that the United States could adopt prohibitions or restrictions on automated decision-making similar to those in the EU and the U.K.
Although automated decision-making can be useful for organizations, it poses serious risks to the individuals subject to such processes, including adverse legal effects based on processes they may not understand or that may exacerbate and replicate biases and discriminatory practices. For example, the American Civil Liberties Union has opined that "AI is built by humans and deployed in systems and institutions that have been marked by entrenched discrimination . . . bias is in the data used to train the AI . . . and can rear its head throughout the AI's design, development, implementation, and use." Similar concerns were raised in a 2022 Constangy webinar on AI featuring Commissioner Keith Sonderling of the Equal Employment Opportunity Commission.
Further, the Italian data protection authority is investigating additional data privacy implications of ChatGPT, including whether it can comply with the GDPR, its legal basis for collecting, processing, and storing massive amounts of personal data, and its lack of age-verification tools. In the meantime, Italy has temporarily banned ChatGPT.
How organizations will balance the utility of ChatGPT with the privacy rights of individuals, and how regulators will address the risks posed by emerging technologies, will continue to unfold in the coming months and years. Because the European Union often serves as a trailblazer in the regulation of data and technology, we will continue to provide updates on the actions of its data protection authorities as well as on the EU's proposed regulation, the AI Act.
The Constangy Cyber Advisor posts regular updates on legislative developments, data privacy, and information security trends. Our blog posts are informed by the Constangy Cyber Team's experience managing thousands of data breaches, providing robust compliance advisory services, and consulting on complex data privacy and security litigation.