Artificial intelligence tools like ChatGPT have received a lot of attention this year as employees have begun to use them effectively and strategically in their day-to-day work, with or without buy-in from management. And while AI tools can greatly improve efficiency and productivity in the workplace, they can also introduce risks that employees and managers may not understand.
Sarah Rugnetta, vice chair of the Constangy cyber team, recently spoke to Risk Management Magazine about the additional risks that organizational use of ChatGPT and other AI tools may introduce, including the possibility of cyber threat actors finding ways to exploit artificial intelligence to infiltrate company systems.
“Generative AI is still in its infancy. We’re waiting to see whether and how this technology will be leveraged to increase the current threat landscape,” she says.
Sarah Rugnetta is a partner and vice chair of the Constangy cyber team, based out of our New York office. Sarah has more than 15 years of experience in privacy law and focuses her practice on advising clients on business-oriented strategies to mitigate data security and privacy risk. A former privacy officer and state regulator, she has extensive experience serving as outside counsel for businesses in the fields of health law, data privacy, risk management, and compliance with domestic and international data privacy laws, including HIPAA, GLBA, FERPA, the GDPR, the CCPA, and other comprehensive state privacy laws.
Read the full article here.