In early August, the National Institute of Standards and Technology released the initial public draft of its Cybersecurity Framework 2.0. The draft is a long-awaited update to a framework that’s been in place for almost 10 years: The Framework for Improving Critical Infrastructure Cybersecurity, first released in 2014 and updated in 2018.
Boards of Directors for public companies across the country are likely to be taking stock of their companies’ cybersecurity practices and strategies after the Securities and Exchange Commission’s adoption of the Cybersecurity Incident Disclosure Rule on July 26. Although the SEC removed the requirement for corporate boards to include members with cybersecurity expertise, it still intends for the Rule to result in greater transparency into companies’ cybersecurity governance and to aid investor understanding. The Rule gives companies additional reasons to determine who, if anyone, on their Boards can help with oversight of cybersecurity governance.
As a former Special Agent for the Federal Bureau of Investigation who investigated cybercrimes involving children, I know from experience that the topic of increasing online protections for minors has provoked intense debates among law enforcement, social services, parents, and civil rights communities.
Often the discussions focused on how to preserve the positive impact of the internet while addressing its negative aspects, such as the facilitation of cyberbullying, narcotics trafficking, and various forms of exploitation. While others continue the debate, Texas has stepped beyond it and enacted a new regulatory regime intended to shield certain materials from being viewed by minors and to limit the collection and use of their data.
This year has been an active one for state privacy legislation. In addition to its Consumer Data Privacy Act, Montana has now passed a Genetic Information Privacy Act.
On July 31, the California Privacy Protection Agency’s Enforcement Division announced that it would be reviewing the privacy practices of connected vehicle manufacturers and related technologies. Connected vehicles contain features that collect information about owners and riders, including location sharing, web-based entertainment, cameras, and smartphone integrations.
EDITOR’S NOTE: This is part three of “Cyber AI Chronicles” – written by lawyers and named by ChatGPT. This series will highlight key legal, privacy, and technical issues associated with the continued development, regulation, and application of artificial intelligence.
As with all other products and technologies, we can expect to see (and in fact already do see) the emergence of varying approaches to governance of artificial intelligence systems. Currently, AI oversight may be addressed within independent federal, state, and international frameworks – for instance, within the regulation of autonomous vehicle development, or in laws applicable to automated decision-making. So, how can we expect regulatory frameworks to develop for AI as an independently regulated field?
The Constangy Cyber Advisor posts regular updates on legislative developments, data privacy, and information security trends. Our blog posts are informed by the Constangy Cyber Team's experience managing thousands of data breaches, providing robust compliance advisory services, and consulting on complex data privacy and security litigation.
Contributors
- Suzie Allen
- John Babione
- Bert Bender
- Jason Cherry
- Christopher R. Deubert
- Maria Efaplomatidis
- Sebastian Fischer
- Laura Funk
- Lauren Godfrey
- Taren N. Greenidge
- Chasity Henry
- Julie Hess
- Sean Hoar
- Donna Maddux
- David McMillan
- Ashley L. Orler
- Todd Rowe
- Melissa J. Sachs
- Allen Sattler
- Matthew Toldero
- Alyssa Watzman
- Aubrey Weaver
- Xuan Zhou