Cyber AI Chronicles II – AI-enabled cyber threats and defensive measures

EDITOR’S NOTE: This is part two of “Cyber AI Chronicles” – written by lawyers and named by ChatGPT.  This series will highlight key legal, privacy, and technical issues associated with the continued development, regulation, and application of artificial intelligence.

Recent developments in artificial intelligence have opened the door to exciting possibilities for innovation. From helping doctors communicate better with their patients to drafting a travel itinerary as you explore new locales (best to verify that all the recommendations are still open!), AI is beginning to demonstrate that it can positively affect our lives.

However, these exciting possibilities also give malicious actors new ways to abuse AI systems and introduce new or “improved” cyber threats.

For example, malicious actors have already begun exploiting AI to create synthetic media, commonly referred to as “deepfakes.” Just last month, the FBI warned that malicious actors are creating sexually explicit deepfake content to extort, coerce, and harass victims. Although deepfakes can be created without AI, AI makes the content more believable and therefore more dangerous. AI can also draft more convincing phishing content, increasing the likelihood that an intended victim will fall for the scam. Malicious actors have even combined these techniques, using audio deepfakes of trusted parties to call friends, family members, and colleagues and ask them to send money for an emergency.

AI-enabled threats don’t stop at more believable scams. It is possible, or soon will be, for AI to generate new malicious software, or “malware,” or to improve existing malware. AI-generated variants could make it easier for malware to evade network defenses. Commercially available AI tools like ChatGPT have safeguards designed to prevent this kind of abuse, but a maxim in information security is that there is no such thing as perfect security. In other words, we cannot reasonably expect commercially available AI tools to block every attempt to create malware. Worse, new AI systems may be built expressly for malicious purposes.

There is, however, a flip side to these new threats: AI systems can also be used to strengthen our information security measures. The National Security Agency has noted an opportunity for AI to support efforts to secure and defend networks. That opportunity stems in part from AI’s capacity to review and analyze massive data sets and recognize patterns, which can reveal repeats of past malicious activity or slight variations on it. AI can also automate high volumes of security tasks that are time consuming and repetitive or near-repetitive. And as AI’s capacity to learn from the past and predict future behavior improves, so will its ability to anticipate novel, emerging threats.
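
For readers curious what this kind of pattern recognition looks like in practice, here is a minimal, illustrative sketch using Python and the scikit-learn library: an unsupervised model is trained on a baseline of normal network activity and then flags events that deviate from it. The features, numbers, and threshold here are hypothetical placeholders for illustration, not a production detection system.

```python
# Illustrative sketch: flagging anomalous network activity with an
# unsupervised model (scikit-learn's IsolationForest). The features and
# data below are hypothetical, synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical log-derived features per connection:
# [bytes transferred, session duration (seconds), failed login attempts]
normal_traffic = rng.normal(loc=[5_000, 30, 0],
                            scale=[1_500, 10, 0.5],
                            size=(1_000, 3))
suspicious = np.array([[250_000, 2, 40]])  # bulk transfer, brief session, many failures
events = np.vstack([normal_traffic, suspicious])

# Learn the historical baseline, then score all observed events.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)
labels = model.predict(events)  # 1 = looks normal, -1 = anomalous

print(f"Flagged {np.sum(labels == -1)} of {len(events)} events for analyst review")
```

Even in this toy form, the design point carries over to real deployments: flagged events typically feed an analyst’s review queue rather than trigger automatic blocking, because unsupervised models inevitably produce some false positives.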

It’s too early to tell whether AI will be a greater boon or bane to information security, but we can influence the outcome. Doing so begins with understanding the possible uses and abuses of this revolutionary technology. 

The Constangy Cyber Team assists businesses of all sizes and industries with implementing necessary updates to their privacy and compliance programs to address these complex and evolving information security developments.  If you would like additional information on how to prepare your organization, please contact us at cyber@constangy.com.

The Constangy Cyber Advisor posts regular updates on legislative developments, data privacy, and information security trends. Our blog posts are informed by the Constangy Cyber Team's experience managing thousands of data breaches, providing robust compliance advisory services, and consulting on complex data privacy and security litigation.
