
Defending Against AI Attacks

AI challenges and opportunities for black hats, white hats, and blue teams

By Dan DeCloss, PlexTrac Founder/CTO

In this third article in my series exploring the potential impact of artificial intelligence on cybersecurity, I’m covering the opportunities and threats for defenders. In my first piece, I considered the areas where AI is most likely to be used and exploited by attackers. In the second part, I covered the potential for AI to enhance offensive security practices. 

Now I would like to consider the ramifications of AI for defenders. When I think of defense, I am including all preventative activities in cybersecurity: vulnerability management, patching, DevSecOps, governance, risk, and compliance (GRC), incident response (IR), and so on. Aside from structured organizational blue teaming, I think it’s also worth mentioning a few ways individuals can protect themselves from AI-enhanced threats.

AI for Defenders

I’ve spent most of my career focused on offensive security, but I’ve also had the unique perspective of leading a security team and growing a program from the ground up. Along the way, I’ve spent significant time collaborating with defenders to help them understand how to prevent vulnerabilities and which controls to invest in for optimal defense. Defenders have their hands full with constant system upkeep and security hygiene, in addition to remediating the vulnerabilities uncovered by proactive testing. I see several areas where AI can offer significant help in scaling defensive efforts.

1. Automating remediation management  

First, AI is well suited to scaling, automating, and enhancing remediation management. It excels at rapidly processing large data sets, and that aggregated data is the foundation for building a comprehensive tracking and remediation plan.

Generative AI will also be valuable for enriching that data with remediation recommendations. Training an AI model, with security in mind, on your business context will enable it over time to suggest remediation steps based on your organization’s priorities. While human oversight and validation will always be necessary, AI used this way can help even less experienced defenders respond more efficiently to incidents and remediate the most critical vulnerabilities preemptively. As with any new technology, defenders will need AI to scale remediation and response just to keep pace with attackers using AI to scale their attacks.
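As a concrete illustration, here is a minimal sketch of prompting a large language model for context-aware remediation suggestions. It assumes the OpenAI Python client; the finding structure, model name, and business-context string are hypothetical placeholders, and any output would still need human validation before it reaches a remediation plan.

```python
# Minimal sketch: asking an LLM for remediation steps tailored to
# organizational context. Assumes the OpenAI Python client
# (`pip install openai`); the finding and context are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

finding = {
    "title": "Outdated OpenSSL on public web servers",
    "severity": "high",
    "asset": "edge-web tier (internet-facing)",
}

# Business context steers the model toward this organization's priorities.
context = (
    "E-commerce company; uptime on the edge-web tier is critical; "
    "change windows are Sunday 02:00-06:00 UTC."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a remediation advisor. Suggest prioritized, practical remediation steps."},
        {"role": "user",
         "content": f"Context: {context}\nFinding: {finding}"},
    ],
)

# Draft only: a human validates before anything lands in the remediation plan.
print(response.choices[0].message.content)
```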

2. Leveraging behavioral analytics

AI is already heavily used to assess behavior and surface the deltas between perceived normal activity and abnormal or anomalous behavior. As AI becomes more accessible and affordable, defenders should continue to apply it to behavioral analytics wherever they can.

The key here is letting AI help you identify truly anomalous behavior and attack vectors. Imagine feeding an AI model threat intelligence on emerging attack vectors and having it determine whether you’ve actually tested for those vectors, let alone whether you have vulnerabilities or gaps related to them.
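For a sense of what this looks like in practice, here is a minimal sketch of anomaly detection over login events using scikit-learn’s Isolation Forest. The features (hour of day, failed attempts, data transferred) are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of AI-assisted behavioral analytics: an Isolation Forest
# flags logins that deviate from a learned baseline. Feature choices are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline: business-hours logins, few failures, modest transfer volumes.
normal = np.column_stack([
    rng.normal(13, 2, 500),    # hour of day
    rng.poisson(0.2, 500),     # failed attempts before success
    rng.normal(50, 15, 500),   # MB transferred in session
])

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# New events: one typical, one 3 a.m. login with many failures and a large pull.
events = np.array([[14, 0, 45], [3, 9, 900]])
print(model.predict(events))   # 1 = normal, -1 = anomalous
```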

3. Using AI against itself

In much the same way we know attackers are leveraging AI, it is crucial for blue teams to leverage AI to thwart those same techniques. This is particularly relevant for identifying attack vectors and quickly determining how they could be exploited.

On the phishing front, using AI to help identify and detect new AI-generated deepfakes and carefully crafted phishes will be absolutely necessary. Using AI as part of your phishing campaigns, tabletop exercises, and purple team engagements will become an expectation moving forward.
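As a simple illustration of fighting fire with fire, the sketch below trains a toy text classifier to score messages for phishing traits. The training samples are placeholders; a real pipeline would train on large labeled corpora, including AI-generated phishes, and still route hits to human analysts.

```python
# Toy sketch: a text classifier that scores messages for phishing traits.
# Training samples are placeholders, not a real corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked. Verify your password here immediately.",
    "Urgent: wire transfer needed before 5pm, reply with approval.",
    "Attached is the agenda for Thursday's project sync.",
    "Reminder: open enrollment for benefits closes next week.",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(messages, labels)

suspect = ["Final notice: confirm your credentials or lose access today."]
print(clf.predict_proba(suspect)[0][1])  # probability the message is a phish
```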

4. Developing and testing code

AI can be beneficial not only for helping write code faster but also for spotting security vulnerabilities within that code through pattern recognition. Tools like GitHub Copilot can help teams scale and write more code while also flagging security flaws as they are introduced, which matters for both development velocity and security.
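To make the pattern-recognition idea concrete, here is a deliberately simple sketch that flags risky code patterns. Real scanners and Copilot-style assistants go far beyond regexes, but the principle of recognizing known-bad patterns is the same; the patterns and messages here are purely illustrative.

```python
# Toy illustration of pattern-based flaw detection, the kind of check an
# AI-assisted review can scale. Patterns and messages are illustrative only.
import re

RISKY_PATTERNS = {
    r"\beval\(": "eval() on dynamic input can allow code injection",
    r"execute\(.*%s.*%": "string-formatted SQL suggests injection risk",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
}

def scan(source: str) -> list[str]:
    """Return a finding for each line that matches a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, why in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {why}")
    return findings

sample = 'cursor.execute("SELECT * FROM users WHERE id = %s" % user_id)'
print(scan(sample))
```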

5. Maximizing reporting and analytics

Finally, with the amount of data being generated and consumed within our environments, AI can be a boon for reporting on trends, risk categorization and prioritization, and key metrics. The entire goal of a security program is to answer four key questions:

  1. Are we focused on the right activities?
  2. What should we focus on next?
  3. Are we making progress? That is to say, is our security posture improving based on our investments?
  4. How do we compare to our peers?

Combining your data sets with AI to help produce metrics related to these key questions should be the primary focus of your security program.
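As a starting point, the sketch below shows how a findings export could be turned into metrics that speak to these questions, such as mean time to remediate by severity. The column names are assumptions about your data, not a required format.

```python
# Minimal sketch: deriving program metrics from a findings export with pandas.
# Column names are assumptions; the point is mapping raw data onto the four
# questions above (focus, next steps, progress, peer comparison).
import pandas as pd

findings = pd.DataFrame({
    "severity": ["critical", "high", "high", "medium"],
    "opened": pd.to_datetime(["2024-01-02", "2024-01-10", "2024-02-01", "2024-02-12"]),
    "closed": pd.to_datetime(["2024-01-20", "2024-02-05", pd.NaT, "2024-02-20"]),
})

findings["days_to_close"] = (findings["closed"] - findings["opened"]).dt.days

# Mean time to remediate by severity: "are we making progress?"
print(findings.groupby("severity")["days_to_close"].mean())

# Open critical findings: "what should we focus on next?"
open_criticals = findings[findings["closed"].isna() & (findings["severity"] == "critical")]
print(len(open_criticals))
```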

Personal Preparedness Against AI Attacks

The above points apply to security in an organizational context, but I also think it’s worth addressing some ways we as individuals can defend ourselves and our organizations from the nefarious use of AI.

The rise of AI-produced deepfakes and increasingly effective phishing tactics makes human multi-factor authentication something to consider in both professional and personal settings. Determining unique, personal, and tech-free ways to verify the identity of a person or the validity of a communication is something we should all be thinking about.

Additionally, we should continue to focus on increasing education around AI-produced social engineering attacks and their efficacy, whether that is with employees, senior adults, or children and teens. While AI in different forms has been around for quite some time, those who aren’t in the tech space are likely still unaware of or intimidated by its new and rapidly advancing capabilities. Bullying, coercion, and even simple mistakes with significant consequences — like loading sensitive business or personal data into a public AI model — can occur at lightning speed. It is well past time to talk to our loved ones and provide clear guardrails to employees around AI.

My Key Takeaways on AI  

In conclusion, I have three key takeaways when considering the potential of AI for security defense: 

  1. The importance of human validation of automated and AI-produced data and recommendations — As we all know, AI is far from perfect in its current iteration. Often it’s only as good as the data it’s been trained on. We can’t become completely reliant on AI for any aspect of our security programs. While AI can process information, provide recommendations, and even make fixes at speeds that will help us scale, human analysis and validation will still be essential to true security. AI is still just another tool that must be trained, maintained, secured, and wielded by knowledgeable people in order to provide maximum value.
  2. The balance of security best practices (as implemented or recommended by AI) and practical usability — This point again speaks to the value of human oversight of AI. Clearly I don’t see AI as an agent displacing all humans any time soon. While AI is critical to making us better and faster, it doesn’t have the uniquely human ability to make judgment calls. AI may recommend or even implement a control or fix that simply isn’t reasonable for a particular system or a priority for the organization. Realistic security requires balance.
  3. Prioritization based on your organizational context and risk tolerance — AI can and will process massive amounts of information quickly. This is one of its greatest strengths for security. However, many teams are already overwhelmed by information. Prioritization will be essential to scaling successfully with AI. Training AI with organizational context will be an important step in the data management and prioritization necessary to focus on the most critical vulnerabilities and threats to your environment.

There’s so much potential for artificial intelligence to raise the bar in security across teams, and I’ve only scratched the surface in this series. If you’d like to learn more, check out our Friends Friday cast with leading experts in cybersecurity and AI, Jason Haddix and Rey Bango.

Dan DeCloss, PlexTrac Founder/CTO

Dan has over 15 years of experience in cybersecurity. Dan started his career in the Department of Defense and then moved on to consulting, where he worked for various companies. Prior to PlexTrac, Dan was the Director of Cybersecurity for Scentsy, where he and his team built the security program out of its infancy into a best-in-class program. Dan has a master’s degree in Computer Science from the Naval Postgraduate School with an emphasis in Information Security. Additionally, Dan holds the OSCP and CISSP certifications.
