Leveraging AI for Offensive Security

AI challenges and opportunities for black hats, white hats, and blue teams

By Dan DeCloss, PlexTrac Founder/CTO

This article is the second in a series on the impact AI is having, and will continue to have, on the cybersecurity industry. In the first article, we discussed how attackers are leveraging AI to accelerate their attacks, adapt them to the target landscape, and improve their overall stealth, increasing both the speed and depth of their campaigns.

In part two of our series, we’ll discuss how the white hats, or offensive security teams, can leverage AI in much the same way, in support of proactive security testing rather than nefarious goals. When referring to offensive security, I use the term broadly to cover all offensive and proactive activities conducted by a security team, including all forms of assessment: penetration tests, red team exercises, vulnerability scanning and assessments, risk assessments, and so on.

AI for offensive security 

Offensive security practitioners and all of their activities can benefit from the application of AI. We already use AI in many forms, so we’ll focus here on generative AI and the ways it can increase effectiveness and efficiency. Hackers are naturally curious, experimental, and interested in pushing the limits of technology. As we explored in our previous article, attackers aren’t bound by ethical limitations and are already using AI for fun and profit. White hats need to embrace the rapidly evolving applications of AI just as the black hats have, or we’ll all be left far behind.

So how can offensive security teams make use of AI right now? Here are three ways I see that AI can scale proactive security efforts to help keep pace with the escalating threats. 

1. AI for test automation and planning 

One of the most immediate uses of AI for offensive security practitioners is in the planning phase. Priming AI with threat intelligence enables teams to quickly identify the best techniques and tactics to test within the context of a specific environment. AI can aggregate and correlate large data sets much faster than a single pentester, so it is a no-brainer to use it to automate the legwork of planning a proactive engagement. It can also help you adapt during the engagement itself: use AI to determine the best next steps, eliminate a lot of research time, and reduce the number of failed attacks.

Like most AI strategies, the play here is really about scaling efforts. AI can certainly support scaling at the onset of a testing engagement so the pentester can orchestrate better, more extensive, and more focused testing with less time and manual effort. 
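
As a concrete illustration, here is a minimal sketch of AI-assisted planning using the official OpenAI Python client. The environment summary, threat intel snippets, model name, and prompt are all hypothetical; the point is simply to feed engagement context to a model and ask for prioritized techniques.

```python
# A minimal sketch of AI-assisted engagement planning, assuming the official
# OpenAI Python client (pip install openai) and an OPENAI_API_KEY set in the
# environment. The environment facts, intel, and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

environment = """\
- External: ~40 hosts, mostly Windows Server 2019 behind a WAF
- Internal: Active Directory, flat /16 network, Cisco AnyConnect VPN
"""

threat_intel = """\
- Recent campaigns in this sector target exposed VPN portals
- Credential stuffing against single-factor logins is trending upward
"""

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works for this sketch
    messages=[
        {"role": "system",
         "content": "You assist a penetration test planner. Given the "
                    "environment and threat intel, suggest a prioritized "
                    "list of techniques to test, with MITRE ATT&CK IDs."},
        {"role": "user",
         "content": f"Environment:\n{environment}\nThreat intel:\n{threat_intel}"},
    ],
)

print(response.choices[0].message.content)
```

In a real workflow the environment summary would come from scan data and the intel from a live feed, but even this level of prompting can shortcut hours of manual research.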

2. AI to improve testing coverage and visibility 

Generative AI is valuable throughout the testing process and puts organizations on a course toward truly achieving continuous assessment and validation. We should all be using generative AI to improve testing coverage and velocity. I see AI scaling both the quality and quantity of proactive assessments, leading every organization toward a continuous exposure management paradigm.

Every pentest or security assessment is limited by people and time. The truth is that at the end of an engagement, every tester inevitably feels they left something out and that there was more to be found. It’s the curse of a tester to wonder whether a little more time or knowledge would have uncovered a complex exploit that could have been a game-changer. That is exactly the scenario in which AI can play a huge role for the offensive security team.

AI can help offensive security practitioners focus their efforts on more advanced, creative, and specialized exploits. We have always used automation to clear the low-hanging fruit in security testing; AI simply raises the level of the low-hanging fruit so that skilled white hats can reach even higher. Similarly, AI can identify gaps in the environment more quickly, yielding better attack surface coverage, as in the sketch below. This saves testers valuable time to focus on those more complex exploits and lets them apply their skills where they are most valuable.
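
One small example of the coverage idea: diff what the scanners discovered against what testers actually touched, then let a model rank the untested surface. This sketch again assumes the official OpenAI Python client; the hosts and services are invented.

```python
# A minimal coverage-gap sketch, assuming the official OpenAI Python client
# and an OPENAI_API_KEY in the environment. Hosts and services are invented.
from openai import OpenAI

discovered = {
    ("10.0.1.5", 443, "https"),
    ("10.0.1.5", 22, "ssh"),
    ("10.0.2.9", 3389, "rdp"),
    ("10.0.3.14", 8080, "http-proxy"),
}
tested = {
    ("10.0.1.5", 443, "https"),
    ("10.0.2.9", 3389, "rdp"),
}

# Plain set arithmetic finds the untested surface...
gaps = "\n".join(f"{svc} on {host}:{port}"
                 for host, port, svc in sorted(discovered - tested))

# ...and a model can rank which gaps most deserve manual attention.
client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "These services were discovered but never manually "
                          f"tested. Rank them by likely payoff for a "
                          f"pentester and explain why:\n{gaps}"}],
)
print(reply.choices[0].message.content)
```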

Not only is AI helpful in raising the bar for test coverage and the comprehensiveness of proactive testing, it also supports scaling the frequency. With a true continuous assessment and validation program as the goal, harnessing AI will be critical to achieving short, iterative cycles of testing and remediation. Powered by threat intelligence, AI can identify potential threats faster and accelerate proactive testing, validation, and retesting. Applying automation and AI makes it possible to aggregate the data needed to prioritize, test, remediate, and retest at scale; the skeleton after this paragraph shows the shape of such a cycle.
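
Every helper below is a stub standing in for whatever scanner, model, and ticketing integrations a team actually runs; only the short, repeatable loop itself is the point.

```python
# A hypothetical skeleton of a continuous assessment cycle. Each helper is
# a stub standing in for a real scanner, model, or ticketing integration.
def run_scan() -> list[str]:
    return ["exposed RDP on 10.0.2.9"]      # stub: pull fresh scanner output

def ai_prioritize(findings: list[str]) -> list[str]:
    return findings                          # stub: model ranks by likely impact

def remediated(finding: str) -> bool:
    return True                              # stub: check the remediation ticket

def retest(finding: str) -> None:
    print(f"retest passed: {finding}")       # stub: targeted validation test

def validation_cycle() -> None:
    # test -> prioritize -> remediate -> retest, then repeat on a short cadence
    for finding in ai_prioritize(run_scan()):
        if remediated(finding):
            retest(finding)

validation_cycle()
```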

3. AI to improve reporting, visibility, and trends

No matter the type of proactive activity — whether pentesting, red teaming, security assessments, vulnerability scanning, adversary emulation, etc. — scaling with AI will inevitably increase the amount of data. That data must be communicated and prioritized for it to be useful in improving security posture. This means that using AI for proactive activities necessitates using it for reporting, visibility, and documenting risk effectively. 

Drawing on the vast array of information that AI can consolidate in seconds strengthens reporting, analytics, and risk identification. Leveraging AI to correlate data at scale and surface significant trends makes it easier, and more compelling, to present key risks and priorities to stakeholders. This speeds up decisions about where to spend precious security dollars and resources, resulting in quicker posture improvement and better ROI.

Using AI to improve the quality and speed of writing report components, like narratives and findings, is another obvious application. But be aware of the security and privacy issues this application can create when client data is sent to models outside your control. As tools like PlexTrac evolve to apply generative AI strategically with security in mind, we’ll begin to realize the full value of AI for security reporting and communication.
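
One way to get the drafting benefit while keeping sensitive evidence in-house is to run the model locally. Here is a minimal sketch assuming the `ollama` Python client and a locally pulled model; the finding data is invented.

```python
# A minimal sketch of AI-assisted finding write-ups that keeps evidence
# local, assuming the `ollama` Python client (pip install ollama) and a
# locally pulled model. The finding itself is invented for illustration.
import ollama

finding = {
    "title": "SQL injection in /search endpoint",
    "severity": "High",
    "evidence": "UNION-based payload returned the database version string",
}

response = ollama.chat(
    model="llama3",  # any locally hosted model keeps client data in-house
    messages=[{
        "role": "user",
        "content": "Draft a concise pentest finding narrative (impact, "
                   f"reproduction steps, remediation) from: {finding}",
    }],
)

print(response["message"]["content"])
```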

Leverage the existing tools to your advantage (just like the black hats)

One of the frustrating things about using AI for offensive security is simply limited resources. Most offensive practitioners are already maxed out trying to do their jobs, and incorporating AI can feel like one more thing requiring research and experimentation before it actually adds value. While we may all be fascinated by the potential of AI, the realities of doing business can make getting the most from it a chore rather than a challenge.

My recommendation is to start with the tools that already exist for little or no cost and to build off the research of those who are already investing in the space. We obviously need to keep privacy and security in mind, but AI tooling and data are more accessible than ever, so leveraging what is available is a great place to start. While we all want, and need, to keep up with the latest developments in technology including AI, some people will take a deeper dive than the rest of us, and there’s no reason we shouldn’t take advantage of their efforts to gain value faster.

Daniel Miessler and Jason Haddix are two security practitioners and researchers working in AI whom I follow. Daniel Miessler’s newsletter, Unsupervised Learning, is a great resource for thoughtful analysis and practical applications of AI in the security space. Jason Haddix has created SecGPT, a custom GPT on OpenAI’s platform, specifically to support offensive security practitioners.

It’s also important to take some time to play with the tools that already exist and discover their benefits and risks for yourself. The most immediate application of AI for offensive security practitioners is to use existing models with your own data to multiply your efforts in researching, planning, building payloads, and actioning threat intelligence.

Communicate, communicate, communicate

To conclude, the application of AI to offensive security practices is only valuable when it is effectively used to improve security posture. In other words, the scaling of offensive efforts must also result in the scaling of defensive efforts. To support that handoff, white hats must scale their communication at the same rate they scale their testing, and that communication can’t just be more; it must also be better. Risk prioritization will be key to successfully actioning the new quantity and quality of information that AI enables offensive security practitioners to produce.

Stay tuned for a discussion on the potential for AI to support defenders. 

Dan DeCloss, PlexTrac Founder/CTO

Dan has over 15 years of experience in cybersecurity. He started his career in the Department of Defense and then moved into consulting, where he worked for a variety of companies. Prior to PlexTrac, Dan was the Director of Cybersecurity at Scentsy, where he and his team built the security program from its infancy into a best-in-class program. Dan holds a master’s degree in Computer Science from the Naval Postgraduate School with an emphasis in Information Security, as well as the OSCP and CISSP certifications.
