The New Artificial Intelligence: Opportunities and Threats in Offensive Security

Friends Friday is PlexTrac's podcast bringing you informative conversations with innovative thinkers, creators, practitioners, influencers, and leaders in the industry. In its inaugural episode, Dan DeCloss, PlexTrac founder and CTO, hosted Rey Bango and Jason Haddix for a conversation on the latest developments in artificial intelligence for offensive security.

Jason Haddix is the CEO and chief hacker at Arcanum Information Security. Jason has had a distinguished 20-year career in cybersecurity, previously serving as CISO of BuddoBot, CISO of Ubisoft, head of Trust/Security/Operations at Bugcrowd, director of penetration testing at HP, and lead penetration tester at Redspin. He currently specializes in recon, web application analysis, and emerging technologies. Jason has also authored many talks on offensive security methodology and has spoken at cons such as DEF CON, BSides, Black Hat, RSA, OWASP, Nullcon, SANS, IANS, BruCON, ToorCon, and many more.

Rey Bango is principal cloud advocate at Microsoft, focused on empowering companies and information technologists to take full advantage of transformative technologies. Since 1989, Rey has explored the world of information technology through the lens of software developer, open source contributor, cybersecurity practitioner, and now, advocate for the secure and responsible use of artificial intelligence for social good.

Watch the full episode or read on for the highlights of their conversation.

Opportunities for leveraging AI

Jason Haddix started the conversation by describing how he has been using a variety of consumer-level and open source AI tools to enhance and automate his pentesting and red teaming workflows. He explained that he uses AI every day to accelerate some of the manual aspects of his testing, like vulnerability scanning, and has built custom GPTs to assist him.

Jason said, “Once I understood [the tools], that workflow was 2 seconds with the AI. And so I can create a vulnerability check as soon as a vulnerability comes out. And that’s a simple example. There’s tons of web vulnerabilities that you can make these bots to help you build exploits with scanners. And it’s not just vulnerability checking. If an exploit comes out, I can modify it, I can change its code to bypass antivirus. So there’s a million little tiny applications of, at least, generative AI in offensive security right now.”
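To make the workflow Jason describes concrete, here is a minimal sketch of what "create a vulnerability check as soon as a vulnerability comes out" might look like. This is a hypothetical reconstruction, not Jason's actual tooling: it uses the OpenAI Python SDK, a placeholder advisory, and Nuclei templates as an assumed output format, all choices of ours rather than anything stated in the episode.

```python
# Hypothetical sketch: ask an LLM to draft a scanner check from a fresh
# advisory. Model name, prompt, and advisory text are all placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

advisory = """CVE-XXXX-XXXXX: the /debug endpoint of ExampleApp
discloses environment variables to unauthenticated users."""  # placeholder

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You write Nuclei YAML templates for HTTP vulnerability checks."},
        {"role": "user",
         "content": f"Draft a Nuclei template that detects this issue:\n{advisory}"},
    ],
)

draft_template = response.choices[0].message.content
print(draft_template)  # review and test the draft before running it anywhere
```

The draft is only a starting point: the speed Jason describes works because a tester who already understands the vulnerability reviews and tests the generated check before using it.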
Rey added that when using AI tools it is critical to always validate the code. As useful as AI can be, and as many readily available tools as there now are, the fastest way for them to become a liability is to trust them or their developers without validation.

Rey explained, “I’m glad that Jason said something — and it was really important — the fact that he checks the code. GitHub Copilot is amazing. Sometimes I feel like it’s a wizard and it’s magical, but just going ahead and taking code at face value and assuming that everything’s fine, it goes back to that old saying that we have in offensive security, ‘Always validate the code that you’re running.’ If you pull something from GitHub and you don’t know what it’s doing, how do you know you’re not going to get popped yourself? When you’re using AI solutions to build anything out and you don’t know what the AI is going to build, you hope that it’s building solid and safe code, but ultimately, it is the responsibility of the developer. You have to make sure that it’s working as you expect.”
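One way to put Rey's advice into practice is a quick automated pass over AI-generated code before it ever runs. The toy example below (ours, not from the episode) walks the Python AST and flags a few obviously dangerous constructs; the lists of suspicious calls and imports are illustrative, and a pass like this complements, rather than replaces, actually reading the code.

```python
# Toy first-pass check on AI-generated Python: flag eval()/exec()-style
# calls and imports of modules that can touch the OS or network.
# Illustrative only; not a substitute for human code review.
import ast

SUSPICIOUS_CALLS = {"eval", "exec", "compile", "__import__"}
SUSPICIOUS_IMPORTS = {"subprocess", "ctypes", "socket"}

def flag_risky_constructs(source: str) -> list[str]:
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Direct calls to eval()/exec()/compile()/__import__()
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
        # "import subprocess" style imports
        elif isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] in SUSPICIOUS_IMPORTS:
                    findings.append(f"line {node.lineno}: imports {alias.name}")
        # "from subprocess import run" style imports
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] in SUSPICIOUS_IMPORTS:
                findings.append(f"line {node.lineno}: imports from {node.module}")
    return findings

generated = "import subprocess\nsubprocess.run(['curl', 'http://example.com'])\n"
for finding in flag_risky_constructs(generated):
    print(finding)
```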
Threats posed by AI

The group went on to discuss several ways AI is bringing new threats to the landscape that offensive security practitioners, teams, businesses, and even individuals should be prepared for. While many of the tactics and techniques are not really new, AI is helping threat actors produce better attacks at scale. The threats mentioned included the following:

- Content and deepfakes
- Novel Python packages
- Malicious GPTs

To prepare to respond to these threats, Rey suggested, “And so what I would urge everybody to consider is just, in much the same way that us as red teamers will look at MITRE, to go ahead and look at the MITRE ATT&CK framework. MITRE has created the Atlas framework. And Atlas is an analog to the MITRE ATT&CK framework, but it outlines TTPs for LLM-based types of threats and attacks. And so I would, I would urge everybody to look at MITRE Atlas along with the OWASP Top 10 for LLM Applications.”

Communicating about AI

Rey noted that another area the offensive side will need to develop in response to AI is communication and documentation. Offensive security professionals will need to not only leverage the tools and test for the threats but also communicate and document what they find in ways that empower their organizations and clients to respond effectively to new and emerging threats.

He explained, “So, yeah, I think part of this is also going to be the documentation side. It’s a new muscle that offensive engineers have to start thinking about. So we think about traditional pentests, think about traditional red teaming, and all the information that has to be aggregated and presented to the customer so they can take action on it. And so now there has to be a whole new thought around how you explain these very conceptual topics to a customer. How do you help them better understand that, yes, a deepfake doesn’t actually exist? And how do you protect against that? How do you tell a customer you have to create human 2FA when they’ve been struggling with digital MFA? Now we’re getting into the point where even the way that we report back to the customers on assessments has to be in a way that truly can break down these very, very unique scenarios.”

Further learning on AI

The group concluded the episode by sharing a number of resources that they have found helpful and that listeners can check out for further study on AI in cybersecurity:

- “AI poisoning could turn models into destructive ‘sleeper agents,’ says Anthropic” (article)
- Not with a Bug, but with a Sticker (book)
- OWASP Top 10 for LLM Applications Release (podcast)
- “Threat Modeling LLM Applications” (article)
- “WormGPT: What to know about ChatGPT’s malicious cousin” (article)

Jason closed by sharing his optimism about the opportunities and challenges presented by AI: “If you talk to us and you’re red teaming, we think of hacking that whole ML/AI security part where we would attack the algorithms or the data they’re trained on or even use them to attack other things, this is almost greenfield for us these days. It’s a whole new skill set that offensive security people will actually have to learn as well. And so I think that this is a tremendous time for offensive security people to pick up a really new skill.”