Skip to content

VIDEO

The New Artificial Intelligence: Opportunities and threats for offensive security

Join Dan DeCloss as he hosts two leading thinkers on emerging tech in cybersecurity, Rey Bango and Jason Haddix. They will discuss both the opportunities and challenges that rapidly advancing artificial intelligence is creating in the offensive security space. From the technical to the strategic, Jason and Rey will share how they’ve been leveraging AI and preparing for the threats it poses.  

Series: Friends Friday (A PlexTrac Series), On-Demand Webinars & Highlights

Category: AI, Thought Leadership

   BACK TO VIDEOS

Transcript

Hey, everybody. Welcome to our next edition of Friends Friday. We’re really excited to have you. Happy Friday, by the way. We’re really excited to have this topic today. We’ve got some very special guests that we’re excited to have on. We’ve got Jason Haddix and Rey Bango.

Today’s topic is going to be AI in offensive security — the threats and opportunities. We’ve got a fun discussion, but first, I’d just love to introduce our guests. Super thankful for their time spent with us today. Rey, I’d love for you to introduce yourself and then, Jason, introduce yourself.

Awesome. Awesome. Thank you for having me. I’m Rey Bango. I work for Microsoft. I’m a principal cloud advocate. I focus on cybersecurity and AI systems. And more importantly, I’m focusing on how AI is part of the security solution, including how it can be used for defensive purposes, and in this case, how can it be used for offensive purposes, too. So that’s kind of an interesting topic.

Awesome. Hey, everyone, my name is Jason Haddix. So I am the CEO, head trainer and consultant for Arcanum Security, my consulting company. Basically what I do is I red team a lot and I’ve been integrating AI into my red teaming in the day-to-day kind of workflow and been trying to learn kind of as a consumer like everybody else. But I’ve had a long-tenured career in offensive security, so about 20 years in offensive security as well as I’ve sat in the CISO seat before. So really I’ve done a little bit of everything, but my heart has always been in offensive security.

Awesome. Well, I mean, yeah, super, super excited to have you guys. Thanks so much for joining. We love being able to use this opportunity to talk about key topics in the security industry. Obviously at PlexTrac, we’re really focused on proactive security and taking a proactive mindset. And so we just really felt like this would be a great topic to learn from some experts as well and to just have some perspective for anyone that’s following us. So definitely hit that Like button, follow the Friends Friday aspects, and stuff like that.

We’re excited to have this one, but let’s dive in because we want to in prep for this. We could have recorded the session live back then, but we’re excited to be kind of talking about this and we know we’ve got a lot to cover, but would love to just kind of start with where in the AI journey do you see it really being used from an offensive security perspective? I know, what are the opportunities that proactive security teams and offensive security teams can really take. I mean, Jason, I know this is kind of like where you’ve been spending a lot of your time and focus, so would love to kind of hear some of the things that you would advise people and are seeing as you’ve done your work.

Yeah, I think that, I mean, really, I have my AI tools up on my second monitor all the time, so it’s consumer-level APIs that I use. Right. So a lot of stuff that’s OpenAI now, a lot of stuff that’s clawed and stuff like that, and some of those models and what they’re helping me do. It really took me a little while to wrap my head around exactly how I wanted to use it, but I really just started asking myself a ton of questions like, okay, day to day, what am I doing as an offensive security professional?

It breaks down into several things. You’re doing manual work, but if you break your manual work down, there’s a whole bunch of small components in web testing or exploitation or things like that. Really, the first thing that I really outsourced to several custom GPTs I made was vulnerability scanning. Right? So a core part of internal assessments, a core part of even red team assessments if you can keep from getting banned by your client or noticed by your client. But these days everybody’s scanning everybody, so it’s really not that big of a deal. But let’s say like a new advisory comes out tomorrow, right? And as an offensive engineer or an offensive security person, there used to be this gap between how much code I needed to know to be dangerous in the red teaming world, and if I can at least read and understand the vulnerability and the exploit.

You know, a very common one that I’ve created several for is subdomain takeovers. Right? So this is the idea that a customer, um, of mine who I’m doing a red team assessment on, um, interacts with a third party SaaS, and then they decide, hey, they don’t want that SaaS anymore, but they have a branded subdomain with that SaaS off of their main domain. When you go there, it takes you still with a redirect to the SaaS company. What hackers will do is they will go to the SaaS company and claim your subdomain because you no longer have it registered to you. Then when people go to that link, basically they’re trusting that it’s yours because it’s on your domain. You can make scanners to pick these types of things up. There’s a lot of open source tools. I just mapped out the open source tool landscape. Then I started telling and started figuring out what those checks look like. And all those checks really look like in a scanner, like Nuclei or Nessus or something like that, depending on what you’re using is they land on a page with a little web bot and they recognize some text and it says this domain has not been claimed yet, or something like that, to write that in Nuclei.

Once I understood, that workflow was 2 seconds with the AI. And so I can create a vulnerability check as soon as a vulnerability comes out. And that’s a simple example. There’s tons of web vulnerabilities that you can make these bots to help you build exploits with scanners. And it’s not just vulnerability checking too. If an exploit comes out, I can modify it, I can change its code to bypass antivirus. So there’s a million little tiny applications of at least generative AI in offensive security right now.

Fascinating world, that’s, yeah, that’s great. And then like, so in terms of just being able to identify like new checks and things, like what would be some of the threats that like, you know, we might need to be like worried about? I mean, Rey, I don’t know if that sparks any of the thoughts in your brain, but like, you know, that to me is really fascinating on being able to quickly detect it. But what would some of the threats around some of this, you know, be posing?

No, I love that question. But the first thing I want to say is I’m glad that Jason said something and it was really important, the fact that he checks the code. And so here’s the thing I have to tell you, GitHub Copilot’s amazing. Sometimes I feel like it’s a wizard and it’s magical, but just going ahead and taking code at face value and assuming that everything’s fine, it goes back to that old saying that we have in office security. Always validate the code that you’re running.
If you pull something from GitHub and you don’t know what it’s doing, how do you know you’re not going to get popped yourself? That’s part of the same concept. When you’re using AI solutions to build anything out and you don’t know what the AI is going to build, you hope that it’s building solid and safe code, but ultimately it is the responsibility of the developer to take that and say, let me just validate this, let me make sure that even from a code perspective, forget about the malicious attack from the code perspective. You have to make sure that it’s working as you expect. You don’t want it to say, oh, let me just go ahead and delete my whole entire database because you decided to put in the wrong prompt. And all those things matter. Everything from prompt injections to insecure access to valuable data you might have. These are things that are real threats.

I was reading two articles, actually, last week. It was funny because it was perfect timing for some of the things that we’re going to be talking about today. And you have, for example, state actors that are using AI right now to create malicious campaigns to meddle in our elections. And they’re doing everything from crafting AI content to creating deepfakes that spread disinformation. So that’s a really important thing to think about when you’re reading anything on the Internet nowadays. These things are well done, and they take the time to craft these messages that could spread this information.

But even taking a step further, it was interesting, there was another article that I read where AI researchers found that AI chat bots were hallucinating and they were making calls out to non-existing python packages. So imagine if you have a non-existent python package and you’re a threat actor. You’re like, huh? Is that AI chatbot is going ahead and making a request to this particular package. Couldn’t I create it and then put a malicious piece of content in there? A malicious piece of code that does something? There you go. Boom. Now you have a new threat that maybe most of us hadn’t thought about. And while we have thought about things about, like malicious PI PI packages, malicious NPM packages, this is a whole new thing right here. This is hallucinations, where you are totally not expecting this package to even exist. And also, it’s there. And so the threats are real.

And so what I would urge everybody to consider is just in much the same way that us as red teamers will look at MITRE, to go ahead and look at the MITRE ATT&CK framework, MITRE’s created the Atlas framework. And Atlas is the, it’s an analog to the MITRE ATT&CK framework, but it outlines ttps for LLM based type of threats and attacks. And so I would, I would urge everybody to look at MITRE Atlas along with the OWASP Top Ten for LLM applications.

And. I’m sorry, I’m going here. It’s just this earpiece keeps falling out and I keep pushing.
I have a wonky ear. Your canal, it’s just like, this is the only one that does it. It’s like it drives me. Yes.

But, yeah, if you look at the OWASP Top Ten for LLM applications, they’ll go into more details about potential threats and how you can help to mitigate that. So you can, you know, I’m grateful that OWASP and MITRE have taken the initiative to go ahead and start outlining these things, because not only does it give defenders an opportunity to understand what the attack vectors are, it gives red teamers like Jason an opportunity to say, how do we test these defenses out? How do we make sure that our defenders are prepared to actually mitigate these types of attacks?

Yeah, that’s fascinating because, I mean, it almost is like, it’s like its own separate type of campaign, right? In terms of like, hey, can I, can I, and this is just me now brainstorming, thinking off the cuff, but like, can I train, can I, can I put information out there that would actually allow an LLM to start hallucinating on the vector that I actually wanted to go exploit? You know, so that was an interesting thought that I just had with it, with that, with that concept.

No, it’s a great, it’s a great comment. And, you know, just continuing on what Jason was saying earlier about him creating his own GPTs, it’s already been shown that threat actors are creating their own GPTs and they’re using it for expediting the creation of malicious code bases, helping them on phishing attacks, helping make sure that they sound realistic. And so we always look back at the phishing attacks and we’re like, every time you get a training, right, they say look for common misspellings or the grammar is not the way that you would expect. Well, guess what? Chat GPT, Gemini, all these different LLMs and chat bots, they’re doing a really great job right now of creating well worded, grammatically correct content. So it’s really easy to go ahead and create an email now that looks just like you would expect.

And in fact, it’s gotten so good that there was another article that I read about. Unfortunately, I felt bad for the CFO. The CFO was on a video call, and this was the interesting thing. A video call with what looked like the CEO of the company sounded like the CEO of the company, along with some other board members, and they convinced that CFO to transfer $25 million through video deepfakes that sounded so realistic. And so when you think about how much information we share out there, how much our faces are out there, all those things are definitely possible. So that’s another threat vector that we have to consider.

Another opportunity for red teamers to test out the defenses. Can you go ahead and imitate a really senior level person and have another person make a decision based on a video or voice recording, things like that. Right, Jason? Yeah, I’d love to hear your thoughts on that.

Yeah. So it’s, uh, it’s really interesting that you brought that up because, um, so this last DEFCON that, uh, that passed, uh, the social engineering village had two really good talks. One was, um, one that I found awesome was actually by two guys I know, um, Danny Golan and Preston Thornburg. And they did a presentation, presentation called fishing with dynamite.
And, um, it was at the social engineering village. And so what they did is they built an agent based, um, framework. I can’t remember what they called it, but it was, it was scary. So imagine, you know, your OSINT tools that you can play with right now, right? You can play with Multigo, you can play with a whole bunch of open source osint tools. What they did is they rigged up these OSINT tools via AI to go pull out data from the public web on, let’s say, you, Rey, and they were able to pull down images of you.

Why are you, why are you bringing me into the dome? Oh, I don’t know, man. I mean, why are you throwing me into the bus?

You were in the center of the screen. It would go out and pull basically every forum post you’ve ever made, every comment you’ve ever made in GitHub. It would pull out all the pictures it could find off you of the Google image database, everything it find, and then it would put it into an object storage bucket, and then it would take it from there and start analyzing the data. It would chunk it up and make it digestible by another AI system.

So what they did is they pointed it out Dave Kennedy, actually, during the call. So if you don’t know who Dave Kennedy is, he’s the CEO of TrustedSec, one of the most respected red team companies in the world. And so they pointed at him. And what it came back with was it correctly identified that David’s second passion behind security is health, basically. So he’s really into bodybuilding. He’s really into helping local communities. He started a health company this last year, and it identified all this information from his tweets, from his online presence. And, and it built a, basically a phishing email to him that said, hey, we are, I am the representative from bodybuilding.com. We are starting these local community chapters. And we’re looking for someone in the Cleveland area to be our brand ambassador for bodybuilding.com. And we’ve been, you know, we’ve heard about your community activity, how much you’re into health.

And it wrote like a half a page email that sounded perfect. And then, so they, they showed this example that the bot came up with. And then we tweeted at Dave live during the DEF CON talk. And luckily he was online and he was like, hey, I’m a security guy. I can tell you 100%. I would have probably clicked on this link that they put at the end of this email. Right. Because it was so contextual to him. Right. Like, it’s a very low percentage chance for him to be like, you know, oh, this is not whatever, you know, and they convinced, they made a convincing domain and everything, and so they built the system to do it. And this was last year. I mean, this was last, whenever DEF CON was, what, July, August?

And now the tools are even crazier. They’re like, they’re so much better even now. So the social engineering side of augmenting red teaming and augmenting real adversary like campaigns is a lot better these days.

And so, you know, I think we were talking before the show about, like, we now almost need human two factor authentication. Right. Like where we need to figure out a second way to verify it as us. So. Yeah, yeah. Like having some kind of out of band, like almost like a code word or something. Yeah.

Days of espionage or whatnot, right? Yeah, exactly. Sure. They still use some of that, but like. But, yeah, no, I mean, you know, that’s fascinating. Right. And I think it kind of highlights, you know, from an offensive security perspective, is that it’s really important as part of our jobs to try and stay ahead or at least stay aware of what the attackers can be doing from the threats perspective, to not only be able to advise, you know, your organizations or customers on how to protect these things, but also test for it, like you mentioned. Right.

So, yeah, I think part of the, part of this is also going to be the documentation side. It’s a new muscle that offensive engineers have to start thinking about. So we think about traditional pentests, think about traditional red teaming, and all the information that has to be aggregated and presented to the customer so they can take action on it. And so now there has to be a whole new thought around how do you explain these very conceptual topics to a customer? How do you help them better understand that, yes, a deepfake doesn’t actually exist and how do you protect against that? How do you tell a customer you have to create human 2FA, when we’ve been, when they’ve been struggling with digital MFA, now we’re getting into the point where even, even the way that we report back to the customers on assessments has to be in a way that truly can break down these, these very, very unique scenarios. They’re. They’re just. I don’t know how I could explain it right now. I don’t know how I could take that and generate a report that says how, here’s the situation, and this is how you’re going to solve it.

It’s a bit of a challenge because you’re generally reporting to people who are focused on driving business, driving success, keeping the company moving in the right direction. They need information quickly, they need to make actionable decisions quickly, and they need information in a digestible fashion. So the technical details aren’t what they’re interested in. They have to understand what’s the business impact of everything that we’re seeing here. And AI and these models are, that’s just a whole new world for them. It’s a definitely different space.

So it’s very interesting that we’re, that we’re in the predicament we are now with some of this stuff because, I mean, a couple of years ago, we had, you know, we still do to this day. We had a lot of crypto fraud, right? And so the crypto industry had to work a lot on integrating KYC know your customer, which provides, you know, real time data to, you know, combat fraud and make sure that you are accessing those services that deal with digital currency. And so I think that we’re going to move into a phase of know your partner. So, KYC, basically where we’re going to add these type of checks to any sensitive business logic. So one of the things that I think is really great is that, like, you know, over the past year, I’ve been integrating red and blue topics, you know, with my bots. And one of the ones that I made was a threat modeling bot that asked this question. And we basically type in what your application does and what your business process is, and you can feed it in documentation that already exists in your business, and it’ll identify areas to add KYC.

Know your partner and how to do it. That makes it easier than, you know, just trying to think of all these ways by yourself. There’s APIs, there’s, you know, photo identification, because live deepfakes are hard to do right now, prerecorded or much easier, you know, because of the compute power required, you know, custom passphrases for certain transitions and transactions. So it’s really interesting because we learned a lot of lessons from the crypto scene, whether you were in it or not, whether you liked it or not. We learned a lot of really great lessons from all the fraud that went on there. And I think we can use a lot of it actually to help us secure new age AI or new age like businesses who are going to influence AI or who are going to be attacked by AI methods.

Yeah, there’s even the, and you mentioned something about ingesting data. It just kind of jarred my memory. The other issue that companies are going to have to come to terms with is that there’s a lot of data that they don’t have full control over and that anything that gets ingested into a model can potentially become public. And so as you’re going through your process of transitioning into AI and leveraging it, there has to be governance in place to make sure that you’re managing that data properly so it doesn’t fall into the wrong hands. There was an example of a company that unfortunately they put their actual live code, production code into a model, and now that’s public. They thought they would have it constrained only to their session. But unfortunately, once you put that into a public model, that’s, it’s there. It’s not, there’s no, the genie’s out of the bottle for all intents, and you’re not putting it back in.

And so that in itself can end up becoming a threat. And it’s not even an insider threat. I wouldn’t call it generally an insider threat because it’s not malicious intent. It’s a mistake. Well, guess what? Now that information is out there and that’s where these threat actors can leverage that and they can leverage it for all types of things, whether it’s identifying plans for a new project that you’re going to be releasing or even just getting internal, sensitive and confidential information that could allow them to better attack the company in some fashion. That’s a real threat in itself. There has to be protocols for everything from governance to data loss prevention to just compliance.

Think about how compliance comes into this whole concept of ingesting data as you’re, you know, as you’re thinking about pumping your actual data, whether it’s sharepoint, whether it’s an SAP database, whether it’s Salesforce, CRM, and you’re trying to pump that into these models? How are you managing that? How are you ensuring that that data is only going into your private model so that you can use that to make decisions and it’s not going out there? And how do we prevent threat actors or in this case, red teamers like Jason from being able to get access to that data? You know, I’d rather Jason get it because Jason’s going to make sure that you lock it down. But, you know, if Jason can get to it, then we know. So it’s there.

Yeah, I think one of the things that, I mean, we’ve been jumping around a lot. I think one of the things for the listeners, too, if you’re just getting into this topic, right, there’s two worlds, right? One is the world of attacking the AI itself to get something out of it or have it do something bad for you. And the other is the world of using AI for our already known offensive security type of world, where we’re trying to break into thing as hacker. So if you’re a pentester, a lot of people watching a PlexTrac webinar will be pentesters.

We’ve jumped around to the two different worlds. It is really interesting that they are separate, though. One thing that I love to just talk to people about really quickly is that the word “red team” in ML actually means something completely different than it does in our world, offensive security. So red teaming actually exists in the machine learning world, but it is the idea of trying to get these trained LLMs usually, or these trained data sets when they have an output to be biased or to say something they’re not supposed to say or give you instructions to do things they’re not supposed to do, that’s known as red teaming in the ML world. Whereas if you talk to us and you’re like, red teaming, we think hacking that whole ML AI security part where we would attack the algorithms or the data they’re trained on or even use them to attack other things, is almost greenfield for us these days. Like, it’s a whole new skill set that offensive security people will actually have to learn as well. Like we did web, right? Like, you remember the days, Rey, when network was really all we had? Network hacking was it for a while, and then we transitioned to web, and it was a whole new, it was a whole new thing.

And so I think that this is a tremendous time for offensive security people to pick up a really new skill. And one thing you were talking about, Rey, is when, is when you train these models with data. There’s a lot of people who think that you can put maybe a master prompt or a system instruction in front of them that will protect them. And it has been proven already by the academic community, if you train a model with data, there is going to be somebody with some clever prompt engineering that’s going to be able to get that data out. So you should not have that mindset going into it that you can protect anything that you’ve trained on. It is just, it’s in there. Someone’s going to be able to get it out with some clever prompt injections.
So, yeah, I just wanted to discombobulate those two a little bit.

So, yeah, I would recommend. There’s a book that I would recommend. It’s actually by one of my colleagues here at Microsoft, and it’s called Not with a Bug, but with a Sticker. And I have it, I have it on my bookshelf. It’s a great book. I listened to it on audible, and it really does dig into the differences in terms of attack possibilities and how you have to think about protecting AI systems.

And I would say anybody who’s interested in the offensive side or just in terms of protecting AI systems, it’s a great book to go through. So Not with a Bug, but with a Sticker, attacks on machine learning systems and what to do about them. And it’s by my colleague, Ram Shankar Siva Kumar. So, yeah, amazing. He leads our, he leads one of our AI red teams here at Microsoft. And it’s, it’s amazing. It’s a great book.

Yeah, I was just on an AI symposium panel and I referenced that book because not only does it, did it give point in time snapshots of actually how to attack image AI. Right. That’s what it’s referencing with a bug. Not with a bug attacking images, image AI, but with the sticker and the references to the pattern that they made. They made this pattern that basically you could print this pattern on a jacket, you could print it on your car, you could put the sticker anywhere on anything. And any of these image recognition. Image recognition AI would basically just think you’re a cat or something like that, or just wouldn’t be able to parse the image no matter what they tried to do.

And this is really dangerous for, I mean, these AI systems and these ML models are being built into cars to figure out, is that a car next to you? Is that a stop sign in front of you? How do they do that? Well, it’s with these, you know, these algorithms that we build. And so that book also, it really, really like forces you to grow a couple new muscles in your brain as an offensive security person to understand that, you know, because I was so focused in on the LLM stuff and now I’m like, there’s a whole world about ML algorithms out there that, you know, have interesting attack vectors to them. So yeah, you mentioned MITRE Atlas and the OWASP LLM Top Ten. There’s a couple other really, really good blogs out there, one by one by Gavin Klondike where he analyzed the OWASP LLM Top Ten, which was really good. I’ll try to post the link at some point. And I mean, it is greenfield for researchers like us, right?

Like, I haven’t been having this much fun in offensive security for a while, honestly. So that’s awesome. Hey, this was like such an invigorating and like thought provoking session with PlexTrac Friends Friday. So I think a couple of key takeaways for me, you know, and then maybe, you know, and you guys kind of also shared is like from the opportunities, like, yes, AI is a greenfield opportunity for offensive security to be able to one think like the attackers, how they’re going to use it, be able to build infrastructure quickly and use AI to help automate as much of your workflow as possible. But also like what are the new things that it can help you when planning attacks and also communicating to the business, like what risks and impacts. But then also we have a lot of threats that, that just continue to emerge with this new technology.

Like any new technology and any new era. Right. You know, I think when GenAI kind of, really kind of hit the scene, it was the seminal moment for a lot of people. Like, oh, we really need to focus on what are the other threats that come with this. And I think you guys highlighted some great stuff, have some great, we have some great links that we’ll provide to the audience as well. And that book. And I know Preston, Preston’s a stud, so we’ll maybe we’ll have to get him on as well.

But yeah, if there’s any last things you guys want to say, then we can like, you know, make sure we get everybody to follow you guys on, on LinkedIn and anywhere else. But any, any closing thoughts or comments. So I would say if you’re interested in the offensive security part, right, like you want to use AI in OffSec, in your job as a pentester, red teamer, or if you’re a blue teamer, I have built a class that I just launched or that’s going to launch in May. It’s called “Red Blue Purple AI.” So it goes through basically two days of what we started talking about today. If you’re interested in that, check out my Twitter. It’s pasted all over there. You can find it. It’s called “Red Blue Purple AI.” And yeah, there’s a, there’s a pretty burgeoning community too, on the Subreddit for NetSec that talks about AI topics as well. So yeah, I would check out those things.

So I’m just gonna say, I’m gonna go register your horse. Yeah, I’m going anyhow, it’s Jason. I’m going to register because I know it’s going to be valuable.

You know, ultimately. You know, one of the things we always talk about offense security is, you know, the more you can think like a defender, the easier it is for you to be an offense, an offensive practitioner. And so it’s the best of both worlds because if you’re able to work in the offensive field and find something, then you can put your defender hat back on and say, hey, let me help you solve this problem. So somebody else who has mal intent doesn’t have it. So same way. AI is the same way.

And it’s a great topic. It’s definitely greenfield and it’s exciting. Yeah, yeah. Awesome. Well, thanks to both of you so much for taking some time with us. We will definitely inform everybody of all the resources that were mentioned in the thread today and then just appreciate your time and for everybody out there and enjoy your weekend. Have a good one.
Thanks everybody.