
VIDEO

Purple Teaming for All: The Path to Adversary Emulation

Series: On-Demand Webinars & Highlights

Category: Purple Teaming, Thought Leadership


Transcript

Well, thanks, everybody, for joining. We'll go ahead and get started because we've got a fun topic that should take the whole hour, and we're excited about it. Welcome, everybody, to the Wednesday, April 27 edition of a PlexTrac webinar with SCYTHE and Aquia. We're super excited to have Jorge and Maril for a great topic, something that obviously we're all very passionate about, and we're really excited to share our tips and tricks on how to get into purple teaming and how to lead your team, whether that's an offensive team or a defensive team, on the path toward adversary emulation. So we've got seasoned veterans to talk about all the tips and tricks that we can provide. So I will let each person introduce themselves.

But I'm the founder and CEO of PlexTrac. I've been in the cybersecurity space for 17-plus years now; I was a penetration tester and then a security director, where we started getting into purple teaming and adversary emulation. So it's been a fun journey, and obviously we're very passionate about that here at PlexTrac. But I'll let Jorge and Maril introduce themselves. Jorge, take it away. Thank you for having me, Dan, and folks in the room, or virtually, thank you for being here to chat with us.

My name is Jorge Orchilles. I'm the chief technology officer at SCYTHE. I've been there two years now; I joined right when the pandemic started. Figured it was the best time to leave a corporate job and go to a startup. So far, so good. I'm still here and we're doing awesome, so that's great.

Apart from that, I love doing community contributions and working with folks like Maril and Dan. So it's a pleasure to be here. Let's talk some purple. Also, I've done this with Dan before, so no offense, but I'm very excited Maril is here, because I met Maril two years ago just randomly at a conference. She was asking some pretty cool questions and then contributed back to some of the purple team stuff that we were starting to put out, which is awesome. All of you should do that as well. So it's my first time with Maril, and a real pleasure. Everyone always remembers their first time with me.

Dan does. Hi, everyone. My name is Maril Vernon, currently the purple team program lead and a senior security engineer at Aquia Incorporated. I formerly worked on the Zoom red team; before that I was a pen tester, and before that I worked in risk. So I often bring an enterprise risk point of view to offensive security in general. I've been in cybersecurity for about three years, so kind of the total opposite of these two gentlemen.

But almost that entire time I've had a passion for purple teaming. I saw the practical implications of it right away, especially when I was a one-woman pen testing show and our resources did not have time to participate in my exercises or threat hunt me. So I learned to do it myself. Purple teaming is very near and dear to my heart, my OG love, I like to say, and I'm thrilled to be here with two of the moguls in the niche. So we've got good stuff for you guys. Awesome. Yeah, thanks.

We’re super excited to have everybody here.

We’ll give you the agenda, kind of the topics we’re going to cover, and then we’re going to kill the slides. That way you get to see the lovely faces, whether that’s good or bad, you get it.

And then we'll get rocking and rolling. So here's the agenda, the topics we're going to cover: When would your organization be ready to start purple teaming? Why is purple teaming so important? How do you get started launching your purple team? And what does the path look like to true adversary emulation? We will have time at the end for questions, but this is meant to be interactive. So as we're rolling through the topics, feel free to use the Q&A. We do request that you use the Q&A portion because the chat can get busy and we might miss questions there. So use the chat to banter and make fun of us, but the Q&A section is really for true questions that we can hopefully help answer.

So, hey, we're excited. Let's get rocking and rolling. I'll start off, Jorge and Maril.

What is purple teaming? How do you view the world of purple teaming? And what should organizations be looking for in terms of when the right time is to start thinking about this aspect of their security program? I'm going to take the floor first, because I started this at the purple team roundup we gave last year, and I'm trying to get people to redefine purple teaming in their brains, because no one really knows what it is. Everyone asks me, is it a function of pen testing? Are you just doing more pen testing? And I'm like, no, we're not pen testers. I mean, there's an offensive portion of purple teaming, obviously, but really the goal of purple teaming is to actionably and measurably improve defenses, in real time, to move that needle with each exercise that we conduct. Not to simply put findings in a report, not to simply point out things that need to be fixed or lose them to a vulnerability management process that no one understands and 18 people are involved in, but to actually help you build new things in real time that improve your defenses across people, processes, and technology. That's all that it is. So once you understand that that's the goal of purple teaming and purple team exercises in general, you can very clearly understand: we're either ready for this type of thing because we're not quite ready for a proper red team engagement, or we're still building up the defenses, we're still doing QA testing, we're not quite there yet, in my opinion.

Yeah, I agree with that. And definitions are important, especially when you hear that some orgs are selling purple teaming, or even purple team products, where you're like, I don't think that's what that means. But you hit the keywords, right? It's collaboration between various infosec roles. Most of the time it's not a dedicated team. We are starting to see a couple of those form here and there, so the verdict is still out.

And obviously, if you are on an official purple team, let us know, because we love this stuff; we want to know how that works. We have a meetup, we have a chat channel, we will collaborate with you. Exactly. But essentially, anyone can do it, right? As long as you're collaborating with others in your organization to more efficiently improve, test, and measure your people, process, and technology, like Maril mentioned, that's what purple teaming is. So it might sound a little scary at first, but let's dive into who should do it, how to do it, and when to do it.

That's a great segue, because a lot of orgs are like, crap, this buzzword purple teaming, are we doing that? What does it mean? Should we be doing it? How do we know we're ready to be doing it? Do we have a pen tester? We can't do it without a pen tester. We're screwed. And I'm like, okay, I just want everyone to know. And especially because, like Jorge mentioned, we are seeing these pop up more and more. I work on a dedicated purple team now. I saw three more pop up this week, people reaching out: I'm starting this program.

How do I get started? I referred them all here, so I hope they're here. But purple teamers, I just want you to know, don't have to be former red teamers. They don't have to be pen testers. They can come from anywhere. A purple teamer can be a dev. It can be a sysadmin. It can be a SOC analyst.

It can be detection and response. It can be a red teamer. Literally anyone with an interest in doing that collaborative work, who knows one piece really well and has an interest in learning the other pieces, can do it. You need to know some offensive actions. You need to know how to execute some testing. You need to know how to do some threat hunting. You need to understand how to implement CTI into what you're doing.

But as long as you have a passion and an interest to learn the pieces you don't know, you can do the collaborating. You can bring people together. You can get them discussing on their own. And you can say: now, I don't know how to execute this attack, but I know that this attacker, looking to achieve this objective, would do this piece here. How can you actionably detect that? How can we block it without really increasing false positives, or taking too much of your time investigating a huge volume of incidents that don't add value? So this is the discussion that we're trying to drive. A purple teamer can be anyone, from anywhere, at any time, the second they decide they want to start doing it.

Yeah. And I think that's a good point, because we've kind of seen the evolution of it. I think the first time I ever heard the term purple teaming was in the Accuvant days. I didn't work for Accuvant, but I collaborated with a lot of Accuvant folks. And I was like, what is purple team? They keep talking about this purple team stuff. That would have been ten years ago, right? So, I mean, the notion was there, but it had this very distinct, very technical context.

It was just a form of a pen test, but meant to help, kind of like a tabletop exercise, more in real time. And I think as the industry has grown, as we've matured and gained more automation capabilities, it really has continued to evolve into a collaborative and continuous mindset. Right. I think that's one thing we're seeing: a lot of our customers and prospects that are starting to build out their purple teams want to know, how do I measure all this stuff? How do I actually collaborate, and what are the things I should be focused on? And those are exactly the goals.

We talked about kind of the who, right? So in your experience, when would you start getting going? Right.

I think we run into a lot of questions around, I don't know that I'm mature enough to do this. Right. My answer is: any time. Today, yesterday, five minutes ago, five minutes from now. Start anytime. We were just saying in the banter before the webinar started that, of course, there's an intelligent point, right, when you've got defenses working so well that you're able to pass audits and you're reasonably compliant, or you think you're reasonably compliant, and maybe you're about to start offensive testing.

You don't have an in-house program. Maybe on your last pen test you got really beat up, and you're just kind of feeling overwhelmed; you don't need more sophisticated red teaming if you're not even doing well on your pen tests. So that's obviously an intelligent place to start. Jorge and I are big fans of going from blue to purple to red, rather than from blue to red to purple. So that's always a great time. But a lot of people start in the middle of an incident. Like we were saying, something's happening, and we're like, we don't see the things, we know this is bad, but what else are we missing? What other pieces of info? How do we stop this? Start purple teaming yourself immediately; reach out to someone, and I will help you if you need me to.

You can start anytime. To give a little story, because Trent here just asked a great question: he'd love to hear recommendations on moving from a traditional red team to a full-fledged purple team, with that partnership between red and blue. And that's where I came from, and really where I came up with the ethical hacking maturity model, which we just updated based on what Maril said. Right. Back then I was at a large financial institution. We had a big vulnerability assessment team and a big pen test team, and we started doing red teaming back in 2013, 2014. We were one of the first industry red teams as far as FS-ISAC and all that fun stuff goes.

And we did a couple of engagements, right? Real three-to-six-month-long engagements, lots of planning, all stealth, because it's red team. And after the first one we're like, oh my gosh, that's awesome. We did all of this with a $500 budget. We got here. Not to downplay what we did, but it was cool for us.

It was not cool for the other team, right? And then we did it again, because we're like, oh, we have to do this every three months. And after three or four of these, we realized we weren't improving as an organization. And our CISO, one of my favorite CISOs ever, Charles Blauner back then, said: this isn't very healthy. You all are going to Black Hat and DEF CON. And I saw Andrew here.

He actually just reminded me of that story, because he worked there and was at Black Hat at that time. And we met each other, and oh my gosh, we're all humans, we all work for the same company, and we all have the same goals. So how can we help each other improve these particular things that we can improve? And we sat down and were like, well, why don't we do this together? And of course, we weren't calling it purple team back then; we just called it a hands-on exercise. And it kind of started developing from that. And of course, very creatively, we came up with the name purple. Right.

My five year old daughter helped out on that one.

What do red and blue equal? Purple. Right. But that's when you already have a red team. So going back to Trent's question here: not everyone has that, right? If we look at enterprises today, maybe 100 of them have an internal red team. Some of them are starting to build a red team.

And that's where we've been talking: if you are starting a red team today and you already have that budget, where do you start? And we think purple is going to be the best place, because of the collaboration you're going to establish. Remember, the red team's main goal is to improve the blue team's detection and response. So why not work together from the get-go and do some baselining? Because if not, it's actually going to be boring to be a red teamer. You're going to try a bunch of stuff. It's all going to work. You're then going to have to go replay and repeat the same thing over and over, and you're not really going to improve that quickly. So definitely start purple teaming now if you can.

And again, you don't need a red team to do that. The blue team understanding their detections and doing some emulation, some testing, with even something as simple as Atomic Red Team, is a very quick and easy start from where you are. Right? You're not all the way to adversary emulation yet, but that's okay. You start. Yeah, don't sprint before you crawl. Don't go right to adversary emulation if you can't. Atomic Red Team has been out there for years.

And if you can't defend against those things, those are very open source, very accessible things that a lot of people can use. And guys, don't feel like, oh my God, we're starting with the open source thing, we're kind of basic over here. You're not. I pioneered an enterprise purple team at Zoom Video Communications. And guess what the first thing we did was? Baselining with Atomic Red Team. Because if we're not defending against the stuff that is already out there in modules, what are we doing with ourselves? So, yeah, to address Trent's question, it kind of goes back to the when. If you see that findings are persisting across red team ops, if you see that tickets from previous ops aren't being remediated and aren't being closed out, but you're continuing to plan your next one, you might be getting ahead of yourselves.
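To make that baselining idea concrete, here is a minimal sketch of how the results of an Atomic Red Team pass could be summarized. The technique IDs are real ATT&CK IDs, but the outcomes, the category names, and the helper function are invented for illustration, not from any real environment:

```python
from collections import Counter

# Detection outcome recorded for each atomic test after threat hunting.
# These categories and results are illustrative examples only.
OUTCOMES = ("no_evidence", "logged", "alerted", "responded")

def summarize_baseline(results):
    """Return the share of tested techniques at each detection level (0.0-1.0)."""
    counts = Counter(results.values())
    total = len(results)
    return {level: counts.get(level, 0) / total for level in OUTCOMES}

baseline = {
    "T1033": "logged",           # System Owner/User Discovery
    "T1059.001": "alerted",      # Command and Scripting Interpreter: PowerShell
    "T1021.002": "no_evidence",  # Remote Services: SMB/Windows Admin Shares
    "T1003.001": "responded",    # OS Credential Dumping: LSASS Memory
}
print(summarize_baseline(baseline))
```

Even a tiny summary like this turns "we ran some atomics" into a number you can show a manager and re-measure after tuning.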

Like, a lot of purple teams got born that way. It went from blue to red to purple because red teamers got really sad just executing these ops, and nothing was getting better. And we feel bad that we're finding the same stuff, and we feel like a broken record. And then they were like, screw this, we're going to help you guys fix it. So that's where a lot of those started coming from. But again, if you have a full-time, dedicated red team, that's awesome.

That's a beautiful place to be. You can switch off: execute a proper red team op, follow up with a purple team exercise; execute a red team op, follow up with a purple team exercise. You can literally do both functions at once. I was on a red team, and I was also a solo purple teamer at the same time. Those two things go beautifully in a continuous feedback loop. So that's converting a dedicated red team to a part-time purple team.

That's a great way to go as well; I fully advocate for that. I'd be curious, and I have my opinion; when I was a security director, we started down this path, too.

How do you navigate the concept of: we don't have a red team, so should we even start a purple team? Or how do we convince management to invest? Because I think some organizations have even had trouble convincing management that they should have an internal red team. And my philosophy and my theory is that going from blue to purple actually paves the way to having an internal red team. It does, because when I was a security director, we were like, hey, for compliance reasons and all these things, you would get external red team ops. And they're like, okay, well, where are the biggest gaps that we're seeing out of this, and can we see quicker improvement? Right. So we started with Atomic Red Team and the MITRE ATT&CK framework. And we were just like, hey, we think we have some gaps, largely in lateral movement, right? So let's just pick some of the techniques that we are just not sure we can detect.

Right. And we had our blue team members. They were not trained red team members, but they were interested in it and they wanted to start to learn. So they started doing some of these exercises and executing these TTPs, and that helped us start to measure: okay, here's how we get better. Right. And that, I think, paves the way for a red team. Sorry, go ahead.

No, it's fine. My favorite way to pitch a purple team's value to managers who are unsure, who've never heard of it, is honestly to ask them about money, because that's every manager's pain point. Do you love spending money, sir, on things that you're not sure are working for you? Do you love all these subscriptions to MSSPs and SIEMs and things that we pay for that we're not even sure we're using? And they're like, oh, yeah, I love that. No, they don't. So what you should do is say: I'm going to baseline test for you and just confirm that at least the things we're paying for are doing the things they say they'll do. And if you confirm some gaps, like Dan said, then you can say, hey, we're exploring a new solution. Before we spend that money, let's use a purple team exercise during our proof-of-concept demo with this product. Can this product pick up the things that we know we're not picking up ourselves? If not, that product is not going to serve you.

It's not giving your org value. You just saved them $100,000 on that thing, and you can find them a solution that will actually serve them for the budget they've allocated. Because going back and revising budget and cutting vendors and products out in the name of something else is always a guessing game. Purple teams even take the guessing out of that piece. And once they see that you're saving them money, then you're like: guess what, I could do this across all of our defenses and reduce our attack surface and reduce our residual enterprise risk, which saves us on cybersecurity insurance. Yeah, you're exactly right. One of the things we found, again, at a highly well-financed financial institution.

Right. Lots of budget, lots of tools. We were probably using 10% of the capabilities of each one, even though at a high level, you bought this product, and everything the website and the marketing say it's doing, it's doing, right? Yeah, the marketing folks, we always speak the truth. You can trust me. Exactly. From that perspective as well, another reason we started doing a lot of the testing was figuring out and turning on these capabilities. Because as you know, you can't just flip a switch and have all the detections.

I've got 100% MITRE ATT&CK coverage with this EDR now. Yes. My EDR will save me. Yeah.

If I ever hear you people say your EDR will save you, I will hack you myself. I think it's a good point, because you can help validate the investment the team has already made. You're able to validate controls and configurations, or at least say, hey, the things that we've purchased either aren't working or need to be configured differently. And that's immediate value, right? And it's not a lot of upfront cost from a purple teaming perspective. You can show immediate return on investment for that activity to your management and leadership: hey, the money that you spent isn't getting the return that you expected. But you can also validate the controls that you think are in place, as well as track and measure demonstrable progress. I think we worked on a research project late last fall with the Cyber Risk Alliance and polled people.

Have you started purple teaming? Do you do pen testing? Do you do both? How do you measure the progress, and do you actually see results? The results of that research were phenomenal. I mean, exciting for us, right, in the purple teaming space: the people that are doing purple teaming are doing more testing more often, and they're seeing quicker progress on improving their security controls. Right. We say it, and it's nice to have data to back it. Right? Yeah. I think that's some bullets in the chamber to help build the case for building a purple team internally, even before a red team.

Yeah. And a tertiary benefit: we're not spending hundreds of thousands of dollars on sophisticated pen tests from folks like BHIS. We're not wasting sophisticated red teamers' time when they're going to catch low-hanging fruit or just be bored. We're not demoralizing the blue side, who's like, oh, God, they just come in here and mess us up every time and we don't stand a chance. You're really bridging that gap between the teams. And once you confirm defenses, once everything is working like it's supposed to, that's a sweet place to be.

And you've confirmed that. You can put that feedback back into the CTI side and say: okay, now that we know we've confirmed these things, let's say an APT uses 20% of this. We could say, realistically, how resilient would we be against an attack from these people, based on the things we know they do, because we've seen it in the wild? This is realistically how well we'd stand up. They might get in here, but we'd stop them here, here, and here, well before they get to here. And now you are demonstrating the resilience of your cybersecurity program, which is, believe me, something your CISO is highly concerned with.
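That resilience estimate can be sketched as a simple set intersection between an actor's known TTPs (from CTI) and the TTPs you have confirmed detections for. Every ID, attribution, and number below is invented for illustration:

```python
def resilience(actor_ttps, detected_ttps):
    """Fraction of an actor's known techniques we have confirmed detection for,
    plus the list of techniques we would miss."""
    actor = set(actor_ttps)
    covered = actor & set(detected_ttps)
    return len(covered) / len(actor), sorted(actor - set(detected_ttps))

# Hypothetical CTI: techniques attributed to the actor in the wild.
actor_ttps = ["T1566.001", "T1059.001", "T1021.002", "T1486"]
# Techniques our purple team exercises confirmed we alert on.
detected_ttps = ["T1059.001", "T1021.002", "T1003.001"]

coverage, gaps = resilience(actor_ttps, detected_ttps)
print(f"Coverage: {coverage:.0%}, gaps: {gaps}")
```

The gap list then feeds the next exercise: those are the techniques worth emulating first.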

Yeah. It answers the question of, can that attack that happened next door happen here, with actual data. Right. Not a tabletop, or marketing words saying my tools supposedly will catch it, but an actual data-driven approach. Yeah.

It's a practical application of threat intelligence. Right. It's like, hey, we know this is happening in the wild. If we went and tested these things, would it work in our environment, or would we have an idea whether it would work? Right. And then you can actually show: hey, we tested it, we tested it again after some tweaking, and you show the progress of eliminating the threat before you know it exists in your environment. Hopefully quantitative improvement.

It was like 20% before, and now we’ve used three different binaries, three different ways to test this thing, and we got it all three times. So I think we’re pretty well defended against that tactic or technique or procedure.

And God, I was going to say something else and I forget what it was. I don’t know. I think the red team component is an area of the industry where we are definitely understaffed. Right. There’s no doubt. Right. And organizations would love to build out their own red teams.

But truth be told, the talent pool is just smaller in that space. Right. It's a much more involved skill set that takes quite a bit of time to become a really seasoned red teamer. But there are great ways to get started, and this can be one of them, to really pave the way for your career path into red teaming as well, because you're coming at it knowing the pain of the blue team and knowing the techniques that modern adversaries are conducting in environments. That really starts to pave the way toward true adversary emulation and being a full-fledged red team as well. So I think the benefits of going from blue to purple to red are fantastic in themselves. And once you go purple, you can take yourself beyond CVSS. It won't just be like, oh, man, that's a 9.8 out in the wild.

That's a super crit. How screwed are we? It's like, no, that is out in the wild, but over here, where we have organizational context and we've tested against the kill chains where this would be a stepping stone? No, we're good. We're decently well mitigated. We're doing all the things we can do. Okay, great.

That's a great point, the shift from the vulnerability management side. A vulnerability in technology has a CVE ID, right, a Common Vulnerabilities and Exposures ID. It has a CVSS score. I personally worked on CVSS. It looks at individual vulnerabilities in technology, not putting anything else into context. Right.

You can't chain two vulnerabilities together in CVSS; it's just not part of the framework. So the move here really shifts that thinking from an issue being open or closed, or having a high/medium/low, to: do we have visibility into that? Is it forensically logged? Is there an alert? Is there a response? And then, of course, tuning those so that your people actually see them, respond to them, and follow a process. I think that's probably the biggest shift for folks going from vulnerability management to doing some red teaming and emulation. And lots of these GRC platforms that organizations have used, even for risk management, right.

They just have open and closed, and that's not a thing here. You can run T1033; you can't close that. Sorry, anyone can do it.

But that trajectory. And that's one of the reasons, shout-out here, and kind of answering Chris's question without making Dan answer it, of what's a good way to track metrics of improvement here: PlexTrac is a solution for that. It is a vulnerability management solution, so you can do all your traditional items, but working with them, we've been able to come up with these runbooks where, instead of looking at a procedure as open or closed, or high/medium/low, you can actually track it as no evidence, logged (forensically), alerted, or alerted and responded to. Then, when you do an attack chain, you can show: look, here we had visibility and it was responded to. And it's going to be okay if you had no visibility on one particular procedure, as long as, throughout that chain, you were able to catch it and respond to it.
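A rough sketch of tracking procedures on that detection scale rather than open/closed, and of comparing two exercises to see what improved. The scale labels mirror the ones just described; the helper and the week-over-week data are invented for illustration:

```python
# Detection scale, worst to best, mirroring the runbook statuses above.
LEVELS = ["no_evidence", "logged", "alerted", "responded"]

def improved(before, after):
    """Return the techniques whose detection level went up between two exercises."""
    return [t for t in before
            if LEVELS.index(after.get(t, "no_evidence")) > LEVELS.index(before[t])]

# Invented results from two runs of the same runbook, a week apart.
last_week = {"T1033": "no_evidence", "T1059.001": "logged", "T1003.001": "alerted"}
this_week = {"T1033": "logged",      "T1059.001": "alerted", "T1003.001": "alerted"}

print(improved(last_week, this_week))
```

The output of `improved` is exactly the "where we were a week ago versus where we are now" comparison: a concrete list of TTPs whose detections got better.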

So as you do this, as you purple team, you keep those metrics, and shout-out again to PlexTrac, right? It makes it very easy to show where we were a week ago versus where we are now against those same TTPs, as well as going way deeper and doing a lot more testing. But metrics is the answer to that question Chris had. Yeah. And practical purple metrics: a lot of people are like, is that mean dwell time, mean time to detect, mean time to respond? And I'm like, no, those are metrics that you'll see improvement in on the DFIR side, but those are really more metrics for that team. Metrics that I use to prove the value of my purple team program are things like new detections created, improvement of detections over time, number of TTPs defended against before versus now, things that demonstrate that you're filling in more of those gaps. You're identifying the gaps so that your unknown unknowns become known unknowns.

And then once you fill them in and mitigate them, now they're known knowns. Right. CTI things. So a CVSS score does not equal residual risk. Risk can't be removed, period. All you can do is mitigate on top of risk so that you decrease the likelihood of it.

And when you decrease the likelihood, because risk is likelihood times impact, the risk goes down, because one of those factors inherently goes down. So for people out here thinking you can remove risk: you can't remove a technique number. Like Jorge said, all you can do is try to have visibility and quality detection on it. Yeah. And I think a good point is that we just naturally shifted the conversation into risk and vulnerability management by touching on that. And I think that is the power of purple teaming: we're starting to bring all of these disciplines together. Right.

And I think if there’s one thing that I’ve always hammered on, what is the problem we’re trying to solve? What is the mission at hand? And at the end of the day, we’re all trying to make sure that we have eliminated as much of the risk as possible to avoid or detect a breach as quickly as possible. Right. We know we’re going to get breached. Everybody knows that in some capacity, someone’s going to click on a link, someone’s going to do something stupid, even if it’s unintentional, how quickly can we detect that? And that’s the beauty of being able to run through these campaigns and these run books so that you at least have a perspective of like, hey, if we get hit by ransomware, we have 10 seconds. At least we know where the clock starts, right.

I think this is great. I think it helps clarify why the value is there. We’ve got data behind it, and this portion of the industry is really maturing and it’s exciting to see it continue to grow.

Maybe let's shift to, okay, so we understand who should do it. The when is, like, now, right? Kind of like planting a tree: you should have done it a long time ago, or you do it now, right? Now is better than never. Exactly.

But how do people get started, say they're trying to just prove out the value without having any budget or a lot of resources? What's your advice? And obviously both of you have done this from the ground up, right? Yeah. I will say, for most enterprises, something like Atomic Red Team will serve you pretty well. You can install it and run it on your own; a single person can do it. If you can get access to your threat hunting or SIEM platforms, you can threat hunt yourself, and you can do some very basic: I did ten procedures, here's what I learned.

This warrants us doing a greater volume of procedures and seeing what we can really do. I'm currently in a unique situation, because I went to work for Aquia and we are building a custom product for our customers. It's a completely custom-built job, and there is no effective way to really test for it; it doesn't even have its own threat matrix yet. So we are literally designing how to purple team while building it: a purple team program for a product that is still being built. So it's kind of exciting. What I'm doing is basically learning that technology really well, because you can't defend it unless you know it.

You can’t break it unless you know it. You can’t defend it unless you know how to break it. And then I’m going to start designing intelligent ways for us to baseline again. Always start with baselining some proof of concept testing like we at the very least should be doing this. These configuration items should be in order. If they’re not, I can get this piece of information. With this piece of information, I can further myself here.

And that’s a bad place to be. And once we get those little things tweaked out, then we’re going to work on maturing it into more sophisticated customized testing. But don’t feel bad starting with automated. Don’t feel bad starting with open source. Those are great free or cheap places to start when you don’t have a budget. They’re out there. The tools are out there for you to use.

Don't feel like, because you're not doing all your manual testing yourself, that you're not doing it right, because that's absolutely not true. And then once you prove the value with the open source tools (managers, please), once you see the value and you love it, don't just say, well, you're doing such a great job with open source, you should just keep trucking along. Give these people a budget. Get an enterprise solution like SCYTHE, like PlexTrac, because oftentimes I become the bottleneck. Well hey, we need this reporting. Well hey, what are these numbers? Well hey, how many of these did you do? And I'm like, okay, not only am I doing my job planning and executing the things, but now I'm also the reporting bottleneck to feed the blue side. And the more they can feed themselves.

Come to the tool and find the info they need for stuff that we've already done. Download and repeat a runbook that I already wrote and crafted for them, one that already pointed out some gaps, and when they think they have something reconfigured: retest, retest, retest. I can't be on the retest wheel. That's not where my time is most valuably spent. So please give your people a budget once they've proven the value of these things. That's for how to get started. I actually want to give a shout out to Red Canary.

So they are a managed detection and response organization, but they also have the Atomic Red Team tool, which is free and open source. And they also have an awesome threat intelligence team there, run by Katie Nickels, and awesome folks there. So they put out this report. I've been reading it since 2019. It's the Threat Detection Report. Essentially, they provide the top techniques that they've seen in actual attacks over their 3,000, however many, customers they have. So it's an awesome report.

I linked to it there. But don't just read the report, obviously; make it actionable, right? So the link I just posted here is our Community Threats GitHub that has a step by step. If you scroll down to the manual emulation without SCYTHE, you can literally copy and paste the commands and execute them in non-privileged cmd.exe, privileged cmd.exe, non-privileged PowerShell, and privileged PowerShell. It tells you, for each one, how to remove them, too. So if you're renaming a binary, right, we want to clean up, especially if you're doing it in testing mode. Do all this manually, copy and paste, and literally see if you have detections for some of the top techniques.
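If you end up scripting those manual runs yourself, it's worth enforcing the same execute-then-clean-up pairing the repo documents. A hedged Python sketch with harmless placeholder commands (the real procedure and cleanup pairs come from the repo, not from here):

```python
# Hedged sketch of the execute-then-revert discipline: every procedure is
# paired with a cleanup command so the test box is left as found. The
# commands below are harmless placeholders; substitute the real
# procedure/cleanup pairs from the community threats repo.

import subprocess

procedures = [
    # (description, execute command, cleanup command) -- placeholders only
    ("drop marker file", ["touch", "/tmp/pt_marker"], ["rm", "-f", "/tmp/pt_marker"]),
]

def run_pair(desc, execute, cleanup):
    """Run one procedure, note whether it succeeded, then always revert."""
    try:
        result = subprocess.run(execute, capture_output=True, text=True)
        succeeded = result.returncode == 0
    finally:
        subprocess.run(cleanup, capture_output=True)  # best-effort cleanup
    return desc, succeeded

for desc, execute, cleanup in procedures:
    print(run_pair(desc, execute, cleanup))
```

The point of the `finally` block is exactly what the speakers describe: even if a procedure fails or alerts, the revert still runs, so a renamed binary or dropped file never lingers in the test environment.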

It's a great place to start. Start collaborating, right? Don't just go and run this. Get permission, work as a team. You're going to set off some bells and whistles. At least you should set something off.

Let me see which number will definitely do it. I think number five definitely should; you're trying to dump LSASS. I would hope that anything in your environment will catch that. Even in place, after this, they're like, what the hell, man? You told them to just start executing number five left and right.

If you're a SOC person and you need to do this on your own, use Atomic Red Team for the offensive actions that you don't know how to do. Use the reports that people are writing for your CTI, because you don't have a dedicated CTI team, because you're not MITRE. Use these things out there and you can literally do it yourself. You can do it yourself. You, today. Yeah. I mean, I think we always point people to the SCYTHE Community Threats, which is awesome.

I actually didn't know that you had converted the Threat Detection Report (because we're big fans of the Threat Detection Report, too) into a campaign, because, shameless plug, you can import the SCYTHE adversary campaigns into runbooks, and there you have the whole way to collaborate on them. But also the Red Canary Atomics, as well as MITRE themselves, put out adversary emulation plans. So these are just concrete: here's what these attackers are doing. What I like about the Threat Detection Report is that this is what Red Canary has noticed of things that are not getting detected. They're getting through. Right.

And so it's like, okay, well, I want to at least see how well I compare against what they're seeing in the wild. That's a great resource.

You put it well, Meryl. It's like there's always a crawl, walk, run scenario, right? And then you do get into that build versus buy conundrum for anything in security, right? So it's important to say, hey, people's time, our time, is a valuable resource. So you do have to always have that balance of what do I do, and where is my time best spent, versus getting a tool in place or a platform in place to help us automate as much of this as possible so that you can go faster.

Right. And then those dollars really translate into true value. I just feel bad that so many people say, I don't have those skills, I can't start; I don't have a team that does that for me, I can't start. Literally, if you're using CTI reports put out there from people whose dedicated function is this, and you're using automated tools (who cares if they are automated) to help you with the piece you don't know, and you're able to pull metrics and present findings to management, you're doing it.

That’s all it is. You’re doing it. You’re proving the value. You’re identifying the gaps, you’re showing where you can improve mitigations. That’s all this is, folks. It is so easy to do yourself and then to scale out with more resources, but you can absolutely do it. Yeah.

Okay, this has been awesome. So we've kind of talked about what purple teaming is, who should do it, why you should do it, and then even how to get started. Let's now kind of shift to the whole goal of this webinar, which is, okay, we're paving the way towards true adversary emulation. George, explain to us the difference between purple teaming and red teaming versus adversary emulation. Yeah. So this also deserves a little history lesson on how the industry got there. And again, I was doing red teaming back in 2014, 2015, and started doing some purple teaming.

And then the Bank of England, which is one of the many regulators if you're in the financial space, came out with CBEST. And CBEST does not stand for anything; trust me, I've looked it up. And then the European Central Bank came out with TIBER-EU, and that does stand for something: Threat Intelligence-Based Ethical Red teaming. The Monetary Authority of Singapore came along.

CORIE over in Australia; all these started coming out. Essentially, their goal was to test all these financials in a similar way and come up with what they call thematic findings. So: what issues do we have in the financial system in our country that we need to look into and improve as a whole? So those frameworks took a threat-led approach. So instead of "you have to do red teaming" or "you have to do a pen test," which other regulatory requirements had, right, and they weren't defining those correctly, they said, look, you have to go out and get an actual cyber threat intelligence report for an adversary that is likely to attack you in our country.

Right. Being a global organization, we aren't going to go after someone that was attacking South America; we have to focus on England. And you need to take that and build out these scenarios and then hire a red team to come in and do those scenarios. So that is really where the term adversary emulation came from. At the same time, vendors were coming out with this whole breach and attack simulation, the old school kind, right.

The old generation, the original ones that came out around that time, were like these agents that would replay packets back and forth. And the regulators were like, no, we don't want that. We want a real test. We want a red team to go and actually emulate this adversary. I'm like, oh, adversary emulation. All right, cool. And it was highly successful: across all these financials, very similar issues were found.

I don't know if they've actually published those results, but you can imagine large financials had very similar issues. They could then track those and try to get them improved and whatnot. That's kind of where the history of adversary emulation came from. And from our point of view, we were doing YOLO red teaming, objective based. We would take some stuff; one of the fun ones we did was after the Bangladesh Bank heist, in which essentially they were able to send money through the SWIFT network. We wanted to do that test. So that was a little bit of emulation, of figuring out what happened there and whatnot.

Other than that, it was really objective based. No, we weren't really following any other adversary. But you think about it: if I'm going to attack this organization, wouldn't it make the most sense and provide the most value if we attack them just like a real adversary that's going after them, with the same behaviors? Right. Like, more bang for your buck. Of course, a YOLO red team works, too. They're going to get in.

But again, we're talking about providing value. And I will argue, from a red team perspective (I actually submitted this as a talk for summer hacker camp; we'll see if it gets accepted), I'm actually going to go on and say that doing adversary emulation as a red teamer is harder than doing just YOLO red teaming or objective-based red teaming. And of course, there are the caveats, right? If we go and ask you to emulate Conti, that's going to be easy, because the playbook is leaked and you can copy and paste it. But as we go up in the sophistication level, you're now asking the red team to emulate some procedures that they might not be familiar with. See, a red team gets contracted.

They like doing their phishing with their macro. You open the macro, then they do Kerberoasting because they know how to do that without getting caught. Then they crack offline. They come in, get domain admin. Right. Like, red teams have their attack chains as well. You just described one in 15 seconds.

Yeah, it's possible for sure. Like having a favorite burner. We all have our favorite methods of compromise, and we'll try that first every time. Exactly. So adversary emulation: more focused, a little harder to do, because you have to stick to this plan. And the last thing I leave you with, though, is that if something in that plan did not work, you're not done. No.

We will most likely try something else. And that's where you, as a red teamer, can then go back to the stuff you know how to do. I will say that's one of my favorite things about participating in Cyber Shield, which is where we purple team the National Guard: we have separate lab environments for each APT, and we emulate their specific TTPs. We're not just like, screw it, we'll use whatever C2 we want and whatever module we want and whatever exploit we want. It's like, no: if they wouldn't do that, if we haven't seen that they do that, you can't do it. You are limited to these dozen things that they are known to do really well, and that's it.

That's all the creativity you get to work with. So it's definitely easier as a red teamer when it's just, here's your objective, get there any way you want; you're like, yeah, great. But when they're like, you have to only use these boxes to get there, you're like, oh, that's a lot harder. So that's where the sophisticated red teamers are going to have fun.

That's where they get to do their thing and really shine. So I think, in going from purple teaming to true adversary emulation, it makes sense to start with TTPs: start with MITRE ATT&CK, start with your baselining, fill those things in. Then you can start doing threat-actor-driven testing. Right. Okay, so now that we know we're defending against a certain number of TTPs: what TTPs would Deep Panda do? Are they one of our threat actors? Great. What would they do? And you take those, and you take a kill chain they're known to do, one that you can copy and paste. Like George said, a lot of them are out there.
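That prioritization step (what would this actor do, and which of those TTPs have we already baselined?) can be sketched in a few lines of Python. Everything here is illustrative; the actor's technique list is a made-up profile, not real CTI:

```python
# Sketch: intersect your baselined (already-tested) techniques with a
# threat actor's known TTPs to see which actor behaviors you have never
# exercised. Both sets are illustrative; pull the real actor list from
# your CTI source or the ATT&CK group pages, not from this example.

baselined = {"T1059.001", "T1003.001", "T1021.002", "T1105", "T1033"}

actor_ttps = {            # hypothetical actor profile, NOT real CTI
    "T1059.001",          # PowerShell
    "T1003.001",          # LSASS credential dumping
    "T1074.001",          # local data staging
    "T1505.003",          # web shell
}

covered = sorted(actor_ttps & baselined)   # already exercised in baseline
untested = sorted(actor_ttps - baselined)  # plan these procedures next

print("Covered by baseline:", covered)
print("Plan next:", untested)
```

The set difference is the whole trick: whatever the actor is known to do that your baseline never touched is your next exercise plan.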

Copy and paste that kill chain. Did it work, or did you defend against it? Great. Okay, now you know you're looking good against known threat actors. Now just give a red team an objective. And like he said, let them YOLO you, or give them an adversary to emulate in a customized way. Say, we want to do an objective they would do, but do it in a way we've never seen them do it before. We just want to see how well we would do.

It's kind of black box: you go from your white box, testing yourself against a known threat actor and stuff they're known to do, to just, come at us, catch us if you can. And that's how you graduate yourself. But if you go right from blue teaming, "we think the firewall is working," to that threat-actor adversary emulation, there's a lot of steps in the middle that you miss, and that's why you're getting beat up on so bad. So this natural progression really makes sense. And it's also a more effective use of the expensive testing resources that, if you don't have them in house, you're going to have to pay a third party for. Well, it's interesting, because it's truly up-leveling the red team's game as well. Right? Because, like you said, hackers are lazy. I mean, as much as we love them, they're going to do the things they know and that work.

Right? Why go develop a zero day when the front door is open? And I think this is actually a good point, maybe, even to make for organizations as they're hiring for their external pen test and things: like, hey, we really want you to focus on this scope, of a type of adversary, in addition to being able to just do whatever it takes, right? There's certainly value for both, but certainly it's like, hey, if you know you're already susceptible to some of those things and you're working on them, why have someone test them again when you can say, hey, we know we have these issues, we're working on them, and we'll validate those later. But what we really want you to do this time is focus on these TTPs, and we need to know what other big gaps we have from these types of adversaries. Yeah. Also, stop focusing on initial access. Assume someone will click a link. Like Dan said, assume they're going to get past your firewall if they're so motivated. Do basic user access, do assume breach, because they could spend all three weeks of their engagement period coming at your firewall.

Eventually they're going to get through, and then you don't know how any of your defense in depth works. Please test those things first. Or give them a pen test report and say, we've identified these gaps, can you pen test them again? Or, we think we fixed some things, can you hit these in your last few days if you've got extra time? Great place to go. Also, when it comes to the method of your purple teaming, like, let's say you decided to do it, we convinced you: when it comes to your style of purple team exercise, again, please don't sprint before you crawl. Don't go straight to CTF style.

You're going to execute this over here, you're going to hunt them over there, I'm not going to tell you guys what they're doing, catch them if you can. Don't do that. That's not going to help anybody. Your defenses and your detection processes need to be so refined and so mature for that to be successful. That's closer to that adversary emulation, CTF-style purple team exercise.

Please do open book, collaborative style. This is what I'm doing. This is why. This is why I used this native process and not this one. This is why I used this binary. This is why I invoked it this way. So you're teaching them mindset. You're teaching them, like, oh, I remember: if I see that in a log, that wasn't bad on its own, but Meryl also did these things around it, and those things together were bad. Do I see any of that anywhere else, or is it just by itself? And you're actually educating them on how hackers think, and they see you struggle.

They see your callbacks not work. They see your payloads not go off. They see how you triage those things, how you fix them, how you pivot. That's all valuable information, and that's where you need to start. It needs to be a conversation and not a CTF. Please, please.
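That open-book record is really just a runbook entry the blue team can replay later without the red operator. A hedged sketch of what one entry might capture (the field names are assumptions, not any particular tool's schema):

```python
# Sketch of one open-book runbook entry: what was run, why that variant,
# what telemetry should fire, and what the blue team actually saw, so a
# defender can rerun the retest on their own. Field names are
# illustrative, not any particular product's schema.

from dataclasses import dataclass, field

@dataclass
class RunbookEntry:
    technique_id: str              # ATT&CK technique exercised
    procedure: str                 # exact invocation used
    rationale: str                 # why this binary / this invocation style
    expected_telemetry: list = field(default_factory=list)
    detected: bool = False         # did an alert actually fire?
    notes: str = ""                # tuning/retest context for the blue team

entry = RunbookEntry(
    technique_id="T1003.001",
    procedure="rundll32 comsvcs.dll MiniDump against lsass.exe",
    rationale="native binary, nothing dropped to disk",
    expected_telemetry=["process access to lsass", "credential-dump alert"],
)
entry.detected = True
entry.notes = "Alert fired; retest after the suppression rule is tuned."
print(entry.technique_id, "detected:", entry.detected)
```

Capturing the rationale alongside the command is what turns the exercise into teaching material rather than just a pass/fail log.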

Yeah, that’s great.

So true.

I'd love to open it up for questions, but any last thoughts on, kind of, this path to adversary emulation? Hopefully the audience might chime in, like, hey, if we didn't clarify anything along this path. But I think we did a good job of explaining, like, here's how you can get started, and here's when and who and why, and what the ultimate goals are. And again, staying true to the mission of, like, hey, at the end of the day, we're all the purple team, focused on identifying the attacks, identifying the weaknesses in our environment, and hopefully eliminating as much risk as early in the attack lifecycle as possible. Right. So, any last thoughts before we open up to questions? Just be a person. Remember that we're cybersecurity professionals, but we're not always everyone's favorite person to hear from. So be a person, and know that the system you're testing is someone's really hard work. We all share the same mission.

Like Dan said, get started yourself. Use the open source and automated tools. Please don't go straight to CTFs; do them collaborative. Get one person to join you from each team, if that's all you can manage, and just have a conversation. And then, yeah, we can't say it enough: we recommend going from blue, orange, yellow into purple, into red later. I think it'll really serve you and your budget and your org. I've got nothing to add.

All right, so we got a question from Alex: what were the biggest hurdles you encountered while building out your purple teams? So on my side, again, with the history of being red: not being very nice or humble, like Meryl said, being very ego-driven. Our blue team did not like us because we made them look bad, and we weren't really helping them. So the culture was definitely one of the biggest things: organizations working in silos and now coming together to work together. The other one was getting everyone together in the room on a particular date and time. Our first one was actually a whole week-long purple team exercise. It was the first one we ran, and getting everyone away from their day to day, especially SOC folks.

If you work a 24/7 SOC, there are already tight schedules. It comes down to making sure that their manager removed their day-to-day work from them so they're not multitasking and they can actually focus on it, and to being able to share and being open. Again, back to culture: share everything. Right. The red team shares their screen while they're doing something. Then the blue team shares: the SOC level one, the SOC level two, the deeper folks, the hunt team. We're all there learning.

It's not just, oh, the red team is going to school us for five days. No, we get a lot out of it, too. So my answer is lots of culture items. I'm looking for Meryl's response here, and maybe we need to bring her back in six months after she's been there building a purple team program; maybe the answer will be different. I always tell people I'm going to be the happiest person alive in six months or drinking myself into a ditch, depending on how this project is going. We're going to see. When it came to building a purple team, because I was always used to controlling the red piece (I was a part of the red team), I would just say it's resource constraints on the blue side: identifying your blue team members to participate, figuring out the right methodology to use to execute your exercise. Because in our first one, I only had two or three people participating, but we had like 19 people just flies on the wall watching the exercise.

And the discussion wasn't really there, because people were afraid to ask and look stupid in front of their coworkers and things like that. So I figured out something more intimate: I'm going to bring one vet from that team and one newbie, and the newbie watches us, and we're all used to working with each other, and we get good discussion going and we're not afraid to look like idiots. And then the next time, the newbie is the vet, and they bring one newbie, and then they go back and tell their team what they learned and how it went, so that we're not taking too many resources away from active investigations and things like that, but also so that there is a lot of that really well driven discussion and these team members are learning to work together. Like, hey, Andrew, remember we worked together on that PTE? I think I've got an incident right now; do you see the things that you saw before? Great. We just headed this off before it got too bad. That bringing of people together, and that rapport, and that learning is so crucial.

I would say that a lot of people were intimidated by a member of the red team. So one of the first things I do in every new position is I start reaching out to people and introducing myself. Hi, I’m Meryl. I work on the red team. I’m also going to be doing this. I think you’ll be a customer for my product. I think I’m a customer for some of your output.

Do you have any questions? And I become the go-to. Like, Meryl, we're so afraid, but can you guys pen test this for us in an exercise? Or, is this you guys? And you just need to make yourself, again, a person. So be sensitive to resource constraints on the blue side; they're even more understaffed than the offensive side. Make sure that you find a methodology that works for you. You're going to stumble a bit at first. Some of them are not going to go as planned, and that's fine.

Make sure that you're intelligently picking your TTPs. Try and use CTI to influence them if you can; that's really where things should come from. And then make sure you identify the correct stakeholders, because we left one manager off one time, and all these other managers knew what was going on, and this guy got bells and whistles and fires going off, and he was pissed. So just try to make sure that you write your plan, document it, socialize it, and be willing to hold people's hands through it, because they're all going to be scared. Tell them you have stop-test conditions, you have escalation procedures, you have ROEs you still adhere to.

You're not here to DoS the company during production hours.

But mostly, the hardest hurdle is really buy-in. And if you're having trouble getting buy-in, carve out some time: automate your job as much as possible, make your processes efficient, give yourself that extra time, and just do it yourself and say, listen, in my free time, I executed this by myself. This was the CTI report I used, these are the open source tools I employed, and this is the value I was able to give you. Will you let me do more of this? And just ask for forgiveness instead of permission, and I guarantee you they'll be happy with you.

Yeah, I think that's just great insight. I know when I was getting our team started on this path, one of the big things that we ran into is, like, yeah, blue teams are constrained, and we tried to say, hey, we're going to spend a certain percentage of our time on the proactive side of security, right? Because otherwise you stay in the reactive and you stay in the legacy, and then all the new stuff becomes legacy, and you never feel like you get caught up. But one thing we didn't touch on much: you do tend to see higher morale as you start to show progress. Right? You're like, hey, we're actually making a difference, instead of just feeling that sense of overwhelm. I think we got time for one more question. I see one here.

I've been getting questions about the difference between a red team and a penetration test, and what value each adds to the overall information security program.

We're back to this debate about red team versus pen testing. Again, going back to history here.

Our thing with pen testing was regulatory based. Hey, do you pen test? What do you mean by pen test? Then we start looking at definitions. There's a very public one out there that I will name, PCI DSS, that defines a pen test very scoped to a particular environment: you can only do this, this, and this. And that's not the way it should be, right? A penetration test should be threat intelligence led. Some of the recent regulations are kind of pushing for that as well. But the traditional definition was that a penetration test differentiates itself from a vulnerability assessment in that you're actively exploiting the vulnerability versus just validating that it exists, which makes a big difference, especially in organizations where availability is a very important thing, like the financial sector, like the energy sector, et cetera. So pen testing, per definition, actually does exploitation, does not necessarily find all vulnerabilities (that would be a vulnerability assessment), finds the way in, and allows you to measure how far you would go based on that particular scope.

Most red team engagements aren't so much testing a particular technology as an overall organization. It's more of a holistic approach where you can also go after people, go after process. Often physical is allowed. Of course, you can have a physical pen test. So it really all comes down to scope, to why you're doing what you're doing.

And most importantly of all, it's about providing value, right? So many consulting companies that I work with that use our product will get asked, hey, can you do a red team? And then you start digging deep, and it's like, there's no point in doing that. You don't know your assets. You don't know what vulnerabilities you have. Let's start here first: understand all you have, do an attack surface scan, then find vulnerabilities and focus on those. Then get to the pen test, then get to the internal testing, assume breach, et cetera.

So it's all about bringing value, and definitions do matter, of course. Yeah, I'm going to take that a step further. Most red teamers will touch on tradecraft as well. So for us, the big differentiation is that pen testers try to come in and find as much as they can to give you a really well-rounded overall overview of how that platform or web app or technology is sitting. And they'll use a lot of automated methods and do a lot of validation, as much as they can, as well. Whereas red teamers, we try to come in very stealthily: low and slow, unannounced, no basic user access given. Sometimes we do.

So the tradecraft is a little different. We do a lot more erasing of our tracks, going into logs and deleting those things, whereas pen testers don't so much. For places like Zoom, the differentiation is we have pen testers test the Zoom client, and we have the red teamers come at Zoom as an organization, as a company: the people, processes, and technology, not just the client itself. So there is a big difference. But like George said, vulnerability scanning is just, we think we have findings. A pen test is, rather than thinking someone could abuse that, we actually abused it and proved that it can be abused. And a red team is, we did that, but very stealthily.

And you never knew we were there before we got out.

Yeah, absolutely.

I think that today the red team is just continuing to get conflated with the pen test team. Certain organizations will have a pen test team and a red team, which can get a little confusing.

But I think it's important. It's more the distinction of the types of activities, right, and the value they provide based on the goals of those exercises. And obviously, purple teaming fills an amazing gap for those different types of exercises as well. And it really helps the blue team understand the metrics they should be tracking and everything else. By the way, if you hop on a call with me, you'll get so annoyed.

They'll be like, yeah, so when we do our red team testing. And I'm like, I'm so sorry, I'm just going to clarify: when you say red team testing, do you mean a proper red team operation, or do you mean, like, a pen test, or are you just referring to offensive testing in general? Because all three of those things are very different to me. They're like, oh, right, sorry, we mean this thing. I'm like, let's speak accurately, because when I hear red team operation, I'm thinking full APT.

Adversary emulation red team, like, sophisticated: we spent three months planning it and we got four weeks to execute it. Not the same as a pen test. So keep your definitions accurate out there. There you go. Well, we are out of time. I mean, this flew by.

This is fantastic. To my favorite people, thank you so much for joining us on the webinar. Thank you, everyone that joined us. You know how to get in touch with us if you have other questions. We will have a link to the recording. And definitely, this is a community. We're here to help everybody succeed.

And so please reach out. You've got plenty of resources in Atomic Red Team, MITRE Engenuity, the SCYTHE Community Threats; anything you can get your hands on will help as well. So Meryl, thank you so much for joining us. Good luck as you continue to progress. George, as always, it's been a pleasure, and thanks for joining us as well, and we will catch you all next time.

If you need help with purple teaming, DM me; I'm happy to help anybody. And if you're doing it, or if you're doing it already, we have a purple team meetup where we collab and, like, bounce ideas and give each other strategy.

Fantastic. All right. Bye bye, everybody. Have a great rest of your day.