

Purple Teaming Made Easy With ATT&CK®

Series: On-Demand Webinars & Highlights

Category: Purple Teaming, Thought Leadership


Transcript

Alright, people have filtered in and we are excited to be sharing about purple teaming, and purple teaming with ATT&CK, today. Once again, welcome. This is the second installment of our 2020 PlexTrac webinar series, entitled Purple Teaming Made Easy with ATT&CK. I am Dan DeCloss, founder and CEO of PlexTrac, and excited to get this kicked off. I hope you are all enjoying your day. I appreciate you taking time to learn with us today, and I hope you're staying safe during the COVID pandemic and that everyone is joining from a warm or comfortable area in their house or office. But again, thanks for joining us.

We're going to go ahead and get started. Again, really excited to have you here, and we'll dive into the topic, purple teaming made easy with ATT&CK, today. Shawn Scott, our VP of Success, is going to be presenting a little bit as well. I am going to present around purple teaming, Shawn will discuss MITRE ATT&CK, and then we're going to dive into a brief demo of how you can conduct a simple purple team engagement using ATT&CK and highlight how simple it can be to start these exercises in your own organization.

So, a little agenda. We'll do a brief introduction, dive into what a purple team actually is, what purple teaming is, and how that would look in your environment, then get into the nuts and bolts of MITRE and the ATT&CK matrix. Then we'll show you how to use the ATT&CK matrix to structure purple team engagements and purple teaming throughout your enterprise, as well as perform an ATT&CK-based engagement demo with PlexTrac, just to highlight how simple it can be to get started doing purple team exercises using the ATT&CK matrix and collaborating through a tool like PlexTrac.

Let's set the stage. So what is purple teaming, and what is the concept meant to be? In the traditional sense of red versus blue, from the purple team perspective, you have the Red Team, which is primarily focused on doing proactive assessments: deeply technical activities like penetration testing, social engineering exercises that truly exploit weaknesses within your environment and within your users, or even physical assessments where they try to identify gaps in the physical security controls of the organization, gain access into the facilities, and highlight any control gaps that you might have from the security perspective.

The Red Team is typically very technical, very intricate in terms of the techniques and tactics that they use. On the Blue Team side, the Blue Team is traditionally the defenders: the people doing the monitoring for these types of activities, trying to fill a lot of gaps and plug a lot of holes, identifying any intrusions, and then responding to those incidents as they come about. They're typically very reactive and in a defensive type of posture. To sum it up, the Red Team is responsible for trying to make the bad things happen, and the Blue Team is responsible for trying to stop those bad things, or at least detect them so they can alert on them and respond.

If we were to expand this a little bit: here at PlexTrac, we actually take a bit more abstract perspective on purple teaming, in that the Red Team is truly anybody doing any kind of proactive assessment, trying to identify security gaps within the enterprise or within the organization they're attacking. So we would include things like vulnerability assessments, framework-based assessments, any kind of team taking a proactive approach toward security and identifying gaps in the security controls, whatever mechanisms, tactics, and techniques they might be using. On the Blue Team side, we expand that a little more to include the people implementing the controls: the security engineers or any of the IT staff responsible for implementing controls, obviously patching the systems and making sure that all the systems are up to date, and also developing and implementing policy. That can be from an administrative control perspective, simply stating what the policy is and that people need to adhere to it, or from a technical policy perspective, where you actually implement technical controls that enforce the policy. So on the whole, it's more about risk management: the Red Team is in charge of identifying risks, in whatever capacity, from a proactive perspective, and the Blue Team is responsible for remediating and preventing those risks, taking a little bit more of a proactive mindset.

That means making sure the controls are in place and being proactive about which other controls might have weaknesses. In the traditional sense, we've seen a problem with this red versus blue concept when these red teams and blue teams go into a purple team engagement, or just any kind of red team assessment, whether that's a black box penetration test or a more advanced vulnerability assessment. We've seen some issues, and these are the problems that we want to resolve. Typically there's a lack of common goals: what is the Red Team actually trying to do, and what are the goals of the assessment from the Blue Team perspective? Oftentimes the Red Team is going to try to identify any and all techniques, using a broad swath of tactics to get into the organization and identify the weaknesses.

The Blue Team, on the other hand, may have specific things they want to test first, because they know they're going to have issues in other areas, and so they may even avoid getting a penetration test or a more advanced assessment, knowing that there are certain weaknesses they just haven't fixed yet. They may avoid one of those more intense assessments altogether. We also see some uninformed threat modeling from the Red Team perspective. Again, they're going to use the tactics and techniques in their tool belt that get them into the organization, whereas they may not be approaching the assessment from the perspective of the Blue Team: we're in a specific vertical or a specific industry, these are the specific items and security controls that we want to test first, or there are known exploits out in the wild for this type of industry that we're more likely to face. And then once the engagement takes place, there's a lack of post-assessment collaboration: having the Red and the Blue Teams actually work together to identify what the remediation activities need to be and whether those have actually been validated.

Oftentimes, even with internal Red Teams, but especially with external Red Teams, those teams are already on to their next engagements and have limited time to reflect with the Blue Team on the previous engagement. So there's a challenge there regarding post-assessment collaboration on actually getting the work done to fix the issues. And then we have this concept of people being nervous about flipping the quote-unquote evil bits: we're nervous to teach junior engineers or junior security people, or even veteran security people, how the adversaries are actually conducting these exercises, how the Red Teams are actually performing the hacking. There's been a natural concern around teaching the defenders how to hack. But we are all under constraints regarding time, budget, skill set, and talent in the security industry in general, so the more we can share knowledge, the better. The more the Blue Team knows about the tactics and techniques they should be looking for, even if they can conduct them themselves in their own environment, the faster everybody can identify these security issues.

It's the concept that a rising tide lifts all boats, or all ships. Those are some of the issues we've seen with the more traditional red versus blue paradigm, and even with some traditional purple teaming, where the Red Team comes in to try to identify all the security weaknesses in one fell swoop and the Blue Team is overwhelmed trying to identify all the ways to defend against and detect those techniques. So when we talk about purple teaming, we want to truly identify the ways that the Red and the Blue Teams come together to perform these engagements in a more productive manner. The mechanics of these really should be frequent, short-duration engagements. We're going to focus on a small set of techniques and controls that we want to identify fixes for right away. We want to come together and share what the objectives of the engagement should be. We want to iterate quickly.

So you may end up testing a small number of controls and a small number of techniques, and it might only be for a few days or even a week. Right? And you're identifying closely, between the Red and the Blue Team, what the actual objectives of this engagement are and which items within our controls we're looking at: do we identify them as weaknesses, or are we doing well on those so that we can move on to the next set of controls? So the objectives really should be: are we able to detect some of these activities, can we enhance our detection capabilities, and can we remediate the items that are found much faster? The more items that are found and the faster they are fixed, the better the security posture of the organization. Can we transfer knowledge quickly, and not only from the Red Team to the Blue Team? You often hear about the Red Team transferring their knowledge of attack techniques and proactive assessments to the Blue Team, but the Blue Team should also be transferring knowledge to the Red Team about how they're identifying these issues and how they're identifying the fixes for them. That helps both teams up their game, so to speak: the Red Team is going to come back the next time and try to circumvent the controls that were put in place the first time, and the Blue Team is going to learn from the techniques and try to identify the ways they might come at us from a different angle. So again, the more knowledge that is transferred between both teams, the better off we all are, and then we're truly sharing an offensive security mindset.

The blue team is often overwhelmed with alerts and reactive measures that just come in and swarm their day. But a blue team can take a step back, be a little more disciplined, and instead of spending all day doing the reactive things, dedicate some time to a more proactive mindset: where are the issues that we think would be identified in a penetration test, a more advanced security assessment, or some kind of proactive audit, and what are the ways we should already be trying to fix those things before they're actually identified in an engagement or by a security assessment tool? So that is our primer for what we consider to be effective purple teaming. We have a white paper that discusses a lot of these concepts as well, and you're free to download it from our website. But this is the general overview of what we at PlexTrac consider to be effective purple teaming, and it's truly a collaborative effort between both the red and the blue teams.

So with that, I want to hand the stick over to Shawn, who's going to dive into the MITRE ATT&CK matrix, and then we're going to discuss how we can put these pieces together. Thanks, Dan. I appreciate it. All right, folks. Hey, I'm Shawn Scott. I'm the VP of Success here at PlexTrac. I'm happy to have an opportunity to chat with you all today.

So we're going to talk a little bit now about the nuts and bolts of the MITRE ATT&CK framework. Up to this point, Dan has been giving you an incredible overview of the concept of purple teaming. But if you haven't been involved in purple team operations in the past, or you really don't even know where to start, we're hoping this can provide you with a foundation, and at least a place to start, in a way that's very easily digestible. So who is MITRE? If you've heard of the ATT&CK framework, you've almost certainly seen the name MITRE before. Why should I care, and why should I trust these people? Well, MITRE is a pretty interesting and almost unique organization, and they have their hands in a lot of things. A lot of incredibly smart people work there. Back in my days in the DoD, I had opportunities to work with some of these incredible engineers and scientists on everything from crawling in the bellies of airplanes with new equipment, to wildland fire, to cybersecurity.

They're a nonprofit think tank of highly technical folks, and they are primarily funded by the US government. Even though they are a private or quasi-private organization, they are a nonprofit with a mission for the public good. In the course of their work, they partner very heavily with not just the DoD, but with US three-letter agencies. That gives them incredible, some might even say unprecedented, access outside of those agencies to the intelligence being collected on advanced persistent threats and large-scale criminal organizations. All right? But also, they are charged with, and get paid to, operate the National Cybersecurity federally funded research and development center. There are a number of these centers in critical areas where we need smart people to move the needle, and this organization actually supports the National Cybersecurity Center of Excellence.

So why do you care about this last bullet? Because ultimately, whether you decide to use MITRE or not, you've already paid for them. It's your taxpayer dollars at work, both your personal dollars and your organization's dollars, and they've done some good work. So what is the ATT&CK framework? Well, it helps to know a little bit of history. It really was born out of some of the Fort Meade experiments to try to figure out how we detect the advanced persistent threats that are out there. Attribution is a very difficult problem, because code is copied and it really is very tough to attribute things. But one thing that we have noticed over time is that malicious actors are just like us.

They're seeking return on investment, and so they tend to use the same behaviors over and over again, and the same group of people will do the same sort of thing. So if we have a mechanism for classifying malicious actions and a common way of talking about them, we can start to develop those patterns, and it aids as a tool for identifying who's actually behind particular malicious activities. Ultimately this helps us all answer the question, because we are classifying actual documented adversary activities in the wild: how good are we at actually seeing, detecting, and responding to those things? And that's the really neat thing about ATT&CK: it's not built on hypotheticals, and it's not built upon lab exploits or proofs of concept by researchers. It's built from incident response and the forensics that went into some pretty heavy investigations. What else is pretty cool is that this has been a purple concept from the start. From the get-go, at its creation, the Red Teams have been using the ATT&CK framework to build their scenarios.

But the Blue Teams have then been using the same framework to measure their progress and their ability to detect and respond to these categorized behaviors. Before I go any further, I do need to acknowledge that I'm going to keep using the term ATT&CK today. What I mean when I say ATT&CK is Enterprise ATT&CK, and that was the first one. But since Enterprise ATT&CK, there have been three other categorization matrices published by MITRE. Enterprise ATT&CK is a post-exploitation categorization framework. It's right of boom, after we get that initial access, right? PRE-ATT&CK is a way of classifying all of those tactics, techniques, and procedures that happen in the pre-exploitation phase, in that surveillance and reconnaissance phase that leads up to and provides the details and the ammo needed to get that initial beachhead. Mobile?

Pretty self-explanatory there: it's a classification system for attacks on mobile systems. And the newest member of the MITRE ATT&CK family is ICS, Industrial Control Systems. So, as they say, try them all.

So, the basics of ATT&CK. Once again, it's not really that cosmic. It's a categorization system, a way for us to classify and commonly talk about things. There are three layers to the framework, of which MITRE really takes care of two, and then more or less leaves the third to industry and government, to us as the community. At the top level you've got your tactics; there are, I believe, 13 of these now. Inside of those there are techniques, of which there are hundreds, and inside of those there are procedures.

If you come from a DoD background, you've probably heard the acronym TTP before, and I have no doubt that it shares a common lineage. We're going to step through these things and explain what each one means. So tactics: tactics are the adversary's technical goals at any stage of the attack lifecycle. If you've been in information security for longer than a month, you've probably seen an attack lifecycle line that looks something like this: we start with initial access, maybe we start with pre-exploitation, but we end up getting the crown jewels out of the environment. And all of these stages have a specific objective, right? In lateral movement, I'm trying to move from machine to machine, host to host; collection, get the crown jewels; exfiltration, get them out.

Just last year, MITRE added another tactic, which is Impact. And the name is a little misleading, because the objective is to cause an impact either through the denial of availability or the degradation of integrity, to actually have an impact on the business function of the targeted organization. Now, because these are loosely framed on stages in the attack lifecycle, they tend to run somewhat linearly. You can't get to collection if you haven't established your initial access, or if someone reboots the machine and you haven't established persistence. But as we all know, as we move through the attack lifecycle, there are many opportunities for moving back down the chain and restarting in a new environment. Down a level from the tactics, those stages in the lifecycle, are the techniques. Techniques are just buckets for grouping like actions that the adversary takes; here's an example of a tactic with three techniques inside of it.

So for persistence, here are three categories of ways that you can achieve the tactical objective of obtaining persistence. And what's interesting is that the techniques themselves can actually support multiple tactics. Later on today, we're going to take a look at a couple of techniques that support defense evasion, but those same techniques can also be used to achieve tactical objectives earlier in the attack lifecycle, in the privilege escalation tactic. I've only shown four of the techniques that are available for the tactic of persistence, but for some of these tactics there are a lot; there are dozens and dozens of techniques. At the bottom of the food chain, if you will, and really where the rubber meets the road, are the procedures.

So the tactics and techniques are cerebral classifications, if you will, but you don't actually type a technique into a keyboard and make a bad thing happen. The procedures are those no-kidding command line inputs, or parameters that you're feeding to your tool, in order to attempt to trigger the effect that will meet the objective you're trying to accomplish with that tactic. The good news is that for the most part, you don't have to go out and recreate or invent these on your own. There are a number of different organizations and efforts today that are cranking out procedures that are all mapped to the MITRE ATT&CK framework. One of our favorites is Red Canary. They do a great job with their Atomic Red Team project.

Their atomics are, in some cases, just one-liners. Can this thing happen in your environment? Do you have controls in place? And so these become the procedures that you can test against; once again, where the rubber meets the road. So when you put all this together, and here we're just showing the tactics and the techniques, I know you can't read this, and that's the point, because this is an extremely robust classification matrix, which is why we need to use it with care and scope things properly. If you go to MITRE's website, you'll notice that it can't even fit in one window; you've got to scroll because of the dozens of techniques that map to each of these tactics, not counting the procedures. And generally, once again, MITRE leaves the procedures to the community, but there are some available on their site as well.
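To make that tactic, technique, and procedure layering concrete, here is a minimal sketch of the hierarchy as a data structure. It is an illustration only; the technique IDs, names, and commands are example choices, not a complete or authoritative mapping.

```python
# A minimal sketch of the ATT&CK layering described above: a tactic groups
# techniques, and each technique is ultimately exercised through concrete
# procedures. The IDs, names, and commands are illustrative examples only.
attack_slice = {
    "tactic": {"id": "TA0003", "name": "Persistence"},
    "techniques": [
        {
            "id": "T1053",
            "name": "Scheduled Task/Job",
            "procedures": [
                # A procedure is the no-kidding command you actually run.
                'schtasks /create /tn "Updater" /tr "C:\\payload.exe" /sc onlogon',
            ],
        },
        {
            "id": "T1136",
            "name": "Create Account",
            "procedures": ["net user backupsvc P@ssw0rd1 /add"],
        },
    ],
}

# Quick look at how much procedure coverage exists per technique.
for tech in attack_slice["techniques"]:
    print(f'{tech["id"]} {tech["name"]}: {len(tech["procedures"])} procedure(s)')
```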

All right, so now we've got an introduction to MITRE. How do we use this framework to facilitate our Purple Team engagements? Well, it's all about planning, right? He who fails to plan plans to fail. So when you're sitting down and you bring your team leads from your Red and your Blue together, getting back to those shared objectives that Dan talked about, how do we decide what it is that we're going to test? This is where MITRE really helps, because we can take existing risk intelligence from all the various sources we have and map that intelligence to the MITRE ATT&CK matrix. Over on the right-hand side, you see that I have a partial screenshot of a tool called the ATT&CK Navigator. This is a free tool that is available from MITRE. You can go poke around and play with it on the website.

Nothing there is persistent, but you can set up your own server. This allows you to come in and create heat maps. And what do the various colors mean? Well, they really leave that up to you. They can mean whatever you want them to mean. A dark red can mean that we've had a successful exploitation using this technique in our environment and there is no mitigation in place. A light pink can mean that we have observed attempts to use this technique and they were initially successful.

But we were able to respond. Yellow could mean this is external threat intelligence. We here at PlexTrac believe that the best intelligence is always internally and locally developed threat intelligence, but that doesn't mean external threat intelligence isn't valid, especially if you don't have a good procedure for tracking and maintaining that locally derived threat intelligence. But then we need to decide: okay, great, we see where we've got some potential weak spots here based upon our heat mapping, but what are we actually going to test? Well, one thing that we would highly recommend is that you don't peanut-butter-spread your tests across multiple tactics.

Picking one technique from each of the 13 tactics in the MITRE ATT&CK matrix would be like trying to create defense in depth by putting one linebacker every ten yards on a football field. That's just not going to work for you. If you remember from our earlier slides, the tactics are loosely based upon an attack lifecycle and build upon each other. So if you are able to concentrate your efforts and really gain a high level of strong defense in one tactic, you could potentially break the chain that enables the latter stages of the attack lifecycle. But when we're doing our planning, we don't just want to look at threats to determine what we're going to test. We also want to take a look at where we have put efforts into defenses.

A lot of organizations spend a lot of money on defenses but really don't have a strategic plan for how their defenses should deconflict, overlap, and provide defense in depth, and the MITRE ATT&CK framework can do that for you. Note one subtle change to the screenshot on the right. As you see, this is another layer that I've created using the ATT&CK Navigator, and this allows you to start mapping your defensive efforts. Once again, the colors can mean whatever you want them to be. Dark green can mean that we have implemented controls here.

We have tested them extensively, and we feel pretty good. Blue can mean that we have implemented controls but not tested them. However you want to do it. But the great thing is you can overlay these two layers to really start to identify the gaps, which helps you narrow down what the priorities should be for what you want to test. But then, of course, there's that last step. It's not just about the tactics and the techniques. It's about the procedures that you're going to fire. Right.
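Before moving on to procedure selection, here is a rough sketch of what those two overlaid layers might look like as data. It is only an illustration: the key names approximate the ATT&CK Navigator layer JSON format and should be checked against the schema for your Navigator version, and the technique IDs, colors, and comments are example choices.

```python
import json

# Two hand-rolled layers in the spirit of the ATT&CK Navigator heat maps:
# one for threat intelligence, one for defensive coverage. Key names follow
# the Navigator layer JSON format as we recall it; verify against the schema
# of your Navigator version before importing. The colors mean whatever you
# decide they mean, exactly as discussed above.
threat_layer = {
    "name": "May threat heat map",
    "domain": "enterprise-attack",
    "description": "Internally and externally derived threat intelligence",
    "techniques": [
        {"techniqueID": "T1170", "color": "#8b0000",
         "comment": "Successful exploitation observed, no mitigation in place"},
        {"techniqueID": "T1053", "color": "#ffc0cb",
         "comment": "Attempts observed, initially successful, responded"},
    ],
}
defense_layer = {
    "name": "May defensive coverage",
    "domain": "enterprise-attack",
    "techniques": [
        {"techniqueID": "T1053", "color": "#006400",
         "comment": "Controls implemented and tested extensively"},
    ],
}

# Techniques that are hot in the threat layer but absent from the defense
# layer are good candidates for the next purple team engagement.
covered = {t["techniqueID"] for t in defense_layer["techniques"]}
gaps = [t["techniqueID"] for t in threat_layer["techniques"]
        if t["techniqueID"] not in covered]
print("Candidate techniques to test next:", gaps)

# Write out a file you could try loading into the Navigator.
with open("threat_layer.json", "w") as fh:
    json.dump(threat_layer, fh, indent=2)
```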

Those Red Canary Atomics are one option; Praetorian is another great source of those procedures. Here's the thing that's most important about the procedures you select: regardless of what you do, everyone's got to be in the know. The whole concept here is to get to an end state at the end of the engagement where we know exactly what we're going to do to strengthen specific defenses, and we do that by determining the effectiveness of our detection. If the Blue Team doesn't know what shots are going to be fired, they can't necessarily have their sensors in place to observe whether or not detection occurred. And we're not trying to hide anything here.

Ultimately, at the end of the day, it's about testing defenses, not just, as Dan said, using whatever tools I have in my belt to make the bad thing happen. All right, so what do we do when we've got our plan? It's time to execute. Well, this goes back to what Dan said: keeping the scope tight, keeping the number of techniques and even procedures that you're going to test to a small number. Because iteration is the key, right? We don't want to fire each one of these procedures just one time each, unless they weren't successful and our defenses are good. So after each iteration, what do we need to be talking about between the teams? Well, first of all, was the procedure successful? That's the obvious one. But then from the blue team, was the procedure observed? And for all of these things, if it wasn't successful or if it wasn't observed, why or why not? And what's the fix? What do we need to go implement right now? Even if it's just on one test box, the target of whatever that procedure was, is this fix going to prevent the bad thing from happening in the next iteration? And because every slideshow has got to have an obligatory iterative process graphic, here's my proof to you that I can make a circle in PowerPoint. All right, we're going to take a little squirrel moment here and talk about some tips for debriefing.

Because I come from a DoD background, and one of the things that was drilled into me early was that 90% of the learning happens in the debrief. And we just talked about the importance of iterative debriefing so that we can get to those fixes during the course of the engagement. So it's worth taking a few seconds to talk about how you do debriefing effectively, especially when you're doing it frequently. Well, the first mistake I see a lot is that people don't allow adequate time for data collection and notes prep. I would rather have a smaller vulnerability window, a shorter period where the exercise is live, and then have a two-hour gap that allows the operators on both sides to collect their data, collect their notes, correlate their activities across their teams, confirm observations, and bring solid data to the debrief. Because if you've ever walked into a debrief that starts right after everyone walks off the keyboard, so to speak, you realize they're not very effective; nobody's really got their debrief points ready to go.

So everyone needs to come to the debrief with their data prepared in chronological order. Everything needs to be time-stamped, ideally to a normalized clock, and this is for both red and blue. What do you need to bring to the debrief? Well, what was each action that was taken, each procedure that was initiated, or, on the blue team side, each defensive response to an observed action? What was the method of the action? What was the payload? What was the fix? And then after you initiated your action, whether that's on red or blue, did you observe any corresponding or subsequent action in response from the opposing team? Okay, so you bring this data properly sorted and correlated. And here's one really effective way that I've seen debriefs run many times in the past. It's very simple and it's chronological.
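As a small aside before the fast-forward clock walkthrough that follows, here is a minimal sketch of that time-stamp normalization and merge step. The event fields, times, and clock-skew value are invented for the example.

```python
from datetime import datetime, timedelta

# Hypothetical time-stamped notes from each side; field names are invented
# for this example.
red_events = [
    {"time": "2020-05-14 08:12:05", "team": "red",
     "note": "Fired Atomic #1 for T1170 against workstation WS-01"},
]
blue_events = [
    {"time": "2020-05-14 08:14:40", "team": "blue",
     "note": "Checked EDR console for WS-01: no alert on mshta activity"},
]

# Suppose blue's clock ran 30 seconds fast; normalize everything to one clock.
BLUE_SKEW = timedelta(seconds=-30)

def normalize(events, skew=timedelta(0)):
    for event in events:
        event["time"] = datetime.strptime(event["time"], "%Y-%m-%d %H:%M:%S") + skew
    return events

timeline = sorted(normalize(red_events) + normalize(blue_events, BLUE_SKEW),
                  key=lambda event: event["time"])

for event in timeline:
    print(event["time"].strftime("%H:%M:%S"), event["team"].upper(), "-", event["note"])
```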

You start a big digital clock on the screen, fast-forwarding from the time that the engagement began, and you run the clock sequentially, sped up. It's like, okay, we started at 0800. All right, run the clock at ten times speed. Does anybody have anything before 0830? Red's like, yeah, stop. Okay, go with your data.

We initiated Mshta atomic number one against box 10.1.3.2; it appeared to be successful. Blue team, was this observed? Negative. That sort of back and forth ensures that all of the events are shared, that no events go unrecognized, and that if something wasn't observed during the exercise, everyone knows it wasn't observed and can figure out how it can be detected in subsequent engagements. All right, enough of the squirrel. Back to talking about execution with MITRE, and let's talk about your post-engagement actions, because once again, no engagement is done until the final debrief is complete. So what do you need to make sure you actually do before you close up for a cold beverage on Friday? Well, remember, we started with planning that took into account our defensive mapping, that defensive heat map. Hopefully we have implemented, at least in a test environment, some things that we're going to get out to production that are going to make us better, and we can update that map for future iterations.

And then we need to talk about our velocity. The mistake that almost all teams make when they first begin purple teaming is that they bite off more than they can chew, and it doesn't allow them to iterate. They're going to have a mutually exclusive situation: we can either iterate or we can hit more test objectives, and we'll always take the former over the latter. You're looking for quality over quantity: measured and successfully implemented new defensive measures. I would say there's one thing that everyone has learned over the last two months, and it's that sometimes quality really is better than quantity. I've got 50 rolls of the cheap stuff in my garage I'm never going to touch, and I'd gladly trade them for a four-pack of Charmin Ultra Soft.

But finally, draft your next engagement. The environment is going to change. However, you're never going to remember the things that were on your mind, the things you want to focus on next time, like you do while you're still in the heat of the moment. So at least drop a rough sketch of what the next engagement is going to look like. All right, so now we're going to pop on over to an example of execution using the PlexTrac platform. Let me get that up for you. And Dan, while I'm doing that, did you have anything as far as a poll or any words for them here? Yeah, let's go ahead and poll.

We've kind of been talking at you quite a bit here, but we've got a quick question for you. Now that you have a perspective on purple teaming and the MITRE ATT&CK framework, we want to see what you are doing in your environment today. So I'm going to launch this poll, and if you don't mind, we'll take a few seconds here to answer it. Are you currently utilizing the ATT&CK matrix in your security program today? Several answers funneling in here. Thank you for your participation. We'll give everybody a few more seconds to vote.

All right. This looks like a good representation of the whole. Wait, a couple more trickling in. I guess we'll give them a few more seconds.

All right, fantastic. So we will go ahead. Thank you, everybody, for answering. I'll go ahead and share those results just to enlighten everybody that's on the webinar. So it sounds like several people are utilizing the MITRE ATT&CK matrix in some capacity, which is great. Others are not yet, but are hoping to, and some are just now learning about it. So we're glad to be able to share some of our knowledge, and we'd definitely encourage you to look to your peers for additional information around the matrix.

Okay, Shawn, now what are we going to do? Are we going to actually try to do a Purple Team engagement within the last 20 minutes of a webinar? Yeah, Dan, you know, we're going to do a hardcore pen test. I've got my good buddy John Strand here with me. The purpose of what we're going to show here is actually pretty simple. We've been talking about how MITRE ATT&CK is not unapproachable; it is something that people can use today to structure things easily in their environment. You don't need PlexTrac to do this, but we want to show how you can use a tool like PlexTrac to really rapidly build an engagement that's going to give you some definitive and easy-to-digest results. So if you're not familiar with PlexTrac, we tend to do most of our work in the context of reports. I've got a particular client here, an organizational unit called Project Seven Industry, and I've built out an empty shell report for this month's May Purple Teaming exercise.

And you can see from the title here that, based upon our analysis, we want to focus on the tactic for defense evasion, which is TA0005. Okay. All right, so I've got this empty report here with nothing in it. What am I supposed to do with that? Based upon the fact that I know I want to be testing various procedures from TA0005, I'm just going to look for things that are in my database, write-ups that are already associated with that particular tag for that tactic. I see that I've got quite a few, and I'm just going to concentrate today on things that we can actually test with atomics. So I'm going to grab a number of these right off the get-go here, and I think I've got plenty to go.

And I'm going to add these things to my report. Dan, now what I've done is I've created a playbook, a runbook if you will. These are all the various things that we're going to test in the course of this engagement, and you can see it's not really a lot. If I take a look and preview one of these particular atomics that we graciously stole from our friends at Red Canary, you can see that the actual command line to inject here is just a one-liner. But this one-liner is going to provide us with actionable information about our environment. I think my buddy Dastardly Dan will be on the red team here.

I'll play the blue team. And hey, Dastardly Dan, would you mind kicking off our first procedure? I think we're going to run the Mshta atomic number one. Sure, yeah. And Shawn, just as a side note, I think it's important to note that we're conducting a Purple Team exercise right here, right now, but we're completely remote. We're able to collaborate over the phone and still see the same view of the test. And so remote pen testing is definitely going to be a factor in today's world. But I think another important thing to keep in mind is that a lot of Purple Team exercises are conducted where you might bring somebody on site.

What's important from this perspective is that it doesn't have to be that way. You can definitely conduct remote Purple Team exercises and still collaborate effectively: hey, Shawn, I'm actually conducting this test right now, which is actually what I'm doing while I'm talking to you, and I'm taking a video of it, and I'm going to go ahead and say, okay, yeah, you know what, I confirmed execution of this thing, and just give me one second, Shawn, while I upload my evidence. Maybe while you're executing that, I'll make a couple more points, folks. You'll notice that when we brought these in from our database, everything is at a severity of informational because we haven't done the test yet. Everything is at a status of open, and nothing has been assigned to an operator. But these are all aspects that you can work on collaboratively.

While Dan's finishing up his test there, I'm going to hop over and take a look at last month's report, our April Purple Team engagement. As you can see, in this particular engagement, which we ran last month, after the atomics, the procedures, were run, we've got some things that have been closed out. You can see that this one was never even assigned to anyone, so it probably wasn't successful; the defense was already in place. But then we've got some things that we did need to clean up. And the great thing about PlexTrac is that you may have noticed I searched for those particular procedures via tags. Well, these tags stay persistent and I can do analytics. I'm not going to show off analytics today, because we're running a webinar in about two weeks to show off a lot of the new things that we've got in PlexTrac, but it's a very easy way to sort and just see, hey, how am I doing in tactic TA0005 today? Or how am I doing against all things Atomic? Dan, how are we doing with your pen test there? Well, Shawn, unfortunately I did succeed in one of the exercises. So I went ahead and assigned that to you.

I uploaded some evidence, escalated the severity to critical, and then made a comment. So if you can take a look and see if you may be able to make some changes there. I can see that, Dan. Yes, I have been assigned T1170. All right, I'm going to go take a look. What have you got there? I see that you have updated it to critical and it has been assigned to me. So what have we got going on here? Well, we've got a video here, so let's take a look at what's going on. Hey, Dan, since I've got you on the phone, why don't you explain what you did here? Yeah, so basically I just took the atomic and copied in the command execution that they informed us to try.

I used the URL that they did, popped open a command prompt, and copied that in. And upon successful execution, it was expected that a calculator, the infamous calculator, would pop, meaning that I've actually confirmed remote code execution on your system. So this is a bad thing, right? Typically, I guess just for anybody, if you randomly see a calculator pop, something is probably amiss, but in a real engagement you wouldn't normally see a calculator pop. So this just means, Shawn, that we've probably got some issues with your endpoint detection capabilities. We might need to push some group policy or potentially even just enable Windows Defender. Okay, you know what? I'm going to go ahead and send this over to my buddy Nick, and I'm going to have him attempt remediation. And Dan, can we get a retest first thing in the morning? Yeah, we'll do that.
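As an aside for readers wondering what a first pass at detecting that kind of mshta abuse might look like, here is a hedged, generic sketch. It is not a PlexTrac feature or the specific fix discussed above, and the CSV file name and column names are hypothetical; map them to whatever your EDR or Sysmon export actually provides.

```python
import csv

# Markers that suggest mshta.exe was launched with an inline script or a
# remote payload, the pattern used in the demo above.
SUSPICIOUS_MARKERS = ("javascript:", "vbscript:", "http://", "https://")

def suspicious_mshta_events(path):
    """Return process-creation rows where mshta.exe looks abused."""
    hits = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            image = row.get("Image", "").lower()
            cmdline = row.get("CommandLine", "").lower()
            if image.endswith("mshta.exe") and any(m in cmdline for m in SUSPICIOUS_MARKERS):
                hits.append(row)
    return hits

if __name__ == "__main__":
    # "process_creation.csv" and its column names are placeholders.
    for event in suspicious_mshta_events("process_creation.csv"):
        print(event.get("UtcTime"), event.get("Host"), event.get("CommandLine"))
```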

Okay, perfect. All right, so that's pretty much the extent of what we're trying to demonstrate here: this doesn't have to be some cosmic, high-level exercise to really start being able to utilize the MITRE ATT&CK matrix in your environment, especially if you leverage all the wonderful tools that the community has provided. Now, we've shown a very simple procedure here, and we acknowledge that some of the procedures are more complex, but there's a great entry point for just about everyone in your environment. So with that, I'm going to terminate our demo here and move us on to opening this whole thing up for a little bit of review and some Q&A. So just quickly, Dan, why don't you take this? Yeah, absolutely.

Our mission for today was to introduce everybody to the purple teaming concept and how it avoids the traditional red versus blue mentality and truly brings in that collaborative effort where you're working together for a common goal. The faster you can do these activities and the quicker you can identify the gaps in your security posture, the quicker you're going to get them to a remediated state and actually improve your security posture. So these things don't have to be really long, built-up engagements where you're saving a lot of money for just one huge purple team engagement once a year. They really should be iterative and ongoing, and you don't have to have a lot of skill and expertise to do these things. Now, it definitely helps, and you definitely want to be employing, either internally or externally, people who can go deeper. But if you don't have that skill set in house today, you can still take the resources available to you and start practicing and working on these things. That's the value the ATT&CK matrix really brings to the forefront: it provides this common framework, this common language, for saying I want to focus on the lateral movement tactic for the next month.

Right? And so I'm going to read up on the different activities and techniques that adversaries are using in the wild. I can utilize resources like Red Canary, or I can go and purchase tools that can conduct some of these activities on my behalf, things like SCYTHE. All of these things you can bring into your program. You don't have to have deep pen testing knowledge to identify some of those initial gaps. Those would be what some people would consider the low-hanging fruit, or things that you may have never thought of before.

The value that provides is that you can actually get started and start working quickly without a deep amount of expertise. But then you start to learn, and anytime you are in an environment where you're learning new things, it sparks new ideas. Oh, I didn't think about the fact that we've got this group policy configured differently in this environment or in this subnet than we have in the other ones. So I should test that differently, or I should put a little more focus on this tactic.

At the end of the day, this really actually provides more effective engagements.

It's not a wide swath every time of, show us all the things that are wrong. It's the red and the blue teams coming together and saying, no, I want to really make sure that we've got some of these tactics buttoned up. So if I'm concerned about privilege escalation, let's dive into those techniques. What are the common techniques that people are going to be using? Let's test those, make sure we can prevent them, and then next time we can start to talk about some of the more advanced ones, or the ones that people are using that might not have been documented yet. And after you've done that engagement, it just dovetails into the next engagement that you're going to do. We can't emphasize this enough: it's a very iterative process. It should be weekly, if not daily, types of exercises.

Take an hour out of your day and just go test one technique, right? And then the other value this provides is that you start to see your progress over time. You can do analytics and say, hey, last time we tested these, here's the progress we've made on fixing these items.

Obviously, a selfish, shameless plug: we feel like the best way to do that is using PlexTrac. We provide a central platform to do these kinds of activities, and that's really what we're geared for. But if you don't have PlexTrac, there's certainly an opportunity to get started with purple teaming, using the MITRE ATT&CK matrix and coming to that more collaborative assessment framework and methodology. So with that, we want to open it up for any questions. We do have a few questions in the pipeline here. Dan, maybe while you prepare to answer that second one, I'd like to go ahead and just demonstrate something.

The first question was a technical one about the platform.

So one of the questions was, how did you actually put that video into that engagement, into that finding here? In PlexTrac, we have the concept of a preview modal, which is what I've been showing you here; this is what the data looks like once it's been entered. However, for each of these findings, I also have the ability to edit them. So all Dan did was come in here, open up the screenshots section, and upload the video. It's called Screenshots and Videos because it does both. He gave it a little title and saved it. He also went into the finding's details and gave me a little snippet here in the recommendation about what I can do to remediate this particular vulnerability. So we've been trying not to get too deep into the mechanics of PlexTrac and just focus on MITRE ATT&CK and purple teaming.

But if you're really interested, we would love to talk with you more about how you can build this stuff out with PlexTrac; definitely hit us up at sales@plextrac.com and we'd be happy to show it to you. Great. Dan? Yes. This next one is: if we try manual purple teaming exercises, how would we organize it, how would we go about selecting the production systems, and how are we going to simulate attacks from our machines? So that's a great question. You don't necessarily have to have all of the attack or penetration testing tools to be able to test some of these things. You can use things like the Red Canary Atomics, which is an open source GitHub repository. Shawn, if you pop that open, maybe you can go to the link, open up another screen and bring over a window.

Yeah, it's an open source project that actually just shows you all the potential techniques that you can use within MITRE ATT&CK.

How I would do it is select a subset of machines that you want to sample, and those may or may not be production systems; some people will actually use a production build in a test environment. This simulates, if you're testing workstation defenses, which workstations are usually going to be the first line of defense against a breach, right? Someone's going to try phishing your receptionist or phishing the HR person, and if that person clicks on a link or downloads a file and executes it, that first foothold in is from that machine. So take a workstation that represents the standard build for your organization, or a very similar one from a technical policy configuration perspective, and then try to run those commands manually. The Atomics will actually show you, hey, you just run this from the command line, or run it using PowerShell or something like that if it's a Windows machine. Then you just test it, you see if it works or not, and you document whether it does or not.
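As a rough illustration of that run-the-command-and-document-it step, here is a minimal Python harness sketch. It is our own example, not part of Atomic Red Team or PlexTrac; the command shown is a harmless placeholder, and on a representative workstation you would substitute the command from the atomic you are actually testing. The blue team would separately record whether it was detected.

```python
import json
import subprocess
from datetime import datetime, timezone

def run_procedure(technique_id, test_name, command):
    """Run one documented procedure on this test box and record the outcome.

    'executed' only means the command ran; the 'detected' field is left for
    the blue team to fill in after checking their sensors.
    """
    started = datetime.now(timezone.utc).isoformat()
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return {
        "technique": technique_id,
        "test": test_name,
        "command": command,
        "started": started,
        "exit_code": result.returncode,
        "executed": result.returncode == 0,
        "detected": None,  # blue team fills this in during the debrief
    }

if __name__ == "__main__":
    # Placeholder command for illustration only; substitute the command from
    # the atomic you are actually testing on your representative workstation.
    record = run_procedure("T1053", "scheduled task query example",
                           "schtasks /query /fo LIST")
    with open("purple_log.jsonl", "a") as fh:
        fh.write(json.dumps(record) + "\n")
```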

As you get more advanced, when you feel more comfortable doing these types of things or you have more seasoned Red Team professionals, either hired from a consulting perspective or used internally, then you can move to the more critical systems. Right? Because you want to start getting into: where are the defenses around our crown jewels? You definitely want to be careful in those kinds of scenarios that you're not going to break something in production, but if you have skilled people behind the wheel doing those exercises, that's exactly what an adversary will be doing as well. So I always encourage people to be careful, but not afraid, to test some of these defenses in production. The fact of the matter is that if you are breached, the adversary is already in your production environment, so being able to simulate as close to that as possible is important.

So I think it's a matter of breaking things down: what you want to test, on what systems, and what those represent out of the whole of your environment, and then continuing to iterate quickly. It doesn't have to be a whole big swath of systems, right? It can just be a small subset.

This is actually the GitHub for Red Canary. They have a couple of different views: one matrix that's broken out just by Linux and Mac, and one by Windows. These are all tests. Basically it just mirrors the MITRE matrix. You've got your tactics across the top and then you've got your techniques. When you hop into any given technique, you'll note that you have at least one, sometimes multiple, in this case just one, atomic test.

And it actually gives you the prerequisites, the dependencies you've got to meet, where you can get the files that you might need to run this thing, and the actual command line things to execute in your environment. One thing you will note is that for some of these things there isn't a test yet; there is no atomic. It's an open source project, and so if you're interested in getting on board, I highly encourage you to help these guys out and contribute tests. There was another question we had: we showed pulling these findings in from the write-ups database in PlexTrac, and the question was whether we're pulling all the TTPs from ATT&CK. That's not actually true.

I built those out in about five minutes myself. But actually, I did have a request, and I like it, because once they're built, it's very easy for us to export those into a ptrac file and then hang them out there on our documentation site. So I'm going to get with our team and we'll talk about how we can take that on as a takeaway. We've already gotten a good bit of it done just for this demo, building those out for defense evasion. But yeah, we're happy to share what we have built. If other folks who have PlexTrac are interested in maybe helping build out the atomics into findings that people can import, please hit me up at support@plextrac.com and I'd love to have your help.

Yeah, and I'll just piggyback on that, Shawn: we actually are planning on getting as many of the atomics into PlexTrac as possible within the write-ups database. We can put some of them in now, and we want to create the rest of them and keep them updated as best as possible according to their GitHub account. That's the best resource we feel we have today in terms of what we can put into the write-ups database. The other valuable aspect of using PlexTrac for this is the ability to tag these items with the technique and the tactic, so that not only can you do analytics on those, but if you're using other tools that are testing against the MITRE ATT&CK techniques and those are imported into PlexTrac, whether through the tool imports or some other mechanism, if they are tagged correctly, you can still track all of those across the different sources. So we do plan to bring in all the atomics from Red Canary, and we do support other tools like SCYTHE that will also tag items that are automatically executed with those techniques. Right.

We got another question here. Someone asked if PlexTrac integrates with the ELK stack. I'd want to get a little deeper into what you're specifically looking for. Obviously, the ELK stack is made up of three components, everything from the collection to the normalization of the data to the display in Kibana. Today we don't have any integrations designed. But philosophically, PlexTrac as a platform is about helping you aggregate all the ways that you identify risk in your environment, and that's after those risks have been validated.

Right? And so we haven't really focused a lot on real-time integration with SIEMs and other real-time alerting tools, because at that point the data still has to be triaged. We're really focused on: okay, when your team comes in on Monday morning and it's time to get the real work done, where do they go to find out what the remaining unremediated risks in their environment are? Things that have already been validated, that we've maybe done some planning on how we're going to remediate, or that we have a platform to collaborate on how we're going to tackle. So we definitely would be happy to chat with you about what data you're looking to pipe in automatically. We do have an open API, so hit me up at support@plextrac.com and I'd be happy to chat with you about that.

And I will answer one last question that I guess I answered privately, but it is a good question, and then we'll wrap it up and let everybody get back to their day. The question was, can we combine the Lockheed Martin Cyber Kill Chain with MITRE ATT&CK? That's a great question. Lockheed Martin came out with the Cyber Kill Chain several years ago, and it utilizes similar categories from a threat intelligence perspective as to what the different phases of an attack are, through execution and eventually exfiltration.

This dovetails with my previous comment. It's actually great, and we would encourage utilizing as many frameworks as possible that tie into the attack lifecycle as a whole. Since the kill chain is geared around the attack lifecycle, we would just recommend that you tag the items you're using with the kill chain alongside the corresponding MITRE ATT&CK categories. So if you have one activity that accomplishes something in, say, weaponization within the Lockheed Martin kill chain and execution within MITRE ATT&CK, you can tag that finding with both of those tags, and then you can do analytics on both frameworks. There's definitely value there, and I think it's really important; similar to the way you might have multiple GRC-related frameworks, there are now more kill chain-related frameworks as well.
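As a tiny illustration of that dual-tagging idea, here is a hedged sketch. It is not PlexTrac-specific, and the finding titles and tag strings are an invented convention for the example; the point is only that one finding can carry tags from both frameworks and be counted either way.

```python
from collections import Counter

# Hypothetical findings, each tagged with both an ATT&CK technique and a
# Kill Chain phase; the tag strings are just an illustrative convention.
findings = [
    {"title": "Mshta remote code execution",
     "tags": ["attack:T1170", "killchain:exploitation"]},
    {"title": "Spearphishing attachment delivered",
     "tags": ["attack:T1193", "killchain:delivery"]},
    {"title": "Scheduled task persistence",
     "tags": ["attack:T1053", "killchain:installation"]},
]

# The same data answers questions from either framework's point of view.
attack_counts = Counter(t for f in findings for t in f["tags"] if t.startswith("attack:"))
killchain_counts = Counter(t for f in findings for t in f["tags"] if t.startswith("killchain:"))

print("Findings by ATT&CK technique:", dict(attack_counts))
print("Findings by Kill Chain phase:", dict(killchain_counts))
```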

Go ahead, Shawn. Yeah, I just wanted to take a moment to make a shameless plug. We're trying to do two of these a month: one really focused on sharing knowledge with the community, and then one for our current customers and people who are really interested in PlexTrac, because our development team is moving very quickly, cranking out new features, and while we're doing our best to keep everybody updated via release notes and documentation, sometimes seeing it in action is the best way to learn. Those will probably be a little shorter in length and really just cover things that have rolled out in the last six to eight weeks, since the last time we provided an update on our system enhancements.

That concludes the second installment of our webinar series for 2020. We really appreciate you joining us. Don't hesitate to reach out with any questions. We'd love to hear feedback, too, on how you're doing purple teaming in your environment and how you're utilizing the MITRE ATT&CK matrix. And obviously, if you have any questions about PlexTrac specifically, don't hesitate to reach out. We are looking forward to seeing you again on the 27th. Shawn, is that right? That's correct.

All right, thanks, everybody, and have a great rest of your day. Bye.