
VIDEO

Measuring Your Offensive Security Maturity: Triumph with Technology (Recorded Webinar)

Echelon Risk + Cyber and PlexTrac are pleased to present the final episode in our webinar series diving into the key areas of your offensive security strategy: people, process, and technology. This installment focuses on technology — a vital but sometimes unwieldy component in building a mature program.

Series: On-Demand Webinars & Highlights

Category: Thought Leadership


Transcript

Good morning, everybody. Or afternoon, depending on where you are joining us from. We're excited to be sharing some of our intimate knowledge of measuring your offensive security maturity with you today. This is the fourth installment of a webinar series that we've been doing with our friends and colleagues over at Echelon Risk + Cyber. We'll let people join for a few short moments and then we'll get started. But we're excited to have you here.

People are continuing to join and trickle in, so it’s exciting.

Always interesting to see, when people drop in, where they're coming from. Like, hello from LA. Hey from Guam, other side of the world. At HackSpaceCon, where Dahvid and I just were a couple of weeks ago, people were saying, oh, I'm from Seattle, I'm from here. And there was a person there working for the DoD who had flown in from Guam via Alaska. Like, he'd flown from Guam to Alaska and then decided to go from Alaska to Florida. You win the furthest attendee award.

Yeah, they like long flights, trying to rack up some miles before.

Yeah, all paid for. That'd be interesting. Do you think the government paid that TSP bill? So he had come to the PlexTrac workshop at HackSpaceCon, the gentleman from Guam, and he had said that when he gets work in North America, he takes some time to experience the Lower 48, because he usually ends up in Alaska doing security register reviews and those types of things. So who knows? Probably not. He probably paid for the Guam to Alaska leg, and then he put in the bill for everything else.

Well, all right, we've got a good crew that has joined us today, so let's go ahead and get started, because we've got some fun stuff to talk about and dive into. So once again, welcome, everybody, to our webinar. This is the fourth installment, and I think the final installment, of our series on measuring your offensive security maturity. Today we're going to talk about triumphing with technology. And as always, we're joined by our esteemed colleagues over at Echelon Risk + Cyber. So thank you for taking some time out of your day. We hope that you find this informative and beneficial to your programs.

If you’re either considering spinning up your own offensive security program, partnering with another consulting firm, or MSSP to bolster and enhance your offensive security program, or just interested in learning more about how these things work and what folks are doing in the various sectors. So, got a wealth of knowledge and a great crew with us today. I’ll let everybody kind of introduce themselves. I’m Dan, founder, CEO of PlexTrac, former Pen tester and AppSec guy, and excited to be sharing the stage with this esteemed crew. Mr. Desko? Yeah. Hi, I’m Pittsburgh.

Happy to be here with my brother, Boise Dan. We've known each other for a long time, and I'm happy to be on the last of this installment, talking about the tech stack to support a great program. I'm the CEO and Managing Partner of Echelon Risk + Cyber. We're a full-service cybersecurity firm. We perform all aspects of offensive security engineering, everything from adversarial simulation and emulation all the way through your standard pen testing. So happy to be here, and thanks for having us again.

Yeah, I’ll take the next one since I’m next in the picture. Slide deck there. My name is Nick Popovich. I’m the hacker in residence at PlexTrac and I’m here because Dan told me to be here. Boise, dan told me to be here. No, I kid. I have a background in offensive security operations, both as a practitioner in a consultative sense, as well as a large internal team and directed teams and also ran red teams.

And now I help provide some technical oversight and context at PlexTrac, and I'm really excited to talk about triumphing with technology as you mature, and how the tech stack can affect the maturation level of your security program.

And then that leaves me, the most important Dan, as I am the offensive security lead here at Echelon. Maybe not necessarily the most important Dan, but probably the coolest one. As you've probably heard me say before, I'm the emulated mob boss of a group of emulated criminals. So I kind of bring that adversarial mindset and flavor to every sort of engagement that we partake in, and I've done that ever since I left the military, where I did offensive cyber operations. I love this job; it never feels like a job. Really excited to dive into this last bit of learning about your offensive security maturity and how you can improve or build it out.

Great. Yeah. For those that are here in attendance, we definitely like to keep these as collaborative as possible. So we would ask that you utilize the Q and A functionality within the webinar settings and ask questions during the presentation. We'll either try to field them while we're in the midst of it, or we do have some time saved at the end for questions as well. So please collaborate. We've got a wealth of knowledge and experience here on the panel, so definitely feel free to pick our brains, and we can always get back to you if we don't know the answer to a question.

So with that, let's dive into what we're going to be focused on today and the agenda. The tools of the trade within offensive security, and cybersecurity in general, continue to change and evolve over time, and they also evolve with the threat landscape. Today we're going to talk a lot about what the ideal tech stack for building an offensive security program looks like. How do you evolve that over time, and what are some of the key factors that go into choosing your tech stack? How do you balance that with the maturity of the program and the business and where you want to head? How do you evaluate emerging technologies and make some of those trade-offs, and then continue to drive and show value from all these investments? This one should be a fun one because there are all kinds of directions we can take it.

We’ve already discussed taking this conversation, so I’m excited to dive in. But let’s get started with kind of what does it mean to curate an ideal tech stack for your organization? What are the ways to empower a team to focus on how you’re identifying the tech, what factors need to be going into the evaluation of this? But what are some of the techniques that you all have used to build the tech stack for your offensive security program? Yeah, I guess I’ll jump in on this one first because this is kind of still fresh in my mind from doing it for the team nowadays. But I think when you start to look at your tech stack as a whole and what you want to accomplish, you kind of have to ask yourself a few questions. The first one being, is it collaborative? A lot of tools out there nowadays are still functioning with the idea of like it’s a silo kind of endeavor, right? Great example is something that everybody uses, like Metasploit. Great tool out there. A lot of pen testers use it day in, day out. But how collaborative it is, it really for your teams.

You might have one individual operating their own Metasploit server and another individual operating their own Metasploit server. There's no shared idea there. So as you start to go through an engagement, whether that's consultative in nature or internal, unless there's communication between the two of them, which, I mean, we're all cyber people, so we know how sometimes we can be a little introverted in this, the collaboration piece tends to completely get missed, and you start overlapping and losing value across the board. So that's usually the first question: is it collaborative? And the second question is, is it up to date? There are a lot of tools out there that worked two years ago, a year ago, but are now completely outdated in the sense of what they're trying to accomplish in the attack techniques, the TTPs. Couldn't remember the whole acronym meaning right there: tactics, techniques, and procedures.

Man, English is tough on a Tuesday. I'll tell you what, if you're looking at those two pieces right there, is it up to date and is it collaborative, you can pull together a suite of tools and ultimately put your team way above the rest just from having the right products in the bucket.

I tend to think too. Oh, go ahead, Dan. Nick, what are your thoughts here? Sweet. I read your mind. That's how in tune we are. You know, what's interesting is I also tend to think that when you talk about the tech stack, you then have to break it up into two further subcategories. There's the operational tech stack to think about, and that's execution.

That’s the tooling to assist with identifying flaws, with tracking, with reporting, with command and control. And there’s so much involved that that in and of itself needs an owner and needs to be thought through and couldn’t be somebody’s entire life is the operational tech stack. But then in support of operations, there’s the administrative tech stack that also goes into place. So the stuff like project management, resource allocation, the business aspects, and while some general tooling and technology can assist, there’s also how do you best implement that oversight and administrative tech stack that can support operations. And in that finding the harmony. A lot of our tooling is going to be things that are either open source or they’re bought, but they are designed with focused purposes like attack and all sorts of interesting things. And that in and of itself may not be conducive to be managed, tracked and be able to show that.

So that’s something that I learned having had to build as a practice manager and a managing principal consultant, building out the tech stack, that’s super fun. As far as operational all the hacker nonsense, that’s a lot of fun. And then moving into being a director and having oversight and administration realizing that’s a whole job in and of itself. So I think focusing on and there’s similar requirements. You need to be able to collaborate well. You need to be able to report. You need to be able to ensure that the learning curve isn’t you’re spending so much time administratively managing things that you’re not losing out on the value.

That you’re trying to show organizations that are under your purview and that’s assessing their security, hoping that the output from that assessment activity can raise that security posture. So it’s just something that I think especially for folks who are going into it focused, you need to be able to either split focus or afford cycles to both administrative tech stack and operational. Yeah, I think that’s a great point, and I’ll probably lean on Pittsburgh Dan as well. But I view this as there are certain kind of tiers to your maturity levels as well, right within your organization, whether you’re building out your own consulting firm versus, like, an internal security team that’s going to be focused on proactive security testing. But there’s only so much resources that one person can manage or handle at a time. The tech stack definitely has to enable and empower the team to get their jobs done in an effective manner. So I think that it’s important as you embark on, hey, building out your tech stack.

Yeah, there’s lots of cool things to utilize, but with every new tool or process that adds additional cycles to the level of complexity that your team has to deal with. Right. So I think that you even alluded to it, Nick, in terms of some tools, you can have a person or more than one person just simply administering the tech and managing the tech. And so that may be necessary for a very large team, but it probably is not effective for a smaller team. And so I think those are definitely some things from just the leadership and the management side of things. Dan, what are your thoughts? Yeah, I was going to say for us, of course, with us being a consulting firm, we have to set up a tech stack that’s kind of wide and diverse and can handle a lot of different client needs. So there’s obviously a lot of sort of base level tooling that we would use on a lot of different types of engagements.

Right. But then it gets more specific based on the actual industry the organization is in, what the objectives are, what technology they use, and it starts narrowing based on what our clients have. I think one of the big lessons learned over my career is one of the things you have to watch out for: folks feel comfort with certain things, and people get comfortable with certain tools that they like, for whatever reason.

I think you have to challenge that: okay, hey, Nick, that's great that you like to use this tool to accomplish this part of the attack lifecycle, but is that the most ideal thing that we should be using from a risk management standpoint, from a cost standpoint, from the standpoint of wide use amongst the team and support of the tool? So there are a lot of things that get taken into account when you choose what to do. And I would say a lesson learned over time is that it's good for folks to have habits and the tools they use, but don't let that be one of the overriding factors of why you choose to use things. So if you were to break down a foundational tech stack for a group that's starting to get going, as you spin up a program, I think we've broken it down into two categories, right? You've got the operational and execution side, and then you've got the management, or the administrative, I guess. Yeah.

What would you say are some baseline categories, or even specific products? What is the balance based on where you see someone's maturity level as they build that up? In my experience, on the execution side, a lot of folks first need to decide: are you going to host it in the cloud, or are you going to host it in racks? And I think that ends up being the case for a lot of it. You're going to have tools that send probes and receive responses: you're going to have vuln scanners, you're going to have tools for maybe email phishing, you're going to have tools for command and control. You have to make the decision. And those are kind of table-stakes must-haves, right? If you're going to run an offensive security practice, you're going to be sending packets, and you're going to need a place to call home.

If you’re going to have remote devices out there, you’re going to need a VPN concentrated to call back to. And so the decision of do we host that in the cloud where we can easily back up and control, but then you have to deal with SLAs and you have to deal with are we allowed to do this? Are we going to get blocked all of a sudden because we look malicious? And then the idea of do we control it ourselves, do we rack, mount our own servers, do we host it? I think setting up that strategy is something that for a Nissant practice or a Nissant Burgeoning new group, they probably don’t have the desire or wherewithal to rack and stack a whole bunch of servers. And so making the decision for what is the bare minimum we need to execute operations that are successful. After you’ve had a strategy session and said, this is what we’re going to say that we’re going to provide, then you align that with the requisite hardware, whether it be you own or you don’t, there’s so many different rabbit holes you can go into. Like you need a communications infrastructure. Like, do you just go use Slack? Do you set up your own pigeon server? I think at the beginning you need to make a decision and then that decision will dictate as you mature your investment, your investment in an Azure or a GCP or an AWS. And then all of a sudden your tooling is going to align and say, well, look, we use an AWS for hosting all our scanners and then we’re going to use Lambda and we’re going to use all these different things.

Or you say you've got a super smart person who's VMware all the way, and those microservices are what they live and breathe, and they say, you know what, you go buy me some old Dell servers and we'll host it in my garage. You just have to start making those decisions.
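For illustration, here's roughly what the cloud route can look like in practice: a minimal sketch, assuming AWS and the boto3 SDK, of programmatically standing up a disposable engagement host. The AMI, key pair, and security group IDs are hypothetical placeholders, not anything the panel named.

```python
# Minimal sketch: stand up a disposable scanning/ops host in AWS.
# Assumes boto3 is installed and AWS credentials are configured.
# All IDs and names below are hypothetical placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder: your hardened ops image
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="offsec-ops",                       # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder: egress rules for scan traffic
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "engagement-scanner"}],
    }],
)
print("Launched:", instances[0].id)

# Part of the appeal of the cloud route is that teardown is just as scriptable:
# instances[0].terminate()
```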

It’d be interesting to hear from the Echelon folks what kind of categories and thoughts you have as you build those out and what matches at different maturity levels. I think to kind of add on to the point, right, of like, where do you live? How do you live? I think there’s a lot of lessons that can be really taken from our I don’t want to say siblings because that sounds like a terrible word, but essentially that’s what they are. They’re siblings of the criminal nature. Sitting on the other end of the fence here where we’re trying to emulate exactly what they do. Right. That’s why I make the joke. I’m the emulated mob boss of the group of emulated criminals.

There’s a lot of lessons that we can learn from the way that they set up their environments throughout the world, whether that be a nation state or an e crime actor in how we as offensive security professionals can set up our environments. Right. And the beauty is a lot of these large tech companies and security companies as well do a lot of great research into how they operate and so kind of emulating that tech stack, what they’re going and doing, how are they doing it, the good old fashioned rack and stack. Throw a Dell in a garage. While it sounds like a great solution on one end of the spectrum, it can be never completely never a good no, I’m just kidding. Yeah, it’s great for like you don’t have to consistently pay a charge fee other than your electricity bill, and I don’t know how you would really justify that expense cost every month. Right.

That’s not how we do it, by the way. Yeah, no, it’s not. But there’s a lot of skills and techniques that are now being implemented using tools like Cloudflare or AWS CDNS and any of these tools that have rotating IPS through a dynamic DNS name that’s almost the standard nowadays. And trying to understand how your tech stack is going to work with those components is really going to set you up for success down the road. Because while the Dell solution great for Pen testing might not be the best solution for Red teaming, and if that’s something that you want to get into as an organization, as a team, you have to think, what is the future repercussion of what I’m choosing? Right. In the long run, the cloud environment is great for Red teaming. In the short run, something immediate like Dell or I always say Dell is the quick thing because I probably have four or five Dell servers sitting at home right now too.

R620s, yeah, they're super cheap. Great. But yeah, that works for the short-term pen test piece. But do you want to re-set up your entire environment just for red teaming later down the line? Or do you want to set yourself up for success now, so that when you reach the point where organizations trust your team members and your organization as a whole as red team experts, you're already set up? Those are the questions you have to ask yourself when looking at where you start and how you choose. Yeah, I think if you're building an in-house team, the path is a lot different. If you're building an in-house team, I would absolutely want to leverage threat intelligence on a lot of things: on how we hire, on how we set up and stage the tools that we plan on using for our types of engagements, on what the goals of the team are for those types of engagements. Right. Maybe it's not red teaming; maybe it's more continuous validation of certain security controls.

So I think if it’s building an in house team, really understanding what are the exact TTPs that we want to set this infrastructure up to run and build it with that in mind. And of course, that’ll change over time. It always will. But I think when you’re first building, having that connection is critically important and not just doing it because, hey, I’m most familiar with this, or I write a blog that says this is the way we should do it. Right. Do it with an actual purpose in mind. Yeah.

That kind of segues into how you evolve this technology as the program grows. We talked a lot about some of those goals around your infrastructure and how you want to establish whether this is going to be a continuous thing versus point in time, where it's built up and torn down periodically. For the Echelon guys: how do you evaluate this based on your customer needs? Do you approach it from the perspective of, this customer is going to need this now, but they have a goal of being over here, and so you build that roadmap for them?

I’ll let Dahvid really answer this. He’s in more of the day to day. But if it’s a true Adversarial simulation or red teaming type of engagement, that’s where things get interesting for us. Right. We leverage general threat intelligence to kind of tell us which directions we should be going in and which TTPs we should be emulating. And then our team figures out, okay, if we’re emulating certain TTPs, are we using the right tools for those? And do we have the right skill sets to emulate those correctly? Right. So there’s a baseline of tools that we have in place to cover the 80%.

It’s kind of the old Thretto principle, the 80 20 rule. I’d say 80% of what we need to do in the Adversarial Universe is generally covered. It’s that specialty sort of 20% where we get into real specific TCPs on engagements, where that’s where we get crafty and sometimes even build our own tooling.

Yeah. I think the brilliance of cybersecurity in general is that tomorrow is going to be wildly different from today. It's one of the few career fields where it's almost impossible for somebody to be considered an expert, because nothing stays the same day in, day out. And that means we have to find ways to evolve not only our technology stack but our skills and priorities each and every day. So when we start to look at what we assume is going to happen in the future, what is the trend for phishing, or what is the trend for digital pickpocketing, right, like stealing tokens and passwords and whatnot, we have to find ways of evolving our tools toward what we see as a trend. But ultimately, it's one of those fields where what we predict may be completely wrong.

And then the next thing you know, you're coding up something custom right then and there, because that's the only way you're going to stay in that adversarial simulation or emulation spectrum.

So evolving is kind of a hard thing to talk about, because it's one of those day-to-day aspects where you have to look, recover, understand. We do constant AARs on our side, after-action reports for those who aren't familiar with the term. And those after-action reports go into, okay, did the technology stack that we used work? Even for an everyday pen test, did that work? Because the tooling on the defensive side is constantly changing as well. So the best thing I can suggest here is to have a team of robust individuals with an understanding of coding and application development. It might seem counterproductive to have a bunch of hackers that know how to code and create, because hackers break, they don't create. But at the end of the day, if you really look at it, these large organizations performing e-crime, and the nation states, have a plethora of individuals who will say, oh, hey, this exploit no longer works.

We need to code this anew, right? Or, our C2 is no longer calling back on the newest version of Windows 11; we have to edit the way this works. So evolving your technology is as much about evolving your people as it is the actual stack itself. Yeah, that was going to be my insight: as your program grows, your people grow, right? We'll get into the whole build-versus-buy discussion in the next section, and when you start making some of those evaluations. But your program is going to grow, and if you're building an internal program, you may have a set of tooling for vuln scanning and automated pen testing that you've now outgrown, or that's just not meeting the needs of your evolved program. So that's where being prudent and diligent matters: hey, we're going to evaluate, is this solving the problem for us today versus what it was doing before? That speaks to that comfort level. But I think you brought up a good point that I was going to highlight, Dahvid: your people are going to continue to grow and evolve as well, and your team will change.

And so how you’re training them is as big of importance as the technology that you’re bringing into the program. Nick, you were going to say something? No, that’s a great point and I think keeping the people part of this conversation front and center is absolutely apt. One thing that I wanted to put out for the good of the group is just a really practical, technical, tactical point of a measuring stick for a maturation process or kind of gauging where you are. I think there’s a point when you’re beginning a program, regardless of consultancy or internal team, that the tech stack is simply there to support the mission and execute what needs to be done. Very tactically in the moment, you have a requirement for X number of pen tests or to do pen testing to AppSec, engagements, et cetera. And it’s really there just kind of I’ll say it along the lines of it’s the minimum necessary to get the job done. And as you find your Efficiencies, you find your cadence, you can begin to bring in complementary tech stack scenarios and tooling that can aid and support.

And then, in my experience, a gauge that your team is continuing to be bleeding edge, and is really starting to mature beyond just keeping the lights on for your practice or consultancy, or keeping the lights on for your team's charter if you're an internal team, is when you take on the ability to evolve: having a hack lab, being able to install the EDR solutions, the proxy solutions, and the defensive technologies, where you have a non-billable research unit or a lab that is not directly executing engagements but is there for research purposes. And I've gotten to be involved in building those both in a consultancy and for a Fortune 50, wherein you become a customer of the enterprise proxies, you're getting and licensing EDR and AV solutions, and you're able to almost simulate an ecosystem for the sake of identifying areas for improvement. That's where you start getting your CVEs, if that's what your company allows, and those types of things. Not only is it proving that you're the tip of the spear and, to Dahvid's point, staying bleeding edge; sometimes that is a very difficult thing to convince the powers that be of. The offensive security leadership, the offsec directors and managers and VPs, they get it.

You’re like, hey, we need to be able to tee up C Two that’s going to bypass EDR and do things for us to demonstrate what the E crime folks are doing. We need to have every EDR we can have. That is a good litmus test, so to speak, of a maturing program where you’re able to start spending time setting up enterprise infrastructure that is not directly in line with executing engagements, but it allows for research.

Is this also the point in this webinar where we want to poke the elephant in the room and discuss using AI to help evolve your technology space? I know it's like the most overused terminology right now. I think the next slide is probably... no, we're there. Right? There it is. We do have a question that I think is a good transition before we dive into that. This individual is saying: I've heard in a typical organization there's anywhere from ten to 15 separate tools or technologies in their IT and/or security stack. Is this normal? Is this what you typically see? I'll withhold my commentary until I let the group go, but I'd be curious, Dahvid and Dan, what do you typically see in your engagements and with some of your customers? Yeah, I think ten to 15 sounds about right. In most engagements, though, I will say, of that ten to 15, a large percentage of it, like 75%, tends to be the same across the board for most organizations.

Right. They use an AV and EDR. They're using something like counter-phishing technology and whatnot. But the big thing there, or something to note from an offensive security standpoint, is that each one of them has its own individual vulnerabilities that are unique to that stack, and the implementation of those technologies can create its own vulnerabilities and issues. I do think it's a problem across the board as a cybersecurity industry that we just can't agree on best-practice, standard setups. Typically these things get pushed out with all the security controls off, right? I've seen AV and EDR products not have the "check to see if a packet is malicious" kind of setting enabled, right? And you just click that on, and all of a sudden your C2 no longer communicates, and you're like, that was it? There was a check mark. Why isn't that on from the beginning? But as far as understanding each individual technology, I don't think we have to go as in depth as knowing what separates a McAfee from a Trend Micro, or a Cylance from a CrowdStrike, or something.

In that sense, they all operate in a very similar fashion. Understanding the basics of them helps inform how an offensive security team can develop their own tech stack to circumnavigate these technologies.

Sorry, go ahead. I've definitely seen a lot of different technologies in the stack, right? And I think that informs how the offensive security program should build its technologies too. It always comes down to: what goal are we trying to accomplish, what problem are we trying to solve? And that can help build it, because, Dahvid, you alluded to it: an organization's individual tech stack is actually going to be, in and of itself, a fingerprint, right? And how do you test against that fingerprint and truly evolve your testing strategy toward it? Which is why I think we'd all agree that a continuous testing program, a continuous validation approach, is going to be the best way. Because as the IT team and security team evolve their tech stack, that's going to also inform how the offensive program should evolve its tech stack. Sorry, Nick, what were you going to say? Yeah, I think when we look at organizations that have a significant number of those technologies, so many of them are, like Dahvid said, not configured appropriately, not implemented appropriately. And then there's this over-reliance on the technology as the security blanket. And they either haven't been tested, don't test, or don't scope the test out. So they've got AV, EDR, XDR, DLP, they've got some sort of SOAR, they've got deception tech, they've got canaries, they've got a significant amount of technology, and they've spent all their money on this technology.

And then when you go and actually, practically test its efficacy via some sort of pen test, or even a configuration review, the reality is, I think, an over-reliance on tooling and the disparate nature of how those tools interact. One of the big laments of anybody that's worked for an organization with more than ten people: when you get a corporate-controlled laptop, it's so slow because of all the different agents running on it. But are they configured with the right check boxes? Are they configured appropriately? What I've seen over the last 15 to 20 years is there's a big push to buy it, because we've got to use the money and spend the budget, so we buy it, we stack it, we turn it on, and we've added it to some overworked admin's other duties as assigned. And that person's got eleven different things; they don't have the time to become an expert in XYZ technology. So it's about being aware, consolidating, and making sure of the core things that are going to protect your environment. Because this is an interesting question.

We’re talking about offensive security practices, both internal and consultative and every way that you could form them, the tech stack that supports that. And this question was related to also the tech stacks that are observed in organizations that are possibly under review and those types of things and really coming into it with an idea of we need to consolidate and really figure out what the objectives are, what works well and what doesn’t.

Yeah, and I think the practical examination of the, best word of the day, efficacy of these technology stacks, right, just understanding it from that practical examination point, should be enough for organizations to really start building a standard around it. But again, like I was mentioning before, when you want to build your own tech stack as an offensive security team, don't try to focus it on one particular state, because there are so many XDRs, EDRs; the list of acronyms we could go down from RSA is amazing in today's world. But they all operate in a very similar fashion. And with that being said, I totally lost my train of thought, which is great, ADHD as a whole, but no, what am I trying to say here? When you build your tech stack, don't build it to one idea of what a tech stack should be, because that tech stack could be configured a thousand different ways, from the least secure to the most secure. And if you build it to the idea of what they're trying to accomplish as a defensive team, you'll be able to expand your capabilities and knowledge to a much greater audience than if you were only focusing on defeating the AVs of the world, your McAfees and Trend Micros and Symantecs, right?

You’re going to miss out on all the crowd strikes, the silence, the carbon blacks, all those guys. So when you look at your technologies, you really need to be focusing down on what is the idea of what’s being used across the board and how are they being configured? And then to Nick’s point earlier, train yourself, learn about it, expand research. That’s how you keep up in the bleeding edge of things.

Hopefully I recovered there. And I think that's fantastic. So, times change, right? The bleeding edge becomes obsolete, and the next thing is the bleeding edge. Even today, I would say we're in the midst of a conversation as an industry at large around some transformational technology, AI particularly, but it plays into the conversation of how you evaluate some of these emerging technologies. I think we want to have a deep conversation around AI specifically, but there's also the notion that as you grow and evolve, and you're evaluating new things and new techniques, and your footprint or the tech stack you need to be testing against is changing: where does the build-versus-buy conversation come in? How do you address that? What are some of the key factors that go into evaluating, hey, we're going to go purchase a new product, we're going to institute a new open source product, or we're going to build it ourselves? I'd like to spend a little bit of time there, and then I'd love to dive into your thoughts around how AI is shaping some of these conversations as well.

Nick, where do you start? I think Dan should, from the perspective of a practitioner doing offensive security, right? I think there are a lot of things that offensive security practitioners can leverage AI to help them with. So, Dahvid mentioned earlier, you have to have some knowledge of coding, right? Whether it's to tweak some tools that you find and use, or to maybe even build some of your own tools. I think leveraging those large language models to help fight through some of those coding issues could be a huge help to an up-and-coming offensive security engineer. I also think being able to write really good narratives and report finding write-ups could be another unique way to leverage AI and these LLMs. That also comes with a lot of risk, though, right? You have to be extremely careful about the types of details that you put into these models, understanding that once that IP goes into the box, it's in the box, and you can't pull it back. So there's that trade-off: okay, I know this tool can help me be much more effective at my job, but it heightens the risk profile, and you really have to watch what you put into it. So I think there are a lot of ways that any practitioner can use these tools to become much more efficient and effective at their job, but with those risks now involved. And that's not even diving into, okay, what are the general cyber risks of the company itself using AI? Everybody that's listening is getting inundated with different interesting things on the newsfeed, because this is a transformative technology, and we've really hit our aha moment where some serious breakthroughs are happening.

But at this moment in time, while you evaluate emerging technologies, and that applies to AI as much as to new scanning technologies, protective and defensive technologies, and telemetry technologies, you really need to evaluate them against your goals. You have to have goals in mind. First, I think we have to make sure that, as technology is described to us, we don't let the articles prescribe it to us. We have to be the owners of our own domains and say, all right, this new, exciting, shiny technology is here. Let's understand: what is the goal of this technology? What can it do for me specifically? And then, from that, start evaluating it very practically for your use case. Dilbert cartoons or funny cartoons aside, where you talk about, oh, don't let the leadership see the CIO magazine or technology magazine because they're going to go in and circle everything and say we've got to buy all of this because it's trendy, I don't agree with that. I think a lot of our technology leaders are using the right types of measuring sticks to decide what makes sense for technology.

But we have to, as the practitioners, make sure, because I'm sure everyone on this call has already been hit up by a number of people on advisory boards and other CEOs and other folks who are like, hey, this AI, what are your thoughts? And last month it was, hey, this new tool, what are your thoughts? I think we have to make sure that we also don't get caught up in hype and hysteria, and really, practically, objectively understand the goals, what we're trying to do. I think a perfect example, when it comes to AI, is that it's the perfect tool, as Dan was speaking to, to be a force multiplier. But as it stands, a lot of these models are only as effective as the people with the requisite knowledge to understand how to utilize them. I personally am trying to use a bunch of them every day just to experience them and put them through their paces. And I'm recognizing that there's so much value; like, I'll never have to remember another regex again, which is a risk in and of itself. Because if I'm on a desert island and the fate of the world and the nuclear codes require me to remember how to write a regex, I'm going to be hooped. Because iptables rules, regexes, a ton of stuff that I used to have to research and just don't feel like remembering, I can do with an AI prompt immediately.

But if I take my 13-year-old, his prompts to an AI large language model are going to be markedly different from mine. So again, you're recognizing that they are force multipliers and tools. And what I'm seeing that I think is going to be interesting is the difference between generative AI models and predictive AI models, wherein generative AI is used to generate: it's creating anew, taking data in and trying to somewhat create. Some legal people will argue, no, it's just copying from its database, and you'll get a whole other slew of intellectual property lawyers who are like, let's find a way to build a practice around suing AI companies.

But then you have the idea of, wow, how amazing for malware to be able to truly obfuscate itself and create itself anew. And then predictive AI, on the other side of it: I mean, that's the old heuristic, right, in EDR parlance or AV parlance. Predictive, where you take data, historical trends, data from the past, and try to make predictions. Obviously you see it in finance, and you see it in other places where predicting outcomes is advantageous. But in cybersecurity, you could have models that predict, and then what's interesting is what happens when you start getting them to feed each other. That's another concept that we'll leave for another day and let the futurists talk about. But when you have predictive AI then leveraging generative AI... cats and dogs living together, mass hysteria.

The point is, when we evaluate its use in our environment, we have to look at the risks. A lot of organizations, amid the excitement, are concerned and are shutting it down at their perimeters, because they're worried about intellectual property shooting off to who knows where, and about not knowing what their exposure is. And I feel really sorry for those DLP teams, the data loss prevention engineers, who are there trying to play whack-a-mole with all the different extrusion points for data; those are probably the first ones going to the enterprise and saying, hey, let's shut this down. I think we have to recognize that tooling needs to be carefully evaluated against the goals, against what you want to produce from it. So when you're evaluating it, think through not just what your peers' goals are, not just what you think the goals should be, but, practically speaking, what your goals are for three weeks, three months, and three years down the road. That can then align to your technology decisions. Just to quickly add on to this, because Nick brought up a good point, which is something I like to talk about and rant about a lot on LinkedIn, if you ever follow me on it: how trendy cyber is in general. And from that trendiness, we get a lot of emerging technologies that are copycats or snake oil.

Or snake oil is probably a bad way of putting it because they still provide value. They’re just not providing the value that they claim they’re providing.

There’s a lot of technologies that come out, and especially for offensive security, because now pen testing, red teaming, that’s becoming the new hotness, right? The closed doors where there was all the gatekeeping to get into the field is slowly going away. Thankfully, that’s better for the industry as a whole, but on that same field, that’s opening the doors to vendors that don’t necessarily have the right ideas for what an offensive security team should be doing, right? There’s a lot of great tools out there, and I’m not going to name any bad ones or good ones in general, but what I think is when you’re looking at evaluating your emerging technologies, that first part, evaluating is super important, right? Don’t do the, hey, let me go to the CIO magazine, circle it and go, hey, this is the tool we need, right? That new C two platform, new hotness. This thing is great. Everybody’s talking about it. Marketing does a good job because marketing has done this for years for a lot of other things. So we’re only now starting to experience that in cyber as a whole. So when you are looking at things like AI or driven tooling that is AI enabled or whatnot, definitely take the time, look through it, try to use it in a realistic environment, something that shouldn’t be on a client environment.

Don’t test the tool, right? Then there test it in your little cyber range. But it is. It’s an evaluation. And you shouldn’t ever look at it and be like, we’re going to use this tomorrow because Google said so, or we saw this new webinar and this is what was coming out, or this is what everybody said was the best technology at RSA. Take your time. Read through it, know what you want to do from it because not every tool is going to work out for you, especially for what your goals are. But yeah, that’s my quick hot take right there for that.

Yeah, no, I think that’s good and I think it’s important to kind of emphasize too, like, hey, what is the total cost of ownership of having this tool and then compared with the value that it’s going to provide or a technology. Right. And again, it kind of comes back to what are the overall goals Nick was mentioning. But if you’re truly evaluating like, hey, this is going to save us time, especially from a Red team perspective or pen testing perspective, this saves us time in kind of evaluating and hitting on that low hanging fruit. So I can spend more of my expertise on the more intricate exploits that’s going to provide a lot of value. Right. Because otherwise you’re spending a lot of manual effort or additional time and effort in finding things that can be found via automation.

Right. And so I think that there’s always this balance that we have. And AI is a great concept to discuss too, of like, is it going to replace pen testers? Right? Is it going to replace more technologists? There’s obviously a realm of discussion that can be branched out from that, but for today it’s actually, hey, can it, can it be utilized in some fashion to help us actually get our jobs done more effectively and efficiently? Because there’s always going to be a manual component, especially in the offensive space.

Yeah. Think of AI as Jarvis from Iron Man. Right. Tony Stark still built his own equipment. The AI just helped.

Yeah. I worry, with our team leveraging some of those automated tools, that they're not going to gain some of the skills that build the base understanding. And it's that fine line: yes, some of the enumeration phases of a test can get laborious and boring, and they can be sped up by some of the automation tools that we use. However, we still want our junior engineers and junior analysts to be building some of those base skills as well. So it's, how do you find that balance between letting them leverage that and get the efficiency, but also finding a way to still build up the skills you want them to have at the end of the day? Yeah, something I used to do for that: I wouldn't forbid consultants that worked for me from using sqlmap, but I would want them to be able to tell me what it was doing every step of the way. And people really quickly learned when you came onto my team that if you used the tool, you had better watch it through Burp or watch it through Wireshark, or open up the code and start reading it.

Because I’d like to make sure that to know that. And I think that almost goes into a training aspect because there’s always going to be tools that are force multipliers that do things better and more efficiently. I wouldn’t even say better do things efficiently and let you do it at scale. That I don’t know how to craft a TCP packet with scapey. Early on in my career, I let Nmap do that for me. But understanding that knowing how to do that makes you more lethal. Well, lethal is a poor word, but makes you more technically savvy and able to adapt, improvise and pivot more expertly.

Absolutely. I think we do have to find that balance, Dan. And it goes down to that culture of making sure that you're always training, and that good enough isn't always good enough: yeah, you got the job done, you got it working, but let's help elevate your technical prowess and understanding by going deeper, still leveraging the tools for efficiency, while making sure that you're giving people the training and the opportunities and the incentive to continue to understand. And then also, every moment is a teachable moment. Every time somebody knocks something over because they used a tool that did something they didn't expect, I was like, well, here's the thing: I would like you to write me up a paper explaining what the tool was doing, why it did it, and what you learned from it.

I call it the Calculator concept.

I too have a 13-year-old son who is finishing up algebra, or pre-algebra, I guess. We talk about, hey, just because you know the formula to plug into the calculator and it gives you an answer doesn't mean you know the math. That sounded old; when I was growing up, you couldn't use calculators on certain tests, right, in certain classes. So it's a similar concept with, say, sqlmap. What is it doing? You should understand how it's doing its evaluation so that you could synthesize it in some other fashion if you needed to, or potentially go write it better or tweak it.
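To ground the calculator analogy, here's a minimal sketch of the kind of boolean-based blind check that sqlmap automates under the hood: send a condition that should be true and one that should be false, then compare the responses. The URL and parameter are hypothetical, and this should only ever be run with authorization.

```python
# Minimal sketch: the boolean-based blind SQLi probe that sqlmap automates.
# URL and parameter are hypothetical placeholders; test only with authorization.
import requests

URL = "http://testsite.example/item"   # placeholder target

def page(value: str) -> str:
    return requests.get(URL, params={"id": value}, timeout=5).text

baseline = page("1")
true_case = page("1' AND '1'='1")    # should render like the baseline
false_case = page("1' AND '1'='2")   # should change the page if injectable

if true_case == baseline and false_case != baseline:
    print("parameter looks injectable (boolean-based blind)")
else:
    print("no obvious boolean-based signal")
```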

Right. I think that’s an important thing is like metasploit, for example, and driving payloads there’s quite often, I would presume. In my experience, you have to adjust payloads to actually bypass certain technologies and you kind of have to know how to do that on the fly to a degree. Right. At certain points in time, every payload that you generate out of meterpreter or something like that is not always going to work, but it’s incredible time saver to have it generate a payload. Right. So I think those are some of the key concepts that also plays into how you evaluate technology so that you actually can ask those questions when you’re evaluating it.

Like, what is this doing under the hood? And does this actually do what I want it to do and save me time, right? Yes.

Okay, so we’re kind of coming down to the end. We talked about building a tech stack, evaluating a tech stack as you grow and the different technologies that may go into it. And then how do we kind of wrap this all up in terms of driving the most value? What would you say are the key things that drive value from your technology investments? What are the key things that you think every team should actually have as the forefront? Like, hey, this is how we know we’ve made a good decision and produced value from it.

Efficiency and accuracy. Right. Those are the two biggest things right there.

It should come as no surprise to anybody that the largest cost in a company is its people. Right. And the fact is that most offensive security teams are made up of very highly skilled and sought-after individuals. So if they're able to use your tech stack to its fullest capabilities, they can expend their time and effort on that further 20%. Right. The value that's driven from that is tremendous for the organization or the clients that they're working with.

So at the end of the day, quite simply, it’s always about efficiency and accuracy, because if you start reporting the wrong things too, you lose trust. And that’s the one thing that you never want to lose as an offensive security team.

My quick hot take: finding a way to calculate ROI from the tools that you use, I think, is very important. Honestly, some of the tools that we use cost as much as hiring an FTE, or close to it, when you add it all up.

So I think when you look at it, you have to say, okay, is our team as effective or more effective than it would otherwise be by employing these tools? Are we seeing that ROI across the board, or is this thing becoming shelfware? I think you can quickly figure out the tools that are becoming shelfware; no one's using them.
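One hedged way to put numbers on the shelfware question: a back-of-the-envelope sketch with made-up figures, comparing a tool's annual license cost against the loaded cost of the tester hours it saves.

```python
# Minimal sketch: back-of-the-envelope tool ROI. All numbers are assumptions.
HOURS_SAVED_PER_ENGAGEMENT = 6      # e.g., reporting time saved per test
ENGAGEMENTS_PER_YEAR = 120
LOADED_HOURLY_COST = 125.0          # salary + overhead per tester hour
ANNUAL_LICENSE_COST = 40_000.0

annual_savings = HOURS_SAVED_PER_ENGAGEMENT * ENGAGEMENTS_PER_YEAR * LOADED_HOURLY_COST
roi = (annual_savings - ANNUAL_LICENSE_COST) / ANNUAL_LICENSE_COST

print(f"Annual savings: ${annual_savings:,.0f}")   # $90,000 with these inputs
print(f"ROI: {roi:.0%}")                           # 125%; negative means shelfware territory
```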

So, to me, that’s big. Right. And for us, one of the big things where we were finding ourselves getting bogged down in a lot of time was the reporting phases. Right. And for us to be able to leverage a tool to more swiftly maneuver and manage the reporting phases and get that not only efficiency down, but we also bill at the end of the test when we get a report out. Right. So the quicker we could do that, the quicker we could also bill.

So there’s a lot of ROI on different tools and the ways you could look at them. And as a consulting firm owner and manager, that’s what I look at quite a bit.

Yeah. And I come back to the metrics and the outcomes that you're delivering and showing, right, and trends over time. Particularly, like we talk a lot about, are we able to show progress in our security program? Obviously, the end goal of an offensive security engagement or a continuous assessment program is to show, hey, we're helping the organization become safer. So if you can show that you're accelerating security posture improvement, that control validation is continuing to show progress in terms of how frequently configuration is improving and changing, and that your tooling and your overall program are participating in that or facilitating it,

That’s really how you start to show the value. Right.

If the technology that you're investing in is actually causing things to go in the wrong direction, then it's time to evaluate: hey, are we doing things as effectively and accurately as possible? That's my take from a security program and posture assessment perspective.
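As one illustration of a trend metric that can show that direction, here's a minimal sketch, with made-up finding data, computing mean time to remediate per quarter:

```python
# Minimal sketch: mean time to remediate (MTTR) per quarter from finding
# records. The data below is hypothetical.
from collections import defaultdict
from datetime import date
from statistics import mean

findings = [  # (reported, remediated) placeholder records
    (date(2023, 1, 10), date(2023, 3, 1)),
    (date(2023, 2, 5),  date(2023, 3, 20)),
    (date(2023, 4, 12), date(2023, 5, 2)),
    (date(2023, 5, 3),  date(2023, 5, 25)),
]

days_by_quarter = defaultdict(list)
for reported, remediated in findings:
    quarter = f"{reported.year}-Q{(reported.month - 1) // 3 + 1}"
    days_by_quarter[quarter].append((remediated - reported).days)

for quarter in sorted(days_by_quarter):
    print(quarter, f"MTTR: {mean(days_by_quarter[quarter]):.1f} days")
    # A falling MTTR quarter over quarter is the "right direction" signal.
```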

Nick, you’re good? All right. Hey, we’ve got 1 minute left. I want to kind of open it up. We have some questions that were great. Thank you for participating. I know we’ve got a short amount of time left, but if there’s any questions from the audience, we gladly take them now. But while I advance the slides and let some folks chime in if they want to, there’s always plenty of resources.

You can find resources on our website, and you can also look into our product as well, but stay tuned. And then, Dan, Dahvid, anything that you all want to share in terms of resources that you might have available? I'll go ahead and stop the share now, but thank you for your time and for sharing your expertise, and hopefully everyone found value in this webinar series. It's been fantastic. Yeah, I love these chats. Thank you so much for having us again. It's fantastic to get together with you guys and talk about these important topics. So we look forward to the next one.

Absolutely.

Well, we are at time, and we respect that for everyone. So thank you so much for spending the last hour with us. We wish you all the best as you start to embark on, or continue to improve, your offensive security program, and we hope that you found this valuable. You always know how to reach out to us, and we're happy to help with any insights. We wish you all the best and good luck. Have a great rest of your day, everybody. Have a great day.

Thanks, everyone. Great to see you, everyone. Bye.