Hello, and welcome to today’s remote session. This is Tom Bechtold, your host from Secure World Digital. I want to welcome everybody. Today we’re going to be talking about continuous assessment as a mindset, a top down approach to a better security posture. So I want to say thank you to the folks over at PlexTrac for making today’s remote session possible. I always have something to say, of course, about our topics. And I was thinking about today’s, and I was like, it kind of makes logical sense to me to be continuously assessing the networks, right? Because right now, and this is just me, it seems like folks are trying to do more with less staff, less everything, right? Less hours in the day probably, too, because they’re doing more stuff.
So they’re kind of just trying to get done what they can or at the very least, kind of stuff that they’ve always been doing, right? So almost like we’re checking the box, right? We’re doing this quarterly. We’re supposed to. But the bad actors, they don’t take breaks. We know this from the holiday season. They were scamming up a storm over the holiday with different tracking stuff. Now they’re getting into HR stuff and tax season, they’re going to start getting us on that stuff. We have to kind of change our mindset a little bit about the way we’re assessing things.
So today we’re going to be talking with Mr. Nick Popovich over at PlexTrac, and he’s going to kind of give us a different idea of how maybe we should be doing this stuff with continually assessing what’s happening, right? Because that’s really what the bad guys are doing. They’re constantly probing our networks, trying to find those weaknesses or those employees where they can kind of make them the threat now. So that being said, I’m going to get moving with our housekeeping, and then we’ll get right into stuff with Nick real quick. We’ve got our slide. There’s a few URLs in there in the resource tab you’re going to want to grab. There’s some really great stuff that they sent us over.
This time, we’re taking questions throughout. I’m going to try to do them all at the end. I’ve got a few that I’m going to ask Nick as well. But certainly, if you’ve got something you want to know, throw that in the Q and A and we’ll get to those. If you have any problems with the slides not moving, or you’re hearing stuff weird, or stuff just not working right, refresh your screen. That usually fixes most of the issues.
This is available on demand, so if the issues really are bad or you just got to drop or whatever, come back, use that same link. You can revisit this webcast within the next six months, and you can also share it with colleagues. So if you know anybody that’s having issues with assessments or just getting their head around this stuff, shoot them a link, they can join on demand. That being said, I have a quick poll question for everybody. Just kind of want to see where everybody’s coming from right now. It boils down to: how often are you doing assessments? Are you doing it one time a year, twice a year? Maybe you’re doing it quarterly? Maybe you’re already ahead of us, right? You’re doing it continuously already? Or maybe you thought the IT group was doing it, right?
Who knows, right? So make a choice. Let us know what you guys are thinking about right now. I do want to let you guys know we have a new segment that I’m going to throw at you guys later on after we do the Q and A. It’s basically kind of our secure world news. So I’m going to give you guys some of the highlights from the last couple of weeks, just a couple of stories, and I’ll share that at the end. So let’s give you guys a moment or two more to lock in your votes on the polls. So how often are you currently performing assessments? Is it once a year, twice a year? Quarterly? Continuously? Or maybe you thought somebody else was doing it right.
Hope somebody was doing it right, if that’s the case. Let’s see. Take a peek at these results.
We’ve got a lot of folks, 42%, saying about once a year. In second place, we’ve got a few people that are actually doing it continuously already, and then quarterly at 15%. And we do have some folks that thought maybe another group or IT was doing it, or hopefully the IT people are doing it, right? Nick, what do you think? Does this kind of look like what you’re seeing when you’re chatting with folks? Hey, thanks, Tom. Yeah, this data that we just collected in this spot questionnaire really meshes, I think, with the trends that we’re noticing both at PlexTrac and just from being a hacker about town, talking to organizations. It’s kind of the idea of: we get that one assessment, and so there’s at least some visibility, we have some telemetry into our security posture, and that one assessment at least is a good baseline.
And then the driving factors around why you only do certain assessments can be varied. But I think a lot of times it comes down to resources. Resources being money, or being tapped out. Resources from not being able to deal with and engage with it, or maybe preconceived notions of thinking that an assessment is going to be this big project that requires a lot of time, effort, and money. And so it’s kind of like, we’re going to do our one assessment, get a baseline for the year, and build off of that and hope things go well. And then sometimes there’s regulatory requirements for assessments at different intervals and those types of things. And so what I see here kind of meshes with my expectations.
Okay. I was curious though, because I came up with the poll and I threw in that last one, just kind of just in case kind of a thing.
Does that happen a lot, where folks are basically thinking that somebody else is doing this stuff? It really depends on the organization, depends on the size of the organization, and also depends on their role. If somebody’s tuning in and they’re with the legal department, they may not have insight. And so “I thought IT was doing this,” maybe “I thought” and “I hope” might be somewhat synonymous. However, if these are system architects and asset owners, or custodians of applications or data, they may not be tasked with the security aspect or assessments. They’re just tasked with keeping the machines running, the data flowing, the database connections. And so their hope is that IT is doing this as a part of their own processes, as part of release management, part of a vulnerability management program. And so I’d be interested in follow up questions for folks who said “I thought IT was doing this,” to maybe hammer in on that a little bit and get a little context around it, or ask questions about it.
We could talk about it live. But yeah, I think this definitely meshes with kind of my pulse or what I would have thought the industry was working through. Excellent. Yeah. Throw that out there if you feel comfortable. Put that in the Q and A and we’ll bring that up a little bit later. So thank you.
Excellent. Let me get you to your presentation. So continuous assessment as a mindset approach to a better security posture. Right. So this is actually we’ve been talking with Nick here for a couple of minutes now, but it’s Nick Popovich. He’s the hacker in residence over at PlexTrac. And he’s one of the good hackers, by the way, so he’s one of the white hat guys.
Otherwise I wouldn’t have him on here. I don’t like this.
Take it away now. Right, I would probably be wearing a box mask. Right, right, exactly. Yeah. Thanks. So just a real quick wag at why I’m talking to you and why I think I should be. I really don’t think Tom asked me to.
No, I’m just kidding. My background is in penetration testing, adversary emulation, threat intelligence, and red teaming. And my role at PlexTrac really is that hacker in residence. What does that mean? That means my job is to do a lot of different things. One of the primary pieces of the puzzle is being the voice of the hacker, staying on top of emerging threats and emerging trends, and being able to provide insight, innovation, and guidance to leadership at PlexTrac and also PlexTrac partners. And so with my background, having been in the military and in enterprises and spending a lot of time performing and executing assessments, that’s where I’m bringing in that context.
And also now, in my role at PlexTrac, I get the opportunity to talk to hundreds upon hundreds of organizations that are pen test providers, consultancies, and enterprises that consume pen testing services or security assessment and testing, or do their own. And so now I even have more insight than just the network I used to have on my own. And so we’re going to talk about trends that I’m seeing and really do some compare and contrast between the idea of what I’ve named point in time assessments, which could also be called snapshot in time or traditional assessment and testing. We’ll look through what those types of tests and assessments are trying to answer, and the goals of those assessments. We’ll talk through the disadvantages and the advantages of both of them, and then really try and answer this question of: do I choose one or the other, or is it a both scenario? And so first we’ll start to make some definitions here. When I talk about point in time assessment, I’m really speaking to that traditional assessment.
And for the folks that answered the survey question and said one time a year, my thought would be, even for folks who said quarterly, that these are the assessments that are probably most familiar. That’s where there’s a project manager assigned, and there are dates associated with it, and you’re getting resources, and you’re setting up scope, and there are rules of engagement, and it’s a very structured, project based approach to assessment and testing. I think there’s a lot of value in that. But what does that mean? Typically, what that assessment and that test are going to show you is the static state of an environment in a certain period of time. In fact, at most of the consultative organizations I worked with, either our terms of service or our statements of work said things like, this represents the environment at a certain date. That’s not just indemnifying the company, so that five weeks after the test, when the company that got a pen test gets hacked, you can say, well, look, we tested your environment at this point and it looked like this.
It’s also just a reminder that this is not a living, dynamic document. These findings, these vulnerabilities, the things that you’re concerned with, are very static in that snapshot in time. Also, the scope of the assessment has been defined and is kind of set in stone. There is a degree of malleability even in a project based, point in time assessment, where the scope of the assessment is agreed upon and, during execution, there may be assets identified or attack surface identified through an iterative approach that weren’t in the initial scope. And so there’s back and forth many times, where organizations may adjust the scope slightly, but generally speaking, the spirit of the scope is pretty static. And one of the execution strategies of these is that there’s always going to be some modicum of automation: expertly executed assessment activity leveraging some form of automation, whether it be tooling or scripting, something for efficiency. But there is also an expectation, I would hope, and as there should be, of manual activity. There’s expertise behind the tooling, and then there’s expertise adding value beyond any kind of automated discovery tooling that’s used, maybe vulnerability assessment and scanning tools or scripting or those types of things. Typically those need to be executed by expert practitioners. And so there’s that expectation that there’s going to be automated assistance, but there’s going to be manual work as well. And then, from the consumer of the services’ point of view, in between those annual or quarterly assessments, or whatever the cadence is, if they’re structured to be that snapshot in time or point in time, there’s really no way to identify the state of the environment in between those tests. There’s a hope of referencing data and saying, are they fixed or not? And maybe there’s been some activity where retesting is performed, but generally speaking, it’s the great unknown, and then there’s the unknown unknown. Have there been new assets added that should undergo assessment? Are there flaws that have been disclosed in between those assessments, and are we affected by those? These are the types of questions that are very difficult to answer, and they may require an additional point in time subset of testing, or spinning people up to go do subsets of testing, and those types of things. And so the point in time assessment, that’s what it looks like.
And we’ll speak to some of the advantages and disadvantages in a bit. But I did want to mention that although we’re talking about a top down approach and the continuous assessment paradigm, I think point in time assessments have a lot of value, especially for establishing a baseline. And when you talk about establishing a baseline, it’s so that any type of continuous assessment has something established as the baseline that you can gauge the rest against, whether that’s improvement or scope or something along those lines, that allows you to say, at least this is where our start was. So when we talk about the continuous assessment paradigm, what we’re really getting at is this idea that we’re going to leverage probably more automation. However, we’re not going to be relying on it solely. We are, though, going to leverage expertly tuned, continuously learning models, continuously tuned with manual intervention. But we’re going to be leveraging a lot more automation.
So this can run not just nine to five, not just during times that human beings are operating the machinery, so to speak. Another aspect that differs from point in time is that the scope of the assessment can be more dynamic and fluid, in that either by using a set of predefined rules or some sort of agreement, the scope is allowed to be fluid while checks and balances are usually still in place. Examples of what I’m talking about: for a point in time assessment, you might get a subset of an organization’s IP space, or the applications under certain domain names. Maybe they say things like dev.company.com and qa.company.com are in scope for this assessment, and folks who also do bug bounty programs will recognize the idea of a scope, and that going outside of the scope is forbidden for the most part. Whereas in a continuous assessment paradigm, the idea is anything underneath the parent domain, anything within this IP subnet range, anywhere this autonomous system number shows up or there’s an organizational link. There’s a set of rules that means you don’t have to spot check and constantly ask if these things should be within the scope and added to assets.
And for mapping the attack surface, it’s a set of rules that’s pretty high level that allows you to dynamically add assets and items into the scope of the assessment. And there is this concept that in a continuous assessment paradigm, depending on the methodologies used and the cadence, it can have a more real time view of an organization’s environment. I say real time ish, in that there’s still going to be some semblance of a delta in between asset discovery, in between vulnerability identification and fingerprinting, in between those types of things. However, rather than waiting months and months in between annual assessments, it could be a number of hours or days. And so when questions are asked, like, are we vulnerable to this type of scenario, they can be more accurately answered, because you’re not having to say, well, when we tested six months ago we weren’t, but that flaw wasn’t there six months ago. And here’s a point that both testing paradigms can take advantage of, but I think continuous assessment especially: leveraging threat intelligence. And so something that the PlexTrac CEO is fond of, and has more recently been more vocal about, is this idea of threat informed testing.
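The dynamic scope rules described here, anything under a parent domain or inside an agreed IP range is automatically in scope, could be sketched roughly like this. The domain and network values are illustrative assumptions, not anything from a real engagement:

```python
import ipaddress

# Hypothetical scope rules: one parent domain and one agreed IP range.
# In practice these would come from the rules of engagement.
SCOPE_DOMAINS = {"company.com"}
SCOPE_NETWORKS = [ipaddress.ip_network("10.20.0.0/16")]

def in_scope(asset: str) -> bool:
    """Check a newly discovered asset against the dynamic scope rules."""
    try:
        # If the asset parses as an IP address, test it against the ranges.
        addr = ipaddress.ip_address(asset)
        return any(addr in net for net in SCOPE_NETWORKS)
    except ValueError:
        # Otherwise treat it as a hostname and check the domain suffix.
        return any(asset == d or asset.endswith("." + d) for d in SCOPE_DOMAINS)

# Discovered assets are checked against rules, not a fixed asset list.
print(in_scope("qa.company.com"))   # True
print(in_scope("10.20.33.7"))       # True
print(in_scope("other-org.net"))    # False
```

The point of a rule based scope is exactly what Nick describes: a newly spun up host that matches the rules gets picked up without anyone having to amend a static scope document.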
So threat informed pen testing, threat informed red teaming, threat informed assessment and testing. Where you’re taking the data that some of these organizations are supplying, which is very valuable, where you have real threat actor information, threat intelligence that’s curated and provided to you, however it’s containerized and packaged and delivered, whether it’s through feeds or whatever the case may be. Taking threat intelligence data, either curated for you or curated by your own team, and using that data to drive your assessment and testing really adds value, because you’re taking into account threat intelligence that may be impactful for your industry, for your specific organization, and those types of things. And so continuous assessment allows you to bake in the common IOCs, common techniques, common TTPs, common threat feed data or threat intelligence data, rather than just relying on a feed. And I’m not anti feed, but there are so many different ways you can swallow the pill of threat intelligence. You’re taking that data, expertly curated for you and your environment, hopefully, and being able to bake it in allows you to have that more threat informed view, more realistically and rapidly.
Now, that being said, point in time assessments can still take that threat informed approach. But again, you’re at the disadvantage somewhat if you’re only relying on point in time assessments of being like, well, that was the threat landscape then and that’s what we look like then. What do we look like now, what do we look like today? Great unknown.
So what is the point of these types of assessments? Realistically, all of them generally want to answer a few of these questions, and they go about it in different ways. So in one breath, we’re trying to identify attack surface. We want to inventory our assets, inventory our applications, and then identify what is the viable attack surface that I’m presenting. What are the points of presence, what are the points of ingress and egress, where am I possibly vulnerable? And it’s not even so much vulnerability at that point. It’s just saying, what do I have that could be vulnerable? What is my viable attack surface? And what that looks like could be communicable nodes in an IP address space at the network level. It could be applications with API endpoints. If you’re doing physical things, it could be buildings. What is your attack surface? Where can there be a problem? And then what this testing is trying to do is map out vulnerabilities. So if organizations have determined that these are vulnerabilities, maybe through tooling or through the CVE process or vulnerability disclosure, bug bounty programs, whatever the case may be, we try and take known vulnerabilities.
And then, if you have the capability, the expertise, and a little bit of luck and a little bit of skill, maybe finding vulnerabilities that are 0-days and conditions that haven’t been known. The idea is you take vulnerabilities and you map them to the viable attack surface, and this then allows you to start to qualify and quantify the risk. Because, for those of us who have been in this for a while, and anybody that’s thinking of it logically, if we have the idea of a flaw, the likelihood of that flaw being taken advantage of, and the impact of that flaw being taken advantage of, we can then start to quantify the risk. You can’t really start to qualify and quantify risk in an environment if you don’t know all of the attack surface and you haven’t mapped vulnerabilities to it; there’s no data to perform risk assessment and risk analysis. So this allows us, as organizations and data custodians and system owners, to quantify the risk of our mapped attack surface after we’ve identified the vulnerabilities. And then the natural next step would be: we want to mitigate the observed flaws.
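The likelihood-and-impact calculus mentioned here can be sketched very simply. The 1-to-5 scales, the findings, and the multiplication are illustrative assumptions; real risk models are usually more nuanced, but the shape is the same:

```python
# Illustrative findings: each maps a flaw to an asset with assumed
# likelihood and impact scores on a 1-5 scale.
findings = [
    {"asset": "vpn.company.com", "flaw": "outdated TLS", "likelihood": 3, "impact": 4},
    {"asset": "hr-portal",       "flaw": "stored XSS",   "likelihood": 4, "impact": 5},
]

# Quantify risk as likelihood x impact for every mapped finding.
for f in findings:
    f["risk"] = f["likelihood"] * f["impact"]

# Rank so remediation effort goes to the highest quantified risk first.
ranked = sorted(findings, key=lambda f: f["risk"], reverse=True)
for f in ranked:
    print(f["asset"], f["flaw"], f["risk"])
```

The key dependency Nick calls out is visible even in a toy like this: without the attack surface inventory (the `asset` column) and the mapped flaws, there is nothing to score.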
And so mitigate means either fix the flaws, if capable, or put in compensating controls that allow that risk level to be decreased. We put in telemetry, we put in different types of blocks, we put in different types of things that deter, prevent, and detect attackers from taking advantage of any observed vulnerabilities in our viable attack surface. But the reality is, you can’t fix what you don’t know about, and you can’t know about it if you’re not looking. And that’s what an assessment is: it’s taking a good, hard look. So generally, in a point in time or traditional assessment, the questions that we’re looking to answer are: what did the environment look like then, and what was or wasn’t fixed? Now, that sounds maybe a little trite, and distilling it all into two bullet points is like, well, there’s a lot more to it than that.
But if we really look at its core, we’re saying: this is our security posture at a point in time, let’s present this. There could be other subsets of questions, like, is our security spend making sense? Have we invested in the right people, tooling, technology stack? Were we able to see certain types of activity? I would tend to say that would be more in a purple teaming or threat emulation exercise, to really look at telemetry. But there’s no reason why security assessment and testing can’t be multifaceted. And putting your people and processes and telemetry controls under a microscope and seeing if they work as expected is absolutely a normal byproduct of this testing. Even though, I believe, and this could be the title of a completely different talk, there’s a difference between security assessment and testing for the sake of identifying flaws, and security assessment and testing for the sake of testing telemetry controls and maybe blue team testing. Threat emulation, purple teaming, and threat simulation activity are more conducive to that.
But it’s really about the idea of saying, in a point in time assessment, okay, we’ve identified these flaws and this is what we looked like at that certain time. Then the next question is, was it fixed or not? Was it fixed appropriately? You think it was fixed? Let’s see if it was fixed. And really determining if those mitigations put in place are working appropriately, if they’re useful, and if they really mitigate the problem. I think that in and of itself can be an entire assessment type: that retesting activity. Because depending on how you partner with organizations, whether it’s an internal team of testers and assessors who are working with you, or a third party that you’ve enlisted, at times the system owners are really meant to know how to keep systems operational and effective. And when it comes time, even acting on the recommendations from a security test can be difficult, where they put in fixes that are recommended generally, without deeply understanding the flaw. And they put in a fix, and it works for a very specific condition.
However, if you just slightly modify the condition, the flaw is still present. And so that’s why “was it fixed?” is an important question. Was it fixed, and was it fixed all the way? Or are you going to play endless whack-a-mole? Let me just take an example from the application space. Say a parameter in your application has a cross site scripting flaw, and that’s the only parameter called out in the test, and the report didn’t say anything other than there’s a cross site scripting flaw in the ID parameter. So you go and your developers fix that one instance of the flaw in the ID parameter. However, looking across the application, you might see that all parameters actually fail to sanitize that input, and there are dozens of parameters in the website that allow you to execute cross site scripting payloads. Perhaps the tester only saw the one, or maybe it was an automated test result and it only flagged requests with that one parameter.
The point is, you fixed the cross site scripting flaw in the ID parameter. However, there are a dozen parameters that are also flawed, and so you actually need to go up and fix it at a higher level. So that’s just an example of playing security whack-a-mole, and of making sure things are actually fixed, and how paramount that is. And so when we transition to continuous testing and the questions that are trying to be answered in this paradigm, we see that it’s: what does my environment look like “now”? And I put now in quotes just because I’m a bit of a stickler for literality. Is literality a word? I don’t know. Somebody tell me.
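The whack-a-mole example above can be illustrated in a few lines. This is a hedged sketch, not a complete XSS defense (real applications need context-aware output encoding, not just blanket escaping), and the parameter names are made up for illustration:

```python
import html

def fixed_one_param(params: dict) -> dict:
    """The narrow fix: only the parameter the report named gets escaped."""
    out = dict(params)
    if "id" in out:
        out["id"] = html.escape(out["id"])
    return out

def fixed_at_higher_level(params: dict) -> dict:
    """The broader fix: every parameter is sanitized before use."""
    return {k: html.escape(v) for k, v in params.items()}

payload = "<script>alert(1)</script>"
req = {"id": payload, "search": payload}

# The narrow fix leaves the sibling parameter exploitable.
print(fixed_one_param(req)["search"])        # still contains the raw payload
# The higher-level fix covers parameters the report never mentioned.
print(fixed_at_higher_level(req)["search"])  # escaped everywhere
```

This is why “was it fixed all the way?” matters: a fix scoped to the one finding in the report can leave the same root cause live everywhere else.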
Maybe literality isn’t a word, but I mean being literal. Maybe it’s the last scan for these assets, and maybe it’s not just scanning, but the last scan and manual review of findings that bubble up from a certain rule and require manual review from your continuous assessment provider, whether that’s an internal group, team, tool, or company.
Maybe it’s not exactly real time, because the scans were a couple of hours ago, a couple of days ago, or the manual review was a couple of days ago. But it’s very close to real time. So you could say, within the last day or two, or the last couple of hours, what does my environment look like now? And it’s so nice to know that the dynamic nature and the malleability of the scope means that if endpoints are spun up, maybe subverting a process, where you have some shadow IT, where somebody decided to put up and run some sort of peer to peer sharing server in your environment, then because it’s in your IP space, or because it’s using your domain name, or whatever the rule is that you establish for your dynamic scope in this continuous assessment, it’s going to get picked up. And so now we can see the difference between what my environment looked like then and what it looks like now. What does my scope look like, what do my assets look like? You can differentiate, see the delta: what’s new, what’s old, what’s got flaws in it, what’s got new flaws in it, what needs to be fixed, what’s recently identified.
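The “see the delta” idea here is, at its core, a set comparison between assessment cycles. A minimal sketch, with made-up asset and finding identifiers:

```python
# Findings from the last two assessment cycles, keyed as asset:port/flaw.
# All identifiers are illustrative.
previous = {"web01:443/tls-weak", "web01:22/ssh-old", "db01:5432/default-creds"}
current  = {"web01:443/tls-weak", "db01:5432/default-creds", "p2p-box:6881/open"}

new_findings      = current - previous   # recently identified, needs triage
resolved_findings = previous - current   # fixed, or no longer observed
persisting        = current & previous   # still open since the last cycle

print("new:", new_findings)              # the shadow IT box just showed up
print("resolved:", resolved_findings)
print("persisting:", persisting)
```

With an annual cadence, that delta is computed once a year; with a continuous cadence, it can be computed every few hours or days, which is exactly the difference in how quickly “what changed?” can be answered.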
And this has to do not only with assets and attack surface, but vulnerabilities and flaws. And so continuous assessment and testing, I think, can really help with the age old question about CISOs, CSOs, and CIOs: how do they sleep at night? I’m curious about that too. Having the responsibility for the security posture and fidelity of an environment be on my shoulders and be my job.
I don’t know how I’d be able to sleep at night, but I know if it were me, I would want there to have been a solid baseline established. And I’d want to know that there’s continuous assessment and testing, because in between pen tests, in between red teams, in between vulnerability assessments, not being able to answer those questions of what needs to be fixed, what does my environment look like, have they been fixed, are there new flaws, are there new assets that I don’t know about? I wouldn’t be able to sleep. And I think a continuous assessment approach aids that.
And so we’ll talk a little bit about some of the limitations and challenges of both of those. And then you’ve heard me allude to it now, probably. And I think that oh, Tom, thank you. literality is a real word. Appreciate the support on that. That’s good to know. I’m going to add that to my dictionary.
I think some folks listening may be able to infer some of these challenges based on some of the things I’ve mentioned in inflection and tone and those types of things. But let’s talk about some of the challenges and point in time assessments. The biggest and the first is it’s not dynamic, it’s a snapshot in time. It was what it looked like at one point. There can be a heavier, higher cost at times because you’re engaging a group of folks and there’s a lot of prep time. There’s a lot of getting people together on calls and planning and having folks assigned to the gig. And there’s just a lot of administrative overhead.
Not just costs like dollars and cents, but cost in time and preparation and spin up. And then there’s literal cost: engaging with traditional snapshot in time assessments in the project based paradigm, there’s a dollar cost to it, and it can be higher. And the scope is very static and limited, and sometimes that’s driven by the cost. I think there could be a number of folks on here who would agree that they want to have more testing done and more scope; it’s just that they have a budget, and their budget dictates that the scope be limited, because that’s what they can afford. And then it’s done in a timeboxed fashion.
I think when I was a practice director and had pen testers working for me, or when I was a pen tester myself, the idea is that any pen test could be eternal. Given enough time, I will hammer and hit on something and eventually maybe find something. And if you don’t bookend a test and you don’t put limits on a pen tester, you come in and say, so, you think you’re done with that app? And they’d say, never am I done with this app. Never am I done with this test. You could continue going until the end of time, and so it’s timeboxed. And that also is driven by cost, because a week of dedicated assessment and testing costs X, whereas a year of it is orders of magnitude more. And so those limitations and challenges are what are present in that point in time assessment methodology.
So some of the challenges with continuous assessment and testing are unique and specific to it, because some of the similar challenges you may end up finding there too. Maybe there’s a cost scenario, maybe the scoping, et cetera. But I think something that would be very unique is dealing with the data. Just like with any kind of utility or suite or automation or tooling or application, if you purchase it and you do nothing with it, you’re not adding value. And so if you have assessment and testing going 24 by 7, constantly showing you new attack surface and vulnerabilities and exposure, but that data is just being ignored, or flopping into a report somewhere and not being actioned, or you just don’t know what to do and you’re overwhelmed with this data, that’s a problem. The depth of assessment, at times, from a continuous paradigm, because it leverages automation, can also be a concern.
Folks can disagree with this. And I’m not saying this universally, that in every instance this is the case; this is observational, from my opinion. When you leverage more automation, you’re going to lose some depth. I think that’s just common sense.
I love the concept of AI and machine learning, but we’re still in the early days, in that we’re still learning and building. And the reality is, from my perspective, the ability to adapt, improvise, and do some of that voodoo that they do, from a real human operator behind the controls, is still aided by expert tooling, maybe even AI enabled tooling, that they also understand how to leverage and use expertly. The ability of manual intervention to go deeper and improvise and adapt and do some neat stuff: you’re going to lose some of that with automation. But what you gain with automation is the ability to do maybe a lot more with a lot less. So that depth of assessment is a concern. And then there’s the idea of specialization.
There are some types of systems, applications and networks that are more conducive to automated assessment and testing. Therefore, continuous assessment and testing is a little bit easier on these types of organizations or systems or applications or networks. However, for highly exotic and specialized things, it may not be conducive to do automated assessment. And I keep saying automated. I don’t want you to hear me wrong and I apologize. I don’t want you to think that I think continuous and automated are synonymous. So let me step back a couple of sentences.
There are some systems that are not conducive to continuous assessment. Now, I'll put a star on that and say the reason I believe that is because they are not very conducive to automated assessment. Either they're likely to experience adverse effects due to their technology stack, or their technology stack is so varied and nuanced and requires so much insight and understanding that automation can't really be expected to take advantage of it and identify flaws. And so that specialized nature means continuous assessment isn't going to be the panacea for all environments, and we have to keep that in mind.
It'd be interesting, too, before I move on. I'm not even paying attention to any of the question stuff, so maybe I'll lean on Tom for this. For some of the listeners, viewers, people on the Internet: what, in your mind, are some other limitations or challenges from either of these assessment and testing perspectives? If you want to put some of those into the questions, we can do that. And then, Tom, I can pause for a second if there are any questions before I move on to my next slide, and we can talk through them. I see a red dot on the question thing, so that means maybe there's a question or two. There is, there is. Actually, one of the challenges that was indicated is the high resource requirements associated with continuous assessments.
Can you kind of talk about that? Like the storage? Yeah, that's a great point. I'd be interested, from the person that asked that question, if they want to say, hey, that was me, and this is what I meant specifically. Because the way I take that is: what do you mean by resources? The resource requirements could be having to have a large set of systems that are going to store the data, a requirement to have the tooling and systems that can perform automated assessment. But then there are also resources to set it up, resources to administer it, resources to run it. You're going to need database space, you're going to need disk space, you need app space, and then you're going to need resources that are going to actually deal with the data. So that's my limitation and challenge of dealing with the data. It's a perfect point: if you don't have the resources to do it, if you're a one-person shop or a five-person shop at a giant global enterprise, and your continuous assessment activity is going to take one or two FTEs, perhaps, and you just don't have the budget to deal with the data appropriately, then what?
Does continuous assessment make sense then? That's absolutely a great point; the resources necessary for a continuous assessment paradigm are a real consideration. Let's see another one here. Chris has a good point, actually: successfully using data visualization to make the continuous assessment data digestible has been a challenge for them. What do you think about this? Fantastic. Yeah, that's fantastic. I don't remember the name of the product, and I probably shouldn't name it anyway, but I was experimenting many moons ago with some data visualization tools for SOC analysts and the like.
The attempt was to make alerts and network data and packets appear visually so you could see a connection. It was actually kind of cool: you'd have nodes, and there were lines connecting them. It was very Tron-looking, very visually appealing. And you would see network packets going. When I was testing it in a lab with two or three things, I was like, this is amazing. And then it literally looked like the Battle of Yavin when I ran an Nmap scan across the top 5,000 ports.
Each port was a different color, and each packet from my node was a different color, and it was like, pew, pew, pew pew pew. And it was great. So then I was like, this is fantastic. How cool for a SOC to be able to look up and see the pew, pew, pew to aid them, not just rely on alerts. But then, what was interesting: when I deployed it into an environment that had several hundred endpoints, not even thousands, just several hundred, all of that data, even after trying to tune it for weeks, all of the pew, pew, pew was completely lost in the noise. There was no way for anything to bubble up at that time with that solution. Now, folks who use that solution expertly probably know how to tune it better.
But yeah, trying to visualize the data and make it digestible and useful, and being able to triage, takes something, and again it goes back to resources. You need to be thoughtful with it, and it can be overwhelming, so maybe you have trusted experts come in and help you deal with that data. And I do wonder, too, though I'm not super read into all of the different visualization tools out there, and I think they're getting cooler and cooler. That's almost a whole other project in and of itself: you have your continuous assessment strategy, and now you have all this great data, and it's like, well, now I've got to go through an RFP and find a visualization platform or product that will help me deal with that data. Absolutely a great point, and I know anecdotally, from my experience dealing with some visualization tools, that they work.
But I think my assumption would be that newer tools now are going to start leveraging more of the machine learning capabilities and rules engines that can help you establish baselines and bubble things up, similar to traditional alerts in a SOC or something along those lines. Because you can get lost in the noise visually. You can get lost in the visual noise. How would you say that for something visual? Lost in the bits and bytes. Exactly.
Next one. I've got one from Jim; it's more of a point: the point-in-time assessment is outdated by the time the report is delivered. So I think that's valid too. That's a great point, right? That's really what I've been driving at. And that's actually a great lead-in: we'll talk about some advantages in a moment, and after that I'll go into my wrap-up slide and we'll talk through maybe some more questions. On point-in-time assessment: there are folks who are leveraging it, and I'm going to be real, and this may sound harsh, and I don't mean it to be, but I think it is an antiquated view...
...if you're solely relying on that testing methodology and that testing paradigm and it's your only strategy. I do think there is value in it, though, so I don't want you to mishear me. A snapshot, point-in-time assessment has its use, and after this slide we'll talk about it a little more. But I do tend to agree that if you rely solely on point in time, you're probably doing yourself and your organization a disservice. However, there are always caveats to that, right? Like, better something than nothing.
I would rather some shop do an annual pen test or an annual vulnerability assessment versus head in the sand, like, gee gosh golly, I hope nothing happens. So you do what you can with what you've got, and hopefully we can talk through some strategies that will allow you, especially if you're choosing partners or staffing your internal teams, to start looking for folks who understand and leverage different paradigms in a positive way. So I'm going to talk about some of the advantages real quick, and then we'll get to the question of which one you should choose, A or B, continuous or point in time. I've been doom and gloom, talking about how things can go wrong and lack quality in assessment and testing; let's talk about some of the advantages. A point-in-time assessment can be specialized. You have the ability to bring in consultants, or folks with a specialization in esoteric skills or exotic technology stacks, whose specialization allows that depth of assessment and testing in a project-based way.
They get to take the time, because it's maybe a very focused assessment; it can be very specialized in nature. And it can be useful as a temperature check, like we talked about with establishing your baseline. Then perhaps, because of budgetary requirements, you just say, hey, we did this six months ago, let's check it again. And you learn: hey, we've gotten better, we've gotten worse, or anything in between. So it's decent for a temperature check. It can be thorough at times, again depending on the skill sets of your internal staff or the partners you partner with; there can be a more comprehensive or thorough feeling to that type of assessment. And that goes along with schedule: you can expect when it's happening.
You can allocate the appropriate resources to it. Now, there is a slight detriment: if it's scheduled and there's some expectation, maybe there's a false sense of folks acting the way they should and patching things all before the test. You know, what if they're patching because they're worried about getting in trouble because of a pen test? Hey, at least they patched, right? But the idea is that the assessment, in the traditional sense, would be scheduled, maybe more deep and thorough, and would allow you to prepare for it and allocate the resources. And a lot of times the boards or the C-suites or the leadership channels have this expectation. So maybe a new technology stack and a big ask for a big pile of money for a new paradigm falls by the wayside, but they're somewhat used to the idea that we've got to allocate budget for some sort of annual or biannual test. So something's better than nothing. Continuous assessment, as I've kind of said, is near real time.
You've got the ability to take a temperature check, but you take it every day, not just every six months. It can, at times, be cost effective, in that there are a lot of organizations trying hard to afford that pen test as a service. And there are tooling and application suites that can be purchased by enterprises, or leveraged by providers or consultancies, where the cost is maybe a little bit easier to swallow for a service or an application that does this, versus project-based pen testing or project-based assessment and testing from a company. It can be collaborative as well. Now, that's somewhat of a misnomer, because realistically collaboration depends on how you execute.
A point-in-time assessment could be collaborative if the assessors are collaborating: a project-based pen test where the pen tester is having daily check-ins, or asking you to hop into calls and working with you, or maybe they're on site with you in the conference room chatting through things. There can be a collaborative nature to it. But with continuous assessment, since it's born from automation, and since there's a lot of establishing rules and doing things digitally, it naturally breeds collaboration, especially if you're using a system designed for continuous assessment. You can take disparate teams and get them into the same system to interact with the data, or something along those lines. So while you could argue, and I wouldn't argue with you, I'd agree with you, that both assessment types are collaborative in nature, I think continuous assessment is kind of born from collaboration.
And then there's the idea of an assessment model that evolves along with you, evolves with your technology stack, right? If we're talking about SaaS-first, if we're talking about a whole bunch of cloud nonsense, organizations that don't have infrastructure, they just have apps and containers and a whole bunch of different situations, then maybe a security assessment and testing organization you partner with is very structured towards something like an old-school project-based assessment. The continuous assessment model can be tweaked and honed with your tech stack as you evolve. Whereas otherwise you might have to go find different providers, and sometimes it might be trial and error, or hire people, if it's an internal team, and you don't know what you don't know. Are they assessing your app stack appropriately? Are they assessing your cloud infrastructure, or the infrastructure as a service or the platforms as a service that you're using? Are they effective at that? And because continuous assessment is an evolution in and of itself, it naturally is going to evolve with your CI/CD pipeline, with the idea of assessing apps and maybe APIs differently than traditionally. So now the question is: which one should you perform, point-in-time assessments, or move to a continuous model? And my answer is yes.
And that's tongue in cheek. But in my personal opinion, I think it's a one-two punch. Your jab is perhaps the point-in-time assessment, and your cross is the continuous assessment. Because if an organization leverages solely continuous assessment, they may at times miss things. They may miss something that an expert assessor, engaged for a point-in-time assessment of a very specific set of assets, would not miss. Since continuous assessment leverages so much automation, there's just the inability, like I mentioned before, to improvise, adapt, and identify flaws that a tool is simply not going to find. And so relying solely on continuous means there may be gaps, and there's not as much depth.
So you may be missing some attack surface. However, I think the point-in-time assessments can be your baseline, which the continuous assessment builds from. You do your point-in-time assessment, and then you continue to build from it on a continuous basis that helps you verify flaws are being fixed and check telemetry: can you see these types of things? So the answer, to me, is why not both? Now, that being said, some organizations will say, oh, that's fine and good, Nick, for the imaginary company in your head that has unlimited budget and time for everything. Well, of course you're going to have to scale things to what you can deal with. But in a perfect world, you're going to create a strategy where you're leveraging both. And from a provider standpoint, for those consultancies and providers adding value in a services industry, I think it's worth evolving your practices into ones that afford both.
You have the ability to have a continuous assessment approach as well as snapshot, point-in-time assessments that can go deep. Having that McDonald's menu, as it were, or, I shouldn't say that, having that menu of services that not only includes what you've been doing for the last 20 years, but shows an evolution into a continuous model and paradigm. And then similarly, for organizations who aren't providing security services but are consuming them, enterprises, so to speak, organizations with computers, let's just say that: look either internally or to the trusted advisors and providers you have, whether it's inside staff dictating these decisions or third parties doing it for you, and find ways to ensure you're hitting it with the one-two punch of snapshot-in-time assessments augmented with a continuous assessment paradigm. Because each is going to find certain things and expose your environment and your attack surface in ways that the other won't.
That's where I land, and I want to take the last few minutes to have a discussion and wrap up with comments or questions that we've got, if we have any others. Definitely, there's a bunch in there. I've been meaning to get back to one of our guys from the last break, but Peter wanted to know: when a point-in-time assessment is done and there's no remediation performed, is this where a continuous assessment would be most appropriate? That's a great question. I think some of this comes down to corporate culture, communication, and priority.
If a snapshot or a point-in-time assessment is done and there's no remediation, you really have to go back and wonder, organizationally, why was the assessment done? Was it done because it's required by law? Was it done because they were told to and they don't really care about it? Was it done with the genuine intent to have things fixed? I would have to do some analysis and ask, why wasn't it fixed? Was it fixed and just not tested again to confirm it was fixed? Was it not fixed because it's too hard to fix? What's that answer? Because if an organization isn't capable of fixing a point-in-time flaw, I'm wondering whether it's their maturity level, or it may not even be organizational maturity, it may be priority: they're not putting resources towards it. If they're not going to fix one flaw or a dozen flaws or a hundred flaws from a snapshot-in-time assessment, would they fix the flaws being shown to them every day? I think you could argue yes, because then it's top of mind. Instead of having this one assessment that you can bury, if every single day the dashboard shows these seven flaws identified by our system, and they're not going away, perhaps that could be a way to leverage the continuous nature: it's still here, it didn't go away. The next day: it's still here, it didn't go away.
Because you can always put that fat pen test report into a drawer, file it away, and move on. But if you have a dashboard that's constantly saying these flaws are there, perhaps that could be a mechanism to get them fixed. But I really think you need to organizationally come in and find out: if we're paying good money to get ourselves tested, why aren't we actioning this? That's sort of a follow-up, I guess.
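To make Nick's "it's still here, it didn't go away" idea concrete, here is a minimal, purely illustrative sketch; the findings structure and fields are invented for the example, not from any real product:

```python
from datetime import date

def daily_open_findings(findings, today):
    """Return one reminder line per finding still open as of `today`."""
    lines = []
    for finding in sorted(findings, key=lambda item: item["first_seen"]):
        if finding.get("resolved"):
            continue  # fixed findings drop off the dashboard
        age = (today - finding["first_seen"]).days
        lines.append(
            f"[{finding['severity'].upper()}] {finding['title']}: still open, day {age}"
        )
    return lines

# Hypothetical data: one lingering flaw, one that was actually remediated.
findings = [
    {"title": "SMBv1 enabled on file server", "severity": "high",
     "first_seen": date(2023, 1, 10), "resolved": False},
    {"title": "Default creds on printer", "severity": "medium",
     "first_seen": date(2023, 2, 1), "resolved": True},
]

for line in daily_open_findings(findings, date(2023, 2, 15)):
    print(line)
```

Run daily, the unresolved finding keeps reappearing with a growing age counter, which is exactly the "it didn't go away" pressure a filed-away report never applies.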
Do you have stats or anything that shows a general cost difference between a point-in-time approach versus doing continuous? Any kind of numbers? That would have been a good idea. I don't right now, but I have a feeling they exist out there, so let me ask ChatGPT real quick. No, I'm just kidding.
Seriously, though, I don't; all I have is anecdotal. I think that's such a beautiful, terrible, wonderful part of where we are in this time and space in our industry. We have so many options in cybersecurity services and providers; you're going to see just a wide swath. What's neat about that is you have the capability to find what fits you and your appetite and your budget. What's tough is analysis paralysis and information overload. How do you know who to pick? How do you know if they're any good? How do you know their solutions are up to snuff? That's a tough nut to crack.
So I don't have any analysis and cost breakdown and spread. I can just say, anecdotally, from what I've seen, the pen-testing-as-a-service and continuous assessment models out there have really tried to become cost effective. Especially if we think of some of the MSSP space, or providers who are trying to either leverage other providers or bolt on services that can be stomached in a way that adds value beyond just running a vulnerability scan tool, but is cost effective. Unfortunately, I don't have numbers. So I think that really ends up being a scenario where you have to find yourself a trusted advisor. When I say trusted advisor, I mean either an individual or a company or a group or a set of peers or colleagues: folks who have their fingers on the pulse of what's going on and can make suggestions.
There are also those, what do they call them, analyst companies that provide that kind of information. But I tend to be a little bit more raw; I trust but verify. So there's my circle of trusted colleagues, my black book of folks who I can ask those kinds of questions: all right, this is the type of thing I'm trying to do; what organizations excel at that, in your experience? And I actually have a really unique vantage point. Being at PlexTrac, I get to see, I'm not kidding, hundreds and hundreds of organizations that provide assessment and testing, consume it, do it themselves, or provide it as a service.
That's my suggestion: be involved and ask those types of questions. And then, unfortunately, sometimes it's going to have to be luck of the draw. That's tongue in cheek.
I think it's a fair answer, though. I mean, I like it. Okay, I'm going to actually advance our slide, because you've got some really cool resources on there, while we're doing some more Q&A. Let's see. Jim had a response to us from earlier. He says responding as fast as assessment information comes in is challenging. Plus, the prioritization of findings gets complex when differing controls, mitigations, and risk acceptance across different systems and environments mean a finding on system A is serious and doesn't matter that much on system B.
As volume from continuous gets high, this becomes harder to address. Any comments? You're not wrong. First comment is: you're not wrong. I think that may have been related to the earlier question about resources and continuous assessment and testing. You're absolutely right: information overload and deluge, and it does become difficult. I think that's why you want systems and platforms and paradigms that give you the capability to prioritize and triage, and to add your own models that can detail risk and help you, especially by adding some sort of intelligence.
And I don't mean threat intelligence; I mean adding intelligence into prioritization. Not to make a shameless plug, but that's the territory PlexTrac is in: helping organizations that have so much data answer, what am I supposed to prioritize? How do I prioritize it? How do I get it fixed? How do I check that it's fixed? It is challenging, and that's actually one of the reasons I've aligned myself with PlexTrac, because that's their mission. And I know there are hopefully many thousands of like-minded folks trying to solve that. I think it's solved in a number of ways: finding a mix of personnel, applications, and systems that allow you to start doing it. It's not a 'stamp it and call it good' scenario; it's not an easy fix. I mean, I think you just described people's careers, right? That could be staff, that could be a team.
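Jim's point, that the same finding can be serious on system A and negligible on system B, amounts to context-aware scoring. A rough, purely illustrative sketch of that idea; the multipliers and asset fields are invented for the example and not any real product's risk model:

```python
def contextual_score(base_score, asset):
    """Adjust a CVSS-style base score for the asset's business context."""
    if asset.get("risk_accepted"):
        return 0.0  # formally accepted risk: deprioritize entirely
    score = base_score
    score *= asset.get("criticality", 1.0)   # e.g. 1.5 for crown-jewel systems
    if asset.get("internet_facing"):
        score *= 1.3                          # exposed attack surface weighs more
    if asset.get("compensating_control"):
        score *= 0.5                          # mitigations knock the risk down
    return round(min(score, 10.0), 1)         # cap at a CVSS-like ceiling

# Same 7.5 finding, two very different systems.
system_a = {"criticality": 1.5, "internet_facing": True}
system_b = {"criticality": 0.7, "compensating_control": True}
print(contextual_score(7.5, system_a))  # serious on system A
print(contextual_score(7.5, system_b))  # much lower on system B
```

The point of the sketch is only that prioritization has to be a function of both the finding and the asset; any real model would be far richer, but even this shape keeps a flood of continuous findings from being triaged on raw severity alone.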
The vulnerability management program, the risk management team: dealing with the data could be people's nine-to-five jobs. And again, we get to this point where folks are eye-rolling, like, I'm a team of three, I'm a team of seven, I'm a team of 15, and we just don't have enough people. That dovetails into a whole other scenario. But as we continue: earlier I said it's a one-two punch of both assessment paradigms.
I think it's also a one-two punch of the right people married up with the right tooling. So you have your apps that say they're going to do great things, you have your blinky boxes, you have your solutions, and that's all fine and good. But then you invest in the people that are going to run those, tune those, and expertly use those, and you don't pile on with what I like to call 'other duties as assigned'-itis: you hire somebody as the firewall engineer, and then they become the vulnerability management program person, then the patch management person, and then the person dealing with continuous assessment. That's why they quit. So I think the idea is understanding that, look, if we're going to continue to do business in 2023 and beyond, we have to ensure we're leveraging the appropriate platforms and, at the same time, placing the appropriate people who can leverage those platforms to get value from them. Because that's really easy to say, not easy to do, right?
It seems to me like, with the automated piece, does that help with compliance and regulatory challenges? I think anytime you leverage automation you just have to tread lightly. You can't trust it too much; you trust but verify. Automation is, like anything, a tool that can be wielded expertly by those who know how to wield it. And certainly there are compliance initiatives and verticals out there. People have differing opinions on the necessity and utility of compliance, but one thing I'll mention is that compliance at least makes it a discussion point.
It puts it on the board's radar; it puts it in front of the CIOs, the C-levels. At least you have some requirements and a minimum standard that you have to meet. And with those conversations, perhaps leveraging automation can meet the spirit of some of those controls. But then the onus is on the organizations who are the custodians of the data, or the recipients of assessment and testing, to do security for the sake of security, not security for the sake of compliance. Still, compliance is that necessary oversight and beginning conversation. It's kind of like having a standard: if we didn't have a standard, folks would be doing whatever they want with your credit cards. They still do, but sometimes they get in a little bit of trouble for it.
So yeah, a big bullet point I would leave is: anytime we're leveraging automation, we have to do so thoughtfully and carefully, and you trust but verify. You trust that you implemented the automation appropriately, and then you verify it, whatever that means. That might mean some snapshots, that might mean checking it consistently, that might mean benchmarking. I am not anti-automation, but I'm also not the type to say you can only get value from manually crafting packets and mailing them through the postal system.
Fair enough. I've got a couple more, and then we're going to be at the end of our time; anything we missed, I'm going to go ahead and send off to Nick, and he'll get back to you as soon as he can. Let's see here. Nice comment from Kristen: fantastic webinar. They deal with internal vulnerability management programs, they're taking plenty of notes, and they thought this was a great webcast.
So good job, Nick. Not really a question there, but I just want to give you some kudos. I will take kudos all day, seriously. So thank you, Kristen, for that. Joe wants to know: what's your strategy to start with minimum cost and labor and then scale up? What would you recommend there? Yeah, I think get started by using some open source solutions to begin discovering assets, and look at different open source solutions so that you can just start learning and discovering. Then you start with vulnerability scanning, with a low-cost tooling approach. Then, as you start to identify vulnerabilities, and as your process and capabilities mature, you can start enlisting more help or hiring more staff.
But start with something. And keep in mind, you get what you pay for. With an open source strategy, you're going to have fantastic tooling that may take a lot of effort to understand, or maybe not, but you're going to put in the effort. Then you move into lower-cost vulnerability scanning and identification solutions and those types of things, and you build and continue to build. Crawl, walk, run. That's what it looks like to me. Okay, one more, and then I'm going to shift gears and kind of close this all out. Let's see here.
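As an illustration of that "crawl" stage, the very first step, discovering what's listening on hosts you're authorized to test, can be done with nothing but a standard library. This is a purely illustrative sketch, not a substitute for real tooling like Nmap or OpenVAS; the host and port list are placeholders:

```python
import socket

def sweep(host, ports, timeout=0.5):
    """Plain TCP connect sweep: return the ports that accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Only scan hosts you own or are explicitly authorized to assess.
    print(sweep("127.0.0.1", [22, 80, 443, 3389]))
```

Once something this simple has taught you what your asset inventory actually looks like, graduating to a maintained scanner, and eventually to continuous tooling, is the walk and run.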
What do I got in here? Here we go. Thomas, actually, pretty good question here. We have resources up right now, but he wanted to know if there's any other info or resources to help identify what areas, the low-hanging fruit, would be most conducive to continuous versus point-in-time assessments. I would also say, Thomas, there are other resources in the resource tab in that little menu bar down at the bottom; check those out too. But Nick, what have you got? Man, that's such a big question. I hate to do a shameless plug, but I hope Thomas and others hit me up on LinkedIn.
Find me on LinkedIn. Let’s have a discussion. I think there’s a lot of resources. I think getting involved with different communities, information sharing exchanges and getting involved with other organizations or other people of like minds and having those discussions because I could give you some perspective, but it’s really the value of community. Being a part of the security community and then your local security community, or local being even not geographic, but industry centric, that is super valuable because I think a lot of folks are going to be able to grind smooth what’s worked for them, and you can glean a lot of value from that. So your different organizations and communities that allow you to have conversations like that I think would be hugely valuable. Excellent.
And I'm going to do a shameless plug too. Go to Secure World and make some new friends, and then you can collaborate and actually have a trusted network of people that you want to work with. That's what we do; that's what I love about my job. I'm kind of connecting folks together to share this information to help stop these bad actors. Right. So, Nick, close us out: if nothing else, man, what should people have gotten from our talk today? Trust but verify.
Take a look at what you're doing, see if it's right. Verify that it works the way you expect it to. I like it. All right, I'm going to shift gears. Nick, thank you so much. I'm looking forward to our next one together, whenever that might be.