
The Future of Manual Security Testing: Will video kill the radio star?

In a world of rapidly improving tools to automate, augment, and perform security testing, it’s not a huge leap to envision a future where manual pentesting done by humans is obsolete. But are we there yet? Will we ever be? Professional hacker and owner of Rotas Security, Nick Popovich, and PlexTrac’s Jordan Treasure will discuss the unique value manual testing by skilled humans can provide and how advances in automation and AI can make humans better without replacing them. Tune in to hear some physical security testing and cyber ops battle stories and their predictions on the future of manual testing in cybersecurity.

Category: Informational Series


Transcript

Hey, everybody. Welcome to PlexTrac’s Friends Friday. We’re extremely excited to be joined by our guest, Nick Popovich. I’m Jordan Treasure. I’m the senior manager for customer success here at PlexTrac. My background is in security. I spent a good bit of time with the Department of Defense doing threat hunting and risk assessments. I’m a former GCFA, all of that. And we’re here today to talk about automation in the industry, specifically how automation is growing and improving. I promise this isn’t specifically AI related. Automation has been around for a long time, as everybody knows.

So joining us, like I said, is our old friend Nick Popovich, who is extremely qualified and just an all-around great guy. Nick, would love for you to introduce yourself, please.

Yeah, thanks, JT. Nick Popovich. I come from the Department of Defense and the military as well. I moved into the private sector and have been focused on adversary emulation and offensive security for a number of years. I started Rotas Security and have been hacking and cracking ever since.

And I think it’s cool that Jordan and I shop at the same store and got the same shirt. How did that happen? That’s crazy.

It’s almost like our producer Anna Lee sent us some shirts.

That’s great.

Well, thank you so much for joining us, Nick. And, you know, this is a new thing for PlexTrac. Friends Friday, really meant to be a conversation. It’s not necessarily a sales pitch or anything like that, really. It’s all about thought leadership and just kind of getting everybody on the same page, if we can. So with that, like I said, we’re here to talk about automation today.

And, you know, the working title of this right now is, Did video kill the radio star? If you are young enough, that reference won’t make any sense to you, but Google it, it’s great. But the question is, with automation coming in, where do, you know, manual testers and just the concept of manual testing itself fit in? And we’re kind of trying to pull that thread today. And our friend Nick here has a lot of experience doing manual testing, both the hands-on-keyboard kind as well as the physical kind, like literally manual. So, Nick, I’m really curious to get your kind of surface-level thoughts on automation, manual testing, and where everything meshes today from your point of view in the industry.

Yeah, you know, I think the idea is really that tools are only as good as the folks that wield them and the expertise behind them. And so there’s a lot of talk about a lot of different automation tools. There always has been. And whenever there’s a new paradigm or a new introduction of some sort of tooling or system, there are naysayers and there are folks who are early adopters. And where do you sit? I sit right in the middle. I certainly say that automation is not the devil. It helps when you’re trying to scale in an organization, and if it’s leveraged by folks with the requisite expertise to understand it, to tune it, to leverage it to its fullest potential, automation can be a huge asset. Where I think there’s trouble, especially maybe in unregulated industries, is where folks can’t go beyond the automation, where they’re beholden to only leveraging systems and tooling without being able to understand the results in an expert fashion or be able to innovate, adjust, and manually take advantage of technology stacks.

If you’ve built an entire practice or business model around automation with no human interaction, you’re going to be missing things. But if you’re only doing manual human interaction with tech stacks, you’re not going to be able to scale, and you’re also going to miss things. Think about the difference in assessing an organization with 5,000 endpoints: if somebody is going to do that entirely manually, you’re going to get different results than somebody who is going to leverage automation and then augment and enhance that automation with expert manual activity.

Yeah, yeah. And I think you made a great point about that: the tools are great, but the carpenter is only as good as his tools, and the tools are no good without a carpenter, kind of thing. Right? So that idea of how we get the right mix of the skill set and, I guess, the contextual ability for people to understand what the tools are performing and what they’re doing, that’s the big thing that I see. A lot of people just want to push a button and have their job done for them. But the thing that sets us aside from AI taking all our jobs tomorrow is that human piece of, hey, I’ve got some context here that maybe automation doesn’t have today, and I can document that context and make sure it’s fed into the automation piece, around how it’s configured or how the assets that are being scanned are configured, things like that. That context piece matters; but then again, how do you scale if you don’t have automation?

But even if you do have automation, when we talked earlier, the example that I used was the dam. If we created some really cool tool to go out there and just make sure that the dam is always safe and there are no cracks in it or anything like that, you’d probably still want an engineer to go out there and inspect the dam from time to time, make sure the tools are working as expected and there isn’t some new environmental factor that, you know, you want to be aware of that the automation wasn’t originally configured for. And that’s just that human piece, you know, two wet eyeballs on target, just go see what’s going on out there in the world. So that’s kind of where I sit, but I’m curious to know if you have any good examples off the top of your head where automation was a component, and not necessarily even a failure, but where the manual component of the testing that was occurring kind of saved the day or maybe even took that great automation to the next level.

Yeah, you know, and it’s kind of tangential but still relevant. I’ve had some opportunity to train a lot of folks, and it’s really interesting. You can really tell someone’s mettle by taking the ability to leverage automation off the table, maybe when doing some sort of assessment on skill sets or something like that. Or perhaps you get an engagement where there’s a requirement to be low and slow or avoid detection. In the realm of pentesting and red teaming specifically, which is my world, there’s a big focus on trying to avoid detection. The easiest way to get detected is to leverage automation with the gates open wide, because, you know, one-to-many port mapping is going to occur. And so it’s just interesting to watch people who maybe are even seasoned in this. When you take away the tools that they’re used to, you really then see where their skill sets are, because if they’ve leaned too heavily on the tools, they can’t operate without them. The best testers can also leverage those tools expertly.

So I think one of the best examples I have, and folks that are listening right now would probably be able to attest, is web application testing. In web application testing, there are a lot of phenomenal tools, and we leverage them. When we do app testing, we’re going to leverage tools that spider the environment, that look through pages, that make some of the requests and really give you a sitemap, and that start looking for maybe the ticky-tack stuff or just some of the low-hanging fruit. They identify the sitemap, but the ability to abuse business logic and application flow and ensure your requests are still authenticated while you’re assessing the app really requires a skilled human.

And so just last year we had an engagement where this organization brought us in, and they had been tested four different times, and it had been attested that there was nothing wrong with this app. And they were battling a lot of fraud, to the point that there was law enforcement involved. The app had been attested to, with no flaws in it, by a large government organization, a huge provider, several internal teams, and so on and so forth; everyone had come in and said the app is fine, yet the attacks kept coming in. They were saying, there’s got to be insider fraud going on, and we’re not forensics investigators, we can’t say that. But what they did do, on a recommendation, was have us come in.

And again, we leveraged some of our automation, but we then mapped out the application and started looking through it, and within ten minutes we were able to identify a flaw that could be realized from an external source. But there was no way automation was finding it, because of how the token in the request was handled. All of the automation tools were basically logging themselves out, and they were not able to really deeply assess the app. And even for the ones that stayed authenticated, the way that it worked, it took a human element to say, hmm, that’s interesting. Within ten minutes we were able to identify a session fixation and an IDOR condition that was accessible from the Internet, that allowed significant access to the application, and that really precipitated some of this fraud. And that was like $1.2 billion worth of fraud that was dealt with. And then having to come in and have the client say, listen, you’re going to have to get your deposition suit on because you might have to come have conversations about this. And that’s after some huge names in the federal space, in the consultative testing and assessment space, had come in.
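
To make that class of flaw concrete, here is a minimal, hypothetical sketch of the kind of manual check a tester might script once they notice how the session token is handled. The base URL, login endpoint, token field, and record IDs are illustrative assumptions, not details from the engagement described above.

```python
# Hypothetical sketch of a manual IDOR probe: authenticate once, keep the
# session token attached to every request (the step the automated tools in the
# story were failing at), then request object IDs that belong to other accounts.
# The URL, field names, and token handling are illustrative assumptions only.
import requests

BASE = "https://app.example.com"  # assumed base URL, not the real application
session = requests.Session()

# Authenticate as a legitimate low-privilege test account.
resp = session.post(f"{BASE}/api/login",
                    json={"user": "tester", "password": "redacted"})
resp.raise_for_status()
# Keep the token on every follow-up request so the session stays authenticated.
session.headers["Authorization"] = f"Bearer {resp.json()['token']}"  # assumed token field

my_record_id = 1001  # a record the test account legitimately owns (made up)
for record_id in range(my_record_id - 5, my_record_id + 5):
    r = session.get(f"{BASE}/api/records/{record_id}")
    # A 200 response for a record this account does not own suggests an IDOR:
    # authorization is being decided by the ID in the URL, not by the session.
    print(record_id, r.status_code)
```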

But then getting access to the prior reports, you looked and it was all just tooling. It was only tooling. And that tooling gets you so far, but you really have to leverage the ability to improvise, adapt and survive.

Yeah, absolutely. Yeah. And it all comes down to, conceptually, the automation piece is great because you can go in, you can set everything up, and you can just start moving at scale relatively quickly. But the issue that most organizations have is that every little corner of your network or your applications, your tech stack, whatever you want to call it, is going to have its subtleties that maybe don’t equate exactly to how the automation is configured. And that human piece gets you that last mile, I think, is really the important thing. Just like you said, automation is not necessarily the devil, but it has its limitations. People do, too. And the complement of the two is really where it’s at, for me anyway, from what I see out there.

I really loved what you said about what happens when you take the toolkit away, when you take away that automation toolkit. In my career, when I was doing security full time, one thing we saw is that we went into a lot of environments that were very restrictive as to what tools, hardware, and software we could bring into the environment. And right away it separates the real meat eaters from everybody else, because all of a sudden you’re going back to the basics: network fundamentals, packet structures, those kinds of things that automation is absolutely helpful with. But if you don’t know those basics, can you really know whether the automation is functioning appropriately? Is it getting the things that you want it to? Are the people who are configuring that automation capable and aware of what you’re hoping to find out there, and able to troubleshoot it? The thing that always blows my mind is just the implicit trust of a tool out of the box.

Yeah, perfect. Here’s a really quick example. When I interviewed consultants, one of my favorite questions was to have them walk through how an Nmap discovery scan works. And that tells me, have they ever run Nmap through Wireshark? And then I’ll say things like, do you implicitly trust Nmap’s out-of-the-box discovery paradigm, especially for an external? And, like, do we not all love Nmap? I built my career off of Nmap. Nmap is a beautiful, beautiful tool, and it’s automation. But you have to wield it well. You have to understand that by default it first sends, you know, a SYN packet and different TCP packets to a handful of common ports like 22, 80, and 443, and if those ports aren’t open it’s going to mark a host as down; it might also send certain ICMP requests. And so for a perimeter network, if you haven’t looked at how the tool works and watched it on the wire for a moment, and you don’t understand that you need to adjust the default discovery, you’re going to miss a significant amount of attack surface, because it’s going to mark a host as down if it doesn’t have port 22, 80, or 443 open or respond to ICMP; it’s going to show as down. But how many servers are out on the Internet that have some other port open, or different web servers or services, or that don’t respond to ICMP? So that’s just a perfect example: we love and use Nmap every single day, and it’s automation, but you have to wield it appropriately and then troubleshoot. If you’re using sqlmap, if you’re using Burp, if you’re using different tools that are going to provide you value, and you don’t know how to interpret the results or ensure the results that you’re getting are accurate and appropriate, you’re really not leveraging that tool to the best of its ability.
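
As a minimal illustration of that point, here is a hedged sketch, assuming Nmap is installed and the target is in an authorized scope, comparing default host discovery against a discovery profile tuned for a perimeter where hosts may filter the common probes. The target hostname is a placeholder.

```python
# Minimal sketch: default vs. tuned Nmap host discovery for an external perimeter.
# Assumes Nmap is installed and the target is authorized. "scanme.nmap.org" is a
# placeholder host that Nmap's maintainers permit light scanning of.
import subprocess

TARGET = "scanme.nmap.org"

def run_nmap(args):
    """Run Nmap with the given arguments against TARGET and return its stdout."""
    result = subprocess.run(["nmap", *args, TARGET],
                            capture_output=True, text=True, check=False)
    return result.stdout

# 1. Default host discovery only (-sn): Nmap decides a host is "up" from a small
#    set of probes (ICMP plus TCP probes to a couple of common ports). A host
#    that filters those probes gets marked down and dropped from further scanning.
print(run_nmap(["-sn"]))

# 2. Tuned discovery: send TCP SYN discovery probes (-PS) to an explicit, wider
#    port list so hosts that only expose an uncommon service still register as up.
print(run_nmap(["-sn", "-PS21,22,25,80,443,8080,8443"]))

# 3. Skip discovery entirely (-Pn) and scan the top ports anyway, trading scan
#    time for the guarantee that nothing is excluded just for ignoring probes.
print(run_nmap(["-Pn", "--top-ports", "100"]))
```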

Absolutely. Yeah. And really, I mean, for CISOs and folks like that, when you look at the cost of some of these tools, you’re going to spend a significant amount of money, especially if you have an extensive toolkit. And while the idea for a lot of people at that C-suite, executive leadership level is that automation helps with cost, scaling, those kinds of things, if you’re not investing in the setup and configuration and then the care and feeding of that automation, that toolset, over time, you’re really just throwing your money out the window in some cases. And I think that’s what gets a lot of these organizations in trouble, or at least sideways from time to time. It’s like, well, I’ve got a million-dollar tech stack to protect my infrastructure and everything within it. And it’s like, yeah, but the people that are managing that tech stack, are they resourced appropriately? Do they have the training that they need, those kinds of things? So I think that’s a big one too: you can hire people who are incredibly capable individuals, but we all know how fast technology grows and changes; we know how much just one version of a well-known application can change. And it’s not just about getting people who are as credentialed as possible, but really about that Keen Mind feat, to quote D&D 5e, of individuals who are problem solvers, and investing in those individuals, I think, is a big thing. So, yeah, really enjoyed that.

So something else that we’re gonna take maybe a little bit of a turn on is the physical pentesting space. And everybody likes physical pentesting. It’s cool, it’s sexy, it’s fun, if you’ve seen Ocean’s Eleven.

Yeah, exactly. James Bond, Ocean’s Eleven.

But with automation and AI coming out, there are probably going to be some changes to how things go. And I think one thing is that everybody at least has a surface-level idea of manual pentesting from, like I said, the tester’s point of view. Right? I mean, how many times have you seen one of these? There are twelve-year-olds walking around with these things, these Flippers, trying to figure out how to...

Don’t go to Canada with that. Yeah.

Oh yeah. Really? Are they illegal in Canada? I didn’t know that. They’re putting out legislation to not allow them there. Yeah. Wow, crazy. Okay, cool.

Well, right now I know that there are some kids in my city that used one of these to get into an arcade at night, which, I mean, to me, if I owned that arcade, I’d be like, you guys can just play for free, I don’t care. Yeah, you just gave me a free test; I know I need to upgrade my locks. Exactly, man. But, you know, from the automation side of it: okay, I’m an organization trying to defend against, you know, physical entry and those kinds of things. Do you have any insight as to how maybe automation is testing that, or maybe what’s on the horizon beyond just the automation piece? Yeah, I mean, there are a lot of interesting things.

Leveraging graph theory to take, you know, maps and be able to start identifying blind spots. And already in the industry, when you’re looking at camera coverage and different motion sensors, you know, there are systems that will help you determine the right angles for your cameras and the different things that you need to have out there. So from a kind of proactive planning and site survey perspective, there are a lot of neat things coming out that leverage some of the tooling and automation. Really being able to start correlating alerts and being able to do some of the stuff on the back end is neat. We used to tell clients, and we still tell clients, that typically, unless you have eyeballs monitoring or a really great automated system for cameras, cameras are not an active control. They’re an investigative control. Like, you go back and look at the videos, unless you have someone watching it. And a lot of folks don’t have people watching it.
And then when they rely on the motion sensors, they turn the motion detection off, because every breeze, every time the air kicks on and a ficus flutters, yeah, it sends an alert. So cameras have historically not been an active control unless you have a paid-for 24/7 SOC or someone monitoring them.

Well, with some of the automation technologies, it’s getting really good at being able to determine, you know, learned behavior and those types of things. When it comes to being able to execute physical security assessment and testing activity, though, there’s still not too much that we’re able to leverage in the automation world other than intelligence operations and gathering information. I will say that, truth be told, we do leverage chatbots quite a bit now with social engineering activity, being able to, at scale, you know, have trained models that can start trying to coerce users into going to sites, so that we don’t have to have individual consultants doing it. We’ll have a consultant managing a small army of chatbots that are maybe doing active phishing engagements, getting targets to go to a chat, and leveraging that.

So there are some uses for automation, but it’s tough right now. Until we’re able to get, like, an iRobot that can go in and try and clone a HID reader, yeah, there’s still definitely a boots-on-the-ground aspect of physical testing, actually putting your hands on the crash bar and pushing in or trying to get into the facilities.

Absolutely. Yeah. And, you know, it’s just a wild world that we’re living in, where you can go watch a Netflix documentary or anything like that and see, oh, Lockheed Martin’s developing this AI that can recognize, you know, your gait. Right.
How do you walk? Right, walk, yeah. And things like that. And you’re like, wow. At face value, you’re like, well, it’s going to be impossible to break into anywhere. And then the reality is, how far away is that technology? Who’s actually going to implement it? Who’s going to use it? Right. Yeah.

Yeah, sure. I mean, maybe the NSA and some huge places are going to be able to afford that, but I don’t think the regional bank down the way will. Exactly, exactly. And that’s kind of the interesting thing. You know, I’ve been lucky enough to be able to do some physical stuff myself in the past, and there are those attributes, those things like, you know, the hi-vis vest and the clipboard kind of thing. You’ve got those aspects. And for anybody who’s playing the home game, basically, if you put on a hi-vis vest and carry a clipboard, you can walk into a lot of places. It’s pretty crazy. Just go into a hospital and ask for the keys to get on the roof, and half the time they won’t even ask why you’re there. So there are those pieces, and how do you automate just the human component? Really, at the end of the day, you can’t. But I think as time goes by and as AI starts to take off, we’re going to start to see that game change a little bit in the physical testing space. And, you know, it’s sad for the guys that do it. I’m by no means a physical pentester or anything, and it’s sad for the guys that do it, but we have to remember that our job here is to, like, make things safer and better.

That’s right. Security posture. Yeah, that’s what I tell folks all the time: my professional failure is a success for them. And raising the security posture of those organizations that are under our purview is the reason we’re here. It’s not just professional hubris, those types of things.
One of the things, I think, where automation can really be leveraged in all the different security paradigms and facets is being able to help with the noise: correlating events, correlating data, and getting it to the right dashboard, getting it to the right people, so that there’s an active response.

Thinking about most of the physical testing, unless we pop an alarm that blares out or requires a response, most of the activity is stuff that’s investigated later. But then there are always IOCs, indicators of compromise, that this system didn’t talk to that system, and that’s really another facet. All these different tech stacks that you mentioned earlier, Jordan: are they communicating well, are the logs and the incidents and events able to be correlated and dealt with? And I think as we continue to mature as organizations, lowering the barrier to having cohesive communication across your tech stack and your alerting and those types of things will be super valuable. And that’s where I see a lot of value, not just from assessment and being able to perform automated assessment activity; the real value is going to be derived from being able to take a lot of the legwork out of connecting the dots. Instead of, okay, this was this IP address, which is this system, and going to nine different dashboards before you’re like, that’s the binary that was downloaded here, you really get a cohesive kind of telemetry view.
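
As a toy illustration of that connect-the-dots work, here is a hedged sketch that joins alerts from two hypothetical telemetry sources on a shared IP address and a time window; the source names, field names, and sample data are all invented for the example.

```python
# Toy sketch: correlate events from two hypothetical telemetry sources by source
# IP and a time window, so an analyst sees one joined record instead of hopping
# between dashboards. Field names and sample data are invented.
from datetime import datetime, timedelta

edr_alerts = [
    {"host_ip": "10.0.4.17", "time": datetime(2024, 3, 1, 9, 12), "detail": "suspicious binary downloaded"},
]
proxy_logs = [
    {"src_ip": "10.0.4.17", "time": datetime(2024, 3, 1, 9, 10), "detail": "request to rare domain"},
    {"src_ip": "10.0.9.3",  "time": datetime(2024, 3, 1, 9, 30), "detail": "request to rare domain"},
]

WINDOW = timedelta(minutes=15)

def correlate(alerts, logs):
    """Pair each EDR alert with proxy events from the same IP inside the window."""
    for alert in alerts:
        related = [log for log in logs
                   if log["src_ip"] == alert["host_ip"]
                   and abs(log["time"] - alert["time"]) <= WINDOW]
        yield alert, related

for alert, related in correlate(edr_alerts, proxy_logs):
    print(alert["host_ip"], alert["detail"], "->", [r["detail"] for r in related])
```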

Absolutely.

And the telemetry is huge, I think, with automation. And that comes back to the people that are setting it up, managing it, doing the care and feeding. You’re trying to aggregate, and that’s great, but are you getting things in the right order, with the right timing? Are the right things being aggregated? What’s being left out in the cold that shouldn’t be? What’s coming in that shouldn’t be? Those kinds of things.

So, yeah, no, I fully agree. Fully agree. So here we are, getting to the end. So, per our producer, Anna Lee, I’m going to go ahead and give my prediction for manual pentesting in the future. Where I see things going is, just like with every other form of automation, there will be changes. There will be people who come out of a significant increase in automation over the next decade and maybe don’t quite fit the mold anymore, and that’ll be based on their ability to adapt, those kinds of things. But just like with going from the cart and buggy up to the automobile, people will adapt, people will change. There were no auto mechanics 150 years ago; now, how many are there in every town in America? That’s my view for manual testing: it isn’t going to go away, but it’s going to look different.
And that’s my very high-level, generic prediction. I would love to hear your thoughts, and maybe even some specifics if you’ve got them. But I’m curious to know what your thoughts are for the next ten-plus years.

Yeah, it’s tough because I’m a terrible futurist, and I don’t want to just say I agree, but I love it, because if you think about it, there are roles and jobs that exist now solely because of new technologies that have been implemented. And, you know, folks who went to VCR repair school probably aren’t repairing VCRs anymore. You know, they’re probably not even repairing DVD players. But with the advent of new technology comes a necessity to adapt, and that’s why the doom and gloom of all this new automated AI, or robot overlords coming in, which I welcome fully, I want them to know, I’m on record, I welcome our robot overlords, doesn’t worry me too much. You know, there is going to be an aspect we can’t even determine of the types of roles that are going to come out of this. I think that when you look at some of the requests that are coming out, some of them are knee-jerk reactions. And you have things like, we want to red team our AI, or we want to hack our AI models.

And then there’s prompt engineering already showing up on resumes. I’m literally getting resumes now that say they’re experts in prompt engineering. And I’m like, me too. Yeah, no, how do you get that? I do think that the ability to communicate and speak the language of the tech stack to the tech stack in natural language, and the ability to somehow be an interface, means there’s going to be a lot of liaisoning. And I think we are going to lose a lot of the institutional knowledge, with technologists who aren’t able to write in assembly and COBOL anymore. But I think with the roles that are coming, you’re going to have to be able to liaison and interact with the tech stack in novel ways, and those roles are going to look interesting. I do still think there’s always going to be a necessity for a deep understanding of the technology under the hood.
There are going to be tinkerers and a necessity for them, and I think it’ll actually be more important. I’ve also got a little bit of doom and gloom. I think there’s been a bit of a bubble brewing for the last decade, where we’ve continued to lower the barrier to entry, lower the barrier to entry, leverage different aspects of technology, and we’ve started to lose some of the technologist aspects that are necessary to be experts in the field. And I think that bubble is starting to burst, where folks are like, we actually need folks who truly understand this, who aren’t just following a script or following, you know, what the tech is telling them. I always tell my consultants and the people that I’m mentoring, you know, never let the computers win.

Yeah. Never let the computers win.

You never let the computer win. You’re gonna get it to do what you want it to do, even if that means turning it off. Yeah. So, yeah, I mean, that’s kind of a vague answer. I do agree there are gonna be roles that we don’t understand yet, but I think liaisoning with the technology, and then, from my perspective, being able to hack the snot out of it, is still gonna be a necessity.

I love that outlook, the liaison kind of concept, and the idea of, you know, the technologist. It’s an interesting place we sit today, where the average person interacts with a level of technology that was inconceivable 20 or 30 years ago. But they interact with it and they don’t fully understand it. And the technologists that work for me, they are my go-to in almost every matter, and I feel bad because I don’t have enough of them, so I take them with me everywhere I go. But, you know, it is going to be a real thing.

The thing that I’m most excited for is that first time that somebody gets into an environment as a, you know, fully qualified prompt engineer and is able to just speak candidly with a tech stack and get it to do whatever they want. That will happen someday, and I hope it doesn’t, you know, break anything too serious. Earl Grey tea report? Can’t wait. Tea. Earl Grey. Hot. I can’t wait to be able to be like, computer, pick out my clothes today.

Exactly. Earl Grey, hot. That’s what I love about working for PlexTrac. They pick out my clothes, man. Well, anyways, okay, well, hey, if you’re still listening, thank you so much for joining us. It’s been an absolute pleasure, Nick. Really appreciate you spending some time with us. This is a new thing for PlexTrac, Friends Friday, and our hope is to continue it and make it valuable to anybody who tunes in.

If you have any questions or comments, this is going to be going out on LinkedIn, so please feel free to comment down below and ask questions. Please be nice to Nick and me; we’re both very sensitive young men. But yeah, if you have any questions, please feel free to reach out to us. And then, Nick, if you would, please, if you have anything to plug, any socials you want to talk about, give us a quick little intro to Rotas.

Oh, absolutely. So, yeah, you know, if you need security assessment and testing, penetration testing, adversary emulation, red teaming, purple teaming, hit us up: Rotas Security, https://rotassecurity.com/, Rotas Security on LinkedIn. I’d love to hear from you. The operators are standing by.

Outstanding. Hey, thanks for joining us. We’ll see you at Hack Space Con here in a little bit. So thanks so much, Nick. You have a great day. Thanks for joining us, everybody.