

End-to-end Pentest Automation: How to Adopt a Continuous Validation Strategy (w/ Pentera)

In this webinar, experts in the infosec continuous testing and validation space will explore the benefits of performing continuous validation of your security controls, the challenges typically associated with traditional pentesting processes, real-world examples of organizations that have embraced automation to enhance their security posture, and how Pentera and PlexTrac make this strategy feasible in an efficient way.

Series: On-Demand Webinars & Highlights

Category: Pentesting, Thought Leadership



All right, well, hey, we're a few minutes past the hour here, so we'll go ahead and get started. I'll get the slides going. I promise we won't make this death by PowerPoint; just a couple of slides here to get us started. All right, folks. So Nelson, does the screen look good? Yeah, perfect.

Outstanding. All right, well, hey, thanks everyone for joining us. We're here today to talk about end-to-end pen testing automation: how to incorporate some of the newer, emerging tools into your overall strategy for maintaining more continuous testing and more continuous visibility than the traditional paradigms allow. And so this one is a little unusual; it's a joint effort between PlexTrac and our good friends at Pentera. We're hoping to show you today how you can leverage the benefits of both of our products to assist you in these efforts. Nelson, you want to go ahead and introduce yourself and give a little bit about your background? Sure.

So my name is Nelson Santos. I'm a principal sales engineer here at Pentera. My background has been mostly on red team, pen testing, things like that, and then I kind of moved over to the purple team side of the business a few years ago. And now, finally, I'm on the dark side, right, on the sales side of things.

I can understand that; going from operations to business is kind of my path as well. So, Sean Scott: I retired from the Air Force in 2018 after commanding a cyber operations squadron, then tried to do my own thing for a little bit as a consultant, and then in 2019 joined forces with Dan DeCloss, our founder, to get PlexTrac going, and I've pretty much worn every hat. And now I'm blessed to be in a position of really just getting to work with our clients, understanding their needs, and ensuring that the product continues to evolve to meet those needs. And some of those needs are what we're going to talk about today. So I'll go ahead and move on to the next slide and tell you a little bit about what we're going to cover. We're going to talk a little bit at first about theory: where does continuous validation, this concept of automated, technical, recurring testing, fit as a component of your overall continuous testing and remediation lifecycle? We are not offering silver bullets here today.

We don't have snake oil where you take two shots, chase it with some tequila, and everything's going to be great in your environment. But we are going to present one component of an overall strategy that we think is the most effective strategy out there for maintaining that overall, continuous visibility and remediation lifecycle. We'll tell some stories about how these strategies have helped some of the people that we work with, from a couple of different perspectives. Right, so the Pentera folks are the people that collect the information, and on the PlexTrac side we're more involved in the dissemination and the remediation tracking of that information. And then we're going to go through just some quick demos here. It's not going to be death by demo, and by no means are we going to show you every nook and cranny of our products, but we are going to show you the workflows associated with how you can implement continuous validation in your environment, and then make sure that the results don't get buried in the bottom of a desk drawer, printed out in some PDF that collects dust and that nobody ever actions. Right.

We'll finish up with the standard Q&A, and as always we look forward to your questions. You can go ahead and drop those at any time. If we happen to notice them as we're talking and it makes sense, we'll answer them live. If not, we'll catch up here at the end. Nelson, anything to add before I get started? No, I think that was a great introduction, and as you said, we'll show how the two tools are great separately but amazing together for making sure that you have your continuous validation, continuous testing actually, and can keep on top of the findings and remediations.

All right, so let’s talk a little bit about this. There’s so many buzzwords out there.

Continuous testing, continuous validation, continuous remediation. There is continuous exposure management, continuous visibility.

What is all this about? Why does this seem like it's the topic of every blog article you're reading on Medium these days? Well, from our perspective, what we see is that people have realized that the world of the adversary moves way too fast for the old paradigm to work. And when I say old paradigm, what am I talking about? It's: how do I protect myself? And by protecting myself I don't mean buying some new tool, but how do I actually validate that the defensive solutions I've implemented are going to be effective against a dedicated adversary, right? Or maybe even a not-so-dedicated adversary. Hopefully, whatever flavor of EDR, XDR, or MDR you have is keeping you safe from the script kiddies out there. But traditionally, people focus on that bottom-left quadrant there. Hopefully they're at least getting that annual third-party pen test, or, if they're lucky enough, they've got those dedicated internal resources that are doing that for them and taking that offensive perspective, right? And that's awesome. And quite frankly, I still think that there is no better data you can get as a defender than the results of a truly offensive engagement. Would you agree there, Nelson? I completely agree. We're going to talk about automation and how it can help, how it's necessary nowadays.

But it does not replace the manual pen test. Right. It augments it, and it allows those tests to be a lot more targeted; the pen tester doesn't have to spend time doing the tests that can be automated. So I completely agree with you. Yeah. And I've got so many bros that I love hanging out with at the cons for whom this is their bread and butter, but I also know how much they charge.

We're talking the price of a decent-sized car for most engagements, right? And those aren't the sorts of things where you can bring someone in once a month. You're not going to get the budget for it. It's not going to happen. Even at security-focused organizations, we've got to make those trade-offs: how many times a year are we going to pay a third party to come in and supplement our manual offensive efforts? And so, unfortunately, you get that once a year in most cases. And once a year is kind of the bare minimum. We preach that we would like to see it done more often, but realistically, I'm not seeing it in most organizations.

What are you seeing from your perspective, Nelson? No, you're absolutely right. And I think the other thing, too, is that pen testers, especially if you've got really good pen testers, what they're going to try to do is go through the avenues that are more complex and that do require a lot of that human creativity, right? It's just their nature. I did pen testing for quite a while, and that's what I wanted to do. I didn't want to do the simple attacks that I can do by just plugging in Responder and stuff like that. I wanted to go for the interesting attacks. So not only do you get that limited visibility because they're going to have to choose, since, as you said, they're expensive and you're not usually going to be able to afford to have them happen continuously, but they're also going to be very limited in scope, because they're going to concentrate on some assets.

You just don't have the bandwidth to cover too much, or else it'd be prohibitively expensive. And more importantly, I think they're going to look for the very sophisticated stuff. Right. That's the interesting stuff that the pen tester is really good for: looking for the attacks that are not, as you said, script-kiddie attacks and things like that. So you're absolutely right. I think you get those blind spots because of those things, and there's only so much time. What were you usually charging for 40 hours on those things? A 40-hour engagement for most of them.

Yeah, a 40-hour engagement. And then you've got to remember the reporting as well, right? Yeah, exactly. So you spend a good portion of that doing reporting, maybe half of that time, unless you're using PlexTrac, but that's a topic for another webinar.

Bottom line is, they're awesome. They're the foundation, right? And they're going to give you great data, but they're not continuous, they're not repeatable; it's just not economically viable. Right. So traditionally, people have supplemented those with vulnerability scans. Whereas your manual pen tests are looking at the more complex types of attacks, with your vulnerability scans you're just looking for the low-hanging fruit. Right? Where are the open doors? And those are great, but you don't get a lot of CVEs for buffoonery, right, for people misconfiguring things, leaving default passwords in, and all those other things.

And quite frankly, not every CVE is going to get a patch. Right? Yeah, that's absolutely right. You also don't get a lot of context with vulnerability scanning. It's hard because they're looking for vulnerabilities and just handing you a vulnerability. A CVSS score of ten on a certain machine doesn't mean the same thing everywhere. Let's say a little kiosk that sits by the front desk and doesn't have access to anything interesting: a ten there is not the same as having even an eight on the domain controller. Right. So it's really hard to get the context around the vulnerabilities, because with just vulnerability scanning you don't know how far an attacker can go by leveraging them.
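To make the kiosk-versus-domain-controller point concrete, here is a minimal, hypothetical sketch of context-aware prioritization: a raw CVSS score gets weighted by how much the asset actually matters. The asset names and criticality weights are invented for illustration; neither Pentera nor PlexTrac necessarily scores things this way.

```python
# Hypothetical sketch: scale raw CVSS scores by asset criticality.
# Asset names and weights are invented for illustration only.

ASSET_CRITICALITY = {
    "front-desk-kiosk": 0.2,   # isolated, no access to anything interesting
    "domain-controller": 1.0,  # compromise here is total compromise
}

def contextual_priority(cvss: float, asset: str) -> float:
    """Weight a raw CVSS score by the criticality of the asset it sits on."""
    return round(cvss * ASSET_CRITICALITY[asset], 1)

# A CVSS 10 on the kiosk ends up ranked below a CVSS 8 on the DC.
print(contextual_priority(10.0, "front-desk-kiosk"))   # 2.0
print(contextual_priority(8.0, "domain-controller"))   # 8.0
```

The exact weighting function is beside the point; the takeaway is that severity without asset context tells you very little about what to fix first.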

And quite frankly, not everything gets a CVE.

What I have seen with some early-stage software companies is that they just don't have a deployed base big enough for the folks who hand those things out at MITRE to say, yeah, that warrants a CVE. They processed something like 28,000 CVEs last year, and they declined, I think, twice that many submissions. So those things don't necessarily exist. And if they don't exist as a CVE, the chances of a plugin being written that's going to detect that vulnerability for this niche product are just not very high. Right.

Something else I see a lot in the OT space, and it doesn't even have to be the OT space, is vulnerability fatigue. We've got some clients that sell packaged solutions in support of things like communications infrastructure, and so it's traditional IT, but because it's so tightly integrated, in many ways it's treated like OT in the ICS space. And so there is an extreme reluctance to just blanket-patch things just because we can, because oftentimes these are deployed in public-safety-type functions, right? If we were to say, okay, great, we've got our automated patch management system and we're going to feed the results of our Nessus scan into it and let it go to town, that's just not going to happen. And so what does that mean? Well, it means that every single time they rescan, they're going to see those same old vulnerabilities over and over and over again. And how many times in just the last couple of years have we seen a new way of exploiting an old, low- or medium-priority vulnerability that has now escalated it? And so you get that vulnerability fatigue. What are you going to do when you get a scan report that's got 1,200 vulnerabilities in it? Exactly.

I mean, not only is it impossible to stay on top of even if you wanted to, but as I said, not only in OT but also in IT environments, with critical servers, services like Active Directory, or some specific specialized application, sometimes people just don't want to patch. Or they'll say, well, we'll patch everything that's critical, or high and above, or something like that. But as you mentioned, when we're running actual real attacks, it's not about exploiting a single vulnerability. A lot of times it's going to be about building up: from a medium vulnerability, leveraged with a low vulnerability, to gain an initial foothold, and from there escalate privileges and things like that. So a lot of times these systems are not going to get patched. And as you mentioned, the vulnerability fatigue stays there; people just start automatically ignoring those.

I remember, for a time, SMB signing not being required was a great example of one: a CVSS score of one. So every single time that vulnerability comes up with that score, we just automatically ignore it. No, I don't care about it. But it's the thing that makes some of these very common attacks possible. You mentioned Responder, right, the classic Responder. What's one of your ways of mitigating that? Actually take care of these lower-priority vulnerabilities, and now you've cut off a vector for us.

By no means, and I think you would agree with me, am I saying that vulnerability scanning shouldn't be one of the foundations of your security program. It's a must-have; it's a must-do. If you're not doing it today, number one, you probably stumbled into the wrong webinar.

I would hope that by now you are performing routine and regular scanning, and not just network-centric as we've been talking here, but also on the application side, routinely as part of your SDLC. Routine vulnerability scanning is extremely important. And so both that and your annual pen test, you know, those are what we had before we started thinking, as a community over the last few years, about how we do better, right? And so there are really two areas that I see. There are plenty of other things; people can take a look at this model, and no model is perfect, right? They can say, well, you're missing this and you're missing that. But for a mature program, I see two primary ways that people are now augmenting the traditional methods with new ones. One, of course, is purple teaming.

That's not what we're going to talk about today. If you're interested in learning how purple teaming fits into your overall continuous testing and remediation lifecycle, I did a webinar along with Nick Popovich, our hacker in residence, about a month ago. Feel free to check that out. But purple teaming is nice because there's a human in the loop; a smart, thinking human can do things like chain together those lower-priority vulnerabilities that Nelson was just talking about to truly demonstrate the offensive impact, and also get buy-in from your SOC. You get that real-time stuff. Nelson, I know you're an evangelist for purple teaming; obviously, your thoughts on how that fits into the overall lifecycle there.

Yeah, I think purple teaming is the best invention since red and blue teaming. I think it's the right approach. Right. It's not just about the team itself, but the approach and the whole paradigm of having someone inside your company that's continuously testing your security systems. And not just the security systems, but your whole security program. It's not just about patching. Patching is not a security team problem.

It's an IT problem, right? It's handled by IT. But the tools and the whole process around patching, around vulnerability management, that's a security issue. And having a purple team is, I think, fundamental to a modern organization. And that's a good thing about purple teaming: before, you had to have two teams, red and blue, at least two people. But now you can have a single person, initially at least, that's doing some hunting, working with your tools, and making sure everything works perfectly.

Yeah. Well, now that we've talked about how great purple teaming is, let's talk about some of the challenges. Because at PlexTrac we have had modules, now for going on three years, that are written to enable and facilitate purple teaming. Right. We are big believers in its value, but quite frankly, it's resource-intensive, right? Yes, you can do it with two people, but generally those two people will still report to different people. Right. And if you're going to devote someone from your SOC to a purple team engagement, well, you can't leave the SOC unmanned, right?

So you've got to have someone there. And if you scope it up: when I was in the military, we called purple teaming participative threat emulation, PTE, right, and we had these giant events with 50 people. That's not going to happen here. But even to scale it up a little is difficult. You're taking time out of people's days; they've got other duties. And so the best case that I have seen in a corporate environment is a scheduled once-a-week purple teaming event.

Right. So on a Thursday, say, they'll have a three-hour window where everyone comes to the table. They've got a very defined scope, because it takes time; you've got to make sure that you've got all the routes and access in place. But that's the best I've seen. And the reality of the matter is that most of the organizations I have seen doing purple teaming are generally only doing four to six engagements a year. What's your experience there, Nelson? The other thing, too, I think, about purple teaming is that one of its disadvantages is the same reason why you always get pen tests from a third party rather than from someone inside the company.

Right. You want a different perspective. The purple team, they're amazing at what they do, and as I said, I think they're necessary.

Even with the shortcomings you mentioned, it's not like they can do continuous testing; they have to work around whatever constraints the company might have. But beyond that, they're part of the company. They know the culture, they know the products and everything, so their analysis is going to be biased in some way. Having a separate perspective matters, and that's why, as you have there on the graph, pen testing is still necessary even if you do have a purple team or red teams. I think it's something to keep in mind, right, that they complement each other, but they certainly don't replace each other.

All right, so we've talked about three sections of the pie. I've got a stomachache from eating so much pie, by the way; I realized at my anniversary this weekend that an eleven-inch cheesecake from the Cheesecake Factory is a lot of cheesecake. But I digress. All right, so we've identified the value of three of the four components. We've also identified some of the shortcomings. Right. And so what if we had a method where we could bring the offensive perspective to the fight, do it on a recurring and routine basis, and demonstrate the value?

Well, I wouldn't say the value; we could demonstrate the impact of having a large number of unremediated vulnerabilities, maybe not critical all by themselves, but creating a logical attack path. Right. What happens when you get the Swiss cheese slices, right? Each one's got 500 holes in it, but you put enough of them together and you're going to find a way through. How do we do that? That's where I think continuous validation comes in. And Nelson, I know you're the expert on this, so I'm going to let you take that softball and roll.
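The Swiss cheese idea, chaining several individually low-rated findings into one complete path, can be sketched as a simple graph search: hosts are nodes, and each edge is a single weakness nobody would patch on its own. The hosts and weaknesses below are hypothetical examples, not output from either product.

```python
from collections import deque

# Hypothetical attack graph: nodes are hosts, edges are individual
# low/medium-severity weaknesses that, chained, form a full attack path.
attack_graph = {
    "internet": [("workstation", "LLMNR poisoning (medium)")],
    "workstation": [("file-server", "SMB signing not required (low)")],
    "file-server": [("domain-controller", "cached admin credentials (medium)")],
    "domain-controller": [],
}

def find_path(graph, start, target):
    """Breadth-first search for a chain of weaknesses from start to target."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        host, path = queue.popleft()
        if host == target:
            return path
        for nxt, weakness in graph[host]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [weakness]))
    return None  # no route from start to target

path = find_path(attack_graph, "internet", "domain-controller")
# Three individually unremarkable findings add up to domain compromise.
```

No single edge in this toy graph would rate more than a medium on its own, yet the chain ends at the domain controller, which is exactly the impact a severity-sorted scan report fails to show.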

So yeah, with continuous validation, the important aspect of it, as you said, is that it shows you the path an attacker is going to take to gain access to a certain environment. Right. It's not about an exploitation in isolation. Nowadays, even if I can get into one system, that's usually not going to be the system that has the crown jewels. Hopefully, the system I get into is just going to be a stepping stone, so I can jump to another system, or move within that system, to actually exploit it. There are going to be multiple steps I have to take from there.

I'm going to have to jump somewhere else. I'm going to have to have all of these things working in concert to actually achieve a full compromise and extract what I want from the company. Plus, we are in the age of cloud, where changes to the environment don't require buying some hardware, taking a month to receive it, and someone going to a data center to rack it. It was easy to keep track of things back then because changes took so many steps. With cloud or hybrid cloud environments, I can have new servers pop in within a few seconds. Right. That's something that makes keeping track of your environment really hard.

And we talked about pen testing and even purple teaming. Having these exercises even every two or three months, when your environment is changing every week, it's not realistic to think that you're going to be protected. And that's where continuous validation comes in. The process is: as you're validating vulnerabilities, as you're finding new problems, new paths into the environment, as you mentioned, you're fixing them at the same time. You're running other assessments, doing other scans, and working with all these tools in concert to make sure that your environment is protected, or at least that you know where the vulnerabilities and the problems are as the environment evolves. Outstanding. Well, hey, we've talked a lot of theory.

We've waxed poetic for 15 minutes about our vision for how to secure your world. How about we talk a little bit about some of the real-world successes that we've seen, and then maybe we can get some buttons mashed? So hey, Nelson, why don't you start us off here and maybe tell us a story. Sure. So we had a client, we have a client, I should say, that was basically using the traditional vulnerability scanners, pen tests twice a year, just the general stuff, and had the latest EDR and all of that.

They did a proof of value with Pentera and they found gaps. They had, as I said, the latest EDR out there. The problem was that the EDR could detect a lot of the attacks that Pentera performed, a lot of the techniques that were used, but there were some systems, considered critical systems, that weren't allowed to have the EDR installed. Plus, with the POV, it was an extended POV, we found other environments that had the EDR but just didn't have the policy that was actually enforcing remediation of these issues, again because they were considered critical systems. So it's a very heterogeneous network.

They had different offices, different places, but also different servers, different admins, things like that. And again, because Pentera can cover a lot more ground, even during the proof of value we were able to find a few of these gaps. So they implemented Pentera and started using it to run assessments weekly. Basically, the way they did it was they ran a weekly Pentera assessment, stopped it on the weekend, and then, as soon as it was done and had generated reports and everything, they would start a new assessment. So you really had what you would imagine as a continuous test of the environment. The next problem they realized was: okay, now we have all of this separate data from the separate environments where we deployed Pentera, but we want a unified view of where everything is.
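The cadence Nelson describes, roughly a week per run with the next assessment starting as soon as the previous report is cut, can be sketched as below. This is an illustrative scheduler only; the function name and window length are stand-ins, not a real Pentera or PlexTrac API.

```python
import datetime

# Hypothetical sketch of an "always testing" cadence: each assessment runs
# for just under a week, and the next one begins the moment it ends.
ASSESSMENT_WINDOW = datetime.timedelta(days=6, hours=23)

def plan_assessments(start: datetime.datetime, count: int):
    """Return (start, end) windows for back-to-back weekly assessments."""
    windows = []
    for _ in range(count):
        end = start + ASSESSMENT_WINDOW
        windows.append((start, end))
        start = end  # next run begins as soon as the last report is cut
    return windows

runs = plan_assessments(datetime.datetime(2024, 1, 1), 4)
```

The point of the back-to-back windows is that there is no gap: the end timestamp of one run is the start timestamp of the next, which is what makes the testing genuinely continuous rather than periodic.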

So they had the environments in isolation. And Pentera is a great tool, and we try to make it easy for you to have a holistic view of the environment, but I'm a firm believer in specialization, right? PlexTrac is amazing at aggregating data and making it so that you have a single pane of glass, a single view of the environment. And that's exactly what they did. They got PlexTrac, I think they got both around the same time, and they started using the two tools together. So now they had a tool like Pentera that could show them the paths an attacker would go through.

And they had a holistic view, because now when the CISO wanted to see something, they could say: show me all the paths across all of my different networks, the different subcompanies that are part of the company, and everything. And the two tools together... oh, let me rephrase that. There were more tools involved as well: a vulnerability scanner, results from pen tests, and even other tools to do more specialized attacks for applications, cloud, and that kind of stuff. And now they can unify all of that and have a single place to view it by using PlexTrac, Pentera, and these other tools. So it was extremely successful.

It really increased the visibility they had into the environment. Now they knew exactly where a problem could come from, right? They knew the paths. Some of them they couldn't fix, but at least they knew which ones they should monitor better. They knew where the gaps were, and they were then able, if not to address the problem, at least to mitigate it in some way. Awesome. Well, thanks for that, man. One thing I was expecting: we've been talking a lot about how an organization would protect itself, right? But here at PlexTrac, probably about half of the people that we work with are actually third-party providers, service providers.

And I've been scrolling through our list of attendees here, and we do have quite a few of those folks. So what I wanted to use my story time for was to highlight how tools like Pentera are actually being used effectively to bring new revenue streams to those service providers. I'm going to tell the story of one of our service providers. We actually have quite a few service providers that are dabbling in a space that I'm going to put under the umbrella of continuous pen testing as a service. They all call it different things, right? And it may or may not be continuous pen testing, but it is some sort of provision of recurring offensive techniques in your environment. And so what this provider has done, under the umbrella of pen testing as a service, is set up, much like an MSSP would do for basic vulnerability scanning, a recurring service where they work with their client and provide them a menu. It's like: okay, we are going to come in and we are going to do the manual pen testing. And how frequently would you like that? Okay, great.

We're going to try to sell you on four times a year, but you're probably going to buy one. But now, what do we do to augment that? And they're bringing tools to the fight, the primary one, quite frankly, being Pentera. It's like: okay, great, now let's augment our manual pen testers with Pentera. We're going to deploy this; how frequently would you like us to run it? And they really get an opportunity to gauge the appetite of the client and put together a package of services. And it doesn't stop with running the Pentera engagements, because then they have an additional service they offer, which is: would you like us to do manual validation? Would you like us to pull the thread on this a little bit further? So think of it as a manual pen test lite, right? You're not having to buy the whole kit and caboodle of a 40-hour engagement, but now maybe we are able to sell you a handful of hours on a recurring basis with the same offensive security professionals that might be involved in your manual pen test. And now we are providing you even more of a holistic view of your environment from the offensive perspective.

And then they're taking all that data, because it may come from many different sources: it could come from the manual pen testing, it could come from Pentera, and they may be using other, specialized tools as well, and ingesting all of it into PlexTrac to consolidate and provide that unified visibility to their client. Nobody wants to be poking around 15 different dashboards, right? Especially if you're the consumer of the data, and you also want the ability to prioritize across the different sources. And that's what PlexTrac brings to the fight for them: they're getting Pentera data, other data, manual testing, bringing it all together, and they're making bank off of it at the end of the day, which is what they want to do. But more importantly, quite frankly, their end clients are getting services that they just can't provide themselves. They just don't have the ability to do that sort of thing organically.

We're seeing a lot of growth in this line of business. It looks a little bit different depending on who's offering it. But if you're looking for new ways to offer valuable services to your clients and you are a third-party service provider, think about that. If you're an existing PlexTrac or Pentera client, talk to your CSM about how they're seeing other people offer those services. Awesome. Well, I'm going to stop presenting at this point, because we are now going to give you a little look at how this all works. So, Nelson, feel free to take it away.

Cool. So we just had a question about whether we're going to show the platforms and the tests, and I'm going to show them right now. It's going to be a very quick demo; if you're interested in seeing an extended demo or a POV, just contact us and we'll go over that. But this is basically how you would configure a Pentera test.

So we start what we call a penetration test, or a black box test. Pentera is not an agent-based solution; you're not installing agents or anything like that. You place a box with Pentera, or you can do it remotely as well, there are ways to do that, but logically speaking, you're placing a Pentera instance inside an environment and running the assessment from there. So think of it as a pen tester coming in with a laptop or connecting to your environment through a VPN, whatever. It's running from another machine.

It doesn't have direct access to any of the victims. You would set up things like the name and description for the environment, and the ranges you want to include in that assessment. There's some configuration about duration: I mentioned the client that's running this for seven days; well, it's actually six days and, I think, 23 hours, and then rerunning it again afterwards. Some of these configurations control how you want Pentera to behave. But as you can see, the setup is even simpler than a vulnerability scanner's. There are no choices of, oh, I want to run this vulnerability check or this other one, because the number one thing that we take into account when developing a new technique for Pentera is how safe it is.

So we just don't include in the platform anything that's unsafe. We don't need to, as you'll see, to be able to progress through most of the attacks. Again, the idea with Pentera is not to use the latest, unstable proof-of-concept exploit that's out there. The idea is to use the most common techniques that compromise networks. Once this is all configured, you would create and run the assessment. For this quick demo, I just have an assessment that was run previously. How long it takes to finish really depends on the number of assets that you have; this is a small network of 21 devices.
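As a rough picture of the handful of settings just walked through (name, description, included ranges, duration), here is a hypothetical assessment definition. The field names and IP ranges are invented for this sketch and do not reflect Pentera's actual configuration schema.

```python
# Hypothetical assessment definition; field names and ranges are invented
# for illustration and are not Pentera's real configuration schema.
assessment = {
    "name": "weekly-internal-blackbox",
    "description": "Continuous validation run for the internal segment",
    "included_ranges": ["10.10.0.0/16", "192.168.50.0/24"],  # test scope
    "duration_hours": 6 * 24 + 23,  # six days and 23 hours, then rerun
}
```

The notable design choice, as described in the demo, is what is absent: there is no per-vulnerability selection, because anything deemed unsafe simply isn't in the platform.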

But what Pentera will show you is all of the machines in the environment that it could find. It's going to color-code them and render some pretty visuals about the attacks and the vulnerabilities that were found on them. And it shows the vulnerabilities — this is similar to a vulnerability scanner, but even a little more restricted, because as I mentioned, Pentera is not a replacement for the vulnerability scanner. It has a built-in vulnerability scanner because that's the entry point for a lot of the attacks, but it's not a replacement for your current vulnerability scanner. The idea here is to only show vulnerabilities that are either directly exploited or can be used with other vulnerabilities to gain a better foothold on the network. Where things become different from a vulnerability scanner is with the achievements.

So the achievements are the things that Pentera is able to do by leveraging these vulnerabilities up here. We can see here that it was eventually able to get the clear-text password for a domain admin; it did some exploitation of an Exchange vulnerability; it extracted information from a bunch of machines in the environment, and things like that. I'll elaborate a little bit on that in a second. Pentera will show you the vulnerabilities — this is standard stuff, as you would imagine.

As Sean mentioned, some of these things are just misconfigurations. Some of them have CVEs. But what's important here is that Pentera is classifying the remediation priority — prioritizing the vulnerabilities based on the actual things it was able to do, the attacks it was able to perform with those vulnerabilities. As I said before, you can have a 9.8 or even a ten on a little kiosk that's not on the domain — it's on the network, but it's not on the domain and doesn't have anything interesting on it. But if I have a 4.7 that allows me to get domain admin, that's a lot more important to fix than that ten over here. Well, in this case, here's Zerologon.

So it's a problem too. But as we see, this one was actually mitigated — it was mitigated by the EDR on the machine. So Pentera wasn't able to actually leverage the vulnerability to go far, which is why it's rated low here.
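The prioritization logic Nelson is describing — rank a finding by what an attacker actually achieved with it, with raw CVSS only as a tiebreaker — can be sketched roughly like this. The field names, achievement labels, and weights below are illustrative, not Pentera's actual schema:

```python
# Rough sketch: rank findings by demonstrated attack impact, not raw CVSS.
# Field names and weights are illustrative, not Pentera's real schema.

ACHIEVEMENT_WEIGHT = {
    "domain_admin": 10.0,    # full domain compromise trumps everything
    "lateral_movement": 6.0,
    "credential_cracked": 4.0,
    "none": 0.0,             # scanner hit that led nowhere
}

def remediation_priority(finding):
    """Blend CVSS with what the attack engine actually did with the flaw."""
    impact = ACHIEVEMENT_WEIGHT.get(finding.get("achievement", "none"), 0.0)
    # A mitigated finding (e.g. blocked by EDR) contributes no attack impact.
    if finding.get("mitigated"):
        impact = 0.0
    # Demonstrated impact dominates; CVSS only breaks ties.
    return impact * 10 + finding.get("cvss", 0.0)

findings = [
    {"name": "Zerologon on kiosk", "cvss": 10.0, "achievement": "none", "mitigated": True},
    {"name": "NetNTLM capture",    "cvss": 4.7,  "achievement": "domain_admin"},
]
ranked = sorted(findings, key=remediation_priority, reverse=True)
# The 4.7 that yielded domain admin outranks the mitigated 10.0.
```

The point of the design is that the sort key is dominated by observed impact, so a critical CVSS score on a dead-end host never outranks a modest score that led somewhere.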

The next step is to understand the attacks themselves. Pentera has what we call the attack maps — these graphs that Pentera creates (and PlexTrac, incidentally, can create these types of graphs as well, which is pretty cool). Pentera will show you the vulnerabilities that were found: the little nodes with the broken shields are vulnerabilities, and the nodes with the trophies are achievements, or successful attacks. So because of this vulnerability — the 4.7 — Pentera was able to capture a NetNTLM credential. It was then able to do a man-in-the-middle attack, or a relay attack — or actually, sorry, in this case it was able to crack the NetNTLM credential directly, so it didn't even have to do the relay attack. Here is the credential.

Here's the password that Pentera cracked — so it does have a built-in password cracker. The next step is to validate the credential. The fact that it was able to crack it doesn't mean it's still valid, right? In the meantime, while Pentera was trying to crack it, the user could have changed the password. But Pentera found the domain controller automatically — you didn't have to point it at it or anything like that — and it validated the credential. And then finally it logs the achievement, because it now knows that this user is a domain admin. And Pentera can keep going from here.

Now, from here, if we wanted to, we could allow Pentera to sync with the domain controller, download all the user hashes, and crack them locally — Pentera is able to do this. Again, none of this should be new to you; these are things that are performed during pen tests. But Pentera is doing this autonomously, and more importantly, it can cover a lot more ground in less time than a pen tester could. And you can run this as often as you want. You're not paying per number of runs, so you could run this thing every day if you wanted to, and you wouldn't be charged any more for it. So there are a bunch of other achievements here that Pentera would go through and find.

It finds web application vulnerabilities — standard stuff like extracting data from a machine, or extracting NTLM hashes from the SAM database, things like that. And Pentera can then use this information to move laterally in the environment: it can use the credentials it got from one machine to do a pass-the-hash attack, for example, against other machines. It can, as I mentioned, try to connect to the domain controller. It can run ransomware emulation, where you choose a type of ransomware and Pentera uses the same techniques to encrypt some files on the machine — all of that in a safe way. Of course, we encrypt the files, but we don't actually replace the original files with the encrypted ones, so it's all run safely. Everything that Pentera does is aligned to the MITRE ATT&CK framework.

So, as you can see here, Pentera will show the techniques being used, the sub-techniques being used, and it even has a full MITRE tab that lets you see an overlay of the findings against the MITRE ATT&CK matrix. At the end, Pentera will generate a report with all the information about the findings. Now, the problem is: what if you want to customize this report? Or, like the client I mentioned before, what if you have multiple instances of Pentera deployed in different environments — because it just makes more business sense for your company to do it that way — and you want to consolidate that data? Pentera gives you what I think is a really nice report, but it doesn't give you the power to modify it and align it to your own needs. So that's where you need a tool that allows you to aggregate the data. Or, as I mentioned, you might want to augment this with data from a vulnerability scanner, or from a pen test that was performed six months ago. And that's where a tool like PlexTrac comes in.
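Consolidating exports from multiple Pentera instances (or mixing in scanner data) is, at its core, a merge-and-dedupe job. A minimal sketch, assuming each export is simply a JSON list of finding dicts with `host` and `title` keys — an assumption for illustration, not either vendor's real format:

```python
import json
from pathlib import Path

def load_findings(path):
    """Read one exported findings file (assumed: a JSON list of dicts)."""
    return json.loads(Path(path).read_text())

def consolidate(paths):
    """Merge findings from several exports, deduping on (host, title)."""
    merged = {}
    for path in paths:
        for f in load_findings(path):
            key = (f.get("host"), f.get("title"))
            # Keep the first copy; record every source that reported it.
            merged.setdefault(key, {**f, "sources": []})
            merged[key]["sources"].append(str(path))
    return list(merged.values())

# e.g. consolidate(["pentera_dmz.json", "pentera_hq.json", "scanner.json"])
```

Tracking the `sources` list per finding is what lets a consolidated report still answer "which instance or scanner saw this?"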

With that, I’ll stop sharing and I’ll come off mute.

Thanks for the setup, man. I appreciate that. And that was awesome. And I always love looking at that interface. And it is beautiful. It is absolutely beautiful. Well, thanks.

Well, now I'm going to take it from here. And what I mean by that is: great, we've got the data from Pentera — if we do want to do some of those things Nelson was talking about, how can we do that? Just like Nelson offered his alibi there, this is not going to be a full PlexTrac demo. If you are interested in getting a full PlexTrac demo, please hit us up — I think any of those channels will work, but hit our website and somebody will reach out to you. But PlexTrac is a platform that is designed for, and excels at, bringing together very different sources of information. So I'm just going to demonstrate how the workflow might look — and there are many different workflows you could use if you are, in fact, using PlexTrac in conjunction with Pentera.

So in PlexTrac, we organize all of our different sources into buckets of data that we call clients. You can change that name; it's just a way of organizing your data. And PlexTrac works great whether you are an enterprise — an organization defending your home turf — or a third-party service provider. So here I've got a couple of different ways you might organize your data. If you are a service provider, you can create these different buckets to keep the information segregated.

That also allows you, by the way, to give your clients electronic access to the results of their data inside the platform. You can tag it with, hey, what type of relationship do we have — things like that. Or if you're an enterprise user, you can have it broken out in any way that makes sense for you. Maybe you have it by logical network segments; maybe you've got it broken out by applications, if you're doing AppSec work with PlexTrac. It's really up to you. I've created a client called the Toronto Headquarters DMZ, where I am going to bring in some Pentera data. So, real simple, here's how I can do that.

I can just create a quick report, and I'm going to call this "Pentera data." Obviously, we'd give it a more logical name. But one of the things Nelson set up for me — he was talking about how you can enrich your reporting, right? And one of the ways you can do that is not by sitting here and typing out a bunch of words, but by having reusable content. So I have created a simple Pentera introduction — a narrative that I want to include with each one of my Pentera reports. That includes things like narratives, but it also gives me the opportunity to add some metadata to this report. Now, I didn't show you this, but before I selected this template, these fields didn't exist.

They just appeared automatically once I chose that template. So, what's the target location? This is the Toronto DMZ — and apologies to any friends in Toronto if I've misspelled your wonderful city's name. What's the IP range? We'll go with 10.10.0.0/24. And who is the tester? It's going to be Nelson today, so we'll use him. Great.

So once I get my Zoom controls out of the way, we will now submit that, and it shows the report. What you'll notice is that I've automatically got some data that came in here — this data is just a narrative that I may want to include. And there are these placeholders, these short codes, here. The nice thing is that, based on the data I entered into those fields, we can replace those with one click. I forgot to put a start date in here, so we'll say that we did this on the 29th of March. Great.

Report updated. So now — before we've even worried about bringing in any of the data, by the way — you can modify this. You can add any additional narratives you want. I only brought in one, but you can have an entire database of reusable narratives in PlexTrac. Go ahead, Nelson. — Sorry, your mic is a little weird. Someone's mentioned it in chat as well; we can hear you, but it's a little robotic.

Got you. Tell you what — why don't you tell a joke while I cycle my mic? — Well, instead, there was a question that came in: does PlexTrac offer a scheduling calendar to see which projects are assigned to which people? We have ways of seeing which findings are assigned, but we are not a project management tool. — How's my audio now, Nelson? — Same problem. One thing I'll tell you, though, that I find amazing about PlexTrac:

it does have integrations with ServiceNow, Jira, and these other tools. So that's something I have a lot of clients using — these integrations — because now you can assign tickets and do this work in the tools that are usually already present at a company. — Testing one more time. If not, I'm just going to do a hard reset on this thing. — Unfortunately, no, it's still the same. I have one more joke for you.


All right, how about now? — No, that's good. — All right, well, hey, I appreciate you hopping in there; let me know if that happens again. Getting back to where I was: I'm going to go out here to my readout view. We haven't brought in any of the finding data from Pentera yet, but just to show you how easy it is to enrich, I'm going to go ahead and replace those placeholders. And now you're going to see that we've brought in the date, the target location, the IP range, and who the tester was. So all of the administrative work associated with enriching your reporting — whatever you want to do, however much or little, however many different sections of text you want to have — you can really automate that whole process and enrich the data. But let's get to the point of bringing in the Pentera data.
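The short-code replacement shown here is ordinary template substitution: a metadata field value filled into a placeholder inside a reusable narrative. A toy version — the `{{name}}` placeholder syntax is illustrative, not PlexTrac's actual short-code format:

```python
import re

def fill_short_codes(narrative, fields):
    """Replace {{name}} placeholders with report metadata in one pass."""
    def sub(match):
        key = match.group(1)
        # Leave unknown placeholders intact (e.g. a missing start date).
        return str(fields.get(key, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", sub, narrative)

text = "Testing of {{target_location}} ({{ip_range}}) by {{tester}}."
fields = {"target_location": "Toronto DMZ", "ip_range": "10.10.0.0/24",
          "tester": "Nelson"}
print(fill_short_codes(text, fields))
# Testing of Toronto DMZ (10.10.0.0/24) by Nelson.
```

Leaving unknown placeholders untouched, rather than erasing them, is what makes a forgotten field (like the missing start date in the demo) visible so it can be filled in later.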

Give me a thumbs up if I'm still sounding good, Nelson. Awesome. All right, how easy is it to get the data in? I'm just going to go to Add Findings. There are all sorts of different methods of getting data into PlexTrac, but I'm going to use file imports, because Nelson was kind enough to share with me the file he generated from the demonstration he just did. We have a ton of different supported tools we can bring in, but I'm going to come down here and find Pentera.

I’ll make sure that I scroll past any competitors of yours real fast there, Nelson.

And now I'm just going to take the JSON file that he gave me. And I have the opportunity, by the way, before I bring the data in — as we're parsing it out — to enrich that data. Let's say I wanted to add a tag indicating that this data came from Pentera; I can do that. I could also tag this with the DMZ tag. I always joke with people that we don't charge by the tag — they're free, so use them often. Great.

We are bringing in the data. I get this nice little indication that it's happening in the background. Nelson was kind enough to give me a fairly small sample, so it happened almost instantaneously. Everything's got a nice orange hue to it, and the reason why is that we bring this in with a draft status. And so — hey, "the co-host has asked you to start your video." Am I not showing video, Nelson?

Sorry. No, you’re not.

I am having all sorts of — now we're good, we're good. All right. I was afraid of what might happen if I just accepted "start sharing video," so I figured I'd take that down. All right, I'm going to blame this on you, Nelson. I don't know why, but I'm going to.

All right, got video now. Outstanding. So you can see that all of the data we brought in from the Pentera results is in this nice orange hue — it's in a draft status. And this is really just a workflow management tool. As you go through these and you want to internally validate or review them, you can click on each one and take a look at what's there. And once you have reviewed one or more of them, you can say, yes, I want to move these into a published status.

It's just a nice little visual indicator for you that you have, in fact, validated that finding. If you don't want to do that, you don't have to use the feature at all — it's just there and available for you. One thing Nelson mentioned is that, yes, Pentera detects vulnerabilities, but it also strings them together into those achievements. We are bringing both of those types of data into PlexTrac. And how do I see which is which? Well, let me just take a look at my vulnerabilities. One of the things we do on import is automatically tag each data structure as to whether it's a vulnerability, and then you can filter to those — or maybe I just want to see my achievements.

And I can do that as well. So now I've got my achievements here. Now, I can further enrich these, because I can edit them. Maybe I like what Pentera gave me, but upon further review and digging deeper — maybe I'm doing manual validation in my environment — I can come in here and tailor the recommendations for what we want to do in my environment. So before I hand this off to a remediator, I can give them specifically what I want them to do. All sorts of things you can do.
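The import flow described above — tag each record as a vulnerability or an achievement on the way in, land everything in draft status, then filter by tag — is easy to picture in code. The record shape here is a guess at what a Pentera JSON export might contain, not the real schema:

```python
def import_findings(records, extra_tags=()):
    """Tag each record by type on import and land it in draft status."""
    imported = []
    for rec in records:
        kind = "achievement" if rec.get("achievement") else "vulnerability"
        imported.append({
            **rec,
            "status": "draft",            # the "orange hue" until reviewed
            "tags": [kind, *extra_tags],  # tags are free, so use them often
        })
    return imported

def by_tag(findings, tag):
    """Filtered view: e.g. only achievements, or only the DMZ import."""
    return [f for f in findings if tag in f["tags"]]

records = [{"title": "Domain admin via NetNTLM", "achievement": True},
           {"title": "Zerologon present"}]
findings = import_findings(records, extra_tags=("pentera", "dmz"))
```

With the type baked in as a tag at import time, "show me only achievements" or "show me everything from this source" is a single filter rather than a re-parse.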

I'm not going to turn this into a full PlexTrac demonstration, but I do want to talk a little more about the workflow and how it might work. So, okay, great — I'm going to go back to the vulnerabilities, because we've got things we actually want people to go fix in the environment, and I want someone to go out and do something about this. I can say: great, I would like to assign this — let's assign it to myself, because I don't want to spam anybody else. And now that I've assigned it, I can say that this is now in process. A notification goes out, and you're into that remediation workflow.

If I haven't done that — maybe I realize I'm not going to get around to fixing this anytime soon; I just don't have the resources, or it's not a high enough priority issue for me — I can leave it open but assign a sub-status of "remediation in FY24." So, once again, it's just a nice little workflow tool, with whatever notes you want to add. And the nice thing is that you can keep this running dialogue going every single time you come in here.

Maybe this is now in process and Nelson's got it — he's working on it now. We've got this running dialogue and tracking of our remediation lifecycle, and that's just for this one issue. So there are all sorts of things you can do in the platform to enrich the data, facilitate your remediation workflow, and track where you're at. But then, once again, you can also augment the findings if you want. I think I saw a vulnerability in here about deserialization.

If you've been curating your write-ups database — I probably don't have a write-up in here on deserialization; well, I've got something in here about a serialized object, but let's just pretend, put our imaginary hats on — I now have an additional write-up in my local repository that provides amplifying data. Or, once again, there are all sorts of other tools I can bring additional data in from, so I get that holistic look over time. So I'm going to stop the clicky buttons here. Thanks, all, for working with me through the inevitable technical issues.

And maybe at this point we'll bring it back together for a little bit of Q&A. So, Nelson, you've probably been taking a look at what's been coming in here.

So, I don't know — yeah, you were handling that, so I'm not sure if you saw: we had a question before about scheduling. I mentioned the integrations with these other tools as well. It's not just ingestion, right? PlexTrac can also push the data to other systems and things like that. I don't know if you want to talk about that.

Yeah, thank you so much for bringing that up. I am going to take back over the screen for just a second, because — great, the "Trac" in PlexTrac is about tracking your remediation, but maybe you've already got a solution in place that you use for your workflow management. So one of the things I didn't show you is that for any one of these tickets — I've only got Jira here; we also have ServiceNow — I could say, you know what, I want to push this as a task into my Jira integration test project. I can then create that ticket, and it will reach out and create the actual Jira ticket. Now you can see that we've got that linked ticket here, and it's actually a hyperlink: if I click it, I'm taken to my Jira instance, and I can continue my workflow there. And it is bidirectional.

I could spend half an hour talking about the many different ways you can configure your Jira-to-PlexTrac integration. If you've seen PlexTrac in the past: one thing we did about six months ago is dramatically improve the Jira integration capabilities to give you full customization. People like to customize their Jira projects and their issue types, and we now support all of that customization. So, yes, we can push data to ticketing systems. We generally don't push data to security solutions.

We're the vacuum cleaner that sucks those things in and allows you to consolidate them. And we are not an orchestration platform; we are a reporting, remediation, and analytics platform. So that's where we fit in the ecosystem.
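Pushing a finding into Jira as a task boils down to building an issue payload and POSTing it to Jira's REST API (`POST /rest/api/2/issue`). A hedged sketch — the finding fields and project key here are invented for illustration, and a real integration would also handle authentication, custom fields, and the bidirectional sync:

```python
def jira_issue_payload(finding, project_key="ITEST"):
    """Build the JSON body for POST /rest/api/2/issue (Jira's REST API).

    The `finding` fields are illustrative; the project key and issue type
    would come from whatever customization the Jira admin has set up.
    """
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Task"},
            "summary": finding["title"],
            "description": finding.get("recommendation", ""),
        }
    }

# A real push would be roughly:
#   requests.post(f"{jira_url}/rest/api/2/issue",
#                 json=jira_issue_payload(finding), auth=(user, api_token))
# and the returned issue key becomes the linked, clickable ticket.
```

Keeping the payload builder separate from the HTTP call is also what makes it easy to support per-project customization: only the builder changes when the Jira admin renames issue types or adds required fields.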

All right, let's see, what else do we have? Nelson, I'm going to go ahead and stop sharing again here. It's taking me just a second to get to my — so, Nelson, I saw a question here: how does Pentera handle retests? Is there anything special about them? You mentioned before that you don't charge by the test — what does that process look like? — Yeah, that's a great question. So on a retest, Pentera won't leverage information that it had before to do the new one.

The idea is that it starts from scratch. Now, the report itself is going to give you a score, and that score might change — there's a little graph that shows, oh, you were a C on the last one and then went up to a B, whatever it was. And we are working on delta reports as well, where you actually see, well, this path was here before, but it's not here anymore. And — I don't want to keep saying the same thing — we have a lot of clients using PlexTrac for those types of things, where they can visualize the difference against what was imported before. But basically, from the Pentera perspective: you can certainly reuse the data obtained from the last run if you want to, but Pentera by itself won't do that. You can tell it, well, I want to use this NTLM hash that was obtained last time to continue the attack.

But the idea is that Pentera won't do that by default — it doesn't add a lot of value, because you want to see things with fresh eyes every single time. And where PlexTrac fits into that is that you can bring subsequent engagement results into the same report. So here's how we would handle it. I showed you the status indicators, right? Let's say you had gone through and cleaned up some of those vulnerabilities. If you then import the results of the subsequent Pentera engagement into a PlexTrac report, and any of those vulnerabilities you marked as closed are re-detected, we're going to open those back up for you. Now, one thing we don't do — and we think it's smart — is close things just because they weren't detected. Maybe something wasn't network-accessible; there's a host of reasons why a vulnerability may still exist in your environment and was simply offline at that particular time.
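The retest behavior described here is an asymmetric merge rule: re-detection reopens a closed finding, but absence never auto-closes one. A sketch of that rule, with assumed field names and statuses:

```python
def apply_retest(existing, redetected_keys):
    """Merge a new scan's results into the tracked findings.

    Asymmetric on purpose: re-detection reopens a closed finding,
    but *absence* never closes one -- the host may simply have been
    offline or unreachable during this particular run.
    """
    for key, finding in existing.items():
        if key in redetected_keys and finding["status"] == "closed":
            finding["status"] = "open"
            finding["notes"].append("Re-detected on retest; reopened.")
        # No else-branch that closes on absence -- by design.
    return existing

tracked = {
    ("dc01", "NetNTLM relay"):    {"status": "closed", "notes": []},
    ("web01", "Deserialization"): {"status": "open",   "notes": []},
}
apply_retest(tracked, {("dc01", "NetNTLM relay")})
# dc01's finding is open again; web01's stays open even though it
# wasn't re-detected this run.
```

The deliberate omission of an auto-close branch is the whole point: closing a finding stays a human decision, while reopening can safely be automated.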

There's a question there from a good friend of mine, Randy, about Pentera using agents. He asked: does Pentera use agents that can be deployed through the enterprise? We do not. The main difference between us and breach-and-attack-simulation tools is exactly that: we do not use agents.

Again, the way to think about Pentera is that it's a pen tester with a laptop who came in, connected to the network, and starts the attacks from there. That gives us advantages and disadvantages compared to BAS — they're just two different tools. The advantage, again: you don't have to deploy an agent, and it's a lot more realistic, because hopefully you're not deploying agents on behalf of an attacker before they decide to attack your network. But at the same time, you lose some visibility, because the breach-and-attack-simulation tool, since it has its hooks in the machine, knows exactly which component stopped the attack. It can tell you your EDR did this specific thing and detected this at this part of the attack, and all of that.

Pentera, of course, doesn't know that — it's attacking from the outside. The way I like to put it is that breach and attack simulation is more like a probe tool, a bottom-up type of approach, while Pentera is a top-down approach: it's looking at the network as a whole.

It's looking at things more holistically. And I honestly think there's a place for both of them in a good security stack. — Awesome. Well, Nelson, we have hit most of the questions. There are a few more, but they're kind of narrow and focused, and we're almost out of time.

But we've got the email addresses of those folks, so we'll make sure we follow up. Man, Nelson, this has been a blast.

I'm happy to have shared the stage with you here, and I'm thankful for the partnership that we have with Pentera now. — Thank you for your time and for doing this awesome presentation. You're a braver man than I am, because we did everything live, and I broke my mic in the process somehow. I don't know — the live demo gods, man.

You have to make a sacrifice. I will sacrifice a bigger goat next time.

Awesome. Well, thank you all so much for sticking with us here. We had an amazing number of people actually stick it out to the end.

Thanks again — get out there and defend your networks and your applications.