VIDEO: Hack Your Pentesting Routine: Secrets for Success
Series: On-Demand Webinars & Highlights
Category: Pentesting, Red Teaming, Thought Leadership

Transcript

Well, I think we've got a good number of folks here, Joe. Do we want to introduce ourselves and get flipping and slapping to it? Right. So, Nick, I'm going to want to know more than just your bio. I'm going to want to know how you came up with Hacker in Residence as your title. That's fair. That's fair. Well, first off, who I am and what I'm about. My name is Nick Popovich. I am a Hacker in Residence for PlexTrac. I've spent the lion's share of my career as a practitioner. In 2009, I started pen testing consultatively as a practitioner, and then moved into practice leadership as a practice manager and eventually a practice director. I also got to spend time on a Fortune 500 red team as an operator. Before that, I started in the Army Signal Corps, in systems administration and systems analyst type roles. For some reason, I just remember the story of thinking, back in 2004, 2005, I said, I want to get better at this computer stuff. And I said, I'm going to teach myself Linux. And I thought that Linux was like a programming language. I remember thinking it was like C, and I bought a Linux, and I was like, oh, it's an operating system. And I remember walking around telling people, yeah, I'm going to learn the Linux language. So that's a little bit of cringe. Hacker in Residence: when I chose that job title, it was basically because I wanted to capture the idea of what my role is at PlexTrac, and that's trying to bring the practitioner's perspective and the hacker perspective into the things that we do from a business standpoint, and kind of be the voice of the hacker. So being able to communicate the hacker perspective internally, and then similarly communicate externally with the hacker community I've been in for a number of years.
The story behind my handle, my Twitter handle, is Pipefish. Some people ask me, where did Pipefish come from? And that's just because whenever I type my last name, Popovich, in Microsoft Word, it's like, did you mean Pipefish? So I said, if Bill Gates thinks my name should be Pipefish, then fine, it'll be Pipefish. Sure. What about yourself, Joe? I'm going to cozy up to Bill Gates anyway. Career pen tester is kind of only part of the truth. I've had multiple careers: I've been a sales guy, I've been an optician. I used to make eyeglasses for a living, but after making eyeglasses, I went into computers. I was a system administrator, a Windows system administrator, and my stuff kept getting broken into. Back then, they'd pop our SMTP server, put in an FTP server, and host zero-day warez: movies, videos, music, documents, usually in a language that I couldn't benefit from. So that was part of the downside to getting hacked. Our servers would suddenly... I'd get a text message on my pager saying we ran out of disk space, and, crap, yes, they popped us again. I didn't know how they were doing it. So that's what got me into it: trying to figure out how are they doing this, and how can we be checking for it? I've been doing this for 25 years. I like how you mentioned the on-call pager. Taking it back. Yeah, it was before the BlackBerry. Isn't that what we use as a timeline in IT? It's like before BlackBerry, after BlackBerry. Yeah. Could be. Absolutely. So from there, I got into regulatory testing, and I've managed a number of different pen test practices, built them as a director, and built a lot of different methodologies for testing a lot of different scenarios. So that's how I came to be the PlexTrac product evangelist. I mean, this is the thing that would have saved me hours and hours of time and hassle, no doubt, and I would have delivered better stuff to my customers. So, no doubt. Yeah.
I'm really happy about it. That's my role here: just to sing the praises and look for new and exciting use cases. Nice. For the group, why don't we talk about how we're going to frame this conversation up and what we're going to be covering. Look at that: greetings from rainy France. Nice, welcome, Armando. So, yeah. In the world of pen testing, there's a lot of different approaches, a lot of different tools, a lot of different techniques, but a lot of things remain the same in terms of how we go about doing a pen test and dealing with the phases of the lifecycle and the pain points and the tricks, the common threads. And our goal today is to kind of get through what these look like and share our experience, how we saw it in the field and what our customers have taught us about their pen testing and their reporting cycles here at PlexTrac. Did I miss anything, Nick? No, I think that makes sense. Just to add a little flavor to it, I think the idea is we're going to be coming at it from multiple facets: one from kind of a leadership and management perspective in an organization; then the practitioner's perspective of techniques and tooling that can help with efficiencies; and then really talking about things from our perspective, experientially, what we've seen that can hopefully be added to attendees' toolkits, to help them execute more efficiently, perhaps evolve. And then we're probably going to have an opportunity to ask for insight into how folks are doing. Maybe they have tools, techniques, and tips they'd like to share with us. And so on that note, as we're going through, if we're talking about something that resonates with you and you have a technique or a tool that you've discovered that has made you more efficient, go ahead and throw that into the Q&A, even if it's not something that we actually address in the moment.
We can save that, take it and chew on it, and maybe work with that data in a future webinar or discussion point. And it's also just something that I'd be interested to hear. I want to hear from the community. Joe and I have our perspective, that voodoo that we do, but I'd like to hear from the community as well. So, yeah, I think that's it. We can dive into it, Joe. That's perfect. So let's go ahead and jump in then to the next slide. Actually, before we get there, I wanted to just kind of level-set on what a pen test is, because even up until a couple of months ago, I was getting calls from C-levels, I was getting calls from folks looking to do pen testing, looking to do, for example: we want a red team. There was a TED talk a few months back about red teams, and now everybody wants a red team. We need a red team. And you're like, okay, well, tell me about your security maturity. Where are you in that lifecycle of your testing? And they're like, well, we're doing some vulnerability scans, but we really want a red team. It's like, okay, you're not ready yet. So I think we can all agree that pen testing has a variety of different flavors now. A lot of what you're seeing on the screen in the pen testing lifecycle might be the same regardless of whether you're doing a vulnerability scan or the tip-of-the-spear adversarial simulation, where you've hired somebody to be a bespoke APT that's going to just go after your company for nine months. Most of these things are the same, but for the different types of tests, I think we agreed that some of the steps would include something like... let me get rid of that screen. Hey, Nick, there you are. So some of the typical pen test categories would be starting with vulnerability scanning and then moving up to pen testing, where you're noisy. It's usually short. Maybe it's an assumed breach.
It's pretty well structured, and you're not really trying to be covert. Then you start to move into covert work, and I think that's typically where we talk about red teaming. You can't have a red team test unless you've got a blue team, because the idea of a red team test is to strengthen your blue team. Your blue team, or the defenders, the SOC, is sitting back there at the other end trying to figure out what you're doing, making sure that any attacks that do occur, they would be able to detect. And then after (oh, my dog has found the cookies, great), after you're red teaming and blue teaming for a while, then you can do things like adversarial emulation, where you're pretending to be APT23 or FIN-whatever, and then adversarial simulation, where it's just: do what you do. Take nine months, take six months, take a year, whatever, and hack us like it was the real world. No, I'm not giving you a cookie. I'm sorry. That's just how it works. So have I pretty much nailed that? Yeah. I think there's also something to note: this is our perspective, and organizationally and across different industries, different organizations are going to have their own nomenclature for what they call things. I know that some places have the idea of targeted testing and surreptitious pen testing and those types of things. So these are broad categories, and that actually dives in well. I think we're talking about the first phase of pen testing, in that when we talk about establishing a successful paradigm and methodology, it starts before a single packet hits the wire. And that's in the setup phase. So level-setting on this terminology, and establishing what type of engagement and the McDonald's menu, if you will, of what those terms mean with individual clients, absolutely should be a part of your battle plan in the setup phase. So I think, A, you're spot on with kind of the broad industry-accepted standards, and B, that moves really well into this idea of the setup phase.
So do you want to talk a bit, Joe, about, from what you've seen, especially in your practice leadership days, some of the pain points in setting up engagements and maybe some pro tips? Absolutely. So when you Google pen test phases, you're usually going to hit discovery, enumeration, detection, post-exploitation. And those are the phases of a pen test. But in my mind, there are a lot more, and that begins with the setup. And the setup begins with that initial conversation you have level-setting expectations with your customer. You can't over-communicate in the setup phase. Granted, if you're the SME and you're working with the sales guy, yeah, you can say too much, and sometimes you can talk yourself out of the sale. But after the sale is closed, this is the point where you need to bring out the full and complete communication suite, everything you've got. And you've got to engage all of the different stakeholders in the process. Those will include your customer's points of contact: making sure that you know who they are, that you've got phone numbers for every single one of them, that you've got the correct email addresses. And then within your own organization: who's going to be part of this? Who's your project manager? Who are the operators? Who's the tech lead? Who's going to be the QA person? If you're working with multiple pen testers, you've got to set the groundwork, and it's better to put it in writing: who's going to be in charge? I've had engagements where I had two really good guys on the engagement, but they spent half the time fighting over who was going to be in charge. So I just call that out early and up front, so that, A, you don't have any bruised egos, and you don't have any problems with it. But get everything documented, get it into the statement of work, get it into whatever documentation you use to communicate to the operators, because it's really important to make sure that it's clear.
And ideally, you have a single source of truth. Now, in my career, back in the day, and I tell this story frequently because it's one of the most embarrassing things to ever happen to me during the setup process: I got my job sheet. My job sheet said to fly out to some customer and do their internal pen test. And I get there, and I've got my bag over my shoulder and a big smiley face. It's early in the morning, it's the East Coast. And I said, I'm here to do your pen test. And they went, no, you're not. Okay, it's going to be one of those customers. They're fighting the process. It's going to be painful. No, I'm here, let me explain how this works. And they politely listened to me for about three minutes and said, no, you don't understand. We canceled our pen test. We're doing it in a month. And as the blood drained out of me, I apologized, made my way out of the room, and frantically called my project manager, to discover that the job sheet I had been working from had been superseded: she had emailed it to me, but then she had emailed me a new one. Because we didn't have a centralized source of truth, we were just shooting it back and forth over email. So I flew out to the wrong place, then got caught in a blizzard and couldn't come back. And every Christmas, I had the founders giving me grief about that failure. But it can happen. Brutal. That is brutal. I think the biggest piece is making sure that you have a single source of truth for that, whether it's going to be SharePoint or Google Drive. You can use PlexTrac for this: with the assessment modules, you can create questionnaires and create places to store this, so your operators have the information at their fingertips, because you don't want them to have to go searching around for it. I like it. Otherwise you end up in the wrong state, in the wrong place, at the wrong time.
So, to sum up some pain points and successes for the setup phase before we move into new ones: centralize. Make sure to level-set. Communicate early, communicate often. Don't ever assume on scope. Communicate well. Ensure that the scope is defined. Ensure everybody has good communication, and don't do it haphazardly. Ensure you have a process established that's documented and repeatable, and drive on. Oh yeah, I'm seeing some good stuff coming in here. So what are your thoughts on moving into the next phase? We've got our statement of work. We've got rules of engagement. We communicate well. We make sure we're working from the right one, and that if we have any questions about the scope, we're going back to the client, we're going to our source of truth. So now we've got our spicy packets, we've got our spicy hacker packets, and we want to spread some spicy hacker packets on the wire. What type of stuff are we looking at for the discovery phase? And I've had some thoughts as well on that. You kind of nailed it when you talked about scope. A lot of what I'll do are things that work even during the setup process. There are a lot of things I can do without throwing a packet at the client. I can kind of verify scope, so we can use tools out there like Shodan or censys.io or urlscan.io. Now, these are services that have already thrown the packets at the customer, but I can use what they told me to look at and verify before I get to the actual discovery phase, because that may save me some time. I can do some DNS enumeration or subdomain enumeration during the setup process; it doesn't throw any packets at the customer. If that's not their primary DNS, or I'm using a different DNS server, they won't know, they won't see. I can validate. And then when I get to that discovery phase, I'm already halfway there. What about you? Validating scope is huge, man. Validating scope is so good.
How many times have we been on an engagement where, during some scope validation practices, you see some websites or some banner pages that make you go, hmm. You see a whole block of IPs that are the same, and then you have a non-contiguous block, and you look at it and go back to the client, and they say, oh, wow, yeah, we don't have that IP space. We haven't had that for three years. And you're like, I'm glad I checked, because I've been on the other end of that. At the end of the test, they call you back and say, we don't own those IPs. And you pause and go, neat. Yeah, our legal will talk to your legal. Thanks for sharing that with us. Now, on the other hand... no, go ahead. I've actually had one of the employees come to me and say, hey, did they tell you about the new office we opened up just down the street? You know what? They didn't. So thanks for sharing that. Sometimes the reason scope is wrong is because they don't want you to know, and sometimes it's because they don't know. Asset management is hard. Let's face it. I think part of the idea here, too, is that a lot of these phases are not only iterative, but you need to keep doing them and keep iterating as you get new data. So when you discover assets, when you're expanding the attack surface... I think the discovery phase is one of the most important. Discovery is identifying the viable attack surface: what hosts are live. And also, by the way, my background is kind of a traditional netpen background. I've got some appsec chops, but I think a lot of these phases and a lot of the pro tips really translate, regardless of the perspective, internal or external, regardless of apps versus not. So I will say that some of my terminology is going to sound a little net-penny, and that's okay. I think you can translate it well.
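The scope-validation habit described here can be partly automated before a single packet leaves your box. Below is a minimal sketch using Python's standard ipaddress module, assuming scope arrives as a list of strings; the function name and data shape are illustrative, not something from the webinar:

```python
import ipaddress

def validate_scope(entries):
    """Split scope entries into parseable networks and suspect typos.

    Catches the classic copy-paste errors: commas instead of periods,
    stray spaces, malformed CIDR masks.
    """
    valid, suspect = [], []
    for raw in entries:
        cleaned = raw.strip()
        try:
            # strict=False tolerates host bits set in a CIDR entry
            valid.append(ipaddress.ip_network(cleaned, strict=False))
        except ValueError:
            suspect.append(raw)
    return valid, suspect

# Example: one good block, one comma typo, one entry with a stray space
good, bad = validate_scope(["10.0.0.0/24", "192,168.1.0/24", "10.0.1. 5"])
```

Anything that lands in the suspect list goes straight back to the client for confirmation, which is exactly the "communicate, communicate, communicate" point.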
The idea is, as you're discovering what assets, what apps, what hosts are available, the point is to expand the attack surface from what's available within scope. And as you get new information... where I see some people falling down is they're blindly accepting the output of automated tools. They blast Nmap, they blast Masscan, they blast a tool. And I'm not anti-tool; tools are important. But the idea is, as you identify new subdomains, as you identify new assets, you go through discovery again to continue to find that attack surface. I think discovery continues throughout the entire lifecycle; you're just rediscovering new assets as you go. You can take what you're discovering in your web application testing: you go through Burp, and Burp will do target discovery for you. You can then take that over to censys.io and begin to build out additional IPs and additional assets. I think you just might have made the argument for why we should all have at least one junior on staff, because this is the sort of thing that you could hand off to the pen tester that you're trying to train and make better, which we've never been able to figure out how to really bill for. But I think that's a good thing. And if you establish the right habits in discovery and you become an expert in that, the rest of it will align and fall in. So here are some common tools that I like. I want to get a little bit more practical for folks. And if folks are out there, could they drop into the Q&A section what kind of tools and techniques they're using that they've found efficiencies in? Personally, I'm going to go through the transcript of this, and if it's in the Q&A, I'll be able to see it. The idea here is, I love utilizing tools like Amass and SecurityTrails, censys.io, Shodan.
Amass is nice because you can plug in all these different API keys and whatnot, but the idea is being able to expand the attack surface beyond just layer three, beyond IP addresses: finding those hostnames, finding that DNS information from Amass or from those different tools. I really like it. And there are newer tools out there, like Enrich for the Shodan project; I really like the Enrich tool. Project Discovery has got their Nuclei and Chaos. Those are tools. But what you need to do is really understand the mechanics of what you're doing so that you can tune those tools. Because, as I've told a number of consultants, you can't just blast an Nmap scan if you don't understand the output coming back. Network scanning is a science, but it's an art, too. What happens when every single port shows up as open on every single host? What happens in those cases? Right. Look at the tools that you're using, and make sure you understand them and are using them appropriately. I've seen some great tools coming in on the chat. That's good stuff. Cool. So as we wrap up discovery, do you have any last thoughts on the discovery phase before we move into enumeration, Joe? Only that I've been pronouncing it "a-mass" this whole time. Maybe it is. I've only ever read it. I think "a-mass" sounds much better, so thank you. I appreciate it. No, I think we've nailed discovery pretty well. We've got some good tips in the chat. I really liked that one from before; SpiderFoot is a good one. All right, real quick on pronunciation: is it "neesh" or "nitch"? I've always wondered. I say "nitch," but I'm going to say "neesh" too, then, from now on. Somebody put it in the comments and we're just going to read it that way. Brandon says it's "neesh." If we're fancy, we fancy. theHarvester. Yeah, these are some good tools coming in. Classic. Now we've got discovery, we validated scope, we've gone back to that again. Communicate, communicate, communicate.
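The "every single port shows up as open" situation usually means a firewall or load balancer is answering on the target's behalf. A hedged sketch of one way to flag such hosts once scan results have been parsed into counts; the input shape is an assumption for illustration, since real Nmap or Masscan output would need parsing first:

```python
def flag_firewall_artifacts(scan_results, threshold=0.95):
    """Flag hosts where nearly every probed port reported open.

    scan_results: dict mapping host -> (open_ports, probed_ports) counts.
    A host answering "open" on essentially everything is usually a
    middlebox answering for the target, not a real service footprint,
    and its results deserve manual verification rather than blind trust.
    """
    suspect = set()
    for host, (open_count, probed) in scan_results.items():
        if probed and open_count / probed >= threshold:
            suspect.add(host)
    return suspect
```

This is exactly the "don't blindly accept tool output" point: the tool isn't wrong, but the result needs interpretation.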
You've got to take what you learned, go back to your client, and communicate. They validated the scope; you're on your way. Enumeration. Very cool, right? Our third phase of the day is enumeration. What does that mean to you? So for me, discovery is what's out there; enumeration is determining its use: the ports, protocols, services, the applications, finding out the viable attack surface so that I can move on to identifying vulnerabilities in the next phase, detection. So, yeah, enumeration is again leveraging, at times, automation. Again, we're not anti-automation, but what we don't want to do is blindly rely on the output of automated scripts and tools. If you're blindly relying on it, you're not going to be able to troubleshoot, and you're not going to be able to trust the results. So leverage automation, leverage scripting and some manual methods, so you can go out there and really identify the services and the purpose and the use of whatever is under the scope of your assessment, be it an application, be it a network stack, be it people: enumerating the viable attack surface. And again, this is something that is iterative, rinse and repeat. As you enumerate, you're going to find new ports, protocols, services, websites, interesting things, and then discovering and enumerating from that subset is how you're going to find more. Because here's the reality from a perimeter pen test standpoint: if you could find it from a scanner just scanning a /24, somebody else would have already found it for you. Our job as professionals is to go in and find what the tooling and the scripts are not finding, and we do that. So just a couple of pro tips before I pass the ball over to you, Joe. Some of the tools that come to mind for me, and some of the most common things that I like to do on engagements, especially perimeter, attack-surface-type engagements.
Trying to find things means looking at DNS enumeration and vhost information, finding those names, finding hidden parameters with tools like Param Miner (Burp has an extension called Param Miner), using things like Gobuster and DirBuster and Dirb. And one of my favorites lately is ffuf, "Fuzz Faster U Fool," a really nice Go tool, I think, written threaded. But basically it's the ability, again, to go and utilize your expertise to identify viable attack surface, especially in applications. What are your thoughts on the enumeration phase? Or maybe, where have you seen enumeration go wrong? Where enumeration has gone wrong is when the client gives you a /8 and says, that's our network, good luck. Or, God help us, IPv6. So I did external and apps, but I really enjoyed the internal pen tests. Internal testing is juicy. It is juicy. Assumed breach: I'm somewhere on the network, now I'm looking. Enumeration for me in those cases tended to be more along the lines of: what is most important to me, what is going to be the lowest-hanging fruit? I can enumerate all 65,000 ports across a class C pretty easily. But when I get to a class B, or God forbid they've set it up with a class A, I've got to start to narrow down. So it's SMB, it's RDP. What are the ports and protocols that I know I can abuse up front to get me that next initial vector? I may be on the network, I may have managed somehow to plug into a jack, but I may not have creds. So that's my next thing to get. In order to do that, I need to understand what my available footprint looks like and enumerate it, and in a larger environment that meant having to focus on where I could get the best bang for the buck. So that's where my scripts were written. Masscan was the go-to tool, and Masscan is great. I've actually got a pro tip on that, from the gentleman who taught me these things.
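The "best bang for the buck" triage on big internal ranges comes down to arithmetic: count the addresses in scope, and sweep only a short priority port list rather than all 65k ports. A sketch of that calculation; the port choices and the masscan command shape below are illustrative assumptions, not a prescribed list:

```python
import ipaddress

# Ports that tend to yield initial footholds on internal tests:
# SMB, RDP, LDAP, WinRM, web. Purely an illustrative prioritization.
PRIORITY_PORTS = [445, 3389, 389, 5985, 80, 443]

def triage_command(cidrs, ports=PRIORITY_PORTS, rate=1000):
    """Build a masscan-style invocation covering only priority ports,
    rather than a full 65k-port sweep across a /16 or bigger."""
    total = sum(ipaddress.ip_network(c).num_addresses for c in cidrs)
    port_arg = ",".join(str(p) for p in ports)
    cmd = f"masscan {' '.join(cidrs)} -p{port_arg} --rate {rate}"
    return total, cmd

total, cmd = triage_command(["10.10.0.0/16"])
# A /16 is 65,536 addresses: a full-port sweep is ~4.3 billion probes,
# while the six priority ports keep it to ~393k.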
I think at times, especially in consultative pen testing, you begin to get a little episodic and you get stuck on the tooling. At times, even being an expert, even executing with excellence, you may come out... let's say you go on an engagement where you're meant to be a little more surreptitious, or they really don't want a vulnerability scan. Let's talk about an internal test. How do you find the available attack surface? Consultants are so stuck on, perhaps, Nmap or Nessus that if they don't have those, they get analysis paralysis and say, what do I do? So here are a couple of pro tips that I've learned, and much love to the hacker sage who taught me this, Valentine. An interesting idea is using Nmap, but with the list scan feature: tack-s, big L (-sL). It will list out all the hosts from the target specification you give it. And here's another quick pro tip: any time I would get scope from a client, I would run an Nmap list scan on it, because you know what that catches? That catches when they use a comma instead of a period. That catches a space. Because there have been times where I get to the end of the test: why didn't you test this? And, I don't know, I looked, and it was "192.16 8." So list scan is your friend, just to validate the scope that you've been provided. Make sure there are no copy-pasta errors in there. However, another little pro tip is using the DNS resolution. Without the tack-n switch (-n), Nmap will attempt to resolve. If you use capital R (-R), you can give it a DNS server, and it will attempt to resolve every IP. Now, this is noisy as all snot, so this is not for a red team or for trying to stay under the radar. But if you get a /16, or you want to slice it up, going at giant netblocks with the Nmap list scan and DNS resolution works.
If you have a DNS server, maybe via DHCP, anything that comes back with a name at one point was live on the network and registered in DNS. That's a nice way to whittle down the viable attack surface. A couple of other pro tips: use Nmap to scan the .1 and .255 addresses from all available RFC 1918 space, which whittles down the netblocks to go after. And abuse DNS, or rather use DNS as it should be used, to find the SRV records, the service records, for Active Directory. A lot of really cool pro tips to help whittle things down. Because when I first started doing tests that were a little more surreptitious, trying to be clandestine, not full red team but clandestine, I'd get there, and I was so stuck on Nessus output that when I didn't have it, I was like, I don't know what to do. Oh yeah, I would liken Nessus to the Hail Mary pass: I've got nothing else, so I throw Nessus up. It's so noisy, it's so slow. I preferred my operators weren't using it unless they really, really needed to. There's a lot else you can do. A pro tip back in discovery: go to censys.io. Let's say your domain is yahoo.com. Put in yahoo.local, yahoo.net, yahoo.org, all the other possible variants, so that you can see, with the .local and the .org, internal IP addresses that you wouldn't see with other tools, and that can help you validate scope. Have we beaten enumeration to death? I think so, man. You know what? Honestly, discovery and enumeration are so much the core and foundation of how the rest of the assessment goes. We'll probably move through the next phases a tad bit, I don't want to say quicker, but the point of why we camped out on enumeration and discovery is that they're your foundation. If you haven't discovered it and you haven't enumerated it, you can't really detect a flaw, and you can't exploit it. So let's talk briefly through detection of flaws, and you could call that vulnerability identification, whatever you want to call it.
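The three low-noise tricks above (.1/.255 sweeps across RFC 1918 space, Active Directory SRV lookups, and domain permutations for Censys-style searches) are all easy to script. A sketch with illustrative function names; the SRV names are the standard Active Directory ones, and the candidate lists would feed whatever scanner or search tool you actually use:

```python
import ipaddress

RFC1918 = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]

def sweep_targets(space=RFC1918):
    """Yield the .1 and .255 address of every /24 in the given space.
    Hosts answering here hint at live netblocks worth enumerating."""
    for block in space:
        for subnet in ipaddress.ip_network(block).subnets(new_prefix=24):
            yield str(subnet.network_address + 1)
            yield str(subnet.broadcast_address)

def ad_srv_names(domain):
    """Standard Active Directory SRV records to query for DCs and GCs."""
    return [
        f"_ldap._tcp.dc._msdcs.{domain}",
        f"_kerberos._tcp.{domain}",
        f"_gc._tcp.{domain}",
    ]

def domain_permutations(name, tlds=("com", "net", "org", "local")):
    """Candidate domains to pivot through certificate/search tools."""
    return [f"{name}.{t}" for t in tlds]
```

Each helper just produces target lists; the actual queries (Nmap, dig, Censys) stay in whatever tooling fits the engagement's noise budget.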
Let's talk through a little bit of your detection pro tips, Joe. Detection: that might be where things like Nmap's NSE scripts come in handy. You've got your IPs, and now what you're looking for is to take that and compare it against what's out there and what might be vulnerable. The vulners NSE script is useful for that, and Exploit-DB too. For most of us, unless we're at the adversarial simulation level, we're going to stand on the shoulders of giants, right? Yeah. We're not finding zero-days. Well, here's why, real quick. Let's think about it: as pen testers, we're supposed to be performing adversary actions, but there's something we really can't simulate, and that's time. Time is always against us, so there have to be concessions made. So, like Joe said, continue with the idea of identifying CVEs and standing on the shoulders of giants. I just wanted to quantify that: we've got 40 to 80 hours to do something that genuine attackers might have six months to a year, or an unlimited amount of time, to do. Yes. Metasploit just popped up in the chat. Completely. If you know the technologies or the stack that might be in place, then GitHub is your friend, right? Because there are going to be tools out there for POCs and other things. This might be part of your setup planning, where you've asked the customer: what are your technology stacks? So that all the way back in phase one you can begin to plan out the tools that you might need. Hey, they're in the cloud. Okay, what tools? Maybe I'm going to use NCC's Scout Suite to help me. So I'm going to want to have that in my toolkit way earlier than when I arrive on site or begin to do my discovery, enumeration, and detection. I think the idea we can take from the detection phase is that it's where you're going to be mapping and identifying flaws and configuration issues to try to take advantage of and exploit.
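Mapping enumerated services to known CVEs can be as simple as a lookup table while triaging. A toy sketch: the table below holds two real, well-known CVEs but is invented for illustration; a real workflow would pull from the vulners NSE output or an Exploit-DB mirror rather than a hardcoded dict:

```python
# Illustrative only: vsftpd 2.3.4's backdoor and an OpenSSH user
# enumeration issue, keyed by (product, version) from enumeration.
KNOWN_ISSUES = {
    ("vsftpd", "2.3.4"): ["CVE-2011-2523"],
    ("openssh", "7.2p1"): ["CVE-2016-6210"],
}

def map_findings(services):
    """services: list of (host, product, version) tuples from enumeration.
    Returns host -> candidate CVEs to chase in the detection phase."""
    hits = {}
    for host, product, version in services:
        cves = KNOWN_ISSUES.get((product.lower(), version))
        if cves:
            hits.setdefault(host, []).extend(cves)
    return hits
```

Even at this toy scale, the structure mirrors the point in the conversation: detection is mapping what you enumerated onto what is publicly known to be broken.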
So, those pro tips that Joe gave you. At this point there is an opportunity to rely on some automated vulnerability scan output, but then also go in and, like they taught you in Hacker 101, find the CVEs based on the services. Even if the version is the next level up, looking at old CVEs can help you understand the flaws that were in the app, so maybe they're still there. So now let's say, Joe, we found some flaws. We think we understand what we're going to do. Maybe we've communicated with the client and set up an exploitation window, if they are so inclined. Let's go into the exploitation phase. What pain points have you seen, and what pro tips make the exploitation phase go smoothly? Well, I think one of the things you just touched on was setting up the rules of engagement, right? Because some people don't want you exploiting. They'll call a pen test done at the point where you can say with confidence: you know what, I could do this. There's a new vulnerability out, the new Microsoft Windows RPC vuln that came out yesterday or the day before, which you know is just going to get us full access to a Windows box. So if it's safe enough, fantastic: exploitation, we're done, and then we can move on to post-exploitation. There are other exploitative techniques that may not be as safe. For example, brute-force password attacks against the domain. Who hasn't locked an entire domain out because you got their lockout policy wrong, and instead of being three tries, it was only one? Yeah, I've locked out an entire AD before. It's not cool. It happens. I think you raise a good point: the exploitation phase is also not just buffer overflows and actual exploits. It's the time to take active action to attempt to get the results an adversary would. So that's password spraying, password guessing, et cetera.
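The lockout horror story comes down to arithmetic: attempts per account must stay under the lockout threshold, and the bad-password counter only resets after the observation window passes. A hedged sketch of that budget math; the parameter names and the example policy values are illustrative:

```python
def spray_budget(lockout_threshold, observation_window_min, test_hours,
                 safety_margin=1):
    """How many password attempts per account fit in the test window
    without tripping lockout.

    The classic mistake: assuming a threshold of three when it's one,
    and locking out the whole domain. Keep safety_margin attempts in
    reserve, and wait a full observation window between rounds so the
    bad-password counter resets before the next guess.
    """
    per_round = max(lockout_threshold - safety_margin, 0)
    rounds = (test_hours * 60) // observation_window_min
    return per_round, int(rounds) * per_round

# e.g. threshold 5, 30-minute window, 8-hour day:
# 4 guesses per round, 16 rounds, so 64 attempts per account
```

Note that with a threshold of one, the budget correctly comes out to zero: spraying is simply off the table until the client confirms the real policy.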
That’s not technically an exploit against the service, but it’s definitely taking advantage of the expected use of the service in an exploitive fashion. So that’s also one of those arguments that people will get into at the bar at ShmooCon and yell at you about: that’s not an exploit. I would say an exploit is anything that could possibly change something on the system. I agree. And I think an exploit, or an exploitable condition, is also one wherein you can gain unauthorized access to systems and data. So, yeah, that’s more talking about semantics than the phase. Sure. So some of the things we see here: relying too much on what exists already, not being able to innovate and bypass AV and EDR. There are tools out there, like Donut; ScareCrow — the sRDI tool — is really amazing. And this is a little bit more advanced, but getting in there and being able to do your AMSI bypass and get your stuff so that you can bypass those endpoint detection controls, knowing that it’s going to get caught eventually. Hopefully they read your report and they go back to their vendor and say, hey, they popped this shell. It’s always that cat and mouse. But I think a lot of folks come in at the point of exploitation and really get frustrated. Maybe that’s where you go to the client, communicate, and say, hey, you know what, I’m going to spend a couple of hours, and if I can’t, where do you want this test to go? That constant communication — going back to the client, going back to your established communication protocols and saying: do you want me to sit here and spend the rest of my 30-some-odd hours trying to bypass your AV? Or do you want me to continue this test so you can drive more value from it? Go in and say, yeah, maybe whitelist my IP, maybe allow me to get a beacon or allow me to get some sort of C2. Because again, you may not have the time, and clients may not want that. But having that communication worked through is good practice.
I think it depends upon the purpose of my pen test. If I’m there to test your AV, if I’m there to test your endpoint, yeah, I’m not going to ask for a whitelisting or a bypass. But if you want me to find depth and width of vulnerabilities and weaknesses in your environment, then yeah, let’s not waste time on some things. Yes, leave it up, let me try — so we know that it wouldn’t just fall over if somebody breathes on it, right? But after that, maybe we need to move on and do something else. So what’s the hope? That you have a brain trust of people on staff that can build you sweet implants and get you bypassing AV. That’s always a great place to work, when you have that. But some consultancies don’t have somebody on staff who can build implants that get by everything. Well, we did — it wasn’t his official role, but you could always go to this one guy and he’d come back in 15 minutes with some crazy way of getting around whiz-bang. So what’s the difference then between exploitation and post exploitation? There you go. So once you’ve established an initial foothold — yeah, once you’ve established that initial foothold, the idea is to identify how you can move about and gain access to more systems and data, either laterally or vertically within the environment. It’s that show-and-demonstrate-impact piece. And again, this is a communication time with the client, establishing early on in the rules of engagement: how far do you want us to go down the rabbit hole? Do you want us to do full post exploitation, lateral movement, get as far as we can? Do you want us to demonstrate access to crown jewels? This is definitely something that you communicate with the target of your assessment early on. But yeah, post exploitation — that’s how far down the rabbit hole it goes, typically. Okay, do my tools change? Well, I’m wondering if my tools change — what changes other than permission? So we’ve got communication, consent, permission, all those aspects.
Am I changing my toolset, or am I just using the tools that I already have and going further with them? Yeah, I think it depends on your comfort level. Personally — and I think a lot of practitioners will agree — living off the land can become very advantageous: abusing internal systems, resources, tools, sites, applications and binaries that are accessible to you, with a hacker’s perspective. That is where there’s some sweet sauce that you can pepper all over the network. Then there’s uploading your interesting tools within memory and getting your beacon object files, perhaps, or your C# implants or different tools that may bypass certain detective controls. I know a lot of old-school Windows people are going to run those net commands — net view, all these different net commands. Well, a lot of these EDR products now have the ability to look at anomalous command usage and say, you know what? Across our enterprise, not many people are going to be typing net user, net group "domain admins", those types of things. So creating your different DLLs and different implants that can abuse maybe the Net API to gather that data, et cetera. But I’m a big proponent of living off the land as much as you can and abusing systems in a novel way that would be advantageous to an attacker. Okay, I just happened to see a good point in the chat: if there’s ever any change to a system or anything that you do, that’s why logging what you do is important. And yeah, I try to never change a client environment, but if there is any change, you’ve got to remove it, got to talk about it, got to make sure that they don’t find your account next year. Dropping a Meterpreter executable on a machine is still a change. Remember back in the day with Veil-Evasion? Loved Veil. I got like two and a half years out of Veil with the Python wrapped-up interpreter, but it also dropped temp files that, like, six months later show up on an AV again, and you’re like, yeah.
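The point about EDR products flagging anomalous net-command usage can be illustrated from the defender's side. A toy sketch of that kind of frequency analytic — the watch list, threshold, and log contents are made-up assumptions, not any vendor's actual logic:

```python
from collections import Counter

# Commands often treated as recon indicators (illustrative watch list).
WATCHLIST = {"net user", "net view", "net group domain admins", "whoami /all"}

def flag_anomalous(fleet_command_log, rarity_threshold=5):
    """Flag watch-listed commands seen fewer than `rarity_threshold` times
    across the whole fleet -- a crude stand-in for the anomalous-command
    frequency analytics EDR products apply.
    """
    counts = Counter(fleet_command_log)
    return sorted(cmd for cmd in WATCHLIST
                  if 0 < counts[cmd] < rarity_threshold)

# 50 routine commands fleet-wide, plus one rare recon command.
log = ["dir"] * 50 + ["ipconfig"] * 12 + ["net user"]
print(flag_anomalous(log))   # -> ['net user']
```

This is exactly why the speakers prefer API-based collection over typed net commands: the rare, watch-listed string is what stands out in the baseline.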
They’re like, what are you still pen testing us for? I’m like, wait, I don’t even know who you are. Who are you? Yes. Now wait a second — should I have put in a phase called cleanup after post exploitation? Not a bad idea to put a bookmark in there and say cleanup — or part of the wrap-up is definitely the cleanup phase. So we’ve got exploitation, we’ve got post exploitation. Now we might move into a phase that is somewhat near and dear to our hearts and talk about the reporting. I feel like this is relevant to our interests. I hope it’s relevant to yours. Police your brass. Yeah, police your brass. Exactly. That’s good. Thank you, Erin. I think that’s a good thing to remember. Yeah, reporting. Let’s face it, it’s the reason they actually hired you. They didn’t hire you because you could drop down from the ceiling like Tom Cruise. Yeah. They want really good, talented, creative people, but they don’t know that until you put it on paper. And that was my biggest hassle in the practice: the findings — making sure that the finding one operator wrote up was the same as what the other operator was using. Because I would have customers — I had one customer, they had 65 business units. We would do a pen test per business unit per quarter, and they wanted to see how they were improving over the year. And man, I would get a vulnerability — sometimes it was called cross-site scripting, sometimes it was called cross-site scripting, DOM-based, sometimes it was reflective cross-site scripting, XSS. I’d have six or seven different names for it, which made it hard for me to do analytics. So my project manager would dump everything into a spreadsheet, she would send it over to me, and I would spend two or three hours getting it cleaned up, moving them and combining them. And that was just miserable. And it was hard for the customer to get a good grasp.
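The analytics pain described above — six or seven names for the same XSS finding across quarters — is exactly what a canonical-name mapping in a findings database removes. A minimal sketch; the alias table is illustrative, and in practice it would live in the findings library, not in code:

```python
from collections import Counter

# Map the name variants seen in the field to one canonical finding title.
ALIASES = {
    "cross site scripting": "Cross-Site Scripting",
    "cross site scripting, dom based": "Cross-Site Scripting",
    "reflective cross site scripting": "Cross-Site Scripting",
    "xss": "Cross-Site Scripting",
}

def normalize(title: str) -> str:
    """Collapse case, hyphens, and spacing before the alias lookup."""
    key = " ".join(title.lower().replace("-", " ").split())
    return ALIASES.get(key, title.strip())

def finding_trend(titles):
    """Count findings per canonical title so quarterly analytics line up."""
    return Counter(normalize(t) for t in titles)

quarter = ["Cross-Site Scripting", "XSS",
           "reflective cross site scripting", "SQL Injection"]
print(finding_trend(quarter))
```

With this in place, the three XSS variants above roll up into a single line item instead of three, and quarter-over-quarter comparisons stop requiring a manual spreadsheet cleanup pass.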
So I think having strong, consistent write-ups and a findings database is critical in the reporting, because those are the pieces I know I’m going to reuse — having reusable elements for my narrative. I did that all the time. I mean, how many times can you say "we ran Responder," other than maybe cutting and pasting the actual output? Because my narratives — I like to have them so dead simple that a junior Windows administrator could follow along and repeat the work. That way it took out the "well, the only reason you guys got in is because you’re special or you have your magic boxes" stuff. And I’m sure you’ve heard that too, Nick. Sure. My narratives were like: these are the exact steps, you can cut and paste the syntax in and do it yourself. But that meant I had to have someplace to store this. Sometimes it was on my laptop, sometimes it was on the file share. But our database for findings was just a mess. And I could have used something — oh, I don’t know, like PlexTrac — that would have just saved me half the time. That’s one of the reasons I work here, having spent so much time on it. I think the idea is to try and add efficiency in every facet that you can. And one of those is removing as much of the manual human element as you can — with still a human behind it. But cut and paste is our friend. We’re always going to use cut and paste — we’ve got screenshots, we’ve got requests, we’re going to have to cut and paste. But the less you have to rely on external sources like Word, like Excel, some of those things, and the more you keep things in a centralized reporting and curation platform... Absolutely. And I think one of the other pro tips that I constantly coached people on was the idea of collecting your data cohesively in one place while you went. Something that really bites people as they test is that they do all the cool hacker stuff until the last day of testing, and then the test is over, and then they start their report, and they’re behind the eight ball.
They’re going to feel stressed out. They’re going to be reporting over the weekend; they’re going to be reporting as their next gig starts. Whereas if you were tying up and reporting as you went, and had it in a centralized curation engine — a place where it should exist — then by the last day of testing you’ve basically got your report written. You’ve got to tighten up your narrative, put in a couple of screenshots, and Bob’s your uncle. I had a lot of pushback with report-as-you-go, because some of my pen testers would argue, yeah, but it breaks my flow: I’m pen testing, and then I’ve got to go pull out the Microsoft Word doc and write up the finding. And I got that — I mean, I understood. Context switching for a lot of us can just be flat-out difficult, and in the situation we’re in, context switching is hard because we need to stay focused on our attack chain. So having it so that all it is, is just a point and click — drop it into my report. Even if I can’t sit there and write a whole narrative, if I can just drop it right there and put in a little keyword, if I can drop a screenshot and say the thing and keep testing, that’s much better. And having that reusable content like you mentioned, mapped and gapped and ready to roll — that’s the jimmy jam. Now what about your evidence? At which point are we handling evidence here? My history has been with regulatory regimes, which meant we kept evidence for at least three years, and that meant every artifact that we collected during the pen test had to go someplace. We weren’t shoving it in the report, because that would make the report like 300 pages. 10,000. Yeah. How do you put a database into somebody’s report? But you need to consider that, too. Maybe that’s part of the setup questions in the SOW: how are you going to handle it? Where are you going to store it? How securely? Data retention policy. Yeah, I know.
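The report-as-you-go workflow above — "drop a screenshot, put in a little keyword, keep testing" — can be as lightweight as appending timestamped breadcrumbs to a run log. A hypothetical sketch of that habit, not PlexTrac's actual mechanism; the directory and file names are made up:

```python
import json
import pathlib
from datetime import datetime, timezone

def capture(note: str, tag: str,
            evidence_dir: str = "engagement-evidence") -> dict:
    """Append one timestamped breadcrumb to a JSONL run log.

    The point is minimal context switching: record the keyword or screenshot
    path now, write the polished narrative on the last day of testing.
    """
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "tag": tag,
        "note": note,
    }
    d = pathlib.Path(evidence_dir)
    d.mkdir(parents=True, exist_ok=True)
    with open(d / "runlog.jsonl", "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry

capture("Responder caught NTLMv2 hash for a service account", tag="cred-capture")
```

Each entry is one line of JSON, so the log stays greppable and trivially imports into whatever reporting platform curates the final narrative.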
With the PlexTrac platform, we have the ability to store evidence and do it in the same secure manner that you’re storing the reports. And when you say evidence, you don’t just mean — yeah, absolutely: as you put in findings, you have finding-level evidence and the artifacts from the tool output, the proof of the pudding of what you did in the findings. But when Joe talks about evidence, he’s talking about being able to take an archive of all the artifacts and tool output and all that fuzzy stuff and put it in one place. Yeah, I’m talking about your run logs — what you did. Not just the things you found; the things you tried that weren’t successful are just as important in a litigious society, where if they get breached, they’re going to come back on you. They’re going to say, why didn’t you find this? We’ve all had that conversation where the independent security researcher found something you didn’t, and the guy comes back and says, why? Having that evidence to say, well, I tested it on this date and it was not present — right or wrong, I’ve still got it. So if there’s a pro tip anywhere when it comes to evidence, it’s that making sure you’re collecting what didn’t work as much as what did work will keep you covered when it comes to the disappointing conversation with the customer. From my time as a practice director, I can say that this phase — before we move on to the final read-out phase — reporting is really where you can make or break the efficiency of your practice. An efficient system allows you to spend more time hands-on-keyboard and more time solving your clients’ problems: identifying flaws that help them raise their security posture.
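The evidence pro tip above — record what didn't work as carefully as what did — suggests a manifest that pairs every artifact with a timestamp, a hash of the raw tool output, and an outcome. A minimal sketch; the field names and sample artifacts are assumptions for illustration:

```python
import hashlib
from datetime import datetime, timezone

def manifest_entry(artifact: str, raw_output: bytes, outcome: str) -> dict:
    """One row of an evidence manifest: what was tested, when, a SHA-256 of
    the raw tool output, and the outcome. 'Not vulnerable' entries matter as
    much as exploited ones -- proving what you tried is what covers you later.
    """
    return {
        "artifact": artifact,
        "sha256": hashlib.sha256(raw_output).hexdigest(),
        "tested_at": datetime.now(timezone.utc).isoformat(),
        "outcome": outcome,
    }

rows = [
    manifest_entry("nmap-external.xml", b"<nmaprun>...</nmaprun>", "completed"),
    manifest_entry("sqlmap-login-form.log", b"no injection found",
                   "not vulnerable"),
]
```

The hash lets you demonstrate, years later under a retention policy, that the archived artifact is the same bytes you collected on the test date.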
Whether you’re spending time haphazardly dealing with your reports administratively, or you have a good flow — your Word template, your findings library, and copy-paste is working for you — if you’re trying to find a way to scale and become more efficient, a big place where you can derive efficiencies is going to be in that reporting. How many hours on average did your practice take to turn around a report? I mean, it depends. It depends on the consultant. At the places I worked that had a reporting system in place — that weren’t relying on Word templates, copy-paste, an Excel library of findings, those types of things — I would say you would probably spend eight to 16 hours of billable time on a report. Now, if they were relying on copy-paste — and I’ve been a lot of different places where they were just doing it manually — you’re talking about possibly an entire week, 40 billable hours, of time slogging in Excel and Word. Forty hours of a consultant’s time that I hired not for his or her administrivia but for their skill set. So yeah, ideally, you bring that down to a day — four hours, eight hours of billable time for reporting. Now, that’s not report delivery; there’s the QA process, the PR process, all that. But it’s nice to have an in-platform solution for that as well. Most organizations are doing fixed bids, so if you can produce the same quality output in half the time, why wouldn’t you? That’s right. With a fixed bid, you’re not able to turn that extra cost over to the customer. You’re going to eat it. And if you’ve guessed wrong, you’re losing money. So that’s where having that efficient platform for reporting — that program for reporting — pays for itself.
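The fixed-bid economics above are simple arithmetic: reporting hours you save go straight to margin. A toy calculation using the hour figures from the discussion; the bid amount and internal cost rate are made-up numbers for illustration:

```python
def fixed_bid_margin(bid: float, cost_per_hour: float,
                     testing_hours: float, reporting_hours: float) -> float:
    """On a fixed bid, every hour spent comes out of your margin."""
    return bid - cost_per_hour * (testing_hours + reporting_hours)

# 40 h of manual reporting vs. 8 h in-platform (figures from the discussion);
# the $25,000 bid and $150/h internal cost are assumptions.
manual = fixed_bid_margin(25_000, 150, testing_hours=60, reporting_hours=40)
streamlined = fixed_bid_margin(25_000, 150, testing_hours=60, reporting_hours=8)
print(streamlined - manual)   # -> 4800.0, i.e. 32 saved hours * $150/h
```

The same 32 hours could instead be re-invested as extra hands-on-keyboard time at the same margin, which is the trade-off the speakers are pointing at.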
I think there’s something to be said for also evolving your practice. Delivering a PDF — you’re probably going to have to do that from now until kingdom come, because that’s what the industry accepts. But if we’re still reporting like it’s 2010 and we’re not evolving into the way clients are now starting to expect to consume reports — expecting perhaps a client-portal situation, being able to have evidence as a GIF or a video — elevating your practice means you’re spending less time reporting, more time in actionable activity, and you’re able to have an elevated delivery. Why wouldn’t you evolve there as well? So I think, definitely, I could talk about reporting for a good while. But now let’s move into the read-out and then finish up with the last phases of remediation and final testing. So, read-outs. Yeah, read-outs are always fun, because not all of your consultants, let’s face it, are client-facing. Right? Some of them just aren’t comfortable with it. They’re not good public speakers. When they’re challenged on a finding, they can be less comfortable, and somebody needs to step in for them. And there were times when I would do the read-out even though I wasn’t the operator. And the more information the operator put in that report or kept as evidence, the better off I was, because I wasn’t the one who had actually done the application test or the internal or external pen test. I needed to know the steps that he took, so I would insist on run logs. What did you do during the day? What are the things you tried? What are the things you tried that didn’t work? So that I could tell the whole narrative during the read-out, and not just what was in the pen test report. That was an important piece for me. What about you? Yeah.
I mean, I don’t know if I have too much to add there: being able to come in and support, being able to articulate the findings and the big-ticket questions. Also knowing your audience, reading the room — is this an executive-level read-out? Is it a read-out for the engineers? Is it a time to talk about high-level systemic issues? Is it a call where you’re going to spend an hour doing a deep dive on the findings? Being prepared for both, perhaps scheduling both types of calls. It’s funny, actually — a question just popped up in the Q&A: doesn’t the client typically drive what format a report should be generated in and what content? Can you export a PlexTrac report to client preferences? Typically, from my experience, consultancies are going to own and drive what they’re going to report, because if we as consultancies change the report template and formatting for every client, that’s just a heavy uplift. I have seen consultancies that have custom reporting requirements. But, just because of the shirt that I’m wearing: you could absolutely modify the export templating from within the PlexTrac platform. You can have multiple report templates in there, you can customize and modify them, and you totally can export them. I couldn’t help but say that, because I noticed it popped into my feed. No, that’s a good one. With regulatory regimes, the report format is driven by the consultant, not by the client, because there is a certain format that the regulatory regime is expecting to see. You don’t let the customer — it’s one of those things where the customer doesn’t really have a voice in the severity levels and the things that are in the report. There’s no negotiation. This is it. This is what you get. When you’re doing it for non-regulatory reasons —
— then, yeah, you can go back and forth and say, well, we don’t agree with that severity level, and there can be some negotiation, and you can have more customized reports. In our format with PlexTrac, yes, we can create templates. It’s one and done — you don’t have to worry about it. And then it just goes back to dropping the elements into the report, like everything else, when you’re doing it on your own. Back in one of my practices, yeah, we had 60 or so different templates for the different scenarios and the different regimes. And every year when I had to do something like change a copyright date, it was a total pain. I feel like remediation and final testing we could lump into one bucket of discussion. I’d like to get your perspective on that, because you’ve got more experience there. Yeah. Remediation and final testing isn’t a requirement for all situations, and it typically leans, again, towards regulatory regimes. With most customers — and we all know this — we’ve gone back the next year and everything we found the previous year is still there. They didn’t fix anything, because they didn’t have to. So remediation and final testing: these are optional phases based on the situation. We would love to see customers be as committed and engaged as we were when we did the test, hoping that they would fix things. But if they don’t, hey, they don’t. That’s all we can do. Very cool. Yeah. So that would cover all of the pen test life cycle stuff. We’ve got some Q&A up here. Going back — let’s see. In the Q&A, we addressed the issue of whether the client typically drives the format. The other one: there was a mention of having pre-written elements for attack narratives. Yes, and you can do the exact same thing for executive summaries. You can do the exact same thing for any section of your report where you may have an opportunity to reuse it. And within the PlexTrac format, you can just save it, create it, and then drop it in later.
So that will make your life significantly easier. In the chat, did we have any outstanding questions? "When you say remediation, do you mean fixing the vulnerabilities?" No, I don’t — I mean the post-remediation testing. What I do mean is that you may have the situation where you can guide the client to making good decisions about the tools they buy and the remediation they do — maybe doing some tabletop testing or discussion before they implement. Because I’ve had customers implement a million dollars’ worth of fixes just to discover that we could blow right past them in three or four hours. So I like to have some sort of guidance in that phase of the testing, just so that they’re not wasting their money. What else do we have? We’ve got one minute left. I think we can start wrapping it up. I’m just being a nerdy timekeeper. Yeah. No, I think that’s great. We have hit our time. I’d like to go back through this chat, and if there’s anything — yeah, for sure. If you’ve got ideas, drop them in the Q&A so we can capture them for sure. I’m not sure if we’ll be able to capture them in the main chat — maybe we will. But if you have thoughts, ideas, things that were bubbling in your brain, throw them in the Q&A, because that I know we can for sure capture. We’re wrapping it up. This was a blast, Joe. This was fun. I love rapping with you. Hacker fam, thank you for coming on. Stay in touch with us. I’m JP on the funny papers, y’all, on Twitter. I’m Pipefish with an underscore, which makes perfect sense. But hit us up on our PlexTrac YouTube channel, book a demo, hit us on all of the socials. And even if you’re just looking for mustache tips, I’ve got you covered. I’ve got a fake one. I’ve got a fake one — Joe’s got the real one. Perfect. Later. All right. Thanks, guys, gals. It was fun. Thank you. Bye.