

Don’t Trade Quality for Speed in Your Pentest Reporting

Dan DeCloss, Founder and CTO at PlexTrac, Nicholas Popovich, Founder and Owner at Rotas Security, and Caleb Davis, Senior Manager of Emerging Technologies at Protiviti came together for a webinar focused on cutting-edge tools and innovative techniques needed to successfully create high-quality reports in half the time.

Series: On-Demand Webinars & Highlights

Category: Thought Leadership


Transcript

So it’s my pleasure to introduce you to our presenters today. We have Nicholas Popovich, who’s founder and owner of Rotas Security. We have Caleb Davis, who’s senior manager of emerging technologies at Protiviti. And we have Dan DeCloss, who’s founder and CTO of PlexTrac. Welcome, everybody.

All right, well, Dan, I’ll turn things over to you to take it away. Yeah. Thanks so much, and thanks for joining us, everybody. We’re excited to be with you. If you’ve ever attended one of our webinars with PlexTrac in the past, I hope you know that you’re in for a treat, especially when we’ve got some amazing talent joining the panel today. And Caleb and Nick, thanks for the brief introduction, but I do want to allow the panel to introduce themselves, just to highlight the level of experience we bring to the table on this topic, because it’s obviously one that’s near and dear to my heart and one I’m very passionate about.

It’s why I started a company around it, but I also want to make sure the audience recognizes the value of other folks who are not me and are much more talented and skilled than I am on this topic. So, Nick, why don’t you give an introduction to your background and who Rotas is and stuff like that? Absolutely. I’m a former Army Signal Corps member. I got out, entered the IT world, moved into consultative pen testing, spent a lot of time in pen testing, moved into corporate security and enterprise red teaming, and I’ve been an early adopter and friend of PlexTrac, friend and then employee, and eventually moved to starting my own cybersecurity consultancy in Rotas, which is really focused on meeting clients’ needs with consultative, project-based cybersecurity, looking at continuous assessment and testing as kind of the tip of the spear of how we deliver our services, leveraging PlexTrac. And it’s been a joy to see PlexTrac go from your laptop, when it was teal and you were kind of throwing it together, to the platform it is now. But, yeah, that’s me in a nutshell. Yeah. Thanks, Nick. And obviously, thanks for being a longtime friend of PlexTrac and a friend personally. But, Caleb, super excited to have you joining us today.

Really appreciate you taking the time, because I know all of us are in the throes of the end-of-year slog as work comes in and people try to finish up end-of-year assessments and things like that. I really appreciate you taking some time to join us, but please introduce yourself, your background, and a little background on Protiviti. Yeah, sure. So I’m a senior manager at Protiviti, where we do pen testing kind of across the board. My team specifically focuses on low-level embedded pen testing: hardware, firmware, et cetera. Oftentimes that’s associated with a product, and these products in a lot of cases are going to some sort of regulatory submission. So really, we are early adopters.

I would say probably not as early as Nick, but we are early adopters of PlexTrac, and we use it frequently to help our clients understand not just a point-in-time security representation of their products, but also to help them in their overall security process for that product, their go-to-market strategy, and all of those things. And PlexTrac is a wonderful tool that we’ve been using for about a year now to really facilitate our goals and put our clients in a good position in regard to product security.

Well, yeah, thanks, and thanks to both of you. A brief background on me: beyond founding PlexTrac, I am a former penetration tester. I started my career in the DoD on the civilian side and found my niche in penetration testing, specifically application security. I also did some embedded testing back in the day, and reverse engineering. And one of the main pain points, and this ties into the main topic, is that a penetration test is a highly skilled engagement, right? You’re using folks who have a deep knowledge of technology and are very aware of techniques being exploited in the wild, and they’re trying to simulate or emulate those against an environment or an application to really highlight the security gaps. Really important.

That’s because it’s highlighting some of the most critical risks being identified against an organization’s environment or application. So keeping those people engaged as long as possible on that task is critical. Right. The longer a pen tester can spend in an environment or in an application, the better odds you have, as the person who hired them, of getting quality results out of that really intricate finding that might take some time to exploit. So keeping those folks engaged on that process is priority number one. One of the elements that takes up a significant amount of time during the engagement is the reporting lifecycle. It’s the most important piece in terms of the delivery to the customer or the end constituent, whoever’s receiving the results, because it highlights everything that went on during the engagement: what they tested, how they tested it, and most importantly, what they found in terms of risks or exploits.

You need to be able to identify what you should go do to fix these. Right. So it’s a really important piece of the overall security testing process, but reporting itself can take a long time. Enter PlexTrac: how do we speed up this process and get reporting as automated as possible, so the pen testers can spend more time on the engagement finding those critical issues, while still delivering a high-quality report that highlights everything that needs to be highlighted and is also repeatable for the end user? Can I validate these findings actually exist? How do I go about fixing them, and how do I collaborate with the people who found these issues if I have questions? Right. So that’s really one of the primary goals of PlexTrac, in addition to what Caleb also mentioned: being able to highlight your progress over time.

How often are issues recurring? Do I have visibility into whether we’re getting better or not? That’s one of the true benefits of PlexTrac, in addition to being able to produce truly customized reports in an automated fashion. So really, what we want to talk about today is exactly that. What does it mean when we talk about a high-quality report that comes out of a deeply technical engagement? Why should we implement pen test reporting and workflow automation? What value does it provide, and what are some of the things we’ve seen as we’ve implemented a reporting solution like PlexTrac? And then, selfishly, we’ll be able to share how our current customers are using PlexTrac and how they’re getting more reports out the door in less time with the same, if not better, quality. So, Caleb and Nick, in your mind, what are the key elements of a high-quality pen test report, and what have you tried to employ within your own testing exercises, as well as your teams, to really produce high-quality reports? I’ll defer to Caleb for that, to start us off. Sure. Yeah. So I think the biggest thing I would call out there is just the ability to have sort of golden language, if you will. PlexTrac refers to it as a content library, but a repository of tried-and-true language around specific themes has been extremely beneficial to us.

Right. Some of our more advanced testers can spend time understanding the industry, understanding thoughtful recommendations around particular areas, or the actual impact of particular findings that we have. And with that, our testers who are doing the engagements day to day and finding some of these things have a repository they can lean on, which frees them up to look at more things, evaluate and understand the true impact of a security vulnerability, and draw on real, tested recommendations and the broader context when we have those conversations with, for us, product teams. If we spend time, and we have in the past, starting from scratch, writing everything, researching the thing that we found and trying to triage the risk effectively, it just takes time. And it takes time away from the time we could be spending finding more attack vectors, weaponizing an attack vector into more of a proof of concept, and trying to articulate that better. So just the flexibility to allow the broader team to support the engagement team in a more direct manner has been huge for us in terms of how we’ve leveraged PlexTrac so far. But Nick, if you want to add to any of that? Yeah, I mean, I could parrot a lot of that, but what comes to mind for me when talking about pen test reporting, and you even mentioned it, is the word automation.

And I think some people have turned the word automation into a dirty word because it’s been abused. Automation, if leveraged expertly and with care, under a framework, can be an incredibly useful utility for maximizing efficiency and effectiveness in your assessment and testing activity and in your reporting. I think some people hear the term automation and maybe they’ve been burned in the past, because as all the professionals who are listening and watching this understand, tooling is only as good as those who wield it, tune it, and configure it. And that applies to reporting, too: garbage in, garbage out. If your workflow is automated such that you’re not having touch points, you’re not expertly curating the data, you’re not doing what folks have engaged you as information security professionals to do, and you’re just kind of punting, then all of a sudden you have a mediocre testing strategy, you have a mediocre report, and we have a flood of mediocrity. And I think rather than thinking, oh, we’re just going to take the automated results from some tooling, automatically pull them in, and automatically shoot out a report.

That’s not what we’re talking about. As practitioners who value tradecraft, we see solutions like PlexTrac, and the other solutions we expertly wield, as force multipliers in our ability to execute with excellence. And so the key factor of quality reporting is making sure you have actionable information. It’s not just bloat; it’s not just words for the sake of words. I remember, a decade ago, folks saying things like they gauged the best pen test report by how thick it was, basically saying they would print that sucker out, plop it on the desk, and judge it based on the resonance, how thick it was. And that’s just not the case. You need to have actionable information, the bottom line up front. And you need the ability to have, in my opinion, both paradigms.

You need the ability to have something tangible as an artifact, but you also need a report that can be somewhat malleable and living. I think we as an industry have evolved from static reporting strategies; while static reports are still valuable, you want to be able to come in and adjust risk on the fly, add comments and information, that kind of activity. So, yeah, we see the bullets on the elements of a high-quality report. At the end of the day, our entire existence, at least as far as offensive security professionals go, is really to ensure we’re raising the security posture of those organizations under our purview, whether you’re an internal team or you’re consultative, brought in as a third party. That’s kind of it, right? You’re on the outside or on the inside. We’re not going to really talk about the criminals, because they’re not after the same things we are.

Even though we look and smell like them, you see what I’m saying. No, but at the end of the day, the elements of the report are just like when you were camping and your mom or dad said we need to leave this campsite better than we found it. When we get into an enterprise or organization, a network, an app, a system, a product, whatever the case may be, we want to leave it in a better state. And the way we can do that, even if we can’t turn the screws ourselves to raise the security posture, is to give them the roadmap. That’s what a report is. That’s what an engagement from a security professional is about. You now have a roadmap that should guide you to raise your security posture effectively.

Yeah, I think you guys highlighted some really key elements that I wanted to double-click on a little bit. Everybody’s going to have their own standard way of reporting within their organization, right? And so it’s important to be able to accommodate those aspects. But I would say, in general, from what I’ve heard from you, and from what we’ve seen with the thousands of templates we’ve helped support across the different types of reporting our customers are doing, everybody breaks things down into some basic pieces, right? And what’s really important is not just the time saved, but the ability to provide deeper collaboration and quality around what went into the engagement. I view it as kind of like classic speechwriting: you’re going to tell them what you’re going to tell them. So that’s the introduction, the methodology, here’s what we did, here’s what the scope was, those kinds of things. Then here are the very nitty-gritty details of what we found, specific reproduction steps, how we executed the engagement. And then the summary: hey, here’s what you need to go do, here’s what we found, here are the key elements, and here are the programmatic areas of risk or whatnot. A general report breaks down into those three things, but every organization is going to report on those three categories in a much different fashion, and may even want to present that information or group it in different ways.

Right? Can you touch on, when you’re working with different customers, whether you sometimes have to adapt to their way of reporting, or to things they may want to see differently in an engagement? Does that ever come up across the different types of customers you work with? Yeah, I can touch on that one, because I have a very real example. You know, we started with our general template, right? And we tried to capture some of those things you mentioned, Dan, as best as we could and applied it across the board, which was good to some extent. But we had a specific issue: the FDA, for some of the medical devices we were testing, had very specific things that they needed to see in those reports. And with that, instead of going back and redoing an entire template and then training our team on the different steps that needed to be added and making sure they had the latest version, et cetera, we leveraged PlexTrac. We leveraged some of those things I mentioned previously, these narratives, these blurbs, and made sure these different sections were added to the report at a higher level. And we, as the leadership team for our report writing, could make that change, and then the whole team could benefit from it, because we were changing it at the fundamental level of our report generation tool.

Right? So that was a quick response to handle a specific issue where our template wasn’t there. But with the support of some of the PlexTrac template specialists, as well as just the flexibility of the tool inherently, we were able to achieve that very quickly and have a meaningful report for those specific regulatory bodies. That’s a great example. Yeah, that’s an awesome example. I would say, from the Rotas side, and I’m going to try not to soapbox too often, but I’ve been doing this quite a while, and I was in practice leadership at a large consultancy in the past. And I would say that 10 or 15 years ago, it was really hard, at least from a third-party consultancy’s perspective.

It’s tough to quantify what value you bring as a services organization. Right. You’re just like, we have smart people and we do smart things. And so I think in the past, one attempt at a differentiator was the report: saying that you had a better report than other folks, or a clearer report, a more consumable report, or a reporting engine. And so I think the industry has somewhat trained folks, and consultancies especially, I think, err on this side, to say that our report is part of our secret sauce. And to a degree, there’s a modicum of truth to that, in that the way you present the data, if it’s clear and articulated well, and you have a very good way to categorize things and associate things, and you’ve found some success in that, can be somewhat of your secret sauce.

But at the end of the day, kind of what you were alluding to, Dan, we’re looking at things. We’re looking at assets. You know, maybe it’s social engineering engagements where the assets have pulses. Maybe it’s endpoints, API endpoints, nodes. We’re looking at things, and we’re trying to quantify and qualify the level of risk by identifying flaws and looking at the likelihood and impact of those risks, if realized, to the organization. And there’s kind of an industry-accepted way to do that. How you communicate some of the ancillary information is important, but I’ve definitely been a proponent over the years of trying to say, listen, we want to clearly and concisely get the information out there.

Who cares if the table’s got the exact visual formatting? We want to get the information out there. And what I have found is that leveraging PlexTrac has allowed us to really focus on communicating the information versus being beholden to a format. I think sometimes organizations have, not old think, but a very institutionalized view: we’ve always reported this way, this has been our reporting, we hang our hat on this, this is our differentiator. Well, your differentiator is really how you articulate the data, your expertise, and your people being able to be somewhat malleable and present the information for different needs, and then being able to do that without much trouble while ensuring the integrity of the data. Because in the past, without using an enterprise solution for this, I’ll be frank, when we started off using just docs, docs alongside a shared Excel file, or docs in a SharePoint, there’s a human error element, there are peer review processes, there’s slog, there are errors in calculating information. Being able to remove the human element where calculation matters more, and insert the human element where we need the expertise of a hacker and a technologist, is so much more valuable. And then, yeah, in our line of work we’ll have to do things for medical institutions that have requirements to submit certain plans to regulatory bodies, and the report needs to be in a certain format.

We have government clients that need help with their POA&M, their plan of action and milestones, and they need the data presented to one team in one way, with lots of pictures and step-throughs, and then they need the same data contextualized in a format they can submit to the DHS or to governing bodies for certification. So what we found, and the support that PlexTrac has given has been absolutely bananas, is the ability to take all the data from engagements, get it into the platform, and then disseminate it logically: having folks log in, or exporting to a certain kind of format that can be easily transformed into a POA&M, getting the data out there. The data is the key, because the most important part of our engagements is being able to consume the information and make adjustments. So being able to communicate that has been super important. And yeah, we’re trying to break some folks’ habit of saying our way is the best way. There are elements of how you do reporting that are super valuable, and you need to maintain those, because the folks you help appreciate the clarity of how you deliver it. But you can always be evolving.

One thing that I love about this industry, though it’s exhausting, is that you can’t rest. You’re either moving forward or you’re being left behind, sliding backwards. You really can’t just marinate. The point is, we as an industry need to continue to evolve how we articulate the information to the consumers of our security nonsense. Yeah, I think that was one pain point I experienced early on, and I think we’re getting better, and I hope that PlexTrac is also helping in that arena, or at least facilitating it better. A couple of pain points I always had: one was working with folks after I delivered a report, being able to reproduce the issues. Even though you had lots of description and lots of screenshots, you just didn’t know what type of person was on the receiving end.

A lot of times it was a junior analyst or a junior engineer who just wasn’t able to reproduce the issue. And it was frustrating for me, because I always prided myself on being as articulate as I could. Where’s the gap here? Finally, I’d have to send them a video of, here’s exactly what I did, because you can’t really do that in a Word report, right? So thinking outside the box about how we interact with our customers and our constituents is really important. But then you touched on the consistency and the actionability, which are the last two bullets here. Having consistency across different tests and different testers matters, because everyone’s going to have their own unique way of testing, and even of writing, to a degree. But at the end of the day, as a testing organization, you want to provide a consistent methodology for how you present to your customers what you tested and how you tested it. And I think that only adds to the quality of the report: having a customer see that consistency over time, regardless of who conducted the test.

But it’s the organization itself and the rigor you put in, and it starts to become clear in how you flow with consistent wording and verbiage, and in the fact that this report went through several cycles of quality assurance, a review of, hey, what looks good, what doesn’t, what do we need to tweak? It’s not just that this specific tester wrote it up and delivered it; you had some workflow around that. But then, also really important, is the actionability: what do I do with this 300-page PDF once I get it? That’s the age-old question. Somebody’s got to do the work behind that. So you want to facilitate a workflow where the document you receive is more an artifact of that point in time of the engagement, and you really have an interactive capability around, hey, we need to go test these things because they’re critical, but these low findings might start to bubble up over time, so we want to keep them on the radar.

Does that resonate with you as well, or some of the workflows? And hopefully it does. Actually, I have a question for Caleb, because I’m truthfully curious. While we’re talking about elements of a high-quality report, I almost think we could talk about high-quality reporting as well, because from my vantage point of what you might consider traditional pen testing, net pen, infrastructure, web apps, applications, APIs, social engineering, even physical stuff, breaking into buildings, all that fun stuff, less on the physical side, but for everything else I just mentioned, we’re seeing an uptick in the idea of continuous assessment and testing. Continuous monitoring. You’ve got the risk management framework requiring continuous monitoring on the government side, and then whatever the government does, industry has usually been doing for five years anyway. So we’ve seen a lot more of the continuous-type assessments.

So I’m curious, on the product side, are you guys seeing similar things? Yeah, definitely. Like I said, products are kind of our bread and butter right now.

So each product is going to have a pre-market set of activities that you do, building in things like threat modeling, software composition analysis, SAST, things like that. And pen testing is part of that. We’re seeing more and more pen testing earlier in the product development lifecycle, building on itself: understand the very low-level core foundation of security you’re building into your product, and then how those things materialize into the broader layers of the stack. So we see that continuously, and we leverage PlexTrac for that same approach. And even then, we’re not done. By the time these products go to market, there are post-market activities that have to be done to constantly understand and apply what the industry is doing to that device. And if a new industry technique or zero day or something like that is released that makes the device vulnerable, that’s something we need to address.

So I’d say the way we handle that within the tool, in addition to the general reporting approach, like you said, changes a little bit. What we do is always update our methodologies inside the tool. Right. The PlexTrac term there is runbooks, if anybody’s familiar. Understanding a new runbook, understanding how to complete that runbook and how to apply it to a device on a future engagement, and maybe taking a vulnerability and expanding it and going further down the line, is something we do consistently. In addition to that, something that’s been really beneficial to us in terms of the quality of reporting: I think you could be dismissive and say, here’s the finding, go fix this thing.

And then, like Dan mentioned, too, we can give some steps to reproduce. But oftentimes we push our teams to develop PoCs, right? Send PoCs, send GIFs and MP4s of exploiting the attack. We generally, as a philosophy, don’t like to do the same technique over and over again. So we build out the tool with the mindset that we want to enable the product team to do this testing in the future, and, if they’ve mitigated something on their own, we can come in and apply our industry research and expertise to see what’s changed since the last time we looked at it. I think that all goes into facilitating those things through PlexTrac, as well as the overall approach to what we’re trying to do: not just a point in time, here’s your report card for this time of the year, but your ongoing security posture across these particular products and even your broader portfolio. When you’re doing that activity, whichever phase it is, pre-market or post-market, is there some level of iteration? Do you do some work, and then is there an iterative nature to some of your testing? Yeah, definitely. I think for us, what typically happens is the scope can be extreme, right? If we look at an extremely complex medical device comprised of multiple different subcomponents, we could pen test that.

But that could be a scope of 20 weeks if we want to go as deep as we should. So really, the complexity drives biting off chunks of that pen test, testing different components and understanding: here’s this component, here’s the scope of this component, and here’s how it interacts with everything else. That’s part of the threat modeling that has to take place, obviously. But taking that on in bite-sized chunks is much better, obviously, from a budget and timing perspective, because we don’t have to stop everything for 20 weeks and then hand the team the 500-page PDFs we’re all talking about. It’s more about being able to integrate into a development team’s pipeline and just be part of that verification and validation, feeding the backlog. That’s where we want to operate. And I think by necessity, and just from looking at efficient programs in terms of product security, I think you’re exactly right.

I mean, piecemealing and taking smaller pen test scopes to give more beneficial, thoughtful, specific, and targeted recommendations is where I see a lot of the industry going now. Yeah, I can’t say it enough. I remember when I first got exposed to PlexTrac. What was it, Dan? I think it was 2018, 2019, when we first started playing around. And I remember at that time I was working at a pretty large consultancy in North America, and I remember saying things like, I’m not going to work at a place that doesn’t have some sort of reporting tool or platform. And then I remember stepping back from consulting completely and just saying, I’m done with the meat grinder.

And if I ever do anything again, it’s got to include a reporting tool. And then, when it came time to start my own pen testing company, I was like, well, now that I’ve seen PlexTrac, and PlexTrac removes the most painful part of pen testing, which is reporting, I guess now’s the time to start my own company. So PlexTrac is tangentially a part of my and Rotas’ journey. But I’ll tell you, for those who are viewing this and have ever had to deal with pen test results,

you know the nonsense we’re talking about: you have a PDF and a spreadsheet or a Word doc, and then the next year, or the next test, you’re trying to diff the results and compare and see progress, and there’s so much post-processing work. I will say, from a consultative standpoint, our pen testing does have some iteration to it, especially in apps: you’ll have releases, then bug fixes, then checking those fixes, or you have major releases. And some of our clients retain us long term over the lifecycle of, say, the first year of an app being released to production, those types of things. Being able to come in and work with the findings, not only from a tester’s perspective, expertly providing insight into the findings manually by applying some logic and rules, and leveraging automation expertly and smartly, but then on the consumer side, being able to have a login, perhaps, and being able to collaborate, diff the tests, see what’s happening, and track progress, is absolutely valuable, because things will fall through the cracks. Folks will maybe look at the first couple of findings, the findings that are red. And that’s why, when I’m talking with folks and we’re partnering together, the idea is: it’s not a report, it’s reporting. It never ends. It’s always reporting.

It’s a reporting paradigm. I don’t know why I put so much emphasis on the “ping.” It’s annoying. But I think you guys get what I’m saying. And so we see a parallel; we see the evolution. We have to be, as an industry, continuously assessing; the snapshot in time just isn’t going to do it. You can do snapshots in time for very specific, bespoke, or advanced work, or where there’s a necessity for it. I’m not saying you don’t need those. But apart from that, and especially for organizations that, well, I won’t even get into the categories of organizations.

The point is just to be as protected as you can be: a constantly vigilant standpoint of continuous assessment and testing, continuous monitoring, attack surface management. Think about CI/CD. Right. You can’t functionally, in modern applications, say that you did a pen test one time last year and the app is good, thumbs up. The SEC is going to rip you apart, depending on where you are. So our industry is moving to understand continuous assessment and testing with the idea of CI/CD, and you’re going to have to expertly leverage automation in some of that.

And then, in that reporting paradigm you put together, whether as the product owner or as the person charged with assessing the product’s security, or whatever the case may be, we just have to continually push the envelope forward and evolve our strategy so it stays in lockstep with the assessment and testing paradigms. Yeah. And I’ll add one thing to the tail end of that, and I think we might be getting into it in this slide, but overall, what all the things we’re saying really do for our clients who are consuming this is help them articulate risk and triage risk much better. Right? We’ve all seen someone hand a pen test over and say, okay, we’re going to fix all the criticals and highs and not care about anything else. And then a lot of times our clients will see that fixing a critical or a high costs X amount of dollars, whereas if you fix the medium and the low, the potential impact of your critical might be fairly well mitigated. Right.

So I think all of these things really help put our clients, the receivers of these reports, in a much better position to consume them and understand: okay, what’s most impactful to our business to make us more secure? And I think that just speaks volumes about how we should be approaching this in general.

Yeah, no, that’s great. And both of you were flowing into the next section, which we’ll spend a little bit of time on. But just to summarize the previous section, around what makes a high-quality report: I think all of us agreed, maybe not explicitly, but in how we were speaking, that it’s about the content. It’s about the manner in which we’re getting it to the clients, not so much the actual look and feel of the document or whatever mechanism delivered the content. It’s really about, hey, here are the steps it took, here’s the quality of effort and the skill that went into it, and why it’s important.

All of that is really what that secret sauce is, like you mentioned, Nick.

So you’re not sacrificing anything by using a reporting tool like PlexTrac to help speed up the delivery, and it even adds to the capability you can provide to the customer.

Let’s talk about the true benefits of a reporting and workflow automation capability. We’ve alluded to some of them already.

I think one of the biggest ones is that you’re not only using your brain and your learned pen testing skills to identify exploits; you’re also using a lot of tools. And when you talk about continuous assessment and continuous validation, a lot of that comes from automated tools as well. So being able to incorporate the output, the vulnerabilities being identified by those automated sources, is a key component of the testing.

Maybe both of you can speak to what tools you use that also help deliver a report, or that, even just by automating test execution, really speed up your delivery, right? Yeah, certainly. I can speak to why it benefits you to have the workflow, and even give you a behind-the-door look into some of the sauce. One of the aspects of execution and delivery is knowing the capabilities your people have; those capabilities are augmented or enhanced by the tooling and automation. You can understand how to exploit certain findings given the results from the initial pass of automation. So whether you go in with Burp Suite, or with an enterprise tool like Nessus, or with any of the litany of open source tooling out there, or you write your own scripting, being able to curate that data and bubble it up into a platform then allows expert manual analysis to go in and go deeper than the initial pass. That first pass is only going to get you 30 to 40 percent of the way there. It’s going to show you what is known: it’s going to show you some CVEs, ports, protocols and services, maybe API endpoints, a sitemap.

But being able, first and foremost, to have the testing and telemetry you’re trying to get kicked off helps you execute well: having your people trained on how to do it, and making sure people do it in a documented, repeatable way so that you have the right kind of coverage, you know what’s being done, and you know it’s being done thoroughly. And you can check on the work, check on the progress. But then, when you have this big mountain of data, it’s not useful if you’ve got to go to eleven different sources; you want to aggregate and curate all of this information. Part of our testing methodology is that we have some JavaScript and some different things we’ve done in our back end that allow us to pull in findings and tag them appropriately based on the insight, which lets us apply different CVSS scoring based on different rules we’ve established. And so we can start to bubble things up. With a lot of the continuous assessment and testing, you absolutely have to be able to bubble up impactful, insightful things so that the folks who are going to do things manually know what to go manually do. It’s not one or the other; it’s both. You’re not just going to leverage automation and hope and pray you covered it, and you’re not just going to have one person running Nmap across 8,000 hosts, hoping they get lucky.
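To make the tagging-and-scoring idea Nick describes concrete, here is a minimal sketch of a rules pass over aggregated findings. It is illustrative only, not Rotas’ actual code (Nick mentions theirs is JavaScript in their own back end); the Finding shape, rule predicates, tag names, and score adjustments are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    title: str
    source: str              # e.g. "nessus", "burp", "custom-script"
    host: str
    cvss: float              # base score as reported by the tool
    tags: list[str] = field(default_factory=list)

# Hypothetical rules: (predicate, tag to apply, CVSS adjustment).
RULES = [
    (lambda f: "internet-facing" in f.tags, "priority", +1.0),
    (lambda f: f.source == "nessus" and f.cvss < 4.0, "informational", -0.5),
    (lambda f: "default credentials" in f.title.lower(), "quick-win", +2.0),
]

def curate(findings: list[Finding]) -> list[Finding]:
    """Apply tagging/scoring rules, then surface the highest-risk items first."""
    for f in findings:
        for predicate, tag, adjustment in RULES:
            if predicate(f):
                f.tags.append(tag)
                f.cvss = max(0.0, min(10.0, f.cvss + adjustment))
    return sorted(findings, key=lambda f: f.cvss, reverse=True)

if __name__ == "__main__":
    raw = [
        Finding("Default Credentials on Admin Portal", "burp", "10.0.0.5", 7.5,
                ["internet-facing"]),
        Finding("TLS 1.0 Enabled", "nessus", "10.0.0.9", 3.1),
    ]
    for f in curate(raw):
        print(f"{f.cvss:4.1f}  {f.host:10}  {f.title}  {f.tags}")
```

The useful property of this shape is that triage policy lives in one small list of rules, so changing methodology, as Nick describes later, means editing a few rules rather than re-reviewing every finding by hand.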

To be able to see anything from the results, you have to combine them. Right. We love Nmap, and we’re going to use Nmap forever. And you’ve got things like, it’s not Rumble anymore, it’s runZero, and all these different tools that are going to get you information. Getting that information into a platform like PlexTrac, and then leveraging the functionality in PlexTrac to ensure you’re streamlining not just the reporting but the actual execution, and curating some of it: to be a little bit frank, I don’t know how we could do what we do now without that. The volume of data. I mean, I look back like 20 years and think of tests that I’ve been on.

I’m like, man, I wonder what I missed, because the volume of data I was operating from was just so vast. And from a dollars-and-cents perspective, consultants only get 40, 80, 120 hours on certain engagements. Or if you’re doing assessment and testing internally, maybe you have buckets of hours; however it’s done, there is a time cost. So doing things to lower the time cost, to increase the analysis time you can have, and to move time from reporting into other buckets is just absolutely imperative, especially at scale. And a lot of our folks end up saying things like, we’ve been delivering a certain strategy and we want to change things up, and it’s as simple as changing a few of our rules or deciding to add in indicators, and we can do that on the fly. Whereas before, when we were doing manual processes without the support of a platform, if you changed some of your methodology, it was like: do we break everything? Are we missing things now? So that may have been a little pie in the sky, but yeah, that’s kind of where I’m at on that. What do you think, Caleb? Yeah, I was going to say I agree with you, Nick.

And just in terms of taking this mass amount of data from your Burp or Checkmarx or whatever you’re using and condensing it into something reasonable, I think it speaks to the same point we were trying to emphasize previously: we have to articulate more than just a scan. Here’s the automated 500-page report of that scan, and we’re just regurgitating the language these scanners give you? For us, we shouldn’t get paid to do that. Right. Exactly. Where we play is adding the additional context to that story: we understand how to chain some of these things together and really determine the actual risk posture for the clients. I mean, that’s where I see our role, right on the back end of that, too.

Just speaking to some more of the automation, and hopefully not getting too far ahead of ourselves here: something we really struggled with, especially when we were doing the point-in-time Word document, even when we were generating it out of PlexTrac into a Word document, is that we ran into a ton of issues. We’ve got a whole internal QA process, and when that happens at different frequencies, with different report artifacts and different people looking at them at different times, we can get into a conflict of versioning, where it’s not effective at all to have those different artifacts and different permutations of those artifacts while trying to maintain them in different locations. Right. Whereas with PlexTrac you have the idea of a centralized location: this is where the risk language resides.

This is where you can export XML, CSV, Word doc, et cetera. To us, that’s been a huge help. And then the tools and capabilities for automation within the tool, facilitating some of that internal QA we have to do, are just much better inside a single tool that’s designed with that intention. Right. Just to speak to the automation on the back end a little bit. Yeah, no, that’s great. I think facilitating the process around whatever your process is for getting reports out the door is really a main goal of a reporting automation tool like PlexTrac.

It’s not to say that it writes the full report for you, and you don’t want that. And certainly you don’t want to just regurgitate automated findings, right? So it’s about the value-add you’re providing: hey, this helped find some low-hanging fruit; we can now add this to how we would normally write up an XSS vulnerability or something like that, and then augment it with our own testing. I equate it to knowing what the calculator is doing under the hood and then using the calculator. Similar, I would say, to SQLMap, right? We all know how we would identify SQL injection manually. It’s just that a tool like that can find it so much faster, and then you can take over once it’s found, get deeper, and provide the deeper analysis.

I think that’s a key element of why you would want a workflow automation and reporting capability that helps aggregate that data, as opposed to just copying and pasting results straight from the scanner.

I know we’re starting to run a little short on time, and we want to leave a little time for questions. I think we’ve covered a lot of these aspects, but I definitely wanted to highlight the latter two with one use case and see if it resonates, or if you guys have any other examples. More actionable insight, even during the engagement, is really important. I ran across several issues in my time as a pen tester where we had identified a pretty critical vulnerability during the engagement, so we weren’t really ready; we didn’t have a full report ready.

We weren’t going to have a debrief on everything we’d done, but this item really needed to get fixed right away; it was pretty critical. Letting it wait until the report has gone through its paces of QA and delivery is just too long, in my opinion. So how do you get that data into the right hands? That is one nice thing we support in PlexTrac: being able to publish findings straight to a customer without having to do any other kind of out-of-band communication or out-of-band encryption and delivery. It’s straight through the portal. But do you guys handle that kind of workflow, or have you had to in the past? And what were some of the techniques, versus what you can do now? Go for it, Caleb.

Yeah, we just did it. That’s the only reason I’d say I can speak to it. I like it. And we did exactly what you said, Dan. Previously, you know, just talking about some of the time savings: we’ve got our whole content library, and when we found some things we deemed critical, we could pull from that content library, where we’ve seen some of these things in the past; they’re pretty common risks, and we generally have some recommendations around them. But it didn’t just write it for us.

We had to add the additional context, obviously, and the content library made it very easy for us to add that additional context. In terms of how we used to do it previously: we would write it all in a Word document, basically a little mini report that would have to go through our internal QA process; they’d review it, and then we’d send it over via email to the client. Right. In this case, we say, hey, this is the golden language, and we could even turn on track changes and see: here are the deviations from the golden language that are specific to this context.

Everything else had been internally QA’d in the past. So it could take less than a day to turn around a critical vulnerability. And then, like you mentioned, if the client’s onboarded to the tool, you hit publish, and the client can see it: the steps to reproduce, the description, the risk, the recommendations, all of that there for them, in addition to, like I mentioned earlier, the artifacts or logs or whatever other secondary things could be provided to help tell the story a little better. It just saves an insane amount of time in the churn of putting something together.

Yeah, I don’t have too much to add. I will say, I wish it weren’t so, but because of the nature of some of our clients, some of them are required to have out-of-band delivery methods, because they’re not allowed to access certain sites, or contractually. Our standard, though: I tell the folks when they’re scoping out engagements, we want clients to have the opportunity to use the engagement platform, which is PlexTrac. It’s nothing special; they just get a login to PlexTrac to view results. And it’s amazing, to Caleb’s point, to have a framework that allows you to rapidly articulate new findings and not start from scratch; that’s immeasurably valuable. And it’s empowering: on most of our engagements where folks log into the platform to manage their vulnerabilities, they have the opportunity, if they want, to generate their own doc, generate an Excel file, just view the data, take screenshots if they want, compartmentalize. It’s so much cleaner that way.

I haven’t told any of the business operations people at Rotas this, but I’m considering an upcharge for folks who are not opting for the report delivery platform, if they want the report delivered statically, because of the extra hassle involved. Some of our government clients have a specific encryption technology they are required to use, whether it’s state or federal or DoD; they have to be sent an email. And then, because of DLP and those types of things, there’s concern over leveraging email, maybe not so much the encrypted email, but having to deliver in multiple different ways is a pain. And it’s always easier when folks opt for the engagement platform. That’s the line item we call it. It’s just a PlexTrac login.

You just get a login to PlexTrac and you get the joy of dealing with your findings digitally. Well, and I think what’s most important to me in that scenario is that the client or the customer, whoever’s receiving those results, gets them at a much quicker pace. Right? Which means they can start fixing the issue right away, and you’re not having to hassle with all the other steps it might take to get it into their hands, versus just saying, hey, it’s published; go find it and execute on it. Which leads to better stakeholder differentiation. I think you all highlighted, in one aspect or another, that there are different types of reporting you do for different types of stakeholders, right? And with a manual solution, you’re having to go and disseminate that data into a different template, with a lot of repetition, into a different document for each type of stakeholder: say, an external auditor, versus an engineer who needs the nitty-gritty details, versus an executive who just wants a high-level summary. So one of the key aspects of speeding up delivery while maintaining quality across all of those types of reports is the templating capability in PlexTrac: being able to quickly export to a different report template using all the same information that’s in the report today.

So you’re not having to do anything else; it’s already pre-built for you, and it just speeds up how you interact with different stakeholders. Right. So I think that’s just a highlight, and obviously a shameless plug.

We’ve got eight minutes left, so I’ll quickly jump through the slides and get to the Q&A. We’ve got a few questions lined up, but from the audience perspective, if you have any questions, we’re happy to chime in on anything related to what we talked about today, or any other questions that might have sparked your thought process. Just some statistics: from a PlexTrac perspective, these are genuinely some of the metrics we help customers track and manage, like really reducing the amount of time it takes to write and deliver a report, better collaboration and efficiency in reducing the risk that comes out of those reports, and truly improving team morale and quality of life. The less time testers have to spend on reporting, the more time hacking, the better. And obviously, the deeper they can go on engagements, the higher the quality of the report and the engagement; you’re going to get what you’re paying for. You can also do a lot of other workflow automation within PlexTrac.

We support a lot of integrations with a lot of different tools and use cases across our customers, and then not only consolidating the proactive side of the house, the scan data and the assessments, into it, but also the tracking, remediation, and analytics capabilities. That’s all a really important part of the workflow. So with that, we’ve got a couple of questions here, and I’ll also see if anybody else chimes in with any questions.

You mentioned integrations with commonly used vulnerability scanners, and sorry if you mentioned this already, but is there a way to integrate custom solutions? The answer is yes; we do support multiple formats. We also have a CSV format, and you can integrate directly with our API. Nick, I think you have some experience with this. Yeah. We really only leverage maybe two of the out-of-the-box, off-the-shelf integrations, right? That’s Burp, and then an enterprise scanner, whatever it is: maybe it’s Nessus, sometimes it’s Checkmarx or Acunetix, whatever the case may be. We usually have one kind of infrastructure or automation scanner, and then Burp is always kind of our scalpel, and then we use a ton of different tooling.

I know that Nmap is supported in PlexTrac, but we’ve also got a lot of the ProjectDiscovery tools like Nuclei, and we’ve got a lot of custom scripting. We use a ton of cloud-based tools whose output is scattered across a bunch of different nonsense coming in, and so we’ve actually spent a ton of time leveraging that CSV parser. It’s so simple and easy. We could, and have, written our own kind of middleware parsers that use the API. But when the CSV option is there, it’s so much easier to take your output, whether you natively go out to JSON or whatever, and bring it in; it’s so simple. Before the CSV parser was an option, we would definitely be writing those, because the API is really well documented and easy to interact with, and the support PlexTrac supplies, too, with help answering questions and helping to engineer some of that, is really great. But honestly, there’s not too much need for custom code nowadays. We just jam it in that CSV.
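As a rough illustration of what “jamming it into the CSV” can look like, here is a minimal sketch that flattens a hypothetical tool’s JSON findings into a one-row-per-finding CSV for import. The input field names (name, severity, details, fix, hosts) and the output column headers are assumptions made for the example; a real import would need to match the column layout documented for PlexTrac’s CSV parser.

```python
import csv
import json

# Hypothetical column layout; a real import should use the header row
# documented for the target CSV parser, so adjust these to match.
COLUMNS = ["title", "severity", "description", "recommendation", "affected_assets"]

def json_to_csv(json_path: str, csv_path: str) -> None:
    """Flatten a custom tool's JSON findings into a one-row-per-finding CSV."""
    with open(json_path) as f:
        findings = json.load(f)   # assumed: a list of finding objects

    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        for item in findings:
            writer.writerow({
                "title": item.get("name", "Untitled"),
                "severity": item.get("severity", "Informational"),
                "description": item.get("details", ""),
                "recommendation": item.get("fix", ""),
                # Collapse a list of hosts into one semicolon-delimited cell.
                "affected_assets": "; ".join(item.get("hosts", [])),
            })

if __name__ == "__main__":
    json_to_csv("nuclei_run.json", "import_ready.csv")
```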

I mean, on every single engagement, bar none, I’m confident in saying, because I’m still pretty heavy-handed as far as execution goes, we are constantly using our own tooling and scripts and methodologies that we’ve come up with, and getting the output into the platform via that custom integration with the CSV. Awesome, awesome.

I just realized that we’re also going to have to do the drawing for the gift card, so we’ll do one more quick question and then we’ll go. Okay, so one question is: does PlexTrac sell to enterprise customers? I know we talked a lot about the consultant use case and how they use it. The answer is absolutely yes. There are a lot of internal enterprise pen testing teams, and we also support additional use cases for vulnerability management and application security testing. So enterprises use it not only for internal testing and internal purple teaming, but also for vulnerability management, application security, and even questionnaire-based risk assessments. Definitely. It supports all of those use cases and really serves as the central reporting platform for the proactive security program within an enterprise organization.

Great question, and happy to answer more; just reach out. Okay. So there are all kinds of ways to connect with PlexTrac. You can get a live demo if you’d like, and see more walkthroughs on our YouTube channel and everything else.

Caleb, Nick, I really appreciate you spending your time and sharing your expertise, and the value you bring to the industry and your customers. And obviously we really enjoy working with you as partners with PlexTrac. So thank you for your time, and I appreciate everyone else’s time. I’m going to hand it back over so we can do the gift card drawing. All right, excellent. Well, Dan and Caleb and Nick, thanks so much, and I’ll let you guys drop out.