

Death to the Document: A Guide to Creating Security Reports for the Digital Age

Cybersecurity reporting has been stuck in the dark ages. Why is an industry defined by innovation and technology still thoroughly dependent on a paper/PDF document to deliver its highly technical work?

Series: On-Demand Webinars & Highlights

Category: Reports


Transcript

Hello and welcome to today’s remote session. My name is Drew Todd from the Secure World Media Team, and today we are discussing Death to the Document: A Guide to Creating Security Reports for the Digital Age, and a special shout out to PlexTrac for sponsoring today’s session. Again, I’m Drew Todd, and a little bit about our webcast today. Cybersecurity reporting has been stuck in the Dark Ages. Why is an industry defined by innovation and technology still thoroughly dependent on a paper PDF document to deliver its highly technical work? The security report needs to come into the 21st century. Continued reliance on antiquated processes and technologies is harming both the industry and the consumers of the work produced. The solution is in plain sight.

We must move to electronic reporting using applications that are designed to maximize reporting efficiency and make consumption of our results not just convenient, but enjoyable.

I’m joined today by Nick Popovich and Shawn Scott, both from PlexTrac, and we are excited to hear from them today. And a little bit of a reminder for our audience today: our slides and documents are available in the resource list. You can submit questions in the Q&A chat on the side of your screen. The certificate of attendance, very important, will be downloadable after 50 minutes. If you’re having any audio or visual issues, just refresh your page and join again. And just a reminder, all of our webcasts are recorded, available on demand at the original registration link.

So with that being said, I’ll hand it over to Nick and Shawn for their spiel.

Thanks so much, Drew, really appreciate it. Thanks for having us on, and glad to be here with your audience. Real quick, I’ll have Nick go ahead and introduce himself, say a couple of words about his background, give you a little context of where we’re from and what we’re here to talk about. And then I’ll take over.

Yeah, very cool. Thanks, Shawn. I am the hacker in residence at PlexTrac. Career penetration tester and red teamer, focused on offensive security and adversarial simulation and emulation. I started in the Army Signal Corps and moved into consultative penetration testing in about ’09. And as the hacker in residence at PlexTrac, my role is really to provide that hacker’s perspective in our innovation vision, and then to guide folks into how best to utilize our platform and our methodologies to really assist in dealing with the output from security assessment and testing. And I’ll hand it back over to Shawn.

Thanks, Nick. My background: I’m also former Department of Defense, retired after commanding a cyber operations squadron in 2018, spent a little bit of time as a consultant generating reports, and then in 2019 joined PlexTrac because I somehow decided that my life’s work should revolve around information security reporting, which is a dubious distinction. And so that’s why we’re here today. We’re here to do some preaching, and this may be a little bit unexpected, but we are here today to talk to you about why we need to change as an industry. And I come to you from the perspective of working for a company that is currently doing, for lots of people, the very thing we are telling you that you need to stop. And so we’re going to talk about the third rail of information security reporting, and that is the deliverable document: the thing that you hand off, whether it’s to the client or to the internal user at an enterprise who’s actually got to go and fix whatever it is that you found. So we’re going to cover the state of penetration test reporting today from our vantage point, where we’ve now seen over 500 different offensive security reporting methodologies and templates.

And then, for the first time ever, we’re going to release some of the results of a study we undertook, because we have this unique vantage point here at PlexTrac and we do see how everyone in the industry is doing their reporting. We were able to complete a study where we identified what people are actually reporting. Am I reporting the same things that other people are? What can we learn by taking that kind of macro-level look across the community? We’ll talk about the results of that study and how they might inform some of our decisions moving forward, and then we’re going to get into the real meat of it, the fun part, which is why I’m going to tell you that as an industry we’re holding it wrong. It’s honestly common sense stuff, and nothing I’m going to say is going to shock you, but there are probably going to be some aha moments. Let’s talk about why the primary way that we’re delivering the valuable work we do is just stuck in the Stone Ages, right? And then, of course, that’s the problem, so let’s talk about the solution: what are the advantages if we move to a world where we are doing electronic delivery of our results? So I want to start by making a bold claim.

I truly believe that the vast majority of information security testers, especially on the offensive security side, really worship their documents, the thing that they hand over to their clients to get paid, or to continue to get paid if they’re working in an enterprise. Right? And how did we get to this state? In my mind it started with some benign requests from the marketing department: hey, can you just start using our style guide? And then someone from sales probably said, hey, if you had some of those fancy graphics we could probably sell more of these things. And then maybe if we started wrapping up your tables in some more beautiful styles, throwing in some fancy header and footer graphics, we can really make this thing pop and sizzle, and we’re going to be able to sell more of those things. And I think that someplace along the way we, the actual security professionals, started drinking the Kool-Aid. And Nick, I’d love to get your thoughts, whether you want to raise the BS flag on anything I’ve just said, or, from your perspective, do you think that’s accurate? No, not at all. In fact, I think it all came from positive motives. The idea being that we’re performing activity and you’re going to try and distill the depth and breadth of a week or two’s worth of consultative effort into a document.

So you want that document to be pristine, you want the data to be pristine, and you want the appearance to be pristine. And then similarly, especially from a consultative standpoint, when you’re trying to differentiate your services, other than saying things like hire us, pay us to do good because we have really smart people, I think consultative organizations in particular view their report and their deliverable format specifically, not so much the data but the actual format and how they present the data, as intellectual property, and they’re very protective of it. And that’s sometimes presented as a differentiator, which I think is actually a misnomer, because the data is more important. The data is more important than how it looks. But you’re absolutely right. And I’ve definitely seen folks who treat their report template, both in statements of work and in their marketing efforts, as basically saying we can give you the prettiest looking report and that’s why you should choose us. Thumbs up.

Yeah. I’m not making an argument that there’s anything wrong with pristine or perfection or, quite frankly, even with marketing and appearance, but it comes at a cost. Right? And what have you seen in your experience, some of the frustrations as a tester who wants to actually get back to doing more testing but has this hanging over them? Well, I mean, you could sit there and realize we hire hackers to hack, and they’re spending more than 50% of their time dealing with formatting issues in a document, fighting with fonts and table layouts. And then I think what we’ve done as an industry, too, is a disservice: we’ve conditioned the consumers of security assessment and testing reports to have this expectation.

And when a report doesn’t have certain features and visual representations of data that folks have been trained to expect, they can’t see past that. They can’t see the value of the data, the flaws, the vulnerabilities, the attack narrative, the chain of compromise, because they’re stuck on the fact that my reports usually have this bulleted list in this fashion, or the critical nomenclature right here in this fashion. And so I think we’ve gotten a little bit lost in the sauce with graphics in the deliverable. And having your security assessors spend more time on clerical work versus executing worthwhile endeavors is not an efficient use of their time or of your money paying them. Yeah, absolutely. I mean, you’re baking that cost in, right? These are billable hours. I think most consultancies these days, for top notch testers, are charging $350 an hour. Right. And if those were billable hours, you could be driving revenue.

I would also argue that this isn’t a completely new idea. The concept of delivering things electronically versus in a document is starting to get some traction in the industry, but not in ways that I would expect. You know, we work with a lot of clients. Some of them use our platform as a portal, some have had to create their own. But where I think we might actually start to see broader adoption, because they’ve been built from the ground up this way, is with the rise of pen testing as a service, things like HackerOne and Bugcrowd, offerings where you don’t have the option of a document; you’re just going to have to use a portal if you want to buy the service. So do you think that’s going to drive further adoption as we continue to see more adoption of PtaaS? I tend to think, if I put on my forward thinking cap, that it has to, like you mentioned, and the list of providers keeps growing.

You’ve got Cobalt and a lot of these different folks, and the pen testing as a service model really is about having a dashboard, being able to track flaws, dealing with the data, and then you’re going to need systems that can collect that data and be a place to curate it and action it. Because that’s the problem: how do you take the data into action? I do tend to think that as folks look more at pen testing as a service, where they really have to have dashboards and be able to track the status of findings, that is starting to become the norm, and a report consumption model that is really driven by digital is going to become more the norm as a future state. Sure. Yeah. And this third bullet, I honestly think, is one of the more important ones on this slide. One of the things that I get to do is really get deep into the workflows and the methodology of the people that I work with. And it’s almost universal across the board.

Here’s how retesting works. We wait till we have completed every part of an engagement. We wait till we’ve written the report. We wait until it’s gone through QA and our technical editors, until we deliver it to the client, or to the end user for an enterprise. Right. But especially on the consultancy side, if there’s a retest SKU built into that statement of work, it’s not an open-ended thing. It’s like, okay, great.

You’re going to fix certain things by this date and then you can ask us for a retest. But the result of everything happening in one big giant document, here’s your results all at once, is that delivery gets delayed, and then you’ve got to do everything you’re going to do before we even begin to talk about a retest. It really drives a methodology that is unresponsive, especially if you’ve got critical things wrong in your environment, and it’s really just a methodological shackle on best practices. Thoughts on that? Yeah, I agree 100%. I mean, with the retesting paradigm, the onus is really on the consumer of the service to track retesting. Typically it’s punted off: you’ve delivered a static deliverable, typically some sort of PDF and/or companion Excel document.

And then from that, a lot of times folks have their own dashboards and they’re trying to hand it off to the teams that need to fix it. They’re trying to get retesting. And like you said, with the SKU there’s usually a 30-, 60-, or 90-day retest clause in there, so you have a certain number of days, because after that they’ll usually say we won’t retest, it has to be a new engagement, because the state of the environment has changed too much and we can’t attest to it in that fashion. And then on the back end, as a provider, it’s very disorganized. It’s go and pull the last PDF, go pull the last doc, and if you didn’t do the original test, you’re kind of pulling through the notes and trying to retest.

So you end up having the onus of tracking retesting land on the consumer of the service. Hopefully they have their ducks in a row, and that’s on them, but many times they’re really not sharing that with the provider of the retesting, so there’s disconnect. It’s static, and everyone is trying to figure out how best to get these findings fixed. And at the end of the day, it’s interesting how much chaos there can be in the retesting phase, because the reality is it’s kind of the most important part. The most important part is, after you’ve identified these risks and flaws, they need to be mitigated and shown to be fixed. So yeah, it’s tough. Thanks for the insights. Nick and I have been sharing some anecdotes from what we have observed.

Right. But one of the beautiful things about the positions that we hold and the vantage points we have, with all the different consultancies and enterprises we work with, is that we actually get our hands really dirty with their methodologies and we see how they’re reporting. And so I kind of teased this a little bit in the intro, but getting beyond anecdotes, we actually did some research here. We’ve seen hundreds and hundreds of these different things, and we took a sample of 50 report templates, offensive security report templates, primarily pen tests. There were some other cats and dogs that were a little bit different in there, but all things where you’re going to present findings to a user that need to be remediated. For this sample, we chose large or very large consultancies. We wanted to find those mature consultancies that have had the benefit of a lot of input and a lot of voices into how they’re reporting. But we didn’t want to leave out the enterprises either, because we’ve got plenty of mature programs out there, especially in larger organizations, and they produce very important and very similar reports as well.

So we got this beautiful sample set of 50 high quality templates from respected testers in the industry. And then we statistically coded for over 40 different types of data. We grouped the data by sections of the report, right? Errata, all that stuff that goes after the cover page; executive summary, which is really just all of the narratives, and what kinds of narratives, and summary tables and things like that, basically everything that’s after the errata and before either your detailed findings or your attack path, if that’s a portion of the report; and then obviously the attack path and detailed findings as well. And what was the goal of this? Just to understand what people are actually reporting, what is common, what is uncommon, and then use that analysis to drive some further questions. We didn’t really go into this with a hypothesis, other than we expected a pretty wide divergence: generally almost everybody includes these data elements in an offensive security report, and then you’ve got all these cats and dogs that are much less used, right? And so today, for the first time, we’re actually going to share with you the results of that analysis.
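(For readers following along: a minimal sketch of how that kind of coding exercise can be tallied, using hypothetical data element names and template contents rather than the actual study data, might look like this in Python. The tally is what produces the adoption percentages discussed next.)

```python
# Minimal sketch (illustrative names and values, not the real study data):
# tally how often each data element appears across a sample of coded templates.
from collections import Counter

# Each template is represented by the set of data elements it was coded as containing.
coded_templates = [
    {"introduction_narrative", "summary_findings_table", "methodology"},
    {"introduction_narrative", "severity_definitions", "cvss_score"},
    {"introduction_narrative", "summary_findings_table", "affected_assets"},
    # ... one entry per template in the sample
]

counts = Counter(element for template in coded_templates for element in template)
sample_size = len(coded_templates)

# Print adoption rate for each coded data element, most common first.
for element, count in counts.most_common():
    adoption = 100 * count / sample_size
    print(f"{element:28s} {adoption:5.1f}%  ({count}/{sample_size})")
```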

So, a few tables here. On the left hand side are data elements that are part of what I refer to as the executive summary section of the report. On the right hand side are the data elements inside the detailed findings. Obviously we coded for those other sections of the report, but these are the two that are the most critical for us to discuss. The rows that are shaded in green are those where there was greater adoption or use of that data type. So, for example, the introduction narrative: almost everybody’s got that, right? 92% of the people. You hired us or tasked us to perform this kind of test on this application or environment, starting on this date and ending on that date, right? What was in scope? And, my apologies here.

The title here is really a summary of findings table. Just that high level: title, severity, maybe a CVSS score, maybe not, but just for the high level leadership, what was discovered. Oh yeah, and then how many of each severity of finding did we find? We found ten criticals, 15 highs, things like that. What was our methodology, how did we go about doing our test? Probably close to three quarters of the sample included that. And then what do the severities mean, that explainer table: critical means this, and you should do this. But then we find that things drop off really quickly. After that you get some additional narrative sections, but everything else is under 20%. So definitely some key areas of commonality.

And then I think there were probably another eight to ten other types of data that only a handful of people actually included. When we go over to the detailed findings, obviously just about everybody is including a severity, some sort of description of what the finding is, a verbose recommendation of how to go about actually remediating the problem, where the problem lives, your affected assets, and then some sort of evidence or technical details, your screenshots, your code samples, things like that. And then, once again, a pretty steep drop off. And I didn’t even have room: there were about another seven attributes under this 14% here, things like port and service data, right, that people will optionally include, or sorry, always include, in their reporting methodology. So the question that came to mind, Nick, after I did this analysis is: okay, why is that? The inclusion of data like a CVSS score, or perhaps an impact and likelihood score, or port and service data — are those useful bits of data? Are they useful to some people and not useful to others? I think there are strong arguments that these less adopted data types do provide valuable information to specific audiences.

There are users downstream that are going to get useful data out of that. So why are we not including those things with every report, so everyone gets all the things? You know what I think? Yes, go ahead, man. No, as you’re saying that, I’m looking at the data. There’s obviously commonality, but when you look at all the things, especially in the finding detail types, where it drops off dramatically, I think what we’re seeing is a lack of visibility, as the provider, from their template or from their perspective, into what happens between when the report is initially delivered and when it is consumed by the consumer of their service. A lot of those things — for example, many people have SLAs tied to CVSS scores. They have to, within X number of days, address, mitigate, or put compensating controls in place for CVSS scores of this and this. Or the impact statement, right, how is it going to impact their environment? A lot of this data — because it’s a static deliverable and there’s no portal for it to exist in —

typically what we’re probably not getting to see is that after this data is disseminated, folks are taking it, cutting it up, and the consumers of the services are adding their impact statements. They are adding a lot of their own information. They’re maybe changing the context of the CVSS score based on their institutional knowledge. So what I think you’re highlighting here is the fact that, as providers of services, whether it’s as an enterprise to business units or as a consultative organization to other entities, we have such good data. We can present good data, and if we presented it in a collaborative format,

in a portal or digitally, that would deliver so much more value, because right now the consumers are having to take this, probably mark up their own documents, start their own tables, and do a lot of work on their own. And this is the type of stuff that’s really going to drive the flaws getting fixed. So my initial thought is that you’re highlighting the fact that, with static deliverables, especially from a provider standpoint,

the utility only goes so far, and then the work is still being done. The data is still being consumed. It’s just being cut up and sliced up by the consumers and probably never seen again, except by the couple of folks who are tasked with remedying these flaws. But as the team that’s providing the data, we don’t get to see that context, we don’t get to see the impact statement, we don’t get to see the adjusted CVSS score. So then it goes back to: how do you triage retesting? How can you provide value as the provider of the security services if you don’t have the full data, the full impact? And again, it goes back to this kind of chaotic process of retesting that, from my perspective, seems to be ad hoc at times and difficult to do in static deliverable formats. Good deal, good deal.

Well, you know, after I did this statistical analysis, I actually went to some of the clients that I talk to on a regular basis and I was like, hey man, we did this analysis, and you’re not reporting CVSS scores, and you’re not including an impact and likelihood score, and there’s no port and service data on your hosts. Why are you not including that stuff with your data? And I actually got a pretty common response in these conversations, and it was really about signal-to-noise ratio decisions, right? So let’s be honest: when you’re putting together what’s going to go into a document, that whole process suffers from the law of diminishing returns. The more information I include, the more difficult it’s going to be for a user to find the information they need for the task at hand, and to prioritize and analyze the data. So information that might be useful signal to one set of remediators, because of how their environment is architected, might just be complete noise to another. And if you included every single data attribute that we coded statistically in this analysis, that’s how we get to this industry archetype of the 300 page report. Right. And so I’m not just positing.

I’ve now got empirical evidence, from conversations with people, that they’re making conscious choices about what goes into a report and what doesn’t. And they’re making those choices on behalf of the people who have paid them for information, and generally without collaboration or consent. Right. You know, nobody goes to a client and says, hey, here’s a menu of possible data points I can provide you. Please mark the boxes and your sushi waiter will be with you shortly. Right.

I actually started keeping track of some of the actual quotes people have given me when I ask them these questions, you know, why are you including this or why are you not? I hear things like: I really don’t know how valuable this information is to the people who have to fix these things. Or: it’s good information, but we’re really trying to tighten up and shorten our reports because they’re just getting out of control. Or, this is my favorite: I don’t know, we’ve always just included this information, but I’m not sure that we should. And then they’ll ask me, what are you seeing? And I’m like, well, now the good news is I can tell you exactly what I’m seeing, because we’ve actually done the analysis. Right? Right.

And so I think that, more often than not — and I’d love to hear your experience on this — if one client asks you to include something one time, you’re going to throw that into the report. You’re probably going to use that as a starting point, as a template, for the next report, and six months later you’re not going to remember why you started including it. Is that accurate? I think you’re right. From large to small, the boutique shop is probably going to be a little bit more amenable to custom reporting. The larger you get, in the bigger consultancies and providers, what you see is what you get. The reporting standard is established and they’re just like, this is how we report the data, deal with it. But I think it’s similar to any kind of product: when folks come in and say, we’d like to have this added, maybe there’s a heavy lift for that initial ask, but it’s like, this is value that we can add in, it makes our report look cooler, and it’s actually a value add. So why don’t we build this in? Absolutely.

Then all of a sudden you start having a business impact section of your report, or these different sections added to the report, sometimes from a one-off ask, that then become baked into your methodology and your template. I will say that for a lot of providers at least — internally it’s a little different, I imagine, but from my experience as a provider — the idea of custom reporting is usually an upcharge. If they say things like, we want the report split this way, or we want this section tabulated in this fashion, that’s usually going to be an upcharge because it’s more hours. Because, as you see here, editing and QA, versioning hell — our efficiencies as providers are built on having documented, repeatable processes that folks can churn through and get going. So anything that deviates from that is going to be more time, which is going to be more cost. Absolutely.

And it’s guaranteed that there is data you’re collecting over the course of the engagement that, no matter what sushi rolls they choose, they’re never going to see. Right? Oh my goodness. Probably 70% of the artifacts, screenshots, logs, sniffs, and interesting things that are done on a pen test or a red team never see the light of day, because it’s hours and hours of deriving flaws from a lot of data, and the raw artifacts truly add no value. Someone’s going to see that and get completely lost in the sauce. However, I have had organizations be like, we want all your raw testing logs. And even then I’m like, that’s a lot of stuff, are you sure? Well, you know, if you’re paying, just show me the money, I’ll do whatever you like.

That’s basically it. And then, obviously, you touched on it: the editing and the QA versioning. Any team beyond just a handful of members is going to have some sort of workflow. The document, the report, is going to go to a technical editor, perhaps a senior manager, a more senior information security professional, and then there are all sorts of fun problems there. Not only do we have version control hell, but now we’ve also got a whole lot of sensitive information just kind of floating around. How are you handling it — sending an email, putting it on SharePoint, printing it off and mailing it? There’s a lot of back and forth and it is an absolute nightmare.

I mean, one of my worst was 17 versions. We had 17 versions, and then somebody 14 versions back was like, oh, I forgot to accept all of Bob’s changes. And Joe was like, well, I went off of this doc. It’s the versioning and the documents-everywhere nightmare. Absolutely. So let’s fast forward now to the point where we’ve actually delivered the report to the consumer and they’ve got this thing, and what do they do with it, right? Does anybody actually ever sit there with the PDF open on a screen while they’re doing remediation? They don’t, because they can’t easily find the information they need. Are you control-F’ing your way through your report? Maybe you’re lucky enough that you’ve got some hyperlinks in there, but actually easily searching and finding the data that you need? Here’s a great one.

Let’s say that you’re looking for a particular instance of a particular vulnerability on a particular asset. If I search for the asset name, I might have 17 hits, because that asset was impacted by multiple vulnerabilities in the report. So I’m spending all this time trying to find the data I need from a remediation perspective. Much less as a tester: somebody asks you a question about something, and now I’m the one who looks like a boob because I can’t even find the data in the document that I authored. Right, 100%.
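(One way to picture the contrast: if findings live as structured records rather than prose in a PDF, "this vulnerability on this asset" becomes a filter instead of a Ctrl+F expedition. A minimal sketch, with an entirely hypothetical findings schema and made-up asset names:)

```python
# Minimal sketch: structured findings can be queried instead of keyword-searched.
findings = [
    {"id": "F-012", "title": "SMB signing not required", "severity": "Medium",
     "affected_assets": ["hr-fileserver-01", "dc-02"]},
    {"id": "F-031", "title": "Log4j RCE (CVE-2021-44228)", "severity": "Critical",
     "affected_assets": ["hr-fileserver-01"]},
]

def findings_for_asset(findings, asset, title_contains=None):
    """Return findings affecting an asset, optionally narrowed by title text."""
    hits = [f for f in findings if asset in f["affected_assets"]]
    if title_contains:
        hits = [f for f in hits if title_contains.lower() in f["title"].lower()]
    return hits

# One hit instead of every page that mentions the asset name.
print(findings_for_asset(findings, "hr-fileserver-01", title_contains="log4j"))
```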

So you were talking a little bit before about all of the evidence, all the screenshots that you capture. So, awesome: I’ve just captured a beautiful full screen shot of a shell that I’ve just popped, with all sorts of really awesome information on there, and then I take that and I shrink it down to a five and a half inch figure.

Yeah, that’s a struggle. I mean, that’s a true struggle. That’s why this bullet about media and document based evidence is huge. Because many times a flaw’s impact, the reality of the flaw, needs to be a story. It needs to be something that can be consumed. And so a picture is worth a thousand words, but other media formats are worth a million. And there have been plenty of times where I’ve had to take four or five different screenshots to show maybe a composite, chained scenario to get from point A to point B, because you’re pasting in a screen grab and then a couple of lines of here’s what you did.

When the client or the consumer gets that document, the data custodian or the system owner gets it, and they’re trying to reproduce it or understand the flaw, they’re squinting at that one picture like, I don’t understand how you got here, even with the command output and a screenshot. Being able to post multiple digital pictures, or a GIF, or video, or something that gives you much more context, the beginning, middle, and end of a flaw, is huge, and something that we just lack in static deliverables. Yeah, absolutely. It’s funny, I was watching someone one time who was looking at a report in a Word document, and they had a full screen collapsed down into a five and a half inch picture, can’t read anything, so they grab it and expand it out, and then you’ve just got a bunch of pixelated blurry crap, you know. So it’s a physical limitation, but absolutely. And you touched on videos.

So a picture’s worth a thousand words, a video’s worth a thousand pictures, right? Depending on how good your graphics card is and what your frame rate is. But also you’ve got additional documents that you’re collecting. Maybe it’s logs, maybe it’s the raw scan results from a scanning tool. And what do you do with that? Do you actually print that stuff out? Chances are no one’s ever going to want to use it, but they might. And I can’t give them that in a traditional document format, but if I had some other way of just hanging it on a web based platform — Bob’s your uncle, right? That’s it. That’s the one.

So let’s talk about this last bullet down here, which ties into the bullet where we talked about how the current reporting methodology drives the retest methodology. When I have to wait until everything is pristine, to steal your word from earlier, and I’m doing that one-time dump, I am increasing the time to delivery of information that may be about critical vulnerabilities in the environment. Nick, how many times have you sat there and had this conversation in your head: it’s going to be about another three weeks before I actually get this to the client, should I give them a call now and let them know? Have you been in those situations? Yeah, there are a lot of folks who do it in different ways. And again, it ties to the idea that having a digital portal and a way to consume that in real time, or more elegantly, is clutch. Because a lot of organizations have an SLA or an agreement around critical impact, and that’s not the same thing as a finding’s severity rating being critical.

We define critical as something that could be taken advantage of by a real threat actor to compromise your system externally, something that’s kind of like stop the presses, let them know about it now, and the rest is up to the consultants.

Other than that — other than if an organization has it defined, or it’s in a statement of work that says something like, look, if there’s an imminent threat of compromise or a huge deal — even then there are gray areas. Consultant A says this is an imminent threat, let me talk to them about it. Consultant B says, no, that’s not an imminent threat, I’ll talk to them later. Then all of a sudden you wait those three weeks, you deliver the report, and the client is super excited to talk to you about the fact that their Log4j was open.

I’ve had some consultants say EternalBlue is a stop-and-go-talk-to-them-about-it; other consultants are like, eh. So what you have is that decision point. And then I’ll play devil’s advocate, because I’m always, you know, fighting for the user, just like in Tron, and for the consumer of the service. But I am a practitioner as well. I’m a hacker and I’m a pen tester. And let me tell you, piecemealing findings is a nightmare in and of itself without controls. Because once you open that up, the give-an-inch-take-a-mile thing happens. You say, here’s a flaw, and they start fixing it, changing the environment as you go.

They start trailing you. And then every day you’re getting on sync-up calls or having to deal with findings that you’ve piecemealed out to them, and you’re not testing — they’re paying you to test the environment. But now, as you piecemeal findings, you’re being henpecked to death to talk about the findings you’re piecemealing out. So there has to be a balance. And I think one of those balances, hypothetically, could be a portal where there’s collaboration, interaction, and maybe a little bit of self service, so that hackers can hack and readers can read. I’ll say, I couldn’t say it better, man. Hackers can hack, readers can read.

All right, so hey, just a recap here of why traditional document based reporting sucks: we’re making subjective decisions about what we’re actually giving people, and we all know there’s plenty of data we’re not giving them. We are stuck in versioning hell. It’s very difficult for both us as the testers and for the consumers to navigate through this stuff. We’re really hamstrung in our ability to provide media based evidence, and obviously we just finished talking about that. But one thing I didn’t even talk about here is that it just takes time.

And once again, it goes back to that conversation of billable hours. Right? And Nick, in environments that you’ve been in where you didn’t have productivity tools, where people are truly writing a document — as a proportion of the overall amount of time that a tester spends on an individual engagement, about how much of that is actually just dealing with the reporting? I’ll tell you what: in organizations where we were either going from a document library, which would be a giant fat Word doc of findings that you’re copying and pasting from, or you’re starting from a template and everything’s basically manual and you’re living in Excel and Word, the percentage of time — it’s tough. But I can tell you right now, when I was the practice director for a huge consultancy in North America, that was the number one pain point. The problem with consultants being overwhelmed wasn’t the technical work, it was getting behind on reporting. On the low end, for a typical common gig, a 40-hour, kind of Monday through Friday internal pen test, on the low end of manual reporting,

you’re taking 16 to 24 billable hours of reporting. That was what we would build in: 16 to 24 hours for reporting. But then we talk about calendar time, how long it actually takes folks. I have seen people take an additional 40 hours, an additional full week of time, to slug through that report and then through QA and then versioning hell. And then guess what? They’re booked for another test, because we only planned for up to two days of reporting, maybe three. So they’re getting new gigs and now you get the burnout scenario. You have folks working on a report while they’re starting the next gig.

And then once you get behind, in a consultative practice especially, you just never catch up. And the frustration of reporting — I mean, seasoned consultants, even in that type of space, could probably get it down to ten to 16 hours of reporting. But when you talk about — well, maybe we’ll talk about it later, but the dichotomy between manual effort, just slugging through it, and the efficiency we see from bringing in a digital platform is absolutely bananas. But yeah, the worst I’ve seen, the absolute worst, was somebody who took two full weeks of their time reporting, and I was like, look, we’ve got to get this right or this isn’t working out. That was one of my first performance improvement plan scenarios.

And for years now we’ve been sounding the alarm, beating the drums, talking about the critical shortage of trained technical information security professionals, right? And why do we — if you have a finite number of resources and you have a world of problems, man, we’ve got to be using people for the skills that they are best at. I mean, capitalism, division of labor, there are all sorts of theoretical foundations for this stuff. And it’s mind boggling that we are still having our technical testers spend that much of their billable time doing things they don’t need to be doing. I’ll stop preaching if I can get an amen.

Absolutely. If we could get a t-shirt that says all of that — exceptional. I like it, I’ll wear it.

All right, well, that’s the problem. Let’s talk about the solution, right? Let’s talk about what the world might look like if we have a web based platform that we can actually use to deliver this data to our clients as the primary method. Now, I’m preaching, and I’ll say, for purposes of being emphatic, that I want to kill document-based delivery. But the reality of the matter is this, man: people are always going to need an artifact of the engagement. They are going to need something for when it comes time for their SOC 2 or ISO 27001 audit, or when someone asks them for a letter of attestation, something they can hand over and show. Right? But that doesn’t necessarily have to be the primary deliverable.

It doesn’t have to be how we design it. You know, it goes back to that "we’re holding it wrong." If we allow ourselves to live in a world where we do have a modern, web-based method of electronic delivery, let’s just think about all the amazing things that we get. So over the last two weeks, I’ve had the unfortunate experience of having gone to two banquets, the things where you have to wear ties and crap, and then somewhere about 30 minutes into a speech that you don’t want to be listening to, and your iPhone is dying and you can’t even browse reddit anymore, someone throws a plate in front of you, and it’s got a three inch piece of cold salmon, some unseasoned Brussels sprouts, and maybe some sort of steamed vegetables. And it’s like, what is this crap? I don’t want anything on this plate.

Right? Okay. So maybe I also usually get a nice slice of cake right at the end. The desserts are usually pretty good and things like that. It’s kind of hard to screw up a brownie.

But contrast that with the experience you have when you go to a good buffet, right? It’s a guilty pleasure; everybody likes a good buffet. Why? Because I can go to the things that I want to put into my body, and I can grab those things, and I can ignore the cold, unseasoned Brussels sprouts, and I can ignore the three inch piece of salmon and grab myself a juicy piece of prime rib carved off right there. I can get the things that make me happy, that feed my body the way I want. And that’s really what we’re talking about at a high level. We spent a lot of time on the previous slide talking about the fact that we’re making conscious decisions not to include certain bits of data that we acknowledge might be valuable to the end users, because of the signal to noise ratio problem, right? I mean, no one’s going to throw a plate at a banquet at me that’s got everything on it so I can just pick and choose and throw the rest away, right? So it’s really a mind shift.

We are now empowering the end user to grab the data that they want. And if they want to see all of Nick’s 30 full screenshots because, hey, maybe they need a deeper understanding, it’s all there for them, right? Thoughts on that, Nick? Yes, absolutely. Being able to pick and choose, and then as a tester, too, from a consultative side or from a practitioner side, you make a conscious decision about what pieces of data to supply, and now you can start giving a lot more context. Folks can skip what they don’t want to click on, and click on what they do. It is absolutely a paradigm shift, and it’s an institutional mindset shift, certainly, but everybody wins when it’s not just an illusion of choice but a legitimate one: log in, work with the data that’s germane to your team. I mean, think about different teams.

Think about all the different teams impacted by a full stack application test. When you come in and there’s a finding that crosses so many borders — it crosses the DevOps team, it crosses the infrastructure team, it might cross the cloud ops team — there are different aspects of that finding that are going to impact different teams. You have one report; the person that consumes it has to go through and slice it up. And those reports are typically secured, right? Secure delivery. They’re not allowed to be disseminated as-is, and so certain groups aren’t allowed to see the full report.

So you’re taking screenshots or copying and pasting pieces to send to the different teams. What if you had the ability to segment those findings to the appropriate teams and provide access, so those teams can log in and see the information that’s germane to them and the finding that’s germane to them, versus this nightmare manual process? Absolutely.

And it’s not just about the data, it’s about whether or not you can actually get to the data and see the data. You know, when we talk about ease of access and usage, I want to expand that a little bit to talk about ease of controlling access. If I’ve got a document out there, how do I have the ability to audit who has had access to that thing? How can I control that? Once something is on paper or in a PDF, Bob’s your uncle. Unless you’ve got a very rigorous and well instituted DLP strategy in your environment, you’re really not in a position where you’ve got good control over who’s got that data. Right. I’m sure that you’ve seen reports that you’ve delivered sitting on coffee tables and all sorts of fun stuff. It’s interesting you bring that up.

So one time I did a thought experiment just to demonstrate secure delivery. Our reports in consultancy world were considered confidential, customer confidential, et cetera. And on the other side, they consider those reports to be controlled delivery, very sensitive, because it’s a roadmap to hack their environment, and many times they’ve got sensitive data in them, so you want to control the release. And so I talked with a couple of clients and said, would you want to be a part of a thought experiment? I simply put some honey tokens into the documents. It would work both in the PDF form — that was harder to do — but the document form had a honey token that just made a benign call out to a server that I controlled, so it could record that this document was opened, when it was opened, from what IP, et cetera. The companies agreed to it, the several companies we worked with, and their expectation was that we would be the only ones to see it.

And then only one or two folks on the other side, in a controlled fashion — we should only ever see that company’s IP space, because it was only supposed to be opened on the company’s VPN, et cetera, et cetera. And then each one of those — I think it was three different organizations — each one of those reports, upon delivery: after two months we went back and looked at the data, and each of them had 30-plus different IPs from outside the organization coming in from all over the world, which basically meant the report was being shared. Now, some of those were iPhones. Some of those were people being sent a PDF and opening it at home, or while traveling, and all sorts of different things. But the companies freaked out. They came in and said, hey, we need to check these out.

I was like, that’s not part of the experiment, and I don’t want any trouble. I’m just showing you the data; the onus is on you to control your documents. And then you get into the nightmare of document control, the DLPs, where there’s some maturity, certainly.

But I mean, how many times — if you’re using S/MIME, or if you’re using Adobe’s controls mixed with Microsoft controls — do you deliver to seven people and five of them say, I can’t open it, and then four of them say they can’t open it, and you work through it, and after a week you say, forget it,

and you just send off the PDF with no controls, because they’re like, hey, we’ve got a board meeting, we need to read this report. So controlling the delivery and dissemination of that report is difficult. And yeah, that experiment we did back in 2018 — more than just a thought experiment — showcased how, even in an organization that had strict controls in place, the document was opened all over the place where it wasn’t supposed to be.
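(For readers curious what the collection side of a honey token like that can look like: here is a minimal, hypothetical sketch of a callback endpoint that records when and from where a tagged document phones home. The path, port, and log format are illustrative only, not what was used in the experiment Nick describes.)

```python
# Minimal sketch of a honey-token callback server (illustrative, not the actual setup).
# The document embeds a remote resource pointing at this endpoint; each fetch is
# logged with a UTC timestamp, the source IP, and the requested path.
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime, timezone

class TokenHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record who opened the document and when; the path can encode which
        # deliverable was tagged so hits map back to a specific report.
        stamp = datetime.now(timezone.utc).isoformat()
        with open("token_hits.log", "a") as log:
            log.write(f"{stamp} {self.client_address[0]} {self.path}\n")
        # Return a benign, empty response so the document renders normally.
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep stdout quiet; hits go to token_hits.log instead

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TokenHandler).serve_forever()
```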

It’s great times. So Nick, I know that you’ve done work as a one-man shop and you’ve been part of some pretty large teams. What’s that like when you’re trying to write a document with another team member? You’ve finished the engagement, maybe each member had a focus area.

Tell me about some of the pains there. Nightmare. Absolute nightmare. In fact, one of the things that we used to do: if we had a collaborative engagement, where multiple testers meant collaborative reporting, we either upcharged or uplifted the cost or the time because of the collaboration. It was like a collaboration penalty or fee — not so much a fee, and it’s not a penalty that we’re collaborating, but it was something that had to be dealt with.

And so we had a couple of different paradigms. You know, one was folks just trying their best — you left it on the consultants, which, mind you, is never a good idea; we need to give them guidance. So that was a nightmare: different writing styles, different nomenclature, different taxonomy, copy and paste errors, literal formatting issues — just nightmares. And then on the other side of the coin, learning from our lessons of leaving it to folks to combine the report together and collaborate, which sometimes folks did well.

Most times it just added so much complexity, and there were so many errors, that it was frustrating. So we would assign one of the consultants to document consolidation. And that meant they got additional time, probably an additional eight to 16 hours on top of core reporting time, for document consolidation. And sometimes we would have five or six consultants working on large engagements. So one poor soul was charged with document consolidation, and it was their job to go through and make sure to collect everything.

And they just went through the hell that I explained, but they had some more time to do it. At the end of it, though, you’re error prone. The struggle of narratives is difficult because of writing styles and formatting troubles, and just reporting data in the appropriate fashion, in a repeatable process, when you’re collaborating, is a pants-on-head nightmare. Whereas the alternative is, if you’re building a report for electronic delivery in the same platform, everyone knows where their swim lanes are. It’s like, great, I started this finding, I’m working on this, you’ve got that narrative, and I can see what everyone else is doing. Maybe I’ve got some comments, maybe I’ve got some QA workflow tools, track changes, and I’ve got the ability, instead of handling those things after people have completed work and going back and fixing them, to just do it right the first time.

Right? Absolutely. I mean, how many times would we get to the point where it was time to deliver the report, and the document consolidation person is sitting there trying to do their thing: I’m waiting on stuff from these three consultants, I can’t finish my piece of the puzzle because I’m waiting on data, and I’m hammering on them, because that’s their responsibility as the delivery manager, which is what we would call them sometimes, or document consolidator, whatever.

Then you have to go chase down consultants and get — yes, again, I keep coming back to the word nightmare.

Well then, let’s talk about some of the advantages for the end user. If I’m able to go someplace and see the results of the most current engagement, that’s great. But wouldn’t it be great if I could also see how I did this time compared to last time? That’s almost an impossible thing to do if I’ve got two documents. If I’ve got an electronic portal where I can come in and say, great, here are your last four tests on this application, here’s the overall level of remediation of the things that were found previously, here’s how much is still open — why are we finding this again? Because we never finished closing it from the last time. Right? Having that historical context, as a decision maker and a leader in an organization in the information security space, is to me almost as valuable as the individual data from an individual engagement.

Because at the leadership level, I’m about solving problems programmatically. I want to look at the results of any given engagement as symptoms of problems in my management of the overall information security program. And I’m not going to get that from a point in time. But if I’ve got that historical context, I can start looking at trend data. I can start informing my decision making on things like what I need to request budget for — technical solutions, people solutions, all those things. Those are high level bits of situational awareness that I need as a leader and decision maker, and that I’m never going to get from a point-in-time report.

Right? And we’re seeing it in the industry already with the prevalence of continuous assessment and testing, attack surface management, asset discovery and assessment. The idea of the CI/CD pipeline and continuous delivery of services is really transitioning into the security assessment and testing space. Don’t hear me wrong, there’s still value in getting snapshot-in-time assessments, especially for niche situations, for a product or in your environment, maybe bringing in an elite team, a red team or something. There’s value there. But overall, for your security paradigm and your security program, moving to a continuous assessment and testing paradigm beyond snapshots is where we’re all headed. So if we’re not pen testing, if we’re not doing security assessment like it’s 2010, then why are we still consuming the data like it’s 2010? And the third-from-the-bottom bullet there: a centralized place for all of your security data. The status of remediation is part of that security data. If you get a document and you go and fix ten things, guess what? That document doesn’t magically change.

It hasn’t been enchanted by an elf, right? So what am I having to do? If I actually want to track my remediation, I’m now putting a burden on the end user to take the data from this document and put it into some sort of ticketing system, for no other purpose than giving me a way of tracking the remediation. And do you think that every bit of data from the report is being scraped into the ticketing system and getting to the person who’s got to do the job? No. Do you think they’re ever going to see that data? No. So it’s not just about having the historicals and having the analytics. It’s about where I actually am right now in fixing these problems, and then opening doors for analytics that are even more robust, like: am I meeting my internal service level agreements for remediation timelines, based upon the severity of a finding, based upon what assets it’s on? And I can’t do that unless, once again, under the traditional industry standard, some poor schlub, probably an intern, maybe a junior pen tester, is scraping all this data and throwing it someplace else.
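(To make that concrete: once findings live as structured records with reported and remediated dates, the SLA question becomes a few lines of code. A minimal sketch, with assumed SLA day counts and invented findings rather than any real policy or data:)

```python
# Minimal sketch (illustrative SLA values and findings) of remediation-SLA analytics.
from datetime import date

SLA_DAYS = {"Critical": 7, "High": 30, "Medium": 90, "Low": 180}  # assumed policy

findings = [
    {"id": "F-031", "severity": "Critical", "reported": date(2022, 3, 1),
     "remediated": date(2022, 3, 5)},
    {"id": "F-012", "severity": "Medium", "reported": date(2022, 3, 1),
     "remediated": None},  # still open
]

def sla_status(finding, today=date(2022, 6, 1)):
    """Classify a finding against its severity's remediation SLA."""
    allowed_days = SLA_DAYS[finding["severity"]]
    closed = finding["remediated"]
    age = ((closed or today) - finding["reported"]).days
    if closed:
        return "met" if age <= allowed_days else "missed"
    return "open, within SLA" if age <= allowed_days else "open, overdue"

for f in findings:
    print(f["id"], f["severity"], sla_status(f))
```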

Right? You know what, as you say that: bad data makes meetings. Because when folks don’t have enough information to fix the flaws, they’re going to have to go ask the testers how, and they’re going to have to go back and forth, and that’s going to delay the time to fix. It’s going to cause meetings to happen. And I feel like I can speak for everyone in the world when I ask, do you want more meetings? No. Well, incomplete data is how you make more meetings happen. So let’s do our part to make meetings happen less. Death to documents and death to meetings.

I love it.

And then this last bullet. We’ve been hammering on this for the last hour, man. It’s about how I get the vulnerabilities in front of the people who need to know about them immediately. Am I making those decisions? Am I making subjective decisions? Just take me out of it. Let me give you a method — and you can do this extremely easily with an electronic delivery platform — where, as I find things, I can release them bit by bit. I’m not waiting three weeks. I haven’t gotten behind in the hell you were talking about where, crap, the next engagement started, which means I have even less time to finish this report. And I have been on plenty of calls with clients where it’s a Friday afternoon, and they have hit a bump with their use of our product, and they’re freaking out because this report is already three weeks overdue, right? Yeah.

And it’s not just about releasing it. It’s about the whole workflow that goes around that. Dude, if you find a critical out there that needs to be addressed, not only can I release that finding, if I’ve got an electronic platform, dude, I can get you a notification, right? I can bring it to your attention, and it’s all easy. And you talked about how trickle release can sometimes result in a little bit of a feedback loop. Man, if we just condition people to self serve and get the data they need, everyone’s life gets so much easier.

I like it. Well, good deal. Nick, before we kind of wrap up and move on to questions here, any last comments or thoughts from you? We’ve definitely been preaching here, and for those who don’t know what PlexTrac is: we export beautiful bespoke documents, but we are also a platform for electronic delivery, and people find a lot of value in bespoke documents, and that’s cool. But, you know, at the end of the day, I want the industry to mature, and I think it’s time that we recognize that we’re all a little bit addicted to the crack we’ve been smoking for so long, and it’s time to embrace what’s coming in the future. I think, acting as the hacker in residence, I also get a lot of opportunity to be involved with folks who are trying out the PlexTrac platform, and then after the fact. What’s interesting is that it doesn’t matter if they’re a provider, a large enterprise, or a consultancy — many, many times, and I don’t have

empirical data, so I can’t say it’s 80% of them, but many times folks come in with the idea that the PlexTrac platform is going to be where they collect the data, and that internally, just as a team, they’ll deliver their document and that’s all they’ll get involved with. But very rapidly — a significant amount of the time — folks realize that letting it be something that the clients, the consumers of their services, leverage and utilize, and giving them access to that platform to use, can be a game changer. I’ve seen plenty of folks who end up creating new service lines based on having more time to execute excellence, leveraging platforms like PlexTrac.

Well, hey, man, we’ve got just a couple of minutes and we do have a couple of questions in here, so I think you’d be good to field this one, Nick. He’s asking: how do you address the mindset of "I’d rather have something in my hand" when attempting to implement this new method of delivery? Yes, I can take a whack at it, but I’d like to hear your thoughts as well after the fact. I think it’s in communication, it’s in evangelism, and in understanding that we have to help guide them, just like we’re guiding them to excellence and to a higher security posture with the output of our security assessment and testing work. We can be their trusted adviser and their advocate. The fact is, it’s habit. First it’s deciding: why do you need that? Is it an audit requirement? Is having a PDF sitting in a file share part of your audit requirement? Well, cool — like Shawn was saying, we always have the ability to produce an artifact, especially for those use cases. But it’s coming in and deciding: is there a way that they consume data that we can educate them on and help with? I think it’s being a trusted advisor, asking the whys and hows of how the data is used, and being able to fill that in with, well, let me showcase how you can get the same data or the same functionality more easily in electronic delivery.

So, understanding the use case for the document: is it a security blanket? Is it just that they have folks on the board who expect it? Is it because they take that document, create a PowerPoint presentation from it, and take it to the board? Finding that out and being a trusted advisor — I think it’s a more educational conversation. And also, maybe they don’t know what they don’t know. They have maybe never received output in a digital platform. And so being able to showcase it and say, well, check this out, here’s how you can do it — I think once the veil is pulled from their eyes and the aha moments happen, it’s like, yeah, this is what’s up. You can always still have the document option, but it’s a secondary option.

It’s ancillary: oh, neat, I have the ability to generate a document from this platform, but the primary use case is the digital delivery. That’s my opinion anyway. You know, I’ve had a conversation like this with people, and the point I made was: when’s the last time you actually read through the bank statement that got mailed to you, right? I mean, I still get a bank statement every month. I honestly never even open it. I take it and I throw it in the shredder, because I do everything through electronic delivery. We’re almost out of time.

We’ve got a couple of questions that are around security, and I think that’s important, so let’s talk about it. How do we address the security concerns of having this extremely sensitive information in a web based portal? And how I’ve countered this in the past is: well, let’s talk about the security of your existing delivery method of an electronic document. You’ve already told your story about the honey tokens, and that is a security concern. But also, how is having all these documents lying around squaring with your stated or legally mandated data retention requirements? How are you managing who’s got access? And, oh, by the way, what tools are you using to create your documents today? Because unless you’re banging them out on a typewriter, you’re probably using web based tools for report writing or document storage. So you’ve already made this leap, and it’s just a matter of a little bit of a mindset change. And Nick, in closing —

Thoughts on that? I know we’re running low on time, my friend. No, man, I think you hit it right on the nose. Outstanding. Well, hey, I want to thank everybody for joining us. This has been a lot of fun. Nick and I are actually going to be taking some of the results of the data that we presented and hopefully talking at a time near you soon. So we’d love to continue the conversation.

I’m Shawn at PlexTrac, and feel free to hit me up. And Drew, I think we’ve still got you on — back over to you. Yes. Thank you guys both so much for that overview. Really good stuff. A reminder for our audience: there are some resources for you to check out. Connect with PlexTrac on social media, see some walkthroughs, book a demo.

And then Nick and Shawn, thank you again. And for Secure World, coming up we have a few more webcasts if you’re looking for more information and some CPEs coming your way: Theresa Payton on ransomware; Are Your Devs OWASP-Aware or Educated; and Code Signing Best Practices for Protecting CI/CD Builds from Next-Gen Cyber Attacks. All coming up in the next month. If you’re interested, make sure to register on our website. And thank you all so much for attending today.

I hope you enjoyed hearing from Nick and Shawn. Thank you both again. Enjoy the day. Cheers. Cheers.