VIDEO Hack More. Report Less. Fix What You Find Every Time. PlexTrac Founder and CTO Dan DeCloss shares pro-tips, cutting-edge tools, and innovative techniques needed to successfully automate the entire testing lifecycle — from reconnaissance to exploitation to findings delivery — and improve time to value in his talk at SANS HackFest. Series: On-Demand Webinars & Highlights, PlexTrac ProTips Category: Reports, Templating Transcript Good afternoon everybody. Super excited to be joining you all today with the HackFest Summit for our webinar this afternoon. Really appreciate SANS giving us the opportunity to present. I think we'll let a few folks trickle in here and then get started, but really excited to be with you today. Okay, we'll go ahead and get rolling, because I'm excited to share a lot with you today and we've got a lot to get through, so we'll try to make good use of this time. But again, thanks for joining. My name is Dan DeCloss. I am the founder and CTO of PlexTrac. I've been in the security space most of my career, if not all of it. I have a master's in computer science with an emphasis in security from the Naval Postgraduate School. That time is starting to feel much more distant than it used to be, but it was a great experience and it's really how I got my start in security. I worked in the DoD for a little while, then as a DoD contractor, and then found myself in the private sector for many years. My niche was penetration testing with an emphasis in application security. I'm a practitioner by trade, and one of the issues I kept running into, and one I'm always excited to talk about, is reporting. I hated writing reports.
As a pen tester, reports took a long time; I spent a lot of time doing tedious work around a document, had a lot of issues trying to reuse content from past engagements, and hated coming back a year later and rewriting the same report for everybody. So really that was the impetus for PlexTrac: I wanted a better report-writing experience for the testers and a better way to deliver the findings, for not only the testing team but also the recipients of those tests, so that it could be much more collaborative. That's really what PlexTrac does, and I'm excited to share a little bit more about our solution and my experience, and then I'm eager to answer any questions that come up. So I'm going to dive in. Here's the agenda. We're going to talk through how PlexTrac can optimize your entire report-writing process for whatever type of engagement you're doing, whether it's a penetration test, a purple team engagement, a vulnerability assessment, or a risk assessment. We really focus on everything related to the proactive assessment lifecycle. So we're going to talk through and show you how we can help you optimize the entire reporting process, and then also show off the value of having a dynamic and collaborative environment: not only to create and write your reports and collect all the evidence that comes out of the engagement, making that experience much more streamlined, but also to have a collaborative experience with your customers or whoever you're doing a test for. Whether you're a consulting firm or service provider, or you're doing this in house as part of an internal security team, you get a better and easier way to manage the results of these findings in your environment or with your customers. And that all leads into the final leg of the stool, so to speak: being able to enhance your visibility. We have great analytics.
We have capabilities around managing service level agreements that you might have for closing out findings, and around seeing trends in whether or not you're getting better at resolving the risks that get identified out of these critical assessments. So that's what we're going to share today. I'm excited. If you have any questions along the way, please feel free to use the Q&A box in the webinar, and I'll either try to address them in real time or we'll have some time at the end to answer questions. First off, at a high level, this is what our service offering is as a platform within PlexTrac. As I mentioned, we really support all of the front-end use cases related to assessments. So anybody doing offensive security assessments, penetration testing, all of those aspects: you're either doing a lot of manual work or a combination of manual work and different tools in your tool belt. So we integrate with a lot of systems to bring these findings in, and then you have a single source of truth for all those issues, all those findings, and all the reports. We also have a content library for reusing types of findings and even the narrative sections, so we'll get into that. And then once you have all that in one place, you can send it to the different areas of your organization that can actually help fix the issues. You can use PlexTrac as a tracking system in and of itself, but we can also integrate into ticketing systems like Jira and ServiceNow and support a collaborative experience on the remediation side, which is where the work gets done. The most important aspect of the engagement isn't just identifying these issues, but getting them resolved, and then having some visibility into the analytics around that:
How quickly are we getting these issues resolved? What are the highest-priority issues we should be focused on, and how do we bring that into the context of our organization? So that's just a high-level overview of what PlexTrac is and how it all fits. But let's get to the really exciting stuff in terms of the demo, and we will dive in. As I said, PlexTrac is a dynamic web-based portal for managing all of your assessments. So when you land in the platform, you get a dashboard that shows the recent reports or engagements we've been working on, along with some items I specifically have been assigned, whether those are findings I need to address by actually resolving them or results I need to go in and QA; I have kind of an immediate dashboard for that. But a couple of notes about the platform itself: it's geared toward meeting both a service provider's use case and an enterprise use case. You'll see today we're presenting it from the enterprise internal team perspective. But you can change these names, whether it's departments or something else; you can white-label them and customize them to how you would want to address them. So if you're a service provider, instead of departments we actually have the label of clients, and you can specify any way that you want to direct access within the platform. But you'll see here, this is just a nice landing board. And I'm going to walk through what it would be like if I'm a penetration tester getting ready to collect all the evidence and start writing a report, or if I'm reporting as I go along the engagement. We really facilitate that workflow well. So I'm going to dive in, start to finish, on how you do that in PlexTrac.
Then we'll walk into how you can actually assign issues to different people and have a QA workflow all segmented around the findings and the report lifecycle, and then really show off how we can bring this to life with the analytics capabilities. We break everything down by clients or departments. In this example we're using departments, so we have all these contexts of, hey, we're going to be testing these different departments. If we dive into a department, you can see we have all the reports or engagements associated with this client or department. You'll also see that we have pretty robust role-based access control, where we can limit who has access to different departments. You can assign these based on the users within those groups, and the data does not carry over between departments or clients. So in some use cases you may have different business units or different locations, and in the context of a service provider, these would be your clients, right? A department would be equivalent to a client. So let's say I'm ready to start getting an engagement together and want to edit a report or come in and create one. You can also see that I can list all the different reports across all of the different clients or departments that we have, and you can get a high-level view that within each report we have the different statuses and capabilities for the workflow. Every report will start in a draft status, and then you can move it into different statuses based on your workflow: it's being QA'd, or it's ready for review, or it's actually published, which means the recipients can now view the results. So it really facilitates a streamlined workflow. First off, I'm going to show you what a report looks like, and then I'll jump back into how you get there, because it's actually quite simple.
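The report workflow just described, where a report moves from draft through QA and review to published, can be pictured as a small state machine. The status names and allowed transitions below are illustrative assumptions for the sketch, not PlexTrac's actual workflow engine:

```python
# Illustrative report-status workflow (status names and transitions are
# assumptions, not PlexTrac's actual implementation).
ALLOWED_TRANSITIONS = {
    "draft": {"in_qa"},
    "in_qa": {"draft", "ready_for_review"},
    "ready_for_review": {"in_qa", "published"},
    "published": set(),  # once published, recipients can view the report
}

def advance_report(current: str, target: str) -> str:
    """Move a report to a new status, enforcing the workflow transitions."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move report from {current!r} to {target!r}")
    return target
```

The point of modeling it this way is that a report can never skip review: the only path to published runs through the QA statuses.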
So I've got a report that I want to load in already, because it's got some fun items in it. One of the big benefits of PlexTrac is having reusable content, not only from a findings perspective but also for the narrative sections of a report. You'll see here that we have this narrative concept within every report, and this can all be built out from a template, right? So we can have templates for the different types of assessments that we might be doing, and when we bring these in, they bring in all of these narrative sections. From here, I can come in and edit the different fields, and I can add additional sections to the report. Think of the narrative sections as the parts of the document that highlight here's what we did and here's the general timeline: what I call the more prose-oriented aspects of the report, not just the findings. We make this streamlined in that you can have all this content pre-built, bring it in, and then just tweak it. One nice thing I'll show right now is that we have full capability around tracking your changes as you edit. Say we need to adjust the scope or something like that, right? Then somebody else, or I myself, can come in and adjust what we're saying in terms of making the fixes or adjustments. We can also make comments here. Let me make the comment; here we go. I'll come over here and we'll say, what's this? So we have a pretty robust capability for doing the standard work of getting the documentation together, working with track changes if we need it, and making comments within a report. You can do this throughout every text field within the report, and it makes it really nice for the QA portions of the report, right?
So if you have a pretty robust QA workflow around your reporting lifecycle, we support that quite well. You don't have to use it, but it's available to you. Another really important aspect of the engagement and the reporting piece is being able to manage the findings. We've got a pre-built list of findings here in this report just to show you what a report will look like, and then I'll start from scratch and show you how you get one done really quickly. You'll see here we've got different findings for the different types of test cases that we ran in this engagement. This is an example of a web app penetration test, and these have different findings. It's really nice to just have this content available. These findings either came in from our own individual writing of the report findings, or, as I showed in that previous slide, from one of the many integrations we support with different security tools; we also have what's known as the Writeups database, where you can bring pre-built findings content into a report. It's pre-built the way that you would normally standardize writing up the different types of findings. You can see we support a lot of metadata around these findings as well: the status it's in, the criticality or severity of the finding, who's assigned to the issue, when it was reported, and its source. So this really supports collaboration between not only the testing team but also the people responsible for fixing these issues. This one actually came in from a previous report that we had imported from PlexTrac. And then we also have this notion of an SLA, a service level agreement, for this type of finding. So we've got a label of high because it's a high finding.
And this just highlights how close we are to needing to fix this before alerts start going out to the right constituents, the people who are supposed to be aware that this issue isn't getting resolved in a timely manner. We also support tags. One thing that's important to note within PlexTrac is that we support tagging across the platform: you can tag findings, you can tag assets, you can tag clients or departments, and you can tag reports. That also helps with the analytics, being able to slice and dice your data however you want. You'll also notice that within these findings, we can navigate them quite well and we can assign statuses. One other thing you'll see: I'm assigned to this one, and that showed up in my dashboard as well. One thing that I love, having been a pen tester: we always support the ability to import screenshots, but one thing that is nice with a web-based portal is that we can also import videos. Hackers love videos. They love to show off what they were able to do within an engagement, and if a picture paints a thousand words, a video paints a thousand pictures, right? This is just an example of exploiting some SQL injection within the Juice Shop web app. It's really nice when you want a little bit more of a dynamic video capability within your reporting of the findings. I see a couple of questions coming in here, but I might need to hold on to them until the end; we'll see. So in terms of editing the finding, I'll show you what that experience looks like. We have some default fields within PlexTrac: the title, the severity, the score, the status. We support custom substatuses. Within PlexTrac we've been very adamant that there are really three primary statuses for a finding. This comes back to my 17-plus years working in security. The true status of a finding is actually open, closed, or in process.
But we also know that people have different workflows and would assign different aspects to those findings. That's why we support the ability to have substatuses under each of those major statuses, so that you can create your own workflow around them if you need to. A finding may stay open until it's truly been assigned, and then you can have it in the in process status with a substatus of waiting for review, or even some of those more unique statuses like Risk Accepted or Risk Transferred or something like that. You can determine what the ultimate status of that finding is, whether it belongs under an in process status or a closed status. These statuses are all customizable at the tenant level, and then you can assign the issues out if you need to. And again, we support the standard ways to edit and make these a dynamic experience within PlexTrac. We support CVE and CWE and tags out of the gate, and then you can have your own custom fields as well: you can specify a distinct format that is templated across all of your reports, or you can bring in custom fields on a one-off basis. We also support the notion of affected assets within a finding. We can add assets very quickly to a report; you can paste them in, and you can import them from, like, an Nmap scan or whatnot. The nice thing about being able to paste them is that if we had a list of assets, we'd paste them in here and add them immediately into the platform during the engagement, which saves a lot of time. Here's where we support the screenshots and videos, predominantly the videos at this point, because you can also add screenshots into the specific fields. It's all very dynamic. And then there are code samples, so you can have separate sections of the finding for specific code that you may have written. Each editor itself also supports code blocks as well.
So that's just the basics of what a finding block looks like. Then, once we're done with the findings themselves, or say we're in the middle of an engagement but we found something that's pretty critical and we want to get it into the hands of the people responsible for fixing the issue, we have this notion of a draft versus published status on the findings. Let's say we felt these two findings really needed to get delivered before we were actually done with the engagement. We're going to go ahead and set them to published, and then whoever is assigned to these items will get notified that they can come in and view them. It highlights, from a role-based access control perspective, that you can publish findings and people can come in and start working on them even while you are still polishing up the report or editing the narrative sections, right? And you actually publish the full report through these different report statuses here as well. Finally, let's say we also want to export this out to a document. We support full customization of exporting to Word documents, so we can export the report into your custom look and feel and how you would deliver a document. You don't have to stick with our format; you can have it look and feel the way that you want. This is an example of our export format, but we can customize everything here, and you just have to make a few finishing touches, like updating the table of contents, and then you're off to the races. So this is an example. We've seen thousands of templates and can support pretty much anything that customers throw our way. Just envision that this is how you would want it to be customized. But you can also take our word for it that this is how a lot of teams report on their issues, and it's a good format for managing your report.
You see, we can get in all of those narrative sections, we can order them however we want, we can have appendices at the end of the document, and we can format tables and provide a solid view of the findings and the finding blocks. This is an example of a multi-scope template: we did different types of assessments and touched on the web application, and here's what a finding block would look like. So you can let your imagination run wild with what you can do in the document export, but it's there for your convenience as well. That's the basic aspect of managing a report. One last thing I will touch on: we do have this handy little feature called the attack path, where you can drag and drop the different findings into different locations that let you specify what the attack path was, right? You can drag and drop these however you want, and this is another nice visual for the reporting in the platform. So I'm going to pause here because I've got a few questions, and I'll try to address them and then we'll move on. Can we consider this like Azure for documenting and reporting for pen testers? Yeah, we like to view this as a centralized database for all of the assets, the findings, and the different content that you're going to be utilizing within your testing program. The next thing I will show is how you can actually speed through a report. So I showed you having this pre-built already, but now I want to show you what it looks like to actually create a report and get going. While I'm doing that, I will just call this the SANS demo for today. I'll leave this in draft status. We've pre-built several different report templates. Keep in mind that this can be anything that you build, and we have a robust way to do this.
I'm going to go ahead and call this the Red Team template, because we're talking about a lot of hacker stuff today and you're probably enjoying the conference. We also have the notion of different findings layouts. So if we have specific fields that we always want to have in our findings, we can select that. I'm not going to show that today, but just know that if there are specific fields within your findings that you always have to have, you can incorporate them into your templating experience. We have lots of other metadata that you can supply to the report details themselves. I'm going to go ahead and fill in some of these custom fields, and I'll just put them as the same. This is just a nice way to get going with the metadata around the report. You'll notice that because I selected that template around red teaming, it brought in all of these report narratives, right? This is how we always standardize on writing up the report, and so we have all of this narrative content already available to us. But let's say I wanted to bring in a couple more sections. I have the template, but I might want to bring in one other field. So we have the notion of a narratives DB, which is pre-built content that you can supply and quickly bring in: things related to specific threat actors, whatever you want. This is already pre-built content, and we can just add it in right away, and it drops down to the bottom. But we can drag and drop it to whatever section we want, right? The other way is to say, hey, I'm just going to go from scratch, and this is a bespoke section for me, and I'm just going to put in some data, right? So we really give you the flexibility to make this as custom as it needs to be for this specific engagement.
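The idea of template-supplied narrative sections plus one-off sections pulled from a narratives DB can be sketched with plain string templates. The section titles, placeholder names, and text below are invented for illustration; PlexTrac's actual templating is richer than this:

```python
from string import Template

# Narrative sections a report template might supply (titles/text are invented).
TEMPLATE_SECTIONS = [
    ("Executive Summary", Template("We performed a $engagement_type against $target.")),
    ("Scope", Template("Testing covered $target between $start and $end.")),
]

# Reusable one-off content, standing in for a "narratives DB".
NARRATIVES_DB = {
    "Threat Actor Profile": Template("This engagement emulated techniques of $actor."),
}

def build_narratives(context: dict, extra: tuple[str, ...] = ()) -> list[tuple[str, str]]:
    """Render the template's sections, then append any extra DB sections."""
    sections = [(title, t.substitute(context)) for title, t in TEMPLATE_SECTIONS]
    for name in extra:
        sections.append((name, NARRATIVES_DB[name].substitute(context)))
    return sections
```

The win described in the talk is exactly this shape: the per-engagement work reduces to filling in a context, not copying and pasting prose.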
While automating a lot of the mundane stuff, you'll notice I didn't have to copy and paste any information from any previous reports; it's just there, available to me. Let's say we've moved on from the narrative section; that's how we've gotten some of that in. But now you really want to start working on getting findings into a report. Like I showed you, you can create a finding from that standard form, just a blank slate. That's in the notion of having really custom findings or custom aspects to this engagement, not really something that you would have found before. But let's say we are finding something that we've found before. This is where the Writeups database really comes in handy, and I'm just going to do some quick bringing in of some findings, because this is how we always write up these findings, right? Whatever your engagement or organization is, these are the standard ways you want people writing up the results. You notice how easy it was to bring that data in, and it has all this pre-built content for us, right? It really saves a lot of time. Again, I didn't have to copy and paste. I can come straight in and now start adding my screenshots and my videos and everything that supports this finding. I can continue to edit it, but I didn't have to waste a lot of time bringing this in from a different report and sanitizing it and all that. I know it's a clean template to work off of, and it really saves a lot of time for those that are writing a lot of reports. I showed you how to add affected assets and things like that, so it makes it really nice. So those are two ways that we can get findings into a report. But you're also using a lot of other tools, right? Throughout an engagement, you may use one or more of these types of tools, and we really support the ability to import from various sources. So I'm just going to show you real quick how easy this is.
I've got a Nessus scan over here that I'm going to grab and pull in. Give it a second here; it's acting slow. So I'm going to pull this Nessus scan in right here. Also, as we're bringing the findings into the report, we can tag them on import. We can tag the findings, say they come from a specific subnet or something very specific that we want to call out. We can also tag the assets that exist within the findings, in case there's something related to PCI or some other attribute that you want to apply. So go ahead and upload that. While it's uploading, I'll get a notification, and this table will populate once those results are done parsing. While it's bringing in the findings, I'm going to answer a couple more questions that have come in. If different pen testing teams are working within PlexTrac, can their view be restricted to a dedicated pen test project? We do have robust role-based access control for limiting access to all sorts of data, including the departments and the projects. There's also the notion of a classification tier, where you can set your own classification of reports. So if you want a report to be, quote unquote, top secret, you can have that classification within the platform, and then only people cleared at the top secret level can access it. Now, this is just an example; we're not talking about it from the government perspective, but we do have that notion of restricting access to different classification levels of data. So, great question. You'll notice that while I was answering that question, those results populated in, and here's a critical finding. It brought in all the details, it brought in the asset, and it will even have evidence that came in from the scan if it existed. Now I can get going right away on working on this report, or on this finding. Here's where I can actually start to do the remediation activities.
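A .nessus export is XML, so a minimal version of the import step can be sketched with the standard library. This reads only a few common fields (plugin name, severity, host) and applies tags on import; it's a simplification of what a real importer does, not PlexTrac's parser:

```python
import xml.etree.ElementTree as ET

# Nessus severity attribute: 0=info, 1=low, 2=medium, 3=high, 4=critical.
SEVERITY = {0: "Informational", 1: "Low", 2: "Medium", 3: "High", 4: "Critical"}

def parse_nessus(xml_text: str, tags: tuple[str, ...] = ()) -> list[dict]:
    """Turn each ReportItem into a finding dict, tagging findings on import."""
    findings = []
    root = ET.fromstring(xml_text)
    for host in root.iter("ReportHost"):
        for item in host.iter("ReportItem"):
            findings.append({
                "title": item.get("pluginName"),
                "severity": SEVERITY[int(item.get("severity", "0"))],
                "asset": host.get("name"),
                "tags": list(tags),
            })
    return findings
```

A real importer would also pull plugin output as evidence and deduplicate findings across hosts, but the structure is the same.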
And this is what's really important about having this dynamic and collaborative experience: while I'm writing this up, I can immediately notify whoever I think should own this finding. I can assign it to them, and they can come in and start adding updates right away. They can say, hey, I'm moving this to in process. I'm going to assign the Reviewing status, and I'll go ahead and assign it to Bruce Wayne. He seems like the right guy, and we need to say, hey, fix this. What's nice about this capability is that we now see the running track record, the historical log, of how this finding is being addressed. One other thing you'll notice in this view is that I can also assign this to a Jira ticket. I can create a Jira ticket straight from here, and all the data that's in the finding gets assigned to a Jira board. We have a robust way to integrate with those findings. I have my own integration-test Jira board set up already, so I can immediately create a ticket, and when we close this out, we will see that it's linked. Here's another cool thing: we can edit a lot of the different fields that come into the table. So if I want to see which issues are linked to a ticket, I can just add that column straight to this view, and then I can see that this ticket was assigned. And if I load up Jira, you see all this information came over. So it's not interrupting the workflow of the developers that might be using Jira to handle this. It's a bidirectional sync, so any status updates that come from there automatically feed into the report here, into the status tracker. So I've given quite an extensive demo of PlexTrac. You'll notice I just created a pretty robust pen test report consisting of 146 findings and a substantial amount of narrative data, all within the span of about ten minutes, right?
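For a sense of what the Jira hand-off involves under the hood: Jira's standard REST API creates an issue via a POST to /rest/api/2/issue with a fields payload. The mapping from a finding to that payload below is my own illustrative assumption; PlexTrac's integration (including the bidirectional sync) handles this for you:

```python
def jira_payload(finding: dict, project_key: str) -> dict:
    """Build the request body for Jira's POST /rest/api/2/issue endpoint."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": f"[{finding['severity']}] {finding['title']}",
            "description": finding.get("description", ""),
            "issuetype": {"name": "Bug"},
            "labels": finding.get("tags", []),
        }
    }

# Sending it would look like this (requires the `requests` package plus a
# base URL and API token, none of which are shown here):
#   requests.post(f"{base_url}/rest/api/2/issue",
#                 json=jira_payload(finding, "SEC"), auth=(user, api_token))
```

The bidirectional part then amounts to polling or webhooks in the other direction, so a status change on the Jira board updates the finding's status tracker.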
And so now we can spend more time doing the engagement, focused on the actual results and the critical aspects of the engagement, while still having an enhanced and streamlined reporting experience. That's one of the core powers of using PlexTrac: not only the ability to speed up the time, but to have this in a central location where we can immediately hand out tasks to people and they can immediately start working on remediating the issues, which is the most important aspect of the engagement, right? So I want to briefly move over to this: now that we've shown how you manage findings, how you get data into the platform, and how you manage the status and tracking of these results, what does that lead to? It leads to visibility into whether we're getting better or not. This is where we talk about using the power of analytics to give snapshots in time of here's where we are. Here's the breakdown by the different departments of how many findings are outstanding and what some of the most critical findings in our environment are, and we can quickly manage these, or at least see how they're being addressed, all in the central dashboard. We can narrow down and filter based on things that we want to see: not only the type of finding, but also the type of assets that exist in those findings. And then we can also select different findings tags, right? So in this example, we tried to filter on something related to the OWASP Top Ten; we didn't happen to have it. But now we can see the breakdown of each department or client based on the findings that exist and their statuses. One other nice thing is that we can also view this by asset, so you can see the breakdown of how many assets we have with findings and how many without. This starts to help identify whether we have a coverage gap, or whether we've actually tested the things we need to.
Are we able to view things at an asset level? And then what I also really love about this is the ability to drill into trends and SLAs. You notice it kept my filters the whole way across; I'm going to go ahead and clear those filters out. Now you can see the mean time to remediate these findings based on severity, right? This starts to give us a perspective on how quickly we're fixing issues and how urgently we're handling the most critical ones. Then we can also see a trend over time: are we getting better? Are we seeing issues continue to stay open and, as they come in, not getting addressed? Ideally, in this graph, we'd love to see the green bar higher than the red bar. It means that we're fixing more issues and burning them down, as opposed to just continuing to bring them in. And then this is where I talked about the service level agreements, where we have the notion of, hey, for a critical finding, we've established that we need to fix it within 48 hours; I guess in this example, we have ten days, right? And you can set all of these. I'll briefly show that you can specify these SLAs in a pretty customized way for your environment: the criteria that meet certain SLAs, who will get notified, and how soon they'll start getting notified as a finding approaches its SLA. So it really supports a lot of workflow around the actual tracking and remediation of the results. You'll notice here we've got ten findings in this snapshot, two of which have now exceeded the ten-day SLA and eight of which are within it. None of them are nearing the end of it. So we start to get a good perspective on whether we're getting better and how we're trending toward success in remediating these issues as they come in. I'm going to pause there and answer a few more questions, because they've definitely come in. What is the language support for the finding database?
So let’s just talk about the content library. You saw I brought all that data in, but this is where it actually resides. So we’ve got different repositories for the different narratives for the types of engagements that we may be conducting. And that can match, or it doesn’t have to match, the repositories that exist within Writeups. So this is all built into the platform. We do support importing via CSV for your Writeups, but it’s a JSON-based API under the hood. And so that’s how you could automate bringing data into repositories via the API. Again, it’s a RESTful, JSON-based API, and we do support importing writeups via our own CSV format. So that is how you can get data in quickly; no other language is needed to support these. And then a couple of other questions as we’re getting closer to the end of our time here. For service providers, what about other languages that aren’t English? The platform itself is based in English, but the content itself can be supported for most languages, including Arabic and Hebrew and things like that. Well, I’m sorry, I think Hebrew is a challenge, but most other languages are supported in terms of adding that language into the text fields and being able to get that into the document export. So that’s how a lot of customers handle it if they’re not using English as the native language. Could it be collaborative with customers or stakeholders if they want to review the progress or ask for a retest? That’s a great question, because that’s exactly what this was built for. Yes. So like I mentioned before, you can have the custom substatuses for the different things. So you’ll see here, this can be someone that says, hey, I’m ready for a retest. So within a report, I can just come in here as the one that’s gone and done the fixing, and I can say, hey. You’ll even see here’s an example. This is perfect. So here, this one actually says, hey, ready for retest?
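Since the import path described above is a CSV template feeding a JSON-based REST API, the shape of the automation is roughly "parse rows, emit JSON records". The sketch below illustrates that shape only; the column names and record layout are invented for this example and are not PlexTrac’s actual CSV template or API schema.

```python
# Hedged sketch: turn a writeups CSV into JSON-serializable records
# that an automated importer could post to a REST endpoint. All field
# names here are hypothetical.
import csv
import io
import json

CSV_TEXT = """title,severity,description
SQL Injection,Critical,User input reaches a query unsanitized.
Weak TLS Config,Medium,Server accepts deprecated cipher suites.
"""

def csv_to_payloads(csv_text):
    """Parse a writeups CSV and emit one JSON-ready dict per row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [dict(row) for row in reader]

payloads = csv_to_payloads(CSV_TEXT)
print(json.dumps(payloads[0]))
```

From here, each record would be sent to the import API with an ordinary authenticated HTTP POST; the exact endpoint and authentication are product-specific and omitted.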
And this person came in and said, closed, remediated. They didn’t add any comments, but maybe I come back in and say, actually, you know what, I need to review this again because it came back up, right? So it supports that whole lifecycle of being able to facilitate retesting of these findings. People that are assigned to them get notifications to come back in. So it really supports a collaborative workflow for not only the test generation and the test completion, but also the remediation of the results. Okay, let me see, reading through some of these questions. There’s quite a few; I’m going to see if I can handle some more of them. Let’s see. Okay, one thing here: how do you combine all the inputs on the back end? Are the ones mentioned on the first slide the only ones PlexTrac supports, or can you import live custom feeds from different utilities? So we do have an open API that you can build against for any other utilities or any other tools that we don’t have listed here. If it’s a more popular one, we will definitely work with you to get it in the list, right? And then we also support a CSV format so that you can bring that in. One other thing that I will highlight briefly is that not only do we support file uploads, but we also have integrations, from an automated perspective, with all of these service providers: Cobalt.io, Edgescan, HackerOne. I showed you Jira before; we also support ServiceNow, Snyk, and Tenable. And this list will continue to grow. This is from the automated side. So if we come in, you can see here was the last time we synced data, and you can bring those findings directly into a report in an automated fashion as well. So this is definitely very handy for those that are doing continuous scanning, whether that’s in an enterprise or if you’re a service provider doing continuous scanning on behalf of your customers.
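One practical detail of syncing findings from several scanner integrations on a schedule is deduplication, so a re-sync does not create duplicate findings in the report. Here is a minimal sketch of that merge step under the assumption that each integration exposes a stable external ID; the source names match the talk, but everything else is invented.

```python
# Illustrative merge of per-integration finding batches, keeping the
# first copy of any (source, id) pair so repeated syncs don't
# duplicate findings. Record shapes are hypothetical.

def merge_synced_findings(batches):
    """Combine batches of (source, findings) into one deduplicated list."""
    seen = set()
    merged = []
    for source, findings in batches:
        for f in findings:
            key = (source, f["id"])
            if key not in seen:
                seen.add(key)
                merged.append({"source": source, **f})
    return merged

batches = [
    ("HackerOne", [{"id": "H1-101", "title": "IDOR in /api/users"}]),
    ("Tenable",   [{"id": "19506",  "title": "Outdated TLS configuration"}]),
    ("HackerOne", [{"id": "H1-101", "title": "IDOR in /api/users"}]),  # re-sync
]

print(len(merge_synced_findings(batches)))  # 2
```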
Real quickly, I do want to show off one other capability in the platform, which is our Runbooks module. Runbooks really serves three core use cases. It supports the ability to conduct a true purple team collaborative engagement, where you can have the red team and the blue team both working in the same engagement, seeing what’s being conducted, and documenting what has happened during the engagement. Similarly, it can support tabletop exercises, running through different scenarios, as well as a test methodology. So say you have a large testing team and you just want to make sure everybody has covered the right procedures to execute within the engagement; that’s what this supports. And so, if we come into an existing engagement, you can see these are all the procedures that are going to be executed. You can add and subtract from this. You may have seen we do have a content library that has lots of pre-built TTPs, predominantly related to MITRE, but you can bring your own as well, related to OWASP or any other type of testing methodology. And so, when we come in here, we can see here’s what the red team outcome was. The blue team can specify whether they saw it or not. You can say, hey, we’re still in the process of this. If it becomes a finding, you can actually specify, hey, this is now a finding. So when this engagement is submitted, it immediately brings all the findings into the report and stores those procedures in another section of the report. So this is a really handy way to facilitate workflow within the engagement as well, and it all funnels back into the core reporting product. So, just a quick time check here. We wanted to recap what we were able to show in a brief amount of time: PlexTrac is meant to be a collaborative and dynamic experience, not only for the reporting capabilities, but also for the interaction with the constituents that are meant to be resolving these issues.
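The red/blue runbook workflow described above can be sketched as a small data model: each procedure records the red team’s action and whether the blue team detected it, and undetected procedures get promoted to findings when the engagement is submitted. The field names and the promotion rule here are assumptions for illustration, not PlexTrac’s actual model.

```python
# Minimal sketch of a purple-team runbook procedure and the
# "promote detection gaps to findings" step; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Procedure:
    ttp: str                      # e.g. a technique identifier from a TTP library
    red_outcome: str              # what the red team executed/observed
    blue_detected: bool = False   # did the blue team see it?
    notes: str = ""

def promote_gaps_to_findings(procedures):
    """Turn undetected procedures into finding dicts for the report."""
    return [
        {"title": f"Detection gap: {p.ttp}", "detail": p.red_outcome}
        for p in procedures
        if not p.blue_detected
    ]

procs = [
    Procedure("T1059", "PowerShell payload executed", blue_detected=True),
    Procedure("T1003", "LSASS memory dumped", blue_detected=False),
]
print(promote_gaps_to_findings(procs))
# [{'title': 'Detection gap: T1003', 'detail': 'LSASS memory dumped'}]
```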
But we also support the ability to export these results out into static documents, to serve as artifacts for the engagement, or artifacts for an audit, and things like that as well. So we support both of those use cases quite well. And we were also able to show that, because it’s all in this platform, being able to bring these results in, as well as show the tracking and remediation capabilities, you have much deeper visibility into whether your security program is getting better or not, and you’re able to answer some of those key questions. I’ll stop there. If you have more questions or more thoughts and would like to learn more, please hit us up at PlexTrac.com. There’s plenty of resources available there. I’m going to briefly try to get to a few more of these questions. I like this one: does it have ChatGPT capabilities? We don’t today. I would say that we definitely have an AI strategy in general, and we’re exploring how we can utilize that in a safe and responsible way, but not today. So that’s a good question, and certainly it makes sense that people would want more capability in writing their reports faster. And I think that about does it for time, because I don’t want to get cut off, but maybe we’ve got time for one more. Hang on here. How does the software determine vulnerability severity or clearance sensitivity labels? That is all customizable within your instance, so you can set the different classification levels that you would want, and you can also specify how severe findings are. So while the tools that bring data in may specify the severity of incoming issues, you have full control over changing those severities and supplying the context of your environment. Great questions. Well, I think I have come up pretty close to time here.
We’ve got a lot more capability within the platform, but I wanted to highlight the core functionality of how you actually spend more time hacking and less time reporting, have a better experience with your customers, whoever they are, in managing these reports, and ultimately help provide a safer organization for yourself or your customers. All right, with that, I will conclude our time together. Please reach out to me if you have any other questions. My name is Dan again, and it’s just dan@PlexTrac.com, and you can hit me up on LinkedIn as well. Great. With that, I will conclude our session. Thank you so much.