VIDEO: Unleash the Power of AI to Speed Pentest Reporting: Introducing Plex AI

Tired of spending hours crafting pentest reports? Not anymore! PlexTrac is officially the first pentest reporting and management platform to add AI capabilities. Our initial AI package, Plex AI, streamlines report writing and analysis of large data sets, freeing up countless hours you can dedicate to other critical tasks. See how the current capabilities work and hear our vision for the future, all of which is built with security in mind.

Series: On-Demand Webinars & Highlights
Category: AI, Product Features

Transcript

Hey, everybody! Great to be with you. I’m going to fire up our slide deck to get rocking here. I’m really excited to be with you today. Fun topic. Can everybody see the screen? It’s loading here. It’s loading. I can. Okay, I guess everybody else can’t talk. We’re really excited to be talking about how we unleash the power of AI to speed up pentest reporting and the introduction of Plex AI. We are very excited about this. It’s something that we’ve been working on for a while. As the founder and CTO now of the company, I always had a vision around AI and what it could do for our product and for our customers, more importantly. And so just really excited to be laying the foundation for that. So that’s really what we’re excited to be talking about. Nick, why don’t you introduce … oh, I guess I should switch the slide … Nick, why don’t you introduce yourself. And just appreciate you joining and taking some time with us today. Dan, thank you. It is always a pleasure to talk to dope people.
Nick Popovich, founder and hacker at Rotas, a pentesting shop, friend of PlexTrac for many moons, and really excited to talk about what we have going, because what we’re talking about right now is the tip of the spear in the maturation process of the cybersecurity assessment and testing paradigms and industry and all that good stuff, as everybody’s racing to make sure we have the best and brightest tools, techniques and processes. So this is pretty cool. Awesome, awesome. Well, thanks. Thanks, Nick. So, yeah, I think what we’re going to talk about today is, what does this actually mean for PlexTrac? How is PlexTrac implementing AI? We’re going to talk a little bit about the implications of AI in general in the space, but then, how have we focused it within PlexTrac? And then what is our vision longer term? We’ll share some discussion topics around where we go from here, and then talk about some of the key things that are on everybody’s mind, like, hey, what about security and privacy when we talk about AI in general, but then, how is PlexTrac approaching it? And then continuing to evolve as we grow. Where does security play a role in AI? I’ve written several blog posts recently, and we’ve had some other guests on our Friends Friday episodes. If you haven’t checked that out, definitely check it out. We’ve talked about AI and the importance of secure-by-design technology, secure-by-design infrastructure around AI, and why it’s important as security professionals, and then, more importantly, as a security company, how we’re using AI. So we’ll talk a lot about that, then we’ll obviously demo the product and the launch, and then take any questions from there. So feel free, as Hailey said at the onset, feel free to throw questions into the Q and A at any time.
If it’s pertinent to what we’re talking about right now, we’ll go ahead and try and address it. If not, we’ll address it at the end. So, Nick, I’m really curious. Being a pentester for many years, I think both of us are getting up there in terms of veteran status in the security space. Super fair. I call it seasoned, seasoned, seasoned. But talk to us a little bit about some of the pains that you’ve had throughout the years just from reporting in general. And then where do you see AI being used, and what value have you been able to get from it? Yeah, certainly. Also, just to kind of talk about the elephant in the room: not sure if it’s the official branding for Plex AI, but I do appreciate the cyber koala that is staring at us right now. That is harrowing. No, just kidding. So when you’re doing hacking and cracking and pentesting, there’s leveraging automation and expert manual analysis. We’ve done that. It’s a tired topic. It’s a well-documented discussion. I wouldn’t even call it an argument. There’s really no argument: you leverage tools as you expertly wield them. You still have to have expertise in executing with excellence. And there’s something to be said for having the requisite understanding and the prerequisite knowledge of tech stacks and what you’re dealing with. And that’s true in hacking, in DevOps, in every field. I think anytime a new tool comes in, it can at first be scary. There can be some naysayers. And I think a lot of the struggle that assessment and testing practitioners have faced for decades has been at scale. Going in and performing an assessment of several systems is far different than thousands upon thousands. And making sure that you’re not just finding low-hanging information and you’re not just finding the simple things, but you’re really going through and attempting to find vulnerable conditions, vulnerabilities both known and unknown.
And that’s the key when folks are hiring, whether they’re hiring internal staff or they’re hiring partners or providers, as you may say: the idea of being able to come in and at scale identify vulnerable conditions beyond just running automated tools and scripts, leveraging those and leveraging the output from those, but also then being able to come in and provide manual context. So the struggle that you have, and I think in the past, a number of years ago, when we first met, and I was actually working at a different pentesting provider at the time, the idea is we don’t want to be working in hundreds of spreadsheets and across different Word documents and text files. And when I managed a significant number of different testers, or when I still manage different testers, the idea of being able to report on the data is important. And so along the way, before even generative AI and any kind of machine learning, and that science has been around for 50, 60 years, as object-oriented programming, conditional statements, and logic gates continued to mature from the Tech Model Railroad Club at MIT, the idea of being able to find ways to provide automation at scale that was reliable has been the constant Holy Grail. You’re trying to find ways to take all this data that you gather during testing and assessments, and even internally in organizations, they’re dealing with all sorts of data. It’s a whole field, right, business intelligence. And so when it comes to being able to find ways to curate data, present it well and at scale too, with large amounts of data, because the more data you have, the more informed you can be as a risk professional whose purview is governance and risk. And so that’s been really exciting for me as someone who, you know, feeds their family with a pentesting practice; the ability to take data and package it up and present it well means more work.
When I was on large internal teams, it meant more efficiency, it meant the ability to do more with less: less overhead, less all sorts of interesting things. So, yeah, I mean, graduating from the Word doc and PDF world over the last couple of years into being able to leverage PlexTrac, as we do at Rotas, for our assessment and testing has been a game changer, full stop. And now, as we are continuing to drive home the idea of continuous assessment and testing, continuous monitoring teeing in with threat feeds, it’s not just going in and wham bam, thanks for the pentest for a week and see you later. It’s ingrained into the ecosystem. It’s just kind of a natural progression: as the rest of the technology stack is seeing where AI fits in its ecosystem, it makes sense that the assessors and testers and folks who are dealing with security data have to deal with that as well. Yeah, 100%. And I think, coming from the PlexTrac perspective: hey, we’ve always wanted to help in report automation, right? We talk about how we support workflow automation around the reporting lifecycle, and existing features that have been there from the start around your content library, NarrativesDB and WriteupsDB and everything around, hey, we have these templates and this prebuilt language. Trying to speed up a lot of the aspects around the reporting lifecycle has always been our goal. And so, with the dawn of generative AI, obviously AI has been around in a lot of different capacities, and we always had a vision for it, we’ll talk about some of that other vision, but consistently we get the question, can’t you just write the report for me? And it’s like, well, that’d be nice. But at that point, what’s the person there for?
You raise a really good point in the idea that I mentioned earlier about being a tool that expert practitioners still have to leverage. Honestly, with how AI is leveraged right now for a lot of projects, this included, it’s a beginning, it’s a starting point. It allows you to get information. And having looked under the hood of some of the Plex AI stuff, I’m extremely pleased and impressed. We’ll talk more about that in a bit. But it’s not the idea of, oh, we’re just going to throw data in and spew out magic with no thought. There’s risk contextualization. There’s a lot. And can you plug in a lot of that data, those variables? Can it do a lot of the heavy lifting for you? Eventually, certainly. But as of right now, you know, when the robot overlords take over and they make me see this clip 15 years from now, and they say, remember when you said that? As of right now, though, there’s still the requirement for a brain that can understand the context and add in information and unique nuances and those types of things. But the frameworks and the building blocks that Plex AI and PlexTrac help to build out make it quite expeditious to continue to move forward. There are no roadblocks of let’s get started or frame this up or stub it out, right? Yeah, much like what we’ve provided in the past, where you can have templates and writeups database aspects out of the content DB, what Plex AI is really going to afford you is the ability to speed up the actual contextualization aspects of the report itself. Right? Because while you have the templates and everything today, you still have to go in and provide the context specific to the report. Here’s what we did. Here was the timeframe, here’s the general summary. And that’s really what we set out to accomplish as our first launch of AI into the product, obviously addressing all of the things that we talk about here on the slide. But can we help you write reports faster?
Can we actually write the report for you in some sense, not removing your knowledge and expertise from the process? Because obviously, with anything in AI, you still have to vet it, make sure that it makes sense and that it’s something that you would still put your stamp on, because you’re still the ones delivering the report. But it can also start to help provide consistency and, obviously, scalability as your team grows, as you do more engagements, as you get more data. You know, the ability to go faster on the reporting lifecycle is just so critical, right? That way it keeps the teams focused on the testing, and the automation around the actual test execution is starting to extend back to the reporting lifecycle, which just means that your customers are going to get more value out of their tests because you’re spending more time finding critical security issues. You’re able to schedule more tests throughout your calendar year, so if you’re a service provider, that’s helping you generate more revenue. If you’re an enterprise team, it’s helping you get through more of your portfolio from your testing capabilities or requirements. Those are some of the benefits of AI in general in pentest reporting. I think we’re excited about being able to offer this to our customers and prospects. I’ll share a little bit about our AI vision, what we’re starting with today in the current offering, and where we want to head. So again, like we said, we’re the industry’s first pentest reporting AI assistant, bringing AI from a generative AI perspective to the reporting process so that you can contextualize the findings in a report and automatically write up summaries and narratives, as well as the findings themselves. So what we’ll demo to you, the current offering, is Plex AI. It is the ability to write narrative sections within a report based on the findings that exist in the report itself.
So that’s that piece that everybody still has to do. Even if you have everything templated out, you still have to go into the report and write the specific details of that report, or come up with, hey, based on the findings that exist here, these are the top issues, here’s what we recommend. That’s still a manual step regardless. But now we’re helping you actually automate that piece of it, which is super helpful. Also, within a specific finding, we can write up the description for you. We can take your description and help augment it, as well as the recommendations in the finding itself. So it’s using a generative AI model under the hood. It’s a private model. We’ll talk a little bit more about the security before I go on to the future vision. Nick, I know you’ve had a chance to share your thoughts before we show it off. It’s super cool. There’s so much in it. Back in the day, you’d sit there and spend time trying to make decisions and logic happen within your template, or you might write some scripts for it. So this is really huge, to be able to add a lot of the automation that folks want. And then, to be frank, where we’re seeing so much use for this is in kind of the exotic findings, as we move more into hacking at PLCs and hacking at products. And when you’re hacking at clouds and when you’re doing assessments against things that don’t fit a concise narrative; it might fit a CWE or a category, but when you want to add in verbiage, I mean, let’s be real, everybody’s doing this already. When my consultants come across an interesting thing, where do they go first? They go to a generative AI. They put in a little prompt about the flaw that they found, and they take the description, they take some of the remediations, and they tweak it. They make sure it’s right. It’s a catalyst for innovation and inspiration for them.
And so the reality is, I don’t think there’s a professional that works in IT who has been exposed to generative AI in the last two years who doesn’t use it a lot. Like, a lot. Maybe it’s not copy/paste, but it’s going to be looked at, maybe before Google, just to get context around something. And so having it in platform just makes sense. And I’m excited for it because of the number of kind of exotic, maybe esoteric findings that we come across that just don’t come from a tool. They come from observation or experience, and those are the types of things where you can leverage this. So, yeah. Pretty neat. Yeah. Yeah. So, I mean, we’ve gotten great feedback already from those that have been early adopters and users of it. So thank you for those that have been out there. And I will also say, what’s exciting is that this is just the foundation for the current offering. As time goes on, it’ll continue to train and learn and grow, so that it makes everything more accurate and more applicable. So that’s just part of what’s next. We’ll continue to improve its model and its training capabilities. We use external sources of data today, and we’ll talk about this in a second, but next, we will be able to provide, in a confined space within your tenancy, the ability to start learning on your reports. That’ll all be self-contained, but then it’ll continue to help learn how you write reports and how you write up your findings. So that’s some of the next stuff to come. But then in the future, where we’re headed with AI is really exciting. We’ve just recently announced the launch of our Priorities module, which is helping companies and customers of your service providers get a handle on exposure management within the organization. Right.
So being able to group programmatic areas of risk and then show progress over time. And where AI is really going to play a big factor in that is in helping suggest: hey, you have programmatic areas of risk, we think you should create a priority out of this. Oh, and by the way, here are the recommendations to go fix those things. Right? So that’s where we’re headed in the midterm with our AI vision, and then that continues to pave the way for advanced analytics and truly being able to chat with your data. So being able to have a prompt that says, hey, what are the most important things that I should be working on today? Being able to ask that question of your data and get an intelligent response is the goal. And being able to say, hey, what are the most reported things in our environment? What should the next area of focus from a test perspective be? What coverage gaps do we have in terms of our testing? Right. Because at the end of the day, we’re all focused on a proactive approach and a continuous proactive paradigm for security testing. And AI is just going to continue to help pave the way for suggesting test plans, suggesting what things to test for, specifically being able to bring in threat intelligence and highlight whether or not you have gaps related to those threats, those IOCs or the TTPs that are being exploited in the wild. These are all the exciting things that are to come with AI. This just lays the foundation for us. Really excited to show it off. I think that that’s enough lead-in. Yeah. Last but certainly not least, before we actually jump into the demo, I wanted to talk a little bit about security. That’ll cover one of the questions that’s popping up, too. Yeah, I missed that somebody put one in chat. That’s pretty germane to everybody. I know that we’re going to answer it, but everybody’s curious, and somebody put one in the Q and A. Yeah, yeah, yeah. Okay, so, yeah, okay, that’s a good one. And it flows right into the security by design.
Right. So I’ll talk about this and then answer this question. Obviously, security and privacy have been built in from the ground up within this context. I can’t tell you how many times, Nick, I’ve talked to people over the last year, which is kind of nerve-wracking, who say, oh yeah, we use ChatGPT to help write our reports, throwing data in there. And I’m just like, no, why are you doing that? I mean, that’s a public model. You don’t know and have any control over where that data is going and what it’s going to be used to train on. And can it be extracted out of the model? Right. These are all things that we as security practitioners should be focused on from a testing perspective. And how do we test AI? So we really set out with security and privacy in mind from a model perspective. Currently, it’s using a private model that is based on an open-source LLM, and then we’re using industry data on top of it, including NIST, MITRE, CVEs, vulnerability databases. We’ll continue to throw in more intelligence feeds as we get going down the road. And so that is what is being used to train the publicly available model, and I say publicly available loosely, in terms of what’s available within Plex AI. So that’s the data that it’s being trained on. We’re not contaminating any data across clients or tenants within Plex AI. So the question was, how is organizational data used to train your model, and is there an opt-out capability? For one, you actually have to opt in. This is an add-on feature, Plex AI in general. So if you are a current customer, you don’t have access to Plex AI out of the gate; you have to opt in and sign up for it. But two, your data is not being used to train the model today. Our plan is to isolate your instance of Plex AI to be able to start training its responses based on your data. But again, that’s not available today.
That’s the goal, and that would certainly be something you opt in to. But what we will always hold true to is that your data will not be used to train other people’s models. Right. So it will become more of a personalized model over time. That’s not how it works out of the gate today. That’s a future vision thing, and we welcome feedback; if that’s not where the market really wants us to go, we won’t go there, right? But we do see value. We’ve gotten requests like, hey, I’ve got all these writeups, can you use those to train our instance of Plex AI so that it starts to know how we report on things? So that’s kind of why we’ve had that discussion. But there is no cross-data contamination. Your data is not being used to train models. And if you haven’t even signed up for Plex AI, there’s not even a way for the Plex AI models to be given access to the data. So we really do take this very seriously in terms of how we’re treating the data and keeping a secure and private design at the forefront. So Nick, any thoughts or questions on the security side? No, it makes a lot of sense. We’re getting a lot of gigs over the last six months to a year where they want us to pentest their AI. It’s funny getting a statement of work request for, we want you to pentest ChatGPT. Like, what do you want? And it’s interesting, you know, as with any cloud provider, you end up putting a level of trust in them. And some organizations are using the OpenAI services through Azure to do some of that isolation. And so the retrieval-augmented generation (RAG) piece is isolated. Their document libraries and their instances aren’t being used in the public. But it is bananas for folks who come in.
I have, you know, a product manager who will come in and say, hey, we want you to do an application assessment on this platform, and we use OpenAI services, but it’s just the OpenAI services out on the public Internet. Our customers want you to pentest the AI to make sure it’s secure. And I’m like, well, it doesn’t work like that. I can’t just go fling packets at the Internet and say, bring me back data by the morrow. So obviously it’s great to hear that you’ve taken security seriously and that, again, under the hood there’s security by design in the code base, but then also you’re using a private model that you’re training and keeping kind of under lock and key. I mean, it’s phenomenal, but I hope that that becomes table stakes for services that are using it. We had a huge company come in that wanted, you know, travel information, and they’re leveraging a bunch of AI for itineraries and travel, stuff like that. Okay, cool. You know, I understand using a public model. I even brought up the idea of, do you want all your senior staff’s details and their trips and their names and addresses and all this information going out onto the Internet and into a model, so that when the robot overlords come over, they can know exactly where you live and the places that you like to go on vacation? So, yeah, I think that’s phenomenal and good stuff. Yeah, yeah, that’s great. And so hopefully that answers the questions. And obviously, if you’re a current customer, reach out to your CS representatives and they’ll help answer any other questions that you have around this. We’re an open book when it comes to this, because we want you to feel the most comfortable and actually be operating safely.
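The per-tenant isolation the speakers keep coming back to, where one customer's documents can never surface in another tenant's responses, can be sketched in miniature. This is an illustrative design only, not PlexTrac's actual implementation: the class name is invented, and the naive keyword-overlap scoring stands in for a real vector-similarity search.

```python
# Illustrative sketch of per-tenant retrieval isolation (hypothetical design,
# not PlexTrac's actual implementation). Each tenant gets its own document
# store; a query for one tenant can never touch another tenant's corpus.

class TenantIsolatedStore:
    """Maps a tenant ID to a private list of (doc_id, text) entries."""

    def __init__(self):
        self._stores = {}  # tenant_id -> list of (doc_id, text)

    def add_document(self, tenant_id: str, doc_id: str, text: str) -> None:
        self._stores.setdefault(tenant_id, []).append((doc_id, text))

    def retrieve(self, tenant_id: str, query: str, top_k: int = 3):
        # Naive keyword overlap stands in for real vector similarity search.
        corpus = self._stores.get(tenant_id, [])  # empty if tenant unknown
        q_terms = set(query.lower().split())
        scored = [
            (len(q_terms & set(text.lower().split())), doc_id, text)
            for doc_id, text in corpus
        ]
        scored.sort(reverse=True)
        return [(doc_id, text) for score, doc_id, text in scored[:top_k] if score > 0]


store = TenantIsolatedStore()
store.add_document("acme", "f-1", "SQL injection in login form")
store.add_document("globex", "f-9", "SQL injection in search endpoint")

# Acme's query only ever sees Acme's findings.
hits = store.retrieve("acme", "sql injection")
```

The point of the design is structural: because each corpus lives under its own tenant key, cross-tenant leakage would require an explicit bug in the lookup, not just a bad similarity match.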
I’m going to stop sharing here for a second and move over to a different screen, because it’s now time to show it off. We do have a video already out there highlighting it, but I want to be able to do it live for folks as well. I’m in a report and I’ve got a bunch of findings already existing in this report, and I’ve chosen this CVE just as a sample to highlight what it can do. So it is trained. It does have all the CVEs, and we’re updating the model consistently with new CVEs that come out. So if you were to just type in a CVE ID, it will come back with details around the CVE. So basically, think about a workflow of, hey, I found the CVE. I’m in the middle of another big exploit, so I want to keep hacking on it, but I want to at least make a note to come back and write this up. I’ve created this finding, I’ve added the CVE title as the title, and then I’ve just made a note to, hey, finish writing this up, and we’ll come back to it later. That’s what we’ve done. We’ve come back to it later. We’ll go ahead. Actually, I can just leave this there. I can say, hey, let’s use the AI engine to get this going. It quickly comes back with, hey, I know about this CVE. It’s a JetBrains auth bypass. Pretty serious. Now we can actually just insert this into the text field from here. We can go ahead and add our screenshots and tailor this to how we want it to look and feel as part of the description. If I wanted to also get some recommendations, we can use the AI engine as well for the recommendations of the finding. Now keep in mind, what this is using for its prompt is the title. If there is description data in the description, it will also use that for the recommendations. But if it’s just the title and you’re just trying to generate the description, it will base its response off of the title. So you can have a title of, hey, privilege escalation through TeamCity.
And you could use that as the title, use the AI, and it will provide a response accordingly. And again, as it starts to learn, it may affiliate that with the CVE or not. But what’s exciting is that it’ll provide a place for you to get going and get started. As you saw here in the recommendations, it did keep the formatting that came from the prompt. So this, again, is a way to get started with automatically writing up a finding itself. So I’ll pause there. Nick, any thoughts? Questions? You know what? No, it’s bananas. This is phenomenal, because one of the things that it enables you to start doing as well is keeping it in platform. Not only do you have the collaboration situation going on, but you have standardization, because everyone going and doing their own prompting and throwing things into the Internet, maybe exfiltrating your personal, corporate, or client data into some model, that’s a problem. But then also there’s consistency. There’s the idea of wanting to have a standard and being able to do it all in platform and on a model that is safe. I love it, but I’m biased because I’ve been using PlexTrac for, like… It’s okay to be biased on our own webinars. It’s because you feed me shirts like no other. That’s right, that’s right. Yeah. So really excited about this, and welcome feedback. Continue using it. The model will continue to train and get better over time as well. So this is just continuing to lay the foundation for everything that we’re doing moving forward. So it’s nice to be able to write up a finding itself, right? It speeds it up. But where there’s a lot of power is, I’ve now got to go write up the entire verbiage and narrative sections of my report, right? And so I really would like a leg up on being able to do that without having to go through each of the findings and come back and write up my own summary.
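The finding-level workflow in the demo, where the prompt is built from the finding title, plus any existing description when generating recommendations, might be assembled along these lines. This is a hypothetical sketch: `build_finding_prompt` and its field names are invented for illustration, not the actual Plex AI prompt format.

```python
# Sketch of how a finding write-up prompt might be assembled from the finding
# title (and the description, when one already exists), as described in the
# demo. The function and field names here are hypothetical illustrations.

def build_finding_prompt(title: str, description: str = "", *, target: str = "description") -> str:
    """Build a generation prompt for either the description or the
    recommendations of a single finding."""
    lines = [f"Finding title: {title}"]
    if description and target == "recommendations":
        # Per the demo: existing description text is included in the prompt
        # when generating recommendations.
        lines.append(f"Existing description: {description}")
    lines.append(f"Write the {target} section for this finding.")
    return "\n".join(lines)


# Title-only prompt, as in the "privilege escalation through TeamCity" example.
prompt = build_finding_prompt("Privilege escalation through TeamCity")

# Recommendations prompt that also carries an existing description.
rec_prompt = build_finding_prompt(
    "JetBrains TeamCity authentication bypass",
    description="Authentication bypass allows remote code execution.",
    target="recommendations",
)
```

In a real system, the returned string would be sent to the private generative model; the tester then reviews and edits the response before inserting it into the report.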
So what’s nice is, out of the gate, you can generate narrative sections based on the title. What it’s taking here is the title of the narrative section itself, as well as the finding metadata: so, for all the findings in the report, the severities of the findings, the titles of the findings. In some instances, it will also include the descriptions. We’ve put a threshold in; I forget the exact number now, I should know this, but if you’ve got, like, a million findings in your report, it’s just going to take the titles and the metadata, severities and things like that, to provide the executive summary, just from a performance perspective. Right. Which kind of makes sense. But if it’s a smaller report, it does include the descriptions as part of the prompt to the model, so that it really helps provide as detailed an executive summary as possible. Right? So what’s cool is, hey, we can use this and just plop it straight in. We can now come in and tweak it as much as we desire, add our own screenshots and everything like that. And if we don’t like it, we can always regenerate it and get a different response. The other nice thing is, we can have additional types of sections. So here I’ve said, hey, what are the top five issues? Right? So this is just going to go through and analyze, based on the findings themselves, the things that it’s recommending would be the top five things you should address. So then we can easily just place that in here, and again, it’s just super nice to have somewhere to start. Right? And these are genuine things that you would put into your report as recommendations, right? Like, hey, these are the five programmatic areas of risk that you should focus on.
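The threshold behavior described here, full descriptions for small reports, titles and severities only for large ones, could look something like this sketch. The cutoff value and prompt structure are assumptions for illustration; the real threshold and format aren't stated in the webinar.

```python
# Sketch of the findings-threshold behavior described above: for large
# reports, only the finding titles and severities go into the narrative
# prompt; smaller reports include the full descriptions as well. The
# threshold value and structure here are assumptions for illustration.

MAX_FINDINGS_WITH_DESCRIPTIONS = 25  # hypothetical cutoff

def build_narrative_prompt(section_title: str, findings: list) -> str:
    include_descriptions = len(findings) <= MAX_FINDINGS_WITH_DESCRIPTIONS
    lines = [f"Narrative section: {section_title}"]
    for f in findings:
        entry = f"- [{f['severity']}] {f['title']}"
        if include_descriptions and f.get("description"):
            entry += f": {f['description']}"
        lines.append(entry)
    lines.append(f"Write the '{section_title}' section based on these findings.")
    return "\n".join(lines)


small_report = [
    {"severity": "Critical", "title": "Auth bypass", "description": "Token check skipped."},
]
large_report = [
    {"severity": "Low", "title": f"Issue {i}", "description": "details"} for i in range(100)
]

small_prompt = build_narrative_prompt("Executive Summary", small_report)
large_prompt = build_narrative_prompt("Executive Summary", large_report)
```

The trade-off it illustrates is the one Dan names: richer context produces a more detailed summary, but very large reports would blow past prompt-size and latency budgets, so the metadata-only path keeps generation fast.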
And then finally, we can have a recommendations section for the report itself. So, based on the findings that exist in this report, generate me a list of recommendations. What should I be doing? Truly write this report for me. Right. And again, here’s some of that detail. I’ll go ahead and regenerate it; it should come back with a slightly different response. So if I didn’t like that one, I can regenerate it. And again, with the usage of this, obviously you want to check it for validity, right? It is training as we continue to grow the data sources that are being used in it. But you’re still responsible for the validity of the data. Right. So hopefully this will finish up here soon. But Nick, any commentary or thoughts? No, I mean, this is really showing the power of the possible and a lot of cool capabilities. And I think, like you said, this is the foundation. Your mind is awash with all the different things that you can do. And I’m sure, I mean, I see the questions are pouring in. We may not even be able to get to all of them. Who knows? Some thoughts on the ability for custom prompts, and the ability to start getting it to do conditional logic, are really exciting. And I know that that’s future stuff. When you’re dealing with report automation, the idea of taking what you’ve done and putting it into a report is one level of automation, but PlexTrac is now taking it to the next level of truly putting in some logic beyond just some for loops, which is pretty exciting. Yeah, exactly. Like we said, this is paving the way for the ability to do some of what the questions are asking here. So since we don’t have…
Let me stop sharing my screen and jump back over to the slide deck, one second, and then we’ll dive into some questions. One of the questions is, does it have a dark mode? The answer is yes. I was getting unreasonably frustrated that you were not in dark mode. Yeah, hang on, I’ve got to get back to the slide deck. Oops, nope, that’s not what I want to be sharing. Sorry, everybody; I’m supposed to be more agile in Zoom. Okay, so we’ve done the live demo. Let’s jump into some of the questions. I’ll leave this up; here’s where you can learn more about it. I think there was a question around pricing: reach out to the sales team and they will get you a quote. And there’s a limited-time offering, like a free trial, so if you want to get your hands on it, please reach out to the sales team. So I want to dive into some of these questions. One was around being able to control the prompts and get it into the flavor and verbiage of the way that you write reports. That is coming. As we get it out there and bring the ability to train on your data within your tenancy, keeping that isolated, that’s where it’ll start to learn how you write your reports and provide responses. What we do have under the hood today, as a bit of a sneak peek, is generic voices it can respond in. And I say generic in that it’s not trained on anybody else’s data beyond what we’ve put into it. So what we will eventually provide is the ability to say, hey, write this in the voice of a CISO, or write this in the voice of a security engineer, an analyst, an appsec person, so it can flavor the tone of the responses in those types of voices. That is under the hood today.
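[Editor's note: the "voices" feature described above amounts to persona-flavored prompting. Here is a minimal sketch of how such flavoring could be layered onto a prompt; the persona names and instructions are invented for illustration and are not PlexTrac's implementation.]

```python
# Sketch only: prepend a persona instruction so a language model answers
# in a given tone. The voice names and wording here are assumptions.
VOICES = {
    "ciso": "Write for an executive audience: business risk, minimal jargon.",
    "security_engineer": "Write for a technical audience: root causes and fixes.",
    "analyst": "Write for an operations audience: detection and triage steps.",
}


def apply_voice(base_prompt: str, voice: str) -> str:
    """Wrap a base prompt with a persona instruction for tone control."""
    try:
        persona = VOICES[voice]
    except KeyError:
        raise ValueError(f"unknown voice: {voice!r}")
    return f"{persona}\n\n{base_prompt}"


print(apply_voice("Summarize the top five findings.", "ciso"))
```

The same base prompt then yields executive-flavored or engineer-flavored output depending on the selected voice, which is the effect described in the demo.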
We haven’t exposed that for this first iteration as we get it out there, but those are the types of things it’s going to be able to provide. So hopefully that answers that question. Okay, so this is an important question: right now, this is a cloud-only offering. It’s only available on our cloud-hosted instances of PlexTrac; it’s not available today for on-prem instances. The model is hosted in Google Vertex, so we’re utilizing Google’s cloud infrastructure for hosting the model, and that’s part of the reason why this initial launch is a cloud-only feature. I’m not going to speak to whether or not we’re going to bring it to on-prem. We really wanted to set out and make sure we had a solid solution first, and then we’ll explore whether we actually bring this to on-prem. But I do encourage you, if you are on-prem and can move over to the cloud, that’s how you would have access to it today. That’s just an important side note. Let’s see. Okay, so there is a question around being able to leverage AI in the writeups database itself. That’s what I was mentioning before: as we get going, we are going to provide a capability where it can learn from your writeups and your previous reports, again isolated within your instance, in your tenancy, so that there’s no data bleed across it, and so that it truly does learn how to respond in your voice, in your field. Those are some of the things we’re going to be working on in the near future, and we’re excited about that. We do see the writeups as a prime asset: you’ve already got this cultivated list of writeups in your content database, and that is a prime thing to use to help shape the voice and how it provides the details back in the responses themselves.
Yeah, I just think about the future. It’s going to be so powerful if you think about it: we’ve spent years curating a really tight, solid WriteupsDB with tons of different repositories, and we can leverage them for all sorts of different assessments. The ability to take that and put some smarts to it, which normally would be my human consultants having to put their smarts behind it, means we can give it context. In the future state, you give it the context that this is an assessment of an app that’s accessible from the Internet. It takes all these keys of context and then leverages the information it learned from our own writeups and our own findings and our own tone to decide: this finding is this risk, and I’d usually use this writeup in the WriteupsDB, but you said it’s an app on the Internet, so I should use this verbiage from this type of writeups database language. Just being able to take some of those decisions off our plate; again, I understand that’s future state, but it builds on the foundation you’re establishing now, the foundation you’ve established for years, really. If you think about it, at a number of the consultancies I’ve worked for, the writeups are just a library doc, a giant Word document with a ton of verbiage that you’re manually going in and copying and pasting from. So PlexTrac already moved the needle by setting up WriteupsDB so you can leverage it, and now being able to leverage generative AI and different decision points is pretty cool. Yeah, exactly. And there are some questions around whether there’s the ability to adjust the prompt, like in your standard ChatGPT, where you can say, using this, write me this. That is also part of the future vision of things we’re going to be working on.
So being able to provide a little more control over what goes into the prompt, to provide any kind of details back, or even things around formatting, like, hey, give me a bulleted list of these top five issues: that’s all going to be possible. So we’re excited. Which is huge, because that’s stuff that right now you either write JavaScript for, or you’re doing in the browser, or you’re doing in post-processing. The more activity you can do in platform, the better. Yeah, exactly. So there’s a good question here around how you ensure the AI responses are accurate. We’ve been cultivating public and open-source information, and even open-source pentest reports that we’ve been able to grab from places like MITRE, and threat reports. That is what’s being used to train the accuracy of the model itself. What’s encouraging is that it’s not public; it’s not a ChatGPT-trained model. It’s something where we have control over how we’re helping train it. It is being trained on trusted sources in the industry, things like MITRE and CVEs (what am I missing? there’s one other one), that are truly helping cultivate the accuracy. But it is important, just like with any kind of AI response: you as the practitioner still need to vet it and polish it up the way you want. In terms of time savings, this saves you a bunch of time on what I like to call the initial canvas: it gives you the skeleton that you can now fill in, based on the context of the report, which is huge. So I think we’ve covered most of the key topics in the questions. I’ll open it up for any other thoughts people might want to throw out there.
Okay, well, Nick, any final thoughts from your end? Yeah, I think this is just powerful, because this is something we’re excited at Rotas to be able to leverage frontline. We leverage PlexTrac frontline and back of the house for generating reports, but we also give folks access to PlexTrac instances as a customer portal, so they can see their findings, track their retest results, and track information in that fashion. It’s interesting: the ecosystem is such that PlexTrac is now so ubiquitous that we’re going in and saying, well, our PlexTrac instance can deal with your PlexTrac instance; we’ll do their tests and export into their PlexTrac instance. And a lot of our assessments are moving into more continuous assessment and testing: not just a snapshot in time, but a constant cadence. I’m imagining there are a lot of folks asking for things where this type of activity will be able to be leveraged frontline, by folks who need the data, need it now, and don’t want to click a bunch of buttons. So we’re going to be able to leverage this platform to continue to show value for them, because then they can start focusing on the problems they need to fix. They can see trends; they can see analysis. And I’m excited about it because the platform’s already shown tons of value over the years; we all know that. I’ve both worked internally on a team, where this would have been great for metrics, availability, and purple teaming activity, and now take a more third-party consultative approach. I think it’s valuable every time you can take decisions on data, put guardrails around them, and put in starting points so that an analyst, or an analysis, can work against a set of standards.
And like we said before, having a starting point is huge. I’m really excited not only for this release coming up shortly, for the rest of the world to start being able to play with this, but for continuing to move the needle; I’m excited about the next best thing, too. Yeah, that’s pretty cool. And a couple of things came in that I do want to address. There’s one about whether this is more granular than ChatGPT, and I would say it should be. Obviously, we don’t control ChatGPT, and we don’t control all the responses the AI is generating, but based on the data it’s being trained on, it’s very specific to cybersecurity. That’s an important distinction first and foremost. And actually, I think that’s probably secondary to the fact that ChatGPT and these public models don’t necessarily let you control where your data is going, or whether that data is going to be used to train their own models that are then available to the rest of the world. It’s already been shown that you can trick the models into exposing data. So what’s important is that we are a private model by design, and you can have confidence that any potentially sensitive information used by the model is not going to be exposed to the public. So I would just not encourage you to put pentest data into ChatGPT, period. That would be my recommendation: bad move. And just as a reminder: my world is the pentest practitioner organization, a third-party consultancy, but I don’t want folks to think or get skewed that this is really only the pentester’s best friend.
We partner with organizations that have their own internal security team, vulnerability management team, pentest team, or just security analysts and security engineers at an org, who take their vulnerability information and put it into PlexTrac for tracking, for remediation, for doling it out to the teams. In every facet, whether you’re providing services like an MSP or a consultative pentesting organization, or you’re an internal team, think about the value this is going to add. Think about the use cases you already have with your team and how PlexTrac is already keyed up to solve them, but can also solve them while leveraging AI. So I just wanted to make sure it’s clear this isn’t just from my perspective as a pentester; I’m having conversations with folks at Fortune 500s and Fortune 50s who are PlexTrac customers and are excited about the idea of leveraging this. Yeah. So it’s not just for a pentesting company or a consulting firm. Obviously, we can bring in vulnerability data already, so vulnerability management teams can make use of this. For now, it’s really helping write that report. If you need to generate a report for your stakeholders, or a weekly check-in report, you can bring the data in, ask it to provide an executive summary or the top issues based on this vulnerability scan, and then use that as the launching point for whatever you’re reporting. But also keep in mind that this is the foundation of where we’re headed: being able to ask more questions of your data and get more out of it as you grow your security program and want to show improvement over time in your security posture.
Staying focused on the right things and making sure you’re making progress: those are two of the big questions we try to help answer for you. Thanks, everyone, for joining. We’re really excited about this launch and really excited to get it in your hands, get the feedback, and see how much time you’re going to start saving on your reporting through Plex AI. So if you want a demo, reach out to us through your sales or CS representative, or if you’re new to PlexTrac, hop on and request a demo. Happy to show it off to you, happy to get it in your hands for testing, and we’d love for you to join the family if you’re not already. We appreciate you. Nick, thanks so much for your time. Wish you the best this week, and we will be in touch soon, everybody. Thanks again. Cheers, all.