All right, ladies and gentlemen, thank you so much for joining us on this lovely Wednesday afternoon, or potentially morning, depending on where you’re based. Today, we have a wonderful webinar between two leading cybersecurity companies, Edgescan and PlexTrac. So what I’m going to do here for the next couple of minutes is just let everybody funnel in. I know these live events sometimes take a minute to garner a crowd, but what I’ll give you is the gist of what we’re going to go over today. For anyone who’s here, we’re going to talk a little bit about the sourcing of data, the collection, the validation, and the integration of that data, streamlining that vulnerability reporting process. We’re going to go over measuring and monitoring of cybersecurity metrics, identify some strengths and weaknesses of defenses, and cover some key components of an effective vulnerability management process.
And then we’re going to go into a little demo led by our friend David Kennefick, and I’m sure Nick Popovich will have quite a bit to say there as well. We are familiar with one another; we are working together on a couple of clients, and it’s been a very positive relationship so far. I very much enjoy talking to the PlexTrac people, and as you can tell by my lovely hat up here, I did have the pleasure of interacting with them at RSA last week. So from a moderator perspective, I have my PlexTrac hat, I have my Edgescan sticker over on this side, and my Edgescan shirt, trying to play to the middle.
But we’ll move on now that we’ve given everybody a couple of minutes to funnel in. So I’m going to start by sharing my screen here, and I’ll wait for the lovely thumbs up from my counterparts to make sure that I’m sharing correctly. Beautiful. Thank you, Nick. All right, so as I stated before, we’re here to talk about sourcing, aggregating, reporting, and remediating, and about achieving that complete security visibility with Edgescan and PlexTrac. Yes, I can read. But next, we’ll move on to our lovely co-hosts today.
If I can move on, perfect. So I myself am the strategic relationships and channel lead here at Edgescan. I do enjoy talking with partners, and I do love building relationships, which is why I’m so happy to be blessed with this webinar today. But I will allow my counterparts to introduce themselves, starting with the lovely Nick Popovich. Hey, thanks. Yeah. My name is Nick Popovich.
I am the hacker in residence at PlexTrac. And so that doesn’t just mean I got to choose my job title because I’m friends with the CEO.
My background is penetration testing and red teaming. I come to PlexTrac as someone who’s passionate about making sure security issues are resolved, and the way they get resolved is when the details of assessment and testing activity are articulated clearly. Former practitioner, I’ve lived and bled on the battlefields of security assessment and testing, so to speak. And so my role at PlexTrac is to provide the hacker’s perspective and make sure that we’re involved, that we’re innovating, that our product as it is solves the problems, and that future security issues can also be communicated clearly. So that’s me in a nutshell.
And I’ll move it back to David. Cool. Hi, everyone. My name is David Kennefick. I’m the engineering principal here at Edgescan. I started off life in anti-money laundering, and I realized I didn’t get enough of a kick out of it. So I moved into more of a pen testing, break-things role, and that just developed over years and years. I’ve been with Edgescan, I think, I’m in my 10th year now, and I do less of the pen testing nowadays than I used to.
But a lot of what my involvement is, is making sure our customers are successful. We’re the technical fit for a lot of the organizations that work with us. And probably most importantly, I have the luxury of being involved with a lot of the operations and penetration testing teams, finding out how they do things all day, every day, finding out how they’re innovating, but then also the development team and the R&D teams as well. And it’s one of the most interesting things, because everybody focuses on scanning, scanning, scanning, whereas what we’re finding nowadays is the world is shifting a little bit more towards crawling and ASM and all these types of cool technologies as well. So it’s interesting to see how the space is changing. Yeah, that’s a little bit of what I do. Well, thank you, David. And in terms of that synopsis, I can tell you that I rope David into pretty much everything that I possibly can here at Edgescan, which I’m sure he very much appreciates.
And from what I’ve gathered from Nick, he’s in a very similar role, where he is relied on by many, many people over at PlexTrac. So it’s a pleasure to be joined by you two gentlemen today. I love having my picture up next to yours; it makes me feel very accomplished. So, moving on from this slide, we’re going to go to our agenda for the day, if my spacebar will work. So today we’re going to talk about achieving complete security visibility. And like I went over, if you were here for the first 30 seconds of this discussion, we’re going to talk about the sourcing of data, how to streamline that data for timely vulnerability reporting, measuring and monitoring cybersecurity metrics, identifying weaknesses and strengthening defenses, and key components of an effective vulnerability reporting process.
And then the super fun technical demo with our exclamation point at the end, tech demo. All right, guys, let’s get excited. So with that, we’re going to move on to our first talking point, which is the sourcing of data. And for this slide, I’m going to lean on Nick Popovich to get us started off. Yeah, I appreciate that, Brian. So when it comes to security assessment and testing, where you’re getting your information and data matters quite a bit. And we’ve got a lot of disparate sources.
So understanding that there’s a lot of different methodologies that practitioners employ. Some are pure-play manual testing with manually created tooling, and leverage automation very little. Some have a baseline where they leverage automation, and we heard earlier, scan, scan, scan, where associating an inventory and a starting point comes from different tooling and telemetry systems. And so where you gather the data and where it’s sourced dictates your operational flow. You have to have a way to curate that information, source it, and then be able to deal with it. Because if you are overloaded with data, and I know back in my testing days you could gather all the data you wanted, if you don’t have a way to provide cohesion and oversight, the point of any kind of assessment and testing activity is lost, because it’s not just spinning wheels and trying your tooling and being a cool practitioner. It’s about the organizations under your purview at that moment. Whether you’re a provider in a consultative fashion or you’re an internal team, the whole point of your activity is to be able to raise the security posture. And so sourcing that data means being able to pull it in from multiple sources at times, because it’s also important to be able to check the validity of information, and I might lean on David shortly to talk about validation of that information. But that is the idea: in our space, PlexTrac, for example, is supposed to be that single pane of glass where you can take all of that vulnerability information, all that data that’s been generated, manually by your experts or by expertly tuned and executed tooling, and pull it all together, versus having piles of data, I guess you could say steaming piles of data, littering the landscape. We’re supposed to take all that in, and as practitioners, we have to find a way to make sense of it.
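As a rough sketch of that single-pane-of-glass idea, here is what pulling findings from several sources into one normalized, de-duplicated list might look like. All field names and sample data here are hypothetical, illustrative only, and not PlexTrac's actual schema:

```python
from dataclasses import dataclass

# Hypothetical normalized finding record; real scanners and manual
# reports each use their own field names.
@dataclass(frozen=True)
class Finding:
    source: str      # e.g. "scanner", "pentest", "bug-bounty"
    asset: str
    title: str
    severity: str

def aggregate(*feeds):
    """Merge findings from multiple feeds, dropping exact duplicates."""
    seen, merged = set(), []
    for feed in feeds:
        for f in feed:
            key = (f.asset, f.title)  # same issue reported twice -> one record
            if key not in seen:
                seen.add(key)
                merged.append(f)
    return merged

scanner = [Finding("scanner", "app.example.com", "Reflected XSS", "high")]
pentest = [Finding("pentest", "app.example.com", "Reflected XSS", "high"),
           Finding("pentest", "api.example.com", "Broken auth", "critical")]
print(len(aggregate(scanner, pentest)))  # duplicates collapse -> 2
```

The interesting design choice is the dedup key: too coarse and you lose real findings, too fine and the same flaw shows up once per source.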
And so that’s kind of speaking to sourcing it, where you pull the data in, whether it’s from automated tooling, manual activities, or other kinds of force multipliers or systems within your ecosystem. I’m interested to hear David’s thoughts on that validation and other points along the way. Yeah, and actually I started touching on that a little bit in my intro. Verification of vulnerability data has to happen somewhere. This is the thing. It has to happen somewhere. It either happens on the vendor side or it happens on the customer side.
And that’s really what it boils down to. And what we found is that when we start producing clean data, all of a sudden you can start sending data to a million different places that were never an option. So around 2015, and I’ll talk about the PlexTrac integration as well, because I think it’s super important that people understand that story and the history behind it, the first integration that we ever built was with Jira. And the reason for that is because we could send stuff directly to development teams, because the data was verified already. And it opened up a whole new world of options for us, because we could now speak to developers instead of just speaking to security teams or consultancy teams. And it just meant people were getting better data, and they had to spend less effort maintaining the data, maintaining the feeds. And it’s a really important aspect of what we do.
But one of the things I touched on was the innovation side of it. People always talk about scanning, scanning, scanning, and scanning is great, scanning is super important. But the reality is that scanning is probably the easiest part of everything that we’re doing, and taking the data afterwards is one of the most difficult parts. And there’s even the part that has to happen before the scanning, the discovery phase. So in your traditional applications, what we found is that scanning is really easy, but actually finding out what to scan was really difficult. And then you have modern technologies like APIs and single-page applications and all of these interesting problems. That’s where all the innovation was happening. And so with all of that innovation happening on the crawling side of things, it meant that the scanning could improve.
It can improve, but when you feed better crawling information into your scanning technology, and then you verify the data afterwards, your data just becomes A1. And that was what we found over the last couple of years in particular, as we started spending more time and effort on the actual innovation side of things, on the crawling side of things: implementing new crawling engines, new scanning engines, support for APIs, support for single-page applications, vulnerabilities in the DOM, all of these types of things. That’s where we found that a lot of the newer, more interesting vulnerabilities that were previously being missed were getting picked up. So that’s a super important part of it. I don’t know what your opinions are on that, Nick, but that’s something that I found was quite inspiring and something that really resonates with people.
You’re absolutely right. That’s huge. And having used some nascent vulnerability checking and scanning tools early on, I’m thinking 2005, 2006, there was just no, I don’t know how to say it, smarts behind the crawling. I remember at one point getting called in. I was working for the US government at the time. I got called on the carpet and dressed down because some of my scans had gone awry, crawled off of the app that I was on, followed some links, and went to the NSA’s homepage. Like nsa.gov, nothing special, but here I am, a little DoD organization, scanning the NSA. And it was because there wasn’t that logic and enhanced capability in the crawling.
And when it comes to AppSec stuff, crawling is king. And the ability to, I don’t want to say educatedly guess, that doesn’t strike right, but from my perspective, your reports and your information are only as good as the data you put into them. So garbage in, garbage out. If you’re getting garbage, then you’re going to report on garbage. And I really like that sentiment, David, that you said the data is going to be validated either by the person doing the work or the person receiving the report, and it makes sense for it to be the former. So yeah, that’s really neat. And also what’s neat in my new role at PlexTrac is that now I’m not just working at a consultancy and seeing their processes, or working for an organization and seeing theirs.
I get to see tooling and telemetry across hundreds and hundreds of organizations and their workflows. And our partnership with Edgescan has really showcased the necessity for organizations to really care about data in and data out. And it’s really neat that you guys are definitely moving the needle on that data integrity concept, which is pretty cool to see. Yeah, that’s cool. And actually, just talking about the integration side of it, it’s really rare, and it’s always quite exciting, when loads of your customers turn around and say, we want you guys to interact with PlexTrac and start building an integration with them. Talking about validation, there’s no better validation than when all of your customers are turning around and saying, oh, it would really be great if you guys were a little more in sync.
And we’re like, well, we’ve never talked to them before, and then all of a sudden there’s loads of people meeting and we’re like, oh my God, we’re saying the same thing but from the opposite end of the circle. So no, it is quite exciting. It’s very good. Cool. Do we want to jump on to the next section here, Brian? Yeah, before that, David, I just want to add to exactly what you said. Even myself, being present at RSA, we had multiple people come up and speak to us who were not only looking for what we do, but looking for what PlexTrac does. So I’ve got meetings set up in the pipeline that are not just for us, but for us with PlexTrac, and I’ve personally never experienced that with another integration we’ve had at Edgescan.
So as low-level as my understanding is, I can understand that last point. So, taking a quick break so that we can clip that last part, we’re now going to move on to something that is a little bit more vulnerability-related and timely. For any of you in the audience who are familiar with the acronym MTTR, this next slide is going to be very appealing to you. So with that being said, moving on to streamlining vulnerability reporting for timely action. Because everybody wants to be fast and everybody wants to do it well. And so, Nick, tell us how we can do that. Yeah, I mean, obviously, based on the hat that I’m wearing, that’s kind of where we have drawn our line in the sand.
I actually knew Dan at the genesis of his journey. The founder of PlexTrac, Dan DeCloss, was an AppSec tester, and he wore a lot of different hats: CISO, director of security, a lot of different roles. He got to see the problems he was trying to solve at a technical level as a practitioner, then at a board level or an executive level. And he saw there were a lot of solutions that folks were tiptoeing around. He said, we need some cohesion. And so that was the vision behind PlexTrac: helping security teams solve the right problems. And so the idea is to make the technical practitioner’s reporting activity simpler, more repeatable, and more cohesive, while at the same time allowing the generation of metrics and information that go beyond just a snapshot in time. Earlier, David, I believe, was talking about the evolution of the point-in-time assessments; there’s still value there, there’s still a need and an ask.
But moving to attack surface management and continuous assessment and testing and continuous validation, containerization, cloud, the landscape is constantly changing. So how do you then take a reporting methodology from the days of yesteryear, where it was just IPs and hostnames and you had a very codified methodology? Whereas now the necessity is to have a single pane of glass and the ability to adjust your methodologies to the needs of varying technology stacks. In one breath, I’ve shifted my view. I kick myself looking back ten years, I even said these words, and I’m going to eat them right now on camera: I said, I mean, it doesn’t matter if it’s cloud or not, it’s an IP address, just test it like we always test it.
That was on the ask of some peers who are like, we need a cloud native pen testing methodology. We need to be able to focus on that. And I just didn’t want to do the work.
My mindset has drastically shifted to understand the nuances of the technology stacks and the ecosystems that are under assessment and testing. Those nuances need to be taken into account, and they need to be taken into account in your reporting paradigm. And so that reporting paradigm has to be something that’s malleable, that allows you to not only have a repeatable process, but has enough malleability within it to adjust and fit the different nuances of the tech stacks that you are engaging with. And so I’m going to lean on David to speak to this. We talked about sourcing the data, we’ve talked about having an ecosystem view and being able to take in information, and then having a documented, repeatable process. But now we have all of this data and we’re working with it. I’d like to get David’s perspective on that concept: now you have it, what do you do with it? Yeah, there’s two pieces to that, because it’s really interesting.
One place where our thoughts very much align is that in Edgescan, we’ve always kept the historical data. We didn’t have a notion of: you perform a scan, and then that data goes as soon as you perform a new scan. It seems insane. And now we’re seeing that the industry also agrees that it’s insane. You need this data, because in order to show progress, you need to be able to show what bad looked like, or what good looked like, in the past. And there’s two metrics that people tend to focus on, and both have their merits in some shape or form. The first one would be MTTD, mean time to detection: how long does it take to find something? So if a vulnerability gets introduced on a piece of tech, does it take you two weeks, one week, one day, one hour to detect this type of vulnerability? So that’s one approach, and it needs a balance here.
So there’s no right or wrong approach here; it needs a healthy balance of both. So MTTD is really useful. Then MTTR: how fast are you fixing it? How can you measure success? The best way is understanding how fast you’re fixing vulnerabilities today, and then measuring that number slowly coming down over a period of time. So those are the two ways I look at the reporting of vulnerability data. There’s all of the normal style of reporting and all of the things that you require it for. But the data tells us that the faster you fix vulnerabilities, the more secure you are.
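To make the two metrics concrete, here is a toy calculation of MTTD and MTTR over a couple of hypothetical finding records. The field names and dates are invented for illustration, not any product's schema:

```python
from datetime import date

# Hypothetical finding records: when the flaw appeared, when it was
# detected, and when it was remediated (None = still open).
findings = [
    {"introduced": date(2024, 1, 1), "detected": date(2024, 1, 3),
     "closed": date(2024, 1, 10)},
    {"introduced": date(2024, 2, 1), "detected": date(2024, 2, 2),
     "closed": date(2024, 2, 16)},
]

def mean_days(pairs):
    """Average number of days between each (start, end) date pair."""
    deltas = [(end - start).days for start, end in pairs]
    return sum(deltas) / len(deltas)

# MTTD: introduced -> detected; MTTR: detected -> closed (closed only).
mttd = mean_days((f["introduced"], f["detected"]) for f in findings)
mttr = mean_days((f["detected"], f["closed"]) for f in findings
                 if f["closed"] is not None)
print(mttd, mttr)  # 1.5 10.5
```

Tracking these two averages over successive scan windows, rather than raw vulnerability counts, is what lets you show the trend coming down.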
The faster you find vulnerabilities, the more quickly you can fix them. So those are the things that we tell people to focus on: find vulnerabilities quickly and fix them quickly. Yeah, I don’t know, I’m sure you’ve opened a can of worms there, but the core tenets of what you just spoke to really drive home the importance of it. I think those metrics have demonstrated the evolution. I tend to tell folks, again, 10, 15 years ago.
It was all about counting vulns. It was the number of vulns, it was categories of flaws with instances of vulnerabilities, unique instances. And folks started getting hyper-focused on numbers, and they wanted to see those numbers go up and down. And it wasn’t long before technologists and security leaders, and even just technology leaders, started beating the drum and saying, this is a poor metric, but it just seemed like it took a really long while for folks to stop counting vulns. And the example I give folks, just for the sake of a quick demonstration of why counting vulns is flawed, is if you have 100 flaws and 50 of them get fixed, but then there’s a patch cycle and you now have 70 new flaws, there’s a disparity in the numbers: while you see numbers going down, there’s also the introduction of new vulnerabilities.
And there was this brief tenet of uniqueness of flaws, and there’s some validity there about uniqueness of flaws, but it just got people in the wrong headspace. And I think the metrics folks were focused on were efficiency in the numbers and making sure the numbers were going up and down, and it was just a flawed metric. So when you look at mean time to remediation and mean time to detection, your summation was so accurate: you find them quickly, you fix them quickly, your exposure is limited. That’s risk management 101, maybe 102 once you move past what a risk is. But it’s really part of the metrics. And that’s why within PlexTrac, one of our primary metrics that you don’t even have to set up special filters for is that mean time to close, or mean time to remediation, reporting. Because when it comes down to it, when you articulate the necessity of identifying and quickly remediating flaws, it doesn’t take a degree in rocket surgery to understand: okay, yeah, the quicker we can identify flaws, the better, but the next step is closing them.
So I think there was this evolutionary period where folks were like, yeah, let’s find them really quickly. Bug bounty was a burgeoning thing in, whatever it was, 2010, 2012, and really started to gain some traction. And so you have bug bounty programs, and now, going back to sourcing data, you had your bug bounties, you had your external pen tests, you had your internal teams, you had tooling, and so you’re discovering all of this. But if you’re not fixing it quickly, and you’re also just focusing on a snapshot in time, looking at something once a year, it’s just a little bit bananas. Yeah, spot on. And I’m personally glad that, I believe, for the most part, most organizations, even of varying maturity levels, recognize that just counting numbers of flaws really doesn’t paint a picture at all.
It’s not a bean counting exercise. It used to be, but it isn’t anymore. The data is telling us that some vulnerabilities are more important than others nowadays. We’ve got things like context, we’ve got things like the CISA KEV list, we’ve got EPSS scoring, we’ve got even CVSS v4, things like that. We have better data nowadays to marry with the vulnerability data, so we know that certain data points are more dangerous. Something you talked about just briefly was really important, because one thing we notice is that there’s a lot of vendors out there where, if they find a cross-site scripting flaw on an application, it’s one vulnerability.
If they find cross-site scripting on the same endpoint in a slightly different location, it’s a second vulnerability.
And then you keep going, and you keep going, and all of a sudden you have wordpress.com/author/insert-name-here is vulnerable to cross-site scripting, and you’ve got 100,000 authors, and that’s 100,000 cross-site scripting vulnerabilities. Whereas in the real world, that’s not really how it works, because there’s one fix across the board. And what we started doing is we started merging those vulnerabilities together, because we were unhappy that people were going, wait a minute, this was like 100 vulnerabilities before, but now it’s only one. It’s like, yes, because it’s only one fix. And it was kind of a red herring, telling people that they had 1,000 vulnerabilities, whereas in reality they had one fix to implement that would fix a thousand vulnerabilities. And it’s just about being more efficient about your vulnerability management program overall.
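The merging David describes, collapsing many instances that share one root-cause fix into a single finding, might be sketched like this. The sample data, field names, and fix text are all made up for illustration:

```python
from collections import defaultdict

# Hypothetical raw scanner output: one row per vulnerable URL, even
# though a single template fix removes them all.
raw = [
    {"cwe": "CWE-79", "url": "wordpress.example/author/alice",
     "fix": "encode author name in template"},
    {"cwe": "CWE-79", "url": "wordpress.example/author/bob",
     "fix": "encode author name in template"},
    {"cwe": "CWE-79", "url": "wordpress.example/author/carol",
     "fix": "encode author name in template"},
]

# Group by (weakness class, remediation): each group is one fix to apply.
groups = defaultdict(list)
for finding in raw:
    groups[(finding["cwe"], finding["fix"])].append(finding["url"])

for (cwe, fix), urls in groups.items():
    print(f"{cwe}: 1 fix ('{fix}') covering {len(urls)} instances")
```

Three raw findings collapse into one actionable item, which matches the point: the remediation workload is the count of fixes, not the count of vulnerable URLs.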
We have the data; this is the problem nowadays, we have too much data, and it’s about interpreting it. And that’s always part of the problem. I think that feeds us into our next slide here. Yeah, I was going to say, you guys are doing my job for me here, because you beautifully led us into that next slide. So we do a quick little pause here, smiles all around, see if the space bar will work. And speaking of bean counting, gentlemen, how are we going to count those beans, and how do we quantify those beans? And I’ll start with you on this one, David, since Nick has been so gracious starting us off with every slide so far. Yeah, so measuring and monitoring cybersecurity metrics: KRIs, OKRs, KPIs, all of these things that teams are laden down with in some shape or form, and then ultimately they get less and less accurate, or non-contextual, I suppose, the higher up the food chain you get, and it boils down to: did we get hacked or not? And so that’s how it gets boiled down as you move up through an organization. But everybody has a different risk appetite as well, which is really important. Nick, you said something really important there a few minutes ago about maturity. If an organization doesn’t recognize that they need a program of some shape or form, because they’ve got 100 APIs all over the globe, they don’t have a plan to deprecate these APIs in the future, they’re still doing annual pen testing, they’re all public-facing, and they host sensitive data, then this is important information, and we need to educate these people: you need to have a program in place for these technologies, and you need to measure your success against these technologies staying secure. And there’s things like MTTR and MTTD and all of these types of metrics, how fast are we fixing vulnerabilities; these are all relevant here.
But there’s also, a couple of years ago, we had a particular customer who kept acquiring other companies in the pharmaceutical space. So they kept buying all these other people. And what they were doing is they were measuring them over the course of three to six months, looking at all these metrics associated with the types of vulnerabilities that we were finding across their estates, and they were putting in corrective actions based on that. And it was one of the most interesting programs I ever saw. Because instead of spending a million euro on firewalls for this team here, they ended up doing things like spending a million euro on education for development teams.
Because they found out that 90% of the raw vulnerabilities that were getting presented in their technology were true development error. And being able to infer this type of thing, you can’t do that from MTTR, you cannot do that from MTTD. You need to look at the types of vulnerabilities. So be it CIS benchmarking, NIST-style benchmarking, looking at the CWEs and the CVEs associated with them as well, though less so on the application and API side, looking at the other types of information that get presented. Because seeing that, say, CWE-79 is your most common CWE across the board, and it’s prevalent on 90% of your technology, tells you that 10% of your technology is probably not introducing these types of vulnerabilities, but a lot of your organization is.
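Inferring that kind of systemic signal from the data could look something like this sketch, which measures what share of an estate each CWE touches. The assets and findings here are invented for illustration:

```python
from collections import defaultdict

# Hypothetical (asset, cwe) pairs pulled from a vulnerability feed.
findings = [
    ("app1", "CWE-79"), ("app2", "CWE-79"), ("app3", "CWE-79"),
    ("app1", "CWE-89"), ("app4", "CWE-200"),
]
all_assets = {"app1", "app2", "app3", "app4", "app5"}

# Which assets does each weakness class appear on?
assets_by_cwe = defaultdict(set)
for asset, cwe in findings:
    assets_by_cwe[cwe].add(asset)

# Share of the estate affected by each CWE: a high share for one class
# suggests a systemic cause (e.g. a training gap), not a one-off bug.
for cwe, assets in sorted(assets_by_cwe.items(),
                          key=lambda kv: -len(kv[1])):
    pct = 100 * len(assets) / len(all_assets)
    print(f"{cwe}: {pct:.0f}% of assets")
```

A report like "CWE-79 touches 60% of assets" points at a corrective action (developer training on output encoding) that no MTTD or MTTR number could surface on its own.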
So they could probably do with some training or cross-training in some shape or form. So it’s just about inferring this type of information from that data, and then monitoring it as well. Monitoring it is really understanding your level of maturity and how long you need to wait before you review it again. And if you are a laggard in some shape or form in this space, in whatever vertical it is you’re measuring yourself against, you need to acknowledge that. And sometimes organizations won’t do that. But more importantly, sometimes vendors won’t tell them: hey, look, this is not how people normally do it. We normally see people doing it this way.
We probably wouldn’t recommend this approach because it opens you up to certain exposures. And that’s kind of the accountability and the ownership of certain problems on, say, a customer-facing side and on a vendor side. So it’s just about having a frank and honest conversation with your customers every so often. So, I don’t know, I’m speaking a little bit too much here. Nick, what do you think on this? No, I think that’s absolutely right. It’s kind of like, I don’t know if you’ve seen at the mall, they have those Magic Eye pictures.
I don’t know why every webinar I bring up the Magic Eye. It’s those 3D, weird, swirly-looking pictures where if you look at it and then you cross your eyes and pull it back, you see the big picture. That’s how it is when you’re looking through your security metrics. Having a platform that allows you to aggregate all that information, massage the data, and see those patterns is absolutely a huge piece of the security posture enhancement puzzle, and of being able to identify those gaps. It’s funny, too, to tie it into what you were mentioning earlier about how there’s 100,000 flaws but it’s actually one fix. If you were taking that and performing analysis and using the right kind of pattern identification, organizations should be able to see that themselves and say things like, oh, okay, we have 100,000 instances of this, they’re all using the same library; if we fix it in the one library, we’ve fixed all these instances. As a sidebar, and I’m not anti-bug-bounty at all.
I love bug bounty programs. But I think it’s interesting, some of the different patterns that have come up as you introduce new incentives into the industry.
Bug bounty hunters are incentivized to show you instances of a flaw and get paid. It’s better to get paid $25 100,000 times than it is to get paid $200 for one fix. So that’s why, as you enlist partners, I think it’s important to have different, at times compartmentalized, pieces of your security paradigm. You have your bug bounty program, that’s a check. You have your pen testers, you have your pen test team. You have your red team.
You have all the different teams coming in. But to David’s whole point, having a whole team that’s doing BI or doing some intelligence, and then having a solution or a system that allows you to curate that data and get the intelligence on the vulnerability sectors, is important. Yeah, I think we move on. Identifying weaknesses, move us on, Brian. Thanks, Chance.
So, moving on, just like you said, to identifying weaknesses and strengthening defenses. Give it just one second for my space bar to kick in. Boom. There we go. And Nick, we can go back to starting with you for this one. Yeah, I don’t actually have too much on this topic other than the next five minutes. No, I’m just kidding.
Again, it ties into so much. This is just the evolution as you’re putting the pieces of the puzzle together. The table is your ecosystem; the vulnerability information that you’ve curated is the pieces of the puzzle. And then, as David so eloquently mentioned a moment ago, looking for those patterns, looking at the metrics, and gauging the appropriateness of where am I spending my money, where am I spending my time, what am I supposed to be doing? I think identifying weaknesses, to a certain audience, might be very technical in nature and low-level: I’m identifying a configuration error, a flaw, a missing patch, or a version in this one system, and that’s identifying a very specific weakness, like a CVE. Then you move it up a level to identifying systemic weaknesses in processes and programs. I think you can’t have one without the other.
You can’t identify the systemic weaknesses and the overarching flaws in processes and methodologies and in implementations if you’re not playing whack-a-mole, so to speak, and finding the small weaknesses or the individual instances of weaknesses. But as you abstract and go up a level, then again you’re able to identify those. And so when you look at a technology like Edgescan that has the capabilities (I’m not going to speak too much to it because I don’t work there) to crawl, collect, process and do so much, you’re able to find very low-level instances of issues and weaknesses and then start to abstract it up. And then further, as you add that information into a system that allows you to collect and curate from a lot of different data points, it’s furthering your ability to identify those weaknesses. And then obviously, once you see a problem, you can start to mitigate and strengthen your defenses. So, David, what are your thoughts on the weaknesses and whatnot? Yeah, it’s interesting, because one person’s weakness can be another person’s strength. And I’m sure there’s a couple of old adages in there.
February 2014, I was sitting in a war room with a very large company, and the CIO basically kicked the door down. It was the day Heartbleed broke, and he said, are we vulnerable to this? And the head of infrastructure just turned around and said, no, we’re not. We haven’t updated our systems in three years. It’s been working, it’s been effective, we’re not impacted by this bug.
Which was super interesting to see, because it’s like, okay, well, they analyzed their SSL implementation every year and they said, okay, look, we don’t need to update. This is giving us everything we need. It’s secure, it’s safe, it doesn’t have any known bugs in it. But they were three years behind. And it wasn’t as if there were no major vulnerabilities; I’m sure there were a handful of them, but none as bad as Heartbleed. So it was super interesting. People always try to stay on the latest safe version, right? But if you have the time and the budget and the resources to run that latest safe version in a production environment, or a pre-production environment, QA, whatever, for a couple of weeks, and perform an analysis on it and make sure that it’s not going to impact anything in the longer term, then you should probably do that.
So that was kind of what that story told me. But it was interesting, because I’ve seen a couple of other scenarios, one in particular when API testing started to get really popular, right? So this was around the time of PSD2 in Europe, which is the Open Banking initiative. All of a sudden, every single financial institution had to be able to interact with a bank via a uniform API. This is in Europe, and I think the US has adopted many similar formats and styles. So all of a sudden, banks had to provide APIs where they didn’t want to in the past, and now they have to. And they were performing security testing on these APIs, and they were getting back clean bills of health.
And what was happening is they were just running application scanning engines against their APIs, and they were just getting back, say, 200s, 300-level redirects, or 404 you’re-on-the-wrong-page errors, responses that just didn’t HTTP correctly. And we kind of figured out that that was the cardinal sin: if you are treating APIs like normal web applications, you are going to miss all of the good vulnerabilities. So when people talk about strengthening defenses, you need to take a look at how you’re testing your technology, and you need to make sure that the level of testing you’re providing is appropriate. And what happens at scale, then, is you have 50,000 applications across the whole breadth of your organization, right? And 10,000 of them require pen testing, whereas 40,000 of them are just brochureware marketing sites. They don’t require a detailed level of manual testing, just an appropriate level of testing. So it’s super interesting: when you talk about strengthening defenses, taking a look at what you’re doing from a 10,000-foot view, instead of being stuck in the trenches all the time, can really have such a positive impact on making you more secure and then identifying weaknesses.
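To make that "clean bill of health" point concrete, here is a minimal sketch, entirely illustrative and not Edgescan logic, of why status codes alone can hide the fact that a scanner never really exercised an API. The function and threshold are assumptions for illustration.

```python
def scan_looks_meaningful(responses):
    """responses: list of (status_code, body_bytes) tuples from a scan."""
    uninformative = sum(
        1 for status, body_len in responses
        if status in (301, 302, 404) or (status == 200 and body_len == 0)
    )
    # If nearly every response was a redirect, a 404, or an empty 200,
    # the scanner probed API routes as if they were web pages and the
    # "clean" result tells you nothing about the endpoints themselves.
    return uninformative / len(responses) < 0.9

# A web-style scan of an API: every request 404s, so "no findings"
# reflects the test method, not the security of the API.
web_style_scan = [(404, 0)] * 20
print(scan_looks_meaningful(web_style_scan))  # False: the scan was hollow
```

The point mirrors David's: before trusting a zero-findings report, check whether the responses show the documented endpoints were actually reached.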
You might find that the posture you had a year ago was significantly safer than the posture you have with the latest safe version today. So it’s really about the risk appetite on your side and making sure that you are properly maintaining your approaches. Thank you, David. So, just for time constraints, we’re going to move on to our last slide here, then. I don’t mean to cut anybody off, but talking about that last point that you made, David, taking that 10,000-foot-high view leads us really nicely into our last slide, which is the key components of an effective vulnerability reporting process. And because, David, I’m sure that you could use a drink of water.
We’ll start off with Nick on this one. Yeah, that phrase can mean a couple of different things when folks hear vulnerability reporting. Really, let’s talk about it in its purest form: you have a vulnerability and you need to tell somebody about it. It doesn’t matter if it’s the external security@company.com email about external vulnerabilities, again with your bug bounty or your disclosure programs, or it’s the results of your penetration testing or security assessment and testing activity. It’s about understanding the tenets that need to be part of, and associated with, the vulnerability reporting process. Just like with successful pre-scoping of an engagement, you need to have a defined reporting process, so that vulnerabilities get an impact analysis and a risk analysis performed on them. It’s not just handing over a steaming pile of report and saying, here’s an orange, red, yellow and blue chart, figure it out yourself.
Board or executive or security team, the idea is being able to contextualize. Again, David had mentioned how we now have a lot of different frameworks, and the ability for internal organizations to do that impact assessment. And then there’s the ability to have a system in place that allows you to associate criticalities of systems, and the data custodians and data owners of information, so that when a finding shows up from security assessment and testing, it may come out of a tool saying high, or maybe the person assessing it says it’s high, but is it really a high based on the business rules, the impact of the information, the data stored on those systems, those types of things? So when you’re creating the vulnerability reporting process, know that it’s not just the bean counting and the numbers; context matters, and you need the process to ensure that the right custodians of that information and the data owners are being told.
Because if they’re not being shown a roadmap of the flaws and the necessity to fix them, then there’s no reason to set any kind of fix-it threshold, and your mean time to remediation goes to pot. So ensure that you establish what’s affected, why it’s affected, how it’s affected, all the five W’s, as part of that process, regardless of whether it’s an internal process or an external one, or it’s from your assessment teams, those types of things. So I’d like to get your thoughts on that, and we might then move on to some of the bits-and-bytes demo. Yeah, definitely. On the key components: every single team is going to have a slightly different key component. You’ll have a red team that wants to know how to exploit it, and the location.
You’ll have a blue team that wants to know how to patch it in the location. You’ll have an accountancy team that wants to know how much it’s going to cost to remediate all these vulnerabilities. Every single team will have a slightly different data point that’s the most important to them, and making sure that that data is correct, and getting it to the right people, is really what we spend so much time and energy on. One team may have CVSS scoring as their gospel, but another team may look at EPSS scoring, and another team may only care about a vulnerability if it’s on the CISA KEV list. So everybody’s slightly different when it comes to the key components, but really, we want to make sure that whatever data point is most important to them is getting to them as quickly as possible.
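As a rough sketch of what David describes, the same finding can be routed to each team by that team's preferred data point. The team names, thresholds, and field names below are invented for illustration; only the idea of per-team routing comes from the discussion.

```python
def teams_to_notify(finding, kev_ids):
    """Route one finding by each team's preferred metric (illustrative rules)."""
    recipients = []
    if finding["cvss"] >= 7.0:        # a team that treats CVSS as gospel
        recipients.append("infra")
    if finding["epss"] >= 0.5:        # a team that prioritizes on EPSS
        recipients.append("appsec")
    if finding["cve"] in kev_ids:     # a team that only acts on the KEV list
        recipients.append("incident-response")
    return recipients

# Illustrative scores for Heartbleed; all three rules fire.
finding = {"cve": "CVE-2014-0160", "cvss": 7.5, "epss": 0.97}
print(teams_to_notify(finding, {"CVE-2014-0160"}))
```

The design point is that the routing rules live in one place, so "whatever data point is most important to them" reaches each team without re-triaging the finding three times.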
And it has to be A1. We kind of talked about that a little bit, but really, the data has to be A1: it has to be verified, it has to be good data that has evidence to back it up, and ideally it can be confirmed by multiple sources as well. When we look at this type of data, that’s the best-case scenario. So if you can get that type of data directly to the teams that are responsible for it, then you are making great strides towards having a very effective vulnerability reporting process.
Perfect. All right, guys. Well, thank you so much for touching on all those slides. And my next lovely slide just says Tech Demo on it. So what I’m going to do is stop sharing and hand it over to David Kenneth, who will be running our technical demonstration of Edgescan. If anybody in the audience does have any questions, I know we are running tight on time. What I don’t want is for David to miss crucial information in the demo, so maybe we could go over a little bit.
I’d rather that we cover the things we want to cover than to rush through it, but I’ll let David handle his end. He’s much more comfortable with a technical demonstration than I am. But with that being said, I’m going to stop sharing my screen and let David Kenneth take over. Cool. So I’m just going to spend about kind of six to eight minutes talking about the vulnerabilities with a visual aid of a demonstration platform. So that’s really it. It’s not going to be too deep in the weeds.
I’m not going to go into the finer points of everything that we’re bringing to the table. Can everybody see the demo Edgescan here? Yeah. Lovely. Cool. I’m not going to dig too much into things like assets and metrics and dashboards and things like that, either. I want to focus on the vulnerabilities themselves. So, yeah, look, all the metrics, I’m not going to worry too much about those.
I want to talk about the vulnerabilities and the actual data themselves. Just one or two things on here. So to create an asset, you just add it in here on the right, and you click initiate scanning, and your scanning gets started. You can schedule your pen testing; all of that’s in here as well. What we found is that when we started presenting this information to people, it’s great information and it’s really useful information, but then we started presenting new types of information to them.
So, APIs. I talked about the cardinal sin a few minutes ago, and we talked about API documentation, or we haven’t actually, but API documentation is one of the most important components that you require for good testing. It means a human, or a non-human, whatever it is, can actually understand how to interact with certain endpoints and what comes back. So all that information is in here. You can have PCI scanning, you can have ASM scanning in here as well. You can have web application scanning, authenticated, unauthenticated, external, internal; it’s really whatever your flavor of vulnerability scanning is. So that’s a really important component, but really we want to talk about the data: the vulnerability data and how we create and produce this type of vulnerability data. Vulnerability data, especially on the application side, may not have a CVE, may not have much information about it, because maybe the team who created this piece of technology have only ever created it once, and this vulnerability has never been seen anywhere ever before.
And that’s often the case. Some people call those zero days, but they’re not; it’s just application security, and that’s really the way it works nowadays. So, looking at simple vulnerabilities, like this type of authorization issue: we were able to interact with a piece of technology we didn’t have the permission, or I suppose the privilege, to access. So you have things like the description of the vulnerability, the remediation information associated with the vulnerability, and then the actual evidence itself, and this is what it looks like. And we’ll notice vulnerabilities like this don’t have any CWEs, they don’t have any CVEs, because they’re somewhat unknown; they’re really bespoke to this particular application. But they do have CIS controls, and they’re not on the CISA KEV list either.
But if we look at, I think I used the example of Heartbleed earlier, so let’s have a quick look. Everything in Edgescan is built as list pages, so I’m not going to dig too much into each of these. But if we look at a vulnerability like Heartbleed, we can see the data. It’s really about distilling this data: this is the description, this is the remediation. You can do this or you can do this; they’re your two approaches.
And then the evidence that gets presented back: we have data that gets presented back in an unencrypted format. The day that it broke was a super interesting day, because we did have a particular person that needed to see evidence of it. And one of the teams pulled out a holiday request form from an email server and said, is this the holiday you requested earlier? And it was one of the easiest verifications that ever happened. It was so interesting to see these types of vulnerabilities. But look at the data that’s included: we’ve got a CVE, we’ve got CWEs, and yes, it’s on the CISA KEV list. All of these vulnerabilities are in here and all of the data is attached and associated with them. The important point is that not all vulnerabilities are created equally, and not all vulnerabilities require all of this data to tell you something is vulnerable, or any more vulnerable than other vulnerabilities.
So all the vulnerabilities in Edgescan have some shape or format to them. One thing that’s actually really important, and it’s the last thing I’ll show you on the vulnerabilities page: let’s have a look at one of our nice cross-site scripts. We’ve got lots of them in here. So we have a normal cross-site script, one that has, say, a script alert payload or something like that. How do we know that this vulnerability is a real vulnerability? Well, we got it to pop in the browser. Okay, well, how do we know that it’s not disappearing after a week or two? The evidence here, in a typical application-style vulnerability, is that we’ve got a request and we’ve got a response. So we send this request to this endpoint every Thursday, and nothing changes.
We get back the exact same response; the only thing that changes is a timestamp. Then one rainy Thursday, a few months down the line, all of a sudden something changes. We know we need to go look at it again, because something has changed and we need to reverify it. So that’s just a really important way of understanding how these types of vulnerabilities get verified. And I’m not going to talk too much more on the platform itself. We’ve got Attack Surface Management.
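The weekly reverification idea can be sketched roughly like this: strip volatile fields such as timestamps from the replayed response, then fingerprint what remains, so that only a meaningful change triggers a human re-look. This is an illustrative approach, not Edgescan's actual implementation.

```python
import hashlib
import re

# Volatile field to ignore when comparing responses week over week
# (a simple date-time pattern; real responses would need more rules).
TIMESTAMP = re.compile(rb"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}")

def evidence_fingerprint(response_bytes):
    """Hash the response with timestamps masked out."""
    stable = TIMESTAMP.sub(b"<ts>", response_bytes)
    return hashlib.sha256(stable).hexdigest()

last_week = b"HTTP/1.1 200 OK\nDate: 2024-05-02 10:00:01\n<script>alert(1)</script>"
this_week = b"HTTP/1.1 200 OK\nDate: 2024-05-09 10:00:03\n<script>alert(1)</script>"
patched   = b"HTTP/1.1 200 OK\nDate: 2024-05-16 10:00:02\n&lt;script&gt;alert(1)&lt;/script&gt;"

# Same fingerprint: only the timestamp moved, the finding still stands.
print(evidence_fingerprint(last_week) == evidence_fingerprint(this_week))
# Different fingerprint: the body changed, so the finding needs reverification.
print(evidence_fingerprint(this_week) == evidence_fingerprint(patched))
```

The design choice is to diff a normalized form rather than the raw bytes, which is what lets "nothing changes except the timestamp" stay quiet for months and the one rainy Thursday stand out.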
We’ve actually got a new module coming called External Attack Surface Management, which will be married together with this. It’s more of a proactive Attack Surface Management technology. We’re not going to go scan the internet every day and then get you to pay us for a portion of access. It’s more proactive: you know where you want to start, and we’re going to show you all the stuff that we can find based on it. So it’s super interesting, OSINT-style EASM.
That went into alpha last month and it’s going into beta this month. I know a few people have seen it and they’re already very excited, so it’s really cool. And then everything in Edgescan is RBAC controlled, SSO controlled; I’m not too worried about any of that. One thing that I do want to talk about is events. The very last thing I’ll talk about is reporting and things like that as well, but I don’t want to spend too long on that. But the events themselves.
Edgescan is not a SOAR program, it’s not a SIEM piece of technology, anything along those lines. But what we do have is a pretty hefty capability to create events, and design events based on scenarios that happen in the platform. So let’s say I have a vulnerability opened on any of my assets, or any of my assets with certain tags, whatever it is, and I want to send an email or I want to send a webhook or something along those lines; I want to get that information to where it needs to go. I can do that really quickly. So the important stuff can get sent to an SMS bat phone for a red team.
A webhook can get sent to an operations blue team so that something can get actioned really quickly, and an email can get sent to somebody telling them they need to look in PlexTrac, because they’ve got lots of new vulnerabilities that were pulled in from Edgescan. So there are lots of ways to get this information out of the platform. And it’s super important, because we’re doing our customers a disservice if we’re not giving them a way to look at this information in whatever pane of glass they want to look at at any given time. But look, that’s a little bit of Edgescan 101 and a little bit of Edgescan 102. Yeah, I’ve barely touched on any of the actual scanning and the crawling side of things, or any of the API side of things, but there’s a lot in it, and I’m just glad we got to touch on it at least a little bit today.
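A toy version of that event-rules idea looks something like the following. The rule and event shapes, tags, and channel names are invented for illustration and are not the Edgescan schema; the point is just matching platform events against user-defined rules and fanning out to the configured channels.

```python
def fire_event(event, rules):
    """Return the delivery channels for every rule this event matches."""
    actions = []
    for rule in rules:
        # A rule matches on the event type and on an asset tag.
        if event["type"] == rule["on"] and rule["tag"] in event["asset_tags"]:
            actions.extend(rule["send"])
    return actions

rules = [
    {"on": "vulnerability_opened", "tag": "external",
     "send": ["webhook:blue-team", "email:soc"]},
    {"on": "vulnerability_opened", "tag": "pci",
     "send": ["sms:red-team"]},
]

event = {"type": "vulnerability_opened", "asset_tags": ["external", "pci"]}
print(fire_event(event, rules))
# One event fans out to a webhook, an email, and an SMS.
```

Keeping rules as data rather than code is what makes "I can do that really quickly" true: adding a new destination is a new rule, not a new integration.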
I’ve been getting to play with some of this myself. It’s been neat. I love the events. We’ll be able to create a rules engine and there’s so much depth in here, even my playing around hasn’t hit all of it. I appreciate that. I’m actually going to spend even less time. I just want to show you how.
Now that you’ve got all this fantastic Edgescan data, I want to show you real quick the integration that we worked on, getting it into PlexTrac, because PlexTrac becomes that single pane of glass. You have your pen test teams putting stuff in, you have different data sources from different tooling; maybe you have an infrastructure team, an AppSec team, an API team. You have all sorts of different needs to get tool output in, and not just tool output: the manual findings, the stuff that the individuals find. I want to show you how that looks, and I’m going to do it way fast, two minutes or less. So here we go. In PlexTrac, I’ve already created a client, which is that workspace, and similar SSO.
We have role based access control. I like to consider clients a workspace. And so in here I’ve got a report and it’s an empty report. I haven’t added anything into it. And so I want to showcase pulling in the assets and the findings from Edge scan and what that looks like. So when we add in findings. We have a number of different ways we can do that.
Some of the, I’ll call it, traditional ways would be taking output files from different tooling and adding those into the platform. But this Edgescan one is an API integration. So if I go to integrations, I can choose the integration that I want to pull from, and then we can start to create selection criteria to pull in the findings from Edgescan. Maybe we want to filter on asset name or IDs, we want specific issue names, or maybe different dates.
We can create this selection criteria, and after we’ve determined it, we can connect into Edgescan and pull that data in. We could select it all, or we could come in and individually select only those findings that are interesting to us, and then pull that information in. We could tag them. We are going to pull over all of the different data that Edgescan has expertly crawled; it has done the heavy lifting.
And if we wanted to continue to sanitize, not sanitize, that’s the wrong word, if we wanted to add any additional tags to either the assets or the findings, we could at this point. But with the click of a button, or an API call, we can pull all those findings into PlexTrac. So this is pretty neat. And now we could also be adding in other tooling: maybe we were using Burp, maybe we’re using this, maybe we’re using that, maybe we want to come in and add findings manually. So now, not only do we have this oversight view, with the data coming in in a way that’s actionable across teams and across all the different tooling technology and that kind of thing, but it’s a pretty clean means to get the data from Edgescan into PlexTrac.
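The pull-by-selection-criteria flow from the demo can be sketched with an invented in-memory data set. The real integration is an authenticated API call from PlexTrac to Edgescan; the field names and filter parameters below are hypothetical stand-ins for whatever the integration actually exposes.

```python
def select_findings(findings, asset_name=None, min_severity=None):
    """Apply optional selection criteria before importing findings."""
    out = findings
    if asset_name is not None:
        out = [f for f in out if f["asset"] == asset_name]
    if min_severity is not None:
        out = [f for f in out if f["severity"] >= min_severity]
    return out

# Stand-in for the upstream scanner's findings feed.
upstream = [
    {"id": 1, "asset": "api.example.com", "severity": 4, "name": "XSS"},
    {"id": 2, "asset": "www.example.com", "severity": 2, "name": "Banner"},
]

report = []  # stand-in for the empty report in the demo
report.extend(select_findings(upstream, min_severity=3))  # pull only what matters
print([f["id"] for f in report])
```

Filtering at pull time, rather than importing everything and pruning later, is what keeps the report actionable instead of becoming another raw scanner dump.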
There’s the same cross-site script that I showed, and we totally planned it. Everything went off without a hitch. That’s it for my technical demo. I just wanted to showcase clicking a couple of buttons, and it worked like a champ. Man, this has been super exciting. If we have questions, we can go back to it. I’d love it.
So you guys could either drop questions in the chat, or you can find our cell phones on the internet and text us right now. No, don’t do that. Please don’t do that. Find me on LinkedIn. I’ll give you David’s cell phone number, whatever you want. I have his home address if you want it.
We did have one question, which I believe was answered, Nick, but it is: is it possible to track MTTD and/or MTTR with PlexTrac? Yeah, leveraging PlexTrac and some tagging and analytics, yes, definitely. And then we have about three minutes left, so I do have a couple of other questions. Some of them are pretty broad, so I’ll try to get to some more specific ones.
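For the MTTR half of that question, the calculation itself is simple once findings carry open and close dates. This is a generic sketch of the metric, not PlexTrac's analytics; the dictionary shape is assumed for illustration.

```python
from datetime import date

def mttr_days(findings):
    """Mean time to remediation, in days, over findings that have been closed."""
    durations = [
        (f["closed"] - f["opened"]).days
        for f in findings
        if f.get("closed") is not None
    ]
    return sum(durations) / len(durations) if durations else None

findings = [
    {"opened": date(2024, 1, 1), "closed": date(2024, 1, 11)},   # 10 days
    {"opened": date(2024, 1, 5), "closed": date(2024, 1, 25)},   # 20 days
    {"opened": date(2024, 2, 1)},                                # still open: excluded
]
print(mttr_days(findings))  # 15.0
```

The only hard part in practice is the data hygiene Nick alludes to: the tagging that reliably records when a finding was opened and when it was actually remediated.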
Here we go. What trends are you seeing as far as common vulnerabilities that seem to show up regardless of industry or organizational size? So, sticking with the broad questions. Go ahead, Nick, I’ll let you take the first crack at that one. I don’t know, I think David may actually be in a better position. I could probably have answered this a little better from a certain perspective, like, three years ago. I might be able to pepper in some thoughts, but I think David has a little bit more of a pulse on the current trends. Injection-based vulnerabilities. Injection-based vulnerabilities are not going away, and people are trying to get rid of them by putting everything into a user’s DOM.
But that just creates injection-based vulnerabilities in a user’s DOM; it’s just on an XHR instead of a traditional JSON or HTML query. It’s just sitting in the user’s DOM. But injection-based vulnerabilities are not going away, they’re still there. They lead to everything from Log4j to SQL injection; all these types of vulnerabilities start with an injection-based vulnerability somewhere. That’s a great point you bring up. You find a cross-site, you find a reflection flaw in, you know, some JSON, and how that can be leveraged is sometimes lost on traditional scanners, or maybe on less skilled practitioners.
But the reality of being able to take a reflection flaw, perhaps, and leverage it through the DOM is something to think about. A reflection, say for cross-site scripting, as a good example, in a JSON file is usually not a problem, because your browser protects you: it doesn’t interpret it as HTML, so it doesn’t render whatever is trying to attack you in some shape or form. So you’re usually fine. But then people started to realize, with the likes of Log4j, that all of a sudden some information got rendered and executed, and that led to a call-out to a beacon somewhere. And that’s how people started figuring out this vulnerability. And just because it doesn’t cause a problem now doesn’t mean I can’t force-change the content type, or I can’t get a particular vector to actually execute, if I keep batting at this every day over the course of a year, or I get an AI to do it, whatever; it doesn’t matter.
Eventually we’ll wear it down and we’ll find the hole or we’ll find the combination that makes a particular exploit work because we only have to be right once in this space. And that’s the thing. Whereas the defenders, they have to be right all the time. All the time. Tough job. It’s a tough job. Yeah.
I think we’re at time. Yes, sir, that is it for us. So first and foremost, I want to thank everybody for attending this webinar today. Our lovely marketing people will have this available to you as soon as possible. And don’t worry, I’ll blast it out on LinkedIn, so we’ll get it to you as many ways as we possibly can. More information on that to come.
And so with that, I’m going to thank both of our presenters today. You guys made it really easy for me to do my job, and that’s always a lovely thing when it comes to moderating these webinars. So thank you guys very much, and hopefully we’ll be seeing each other again soon.
Thank you both. Cheers. Thanks, Ben. Bye, everybody. Bye. Cheers.