

Driving Value from Your Security Services: Collaborate, Remediate, Accelerate

This video discusses next-generation testing styles that give organizations a more comprehensive overview of their environment. Using modern threat detection bypass techniques, Digital Silence’s Heliotrope testing style blends the best of red and blue teaming to deliver more meaningful insights. Combined with PlexTrac, it gives organizations a complete picture of their risks and an actionable understanding of how to protect themselves.

Series: On-Demand Webinars & Highlights

Category: Runbooks, Thought Leadership



Let’s get started. We’ve got a good group that has joined us. Welcome, everybody, to today’s webinar. We’re excited to have you here. Our webinar is Driving Value from Your Security Services: Collaborate, Remediate, and Accelerate, with Digital Silence, who we’re very excited to have here today, and PlexTrac. So I’ll kick things off.

I’d like to introduce you to our panelists for today’s session. We’re very fortunate to be joined by JT Galletto. He is the Chief Security Officer at Digital Silence. In his role, he collaborates with leaders on the IT and business sides to help everyone understand the threat landscape, align security priorities with business goals, and maintain an eye on ever-increasing regulatory compliance. His 25-year career has spanned a wide array of cybersecurity roles, with extensive experience in the financial services industry. He’s an MPA-certified TPN site security assessor, a CISSP, a certified forensics examiner, a contributor to the CDSA App and Cloud Framework, and a credit union board director. And he’s earned a DEF CON Black Badge. So congrats, JT, and we are very excited to have you here.

Yeah. Next we have Victor Tysler. Victor leads the offensive security team at Digital Silence. He’s a longtime tinkerer, hacker, and reverse engineer turned director, and Victor leverages his extensive and ongoing experience as a tester to direct a team of world-class offensive security experts for Digital Silence. In his spare time, he continues to hack on personal projects for fun, maintaining and improving his skill set. Victor, we’re glad to have you here. Thanks, guys.

And finally we’ve got Nick Popovich. Nick is PlexTrac’s own hacker in residence. His passion is learning and exploring technology ecosystems and trying to find ways to utilize systems in unexpected ways. His career has focused on adversarial threat simulation, offensive and defensive security, and advanced technical security assessments. Nick’s mission is to help individuals and organizations involved with defensive security operations to have an opportunity to observe the mechanics and methods of the attackers they’re defending against, and to assist in realistically testing those defenses. He’s a lifelong learner and loves finding new ways to get under the hood of systems and networks. Plus, he’s pretty fun to have around too, and always has some good lines to share with us.

So, JT, Victor, and Nick, we’re very excited to have you. And for everyone that’s joined today, thank you so much for taking some time out of your day. We’ll get things kicked off. We really do encourage you to ask questions throughout. You can use the Q&A, and I think the chat is enabled as well. So feel free to put them in either spot and we’ll keep an eye out. We do have some time set aside at the end, but if there are some great questions coming in, JT, Nick, and Victor will take those along the way, even if they’re bad questions too.

They don’t even have to be good questions. Exactly. So with that, yeah, we’ve got a great session planned, so let’s dive in, guys. I’m going to turn it over to you.

Thank you.

Very cool. You want to go to the next slide and we’ll talk about some of... there we all are. Yeah.


Yeah. Go for it, JT. I was going to say, so really what we wanted to have a conversation around is driving value from security services. I think it’s a very pertinent topic. I’m sure you guys have seen it as well in a very constrained marketplace. Right. You’ve got a lot of different variables hitting organizations today between economic challenges and compliance and regulatory challenges.

And one of the things that I think is really topical is efficiencies and cost. Right. Are you getting what you’re paying for? I think from an agenda perspective, there’s quite a bit here for us to dig into and talk about driving value through both our relationships with PlexTrac and how we bring those efficiencies, but also what we’re seeing from our client perspective.

Yeah, I’m excited to talk about it. I think one of the neat vantage points that I selfishly get to have being at PlexTrac, having been a practitioner and a red teamer, is getting to have that relationship and see the experts execute that voodoo that they do, and then we can become a force multiplier to help showcase and drive value in those types of things. And I’m super excited for you to explain heliotropic. Heliotropic, yeah, that was a new word; that was exciting for me to learn. So we’ve got some really interesting stats here that I think really carry through that initial message.

Right. And one of the things that I think is a really pertinent topic, as we talk about some of the things that Victor and his team do from an offensive security perspective, is that the bad guys are still winning, right? I mean, you look at the FBI’s Internet Crime Report stats for 2022: it’s staggering, over $10 billion in losses, up from almost $7 billion. And just two years ago, during the pandemic, we were at a little over $4 billion in losses, and even that seemed extremely staggering. The bad guys continue to figure out ways to defeat our defenses, and it’s a big challenge. And I think that’s also why we saw American Banker come back and say, hey, out of our interviews, a very large percentage of folks are saying their cybersecurity spend is going to go up. And in the financial services space, you’ve got a myriad of regulatory agencies coming out and saying, hey, your boards now have fiduciary responsibility for assessing risk.

And it’s not just things like interest rate risk, like we saw with certain financial institutions just a couple weeks ago, but also cybersecurity. Right. So now even at the board level, you have to have someone that understands cybersecurity. It has to come from the audit/risk committee out to the full board, with reporting on stats, and so it’s becoming a much more pointed conversation. It’s coming out of the shadows of technical IT areas, right? And so it’s really interesting, at least from my perspective, when we talk to clients, how much more interest there is in getting a comprehensive look at what their threat profile looks like and getting value out of a pen testing engagement.

And so that’s really where the impetus for the Heliotrope really came from. And we’ll definitely talk more about that here in a minute.

Yeah, that makes sense. These are sobering stats, I think, too. It’s interesting when you talk about the shifts of responsibility that people are seeing when we look at liability insurance, and folks are trying to figure out how to drive change, and it ends up being, well, you can make folks focus on their budgets and money and how it affects those, or on accountability.

I don’t know how realistic it is because I’m not up on the laws and whatnot, but folks are talking about boards being on the hook for jail time for certain privacy violations, or for showing a lack of due diligence when it comes to cybersecurity and cybersecurity insurance. And it’s really interesting. I’d like to hear your opinions: what do you think can resonate the most with organizations as a whole to cause them to have that aha moment of why they should invest their time, effort, money, and resources in cybersecurity? Is it highlighting accountability? Regulatory pressure? Is it budgets, is it money, is it customers’ good faith? I’d be interested to know what you see in your purview. Yeah, I’m going to pick on Victor a little bit because I think there are two competing items. Item one, which I think is pretty obvious on the surface, is that there’s a compliance and regulatory push that’s forcing these business leaders to really come along for the ride, right, as the government kind of encourages us to be more diligent. But we do have a number of clients that are now taking a deeper interest.

Victor and his team just recently worked on a project where, and I’m pulling a Dragnet here, the facts and names have been changed to protect the innocent. Right. But the client had a long-standing legacy application that really hadn’t had any formal due diligence done on the platform. Victor, just at a high level, if you want to share how that project transformed from being a very technical project into there being a broader interest within the organization.

Yeah, it’s a very recent engagement that we worked on. To describe the scope: it was roughly 4 million lines of code, written quite a while back, and it was riddled with the sorts of vulnerabilities and attack classes you would see around that time. One of the difficulties was that it’s a massive application, and a very complex application, and a lot of the standard tools that people would use weren’t finding anything. And so we came up with a methodology that was highly effective. We looked at it from a dynamic analysis perspective. We did your standard approach: let’s look at it, let’s use Burp, let’s throw inputs at input vectors, let’s see if we can trigger or find any vulnerabilities that way. And we had limited success there, but we also learned a lot about the application and about how it was handling inputs.

Then we also performed a static analysis. We took the source code, we put it through some static analyzers, and we also looked at the build pipelines. We realized that the company we were doing this work for had made a large investment in relatively expensive static analyzers that are really good. However, we learned that there were issues in how the build pipeline was configured, and they weren’t getting any value out of the tools that they were using. So we kind of shifted focus: let’s tackle this first, let’s get this working for them.

That worked really well. And then we also performed a code review, and that was also highly effective. But we also decompiled the compiled binaries and looked at it from the perspective of what actually makes it into production, cutting through the complexities of the build pipeline and everything there, just looking at what ultimately was running. And by looking at it from these different perspectives, we were able to identify programming patterns that were vulnerable, and we were able to provide recommendations on how to remove the most vulnerabilities with the least amount of code changes. And this garnered interest from the higher levels at the company, and it started a discussion around secure software development lifecycles and checking commits for security, really high-level stuff as well as really low-level stuff.
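To make that last idea concrete: the simplest form of the pattern review Victor describes is a mechanical scan for risky coding idioms across a source tree (whether original source or decompiled output). This is a hypothetical sketch, not Digital Silence’s actual tooling; the pattern catalog, file extension, and labels are invented for illustration.

```python
import re
from pathlib import Path

# Hypothetical vulnerable-pattern catalog: each label maps to a regex that
# flags a risky coding idiom. These patterns are illustrative only, not the
# ones identified on the real engagement.
PATTERNS = {
    "sql-string-concat": re.compile(r'(executeQuery|execute)\s*\(\s*".*"\s*\+'),
    "hardcoded-secret": re.compile(r'(password|secret|api_key)\s*=\s*"[^"]+"', re.I),
}

def scan_tree(root: str):
    """Walk a source tree and report (file, line_no, label) for each hit."""
    findings = []
    for path in Path(root).rglob("*.java"):
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, rx in PATTERNS.items():
                if rx.search(line):
                    findings.append((str(path), line_no, label))
    return findings
```

The value of grouping hits by pattern rather than by file is exactly what the story highlights: one recommendation per recurring pattern removes the most vulnerabilities with the fewest code changes.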

Wow, that is one of my new favorite stories, because how cool is that? That is a demonstration of how it’s supposed to work. That organization had obviously invested in security tooling, so they probably have security staff, and that staff has a lot of different responsibilities, and they had, as one does, tooling that maybe wasn’t configured as well as it could be. They didn’t know what they didn’t know at the time. They’ve got other duties as assigned, and so they’re doing their job, and they’re doing their job well. But then a third party comes in and not only shows value in the penetration test, in that project-based, snapshot-in-time assessment of an application, showing that deep technical knowledge in dynamic and static analysis. At the same time, I’ve also never heard of a company coming in and helping out with the client’s own static tooling and build pipeline. But look at the value that left behind: raising the security posture for that organization, not just from the results of the test.

If they had just gotten the test, that would have been great. But then there’s the continuing value that they’ve been able to derive by now having the ability to scan their own code, so to speak. And that is a perfect picture of my favorite thing. Some people frame it as internal teams versus third parties, or consultative versus providers, and it’s not a pick-one; it’s about finding the mix of that recipe that’s going to raise your security posture the best. So I love that story. And you also hit on one of my favorite things: I am not anti-tooling, tooling is fantastic, and we have to have the best tools, but you have to make sure that you understand how they work and have the training and the experience, because the experts behind the tooling are always going to drive that value. That’s a really cool story, and it showcases a lot of different neat facets.

Yeah. And that’s back to your initial question. That’s really where it was like, well, yeah, the compliance and regulatory environment obviously pushes a lot of that. But when you’re able to demonstrate that value, the leadership of that organization really starts taking notice, like, wait a second, not only did we have this point-in-time test, but jeez, we’re getting a lot more value from this collaboration. Right. And it really did accelerate their program overall.

And kudos to that attitude, because if their attitude had been absolute bare-minimum, compliance-is-making-us-do-this, then when you said, hey, we noticed that your tooling isn’t giving you the most value, they’d be like, we didn’t hire you to tell us about new things to fix. Give us the pen test that’s due tomorrow and stamp that sucker. But no, they had the right attitude; they appreciated the idea of let’s see how we can make this better. That’s phenomenal.

Let’s look at our next slide a little bit.

And one of the things that we run into all the time, and you just hit the nail on the head there, is that "hey, just get us our pen test and let’s put a stamp on it" attitude. We’re seeing a lot of folks, and I never name-drop any of our competitors, but here’s what we found in the marketplace: I even dislike the term pen test anymore, because it’s been so watered down by a lot of folks that are doing more of an automated vulnerability scan versus taking some time. Yes, tools have their place, but it’s really about looking at, hey, I’ve identified these potential vulnerabilities, but I actually need to manually exploit this. Actually, another pointed piece there that I’ll have Victor talk on: we were working another engagement where the team identified vulnerabilities. There were public exploits. The public exploits wouldn’t work.

Right. So the tooling, regardless of what tools you’re using to automate that, would have said, oh yeah, I found the vulnerability, but it must not be vulnerable because the exploit doesn’t work. And that’s really where having a team like Victor and the folks on his team matters. Comment a little bit about what you guys did to get around the broken exploit, again keeping it high level, in Dragnet terms. I’m going to try to not name names or shame anybody.

We were performing an internal network assessment, and we came across physical access controls and physical security systems; specifically, what we’re talking about here is security cameras. There’s a public exploit for a known vulnerability that was implemented by a third party. We tried to use it; it didn’t work. And we thought to ourselves, why isn’t it working? So we ported the exploit to a new language.

We read the original author’s white paper on the vulnerability and were able to find a few mistakes in the publicly available exploit. We also had to work around some egress filters, some network controls that were preventing the exploit from working, on top of regular bugs in the exploit. And we were able to fix it up and make it work in the environment that we were in. And that granted us access to additional network segments. One thing I thought was really cool is we were able to see, through the cameras, the other computers that we were hacking as part of the engagement. It’s always nice to be able to see it that way. That is awesome. That is awesome.

JT, you bring up a really good point, and it’s one that’s hotly contested at times, and I think some people feel somewhat attacked at their business model and stuff. And it’s not about attacking; it’s about the maturity level of folks at times. I think the one thing is folks being transparent in their service offerings, right? And you’re right, it is tough, because the term pen test is just like the term cloud 15 years ago: everybody had their own answer to what cloud is. Once terms become ubiquitous, they start to hopefully take on a de facto meaning. But you’re right about the concept of a pen test: one person’s pen test is another person’s vuln scan. One person’s pen test that’s manual, your standard pen test that requires manual activity, that’s highly customized with advanced exploitation, maybe that’s somebody else’s red team or somebody’s advanced secret-sauce assessment, whatever the case may be. So I think you bring up such good points, because a certain organization’s security posture may be so nascent, or they may have never been looked at before, that if you guys came in and did a full gloves-off, full-on pen test, it would be like taking a sandblaster to a soup cracker.

And so maybe they need to start with a vuln scan. Right. But you call it what it is.

You define it. And I think there’s always value, immense value, in having expert practitioners who can go beyond the tooling output. You can’t just trust tooling. Tooling is a tool and it’s an enabler, but being able to take it beyond that and show what threat actors can do matters. I remember when I was on a gig one time in a casino and was able to see one of my compatriots through the security system cameras. Getting camera access to be able to take that screenshot and jam it in a report is always butter.

As I said, I 100% agree with you. I think organizations that aren’t doing anything at all are the most at risk, right? So you kind of progress through that channel: let’s at least start looking at our vulnerabilities, let’s see if we’re missing patches, let’s get on a good routine there. Because quite frequently I’m asked what are some of the things that we could do to remediate our risk, and typically that’s coming from management at the board level, and I’m still talking about patching. Let’s be honest.

We’ve been talking about patching since, like, 1996, right.

It’s still a very big part of your security repertoire and your toolkit. And so, yeah, I think having tools that help you get better at that is very pointed. But on the other side of that, as you progress and mature your environment, it’s really about having a good representation of what threat actors really can do. Because everything that Victor and his team does, while it sounds extremely complicated, and I don’t want to say it’s not time-intensive, a threat actor isn’t going to be sitting there for months trying to figure these things out. Right. These engagements for us are typically weeks in length, right? Not months.

And so that’s really just to kind of give people a sense of the amount of time and effort a structured threat actor needs. And some of these guys are backed by large teams right. So that they’ve got lots of resources they could throw at it.

Absolutely. I love the questions you ask here where it says, is your pen test really a pen test? Right. Those are some of the main pieces of the puzzle to look at when going beyond it. Absolutely.

Looking at the next slide here, I think you mentioned Heliotrope. And I’m going to also call out that the image is actually AI-generated. We used AI art because I was like, how do you depict a guy that’s trying to break into something and make it engaging? And it was interesting to see what the AI came up with; that’s all an AI-generated image there. That being said, one of the things that we started seeing, the theme you hear from some of the stories that Victor and his team share, is that folks that come to us are typically looking for a much more realistic engagement. Right. They don’t want that automated output, because, again, there’s a point in time and place for that.

But it’s really, hey, we want to know if our controls and our systems are working the way we expect them to. And a lot of this was fed by the fact that there has been a large growth in the deployment of a combination of SIEM and EDR or MDR tools in the world. And when those tools are deployed and utilized to the best of their ability, they do a really great job at monitoring environments. Right. So it’s not meant to be derogatory at all. But what you do see, just by the numbers that we talked about earlier in the webinar, is that the threat actors are still winning. Right.

They’re still making money. So if we’ve grown our technical footprint and our complexity, how is it that we’re still having this problem? And so that’s really where a lot of that engagement needed to come from. That’s where we came up with the idea for the heliotropic engagement. Right. And it’s really designed to be an engaging penetration test. And Victor, I’d love for you to walk through how that works and maybe share a little bit about one of those engagements where, ironically, we’ll say that the good guys found us. Right.

But the way they found us, and everything you were able to accomplish before that, I think, is actually very telling on how an engagement like this drives value for folks. Yeah, okay. I’ll get to that fun how-we-got-caught story in a second. But first I wanted to talk about how we came up with this, what the needs from the clients were, what our reasoning was, and why it was effective. So it started when a company came to us and said, look, we’re looking for your standard penetration test, but we’ve made these big changes in our network.

We’ve invested heavily into monitoring. We’ve built a good blue team, and we’d like to see how effective that is.

The way that we went about testing that is kind of a two-part test. We gave them both. The first part was to test the monitoring and the blue team. So the first half, or the first 30%, of the engagement was trying to go really covert. We’re not running your noisy testing tools. We’re not running Nessus, we’re not running Nmap. We’re going as quietly as we possibly can, and communication is key here.

For the first three or four days, they weren’t sure that we were doing anything at all. So we were talking with them and we showed them our notes. We’re like, here’s what we got access to. We’ve compromised users. We’ve accessed file shares, we’ve exported data, and here’s how we did it, and here’s why we think it flew under the radar. And we would gradually ramp up the noisiness, and they came to us and they’re like, hey, we caught you. You’re using this one compromised user.

And the way they caught us was actually really interesting. They were doing kind of a statistical analysis of LDAP queries, which I thought was wild. It caught us. But what we learned from this test was, yes, this is a highly effective strategy; however, there was a four-to-five-day delay before they found out about it. So that allowed them to tune these tools to get more value out of their investment. And then we gradually eased into a regular pen test.
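As an aside, the statistical analysis of LDAP queries that caught the team can be approximated with a simple per-account volume baseline: an account enumerating the directory issues far more queries than its peers in the same window. This is a hypothetical sketch of the idea, not the client’s actual detection logic; the function name, threshold, and counts are invented.

```python
from statistics import mean, stdev

def ldap_query_outliers(counts_by_account: dict, z_threshold: float = 3.0):
    """Flag accounts whose LDAP query volume sits far above the fleet baseline.

    counts_by_account maps an account name to the number of LDAP queries it
    issued in some window; anything beyond z_threshold standard deviations
    above the mean is flagged for review.
    """
    values = list(counts_by_account.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [acct for acct, n in counts_by_account.items()
            if (n - mu) / sigma > z_threshold]
```

A compromised account running directory enumeration tooling stands out by orders of magnitude against normal workstation traffic, which is exactly why this kind of baseline catches otherwise stealthy activity.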

Super noisy. We ran all the loud tools and everything, and gave them the coverage but also insight into how they can detect some of the stealthy attacks associated with modern-day criminals. Okay, I’ll talk about the other fun story. So we were working on another similar test, trying to be really stealthy, and I got caught. And the reason I got caught was really interesting. I had typoed a domain.

I think I put an extra E at the end or an O in there somewhere. Either way, just an easy mistake to make and didn’t think that was going to cause an issue.

It wasn’t going to take anything down. I just had to correct the command and then run it again. But it got flagged as a phishing attack. They had a monitoring tool that was looking at access to different domains, and it was looking for homoglyph attacks, or lookalike-domain attacks, and it caught us. It was wrong about what it was, but we learned that that control was also highly effective, and we were able to help them enhance that part of their monitoring as well. That’s really cool. I love those.
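The lookalike-domain control that flagged Victor’s typo can be approximated with an edit-distance check: a domain that is one or two characters away from a legitimate one (an extra letter, a swapped character) is suspicious, while the legitimate domain itself is not. A minimal sketch, with an invented allow-list; real tools also normalize confusable Unicode characters, which this toy version skips.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Invented allow-list for the example; a real deployment would use the
# organization's actual domains plus high-traffic sites.
KNOWN_GOOD = {"example.com", "corp-intranet.com"}

def looks_like_typosquat(domain: str, max_dist: int = 2) -> bool:
    """Flag domains that are near, but not equal to, a known-good domain."""
    return any(0 < edit_distance(domain, good) <= max_dist
               for good in KNOWN_GOOD)
```

An extra trailing letter, like the typo in the story, yields a distance of 1 from the real domain and trips the check, while unrelated domains pass untouched.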

And so the heliotropic, would you call that an engagement type or a type of engagement? So that’s a type of engagement that folks can request, and it is very collaborative in nature. When you’re scoping that work out, is it going to be this custom collaboration every time, or do you have some feeders that say, this is what we’ll do? I’m just curious about what that engagement looks like and what that process of talking with them is. Yeah, I’d say every engagement we do is tailored to the customer, their environment, and what their needs are. So the first story I gave was around responding to their need of checking in on their investment in their monitoring, training, and personnel. And so we proposed this as an effective way to do that, as well as give them a regular pen test. And as we scope and work with customers more and more, we know what to look for as we’re doing our onboarding or scoping discussions, and that’s how we go about it.

What’s really interesting, too, is that a lot of folks look at what a full red team or purple team engagement would look like, and obviously there are a lot of moving parts and a lot of logistics there, while Heliotrope is a shade of purple, right, just for everybody that’s not in marketing or graphic design. But what we found was that we were able to provide, given some other efficiencies in the way that we run our business, a very tailored approach for right around a 20% increase in cost over what a typical engagement for us would look like. So it’s not a massive cost increase, but the amount of value that comes along with an engagement like this is massive. Right.

So it’s really hitting a sweet spot with our clientele because of the interactive nature of it. But, you know, we’re able to really drive a lot of value without adding a lot of cost to the engagement. Man, that’s fantastic. Go ahead. I was just going to say, one benefit for us on the testing team is that it’s a two-way street. We are communicating with them, they’re communicating with us. So they get to see what we’re doing, what was effective, and what their blind spots are.

But we also get to learn what is getting detected and what the technology is. Your tradecraft? Yeah. What’s the word? Homogeneous? I don’t know if that’s the right word. I tried looking it up in a dictionary, but that’s not the right word. I don’t know. Symbiotic.

Symbiotic. That’s fantastic. Your tradecraft is rising, so that everybody in your purview and the ecosystem rises up with it. Obviously their controls are being tested, and that’s great. I’ll tell you one of the reasons: after spending a decade-plus as a pen tester and as a red teamer, both inside and outside, when I got involved with PlexTrac, I got the opportunity to start seeing trends at a much more macro level. Instead of seeing individual organizations and the organizations in their purview, I’m seeing it at orders of magnitude.

And what I’m pleased to see is that a lot of folks are starting to recognize that you don’t just need this six-month advanced red team engagement that then turns into a purple team post-mortem for three months, where you’re investing nine months to a year before you can have this conversation. Starting to collaborate and have that conversation either in real time, or as near real time as you can, especially from a service provider or consultative nature, drives so much value. I mean, I’ll also be honest with you, I’m a huge proponent of this concept of just do it all. Get that black box test, go in and get that week of just, gee, gosh, golly, what’s going on? But then certainly be able to, maybe the next gig or the next quarter, get that collaborative test. Because organizations shouldn’t just be taking a seeming pile of documents and dropping it on the desk and being like, we started the gig, here’s the doc, pay us.

We’ll see you next year. That model kind of needs to evolve. Not kind of: it absolutely needs to evolve, whatever type of offensive security or security assessment and testing activity is happening, into this collaboration, this check on learning. I don’t know when it started getting very popular, but I know a long time ago it started becoming de facto that when you did pen testing, you would have a retest baked in. Right. We’ll come back in 30 days. We’ll come back in 60 days.

So that retest was the start of a collaboration, but it was still very much in a vacuum, very much a snapshot in time. It’s good to show that progress. But hearing about the heliotropic assessment model, or testing model, and that live collaboration is fantastic. That’s really cool.

It’s one of the things, and I think we talk about it here on this next slide, where we’re getting into that. But one point that you made I want to touch on a little bit further: not just delivering the report that, as I like to say, passes the weight test if you print it out, right? Can I print it out and roll it up and smack somebody upside the head with it? Will they pass out? Right? One of the places where we really value our relationship with PlexTrac is the efficiencies we’ve gotten in our ability to deliver on the reporting side. Because one of the things that sets us apart, and not to say there aren’t other testing companies out there that do this, is that we’re trying not to just give a black box deliverable. And so a lot of our reports, regardless of whether they’re heliotropic or not, are very detailed, and we call it an attack narrative. So everything that was and wasn’t successful, as the testing teams go through it, they’re documenting everything.

So when someone gets a deliverable from us, they really have a start-to-finish view of what our testing team did. And as you could imagine, doing that in a more old-fashioned way with Word docs and all that is extremely kludgy. It’s time-intensive, and that’s really where we’ve gotten a lot of value out of our partnership with you guys: it makes it a lot more efficient. And if you’ve got multiple testers on a project, they’re not just sharing a Word doc that might get corrupted or lost or anything like that. So that’s really been a big thing for us, and it also helps us keep our costs to deliver down, right? There’s a whole value chain that goes along with that. And to that point, as we start thinking about where testing goes, we’ve seen kind of a shift. The first piece on this slide seems obvious, right: there’s a lot of cloud as people have moved out of their own personal data centers, but it’s not just a lift-and-shift cloud mentality.

Victor, I’d love for you to talk a little bit about how looking at some of these ephemeral networks, things that are containerized and the like, is impacting the way we test and some of the results that we’re finding. Yeah, I think the future is really exciting. We’re seeing a lot of containerization, a lot of serverless, a shift towards security defined by policies, a lot more separation of concerns, and a lot less complexity in the different nodes that build up your system, your software.

I think that’s terrific, but it does change the way that we go about it. A lot of the old tooling doesn’t work so well, so you have to adapt, use new tooling, become familiar with how policies are written and what some common vulnerabilities are in there. It kind of becomes like a code review: you’re looking at logic and systems defined, essentially, in code, and I think that’s awesome. So I think we’re going to continue to move in that direction, and we’re ready.
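When environments are defined by policy, a chunk of the review really does look like code review. As a toy illustration (not Digital Silence’s actual tooling; the policy shape mirrors AWS IAM, but the check is deliberately minimal), here’s a sketch that flags fully wildcarded Allow statements:

```python
import json

def find_wildcard_grants(policy: dict) -> list:
    """Flag Allow statements that use a bare '*' for Action or Resource."""
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may appear as a bare object
        statements = [statements]
    findings = []
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        for field in ("Action", "Resource"):
            values = stmt.get(field, [])
            if isinstance(values, str):
                values = [values]
            for value in values:
                if value == "*":
                    findings.append(f"statement {i}: {field} is a full wildcard")
    return findings

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
    {"Effect": "Allow", "Action": "*", "Resource": "*"}
  ]
}
""")
print(find_wildcard_grants(policy))
# → ['statement 1: Action is a full wildcard', 'statement 1: Resource is a full wildcard']
```

A real policy review looks at trust relationships, condition keys, and privilege-escalation paths, but the shape of the work is the same: logic defined in code, reviewed like code.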

Go for it. JT, I was going to say there was a question that just came in, and I think it ties in a little bit with some of the items that are coming through. So I’m sorry, I didn’t mean to talk over you. The question is: do we have a mentor program in our company? And I think this is something that I love about our company in general, both for our internal employees, but we also celebrate the community quite a bit, and there’s a lot of different things that we do.

If anybody wanted to reach out afterwards, we’re always happy to get more into the details, but we do things like hosting classes. And when we start talking about some of these new technologies and attack types, it’s funny, because we also do incident response and digital forensics within a different group of the business, right? A lot of times, when you start talking about mentoring and the collaboration that goes on there, we’ve had IR engagements where we were having difficulty identifying what the threat actors were doing. And that team would reach out to Victor and his team and say, hey, this is what the environment looks like and this is what the network looks like. What would you guys do, right? There’s a lot of that collaborative nature, and that’s one of the things that I think is really cool. But yeah, we really try to mentor and do a lot of knowledge sharing. For instance, we do a lot of embedded device testing, because there are a lot of quote unquote smart devices coming to market, and those providers come to us for either pre-regulatory approval testing or pre-market engagements.

And what’s interesting with all of that, as an example, Victor, if you want to talk a little bit, again at a high level, about the recent testing engagement where we had one specialized person do a very specialized hardware hack, but then we held an internal learning session to upskill everybody. Yeah, okay.

I’m not trying to divulge too much, but yeah, we were testing a purpose-built device that came to us as a bit of a black box. It was sort of like, here’s this thing, tell us what’s wrong with it. And then we had some regulatory drivers that guided what we were looking for.

This is the third time we’ve tested with them, so a lot of the vulnerabilities that we saw initially have been removed. This was a new device, but they’ve taken lessons learned from the previous rounds of testing and made it more secure. So we get this box, we look at the different input vectors, and we realize we can’t do that protocol downgrade attack anymore. We can’t outright disable crypto anymore at the protocol level. So it’s time to dig into the chip and see if we can pull the firmware off of it. We have a guy that specializes in that, and he pulled off this amazing attack using tweezers, of all things. He shorted some pins and was able to introduce a fault that prevented the device from booting.

He also soldered on some wires and was able to drop into a terminal. Because the device failed to boot, he was now in its bootloader environment and was able to pull the data off of it. There’s a number of us that are super interested in hardware hacks but have allocated our time towards other skill sets. We were definitely interested, though, so we did an internal debrief, a lunch and learn: here’s how the tweezer attack worked, here’s how you can do it yourself. And as a consequence, the whole team is now better at hardware hacking.

You could short out so many different things at your house.

Get the old coffee maker going and you’re like, I got a shell on my coffee maker.

So we’ll use different resources on a team to get people to play to their strengths. Now that I have access to the firmware, it’s become a software problem, which I love. So I’m digging around in Ghidra and reverse engineering.

First unpacking the file system, then analyzing the binaries on there and looking at how the crypto works. That’s actually an ongoing engagement. I can’t wait to get back to it. Yeah. Right after this call, after this Zoom. All right. And that’s cool.
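For readers curious what that first firmware-unpacking step looks like, here is a minimal sketch of the idea behind carving tools like binwalk: scan the dumped blob for well-known magic bytes that mark embedded filesystems or compressed streams, then carve from those offsets. The signature list here is illustrative and far from complete; real tools add entropy analysis and header validation.

```python
# Toy signature scan over a firmware dump: find offsets of known magic bytes.
MAGICS = {
    b"hsqs": "SquashFS (little-endian)",
    b"\x1f\x8b\x08": "gzip stream",
    b"UBI#": "UBI volume",
}

def scan_firmware(blob: bytes) -> list:
    """Return (offset, description) pairs for every magic-byte hit, sorted."""
    hits = []
    for magic, name in MAGICS.items():
        offset = blob.find(magic)
        while offset != -1:
            hits.append((offset, name))
            offset = blob.find(magic, offset + 1)
    return sorted(hits)

# Synthetic blob: bootloader padding, then a fake SquashFS header at offset 64.
blob = b"\x00" * 64 + b"hsqs" + b"\xaa" * 128
print(scan_firmware(blob))  # → [(64, 'SquashFS (little-endian)')]
```

From a hit like that, the next step would be carving out the region at the reported offset and mounting or extracting it, at which point the binaries inside are fair game for Ghidra.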

That’s exciting, to be able to have someone with that skill set who can also articulate it to the rest of the team and raise everyone up. And I’m sure you guys are involved with the community, speaking at conferences and offering different trainings. Shameless plug for me and PlexTrac: we’re actually going to have a free workshop at Hack Spacecon coming up shortly. It’s a two-hour workshop on the Friday of the conference, this Friday actually. I’ve set up a vulnerable hack lab, kind of a CTF-esque environment, but the CTF isn’t the point of the lab. We’re going to go in and show you how to do some pen-testy hacks and then how to report in the PlexTrac platform.

So we’re going to be showcasing how to drive that efficiency and the digital delivery, because that’s the point. All of that voodoo that you do, all of the sweet hacks, then has to be articulated in a report. And as you guys know, having done it with manual methods and your own efficiencies, having a purpose-built platform, highly supported, whose entire purpose is to take disparate sets of data and let you report it cohesively, that’s the point of the workshop. I’m excited for it. It’s free, which is cool. And it’s at the Kennedy Space Center, so if you’re in the area, that’ll be cool.

You hit the nail on the head, right.

We talk about this with folks all the time during the interview process. You could be a great hacker or a great forensics investigator or great at whatever you do, but if you can’t document it, articulate the issue, and deliver it to the customer or the client, right.

I hate to say it like that, but it doesn’t mean anything, right? I mean, you have to be able to articulate. The other thing I like to say is, hey, the whole concept of how this world works is that we’re taking the synapses and everything between your ears and turning it into a deliverable. Right.

That’s literally what we’re trying to do. It’s a huge piece of what we do on a daily basis for everybody. So, yeah, it’s very important. Yeah. Communication is key, and the lion’s share of communication is in the report that we deliver. And it is tremendously helpful, thanks to PlexTrac, to focus on the content, the technical content, the writing, rather than formatting, fonts, spacing, all that stuff that eats up a ton of QA time. Just focusing on what is being communicated. Jamming, yeah, no doubt.

That was well said. I’ve never heard communicating described like that, but now I’m going to use it all the time. I’ll be like, let’s have a time where I can take the synapses between my ears and create a deliverable. And they’re like, what? I was like, I want to talk, let’s chat.

I look at this slide we’re on, talking about where testing is heading, and it is interesting. We can’t rest on our laurels with what worked ten years ago or five years ago, especially the techniques and technologies you’re familiar with. And I have a terrible story, too. For years I only ever read the word ephemeral; I never said it out loud. Then I was on a call one time and I mispronounced it, and they’re like, do you mean ephemeral? And I’m the expert. I’m supposed to be the expert.

I can’t even say that word, so seeing it gives me anxiety. But with the nature of cloud technologies and containers, it’s so interesting comparing folks who haven’t engaged with them against folks who are staying bleeding edge. And that’s one of the values I’ve had, both working internally and externally: being part of a community and engaging with service providers and folks who are driving each other to stay bleeding edge. But I’ll be honest with you, a lot of folks end up running the same vulnerability management program and asset management strategy, when the idea of IP addresses doesn’t matter anymore in some instances, right? When you have functions as a service and containers, things that might sit behind load balancers and have host names that matter, but every day they’re going to have a different IP address, it really forces you to rethink your strategy. And a lot of times now, and I don’t know if it’s always been this way, I’ve only really been in the industry since ’09, the folks who are doing it and understand the technologies are driving the innovation.

They’re coming up with methods, and then the tooling comes behind. Earlier in my career, I was waiting for the tools to show me the methods and then finding ways to enhance them. Now I recognize that staying bleeding edge means following the researchers, the practitioners, and the technologists. As an example, internal pen testing was my jimmy jams. I just loved it. And yeah, plenty of organizations still have long-standing, old-school Active Directory.

But now, when you start exploring Azure AD and hybrid domains, net commands don’t matter as much; just go to the Azure panel and you can see everything. In Azure environments, for AD-centric flaws, I’m attacking APIs now. When I was attacking AD in the past, I wasn’t spending time in Burp. So all that to say, I think we need to make sure that, as part of our technology ecosystems, we partner with, mentor, train, and hire folks who are continuously moving the needle forward with technology. Because as new technology gets established and injected into tech stacks, the methods and tooling of yesteryear just don’t cut it.

No, agreed. I think that’s one of the things. Not to buzzword it, but Victor, I’d love to pick your brain on this last little bit. I don’t want to name-drop specific AI machine learning bots, but folks are using them for code checking, for scripting, for bypassing safeguards from an adversarial perspective. How do you think that might impact testing in the future, just from your purview? Oh, yeah, interesting question. I think we’re already seeing some interesting adaptations from the infosec community. We’re seeing new attack classes like prompt injection, or prompt reverse engineering, trying to jailbreak these language models.

We’re seeing code being ported from one language to another, or synthesized entirely. I think it’s really exciting, and it’s hard to predict where it’s going to go. But one thing I’ve got to say right now, my issue with large language models is that they are confidently incorrect. They’re confident all the time. Sometimes. Yeah, that’s exactly right. You always need a human to verify or validate the output, because it’s always confident.

It’s always 100% confident. So what I’d like to see is a confidence score. I would like to see how confident it actually is. Hopefully that’s where we’re heading.
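One illustrative way such a confidence score could be derived, assuming the API exposes per-token log probabilities (some chat APIs do, via a logprobs option): average them and exponentiate, giving the geometric-mean probability of the sampled tokens. This is a rough sketch, not a calibrated metric, and the numbers below are made up for illustration.

```python
import math

def confidence_score(token_logprobs: list) -> float:
    """Geometric-mean token probability as a rough 0-to-1 confidence signal.
    Assumes the model API returned one log probability per sampled token."""
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# A "confident" answer: every token was sampled at probability ~0.9.
print(round(confidence_score([math.log(0.9)] * 5), 2))  # → 0.9
# A shakier answer: tokens averaged probability ~0.3.
print(round(confidence_score([math.log(0.3)] * 5), 2))  # → 0.3
```

Even a crude signal like this would let a tester sort model output into "probably fine" and "verify by hand" piles, which is exactly the triage the panelists describe doing manually.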

I love it, because I’ve used that exact same term, confidently incorrect. And I don’t know if you’ve messed around a bit, but I have, and I’ve asked the models to explain why their answer is correct, and they will confidently give you a wrong explanation. It gets you 70, 80% of the way there, but it’ll add switches to commands that don’t exist anymore. It’s fascinating. And I was talking to a buddy of mine; his wife has a PhD in human-computer interaction, and she’s a professor and a research scientist. She is jamming hard on these AI models now and some of the machine learning stuff. And what she was mentioning to me, and I’m probably getting this wrong, so apologies if she finds out and I’ve said it wrong.

But she was explaining that some of these learning models, and I don’t want to use the term punish, have a point system that basically incentivizes them to produce an answer rather than say they don’t know. So, interestingly enough, as they’ve trained the model, it knows it should produce something rather than say, I don’t have an answer for that, or, I need to research it and get back to you. It’s fascinating, and you’re absolutely right. Just like with anything, at least at this stage, those models are certainly another tool in the toolkit. I mean, I love it; I’m never going to have to Google a regular expression again, or iptables syntax, or a whole number of things. But it’s only as good as the folks who have the requisite knowledge to take what it makes and manipulate it into the sauce. Yeah, we have, through several iterations, managed to coax functioning Python code that does egress bypasses over DNS and ICMP.
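For context on what an egress bypass over DNS amounts to: data gets chunked into DNS query labels under an attacker-controlled domain, and the authoritative resolver on the other end reassembles it. Here is a hedged sketch of just the encoding step, with a placeholder domain and no packets sent; it is not the code the speakers generated, only the shape of the technique they are describing.

```python
import base64

MAX_LABEL = 63  # DNS caps each label at 63 bytes

def to_dns_queries(data: bytes, domain: str = "exfil.example.com") -> list:
    """Chunk data into base32 labels and build hostnames that a resolver
    controlled by the tester could reassemble. Encoding only, no sockets."""
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    chunks = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    return [f"{i}.{chunk}.{domain}" for i, chunk in enumerate(chunks)]

def from_dns_queries(queries: list) -> bytes:
    """What the resolver side would do: reorder by sequence number and decode."""
    ordered = sorted(queries, key=lambda q: int(q.split(".")[0]))
    encoded = "".join(q.split(".")[1] for q in ordered).upper()
    encoded += "=" * (-len(encoded) % 8)  # restore base32 padding
    return base64.b32decode(encoded)

queries = to_dns_queries(b"creds: hunter2")
assert from_dns_queries(queries) == b"creds: hunter2"
```

In a real engagement the queries would be issued as actual lookups (which most egress filters happily forward), which is why DNS is such a popular covert channel to test for.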

But usually it has to be kind of a solved problem, and then we review it and correct it a little bit. We’ve figured out that instead of generating the code ourselves, we’re proofreading and error-correcting it; it becomes more of an editing process when you go about it this way. I don’t know about you all, but I’ve had it write a Burp plugin that shocked me. I was like, write a Burp plugin that adds this custom header that does this thing, and it actually kind of worked. I mean, it still took some time to edit a little bit. And somebody was telling me that some dude has a self-contained model compressed down to like four gigs. A fully functioning model at four gigs.

I don’t understand how a model that small works. But yeah, this is good stuff. Well, we’re coming up close; we’ve got five minutes left. If there are some topics that you guys want to put a bow on, we can get there. Or Angie, if there are some questions you want to pull from the powers that be in the Q&A, we could work through those. But JT and Victor, this has been absolutely engaging.

I’ve super enjoyed it.

I want to thank everybody that joined us today, as well as our partners over at PlexTrac. Again, I think that as the threat landscape continues to evolve, and I know that term gets thrown out a lot, we’re going to be at this confluence of figuring out how we can be more efficient with our spend as well as how we provide value for the spend that’s there. We’ve been immensely appreciative of our partnership together for helping us continue to be efficient for our clients. So I just wanted to say thank you on that behalf. That’s awesome. I see we’ve got a question here, and yeah, it’s actually kind of a big question. That’s a big question.

But Victor, I don’t know, do you want to touch on this one? It’s: how will ChatGPT be used or leveraged for pen testing? Okay. Yeah. Right out of the gate, we’ve been able to develop tools with it. For example, the Burp plugin, or Python code to bypass egress filtering. Another way you could use it is to have it rewrite code.

Like, a buddy of mine had an implant that he had written in Python, had it ported to Nim, and just through that conversion process, antiviruses were no longer picking it up.

It’s a cool language. Yeah. It’s Python-esque, but a systems programming language. It’s really cool. But you could also use a language model to review your policies. You could say, you are an attacker and you are trying to find a vulnerability or a security flaw in, and then paste a corpus of your policies, like, for example, your AWS policies. I think it would do a pretty good job there.

And if it’s not ready yet, I think it’ll be ready soon.
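That "you are an attacker" policy review is mostly prompt assembly. A minimal sketch, assuming a chat-completions-style API: the payload shape below mirrors common chat APIs, but the model name is a placeholder and nothing is actually sent anywhere.

```python
def build_policy_review_request(policies: list, model: str = "some-chat-model") -> dict:
    """Assemble a chat-style payload asking a model to red-team a policy corpus.
    Illustrative only: swap in a real model name and client to actually call it."""
    corpus = "\n\n---\n\n".join(policies)
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": ("You are an attacker trying to find a vulnerability "
                         "or security flaw in the following policies.")},
            {"role": "user", "content": corpus},
        ],
    }

request = build_policy_review_request(
    ['{"Effect": "Allow", "Action": "*", "Resource": "*"}'])
print(request["messages"][0]["content"])
```

As the panelists stress, any findings that come back would still need a human to validate, since the model will flag things with equal confidence whether it is right or wrong.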

Yeah, I agree. I’ve seen some really neat things. Like you mentioned fixing up code; what I’ve noticed too is that at times it’s going to fall over if you ask it to do something end to end. But I’ve found a lot of success in being like, write me a Python module that does this. And I like how it explains it, too. It’s the best.

When I got in the Army, I was doing schooling online while I was working. And shoot, if I’d had an AI model explaining things line by line like that, I would be way smarter. But I’ll sit there and say, write me a Python module that takes these inputs and does this. And then: write it in Go, write it in Rust, write it in C sharp, so I can start to see the differences. I just found it to be phenomenal. And then I was messing around with the API and I found a way.

Well, I didn’t find a way exactly, but I sat back and experienced OpenAI API inception. I was messing around with the API for OpenAI, and I was using API calls to ChatGPT to ask how to do API calls to ChatGPT, and getting responses. And I was like, this is ChatGPT inception. That’s funny. Yeah. And Rory, I wanted to add to it.

Language models are great at writing phishing emails. I like it. That’s a good call. It might be a phishing email that says we missed your package, and it can make it super convincing. That’s right.

Well, this is good stuff. I think we can hand it back to Angie to take us home on the range. That sounds great. Yeah. Thank you guys for such a great session. Awesome stories, insights and your experiences. And to the audience for all the great questions.

Got one more slide, just some resources, and we’ll follow up. You’ll receive a copy of the recording, but if you have questions, we’ve got a ton of videos on our website and on our YouTube channel, and lots of different ways to connect with our team. If you’re interested in seeing a demo, we’re always happy to show you the platform in person. And hopefully you also learned a lot about the team at Digital Silence. If you’re interested in learning more about their services and offerings, JT and Victor, I’m guessing the website is the best place to send people, but correct me if I’m wrong. We’re also on the socials, but yeah, the website is a great place to start.

Perfect. Well, yeah, we’re right on time. That concludes our session today. And JT, Victor and Nick just wanted to thank you all again for an awesome session and for your time. I know it’s always an investment, but really appreciate it and want to thank everybody that joined. Thanks again and enjoy the rest of your day. Thank you.

Take it easy.