Top AI Threats to Your Cybersecurity in 2024

AI challenges and opportunities for black hats, white hats, and blue teams

By Dan DeCloss, PlexTrac Founder/CTO

If you haven't been living in a cave or bunker over the past year or so, you've noticed the craze around generative AI and the impact it's having on everyone's lives. I'm continually asked about AI, with questions ranging from its impact on society and jobs to its impact on the cybersecurity industry.

When discussing AI, it's important to keep all of its flavors in mind. We have been living with AI in various forms for quite some time, but generative AI warrants a deeper discussion of how it's already impacting our industry and how we should continue to prepare. Nobody knows the future, but we can discuss where we see trends going and make predictions.

This article is the first in a series focusing on how I anticipate AI will continue to affect cybersecurity teams and where it helps or hurts the odds of deep exploitation versus early detection and prevention. To cover this topic, I'll consider the top threats from generative AI, meaning what's most likely to be exploited by black hat hackers, and then how white hats and blue teams can prepare and respond.

The Intersection of Cybersecurity and AI

Remember that your goal as a security team is to prevent as many attacks as possible and, when compromised, to detect attacker behavior and activity as early in the attack life cycle as possible. Doing so gives you the best chance to avoid a major breach that ends up on the front page of the news, or worse. Within that context, the two major questions related to AI are: What role will AI play in helping you achieve your mission? What role will it play in hindering it? I have a principle of trying to keep things as simple as possible.
So for this series, I will cover where I see AI creating challenges and opportunities related to these questions within the two primary functions of cybersecurity: offense and defense. But first, it's important to consider the advantages to the adversaries: the top three ways I think AI is, and will continue to be, most exploited by bad actors.

Supercharged techniques for social engineering

First, generative AI is completely changing the game in social engineering. Because bad actors can adopt new technology much more quickly than organizations can, AI's ability to exponentially amplify social engineering is a serious challenge for businesses. From crafting phishing emails and schemes to producing audio and video deepfakes, AI makes creating targeted, believable media for social engineering attacks both easy and fast. It's not hard to predict that AI-produced content will continue to get better, which means things will get much, much worse for individuals and organizations trying to avoid falling victim.

AI is so effective in social engineering because humans are often an attacker's first target, and it's much easier for people to make mistakes than machines. People are beautifully imperfect and unpredictable. Generative AI is also far from perfect at this point, but it doesn't have to be perfect as long as it is better than what a black hat could produce before. And it is increasingly scalable, which should serve as a warning sign to all of us.

Preparing for social engineering

To prepare for the continuing onslaught of high-quality social engineering attacks, ethical hackers should replicate AI usage in their pentesting to show businesses the new level of quality in social engineering attacks they can expect to see. Likewise, cybersecurity teams and businesses must increase training and education and work toward a zero-trust model.
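To make the training point concrete, here is a minimal sketch of what an automated pre-filter supporting employee awareness might look like. It scores a message on a few explainable signals rather than using a trained model; the signal list, function name, and thresholds are all hypothetical illustrations, not a production detector.

```python
import re

# Hypothetical urgency phrases commonly seen in social engineering lures.
# A real system would use a trained classifier; this is only a sketch.
URGENCY_TERMS = {"urgent", "immediately", "verify your account", "password expires"}

def phishing_risk_score(sender_domain: str, claimed_org_domain: str, body: str) -> int:
    """Return a rough 0-3 risk score from simple, explainable signals."""
    score = 0
    lowered = body.lower()
    # Signal 1: sender domain does not match the organization the mail claims to be from.
    if sender_domain.lower() != claimed_org_domain.lower():
        score += 1
    # Signal 2: urgency language typical of social engineering.
    if any(term in lowered for term in URGENCY_TERMS):
        score += 1
    # Signal 3: a link in a message that also asks the reader to log in.
    if re.search(r"https?://\S+", lowered) and "login" in lowered:
        score += 1
    return score

# Example: a lookalike domain plus urgency language plus a login link scores 3.
print(phishing_risk_score("examp1e-support.com", "example.com",
                          "URGENT: verify your account at http://examp1e-support.com/login"))
```

A score like this would only flag messages for human review; the value is in giving employees an objective second opinion, not in replacing their judgment.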
People will always fall victim to social engineering attacks, but they are also an important asset in preventing them. Employees who are well trained, aware, and encouraged to speak up when they see something can make a big difference. You should also start using AI to help employees identify these advanced social engineering attacks, an approach that can be just as important as continued vigilance and training. Empowering employees with technology that helps them make objective decisions rather than emotional ones gives everyone a much better fighting chance against AI-assisted social engineering.

Increased speed at which attacks can occur

Generative AI is increasing not only the quality of media that can be used for cyber attacks but also the speed at which it can be produced. Threat actors are taking, and will continue to take, advantage of generative AI to scale their attacks. Unfortunately, businesses were already operating at a resource deficit compared to threat actors, and AI is only widening the gap. Time and money were always on the side of the black hats, and AI is making it quicker and cheaper to deploy advanced attacks at scale. Attackers can use AI to create additional payloads faster and can use machine speed to determine attack paths and adapt to a target environment at scale. This shortens the dwell time an attacker needs, which reduces their footprint and narrows the window defenders have to detect them.

Preparing for more attacks

I do see an opportunity here for ethical hackers who work at service providers or on internal teams. Because AI is so visible, it is becoming easier to make the case for prioritizing and resourcing cybersecurity to combat it. Organizations are enthralled with the possibilities of AI right now and are assessing their overall strategy.
It is a good time for cybersecurity leaders to educate organizational leadership on the threats AI can pose and to strengthen alignment between cybersecurity teams and the larger business. Bad actors don't have to be the only ones benefiting from AI to scale their activities. White hats and blue teams must also use AI to automate proactive and defensive measures to keep up. More on this in the next articles.

Enhanced automation and detection bypass

Generative AI is not only useful for scaling social engineering attacks. It is also capable of quickly learning how security systems work and devising ways to bypass defensive technology. With its ability to learn and respond to what it finds, it is a dangerous tool even for less sophisticated adversaries to use to avoid detection software and maximize damage once inside. AI increases the amount of automation available to attackers across almost all phases of the attack lifecycle, including reconnaissance, enumeration, payload development, and delivery. Additionally, AI can help discover what security controls exist within the target and aid the attacker in bypassing them. With this kind of aid, the attacker leaves a much lighter footprint, and it becomes much harder to detect nefarious behavior and activity. Forensic work also becomes more challenging, and the skills bar needed to conduct advanced attacks continues to drop. The typical script kiddie can do a lot more damage than they used to, and advanced threat actors are wielding much more power.

Preparing for automated detection bypass

In this case, the same aspects of AI that are useful for avoiding detection are also the ones to leverage in improving visibility. As always, blue teamers will have to evolve and understand their risk and threat landscape better than anyone. While AI can aid attackers, it can also aid defenders.
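As a small illustration of automation aiding defenders, the sketch below cross-references scanner findings against a list of actively exploited vulnerabilities so remediation effort goes to the riskiest items first. The CVE identifiers, hosts, and scores are invented for the example, and the exploited-CVE set stands in for a real feed such as a known-exploited-vulnerabilities catalog.

```python
# Hypothetical scanner output: host, CVE identifier, and CVSS base score.
scanner_findings = [
    {"host": "web-01", "cve": "CVE-2024-0001", "cvss": 9.8},
    {"host": "db-01",  "cve": "CVE-2024-0002", "cvss": 7.5},
    {"host": "web-02", "cve": "CVE-2024-0003", "cvss": 5.3},
]

# Stand-in for a feed of CVEs known to be exploited in the wild.
known_exploited = {"CVE-2024-0001", "CVE-2024-0003"}

def prioritize(findings, exploited):
    """Sort findings so actively exploited CVEs come first, then by CVSS."""
    # False sorts before True, so exploited findings lead; -cvss sorts high scores first.
    return sorted(findings, key=lambda f: (f["cve"] not in exploited, -f["cvss"]))

for f in prioritize(scanner_findings, known_exploited):
    flag = "EXPLOITED" if f["cve"] in known_exploited else "backlog"
    print(f'{f["host"]} {f["cve"]} cvss={f["cvss"]} [{flag}]')
```

The point is not the twenty lines of Python; it is that exploit-aware prioritization is exactly the kind of correlation work that AI and automation can do continuously at a scale humans cannot.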
I will always emphasize a strong proactive security program, and AI can help by automating discovery of your attack surface, correlating known vulnerabilities with existing exploits, and identifying gaps or weaknesses unique to your environment. The best defense is a strong offense, and that must include the notion of using AI to combat itself.

AI strengths and weaknesses apply to everyone

To keep up with the advantages AI gives black hats, we all must proactively follow developments in generative AI as well. As with most technology, the bar for using it for nefarious purposes is low. Generative AI is readily and cheaply available, so bad actors can and will use it to multiply their efforts far faster than businesses with limited resources and many competing priorities can. Knowing this means that security teams (red and blue) must race to keep up and find ways to leverage AI to their advantage too.

As you work to maximize the potential of AI for cybersecurity and mitigate the threats it creates, remember that you'll likely have to prioritize. The areas above are my predictions for those to watch. And remember the age-old notion that garbage in equals garbage out. Generative AI learns from and feeds on the data it collects, and if that data is wrong, the consequences can be dramatic. This is true for white hats and black hats alike, so be diligent and rely most on your complex, uniquely human mind.

Dan DeCloss, PlexTrac Founder/CTO

Dan has over 15 years of experience in cybersecurity. He started his career in the Department of Defense and then moved into consulting, where he worked for various companies. Prior to PlexTrac, Dan was the Director of Cybersecurity for Scentsy, where he and his team built the security program from its infancy into a best-in-class program. Dan has a master's degree in Computer Science from the Naval Postgraduate School with an emphasis in Information Security.
Additionally, Dan holds the OSCP and CISSP certifications.