
Authored by: PlexTrac Author

Posted on: December 12, 2025

Spooky Supply Chains & Researcher Reality: A Conversation with Jonathan Leitschuh

Software supply chain vulnerabilities are becoming one of the most unsettling challenges in modern cybersecurity, and attackers are only getting more creative. To explore these issues, our founder, Daniel DeCloss, sat down with Jonathan Leitschuh, an open source security researcher known for uncovering high-impact vulnerabilities, advancing responsible disclosure practices, and pushing the industry toward more secure-by-default software.

Jonathan is best known for identifying a Zoom flaw in 2019 that allowed websites to silently activate a user’s webcam. The discovery helped reshape Zoom’s security posture and sparked global discussion about how vulnerabilities should be handled.

The conversation covers the Zoom flaw, but it also dives into how he got into vulnerability research, what researchers need to know about disclosure, where supply chain weaknesses are growing, and how engineering teams can protect themselves from increasingly sophisticated attacks.

Watch the full episode or read the highlights below.

From Robotics to Security Research (By Accident)

Jonathan didn’t start out in security. He studied Robotics and Computer Science at Worcester Polytechnic Institute, took a single cybersecurity class, and went into industry as a software engineer.

The pivot came when he accidentally discovered a vulnerability in the Gradle plugin portal: because of wildcard version resolution, it was possible to hijack a dependency simply by publishing a higher version of an existing plugin. That moment showed him that a strong software engineering background was enough to start finding impactful security issues.
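
To make that mechanism concrete, here is a minimal Python sketch, not Gradle’s actual resolver, of how a wildcard requirement like `1.+` hands control to whoever publishes the highest matching version; the plugin versions below are hypothetical.

```python
# Illustrative sketch (not Gradle's actual resolver): how a dynamic/wildcard
# version requirement can be hijacked by whoever publishes the highest version.
# The version list below is hypothetical.

def resolve_dynamic_version(requested: str, available: list[str]) -> str:
    """Resolve a dynamic requirement like '1.+' to the highest matching version."""
    prefix = requested.rstrip("+")  # '1.+' -> '1.'
    candidates = [v for v in available if v.startswith(prefix)]
    # Naive "highest wins" comparison on numeric components
    return max(candidates, key=lambda v: [int(p) for p in v.split(".")])

published_versions = ["1.0.3", "1.1.0", "1.999.0"]  # attacker publishes 1.999.0
print(resolve_dynamic_version("1.+", published_versions))  # -> '1.999.0'
```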

As he put it, “I realized my knowledge of computer science was enough to just find vulnerabilities.”

That realization set him on a path that would shape his career.

The Zoom Vulnerability That Changed Everything

The Zoom vulnerability that brought Jonathan global attention didn’t start as a hunt for bugs. He was simply curious about how clicking a Zoom link in a browser could launch the local Zoom client.

Digging into that flow, he discovered that Zoom was running a local web server on the user’s machine and accepting requests in a way any website could abuse. The result: a site could silently join a user to a Zoom call and turn on their webcam without proper consent.
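
The underlying pattern is worth spelling out. The sketch below is a deliberately simplified Python illustration of that class of bug, not Zoom’s actual code: a local helper server that performs a privileged action based on a plain GET request, without checking where the request came from. The port number and query parameter are invented.

```python
# Simplified sketch of the risky pattern (NOT Zoom's actual code): a local helper
# server that acts on a GET request without validating its Origin.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class HelperHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        meeting = params.get("join", [None])[0]  # invented parameter name
        if meeting:
            # A hardened version would at least check self.headers.get("Origin")
            # and require explicit user confirmation before acting.
            print(f"Launching client and joining meeting {meeting} with camera on")
        self.send_response(200)
        self.end_headers()

# Any webpage can trigger this with e.g. <img src="http://localhost:19999/?join=12345">,
# because the browser sends simple GETs like image loads without any CORS preflight.
HTTPServer(("127.0.0.1", 19999), HelperHandler).serve_forever()
```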

Jonathan reported the issue, gave Zoom time to fix it, and warned that their approach was incomplete. When their fix proved inadequate and the vulnerability resurfaced, Zoom initially treated it as a “new” issue. At that point, he decided to go public.

The disclosure drew coverage from major outlets and led Zoom to:

  • Fully fix the issue
  • Invest more heavily in security
  • Eventually pay Jonathan a $10,000 bounty in recognition of his work, even though he had not used their original bug bounty channel

It also pulled him into a later federal lawsuit involving Zoom and its investors, where he was subpoenaed. The case went nowhere, but it underlined how legally messy vulnerability disclosure can become, even for good-faith researchers.

The Dan Kaminsky Fellowship & Fixing at Scale

After his early work, Jonathan was honored as the first-ever Dan Kaminsky Fellow, a fellowship that allowed him to spend a year working on a personal project in Dan’s name. He chose to focus on automating vulnerability remediation at scale, using bulk pull request generation to push fixes across open source ecosystems. That research led to talks at Black Hat and DEF CON and helped shift the industry conversation from “how do we find bugs?” to “how do we fix them everywhere they matter?”

Along the way, Jonathan has worked with the Linux Foundation, contributed to the Open Source Security Foundation (OpenSSF), ranked among the top earners in the inaugural year of GitHub’s CodeQL bug bounty program, and helped shape vulnerability disclosure guidance for open source maintainers.

Lessons for Aspiring Security Researchers

Jonathan shared several practical insights for people who want to break into security research and vulnerability discovery:

1. Your Engineering Skills Are Enough to Start

You don’t need a pure security background. A solid understanding of how systems are built is often the best foundation. Many vulnerabilities come from simply asking, “How does this actually work?” and following that curiosity.

2. Write Disclosures Like Future Blog Posts

When you report a vulnerability to a vendor or maintainer, write your report as if it’s going to be a blog post later. That way, when the disclosure window ends, you already have a polished, public-ready narrative instead of trying to reconstruct everything months after the fact.

3. Always Use a Disclosure Policy

Jonathan emphasized the importance of linking a clear disclosure policy in your communication with vendors. Without a defined timeline and expectations, issues can sit unresolved for months (or indefinitely).

He used Google Project Zero’s 90-day policy for years, then helped co-author the OpenSSF vulnerability disclosure policy, which is more suitable for open source. That policy includes:

  • An earlier “soft” deadline when maintainers are unresponsive
  • A defined point at which researchers are justified in publishing, even if there’s no fix

The goal is to balance fairness to maintainers with the need to protect downstream users.

Bug Bounties, NDAs, and Researcher Silence

Jonathan has had success in some bug bounty ecosystems, but he remains cautious about how many programs are structured. He’s particularly wary when participation forces researchers into nondisclosure agreements, when vendors retain full control over if or when research can be made public, or when bounty programs seem designed more for PR optics than genuine security improvement. In those situations, researchers may end up silenced rather than supported—an outcome he believes ultimately harms users.

For software that ships directly to end-user environments or forms part of the open source ecosystem, Jonathan argues that public disclosure is often essential so downstream consumers can understand their exposure and patch appropriately. To avoid being trapped in restrictive communication channels, he avoids defaulting to vendor bug bounty portals, makes it clear from the outset that he is not agreeing to NDA-based disclosure terms, and keeps the option to publish if a vendor responds slowly or dismissively.

The Legal Risk: Still There, But the Community Helps

While the industry has matured and collaboration between vendors and researchers has improved, there are still cases where good-faith researchers face legal threats, investigations, or even arrest.

Jonathan pointed to:

  • Community efforts like disclose.io, which tracks past legal threats against researchers and promotes safer frameworks for disclosure
  • Organizations like the Electronic Frontier Foundation (EFF)
  • A growing legal support fund specifically for security researchers who need help defending themselves

The takeaway is to do your research ethically, know your rights, document your good-faith intent, and be aware of where to seek help if things escalate.

The Truly Spooky Part: Open Source Supply Chain Risk

In the last part of the conversation, Jonathan focused on modern software supply chains and open source security — arguably the scariest topic of all.

You’re Using More Than You Think

Most organizations are building on mountains of open source dependencies. Many don’t even have a complete view of what they’re using, let alone how secure it is.

New Attack Patterns: Typos, Slop, and AI

He highlighted attacks like:

  • Namespace hijacking / dependency confusion
  • Package abuse driven by AI-generated suggestions, where tools suggest package names that don’t exist yet, and attackers rush in to register them with malicious content

Combined with the habit of copy-pasting install commands from Stack Overflow or AI tools, the result is a risky cocktail.
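
One small, practical countermeasure is to sanity-check a suggested package name before installing it. The Python sketch below queries PyPI’s public JSON endpoint (https://pypi.org/pypi/<name>/json) to see whether a name exists at all; the second package name is a placeholder standing in for a hallucinated suggestion.

```python
# Sketch: sanity-check an (AI- or forum-suggested) package name against PyPI
# before installing it. The second name below is a placeholder.
import json
import urllib.request
from urllib.error import HTTPError

def check_package(name: str) -> None:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            print(f"'{name}' does not exist on PyPI -- a likely hallucinated or typo'd name.")
            return
        raise
    releases = data.get("releases", {})
    print(f"'{name}' exists with {len(releases)} release(s); "
          f"still review the project page and maintainers before installing.")

check_package("requests")                       # long-established package
check_package("definitely-not-a-real-pkg-xyz")  # placeholder for a hallucinated name
```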

Critical Repositories With Little Scrutiny

A major concern: many foundational artifact repositories — npm, PyPI, RubyGems, and others — became part of everyone’s supply chain without going through the sort of security review a commercial SaaS platform would face.

Because they weren’t “sold” into enterprises, they often:

  • Didn’t need SOC reports
  • Weren’t required to undergo regular penetration tests
  • Still ended up at the heart of the global SDLC

Some of these services, Jonathan notes, have historically never had a proper pen test despite their central role.

Practical Hardening Advice and a Zero Trust Mindset

Jonathan emphasized several practical ways teams can strengthen their software supply chains today. He pointed to tools that detect or block malicious packages by analyzing behavior and metadata rather than relying solely on package names. He also highlighted the importance of hardening CI/CD pipelines using solutions that monitor which domains a workflow communicates with during an audit phase and later block unexpected outbound requests—an effective safeguard against credential leakage or silent exfiltration attempts.
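
The audit-then-block idea behind those pipeline tools is easy to sketch. The hypothetical Python example below is not any specific vendor’s product: it compares the domains a workflow contacted against a baseline recorded during an audit phase and fails on anything unexpected.

```python
# Hypothetical sketch of the audit-then-block egress pattern for CI/CD.
# 'observed_domains' stands in for whatever your pipeline monitoring records.

# Baseline captured while the workflow ran in audit mode
allowed_domains = {
    "github.com",
    "pypi.org",
    "files.pythonhosted.org",
}

# Domains contacted during a later run (e.g. from DNS or proxy logs)
observed_domains = {
    "github.com",
    "pypi.org",
    "attacker-exfil.example",   # unexpected: possible credential exfiltration
}

unexpected = observed_domains - allowed_domains
if unexpected:
    # In block mode this is where you would fail the build or drop the traffic
    raise SystemExit(f"Unexpected outbound destinations: {sorted(unexpected)}")
print("All outbound traffic matched the audited baseline.")
```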

More broadly, Jonathan argued for secure-by-default behavior throughout the ecosystem. Whether a developer is running apt-get install, adding a new dependency, or pulling in AI-generated code, the initial state should be as safe as possible. And since no dependency—human-written or AI-generated—should be assumed trustworthy, he encouraged developers to treat every component as untrusted until proven otherwise.
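
One concrete way to act on “untrusted until proven otherwise” is to pin not just versions but content hashes, and fail closed on any mismatch. Here is a minimal Python sketch of that check; the artifact name and digest are placeholders.

```python
# Minimal sketch of "untrusted until proven otherwise": verify a downloaded
# artifact against a pinned SHA-256 digest before using it. The file name and
# expected digest below are placeholders.
import hashlib
from pathlib import Path

PINNED = {
    # artifact file -> expected sha256 (recorded when the dependency was vetted)
    "some-dependency-1.2.3.tar.gz": "aa" * 32,  # placeholder digest
}

def verify(path: str) -> None:
    expected = PINNED.get(Path(path).name)
    if expected is None:
        raise SystemExit(f"{path} is not in the pinned allowlist")
    actual = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if actual != expected:
        raise SystemExit(f"Hash mismatch for {path}: refusing to use it")
    print(f"{path} matches its pinned digest")

# verify("some-dependency-1.2.3.tar.gz")  # would fail closed on any tampering
```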

This philosophy mirrors a zero trust approach applied directly to software development. Nothing should be implicitly trusted: not open source simply because it’s widely adopted, not AI because it’s convenient, and not even internal pipelines because they appear to function smoothly. Instead, teams should layer defenses, continuously observe behavior, and operate under the assumption that anything capable of being abused eventually will be.

Closing Thoughts

This session underscored just how complex and fast-moving today’s vulnerability landscape has become. From disclosure challenges and legal risks to supply chain weaknesses and insecure defaults, the conversation highlighted the need for clear policies, thoughtful engineering practices, and a zero trust mindset across the entire development lifecycle. The message was simple: vulnerabilities will always exist, but a combination of transparency, collaboration, and disciplined engineering can dramatically reduce the impact they have on downstream users.

For those who want to dive deeper into the topics discussed, Jonathan shares his research and writing on Medium, stays active on LinkedIn, and posts occasionally on X/Twitter. His ongoing work offers additional context and examples that build on many of the ideas explored in this discussion.

Follow PlexTrac on LinkedIn for more engaging episodes of PlexTrac Friends Friday, featuring leaders across all aspects of the cybersecurity industry. 

PlexTrac Author
At PlexTrac, we bring together insights from a diverse range of voices. Our blog features contributions from industry experts, ethical hackers, CTOs, influencers, and PlexTrac team members—all sharing valuable perspectives on cybersecurity, pentesting, and risk management.
