AI turns software supply chain security into a continuous engineering discipline

The software supply chain has always been an iceberg: most of what powers modern applications remains invisible, buried in dependency layers that few teams directly observe. But as AI-driven engineering accelerates code production, that hidden foundation is accumulating security debt at an unprecedented rate.

Dan Lorenc, co-founder and CEO of Chainguard, frames the challenge starkly. Faster iteration and higher code volume are already stressing existing verification, release, and supply chain assurance practices. The issue is not whether AI-generated code can be trusted in isolation, but whether organizations can control how that code is validated and allowed to reach production.

The bifurcation of the software stack

Lorenc predicts a bifurcation in the software stack. People are going to keep using battle-tested pieces of software at the foundation. Components like web servers and databases are too critical and too complex to replace with generated alternatives. In these cases, Lorenc argues that the industry is better off if everyone points their tokens at one solution to make it better over time, rather than everyone pointing their tokens at their own fragmented solutions.

However, the middle layer of integration glue, middleware, and client libraries may increasingly be generated or maintained internally through agentic development. Lorenc anticipates this middle section getting hollowed out as agents become better at handling the "glue" that holds applications together.

This shift does not eliminate supply chain risk; it transforms it. Organizations must now account for incident response across both traditional dependencies and internally generated components. Supply chain security becomes an ongoing organizational capability rather than a one-time procurement decision. It requires the stewardship of widely deployed but lightly maintained components and continuous risk assessment.

The implication for engineering leaders is clear. Supply chain security can no longer be treated as a static concern. As agents reshape what gets built internally versus consumed externally, teams need visibility, governance, and rapid response capabilities that span both traditional open source dependencies and the expanding surface area of AI-generated integration code.

Trustworthy CI/CD lets teams ship at scale

When code volume increases by an order of magnitude overnight, the bottleneck shifts from writing to shipping. AI-enabled development only scales when CI/CD systems provide dependable tests, automated deployment, and clear signals that teams can trust.

Lorenc describes a progression from loose guardrails to tight guide rails, creating a system where repeated checks and constrained pathways lead to eventual determinism for both human and machine-generated changes. He uses the metaphor of bowling with bumpers. A single ball bounces predictably and reaches the pins. But if you throw a hundred balls down the lane simultaneously, chaos ensues unless those bumpers are precisely calibrated.

Lorenc sees this calibration as the future role of engineers, getting those gates rock solid, ensuring all intent is captured, and validating performance so teams can deploy with confidence.

Flaky tests, fragile deployments, and low release confidence become the true bottleneck when parallelized coding agents dramatically increase output. If a team is scared to deploy on a Friday because half the deployments crash, generating 500 times more code does not help. It simply creates 500 times more risk sitting in a queue.

Lorenc recommends prioritizing strong deployment pipelines even in regulated environments that cannot yet fully adopt AI. Those systems become the foundation for later automation. The engineer's role evolves from manually carrying every change across the finish line to encoding intent, quality thresholds, and operational safety into the delivery system itself.
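The idea of encoding intent and quality thresholds into the delivery system can be sketched as a deterministic deployment gate. This is a minimal illustration, not any specific CI product: the check names and thresholds are assumptions chosen for the example.

```python
# Hypothetical deployment gate: every check must pass before any change,
# human- or agent-authored, is promoted. All names and thresholds here
# are illustrative, not taken from a real CI system.

from dataclasses import dataclass

@dataclass
class PipelineResult:
    tests_passed: bool        # full suite green, no flaky retries masking failures
    coverage: float           # fraction of changed lines exercised by tests
    provenance_verified: bool # artifact signed and traceable to its source
    canary_error_rate: float  # error rate observed during a canary rollout

def ready_to_ship(result: PipelineResult,
                  min_coverage: float = 0.80,
                  max_canary_errors: float = 0.01) -> bool:
    """Return True only when every gate is green.

    The point is determinism: a green result means production-ready,
    with no human judgment call left in the loop.
    """
    return (result.tests_passed
            and result.coverage >= min_coverage
            and result.provenance_verified
            and result.canary_error_rate <= max_canary_errors)

# A change that fails any single gate stays in the queue.
risky = PipelineResult(tests_passed=True, coverage=0.65,
                       provenance_verified=True, canary_error_rate=0.0)
print(ready_to_ship(risky))  # False: coverage gate not met
```

With gates like these calibrated tightly, "throwing a hundred balls down the lane" stops being chaotic, because every ball hits the same bumpers.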

For teams with rock-solid pipelines where green checks mean production-ready code, agents become true force multipliers. The code quality matters less than the system's ability to catch problems before they reach users.

Google shows AI can strengthen open source despite the spam

AI is both a burden and an accelerator for open source maintainers. On one hand, low-quality issue and vulnerability spam is increasing. Daniel Stenberg of the cURL project has publicly complained about ChatGPT-generated security reports flooding his vulnerability disclosure process with garbage, making it harder to find real vulnerabilities.

On the other hand, automated analysis and remediation are improving. Google's Big Sleep project recently found genuine zero-day vulnerabilities in open source projects that no other security tools had detected, disclosed them responsibly, and got them fixed before exploitation.

Lorenc predicts a near-term split where some projects reject AI-mediated contribution flows entirely to avoid the noise, while others embrace agents to move faster. Those embracing the technology will likely experiment with automated rebasing, merge conflict resolution, and long-tail support across multiple projects.

Maintenance economics could also improve for projects that are feature-complete but require bursty upkeep. As Lorenc notes, many widely deployed projects are "just plain done." They are stable, tested, and rarely need new features, but they still require stewardship for security incidents and occasional updates. That is exactly the type of work agents can handle across hundreds of projects simultaneously.

Enterprise delays create a widening autonomous advantage

The central asymmetry in autonomous agent security is simple but devastating. Attackers adopt autonomous capabilities immediately, while defenders in large enterprises remain slowed by governance, procurement, and security review.

On an exponential capability curve, even a normal enterprise delay creates a massive practical gap between attacker and defender effectiveness. As Lorenc points out, a two-year lag behind the rest of the industry used to be manageable, but with how fast things are moving now, that delay translates to being two decades behind in capability.

Security teams in regulated industries typically wait six to nine months or longer before adopting new technologies. They let others discover the risks and see what breaks. That conservatism has historically been prudent. But when capabilities are doubling every few months, a six-month lag does not just leave an organization behind, it leaves it defenseless.
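The arithmetic behind that asymmetry is simple exponential growth. Assuming, purely for illustration, that capability doubles every three months (the doubling period is an assumption, not a measured figure), a lag compounds quickly:

```python
# Illustration of the capability gap created by adoption lag, assuming
# capability doubles every 3 months. The doubling period is an
# assumption chosen for arithmetic, not a measured industry figure.

def capability_gap(lag_months: float, doubling_months: float = 3.0) -> float:
    """Ratio of attacker capability to a lagging defender's capability."""
    return 2 ** (lag_months / doubling_months)

print(capability_gap(6))   # 4.0: a 6-month review cycle leaves attackers at 4x
print(capability_gap(24))  # 256.0: a 2-year enterprise lag compounds to 256x
```

Under these assumptions, a six-month review cycle puts attackers at four times the defender's effectiveness, and a two-year lag compounds into a gap no catch-up budget closes linearly.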

Lorenc's recommended response is pragmatic: set up enterprise AI sandbox environments. Organizations should carve out time for developers to play with these tools on isolated laptops that do not have access to production codebases. The goal is not to eliminate governance but to build developer intuition for AI tooling as a critical defensive skill.

Teams need hands-on experience to judge which tasks agents can handle safely and where supervision is still required. Without that intuition, security teams won't know how to evaluate agent capabilities when they finally do adopt them.

Lorenc emphasizes treating agents as useful but fallible operators whose privileges must be tightly constrained. He advises assuming every agent is like a new intern who was just handed a laptop, noting that no one gives an intern root keys to production. The principle is straightforward. If you would not give a new intern access to production credentials, do not give an agent that access either. Agents will make mistakes, so systems must be architected to contain those mistakes before they cascade into incidents.
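The "agent as intern" principle translates directly into deny-by-default scoping. The sketch below is hypothetical (the scope names are invented for the example), but it shows the shape of the constraint: privileged actions are checked against an explicit allow-list, so a mistake is contained rather than cascading.

```python
# Sketch of least-privilege scoping for a coding agent. Scope names are
# hypothetical; the pattern is the point: deny by default, allow-list
# only what an "intern" would safely be given.

ALLOWED_AGENT_SCOPES = {
    "repo:read",      # browse source
    "branch:push",    # push to feature branches only
    "ci:trigger",     # run the pipeline
}

FORBIDDEN_SCOPES = {
    "prod:deploy",    # production deploys stay behind human review
    "secrets:read",   # no production credentials
    "main:push",      # no direct writes to protected branches
}

def authorize(requested_scope: str) -> bool:
    """Deny by default: a scope must be explicitly allow-listed."""
    if requested_scope in FORBIDDEN_SCOPES:
        return False
    return requested_scope in ALLOWED_AGENT_SCOPES

print(authorize("branch:push"))  # True: safe, reviewable work
print(authorize("prod:deploy"))  # False: mistake contained, not cascaded
```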

Encoding trust into the agentic future

For engineering leaders, the takeaway is urgent. The attack-defense asymmetry is happening now. Organizations that delay hands-on AI experimentation in the name of security may find themselves so far behind the curve that no amount of catch-up investment can close the gap. The solution is structured experimentation that builds organizational capability while maintaining security boundaries.

The agentic transformation is not coming; it is here. Engineering leaders who treat AI as a distant concern will find themselves managing teams that cannot ship, dependencies they cannot secure, and threats they cannot defend against. The organizations that thrive won't be those with the fanciest AI tools. They will be those that encoded trust, safety, and intent into systems resilient enough to handle whatever code comes next.

To hear more about the future of software supply chain security, listen to Dan Lorenc discuss these ideas in depth on the Dev Interrupted podcast.

Andrew Zigler

Andrew Zigler is a developer advocate and host of the Dev Interrupted podcast, where engineering leadership meets real-world insight. With a background in Classics from The University of Texas at Austin and early years spent teaching in Japan, he brings a humanistic lens to the tech world. Andrew's work bridges the gap between technical excellence and team wellbeing.
