In this episode of The AI Kubernetes Show, Mike Lieberman, co-founder of Kusari, chatted with host William Morgan about the biggest security challenge introduced by GenAI: non-deterministic agentic software running in production.
Securing open source in cloud native
This blog post was generated by AI from the interview transcript, with some editing.
Open source is the foundation of modern software development. The vast majority of software today relies on open source in some capacity, with approximately 70% of the code coming from open source. This means that securing Kubernetes environments is impossible if you aren't also securing the open source components they rely upon. A great starting point is focusing on what you ingest.
Start with what you ingest
When pulling open source software, whether that's an image from Docker Hub, code from GitHub, or a chart installed via Helm, platform engineers need to think about two key things to mitigate risk.
First, you have to understand what you're bringing in. Think holistically about the image. Figure out if an image is just a Postgres image with only Postgres installed or if it also has tools like Vim and a bunch of other command-line utilities. Every additional tool increases your attack surface. Knowing the contents upfront makes it much easier to quickly assess the impact when a vulnerability pops up.
Second, minimize your attack surface. When you're deploying to production, it's essential to cut down on the amount of open source software included.
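One common way to cut that surface down is a multi-stage container build that ships only the compiled artifact on a minimal base image, leaving the toolchain and extra CLI utilities behind. A sketch, assuming a hypothetical Go service (the image names and paths are illustrative):

```dockerfile
# Build stage: full toolchain, never shipped to production.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /server ./cmd/server

# Runtime stage: a distroless base with no shell, package manager,
# or command-line utilities to widen the attack surface.
FROM gcr.io/distroless/static-debian12
COPY --from=build /server /server
USER nonroot
ENTRYPOINT ["/server"]
```

The final image contains little beyond the binary itself, so when a vulnerability pops up in a tool like Vim, you already know it cannot affect this image.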
Shifting complexity in the Kubernetes world
Containerization and Kubernetes shifted complexity. Kubernetes SREs are now responsible for managing runtimes and the underlying OS. Application developers, on the other hand, focus on a thinner slice of deployment concerns. Consequently, Kubernetes engineers are the ones responsible for securing the platform, and the continuous increase in software development pace (more software, updates, and moving parts) has made getting security right even harder.
Navigating the supply chain security acronym jungle
The term supply chain security can be intimidating, especially when combined with acronyms like SBOMs and attestations. To simplify, think of the discipline as software delivery life cycle (SDLC) security. This focuses on how to safely write, build, push, and distribute code and, recursively, how to safely ingest code from any external source (another team, open source, or a vendor). At its core, supply chain security is about getting the observability you need into the software entering your environment from outside, across the SDLC.
SBOMs: Simplifying communication
A Software Bill of Materials (SBOM) is a standard format (e.g., JSON or XML) for recording and distributing component information. An SBOM gives you a standard way to communicate which packages you use to your consumers, regardless of the language or packaging system you're using. It's a simplification over multiple, often proprietary, software composition analysis reports.
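To make that concrete, here is a minimal Python sketch that reads a CycloneDX-style SBOM and lists its components. The JSON fragment is hand-written for illustration, not output from any real tool:

```python
import json

# A hand-written CycloneDX-style fragment (illustrative, not real tool output).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "postgresql", "version": "16.2"},
    {"type": "application", "name": "vim", "version": "9.1"}
  ]
}
"""

def list_components(raw: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every component in the SBOM."""
    sbom = json.loads(raw)
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]

for name, version in list_components(sbom_json):
    print(f"{name} {version}")
```

Because the format is standardized, the same few lines work whether the SBOM describes a Python app, a Rust binary, or a full container image.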
Attestations: Getting observability
Attestations provide key observability in a few critical areas. First, they help verify the origin of the code, confirming that it's actually coming from an employee or known individual rather than an unauthorized party who may have hacked the system. Second, they ensure you track everything pulled into the build process, so you can go back and review the timeline if something seems off. Finally, attestations are important when publishing artifacts, to confirm that a malicious actor hasn't stolen credentials and published a harmful version of your library upstream.
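The "track everything pulled into the build" idea can be as simple as recording a content digest for every input at build time and re-checking it later. A minimal Python sketch of that principle (real systems use signed attestation formats such as in-toto rather than a bare dictionary; the file names and bytes here are invented):

```python
import hashlib

def digest(data: bytes) -> str:
    """Content address (SHA-256) of a build input."""
    return hashlib.sha256(data).hexdigest()

# At build time: record what was actually pulled into the build.
recorded = {"libfoo-1.2.tar.gz": digest(b"libfoo source bytes")}

# Later: re-check an artifact against the recorded attestation.
def verify(name: str, data: bytes, ledger: dict[str, str]) -> bool:
    """True only if the bytes match what the build originally consumed."""
    return ledger.get(name) == digest(data)

print(verify("libfoo-1.2.tar.gz", b"libfoo source bytes", recorded))  # True
print(verify("libfoo-1.2.tar.gz", b"tampered bytes", recorded))       # False
```

Signing the recorded digests is what turns this ledger into a proper attestation: it binds the record to a known identity, covering the "origin of the code" point as well.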
AI-powered attacks and the need for zero trust
Applying zero trust to supply chain security is the future of general cybersecurity. The rise of AI has significantly escalated the threat landscape, making attacks alarmingly easy. AI agents are now being used to automate malicious pull requests against open source projects, often targeting projects with inadequate security permissions. These pull requests can change a build to download malicious code, and the ability to scale this kind of attack across thousands of projects is becoming simpler every day.
Attacks like self-replicating worms demonstrate the danger inherent in the supply chain. Often, it only takes one compromised package. The next time you run a build, you pull in this compromised package, and then that compromises your build environment.
A common example is a compromised open source package that changes the colors on the terminal. Because such a tool sits on your standard input and output streams and reads everything that passes through them, an attacker can capture sensitive information like passwords or SSH keys without you ever noticing.
The core solution to these attacks lies in zero trust: limiting access. We need to ensure that only people allowed to see the build secrets are approved builders, and the only things that can merge your code are approved actors.
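In practice, that means checking every sensitive action against an explicit allowlist, with deny as the default. A toy Python sketch of the idea (the actor and action names are made up):

```python
# Deny-by-default policy: only the listed actors may perform each action.
POLICY: dict[str, set[str]] = {
    "read_build_secrets": {"ci-builder"},
    "merge_code": {"release-bot", "maintainer-alice"},
}

def is_allowed(actor: str, action: str) -> bool:
    """Zero trust: an unknown action or an unlisted actor is always denied."""
    return actor in POLICY.get(action, set())

print(is_allowed("ci-builder", "read_build_secrets"))    # True
print(is_allowed("intern-laptop", "read_build_secrets")) # False
```

The important property is the default: anything not explicitly granted is refused, so a newly compromised account or automated agent starts with no access at all.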
The non-deterministic security nightmare
The most frightening security risk introduced by AI is non-determinism in agentic software running in production. If an AI agent is given enough power, it can eventually do something harmful: for example, going down a rabbit hole when a shell command is missing and taking unexpected actions such as deleting a database.
The defense is essentially reapplying the same established security techniques to a new avenue. This includes protecting LLM-driven security tools from prompt injection, for example by using tools like Nono, which apply Linux kernel-level controls to limit a process's access and blast radius.
Tools and the future of security observability
The same AI capabilities that scale attacks are also being used for defense, with LLMs getting better at analyzing code to figure out where issues might be. This is where GUAC comes in.
GUAC (Graph for Understanding Artifact Composition) is an open source project co-created by Kusari, Google, and Purdue University, focused on supply chain observability and analysis. GUAC helps teams visualize and query the software supply chain, which is essential for understanding the impact when a deep-chain library compromise affects dozens of applications that need updating. Beyond vulnerabilities, GUAC also monitors for other issues like license risk and end-of-life libraries.
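The impact question GUAC answers can be sketched as a reachability query over a dependency graph: given a compromised package, which applications transitively depend on it? A toy Python version of that query (the graph and package names are invented; GUAC itself exposes this kind of analysis over real ingested supply chain metadata):

```python
from collections import deque

# Invented dependency graph: edges point from a package to its direct dependents.
DEPENDENTS: dict[str, list[str]] = {
    "liblog": ["web-api", "payments"],
    "web-api": ["storefront"],
    "payments": [],
    "storefront": [],
}

def affected_by(package: str, dependents: dict[str, list[str]]) -> list[str]:
    """BFS upward from a compromised package to every transitive dependent."""
    seen: set[str] = set()
    queue = deque([package])
    while queue:
        current = queue.popleft()
        for dep in dependents.get(current, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)

print(affected_by("liblog", DEPENDENTS))  # ['payments', 'storefront', 'web-api']
```

Even in this tiny example, a compromise two hops down the chain ("liblog") reaches an application ("storefront") that never imports it directly, which is exactly why graph-level visibility matters.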
Kusari also offers a tool called Inspector, which is free for CNCF projects. Inspector combines SBOM-related tools, fuzzing, SCA, and LLMs with extra context to provide accurate analysis and remediation steps, which really helps mitigate the security noise problem.
Future prediction: Agents mimicking org structure
As we look toward early 2027, Lieberman predicts a shift away from generic AI toward focused agents that will essentially mirror your organization's structure. Think of this structure as a new kind of management system, one built to ensure checks and balances within your code production.
For example, an agent could be responsible solely for QA and helping to write code. When disagreements arise, that QA agent will have the ability to essentially hit the brakes on a specific task if it detects broken code, much like enforcing a zero trust policy.
A further consideration is that we might start experimenting with systems that are less and less deterministic, constantly updating and changing, sometimes within an hour. The immediate concern here is security, because the less you can reason about a system's state, the easier it becomes for compromises to occur.
FAQ
What is SDLC security?
SDLC security, or Software Delivery Life Cycle security, is a broader term for supply chain security that focuses on safely writing, building, publishing, and distributing code, as well as recursively and safely ingesting code from any external source.
What information does an SBOM provide?
A Software Bill of Materials (SBOM) provides application component information in a unified format (like JSON or XML). For example, it can tell you that a component includes Python, Rust, and OS-level packages and which packages they are.
What is GUAC used for in supply chain security?
GUAC (Graph for Understanding Artifact Composition) is an observability tool that helps visualize and query the software supply chain to understand the impact of compromised libraries, track license risk, and identify end-of-life packages.
What is the primary security concern with non-deterministic AI agents in production?
Non-deterministic behavior can be dangerous because an agent, given enough latitude, might follow an unpredictable chain of thought if a command fails, potentially leading to destructive actions like deleting a database.