In this episode of The AI Kubernetes Show, we talked with Ahmed Bebars, Principal Platform Engineer, CNCF Ambassador, and Board Member of the CNCF End User Technical Advisory Board (TAB), about how the proliferation of AI tooling and changes in AI processes are changing the world of platform engineering.
This blog post was generated by AI from the interview transcript, with some editing.
While AI certainly accelerates shipping code, it's important that it doesn't disrupt established processes. Bebars views AI as a powerful productivity tool that helps teams ship more, faster, but users must maintain the integrity of their existing software development lifecycle (SDLC) process. The ultimate aim is to boost velocity without removing the well-established DevOps processes honed over the last decade or two. This means that rigorous testing, including integration and regression testing, must remain firmly in place.
Shipping code faster is about more than just coding; it's really about the surrounding ecosystem. AI is improving the entire development lifecycle, accelerating supporting tasks like documentation, test iteration, integration and regression testing, observability, and user feedback analysis.
AI is also being used in monitoring to analyze platform metrics and dashboards, with correlation happening much earlier in the process. Ultimately, this speeds up the process of releasing and rolling back code.
A frequent concern about AI is that it will lead to reduced quality, but the quality of the LLM's output is directly tied to the quality and context of the input you provide. Context matters immensely. If a codebase is well-written, well-documented, and well-organized, the resulting AI output will be very high-quality, similar to what a human would write.
This leads directly to a concept Bebars strongly favors: spec-driven development. In this approach, users tell the AI exactly what they want, and humans review and guide the entire process.
The non-deterministic nature of LLMs requires a significant mindset shift for engineers who typically prefer deterministic outputs. Even though an LLM's output is non-deterministic, tooling can be used to make the result predictable. You can build scripts that run on top of the LLM's output to make it more deterministic.
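One way to picture "scripts that run on top of the LLM's output" is a validation-and-retry wrapper. The sketch below is a minimal, hypothetical example (the `llm` callable, the `REQUIRED_KEYS` schema, and `get_structured_output` are all illustrative, not from the interview): it asks the model for JSON, checks that the reply parses and contains the expected fields, and retries until the shape is predictable for downstream tooling.

```python
import json
from typing import Callable

# Hypothetical schema: the fields downstream automation expects to find.
REQUIRED_KEYS = {"summary", "severity", "action"}

def get_structured_output(llm: Callable[[str], str],
                          prompt: str,
                          max_retries: int = 3) -> dict:
    """Call an LLM and validate its output, retrying until it conforms.

    The model's raw text is non-deterministic, but by rejecting anything
    that is not valid JSON with the required keys, the *result* handed to
    the rest of the pipeline has a deterministic shape.
    """
    for _ in range(max_retries):
        raw = llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: ask again
        if REQUIRED_KEYS <= data.keys():
            return data  # predictable structure achieved
    raise ValueError(f"No conforming output after {max_retries} attempts")
```

The same pattern generalizes: the validation step could be a JSON Schema check, a linter, or a test suite run against generated code, with the retry loop as the deterministic guardrail around a non-deterministic generator.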
The bigger shift is designing a system to handle unpredictable user intent, where the inputs themselves will be non-deterministic. This approach is incredibly powerful for creative platforms and knowledge bases, as it offers a much more personalized user experience.
For platform engineering leaders, SREs, and those managing Kubernetes clusters, Bebars advises embracing and integrating this new technology. Start small: test it out, experiment, and see what becomes possible next. A key move is also to prepare your organization by focusing on local knowledge, which means finding solutions to host your own LLMs and using the right tooling to build agentic workflows for specific use cases.
When it comes to incident response, AI is proving valuable for triage and data gathering. It helps quickly correlate data, which is great for giving an engineer a solid starting point when a cluster has a problem.
If you would like to connect with Ahmed Bebars, the best way to find him is over LinkedIn or in person at events like KubeCon. He is always happy to talk about open source, technology, and specifically, his passion for scuba diving.
AI tooling allows engineers to ship more code faster, increasing velocity, provided that well-established software development lifecycle (SDLC) processes, testing, and reviews are kept intact.
The core DevOps processes must be maintained. AI is a productivity tool, but it's essential to keep the same testing in place, including integration and regression testing.
A strategy that is gaining interest is "spec-driven development," where the user is highly specific with prompts to guide the LLM's output.
Software engineers are generally drawn to deterministic outputs, while LLMs produce outputs that are inherently non-deterministic. The challenge is adopting a new mindset and building tooling to make the non-deterministic output more deterministic.