In this episode of The AI Kubernetes Show, we chat with Ramiro Berrelleza, CEO of Okteto, about how the rise of artificial intelligence is fundamentally changing the art and science of platform engineering.
This blog post was generated by AI from the interview transcript, with some editing.
AI is no longer a niche tool; its adoption is a universal trend across the tech industry. Berrelleza highlighted that virtually 100% of Okteto's customers are already adopting AI, from early-stage startups to institutions in government and education. Developers are moving past the question of whether they should use tools like Cursor, Copilot, Claude Code, and ChatGPT, and on to how they can use them to get better at what they do.
AI code generation is still new, and choosing a generation tool is a dynamic process. It's far too early to commit to a single favorite tool, so developers must be willing to try a lot of different things. The current AI tooling space looks a lot like the Cambrian explosion many developers saw early in their careers, when they rapidly switched between five or six programming languages (JavaScript, Python, Bash, and so on) before things started to settle down.
The move from single-threaded AI tools to a model powered by numerous asynchronous agents is inevitable. For Berrelleza, "it's obvious that the future of software development isn't about using one or two agents. It's about building fleets of them."
A major productivity bottleneck in single-threaded AI chat interfaces is the wait time for a response; waiting for an AI to answer feels a lot like the old pain of waiting for code to compile. The alternative to single-threading is the agent fleet approach: have a fleet of agents running somewhere asynchronously, give each one a task, and let your role shift toward managing that fleet. A system like this can handle different streams of work simultaneously.
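To make the fan-out idea concrete, here is a minimal sketch of the pattern in Python. The run_agent coroutine and the task strings are hypothetical placeholders, not a real SDK; in a real fleet each task would be handed to a remote coding agent rather than a local coroutine.

```python
import asyncio

# Hypothetical agent runner: the function name and behavior are illustrative
# assumptions. In practice this would dispatch the task to a remote coding
# agent and wait for it to open a pull request.
async def run_agent(task: str) -> str:
    await asyncio.sleep(1)  # stands in for a long-running agent session
    return f"PR opened for: {task}"

async def run_fleet(backlog: list[str]) -> list[str]:
    # Fan the backlog out so agents work concurrently; the "manager" only
    # looks at the results once they come back.
    return await asyncio.gather(*(run_agent(task) for task in backlog))

if __name__ == "__main__":
    backlog = ["fix flaky login test", "bump base image", "add retry to webhook"]
    for outcome in asyncio.run(run_fleet(backlog)):
        print(outcome)
```

The point of the pattern is that the human stops being the bottleneck on each individual task and instead reviews results as they arrive.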
This agentic development model is already proving to be a game-changer. One early adopter leaned heavily into the approach: with a single person driving it, the company had 150 agents running within 24 hours. They pulled 20 to 30 items from their backlog, fed them to the agents, and ended up with a stack of pull requests in less than a day.
The massive increase in output from AI agents brings a new set of organizational challenges, sometimes even leading to regret about adopting AI projects. But the underlying problem is usually a fundamental misunderstanding of what to measure and what to optimize.
Measuring productivity based on metrics like lines of code or the number of pull requests is essentially measuring the wrong thing. The rise of agent-produced code means we're facing a new bottleneck. If agents are producing ten times the PRs, our developers are now spending their time reviewing all that code, waiting for CI workflows to complete, or dealing with unfamiliar code that is somehow running in production.
Organizations need to figure out their real constraints and adjust their AI investment strategy to match. For instance, if your bottleneck is your CI workflows, your AI budget should focus on optimizing that area; you could use AI to inspect logs and figure out why a test failed. Phase one of adopting AI is "lift and shift," where existing processes are simply moved over. In the next phase, you move beyond that and start building AI-native workflows that help the organization without accidentally creating new bottlenecks.
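Picking up the CI example, here is a minimal sketch of handing a failed job's log to Claude and asking for a likely root cause. The log path, prompt, and model alias are assumptions for illustration; the Anthropic Python SDK call itself is standard.

```python
import pathlib
import anthropic

def explain_test_failure(log_path: str) -> str:
    # Keep only the tail of the log; CI logs can be very large.
    log_text = pathlib.Path(log_path).read_text()[-20_000:]
    client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed alias; use whatever model you have access to
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "This CI job failed. Explain the most likely root cause "
                       f"and suggest a fix:\n\n{log_text}",
        }],
    )
    return reply.content[0].text

if __name__ == "__main__":
    print(explain_test_failure("ci/failed-job.log"))  # hypothetical log location
```

A script like this could run as a step in the pipeline itself, so the triage is already waiting when a developer opens the failed job.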
The rise of AI-generated code is hitting open source maintainers disproportionately hard. An influx of AI-generated pull requests is leading to serious maintainer overload. Since maintainers can't realistically fight the adoption of AI, a smart move is to set clear policies for the community regarding AI-enhanced contributions.
A sensible approach is to ask contributors to ensure their code is "human-proof." Sending AI-enhanced PRs is fine, but contributors must understand the code: you shouldn't take output straight from an AI tool and push it to a PR without reviewing it yourself.
AI agents aren't just for generating code; they are a powerful tool for boosting technical understanding across the organization, benefiting everyone from sales engineers to the CEO. Tools like Claude, for example, can analyze a codebase or a set of PRs and explain what is happening on request, which lets technical folks who don't code on a daily basis quickly get the context they need.
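In the same spirit, a small sketch like the one below turns a branch diff into a plain-language summary. The branch names, prompt, and model alias are illustrative assumptions; the git invocation and the Anthropic SDK call are standard.

```python
import subprocess
import anthropic

def explain_changes(base: str = "main", head: str = "HEAD") -> str:
    # Collect the diff between the base branch and the work in progress.
    diff = subprocess.run(
        ["git", "diff", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    ).stdout
    client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed alias
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Explain what this change does and why it matters, for a "
                       f"reader who doesn't code every day:\n\n{diff}",
        }],
    )
    return reply.content[0].text

if __name__ == "__main__":
    print(explain_changes())
```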
Agents can also be incredibly useful for onboarding. They can summarize complex information into a document for new maintainers or employees, essentially becoming a dynamic part of the new-hire process. This shortens the time it takes for new contributors to get up to speed in a complex codebase.
If you'd like to connect with Ramiro Berrelleza or learn more about Okteto and their agents, here is the information he shared: Ramiro is pretty active on both Twitter and GitHub; you can find him under his name almost everywhere. Visit Okteto.com to learn more about their products. They recently launched a free tier for their agents, allowing you to try out the agent fleet concept firsthand. Finally, Okteto has an active community and an open source project, and they are always looking for more people to help maintain, discuss, test, and share ideas.
Software development is moving beyond single-threaded AI tools to a model powered by agent fleets. This approach solves the productivity bottleneck of waiting for a single AI response by allowing the system to handle different streams of work simultaneously, similar to a manager overseeing a team.
How do you measure developer productivity in the age of AI?
Traditional metrics like lines of code or the number of PRs are no longer effective and often lead to new bottlenecks, as developers spend more time reviewing agent-produced code. Organizations should focus on identifying their real constraints, such as slow CI workflows.
The influx of PRs driven by AI-generated code is causing maintainer overload. A sensible policy is to require human-proof code, where contributors are welcome to use AI-generated code but must understand it.
AI agents can act as a context multiplier, analyzing complex codebases and PRs to explain what’s happening. This allows technical staff, even those not coding daily, to easily understand the context. Agents can also summarize complex information, speeding up onboarding.