In this episode of The AI Kubernetes Show, we talked with Grant Miller, CEO of Replicated and creator of Enterprise Ready, about the intersection of AI and platform engineering and how the pace of innovation is reshaping the software development industry.
This blog post was generated by AI from the interview transcript, with some editing.
A strong proponent of AI, Miller said he's a "huge fan and all in on AI." He sees AI as a fundamental engine of acceleration, one that makes engineering velocity the core competitive advantage. The inability to move fast can be fatal for an engineering organization, and the recent pace of change has raised those stakes by an order of magnitude or two, making high velocity critical for successful execution.
Miller, who places himself firmly in "camp acceleration," shared a personal example of this mindset: using AI tools, he was able to build a SCIM integration in just two or three days. That rapid turnaround on a feature that might otherwise have been deprioritized illustrates how AI tooling enables quick responses to customer needs.
To grapple with the speed of change and the new challenges AI presents, Miller asked every leader at the company to contribute at least one pull request built with AI. Everything has changed, so managers who haven't pushed code to production in a while need to get that experience firsthand and understand what it's like to work with these new tools. The practice builds leadership empathy for the engineers doing the work every day, which Miller considers just as important as customer empathy.
While AI speeds up feature development, it also piles extra pressure on the platform and the whole organization. One effect is "Frankenstein-y" application footprints. AI code generation isn't yet advanced enough to re-architect or re-platform a system or add new components gracefully, so engineers end up bolting new features onto what's already there. The result is a hodgepodge of components that makes troubleshooting far harder.
This increased velocity also translates to an increased surface area for the platform team. They're covering a lot more ground now, which means troubleshooting is more complex and demands new dashboards that can provide immediate feedback on the state of things.
To sustain high velocity under pressure, platform teams need to nail both technology and developer experience. Observability is clearly important for maintaining visibility into the application stack. CI/CD pipelines also consume a lot of time: teams are constantly fixing tests, adjusting how integration tests run, dealing with flaky end-to-end tests, and figuring out where that flakiness comes from. Miller's team thinks in terms of “total iteration speed”: how long it takes for a customer request to come in and become a product change, a mix of team capacity and cycle time.
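One common way to localize that flakiness is brute-force repetition: rerun a suspect test in isolation and see whether failures are intermittent. Here's a minimal sketch of that loop in Python (the pytest command and test path are hypothetical, not something from the interview):

```python
import subprocess

def flakiness_rate(test_cmd: list[str], runs: int = 20) -> float:
    """Rerun a single test command and return the fraction of failed runs."""
    failures = 0
    for _ in range(runs):
        result = subprocess.run(test_cmd, capture_output=True)
        if result.returncode != 0:
            failures += 1
    return failures / runs

if __name__ == "__main__":
    # Hypothetical suspect end-to-end test, run in isolation via pytest.
    rate = flakiness_rate(["pytest", "tests/e2e/test_checkout.py", "-q"])
    print(f"Observed failure rate: {rate:.0%}")
```

A failure rate strictly between 0% and 100% is the signature of flakiness; a consistent pass or fail points instead at a real regression or a genuinely broken test.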
AI tooling expands your internal contributor pool, from SEs and PMs to CEOs, which calls for a stronger focus on developer experience. For that to work, you need low entry barriers: ideally, your organization operates like an open source project, with standardized processes and detailed documentation. Build a developer experience good enough for the executive team, and every engineer gets a VIP experience.
The move toward AI has also introduced new challenges in distributing applications to customers. Demand for self-hosted AI applications, whether in an on-prem environment or under self-managed customer control, runs far higher than for traditional SaaS, primarily because these applications are so data-intensive.
Compounding this is the complexity of distribution itself. There's a notable lack of standardization around how to distribute large models that require frequent updates, and companies are now often shipping infrastructure as code, such as Terraform or CloudFormation, alongside the application itself.
This creates a difficult validation loop. AI tools are great at generating visual code, like a working front end or app, because humans are highly visual and can quickly validate the output with their eyes. Templated infrastructure-as-code manifests offer no such loop: nobody can look at an entire Helm chart and instantly say, “That looks good.” And no clear process exists today for handling such a high matrix of complexity.
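Parts of that loop can still be mechanized today, even if no tool replaces human judgment. Here's a minimal sketch of a pre-merge check for a generated chart in Python, assuming helm and kubectl are on the PATH and a cluster is reachable for the server-side dry run (the chart path and release name are hypothetical):

```python
import subprocess

CHART = "./charts/my-app"  # hypothetical path to the generated chart

# Static checks: helm lint catches malformed templates and chart metadata.
subprocess.run(["helm", "lint", CHART], check=True)

# Render the templates into concrete manifests without installing anything.
rendered = subprocess.run(
    ["helm", "template", "my-release", CHART],
    capture_output=True, text=True, check=True,
).stdout

# Server-side dry run: the Kubernetes API server validates the manifests
# against its schemas, but nothing is actually applied to the cluster.
subprocess.run(
    ["kubectl", "apply", "--dry-run=server", "-f", "-"],
    input=rendered, text=True, check=True,
)

print("Chart lints, renders, and passes server-side validation.")
```

Checks like these confirm that a chart renders and is schema-valid, but they say nothing about whether the rendered resources are the right ones; that semantic gap is exactly the validation loop that's still missing.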
The biggest challenge we still face with AI-generated code is the complexity of testing and validation. If we could solve that, it would unlock much more advanced AI guidance and verification for the output. To contribute to this space, Replicated even open sourced ChartSmith.ai, a project aimed at helping people develop Helm charts using AI.
Miller believes that the disruption from AI will actually create even more opportunity. That's the beauty of the unsolved problem: we don't even know what most of the new problems will be yet. However, testing and validation are clearly going to be key areas for new discoveries and jobs in the near future.
To follow Grant Miller's work and connect with him, you can find him and Replicated at their company website, replicated.com, and his Enterprise Ready initiative at enterpriseready.io. You can also check out the Enterprise Ready Podcast and follow him on Twitter.
AI is an acceleration engine, potentially turning engineering velocity into a core competitive advantage. The pace of change has increased significantly, making high velocity critical for successful execution in engineering organizations.
Platform teams need to focus on nailing both technology and developer experience. Key areas for improvement include maintaining visibility through top-notch observability and optimizing CI/CD pipelines to keep tests healthy and improve total "iteration speed."
Unlike visual code (like a front-end app) that a human can quickly validate with their eyes, templated infrastructure-as-code manifests (like Helm charts or Terraform) lack an intuitive validation loop.
The biggest challenge is the complexity of testing and validation. Solving this would unlock much more advanced AI guidance and verification for the output of code generation tools.