
AI: Bubble or Bug? A CTO’s Perspective on Engineering in the AI Era



In this episode of The AI Kubernetes Show, we spoke with Dinesh Majrekar, CTO of Civo, about how the current wave of AI is reshaping platform engineering and software development. We covered the state of the AI hype cycle, the critical need for security and data privacy, and a shift in focus from development speed to code quality when integrating AI tools into the engineering workflow.

Is there an AI bubble?

This blog post was generated by AI from the interview transcript, with some editing.

Is the current AI boom a bubble, or is it the start of something massive? According to Majrekar, it might be a bit of both. While acknowledging there's a certain "bubble" feeling, he suggested it could also be the relentless march of innovation, kicking off an entirely new technological wave. We might be witnessing something similar to the rise of mainframes and IBM. Back in those early days of computing, IBM essentially held a hardware monopoly. You can see a lot of similarities between that era and what's happening now with Nvidia and their role in the AI landscape.

Security, data sovereignty, and an on-prem public cloud

Whether he is talking to his team or to customers, Majrekar's current focus is always security. Shortly after ChatGPT launched, early adopters' conversations leaked and became accessible on the internet. That is one example of why it is important to remain vigilant when using third-party services and to be cautious about what data you share with them.

Civo addresses the security challenge around large language models by essentially "dogfooding" their own solution. They built an OpenAI-compatible endpoint that runs their internal LLM on their own GPUs. This capability is now available to customers as a managed service on what he calls an on-prem public cloud.

This setup lets customers run the LLM at a small scale while Civo handles all the management, and the data never leaves their building. For security-focused organizations such as hospitals, that can mean simply keeping a GPU in the basement, ensuring all data remains within the facility.
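Civo's actual API surface isn't detailed in the episode, but the appeal of an OpenAI-compatible endpoint is that existing client code only needs a different base URL. Here is a minimal sketch, using only the Python standard library, of building a chat-completion request against a self-hosted endpoint; the `llm.internal.example` host, API key, and model name are placeholders, not real Civo values:

```python
import json
from urllib import request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> request.Request:
    """Build a chat-completion request for any OpenAI-compatible endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        url=f"{base_url.rstrip('/')}/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The same client code that targets api.openai.com can point at the GPU in the basement:
req = build_chat_request("https://llm.internal.example", "sk-local", "my-model", "Hello")
# request.urlopen(req)  # sent only inside your own network, so data never leaves it
```

Because the request shape matches OpenAI's chat completions API, swapping between a public provider and the on-prem deployment is a configuration change rather than a code change.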

Open source model capabilities on par with proprietary ones

Civo is leaning on open source models that have effectively closed the gap with their proprietary counterparts. Frontier model capabilities have plateaued over the last year, and open source models are now on par with the proprietary options. For example, the Kimi K2 model is one to watch; according to Majrekar, it is currently outperforming the latest models from Anthropic and OpenAI.

AI in software development: Bug or partner?

AI is shaking up core engineering practices, particularly the dilemma of non-deterministic outputs in a field fundamentally built on determinism. In processes that demand a specific, algorithmic answer—think applying a credit for an SLA breach—introducing an LLM can really complicate things. Getting an unexpected output when you feed in an input? In engineering, we just call that a bug. This leads to an important realization: AI isn't some kind of universal solution. Trying to force AI into use cases where it doesn't fit is essentially asking for trouble.
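The SLA-credit case is exactly where a plain deterministic function beats an LLM: the same input must always produce the same credit, every time. A minimal sketch with hypothetical credit tiers (the thresholds and rates are illustrative, not any real SLA):

```python
def sla_credit(uptime_pct: float, monthly_fee: float) -> float:
    """Deterministic SLA credit: identical input always yields identical output."""
    # Hypothetical credit tiers, for illustration only.
    if uptime_pct >= 99.9:
        rate = 0.0    # SLA met, no credit
    elif uptime_pct >= 99.0:
        rate = 0.10   # minor breach
    else:
        rate = 0.25   # major breach
    return round(monthly_fee * rate, 2)

print(sla_credit(99.5, 200.0))  # always 20.0, never a surprise
```

An LLM asked the same question could phrase, round, or even compute the answer differently on each call; in a billing path, that variance is a bug by definition.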

The best AI use case: Code generation 

Code generation may be one of the stronger use cases for AI in development, but it's important to see it as a partnership, not a full delegation. You don't have to write all the code; it can be generated, as long as you're infusing your own “flavor” into the process.

The primary rule for developers remains ownership: no matter how the code is generated, you own it, and the responsibility for it is yours. The quality of the output ties directly back to the context you provide as input. When you prompt an LLM to write a function, for instance, it will follow the example you set: clear documentation, descriptive function names, and concise methods. This act of "setting the context" is key to getting better results.
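Setting the context can be as simple as stating project conventions in the prompt itself. Here is a hedged sketch of that idea; the helper name and the convention list are illustrative, not anything Civo described:

```python
def build_codegen_prompt(task: str, conventions: list[str]) -> str:
    """Assemble a code-generation prompt that states project context up front."""
    context = "\n".join(f"- {rule}" for rule in conventions)
    return (
        "You are contributing to an existing codebase. Follow these conventions:\n"
        f"{context}\n\n"
        f"Task: {task}\n"
        "Return only the code, with a docstring explaining the behavior."
    )

prompt = build_codegen_prompt(
    "Write a function that validates an email address",
    [
        "Use type hints on every function",
        "Keep functions under 20 lines",
        "Document public functions with docstrings",
    ],
)
```

The conventions travel with every request, so the generated code starts from your house style instead of the model's defaults.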

Quality over velocity

The ongoing debate in the tech world about whether AI significantly boosts code velocity has an interesting counterpoint. For Majrekar, the big takeaway isn't speed; it's quality.

Code velocity should hold steady, even with the integration of new AI tools. The real benefit comes from repurposing the time saved on initial code generation. That extra time can be invested in tackling technical debt and strengthening the code base, which is important for managing risk. After all, deployment velocity is ultimately limited by the acceptable risk of bugs and failed deployments.

Here's an example from the Civo team: an engineer used AI for a simple one-line change in Chart.js. Instead of moving straight to the next task, they used the time saved to make the tests more robust, expanding five unit tests into 50, and added comments describing what had changed and why. This approach ensures that every time an engineer touches the code, they leave the code base in a better state than they found it.
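Turning five tests into fifty is cheapest with table-driven tests, where extra coverage is an extra row rather than a new test function. A sketch using plain Python assertions on a hypothetical `clamp` helper (not Civo's actual code):

```python
def clamp(value: float, lo: float, hi: float) -> float:
    """Clamp value into [lo, hi] -- the kind of one-line helper AI might edit."""
    return max(lo, min(hi, value))

# Table-driven cases: (value, lo, hi, expected). Growing coverage = adding rows.
CASES = [
    (5, 0, 10, 5),     # inside the range
    (-1, 0, 10, 0),    # below the lower bound
    (11, 0, 10, 10),   # above the upper bound
    (0, 0, 10, 0),     # exactly at the lower bound
    (10, 0, 10, 10),   # exactly at the upper bound
]

for value, lo, hi, expected in CASES:
    result = clamp(value, lo, hi)
    assert result == expected, f"clamp({value}, {lo}, {hi}) -> {result}, expected {expected}"
```

Each row documents an edge case, so the table doubles as the "what changed and why" commentary the Civo engineer added.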

Stay in touch with Dinesh

To connect with Dinesh Majrekar or learn more about Civo, you can reach out via LinkedIn or the Civo Community Slack channel. 

FAQ

Is the tech industry in an "AI bubble"?  

The current AI boom is likely a mix of bubble and the start of a new technology wave. Think back to the mainframe era, when IBM held a hardware monopoly; Nvidia plays a similar role in today's AI landscape.

How can you maintain data sovereignty with Large Language Models (LLMs)?

Cloud providers like Civo can expose an OpenAI-compatible endpoint backed by an LLM running on their own GPUs, offered as a managed service on an "on-prem public cloud" so that data never leaves your building.

How do open source LLM models compare to proprietary ones?

Over the last year, open source LLMs have largely closed the gap with proprietary models. The Kimi K2 model, for example, is currently beating the latest Anthropic and OpenAI models while using fewer resources.

Code velocity or quality: What's the real benefit of AI in software development?

Ultimately, the real benefit will be a gain in code quality. The time saved on initial code generation should be repurposed for tackling technical debt and strengthening the code base, which is important for managing deployment risk.
