MBI Daily Dose (July 15, 2025)
Companies or topics mentioned in today's Daily Dose: the future of enterprise software, some cold water on AI productivity, AWS vs. GCP
A Programming Note: A number of you have asked whether the Daily Dose might be a short-term experiment. Given the positive feedback I have received so far, rest assured it’s here to stay, and so are the monthly Deep Dives.
Starting August 2025, Daily Doses will be behind a paywall every other day. Monthly Deep Dives will, of course, all be paywalled.
To sustain the growing breadth and depth of my research, subscription rates for new subscribers will rise on August 1, 2025 to $30/month or $250/year. Anyone who joins on or before July 31, 2025 will keep today’s pricing of $20/month or $200/year.
Thank you for reading and for your continued support!
Enterprise software is going through yet another transition.
The center of gravity inside enterprises is likely shifting from human-driven SaaS front-ends to AI-native “control planes” where agents act directly on unified data. For incumbents that do not refactor their stacks, Karpathy’s “Software 3.0” warning looms large: LLMs are becoming the users and prompts are the API. As LLMs become the main “users,” click-heavy GUIs and hand-coded logic flip from assets to liabilities; winning teams will expose clean APIs, markdown docs, and declarative flows that an agent can read and execute. Karpathy even speculates about a mass rewrite in which legacy code is deleted or wrapped by governance controls that expose an “autonomy slider,” letting managers dial AI from a single text completion to a full repo overhaul.
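The idea of exposing surfaces that an agent reads and executes is abstract, so here is a minimal sketch of what it can look like in practice. Everything in it (the `refund_order` tool, its schema shape, the dispatcher) is hypothetical and invented purely for illustration: the vendor publishes a machine-readable tool spec, and the agent emits a structured call instead of clicking through a GUI.

```python
import json

# Hypothetical vendor tool spec (names and schema invented for illustration):
# instead of a click-heavy GUI, the capability is described in a form an
# LLM agent can read and invoke directly.
TOOL_SPEC = {
    "name": "refund_order",
    "description": "Refund an order, fully or partially.",
    "parameters": {
        "order_id": {"type": "string", "required": True},
        "amount_usd": {"type": "number", "required": False},
    },
}

def refund_order(order_id, amount_usd=0.0):
    """Stub backend action; a real system would call billing infrastructure."""
    return {"order_id": order_id, "refunded_usd": amount_usd, "status": "ok"}

def dispatch(tool_call):
    """Execute the structured call an agent emits after reading TOOL_SPEC."""
    assert tool_call["name"] == TOOL_SPEC["name"], "unknown tool"
    return refund_order(**tool_call["arguments"])

# The agent's "click" is just structured text:
result = dispatch({"name": "refund_order",
                   "arguments": {"order_id": "A-123", "amount_usd": 25.0}})
print(json.dumps(result))
```

The point is not the specific schema but the inversion: the GUI disappears from the critical path, and the spec plus the endpoint become the product surface.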
Scuttleblurb wrote a thoughtful piece yesterday where he strikes a balance between existential questions and more prosaic concerns about AI’s impact on enterprise software. From Scuttleblurb’s post:
The obligatory question of the 2010s – “What is your data strategy?” – gradually bled into “What is your AI strategy?”. The explosive rise of LLMs and AI agents collapsed the distinction. Given the relentless copy-catting that has long characterized the space, it should come as no surprise that literally everyone has the same pitch for why they are uniquely advantaged to win in AI: we have the cleanest data, the most curated data, the most contextualized data for agents to consume.
AI is an intelligence layer that augments a vendor’s incumbent strengths. Its benefits, quantifiable in terms of time savings and improved match rates, can be priced for just like any other product. But it is not, in itself, a decisive wedge. Enterprise buyers won’t pick a vendor because of its AI or defect because their current vendor is charging a bit more for it than another. AI lifts ARPU but it doesn’t improve win rates.
“AI lifts ARPU but it doesn’t improve win rates”….indeed, you know what else it may not improve if everyone has basically the same pitch? Margins.
If these transitions aren’t headache enough, SOTA model developers, in their pursuit of AGI, may decide to build end-to-end software stacks that truly cater to their own contexts. These risks are becoming more tangible over time. This isn’t quite confirmed yet, but a recent Guggenheim piece alluded to the possibility that OpenAI may decide to move away from Datadog:
"OpenAI has already completed building and testing the internal observability solution onto which Datadog workloads will move. In fact, a recent video called “Scaling Clickhouse to Petabytes of Logs at OpenAI,” shows members of technical staff at OpenAI discussing an internal observability solution built on Clickhouse, which we believe corroborates our view. Meanwhile, the company may be evaluating other more cost-effective alternatives to other Datadog functionalities. We believe this would be consistent with OpenAI’s approach to using in-house and primarily open source infrastructure software solutions, which enable lower costs together with higher flexibility and security."
If OpenAI can pull it off, why not Anthropic and the other SOTA model developers? Of course, Datadog is just an example, and this isn’t really only a question of model developers vs. incumbent software players. Given the size of each of the SOTA model developers, they would all like to productize their R&D efforts as much as possible and eventually externalize many of the tools and software they use internally. And the models themselves can be further ammunition for arming their own users to create more point solutions.
I don’t want to paint the overall sector with a broad brush; the real world always tends to move slower than narratives and spreadsheets. Software is far from dead, and Jevons paradox is indeed a more likely outcome. Some incumbents will certainly remain unscathed during this transition, but it may not be trivial for many incumbent enterprise software companies.
Few use cases in this new AI world have had as much product-market fit as AI-assisted coding. We have been hearing that “English” is now the most popular programming language, implying how trivial the skillset is becoming even for non-technical users. Companies, of course, are also falling over themselves to mention on their earnings calls what percentage of their code is now being written by AI and how much that is lifting productivity.
Given that context, this super interesting post definitely poured some cold water on that narrative. The whole piece is worth reading, but I will share some key excerpts:
METR performed a rigorous study (blog post, full paper) to measure the productivity gain provided by AI tools for experienced developers working on mature projects. The results surprised everyone: a 19 percent decrease in productivity. Even the study participants themselves were surprised: they estimated that AI had increased their productivity by 20 percent. If you take away just one thing from this study, it should probably be this: when people report that AI has accelerated their work, they might be wrong!
The paper states that “developers were instructed to use AI to whatever degree they thought would make them most productive”. However, some subjects seem to have gotten carried away, and this may have contributed to the observed slowdown.
Based on exit interviews and analysis of screen recordings, the study authors identified several key sources of reduced productivity. The biggest issue is that the code generated by AI tools was generally not up to the high standards of these open-source projects. Developers spent substantial amounts of time reviewing the AI’s output, which often led to multiple rounds of prompting the AI, waiting for it to generate code, reviewing the code, discarding it as fatally flawed, and prompting the AI again. (The paper notes that only 39% of code generations from Cursor were accepted; bear in mind that developers might have to rework even code that they “accept”.) In many cases, the developers would eventually throw up their hands and write the code themselves.
Several aspects of the study play to the weaknesses of current tools. First, it was conducted on mature projects with extensive codebases. The average project in the study is over 10 years old and contains over 1 million lines of code – the opposite of “greenfield”. Carrying out a task may require understanding large portions of the codebase, something that current AI tools struggle with.

In addition to "Daily Dose" posts like this, MBI Deep Dives publishes one Deep Dive on a publicly listed company every month. You can find all 60 Deep Dives here. I would greatly appreciate it if you shared MBI content with anyone who might find it useful!
It’s not just Azure. The Information yesterday published a piece indicating AWS may be ceding some share to GCP thanks to GCP’s superior offerings in AI:
“Earlier this year, when The Browser Company was looking for a cloud provider to power the artificial intelligence features in a web browser it was developing, the startup’s leaders asked Amazon Web Services if it could handle the work.
But as the two sides neared an agreement, The Browser Company found that Dia’s AI features—such as analyzing text and images on webpages in less than a second—ran faster and more cheaply on Google Cloud, where they were powered by Gemini, an AI model Google developed in-house.
Startups are important to cloud providers because they can quickly become meaningful customers. Snowflake and Pinterest launched their companies on AWS, and each now spends at least $500 million a year on its cloud servers and other products, according to The Information’s Cloud Database.”
It’s not all doom and gloom for AWS; the same piece mentioned some key startups that use AWS almost exclusively and others that take a multi-cloud approach.

Here’s the thing, though. I have been hearing about GCP potentially enjoying a superior hand in AI ever since ChatGPT came to dominate public consciousness. The narrative makes sense, but it’s not quite there in the numbers…yet! I have mentioned before how Google Cloud (Google doesn’t disclose GCP-only numbers) tends to mimic AWS revenue 16 quarters apart. This somewhat stupid rule of thumb has shown remarkable consistency, with Google Cloud consistently coming in at ~90-100% of AWS revenue from 16 quarters earlier. It is a bit of a surprise that the lowest number in this series was 1Q’25, just when Google Cloud was supposed to be capitalizing on AWS weakness! Perhaps we are on the cusp of Google Cloud inflecting materially; we’ll see.
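For concreteness, the rule of thumb boils down to a single ratio. The revenue figures below are made-up placeholders purely to illustrate the calculation, not actual disclosures; real reported numbers would go in their place.

```python
# Illustrative sketch of the "16 quarters apart" rule of thumb.
# Revenue figures are hypothetical placeholders, not actual disclosures.
aws_revenue = {"1Q21": 13.5}      # AWS quarterly revenue in $B (made up)
gcloud_revenue = {"1Q25": 12.3}   # Google Cloud revenue in $B, 16 quarters later (made up)

def lag_ratio(gcloud_q, aws_q_16_quarters_earlier):
    """Google Cloud revenue as a share of AWS revenue 16 quarters earlier."""
    return gcloud_q / aws_q_16_quarters_earlier

ratio = lag_ratio(gcloud_revenue["1Q25"], aws_revenue["1Q21"])
print(f"Google Cloud at {ratio:.0%} of AWS revenue 16 quarters prior")
# The rule of thumb held while this ratio stayed roughly in the 90-100% band.
```

A reading below that band, as in 1Q’25, is what would flag the rule breaking down.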

Current Portfolio:
Please note that these are NOT my recommendations to buy/sell these securities, but just disclosure from my end so that you can assess potential biases I may have because of my personal portfolio holdings. Always consider my write-ups my personal investing journal, and never forget that my objectives, risk tolerance, and constraints may bear no resemblance to yours.