Notes from MS Conference: Meta and Microsoft

Morgan Stanley (MS) held its annual TMT conference last week. Let me share some notes from Meta's and Microsoft's sessions today.

Meta Platforms

Susan Li, CFO of Meta, attended the MS conference. As you can imagine, there were plenty of questions around the ROIC on their capex. It turns out it's not just investors and analysts like us; even the CFO is surprised at Meta's continued ability to find ways to improve its ad performance. From the session:

“…we have an internal metric called IREV, which is basically how we measure the performance of ads. And here’s the list and here’s what they add up to. And it is, I think, one of the maybe modern wonders of the world that we have continued to generate basically half after half a list of improvements that continue to generate IREV gains every half and those continue to compound on each other…that’s true on the organic side, too. And I would say the core business is very healthy.”

As I have argued before, we may be underestimating how much AI is reshaping Meta's ad infrastructure. Meta's monetization was always strong, especially relative to most of its sub-scaled counterparts, but AI is likely widening the gap between Meta and the sub-scaled players' (current and future) ability to match its monetization. And it's not just monetization: much of the work on improving ad relevance carries over to improving the ranking of organic content as well. From Susan Li:

“…we have a healthy pipeline of work ahead of us to basically to continue making the content more relevant through a couple of things. One is just scaling up the amount of data we can use that lets us increase sort of the history of content interactions, makes the overall corpus of data available to the recommendation engine larger. The second thing is we're really focused now on -- in the same way that we talked in the past couple of quarters, the way we are really trying to redistribute ad loads so that what we care about is right now, are you in a position where you're interested in engaging with an ad where you want to buy something where you're in a period of commercial intent.

…we're also investing in using LLMs to deepen our content understanding. They are -- as the models continue to become smarter and the sort of understanding and reasoning capabilities become better. Using LLMs to kind of help us understand content helps with recommendations in part because the traditional recommendation engine relies a lot on engagement signals and then you need a lot of engagement to happen to get the engagement signals, but LLMs can reason in real time about whether this is a piece of content that would likely be interesting to you based on what we know.”

The bear case for Meta's capex is that it is hard to know beforehand where exactly we are on the curve of ad performance or organic content relevance improvement. Are we in the early, mid, or late innings here? As the CFO suggested, Meta has always been able to come up with new improvements, but the past may not be a great guide for the future, especially in light of the noticeable revenue acceleration on a much larger revenue base in recent years. Li seems to have sympathy for these concerns as well:

“…the thing that I think now at some point I was up here talking about this, it used to worry me, and it still does, to be clear, I’m just like engineered that way, that if you added up all these initiatives, sure, you could measure the return on each one because of that individual experiment, but you didn’t know where on the slope of the curve you were. And so maybe actually, if you add up these 20 things, then you need to discount them by 80% because like the slope becomes much steeper. That has not turned out to be the case.

These -- the work that we have done has turned out to be more additive than we expected. And there is a virtuous cycle that you get into with advertisers, right? Like you make the ads perform better. That, in turn, drives costs down for advertisers. That in turn drives their budgets on us up. And then on the platform up. And then hopefully, that’s good for their business, and that’s a like long-term virtuous flywheel because now it’s good for their business. They have more money to spend in the next cycle around doing this with us. That’s really hard to measure, right? That’s a multi-month, sometimes multiyear process. It’s hard for us to measure that very directly...But from everything we can observe, that appears to be happening on the platform. And our goal every day is to be the best place advertisers can come and spend their money relative to anywhere else.”

My takeaway is that Meta is likely fairly confident in its ability to generate compelling ROIC on capex related to the core ad business in the near term, but remains tentative about its long-term ability to generate attractive ROIC here. Another big unknown for Meta appears to be spending on inference. Li hinted that Meta will try to introduce more interactive content formats within its apps, which will be far more inference-heavy than current formats. But since these formats haven't launched yet, Meta doesn't know what demand for them will look like, and hence what the associated inference spending will be.

Meta is also currently noticeably behind on frontier models, but Li appeared optimistic about their ability to get back into the race, and once they do, she believes Meta has a clear path to integrating Meta AI much more deeply within its existing distribution. From Li:

“I think both based on just your deep history of interacting with the platform already and our ability to understand that information and sort of use it to make sure we're building a good experience for you. So I feel -- I think that when we have a frontier model, I feel quite confident that the combination of that, the combination of the distribution graph, the network effects, the fact that there are a lot of very natural places to have Meta AI interact with you…I think the ways in which the family of apps as it exists today, I think, are a great scaffold for AI experiences to fit very neatly within them”

I will share my notes from Microsoft’s session behind the paywall.


Microsoft