MBI Daily Dose (July 16, 2025)
Companies or topics mentioned in today's Daily Dose: extreme leverage of an AI research team with high talent density; GLP-1's impact on life insurance
I have been pondering Zuck’s recent hiring spree of AI researchers. I walk ~10,000 steps every day, and during yesterday’s afternoon walk I found myself thinking about three different pieces I had come across that all touched on this topic.
First, Zuck gave an interview with The Information in which he said some things that are likely a preview of what he may say on the earnings call in a couple of weeks:
I think that the physics of this is you don’t need a massive team to do this. You actually kind of want the smallest group of people who can fit the whole thing in their head. So there’s just an absolute premium for the best and most talented people.
I think we’ll see how the technology trends, and we’ll see what the results are. In running the company, I’m sort of always looking for ways that I can convert capital into a higher-quality service for people.
And one of the benefits of reinforcement learning is it gives you a venue to, you know, potentially convert very large amounts of capital into a better and better service, and potentially a better service than other less well-funded or less bold competitors will be able to do so. I view that as a competitive advantage.
I think we’re going to have the largest compute fleet of any company, and focusing on that on being powered by a small and talent-dense team, I think we’re gonna have by far the most compute per researcher to do leading edge work.
Like I said before, a lot of the numbers specifically have been inaccurate, but I think it discounts the other key reasons why people are super excited to come work on Meta Superintelligence Labs.
And one of the biggest is that you can just have more leverage as a researcher. You have more compute right? I mean, basically historically, when I was recruiting people to different parts of the company, you know, people are like, OK, what’s my scope going to be? And, you know, here, people say, I want the fewest number of people reporting to me and the most GPUs. And so having basically the most compute per researcher is definitely a strategic advantage, not just for doing the work, but for attracting the best people.
Then, reading the reflections of Calvin French-Owen, a former OpenAI employee who helped launch Codex, on his time at OpenAI, I started to appreciate Zuck’s strategic direction here. The following excerpt is particularly relevant to the topic at hand:
Andrey (the Codex lead) used to tell me that you should think of researchers as their own "mini-executive". There is a strong bias to work on your own thing and see how it pans out. There's a corollary here–most research gets done by nerd-sniping a researcher into a particular problem.
…back in November 2024, OpenAI had set a 2025 goal to launch a coding agent. By February 2025 we had a few internal tools floating around which were using the models to great effect. And we were feeling the pressure to launch a coding-specific agent…From start (the first lines of code written) to finish, the whole product was built in just 7 weeks.
7 weeks!! This is a good glimpse of what a small team with high talent density can accomplish. Finally, I read this piece by Kevin Kwok which further drove this point home for me:
Tech, and especially AI, is increasingly deflationary. Every year the advances in AI are obsoleting the last year’s models. The knowledge gleaned from training the last generation of models or from building products that best utilized them might be essential for working with the latest models–but the actual old models or products will be outdated fast. Conversely, as it gets easier to build software every year, the value of owning the legacy codebase falls or can even go negative.
An interesting note on the hiring done by Zuckerberg is how much of his hiring is of a profile that maps closer to being founders than AI researchers. Across the industry we are increasingly seeing companies figuring out how to create setups that work well hiring “founders.” And there’s a lot to unpack in this blurring by the market of the founder role.
As I was digesting all these pieces, it struck me that Zuck does indeed have an incredibly compelling pitch for all these AI researchers.
Meta’s new hiring binge is less a numbers game than a deliberate experiment in leverage. Zuck’s recent comments make the formula explicit: take a “talent-dense” handful of researchers, surround them with a GPU arsenal no one else can match, and let them run with minimal management overhead. Models, codebases, even entire product lines decay faster every year. In a world where yesterday’s model is tomorrow’s technical debt, Meta’s competitive edge can be that every one of its AI researchers operates with founder-level autonomy atop the industry’s largest GPU fleet, turning capital into breakthroughs faster than the rest of the field can depreciate them.
If you want to win in AI, the inputs are pretty simple to list and brutally hard to assemble: absurd compute, top-decile researchers, deep capital reserves, high-quality data, scaled distribution, leadership urgency/vision, and execution muscle that doesn’t flinch. Meta has lined up the first six; we’re about to learn whether it can deliver the seventh. I’ve been closely following Meta since 2017, and I’d be surprised if execution is what trips them up.
The optimistic read from French-Owen’s piece is that we may not be stuck in a three-year wait for proof. Give it three quarters and we should know whether Zuck’s leverage experiment is paying off. Sure, pack that much firepower into one org and egos will spark. Some sparks are fine. With the level of autonomy and resources on offer, I doubt ego drama will define Meta’s Superintelligence team when we look back three years out.
Let’s change gears a bit and talk about GLP-1.
One of the underappreciated aspects of revolutionary technologies is that it is nearly impossible to map out their ripple effects over time. I was reading this very interesting piece by Ashwin Sharma about how GLP-1 drugs have started having a noticeable impact on the life insurance business!
Life insurers set premiums using decades-deep mortality tables and a few core biomarkers, such as HbA1c, cholesterol, blood pressure, and BMI, that forecast death with ruthless 98% accuracy. GLP-1 drugs like semaglutide swiftly improve those exact metrics, so applicants who recently used them can appear quite healthy even while an underlying metabolic syndrome still lurks.
Because prescription records from direct-to-consumer providers often stay hidden, underwriters may grant decades-long “preferred” rates to people who were obese a year ago and are highly likely to regain the weight once they stop medication, something almost two-thirds do within 12 months.
When those gains reverse, the insurer is stuck with a badly mispriced policy, a phenomenon the industry calls “mortality slippage.” Since 2019, slippage has nearly tripled to 15.3%, meaning almost one in six life policies now carries a hidden, multimillion-dollar risk.
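To make the mispricing mechanism concrete, here is a minimal back-of-the-envelope sketch in Python. All the numbers (mortality rates, premium, face value) are hypothetical illustrations I made up, not figures from Sharma’s piece; the point is only to show how a policy priced off temporarily improved biomarkers flips from an expected profit to an expected loss once the metrics revert.

```python
# Hypothetical illustration of "mortality slippage" in underwriting.
# Every number below is invented for the sketch; none come from the cited piece.

def expected_annual_margin(annual_premium, face_value, annual_mortality_rate):
    """Premium collected minus the expected death benefit paid out in a year."""
    return annual_premium - face_value * annual_mortality_rate

FACE_VALUE = 500_000  # policy death benefit

# Underwriter sees GLP-1-improved biomarkers and assigns a "preferred" class.
preferred_mortality = 0.0010  # assumed 0.10% annual mortality for that class
preferred_premium = 800       # annual premium priced for that class

# Assumed true risk if the applicant stops the drug and regains the weight
# (the piece notes almost two-thirds do within 12 months).
reverted_mortality = 0.0022   # assumed 0.22% annual mortality after reversion

priced = expected_annual_margin(preferred_premium, FACE_VALUE, preferred_mortality)
actual = expected_annual_margin(preferred_premium, FACE_VALUE, reverted_mortality)

print(f"Margin as priced:   {priced:+,.0f} per policy-year")   # +300
print(f"Margin as realized: {actual:+,.0f} per policy-year")   # -300
```

Because the premium is locked in for decades, that per-year gap compounds over the life of the policy and across every mispriced policy in the book, which is why even a modest slippage rate adds up to a multimillion-dollar problem.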

In addition to "Daily Dose" posts like this, MBI Deep Dives publishes one Deep Dive on a publicly listed company every month. You can find all 60 Deep Dives here.
Prices for new subscribers will increase to $30/month or $250/year from August 1, 2025. Anyone who joins on or before July 31, 2025 will keep today’s pricing of $20/month or $200/year.
Current Portfolio:
Please note that these are NOT my recommendations to buy or sell these securities, but simply disclosures from my end so that you can assess potential biases I may have because of my own personal portfolio holdings. Always consider my write-ups my personal investing journal, and never forget that my objectives, risk tolerance, and constraints may bear no resemblance to yours.