Digital Advertising's "Bitter Lesson" Moment
Eric Seufert recently had an interesting podcast with a couple of professors who authored the paper titled “The Impact of Visual Generative AI on Advertising Effectiveness”.
The authors pursued four specific lines of inquiry to understand how visual Generative AI (genAI) fits into the advertising landscape. Here are the four questions from the paper (emphasis mine):
Does allowing visual genAI to modify existing expert-created advertisements enhance advertising effectiveness? In other words, do “GenAI-modified ads” outperform their original “human expert-created” counterparts?
Does allowing visual genAI to create new advertisements from scratch enhance advertising effectiveness? In other words, do “GenAI-created ads” outperform “human expert created” (and “GenAI-modified”) ads?
Does allowing visual genAI to also redesign product packaging shown in the advertisements further enhance advertising effectiveness?
Does disclosing to consumers that genAI was involved in producing the ads, either by modifying or fully creating them, affect advertising effectiveness?
To answer these questions, the authors set up a rigorous “Man vs. Machine” competition involving both lab tests and real-world spending.
First, they built a library of advertisements. They took real, historical ads from a beauty retailer (the “Human Expert” baseline). Then, they acted as creative directors for the AI, using tools like Midjourney, Stable Diffusion, and DALL-E to create two types of challengers: a) “Modified” Ads: They asked the AI to act as an editor, tweaking the human ads by adding faces, nature scenes, or artistic filters, while forcing it to keep the original layout; and b) “Created” Ads: They asked the AI to act as an artist, generating brand-new ads from scratch based on text prompts, sometimes even letting it redesign the product packaging itself.
You can see one of the examples from the paper below:

The researchers then ran two major tests: a) they recruited nearly 700 participants and showed them these ads in a controlled setting, asking how likely they were to buy the products and measuring psychological factors like how “real” or emotionally engaging the ads felt; and b) to prove this worked in the wild, they spent money on a real Google Ads campaign, running the human, AI-modified, and AI-created ads for cosmetic brands over four months and tracking over 100,000 views to see which versions real people actually clicked on. So, for the field study they only had click-through rate (CTR) data, not actual conversions.
The study found a massive performance gap between asking AI to “fix” an ad and asking it to “make” one. In the Google Ads campaign, GenAI-created ads (made from scratch) achieved a 19% increase in CTR compared to human-expert ads. Interestingly, GenAI-modified ads provided no significant improvement over human designs. They also found no difference in performance based on which AI models were used to create the ads.
The effectiveness was amplified when the AI was given even more creative freedom. When genAI created both the advertisement and the product packaging design, it boosted the CTR by ~15% relative to genAI ads that used the standard packaging.
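To make those relative numbers a bit more concrete, here is a quick back-of-the-envelope sketch in Python. The 1% baseline CTR and the way I stack the two lifts are my own illustrative assumptions, not figures reported in the paper.

```python
# Back-of-the-envelope sketch: translating the paper's relative CTR lifts into
# absolute terms. The 1.0% baseline and the stacking of the two lifts are my
# own illustrative assumptions, not numbers reported in the paper.

human_ctr = 0.010                      # assumed CTR for human-expert ads
genai_created_ctr = human_ctr * 1.19   # +19% lift for GenAI-created ads

# The ~15% packaging lift is measured against GenAI-created ads that kept the
# standard packaging, so here it stacks on top of that group's CTR.
genai_packaging_ctr = genai_created_ctr * 1.15

print(f"Human-expert ads:          {human_ctr:.2%}")           # 1.00%
print(f"GenAI-created ads:         {genai_created_ctr:.2%}")   # 1.19%
print(f"GenAI-created + packaging: {genai_packaging_ctr:.2%}") # ~1.37%
```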
It didn’t come up during the podcast, but listening to it was yet another reminder to me of Rich Sutton’s “bitter lesson” in AI. Sutton’s “bitter lesson,” in spirit, is that systems that scale, i.e., learning and search with lots of compute and data, tend to beat systems where humans hand-engineer clever structure, even when the hand-engineering looks smart in the short run. That’s precisely why the lesson feels bitter: we all like to imagine that our attempts at being clever should triumph over a more hands-off, simpler approach.
When the researchers in this paper tried to inject human constraints, i.e., forcing the AI to work within the layout and structure of a human-designed ad, the AI performed poorly. It was limited by the “human prior” and struggled to make the edits look natural. However, when genAI gets to create from scratch (and even design the packaging), it has more degrees of freedom to build a coherent visual story. As the researchers removed the constraints and let the model generate the entire artifact from scratch (leveraging its massive training on billions of images), it outperformed the human experts. That feels to me like yet more evidence of the “bitter lesson” in AI.
These research findings are super consistent with the vision Zuckerberg outlined on the 3Q’25 call:
“…advertisers are increasingly just going to be able to give us a business objective and give us a credit card or bank account and like have the AI system basically figure out everything else that’s necessary including generating video or different types of creative that might resonate with different people that are personalized in different ways, finding who the right customers are. All of these -- all of the capabilities that we’re building, I think, go towards improving all of these different things. So I’m quite optimistic about that.”
It is, however, interesting that this study also found that explicitly labeling an ad as “AI-generated” or “AI-edited” caused the CTR to plummet by 31.5%. Thankfully, that’s hardly a problem in practice. Just as nobody cares whether someone created an ad using Adobe or Canva, consumers don’t really need to know whether an ad is AI-generated or not.
In addition to a “Daily Dose” (yes, DAILY) like this one, MBI Deep Dives publishes one Deep Dive on a publicly listed company every month. You can find all 65 Deep Dives here.
Current Portfolio:
Please note that these are NOT recommendations to buy/sell these securities, but just a disclosure from my end so that you can assess potential biases I may have because of my own personal portfolio holdings. Always consider my write-ups my personal investing journal, and never forget that my objectives, risk tolerance, and constraints may have no resemblance to yours.
My current portfolio is disclosed below: