Social Media's Defense
Last week, I shared my concern that algorithmic content may force companies such as Meta to share some liability for the content shown on their properties because of the product design choices these companies have made. From my piece:
I…do sympathize with the concerns related to algorithmic content on all social networks these days…Meta may invoke section 230 defense or/and their first amendment right, but I’m not sure even the Supreme Court will be sympathetic towards tech companies’ right to control an algorithm that ends up showing questionable or disturbing content to minors.
I have been pondering this point and would like to present the other side of the argument. Techdirt published a piece last week making the case that the distinction between design and content is quite flimsy. From Techdirt:
Plaintiffs’ lawyers have been trying to get around Section 230 for years, and these two cases represent them finally finding a formula that works: don’t sue over the content on the platform. Sue over the design of the platform itself. Argue that features like infinite scroll, autoplay, algorithmic recommendations, and notification systems are “product design” choices that are addictive and harmful, separate and apart from whatever content flows through them.
The trial judge in the California case bought this argument, ruling that because the claims were about “product design and other non-speech issues,” Section 230 didn’t apply. The New Mexico court reached a similar conclusion. Both cases then went to trial.
This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?
Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.
While the thought experiment is thought-provoking, there is an even more intriguing real-world example that makes the same argument. TBPN made a compelling case by pointing out Sora’s failure despite borrowing the same “design” choices Meta and TikTok made. From TBPN (emphasis mine):
“So what to make of these two situations? It feels a bit like a placebo-controlled trial to me. Sora absolutely used all the social media “best practices” (or addictive & harmful neurobiological techniques if you want to use the court’s language). The Sora app was basically the same as TikTok, IG Reels, and YouTube Shorts in terms of UI & UX design. It had infinite scroll, algorithmic recommendations, notifications, a “Like” button! All the same features that were found to be addictive when applied to human-generated content, very much did not seem addictive when applied to AI generated content (at least if estimated usage rates are to be believed).
As people compare social media to the cigarette industry, it might be worth revisiting what exactly is addictive about cigarettes. Nicotine is the addictive chemical in cigarettes that causes addiction, and nicotine is addictive even when it’s not administered via combustible tobacco. It’s addictive in vapes, it’s addictive in pouches, it’s even addictive in nicotine gum and lozenges.
So we now have an experiment where we applied all the same UI/UX features to different content, and maybe you can’t read too much into it, but it certainly seems like what pulls people into social media is more the humans that create content on the platform. Some creators create very compelling content that can lead to high screen-time. Some people go on social media and make horrible content that depresses people that land on it.”
Indeed, if it’s the content rather than the design choices that gets you “addicted” to a particular app, that sounds more like a content issue (which means the Section 230 defense might apply) than a design issue. In case you’re wondering whether OpenAI shut down Sora simply because it was too expensive to operate in a compute-constrained environment, the WSJ reported that while Sora’s users peaked at 1 million, the number dwindled to less than half of that in recent months. So even if compute constraints had not been a factor in the decision, Sora clearly wasn’t heading in the right direction.
I personally have never used the word “addiction” when it comes to social networking apps. Given that the measurement of this “addiction” is far from scientific and often relies on survey questions, this paper made the case that it is almost too easy to find “addiction”: applying similar standard medical criteria, it showed you could argue young people are “addicted” to their real-life friends. Indeed, I think my parents could easily have been convinced I was “addicted” to watching Test Cricket on TV, and they would perhaps have loved it if the government had banned or limited my watch time for a game that often ends in a draw after being played six hours a day for five consecutive days!
Joking aside, I still think these lawsuits will likely end up at the Supreme Court, so it makes sense to pay close attention to what the Court has already indicated on these topics. I think Moody v. NetChoice provides good ground to gauge the Supreme Court’s point of view on some of them. The Court explicitly used "Facebook's News Feed" throughout its opinion as the primary example for analyzing how First Amendment protections apply to algorithmic content curation. Justice Kagan’s majority opinion strongly signaled that Meta's core content moderation practices on its main feeds are protected by the First Amendment. From the opinion (emphasis mine):
“To the extent that social media platforms create expressive products, they receive the First Amendment’s protection. And although these cases are here in a preliminary posture, the current record suggests that some platforms, in at least some functions, are indeed engaged in expression. In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. And while much about social media is new, the essence of that project is something this Court has seen before. Traditional publishers and editors also select and shape other parties’ expression into their own curated speech products. And we have repeatedly held that laws curtailing their editorial choices must meet the First Amendment’s requirements. The principle does not change because the curated compilation has gone from the physical to the virtual world."
The Court continued later in its opinion (emphasis mine; some references deleted for readability):
“The individual messages may originate with third parties, but the larger offering is the platform’s. It is the product of a wealth of choices about whether—and, if so, how—to convey posts having a certain content or viewpoint. Those choices rest on a set of beliefs about which messages are appropriate and which are not (or which are more appropriate and which less so). And in the aggregate they give the feed a particular expressive quality. Consider again an opinion page editor, as in Tornillo, who wants to publish a variety of views, but thinks some things off-limits (or, to change the facts, worth only a couple of column inches). “The choice of material,” the “decisions made [as to] content,” the “treatment of public issues”—“whether fair or unfair”—all these “constitute the exercise of editorial control and judgment.” For a paper, and for a platform too…That those platforms happily convey the lion’s share of posts submitted to them makes no significant First Amendment difference.”
While Justice Alito concurred in the judgment, it does appear he hasn’t quite made up his mind on whether platforms such as Meta and Alphabet should receive the same First Amendment protection that human beings would. From Justice Alito (emphasis mine; some references deleted for readability):
“…consider how newspapers and social-media platforms edit content. Newspaper editors are real human beings, and when the Court decided Tornillo (the case that the majority finds most instructive), editors assigned articles to particular reporters, and copyeditors went over typescript with a blue pencil. The platforms, by contrast, play no role in selecting the billions of texts and videos that users try to convey to each other. And the vast bulk of the “curation” and “content moderation” carried out by platforms is not done by human beings. Instead, algorithms remove a small fraction of nonconforming posts post hoc and prioritize content based on factors that the platforms have not revealed and may not even know. After all, many of the biggest platforms are beginning to use AI algorithms to help them moderate content. And when AI algorithms make a decision, “even the researchers and programmers creating them don’t really understand why the models they have built make the decisions they make.” Are such decisions equally expressive as the decisions made by humans? Should we at least think about this? Other questions abound. Maybe we should think about the enormous power exercised by platforms like Facebook and YouTube as a result of “network effects.” And maybe we should think about the unique ways in which social-media platforms influence public thought. To be sure, I do not suggest that we should decide at this time whether the Florida and Texas laws are constitutional as applied to Facebook’s News Feed or YouTube’s homepage. My argument is just the opposite. Such questions should be resolved in the context of an as-applied challenge. But no as-applied question is before us, and we do not have all the facts that we need to tackle the extraneous matters reached by the majority.”
Moreover, Justice Barrett, too, may be quite open to evaluating platforms’ First Amendment rights differently. From her opinion (emphasis mine; some references deleted for readability):
“what if a platform’s algorithm just presents automatically to each user whatever the algorithm thinks the user will like—e.g., content similar to posts with which the user previously engaged? The First Amendment implications of the Florida and Texas laws might be different for that kind of algorithm. And what about AI, which is rapidly evolving? What if a platform’s owners hand the reins to an AI tool and ask it simply to remove “hateful” content? If the AI relies on large language models to determine what is “hateful” and should be removed, has a human being with First Amendment rights made an inherently expressive “choice . . . not to propound a particular point of view”? In other words, technology may attenuate the connection between content-moderation actions (e.g., removing posts) and human beings’ constitutionally protected right to “decide for [themselves] the ideas and beliefs deserving of expression, consideration, and adherence.” So the way platforms use this sort of technology might have constitutional significance.”
Given this context, I don’t think it’s an open-and-shut case that the Supreme Court will certainly side with the platforms, especially for products and services aimed at minors. However, Meta et al. do have a strong case at hand, and as I alluded to before, I suspect Meta won’t be heartbroken if onerous regulations make kids’ experience on these platforms boring, as long as the rules apply to everyone, including its current and future competitors not just in social media but across all digital apps. For context, kids spend an average of 2.7 hours per day on Roblox, which contains many of the social elements Meta provides on its properties. So I’m not sure why whatever regulations apply to Meta wouldn’t also largely apply to platforms such as Roblox.
Meta makes ~1% of their revenue from teens; so what they really need is for regulators to make it abundantly clear what these companies can and cannot do on their properties. Meta cannot come up with these policies on their own if their competitors decide to eat their lunch by making very different product design choices.
In addition to “Daily Dose” (yes, DAILY) pieces like this, MBI Deep Dives publishes one Deep Dive on a publicly listed company every month. You can find all 67 Deep Dives here.
Current Portfolio:
Please note that these are NOT my recommendations to buy/sell these securities, just disclosure on my part so that you can assess potential biases I may have because of my own personal portfolio holdings. Always consider my write-ups my personal investing journal, and never forget that my objectives, risk tolerance, and constraints may bear no resemblance to yours.
My current portfolio is disclosed below: