
The Misbegotten War Against Curation

This article by Jane Bambauer, Brechner Eminent Scholar at the University of Florida College of Journalism and Communications, was originally published in Lawfare on May 8, 2023.

Public resentment of the “attention economy” has reached a fever pitch, and the political will to regulate its algorithms is palpable. Right now, federal bills are circulating with bipartisan support that would impose new restrictions on companies’ hosting and curation decisions in the course of addressing privacy, social media harms, and platform competition. What all of these efforts have in common is a shared sense that consumers and the country are being harmed by the power technology companies have to “gatekeep” and curate the content their users see or the apps they’re able to use.

The broad base of support for these bills, and the belief that Congress must do something, are formed from a tangle of wide-ranging, often contradictory theories and assumptions. No politician has an incentive to untangle the competing goals or to ask whether these bills would actually achieve them, because doing so could undermine delicate coalitions. But those coalitions are fragile precisely because they don’t really make sense.

To a significant extent, the left and the right are attracted to these bills for diametrically opposite reasons. To be sure, constituents on the left and right alike share concerns about China (in the case of TikTok), data privacy, and children’s mental health. But progressives want to keep intact tech platforms’ liberty to purge offensive content while, for those on the right, the whole point of regulating curation algorithms is to correct for biases against conservative speakers.

Let me put my cards on the table: I do not think the industry has treated conservative voices fairly. The “Twitter Files” and other studies provide evidence of homogeneity and groupthink in the ranks of tech companies. I have sympathy for the politicians and commentators who want to make sure conservatives have access to public debate on equal terms. Content moderation decisions related to at least some topics (such as the coronavirus pandemic) have been surprisingly distorted. Through these practices, social media has earned the public’s unease and distrust.

Nevertheless, conservatives should refrain from rushing to regulate. First, regulation will miss the mark. Government management of content curation is a fool’s errand, even if regulators uniformly agreed to impose “neutrality” requirements (which they won’t). Moreover, regulation is unnecessary: The markets—even in these fairly concentrated areas of high tech—are still capable of evolving and self-correcting. In fact, in a $0-price digital economy, curation is one of the main features by which tech firms compete against one another.

Curation Algorithms Are the Product

When a platform competes for users, to the extent it does compete (more on that later), it does so on the basis of format and curation. In other words, among social media companies that have a sufficient number of users, those users generally decide to engage more or less on the platform based on how user-generated content is formatted (such as video for YouTube, short video for TikTok, photos and videos for Instagram, short statements and links for Twitter), and how user-generated content is curated.

When platforms compete on curation, they attempt to satisfy several goals simultaneously: to present to the user-as-receiver the content most likely to keep them engaged with the platform (the eyeballs); to give the user-as-content-generator confidence that others will see their content, so that creating it is worthwhile (the content); and to maximize the price that advertisers are willing to pay for placing ads on the platform (the revenues). This is a complex exercise, and feedback loops from both users and advertisers are the best gauge platforms have to fine-tune curation. Platforms may also pursue their own agendas when they design curation algorithms, and those agendas are bound to be interpreted as evidence of either responsible and ethical tech or naked political bias, depending on whether the observer likes the agenda. But the platforms’ freedom to tinker with curation algorithms to suit their own interests is constrained by market forces—in particular, by whether users or advertisers will put up with it.
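To make the balancing act concrete, here is a deliberately simplified sketch of the kind of multi-objective scoring described above. It is purely illustrative: the signal names, the linear form, and the weights are assumptions chosen for exposition, not any platform’s actual ranking algorithm.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A post competing for a slot in a user's feed (illustrative only)."""
    post_id: str
    p_engage: float       # predicted chance this user engages (the eyeballs)
    creator_reach: float  # value of assuring the creator an audience (the content)
    ad_value: float       # expected advertiser revenue per impression (the revenues)

def score(c: Candidate, w_engage: float = 0.6, w_creator: float = 0.2,
          w_ads: float = 0.2) -> float:
    # A hypothetical linear trade-off among the three goals. Real systems are
    # vastly more complex and tune weights continuously from feedback loops.
    return w_engage * c.p_engage + w_creator * c.creator_reach + w_ads * c.ad_value

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    # Curation reduced to its skeleton: order candidates by composite score.
    return sorted(candidates, key=score, reverse=True)

feed = rank_feed([
    Candidate("a", p_engage=0.9, creator_reach=0.1, ad_value=0.2),
    Candidate("b", p_engage=0.4, creator_reach=0.8, ad_value=0.7),
])
print([c.post_id for c in feed])  # ['a', 'b'] under the default weights
```

The toy model’s only point is that the weights embody editorial trade-offs; a regulator who mandates “neutral” curation would have to say which weighting counts as neutral.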

At this point, progressives and conservatives alike may object that the preferences of users do little to discipline platforms because these companies are “dominant” and enjoy security from meaningful competition. Indeed, there is a degree of lock-in (in the case of vertically integrated services like app stores) and network effects (in the case of social media) that can blunt the impact of a dissatisfied user base that would otherwise want to switch to a competitor. An iPhone user who is unhappy with the way Apple manages the app store would have to buy new hardware in order to run a different app store. And a Twitter user who wants to leave Twitter in protest will have to rebuild a base of readers and content creators on another site that might not have as many users. But these theories of anti-competitive advantage tend to be exaggerated in the minds of journalists and policymakers because the power of the platform is attenuated by a lot of other factors (such as participating in multiple networks or “multi-homing”).

In the case of social media companies, the impression of dominance is grossly exaggerated and doesn’t really make sense given the other objections leveled against the companies. In The Attention Merchants, Tim Wu harshly criticizes Facebook, YouTube, and other services for leveraging cognitive biases and the human tendency to engage more with negative and polarizing content in order to keep users on their services. Tristan Harris, a co-founder of the Center for Humane Technology, has made similar arguments in congressional testimony. I believe these criticisms have merit, as there is at least some credible evidence that platform users are addicted to content that subjectively makes them feel bad. But this problem is a problem of vice—that consumers demand content and functionality that, at times, can also cause them harm. I will have more to say about the regulation of content as vice in a later article. For now, the fact that platforms engage in these types of attention-keeping practices is corroborating evidence that they are not in a monopoly position. After all, why would these companies need to exert energy and incur costs to design their curation algorithms to maximize engagement if users had no good substitutes? Aren’t monopolies supposed to be able to take their customers’ engagement for granted?

Even Google—which Federal Trade Commission (FTC) Chair Lina Khan once described as enjoying “an insurmountable barrier to entry for new competition” in search—is now in a code red panic over the incursions of ChatGPT into search traffic. (And Khan already regards OpenAI as one of the “large incumbent technology firms.”) In the context of app stores, former economic adviser to President Barack Obama Jason Furman and others are making basically the same mistake, asserting, by implication at least, that the two mobile operating systems that currently dominate the U.S. market—iOS and Android—are destined to remain the only options.

The fact that the acronym for dominant tech firms—including Google, Apple, Facebook, Netflix, and Amazon—keeps changing (from GAFA to GAFAM to GAFAM-NATU-BATX) demonstrates that although costs of entry into the market are nontrivial, and although there is a certain amount of consumer inertia in operation, the market still churns. Thus, there is a risk in designing regulation under strong assumptions of dominance. Even if regulations are designed to promote competition in the markets and products that depend on the dominant firms, those very regulations can entrench the dominance of the regulated platforms. Public discourse tends to assume that markets in the platform space are irrevocably broken and that regulations will more or less work as intended. In fact, both market forces and proposed regulations are imperfectly responsive to consumer concerns (including ideological bias). The appropriate question is: Which option is the least flawed?

The End Goal of Regulation for Curation Algorithms Is a Complete Mystery

Tech firms need to tend to their bottom line and can perhaps pursue other values or cultural goals if they don’t interfere with profitability too much. But their primary end goal—profit—is discernible.

What are the ends and aims of algorithm regulations? Even the ones that purport to have a goal (for example, “viewpoint neutrality” in the case of the Texas social media law) are failures on their own terms, and not just because of their First Amendment flaws. Texas H.B. 20 opens with legislative findings that social media platforms function as common carriers, and it prohibits platforms from “censoring” content based on the viewpoint of the user.

Consider how far these bills are from common carriage by imagining a truly neutral platform—a government-run social media forum, for example—and asking what the bills’ promoters would strip away from that forum. As soon as users venture beyond a forum where they see only what their direct contacts have posted, in the order (or reverse order) that they posted it, the issues and quandaries multiply. Users and platforms alike wouldn’t want the forum to be flooded with spam or pornography, so proponents of neutrality would likely embrace rules that restrict and curate more heavily than the full set of speech protected under First Amendment law would suggest. What about content that is only potentially illegal (threats that might be credible, for example) or content that nearly every user would simply not want to encounter, such as hate speech and pointless vitriol? What do free speech norms require for the treatment of these items? Prescriptive regulation of curation algorithms will have to answer these questions, and it might not do so as clearly or as well as the market pressure of users.
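For contrast, the truly neutral baseline imagined above (each user sees only what their direct contacts posted, newest first) is trivial to state precisely, which is what makes it a useful foil. Here is a minimal sketch, with hypothetical types and field names chosen only for illustration; everything a real platform adds on top of it is a contestable curation judgment.

```python
from datetime import datetime
from typing import NamedTuple

class Post(NamedTuple):
    author: str
    text: str
    posted_at: datetime

def neutral_feed(posts: list[Post], contacts: set[str]) -> list[Post]:
    """The hypothetical 'truly neutral' forum: show only what the user's
    direct contacts have posted, newest first, with no other judgment."""
    visible = [p for p in posts if p.author in contacts]
    return sorted(visible, key=lambda p: p.posted_at, reverse=True)

# Every rule layered on top of this baseline -- a spam filter, a pornography
# screen, a credible-threat classifier -- is a curation decision that a
# "neutrality" mandate would somehow have to police.
```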

The Texas law attempts to bypass these questions by requiring that platforms interpret their own self-chosen content moderation rules in a manner that is viewpoint neutral. But if a platform designs its content moderation rules to maintain some flexibility to respond to new fact patterns or to context on the ground, or if it takes a position on what constitutes a “viewpoint” that a user or regulator disagrees with, the courts (and thus the government) are inevitably going to be in the business of prescribing content moderation rules as a practical matter. Hence the serious and legitimate constitutional questions raised by these bills.

The RESTRICT Act, which was introduced in Congress earlier this year, has similar problems. It would allow the government to “disrupt” or “otherwise mitigate” content delivery services that pose an “unacceptable risk” of “interfering in … a Federal election.” These terms, too, should strike policymakers as vague and insufficiently sensitive to free speech concerns. After all, government officials perceived a threat of Russian election interference when they pressured Twitter and other social media companies to place certain stories (the Hunter Biden laptop) and certain Americans (seemingly harmless conspiracy theorists and garden-variety Trump supporters) in social media purgatory.

Other proposed laws, such as the American Data Privacy and Protection Act, attempt to regulate curation algorithms through transparency and self-assessment requirements. These requirements are basically an admission that policymakers don’t have a theory of what makes an algorithm good or bad. In the case of transparency, the disclosed information might help users match themselves to the platforms whose curation algorithms they like best (to the extent that such description is possible and meaningful). It would be most useful for users-as-content-creators, who could confirm or dispel suspicions that their content is being throttled and, if so, put their energy into developing content on other platforms. If a platform breaks a specific promise (for example, if it promises to provide notice before throttling or removing content and then fails to abide by that promise), the FTC can bring a claim under its authority to combat consumer deception. But overall, users already glean a lot of information about which content is amplified or deamplified. Disagreement tends to be over the substance of those decisions.

Regulation of Curation Algorithms Will Have Anti-Competitive Effects

Regulation of curation algorithms brings compliance costs and uncertainty of the sort that would likely make venture capital investors nervous. If lawmakers are concerned that network effects and general start-up costs already make it difficult for new social media platforms (like Truth Social) to compete with incumbents, complying with byzantine rules for content curation and management will make it all the more difficult. Indeed, the model for Truth Social seems to be not the “neutral” platform that the Texas law would seem to promote, but a counter-biased platform that gleefully bans leftists. Truth Social will hit the Texas law like a buzzsaw if it is fortunate enough to grow its audience to the size of Twitter’s.

The proposed American Innovation and Choice Online Act (AICOA) targets curation of a different sort: Amid concerns that Apple and Google use access to their app stores to exploit app developers or to prevent them from competing on equal terms, the AICOA places several requirements on the operating systems, including obligations not to discriminate against apps that want to join the app stores and obligations to allow the easy porting and exporting of personal data from the operating system to the app developer’s own servers. In other words, it would compromise Apple’s ability to compete on the “walled garden” model as well as Android’s ability to differentiate through somewhat more permissive gatekeeping.

As discussed in a previous piece on Lawfare, I salute the impulse to make the app economy more vibrant, but bills like the AICOA will make it harder for a new entrant to emerge in the mobile operating system market. A new entrant, if it does emerge, will have fewer dimensions of differentiation because it will not be able to compete on the curation of apps. Granted, the argument that Apple and Google stand to benefit from regulation that imposes restrictions and requirements on them might seem silly, especially since Apple and Google are publicly opposed to the bills. But large banks lobbied hard against the passage of the Dodd-Frank Act, and whatever can be said about that law’s salutary impact, it has definitely insulated the biggest banks from competition. This dynamic is not unique to banking; practitioners have described the same pattern in industries as different as online gambling.

The AICOA’s unintentional entrenchment of Apple’s and Google’s mobile platforms could be worth it, of course, if the bill tackled real problems that harm consumers. The goal of the bill is to spur innovation and competition among the apps that run on Apple and Google devices, but these benefits also seem dubious. It would grant nearly every app access to the app stores with limited opportunities for quality control by Apple and Google. The bill would also give app producers automatic access to user data collected by Apple and Google, subject to some limited exceptions. This is in considerable (perhaps irreconcilable) tension with the privacy and security goals that lawmakers claim to have. At the very least, I would ask legislators who intend to support the AICOA to explain with precision what it was that Facebook did wrong when it allowed applications like the one that harvested data for Cambridge Analytica to seamlessly access the networks and personal data of Facebook’s users.

The Best Example of a Market Failure Leading to Bias Is Parler, and It’s Not a Very Good One

Republicans are understandably troubled that a few large companies were able to shut down the mass adoption of the Parler app in the wake of former President Donald Trump’s deplatforming. This wasn’t the first time that private companies had taken a platform offline based on its users’ content. But to many, this marked the first time the dominant tech firms were willing to use their power to shape mainstream discussions that would otherwise have taken place on social media.

Apple allegedly considered dropping Twitter from the App Store immediately after Elon Musk’s takeover of the social media company, too. If that had occurred, the case for regulation to prevent ideological bias would have been stronger, because that decision would not have been made under the pressure of imminent violence and government malfunction. But Apple backed off, leaving the Parler episode as the only known case of viewpoint-driven app removal. So what lessons should observers draw from this case?

Even if the executives of these companies made the wrong decision at the time, few could credibly claim that the decision to flip the breaker on Parler, made as it was during a particularly tense period of American history when other dominant media companies (such as Fox News) were fanning agitation based on election lies, fell outside the range of reasonable responses.

The executives making these decisions were facing a digital “trolley problem.” Apple and Google confronted a switch that could route the trolley headed for five innocent bystanders onto a track that would kill just one. That is, they probably perceived at least some of the costs to public trust from discontinuing smartphone access to Parler but must have believed those costs were still smaller than the risks of a continuation or exacerbation of the Jan. 6 activities. To be clear, one can reasonably disagree with these executives’ perspective on the situation—I believe they may have overestimated the risk that Parler would contribute to government collapse and underestimated the risk that purging Parler would increase distrust and paranoia—but it would be hard to claim their estimates were outside the bounds of rational belief. I understand why the decision-makers at these tech companies chose the path they did.

Some readers may have deontological objections to this trolley problem analogy. They may object that companies built from, and profiting from, the American spirit of free speech and liberty should not have intervened so directly and heavy-handedly in political association and discourse. This, too, is a defensible position. All I mean to say is that it is not the only defensible position, and users and observers must judge these companies’ actions in light of the moral no-man’s-land in which they had to operate. Did they behave outrageously? No. And I suspect market forces, as well as criticism from outside and within these companies, will keep them from meddling so dramatically in public debate in the future, absent equally bizarre circumstances.

Conclusion

The Twitter Files have confirmed that the government has been involved in the political skew of content moderation. Nevertheless, Republican lawmakers should pause before rewriting the rules for the digital economy and narrowing the options in one of the few domains where firms currently compete: curation.
