Platforms & Polemics
Texas Legislature Convinced First Amendment Simply Does Not Exist
Science concludes: anti-porn ideations impact brain's ability to understand Constitution
Over the past two years there has been a concerted push by state legislatures to regulate the Internet, the likes of which has not been seen since the late 90s/early aughts. Content moderation, financial relationships between journalists and platforms, social media design and transparency, “national security,” kids being exposed to “bad” Internet speech—you name it, a state legislature has introduced an unconstitutional bill about it. So it’s no surprise that the anti-porn crowd seized the moment to once again exhibit a creepy and unhealthy interest in what other people do with their pants off.
“I know it when I see it”
-Justice Potter Stewart, referring to unconstitutional laws, probably.
The Texas legislature, also unsurprisingly, was all too happy to help out. Last week, Texas Governor Greg Abbott signed into law HB 1181, which regulates websites that publish or distribute “material harmful to minors,” i.e., porn.
Start from the premise that pornography is protected by the First Amendment, but that it may be restricted for minors where it could not be for adults under variable obscenity jurisprudence.
The law’s requirements apply to any “commercial entity,” explicitly including social media platforms, that “intentionally publishes or distributes material on an Internet website…more than one-third of which” is porn. That’s a problematic criterion in the first place. I don’t know that there’s an easy (or even feasible) way for a social media platform to know precisely how much porn is on it (perhaps there is, though). And what about a non-social media website—what is the denominator? If a website has articles (which is definitely the reason you’re on it, I know) plus naughty pictures, is the percentage calculated by comparing the number of porn-y things to the number of articles? Words? Pages? Who knows—the law sure doesn’t say.
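To make the ambiguity concrete, here’s a toy back-of-the-envelope sketch with entirely made-up numbers (nothing here comes from the statute): the same hypothetical site lands on different sides of the one-third line depending on which denominator you pick.

```python
# Hypothetical site: 40 adult images alongside 100 articles
# averaging 800 words each. The statute never says what to count.
adult_items = 40
articles = 100
words_per_article = 800

# Counting discrete pieces of content:
by_items = adult_items / (adult_items + articles)  # 40/140, under one-third

# Counting words (treating each image as one word-equivalent):
by_words = adult_items / (adult_items + articles * words_per_article)  # vanishingly small

# A modestly different mix flips the items-based answer:
by_items_racier = 60 / (60 + 100)  # 60/160, over one-third

for name, frac in [("by items", by_items), ("by words", by_words),
                   ("racier site, by items", by_items_racier)]:
    status = "over" if frac > 1/3 else "under"
    print(f"{name}: {frac:.4f} -> {status} one-third")
```

Same site, same content, opposite legal conclusions—the statute gives no basis for choosing among the metrics.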
But that’s the least of the law’s problems. HB 1181 requires qualifying entities (however determined) to do two things, both of which clear First Amendment hurdles about as well as a rhinoceros competing in a steeplechase.
This has been a recurring theme in state and federal legislation recently. HB 1181 requires covered entities to “use reasonable age verification methods” to ensure that users are 18 or older before allowing access.
We’ve been here before, and explaining this over and over again is getting exhausting. But I’ll do it again, louder, for the people in the back.
Age Verification Laws: A Brief History
In the beginning (of the web) there was porn. And the Government saw that it was “icky” and said “let there be laws.”
In 1996, Congress passed the Communications Decency Act, prohibiting the knowing transmission or display of “obscene or indecent” messages to minors using the Internet. The Supreme Court struck down the law’s indecency provisions (leaving Section 230 intact) in Reno v. ACLU, holding that they chilled protected speech, in part because there was no way for users in chat rooms, newsgroups, etc. to know the age of other users—and even if there was, a heckler’s veto could be easily imposed by
any opponent of indecent speech who might simply log on and inform the would-be discoursers that his 17-year-old child…would be present.
The Court rejected the government’s argument that affirmative defenses for use of age-verification methods (in particular credit card verification) saved the law, noting that not every adult has a credit card, and that existing age verification methods did not “actually preclude minors from posing as adults.”
So Congress tried again, passing the Child Online Protection Act (COPA) in 1998, ostensibly narrowed to only commercial enterprises, and again containing affirmative defenses for using age-verification. Again, the courts were not buying it: in a pair of decisions, the Third Circuit struck down COPA.
With respect to the viability of age verification, the court found that the affirmative defense was “effectively unavailable” because, again, entering a credit or debit card number does precisely nothing to verify a user’s age.
But more importantly, the court ruled that the entire idea of conditioning access to material on a government-imposed age verification scheme violates the First Amendment. Noting Supreme Court precedent “disapprov[ing] of content-based restrictions that require recipients to identify themselves affirmatively before being granted access to disfavored speech,” the Third Circuit ruled in 2003 that age-verification would chill protected speech:
We agree with the District Court's determination that COPA will likely deter many adults from accessing restricted content, because many Web users are simply unwilling to provide identification information in order to gain access to content, especially where the information they wish to access is sensitive or controversial. People may fear to transmit their personal information, and may also fear that their personal, identifying information will be collected and stored in the records of various Web sites or providers of adult identification numbers.
In its second decision, coming in 2008, the court again agreed that “many users who are not willing to access information non-anonymously will be deterred from accessing the desired information.” And thus, after the Supreme Court denied cert, COPA—and the notion that government could force websites to age-verify users—died.
Age Verification Today
Has anything changed that would render these laws newly constitutional? One might argue that age-verification technologies have improved, and are no longer as crude as “enter a credit card number.” I suppose that’s true in a sense, but not a meaningful one. HB 1181 requires age verification by either (a) a user providing “digital identification” (left undefined), or (b) use of a commercial age-verification system that uses either government-issued ID or “a commercially reasonable method that relies on public or private transactional data.”
It stands to reason that if a minor can swipe a parent’s credit card for long enough to enter it into a verification service, they can do the same with a form of Government ID. Or even easier, they could just borrow one from an older friend or relative. And like entering a credit card number, simply entering (or photographing) a government ID does not ensure that the person doing so is the owner of that ID. And what of verification solutions that rely on selfies or live video? There is very good reason to doubt that they are any more reliable: the first page of Google search results for “trick selfie verification” turns up numerous methods for bypassing verification using free, easy-to-use software. Even the French, who very much want online age-verification to be a thing, have acknowledged that all current methods “are circumventable and intrusive.”
But even assuming that there was a reliable way to do age verification, the First Amendment problem remains: HB 1181 requires adult users to sacrifice their anonymity in order to access content disfavored by the government, and First Amendment jurisprudence on that point has not changed since 2008. Texas might argue that because HB 1181 prohibits websites or verification services from retaining any identifying information, the chilling harm is mitigated. But there are two problems with that argument:
First, on a practical level, I don’t know how that prohibition can work. A Texas attorney general suing a platform for violating the law will have to point to specific instances where an entity failed to age-verify. But how, exactly, is an entity to prove that it indeed did perform adequate verification, if it must delete all the proof? Surely just keeping a record that verification occurred wouldn’t be acceptable to Texas—otherwise companies could simply create the record for each user and Texas would have no way of disproving it.
Second, whether or not entities retain identification information is entirely irrelevant. The chilling effect isn’t dependent on whether or not a user’s browsing history or personal information is ultimately revealed. It occurs because the user is asked for their identifying information in the first place. Few if any users are likely even to know about the data retention prohibition. All they will know is that they are being asked to hand over ID to access content that they might not want associated with their identity—and many will likely refrain as a result. The de-anonymization to anyone, for any amount of time, is what causes the First Amendment harm.
Technology has changed, but humans and the First Amendment…not so much. Age verification remains a threat to user privacy and security, and to protected First Amendment activity.
HB 1181 also requires covered entities to display three conspicuous notices on their home page (and any advertising for their website):
TEXAS HEALTH AND HUMAN SERVICES WARNING: Pornography is potentially biologically addictive, is proven to harm human brain development, desensitizes brain reward circuits, increases conditioned responses, and weakens brain function.
TEXAS HEALTH AND HUMAN SERVICES WARNING: Exposure to this content is associated with low self-esteem and body image, eating disorders, impaired brain development, and other emotional and mental illnesses.
TEXAS HEALTH AND HUMAN SERVICES WARNING: Pornography increases the demand for prostitution, child exploitation, and child pornography.
It’s obvious what Texas is trying to do here. And it’s also obvious what Texas will argue: “The government often forces companies to place warnings on dangerous products, just look at cigarette packages. That’s what we’re doing here too!”
You can likely anticipate what I think about that, but it’s worth interrogating in some depth to see exactly why it’s so very wrong.
What Kind of Speech Regulation is This?
Obviously, HB 1181 compels speech. In First Amendment jurisprudence, compelled speech is generally anathema, and subject to strict scrutiny. But the government has more leeway to regulate (or compel) “commercial speech,” that is, non-misleading speech that “does no more than propose a commercial transaction” or "relate[s] solely to the economic interests of the speaker and its audience.”
At the outset, I am skeptical that this is a commercial speech regulation. True, it applies only to “commercial entities” (defined effectively as any legally recognized business entity), but speech by a business entity is not ipso facto commercial speech, nor does a profit motive automatically render speech “commercial.” Imagine, for example, that 30% of Twitter content was found to be pornographic. Twitter makes money through its Twitter Blue subscriptions and advertisements. But does that make Twitter as a whole, and every piece of content on it, “commercial speech?” Certainly not. See Riley v. National Federation of the Blind, 487 U.S. 781, 796 (1988) (when commercial speech is “inextricably intertwined with otherwise fully protected speech,” the relaxed standards for commercial speech are inapplicable).
And even as applied to commercial pornography websites in the traditional sense1 (presuming that in this application, courts would view the notice requirement as a commercial speech regulation), HB 1181 might be in trouble. In International Outdoor, Inc. v. City of Troy, the Sixth Circuit persuasively reasoned that even commercial regulations are subject to strict scrutiny when they are content based (as HB 1181 plainly is), particularly where they also regulate noncommercial speech (as HB 1181 plainly does). If strict scrutiny is the applicable constitutional standard, the law is certainly dead.
But let’s assume for the sake of argument that we are in Commercial Speech Land, because either way the notice requirement is unconstitutional.
Constitutional Standards for Compelled Commercial Speech
For a commercial speech regulation to be constitutional, it must directly advance a substantial government interest and be narrowly tailored so as not to be more extensive than necessary to further that interest—known as the Central Hudson test.
But there’s another wrinkle: certain compelled commercial disclosures are subjected to the lower constitutional standard articulated in Zauderer v. Office of Disciplinary Counsel. Under Zauderer, compelled disclosures of “purely factual and uncontroversial information” must only “reasonably relate” to a substantial government interest and not be unjustified or unduly burdensome. What type of government interest suffices has been a matter of controversy: Zauderer (and Supreme Court cases applying it) have, on their face, related to remedying or preventing consumer deception in advertising.2 But multiple appellate courts have held that the government interest need not be related to consumer deception.
Would HB 1181 Receive the More Permissive Zauderer Analysis?
Setting aside the question of government interest for just a moment, the HB 1181 notices are clearly not governed by the lower Zauderer standard because in no way are they “purely factual and uncontroversial.”
In 2015, the U.S. Court of Appeals for the D.C. Circuit struck down a regulation requiring (to simplify) labeling of “conflict minerals.” While the origin of minerals might be a factual matter, the court found that the “not conflict free” label was not “non-ideological” (i.e., uncontroversial): it conveyed “moral responsibility for the Congo war” and required sellers to “publicly condemn [themselves]” and tell consumers that their products are “ethically tainted.”
Dissenting, Judge Srinivasan would have read “uncontroversial” as relating to “factual”—that is, disclosures are uncontroversial if they disclose facts that are indisputably accurate. Even under Judge Srinivasan’s more permissive construction, the HB 1181 notices are not factual and uncontroversial. They are, quite simply, standard hysterical anti-porn lobby talking points—some rejected by science outright, the rest hotly disputed by professionals and the scientific literature.
And then the Supreme Court decided National Institute of Family & Life Advocates v. Becerra (NIFLA), striking down a California regulation requiring family planning clinics to disseminate a government notice regarding state-provided family-planning services, including abortion—”anything but an ‘uncontroversial’ topic,” the Court noted. In a later case, the Ninth Circuit explained that the notices in NIFLA were not “uncontroversial” under Zauderer because they “took sides in a heated political controversy, forcing [clinics opposed to abortion] to convey a message fundamentally at odds with its mission.”
However you look at it, these notices are not “factual and uncontroversial.” They make claims that are by no means established facts (one might even call them opinions), put the government’s thumb on the scale in support of them, and force speakers to promote controversial hot-button views that condemn their own constitutionally protected speech. They are simply not the type of disclosures that Zauderer contemplates.
Do the Notices Satisfy the Central Hudson Test?
I’ll admit to hiding the ball a little in order to talk about Zauderer. Regardless of whether Zauderer or Central Hudson controls, the first step of the analysis would remain the same: does the government have a substantial interest?
It seems clear to me that the answer is “no,” so the notice requirement would fail scrutiny either way.
Texas may argue that its interest is “protecting the physical and psychological well-being of minors,” as the federal government asserted when defending the CDA and COPA. While the Supreme Court has held that interest to be compelling, I’m not sure Texas can plausibly claim it here. If the harm to minors comes from viewing porn, but the age verification requirement prevents them from seeing the porn while they are minors, is there a substantial government interest in telling them that the porn they can’t even access is “bad?” To my mind, it doesn’t adequately square. (Admittedly, this may be more of a question of whether the notices “directly advance” the government interest.)
The plain language of the notices evinces a much broader theme. To the extent that Texas is trying to protect minors, it seems that it is also trying to protect them from the “harms” of porn even once they are no longer minors—that is, to keep them from getting “hooked on porn” ever. In that sense, the notice requirement is aimed as much at adults as it is at minors. The message is clear: porn is harmful and bad—no matter what age you are—and you should abstain from consuming it.
Here’s where Texas will invariably analogize HB 1181 to mandated warning labels on cigarettes. “It’s constitutionally permissible to force companies to label dangerous products, and that’s all we’re doing,” Texas will say. But the government interest there is to reduce smoking rates—thereby protecting consumer and public health from a physical product that definitively causes serious and deadly physical disease.
HB 1181 is different in every respect, by a country mile. Distilled to its core, the government interest that Texas must be asserting is: generally reducing the consumption of protected expression disfavored by a government that considers it psychologically harmful to readers/viewers. HB 1181 seeks to protect citizens not from a product with physical effects,3 but rather, from ideas and how they make us think and feel.4 Can that be any government interest at all, let alone a substantial one?
It’s a startling proposition that would give government the power to shape the contours of public discourse in ways entirely at odds with First Amendment principles. Could the government invoke an interest in protecting the public from the psychological harms of hateful speech and demand that any commercial entity distributing it affix a warning label dissuading readers from consuming it? What about the damaging effects (including on health) of political polarization? Could the government rely on those harms and force “partisan media” to issue warnings about the dangers of their content? Must gun-related periodicals warn readers that “gun culture” leads to mass shootings at the government’s demand? Or can fashion magazines be forced to tell readers that looking at skinny people causes low self-esteem and eating disorders? You get the picture.
Consider New York’s “Hateful Conduct Law,” recently struck down by a federal district court in a challenge brought by Eugene Volokh and two social media platforms. That law requires any commercial operator of a service that allows users to share content to establish a mechanism for users to complain about “hateful conduct” and post a policy detailing how such reports will be addressed. (Notably, the court rejected New York’s assertion that the law only compelled commercial speech.) While the court ultimately accepted “reducing instances of hate-fueled mass shootings” as a compelling government interest (and then held the law not narrowly tailored), it explained in a footnote that “a state’s desire to reduce [constitutionally protected speech] from the public discourse cannot be a compelling government interest.”
And that is clearly the aim of the HB 1181 notices: to reduce porn consumption. To my mind, this is no different than the Supreme Court’s rejection in Matal v. Tam of a government interest in “preventing speech…that offend[s].” Offense, after all, is a psychological impact that can affect mental well-being. But the First Amendment demands that government stay out of the business of deciding whether protected speech is “good” or “bad” for us.
The wholly unestablished nature of the claims made in HB 1181’s notices also cuts against the sufficiency of Texas’s interest. In Brown v. Entertainment Merchants Association, California could not draw a direct link between violent video games and “harm to minors,” so it instead relied on “predictive judgments” based on “competing psychological studies” to establish a compelling government interest. But the Supreme Court demanded more than “ambiguous proof,” noting that the case California relied on for a lower burden “applied intermediate scrutiny to a content-neutral regulation.” (emphasis in original)
While (presuming again that this is in fact a commercial speech regulation) we may be in Intermediate Scrutiny Land, we are also in Unquestionably Content-Based Land—and I think that counts for something. In all respects, HB 1181’s notice requirement is a content-based regulation justified by the (state’s theorized) reaction of listeners. See Boos v. Barry, 485 U.S. 312, 321 (1988) (“[I]f the ordinance…was justified by the city’s desire to prevent the psychological damage it felt was associated with viewing adult movies, then analysis of the measure as a content-based statute would have been appropriate.”). While I am doubtful that Texas can ultimately assert any substantial interest here, at the very least any asserted interest must be solidly supported rather than moralistic cherry-picking.
In sum, I do not see how any state interest in reducing the consumption (and thus ultimately proliferation) of entirely protected speech can itself be a legitimate one. By extension, I think that invalidates any government interest in protecting recipients of that speech from the psychological effects of that speech—the entire point of expression is to have some kind of impact. Speech can of course have harmful effects at times, and the government is free to use its own speech, on its own time, to encourage citizens to make healthy decisions. But it can’t force speakers to warn recipients that their speech ought not be listened to.
So why do state legislatures keep introducing and passing laws that are undercut by such clear lines of precedent? The “innocent” answer is that they simply do not care: once they’ve completed the part where they “do something,” they can get the media spots and do the chest-pounding and fundraising—whether the law is ultimately struck down is immaterial. The more sinister answer is that, believing that they have a sympathetic Supreme Court, they are actively manufacturing cases in the hopes that they can remake the First Amendment to their liking. Here’s hoping they fail.
1. In contrast, I think that a porn site that provides content (especially if user-uploaded) for free and relies on revenue from advertising is more akin to Twitter than it is to a pay-for-access site for commercial speech purposes.
2. For a good treatment of the Supreme Court’s Zauderer jurisprudence and analysis of its applicability to content moderation transparency laws, see Eric Goldman, Zauderer and Compelled Editorial Transparency: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4246090
3. Notably, some courts have expressed skepticism (without deciding) that a government could even assert “a substantial interest in discouraging consumers from purchasing a lawful product, even one that has been conclusively linked to adverse health consequences [i.e., cigarettes].”
4. Unlike cigarettes, the ideas and expression contained within books, films, music, etc. (as opposed to the physical medium) are not considered “products” for products liability purposes, and courts have rejected invitations to hold otherwise on First Amendment grounds. See, e.g., Winter v. G.P. Putnam’s Sons, 938 F.2d 1033 (9th Cir. 1991); Gorran v. Atkins Nutritionals, Inc., 464 F. Supp. 2d 315 (S.D.N.Y. 2006).