Human beings love, and I mean love, to panic. That’s why we invented technology, of course: to provide endless new things to panic about. At least, that’s how it seems. Generative Artificial Intelligence (GenAI) is the latest technological stop on the Freakout Express, and boy has it given us a lot to lose our collective minds about.
There’s the problem of “AI hallucinations”—basically when large language models generate content that is not in line with the source material or reality. We have already seen the first defamation lawsuit (of what will assuredly be many) over these AI confabulations, and it will be fascinating to see judges—not always the most technologically…current—grapple with the application of doctrine to speech made by machines instead of people.
But then there’s the use of GenAI to deliberately create false information, often by way of deepfakes. To be sure, there are valid concerns surrounding this technology. Non-consensual deepfake pornography, for example, is (to put it perhaps too mildly) utterly vile, and a million times more vile still when it depicts a high school student.
But with the 2024 presidential election season in full swing, doomsayers, pundits, and legislators are laser-focused on all of the ways that GenAI will inevitably—and immediately—destroy our democracy. Deepfakes, they say, will result in a flood of highly-believable disinformation that will wreak unimaginable havoc on elections and tear our institutions asunder.
But there’s a problem with this prediction: we’ve heard it before. In the run-up to the 2020 election, you couldn’t go a day without hearing about how deepfakes were poised to subvert the Will of the People. As it turns out? Not so much. But you know what they say: if your panic doesn’t manifest, panic harder. So here we are, once again, and this time they are absolutely sure it’s going to happen. As a result, federal legislation has been introduced, and in some states laws have already been enacted, doing things like banning the use of deepfakes in campaign or election-related speech and mandating labeling for political speech created with the assistance of GenAI.
In September, I was invited to testify at a hearing of the U.S. Senate Committee on Rules & Administration (which has jurisdiction over legislation related to federal elections) on the question of what to do about GenAI’s feared impact on elections. Because Congress so frequently demonstrates that it doesn’t know anything about First Amendment law, I decided to give them the law professor treatment: my testimony took them from constitutional protection for false speech, through the formidable First Amendment challenges of regulating core political speech, and on to an application of doctrine to one of the bills proposed to ban “deceptive” election-related speech created with GenAI. (Spoiler: while very narrow, targeted regulation may be technically possible, the bill in question is…not that.)
In November, I returned to the Senate to participate in Chuck Schumer’s A.I. Insight Forum session on the same topic. My written statement for that forum reviewed some recent examples of political speech that used GenAI, which ranged from actually good to, at worst, mildly objectionable. I raised the same First Amendment issues, and posited that more good could be done by strengthening the laws that guard the electoral process, which prohibit misleading voters about the actual mechanics of elections (an easier constitutional lift), and by funding digital literacy programs.
Predictably, legislators have not been entirely thrilled with my assessment that the First Amendment prevents them from legislating Bad Speech out of existence. And they have plenty of people willing to tell them that of course Congress can regulate deceptive GenAI media in politics—ignoring both that deceptive political media has been around for a lot longer than GenAI and all the cases showing that it’s not that simple.
Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation, is one of those people. He (along with Nicole Gill of Accountable Tech) joined WBUR’s On Point radio show a couple of weeks back to talk about AI and elections. At approximately 24:54, host Meghna Chakrabarti played the conclusion of my oral testimony as a counter to the prevailing “something must be done” winds:
CHAKRABARTI: Yeah, we're going to talk about that, because it's a really important part of the overall picture here, but you mentioned some action at the federal level, Nicole. On September 27th, there was a Senate hearing held on the use of AI in elections. And it was also a chance to debate possible safeguards against AI deceiving voters.
And Ari Cohn of Tech Freedom was one of the speakers, and he warned lawmakers while he was testifying that legislation that was too restrictive could actually be harmful:
“Reflexive legislation prompted by fear of the next technological boogeyman will not safeguard us. Free and unfettered discourse has been the lifeblood of our democracy and it has kept us free.
If we sacrifice that fundamental liberty and discard that tried-and-true wisdom that the best remedy for false or bad speech is true or better speech, no law will save our democratic institutions. They will already have been lost.”
CHAKRABARTI: Now, Nicole and Darrell, and Nicole, I'll start with you first. This is a very important counter argument here, because it really does go to a fundamental of another aspect of our democracy, that ideally the federal government should not be regulating speech, and also how hard it is to determine what is harmful speech, right?
Because what's the harm that we're defining here? It does seem to me that some of these bills could run up against that wall. How would you respond to that?
Nicole Gill, despite my many disagreements with her, raised a good point in explaining that her concern is largely about a deepfake “October surprise.” That concern is salient: in my testimony, I pointed out that a narrow law applicable only in the short window before an election, when there may be no opportunity for counterspeech, is more likely to survive constitutional scrutiny.
But then Darrell West, a man with two advanced degrees in political science, took a hard left turn and did a speedrun through the kind of First Amendment takes you’d expect to see from engineers on twitter dot com. When pressed with the (perhaps imperfect) analogy that nobody would stand for a law telling a newspaper that it couldn’t put out a daily edition, West replied:
WEST: Absolutely. And that type of provision is never going to pass legal muster. But on the freedom of speech argument, all of us support freedom of speech, but
You know where this is going. There’s always a Free Speech But.
we've never had unlimited freedom of speech.
He went a little out of order, invoking Trope Three first, but we’ll give him credit for moving from the general to the specific. Unfortunately for him, that’s where the credit ends, because this is a hollow statement meant to evade the actual question: is this particular speech protected or unprotected? Lest you think this is just an honest opening to a fact-based and rational discussion of First Amendment principles, he continued:
Like you cannot yell fire in a crowded theater, because it creates harms to other individuals.
You cannot advocate violence.
Funny thing…actually you can. We know this because the very case that officially relegated “you can’t shout fire in a crowded theater” to “uninformed trope” status said so:
“[T]he constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Brandenburg v. Ohio, 395 U.S. 444, 447 (1969).
What’s next?
You cannot engage in illegal activities.
Aside from eliding the whole “speech” issue, this is a tautology with a side of question begging. The entire point of the First Amendment is that the government is not free to just declare whatever speech it wants “illegal.” If just saying “this speech is illegal” sufficed, the First Amendment would mean absolutely nothing.
Moving on.
You can't use voice to engage in hate speech.
Look, political scientists are not lawyers and should not be held to the same expectations of knowledge. But if there is anything I would expect even a high school graduate to know, let alone someone with a doctorate in political science from an American university, it is that the First Amendment protects “hate speech.”
That has been the subject of numerous Supreme Court decisions, a metric ton of media, and enough scholarly writing that it makes you wonder whether we would have cured cancer already if we weren’t so preoccupied with whether it should be illegal to be mean to people.
That someone could be a purported expert in American politics and not know this very basic, simple fact is baffling.
Companies cannot engage in fraudulent advertising; they get fined for a consumer fraud in that situation. So my argument is we've litigated freedom of speech cases for decades. Like we actually have rules of the road in that area. We just need to apply those rules to the digital space. Right now, there are no guardrails. There are no rules in that area.
I am confused as to why West seems to think that the “rules of the road” set out in First Amendment jurisprudence cease to exist when digital things are involved. That’s simply not the case, and we have roughly two decades of caselaw saying that those same “rules of the road” apply to the Internet and new technologies just like they apply everywhere else. What is he claiming? That companies can engage in false advertising online but not in the physical world? The FTC would certainly beg to differ.
It's a wild west, anything goes. That creates a lot of dangers for us. We know that we are facing choices that are very fundamental in this election, perhaps even the future of American democracy. My greatest fear is this election gets decided based on disinformation.
I see that West’s understanding of the “Wild West” is about as nuanced as his understanding of First Amendment doctrine. But again, no, it’s not. If you want to strengthen laws against spreading false information about the electoral process, go right ahead; it’s probably a good idea just to cover all our bases. But to claim that all of the laws we have passed for analog speech somehow cease to apply when a computer is involved betrays a serious lack of understanding of how reality works.
Chakrabarti, perhaps psychically sensing my frustration, pushed back:
CHAKRABARTI: But let me push on this a little bit, because lies may be odious, but they're not illegal.
And in the examples that you put out there, the most easily graspable one is you can't yell fire in a crowded theater. The harm is pretty well defined, right? The harm is causing panic. People might get injured in running out of the theater. In campaign or elections misinformation and disinformation, what is the harm?
It might be shaping what people believe, but lots of forms of advertising do that. And ultimately people are casting votes. That's a perfectly legal and desirable thing. What is the defined harm that might justify curbing AI generated speech, Darrell?
While I would have pushed back on the technical incorrectness of the “fire in a crowded theater” line, it’s a good counter. West’s reply, on the other hand…
WEST: There are defined harms in the election area. For example, it is illegal to basically buy or sell votes, that is illegal all across the country, but yet there are websites that have been doing that, like that should be illegal. We should take down those websites. That is perfectly legal. That is not freedom of expression on the part of those individuals, because they are advocating illegal behavior.
The fact that the election is going to be on a Tuesday in November, you can't go around telling people, "Oh, we changed the election date. It's actually going to be on Thursday." And you're targeting Black voters with that message, knowing that would harm Democrats, like that type of stuff actually is illegal as well.
So there are a number of defined harms, and we just need to apply those rules to the digital space. Because we already have them in the non-digital world.
If that is the best argument he’s got, I’d say we’re doing great! Darrell West should be happy to learn that the same laws that prohibit buying and selling votes offline also prohibit it online, and there have already been prosecutions over tweets and robocalls seeking to mislead people about the mechanics of voting. Those rules already apply to the digital space, so what is he so worried about? What are we even talking about? There’s simply no there there.

And if you want to talk about how the rules should apply to digital and non-digital speech alike, then you should be especially worried about legislation that prohibits AI deepfakes while letting campaigns with a lot of money to spend on media put out deceptively edited media made by hand, like the Romney campaign’s “you didn’t build that” ad or the Biden campaign’s “COVID-19 hoax” ad attacking Donald Trump. It almost feels like people don’t have a coherent idea of what they are worried about in the first place.
To sum up: in response to the question “aren’t there First Amendment problems with regulating political speech,” we got “well, the First Amendment isn’t unlimited, and here are all of my entirely wrong ideas about what speech isn’t protected. In conclusion, we should pass laws making the generally applicable, technology-agnostic existing laws apply to the digital realm [where they already apply].” Sounds like the computers aren’t the only ones hallucinating.
It turns out that technology mirrors the humans that created it. We see it in algorithms that try to predict what content we want to see based on our history, and now we’re seeing it with AI. Of course AI is going to make shit up; we make shit up all the time! If you’re worried about the course of American democracy being influenced by disinformation, your first look should be inward. If you’re spreading false information about the law under the cover of “expertise,” it’s no wonder the machines aren’t able to get it right either.