Is the removal of Donald Trump from Twitter a free speech issue?
We can answer this question by asking a few more. Let’s dive in.
Question One: What are our free speech rights on the internet?
Here are a few quotes from the Supreme Court, drawn from cases decided over the last century: "advocacy of the use of force" is unprotected when it is "directed to inciting or producing imminent lawless action" and is "likely to incite or produce such action." In other words, speech that advocates violence or constitutes a "clear and present danger" is NOT protected by the First Amendment.
So, inflammatory speech that calls for imminent lawlessness and danger is not protected under the First Amendment.
Question Two: Was Donald Trump inciting imminent lawless action and producing a clear and present danger?
Twitter believes his statements were obvious calls-to-action to his supporters:
“…our determination is that the two Tweets above are likely to inspire others to replicate the violent acts that took place on January 6, 2021, and that there are multiple indicators that they are being received and understood as encouragement to do so.”
Based on question one, a court of law needs to decide what statements Trump made (both on Twitter and on the podium in front of a crowd on that day) that would count as protected, or not, by the First Amendment. But what about when the content is posted on tech platforms like Twitter? Is Twitter allowed to make determinations like the one above?
Question Three: Do the tech companies have either the responsibility or the authority to remove inflammatory speech from their platforms?
Quoting from the First Amendment: “Congress shall make no law…abridging the freedom of speech….”
The first word of the First Amendment declares who cannot control your speech: Congress. The government shall make no law abridging your ability to speak. Period.
Private enterprises, however, can do as they please. They can control and moderate the content, assets and material they store on their own servers and publish for others to see. Google, Apple, Amazon and others have clear, enforceable policies about what they allow (and what they don’t) on their platforms.
Here’s an excerpt of Twitter’s content policy:
We reserve the right to remove Content that violates the User Agreement, including for example, copyright or trademark violations or other intellectual property misappropriation, impersonation, unlawful conduct, or harassment.
If you are a Twitter user, you have legally agreed to this term as part of your account registration process. If you violate it, your account can be suspended. Simple as that.
Question Four: Does Section 230 help or hurt the tech companies as it relates to speech on their platforms?
Trump and others have brought up the term “Section 230” as something of a threat to tech companies. They’re referring to a passage from the Communications Decency Act, Section 230, which states:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
That single sentence, often called the "twenty-six words that created the internet," is the heart of Section 230 (specifically, subsection (c)(1)).
Section 230 exists to legally separate users (or content creators) from the platforms that make their material available. The broader law, the Communications Decency Act of 1996, established the internet's first decency rules (protecting minors, limiting lewd or obscene content, and so on).
It was the 90s, so think more Geocities than Facebook. That said, the legislation still applies to modern internet companies, as do the amendments to the act (largely dealing with sex trafficking) that followed later.
Critics argue that the law is missing provisions covering federal crimes, hate speech, and other concerns, and these gaps are being hotly debated. Now, consider this: if the protections were modified to cover those areas, or if Section 230 were repealed entirely, the law would actually place MORE limitations on speech on the internet.
If Section 230 were modified further, speech would ultimately become less free, because platforms could be held liable for hosting it. Would that hurt the tech companies? No. Would that hurt hate speech? Yes.
Question Five: What about Parler?
Take the key points from Questions 3 and 4: private enterprises can control the content on their platforms, as a means of complying with the law or with their own policies.
Google, Apple, Amazon and others play a role in harboring Parler today. They merchandise the app for downloading from their app stores, they run the servers that house Parler's content, and they host its web properties.
In order to use these platforms, Parler's developers would have had to agree to a set of Terms & Conditions. Here's a snippet from the (very) long Terms of Service for Amazon Web Services, the server environment where Parler stores all its user content:
If we reasonably believe any of Your Content violates the law, infringes or misappropriates the rights of any third party, or otherwise violates a material term of the Agreement (including the documentation, the Service Terms, or the Acceptable Use Policy) (“Prohibited Content”), we will notify you of the Prohibited Content and may request that such content be removed from the Services or access to it be disabled. If you do not remove or disable access to the Prohibited Content within 2 business days of our notice, we may remove or disable access to the Prohibited Content or suspend the Services to the extent we are not able to remove or disable access to the Prohibited Content.
So: if the speech accumulating on Parler violates the law, or violates the terms and conditions of the private companies Parler uses, then Parler is in violation and can be removed or disabled from those services as per their agreement.
To summarize:
- If you believe Donald Trump’s statements “incited or produced imminent lawless action”, then those statements were not protected by the First Amendment.
- The tech companies are not bound by the First Amendment, only the government.
- The tech companies do have legislation they are bound by, but they apply more stringent content guidelines themselves.
- Therefore, Twitter or other private companies can decide what they allow or disallow on their platforms.
This issue isn’t over, and we’ll probably see legal debates stretch out over the various components (Trump, Twitter, and so on) for years to come. However, it should now be clear what protections exist and why Twitter was within its rights to make the decision it did.