DIGITAL ACCESS

Trump supporters outside the US Capitol during the Jan. 6, 2021 insurrection. (Wikipedia, Tyler Merbler).

DE-PLATFORMING

Social media companies have terms of service that allow them to shut down content and accounts that promote violence, misinformation, and hate speech. For want of a better term, this is called “de-platforming.”

The best-known instance of de-platforming is the permanent ban on US President Donald Trump’s Twitter account announced on Jan. 8, 2021, two days after the attack by Trump’s followers on the US Capitol in an attempt to stop the election certification process and to execute the vice president on the Capitol mall. Twitter said that a continued Trump account would be “likely to inspire others to replicate the violent acts that took place on January 6, 2021.” The ban cut Trump off from 88 million followers. Facebook and other social media companies also banned Trump, and legal opinion tends to place Trump’s Jan. 6 speech in the “imminent lawless action” category not protected by the First Amendment. In “The First Amendment Doesn’t Protect Trump’s Incitement,” a Harvard Law professor argues that Trump clearly went beyond the First Amendment boundary and incited imminent lawless action.

In response to de-platforming, Trump (a) sued the social media companies and (b) tried to create a new social media channel called “Truth Social.” Neither course of action seemed likely to succeed; by September 2022, Truth Social was reported to be on the verge of bankruptcy.

Many others on the right wing have been de-platformed. According to the BBC, Twitter suspended more than 70,000 accounts linked to the false right-wing conspiracy theory QAnon. Apple, Google, and Amazon Web Services banned the right-wing Twitter alternative Parler, temporarily shutting down the site. An antitrust lawsuit by Parler was not successful.

Others de-platformed are those who continued to claim that Trump won the 2020 election. Among them are Trump’s lawyer Sidney Powell (who later said she was “just kidding”), former national security adviser Michael Flynn, MyPillow CEO Mike Lindell, and Virginia state senator Amanda Chase, who was challenged by the state attorney general to provide evidence to back up claims of fraud. US Rep. Marjorie Taylor Greene was also de-platformed by Twitter on Jan. 2, 2022.

Among the many questions about “de-platforming” are:

  • Are Facebook and Twitter regulating posts “to make sure some people are not offended,” or are there other issues involved?
  • What about the First Amendment, which guarantees freedom of speech to all American citizens?
  • Is de-platforming an even-handed enforcement of a clearly stated regulation, or is it vague or overly broad?
  • What could (or should) government do to help ensure that these policies are enforced fairly?
  • Are conservatives being targeted for de-platforming, as Sen. Josh Hawley (R-Mo.) asks?

Facebook and Twitter said that they suspended Trump “due to the risk of further incitement of violence.” They continue to insist that bans on false and inflammatory content have long been part of their terms of service and that they have been fighting misinformation through, among other things, the International Fact Checking Network.

In response to his de-platforming in January 2021, Trump filed a lawsuit against Facebook, Twitter, and Google on July 7, 2021, claiming that the tech giants were “state actors” (by virtue of their protection through Section 230) and that censorship / de-platforming was therefore unconstitutional. Florida also passed a law banning de-platforming in the spring of 2021, but the law was struck down by a federal court. In October 2021, the Trump suit was moved to California, and legal experts consider it very unlikely to prevail.

Current law versus underlying issues    

The First Amendment protects the media from censorship by the government, and private companies like Twitter and Facebook  are treated the same way as the New York Times. Not many people would argue that the government should tell the New York Times to carry (or not to carry) a daily opinion column by Donald Trump. 

But the underlying issue — that a national public forum can exclude someone like Trump  — is unsettling for many people who see big tech censoring a major political figure.  

The issue had come forward a year earlier, in Prager v. YouTube (2020), when the nonprofit PragerU sued YouTube (and its parent Google) for restricting access to some of its videos that YouTube said broke its rules. (The case is discussed below.)

The libertarian Electronic Frontier Foundation  said this about the Trump account shutdowns: 

The decisions by Twitter, Facebook, Instagram, Snapchat, and others to suspend and/or block President Trump’s communications via their platforms is a simple exercise of their rights, under the First Amendment and Section 230, to curate their sites. We support those rights. Nevertheless, we are always concerned when platforms take on the role of censors, which is why we continue to call on them to apply a human rights framework to those decisions. … Going forward, we call once again on the platforms to be more transparent and consistent in how they apply their rules—and we call on policymakers to find ways to foster competition so that users have numerous editorial options and policies from which to choose.

In the absence of any legal requirements on social media companies, some people have questioned whether the policies are consistent or fair.  

  • The families of Alison Parker and Adam Ward, a journalist and a camera operator killed during a live WDBJ Roanoke television broadcast in 2015, have asked in court that YouTube and Google take down videos of the killing and block false and hateful speech about the incident. A search led Alison’s father “down a rabbit hole of painful and despicable content,” including claims that his gun safety foundation was a fraud and that Alison had plastic surgery to live a secret life in Israel. So far Google and YouTube have refused to take down the video or associated misinformation, citing Section 230. Parker’s father has filed a complaint with the Federal Trade Commission alleging that YouTube failed to enforce its own Terms of Service.
  • Guy Babcock, a British software engineer, asked Google to take down websites that falsely accused him and his family members of being thieves, fraudsters, and pedophiles. Google refused, again citing Section 230, but after years of investigation and litigation, Babcock was able to trace the false information to one deranged former employee.
  • Carrie Goldberg, author of “Nobody’s Victim,” was harassed by an ex-boyfriend with “revenge porn.” Police said there was nothing they could do. She now represents people who have been subjected to “sextortion.”

What are the issues surrounding de-platforming and restraint of harmful content?   

1. Whose rights?  The primary issue, according to legal experts, involves the First Amendment rights of the social media companies (and not those who post on these public forums).

This was clarified in Prager v. Google (2020), in which the court held that YouTube is not a government actor, common carrier, or public forum bound by First Amendment limits merely because it hosts a forum for public speech. It is a private forum.

Prager casts itself as a university, but it is not one; it does not qualify for the “.edu” domain name suffix. It is best described as a non-profit educational source of conservative and right-wing advocacy videos. YouTube labeled some of Prager’s videos as “mature content,” appropriate only for Restricted Mode, cutting them off from users with those restrictions in place, for example, children whose parents had restricted their browsing.

In a Prager video, attorneys for the video maker argue that the question is whether YouTube is (or should be considered) a common carrier or a public forum. This is a “heads I win, tails you lose” argument, since either way Prager videos would not be restricted on YouTube.

YouTube is a private forum, the 9th Circuit court found, and “not subject to judicial scrutiny under the First Amendment.” Its editorial decisions are protected by the First Amendment, and its editors have the right to manage the platform as they see fit, the court said. The Electronic Frontier Foundation filed an amicus brief in the case, arguing that the remedy for bad speech is not more government regulation.

2. Social responsibility: Many people believe that inappropriate, disturbing, and invasive photos and videos can and should be taken off the web or kept away from children, and that the social media giants should be especially responsive when victims and relatives complain. But there is no enforcement mechanism; nothing more than a social media company’s own sense of ethics governs the enforcement of its own policies. The companies operate with impunity. Why, then, are social media companies reluctant to regulate their own content?

In the first place, it is expensive. If the law allows them to leave the content up, that is cheaper than taking it down. Also, shocking content is routinely promoted through algorithms, and that means profits. The algorithmic amplification of extreme content “is a business choice made in pursuit of profit; eliminating it would reduce the harm from hate speech, disinformation, and conspiracy theories without any limitation on free speech,” says Roger McNamee in a Wired magazine article. He adds that Facebook’s own research revealed that 64 percent of the time a person joins an extremist Facebook group, they do so because Facebook recommended it through its algorithms.

3. Transparency and oversight:  Laws protecting privacy in these and similar areas are far more common in Europe than in the US, and some legal experts argue that a more democratic rule of law must be applied, with transparency, to social media policies.

 “In this moment, the conversation we should be having—how can we fix the algorithms?—is instead being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online.  It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification. In fact, that’s the very problem that needs fixing,” says Renee DiResta of the Stanford Internet Observatory in a Wired essay titled “Free Speech Is Not the Same As Free Reach.”

4. What if we treated social media as common carriers or public accommodations? Eugene Volokh asks whether new social media operate differently than old media and whether they would accept common carrier status in return for immunity from content regulation. This would square with Justice Clarence Thomas’ idea of rejecting the public forum doctrine and applying the common carrier or public accommodations doctrines in return for what he called “special government favors” in his concurrence in the Knight First Amendment Institute v. Trump litigation (2017-2021). This is probably a reference to the immunity in Section 230.

Many legal scholars believe that the common carrier approach would be inappropriate, since “digital platforms do not serve the public indiscriminately in a manner that would confer common carrier status,” according to Sarah S. Seo (“Failed Analogies,” Fordham Intell. Prop., Summer 2022). “First, social media platforms narrow their services to those potential users with either a valid phone number or email address. Second, the platforms employ algorithms that create individualized experiences for each user, actively monitoring available content.”

5. Recent litigation: In September 2022, a federal appeals court in Texas upheld a state law that prohibits social media companies from removing speakers because of their political views.

“We reject the idea that corporations have a freewheeling right to censor what people say,” the court wrote in its Sept. 16, 2022 opinion.

 

Washington Post Zoom discussion on the First Amendment rights of social media companies.


READING