Two experts weigh in on Big Tech banning the president from their platforms in wake of the deadly Capitol riot
Major tech platforms, including Twitter and Facebook, have either permanently banned or restricted President Donald Trump from their platforms in response to his inflammatory rhetoric that culminated in a mob of his supporters storming the U.S. Capitol last week in an effort to block certification of President-elect Joe Biden’s win.
The moves were aimed at preventing further efforts to incite violence and followed months of the president’s unfounded claims of voter fraud and refusal to concede his election loss. As the president denies that he played a role in the violence that led to his second impeachment, both he and his allies have decried these actions, claiming censorship and a violation of their First Amendment rights.
Syracuse University law professor Shubha Ghosh pushed back against these assertions, saying private companies — no matter how large — don’t have to comply with the First Amendment because it is meant to protect people from being silenced by the government.
“It’s no different than any other private business refusing service to somebody,” Ghosh, who specializes in antitrust law, said. “So if somebody goes into a grocery store or a movie theater or a bar and starts acting in an unruly way, let’s say, the private establishment is within its rights to have them be barred or to have them thrown out of the establishment.”
But the president’s Twitter ban has raised new questions about tech regulation. While various groups like the NAACP praised the move, others like the ACLU — which has otherwise been critical of the president’s online rhetoric — said it represented a concerning exercise of “unchecked power.” World leaders, including EU Commissioner Thierry Breton and German Chancellor Angela Merkel, have called the decision “problematic.”
In a series of tweets, Twitter CEO Jack Dorsey himself acknowledged the ramifications of banning the president, saying it represented “a failure to promote healthy conversation.” But he ultimately said he believed it was the “right decision” in light of public safety concerns.
The debate over Big Tech’s increasingly powerful role isn’t a strictly partisan issue: both sides have argued that platforms like Twitter and Facebook have too much power to shape social discourse and censor speech. Social media companies also enjoy protection under federal law through Section 230, a provision of the 1996 Communications Decency Act that shields platforms from being held liable in court for the speech of their users. The president and his allies have repeatedly pushed to repeal Section 230, but Ghosh predicted that any reforms will take time.
“There will be some gradual pushback on what is now or what has been fairly much blanket immunity,” he said.
Megan Squire, a computer scientist who studies online extremism, said what can be frustrating for technologists like her is trying to determine where the line falls between acceptable and unacceptable platform behavior. She said the rules guiding content moderation don’t always feel transparent, and enforcement of terms and conditions doesn’t seem consistent.
“Transparency could definitely be improved on just about every platform,” she said. “There also needs to be clear and consistent ways for people to report bad behavior on platforms.”
Though Twitter and Facebook have come a long way in combating hate speech and misinformation, Squire said these giant tech companies need to act in concert when setting rules to avoid a game of whack-a-mole, in which users banned from one platform simply pop up on another and cause the same problems.
“The community needs to act as a unit, and just really inform each other and act more like a team,” she said. “That’s hard to do. I mean, this is the tech world, right? So that’s going to be a difficult ask, but I think that’s where we are. That’s what needs to happen.”
This could be especially important amid security concerns leading up to Biden’s inauguration next week, as law enforcement braces for further extremist violence. In announcing the permanent ban of President Trump’s account, Twitter said plans for future armed protests had already begun proliferating on and off its platform, including a proposed secondary attack on the U.S. Capitol and state capitol buildings on Jan. 17.
Though conservative-friendly social platforms like Parler have recently all but vanished from the internet, Squire cautioned that deplatforming won’t stop the calls for violence online. She warned that angry Trump supporters and extremists will continue to seek out other digital gathering spaces to spew violent rhetoric — or worse, plan further attacks.
“The deplatforming issue of what happens next is top of mind for a lot of us, because it isn’t just ban them and then sort of walk away,” she said. “They’re not going to just stop and go away quietly.”
Written and reported by Tess Bonn.