Content Moderation at Work
There are several fast-moving parts here, so forgive me if this post becomes outdated between writing, publication, and reading. It’s about the rapid shifts, and rapidly shifting arguments, around content moderation. In a blur of activity, Twitter went from a 24-hour ban to a permanent suspension of the @realDonaldTrump account. Facebook, Google, Amazon, and others have also banned politicians and the Parler app for activities related to the insurrection at the U.S. Capitol on January 6, 2021.
As communications technology evolves, we continuously face new challenges and opportunities. For example, Lincoln was the first president to wield the telegraph and its advances, making him more efficient in communications than prior presidents. Kennedy had the first televised presidency, and Trump was the first to leverage social media.
The topics of censorship, the First Amendment, Section 230, and antitrust trended throughout 2020. There are many opinions about whether or how social media should be regulated or changed. It’s important to note that many of the rules affecting social media also pertain to enterprise communications.
Content moderation isn’t a new problem. There have always been limits to free speech. For example, I was taught that free speech doesn’t entitle someone to falsely yell “fire” in a crowded theater. Yelling is now something we’re more likely to encounter online, and various technologies amplify speech. Giving people a voice sounds like a noble idea until they use it to say bad things.
Social networks such as Twitter and Facebook, in addition to enterprise solutions such as email and team chat apps, amplify a user’s voice. They offer efficient, quick, and broad communications — and the topics of censorship, net neutrality, and antitrust all play into enterprise communications as much as, or more than, they do with social networks.
Censorship is often confused with the First Amendment. The Constitution protects us from the government restricting what we can say, but it’s not a mutual arrangement. As citizens and private organizations, we retain the right to censor whatever we like. Furthermore, we do it all the time.
An example of censorship is a parent prohibiting their child from saying bad words. That’s a form of policy censorship, and policies can be explicit or informal. There are forms of legal censorship, too, such as prohibitions on child and non-consensual pornography. The protection of information such as customer data is also a form of censorship, though that one needs improvement. There have been countless attempts at automating censorship, but it has proven elusive. Content moderation has inherently subjective components.
The First Amendment doesn’t protect speech on private services, nor does it apply to corporate email or team chat conversations. The government can’t stop you from sharing your opinions, but privately owned networks can. Service providers and other private organizations can define what they consider permissible. Ideally, they are fair and transparent – but again, that’s easier said than done.
As communication tools become more efficient and widespread, censorship becomes more restrictive. For example, email was one of the first widespread communications tools that enterprises deemed easy and inexpensive to use. As a result, many employers created appropriate use policies and restricted the capability of sending company-wide emails.
Team chat apps introduce several new challenges. Chat messages tend to be casual and short, which invites misinterpretation. They are also penetrating deeper into organizations, and often represent the first enterprise communications tool to reach certain roles. There are tremendous efficiency and alignment benefits to enterprise-wide communications, but there are also risks. Frustrated employees can spread discontent. Unchecked, the platforms can also facilitate the spread of hate speech or harassment. As chat apps mature, we see stronger forms of compliance monitoring, as well as solutions that deliberately restrict communications. For example, Microsoft just introduced Ethical Walls in Teams, which prevent communications between specific individuals or groups.
Amazon’s recent decision to boot Parler from its web services blurs the boundaries between applications and infrastructure. Content moderation on an infrastructure service may sound odd, but it doesn’t matter: Amazon can limit services to customers. Amazon’s terms of service clearly state that it reserves the right to terminate users from its networks at its sole discretion. Most businesses can refuse customers as long as the discrimination doesn’t pertain to protected classes such as gender, race, or religious beliefs.
We’ve dealt with this before in communications and created what’s known as a common carrier. These businesses are regulated and generally are not allowed to refuse customers. AT&T (the Bell System) was the nation’s first common carrier, and today, numerous carriers provide services without discrimination.
One of the key differences between Amazon Web Services (AWS) and the Bell system is competitive choice. Common carriers often trade competition for regulation. Parler has a choice regarding where to host its platform. It can choose another infrastructure provider or even self-host. However, AWS (along with Google Cloud Platform (GCP) and Microsoft Azure) offer numerous operational and financial advantages.
The bigger issue here is that, unlike television or the telephone network, the Internet is largely unregulated. That means lots of private companies are making policy decisions regarding operations and content. AWS provides key infrastructure, but other infrastructure pieces come from elsewhere, such as content delivery networks (CDNs).
Even if Parler does reestablish itself, it could still face content moderation from broadband carriers. During the Trump administration, the Federal Communications Commission (FCC) determined that broadband carriers aren’t subject to the rules of common carriers (Title II of the Communications Act). With most of the protections of net neutrality gone, broadband providers such as Comcast and AT&T can block or throttle Parler (and other) sites — and may have (it's hard to tell when this occurs).
As the Internet has become such a critical communications channel, we may need to reestablish government regulation of its core infrastructure including cloud-hosting platforms and broadband networks.
In addition to AWS, we also saw Apple and Google block the social app Parler. It’s a familiar story: the app facilitated violence, and the store providers don’t want to distribute it. However, in this case, the app maker has few options. If you want your app to be on an iPhone, it needs to be distributed by the Apple App Store. Note that Apple sold 195 million iPhones last year alone.
Parler is an alternative social network. Its creators bypassed the policies and restrictions of the larger social networks, and they can find alternative ways to host the service. However, once they get that sorted, they will still need to distribute a mobile app. Apple and Google have a duopoly on smartphones and smartphone app distribution, and since Parler was blocked from both stores, it has no viable way to distribute its mobile app.
It’s legal for private companies to censor speech, discriminate against customers, and make other subjective policy decisions. But the underlying assumption is that competition and a free market will work things out. If we don’t like the decisions Facebook or Amazon makes, we can go elsewhere. But where do you go when Apple and Google block you from their app stores?
This situation highlights the need for antitrust action. It’s not about censorship or the First Amendment. It isn’t even about Parler. There have been and will be other apps that get restricted from the smartphone marketplaces. Most users access the net from their phones, and that access is gated by two private companies.
For that matter, breaking up Facebook, Twitter, and Amazon would solve a lot of problems. The reason we are so concerned about censorship and speech is the limited number of viable alternatives. Censorship and freedom of speech are not new issues, yet we only seem to hear about them in connection with these big companies.
A very tall friend of mine doesn’t like a certain brand of cars because he feels they are unfriendly to tall people. He’s not upset about it; he just buys a different brand.
There’s a lot to unpack regarding Internet speech and regulation, and many of these issues are solvable with competition. We need to include antitrust and net neutrality in our free speech and censorship debates.
Dave Michels is a contributing editor and analyst at TalkingPointz.