
In 2016 and the years that followed, we learned that foreign powers had used social media platforms, fake news, and other methods to influence our elections, and we as a people were largely livid. How dare they? Someone, it seemed to us, needed to DO SOMETHING.

So as the election of 2020 neared, social media platforms, most notably Facebook and Twitter, “took action” to supposedly suppress fake news, hate speech, and other harmful content. In other words, they started to “censor” what users put on their platforms. Hearing after hearing in Congress raised the question again:

“Are social media platforms responsible and even liable for what users post online?”

Understand, this has implications beyond Facebook and Twitter. If this law, the small clause known as Section 230 of the Communications Decency Act, were removed, you as a website owner could be liable for comments made on your posts, any harmful reviews someone might manage to post on your site, and any guest or opinion-based content on your site.

If any of it were construed as “hate speech” or “fake news,” you could be held legally responsible. In other words, the offended party could sue you, even if you did not post the content but simply allowed it to exist on your platform.

This creates large problems. First, no matter who you are or what website you own, you would fall under this liability. That means even Google would have to delist sites containing “hate speech” or “fake news.”

But the second problem is what this article will focus on: who does the fact-checking and checks all that content? Who gets to decide what defines hate speech, fake news, or even just undesirable content?

The answer is complex and expensive, and despite great effort it may still not solve the problem.

Human Bias and Limitations

The first issue is simple to talk about but much harder to fix, because there are two aspects to it, and neither is something we can solve quickly or easily, if at all.

The first is bias. Every human being has it, and as much as we like to say we are unbiased, almost no one can look at content without the filter of who they are. If you are more conservative in your politics, you may view certain content as harmful or even false. If you are liberal in your politics, the same applies. Even centrists who try to walk the narrow line of common sense carry implicit biases when they look at content of any kind.

So even if we could have humans review all the content put on a social platform, who would those unbiased humans be? Truth be told, they would be the hardest humans to find: they are often not on social media at all, or, if they are, they are its quietest members when it comes to their political, religious, and other controversial views.

And if you found them, would they even be interested in fact-checking thousands of posts on social networks? Unlikely. It should also come as no surprise to conservatives that social media and tech employees generally lean liberal in their politics. Even if they did not, the sheer volume of conspiracy theories is often much higher in extreme conservative circles, which is not where most conservatives live, politically speaking.

There will always be spillover: content on the fringe of a movement, though not as extreme as the movement itself, may still suffer “censorship by association.” That is something we will tackle in a moment when we talk about artificial intelligence.

However, the other problem is the sheer volume of content posted to Facebook and other social media every day. Despite all you hear about people leaving Facebook in droves, membership continues to grow every month, and the average number of daily users keeps rising.

Quite simply, there is too much content for any human team to evaluate adequately unless that team were enormous, and with a team that large, there is no way Facebook or any other platform could turn a profit. So they don’t rely on human teams alone. Instead, they rely on artificial intelligence backed up by smaller human teams.

But even the most intelligent of algorithms fail in this area.

The Downfall of Artificial Intelligence

Can’t artificial intelligence be trained to do this work? You bet, to an extent. But artificial intelligence is only so smart. It can look for keywords and phrases that might indicate “fake news” or misinformation. This is why posts on Facebook that had nothing to do with the election got flagged with “Learn the facts about the election.”
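To make that mechanism concrete, here is a minimal sketch of a first-pass keyword filter of the kind described above. Everything in it, the phrase list, the flag_post function, and the sample posts, is an illustrative assumption, not any platform’s actual rules:

```python
# Minimal sketch of keyword-based flagging (illustrative only; these
# phrases and this logic are assumptions, not a real platform's rules).
FLAGGED_PHRASES = [
    "rigged election",
    "miracle cure",
    "they don't want you to know",
]

def flag_post(text: str) -> bool:
    """Return True if the post contains any watched phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)

# A gardening post sails through; a post that merely mentions a watched
# phrase gets flagged whether or not it is actually misinformation,
# which is how unrelated posts end up labeled with election notices.
print(flag_post("My tomatoes are thriving this year."))       # False
print(flag_post("This rigged election talk is exhausting."))  # True
```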

If you thought a human was doing that, or hiding photos because they might be graphic, you probably have an odd mental picture of a massive Facebook warehouse filled with people doing nothing but staring at screens and working through assigned feeds all day long.

This is also why “fake news” sometimes gets through the filter, at least initially. Scammers get to know the algorithm and the words and phrases it looks for, and they cleverly avoid them. So when people ask, “Why did Facebook censor this and not that?” the answer often has little to do with human bias. It has more to do with human cleverness.
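A purely lexical filter like the sketch above is also trivially easy to evade. Reusing the hypothetical flag_post from before, a few lookalike character swaps slip the very same phrase past an exact-match check:

```python
# Sketch of keyword evasion: trivial character substitutions defeat an
# exact-match filter. Purely illustrative.
def obfuscate(text: str) -> str:
    """Swap a few letters for lookalike digits, as spammers often do."""
    return text.replace("i", "1").replace("e", "3")

post = "Proof of a rigged election"
print(flag_post(post))             # True  -- caught by the phrase list
print(flag_post(obfuscate(post)))  # False -- "r1gg3d 3l3ct1on" slips through
```

Real systems use far more sophisticated models than this, but the arms race is the same: the filter watches for patterns, and bad actors learn those patterns and route around them.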

That doesn’t mean there is no bias at all: the algorithm must be programmed and trained, after all, so whoever teaches it what to look for will inevitably introduce bias. Also, when there is legitimate news about a subject that has long been a favorite of conspiracy theorists, that content may get blocked as well.

It’s not personal. If the algorithm has been told to watch for stories about a “Hunter Biden Scandal” and blocks one from a legitimate news source, or at least flags it for review, that may have nothing to do with the article or its legitimacy. It has to do with how the algorithm was taught and what it has learned since, and with the person who performs any manual review of the articles.

Human bias introduced into AI is a problem, but perhaps a solvable one. The question is what social media and big tech companies can or even should do about it, because whether done by a human or an AI, limiting user content is a form of censorship. And that comes down to whether social media is a public utility, so public and influential that it should be subject to regulation, or whether your profile is like your living room: a place where you can say whatever you like, and anyone who doesn’t like it can be shown the door.

Expectations

Of course, in dealing with this topic, we have to deal with expectations, because humans approach most things with some kind of expectation. And when we see hate speech or fake news on a social media platform, we expect the company that owns the platform to “do something.”

But what is it we want them to do? Those who cry that a platform should “do something” are often the same ones who complain when one of their posts is removed or one of their photos is “covered.” But it is impossible to have it both ways. If Facebook is going to “do something” about such content, two things will inevitably happen: they will make mistakes, and they will overreach their authority when it comes to monitoring and removing content.

We expect this “censorship” to protect us and to prevent nefarious actors from influencing people in undesirable ways, but we want that censorship limited to our own biases and our own definitions of what is false and what is hateful. When we post on social media and someone gets offended, we call it their fault, their shortsightedness, or their inability to take a joke.

It is quite the opposite when we get offended, and we react in an entirely different way. We want that person banned, their content removed. Action must be taken.

So what’s the answer? Several questions come up: is the platform liable for what someone posts on its site? Are you liable when Uncle Fred goes on a political tirade at family Thanksgiving?

Even if those who run the platform, or some of the people on it, disagree with something, or it is fake news (in some cases satire), should the platform remove it? Or should they simply count on users to exercise sound judgment and fact-check a sensational headline before swallowing it hook, line, and sinker?

And in line with expectations, should we be as tuned in and invested in social media, and in what people say and share there, as we are? The answer is likely no, we should not. The reality is that social media has too much influence over most people, and there is almost no feasible way to turn back the clock.

What do we expect? Are we expecting too much, and is the task too large?

Freedom of Speech

My grandfather always said, “Freedom of speech means you can say whatever you want. But you are not free from the consequence of saying it.” You can scream, “FIRE!” in a crowded theater, and laugh as everyone runs out. But the theater company is also free to sue you for lost sales that evening, and for defrauding the patrons of that establishment.

Not to mention that you may very well be hated by some of those same patrons for ruining their evening. What you said was false. It was potentially damaging. However, you have every right to say it. But when the theater company sues you, you have no defense. Another grandfatherly saying is, “Just because you can doesn’t mean you should.”

Freedom comes with responsibility. Shirk the responsibility and someone has to make rules to keep you from damaging others. Your “life, liberty, and the pursuit of happiness” ends when your freedom interferes with someone else’s. This topic has a much broader application than social media, but if we stay in that lane, we should consider what we post with this in mind.

When we don’t, when nefarious characters spread fake news that we then digest and believe, influencing our elections and our way of life, there should be and are consequences. But should those consequences fall on the creators or on the platform where they posted? If a fake news website is hosted on WordPress, should WordPress be the company we sue and prosecute?

No Simple Answer

I fully expect there to be a plethora of comments and responses to a post like this. As a writer, I hate censorship. As a social media user, I despise both hate speech and fake news. How do I reconcile those two things?

First, we have to determine whether limiting speech on a privately owned platform counts as censorship. It doesn’t, really. You can certainly limit the kind of speech you allow on your own property, just as a business can set guidelines like “No Shirt, No Shoes, No Service.” When government in particular starts telling you what you can do in your own home or business, a line has been crossed that, at least in my opinion, should not be.

But since social media is such a force, should speech be regulated, not by the platform, but by an outside entity? What follows from allowing such action? What if the government regulates social media companies and websites, or holds them liable, for what others post there? Does that extend to regulating messages shared in churches, TED Talks, and more?

Where does freedom of speech end, and where does the need to censor and quiet certain voices begin? Is there any way for the system to be truly “fair and impartial”?

One thing we do know: social media has a lot of power. So do many of the companies that make up “Big Tech.” Antitrust action will take years if it ever happens at all, and it may not do any good, as small companies gobble each other up and merge, creating new giants that some don’t really want to slay.

But who is responsible for content posted on websites and social media, content that is indexable and can therefore be found in Google, remains nebulous at best. It’s about freedom vs. control and risk vs. reward. It’s about safety, security, and what rights we are willing to give up in order to have them.

I don’t have an answer. I don’t think anyone does. And the questions just keep getting bigger and harder to answer.