This article examines why social media algorithms fail us, and argues that platforms need more human judgment: people with real insight into what is going on in society.
Two are better than one
Facebook was founded by Mark Zuckerberg and his Harvard co-founders, but two later arrivals did much to shape the platform: Chris Cox, an early employee who rose to become Chief Product Officer, and Joel Kaplan, a former White House deputy chief of staff hired to lead the company’s public policy.
They were considered two of the “big brains” shaping the platform, and they certainly put that influence to use.
As the workforce grew, their roles expanded along with it. Because Facebook was conceived as a simple tool for keeping in touch with friends, not a news distributor, it never had the manpower to sort legitimate news sources from bogus ones.
They did, however, have one very important person in charge: Mark Zuckerberg.
Zuckerberg widened Facebook’s reach by opening registration to the general public and by releasing the now-famous “Like” button, which publishers could soon embed across the web.
A little while later, the two turned to growth, investing in high-quality content.
For years, users could tell that a particular source leaned “Progressive” yet was making an effort to tell the truth. Pages like these built sizable followings but, as Zuckerberg would soon find out, they could only do so much.
The day a news source reached “Recommended” was the day Facebook closed the doors on it
Publishers then began pumping out stories that were wildly popular and followed a formulaic headline style, “Lying Dick Slater invented Democracy” or “Nasty Josette Bristol came for my rifle”, riding the classic “distributed” strategy Facebook encouraged.
With no human moderators watching what was going on, the platform had no idea what was true and what was fake. As the fakery grew more and more obvious, its leaders changed the way stories were selected and began censoring content.
They put up walls of technology, algorithms so poorly designed that they screened out propaganda and real news alike.
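To make that failure concrete, here is a minimal sketch of the kind of crude keyword filter described above. Everything in it, the blocklist, the function name, the sample headlines, is invented for illustration; it is not Facebook’s actual moderation code.

```python
# Hypothetical keyword filter: blocklist and headlines are invented.
BLOCKLIST = {"hoax", "rigged", "exposed", "cover-up"}

def is_blocked(headline: str) -> bool:
    """Flag a headline if it contains any blocklisted word."""
    words = {w.strip(".,:!?'\"").lower() for w in headline.split()}
    return bool(words & BLOCKLIST)

# The filter cannot tell propaganda from reporting about propaganda:
print(is_blocked("EXPOSED: Lying Dick Slater invented Democracy"))     # True
print(is_blocked("Fact check: the 'rigged' election claim is false"))  # True: a false positive
```

A filter like this judges by surface features alone, which is exactly how propaganda and legitimate reporting end up blocked together.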
What was true and what was false didn’t matter; the algorithms chose what surfaced. The problem was that, as they pushed out a stream of content, nobody knew what the impact on users would be.
Whatever the algorithms pushed became the most popular, and whatever was most popular got pushed harder still, a self-reinforcing loop.
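That loop is easy to simulate. The sketch below is purely illustrative, with invented numbers and no relation to any real ranking system; it only shows how ranking by engagement feeds engagement back into rank:

```python
import random

# Toy engagement-ranked feed: higher-ranked posts get more impressions,
# impressions produce clicks, and clicks raise the rank. Numbers invented.
random.seed(0)
posts = {f"post_{i}": 1 for i in range(5)}  # post -> engagement count

for _ in range(1000):
    ranked = sorted(posts, key=posts.get, reverse=True)
    for rank, post in enumerate(ranked):
        # Top slots are seen, and therefore clicked, far more often.
        if random.random() < 0.5 / (rank + 1):
            posts[post] += 1

print(sorted(posts.items(), key=lambda kv: -kv[1]))
# An early lead compounds into near-total dominance of the feed.
```

Nothing in the loop asks whether a post is true; popularity is the only signal, so popularity is all the system can optimize.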
Who, or what, was actually controlling the site was a closely held secret.
The platform was processing hundreds of millions of data points every day, and amid that flood nobody realized the algorithms were putting up a facade that fooled users.
It didn’t take long to come to light, though: users who had no clue how their favorite news sources were being controlled began voicing their concerns online.
That feedback was fed into the algorithms and eventually reached the people at the top, who at last knew what was going on.
Publishers had been given access to their audiences, but they had no visibility into what actually appeared in those users’ news feeds.
This was called “shadow banning” and would be the first breaking point
Shadow banning is when a platform quietly suppresses an account’s posts so that followers stop seeing them, without ever telling the account. Critics charged that it fell mostly on conservative and other non-left voices: right-wing posts remained visible to the conservative users who already followed them, while liberal users’ feeds silently hid them.
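Mechanically, a shadow ban is trivial to implement, which is part of why it is so hard to prove from the outside. Here is a minimal sketch, with invented names and no claim to match any platform’s real code:

```python
# Hypothetical shadow ban: the author still sees their own posts,
# while everyone else's feed silently drops them. All names invented.
SHADOW_BANNED = {"user_42"}

def visible_posts(viewer: str, feed: list[dict]) -> list[dict]:
    """Return the posts a given viewer is allowed to see."""
    return [post for post in feed
            if post["author"] not in SHADOW_BANNED or post["author"] == viewer]

feed = [{"author": "user_42", "text": "My take on the news"},
        {"author": "user_7", "text": "Weekend plans"}]

print(visible_posts("user_42", feed))  # the banned author notices nothing missing
print(visible_posts("user_7", feed))   # other viewers never see user_42's posts
```

Because the author’s own view is untouched, nothing looks wrong from their side; only their reach collapses.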
The practice was an obvious violation of the platform’s own content guidelines and community standards, so Zuckerberg decided to fix it by rewriting those standards to disallow content that “favors violence or hate speech”.
They thought they were merely suppressing conservative views; what they failed to see was that they were, in fact, censoring every type of content they didn’t like. What should have concerned them was how many news sources were shut out of the platform entirely on the grounds that they were “fake news”.
The algorithm plainly could not tell the fake from the real. The algorithms had taken over, and they were in full control.
Years have passed since the January 14th, 2018 report that exposed the network, and after reading the comments on it I can understand why Facebook was pushing content so hard into feeds to keep users engaged.
Interaction on the site was dropping. Even at the height of the News Feed algorithm change, from January to April of 2018, engagement with News Feed posts was trending downward.
How so? As the site fell into the same pattern as the mainstream media, real-time reactions dropped, and the number of shares soon fell in direct correlation.
People were getting bored and reading less and less. Even when users returned to social media, they were not coming back to interact.
They no longer found a place to discuss the issues they cared about, even when they were on the same page as the people they were sharing with.
In the days after, social media leaders turned their heads and looked away as people grew increasingly disappointed
Some sites began undoing their own growth tactics: limiting over-sharing, locking down certain features, and removing posts with misleading headlines.
A retweet, for instance, is no longer treated as your own post: you can still like it, reply to it, or share it, but what you can do with it is limited. Accounts that had been banned were hit hardest.
Twitter retired the old “see who liked it” view, so anyone trying to dig up dirt on another account has to look much harder. In response to the backlash, the site turned its strategy on its head: instead of inviting you to check who liked a post, it now asks whether you’d like to follow a user.
It’s a new era of the internet experience, and the platforms are playing catch-up, but they are playing a long game. Social media platforms are working to combat the unhealthy aspects of their sites with people’s interests in mind.
In a TechCrunch article, the writers report that Facebook has even more ambitious goals: it wants the time people spend on the site to be time well spent, and it wants to remove what it calls “toxic interactions”.
Facebook is now looking at the content being posted and trying to foster more authentic conversations.
Whether the site works as well as they want it to is still in question.