Are Facebook and Instagram Bad For Society?

This article takes a look at some of the decisions Meta has made in the direction and operation of its Facebook and Instagram platforms. As you’re likely aware, these giants of social media have relentlessly consumed our attention for the past decade, and have been consistently criticized for their exploitative business models and potential to negatively impact society.

Facebook and Meta logos

More and more people are coming to understand that social media isn’t all upside, and that we need to start looking at the potential negative impacts of these platforms on ourselves, our families and society as a whole.

Are Facebook and Instagram bad for society?

It is no secret that social media has had a profound impact on our lives. The way we communicate, the way we consume information, and the way we connect with others have all been transformed by platforms like Facebook and Instagram.

However, there is a growing body of evidence that suggests that social media also has a negative impact on our mental health and well-being. A recent study found that people who use social media more than two hours a day are twice as likely to suffer from depression and anxiety. Another study found that the more time people spend on Facebook, the less happy they feel.

There are several reasons why social media can have such a negative effect on our mental health. First, it fosters compulsive use: the constant stream of notifications and updates creates a sense of FOMO (fear of missing out) that leads to compulsive checking of our phones and feeds. Second, Facebook and Instagram are deliberately designed to be addictive. Their algorithms are tuned to keep us engaged for as long as possible so that we see more ads and click on more links, which can leave us scrolling through our feeds for hours without really taking anything in or connecting with anyone in a meaningful way. Third, social media – and Instagram in particular – can be extremely damaging to our self-esteem. Constantly comparing ourselves to the perfectly curated lives of others can lead to feelings of inadequacy and low self-worth.

The rest of this article looks at specific decisions Meta has made and the impact of those decisions on society.

Facebook CHOSE to promote negative posts over positive ones – and its own research proves it

To boost engagement, Facebook decided to weight posts that drew “angry” reactions five times as heavily as posts that only drew likes, which meant anger-inducing posts appeared in news feeds disproportionately often. This has had a number of negative consequences for Facebook’s users and for society, including increased feelings of anger and hostility, a lower quality of social life, and fewer meaningful social interactions.
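To see why this weighting matters, here is a minimal sketch – entirely my own illustration, with invented post names, field names and numbers, not Facebook’s actual code – of how counting an “angry” reaction at five times the value of a like systematically pushes outrage to the top of a ranked feed:

```python
# Hypothetical sketch of reaction-weighted feed ranking. The Post fields,
# weights and example numbers are invented for illustration; this is not
# Facebook's real implementation.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    angry_reactions: int

def engagement_score(post: Post, angry_weight: float = 5.0) -> float:
    # Each "angry" reaction counts five times as much as a like,
    # mirroring the weighting described above.
    return post.likes + angry_weight * post.angry_reactions

posts = [
    Post("Cute puppy photo", likes=1000, angry_reactions=10),
    Post("Outrage-bait headline", likes=300, angry_reactions=400),
]

# The outrage post scores 2300 against the puppy photo's 1050, so it is
# ranked first despite earning far fewer positive interactions.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>7.1f}  {post.title}")
```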

A man made angry by what he's seen on social media

Negative emotions are powerful and spreading hate is easy. This makes Facebook a breeding ground for anger and hostility.

People who see more negative posts are likely to be in a bad mood, which can make them lash out at others or withdraw from society. This frame of mind makes people more susceptible to misinformation and disinformation and allows extremists to spread their message more easily.

Facebook is now one of the leading sources of news for Americans. This has enabled the spread of misinformation and fake stories that have influenced elections, including the 2016 U.S. presidential election.

Does seeing extreme content give psychological permission to hold extreme views?

The way we interact with each other has drastically changed with the rise of social media, particularly Facebook. We now have the ability to connect with anyone, anywhere in the world at any time. This has had both positive and negative consequences.

On the positive side, social media has made it easier for people to connect and share information. It has also given a voice to marginalized groups who might not otherwise be heard. On the negative side, social media can be a breeding ground for hate speech and misinformation. It can also be addictive and lead to feelings of isolation and loneliness.

Facebook’s own research showed that when users see posts with extreme or racist views shared by their friends or family, it gives them a sort of psychological permission to hold those same extreme views. Basically, “your uncle shared this racist post!” gives them the green light to share the extreme post themselves. Despite this, Zuckerberg himself refused to allow it to be fixed, saying that a fix would negatively impact Facebook’s growth.

Facebook’s research shows that moderate conservatives were exposed to QAnon content within two days

Facebook’s own research showed that a test account with moderate conservative leanings began seeing QAnon content within just two days. The researchers nicknamed the test “Carol’s Journey to QAnon”, and despite its findings, Facebook allowed QAnon to remain on the platform for roughly 13 more months before finally banning it in October 2020 – more than a year after the FBI identified QAnon-driven conspiracy extremism as a potential domestic terrorism threat.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity, and followed a few of her favorite brands, including Fox News and then-President Donald Trump. Though Smith had never expressed any interest in conspiracy theories, within just two days Facebook was recommending that she join groups dedicated to QAnon – the sprawling conspiracy theory claiming that President Trump was secretly saving the world from pedophiles and Satanists. Within a week, her feed was full of groups and pages that violated Facebook’s own rules, including those against hate speech and disinformation.

Flat earth conspiracy theories spread like a virus

Carol Smith wasn’t a real person. A Facebook researcher invented the account, along with those of other fictitious “test users”, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its algorithms.

The body of research consistently found Facebook pushed some users into “rabbit holes,” increasingly narrow echo chambers where violent conspiracy theories thrived. People radicalized through these rabbit holes make up a small slice of total users, but at Facebook’s scale that can mean millions of individuals.
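The mechanics of a rabbit hole are easy to demonstrate. Below is a toy simulation – my own illustration with invented topics and numbers, not Facebook’s actual recommender – of the feedback loop involved: a greedy, engagement-maximizing recommender keeps serving whatever currently scores highest, exposure nudges the user’s interests toward what they were just shown, and a feed that started out varied collapses into a single topic:

```python
# Toy feedback-loop simulation (illustrative only; the topics, starting
# weights and 0.05 reinforcement step are invented, not Facebook's system).
import collections

# The user starts with a mix of interests, one slightly stronger.
interests = {"news": 0.40, "parenting": 0.35, "conspiracy": 0.25}

shown = collections.Counter()
for step in range(50):
    # Greedy recommender: always serve the topic with the highest
    # current interest weight (a stand-in for predicted engagement).
    topic = max(interests, key=interests.get)
    shown[topic] += 1
    # Exposure reinforces interest: bump the shown topic, renormalize.
    interests[topic] += 0.05
    total = sum(interests.values())
    interests = {t: w / total for t, w in interests.items()}

print(shown)  # Counter({'news': 50}) -- one topic crowds out all others
```

Whichever topic starts with even a slight edge ends up being shown every single time; swap the starting weights around and the same loop funnels the user toward conspiracy content instead.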

When you consider this in tandem with what we said previously about seeing extreme content shared by your friends and family, and the psychological permission it grants, you can begin to understand how these ideas gain popularity.

Is Meta a good steward of your personal information?

Facebook and Instagram have come under fire for their decision making when it comes to user privacy and data collection. Two proposed class action lawsuits have been filed accusing the social media giant of circumventing Apple’s privacy-oriented App Tracking Transparency (ATT) feature and consequently violating both federal and state laws barring unauthorized data gathering.

Hacker in the darkness ready to steal your data from Facebook

Facebook has been vocal in its opposition to App Tracking Transparency, arguing that it will hurt small businesses that rely on targeted advertising. However, many users are concerned about the implications of Facebook’s data collection practices, particularly when it comes to how the social media giant uses personal information for targeted advertising.

The negative impact of Facebook and Instagram’s decision making on society is evident in the growing mistrust of these social media platforms. Users are becoming more aware of the ways in which their personal data is being used and collected, and they are increasingly concerned about how this information could be used to manipulate them. As a result, many people are beginning to question whether or not these social media platforms are good for society as a whole.

Facebook employees found that groups were being misused by extremists but weren’t allowed to fix it

When Facebook employees noticed that the Groups feature was being used to promote new extremist and neo-Nazi groups, they built fixes to tamp down on the hate. Joel Kaplan, Facebook’s VP of Global Public Policy, personally ensured that those fixes were reversed, arguing once again that they would disproportionately affect conservatives.

When a senior VP of a social media giant is prioritizing profit above all else, and is willing to use his position and influence to ensure that the company’s bottom line is not harmed, it should be a cause for concern.

Facebook and Instagram have a responsibility to society to ensure that their platforms are not used for spreading harmful and hateful information. Instead of focusing on profits, they should be actively working to ensure that their platforms are used for good.

When you’re a corporate VP at Facebook and your #1 job is profit maximization, you’re not going to do what’s best for society.

You’re going to do what’s best for Meta.

The Meta rebrand – is Facebook really “privacy-focused”?

Meta’s decision to rebrand Facebook as a “privacy-focused” social network is questionable: are its motives really in the best interest of users, or is this a ploy to generate more revenue?

The decision by Facebook’s CEO, Mark Zuckerberg, to rebrand his company as “privacy-focused” has been met with a lot of skepticism.

The move has been seen as an attempt to regain the trust of users who have become increasingly concerned about how their personal data is being used, and it has been reported that Facebook has lost nearly $100 billion in market value since the Cambridge Analytica scandal.

US Representative Alexandria Ocasio-Cortez responded to Zuckerberg’s announcement about Meta with a tweet that says: “Meta as in ‘we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society… for profit!'” This sentiment was echoed by former Facebook employee Frances Haugen, who expressed doubts about the company’s ability to improve while led by Mark Zuckerberg.

Speaking of Frances Haugen…

Meta tried to discredit the Facebook whistleblower, Frances Haugen

Facebook intentionally tried to deepen political divides as a strategy to discredit the whistleblower, Frances Haugen.

They went to the GOP and warned that she was a leftist political activist trying to silence conservative voices, then went to Democratic lawmakers and claimed she was a GOP political operative trying to punish Facebook for banning Donald Trump. Facebook cynically tried to deepen the cracks in the damaged American political system just so it could discredit Frances Haugen.

While many Meta critics are wary of Haugen’s proposed solutions – which include creating a federal regulator to oversee Facebook rather than using antitrust law to break it up – there appears to be a consensus in Washington, DC, that her leaks show something is seriously wrong at the company and that action must be taken.

Research shows that Instagram has an extremely negative effect on young people – particularly teenage girls

Research has been published showing that Instagram has an extremely negative effect on the well-being of young people, particularly teenage girls. Facebook’s own internal research found that among British teens who reported suicidal thoughts, 13% traced those thoughts back to Instagram.

Constantly being bombarded with perfect people living perfect lives while you’re struggling through puberty is torture for the mind and destroys self-esteem.

Teenagers are already at a vulnerable stage of their lives, and exposure to this level of negativity can lead them down a dangerous path.

Girl depressed because of Instagram

It’s not just Instagram that is bad for teenagers. Facebook has also been found to have a negative impact on their mental health, with young people feeling isolated because time spent on social media crowds out time spent with real friends.

Despite these studies and the impact Instagram is having on an entire generation of young girls, Meta is still pressing ahead with its plan to create “Instagram for Kids”.

Facebook is also a major source of news for teens and young adults. This has enabled a spread of misinformation that has further impacted the mental health of adolescents.

Meta sat back and watched while Facebook was used to organize a literal genocide

Facebook sat back and watched as its platform was used to organize a genocide. Despite repeated pleas from human rights groups such as Amnesty International and Human Rights Watch to stop the platform from being used to incite violence in Burma, Facebook did nothing. It is estimated that over 700,000 Rohingya Muslims were displaced and nearly 30,000 killed as a result of the violence Facebook allowed to unfold on its platform.

Facebook admitted in 2018 that it had not done enough to prevent the incitement of violence and hatred against the Rohingya people.

When profit and market penetration are more important than human lives, it is time to re-evaluate what we are allowing Meta to do.

Burmese boy devastated by genocide

While Facebook has taken some steps toward preventing the spread of hate speech and misinformation, it continues to be a place where people can easily lie about who they are and what they stand for.

It is a place where people can pretend to be someone else, hiding behind their computer screen to share misleading information and harmful rhetoric that could lead to real-life violence.

Meta exists only to make a profit – how does that square with being a force for good?

A lot of the decisions that we’ve seen Meta make that are clearly bad for society are the correct decisions when considered only in terms of growing Meta as a business and generating profit. If Meta is focused only on generating revenue, is it possible for the social media giant to behave responsibly and ethically for the betterment of the human race?

Meta’s revenue is generated by selling advertising space. The more time people spend on Meta’s platforms, the more ads they will see, and the higher Meta’s revenue will be.

Meta has shown that it is willing to sacrifice the privacy and well-being of its users in order to maximize ad revenue. The more information Meta has about its users, the better it can target ads.

In 2018, it emerged that a political consulting firm called Cambridge Analytica (CA) had harvested the data of tens of millions of Facebook users through a third-party quiz app – estimates ultimately ran as high as 87 million accounts. CA used this information to build a program that could target voters with political ads based on their psychological profiles, including during the 2016 U.S. election. Facebook did not inform its users that this data had been taken until the scandal broke.

Decisions like this are clearly motivated by profit and clearly work against society. Is it possible for Meta to find a way past this dichotomy?

Actions you can take personally to have a healthier relationship with social media

If you find that social media is having a negative impact on your mental health, there are several things you can do to mitigate the damage.

First, try to limit your use of social media to no more than 30 minutes per day.

Second, take regular breaks from checking your phone and feeds.

Third, unfollow or unfriend anyone who regularly posts content that makes you feel bad about yourself.

Finally, remember that what you see on social media is not always an accurate representation of reality. People only post the best parts of their lives online, so don’t compare your life to someone else’s highlight reel.

What can we do about the negative impact of social media on society?

Facebook has been intentionally crafted by its creators to be an addictive mental illness machine that spreads extremism and hatred, all in the name of profit. They knowingly made these choices, choosing addiction, hate and extremism every time.

Some argue that we need to regulate it more, while others believe that education is the key to solving this problem. Is it possible to inform enough users about the impact social media has on society and our lives to really make a difference?

I think not.

The real solution is to find a way to change the incentive structure underpinning these decisions. To find a way for Meta to optimize for human well-being instead of optimizing for human attention. Unfortunately, that may simply not be possible.