The Facebook Papers once again expose the dark side of Meta's social media channels, which are also popular in the non-profit sector. What substance is there to the current accusations? How socially harmful are these services really? And can we still justify using these channels from a sustainability point of view?
What is new is that the Facebook group now calls itself "Meta", a change accompanied by the announcement of a new product, the Metaverse: a digital world that will remind some people of "Second Life". But that is not what this post is about. What is remarkable, however, is that Mark Zuckerberg probably also wanted to shed the stock exchange listing as "Facebook, Inc." in this way - a brand that is no longer really cool after various scandals.
The reason for this blog post is rather the latest scandal surrounding the company: the Facebook Papers leaked by former Facebook employee Frances Haugen. "Papers" is not quite accurate, though: they are a collection of tens of thousands of screenshots of memos and internal chats covering a range of topics. And because the more than 10,000 files have not yet been fully analysed, new findings could soon emerge.
One concrete accusation regarding Instagram has already been made in the media: the company's management is said to have played down an internal study that confirmed the long-feared psychologically damaging effect of the social media channel, especially on young people. In my opinion, this is reason enough to check whether the accusation holds up and to ask whether an agency can still offer its clients communication via this channel with a clear conscience.
In essence, Frances Haugen accuses Facebook of a "profit over people" maxim: the algorithms are harmful, Facebook knows it, but will not change anything out of concern for usage intensity, user numbers and profit. Simon Hurtz, one of the Süddeutsche Zeitung journalists who analysed the Facebook Papers, says in a podcast on Deutschlandfunk Kultur that this is not quite accurate.
Facebook did test with a small control group whether a chronological display of posts had a less divisive effect, and discarded the model because it performed no better. Karissa Bell from Engadget takes a more critical view: "Test changes to the algorithm are only about increasing engagement. Whether negative emotions or misinformation could be eliminated is completely subordinate to this goal." The most recent and most striking example of this attitude: according to an AP report, Facebook apparently preferred to play for time instead of implementing proposals to curb misinformation about the COVID-19 vaccination.
For all those who already view Facebook critically, this first of all confirms assumptions about the effect of Facebook's algorithm on filter bubbles and on the strengthening of the new right - something we too have observed and analysed since 2015. What is new is the evidence that Facebook knows this, has discussed it internally, and that it ultimately plays no role in the company's actions. The new political explosiveness lies in the fact that some documents could convict Facebook CEO Zuckerberg of knowingly making false statements - even when testifying under oath.
The biggest outcry so far has been caused by an internal study that demonstrated a harmful effect of Instagram use among teenagers (especially girls) and was deliberately ignored by Facebook's leadership. According to the Wall Street Journal, which analysed the relevant Facebook Papers, 32 percent of the teenage girls surveyed said that Instagram made them feel even worse when they already felt uncomfortable about their bodies. It was not these long-known findings but only the wave of media coverage that led Facebook to put the timetable for its Instagram Kids project (a version for children under 13) on hold.
From a scientific point of view, the substance of this study is rather thin, according to Simon Hurtz of the Süddeutsche Zeitung: it found only correlations and does not prove causation, and its statements on suicidal intentions rest on case numbers too small for statistical significance. This assessment contrasts with the fact that the internal study ultimately only reproduced the findings of a similarly designed 2017 study by the British Royal Society for Public Health, which did establish causal links - for example, that increasing use of social media goes hand in hand with more frequent reports of depression and anxiety. In addition to Instagram, that study also examined the use of Facebook, Snapchat, Twitter and YouTube, and confirmed that Instagram had the most negative effect on the mental health of young people.
In that study, the Facebook service scored significantly better than Instagram in terms of young people's mental health, and also better than Snapchat, but worse than YouTube or Twitter. It should be noted, however, that Facebook is hardly widespread among young people any more. A more recent Stanford University study, which examined how the social behaviour and well-being of former (also adult) Facebook users change when they give up Facebook, likewise arrives at a nuanced assessment. Among the beneficial effects of quitting: ex-users spend more time with friends and family and form less polarised political opinions. On the other hand, they also spend more time alone in front of the television and, above all, their factual knowledge of the news decreases the less they use Facebook, as the study determined with quiz questions. The measured improvement in subjective well-being from not using Facebook was marginal overall.
The Facebook Papers will hopefully provide further evidence that social media platforms need to be regulated and forced by legislation to act responsibly, as Sacha Baron Cohen already demanded in an impressive speech in 2019. Regrettably, only scandals, hearings and threats of sanctions have led to improvements so far - improvements that we as an agency clearly notice when working with political messages, and which hopefully also put a stop to those who work not towards informed debate but towards dividing society and delegitimising its democratic institutions. Nevertheless, when placing advertising, we will continue to remind people whom they are supporting. Under these conditions, we can continue to work with the Facebook service.
What is really critical is the use of Instagram in communication with target groups under the age of 16. Even if one's own content does not add to the image of a world full of beauty and wellness, one does not want to encourage the use of a medium that demonstrably harms its users.
Snapchat performs just as badly in studies, YouTube's format is somewhat too different, and (unfortunately) not using social media at all is not a solution. That leaves TikTok. Here, too, there are justified suspicions that the passivity invited by its addictive video waterfall does not exactly boost self-esteem. Nor does TikTok seem to have psychological hazards fully under control, according to a US Senate hearing. Still, the Chinese-owned company appears to deal with mental health issues somewhat more proactively (e.g. on the topic of suicidal thoughts) than the American Meta corporation. In the wake of the Facebook Papers, members of the US Congress have now asked TikTok and Snapchat to produce comparable internal studies on the mental vulnerability of their users. Hopefully a more concrete assessment will be possible before long.