Op-ed. Since the start of the Covid-19 crisis, hate messages have exploded on social networks. According to the online hate barometer established by the company Netino, they increased significantly in the first quarter of 2020 and now represent 14% of all messages. Over the same period, the overall number of comments on Facebook nearly doubled.
This twofold phenomenon extends well beyond France alone. This hatred is perpetrated by individuals, but also by organized groups that have long been present and active on social networks. They see this crisis as the ultimate opportunity to spread their messages, and even their ideology. The virality of these messages is all the stronger, and their consequences all the more worrying, given that lockdown has significantly increased the number of hours spent in front of screens, especially on social networks.
Faced with this, we, activists and representatives of associations, participants in various online civic initiatives, present "on the ground" daily, all share the same sorry observation: the platforms are failing in their duty of moderation. And Facebook, the largest of them, is a particularly vivid example.
Certainly, we acknowledge Facebook's willingness to support civil society and to fight the disinformation and conspiracy theories related to Covid-19, which are also on the rise. In addition, the social network's announced creation of a "supreme court", responsible for resolving certain cases independently, is an effort we welcome.
Use of algorithms
However, the lack of moderation on the platform, already problematic before the crisis, worsened precisely when it would have been desirable for Facebook to redouble its efforts in this area. This is all the more alarming given that the Wall Street Journal recently revealed that the social network had obtained proof, through internal research, that its algorithm contributed to the proliferation of extreme content, without this prompting it to make fundamental changes.
Furthermore, the provisions of the French law against online hatred are due to come into force in July, and we do not see how Facebook intends to comply with them, particularly regarding the removal of content.
"To smear is an act of wickedness that has nothing to do with justice"
François Jost is a semiologist and professor emeritus in information and communication sciences at Paris-3. In Wickedness in Action in the Digital Age (CNRS, 2018), he examines the spectacle of cruelty played out every day in the media and on social networks.
How do you analyze the trivialization of "name and shame", hitherto extremely rare in France?
With reality TV, anonymous people became celebrities within a few weeks. It was experienced as a kind of social revenge, a victory over the elites. Social media has ushered in a new era in which it is no longer just a matter of competing with the elites but of knocking them off their pedestals. It is about attacking a person's name and ethos, destroying their online reputation. As the philosopher Vladimir Jankélévitch says: "The most radical way of verifying the power we have over a thing or being is to obtain from that being its own destruction." And this destruction involves the public exposure of an act considered shameful.
Can we speak of wickedness? In the case of the denunciation of hunters, for example, those who take part in these campaigns seem motivated by a desire for justice.
Wickedness is a somewhat childish term, but it has the advantage of covering a whole gradation of wicked acts, from backbiting to hate. What makes name and shame a nasty act is that, instead of fighting at the level of ideas and entering into a debate, it violently attacks the speaker, and it enlists the gaze of others. It needs a third party, represented by internet users or media audiences. To smear is an act of wickedness that has nothing to do with justice, which at least allows for a defense.
Can we protect ourselves from these movements?
For that, we would need moderators, as exist on certain sites. But the speed of Twitter or Facebook makes it difficult to control such acts. One solution would be for internet users to think a little about what they are doing before relaying degrading comments. As we know with slander, something always sticks.
Each time we now report content that clearly violates Facebook's "community standards", we receive a response indicating that the social network is itself affected by the Covid-19 crisis and that, due to a reduced moderation staff, the report in question will not be treated as a priority.