Over 70% of people in the UK use at least one social media platform. Social media has defined a generation and become a crucial part of our cultural fabric. Unfortunately, it has a dark side in the form of fake news and misinformation. We previously advised on how to spot and avoid fake news in our blog post here.
This week, US feminist author Naomi Wolf was banned from Twitter for sharing incorrect information about COVID-19 vaccines. This isn’t the first time Twitter has challenged users for spreading anti-vaccine propaganda: since 2020 the platform has removed 8,493 tweets and warned 11.5 million accounts that were sharing incorrect messaging about the pandemic.
This raises two questions: why are social media platforms being used to drive a misinformation pandemic, and what can they do to stop these dangerous messages from being shared?
Discussion around COVID-19 began online as early as the start of 2020. While this could be a helpful and informative way to share information, especially for those staying at home, it didn’t take long for scepticism and conspiracy theories to follow.
The bizarre rumour that COVID-19 was spread via 5G hit everyone’s timelines around March 2020. As time progressed, more and more people used Twitter, Instagram, Facebook and YouTube to spread misinformation about the COVID-19 vaccines, including speculation about side effects and general ineffectiveness, despite their undeniable success in reducing infection rates in vaccinated populations.
Social media can quickly become an echo chamber of rumours surrounding important topics. This is likely due to a combination of factors, including the relative anonymity social media offers users to discuss controversial issues without fear of repercussions. The pandemic has also disrupted life as we know it for a long time, creating resentment against the restrictions that have been in place and leading some people to redirect their anger onto online platforms.
Although it’s unpleasant to see negative or hateful content online, posts such as those written by Dr Wolf are dangerous. Followers of prominent figures are more likely to believe the information they share, which may influence their behaviour or prompt them to pass the message on to others who are susceptible to misinformation.
A survey by King’s College London found that, among the quarter of the UK population who report seeing these messages on social media, 28% say they were shared by a news or lifestyle account and 23% say they were shared by a celebrity or public figure. Fortunately, social media platforms have recognised this and are often quick to take action to remove such posts.
In March, Twitter published a blog post stating: “Through the use of the strike system, we hope to educate people on why certain content breaks our rules so they have the opportunity to further consider their behaviour and their impact on the public conversation.”
Similarly, Facebook – which had previously stayed quiet on its censoring of controversial issues – announced in February that it would remove posts making false claims about COVID-related issues such as mask-wearing and vaccines.
This recognition by social media organisations means that we can now report posts that we believe share false information. If platform moderators agree that the reported content breaches the rules, they can take the post down, or even suspend or ban the user. This offers some comfort to the majority of the population, who are simply keen for the pandemic to come to a safe end.
While we as individuals can do little to stop others from sharing this content online, it is comforting to know that these issues are being taken seriously and that careful moderation will hopefully keep them to a minimum.