
BY Leo Hynett

Culture

Should Misinformation Be Censored?

Throughout the pandemic, the dangers of misinformation have become more apparent than ever.

FEBRUARY 03, 2022


Conspiracy theories around COVID-19 have cost lives. False claims about COVID’s origins led to an increase in anti-Asian hate crimes, and telecoms workers have been harassed and threatened by people who believe 5G is responsible for the virus. Misinformation can be dangerous.

In light of these dangers, questions have arisen around whether ‘bad science’ should be censored on social media. Censorship is an understandably complex subject, mired in heavy debate with strong arguments for and against. On the one hand, censoring dangerous misinformation has the potential to save lives; on the other, there is uncertainty about where this censorship might end.

The most recent misinformation controversy surrounds Joe Rogan’s podcast on Spotify. The row has dominated headlines, with several artists pulling their music from the platform unless something is done about the spread of COVID-19 misinformation there. It is clear that the public is unhappy with the current state of misinformation online, but what to do about it is much less certain.

A dangerous business

Misinformation has always had the potential to be dangerous, but that has never been clearer than over the past two years. The Plandemic video that went viral early in the pandemic was a prime example of how rapidly such content can spread and the dangers it can pose to public health. The video contained false information about the virus’s origins and the effectiveness of masks and vaccines.

The Center for Countering Digital Hate, a British non-profit organisation that campaigns for big tech firms to stop providing services to individuals who promote hate and misinformation, ‘maintains there are cases when the best thing to do is to remove content when it is very harmful, clearly wrong and spreading very widely.’

Its report, Pandemic Profiteers: The Business of Anti-vaxx, found that the majority of online COVID misinformation stemmed from just 12 people with a combined following of 59 million across multiple social media platforms. This ‘disinformation dozen’ was responsible for 73% of all anti-vaccine content on Facebook. The difference between misinformation and disinformation is intent – the latter is wrong on purpose. Following pressure from the White House, Facebook has taken some action against the disinformation dozen to protect users, but some of their accounts remain active.

While removing their accounts and their content may seem like the right thing to do in light of how many lives have been endangered by the misinformation they spread, removing them may only add fuel to the fire:

‘Removing content may exacerbate feelings of distrust and be exploited by others to promote misinformation content,’ noted The Royal Society, adding that this ‘may cause more harm than good by driving misinformation content…towards harder-to-address corners of the internet.’

If certain content is banned or restricted, some people may have their worst fears about society confirmed – the government would indeed be controlling what they can and can’t see, tipping the scale away from individual liberties and towards government censorship. It is certainly a delicate line to tread, and creating censorship rules that are truly in the public’s best interest is no easy task.

Unfortunately, even the argument of protecting people from dangerous misinformation can be misused by those implementing the bans. Things could be banned because they pose a danger not to the public but to the reigning ideology.

 

A slippery slope

While striving to ban misinformation seems like a noble goal, problems can easily arise.

Discourse around the banning of books has been reignited recently following the lengthy list of books Texas lawmaker Matt Krause wants to remove from school libraries. This list, since dubbed ‘The Krause List’, contains some 850 titles and has a heavy focus on books that may clash with particularly conservative ideals, such as (but by no means limited to) the following:

Black Lives Matter: from hashtag to the streets,
Protesting police violence in modern America,
Hood feminism: notes from the women that a movement forgot,
What is white privilege?,
Beyond the gender binary,
Jane against the world: Roe v. Wade and the fight for reproductive rights

Students, teachers, and members of the community in Granbury Independent School District spoke up about the slippery slope of removing books from school library shelves. Stellar points were made throughout their speeches to the school board, but one line said by a student stood out for its weight and clarity:

‘No government […] has ever banned books, and banned information from its public, and been remembered in history as the good guys.’

An ever-changing landscape

With how rapidly the digital world changes, any legislation about misinformation censorship is likely to already be outdated by the time it is passed. With the metaverse poised to disrupt the internet as we know it, it is nearly impossible to build policy recommendations for an online world that may be completely unrecognisable a year from now.

For now, one possible way to tackle misinformation would be to implement changes in the way recommendation algorithms work. Currently, the more engagement a piece of content gets the more likely it is to be recommended to others. This engagement doesn’t have to be positive – content that sparks arguments often benefits most from this snowball effect.

If misinformation is identified, platforms could simply stop recommending that content to others. This would defend people’s right to freely voice their thoughts on their accounts without guaranteeing them an audience of millions (and rewarding their fake news with a greater reach). In fact, limiting a user’s reach could even encourage them to stop posting fake news in order to maintain their platform. Unfortunately, social media companies profit from engagement – and nothing gets engagement quite like a controversial post.
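The ‘stop recommending’ idea above can be sketched in a few lines of code. This is a hypothetical illustration – the Post fields, the flagging mechanism, and the rank_feed function are all invented for this example and do not reflect any platform’s actual ranking system:

```python
# Sketch: engagement-based ranking with a 'no amplification' rule for
# content flagged as misinformation. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement: int        # e.g. likes + comments + shares
    flagged_misinfo: bool  # assumed to be set by a review process

def rank_feed(posts):
    """Order posts by engagement, excluding flagged posts from
    recommendations. Flagged posts are not deleted – they simply
    stop being amplified to new audiences."""
    eligible = [p for p in posts if not p.flagged_misinfo]
    return sorted(eligible, key=lambda p: p.engagement, reverse=True)

feed = rank_feed([
    Post("a", engagement=900, flagged_misinfo=True),   # high engagement, flagged
    Post("b", engagement=500, flagged_misinfo=False),
    Post("c", engagement=100, flagged_misinfo=False),
])
print([p.author for p in feed])  # ['b', 'c'] – the flagged post is never recommended
```

Note that post ‘a’ has the most engagement but is simply excluded from the ranking: the author can still post, but the snowball effect described above no longer works in their favour.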

Online health misinformation is not a problem unique to COVID-19; false information about HIV is still rampant online today, and completely unfounded claims about links between vaccines and autism regularly resurface. Such claims are not unique to the internet, but social media certainly facilitates their rapid spread. The key to effective misinformation is plausibility – successful campaigns often appropriate and misrepresent real data and research, making them harder to debunk and lending them an air of realism. Teaching people to identify misinformation for themselves could be extremely beneficial, but such a scheme would be unlikely to reach everyone in society equally.

Beneath all of these online elements remains the key point that science is indeed ever-changing; scientific facts should be disputed and interrogated, and dominant scientific thinking adjusted in light of new evidence. If we were not free to challenge dominant scientific thought, we may all still believe that the Earth is flat.
