Blocking is an easy way to hide disagreement, but doesn’t do much to end it, and scales poorly.

Richard Mason is a freelance writer from the UK, and Editor-in-Chief of the blog SpeakFreely.today. He is interested in issues pertaining to freedom of speech, discourse, and political philosophy.

In 2014, Charlie Brooker envisioned a world in which the online ability to ‘block’ someone was transferred into real life. Through ‘Z-eye’ technology, residents of the Black Mirror Christmas special could avoid ever having to see or hear those they disliked. Viewers, of course, were made painfully aware of the dangers of this ability as the episode progressed.

Despite recognizing Black Mirror’s version of blocking as a destructive practice, few seem willing to apply the same distrust to its real-life equivalent. While we can’t permanently erase a person’s visage and voice from our view, we can make it very difficult for our digital paths to ever cross again.

Despite this near-finality, blocking has become commonplace on social media, with many discussions and debates on Facebook and Twitter ending with one person simply banishing their opponents. Sometimes, one may even find themselves blocked by a person they’ve had no direct contact with.

This raises the question: how can productive discourse continue in a world where opposing views can simply be filtered out?

A Distorted Marketplace

Without the presence of any central authority, one would naturally have high hopes for the internet as the ideal home for a true marketplace of ideas. The lack of authority and of immediate means of coercion implies a space in which ideas must stand on their own. Additionally, constructing intelligent arguments and fact-checking should be far easier, given the immediate availability of the whole of human knowledge.

This, however, is all too often not the case. Despite being able to converse with people from quite literally anywhere in the world, we also retain the ability to personalize our online circle of peers. Users have complete freedom of association, and may thus tailor their exposure to dissenting worldviews as they wish. Yet with this power, as always, comes a commensurate responsibility: to shape an online atmosphere entirely around your own worldviews is to delude yourself into self-confirmation, and to deprive yourself of the opportunity to grow through honest, intelligent debate.

In this scenario, online discourse no longer takes the form of a utopian global forum, but looks more like cloistered clerics preaching to their respective choirs.

Online game developer Nicky Case illustrates this perfectly in her game ‘The Wisdom and/or Madness of Crowds’ (which I wholeheartedly recommend taking thirty minutes out of your day to play). In this game, the player controls ‘threads’ which connect individuals to their respective groups. Connecting these threads in the right (or wrong) ways can result in a group perceiving a false worldview to be true.

For instance, in the first level, the player must create and sever social ties between nine people in order to convince them all that half of the group are alcoholics. In reality, only one-third of the group drink heavily, yet by selecting how each individual member links to the rest, it is possible to convince them that this harmful behavior is far more common than it actually is.
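To see the mechanic concretely, here is a minimal sketch in Python. The nine-person network below is my own wiring, not taken from the game, but it reproduces the same illusion: one-third of the group drink heavily, yet everyone’s own circle of friends suggests the figure is half.

```python
# A minimal sketch of the 'majority illusion' behind Case's first level.
# The names and wiring below are illustrative assumptions, not the
# game's actual network.
from statistics import mean

# 3 of 9 people drink heavily (one-third)...
drinks = {"A": True, "B": True, "C": True,
          "D": False, "E": False, "F": False,
          "G": False, "H": False, "I": False}

# ...but the ties are laid so the drinkers sit at the centre of the network.
edges = [
    ("A", "B"), ("B", "C"), ("A", "C"),   # the drinkers form a tight hub
    ("A", "D"), ("A", "E"), ("B", "F"),   # each drinker befriends exactly
    ("B", "G"), ("C", "H"), ("C", "I"),   # two non-drinkers...
    ("D", "E"), ("F", "G"), ("H", "I"),   # ...and the non-drinkers pair off
]

def neighbours(node):
    return [b if a == node else a for a, b in edges if node in (a, b)]

true_rate = mean(drinks.values())  # 33% - the real proportion
for person in sorted(drinks):
    local_rate = mean(drinks[n] for n in neighbours(person))
    print(f"{person} sees {local_rate:.0%} of friends drinking heavily "
          f"(truth: {true_rate:.0%})")
# Every line prints 50%: each person's local view doubles the true rate.
```

Nothing about any individual changed; only the pattern of ties did, and that alone is enough to make every member of the group overestimate the behavior by half.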

By shaping our own social media environments through blocking, we effectively replicate the creation and severance of social ties represented in Case’s game; by exposing ourselves only to selected information, we become convinced that a greater proportion of society behaves or believes along certain lines. This concern took on new importance following the 2016 Presidential election; the sense of shock that attended Hillary Clinton’s defeat can arguably be laid at the feet of social media filter bubbles.

Of course, this idea of the ‘bubble’ is an old chestnut, going back to the days of Reagan and even Nixon. How could social media have brought about the problem of self-confinement and censorship, when it was being discussed as far back as 1968?

In reality, the modern bubble is far more complicated than the “but I don’t know anybody who voted Republican!” model of the past; the results of a 2014 study into the effects of media habits on political polarization were mixed. On the one hand, those with ‘consistently liberal views’ were more likely to follow a multitude of news sources and expose themselves to varying viewpoints, yet were also more likely to block, unfollow, or unfriend somebody in response to political disagreement. Meanwhile, users who were more consistently conservative were exposed far more often to viewpoints and news sources that complemented their own.

Yet, according to the study, a substantial number of people across the political spectrum are still exposed to opposing viewpoints, suggesting that the infamous echo chamber, while present, is perhaps not as damaging to political discourse as first thought. Case certainly presents an interesting argument in her game, and there appears to be some truth to it. Yet the wider implications are, evidently, rather difficult to pin down. Instead, it may be best to first understand the phenomena of social media, such as blocking and bubbles, from the perspective of individual ethics.

An Issue of Free Speech?

The debate over the ethics of blocking is nothing new. An article from Mary Elizabeth Williams in Salon, for instance, argues that blocking is entirely acceptable behavior, since no one has a right to a captive audience. As such, the article argues, blocking does not deprive anyone of their right to free speech, since free speech does not include a right to a platform, and thus blocking is ethically justifiable.

This is mostly correct; you certainly don’t have a right to somebody else’s audience, and you are of course able to walk away from a discussion at any point. Naturally, to force somebody not to block their dissenters on social media would be to violate their freedom to associate with whomever they choose. Yet the dangers that overzealous blocking poses to the polity as a whole have already been established.

It’s important here to make a distinction between the individual and the societal. Both suffer the negative effects of blocking, but not through the same moral shortcoming. The ethics of blocking can thus be framed around the two ‘victims’ of the action: the individuals involved in the block, and wider society.

Intellectual Self-Harm

Ultimately, blocking based on disagreement negates the real benefits of a discussion. One forgoes the opportunity to broaden one’s horizons and think more critically, and instead reinforces one’s existing perceptions, regardless of how factual or viable they may be.

A common philosophical idea used to justify this behavior is Karl Popper’s theory of the ‘paradox of tolerance’, which contends that intolerance must be practiced against intolerant ideologies in order to protect the long-term survival of the tolerant, open society. Following this logic, some may view blocking, no-platforming, or otherwise shunning those whom they find intolerant as morally justifiable, since allowing these beliefs to be expressed may threaten the rest of tolerant society.

However, this interpretation falls somewhat flat, not least because Popper was discussing policy options against the intolerant. More importantly, Popper advocated first engaging the intolerant in discussion before any other action is taken, something that blocking makes rather difficult.

In many ways, blockers do the greatest disservice not only to themselves, but to their ideas, as they deny themselves the opportunity to test them against rival beliefs. Philosopher Nigel Warburton phrases this perfectly, arguing that “Philosophy is an inherently social activity that thrives on the collision of viewpoints and rarely emerges from unchallenged interior monologue.” Without being challenged, it is impossible for ideas to grow and improve. The same goes for intolerant beliefs; blocking out an intolerant idea prevents its advocates from developing their own ideas, thus doing little to combat Popper’s fear of the intolerant triumphing over the tolerant.

Naturally, all of this assumes that rational discussion is possible; deliberate harassment or trolling contributes nothing to a discussion, so it is understandable to block someone who is frequently, and deliberately, inflammatory and unconstructive. There is a second caveat: the effects of an over-tailored online environment are likely to be moderated by one’s social atmosphere in the real world.

A person may not become quite as polarized by a homogeneous online environment if their offline social life is more diverse. The individual repercussions of blocking may thus depend on the degree to which a person relies on social media for news and debate.

Taking these factors into consideration, it may therefore make sense to approach blocking as part of a wider, more long-term process of polarization. While individual cases may represent a moral issue if the above exceptions do not apply, the real problem lies in the wider tribalization of society, and the role our use of social media plays in it.

A Return to Tribalism

While blocking out opposing viewpoints may prevent an idea from growing, it may also ensure that the person who holds it does so ever more fervently. At a societal level, however, the evidence that this leads to greater polarization remains questionable.

Where, then, does the danger lie? John Samples of the Cato Institute argues that social media users are not usually deprived of disagreeing viewpoints. On the contrary: research in this area suggests that it is the demographics least active online which are more likely to experience the effects of polarization. Nonetheless, it would be unwise to shrug off the potential dangers of polarization, given the capacity for separation and long-term social partitioning built into these platforms.

The decisions of individuals to block probably fuel a greater willingness to allow third parties to shape our online environments. Between calls for state or platform action to remove unwanted speakers and individual blocking decisions sit block-lists. Essentially, these lists allow a user to quickly block many accounts at once, based on an algorithm or trusted third party which sorts the block-worthy from the rest. Rather than blocking users one by one based on a poor experience, a block-list can block whole swathes of users before you’ve had the chance to engage with them.
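As a rough illustration of the difference, here is a hypothetical sketch in Python; the function names and the list source are invented for this example, not drawn from any real tool or platform API.

```python
# A hypothetical sketch of individual blocking versus a block-list
# subscription. Everything here is invented for illustration; real
# tools and platform APIs differ.

blocked = set()

def block(user_id, reason):
    """An individual block: one account, one first-hand reason."""
    blocked.add(user_id)
    print(f"blocked {user_id}: {reason}")

def subscribe(blocklist):
    """A list subscription: thousands of accounts, no first-hand reasons."""
    for user_id in blocklist:
        blocked.add(user_id)  # blocked before any interaction takes place

# Blocking one by one encodes a personal judgement...
block("persistent_troll", "repeated, deliberate harassment")

# ...while subscribing delegates that judgement wholesale to a third party.
third_party_verdicts = [f"user_{i}" for i in range(10_000)]
subscribe(third_party_verdicts)
print(f"{len(blocked)} accounts blocked; a reason is known for only one")
```

The asymmetry is the point: one entry in the set reflects a judgement the user actually made, while the other ten thousand reflect someone else’s criteria entirely.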

The danger here should be rather apparent. Whereas blocking someone personally leaves plenty of room for nuanced and justified reasons (trolls, for example), using a block-list relies entirely upon the decisions of a computer-based algorithm or third-party censor. Users engaging in honest discussion may find themselves blocked without reason, by someone with whom they have yet to speak. Furthermore, mass blocking may influence which content is algorithmically surfaced for other users, as Twitter draws upon blocking decisions to identify “bad-faith actors”.

More recently, we’ve also seen social media companies themselves playing a more substantial role in regulating discussion. The removal of Alex Jones and InfoWars from Facebook, YouTube, and other social media platforms suggests that such companies are increasingly comfortable standing in for the old media gatekeepers. Like the prevalence of block-lists, this suggests that the shaping of our online environments may be shifting away from personal decisions to block, and towards detached, potentially algorithmic third parties.

But is any of this helpful, or even necessary? Innovations that could advance serious discussion, without requiring users either to accept widespread harassment or to delegate the power to structure conversations, are already in the works. For example, Facebook has been trialing Reddit-esque upvote/downvote buttons for comments, allowing users to increase the visibility of comments they find ‘useful’ rather than those they simply like. While still early days, this feature may allow unconstructive or ‘troll’ comments to be outranked by those engaging in more thoughtful discussion. In this way, those who engage in honest discussion are rewarded with greater visibility, while those who create negative externalities (such as trolls, who spur emotionally driven polarization and provide stronger incentives to block) are relegated to the bottom of the comments section.
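Facebook has not published how such votes would be scored, but a minimal sketch shows the general idea. The Wilson lower bound used below is one common heuristic for ranking by up/down votes, chosen here purely for illustration rather than taken from any platform.

```python
# A minimal sketch of vote-based comment ranking. The Wilson lower
# bound is a standard heuristic for up/down votes; the comments and
# vote counts are invented examples.
from math import sqrt

def wilson_lower_bound(ups, downs, z=1.96):
    """Lower bound of the 95% confidence interval for the 'useful' ratio."""
    n = ups + downs
    if n == 0:
        return 0.0
    p = ups / n
    return (p + z*z/(2*n) - z * sqrt((p*(1 - p) + z*z/(4*n)) / n)) / (1 + z*z/n)

comments = [
    ("thoughtful counter-argument",  60,  15),
    ("inflammatory one-liner",       90, 200),
    ("niche but careful correction",  5,   0),
]

# Heavily downvoted trolling sinks; well-supported comments rise,
# without anyone having to be blocked.
for text, ups, downs in sorted(comments,
                               key=lambda c: -wilson_lower_bound(c[1], c[2])):
    print(f"{wilson_lower_bound(ups, downs):.2f}  {text}")
```

One virtue of a rule like this is that it rewards breadth of support: a comment with five upvotes and no downvotes still ranks below one that sixty people found useful, so a handful of allies cannot vault a comment to the top.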

New models of social media, such as the decentralized platforms Mastodon and Urbit, represent interesting alternative routes by which top-down intervention may be avoided, since they remove any centralized authority. On a personal level, we might also advocate ‘muting’ over blocking, as muting does not separate the two parties quite so concretely.
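The difference is easy to state precisely. Below is a small sketch under simplified semantics assumed for illustration (real platforms differ in the details): muting hides a user from you alone, while blocking severs the tie in both directions.

```python
# A small sketch of the asymmetry between muting and blocking, under
# simplified semantics assumed for illustration.

muted = set()    # hidden from my feed; the other party is unaffected
blocked = set()  # the tie is severed in both directions

def appears_in_my_feed(author):
    return author not in muted and author not in blocked

def can_reply_to_me(author):
    # Muting leaves the channel open: a muted user can still respond,
    # and I can always choose to look again.
    return author not in blocked

muted.add("noisy_acquaintance")
blocked.add("persistent_troll")

print(appears_in_my_feed("noisy_acquaintance"))  # False - hidden from me
print(can_reply_to_me("noisy_acquaintance"))     # True  - dialogue possible
print(can_reply_to_me("persistent_troll"))       # False - tie fully cut
```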

Such measures are still in their infancy, and their efficacy is yet to be determined, but they certainly seem to provide a better alternative to block-lists or social media gatekeeping, since they empower the individuals involved in the discussion to decide for themselves what is or isn’t constructive. In any case, the responsibility currently falls to social media users themselves to promote inter-ideological discourse, and to prevent online discussion from being organized by faceless third parties. The lessons of Brooker’s universal block apply to social media: erasure is not a solution to disagreement, and may contribute to the wider issue of online echo chambers. Talk to your political opponents; you just might learn something.