The Future of Censorship as We Transition from Web2 to Web3

by Sharan Grandigae

The Web3 Rabbit hole | Apr 14, 2023

Censorship is an immensely nuanced topic, and debates around it have resurfaced every time a new form of communication has been invented: the Gutenberg press, radio, television, the internet, social media, and messaging apps like WhatsApp. The question has always been one of weighing the benefit of protecting an individual's right to express themselves freely against the harm that expression may cause society at large. And it has traditionally been the purview of a small group of individuals to decide what is to be censored and what isn't. But as we usher in the age of Web3, which allows truly anonymous people to publish on permanent, decentralised protocols, setting those rules is no longer the domain of these small groups but the responsibility of each of us involved in Web3's development.

As this is an old debate, there is a lot of material to rely upon, and the results of various past experiments can help us sidestep bad implementations. There are several questions to consider, and what follows is an exploration of each, one that I began while working on our own protocol.

Can we be objective?

While one may propose censorship as a reasonable measure to protect a group, it is sometimes very hard to determine where the line must be drawn. With child pornography, the line may be relatively clear, even though cultures around the world have long debated the age at which a child becomes an adult; this is among the easier cases. But suppose we are talking about raising a voice against a government body that is allowing mining in forests. That case is more complex: a rule that says "any voice raised against a government should be censored" fails, because the voice may in fact be serving the nation more faithfully than the government is. If it is not possible to draw up a rule that can act as an absolute standard, it is better not to create the rule in the first place.

Even in the United States, where free speech is held to be sacred, the evaluation of whether something can be said is measured against a scale of "value versus harm". So while most speech is not curtailed in any way, falsely yelling "Fire!" in a crowded theatre (the most famous example) is a case where the value of the speech is clearly outweighed by the harm of the stampede that may ensue. Even so, very few such restrictions are placed on free speech in the country, even after the debate reached new heights of fervour when people started burning the American flag in protest. And that is something to take note of.

What are the downsides?

What is sometimes counterintuitive is that censoring an offensive idea has downsides too, and it becomes important to weigh them against the upsides using the same value-versus-harm principle.

Firstly, when certain things are prevented from being expressed in public, the valid counterarguments are denied an audience as well. For example, if climate crisis deniers are prevented from expressing their ideas on the grounds that they harm society, climate crisis activists lose the opportunity to make new arguments or strengthen their existing positions in response.

Secondly, potential calamities could sometimes be avoided if the people who posted the initial offensive idea had the chance to assess the strength of the opposition. For example, if people posting messages that antagonise a group for its religious beliefs were not allowed to do so, they would never see how many people stand in solidarity with the oppressed, solidarity that might prompt them to reconsider their actions.

Thirdly, if certain things are prevented from being discussed, it may actually diminish society's understanding of the world it lives in. For example, there is a group in the US proposing the removal of "trigger words" or "trigger incidents" from texts that teenagers study in school because, and I am paraphrasing here, these remind children of the traumas of the world. But if such censorship is allowed, we may end up with children who grow into emotionally stunted adults, unprepared to face the realities of the world.

And finally, if an idea is censored, we may never know whether we live among people who secretly espouse or denounce it. There have been incidents in Uganda where gay people were targeted by a government that deemed such relationships "against the natural order". If the activists fighting for gay rights had not been allowed to post messages (albeit from other countries) detailing what was happening on the ground and how sting operations were carried out, the lives of many more innocent people would have been jeopardised.

Who gets to decide?

As mentioned, the ability to censor is a powerful tool. It often corrupts those entrusted with it, which can result in the persecution of individuals or specific groups. It can deplatform the voices of dissent. It can quash the opposition entirely. As the saying goes, power corrupts and absolute power corrupts absolutely. People are fallible, and it is in our nature to find ways to benefit ourselves above all others. So the question remains: who should decide the rules and the consequences of infractions?

Moreover, these rules need to change over time to accommodate the evolution of society's sensibilities. It was once common to refer to people of African origin with the "N" word, whereas it is completely unacceptable now, as we have largely stopped attributing negative qualities to race.

And then there are the even more complicated questions of what to do about literature that used the word but was published at a time when it was socially acceptable to do so. These are hard questions without simple answers, so who would be capable of deciding them? Not fallible humans, I would presume.

Is deplatforming the only solution?

Often the response to offensive content is to remove it or to deplatform the offender. But are these the only solutions? Should they be, when offenders can simply generate new keys and start posting again? Should they be, if what the person is saying may hold value for some group, or at some later point in time? What if only a portion of what has been shared is worth censoring rather than the entirety? And what if the material is considered censorship-worthy by some and not by others? These are all reasons to develop alternative and varying levels of censorship.

Some solutions that address these problems may include the following:

Solution 1: Develop a labelling system through which a censorship group indicates that it considers a specific type of content worth censoring. Viewers who agree with a group's policies could choose to view only content vetted by that group of censors. (A sketch of this idea, combined with Solution 3's user-side filtering, follows the list.)

Solution 2: Charge for content to be published, and make it cheaper or more expensive to keep it published based on the votes cast to keep it live versus to kill it. This only works if voters also pay to cast their votes. (This too is sketched after the list.)

Solution 3: A simpler alternative is to let users themselves determine what they can or cannot be exposed to. This of course assumes that people are capable of making their own decisions, and yes, I recognise that children and others may be among the exceptions. But then again, do we decide the rules of society based only on the lowest common denominator, or do we design on the basis that people have a right to information, whatever it is?

Solution 4: A more drastic alternative: if the censorship rules implemented on a platform are not working for a significant group of people, they can, in most cases, simply fork the blockchain and carry on with a different set of rules.
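To make Solution 1 (and the user-side filtering of Solution 3) concrete, here is a minimal TypeScript sketch of how such a labelling system might look. Everything in it, from the CensorLabel structure to the function names, is a hypothetical illustration rather than any existing protocol's API.

```typescript
// A minimal sketch of the labelling idea: censorship groups publish labels
// about content, and each viewer chooses which groups to trust.
// All names and structures here are hypothetical, not part of any real protocol.

type ContentId = string; // e.g. an IPFS-style content hash
type GroupId = string;   // identifier (or public key) of a censorship group

interface CensorLabel {
  content: ContentId; // the content being labelled
  group: GroupId;     // who issued the label
  category: string;   // e.g. "violence", "spam", "misinformation"
  issuedAt: number;   // Unix timestamp, so labels can be audited over time
}

// Labels are additive metadata: nothing is deleted from the underlying
// protocol; clients simply decide what to render.
const labels: CensorLabel[] = [];

function publishLabel(label: CensorLabel): void {
  // In a real system this would be signed by the group and stored on-chain
  // or in a shared index; here we just collect it in memory.
  labels.push(label);
}

// Solution 3: each user subscribes only to the censor groups whose
// policies they agree with, and filters content accordingly.
function isHiddenFor(
  content: ContentId,
  trustedGroups: Set<GroupId>,
  blockedCategories: Set<string>,
): boolean {
  return labels.some(
    (l) =>
      l.content === content &&
      trustedGroups.has(l.group) &&
      blockedCategories.has(l.category),
  );
}

// Example: a user trusts "child-safety-dao" and hides anything it labels "gore".
publishLabel({
  content: "QmExampleHash",
  group: "child-safety-dao",
  category: "gore",
  issuedAt: Date.now(),
});
console.log(
  isHiddenFor("QmExampleHash", new Set(["child-safety-dao"]), new Set(["gore"])),
); // true
```

The appealing property of this design is that nothing is ever deleted from the underlying protocol; censorship becomes a question of which labels each client chooses to honour.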
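Solution 2 can be modelled in the same spirit. The toy below assumes a publisher stakes a deposit, voters pay to vote, and the ongoing "rent" for staying published rises with net opposition; the pricing formula and every name here are illustrative assumptions, not a worked-out token design.

```typescript
// A toy model of Solution 2: publishing costs a deposit, and the ongoing
// cost of keeping content live moves with paid votes for and against it.

interface Listing {
  deposit: number;   // tokens staked by the publisher
  keepVotes: number; // paid votes to keep the content live
  killVotes: number; // paid votes to take it down
}

const BASE_RENT = 1; // baseline cost per period to stay published

// Each period, content becomes cheaper to keep if the community pays to
// defend it, and more expensive if the community pays against it.
function rentForPeriod(l: Listing): number {
  const pressure = l.killVotes - l.keepVotes;
  // Never below zero; grows linearly with net opposition (an assumption).
  return Math.max(0, BASE_RENT + pressure);
}

// Voters pay to vote, which funds the mechanism and deters spam voting.
function castVote(l: Listing, keep: boolean, payment: number): void {
  if (payment <= 0) throw new Error("votes must be paid for");
  if (keep) l.keepVotes += payment;
  else l.killVotes += payment;
}

// Content lapses when the publisher's deposit can no longer cover the rent.
function chargeRent(l: Listing): boolean {
  const rent = rentForPeriod(l);
  if (l.deposit < rent) return false; // taken down
  l.deposit -= rent;
  return true;
}

const post: Listing = { deposit: 10, keepVotes: 0, killVotes: 0 };
castVote(post, false, 5);         // someone pays 5 tokens to oppose the post
console.log(rentForPeriod(post)); // 6: rent rises with opposition
console.log(chargeRent(post));    // true: the deposit still covers it, for now
```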

There may also be other, better methods; the point is that deplatforming is often treated as the only solution, when better alternatives specific to the protocol in question can usually be found if we only look for them.

What mechanisms will allow for the evolution of the rules?

As mentioned above, the rules we set for our communities are also an expression of the sensibilities of the times we live in. As such, we need to have ways by which these rules are forced to constantly evolve.

Borrowing from the legal world, where the same problem applies to laws: many legal institutions across the world now recognise the "sunset clause", or more preferably the "self-destruct mechanism", which automatically sets every law up for withdrawal after a certain number of years. If someone believes a law is still valid, they have the right to bring it up and have it argued for, and only if it is found to be valid is it reinstated. The period for reconsideration can also vary with the field the law addresses; areas such as information technology would have a shorter self-destruct time than, say, marital law. Similarly, censorship rules should have such self-destruct timelines built into them, forcing everyone to periodically reconsider the rules they set up in the first place.
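As a rough illustration of how such self-destruct timelines might be encoded, here is a short TypeScript sketch. The field-specific durations and all the names in it are assumptions made for the example, not a recommendation.

```typescript
// A minimal sketch of sunset clauses for censorship rules: every rule
// carries an expiry, and lapses unless it is explicitly re-ratified.

interface Rule {
  id: string;
  description: string;
  field: string;     // e.g. "information-technology", "family-law"
  expiresAt: number; // Unix ms timestamp after which the rule lapses
}

// Different fields evolve at different speeds, so they self-destruct on
// different schedules (the specific numbers here are made up).
const SUNSET_YEARS: Record<string, number> = {
  "information-technology": 2,
  "family-law": 10,
};

const YEAR_MS = 365 * 24 * 60 * 60 * 1000;

function createRule(id: string, description: string, field: string): Rule {
  const years = SUNSET_YEARS[field] ?? 5; // default sunset period
  return { id, description, field, expiresAt: Date.now() + years * YEAR_MS };
}

// A rule is only enforced while it is alive; expired rules are simply
// ignored until someone argues for them again and re-ratifies them.
function isActive(rule: Rule, now: number = Date.now()): boolean {
  return now < rule.expiresAt;
}

// Re-ratification restarts the clock, forcing periodic reconsideration.
function reratify(rule: Rule): Rule {
  const years = SUNSET_YEARS[rule.field] ?? 5;
  return { ...rule, expiresAt: Date.now() + years * YEAR_MS };
}

const rule = createRule(
  "no-doxxing",
  "Personal addresses may not be posted",
  "information-technology",
);
console.log(isActive(rule)); // true for the next two years, then it lapses
```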

Does every protocol need to develop censorship mechanisms?

While the answer to this sounds obvious, it may behove us to examine it further. Protocols often swing between having no censorship at all and building out extremely stringent measures. The ideas surrounding censorship should be weighed as carefully as those around free speech. So the question worth asking first is whether your protocol requires censorship at all, seen in the larger context of the sandbox it is playing in.

For example, a storage protocol like ours might consider censorship because media of any nature could potentially be stored, encrypted, on our platform. However, our customers are essentially developers who build apps on top of our protocol, and we have built no native viewers through which a stranger could browse another person's content and run into anything objectionable. The way I see it, it is not the purview of our protocol to implement censorship mechanisms, and it would actually hurt our brand if we provided mechanisms that allowed external entities to censor content without consent. Our sole responsibility is to developers and serving them.

Nonetheless, if one of our customers chooses to build a decentralised social-networking platform on top of our protocol, it would be responsible of them to build the mechanisms by which the different levels of censorship discussed above are implemented.

Conclusion

Censorship is a field filled with nuances, one that punishes and rewards every action for or against it, and there is no single solution that works in every scenario. Users will soon choose to join or leave a platform based on the experience it gives them more than on any set of features or functionality it provides. Censorship is a key part of that experience, and its importance cannot be overstated. So if a mechanism needs to be built to handle this specific subject, it would be time well spent, because the steps we take today regarding censorship in Web3 will only strengthen the foundations of the infinitely hopeful world we look forward to building.

I wish you the very best in your individual efforts and I invite you to reach out if you would like to discuss anything related to this topic. Please message me directly at @sharan01x on Twitter or leave a comment here and I’ll be sure to respond.

Copyright © Arcana Technologies Ltd. All rights reserved.
