Rebuilding Our Broken Internet


On January 16th, the UK newspaper The Guardian published an article titled ‘Banning Trump won’t fix social media: 10 ideas to rebuild our broken internet - by experts’. The article points out the sorry state of today’s social networks and calls for policies and actions to ameliorate their shortcomings. Perhaps unsurprisingly, given that the compiler of these proposals is a Silicon Valley correspondent writing for The Guardian, the recommendations seem fundamentally misguided, and counter-proposals are easy to make. Let us take each of the ‘10 “expert” ideas’ in turn and argue the case for its exact opposite. We might find, in the end, that those counter-recommendations are more reasonable and productive.

The author posits that ‘There is something fundamentally broken in social media’. Indeed, the growing tensions on social networks over recent weeks, months, and years are undeniable. These platforms are under increasing pressure from an increasing number of sides to enact an increasing array of policies to address an increasing array of conflicts playing out on, through, or with the unwitting help of social media. Something needs to change - and something will, one way or another. ‘You can avoid reality, but you cannot avoid the consequences of avoiding reality.’

Although there is currently no official capacity to ‘legally require “broadcasting [to] serve the public interest, convenience and necessity”’, major social media companies are already acting along these lines. They employ vast numbers of people to curate their platforms, searching for ‘problematic’ content to remove and working to cleanse the online space of any undesirable presence. Who counts as ‘problematic’ or undesirable is decided by the platforms themselves, often along politically expedient or ideological lines. When the ‘expert’ says that ‘Social media companies should tune into the frequency of democracy’, we first need to realise that this is exactly what they are already doing. They are acting remarkably democratically: removing minority-aligned content according to the wishes of the majority. The current tensions on social media are, to a significant extent, the results of such censorship. To alleviate those tensions, social media companies should fire these ‘librarians’ and aim for an ever-less curated experience.

The ‘expert’ here argues that teachers are asked to ‘save democracy’ through what they do, but are not provided with the tools and funding they need to do so, meaning that their harmonizing influence is overpowered by hateful and divisive messaging. In fact, teachers are not underfunded, at least not in the US, as documented meticulously by Corey DeAngelis. Given this reality (and given the view that teachers have a great deal of influence on the young), we reach the opposite conclusion: that teachers are, in fact, the ones contributing to the problem of general discord in society. The evidence points in this direction as well. As reported by Campus Reform and elsewhere, universities and other schools have played a significant role in stoking societal tensions and fomenting discord within and among social groups. To neutralize this threat, governments should start by disempowering teachers: ‘funding students instead of systems’ and ‘seizing the endowments’.

According to the ‘expert’ here, minority voices ‘have been historically and structurally censored in law’ as well as suppressed by police violence. Contrary to the ‘expert’, however, this means that we should act as if the real limitations of the First Amendment did not exist. The Constitution, including its amendments, is worthless on its own and only has social power to the extent that people at large believe in it. If people do not care about the Bill of Rights, and the legislature and the Supreme Court become a reflection of such people, there is nothing to prevent unconstitutional, rights-violating laws from being passed and receiving SCOTUS’ blessing. To preserve the human rights enumerated in the Bill, we should pretend that this limitation does not exist and that the Bill is all-powerful on its own, independently of what tyrannically-minded people would like to see enacted. Acting as if the First Amendment were an absolute fact of life, which no one can deny or infringe upon in any circumstances, would do wonders to ensure that everyone’s free speech is protected and guaranteed to the fullest extent. When it comes to social power, ‘fake it till you make it’ is not a cliché but an effective tactic.

It is immoral to try to fix the world when one’s own backyard needs fixing first. Otherwise, one risks exporting one’s issues elsewhere, where there is no telling how much more damage they will do. Social media platforms, where grave social issues are being intensified, need to focus on finding a way to defuse these tensions ‘at home’ before allowing themselves to get involved in the socio-political situation beyond the US and Europe. Anyone in the rest of the world (as well as anyone in the West) is free to disengage from these platforms and not include them in their lives, should they fail to reflect, or prove offensive to, the sensibilities of that person or group. Focusing on solving problems as locally as possible is more likely to create a good product and a healthy network that people will want to use, even globally.

As the ‘expert’ says, ‘We cannot fix what we do not understand.’ At the moment, curation and moderation decisions on social media platforms are made primarily by unknown figures in corporations’ ‘safety’ departments, alongside often questionable ‘journalists and researchers’ with dubious histories. For example, a significant number of deplatformings are carried out in response to the targeted group being featured on a list of ‘hate groups’ compiled by the Southern Poverty Law Center, an institution known for its strong political leanings and interests. A ‘hate group’ label attributed to an organization by such figures therefore needs further investigation and should not be taken at face value, even though it may often be justified. Instead, given the far-reaching consequences of their actions, the people involved in selecting individuals and groups for virtual removal, as well as those doing the removing, should operate as transparently as possible and be exposed to as much public scrutiny as possible.

Again, the impulse behind the ‘expert’ recommendation is sound: ‘Companies should evaluate and make changes to how they recommend content and their entire advertising system.’ Currently, social media platforms strongly privilege ‘official’ corporate sources of news, whether truthful or not. ‘Fake news’, ‘hate’, and ‘indoctrination’ can be very effectively promoted through these protected venues, with the approval and promotion of the platforms themselves. This should change. Recommendation algorithms should return to their versions from before this issue became political - and be based on a mix of popularity and relevance instead.
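To make the alternative concrete, here is a minimal sketch of the kind of popularity-plus-relevance ranking being argued for. The names, weights, and scoring formula are illustrative assumptions, not any platform’s actual code; the point is only that a post’s score depends on engagement and on how well it matches the reader’s interests, with no term privileging ‘official’ sources.

```python
import math
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    relevance: float  # 0..1: how well the post matches this reader's interests

def rank(posts: list[Post], w_pop: float = 0.5, w_rel: float = 0.5) -> list[Post]:
    """Order posts by a weighted mix of popularity and relevance only.

    Deliberately absent: any boost for 'trusted' or 'official' outlets.
    """
    def score(p: Post) -> float:
        popularity = math.log1p(p.likes + 2 * p.shares)  # log damps runaway virality
        return w_pop * popularity + w_rel * p.relevance

    return sorted(posts, key=score, reverse=True)
```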

Algorithms by themselves cannot tell what information is ‘accurate’ and what is not. They can only observe 1) the popularity of a given piece of information, and 2) its similarity to a ‘trusted’, pre-selected source taken as a benchmark for ‘truth’ or ‘accuracy’. As no one can escape their own bias, and as positions of power attract malicious agendas, such ‘trusted’ pre-selected sources are bound to be partisan as well, and therefore the whole process of finding ‘accuracy’ is bound to be compromised.
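A toy illustration of the second signal, using assumed names and a deliberately crude similarity measure: an ‘accuracy’ score built this way is, by construction, just a similarity score against whichever sources were pre-selected as ‘trusted’, so the verdict moves with the choice of benchmark rather than with the truth.

```python
def overlap(a: str, b: str) -> float:
    """Crude word-overlap (Jaccard) similarity between two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def accuracy_score(claim: str, trusted: list[str]) -> float:
    """'Accuracy' as the best similarity to a pre-selected benchmark.

    Nothing here measures truth; the output is only as neutral as the
    list of 'trusted' sources fed in.
    """
    return max(overlap(claim, source) for source in trusted)

claim = "the new policy reduced unemployment"
benchmark_a = ["official report: the new policy reduced unemployment sharply"]
benchmark_b = ["independent review: the new policy had no effect on unemployment"]

# The same claim scores differently depending on who curated the benchmark.
print(accuracy_score(claim, benchmark_a))
print(accuracy_score(claim, benchmark_b))
```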

Instead, accuracy is best approximated through lively debate, with as many uncurated influences mixing together as possible. Those who claim to speak the truth, and to be authoritative enough to justify silencing their opposition, should be demoted from the positions of power they hold today and given no special recognition.

Many of the rules currently in place across social media platforms claim to target ‘harassment’ or ‘harm’. In fact, many of them prohibit behaviour which constitutes neither. Some go even further and assert that ‘speech is violence’ and that mere disagreement can constitute a physical attack. Threats of physical violence are illegal in every jurisdiction, as is open incitement.

Many people seem to forget or ignore that most, if not all, social media sites have comprehensive options for disengaging from and avoiding unwanted attention or interaction. Anyone can curate their own experience on any social media platform so as not to be exposed to instances of ‘hate’. Many users go to great lengths to mute or block as many people as possible outside of a narrow circle of those with whom they want to interact.

Punitive rules that overreach into conversations where no real-world harm is being done or threatened should be scaled back. Not all platforms intend to present themselves as open spaces for people to gather and communicate about any issue in general, but those that do should not limit such discussion along political lines using dishonest ‘harassment’, ‘hate’, and ‘harm’ justifications.

‘Platforms [should] enforce the rules, transparently and with immediacy for all users, and hold them accountable for their behavior online’, says the ‘expert’. Whether that is good advice, however, depends on what those rules are. If, as we have seen in the previous section, they amount to preventing political discussion or blocking any confrontation of one’s ideas, the opposite should be advised. If it is not feasible to change the written rules (perhaps for fear of political backlash), platforms can simply decline to enforce them. When questioned by nefarious actors, they can be as obstructive as they wish and deny that this is happening, since such questioning is not being conducted in good faith in the first place. Not enforcing bad written rules is a suboptimal solution - it would be better to remove them altogether. If that is not an option, though, a suboptimal step is better than none at all. Along with reinstating those already deplatformed under these bad rules, it would help immensely in rebuilding healthy public discourse.

Accessibility of platforms for users with disabilities is a wonderful thing. Many platforms are doing their best (whether for genuine or superficial reasons) to make their interfaces and functions as accessible to medically non-typical users as possible - and should be commended for doing so. At the same time, this is not an issue that currently divides people along political lines or causes tensions between groups. The change needed in the socio-political online space will likely come through innovation and competition. Many of today’s giant social media platforms either are not in a safe enough political position to make significant positive changes to their operations or are unwilling to do so. This can only change through outside forces - new technologies, new platforms, new ways to communicate and engage in public discourse. Such innovative solutions always start small and often lack the capacity to make their user experience ideal for everyone from the outset. As a new platform grows, such features are added as soon as possible, especially under the threat of public criticism otherwise. Limiting this progress through short-sighted, though well-meaning, policies would only lead to further degradation of civil discourse.

Public discourse does not need policing. Real-world violence needs policing. Law enforcement agencies are free to search through any platform to find potential real-world threats (be it interpersonal violence, terrorism, fraud, or something else). Contrary to the ‘expert’, ‘the concept of collective responsibility’ is not ‘necessary for a functioning society’. The precise opposite is true: the concept of collective responsibility is a plague upon a functioning society. Individuals need to be held responsible for their actions and, under almost all circumstances, cannot be held responsible for the actions of anyone else without their prior consent. Section 230, which to an extent removes such responsibility in the context of internet platforms, should not be amended or revoked. Instead, other laws should be reformed so that individuals, rather than platforms, are held responsible even for illegal content online (threats, incitement, and the like), which platforms should not be required to take down automatically, but only upon request from law enforcement agencies. Such changes would create a fairer ‘public square’, where responsibility is in most cases clearly attributed rather than chaotically fragmented or diffused.

-

If realized, these changes would not make the internet perfect. But they would repair some of the damage done to the social fabric by misguided or nefarious policies enacted across the online sphere in recent years. Conversely, were we to follow the advice of the ‘experts’ featured in The Guardian, public discourse online would only deteriorate further, until a breaking point is reached and a crisis ensues.
