Filtering the unfilterable: Why the internet should not be censored

Asking who should be allowed to filter the internet rests on a number of assumptions: that the internet should be filtered, that it can be filtered, and that filtering is accurate and effective. This essay examines the flaws in these assumptions, noting that the internet was designed to route around blockages and therefore foils attempts at censorship (this essay equates ‘filtering’ with censorship, though the internet is, of course, technologically a filtering mechanism in the way it routes data packets). While many entities can apply filtering to the internet, dealing with undesirable information is better left to individuals. Problematic information is therefore better dealt with at its source than by attempting to constrain the medium of the internet.

The internet, as originally designed and developed, had the fundamental goal of withstanding disruptions to its connections and correcting for them. It has therefore been said that the internet treats attempts at censorship as just one more form of network damage to be routed around (Gilmore, 2011). Also of prime importance was the original decision to make the internet open and agnostic, so that anyone who so desired could create applications and extensions for it. Since its initial deployment, the internet has not only grown exponentially but has developed in numerous, often unexpected ways, such that the ‘internet’ is now a loosely applied term. Many would view the World Wide Web as the internet, though it is simply one of a number of protocols created within the framework of the internet (and one which is itself subject to this ongoing development and transformation – witness, for example, the changing standards in HTML, the ‘language’ of the Web). Other protocols include older uses such as email, FTP and newsgroups, as well as newer ones such as peer-to-peer sharing.

Asking who should be allowed to filter this network of networks and its multiple applications begs the underlying question of whether filtering can actually take place on a network designed to resist such interference. While it is not true that the internet was intended to withstand a nuclear war, it was intended to be a decentralized network, resistant to interruptions and able to reroute around blockages (Leiner et al., 1997). The network as a whole therefore cannot be controlled by a single government or entity (Hogan, 1999, p. 432).

Certainly, there have been attempts to filter parts of the internet by various actors including governments, corporations, and institutions, though these often have adverse side effects such as over- or under-filtering. Brown (2008, p. 5) notes that blacklists which filter sites by their Internet Protocol addresses are both easy to evade and prone to blocking thousands of innocent sites for every targeted site. In Australia, ASIC recently blocked 250,000 sites unintentionally when banning a single site (Lawrence, 2013). Brown describes a number of other blocking technologies, but concludes (2008, p. 8) that these are expensive and imprecise. Villeneuve (2006) also notes the unintended consequences of filtering, both from the inaccuracies of the methods used and from ‘mission creep’, whereby the scope of filtered material expands over time for various reasons. Hogan (1999, p. 446) notes a number of issues with Singapore’s internet filtering, and concludes that it would be better to forgo some control in return for the benefits of the internet for economic growth. It should also not be forgotten that filtering represents the imposition of a power structure and its implicit assumptions onto the internet, as Bambauer (2008, p. 26) pertinently comments. These values, according to Hogan (1999, p. 432), differ greatly across the world. Filtering is thus not a benign concept, and is increasingly being applied in a non-transparent manner: it can be invisible, unaccountable, and can involve ‘soft’ censorship when different users see different information (Burnett & Feamster, 2013, p. 85). Bambauer (2013, p. 30) traces a progression of censorship that has now resulted in the process being undertaken by democratic nations in an increasingly outsourced mode that is opaque and thus less open to criticism.
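The over-blocking that Brown and Lawrence describe follows directly from how IP-based blacklists work: many unrelated sites commonly share a single address on a shared host, so banning that address bans them all. The short sketch below is purely illustrative of that principle; the domains, addresses, and hosting table are hypothetical and do not describe any real filtering system.

# Illustrative sketch of IP-based blacklist over-blocking.
# All domain names and addresses below are hypothetical placeholders.
SHARED_HOST = {
    "banned-example.org":      "203.0.113.7",   # the intended target
    "community-forum.example": "203.0.113.7",   # unrelated sites sharing
    "small-business.example":  "203.0.113.7",   # the same hosting IP
    "elsewhere.example":       "198.51.100.9",
}

blacklist = set()

def ban(domain):
    """Ban a single site by blacklisting the IP address it resolves to."""
    blacklist.add(SHARED_HOST[domain])

def is_blocked(domain):
    """Traffic is dropped whenever a domain resolves to a blacklisted IP."""
    return SHARED_HOST[domain] in blacklist

ban("banned-example.org")
print(is_blocked("community-forum.example"))  # True: collateral blocking
print(is_blocked("elsewhere.example"))        # False: a different address

Banning one site thus silently removes every other site behind the same address, which is how an attempt to block a single site can take down hundreds of thousands of others.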

Filtering is consequently a cat-and-mouse game between the (often speedy) circumvention of filtering methods and the imposition of new ones (Maitland, Thomas & Tchouakeu, 2012, p. 294). Armstrong and Forde (2003, p. 213) list numerous ways criminals can hide themselves online to avoid filtering, and the same principle applies to anyone else wishing to bypass these controls, from digital pirates to human rights activists. In a study of filtering by the Pakistan government, Nabi (2013, p. 6) found that Virtual Private Networks or web proxies easily bypassed the censorship. Even in China, the country most often cited as the exemplar of filtering, bypassing the controls is a frequent and easy activity (August, 2007). Richet (2013, pp. 37–38) also finds not only that censorship makes the censored material better known and more desirable, but that filters can easily be circumvented by even simple methods such as indirect references and misspelled trigger words.
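The fragility Richet describes is visible in even the crudest keyword filter: a rule that matches blocked phrases verbatim is defeated by a trivial misspelling or an indirect reference. The sketch below is illustrative only; the blocked phrase and messages are hypothetical placeholders, not drawn from any real system.

# Illustrative sketch of a naive keyword filter and its trivial circumvention.
BLOCKED_TERMS = {"forbidden topic"}   # hypothetical trigger phrase

def is_censored(message: str) -> bool:
    """Block a message only if it contains a blacklisted phrase verbatim."""
    text = message.lower()
    return any(term in text for term in BLOCKED_TERMS)

print(is_censored("Let's discuss the forbidden topic tonight"))  # True: exact match
print(is_censored("Let's discuss the f0rbidden t0pic tonight"))  # False: misspelling slips through
print(is_censored("Let's discuss that subject tonight"))         # False: indirect reference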

How an international network can be effectively filtered by national entities is also problematic. Which information or subjects are considered offensive varies across numerous national jurisdictions, many of which disagree about what these may be. Klein (2002, p. 194) describes the conflict of international jurisdiction and governance as a mismatch in which geographically bound laws founder in a ‘spaceless’ environment.

Filtering is also often confused with eliminating the matter of concern, whereas it really only addresses how that matter is accessed. Child pornography, terrorism information, or discussions about democracy have existed, and will continue to exist, regardless of whether sites are blocked on the internet. Certainly, the internet has extended the easy availability of these and many other subjects, but efforts to filter them out usually amount to closing one gate in an endless fence of open gates.

Perhaps the strongest argument against filtering at all is the way the internet rebalances power between authorities and suppressed or dissident voices (Dalegaard Hansen, Thompson, Dueholm Jensen, & Andersen, 2012, p. 9) by allowing equal access to information and a leveling of social groups in cyberspace that may not exist in the offline world. Dalegaard Hansen et al. specifically discuss the situation in China, but such power imbalances exist in all societies, and it is precisely for this reason that the internet is most powerful when it is unfiltered.

It is technically possible to apply a filter to a part of the internet, just as it is technically possible to attach virtually any device to the internet (whether it be a person, a computer, a sensor, a camera, and so on), but that filter can only act on the portion of the internet to which it is connected. The rest of the internet will, as it was agnostically designed to, simply ignore the filter. Mueller, Mathiason, and McKnight (2004, p. 8) comment that the internet potentially consists of anything that can communicate or transmit information. This sobering thought indicates the scale of what would have to be tackled to effectively filter the entirety of this amorphous and malleable construct. Lessig (1998, p. 5) discusses how regulation occurs on the internet and observes that the code from which the internet is constructed imposes a regulatory architecture. Yet that same architecture also limits the very forces that would filter the internet, since it embeds the means of routing around their controls.

Who should be allowed to filter the internet? The underlying structure and damage-resilient origin of the internet mean that anyone can filter it. The same openness, however, also allows anyone else to bypass those filters. While there are legitimate reasons to filter the internet (such as legal statutes) as well as less legitimate ones (broadly, any suppression of information that is considered ‘harmful’ for political or social reasons), the wider this filtering becomes, the less effective and accurate it is. The answer to these conflicts is not a blanket policy of filtering, but rather the addressing of each issue separately to determine the best method (if, indeed, one is actually needed) of dealing with it. Filtering seeks to resolve them all in a simplistic manner that is counterproductive.

Asking who should filter the internet requires a nuanced answer: numerous authorities, such as governments, organizations, websites, and individual users, claim a legitimate right to do so. While these claims are often valid, filtering itself is an imprecise control, inaccurate and frequently opaque to democratic criticism. More importantly, filtering is easily subverted or bypassed, since the structure of the internet allows anyone to act either as a filterer or as an avoider of filtering. ‘Filtering the internet’ is therefore the wrong concept, since it tries to apply to the internet a method that the internet itself avoids. The ‘code’ of the internet denies the enforcement of such a political solution. More effective would be to address inappropriate or illegal information both by targeting its producers before it reaches the internet, and by societal pressure on the individual to consider the implications of her own internet usage. Rather than the method of transmission, perhaps a closer look is warranted at the message itself.

References

Armstrong, H. L., & Forde, P. J. (2003). Internet anonymity practices in computer crime. Information Management and Computer Security, 11(5), 209–215. doi:10.1108/09685220310500117
August, O. (2007, October 23). The Great Firewall: China’s misguided — and futile — attempt to control what happens online. Wired, 15(11). Retrieved from http://www.wired.com/politics/security/magazine/15-11/ff_chinafirewall?currentPage=all
Bambauer, D. (2008). Filtering in Oz: Australia’s foray into internet censorship (No. 125) (pp. 1–30). Brooklyn: Brooklyn Law School Legal Studies. Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1319466
Bambauer, D. (2013). Censorship v3.1. IEEE Internet Computing, 17(3), 26–33.
Brown, I. (2008). Internet censorship: Be careful what you ask for. In S. Kirca & L. Hanson (Eds.), Freedom and Prejudice: Approaches to Media and Culture. Istanbul: Bahcesehir University Press. Retrieved from http://ssrn.com/abstract=1026597
Burnett, S., & Feamster, N. (2013). Making sense of internet censorship: A new frontier for internet measurement. ACM SIGCOMM Computer Communication Review, 43(3), 84–89.
Dalegaard Hansen, A., Thompson, H., Dueholm Jensen, P., & Andersen, K. (2012). Internet censorship and the common discourse of political reform. Roskilde. Retrieved from Roskilde University Digital Archive.
Gilmore, J. (2011). John Gilmore, entrepreneur and civil libertarian. Retrieved September 6, 2013, from http://www.toad.com/gnu/
Hogan, S. (1999). To Net or not to Net: Singapore’s regulation of the internet. Federal Communications Law Journal, 51(2), 429–447.
Klein, H. (2002). ICANN and internet governance: Leveraging technical coordination to realize global public policy. The Information Society, 18, 193–207. doi:10.1080/01972240290074959
Lawrence, J. (2013, June 5). ASIC admits to blocking another 250,000 sites. Electronic Frontiers Australia. Retrieved September 4, 2013, from https://www.efa.org.au/2013/06/05/asic-blocked-250000-sites/
Leiner, B., Cerf, V., Clark, D., Kahn, R., Kleinrock, L., Lynch, D., & Wolff, S. (1997). A brief history of the internet. e-Oti: On The Internet. Retrieved September 5, 2013, from http://www.isoc.org/oti/printversions/0797prleiner.html
Lessig, L. (1998). The laws of cyberspace. Presented at Taiwan Net ’98, Taipei. Retrieved from http://www.lessig.org/content/articles/works/laws_cyberspace.pdf
Maitland, C., Thomas, H. F., & Tchouakeu, L.-M. (2012). Internet censorship circumvention technology use in human rights organizations: an exploratory analysis. Journal of Information Technology, 27(4), 285–300. doi:10.1057/jit.2012.20
Mueller, M., Mathiason, J., & McKnight, L. (2004). Making sense of “internet governance”: Defining principles and norms in a policy context. Retrieved from http://www.wgig.org/docs/ig-project5.pdf
Nabi, Z. (2013). The anatomy of web censorship in Pakistan. Presented at the 3rd USENIX Workshop on Free and Open Communications on the Internet (FOCI ’13), Washington, DC. Retrieved from http://arxiv.org/pdf/1307.1144.pdf
Richet, J.-L. (2013). Overt censorship: A fatal mistake? Communications of the ACM, 56(8), 37–38.
Villeneuve, N. (2006). The filtering matrix: Integrated mechanisms of information control and the demarcation of borders in cyberspace. First Monday, 11(1). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/1307/1227
