Anne Henochowicz, China Digital Times Internet censorship is accomplished not only through technological means, but also through manual controls enforced by the state on the private sector and individuals. In China, central and local government bodies issue directives to Internet companies concerning information and activity that should be deleted, filtered, or monitored. Companies must comply with these instructions in order to remain viable, while individual users who discuss “sensitive” issues online may find their social media posts are made invisible or removed, and may even have their accounts shut down. Government directives are often vague, encouraging self-censorship and overcompensation to stay safely away from the invisible “red line.”
China Digital Times monitors Chinese Internet censorship through several ongoing projects. This presentation will highlight our work on two primary projects: (1) Directives from the Ministry of Truth, which tracks censorship and propaganda directives issued by central and local government bodies to Internet companies, including news Web sites and portals; and (2) Sensitive Words, which records keywords filtered from the search results of the popular microblogging platform Sina Weibo. I will also introduce the Grass-Mud Horse Lexicon, a wiki of creative, subversive Chinese Internet language created to skirt censorship and mock propaganda.
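The core of a Sensitive Words test can be approximated in a few lines: probe the platform's search with candidate keywords and flag responses that carry a block notice instead of results. A minimal sketch in Python, assuming the result pages have already been fetched; the notice string used here is the message Sina Weibo has historically shown for filtered searches and should be treated as an assumption that may change over time:

```python
# Block notice Sina Weibo has historically shown for filtered searches
# ("According to relevant laws, regulations and policies, search results
# are not displayed") -- an assumption, subject to change.
BLOCK_NOTICE = "根据相关法律法规和政策，搜索结果未予显示"

def is_filtered(page_html: str) -> bool:
    """Return True if the fetched search page carries the censorship notice."""
    return BLOCK_NOTICE in page_html

def classify(results: dict) -> list:
    """Given {keyword: page_html} for a batch of probes, return the
    keywords whose searches appear to be filtered."""
    return [kw for kw, html in results.items() if is_filtered(html)]
```

A tester would run `classify` over a fresh crawl at regular intervals to record when keywords enter or leave the filtered set.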
Bill Marczak, University of California, Berkeley, and Citizen Lab, University of Toronto; Nicholas Weaver, International Computer Science Institute (ICSI) and University of California, Berkeley; Jakub Dalek, Citizen Lab, University of Toronto; Roya Ensafi, Princeton University; David Fifield, University of California, Berkeley; Sarah McKune, Citizen Lab, University of Toronto; Arn Rey; John Scott-Railton and Ron Deibert, Citizen Lab, University of Toronto; Vern Paxson, International Computer Science Institute (ICSI) and University of California, Berkeley On March 16, 2015, the Chinese censorship apparatus employed a new tool, the “Great Cannon,” to engineer a denial-of-service attack on GreatFire.org, an organization dedicated to resisting China’s censorship. We present a technical analysis of the attack and what it reveals about the Great Cannon’s workings, underscoring that in essence it constitutes a selective nation-state Man-in-the-Middle attack tool. Although it shares some code similarities and network locations with the Great Firewall, the Great Cannon is a distinct tool, designed to compromise foreign visitors to Chinese sites. We identify the Great Cannon’s operational behavior, localize it in the network topology, verify its distinctive side-channel, and attribute the system as likely operated by the Chinese government. We also discuss the substantial policy implications raised by its use, including the potential imposition on any user whose browser might visit (even inadvertently) a Chinese web site.
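The essence of the attack can be illustrated with a toy model: an in-path system passes most traffic through untouched but, for a small fraction of requests to targeted script URLs, answers with an attack payload in place of the requested content. A hedged sketch, in which the interception rate (1.75% here) and the payload string are purely illustrative values, not a reimplementation of the actual system:

```python
import random

# Illustrative stand-in for the injected script; the real payload
# repeatedly requested resources from the DoS target.
MALICIOUS_JS = "/* attack payload directing browsers at the DoS target */"

def cannon_response(benign_js: str, rate: float = 0.0175, rng=random) -> str:
    """Toy model of a selective in-path injector: with probability `rate`,
    act as a man-in-the-middle and return the attack payload instead of
    the requested script; otherwise pass the response through unmodified."""
    return MALICIOUS_JS if rng.random() < rate else benign_js
```

Because most requests are untouched, the injector is hard to notice casually, which is why the paper's localization relies on careful probing and side-channel measurements rather than payload inspection alone.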
Jeffrey Knockel, University of New Mexico and Citizen Lab, University of Toronto; Masashi Crete-Nishihata, Jason Q. Ng, and Adam Senft, Citizen Lab, University of Toronto; Jedidiah R. Crandall, University of New Mexico Social media companies operating in China face a complex array of regulations and are liable for content posted to their platforms. Through reverse engineering we provide a view into how keyword censorship operates on four popular social video platforms in China: YY, 9158, Sina Show, and GuaGua. We also find keyword surveillance capabilities on YY. Our findings show inconsistencies in the implementation of censorship and the keyword lists used to trigger censorship events between the platforms we analyzed. We reveal a range of targeted content including criticism of the government and collective action. These results develop a deeper understanding of Chinese social media via comparative analysis across platforms, and provide evidence that there is no monolithic set of rules that govern how information controls are implemented in China.
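The client-side keyword checks described above boil down to matching each outgoing message against a downloaded list. A minimal sketch assuming simple substring semantics; the platforms studied differ in what happens on a match (blocking, replacement, or surveillance-only logging), so the `allowed` flag here is only one possible outcome:

```python
def check_message(message: str, keyword_list: set) -> tuple:
    """Simulate a client-side keyword check: return (allowed, matched),
    where `matched` lists every banned keyword found as a substring of
    the message and `allowed` is True only when nothing matched."""
    hits = [kw for kw in keyword_list if kw in message]
    return (len(hits) == 0, hits)
```

Comparing the extracted keyword lists themselves, platform by platform, is what surfaces the inconsistencies the abstract describes.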
Jon Penney, Oxford Internet Institute, Berkman Center for Internet & Society, and Citizen Lab Since the Snowden revelations about NSA/PRISM and other governmental online surveillance operations, understanding the impact and potential harms of such surveillance and related practices, particularly any regulatory chilling effects, has taken on an even greater urgency. This talk provides an update on recent developments in both constitutional litigation challenging such surveillance practices and empirical research aimed at understanding their impact, including a new study that examines surveillance-related chilling effects stemming from NSA surveillance.
Vasilis Ververis, Humboldt University Berlin; George Kargiotakis; Arturo Filastò, The Tor Project; Benjamin Fabian, Humboldt University Berlin; Afentoulis Alexandros The Greek government has recently initiated large scale content blocking that leads to Internet censorship. In this article we analyze the techniques and policies used to block content of gambling websites in Greece and present the implications of the detected underblocking and overblocking. We have collected results probing eight major broadband and cellular Internet Service Providers (ISPs) that are a representative sample of Internet usage in Greece, and investigate the methods and infrastructure used to conduct the content filtering. The results of this study highlight issues related to how transparently Internet filtering is implemented in democratic countries and could indicate the presence of unfair competition between ISPs.
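Underblocking and overblocking reduce to set comparisons between the regulator's mandated blocklist and the domains an ISP is actually observed to block in probe measurements. A short sketch with hypothetical domain names:

```python
def audit_blocking(mandated: set, observed_blocked: set) -> tuple:
    """Compare a mandated blocklist against the set of domains that
    measurement probes found blocked at a given ISP."""
    underblocking = mandated - observed_blocked   # ordered blocked, still reachable
    overblocking = observed_blocked - mandated    # blocked without any mandate
    return underblocking, overblocking
```

Running this audit per ISP is also what makes divergent blocking between providers (and hence potential unfair competition) visible.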
Andrew Hilts, Citizen Lab, University of Toronto and Open Effect; Christopher Parsons, Citizen Lab, University of Toronto Documents released by Edward Snowden have revealed that the National Security Agency, and its Australian, British, Canadian, and New Zealand equivalents, routinely monitor the Internet for the identifiers that are contained in advertising and tracking cookies. Once collected, the identifiers are stored in government databases and used to develop patterns of life, or the chains of activities that individuals engage in when they use Internet-capable devices. This paper investigates the extent to which contemporary advertising and analytics identifiers that are used in establishing such patterns continue to be transmitted in plaintext following Snowden’s revelations. We look at variations in the secure transmission of cookie-based identifiers across different website categories, and identify practical steps for both website operators and ad tracking companies to take to better secure their audiences and readers from passive surveillance.
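Whether a tracking identifier is exposed to a passive eavesdropper comes down to whether it ever crosses the wire over plain HTTP. A minimal sketch over a hypothetical traffic log of (url, cookie-names) records; the log format is an assumption, not the paper's actual data model:

```python
from urllib.parse import urlparse

def plaintext_identifiers(requests_log) -> set:
    """Given (url, cookie_names) pairs from a traffic capture, return the
    cookie names that travelled over unencrypted HTTP and are therefore
    readable by a passive network observer."""
    exposed = set()
    for url, cookie_names in requests_log:
        if urlparse(url).scheme == "http":
            exposed.update(cookie_names)
    return exposed
```

An identifier is only safe from passive collection if every request carrying it uses HTTPS, which is why the remediation steps fall on both website operators and the ad-tracking companies setting the cookies.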