A Harvard article: Censorship in China and field test
23-04-2014, 07:47 PM (This post was last modified: 23-04-2014 10:08 PM by HU.Junyuan.)
I expected more, yet found it pretty much flawed.

Its very existence provides ammunition for what it criticises.

Anyway, if you have the patience to read through this 18-page article, you might find something interesting.

-----

How Censorship in China Allows Government Criticism but Silences Collective Expression

2013 18-page article

http://gking.harvard.edu/publications/ho...expression

2014 30-page article

http://gking.harvard.edu/files/gking/fil...ment_0.pdf

-----

Abstract:

We offer the first large scale, multiple source analysis of the outcome of what may be the most extensive effort to selectively censor human expression ever implemented. To do this, we have devised a system to locate, download, and analyze the content of millions of social media posts originating from nearly 1,400 different social media services all over China before the Chinese government is able to find, evaluate, and censor (i.e., remove from the Internet) the large subset they deem objectionable. Using modern computer-assisted text analytic methods that we adapt to and validate in the Chinese language, we compare the substantive content of posts censored to those not censored over time in each of 85 topic areas. Contrary to previous understandings, posts with negative, even vitriolic, criticism of the state, its leaders, and its policies are not more likely to be censored. Instead, we show that the censorship program is aimed at curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content. Censorship is oriented toward attempting to forestall collective activities that are occurring now or may occur in the future --- and, as such, seem to clearly expose government intent.
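The core comparison in this abstract — censored versus uncensored posts within each of the 85 topic areas — can be sketched as a simple per-topic rate computation. This is only an illustrative sketch of that idea, not the authors' actual pipeline; the data layout and function name are hypothetical:

```python
from collections import defaultdict

def censorship_rate_by_topic(posts):
    """Fraction of posts censored within each topic area.

    `posts` is an iterable of (topic, was_censored) pairs, where
    was_censored is True if the post was later removed from the site.
    (Hypothetical data layout for illustration.)
    """
    counts = defaultdict(lambda: [0, 0])  # topic -> [censored, total]
    for topic, was_censored in posts:
        counts[topic][1] += 1
        if was_censored:
            counts[topic][0] += 1
    return {topic: c / n for topic, (c, n) in counts.items()}

# Toy data echoing the paper's finding: criticism survives,
# collective-action content is removed at a much higher rate.
sample = [
    ("policy-criticism", False),
    ("policy-criticism", False),
    ("collective-action", True),
    ("collective-action", True),
    ("collective-action", False),
]
rates = censorship_rate_by_topic(sample)
```

Comparing these rates over time and across topics is what lets the authors separate "what is said" from "what gets removed".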

-----

Abstract:

Chinese government censorship of social media constitutes the largest coordinated selective suppression of human communication in recorded history. Although existing research on the subject has revealed a great deal, it is based on passive, observational methods, with well known inferential limitations. For example, these methods can reveal nothing about censorship that occurs before submissions are posted, such as via automated review which we show is used at two-thirds of all social media sites. We offer two approaches to overcome these limitations. For causal inferences, we conduct the first large scale experimental study of censorship by creating accounts on numerous social media sites spread throughout the country, submitting different randomly assigned types of social media texts, and detecting from a network of computers all over the world which types are censored. Then, for descriptive inferences, we supplement the current uncertain practice of conducting anonymous interviews with secret informants, by participant observation: we set up our own social media site in China, contract with Chinese firms to install the same censoring technologies as their existing sites, and with direct access to their software, documentation, and even customer service help desk support reverse engineer how it all works. Our results offer the first rigorous experimental support for the recent hypothesis that criticism of the state, its leaders, and their policies are routinely published, whereas posts about real world events with collective action potential are censored. We also extend the hypothesis by showing that it applies even to accusations of corruption by high level officials and massive online-only protests, neither of which are censored. We also reveal for the first time the inner workings of the process of automated review, and as a result are able to reconcile conflicting accounts of keyword-based content filtering in the academic literature. 
We show that the Chinese government tolerates surprising levels of diversity in automated review technology, but still ensures a uniform outcome by post hoc censorship using huge numbers of human coders.
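The experimental design described above — accounts on many sites, each submitting a randomly assigned type of text — hinges on reproducible random assignment. A minimal sketch of that step, with all names and condition labels hypothetical:

```python
import random

CONDITIONS = ["collective-action", "criticism", "neutral"]

def assign_condition(account_id, conditions=CONDITIONS, seed=0):
    """Deterministically randomize which kind of text an account posts.

    Seeding a fresh generator per account keeps the assignment
    reproducible across runs while still varying across accounts.
    """
    rng = random.Random(f"{seed}:{account_id}")
    return rng.choice(conditions)

# One assignment per (hypothetical) experimental account:
assignments = {acct: assign_condition(acct) for acct in range(6)}
```

After the waiting period, comparing censorship rates between conditions yields the causal estimates the paper reports; the random assignment is what licenses the causal reading.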

Want something? Then do something.