Print Edition - 2019-06-02  |  Free the Words

A ‘Fake News’ law gives Singapore worrisome powers

  • New legislation could force companies to tell the government what websites users have viewed.

Singapore this month joined the rapidly growing list of countries seeking to shield their citizens from harmful content online by passing anti-fake news legislation. While critics have focused on the legislation’s risks to free speech, there’s another, equally grave, concern about this law, which is likely to become a model for the region and possibly elsewhere. Under the law, the government could mandate that service providers track the viewing habits of their users in ways that dangerously threaten their privacy.

The legislation was promoted as a gentler version of laws like those in Australia, Germany and France that require certain kinds of hate speech to be removed from the internet. The Singapore law instead requires websites to post “correction notices” alongside speech that the government deems false or misleading.

But when it comes to privacy, the legislation is a much bigger threat than any of the fake news or hate speech laws that have come before it. The law could be used to require any company that operates as an “internet intermediary”—including search engines, social media companies, and messaging services—to keep records of what users view. But it doesn’t stop there. While it’s unclear how the new law will be enforced, it even appears to leave room for the government to require encrypted messaging services like WhatsApp or iMessage to identify who said what to whom. It is not far-fetched to think that the government could one day demand and abuse that information. Even if that never happens, it’s a chilling new level of surveillance online.

Under the legislation, any government minister can mandate a correction notice in response to any statement online that the minister decides is false and that undercuts confidence in the government or is contrary to Singapore's policies. These ministers can also order that such statements be taken off the internet outright. But the government says these more draconian takedown measures will be used only as a matter of second resort, and that correction notices will be the primary response. Correction notices, the thinking goes, are a moderate alternative to the takedown order.

With correction notices, the content stays up, supplemented by a conspicuous, easy-to-read explanation that the statement is false, coupled with a corrective statement that, according to the government, is the truth. Readers can review and assess for themselves.

But enforcement is a potential privacy nightmare. Correction notices effectively require websites to track those who post, look at and might be influenced by or attracted to a “false” statement. They can be ordered to identify all those who looked at the infringing material even before it was labeled troubling. They must then send out correction notices to these prior viewers, or risk hefty fines and even jail time.

Of course, for many service providers, user tracking is hardly a new thing. That is, after all, how companies like Google know to show you ads about shoes, say, and not diapers. But there is something particularly insidious, and damaging, about private parties being told by the government whom to monitor and why. It is, after all, the government, not Google, that can put you behind bars.

If such a tracking mandate is in effect, it also becomes significantly harder for companies to resist government demands for a list of people who have viewed a particular piece of content. Even those companies that collect user viewing history for purposes of ad targeting don’t necessarily compile or store it in the way that the government effectively would be demanding.

And many companies covered by the law don't currently track user viewing history. Some lack the capacity to do so. What happens if those companies plausibly claim that it is too hard or expensive to issue the kind of retroactive correction notices that a minister demands? Does the government fine those noncomplying companies or jail their executives?

True, the law makes clear that courts are to consider factors like cost and technical capacity in deciding whether a website's failure to comply is excused. But it is not clear how these factors will be evaluated, or how many orders will simply be issued and quietly complied with without the issue ever reaching a court.

The scope is also broad. Any service that allows users to see third-party material online is subject to the law—think newspaper comment sections, Yelp or Expedia, or any site that allows third-party comments or reviews that in any way touch on things affected by governmental policy.

Even if the government never demands these lists in the end, the looming threat of it will almost certainly make people cautious. It is what the Washington University law professor Neil Richards has aptly labeled “intellectual surveillance,” a form of thought and behavioral control. Studies document the ways in which just the threat of surveillance chills communication and the search for information online.

And imagine what the government could do with that information if it did in fact demand it. It is not far-fetched to think that governments could, or would, define individuals as threats or potential threats based on what they wrote and viewed. This kind of information in the hands of the government could have widespread effects on the ability to get jobs, financing or travel documents—a China-like social credit scoring system based on online activities.

To be sure, the Singapore law has just been passed and we have yet to see it in action. There still is hope that it is enforced in a somewhat sensible way, that ministers never require companies to send correction notices to prior viewers of allegedly false statements, that orders are never directed at closed and encrypted services, that “false” is defined judiciously and narrowly, and that no government official ever demands a list of who looked at what.

But there is no guarantee that these powers will be employed responsibly. And there is a risk of legislation modeled on Singapore’s being instituted elsewhere.

The Singapore law is hardly the first speech-related restriction that raises privacy concerns. Around the world, courts are increasingly ordering the removal of particular posts or articles deemed hate speech, along with any other posts found to be similar to the original offending post. Determining what is sufficiently similar requires nuanced analysis that is exceedingly difficult—after all, what is the line between falsehood and parody? Making these determinations requires evaluating context, and that evaluation brings additional intrusions into privacy.

In a world in which even Facebook’s Mark Zuckerberg is now calling for increased governmental regulation of content online, the privacy consequences of the push to monitor speech cannot be ignored. Laws like Singapore’s set a deeply troubling precedent.

- JENNIFER DASKAL

—©2019 The New York Times
