Ensure Steps to Prevent Spread of Rumours, Government Warns WhatsApp
Ensure Steps to Prevent Spread of Rumours, Government Warns WhatsApp is a Hindustan Times report published on 4 July 2018. The article examines the Ministry of Electronics and IT's expression of "deep disapproval" to WhatsApp over rumour-driven mob violence that had killed at least 29 people. It also features Sunil Abraham of the Centre for Internet and Society, who advocated database-driven fact-checking built into messaging interfaces whilst warning that regulatory demands to break end-to-end encryption would prove counterproductive.
Article Details
- 📰 Published in:
- Hindustan Times
- 📅 Date:
- 4 July 2018
- 📄 Type:
- News Report
- 📰 Newspaper Link:
- Read Online
Full Text
The government has expressed its "deep disapproval" to instant messaging company WhatsApp over "irresponsible and explosive messages", warning it to prevent the spread of rumours that have incited several instances of violence in the country in the last two months.
"While the law and order machinery is taking steps to apprehend the culprits, the abuse of platforms like WhatsApp for repeated circulation of such provocative content are equally a matter of deep concern," the ministry of electronics and information technology said in a statement.
"It has also been pointed out that such a platform cannot evade accountability and responsibility, especially when good technological inventions are abused by some miscreants who resort to provocative messages which lead to spread of violence."
An official familiar with the matter said on condition of anonymity the implicit message is that if WhatsApp, which is owned by Facebook, does not take action, the government would be forced to act.
A spokesperson for WhatsApp said: "WhatsApp cares deeply about people's safety and their ability to freely communicate. We don't want our services used to spread harmful misinformation and believe this is a challenge that companies and societies should address. For example, we recently made a number of updates to our group chats and will be stepping up efforts to help people spot false news and hoaxes."
Fake videos and rumours of alleged child-lifters have resulted in mobs targeting innocent bystanders or visitors. At least eight states, including Karnataka, Assam, Maharashtra and Gujarat, have reported incidents of mob violence motivated by WhatsApp messages about child-lifters. The most recent incident occurred in Maharashtra, where five people were lynched on Sunday, although the role of a messaging platform in this case hasn't been established. Fear of child-lifters has led to at least 29 people being killed since May last year, at least 19 of them in the last two months alone.
Fact checking should be built into the interface of WhatsApp to avoid rumour-mongering, said Sunil Abraham, founder of the think tank Centre for Internet and Society. "So, for instance, if there is a database of discredited memes then each message sent or received should be checked against that database," he said.
In December, a top Facebook executive confirmed plans to bring third-party fact checking to India.
Abraham added that a hard regulatory approach won't work in this case. "If Facebook were to ban end-to-end encryption to be able to monitor what's happening on WhatsApp, chances are people will move to free software alternatives (that do the same)."
Context and Background
This report appeared during a crisis of WhatsApp-mediated mob violence across India, where rumours about child-lifting gangs—typically featuring grainy videos with misleading captions—led to lynchings of innocent people mistaken for kidnappers. The 29 deaths between May 2017 and July 2018, with 19 occurring in the final two months, indicated accelerating violence that overwhelmed local law enforcement’s capacity to debunk rumours before mobs formed.
The Ministry of Electronics and IT’s statement that platforms “cannot evade accountability and responsibility” challenged WhatsApp’s long-standing position that end-to-end encryption made content moderation technically impossible. The government’s implicit threat—that failure to act would force regulatory intervention—created pressure for platform solutions that preserved encryption whilst addressing content harms, a technically difficult balance.
Sunil Abraham’s proposal for interface-level fact-checking represented an innovative compromise: rather than breaking encryption to scan message content, platforms could maintain client-side databases of known false content (identified by hash values) and flag matches when users encountered them. This approach preserved encryption whilst providing warning mechanisms, though it raised questions about who would maintain “discredited memes” databases and how frequently they could be updated to match rapidly evolving rumours.
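The mechanism described above can be sketched in a few lines. This is an illustrative toy, not anything WhatsApp has implemented: the example messages, the `DISCREDITED_HASHES` set, and the normalisation rule are all assumptions introduced here. The point is that the client compares a hash of each message against a locally held database of known false content, so no plaintext ever leaves the device and end-to-end encryption is untouched.

```python
import hashlib

def content_hash(message: str) -> str:
    """Hash a normalised message so it can be matched without storing its text."""
    # Crude normalisation (lowercase, collapse whitespace); a real system
    # would need far more robust matching to keep up with mutated rumours.
    normalised = " ".join(message.lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# Hypothetical client-side database of hashes of already-debunked content.
# In practice this would be compiled by fact-checkers and periodically
# pushed to the app as an update.
DISCREDITED_HASHES = {
    content_hash("Beware! Child-lifting gang spotted in your area, forward to everyone"),
}

def check_message(message: str) -> bool:
    """Return True if the message matches known discredited content.

    The lookup runs entirely on the client, so the platform never sees
    the message plaintext.
    """
    return content_hash(message) in DISCREDITED_HASHES
```

The sketch also makes the article's open questions concrete: exact-hash matching breaks as soon as a rumour is reworded or a video is re-encoded, which is why maintaining and updating such a database fast enough was the weak point Abraham's proposal left unresolved.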
Abraham’s warning that breaking encryption would drive users to “free software alternatives” reflected awareness that open-source encrypted messaging applications existed beyond regulatory reach. Signal, Telegram and other platforms offered similar functionality, meaning coercive regulatory approaches risked fragmenting users across platforms whilst eliminating the concentration that made WhatsApp’s cooperation valuable for content moderation efforts.
WhatsApp’s response—emphasising group chat updates and efforts to help users “spot false news and hoaxes”—referenced measures like limiting message forwarding and adding “forwarded” labels to indicate non-original content. However, these interventions addressed spread velocity rather than content veracity, leaving the fundamental problem of users believing and acting on false information largely unresolved.
📄 This page was created on 1 January 2026.