New, Stronger Law To Curb Extortion By Fake Media

By Gajanan Khergamker 

It is time the fakes' bluff was called and the fakes themselves weeded out of the system. Riding on the yeoman service of a free media, a sea of fake players, from fly-by-night 'reporters' and lumpen agents masquerading as 'lawyers' to so-called 'RTI activists' and dubious 'media channels', stands to be exposed. The rise of politically-motivated individuals, self-styled influencers and outright blackmailers posing as arbiters of truth has systematically created a parallel information economy, one where 'news' is no longer discovered or verified but manufactured, curated and, in the most egregious cases, monetised through coercion.

The proposed 2026 amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules arrive at a moment when this rot has deepened into a systemic malaise, threatening not merely reputations but the foundational trust upon which democratic communication rests.

Across platforms as varied as Facebook, X, Instagram and WhatsApp, the pattern has acquired a disturbing familiarity. A post appears, often crafted with deliberate sophistication, mimicking the tone, structure and visual grammar of legitimate journalism. It names individuals, alleges wrongdoing and insinuates scandal. The content is rarely substantiated, often doctored, and almost always selective. 

Within hours, it circulates across networks, amplified by coordinated shares, anonymous accounts and algorithmic bias towards sensationalism. The victim, frequently a private citizen with neither the institutional backing nor the digital literacy to mount a defence, is left to confront a tidal wave of reputational damage.

The second act of this digital theatre is where the true intent reveals itself. The perpetrators, having established a narrative in the public domain, initiate contact. The proposition is crude in its simplicity: payment in exchange for silence, for deletion, for withdrawal. Refusal invites escalation: additional posts, more allegations and wider dissemination follow. What emerges is not merely fake news but a structured extortion racket, one that exploits the credibility of the 'news format' to lend weight to what is, at its core, criminal intimidation.

The case studies are as varied as they are alarming. In Maharashtra’s smaller towns, police investigations have uncovered networks of individuals posing as journalists on Facebook pages, publishing defamatory reports about local businessmen and subsequently demanding money to retract them. 
The façade of journalism was meticulously maintained, complete with logos, page layouts and fabricated bylines, creating an illusion of legitimacy that proved difficult to challenge. In one instance, a trader found himself accused of financial impropriety in a series of posts that quickly went viral within his community. The allegations were entirely baseless, but the damage was immediate and severe, affecting his business relationships and social standing. The demand for payment that followed was framed not as extortion but as a 'settlement' to prevent further publication.

On Instagram, the misuse has taken a more insidious, visually-driven form. Anonymous accounts, often styled as 'news update' pages, have been known to circulate morphed images and short video clips targeting individuals, particularly women. These posts, accompanied by suggestive captions and insinuations, are designed to provoke outrage and curiosity in equal measure. Once traction is achieved, the victim is approached with offers to 'take down' the content for a price. The ephemeral nature of stories and reels, far from mitigating the harm, exacerbates it by enabling rapid dissemination before any corrective action can be taken.

The role of WhatsApp in this ecosystem is both pervasive and deeply problematic. Encrypted, closed-group communication lends itself to the unchecked spread of fabricated narratives, often in the form of forwards that carry the imprimatur of authenticity. In several documented instances, false allegations about individuals have been circulated within local community groups, housing societies and professional networks. 

The intimacy of these groups amplifies the impact, as recipients are more likely to trust and act upon information received from known contacts. In one particularly distressing case, a school teacher was accused, through a series of WhatsApp forwards, of misconduct that was entirely fictitious. The messages, formatted as 'breaking news' alerts, led to social ostracisation and professional repercussions before the truth could emerge.

On X, the dynamics are shaped by virality and visibility. Coordinated campaigns, often driven by bot-like accounts and anonymous handles, can propel a false narrative into trending territory within hours. The architecture of the platform, which rewards engagement irrespective of veracity, ensures that sensational content travels faster and farther than sober correction. Individuals targeted in such campaigns find themselves subjected not only to reputational harm but also to sustained harassment, with personal information, photographs and fabricated allegations being circulated widely. The line between misinformation and targeted abuse blurs, creating an environment where digital vigilantism thrives.

Statistics lend weight to these anecdotal accounts. India has consistently ranked among the countries most affected by misinformation, with multiple studies suggesting that a majority of internet users have encountered fake or misleading content online at least once in recent years. Law enforcement data points to a steady rise in cases involving cyber extortion, online defamation and impersonation of journalists, with financial losses from cybercrime collectively running into thousands of crores annually. The economic incentive that underpins this ecosystem ensures its persistence, as low risk and high reward continue to attract bad actors into this murky domain.

It is within this fraught landscape that the proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2026 must be situated, not as an abstract exercise in regulatory drafting but as a direct response to a lived and escalating crisis. The legal text itself, when examined closely, reveals a deliberate attempt to reclaim regulatory ground that had been ceded, whether inadvertently or by design, to an unaccountable digital multitude. 

The amendment to Rule 8, for instance, extends the applicability of Part III to include “news and current affairs content” shared by users who are not formally recognised as publishers. This single insertion has far-reaching implications, effectively collapsing the distinction between institutional journalism and individual content creation, and ensuring that those who engage in the systematic dissemination of news-like material can no longer evade scrutiny by sheltering behind the label of “user”.

The insertion of Rule 3(4) marks an equally significant shift in the compliance paradigm. By mandating that intermediaries adhere to government advisories, directions and standard operating procedures, and by linking such adherence to the preservation of safe harbour protections under Section 79, the law transforms what was once a largely voluntary framework into a binding obligation. Platforms that fail to act risk not merely regulatory censure but legal exposure, a prospect that is likely to recalibrate their approach to content moderation in a fundamental way.

The amendment to Rule 14 further strengthens the State's hand by empowering the Inter-Departmental Committee to take cognisance of content on its own motion, without waiting for a formal complaint. This bypassing of the earlier grievance redressal hierarchy is not merely procedural. It signals a shift from reactive governance to proactive oversight, enabling the State to intervene at a stage where the damage may still be containable.

Perhaps the most striking provision, however, is the introduction of stringent timelines for the removal of content deemed misleading or synthetic. The three-hour takedown window, particularly in the context of AI-generated or doctored material, is an acknowledgment of the velocity at which misinformation travels in the digital age. Coupled with mandatory labelling requirements for AI-assisted content, it seeks to impose a degree of transparency that has hitherto been conspicuously absent.

A comparative view of the regulatory framework underscores the magnitude of this transformation. Where the 2021 Rules were largely confined to recognised publishers, the 2026 draft extends its reach to anyone engaging with news and current affairs. Where enforcement was triggered primarily by user complaints, the new regime allows the Ministry to act suo motu. Where takedown timelines stretched to thirty-six hours, they are now compressed to a mere three in cases of government notice. Where advisories were once suggestive, they now carry the force of mandatory compliance.

The promise of these provisions lies in their potential to disrupt the very mechanics of the fake news economy. Rapid takedowns can blunt virality. Expanded definitions can pierce anonymity. Mandatory compliance can compel platforms to act with urgency rather than expedience. The capacity of the State to initiate action independently can ensure that victims are not left to navigate a labyrinthine complaint process while their reputations are dismantled in real time.

However, the durability of this promise is contingent upon the presence of clear and enforceable penal provisions. Without consequences that extend beyond content removal into the realm of criminal liability, the deterrent effect of the law risks being diluted. Those who currently orchestrate these campaigns operate with a calculated understanding that the worst outcome, in many cases, is the deletion of a post that has already served its purpose. The introduction of penalties that target not only the act of dissemination but also the intent to extort, defame or manipulate is therefore essential to alter this calculus.

Such penalties must be calibrated with care, ensuring that they are directed at malicious conduct rather than inadvertent error, and that they do not become instruments for suppressing legitimate dissent or criticism. The distinction between regulation and censorship must remain sharply drawn, even as the State seeks to assert greater control over a chaotic digital landscape. Institutional capacity, including specialised cybercrime units and digital forensics expertise, will play a crucial role in translating legislative intent into effective enforcement.

The credibility of the written word, whether encountered in the permanence of print or the immediacy of a screen, continues to shape perception in ways that are both profound and enduring. It is this credibility that has been systematically exploited by those who cloak falsehood in the language and aesthetics of journalism, converting trust into a tool of coercion. The proposed amendments represent an attempt, imperfect but necessary, to reclaim that trust and to restore a measure of accountability to a space that has long operated beyond the reach of conventional regulation.

The challenge that lies ahead is not merely one of implementation but of balance. A framework that is too lenient risks irrelevance, while one that is overly intrusive risks undermining the very freedoms it seeks to protect. The task before the State is to craft and enforce a regime that possesses both the strength to confront abuse and the wisdom to preserve liberty. In doing so, it must ensure that the digital public square remains a space for genuine expression and informed debate, rather than a marketplace where truth is traded, distorted and, all too often, sold to the highest bidder.