Module 3: Content


Misinformation and Disinformation Online

Disinformation is false or misleading content that is spread with an intention to deceive or to secure economic or political gain, and which may cause public harm. Misinformation is false or misleading content shared without harmful intent, though its effects can still be harmful.

Brazil

The Brazilian Context 

The 2018 Brazilian presidential elections brought disinformation into the policy debate due to its use in Jair Bolsonaro’s campaign. A key aspect of efforts to fight disinformation, and of the design of policy bills, centers on the use of messaging services, primarily Telegram and WhatsApp, to spread disinformation during the presidential elections. Because the rates for traditional (‘SMS’) messages were unaffordable for the majority of people in Brazil, the web-based WhatsApp offered a far less expensive alternative. The feature that stood out in these messaging apps was the ability to build large-scale messaging chains, which became a prime social-media instrument for targeted political disinformation campaigns in Brazil. It was noted that Bolsonaro’s supporters often targeted Brazil’s poor to build a voting base in disadvantaged communities. These WhatsApp-led disinformation campaigns, which later fuelled the discussion on disinformation regulation in Brazil, culminated in an attack on the Brazilian Congress by supporters of Bolsonaro after his loss, reminiscent of the January 6 Capitol attack in the United States.

Legislation relating to disinformation in Brazil is still in progress, with momentum increasing after COVID-19. Numerous bills have been introduced with different approaches to regulating disinformation, including user-centered, non-content-focused approaches such as identification and location requirements for platform access, and measures targeting businesses, including obligations around transparency reports and complaint processes.

Amid broad societal and political backlash, Brazilian legislators have proposed a large number of bills that can, in one way or another, be connected to the problems posed by the spread of disinformation: sixty-two disinformation bills were proposed between 2019 and 2022 alone. Academics have analyzed the bills to understand their commonalities and differences; one analysis showed that many do not specify exactly which disinformation-related problem they aim to address, though a common focus is election-related disinformation. Analysis is made harder by the use of a single word, desinformação, for both the intentional and the unintentional spread of false information, combined with a lack of clear definitions in the bills.

Two main approaches can be identified in the bills: the first focuses on individual criminalization, the second on platform regulation. The former was more popular before 2020, whereas the latter has appeared in more bills since.
While the landscape of Brazilian law on this topic is quite complex, there is inevitable uncertainty about which bills will be passed and which legal framework will be chosen to protect the Brazilian people in the coming years. Nonetheless, a commonality across most of the available academic writing on disinformation is the prominence of the “Fake News Bill”, PL 2630. The following sections focus on the legally binding instruments currently governing the field, the unique role of the judicial branch in this context, and what the most discussed bill, the Fake News Bill, offers as a mitigation strategy.

Pre-2020 approaches: Law 13.834/2019

The only law currently in place that aims to regulate disinformation is Law 13.834/2019, specifically its Article 20. It criminalizes falsely attributing a crime to an innocent person, particularly in the context of elections, in order to provoke criminal investigations. This Article reflects the pre-2020 state of disinformation regulation in Brazil, in which regulatory approaches focused on the content of individual expressions and aimed to discourage individuals from spreading disinformation. This was done primarily by enhancing pre-existing punishments, such as those for libel and defamation, and by empowering both platforms and authorities to remove offending content. Such approaches were critiqued, however, as heavy-handed in their impact on freedom of expression.

Post-2020 approaches

Widespread disinformation claiming that the election Bolsonaro lost, which brought Lula to power, was fraudulent pushed disinformation regulation even further into policymakers' focus. However, criticisms regarding impacts on freedom of expression and the powers of the government remained even under the new approach taken in bills after 2020, which focuses more on platform regulation (obligations around content removal, advertising, and complaint-process transparency). Another development is the growing role of the Supreme Court in tackling disinformation.

The role of the judiciary and Supreme Court

The Brazilian judiciary plays a large role in combating disinformation, in part due to the ‘sluggishness’ of legislators in passing effective legislation. The Electoral Justice has launched a Permanent Program to Combat Disinformation, particularly in the electoral context. This program involves an ‘Electoral Disinformation Alert System’ to help detect and respond to false content relating to the electoral process through collaboration with various platforms, companies, and fact-checking agencies. The judiciary also collaborates with other public authorities and private institutions in the ‘Integrated Center for Confronting Disinformation and Defending Democracy’, which focuses on analysing, monitoring, and responding to disinformation, in particular disinformation that seeks to delegitimize public institutions and the electoral system. Lastly, the ‘Disinformation Combat Program’ is led by the Supreme Federal Court to address disinformation that affects public perceptions of, and trust in, the Court.

Additionally, the Supreme Court, and more specifically Justice Alexandre de Moraes, has taken a strong stance against fake news. This is possible due to the strong powers attributed to the Supreme Court in Brazil, which acts as a constitutional court, final court of appeals, and trial court for elected officials. Justice Moraes was responsible for the blocking of X, formerly Twitter, in Brazil after Elon Musk refused to comply with Moraes’ order to block certain accounts associated with the 2023 attack on the Brazilian Congress. X was suspended in Brazil, and access was reinstated only after Musk complied with the orders and settled the imposed fines worth $5.2 million. Such a judicial order was unprecedented, and critics question whether these actions respect due process. The president of the Brazilian Bar Association (OAB), Beto Simonetti, voiced concern and requested that the Supreme Court review, in particular, the part of the decision imposing fines on users who accessed blocked platforms via VPN.

The Fake News Bill PL 2630

While the Fake News Bill takes a platform-regulation perspective, it still focuses primarily on the electoral context, and it also seeks to regulate messaging services given their significant role in spreading disinformation during the 2018 elections. The original draft of the Bill had concerning rights implications relating to identification requirements, expanded data-retention requirements, and the monitoring of private communications, but many of these provisions were phased out through multi-stakeholder consultations. The Bill’s trajectory showcases how polarising disinformation regulation is in Brazil: it is perceived as urgently needed amid the rise of far-right disinformation campaigns, yet highly likely to impact human rights, particularly freedom of expression.


The March 2022 draft of the Bill features a variety of obligations, outlined below. Following the progress and the various drafts of the Bill is complicated, as the drafts are not easily accessible. However, the March 2022 draft was lauded by civil society organizations in Brazil and seems to represent the approach legislators intend to follow.
Chapter 1 outlines the goals of the bill and definitions. 
  • The Bill aims to bolster the transparency of social networks, search engines, and instant messaging service providers.
  • It applies to providers offering the aforementioned services in Brazil with over 10 million registered users.
Chapter 2 outlines the accountability obligations of those providers.
  • Article 6 requires all automated accounts to be identified as such to either the users or the providers, and for sponsored and automated content to be clearly labelled for users.
  • Articles 9 and 10 oblige providers to publish half-yearly transparency reports covering the measures implemented to comply with the Bill, how many decisions concerning users were reverted after appeals, the systems used in content moderation, aggregated information on content the provider identifies as ‘irregularly’ reaching larger audiences, and the characteristics of the human content-moderation team (a hypothetical report schema is sketched after this outline).
  • Article 13 empowers legal authorities to request messaging services to register and disclose user interactions for up to 60 days during an investigation.
  • Article 15 obliges platforms to implement procedural rights for users when their content is moderated.
Chapter 3 introduces transparency criteria for boosted content and advertising with a specific focus on political advertising during electoral campaigns.
  • Article 19 requires providers to make public the amount spent by candidates and parties on online political advertising on their platforms, and other characteristics including the period of advertising circulation and general profiling categories of the targeted audience.
Chapter 4 includes a specific regime to apply to platform accounts of public administration and office holders.
  • Article 22 stipulates that politicians cannot block either journalists or non-governmental organisations.
  • However, ‘parliamentary immunity’ is also extended to the content of the accounts, potentially protecting politicians from moderation decisions even if they participate in spreading false information.
Chapter 5 addresses the promotion of media literacy.
Chapter 6 addresses sanctions on platforms, scaled to turnover.
Chapter 7 addresses the new functions of the Brazilian Internet Steering Committee (CGI.br) creating a co-regulatory mechanism.
Chapter 8 obliges service providers to create their own self-regulating institutions.

Chapter 9 includes criminal sanctions for the dissemination of fake news.

  • Article 36 states that criminal sanctions could be applied for the spreading of fake news if it either compromises the integrity of the electoral system or causes physical harm.
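As an illustration of the transparency obligations in Articles 9 and 10, the following sketch models a hypothetical half-yearly report in Python. The field names are assumptions made for illustration; the draft prescribes what must be reported, not any particular data format.

```python
from dataclasses import dataclass

# Hypothetical layout for the half-yearly transparency report required by
# Articles 9 and 10 of the March 2022 draft. Field names are illustrative,
# not terms taken from the Bill's text.
@dataclass
class TransparencyReport:
    reporting_period: str                    # e.g. "2022-H1"
    compliance_measures: list[str]           # measures implemented to comply with the Bill
    user_decisions_total: int                # decisions taken pertaining to users
    decisions_reverted_on_appeal: int        # how many were reverted after appeals
    moderation_systems: list[str]            # systems used in content moderation
    irregular_reach_content: dict[str, int]  # aggregated info on 'irregularly' amplified content
    human_moderation_team: dict[str, str]    # characteristics of the human moderation team

    def reversal_rate(self) -> float:
        """Share of user-related decisions reverted on appeal."""
        if self.user_decisions_total == 0:
            return 0.0
        return self.decisions_reverted_on_appeal / self.user_decisions_total

report = TransparencyReport(
    reporting_period="2022-H1",
    compliance_measures=["automated-account labelling (Art. 6)"],
    user_decisions_total=1000,
    decisions_reverted_on_appeal=40,
    moderation_systems=["hash matching", "human review"],
    irregular_reach_content={"election-related": 120},
    human_moderation_team={"size": "250", "working language": "pt-BR"},
)
print(f"Appeal reversal rate: {report.reversal_rate():.1%}")  # 4.0%
```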

How does the Fake News Bill fit into other digital law in Brazil?
Setting the scene with MCI and LGPD
After Brazil hosted the 2014 World Cup, there was great anticipation for the upcoming 2016 Olympic Games in Rio. Hosting such large events stimulated a considerable amount of technological development in the country’s urban areas. As a consequence, social and digital inequalities between different regions of the country appear to have widened further for Brazil’s marginalized populations. To address these issues, Brazilian legislators decided to codify “democratic principles of internet openness by advancing a civil right to internet access, protecting net neutrality, providing preliminary guidelines for data privacy, and protecting internet intermediaries.” This was done under the bill named the “Civil Rights Framework for the Internet,” or MCI for short, which was passed in 2014. At the time, the bill was praised by scholars as a tool with “democratizing potential”.
Soon after, the next step on the legislative agenda was an act focusing on data protection and privacy: the LGPD. While the MCI outlined broad democratic principles of Internet governance, the LGPD set out specific rules on data privacy. It established various protections for individuals and, most notably, took heavy inspiration from the EU’s General Data Protection Regulation (GDPR).

Critique of the ‘Fake News’ Bill
While the ‘Fake News’ Bill provides a practical example of the challenges of combating disinformation, propaganda, and defamation in the digital environment, critics point out that it might also pose threats to the data protection principles of the LGPD and to the “universal internet access and freedom of association provisions” of the MCI. The initial draft of the Fake News Bill took a heavy-handed approach that clashed with both the LGPD and the MCI. Even though Article 2 of the initial draft explicitly asserted compliance with the LGPD and MCI, the Bill’s vagueness and ambiguity were criticised as potentially posing a threat to freedom of expression and privacy safeguards. For example, the Bill obliged social media providers to monitor user identities through mechanisms mandating that users’ phone numbers be linked to their accounts, opening the door to potentially unauthorized surveillance. Another provision suggested that social media platforms should “track and store the chain of forwarded communications of Brazilian internet users,” a response to the risks of WhatsApp’s large-scale messaging chains.
While the May 2022 draft of the Bill makes great strides in response to civil society consultation, the revisions reveal an underlying tension within the Brazilian legislature over how strict the approach towards disinformation should be. The harshness of the initially proposed measures is understandable in the Brazilian context, where disinformation was overwhelmingly used by the far right and supporters of Jair Bolsonaro to destabilize electoral processes.

Current progress of the bill
Since the first draft was put out for discussion, the text of the Bill has changed many times, sometimes drastically. Scholarly articles have been written on a variety of drafts throughout this legislative process, making it challenging to identify which draft is being discussed at any given point and which draft is currently pending approval. The sections above therefore aim to convey the overarching vision of the Fake News Bill and its changing approaches. A final conclusion is yet to be made, depending on whether the Bill is passed and what final form it takes.

China

In today's 'post-truth' era, the phenomenon of misinformation and disinformation is not only pervasive but also deeply complex, exploiting emotions and biases to polarize audiences and provoke strong reactions. Beyond merely influencing public opinion, however, disinformation often intersects with criminal behaviors, including hate speech, harassment, and invasions of privacy. Technological advances like deepfakes have amplified these issues, enabling highly convincing yet misleading content that can be used for personal attacks or political manipulation. The semantic and regulatory challenges of combating cybercrimes associated with the dissemination of fake news are profound, especially when navigating varied legal definitions and the lack of consensus across different legal systems. For democratic states, regulatory debates around disinformation often focus on preserving freedom of expression while minimizing harm. However, in a tightly controlled, authoritarian context like China, disinformation takes on an additional layer of complexity. Here, the government itself defines and regulates what constitutes 'disinformation', often to reinforce state ideologies and stifle dissent rather than purely for public interest. In this context, the line between combating disinformation and controlling narratives blurs, raising fundamental questions about information integrity, government accountability, and the very definition of 'truth'.

The Early Days (1997-2000)

In China, fake news is referred to as 'online rumors', and the government has been dealing with it since the early development of the Internet. China was officially connected to the Internet in April 1994, and by 1997 it was already considering laws to address the anticipated influx of allegedly false information associated with international connectivity. To this end, the Ministry of Public Security enacted Decrees No. 195 and No. 33 to regulate network content. While Decree No. 195 prohibited the creation, access, copying, and dissemination of content that disrupts public order or contains obscene or pornographic material, Decree No. 33 specified nine categories of prohibited content, including fabricated or distorted facts, rumor-spreading that disturbs social order, advocacy of feudal superstitions, obscenity, pornography, gambling, violence, murder, terror, and incitement to criminal activities. It also banned content that publicly insulted or slandered others by fabricating facts.

With the further rise of communication platforms such as portals, chat rooms, and blogs, the channels available to internet users ('netizens') expanded significantly, amplifying the social influence of online media and highlighting the value of public opinion on the Internet. According to the Chinese Government, however, this heightened influence was accompanied by a large volume of false, illegal, and harmful content appearing online.

Therefore, in December 2000, the Ministry of Culture, the State Administration of Radio, Film, and Television, the All-China Students' Federation, the State Informatization Promotion Office, China Telecom, and China Mobile jointly initiated the 'Network Civilization Project'. Under Decree No. 292, the nine types of prohibited content already outlined in Decree No. 33, known as the 'Nine Prohibitions', were confirmed and further specified. The outlawed categories comprised 'information that endangers social stability and order, including rumor-spreading, social disorder, and activities that undermine social stability; along with content promoting obscenity, pornography, gambling, violence, murder, terror, or incitement to crime'.

PRC Criminal Law

In the era of Internet 2.0, individuals have transformed from passive recipients of information into active participants across various network activities, emphasizing the social dynamics of the internet. This shift has facilitated a surge in traditional crimes now committed online, with criminal activities on the internet continually increasing. To address this, the Standing Committee of the National People’s Congress (NPC) introduced the Decision on Guarding Internet Security in 2000, extending the Criminal Law to cover actions such as spreading obscene information, rumor-mongering, slander, fraud, theft, and the unauthorized dissemination of state secrets online. 

China’s approach to criminalizing fake news took a significant step on August 29, 2015, when the Standing Committee of the NPC adopted the Ninth Amendment to the Criminal Law of the People’s Republic of China (PRC). This amendment, which included measures aimed at curbing false information, added Article 291a to the Criminal Law, criminalizing the spread of news that 'seriously disturbs public order' via information networks or other media channels. Offenders under this provision face a range of penalties, including criminal detention, public surveillance, or imprisonment for up to three years. For cases where misinformation is found to have severe consequences, penalties escalate to fixed-term imprisonment ranging from three to seven years. Paragraph 2 of Article 291a explicitly targets the dissemination of false information related to critical situations—such as 'dangerous situations, epidemics, disasters, or alerts'—particularly if this dissemination leads to public disorder. This regulation covers both the creation and intentional sharing of known false information, thereby broadening liability to those who knowingly contribute to the spread of misinformation. The legislation reflects the government’s intention to preemptively address disinformation that could incite panic or social instability, framing the spread of false information as a direct threat to public order. This aligns with the state’s broader strategy of social control, where the management of information is paramount, especially regarding matters that could undermine public trust or prompt criticism of government actions.

The First Cybersecurity Law

On November 7, 2016, the NPC Standing Committee adopted the Cybersecurity Law of the PRC, which took effect on June 1, 2017. This law represents China’s first comprehensive legislation focused on cybersecurity and includes provisions that address the dissemination of fake news as part of a broader strategy to maintain social and economic order. Under Article 12, Paragraph 2, it is prohibited to manufacture or spread online fake news that could disturb social stability, harm the economy, or undermine public trust in institutions. 

Article 70 further outlines penalties for violations, stipulating that publication or transmission of prohibited information is subject to fines, shutdowns, and other penalties as defined by the relevant regulations. Article 74 states that violation of the Cybersecurity Law may even result in criminal sanctions, as provided by the Ninth Amendment to the PRC Criminal Law mentioned above. 

Importantly, the Cybersecurity Law empowers authorities to address disinformation not only by penalizing those directly responsible for creating or disseminating fake news but also by holding digital platforms accountable. Pursuant to Article 47, if a network operator identifies any content that is forbidden by law or regulation, it must promptly suspend its transmission, remove the content, take steps to prevent its further spread, maintain relevant records, and notify the appropriate government authority.
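Article 47 effectively prescribes a fixed handling sequence once forbidden content is identified. The sketch below is a non-authoritative illustration of that sequence in Python; the function and record names are hypothetical, since the law imposes obligations rather than a technical design.

```python
from datetime import datetime, timezone

# Illustrative sketch of the Article 47 sequence for a network operator.
# Step names are assumptions; the statute lists duties, not an API.
def handle_prohibited_content(content_id: str, reason: str) -> dict:
    record = {
        "content_id": content_id,
        "reason": reason,
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "steps": [],
    }
    record["steps"].append("transmission_suspended")   # promptly suspend transmission
    record["steps"].append("content_removed")          # remove the content itself
    record["steps"].append("respread_prevented")       # prevent further dissemination
    record["steps"].append("records_retained")         # maintain the relevant records
    record["steps"].append("authority_notified")       # notify the competent authority
    return record

print(handle_prohibited_content("post-42", "prohibited content under Art. 12"))
```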

Administrative Measures on Internet Information Services

Licensing Requirements

To operate legally in China, social media platforms must hold a valid business license. The Administrative Measures on Internet Information Services, a regulation issued by the State Council on September 25, 2000, applies to any service delivering online information. This regulation requires that commercial internet service providers obtain an operating license from the government, while nonprofit providers must complete a registration process with authorities. The regulation outlines various duties for internet information service providers to ensure cooperation with government authorities. For instance, providers are required to keep records of all published content, along with the exact time of publication, and maintain user information such as account details, IP addresses or domain names, and session durations. These records must be stored for at least sixty days and made available to government authorities upon request.
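To make the retention duty concrete, here is a minimal sketch, assuming a simple record layout, of the fields the regulation names (published content, publication time, account details, IP address, session duration) and the sixty-day minimum before a record may be purged.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION_DAYS = 60  # minimum retention period under the Measures

# Hypothetical record layout; the regulation names the data to keep,
# not a schema, so these field names are illustrative assumptions.
@dataclass
class PublicationRecord:
    content: str
    published_at: datetime
    account_id: str
    ip_address: str
    session_duration_s: int

    def eligible_for_purge(self, now: datetime) -> bool:
        """A record may only be purged once the 60-day minimum has elapsed."""
        return now - self.published_at > timedelta(days=RETENTION_DAYS)
```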

Real-Name Registration

Under Chinese law, social media users must provide real-name registration and other identity details to service providers. According to the Cybersecurity Law, providers of information-sharing or messaging services must verify users' identities before granting access to these services. Service providers are prohibited from offering their services to users who have not completed this identity verification.

If service providers fail to enforce real-name registration, authorities may require corrective actions, suspend business operations, shut down websites, revoke licenses, or impose fines ranging from 50,000 to 500,000 yuan on the service providers and between 10,000 and 100,000 yuan on those responsible.
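A minimal sketch of the resulting access gate, assuming a simple boolean verification flag; the statutory fine ranges are included as constants for reference.

```python
# Real-name registration gate: service may only be provided once identity
# verification has completed. The function name and flag are illustrative.
def grant_service_access(user_id: str, identity_verified: bool) -> bool:
    return identity_verified  # unverified users must be refused service

# Statutory fine ranges for failing to enforce registration (yuan):
PROVIDER_FINE_RANGE_CNY = (50_000, 500_000)   # on the service provider
PERSONNEL_FINE_RANGE_CNY = (10_000, 100_000)  # on the persons responsible
```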

Provisions on Internet News Information Services

On May 2, 2017, China’s main internet regulatory body, the Cyberspace Administration of China (CAC), introduced the Provisions on Administration of Internet News Information Services.

License Requirements

According to these provisions, any entity that offers internet news information services to the public—whether via websites, apps, online forums, blogs, microblogs, social media accounts, instant messaging tools, or live broadcasts—must secure an internet news information service license and operate within the license’s authorized scope. Only legal entities established within China's territory are eligible for these licenses, and both the responsible individuals and editors-in-chief must be Chinese citizens. Unauthorized provision of internet news services is subject to fines between 10,000 and 30,000 yuan.

Limitations on Reprinting News

Providers are restricted to reprinting news that originates from state-approved sources, including central or provincial news organizations, or other specified agencies. Reposted news must include the original source, author, title, and editor to maintain traceability of the information. Violations of these reprint regulations can result in warnings, orders to correct the issue, temporary suspensions, or fines from 5,000 to 30,000 yuan, with potential criminal charges.
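The traceability requirement can be read as a completeness check on reprint metadata. A hedged sketch, assuming a plain dictionary layout for a reposted item:

```python
REQUIRED_REPRINT_FIELDS = ("source", "author", "title", "editor")

# A reposted news item must carry all four attribution fields to remain
# traceable; the item layout here is an illustrative assumption.
def reprint_is_traceable(item: dict) -> bool:
    return all(item.get(field) for field in REQUIRED_REPRINT_FIELDS)

repost = {"source": "a state-approved outlet", "author": "Jane Doe",
          "title": "Example headline", "editor": "John Doe"}
print(reprint_is_traceable(repost))  # True only if all four fields are present
```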

Prohibited Content

The provisions forbid internet news service providers and users from creating, reproducing, publishing, or disseminating content that is barred under relevant laws and regulations. Authorities may issue warnings, enforce corrective measures, suspend services, or levy fines between 20,000 and 30,000 yuan for non-compliance, and violators may also face criminal prosecution.

Duties of Service Providers

If internet news service providers detect content that violates these provisions or other legal regulations, they are required to immediately halt the transmission, delete the content, maintain relevant records, and report to the appropriate government authorities. Additionally, the provisions reiterate the Cybersecurity Law’s requirement for real-name registration, mandating that service providers verify users' real identities when they access internet news publishing platforms. Non-compliance with these requirements can lead to penalties from state or local internet authorities under the Cybersecurity Law.

Provisions on the Administration of Deep Synthesis of Internet-based Information Services (the 'Deep Synthesis Provisions')

China’s new Deep Synthesis Provisions, effective January 10, 2023, regulate 'deepfake' technologies—software that creates synthetic media such as text, images, audio, and video using generative models. Jointly issued by the Cyberspace Administration of China (CAC), Ministry of Industry and Information Technology, and Ministry of Public Security, the regulations outline responsibilities for deep synthesis providers and users across data security, transparency, content labeling, and technical security. According to this law, providers must establish criteria for identifying and managing false or harmful content, including user identity verification and algorithm review mechanisms. They must also label synthetic media, such as simulated voices or manipulated images, and report false information to authorities.
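The labelling duty can be illustrated with a short sketch that tags synthetic output with both a machine-readable marker and a conspicuous user-facing notice. The metadata keys are assumptions for illustration; the Provisions mandate labelling, not specific field names.

```python
# Tag synthetic media so downstream viewers can identify it as generated.
# Keys and the notice string are illustrative, not mandated field names.
def label_synthetic_media(media: dict, generator: str) -> dict:
    labelled = dict(media)
    labelled["synthetic"] = True                  # machine-readable marker
    labelled["generator"] = generator             # which service produced it
    labelled["notice"] = "AI-generated content"   # conspicuous user-facing label
    return labelled

clip = {"type": "audio", "payload": "<simulated voice>"}
print(label_synthetic_media(clip, generator="example-tts-service"))
```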

The Introduction of 'Piyao' in 2018

In 2018, the Chinese government launched 'Piyao', a platform dedicated to countering rumors and disinformation online. The term 'Piyao' itself means 'refute rumors' in Chinese. The platform is managed by the Cyberspace Administration of China (CAC) and serves as an official channel to debunk false information and provide what is claimed to be verified, factual content, particularly on topics deemed sensitive by the Chinese government. The platform distributes 'authentic' news drawn from state-run media outlets, party-affiliated local newspapers, and multiple government agencies.

Platforms' Self-Regulation

In China, major platforms like Weibo and WeChat follow a model of corporate self-regulation that includes internal policies to counter false information, often in cooperation with the Cyberspace Administration of China (CAC). The CAC actively encourages the development of self-regulatory frameworks and industry standards to streamline information control across digital media. This partnership reflects a coordinated approach to information governance, where platforms align their policies with government priorities on sensitive topics. However, this cooperation has a more coercive dimension. On sensitive issues, the Chinese government has a history of employing intimidation tactics, both online and in the real world, to silence dissent and encourage self-censorship. Platforms perceived to deviate from state narratives, particularly on subjects like Xinjiang, Taiwan, or Hong Kong, can face punitive measures, and companies are pressured to comply or face potential repercussions. This extends beyond China's borders: Beijing has leveraged open societies within democratic countries to pursue legal action against critical voices, and it has extended technical censorship to overseas Chinese-speaking communities, notably on WeChat.

The Other Side of the Coin: Censorship in China

China’s media landscape is increasingly complex due to growing government intervention in digital spaces. Most Chinese users primarily engage on domestic platforms like WeChat, Weibo, and Toutiao for social interactions and news consumption. Research has shown that censorship and 'astroturfing'—organized efforts by the government or corporations to post comments that appear to be from ordinary users—are widespread on these platforms. The presence and perception of these activities can shape how people interpret content and express themselves online, influencing their views and stifling open dialogue on dominant topics. Censorship practices further restrict information flow, as social media sites are required to follow government directives and often use human moderators to monitor online content. The criteria for what constitutes 'sensitive' content are vague and constantly shifting, prompting platforms to enforce their own, often stricter, content removal policies to ensure compliance. Once content is flagged as sensitive, it is typically removed, with attempts to access it blocked. In some cases, the platform might display a notice explaining the reason for removal, though this depends on the platform's specific policies. This tight control of information creates an environment where counter-narratives are largely absent, allowing misinformation to circulate on key issues and shaping public opinion accordingly.

Finally, according to a report by the Global Engagement Center of the United States, the Chinese Government spends billions of dollars every year in order to promote abroad narratives favorable to the Chinese Communist Party (CCP) while stifling critical perspectives on controversial issues such as Taiwan, its human rights practices, and its domestic economy. Through a multifaceted strategy, Beijing aims to establish a global information environment that promotes its viewpoints and discourages criticism. This approach combines propaganda, censorship, digital authoritarianism, strategic partnerships, and influence over Chinese-language media to shape both domestic and international narratives in its favor.

India

Constitution

India’s constitution guarantees citizens the right to freedom of speech and expression in Article 19(1)(a). However, the Indian Constitution also makes clear that this right is not absolute; it can be limited by the State in some instances. Article 19(2) provides that “reasonable restrictions” can legally be imposed on the freedom of speech and expression in the interests of “the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, or in relation to contempt of court, defamation or incitement to an offence.” Thus, there are a number of avenues for the State to place at least some restrictions on freedom of speech in India.

India is also a ratifying party to the International Covenant on Civil and Political Rights, Article 19 of which addresses freedom of expression and when it may be regulated. [source: https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights] However, the Covenant does not figure prominently in Indian legislation, which treats freedom of expression as regulated under India's Constitution.

Criminal law

In the area of online speech, one of the most important laws that has historically been used to regulate speech is the Indian Penal Code, which applies to all of India. First enacted in 1860, the Indian Penal Code was replaced in 2023 by a new criminal code, the Bharatiya Nyaya Sanhita (BNS) 2023. BNS Section 197(d) criminalizes certain politically oriented disinformation under the constitutional exceptions to freedom of speech. Specifically, Section 197(d) imposes criminal penalties on anyone who, by spoken or written words, by signs or by visible representations, or through electronic communication or otherwise, “makes or publishes false or misleading information, jeopardising the sovereignty, unity and integrity or security of India.” The offense is punishable by imprisonment of up to three years, a fine, or both. The BNS (like the previous Indian Penal Code) also criminalizes certain types of hate speech.

Technology legislation

In the technology law field, another important law in India regulating online speech is the Information Technology Act of 2000 (the “IT Act”), another federal law that applies to all of India. The IT Act includes a list of offences punishable by fines, including offences dealing specifically with disfavored speech (speech deemed terrorism or obscene) or criminal speech, as well as false or unauthorized speech, such as fraud, breaches of privacy, and unlawful disclosure of information.

Safe harbour framework

India’s safe harbor provision took its current form in 2011, in Section 79 of the IT Act, around the same time that India promulgated the Information Technology (Intermediary Guidelines) Rules 2011. To take advantage of the safe harbor, intermediaries must only provide the means of communication and must not have a say in the content or in who the users are. In addition, intermediaries are required to perform certain due diligence and to follow guidelines promulgated by the central government. If intermediaries aid and abet illegal actions, or fail to expeditiously remove or disable access to content that is being used to commit an unlawful act, the safe harbor does not apply. (source: https://www.indiacode.nic.in/handle/123456789/1362/simple-search?query=Information%20Technology%20(Intermediaries%20Guidelines)%20Rules,%202011.&searchradio=rules)

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 (the “IT Rules 2021”), which largely replaced the Information Technology (Intermediary Guidelines) Rules of 2011, govern online intermediaries in detail. The IT Rules 2021 were updated in 2023, including with respect to intermediaries’ due diligence obligations.

Disinformation

In regard to disinformation, the IT Rules 2021 impose certain due diligence obligations on all intermediaries in Section 3(1)(b). Section 3(1)(b)(v) requires intermediaries to “make reasonable efforts by itself, and to cause the users of its computer resource to not host, display, upload, modify, publish, transmit, store, update or share any information that … deceives or misleads the addressee about the origin of the message or knowingly and intentionally communicates any misinformation or information which is patently false and untrue or misleading in nature or, in respect of any business of the Central Government, is identified as fake or false or misleading by such fact check unit of the Central Government as the Ministry may, by notification published in the Official Gazette, specify; impersonates another person…” Section 3(1)(d) imposes a requirement to take down content that is illegal when notified by an Agency.

When content is taken down, Section 3(1)(g) requires intermediaries to store the content for investigation purposes for at least 180 days or longer if ordered by a court. Section 3(1)(j) requires intermediaries to respond within 72 hours to written requests for information from government agencies which are authorized for investigative, protective or cybersecurity activities, for the purposes of verification of identity, or for the prevention, detection, investigation, or prosecution, of offences under any law for the time being in force, or for cyber security incidents.

Section 3(2) of the IT Rules 2021 requires intermediaries to implement a mechanism for users and victims to file complaints, which the Rules refer to as a “grievance redressal mechanism”. Complaints must be acknowledged within 24 hours and acted upon within 15 days. Section 3A also establishes a grievance appeal mechanism.
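Rules 3(1)(g), 3(1)(j), and 3(2) thus attach concrete clocks to several intermediary duties. The sketch below, with illustrative rather than statutory key names, collects those deadlines in one place and checks whether a given duty is overdue.

```python
from datetime import timedelta

# Deadlines drawn from the IT Rules 2021; dictionary keys are illustrative.
IT_RULES_2021_DEADLINES = {
    "removed_content_retention": timedelta(days=180),    # Rule 3(1)(g), minimum
    "government_request_response": timedelta(hours=72),  # Rule 3(1)(j)
    "grievance_acknowledgement": timedelta(hours=24),    # Rule 3(2)
    "grievance_resolution": timedelta(days=15),          # Rule 3(2)
}

def is_overdue(duty: str, elapsed: timedelta) -> bool:
    """True if the elapsed time exceeds the Rules' deadline for that duty."""
    return elapsed > IT_RULES_2021_DEADLINES[duty]

print(is_overdue("grievance_acknowledgement", timedelta(hours=30)))  # True
```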

India also places additional rules on significant social media intermediaries (“SSMIs”), which are defined as intermediaries that have more than 50 lakh (5 million) registered users in India. Section 4 of the IT Rules 2021 requires SSMIs (as well as online gaming intermediaries) to comply with additional due diligence obligations, including:

  • appoint a Chief Compliance Officer who shall be responsible for ensuring compliance with the Act and who can be liable for failure to comply in some situations;
  • appoint a nodal contact person for 24x7 coordination with law enforcement agencies and officers to ensure compliance to their orders or requisitions;
  • appoint a Resident Grievance Officer who is responsible for the grievance redressal mechanism;
  • publish monthly compliance reports detailing complaints received and actions taken, the number of links or parts of information removed or disabled in pursuance of proactive monitoring via automated tools, and other relevant information as may be specified;
  • SSMIs also must enable the identification of the first originator of the information on its computer resource as may be required by judicial order in some cases (under certain constitutional exceptions to free speech, in cases of rape, sexually explicit material or child sex abuse material), subject to certain limitations on when such a judicial order can be issued and the scope of the order and what must be revealed.

When an SSMI disables or takes down material, including disinformation, it must first provide the user whose information is being removed or disabled with a notice explaining the reasons for the action, ensure a reasonable opportunity to dispute the action, and ensure oversight by the Resident Grievance Officer. SSMIs also must respond to requests for other information from the government that the government may consider necessary under Section 4. Note that under Section 6 of the IT Rules 2021, social media intermediaries that do not meet the 5 million user threshold can still become subject to these higher SSMI obligations if they permit the publication or transmission of information in a manner that may create a material risk of harm to the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, or public order.
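The classification logic implied by the Rules can be summarized in a few lines: the 50 lakh threshold confers SSMI status, and Section 6 lets the government subject smaller intermediaries that pose a material risk to the same obligations. A hedged sketch:

```python
SSMI_THRESHOLD = 5_000_000  # 50 lakh registered users in India

# Illustrative classification check; the Rules define the threshold and the
# Section 6 designation power, not this function.
def is_ssmi(registered_users_in_india: int,
            designated_under_section_6: bool = False) -> bool:
    return registered_users_in_india >= SSMI_THRESHOLD or designated_under_section_6

print(is_ssmi(6_200_000))      # True: above the 50 lakh threshold
print(is_ssmi(900_000, True))  # True: designated despite a smaller user base
```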

News and publishers

Section 5 requires social media intermediaries to conduct additional due diligence for news and current affairs content. These intermediaries must place publishers of news and current affairs content under additional terms of service that require the publishers to furnish their user account information to the Ministry of Information and Broadcasting. Publishers can then receive a mark of verification.

Failure to comply with the IT Rules 2021 can result in loss of the safe harbor for the intermediary.

The IT Rules 2021 also contain a Code of Ethics and Procedure and Safeguards in Relation to Digital Media in Part III, which regulates publishers of news and current affairs content and of online curated content operating in India. The Code of Ethics imposes rules to prevent many types of problematic content, including hate speech, illegal content, and content that may be harmful to children. Through the application of existing journalistic and broadcasting codes, the Code also promotes the provision of accurate and reliable content and discourages sensationalism and fake news.

United States

Definitions

According to the European Commission’s High Level Group (HLEG), disinformation is defined as ‘false, inaccurate, or misleading information, designed, presented and promoted to intentionally cause public harm or for profit’. Misinformation is defined as ‘misleading or inaccurate information shared by the people who don’t recognize it as such’.

In the U.S., clear definitions for "misinformation" and "disinformation" are not fully codified across all statutes. However, governmental agencies like the Cybersecurity and Infrastructure Security Agency (CISA) provide generally accepted definitions. Misinformation is typically understood as false or inaccurate information shared without the intent to deceive. It occurs when incorrect information is spread inadvertently, often due to misunderstandings or mistakes, rather than a purposeful intent to mislead. Disinformation, by contrast, is false information deliberately crafted and disseminated with the intent to mislead, manipulate, or cause harm. Disinformation is frequently used in contexts involving political, social, or economic influence campaigns, such as election interference or propaganda efforts.

Freedom of Speech and Legal Protection

In On Liberty, John Stuart Mill makes the case for freedom of speech, arguing that truth can only be reached if both true and false statements and opinions remain uncensored. Indeed, in the United States, false statements are often granted legal protection in order to protect other values and avoid a chilling effect on public discourse. As will be discussed below, the First Amendment’s guarantee of free speech also implements the values that underpin it, including facilitating representative democracy and self-government, advancing knowledge and truth in the marketplace of ideas, and promoting individual autonomy, self-expression, and self-fulfilment.

Under the First Amendment to the U.S. Constitution:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

As a general rule, the government may not regulate speech “because of its message, its ideas, its subject matter, or its content.” [Police Dep’t of Chicago v. Mosley, 408 U.S. 92, 95 (1972)].

The Congressional Research Service (CRS) published an In Focus report in 2022 that says: 

  • The Supreme Court has said the Free Speech Clause protects false speech when viewed as a broad category, but the government may restrict limited subcategories of false speech without violating the First Amendment.
  • Defamation, fraud, political advertising, and broadcast speech are subject to special consideration. 
Case note: New York Times Co. v. Sullivan, 376 U.S. 254 (1964)

New York Times Co. v. Sullivan was a landmark U.S. Supreme Court ruling on freedom of speech protections under the First Amendment to the U.S. Constitution, restricting the ability of public officials to sue for defamation. The Court held that, in order to prove libel, a public official must show that what was said against them was made with actual malice – "that is, with knowledge that it was false or with reckless disregard for the truth."

This imposes a high threshold as it only applies to defamatory speech against public officials and only if actual malice is shown.

The protection of false speech rests on the concern of a chilling effect. The Supreme Court has recognized that false statements may not add much value to the marketplace of ideas; even so, prohibiting false speech would also “chill” more valuable speech, causing people to self-censor out of fear of violating the law. Consequently, the First Amendment creates “breathing space” protecting the false statements and hyperbole that are “inevitable in free debate.” In the words of Justice Brennan: ‘even a false statement may be deemed to make a valuable contribution to a public debate’.

Strict Scrutiny in Content-Based Legislation

As a general rule, if a US law targets speech based on its expressive content, that content-based regulation will trigger strict scrutiny analysis. Under strict scrutiny, a law is presumptively unconstitutional unless the government can show the challenged law is the least restrictive means of targeting speech while also serving a compelling governmental interest. 

Case note: United States v. Alvarez, 567 U.S. 709 (2012)

In United States v. Alvarez, the U.S. Supreme Court ruled that the Stolen Valor Act of 2005, a federal law prohibiting false statements about receiving military decorations or medals, violated the First Amendment’s guarantee of the right to free speech.

The four-Justice plurality opinion clarified “that falsity alone may not suffice to bring the speech outside the First Amendment.” Thus, the plurality opinion applied strict scrutiny to the Stolen Valor Act as a content-based law.  Moreover, the Court held that the law was not sufficiently narrowly tailored because it punished false statements regardless of the context or purpose. Accordingly, there was no “direct causal link” showing the law’s broad scope was necessary to the government’s goal of protecting the integrity of the military honors system. 

Election Speech

False, misleading, and intimidating speech has proliferated in both online and real world spaces, particularly as it relates to elections. Although the right to free speech is generally protected by the First Amendment, that right does not extend to saying anything, anywhere, in any manner. When governments identify efforts to intimidate, deter, or otherwise interfere with voters, the First Amendment allows them to safeguard the fundamental right of citizens to vote freely without fear, threat, or undue influence. Governments can also take steps to prevent, suppress, or remove false and misleading speech that seeks to disrupt the process of voting. 

Are election-related false statements protected by the First Amendment? Generally, yes, but not if they interfere with the voting process. The Constitution protects lies, especially if they are not under oath and can be easily countered (United States v. Alvarez). However, false statements can be regulated in contexts such as fraud, defamation, libel, statements to government officials, perjury, impersonation of a government official, and misleading advertising or commercial speech.
What types of false statements about elections can be constitutionally prohibited? Governments can prohibit false or misleading speech about voting logistics, such as when, where, or how to vote, in order to protect voter access (Minn. Voters All. v. Mansky). The government’s interest in protecting voting rights is emphasized especially during election campaigns, when false statements can have adverse public consequences (McIntyre v. Ohio Elections Comm’n).
Can the government regulate other types of election-related speech? Yes, when necessary to protect voting integrity. The Supreme Court has recognized that government has compelling interests in protecting voters “from confusion and undue influence” and in “preserving the integrity of its election process” [Burson v. Freeman]. “Preventing voter intimidation and election fraud” is “necessary,” and “ensuring that every vote is cast freely, without intimidation or undue influence, is a valid and important state interest” [Brnovich v. Democratic Nat’l Comm.]. Thus, where it is necessary to regulate speech about elections—a content-based category of speech that is subject to strict scrutiny—the government can take steps that are the least restrictive means necessary to protect this compelling interest.
Case note: Social Media Influencer Sentenced for Election Interference in 2016 Presidential Race
In a recent case, the Department of Justice successfully prosecuted a social media influencer who made online posts ahead of the 2016 election to “disseminate fraudulent messages that encouraged supporters of presidential candidate Hillary Clinton to ‘vote’ via text message or social media, which was legally invalid”.  Although the defendant challenged his prosecution by claiming that his deceptive posts were “political speech,” a federal court found that his speech was “merely a single element within a course of criminal conduct”—namely, a conspiracy to interfere with the right to vote [United States v. Mackey]. Accordingly, the speech was “integral to criminal conduct” and thus unprotected by the First Amendment. The court also found that intentionally false speech about voting procedures did not fall within the core of political speech and that it could be regulated by the government where it did not address the substance of what was on the ballot but instead dealt with access to the ballot. Thus, speech that harms the integrity of the election process—such as misinformation about where, when, or how to vote—can be regulated even where speech about a candidate or a candidate’s views, even if false, could not be.


Role of Social Media Platforms

Recent events such as the Cambridge Analytica scandal and the allegations of Russian interference in the 2016 elections have shown how important online intermediaries and platforms are.

Online intermediaries and social media platforms have a large amount of freedom when it comes to speech violations. Section 230 of Title 47 of the U.S. Code, part of the codification of the Communications Act of 1934 (enacted by Section 509 of the Telecommunications Act of 1996 as part of the Communications Decency Act), has been interpreted to mean that operators of Internet services are not publishers, and thus not legally liable for the words of third parties who use their services.

  • Section 230(c)(1): "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
  • Section 230 confers full immunity on ‘interactive computer services’ from liability for torts others commit on their websites or online forums, even if the provider fails to take action after receiving notice of the harmful or offensive content.
  • Through the so-called Good Samaritan provision, this section also protects ISPs from liability for restricting access to certain material or giving others the technical means to restrict access to that material.
Case note: Murthy v. Missouri, 603 U.S. ___ (2024)

In Murthy v. Missouri, the Supreme Court rejected a suit brought by two states and several social media users against Executive Branch officials and agencies, claiming that government officials’ contacts with social media companies about misinformation related to the COVID-19 pandemic and the 2020 election violated First Amendment guarantees against government censorship [Murthy v. Missouri].

The Court emphasized that social media companies already had policies against misinformation and disinformation, which they enforced by “exercising their independent judgment” even after consulting with government officials as well as outside experts. The Court held that the plaintiffs lacked standing, in part because they could not “link past social media restrictions” imposed by the social media companies to the government officials’ communications in which they identified potentially misleading and false speech. Most major social media companies have in place policies against election-related misinformation and disinformation. After Murthy, social media companies remain free to enforce these policies on their own platforms by exercising their own judgment and conferring with outside experts, including government officials who may reach out to alert them to potential false statements. Unless there is “evidence of continued pressure” from government officials to interfere with companies’ independent application of their content-moderation policies, companies are “free to enforce, or not enforce, those policies,” even if those decisions are informed by contacts with government actors.

State Legislation

The federal government does not regulate false statements about the subject matter of elections, but many states do. These laws impose liability for a number of different types of false claims.

Thirty-eight states have statutes that directly target the content of election-related speech:

  • Sixteen states have statutes that prohibit false statements about a candidate for public office. 
  • Fourteen states have statutes that prohibit false statements about a ballot measure, proposal, referendum, or petition before the electorate. 
  • Thirteen states have statutes that prohibit false statements about voting requirements or procedures.
  • Eleven states have statutes that prohibit false statements about the source, authorization, or sponsorship of a political advertisement or about a speaker’s affiliation with an organization, candidate, or party. 
  • Nine states have statutes that prohibit false statements that a candidate, party, or ballot measure has the endorsement or support of a person or organization. 
  • Seven states have statutes that prohibit false statements about incumbency.

A number of states have false statement laws that explicitly apply to online actions and that can be applied to election deception.

  • California law prohibits “political cyber-fraud,” which includes intentionally causing visitors to believe that advocacy or information posted on a political website represents the views of a candidate, or of someone in favor of or opposed to a ballot measure, when it does not [Cal. Elec. Code 18320-18323].
  • Texas law (and California law until 2022) prohibits distribution of election-related “deepfakes,” which use artificial intelligence to create highly realistic videos that appear to portray real people when they do not.