Homo Digitalis 01 | Diffuse Actors in Indian Elections

Hello,

I am Amber Sinha, a researcher working at the intersection of law, technology and society, and studying the impact of digital technologies on socio-political processes and structures. I am based in Delhi NCR, India and am currently a Senior Fellow at Mozilla Foundation and an Information Fellow with Tech Policy Press. This is the inaugural edition of my new newsletter, Homo Digitalis. To receive Homo Digitalis in your inbox, please subscribe.

2024 will see over 80 elections in 78 countries, making it a decisive year for research, policy interventions and movement building. Through this year, I will write about AI, transparency and elections, and issues that lie between and around them. As a researcher based in India, a lot of what I write will be informed by that context. I will look at digital policymaking here in depth, but also delve into how it relates to other regions and interests from a geopolitical perspective. My past work with colleagues and projects on AI, data governance and digital identification in Sub-Saharan Africa and Latin America will also influence this and future editions.

We live in degraded times, inundated by facile, shallow, and rapidly disseminated bursts of information. The avalanche of poorly constructed information that we face is a stark contrast to the kind of writing where, in the words of Isaac Babel, “no iron spike can pierce a human heart as icily as a period in the right place.” I hope that, in this newsletter, I can present a small slice of the world around us while asking the big questions.

In Focus: Regulatory Vacuum of Diffuse Actors in Indian Elections

Last month, in collaboration with Tactical Technology Collective’s Influence Industry Project, I wrote about the role played by diffuse actors in the Indian election and the regulatory vacuum in which they operate. Diffuse actors are people who are not formally aligned with a political party or institution but often act in concert with political campaigns on social media. The regulation of political content on social media platforms is already complex and problematic, but is made even more complicated by a loose network of political elite actors, influencers and the general public who create, share and disseminate content. In my essay, I looked at the very limited regulatory apparatus that governs online political speech in India: the Model Code of Conduct (MCC), and self-regulatory efforts made by large platforms such as Twitter, Facebook and Google during the 2019 elections in India in the form of the Voluntary Code of Ethics for General Elections (Code).

The Code aimed to increase transparency in paid political advertising by bringing political ads on platforms like Facebook and Twitter under the purview of the MCC. For instance, political parties were required to disclose expenditure accounts for social media advertising as they do with newspaper and radio advertisements. Expenditure on social media had to be declared by candidates and parties, thereby incorporating it into the overall spending limit. Advertisers were required to submit pre-certificates issued by the ECI and the Media Certification and Monitoring Committee for election ads featuring the names of political parties or candidates. Candidates were also obliged to provide details of their social media accounts when filing nominations. Additionally, the Code included provisions for direct communication channels between the companies and the ECI to expedite complaint resolution.

All of this must be considered with the knowledge that the Election Commission of India and the Representation of the People Act do not provide a specific definition of “political advertising.” As a result, each internet platform was left to decide for itself how to govern political advertising. For example, Google considers political advertising to include ads from political parties, businesses, non-profit organizations, and individuals, as long as they feature a political party. On the other hand, X, formerly known as Twitter, defined political advertising as ads purchased by a political party or political candidate, or advocating for a clearly identified candidate or political party. X placed a ban on political ads in 2019 after the micro-blogging site faced criticism for not curbing the spread of misinformation during elections (though this was relaxed considerably in 2023).

In 2019, political parties and candidates were required to disclose their expenditures on social media advertisements, but there has still been no attempt to regulate ad spending by loosely connected supporters who indirectly contribute to campaign funding through coordinated advertising. Some platforms have made efforts to introduce accountability in this unregulated space, such as requiring disclaimers on paid advertising, implementing take-down procedures for non-compliance, and creating a public repository for easy access to advertisements and expenditures. However, these measures are still inadequate in identifying all types of political content and actors, and are only partially enforced.

The emergence of unregulated, diffuse entities poses complex questions for regulatory bodies and sheds light on the myriad ways in which online campaigns are now run. In states where election management bodies already face capacity issues, it is not realistic to expand the regulatory scope to include all political content during elections. On the other hand, more traditional regulatory measures, such as reform of campaign finance laws, are needed to effectively monitor and protect fair elections over the longer term, with real consequences for campaigns that disrupt democratic values. Read the full essay here.

Further Reading

If you would like to learn more about the state of data in the Indian election, do see the other essays in the series commissioned by Tactical Tech. Vasudevan Sridharan examines here the rise and role of the non-governmental organisation Citizens for Accountable Governance (CAG) and its evolution into the political consulting group, Indian Political Action Committee (I-PAC). In “Government data in political hands: Aadhaar citizen ID and the 2024 Indian election campaigns”, Safina Nabi explores the digitalisation of sensitive data of citizens and residents across India within systems that have come about with very little accountability or transparency.

Many themes in these three pieces are also explored in detail in my book, The Networked Public, which was released shortly after the 2019 general elections.

EFCSN finds Big Tech in Violation of Fact-checking Commitments

The European Fact-Checking Standards Network (EFCSN) published a review of the fact-checking commitments of the major online platforms and search engines operating in the EU. The review is based on public data from those companies’ last DSA and Code of Practice semiannual reports and measures their performance against metrics in the Code of Practice on Disinformation. Notably, this Code is also slated to become the Code of Conduct under the Digital Services Act in 2024.

The key commitments of the Code are: (1) that platforms conclude agreements with independent fact-checking organizations to have complete coverage of the EU member states and official languages, (2) that they integrate or consistently use fact-checking in their services for the benefit of their users, and (3) that they provide fact-checkers with access to the data that they need to maximize the quality and impact of their work. The review by the EFCSN covers the performance of YouTube, Google Search, Facebook, Instagram, Bing, WhatsApp, X, Telegram, LinkedIn and TikTok. Of these, all except Telegram and X are signatories to the Code.

The performance of most of the companies against the metrics in the Code is dismal, and in some cases, rife with misleading claims in their reporting. For instance, YouTube lists 10 “EU-based fact-checking organisations” in the YouTube Partner Program, but some of these focus on markets such as Myanmar, Indonesia and Brazil. The Partner Program is passed off as a genuine partnership with the European fact-checking community but is, in fact, a monetisation scheme for any type of content creator.

HRW claims Meta has been Censoring pro-Palestine Content

In a detailed report, Human Rights Watch (HRW) has claimed that Meta has engaged in systemic and global censorship of pro-Palestine content on its platforms, Facebook and Instagram. This is based on 1,050 cases of takedowns and other suppression of content on Meta’s platforms, collected from 60 countries during October and November 2023.

The findings are staggering. Of the 1,050 cases reviewed by HRW, 1,049 involved “peaceful” content in support of Palestine that HRW says “was censored or otherwise unduly suppressed,” while one case involved the removal of content in support of Israel. HRW characterises the suppression as including: removal of posts, stories and comments; suspension or permanent disabling of accounts; restrictions on the ability to engage with content (such as liking, commenting, sharing, and reposting on stories) for a specific period, ranging from 24 hours to three months; restrictions on the ability to follow or tag other accounts; restrictions on the use of certain features, such as Instagram/Facebook Live, monetisation, and recommendation of accounts to non-followers; and “shadow banning,” or a significant decrease in the visibility of an individual’s posts, stories, or account, without notification.

To carry out data collection for this research, HRW appealed to the public to report by email cases in which Meta censored content that, in their view, did not deserve to be censored. They received reports of 1,285 posts that had been taken down. Out of these, 235 were deemed ineligible. The reasons for ineligibility included challenges in substantiating a case for censorship, a lack of relevance to Israel or Palestine, or content that promoted discrimination and violence.

In response, Meta has accepted that its takedown measures are not foolproof, especially in ‘exceptional and fast-moving situations’, but denied the claims of systemic bias in its takedown process. Meta has also questioned the methodology of the study, arguing that presenting “1,000 examples - out of the enormous amount of content posted about the conflict -” as proof of “systemic censorship” is misleading. For a more detailed and critical review of HRW’s methodology, see this post, which makes a case for a more statistically robust sampling approach for such studies.

Criticisms of the research methodologies of human rights organisations, while valid to a limited extent, often ignore a wider reality: there is little public or research access to data about measures taken by the large private corporations that now act as arbiters of free speech. In the absence of access to meaningful data, and speaking as someone who has faced similar challenges in past research endeavours, I would make a strong case for accepting the legitimacy of methodologies that rely on data collected in response to public calls. Given the size and geographical range of the dataset, while it may not qualify as proof of ‘systemic bias’, I would infer that it strongly suggests as much.
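The statistical objection here can be made concrete with a toy simulation. The sketch below is purely illustrative (none of the numbers are drawn from HRW’s data or Meta’s): it assumes a hypothetical population of moderated posts split between two viewpoints, and shows how a self-selected sample, where one group responds to a public call far more often than the other, inflates that group’s apparent share relative to a random sample of the same size.

```python
import random

random.seed(42)

# Hypothetical population of 100,000 moderated posts.
# Assume (for illustration only) that 70% concern viewpoint A
# and 30% concern viewpoint B.
population = ["A"] * 70_000 + ["B"] * 30_000

# Self-selection: suppose posters of A respond to a public call
# to report takedowns ten times more often than posters of B.
report_prob = {"A": 0.010, "B": 0.001}
self_selected = [p for p in population if random.random() < report_prob[p]]

# A random sample of the same size, for comparison.
random_sample = random.sample(population, len(self_selected))

def share_of_a(sample):
    """Fraction of a sample that concerns viewpoint A."""
    return sum(1 for p in sample if p == "A") / len(sample)

print(f"true share of A:      {share_of_a(population):.2f}")
print(f"self-selected sample: {share_of_a(self_selected):.2f}")
print(f"random sample:        {share_of_a(random_sample):.2f}")
```

Under these assumed response rates, the self-selected sample makes viewpoint A look near-universal (roughly 0.95 of reports) even though it is 0.70 of the population, while the random sample tracks the true share. This is the kind of bias the methodological critique points to; my argument above is that, absent platform data access, such samples remain informative even if they cannot quantify the true rate.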

In Other News

Doing research and writing for a living has meant that I stopped writing for pleasure. When I spoke to others in the research and civil society space, I found that many of us who found our way to this life through a love of writing are in the same position. Late last month, I started a new section on my website called Play, which is an attempt to break this pattern.

I began with a series on appreciating the everyday delights that punctuate our humdrum existence. My first post is a story about finding my way to birding via UsTwoGames’ beautiful online game, Alba, and the pleasures of watching the Eurasian Hoopoe. My second post is about meeting one’s heroes. I love Paul Fernandes’ illustrations, and they served as a guide to my exploration of Bangalore. Recently, I had the chance to meet him in person and convey what his work has meant to me. I hope to keep writing here.

Reading Recommendations

I recently read Margot Kaminski’s paper, The Developing Law of AI Regulation: A Turn to Risk Regulation, which is part of an excellent series by Lawfare called The Digital Social Contract. Published in April 2023, the paper is an excellent review of the emerging regulatory frameworks to govern AI — Singapore’s Model AI Governance Framework, the draft EU AI Act, and the NIST AI Risk Management Framework. Kaminski does a splendid job of identifying the limited nature of risk regulation that we see represented across these regulations which she describes with brutal simplicity — “[w]atch your data sets, assess risks, do an impact assessment, do some risk mitigation—and don’t establish or affirm a private right of action for related harms.” She highlights the inherent epistemic limitations of the risk regulation approach drawing from environmental regulation, and also questions why a range of tools in the risk approach’s kit are simply missing in conversations about AI.

I recently read Iginio Gagliardone’s 2016 book, The Politics of Technology in Africa. I visited Addis Ababa to speak at the Internet Governance Forum in 2022, while working for Pollicy Data Institute, Kampala. Through a hectic week, I had several long and educational conversations with my East African colleagues about the history and technopolitics of Ethiopia. Gagliardone’s book had been on my reading list since then, and I finally got around to reading it only last month. Grounded in Hecht’s theory of technopolitics, the book is a case study of Ethiopia’s journey through the introduction of ICTs. Unlike countries such as Kenya, which empowered the private sector and saw the development of civil society, Ethiopia saw a more controlled, step-by-step deployment of ICT. The parts of the book which focus on the power struggle between the government and private actors are most fascinating, and accurately highlight the risks of a centralised approach, which concentrates power in the hands of the state, while also weighing the benefits of a more considered and cautious adoption. The book is also a godsend for anyone looking to understand the complex relationship that institutional donors have with African states. Finally, it touches briefly on the Chinese influence on the telecom sector in the region, and Gagliardone’s follow-up book, China, Africa and the Future of the Internet: New Media, New Politics promises to explore that further.

My favourite recent non-work reading includes Gideon Haigh’s newsletter, Cricket et al. Haigh is my favourite living sports writer, and it is a delight to read him, unburdened by the contours of feature or book writing. His ~10,000-word opus on his late brother, written over just three days, is among the most personal writing I have read in a while. Over the winter break, I read the much-celebrated Lessons in Chemistry by Bonnie Garmus, and Sally Hepworth’s The Soulmate. Lessons delivers strongly on its promise of being an intelligent, hilarious and life-affirming tale of a single mother and accomplished chemist in 60s America, driven by sexism, harassment and plagiarism to hosting a cooking show, which she does with severe manner and scientific precision. As someone who enjoys all manner of cooking shows, I would watch it every week. Soulmate is an account of two contrasting and intertwined marriages, told from the perspectives of two women, one alive and one dead, written as a thriller in which you do not see any of the twists coming.

I hope you enjoyed this first edition of the newsletter. In the coming weeks, I will be publishing a series of essays on algorithmic transparency at my project, Knowing without Seeing, and will be doing a deep dive into Facial Recognition Technologies in India soon. Until next time.

Thanks for reading,
Amber
