Princeton's Digital Witness Lab Will Investigate How WhatsApp Misinformation Affects Elections Abroad

By Carrie Compton, Princeton International
Surya Mattu leads the Digital Witness Lab at Princeton’s Center for Information Technology Policy.

Across the globe, social media and modern hyperconnectivity have had indelible and often insidious repercussions for democracy. Princeton’s Center for Information Technology Policy (CITP) has been scrutinizing tech’s societal implications since 2005. Last year, CITP launched the Digital Witness Lab, which will both contribute to and translate the center’s academic findings, creating new tools, technologies and lines of inquiry that help journalists report on the tech sector.

Surya Mattu, a data engineer and journalist who leads the lab, says he wants to jumpstart the public imagination around algorithms the way Upton Sinclair exposed conditions in slaughterhouses a century ago. “We are the photojournalists on the streets of algorithm city — I’m trying to build the imagery of what the algorithm does. Humans drive everything, and so I want to bring the culpability back to the people creating them,” he said, adding: “The lab’s work is also meant to show how technology can be built with civic values versus corporate ones.”

Around 2014, Mattu began critiquing technology as an NYU student, interrogating the privacy pitfalls of smart devices through art. In Unfit Bits, he and collaborator Tega Brain lampooned the logic of lowering health insurance fees based on Fitbit data by showing how easily the metrics could be spoofed once the tracker was attached to a drill or a wheel.

Around that same time, he joined the investigative news outlet ProPublica to analyze the algorithm behind a criminal risk assessment tool used by U.S. courts that purported to predict recidivism. The reporting team’s stories, which made them finalists for a 2017 Pulitzer Prize, revealed that racial bias baked into the algorithm led to harsher consequences for Black defendants. Mattu also worked for the nonprofit tech watchdog The Markup, where he created the Citizen Browser Project: about 1,000 paid panelists shared their Facebook feeds with the project, and the data showed that the social media giant continued promoting divisive content despite its post-Jan. 6 promise to stop.

At CITP, Mattu will focus on the encrypted messaging platform WhatsApp, owned by Facebook’s parent company Meta, specifically in India and Brazil. “The tech and algorithmic accountability conversation needs to be a global one,” said Mattu. “Some of the worst harms take place where there is less regulation and scrutiny than in the West.” 

India’s 2019 general election and Brazil’s 2018 presidential election were both marked by rampant misinformation campaigns on WhatsApp, Mattu said. The app is ubiquitous in both countries: India has its largest national user base, with more than 500 million users in a country of 1.4 billion; Brazil is second, with about 140 million users in a population of about 216 million.

In India, users have for years been exposed to incendiary rhetoric that stokes religious tensions between Hindus and Muslims, along with rumors that have sparked vigilante mobs.

In Brazil, a 2021 Guardian analysis showed the extent of the app’s reach during the country’s 2018 presidential election: of roughly 12,000 right-wing messages that went viral, approximately 42 percent contained false information favoring the strongman Jair Bolsonaro, who has since been voted out of office.

While Facebook can moderate user-posted content, WhatsApp’s encrypted platform is far harder to scrutinize. Mattu’s team will begin its investigation by studying the efficacy of WhatsApp’s forwarding limits, which cap each forwarded message to five chats at a time. (Before those limits, users could pass a message along to far more chats at once.)
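
To make the intuition behind those limits concrete, here is a purely illustrative sketch in Python; it is not based on WhatsApp data or on the lab’s methodology, and the forwarding probability, audience sizes and number of sharing generations are invented assumptions.

```python
# Toy branching-process model (illustrative only, not WhatsApp's real mechanics):
# compare how far one message travels when each person can forward it to at most
# `forward_cap` chats versus when forwarding is unlimited.
import random

def simulate_spread(forward_cap, forward_prob=0.3, max_audience=20, generations=6, seed=1):
    """Return the total number of chats reached by a single seed message.

    forward_cap  -- maximum chats a user may forward to (None = no limit)
    forward_prob -- assumed chance that a recipient forwards the message at all
    max_audience -- assumed upper bound on chats a motivated user would forward to
    """
    rng = random.Random(seed)
    current, total = 1, 1  # the original sender counts as one reached chat
    for _ in range(generations):
        reached_next = 0
        for _ in range(current):
            if rng.random() < forward_prob:
                desired = rng.randint(1, max_audience)
                sent = desired if forward_cap is None else min(desired, forward_cap)
                reached_next += sent
        current = reached_next
        total += reached_next
    return total

print("no cap  :", simulate_spread(forward_cap=None))
print("cap of 5:", simulate_spread(forward_cap=5))
```

Even in this toy model, the cap shrinks each successive generation of sharing, which is the kind of dampening effect the team will try to measure on the real network.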

They will also delve into whether AI-generated content affects the volume of misinformation. India holds a general election in 2024, and Mattu is eager to see how it compares to 2019. (Brazil’s next presidential election is in 2026.)

Above all, Mattu said, he wants to help people understand they are entitled to a healthier online ecosystem. “There’s a world where we could build technology in a way that doesn’t rely on the venture capitalist model with unending growth as a measure of success,” Mattu said. “Sesame Street feels different to Cartoon Network, right? That’s something we have a sophistication for because we’ve been exposed to it. Civic and capitalist models will continue to exist online, but we need to help people decipher between the two.”