Philippa Williams, Queen Mary University of London and Lipika Kamra, O.P. Jindal Global University
India’s 2019 national elections are widely anticipated to be the “WhatsApp elections”. Against a backdrop of rapidly improving internet connectivity and rising smartphone use, the number of people using the private messaging service WhatsApp has soared to more than 200m since its launch in India in mid-2010 – more users than in any other democracy. And now the country’s political parties are moving to capitalise on this mass communication channel.
But given that WhatsApp has already been used to misinform voters in other elections, and to spread damaging “fake news” that has led to serious violence in India, there’s a danger it could also threaten the democratic process.
Keen to extend the power of social media mobilised in the 2014 election, India’s ruling Bharatiya Janata Party (BJP) is trying to target smartphone-owning voters at the grassroots. More than 900,000 volunteer “cell phone pramukhs” are creating neighbourhood-based WhatsApp groups to disseminate information about the BJP’s development achievements and prime minister Narendra Modi’s campaign activities. Meanwhile, the opposition Indian National Congress party is playing catch-up with the launch of its “Digital Sathi” app and the appointment of its own volunteers to coordinate local digital campaigns.
But there’s good reason to think the widespread popularity of WhatsApp in India could have a damaging effect on the election. For one thing, the 2018 Brazilian elections and recent state-level elections in India exposed how WhatsApp is being used to rapidly share messages intended to misinform voters for political gain.
There are also conditions specific to India that shape how WhatsApp is used. While parties across India’s political spectrum – as well as globally – increasingly seek to gain from fake news by manipulating public opinion, the Hindu right has been far more successful at mobilising a common socio-political identity through platforms like WhatsApp. In particular, invitation-only groups have spread virulent and vitriolic messages that have played a role in cultivating a strong nationalist identity.
The recent conflict with Pakistan over Kashmir, which is likely to play an influential role in the election, has led to the spread of viral content that has stoked public tension, as well as a reported flood of misinformation.
In some cases, when more sinister forms of misinformation have gone viral, the impact on everyday social life in India has been lethal. The misuse of WhatsApp has been connected with at least 30 incidents of murder and lynching, for example following the circulation of child abduction rumours.
Anxious about the inadvertent dark side of its product, particularly within one of its biggest markets, WhatsApp has already launched its own public education campaign in India urging users to “spread joy not rumours”. It has also made simple changes to the app’s design to encourage users to pause before forwarding messages, and has limited both the number of people a message can be sent to at once and the number of times it can be forwarded – a restriction that has since been rolled out globally. And it has banned more than 6m apparently automated and potentially harmful accounts in the past three months.
These steps are a starting point but may not be enough. For one thing, despite the limits, a message can still be sent to a group of up to 256 people and forwarded to five chats at a time – so if each of those chats is a full group, a single forward can reach 1,280 people in seconds.
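As a rough illustration of that fan-out, the back-of-the-envelope arithmetic looks like this (a minimal sketch in Python using only the two limits mentioned above; the constant names are illustrative, not anything from WhatsApp itself):

```python
# Back-of-the-envelope fan-out, using the limits described above:
# a group can hold up to 256 members, and a message can be forwarded
# to at most five chats at a time.
MAX_GROUP_SIZE = 256   # illustrative constant, not a WhatsApp value
FORWARD_LIMIT = 5      # illustrative constant

# If each of the five chats is a full group, one forwarding action reaches:
reach_per_forward = FORWARD_LIMIT * MAX_GROUP_SIZE
print(reach_per_forward)  # 1280
```

And because every one of those recipients can repeat the same forwarding action, the potential audience multiplies with each hop.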
Another challenge is that research suggests people care less about the validity of a message’s source and content, and more about who sent it and whether it entertains or reinforces a sense of identity. So journalistic efforts to fact-check reports circulating on WhatsApp are likely to do little to improve media literacy or blunt the detrimental impact of fake news.
Blaming each other
Part of the problem is that the question over who is at fault for the spread of misinformation is contentious and politically charged. Politicians have blamed WhatsApp and called on it to trace and stop the source of hostile messaging. The company is resolute that it can’t access the encrypted messages sent via its app and, even if it could, sharing them with the government would be tantamount to state surveillance, a position supported by India’s Supreme Court. The firm has, in turn, blamed Indian political parties for “misusing” the app during election times.
Ultimately, the role of WhatsApp in Indian politics needs to be understood through the interaction of technology with wider social and cultural issues. WhatsApp is a tool that amplifies certain tendencies that already exist in Indian society. For example, incidents of lynching might have much more to do with incitement to violence in a divided society than with an app that potentially facilitates the spread of rumours. Similarly, messages that promote hatred on religious, caste and gender lines rely on prevailing social cleavages.
We need a more well-rounded understanding of the emerging links between digital politics and the public sphere. How is (mis)information circulated by messaging apps related to more traditional forms of political campaigning, such as door-to-door canvassing, rallies and speeches? And how do these different spheres influence political participation and allegiance in different ways? This knowledge needs to be the starting point of any intervention to address WhatsApp’s role in misinformation during elections.
Philippa Williams, Senior Lecturer in Human Geography, Queen Mary University of London and Lipika Kamra, Assistant Professor, Jindal School of Liberal Arts and Humanities, O.P. Jindal Global University
This article is republished from The Conversation under a Creative Commons license. Read the original article.