Bharat Nayak, a fact-checker based in eastern India, spotted a worrying surge in misinformation and hate speech against Muslims on his WhatsApp dashboard following the October 7 attack by Hamas militants on Israel. This aggravation of an already dangerous situation embodies a severe problem at the heart of the current digital age: the spread, monitoring, and curtailment of hate speech and disinformation online.
These viral messages, sourced largely from public WhatsApp groups across India, contained graphic and provocative content, much of it falsely labelled as footage from the Israel conflict. Worryingly, the content was shared many times over owing to WhatsApp's lack of content moderation, underscoring the persistent problem of hate speech spreading under the guise of misinformation on major social media platforms.
In the wake of the conflict, which has resulted in significant loss of life in Israel and the Gaza Strip, there has been a surge in disinformation and hate speech across social media platforms globally. In response, big tech companies such as Meta and X have reportedly removed tens of thousands of such posts, but their efforts appear largely ineffective against the sheer volume of content, a failure that highlights the need for better content moderation, especially in non-English languages, according to digital rights experts.
This glaring inadequacy in content moderation is neither new nor exclusive to the present Israel-Hamas conflict. Meta, for example, has faced lawsuits accusing the platform of enabling disinformation and hate speech that led to devastating real-world consequences in Kenya, Sri Lanka, India, and Cambodia.
Despite the ongoing debate over improved content moderation strategies, addressing the problem remains a challenge for all platforms because of its resource-intensive nature and the difficulty of detecting and controlling user behaviour.