WhatsApp bans over 2.2 million Indian accounts
Meta-owned WhatsApp banned 2.21 million Indian accounts in June for violating its community guidelines. The platform also received 632 complaints and requests from users via its grievance redressal system, according to its monthly compliance report covering June 1-June 30, 2022.
A majority of these pertained to ban appeals (426), followed by account support (123), product support (35) and safety (16), among others. Based on these requests, the instant messaging platform took action against 64 accounts, all in response to ban appeals.
WhatsApp also added that safety-related grievances (16) pertain to issues that may be about abuse or harmful behaviour on the platform. For such grievances, WhatsApp responds to users by guiding them to report the complaint via in-app reporting.
The social media platform also stated that an account is ‘actioned’ either when an account is banned or a previously banned account is restored – as a result of a complaint.
WhatsApp consistently bans around two million accounts every month. It bans accounts based on its abuse detection approach, which also includes negative feedback received from users via its ‘Report’ feature.
Explaining its proactive detection process, WhatsApp has stated that abuse detection operates at three stages of an account’s lifecycle – at registration, during messaging, and in response to negative feedback.
“A team of analysts augments these systems to evaluate edge cases and help improve our effectiveness over time,” the company said in the report.
WhatsApp and Facebook have stated that the Competition Commission of India (CCI) could not investigate the platform in a ‘creeping fashion’. Facebook has also argued that it should not be dragged into the case, as WhatsApp’s policies are not Meta’s policies.
As per the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, also known as the new IT rules and first notified in February 2021, all social media intermediaries with more than 5 Mn registered users must publish monthly compliance reports.
Under the IT Rules, 2021, other platforms that have published their reports include Meta, which removed 22.8 Mn content pieces in total: Facebook took action against 18.7 Mn pieces (a majority of them for spam and content related to violence), while Instagram took action against 4.1 Mn pieces (mostly violent and graphic content).
Among other major social media platforms, Twitter banned 43,140 accounts for violating its guidelines on terrorism promotion, child sexual exploitation, non-consensual nudity and other similar content. Tech giant Google took removal action against 640,339 content pieces that violated its community guidelines.