New Delhi, July 3: Meta-owned Facebook ‘actioned’ about 17.5 million content pieces across 13 violation categories in India during May, according to the latest monthly report by the social media giant. The content ‘actioned’ belonged to categories including bullying and harassment, violent and graphic content, adult nudity and sexual activity, child endangerment, dangerous organisations and individuals, and spam, among others.
Facebook took action against about 17.5 million content pieces between May 1 and 31, 2022 across multiple categories, while Meta's photo-sharing platform Instagram 'actioned' nearly 4.1 million pieces of content across 12 categories during the same period, according to its recently released India monthly report.
“Taking action could include removing a piece of content from Facebook or Instagram or covering photos or videos that may be disturbing to some audiences with a warning,” Meta’s report said.
Under the IT rules that came into effect in May last year, large digital platforms (with over five million users) have to publish monthly compliance reports detailing the complaints received and the action taken. The reports also include details of content removed or disabled through proactive monitoring using automated tools.
Microblogging platform Twitter's India Transparency report of June 2022 reveals that it received over 1,500 complaints in the country through its local grievance channel between April 26 and May 25, 2022.
“In addition to the above, we processed 115 grievances which were appealing Twitter account suspensions. These were all resolved and the appropriate responses were sent,” Twitter’s report said. “We have not overturned any of the account suspensions based on the specifics of the situation, ergo, all of the reported accounts remain suspended,” it added.
More than 46,500 accounts were suspended for violating guidelines, through proactive monitoring, the Twitter report said, noting this data represents global actions taken, and not just those related to content from India. The government has issued a notice to Twitter to comply with all its past orders by July 4, failing which it may lose its intermediary status, which means it will be liable for all the comments posted on its platform.
Meta-owned WhatsApp banned over 19 lakh Indian accounts in May, on the basis of complaints received from users via its grievance channel and through its own mechanism to prevent and detect violations, according to the monthly report recently published by the messaging platform.
Meanwhile, in the case of Facebook, Meta's latest report published on June 30 showed that of the 17.5 million actioned pieces, 3.7 million were in the violent and graphic content category, 2.6 million in the adult nudity and sexual activity category, while 9.3 million pertained to spam.
Some of the other categories under which content was ‘actioned’ included bullying and harassment (294,500), suicide and self-injury (482,000), dangerous organisations and individuals – terrorism (106,500), and dangerous organisations and individuals – organised hate (4,300).
Meta’s report contains information for a period of 31 days on actions taken against violating content on Facebook and Instagram for content created by users in India and proactive detection rates, as well as information on grievances received from users in the country via the grievance mechanisms.
For Facebook, it said, “Between 1st and 31st May, we received 835 reports through the Indian grievance mechanism, and we responded to 100 per cent of these 835 reports.”
“Of these incoming reports, we provided tools for users to resolve their issues in 564 cases. These include pre-established channels to report content for specific violations, self-remediation flows where they can download their data, avenues to address account hacked issues etc.”
For Instagram, in May, 13,869 reports were received through the Indian grievance mechanism, and the platform responded to 100 per cent of the reports. Of these incoming reports, it provided tools for users to resolve their issues in 4,693 cases. “Of the other 9173 reports where specialised review was needed, we reviewed content as per our policies, and we took action on 5770 reports in total,” it added.
The government is in the midst of finalising new social media rules that propose to arm users with a grievance appeal mechanism against arbitrary content moderation, inaction, or takedown decisions of big tech companies.
The IT ministry last month circulated draft rules that propose a government panel to hear user appeals against inaction on complaints made, or against content-related decisions taken by grievance officers of social media platforms.
At present, “there is no appellate mechanism provided by intermediaries nor is there any credible self-regulatory mechanism in place”, the IT ministry had said.
Big social media platforms have drawn flak in the past over hate speech, misinformation and fake news circulating on their platforms. Concerns have also been raised about digital platforms acting arbitrarily in pulling down content, and ‘de-platforming’ users. The government had last year notified IT rules to make digital intermediaries more accountable and responsible for content hosted on their platforms.