‘On Twitter, no one is above the rules’

Kathleen Reen, Senior Director of Public Policy and Philanthropy, APAC, Twitter, spoke to Morning Tidings in an email interview about the new intermediary guidelines, the micro-blogging website’s preparations for the upcoming Assembly elections, and navigating between freedom of expression and hate speech. Edited excerpts:

Recently, the Indian government notified new guidelines for social media intermediaries, which require them to identify the originator of certain messages, take down content within specific timeframes and establish a grievance redressal mechanism. What is your opinion on them, and what action are you taking?

Twitter supports a forward-looking approach to regulation that protects the open Internet, drives universal access, and promotes competition and innovation. We believe that regulation is beneficial when it protects the fundamental rights of citizens and preserves online freedom. We are studying the updated intermediary guidelines and engaging with a range of organizations and institutions affected by them. We deeply value engagement with government, civil society, activists and academic experts. We look forward to continued engagement with the Government of India and hope to strike a fair and dynamic balance between transparency, freedom of expression and privacy for all who use Twitter.

Many other countries are also considering similar rules. Your thoughts on the need to regulate social media platforms and the best way to proceed with it?

We are in a highly dynamic global regulatory environment, and discussions around privacy, online content, and self-regulation are taking place around the world. Regulating online content requires striking a careful balance between protecting against harm and preserving human rights, including freedom of expression, privacy, and procedural fairness for all.

The technology industry has, on many fronts, given its support to the self-regulation model for content moderation. Our extensive efforts to combat violent extremism are a strong example of what industry alliances and self-regulation can achieve: we are members and signatories of many coalitions and bodies, including, but not limited to, the Global Internet Forum to Counter Terrorism (GIFCT), the Aqaba Process, the Christchurch Call to Action and the Australian Taskforce to Combat Terrorist and Extreme Violent Material Online. We also invested in the Global Research Network on Terrorism and Technology (GRNTT) to develop research and policy recommendations designed to prevent terrorist exploitation of technology.

Similarly, in the case of child sexual abuse, we have strong technology alliances to stay ahead of bad-faith actors and to ensure that we are doing everything we can to remove content, facilitate investigations, and protect minors from harm, both online and offline. Our partnership with the National Center for Missing and Exploited Children (NCMEC) highlights the work done to fight child sexual abuse online. When we remove such material, we immediately report it to NCMEC, and those reports are made available to appropriate law enforcement agencies worldwide to facilitate investigations and prosecutions.

Our approach to regulation and public policy issues focuses on protecting an open Internet that is available to all and that promotes security, inclusion, diversity, competition and innovation. Together with governments, civil society, and academics around the world, we are committed to building a future Internet that people can trust, that empowers public conversation, and that is a global force for good.

What role does Twitter see for itself in the upcoming elections in India? What have you learned from your last few initiatives?

Every year on Twitter is an election year, and we are committed to providing a service that promotes free and open civic discourse. We recognize our role as an essential service where people come for reliable information. This includes knowing where, when and how to vote, learning about candidates and their platforms, and engaging in healthy civic debate and dialogue, much as people did during the 2019 Lok Sabha elections and previous Assembly elections.

We are continuously improving and adapting to achieve these goals. Drawing on insights and lessons from previous elections, at home in India as well as globally, we are implementing product, policy and enforcement updates that are critical to protecting and supporting the multilingual conversations taking place around the upcoming Assembly elections. A global cross-functional team with local, cultural and language expertise is in place, tasked with protecting the service from attempts to incite violence, abuse and threats that can trigger the risk of offline harm.

Our goal is to make it easier to find reliable information on Twitter while limiting the spread of potentially harmful and misleading content. We have prioritized our approach to dealing with misinformation based on the highest potential for harm in the context of these elections, which is why we focus on ‘synthetic and manipulated media’ and ‘civic integrity’.

For content to be labeled or removed under the ‘synthetic and manipulated media’ policy, we must have reason to believe that the media, or the context in which the media is presented, has been significantly and deceptively altered or manipulated. We will label ‘synthetic and manipulated media’ and link it to a Twitter Moment to give people additional context, and we will surface related conversations so they can make more informed decisions about the content they want to engage with or amplify. When people try to retweet a tweet labeled as ‘synthetic and manipulated media’, they will immediately see a prompt pointing them to credible information. Labeled tweets will not be recommended by Twitter’s algorithms, reducing the visibility of misleading information and encouraging people to reconsider whether they want to amplify these tweets.

Twitter, along with other social media platforms, has been the focal point of debate over bias towards certain content or accounts. How do you address these issues, especially in view of the upcoming elections?

On Twitter, no one is above the rules, and we apply our policies judiciously and fairly to everyone. Our products and policies are never developed or enforced on the basis of political ideology. We use a combination of machine learning and human review and expertise to assess reports and determine whether the reported content violates Twitter’s rules.

We take a behavior-first approach, which means that we look at how accounts behave before reviewing the content they are posting. The open nature of Twitter means that our enforcement actions are clearly visible to the public, even when we cannot disclose the private details of individual accounts that have violated our rules.

We have also worked to provide better in-app notices where we have removed tweets for breaking our rules: we communicate our actions, with additional details, to both the account that reports a tweet and the account that posted it. We continue to improve our product, policies, and processes to advance people’s trust in using Twitter.

There is also a fresh debate on how such platforms navigate between freedom of expression and hate speech. How does Twitter balance the two?

Twitter aims to serve the public conversation. We believe that public conversation is at its best when as many people as possible can participate. Participation is a function of a free and open Internet: an Internet that is global, that is not walled off, where critical or vulnerable voices are not censored, and that is safe and promotes diversity, competition and innovation.

We want to make sure that conversations on Twitter are healthy and that people feel safe expressing themselves. We do our work recognizing that freedom of expression and safety are intimately connected and can sometimes be in tension. We must ensure that all voices are heard, and we keep improving our service so that everyone feels safe participating in the public conversation. As a principle, our enforcement is judicious and fair to everyone, regardless of their political beliefs and background.

At Twitter, we use a combination of machine learning and human review and expertise to assess reports and determine whether they violate Twitter’s rules. We take a behavior-first approach, which means that we look at how accounts behave before reviewing the content they are posting. We are constantly updating our policies and seeking feedback to address evolving and emerging online behaviors.

Our most recent update to our hateful conduct policy is an example of how we have collaborated with the public and with experts to keep evolving our policies. In the case of hateful conduct, we prohibit abusive behavior that targets individuals or groups belonging to protected categories.
