In an effort to protect its teen users, Instagram has added a feature that prevents adults from sending messages to users under 18 who do not follow them. If an adult tries to message such a teenager on the platform, they will see a notification that sending a direct message (DM) to that user is not an option.
The photo-sharing app also said the move is part of Facebook’s larger goal to make its platform end-to-end encrypted and secure for younger users. As a result, the feature will also be rolled out in Facebook’s Messenger app.
Instagram has faced criticism in the past for not doing enough to stop child abuse on its platform. The app did not ask new users for their age at sign-up, while its parent company Facebook had an entry barrier for young children. According to the US-based non-profit National Center for Missing and Exploited Children (NCMEC), more than 20 million child abuse incidents were reported on Facebook's platforms in 2020.
About a year ago, the app set an age requirement for its new users. Before that age-gating barrier was in place, new users could join Instagram without being asked their age. This barrier-less approach put the app within the ambit of child protection laws in the US, particularly the Children's Online Privacy Protection Act (COPPA). The primary purpose of the law is to give parents control over the personally identifiable information (PII) of their young children. Violating COPPA can cost approximately $43,792 per violation.
With the help of AI
Even after applying the age filter for new registrations, the platform continues to face a persistent issue of incorrect or inflated age data, which lets young users sign up and stay on the network. Instagram therefore says it is turning to machine learning to flag potentially inappropriate interactions, strengthen privacy features for minors, and send teens real-time safety notices within DMs. If a conversation is flagged as inappropriate, the teen can choose to end it, or block or report the adult.
The platform also said it would prevent potentially harmful adults from appearing in teenage users' 'Suggested Users' lists and from surfacing their content in the Reels and Explore sections, the company wrote. Facebook will also prompt teens with public profiles to switch their accounts to private.
Using artificial intelligence (AI), Facebook says it will also verify the date of birth entered by users and can block those found to be under 13 years of age. The social network will also make it harder for adults who display potentially harmful behaviour, such as initiating conversations with teens, to interact with young users.
It is unclear how Facebook will use AI to determine people's actual ages. The company is being sued in Illinois for collecting people's biometric data, including facial recognition scans, without their consent.