Instagram strengthens safety measures for accounts featuring children

Meta is implementing stricter safety measures for Instagram accounts managed by adults that predominantly showcase children. These accounts will have the strictest message settings automatically enabled to prevent unwanted communication. The platform will also activate the “Hidden Words” feature to filter out offensive comments. These changes aim to protect children from potential exploitation on the platform.

The company is also introducing new safety features for teen accounts to help them navigate the platform more safely.

How Instagram will protect accounts featuring children

Accounts affected by these changes include those run by parents, guardians, or talent managers who regularly share content featuring children. Meta aims to mitigate potential abuse by preventing suspicious adults, such as those previously blocked by teens, from discovering these accounts.

The platform will also avoid recommending potentially suspicious adults to accounts that primarily feature children, making it harder for them to connect through Instagram Search. These measures are designed to address concerns about the safety and well-being of children whose images are widely shared online.

New safety features coming to teen accounts

Alongside the changes for accounts featuring children, Meta is adding safety features to Direct Messages (DMs) in teen accounts. Teens will now see safety tips prompting them to carefully examine profiles and be mindful of shared content.

Instagram will display the month and year an account joined the platform at the top of new chats. Moreover, a new block and report option will allow users to perform both actions simultaneously, further streamlining the process of addressing unwanted interactions.

Meta’s ongoing efforts to improve online safety

These features complement existing safety notices that remind users to exercise caution in private messages and to report any uncomfortable content. Meta reports that teens are actively using these tools: in June alone, after seeing safety notices, they blocked accounts one million times and submitted another million reports.

The company also highlighted the success of its nudity protection filter, which 99% of users, including teens, have kept enabled. Last month, over 40% of blurred images received in DMs were never unblurred, suggesting that recipients chose not to view content they considered inappropriate or unwanted.

The latest updates represent Meta’s continued efforts to create a safer online environment, particularly for vulnerable young users. By implementing stricter settings and empowering teens with additional safety tools, Instagram seeks to reduce the risk of exploitation and harmful interactions on its platform.
