New feature allows children to report inappropriate content directly to Apple

A new feature rolled out as part of the iOS 18.2 beta allows children in Australia to report inappropriate content directly to Apple.

The feature extends Apple's safety measures included in iOS 17, which automatically detect images and videos containing nudity in iMessage, AirDrop, FaceTime, and Photos.

Previously, when the feature was triggered, a pair of intervention popups would appear, explaining how to contact authorities and instructing the child to alert a parent or guardian.

Now, when nudity is detected, a new popup appears. Users can report the images and videos directly to Apple, which, in turn, can send the information to the authorities.

When the warning appears, a user's device will prepare a report including any offensive material, messages sent immediately before or after the material, and contact information for both accounts. Users will be given the option to fill out a form describing what happened.
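
For readers curious what such a report might look like as data, here is a minimal Swift sketch of the payload the article describes. Every type and field name below is a hypothetical illustration; Apple has not published the actual report format or API.

```swift
import Foundation

// Hypothetical sketch only: these type and field names are not Apple's API.
// They illustrate the contents of the report described in the article.
struct SafetyReport: Codable {
    let flaggedAttachments: [Data]     // the offensive images or videos
    let surroundingMessages: [String]  // messages sent immediately before or after
    let senderAccount: String          // contact information for the sending account
    let recipientAccount: String       // contact information for the receiving account
    let userDescription: String?       // optional free-form description of what happened
    let createdAt: Date
}

// Assembling a report the way the article describes, before it is submitted for review.
let report = SafetyReport(
    flaggedAttachments: [],
    surroundingMessages: ["message sent just before", "message sent just after"],
    senderAccount: "sender-account-id",
    recipientAccount: "recipient-account-id",
    userDescription: "Received an unsolicited image.",
    createdAt: Date()
)

if let encoded = try? JSONEncoder().encode(report) {
    print("Prepared report of \(encoded.count) bytes")
}
```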

When Apple receives the report, the content is reviewed. From there, the company can take action on the offending account, including disabling the user's ability to send messages over iMessage and reporting the issue to law enforcement.

The feature is currently limited to the iOS 18.2 beta in Australia but is expected to roll out globally later.

As The Guardian points out, Apple likely picked Australia because the country will require companies to police child abuse and terror content on cloud messaging services by the end of 2024.

Apple warned that the draft code would not protect end-to-end encryption, leaving users' communications vulnerable to mass surveillance. Apple has been on record against these kinds of efforts since late 2018.

Apple has come under fire for the way it handles Child Sexual Abuse Material (CSAM) on its platforms. Initially, the company was accused of not taking CSAM protection seriously, which angered many watchdog groups.

In 2021, Apple planned to roll out CSAM protections that would scan a user's iCloud Photos for known CSAM images. If found, the image would be reviewed by Apple and then a report would be sent to the National Center for Missing & Exploited Children (NCMEC).

Many users were outraged at the idea of Apple scanning their private images and videos and feared false detections. Apple eventually abandoned the plan, saying that scanning the data "would create new threat vectors for data thieves to find and exploit."

In 2024, the UK's National Society for the Prevention of Cruelty to Children (NSPCC) said it found more cases of abuse images on Apple platforms in the UK than Apple reported globally.
