Apple delays plans to scan cloud uploads for child sexual abuse images

Apple will delay its plans to scan user images for child sexual abuse material (CSAM) before they are uploaded to the cloud, the company says, following a backlash from privacy groups.

The company’s proposal, first revealed in August, relied on a perceptual hashing technique it had developed to compare photos with known images of child abuse when users opted to upload them to the cloud. If the company detected enough matches, it would manually review the images before flagging the user account to law enforcement.
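
Conceptually, a perceptual hash maps an image to a short fingerprint that stays stable under small edits such as resizing or re-encoding, so uploads can be compared against fingerprints of known abuse imagery without comparing the photos themselves. The sketch below is illustrative only: Apple never published its algorithm or matching protocol, so this uses the open-source Python `imagehash` library’s pHash as a stand-in, and the thresholds and names (`MATCH_THRESHOLD`, `REVIEW_THRESHOLD`, `known_hashes`) are hypothetical.

```python
# Illustrative sketch only. Apple's own perceptual hash and matching protocol
# were not published; this stand-in uses the open-source `imagehash` library
# (pip install pillow imagehash). All thresholds and names are hypothetical.
from PIL import Image
import imagehash

MATCH_THRESHOLD = 4    # hypothetical: max Hamming distance counted as a "match"
REVIEW_THRESHOLD = 30  # hypothetical: matches needed before any human review

def hash_image(path: str) -> imagehash.ImageHash:
    """Fingerprint an image in a way that tolerates resizing and re-encoding."""
    return imagehash.phash(Image.open(path))

def count_matches(upload_paths, known_hashes) -> int:
    """Count uploads whose fingerprint lands near any known fingerprint."""
    matches = 0
    for path in upload_paths:
        h = hash_image(path)
        # Subtracting two ImageHash values gives their Hamming distance in bits.
        if any(h - known <= MATCH_THRESHOLD for known in known_hashes):
            matches += 1
    return matches

# Only once enough matches accumulated would an account be escalated for review:
# needs_review = count_matches(uploads, known_hashes) >= REVIEW_THRESHOLD
```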

Now, Apple says it is pausing the implementation of the project. “Last month we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of child sexual abuse material,” the company said in a statement.

“Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

As well as the CSAM scanning, Apple announced, and has now paused, a second set of updates that would have used an AI system to identify explicit images sent or received through the company’s Messages app by users under 18 and, where those users were under 13 and had their phones managed by family members, warn a parent or guardian.

The two policies were announced in an unusual fashion for the company, leaking through academic channels before being confirmed in a dry press release posted directly to Apple’s website. Internally, some at the company blame the manner of the launch for part of the hostility to the plans, saying that the two proposals were wrongly conflated and arguing that Apple missed its best chance to properly sell the benefits of the changes.

Others, however, were more critical. “The backlash should be no surprise,” said Jason Kelley of American digital rights group EFF. “What Apple intends to do will create an enormous danger to our privacy and security.

“It will give ammunition to authoritarian governments wishing to expand surveillance, and because the company has compromised security and privacy at the behest of governments in the past, it’s not a stretch to think they may do so again.”

While privacy activists celebrated the decision to pause the scanning plans, child protection groups reacted with dismay. “This is an incredibly disappointing delay,” said Andy Burrows, the NSPCC’s Head of Child Safety Online Policy. “Apple were on track to roll out really significant technological solutions that would undeniably make a big difference in keeping children safe from abuse online and could have set an industry standard.

“They sought to adopt a proportionate approach that scanned for child abuse images in a privacy preserving way, and that balanced user safety and privacy,” Burrows added. “We hope Apple will consider standing their ground instead of delaying important child protection measures in the face of criticism.”

Apple’s plans were dealt a significant blow two weeks after they were announced, when security researchers managed to reverse engineer the “perceptual hashing” algorithm the company intended to use to identify known CSAM being uploaded. Within days, they had managed to create vastly different images that produced the same mathematical output, implying that a malicious attacker could craft a nondescript image that would nonetheless trigger Apple’s alarms.

Worse, others managed to do the reverse: change the mathematical output of an image without changing how it looks at all. Such a flaw could undo the entire benefit of the system, since it implies it would be trivial to alter whole libraries of images to make them invisible to Apple’s scanning.
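
To make the two failure modes concrete: a collision is a pair of visually unrelated images whose fingerprints match, while an evasion is a flagged image perturbed just enough that its fingerprint no longer matches. The sketch below again uses the open-source `imagehash` pHash rather than the reverse-engineered algorithm the researchers actually targeted, and the file names are hypothetical.

```python
# Sketch of the two attacks described above, using the open-source `imagehash`
# pHash as a stand-in for Apple's algorithm. File names are hypothetical.
from PIL import Image
import imagehash

def hashes_match(path_a: str, path_b: str, threshold: int = 4) -> bool:
    """True if two images' perceptual hashes are within `threshold` bits."""
    distance = imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))
    return distance <= threshold

# Collision: an innocuous-looking image crafted to share a flagged image's hash,
# which would falsely trigger the system.
# hashes_match("crafted_innocuous.png", "flagged_reference.png")  # -> True

# Evasion: a flagged image subtly perturbed so its hash changes,
# which would let it slip past the scanning entirely.
# hashes_match("flagged_original.png", "flagged_perturbed.png")   # -> False
```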
