Apple said Thursday it will implement a system that checks photos on iPhones in the United States before they are uploaded to its iCloud storage service, to ensure the uploads don’t match known child sexual abuse images.
Once enough matches are detected to guard against false positives, the account will be flagged for human review and the user reported to law enforcement, Apple said. The company said the system is designed to reduce the false-positive rate to one in a trillion.
Apple’s new system aims to respond to law enforcement requests to combat child sexual abuse while respecting the privacy and security practices that are a core tenet of the company’s brand. But some privacy advocates said the system could open the door to monitoring political speech or other content on iPhones.
Most other major technology providers — including Alphabet’s Google, Facebook and Microsoft — already match images against a database of known child sexual abuse images.
“With so many people using Apple products, these new security measures have life-saving potential for children who are enticed online and whose horrific images are spread in child sexual abuse material,” said John Clark, director of the National Center for Missing & Exploited Children, in a statement. “The reality is that privacy and child protection can coexist.”
Here’s how Apple’s system works. Law enforcement officials maintain a database of known child sexual abuse images and translate those images into “hashes” — numerical codes that positively identify the images but cannot be used to reconstruct them.
Apple implemented that database using a technology called “NeuralHash,” designed to also capture edited images that resemble the originals. That database is stored on the iPhone itself.
When a user uploads an image to Apple’s iCloud storage service, the iPhone creates a hash of the image to be uploaded and compares it to the database.
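The matching flow described above can be sketched as follows. This is an illustrative approximation only: Apple’s actual NeuralHash is a proprietary perceptual hash that tolerates edits, whereas this sketch uses an ordinary SHA-256 digest, which matches only byte-identical files. All function names and sample data here are hypothetical.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for NeuralHash: a cryptographic digest of the raw bytes.
    # The real system's perceptual hash would also match edited copies.
    return hashlib.sha256(image_bytes).hexdigest()

# On-device database of hashes of known abuse images (illustrative values).
known_hashes = {
    image_hash(b"known-image-1"),
    image_hash(b"known-image-2"),
}

def check_before_upload(image_bytes: bytes) -> bool:
    """Hash the photo on the device and compare it against the database
    before it is sent to cloud storage."""
    return image_hash(image_bytes) in known_hashes

print(check_before_upload(b"known-image-1"))  # a known image matches
print(check_before_upload(b"holiday-photo"))  # an ordinary photo does not
```

Note that only the hash comparison happens on the device; in the system as described, a single match is not enough — a threshold number of matches must accumulate before human review is triggered.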
Photos stored only on the phone won’t be checked, Apple said, and human review before an account is reported to law enforcement is meant to ensure the matches are genuine before the account is suspended.
Apple said users who believe their account has been suspended in error can appeal to have it reinstated.
The Financial Times previously reported on some aspects of the program.
One feature that sets Apple’s system apart is that it checks photos stored on phones before uploading, rather than checking the photos after they arrive on the company’s servers.
On Twitter, some privacy and security experts expressed concern that the system could eventually be expanded to scan phones more generally for banned content or political statements.
Apple has “sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content,” warned Matthew Green, a security researcher at Johns Hopkins University.
“This will break the dam – governments will demand it of everyone.”
Other privacy researchers, such as India McKinney and Erica Portnoy of the Electronic Frontier Foundation, wrote in a blog post that it may be impossible for outside researchers to verify Apple’s promise to scan only a small set of on-device content.
The move is “a shocking sea change for users who have relied on the company’s leadership in privacy and security,” the pair wrote.
“In the end, even a thoroughly documented, carefully thought-out and narrowly defined backdoor is still a backdoor,” wrote McKinney and Portnoy.