Apple faces backlash against new scanning tool that detects child sex abuse images
Following employee and policymaker backlash, Apple has announced that it is still refining its plans to scan iPhones to detect images of children being sexually abused.
Last week, Apple announced that it would introduce a feature for iPhones that would automatically scan the device for photos depicting child abuse. These scans would take place before images are uploaded to iCloud. Additionally, Apple announced that if an explicit image was found, it would be reported to the US National Center for Missing and Exploited Children (NCMEC). NCMEC is the only organisation with which Apple has made this arrangement.
The announcement caused immense backlash from major tech policy groups as well as from Apple’s own employees, both of whom expressed concern that Apple was risking its reputation for protecting consumer privacy. An anonymous source also wrote to Reuters last week detailing those internal concerns: employees had taken to an internal Slack channel to debate the scanner, posting more than 800 messages.
Craig Federighi, the company’s senior VP of software engineering, defended the new system in an interview with The Wall Street Journal, emphasising that private information will be protected with “multiple levels of auditability.” He said: “We, who consider ourselves absolutely leading on privacy, see what we are doing here as an advancement of the state of the art in privacy, as enabling a more private world.”
In a video of the interview, Federighi said, “What we’re doing is we’re finding illegal images of child pornography stored in iCloud. If you look at any other cloud service, they currently are scanning photos by looking at every single photo in the cloud and analysing it. We wanted to be able to spot such photos in the cloud without looking at people’s photos and came up with an architecture to do this.” He emphasised that the system is not a “backdoor” that will break encryption, and that it is “much more private than anything that’s been done in this area before.” According to Federighi, the system was developed “in the most privacy-protecting way we can imagine and in the most auditable and verifiable way possible.”
Apple did not initially release further details on the scan, such as how many matched images would be required before an account is flagged for review or reported to the authorities. On Friday, however, it said it would start with a threshold of 30 matches and lower that number as the scanning system improves.
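To illustrate the threshold Apple described, the following Swift sketch shows how a count of hash matches could gate a review flag. It is purely illustrative: the perceptualHash function, the shouldFlagForReview helper, and the plain string hashes are assumptions for this example, not Apple’s actual on-device matching system, which relies on cryptographic techniques not detailed here.

```swift
import Foundation

// Illustrative sketch only; not Apple's implementation. It assumes a
// hypothetical perceptual-hash function and a known set of image hashes,
// and shows how a match threshold (reported to start at 30) would gate
// any flag for human review.

let reviewThreshold = 30

/// Hypothetical stand-in for a perceptual hash of a photo queued for upload.
func perceptualHash(of imageData: Data) -> String {
    // Placeholder: a real system would use a dedicated perceptual-hashing model.
    return imageData.base64EncodedString()
}

/// Counts how many outgoing photos match the known-hash set and flags the
/// account for review only once the threshold is reached.
func shouldFlagForReview(photos: [Data], knownHashes: Set<String>) -> Bool {
    let matchCount = photos.filter { knownHashes.contains(perceptualHash(of: $0)) }.count
    return matchCount >= reviewThreshold
}
```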
“It’s really clear a lot of messages got jumbled pretty badly in terms of how things were understood,” Federighi told The Wall Street Journal. “We wish that this would’ve come out a little more clearly for everyone because we feel very positive and strongly about what we’re doing.”
This scanning service is planned to launch in the US. Though its focus is on detecting explicit images of minors, policymakers and critics alike are concerned that governments could pressure Apple to scan for other content, such as prohibited political imagery.