
Apple’s upcoming system to detect child sexual abuse material (CSAM) in iCloud photos has sparked alarm that it will make mistakes or be abused for wide-scale censorship and surveillance. In response, the company published a 14-page security threat model document defending its safeguards, which defines a matching threshold of at least 30 images.
The threshold is important because there is always a possibility Apple’s CSAM system will mistakenly flag an innocuous image uploaded to iCloud as child sexual abuse material. The company settled on 30 images, describing the figure as “a drastic safety margin reflecting a worst-case assumption about real-world performance.” The number may change over time, but Apple says the threshold will not be lowered.
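The gating logic described above can be sketched as a simple policy check. This is a deliberately minimal illustration with hypothetical names; Apple’s actual system uses threshold secret sharing, so its servers learn nothing at all about an account until the threshold is crossed, rather than maintaining a plain counter like this:

```python
# Hypothetical sketch of a threshold-gated escalation policy.
# Apple's real design uses threshold secret sharing, not a counter;
# this only illustrates the "nothing happens below 30 matches" rule.

MATCH_THRESHOLD = 30  # figure from Apple's published threat model

def should_escalate_for_review(match_count: int) -> bool:
    """An account is escalated to human review only once the number
    of matched images reaches the threshold."""
    return match_count >= MATCH_THRESHOLD
```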
Even then, Apple isn’t going to automatically report the account to the National Center for Missing and Exploited Children, which works with law enforcement to stop child predators. First, the company’s team of human reviewers will examine the flagged pictures to confirm the imagery is child sexual abuse material.
Apple is going to rely on not one but at least two child safety organizations to supply the hashes of known CSAM to look out for, and those organizations must operate in separate jurisdictions under different governments. Apple created this safeguard in case any single government tries to influence a particular child safety organization.
Any perceptual hashes appearing in only one participating child safety organization’s database, or only in databases from multiple agencies within a single sovereign jurisdiction, are discarded by this process and not included in the encrypted CSAM database that Apple includes in the operating system.
Apple Statement
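The cross-jurisdiction rule Apple describes amounts to a set intersection: a hash makes it into the shipped database only if organizations in at least two distinct jurisdictions independently vouch for it. A minimal sketch, with hypothetical function and parameter names:

```python
# Hypothetical sketch of the cross-jurisdiction intersection rule:
# a hash is kept only if it appears in databases supplied by
# organizations in at least two distinct sovereign jurisdictions.

def build_shipped_database(sources):
    """sources: list of (jurisdiction, set_of_hashes) pairs,
    one entry per participating child safety organization."""
    jurisdictions_per_hash = {}
    for jurisdiction, hashes in sources:
        for h in hashes:
            jurisdictions_per_hash.setdefault(h, set()).add(jurisdiction)
    # Discard hashes vouched for by fewer than two jurisdictions.
    return {h for h, js in jurisdictions_per_hash.items() if len(js) >= 2}
```

Note that two organizations in the same jurisdiction are not enough: the rule counts distinct jurisdictions, not distinct agencies.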
Apple has also stressed that its detection system isn’t “scanning” photos on customer iPhones. Instead, it uses on-device algorithms to compare hashes, or digital fingerprints, of known child sexual abuse images against photos being uploaded to iCloud. The company never learns what the photos sent to iCloud contain unless the 30-image threshold is crossed.
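The “fingerprints, not pixels” idea can be illustrated with a greatly simplified sketch. All names here are hypothetical, and the real system differs in two important ways: Apple uses a perceptual hash (“NeuralHash”) that tolerates resizing and re-encoding, where a cryptographic hash like SHA-256 does not, and it wraps the comparison in private set intersection so neither the device nor the server learns individual match results:

```python
import hashlib

# Greatly simplified, hypothetical sketch of hash-based matching.
# Apple's real system uses a perceptual NeuralHash plus private set
# intersection; a plain SHA-256 lookup is shown only to illustrate
# comparing fingerprints rather than inspecting image content.

def fingerprint(image_bytes: bytes) -> str:
    """Digital fingerprint of an image (exact-match only here)."""
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(uploaded_images, known_bad_hashes):
    """Count uploaded images whose fingerprint is in the known set."""
    return sum(1 for img in uploaded_images
               if fingerprint(img) in known_bad_hashes)
```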
Despite Apple’s attempts to defend the detection system, many IT security researchers remain leery that the same technology could be abused in the future.