Summary: Apple has developed technology to detect and combat child abuse and exploitation on its devices. While the initiative is aimed at protecting children, it has raised concerns about privacy infringement. The new feature, set to be included in upcoming software updates, scans for images associated with child exploitation by converting them into unique codes (hashes) and comparing those codes against a database maintained by the National Center for Missing & Exploited Children. Privacy advocates worry about potential misuse of the technology and its impact on user privacy.
Frequently Asked Questions
1. Can Apple’s child abuse detection system be turned into a surveillance tool against dissidents?
Privacy experts have raised concerns that Apple’s tools could be repurposed for surveillance. Apple has built several safeguards into the system to prevent this. It does not transmit or inspect the photos themselves; instead, it compares hash codes derived from them against a database of known material. Furthermore, that hash database ships on the device itself rather than being fetched over the internet, which makes it easier for security researchers to audit.
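To make the hash-comparison idea concrete, here is a minimal sketch in Python. It uses an ordinary cryptographic hash (SHA-256) purely as a stand-in for Apple's perceptual NeuralHash, and the database contents, file paths, and function names are hypothetical; the real system additionally blinds the on-device database and uses cryptographic protocols so that match results are not revealed on the device itself.

```python
import hashlib
from pathlib import Path

# Hypothetical on-device database of fingerprints of known material.
# In the real system this is derived from NCMEC data and stored in a
# blinded form, not as plain hex strings.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def image_fingerprint(path: Path) -> str:
    """Return a fingerprint of the image file.

    SHA-256 of the raw bytes is used here only for illustration; Apple's
    system uses NeuralHash, a perceptual hash that survives resizing and
    re-encoding, which a byte-level hash does not.
    """
    return hashlib.sha256(path.read_bytes()).hexdigest()


def matches_known_database(path: Path) -> bool:
    """Check whether an image's fingerprint appears in the local database."""
    return image_fingerprint(path) in KNOWN_HASHES


if __name__ == "__main__":
    # Hypothetical photo directory, standing in for a device's photo library.
    for photo in Path("photos").glob("*.jpg"):
        if matches_known_database(photo):
            print(f"{photo.name}: matched a known database entry")
```

The key point the sketch illustrates is that only fingerprints are compared; the photos themselves are never sent anywhere for matching.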
2. Will Apple have access to my personal photos and messages?
No, Apple’s system is designed to prioritize user privacy. The photo scanning feature applies only to images stored in Apple’s iCloud Photo Library, and the probability of a false positive is extremely low. Apple performs a human review before reporting any flagged account to the National Center for Missing & Exploited Children. Additionally, Apple’s separate feature for detecting explicit images in text messages runs on the device and does not give Apple access to the contents of the messages themselves.
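The sketch below shows, under stated assumptions, the kind of threshold-and-review flow the answer describes: an account is surfaced for a human reviewer only after the number of matched images crosses a threshold, and nothing is reported automatically. The threshold value, class names, and review step are illustrative assumptions, not Apple's actual implementation.

```python
from dataclasses import dataclass, field

# Illustrative threshold: an account is surfaced for human review only after
# this many matches, keeping the chance of a false alarm extremely low.
# The exact value and the cryptographic machinery around it are omitted here.
MATCH_THRESHOLD = 30


@dataclass
class AccountMatchState:
    """Tracks hypothetical match events for a single account."""
    account_id: str
    matched_images: list[str] = field(default_factory=list)

    def record_match(self, image_id: str) -> None:
        self.matched_images.append(image_id)

    def needs_human_review(self) -> bool:
        return len(self.matched_images) >= MATCH_THRESHOLD


def maybe_escalate(state: AccountMatchState) -> None:
    """Queue an account for human review only after the threshold is crossed.

    A report would be filed only if a human reviewer confirms the matches;
    nothing is sent automatically in this sketch.
    """
    if state.needs_human_review():
        print(f"Account {state.account_id}: queued for human review "
              f"({len(state.matched_images)} matches)")
```

The design choice worth noting is that a single match does nothing on its own; only an accumulation of matches, followed by a human confirmation, leads to any report.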