Monday, Aug 09, 2021, 18:00 Software

Apple's New Measures: Photo Analysis For iCloud, iMessage Photo Filters, Adapted Search

Apple has just announced a whole set of new measures intended to protect children online and to curb the spread and trade of illegal images. First off: contrary to reports from yesterday, Apple is not scanning photos in users' local camera rolls independently of iCloud. Under the measures that were actually introduced, a comparison of image hashes against a database of known material does take place on the device, but only for photos about to be uploaded to iCloud. The aim is to further the fight against CSAM (Child Sexual Abuse Material): algorithms compare known illegal material with photos headed for users' iCloud libraries.
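
Apple's matching technology is not public API; its documentation describes a perceptual hash ("NeuralHash") combined with private set intersection. As a rough, purely illustrative sketch of the flow (with SHA-256 standing in for the real perceptual hash, and all type names invented for this example):

```swift
import CryptoKit
import Foundation

// Hypothetical sketch of the on-device check described above. Apple's real
// pipeline uses a perceptual hash ("NeuralHash") and private set
// intersection, neither of which is public; SHA-256 stands in here purely
// to illustrate the flow, not the actual matching technology.
struct KnownImageDatabase {
    // Hashes of known illegal material, shipped with the OS (illustrative).
    let knownHashes: Set<Data>

    func matches(_ imageData: Data) -> Bool {
        let digest = SHA256.hash(data: imageData)
        return knownHashes.contains(Data(digest))
    }
}

// The comparison only runs for photos queued for iCloud upload,
// never for the rest of the local camera roll.
func checkBeforeUpload(_ queuedPhotos: [Data],
                       against database: KnownImageDatabase) -> [(photo: Data, matched: Bool)] {
    queuedPhotos.map { (photo: $0, matched: database.matches($0)) }
}
```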


From Apple's very own documentation.


The Goal
With the new measures, Apple aims to uphold its standards of privacy and data protection by not continuously scanning user data on its servers. If photos in a user's library match photos in the official database, the system stores the match result in a "safety voucher". "Threshold secret sharing" then ensures that the voucher contents cannot be read unless a certain threshold of matches has been clearly exceeded. As Apple states, unless an image is flagged as "critical", it will not be selected for external review.
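
The cryptographic details of threshold secret sharing are not public. The following is a minimal, non-cryptographic sketch of just the gating logic described above; the plain counter and all names are assumptions for illustration, not Apple's implementation:

```swift
import Foundation

// Hypothetical model of "safety vouchers" and the threshold gate. In Apple's
// real scheme, threshold secret sharing means the server cryptographically
// cannot decrypt any voucher until enough matches exist; the plain counter
// below mirrors only the control flow, not the cryptography.
struct SafetyVoucher {
    let encryptedMatchResult: Data   // opaque until the threshold is crossed
}

struct ThresholdGate {
    let threshold: Int               // Apple has not published this value
    private(set) var vouchers: [SafetyVoucher] = []

    mutating func record(_ voucher: SafetyVoucher) {
        vouchers.append(voucher)
    }

    // Vouchers become readable for review only once the threshold
    // has been clearly exceeded; below it, nothing can be inspected.
    var reviewableVouchers: [SafetyVoucher]? {
        vouchers.count > threshold ? vouchers : nil
    }
}
```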

What Threshold Does An Image Have To Meet To Become "Critical"?
Apple doesn't say which values count as "critical" but points to the high reliability of the technology used: the chance of incorrectly flagging an account is stated as one in one trillion per year. Should the system determine that material in an individual's iCloud account contains flagged child sexual abuse imagery, Apple will review the results and decide whether or not to lock the account and report the user to the National Center for Missing & Exploited Children (NCMEC). However, many criteria have to be met before an image is determined to "clearly" be in violation of iCloud's terms of service and contain illegal material. Any images flagged as "uncertain" are treated as safe.

Photo Filters In iMessage For Explicit Material
For parents whose children have access to their iCloud account or regularly use devices tied to a family account, filters can now be activated to protect children from explicit images and material in iMessage. If iMessage recognizes incoming "explicit content", it blurs and censors the material. If the child still wants to view the photo or video, a warning is displayed stating that it contains unwanted explicit material and has therefore been censored. The messenger won't prevent the child from viewing it anyway; however, a further notice states that a message will be sent to the family account's organizer (the parents) to make sure that everything is "as it should be". The same applies when children attempt to produce and send explicit content of their own, as the sketch below illustrates.
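
The on-device classifier, the blur step, and the parental notification are all Apple internals and not exposed to developers. A minimal sketch of the decision flow described above, with every identifier hypothetical, might look like this:

```swift
import Foundation

// Hypothetical sketch of the iMessage flow described above. The classifier,
// the blur step, and the parental notification are all Apple internals;
// every name here is invented for illustration only.
enum IncomingMediaAction {
    case showNormally
    case blurWithWarning(onRevealAnyway: () -> Void)
}

func handleIncomingMedia(isExplicit: Bool,
                         isChildAccount: Bool,
                         notifyFamilyOrganizer: @escaping () -> Void) -> IncomingMediaAction {
    guard isExplicit, isChildAccount else { return .showNormally }
    // The content is blurred first; if the child chooses to view it anyway,
    // a notice explains that the family organizer (parents) will be informed.
    return .blurWithWarning(onRevealAnyway: notifyFamilyOrganizer)
}
```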



Siri & Search: Measures Against CSAM
When Siri recognizes that a user would like to report child abuse or exploitative material, Siri will point them directly to the appropriate contacts for help. Additionally, Siri and web search will automatically intercept queries concerning illegal material. The user will then be informed why the searched content is illegal and harmful, and will be pointed to mental health and support resources.
