Hi! 👋 Welcome to The Big Y!
Amazon isn’t the only company walking the line between safety and privacy. Apple has launched a new feature that has drawn a level of criticism we normally don’t see for Apple, which regards itself as the leader in tech privacy.
The new feature scans photos uploaded to iCloud for illegal child sexual abuse material. Apple says it will only scan for images that have been identified by the National Center for Missing and Exploited Children. Many other companies already scan users’ material in the cloud, but the difference here is that Apple is doing the scanning on the device itself.
Critics say this opens the door for governments or other actors to gain access to all material on Apple devices. While Apple maintains that it will not scan for anything else on the phone or give anyone access, the capability now exists, and future policy changes (via legislation or the courts) could demand additional access.
Combating child sexual abuse is an obvious and important benefit of this new feature. But again, we need to weigh on-device content access (privacy) against those benefits (safety). The looming question is always: who determines our privacy and safety standards? Should it continue to be the companies themselves?
In other news, we saw another example of the importance of careful messaging in the AI world. In April, Nvidia released a nearly 2-hour keynote from its CEO, then recently followed up with a blog post announcing that the keynote had been computer-generated, which impressed just about everyone. But Nvidia then issued a small clarification: only 14 seconds of the keynote were actually computer-generated, which is not quite so impressive.
Thanks for reading! Share this with a friend if you think they'd like it too. Have a great week! 😁
🎙 The Big Y Podcast: Listen on Spotify, Apple Podcasts, Stitcher, Substack