Hi! 👋 Welcome to The Big Y!
While some are innovating new ways to improve image recognition, including facial recognition, new tools are also being created to help preserve our privacy. These tools make small, pixel-level changes that the human eye cannot perceive but that confuse the AI algorithm so it can no longer identify what it sees in the photo.
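To give a flavor of the idea, here's a minimal sketch (not the actual Fawkes or LowKey code) of pixel-level adversarial perturbation. It assumes a toy "recognizer" that scores an image with a simple linear formula; the same principle applies against real deep networks, just with a harder-to-compute gradient. All names and numbers here are made up for illustration.

```python
# Toy sketch of adversarial "cloaking": nudge each pixel by a tiny
# amount (eps) in the direction that lowers the recognizer's score.
# The per-pixel change stays below a typical perceptibility threshold.

EPS = 2 / 255  # a roughly imperceptible per-pixel change

def sign(v):
    return (v > 0) - (v < 0)

def score(pixels, weights):
    """Toy linear 'recognizer': higher score = more confident match."""
    return sum(p * w for p, w in zip(pixels, weights))

def cloak(pixels, weights, eps=EPS):
    """Step every pixel against the gradient (= weights for a linear
    model), clamped back into the valid [0, 1] range."""
    return [min(1.0, max(0.0, p - eps * sign(w)))
            for p, w in zip(pixels, weights)]

pixels  = [0.2, 0.8, 0.5, 0.9, 0.1, 0.4]    # toy grayscale "image"
weights = [1.5, -0.7, 0.3, 2.0, -1.2, 0.9]  # toy model parameters

cloaked = cloak(pixels, weights)
print(max(abs(c - p) for c, p in zip(cloaked, pixels)))  # tiny change
print(score(cloaked, weights) < score(pixels, weights))  # score drops
```

A human comparing the two "images" would see essentially no difference, yet the recognizer's confidence drops. Real tools do this against neural networks, which is why the perturbations have to be crafted per-model (and why retrained models can learn to shrug them off).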
One early tool, Fawkes, developed at the University of Chicago, takes the adversarial approach just described. A newer tool, LowKey, tries to make your images unlearnable, effectively teaching the AI to ignore them during training. This MIT Tech Review article dives into Fawkes and similar tools.
These tools work only until the recognition algorithms pick up on the small changes and adjust to the deceptive pixels. Since many facial recognition algorithms are constantly being retrained and improved, it is only natural that they will learn to see through these adversarial tricks. This is an example of the emerging arms race between continuously improving AI systems, one that will likely never have a winner.
While many of the larger tech companies have stopped using or selling their facial recognition services, the smaller startups working in this field worry me the most. They aren't well regulated, and they likely lack the security, privacy, and quality-assurance practices commensurate with the impact their products may have on innocent people.
If you're curious what images of you are out on the internet, there's a website (among others) that can reverse-image-search a photo of you and find other photos of you across the internet. This CNN article is a deep dive into the creepy world of these tools.
Thanks for reading! Share this with a friend if you think they'd like it too. Have a great week! 😁
🎙 The Big Y Podcast: Listen on Spotify, Apple Podcasts, Stitcher, Substack