New and impressive image upscaling
Hi! 👋 Welcome to The Big Y!
Google researchers recently announced a breakthrough in image super-resolution: using image synthesis, they can take a low-resolution image and produce a convincing high-resolution version, for example turning a 64x64 image into a 1024x1024 one.
Improving the resolution of images has a wide range of applications, from restoring old family photos to improving the quality of medical imaging systems. It can also reverse image or video compression: in gaming or video streaming, you can minimize the data sent to a device and then have the device upscale it to a higher resolution using AI. These techniques can also be useful for image classification, segmentation, and other AI image applications, potentially including privacy-protecting ones.
The big change with this new research is the move away from GANs, the predominant approach to these image synthesis problems today, to diffusion models.
Diffusion models start by taking the training data and progressively adding noise, slowly destroying the detail in the data until only noise remains. A neural network is then trained to reverse this corruption process: starting from noise, it synthesizes data step by step, filling in the blanks until a clean image emerges.
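To make the "progressively adding noise" part concrete, here's a minimal sketch of the forward noising step in NumPy. The linear beta schedule, step count, and function names are illustrative assumptions, not details from the Google paper — real diffusion models tune these carefully and pair this with a learned reverse (denoising) network.

```python
import numpy as np

def forward_diffusion(x0, t, betas):
    """Noise a clean image x0 to step t in one shot, using the closed-form
    q(x_t | x_0): x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    alpha_bar = np.cumprod(1.0 - betas)[t]  # cumulative product of (1 - beta)
    noise = np.random.randn(*x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# Hypothetical linear noise schedule over 1000 steps.
betas = np.linspace(1e-4, 0.02, 1000)

x0 = np.zeros((64, 64))                     # stand-in for a clean 64x64 image
x_early = forward_diffusion(x0, 10, betas)  # still mostly signal
x_late = forward_diffusion(x0, 999, betas)  # essentially pure noise
```

At the final step, alpha_bar is nearly zero, so the image is almost entirely noise — exactly the point at which the trained network begins the reverse process.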
By scaling up diffusion models, the Google team was able to outperform existing GAN-based methods of image synthesis.
NASA has tasked a research group at the University of Arizona with building a swarm of autonomous robots to help mine lunar resources. Autonomous mining robots could be useful both on the Moon and on Earth, but they also raise big questions about resource ownership on the Moon, along with a plethora of others.
Thanks for reading! Share this with a friend if you think they'd like it too. Have a great week! 😁