We’ve all seen it on police procedurals. The detectives are trying to find a criminal, but all they have are grainy images of the suspect, so they just "sharpen" the image, and like magic a photo of a person becomes clear enough to identify them.
I said it’s like magic because it is. There’s currently no way to take a grainy image and make it sharper while also keeping it accurate.
The reason the photo is grainy is that there’s not enough information. There are several ways you might end up with a grainy photo: there might not have been enough light to clearly make out all aspects of the scene, the camera that took the photo might not have had a very high resolution, the photo might have been taken using digital zoom, or the photo itself may have been downscaled from the original.
When you use the camera on your phone, you’re taking a photo that will be saved with a certain number of pixels. Most phones these days save 12-megapixel (that’s 12 million pixels) photos.
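A megapixel count is just the photo’s width times its height, in pixels. A minimal sketch of that arithmetic, assuming a common 4000 × 3000 sensor resolution (the exact dimensions vary by phone):

```python
# Megapixels = width in pixels * height in pixels, divided by one million.
# 4000 x 3000 is an assumed resolution here; actual phone sensors vary.
width, height = 4000, 3000
megapixels = width * height / 1_000_000
print(megapixels)  # 12.0
```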
Phones with more than one lens have the ability to do both optical and digital zoom. Optical zoom switches to a different lens; the optics of the camera change to zoom further in. You can also see this happen on a camera with an expensive lens: the lens itself will extend further out to zoom in on the subject. Digital zoom is different: it simply crops the photo you’re about to take.
I know that 12 million pixels sounds like a lot, but it doesn’t take much digital zoom to lose half of the pixels that would be used for the photo. Think of it like cropping a photo and then blowing the smaller cropped version back up to the original size. You now have far fewer pixels to fill the same amount of space. That’s why when you zoom in really far with digital zoom, the photo ends up looking rather grainy: there just aren’t enough pixels available anymore to make the image appear sharp.
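You can see how quickly the pixels disappear with a little arithmetic. Since digital zoom crops to the central 1/zoom of each dimension, the pixel count falls with the square of the zoom factor. A sketch, again assuming a 4000 × 3000 (12-megapixel) photo:

```python
def pixels_after_zoom(width, height, zoom):
    # Digital zoom keeps only the central 1/zoom of each dimension,
    # so the remaining pixel count shrinks by zoom squared.
    return int(width / zoom) * int(height / zoom)

full = pixels_after_zoom(4000, 3000, 1)    # 12,000,000 pixels
mild = pixels_after_zoom(4000, 3000, 1.5)  # about 5.3 million
```

Even a modest 1.5x digital zoom throws away more than half of the original 12 million pixels, which is why the blown-up result looks grainy.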
Researchers have decided to try to tackle this problem, but the results aren’t what you might think (or hope). Researchers at Duke University have used artificial intelligence to take grainy photos and turn them into photos of a realistic person. The results are impressive: they took photos so grainy that the eyes were represented by 2 pixels (two squares that are slightly different colors) and turned them into a realistic person. It’s important to note that this is just a representation of a realistic person, not necessarily the exact person from the grainy photo.
But the less grainy the photo, the more accurate the AI gets. The authors took their own portrait and downscaled it (though not as badly as the previous example), then used their AI to recreate an image that bears a passing resemblance to the original photo.
That last part is important because, realistically, no one is going to try to figure out what a person looks like from a photo so grainy that the eyes are represented by two squares that differ slightly in color. But the ability to sharpen slightly grainy photos into something that looks like the original would be useful for lots of people. If you took a photo of a loved one with an old phone that didn’t have a good camera, there’s a chance you could sharpen it with this AI.
As the world of photo AI improves, my hope is that we’ll have apps available that we can use to make any photo we take (or took a long time ago) look better.