Google, MIT Create Algorithms To Improve Mobile Photography

James Marshall
August 3, 2017

Smartphone cameras are limited by their small sensors and lenses, which is why companies like Google are turning to computational photography: using algorithms and machine learning to improve your snaps.

With the technology, a photographer can see the final version of an image on the screen while they are still framing the shot on their phone, according to MIT News.

Scientists from the Massachusetts Institute of Technology (MIT) and Google have developed a new artificial intelligence system that can automatically retouch images like a professional photographer in real time, eliminating the need to edit photos after they are captured with a smartphone. In effect, users can apply professional-grade image processing to a shot before they even take it.

The work is presented in a joint paper by Google and MIT researchers, which describes an algorithm that "processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators". It can also speed up existing image processing algorithms, MIT News said. Much of the saving comes from performing the heavy image processing on a low-resolution copy of the image rather than on the full-resolution frame.
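To make that idea concrete, here is a minimal, illustrative sketch of the strategy: run an expensive image operator only on a small, downsampled copy of the frame, summarise its effect as a cheap adjustment, and apply that adjustment to the full-resolution pixels. This is not the researchers' code; the helper names, the downsampling factor and the stand-in gamma operator are assumptions made purely for illustration.

```python
import numpy as np

def downsample(image, factor=8):
    """Naive box downsampling by an integer factor (illustrative only)."""
    h, w, c = image.shape
    h2, w2 = h // factor, w // factor
    return image[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor, c).mean(axis=(1, 3))

def expensive_operator(image):
    """Stand-in for a heavy image operator (here just a simple gamma curve)."""
    return np.clip(image, 0.0, 1.0) ** 0.7

def estimate_gain(low_res_in, low_res_out, eps=1e-4):
    """Summarise the operator's effect as a cheap per-channel gain (a stand-in 'formula')."""
    return (low_res_out.mean(axis=(0, 1)) + eps) / (low_res_in.mean(axis=(0, 1)) + eps)

def apply_to_full_res(full_res, gain):
    """Apply the adjustment estimated on the small copy to every full-resolution pixel."""
    return np.clip(full_res * gain, 0.0, 1.0)

# The heavy work touches only the tiny copy; the full frame gets a cheap transform.
frame = np.random.rand(1080, 1920, 3).astype(np.float32)
small = downsample(frame)
gain = estimate_gain(small, expensive_operator(small))
result = apply_to_full_res(frame, gain)
```

The point of the sketch is simply that the expensive step never touches the full-resolution frame, which is the spirit of the approach the paper describes.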

Google has used its HDR+ algorithms to bring out more detail in light and shadow on its mobile devices since the Nexus 6. The new system relies on two clever tricks. The first is to render changes to the original image as so-called "formulae" (think of them as a layer of editing instructions) rather than as masks applied to a high-resolution image.

MIT graduate student Michaël Gharbi and his colleagues - MIT professor of electrical engineering and computer science Frédo Durand and Jiawen Chen, Jon Barron, and Sam Hasinoff of Google - designed those formulae to alter the colors of the pixels in the original image, and during training the machine-learning system is assessed on how closely the output of the formulae matches a professionally retouched version of the same photo.

The second trick is to store the formulae in a coarse grid laid over the image: each cell of the grid contains formulae that determine how the color values of the source image should be modified. Each image in the training collection has been retouched by five different photographers, and Google and MIT's algorithms used this data to learn what sort of improvements to make to different photos.

"This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience", the researchers said.

When the researchers measured the processing, they found that operating on the low-resolution image required about 100 megabytes of memory, whereas a full high-resolution version would require almost 12 gigabytes, which also means a full-resolution version of the algorithm would take far longer to process each image, according to MIT.
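As a rough illustration of what a grid of "formulae" might look like, the sketch below stores a small affine colour transform in each cell of a coarse grid and recolours every full-resolution pixel with the transform from its cell. The grid size, the nearest-cell lookup and all of the names are assumptions for illustration; the actual system uses a learned network, a richer bilateral grid and smoother interpolation.

```python
import numpy as np

def apply_affine_grid(image, grid):
    """
    image: (H, W, 3) floats in [0, 1].
    grid:  (GH, GW, 3, 4) affine transforms; each cell maps RGB -> RGB
           via a 3x3 matrix plus a bias column.
    """
    h, w, _ = image.shape
    gh, gw = grid.shape[:2]

    # Nearest-cell lookup for brevity; a real system would interpolate smoothly.
    ys = np.clip(np.arange(h) * gh // h, 0, gh - 1)
    xs = np.clip(np.arange(w) * gw // w, 0, gw - 1)
    cell = grid[ys[:, None], xs[None, :]]                         # (H, W, 3, 4)

    rgb1 = np.concatenate([image, np.ones((h, w, 1))], axis=-1)   # homogeneous colours
    out = np.einsum('hwij,hwj->hwi', cell, rgb1)                  # per-pixel affine transform
    return np.clip(out, 0.0, 1.0)

# Usage: an identity grid leaves the image unchanged; training would instead
# learn grids whose output matches the photographers' retouched versions.
frame = np.random.rand(720, 1280, 3)
identity = np.zeros((16, 16, 3, 4))
identity[..., :3, :3] = np.eye(3)
assert np.allclose(apply_affine_grid(frame, identity), np.clip(frame, 0, 1))
```

During training, the grid values would be adjusted so that the transformed image comes as close as possible to the retouched versions, which is the role of the assessment step described above.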

Finally, the researchers compared their system's performance to that of a machine-learning system that processed images at full resolution rather than low resolution.
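A back-of-envelope calculation hints at why the full-resolution route is so much more demanding. The numbers below are assumptions chosen only for illustration (a 12-megapixel frame, 64 feature channels per network layer, 32-bit activations) and do not come from the paper.

```python
def activation_bytes(height, width, channels=64, bytes_per_value=4):
    """Memory for one layer of feature maps at the given size (assumed channel count)."""
    return height * width * channels * bytes_per_value

full_res = activation_bytes(3000, 4000)   # ~12-megapixel frame: roughly 3 GB per layer
low_res = activation_bytes(256, 256)      # small downsampled copy: roughly 17 MB per layer

print(f"full-res layer: {full_res / 1e9:.1f} GB, low-res layer: {low_res / 1e6:.1f} MB")
```

Under those illustrative assumptions a single layer of full-resolution features already occupies about 3 gigabytes, so a handful of layers approaches the roughly 12 gigabytes cited above, while the low-resolution path stays in the tens of megabytes.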
