Way back in 2017, Apple unveiled Portrait Lighting, a technique that artificially replicates the look of studio lighting using the iPhone’s depth-sensing camera and a dollop of AI smarts.
Not to be outdone, Google researchers have published a paper outlining what looks like an even more robust approach to applying different lighting looks to portrait images through software alone.
Google’s approach to “relighting” is based on a neural network that is fed a single portrait image taken with a smartphone camera. The image is then “relit” to look as if it were shot under completely different lighting conditions. This is significantly different from Apple’s Portrait Lighting, which simply works with the existing light data captured by the iPhone, dialing contrast and illumination up or down in the foreground or background. Google, by contrast, adds light and lighting effects from scratch.
According to the paper, the training set for this network was rather small (just 18 images), but the researchers say the results were “quantitatively superior” to prior work. One of the distinguishing characteristics of Google’s approach is that it skips an “inverse rendering step” that other models relied on to determine how light would reflect off different facial geometries.
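To make the distinction concrete, here is a highly simplified sketch of the direct-relighting idea: a single learned function maps a portrait plus a target lighting description straight to a relit image, with no intermediate estimate of geometry or reflectance (the “inverse rendering” other methods perform). All names, shapes, and the 9-dimensional lighting code are illustrative assumptions, not details from Google’s paper; a real model would be a deep convolutional network, not one linear layer.

```python
# Toy sketch of direct relighting: (source image, target lighting) -> relit
# image, skipping any explicit inverse-rendering step. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def relight(image, target_light, weights):
    """One 'layer' of a direct image-to-image relighting model:
    concatenate the portrait with the desired lighting descriptor and
    apply a learned per-pixel transform back to RGB."""
    h, w, _ = image.shape
    # Broadcast the target lighting code to every pixel.
    light_map = np.broadcast_to(target_light, (h, w, target_light.size))
    features = np.concatenate([image, light_map], axis=-1)  # (h, w, 3 + k)
    # Learned linear map to RGB; clip to the valid [0, 1] range.
    return np.clip(features @ weights, 0.0, 1.0)

# A 4x4 RGB "portrait" and a 9-dim lighting code (e.g. low-order
# spherical harmonics, a common compact lighting representation).
image = rng.random((4, 4, 3))
target_light = rng.random(9)
weights = rng.random((3 + 9, 3)) * 0.1

relit = relight(image, target_light, weights)
print(relit.shape)  # (4, 4, 3)
```

The point of the sketch is the data flow: nothing in `relight` reconstructs the face’s shape or material; the network alone is trusted to learn how the new lighting should land on the image.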
While it’s still early days, the research does indicate that Google is hard at work encroaching on photographic terrain (in this case lighting) that was previously the domain of professionals. The researchers speculate that this style of relighting could be applied to images as easily as other photo filters. It could also be used to adjust lighting after the fact, turning a backlit scene into a frontally lit one (or vice versa).