I enjoy photography. I’m sure that’s obvious based on the number of photography-related posts on this blog. The advancements in mobile photography in particular have been amazing, and every vendor touts better images from their devices through “computational photography.” But one vendor, Samsung, has crossed a line.

Samsung has been advertising a feature on their newer phones called “Space Zoom.” Photographers (and anyone who understands how phone cameras work) questioned the legitimacy of those images, and in early March a Reddit user reproduced those promotional images in a way that appeared to show Samsung was being disingenuous.

Their methodology involved downloading a high-resolution image of the moon, downsizing it to just 170 by 170 pixels, clipping the highlights, and applying a Gaussian blur to heavily obscure the moon’s surface details. This low-resolution image was then displayed on a monitor and photographed from a distance with a Samsung Galaxy device. The resulting image had considerably more detail than its source.
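If you want to try this yourself, the test image is easy to prepare with a few lines of Python and Pillow. The 170 by 170 downsize comes straight from the Reddit post; the clipping threshold and blur radius below are my own guesses at reasonable values, and the filenames are placeholders.

```python
from PIL import Image, ImageFilter

# Any high-resolution moon photo will do (the filename is illustrative).
moon = Image.open("moon_high_res.jpg").convert("L")

# Downsize to 170 x 170 pixels, destroying genuine surface detail.
small = moon.resize((170, 170), Image.LANCZOS)

# Clip the highlights: push bright values to pure white so those
# regions carry no recoverable texture (the threshold is a guess).
clipped = small.point(lambda v: 255 if v > 216 else v)

# Gaussian blur to obscure whatever detail remains (radius is a guess).
blurred = clipped.filter(ImageFilter.GaussianBlur(radius=3))

# Display this full screen on a monitor and photograph it from across
# the room with the phone's zoom.
blurred.save("moon_test_target.png")
```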

[Image: Samsung moon shot]

What Samsung devices appear to be doing is editing these blurry images with machine learning, adding texture to the photo based on other images of the moon the model was trained on. That’s wrong. John Gruber at Daring Fireball sums it up well:

Where I draw the line is whether the software is clarifying reality as captured by the camera or not. Is the software improving/upscaling the input, or substituting the input with imagery that doesn’t originate from the camera? […] What if the moon weren’t the same? What if it gets hit by a large meteor, creating a massive new visible-from-earth crater? Or what if our humble friend Phony Stark blows tens of billions of dollars erecting a giant billboard on the surface of the moon, visible from earth, that reads “@elonmusk”? A photo of the moon taken with one of these Samsung phones wouldn’t show either of those things, yet would appear to capture a detailed image of the moon’s surface. A camera should capture the moon as it is now, and computational photography should help improve the detail of that image of the moon as it appears now. Samsung’s phones are rendering the moon as it was, at some point in the past when this ML model was trained.

What Samsung is doing is fraudulent. Computational photography enhancements that preserve the integrity of the image, like night mode (which takes several pictures at different exposures and intelligently merges them together) and Apple’s ProRaw (which produces a RAW file that can include Smart HDR, Deep Fusion, or Night mode processing), are fantastic. Those that add missing textures (or teeth) are not.
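The distinction is easy to see in code. Here’s a minimal sketch of integrity-preserving merging using OpenCV’s Mertens exposure fusion. This isn’t Apple’s or Samsung’s actual pipeline (those are proprietary), and the filenames are placeholders, but it illustrates the principle: every output pixel is a weighted combination of light the camera actually captured.

```python
import cv2
import numpy as np

# Bracketed exposures of the same scene (filenames are illustrative).
frames = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens exposure fusion weights each frame per pixel by contrast,
# saturation, and well-exposedness, then blends them. Nothing in the
# output comes from anywhere but the captured frames.
fused = cv2.createMergeMertens().process(frames)

# The result is float32 in roughly the 0-1 range; convert back to 8-bit.
cv2.imwrite("merged.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```

Contrast that with the moon feature: the texture in the final image doesn’t come from the captured frames at all, it comes from the model’s training data.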