Here’s How Adobe’s Camera App for Serious Photographers Is Different
Adobe is working on a camera app designed to take your smartphone photography to the next level.
Within the next year or two, the company plans to release an app that marries the computing smarts of recent phones with the creative controls that serious photographers often want, said Marc Levoy, who joined Adobe two years ago as a vice president to help spearhead the effort.
Levoy has impeccable credentials: He previously was a Stanford University researcher who coined the term computational photography and helped lead Google’s respected Pixel camera app team.
“What I did at Google was to democratize good photography,” Levoy said in an exclusive interview. “What I’d like to do at Adobe is to democratize creative photography, where there’s more of a conversation between the photographer and the camera.”
If successful, the app could extend photography’s smartphone revolution beyond the mainstream audiences that are the focus of companies like Apple, Google and Samsung. Computational photography has worked wonders in improving the image quality of small, physically limited smartphone cameras. And it’s unlocked features like panorama stitching, portrait mode to blur backgrounds and night modes for better quality in low light.
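Night modes and similar computational photography features typically work by capturing a burst of frames and averaging them, which suppresses random sensor noise while keeping the underlying scene. The following is a toy sketch of that frame-stacking idea, not Adobe's or Google's actual pipeline; the flat-gray "scene" and noise model are invented for illustration:

```python
import random

def stack_frames(frames):
    """Average a burst of noisy frames pixel-by-pixel.

    Averaging N frames reduces random sensor noise by roughly
    a factor of sqrt(N) -- the core idea behind night modes.
    frames: list of equal-length lists of pixel values.
    """
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

# Simulate a burst: the true scene is flat gray (value 100),
# and each captured frame adds random sensor noise.
random.seed(0)
true_value = 100.0
burst = [[true_value + random.gauss(0, 10) for _ in range(1000)]
         for _ in range(16)]

stacked = stack_frames(burst)
single_error = sum(abs(p - true_value) for p in burst[0]) / 1000
stacked_error = sum(abs(p - true_value) for p in stacked) / 1000
print(f"single-frame mean error: {single_error:.2f}")
print(f"16-frame stacked error:  {stacked_error:.2f}")
```

Real pipelines must also align the frames before merging, since the phone moves between captures; that alignment step is where most of the engineering difficulty lives.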
Camera app ‘dialogue’ with the photographer
Adobe isn’t building an app for everyone, but instead for people willing to put in a bit more effort up front to get the photo they want, something suited to the enthusiasts and pros who often already are customers of Adobe’s Photoshop and Lightroom photography software. Such photographers are more likely to have experience fiddling with traditional camera settings like autofocus, shutter speed, color, focal length and aperture.
Several camera apps, like Open Camera for Android and Halide for iPhones, offer manual controls similar to those on traditional cameras. Adobe itself offers some of those controls in the camera built into its Lightroom mobile app. But with its new camera app, Adobe is headed in a different direction — more of a “dialogue” between the photographer and the camera app when taking a photo to get the desired shot.
Adobe is aiming for “photographers who want to think a little bit more intently about the photograph that they’re taking and are willing to interact a bit more with the camera while they’re taking it,” Levoy said. “That just opens up a lot of possibilities. That’s something I’ve always wanted to do and something that I can do at Adobe.”
In contrast, Google and its smartphone competitors don’t want to confuse their more mainstream audience. “Every time I would propose a feature that would require more than a single button press, they would say, ‘Let’s focus on the consumer and the single button press,'” Levoy said.
Adobe camera app features and ideas
Levoy won’t yet be pinned down on his app’s features, though he did say Adobe is working on a feature to remove distracting reflections from photos taken through windows. Adobe’s approach brings new artificial intelligence methods to the challenge, he said.
“I would love to be able to remove window reflections,” Levoy said. “I would like to ship that, because it ruins a lot of my photographs.”
But there are plenty of areas where Levoy expects improvements:
- “Relighting” an image to get rid of problems like glaring shadows on faces. The iPhone’s lidar sensor or other ways of building a 3D “depth map” of the scene can help inform the app where to make such illumination adjustments.
- A new approach to “superresolution,” the computational generation of new pixels to try to deliver higher-resolution photos or more detail when digitally zooming. Google’s Super Res Zoom combines multiple shots to this end, as does Adobe’s AI-based image enhancement tool, but both the multiframe and AI approaches could be melded, Levoy said. “Adobe is working on improving it, and I’m working with the people who wrote that,” he said.
- Merging several shots into one digital photo montage with the best elements of each photo, for example, making sure everybody is smiling and nobody is blinking in a group shot. It’s difficult technology to get working reliably: “Google launched it in Google Photos a long time ago. Of course we de-launched it after people started posting all kinds of despicable creations,” Levoy said.
- New camera sensors. Photographers have long appreciated polarizing filters for their ability to cut glare and reflections, and Sony makes a polarized image sensor that could be useful in phones, Levoy said. It wouldn’t filter the whole image, but instead provide scene detail to make for smarter processing, like reducing reflections from a person’s sweaty face.
- The surface of computational video — applying the same tricks to video as are now common with photos — “has barely been scratched,” Levoy said. For example, he’d like to see an equivalent of the Google Pixel Magic Eraser feature to remove distractions from videos, too. Video is only getting more important, as the rise of TikTok illustrates, he said.
- Photos that adapt to the screens where people see them. People naturally prefer more contrast and richer colors when seeing photos on small phone screens, but that same photo on a laptop or TV can look garish. Adobe’s DNG file format could allow viewer-based tweaks to dial such adjustments up or down to suit their presentation, Levoy said.
- A mixture of real images and synthetic images like those generated by OpenAI’s DALL-E AI system, a technology Levoy calls “amazing.” Adobe has a strong interest in creativity, and AI-generated images could be prompted not just with text but with your own photos, he said.
- Multispectral image sensors, which gather ultraviolet and infrared light beyond human vision, could provide data to improve the colors we can see, for example figuring out whether an object is blue or whether it’s actually white but looks blue because it’s shadowed.
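The multi-frame superresolution mentioned above exploits the tiny hand-shake shifts between burst frames: each frame samples the scene at slightly different positions, so combining them recovers detail no single frame holds. Here is a toy one-dimensional sketch of that idea; it is idealized, with two frames at a perfectly known half-pixel offset, whereas real pipelines like Super Res Zoom must estimate the alignment from the images themselves:

```python
def downsample(signal, factor, offset):
    """Simulate a low-resolution capture: keep every `factor`-th
    sample starting at `offset` (a sub-pixel shift between frames)."""
    return signal[offset::factor]

def superres_merge(frames):
    """Interleave sub-pixel-shifted low-res frames back onto a
    finer grid -- the core idea of multi-frame superresolution."""
    merged = []
    for samples in zip(*frames):
        merged.extend(samples)
    return merged

# A "scene" at full resolution, and two half-pixel-shifted captures.
scene = list(range(16))
frame_a = downsample(scene, 2, 0)   # samples positions 0, 2, 4, ...
frame_b = downsample(scene, 2, 1)   # samples positions 1, 3, 5, ...

restored = superres_merge([frame_a, frame_b])
print(restored == scene)  # the shifted frames jointly recover full detail
```

Each low-resolution frame alone holds half the scene's samples; together they cover the full grid, which is why combining burst frames can genuinely add detail rather than merely interpolating.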
Pro photographers can be picky
Adobe’s success isn’t guaranteed. The more discriminating market of serious photographers is less likely to be forgiving about computational photography glitches that can show up when merging multiple frames into one or artificially blurring backgrounds, for example.
At the same time, mainstream camera apps that ship with phones have steadily improved, adding features like computational raw image formats for more editing flexibility. And Adobe doesn’t get quite the deep level of access to camera hardware that a phone maker does, raising performance challenges.
Another concern: Smartphone cameras and processing capabilities vary widely. Plenty of computational photography tricks only work on the most capable new phones, and it’s hard to write software that copes with the bewilderingly vast range of hardware options.
But Levoy, who’s seen what computational photography already has accomplished despite those challenges, clearly is enthralled.
“It’s just so exciting,” Levoy said. “We haven’t come anywhere near the end of this road.”