NO MORE PHOTOBOMBS

EUGENE TANG

Princeton University

HOW RESEARCHERS ARE AUTOMATICALLY REMOVING DISTRACTIONS FROM PHOTOS


Before.

After. Note that the sign is replaced with a nice patch of grass.

Take a look at the picture on the right. You might notice that it is a beautiful day with a handsome tree standing in the center. But you probably also notice a small black sign in the grass, interrupting the natural landscape.

 

If you know your way around an advanced image-editing application such as Photoshop, you could remove the sign with a few careful edits. If not, you could download an app that lets you “wipe away” undesired sections. But what if you wanted to do this to 100 images, 1,000 images, even 10,000 images? It would be convenient to have a tool that could go through all 10,000 images and automatically remove the black sign. That tool is exactly what researchers Ohad Fried (Princeton University), Eli Shechtman (Adobe Research), Dan Goldman (Adobe Research), and Adam Finkelstein (Princeton University) have created. With the click of a button, their program automatically removes these unwanted “photobombs.”

 

Finding Distractors

The solution relies on two key steps. First, the program performs “distractor detection,” finding unwanted objects in the image. This task was the main focus of Fried and his colleagues’ work. They collected hundreds of images and had people identify sections of each image they found distracting or out of place. They then determined commonalities among the identified distractors. Through this analysis, the researchers found several indicators of a distractor, including saliency, color, and distance from the edge of the image. Saliency is a measure of how much an object draws your attention; in general, distractors stand out in an image, so objects with high saliency are more likely to be distractors. Dissimilar color is another key indicator: in the image above, for example, the black of the sign amid the sea of green grass is a strong clue that it does not belong. Additionally, distractors tend to sit close to the edges of a picture, since people usually frame the main subject of a photo near the center.
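To make these cues concrete, here is a minimal sketch of how saliency, color dissimilarity, and edge proximity might be folded into a single per-pixel distractor score. Everything in it, from the feature maps to the weights, is a hypothetical placeholder; the actual system combines many more features, weighted according to the collected annotations.

```python
import numpy as np

def distractor_score(saliency, color_dissimilarity, weights=(0.5, 0.3, 0.2)):
    """Toy per-pixel distractor score: a weighted sum of three cues.

    saliency            -- 2D array in [0, 1]: how much each pixel stands out
    color_dissimilarity -- 2D array in [0, 1]: how unlike its surroundings a pixel is
    The weights are arbitrary illustrations, not values from the paper.
    """
    height, width = saliency.shape
    ys, xs = np.mgrid[0:height, 0:width]

    # Normalized distance to the nearest image border: 0 at the border, ~1 at the center.
    dist_to_edge = np.minimum.reduce([ys, height - 1 - ys, xs, width - 1 - xs])
    edge_proximity = 1.0 - dist_to_edge / dist_to_edge.max()  # distractors sit near edges

    w_sal, w_col, w_edge = weights
    return w_sal * saliency + w_col * color_dissimilarity + w_edge * edge_proximity
```

Thresholding such a score map would give a rough candidate mask of distractor pixels.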

 

By combining these indicators, among many others, the researchers were able to write a program that can automatically identify distractors in an image.1 That being said, this success did not come without its challenges.

 

One of the main challenges of the project was collecting good data on the items people identify as distractors. As Ohad Fried, a graduate student in computer science at Princeton University and lead author of the paper, puts it, the issue of data quality always arises “when you are dealing with real users and real data. It’s really a challenge to know what to ask the users and how to collect the data.” For example, one way the researchers collected images was by creating an app in which users could manually “wipe away” undesired parts of an image; users could then voluntarily send the completed image to the researchers for use in the study. Describing the data collected, Fried recalls how “one of the major users of the application… used it to remove watermarks [from images].” Watermarks are not, strictly speaking, distractors (and using such an app to remove them is illegal and discouraged by the researchers), so including these images in the analysis would have skewed the data and led to many false removals. Identifying false alarms like these was essential to ensuring that the analysis captured the true characteristics of distractors.

 

Removing Distractors

The second step of the procedure is to replace the distractors with appropriate fillers. To do this, Fried and his colleagues used a pre-existing method called “patch-based hole filling.” Developed by a group of researchers from Princeton University, the University of Washington, and Adobe Systems, this method looks at the region surrounding the hole and then searches the rest of the image for similar patches. It then uses the best-matching patches to fill in the hole. It can also take certain constraints into account. For example, in “repairing” the temple shown in the images below, it recognizes that the temple’s roof has a triangular shape and fills in the holes accordingly.2
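As a rough illustration of the patch-based idea (not the actual PatchMatch algorithm of reference 2, which uses a fast randomized search and iterates toward a globally coherent fill), the brute-force sketch below fills each patch that overlaps a hole by copying in the best-matching patch taken entirely from the known part of the image. It is meant only to convey the concept and would be far too slow for real use.

```python
import numpy as np

def naive_patch_fill(image, hole_mask, patch=7, stride=7):
    """Illustrative brute-force patch fill (not the real PatchMatch algorithm).

    image     -- H x W x 3 float array
    hole_mask -- H x W boolean array, True where pixels must be synthesized
    """
    out = image.copy()
    h, w = hole_mask.shape
    half = patch // 2

    # Collect candidate source patches drawn entirely from known (non-hole) regions.
    sources = []
    for y in range(half, h - half, stride):
        for x in range(half, w - half, stride):
            if not hole_mask[y - half:y + half + 1, x - half:x + half + 1].any():
                sources.append(out[y - half:y + half + 1, x - half:x + half + 1])

    # For each patch touching the hole, paste in the source whose known pixels match best.
    for y in range(half, h - half, stride):
        for x in range(half, w - half, stride):
            target_mask = hole_mask[y - half:y + half + 1, x - half:x + half + 1]
            if not target_mask.any():
                continue
            target = out[y - half:y + half + 1, x - half:x + half + 1]
            known = ~target_mask
            best = min(sources, key=lambda s: ((s - target)[known] ** 2).sum())
            target[target_mask] = best[target_mask]

    return out
```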

 

PATCH-BASED HOLE FILLING CAN ACCURATELY REMOVE DISTRACTORS RANGING FROM YARD SIGNS TO STAINS TO EVEN PEOPLE.

 
 
 

By incorporating “patch-based hole filling” into their program and treating distractors as “holes,” Fried and his colleagues were able to automatically remove undesired sections from images with impressive results. The program can accurately remove distractors ranging from yard signs to stains to even people.
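Readers who want to experiment with the “treat distractors as holes” idea can approximate the removal step with off-the-shelf tools. The sketch below uses OpenCV’s built-in inpainting, which is a diffusion-style method rather than the patch-based hole filling used by the authors, and a hand-drawn mask in place of automatic distractor detection; the file names are placeholders.

```python
import cv2

# Placeholder inputs: the photo, plus a grayscale mask that is white
# wherever a distractor should be removed. In the authors' system this
# mask would come from automatic distractor detection.
image = cv2.imread("photo_with_sign.jpg")
mask = cv2.imread("distractor_mask.png", cv2.IMREAD_GRAYSCALE)

# Fill the masked region from its surroundings. INPAINT_TELEA is a simple
# fast-marching method -- cruder than patch-based filling, but enough to
# make a small yard sign disappear into the grass.
cleaned = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("photo_cleaned.jpg", cleaned)
```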

 

What’s Next

Beyond removing distractors, this research is part of a larger field called computer vision, whose goals include helping computers acquire, analyze, and understand images. Programs have been created in medicine to automatically analyze neuron tracings in images of nerves,3 and in law enforcement to identify the type of firearm used from images of cartridge cases.4 More recently, Google Translate gained the ability to automatically translate text in a picture (think of that the next time you need to read a sign in a foreign country).

 

The original image.

The original image with holes and constraints.

The image with holes filled in.

 

Fried et al.’s project falls within a more specific subfield known as image enhancement: improving the quality of images. Currently, most automatic image-enhancement techniques focus on tuning broad attributes such as saturation or hue. This project, however, shows that it is also possible to make more granular changes that take the image’s content into account. Its applications are useful for professional photographers and selfie-lovers alike, cutting the time Photoshop users spend removing distractions to seconds per image and giving novices a simple way to erase unwanted objects.
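For contrast, a typical “broad attribute” enhancement is a single global adjustment that treats every pixel the same way, for example a saturation boost with the Pillow library as sketched below (the file name is a placeholder). Context-aware edits like distractor removal go a step further by deciding what to change based on what is actually in the picture.

```python
from PIL import Image, ImageEnhance

# A global, context-free enhancement: boost color saturation by 30%.
# Every pixel is adjusted the same way, regardless of image content.
img = Image.open("photo.jpg")
ImageEnhance.Color(img).enhance(1.3).save("photo_more_saturated.jpg")
```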

 

So the next time you’re sitting in front of your computer toiling away at perfecting your new Facebook profile picture, just know that in the future there might be a program to do it for you.

 

References

[1] Fried, O. et al. Finding Distractors in Images. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2015, 1703-1712.

[2] Barnes, C. et al. PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing. ACM Transactions on Graphics (TOG) 2009, 28(3), 24.

[3] Abràmoff, M. D. et al. Image Processing with ImageJ. Biophotonics International 2004, 11(7), 36-42.

[4] Hackwood, S.; Potter, P. A. Signal and Image Processing for Crime Control and Crime Prevention. Proceedings of the International Conference on Image Processing 1999, 3, 513-517.