Pinterest will let people use its image-recognizing Lens feature to augment text searches

A year after its debut, Pinterest’s Lens feature has become so adept at parsing what an image contains and inferring what a person is searching for that the company will now use it to support text-based searches.

Starting next week, people will be able to attach images to textual search queries on Pinterest to have Lens aid in finding what they are looking for, the company announced on Thursday. The new option will first roll out to Pinterest’s iOS app and will eventually make its way to the Android version.

The idea is that the images will serve as an additional parameter for a search to better mimic how people might seek things out in the real world. Consider how you might walk into a furniture store looking for a living room rug and show the salesperson a photo of your couch and coffee table to help pinpoint a match. Or how you might be at the grocery store shopping for salsa ingredients, see an odd-but-inviting type of pepper and ask an employee what other salsa ingredients it would complement. Now you’ll be able to put those questions to Pinterest.

The combination of visual and text search should also help Pinterest refine its visual search results. The text queries can sharpen its computer vision technology’s understanding of what an image contains and help establish new relationships between the objects the technology already recognizes and other things or uses it may not yet be aware of.

Of course, Pinterest’s ability to parse images is already improving as its volume of visual searches increases. Every month, people conduct more than 600 million visual searches using Lens, Pinterest’s image-parsing browser extensions and its visual search within pins feature. As a result, Pinterest’s computer vision technology can recognize more than five times as many items as it did a year ago, including recipe ingredients and clothing styles.


About The Author

Tim Peterson
Tim Peterson, Third Door Media's Social Media Reporter, has been covering the digital marketing industry since 2011. He has reported for Advertising Age, Adweek and Direct Marketing News. A born-and-raised Angeleno who graduated from New York University, he currently lives in Los Angeles. He has broken stories on Snapchat's ad plans, Hulu founding CEO Jason Kilar's attempt to take on YouTube and the assemblage of Amazon's ad-tech stack; analyzed YouTube's programming strategy, Facebook's ad-tech ambitions and ad blocking's rise; and documented digital video's biggest annual event VidCon, BuzzFeed's branded video production process and Snapchat Discover's ad load six months after launch. He has also developed tools to monitor brands' early adoption of live-streaming apps, compare Yahoo's and Google's search designs and examine the NFL's YouTube and Facebook video strategies.