Google Lens will soon search for words and images combined

Google is updating its visual search tool, Google Lens, with new AI-powered language features. The update will let users narrow searches further using text. For example, if you snap a photo of a paisley shirt to find similar items online with Google Lens, you can add the query “socks with this pattern” to specify the garment you’re looking for.

Additionally, Google is launching a new “Lens mode” option in its iOS Google app, allowing users to search using any image that appears while searching the web. This will be available “soon,” but it’ll be limited to the US. Google is also launching Google Lens on desktop within the Chrome browser, letting users select any image or video when browsing the web to find visual search results without leaving their tab. This will be available globally “soon.”

These updates are part of Google’s latest push to improve its search tools using AI language understanding. The updates to Lens are powered by a machine learning model that the company unveiled at I/O earlier this year named MUM. In addition to these new features, Google is also introducing new AI-powered tools to its web and mobile searches.


Using the updated Google Lens to identify a bike’s derailleur.
Image: Google

The changes to Google Lens show the company hasn’t lost interest in this feature, which has always shown promise but has appealed more as a novelty. Machine learning techniques have made basic object and image recognition relatively easy to launch, but, as today’s updates show, they require a little finesse on the part of users to be properly functional. Enthusiasm may be picking up, though — Snap recently upgraded its own Scan feature, which functions pretty much identically to Google Lens.

Google wants these Lens updates to turn its world-scanning AI into a more useful tool. It gives the example of someone trying to fix their bike but not knowing what the mechanism on the rear wheel is called. They snap a picture with Lens, add the search text “how to fix this,” and Google returns results identifying the mechanism as a “derailleur.”

As ever with these demos, the examples Google is offering seem simple and helpful. But we’ll have to try out the updated Lens for ourselves to see if AI language understanding is really making visual search more than just a parlor trick.

www.theverge.com
