Google Multisearch
My contribution
Product design
Visual design
Year
2021-2022
The Google Lens product lets you search what you see. Using a photo, your camera, or almost any image, Lens helps you discover visually similar images and related content, gathering results from across the internet.
At the SearchOn event in 2021, Google announced the upcoming launch of Multisearch - the ability to search with a combination of images and text together. I led visual design for the initial concept shared at SearchOn, and then continued to develop the visual and UX design of the feature, which launched in April 2022.
Background
At Google I/O in 2021, Google announced the development of a new AI model in Search: Multitask Unified Model (MUM).
Following this, the goal was to use MUM's capabilities to make Google Lens more helpful and enable new ways to search visually by giving users the ability to ask questions about what they see.
The existing Google Lens interface allowed users to search with an image, but gave them no way to enhance that search with text in order to pivot, refine, or ask questions.
Goals
Create an intuitive, inspiring, and visually immersive way for users to add text input to their search in order to pivot, refine, or ask questions.
Want to see the full case study?
Reach out and we can set a time to talk.
Please note:
Non-Disclosure Agreements restrict my ability to show full case studies and process work for many of the projects I worked on while at Google. The screens shown below are publicly available screenshots of launched products and features that I worked on.