Millions of Google Labs users in the U.S. can now experience AI-powered image analysis through Google Lens’s enhanced AI Mode
In a significant update to its visual search experience, Google has announced the integration of multimodal capabilities into its Lens feature through the newly enhanced AI Mode. This advancement is set to offer users a more intelligent, context-aware interaction with the world around them by combining visual input with Gemini’s powerful AI understanding.
Initially available to Google One AI Premium subscribers, the feature is now being rolled out to millions of users in the United States through the experimental Google Labs program. This expansion aligns with Google’s broader mission to bring advanced generative AI tools to more users in practical and innovative ways.
What Is Multimodal AI Mode in Google Lens?
Multimodal AI Mode allows users to capture an image of an object using Google Lens and immediately engage in a deeper AI-powered analysis with Gemini, Google’s cutting-edge AI model. This enables users to:
- Ask complex questions related to the image
- Understand the relationships between objects within a scene
- Receive intelligent suggestions, information, and relevant links
Whether you’re trying to identify a rare plant, analyze a piece of art, or plan a trip based on a photo, this new capability enhances the way people interact with visual content.
According to Google, “With Gemini’s multimodal capabilities, AI Mode can understand the entire scene in an image, including the context of how objects relate to one another and their unique materials, colors, shapes, and arrangements. Drawing on our deep visual search expertise, Lens precisely identifies each object in the image.”
Smarter Search, Longer Queries
One key finding from Google’s internal testing is that AI Mode encourages users to ask significantly longer and more detailed questions: queries made in AI Mode are reportedly twice as long as standard Google search queries. This suggests the feature supports more nuanced exploration and information-seeking behavior.
Google credits its “query fan-out” technique for this expanded search capability. This approach allows AI Mode to generate multiple search queries based on both the entire image and its individual components, offering a broader and deeper information set than traditional search.
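Google has not published the internals of this pipeline, but the idea behind query fan-out can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names, the object-detection output format, and the way per-object queries are composed are all hypothetical stand-ins, not Google's actual implementation.

```python
# Hypothetical sketch of a "query fan-out" step. The real Lens/Gemini
# pipeline is not public; object detections are mocked as simple dicts.

def fan_out_queries(scene_description, objects):
    """Produce one query for the whole scene plus one per detected object.

    scene_description: a text summary of the full image (assumed to come
        from a multimodal model in the real system).
    objects: detected objects, each with a label and attribute string.
    """
    queries = [scene_description]  # query grounded in the entire image
    for obj in objects:
        # Per-object queries combine the object's label with its
        # visual attributes (material, color, shape, etc.).
        queries.append(f"{obj['label']} {obj['attributes']}")
    return queries

if __name__ == "__main__":
    scene = "mid-century living room with wooden furniture"
    detected = [
        {"label": "armchair", "attributes": "teak frame, green upholstery"},
        {"label": "floor lamp", "attributes": "brass tripod base"},
    ]
    for q in fan_out_queries(scene, detected):
        print(q)
```

Running each generated query against a search backend and merging the results is what yields the "broader and deeper information set" the article describes; that merging step is omitted here.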
Use Cases Across Daily Life
The enhanced AI Mode in Google Lens is designed for a wide range of everyday scenarios, including:
- Shopping assistance
- Travel planning and exploration
- Academic and general research
- Outdoor and nature identification
- Home decor and design inspiration
From casual strolls to serious study sessions, the new multimodal features make the world more searchable than ever before.
How to Try AI Mode in Google Lens
Although the feature is still in its experimental phase, Google is inviting interested users to try out AI Mode through Google Labs. It is available on both Android and iOS devices via the Google app. Users can opt in and provide feedback to help shape the future of this AI-powered experience.