Google unveils Gemini-powered ‘Circle to Search’, will come to Pixel 8 and Samsung Galaxy S24 series
Of course, everything Google announces or launches these days has a Gemini AI component. It is hence no surprise that the search giant’s latest innovation in search also comes riding the Gemini wave. Google has introduced an intuitive new feature – “Circle to Search” – at Samsung’s Galaxy Unpacked 2024 event. “Circle to Search” introduces a gesture-based method for users to search for information without leaving their current app. The feature is set to debut on premium Android smartphones, starting with the Galaxy S24 series and the Pixel 8/Pixel 8 Pro by the end of January.
Activated by a long-press on the home button or navigation bar, “Circle to Search” allows users to draw a circle or perform other gestures (highlighting, scribbling, tapping) on their screen to initiate a search. Google Search results then appear at the bottom of the screen, providing relevant information without disrupting the user’s ongoing activity. “For more than two decades, we’ve continuously redefined what a search engine can do — always guided by our mission to organize the world’s information and make it universally accessible and useful. This has gone hand in hand with our ongoing advancements in AI, which help us better understand information in its many forms — whether it’s text, audio, images or videos,” Google noted in a blog post.
“Ultimately, we envision a future where you can search any way, anywhere you want. Now, as we enter 2024, we’re introducing two major updates that bring this vision closer to reality: Circle to Search and an AI-powered multisearch experience,” the company added.
The versatility of this feature allows users to interact with text, images, or videos in a way that feels natural to them. Whether circling an item in a video, highlighting text, or tapping on an image, users can seamlessly retrieve search results. “Circle to Search” also integrates with Google’s Multisearch feature, which combines text and image searches to provide more nuanced results. For example, circling a specific item in a video and asking a related question prompts Multisearch to deliver detailed information, enhancing the user’s understanding of the content.
Building on the foundation of Multisearch, Google is injecting generative AI into the feature within the Google app. This enhancement empowers users to pose more complex or nuanced questions, expanding the capabilities of visual searches. Google illustrates the power of generative AI in Multisearch by describing a scenario where a user encounters an unlabeled board game at a yard sale. By snapping a picture and asking Google Lens, “How do you play this?” the generative AI-fueled overview pulls from the web’s most relevant information, providing a comprehensive answer.
The upgraded Multisearch feature, enriched with generative AI capabilities, is set to roll out in the Google app on Android and iOS in the U.S. (English only). The enhancement will be available to all eligible users, with no beta opt-in required.