Category: Technology | Published: 2025-07-03
Search Goes Conversational
Google this week announced the launch of Search Live with voice input, a new capability inside the Google app that allows users to engage in back-and-forth spoken conversations with its AI-powered Search tool. Rolling out first in the United States, the feature is available to those who have opted into the AI Mode experiment in Google Labs, the company’s testing platform for early-access features.
Hands-Free Search
The launch marks a step forward in how users interact with Search, with Google positioning the update as a more natural, hands-free way to discover and explore information while multitasking or on the move.
Use the “Live” Icon
A dedicated “Live” icon now appears within the Google app interface, allowing users to tap and speak their queries aloud. The AI responds in spoken form, and users can follow up with further questions to refine or expand the topic, thereby mirroring a more human-like back-and-forth conversation.
According to Google, Search Live _“lets you talk, listen and explore in real time,”_ giving users the ability to access web-based information while continuing to use other apps or even switching between tasks. The tool also provides on-screen links to source material, allowing users to dig deeper into AI-generated answers.
Building on Gemini and Search Infrastructure
Search Live runs on a custom version of Gemini, Google’s multimodal large language model, which powers many of its generative AI tools. The Gemini model used in AI Mode has been specially adapted to support live voice input, real-time responses, and integration with Google Search’s existing ranking and quality systems.
Liza Ma, director of product management at Google Search, explained in a company blog post that the system combines _“advanced voice capabilities”_ with the reliability of Search’s _“best-in-class quality and information systems,”_ ensuring that responses are both conversational and trustworthy. She also confirmed the use of Google’s ‘query fan-out’ technique, which enables the system to return a more diverse and useful range of web content in response to user questions.
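Google has not published implementation details for ‘query fan-out’, but the general idea, issuing several related sub-queries in parallel and merging the de-duplicated results, can be sketched as follows. The function names and mock search backend below are illustrative only, not Google’s actual API:

```python
from concurrent.futures import ThreadPoolExecutor


def search(query):
    # Stand-in for a real search backend; returns mock results here.
    return [f"result for '{query}'"]


def fan_out(user_query, expansions):
    """Illustrative 'query fan-out': run the original query plus
    related sub-queries in parallel, then merge and de-duplicate."""
    queries = [user_query] + expansions
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(search, queries))
    merged, seen = [], set()
    for results in result_lists:
        for r in results:
            if r not in seen:
                seen.add(r)
                merged.append(r)
    return merged


links = fan_out(
    "prevent linen dress wrinkling in suitcase",
    ["how to pack linen clothes",
     "remove wrinkles from linen while travelling"],
)
```

The effect, as the article describes it, is that a single spoken question yields a broader and more diverse set of web links than a single literal query would.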
For example, a user might ask, _“What are some tips for preventing a linen dress from wrinkling in a suitcase?”_ and then follow up with, _“What should I do if it still wrinkles?”_ The AI answers audibly while presenting related links on screen. This continuity is key to what Google hopes will be a smoother, more context-aware search experience.
How and Where to Access It
At launch, Search Live with voice is available only to users in the U.S. who have joined the AI Mode experiment through Google Labs. It works on both Android and iOS via the official Google app. There is currently no timeline for a broader international rollout, though Google says it intends to expand features and availability in the coming months.
Users with access will see a new “Live” microphone icon below the search bar in the app. Once the feature is activated, they can ask a question out loud and receive a spoken response. Users can view a transcript of the interaction, continue the conversation via typing if preferred, and even revisit past queries via the AI Mode history log.
Multitask While It Works in the Background
Because Search Live works in the background, it enables a degree of multitasking not previously possible with voice-based search tools. For example, a user could begin a conversation in the app, switch to messaging or maps, and continue speaking to the AI without interruption.
Voice, Visuals, and What Comes Next
The introduction of voice input is just one part of Google’s broader plan to bring real-time multimodal capabilities into Search. For example, at Google I/O in May 2025, the company previewed future updates that will allow users to combine voice interaction with real-time visual input via their phone’s camera, building on advances made in its Project Astra research and the ongoing development of Google Lens.
Multimodal Search
This evolution represents a deeper move by Google into what’s referred to as multimodal search, whereby users can interact with AI not just through typing or talking, but by showing it what they see. In practical terms, this could include pointing the phone at a confusing diagram or damaged object, asking what it is, and getting a contextual explanation, complete with suggested web links, video tutorials or shopping sources.
It also echoes the direction competitors are taking. For example, OpenAI’s ChatGPT has recently introduced voice interaction capabilities in its mobile apps, and Perplexity AI has gained traction for its own real-time web search and voice tools. Google’s response, with Search Live, is both a defensive and strategic step to stay ahead in what is quickly becoming a crowded, AI-first search market.
A New Frontier for Business and Advertisers?
For business users, the implications of voice-first search are far-reaching. For example, in sectors such as logistics, retail, and field service, the ability to conduct voice-based queries while driving or working could prove invaluable. Search Live also introduces potential benefits for productivity, especially for knowledge workers trying to conduct research or fact-checks while multitasking between devices or applications.
It may also signal a new phase for Google’s advertising ecosystem, although details remain unclear. As Search becomes more conversational and voice-led, traditional search result ads, particularly those dependent on text input and visual scanning, may need to evolve. It’s not yet known how, or if, Search Live results will incorporate sponsored content.
The visual links shown alongside voice answers could potentially become prime real estate for future advertising formats. However, Google has so far remained quiet on how monetisation will work within AI Mode. With more users consuming answers audibly and potentially clicking fewer links, publishers and advertisers will be watching closely.
Challenges
Despite the promise, there are several challenges ahead. For example, accuracy and reliability remain key concerns for AI-generated search responses. While Google stresses its Gemini-based AI uses the same quality controls as regular Search, AI hallucinations (where systems confidently give false or misleading answers) are still a known risk in generative models.
The opt-in nature of the feature also limits immediate user exposure and feedback. By placing Search Live behind the AI Mode experimental wall, Google is clearly seeking to manage the rollout cautiously, but this also means that the majority of users globally still can’t access or evaluate it.
There are also privacy and data security implications, particularly with voice-based input and persistent conversation histories. Google maintains that users can view, manage or delete their AI Mode interactions, but questions remain over how voice data is processed, stored, or used to train models.
One other aspect critics may point to is the increasing opacity of sources in AI answers. For example, while Google includes clickable links alongside Search Live responses, these can sometimes appear secondary to the spoken reply, which may not fully represent the nuance or breadth of available information. Ensuring transparency and balance in summarised answers will be crucial to maintaining trust, especially as Search Live expands into more domains.
What Does This Mean For Your Business?
The introduction of Search Live could be seen as the next step in its natural progression towards Google’s long-term vision for AI-powered search. By blending real-time voice interaction with the depth of web content, Google is essentially pos