Google announced on March 26, 2026, that Search Live is now available in every language and location where AI Mode is active. The expansion brings the feature from its previous availability in the US and India to more than 200 countries and territories.
Search Live turns Google Search into a spoken, multimodal conversation. Instead of typing a query, users open the Google app on Android or iOS, tap the Live icon under the search bar, and ask their question out loud. The system responds with audio, and users can continue the conversation with follow-up questions or tap through to web links for deeper information. The camera can be enabled during the conversation to add visual context, letting Search see what the user is looking at and respond accordingly.
The global rollout is powered by Gemini 3.1 Flash Live, a new audio and voice model that Google describes as its highest-quality yet. The model is inherently multilingual, meaning users can speak with Search in their preferred language without changing settings or switching modes.
What Gemini 3.1 Flash Live Brings to the Table
Gemini 3.1 Flash Live is purpose-built for real-time voice interaction. The model upgrades several capabilities compared to its predecessor, Gemini 2.5 Flash Native Audio, which had powered Search Live in the US since December 2025.
The model handles tonal understanding better than previous versions. Google says it’s more effective at recognizing acoustic nuances like pitch and pace, which makes the conversational flow feel less robotic and more natural. Background noise filtering has also improved, with the model better at distinguishing relevant speech from environmental sounds like traffic or television.
For longer conversations, the model can now follow a train of thought for roughly twice as long as previous versions. The longer conversational memory addresses a common frustration with voice AI: the system losing context midway through a complex discussion.
On the developer side, Gemini 3.1 Flash Live scores 90.8% on ComplexFuncBench Audio, a benchmark that measures multi-step function calling with constraints. On Scale AI’s Audio MultiChallenge, which tests complex instruction following and reasoning through the interruptions typical of real conversation, the model scores 36.1% with reasoning enabled. Google has made the model available in preview through the Gemini Live API in Google AI Studio.
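For developers who want to experiment, the snippet below is a minimal sketch of what a Live API session typically looks like with the google-genai Python SDK. The model ID string is a placeholder assumption, since the announcement names the model but not its preview identifier, and the exact configuration options available in the preview may differ.

```python
# Minimal sketch of a Gemini Live API session using the google-genai Python SDK.
# The model ID below is an assumed placeholder, not a confirmed identifier.
import asyncio
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

MODEL_ID = "gemini-3.1-flash-live-preview"  # placeholder; check AI Studio for the real name

config = types.LiveConnectConfig(
    response_modalities=["AUDIO"],  # ask for spoken responses
)

async def main():
    # Open a bidirectional streaming session with the live model.
    async with client.aio.live.connect(model=MODEL_ID, config=config) as session:
        # Send a simple text turn; a real voice client would stream microphone audio instead.
        await session.send_client_content(
            turns=types.Content(
                role="user",
                parts=[types.Part(text="What's a good warm-up before a 5k run?")],
            )
        )
        # Collect the streamed audio chunks of the reply.
        audio = bytearray()
        async for message in session.receive():
            if message.data:  # raw PCM audio bytes from the model
                audio.extend(message.data)
        print(f"Received {len(audio)} bytes of audio")

asyncio.run(main())
```

In a production voice client, streaming microphone input and playing back the returned PCM audio would replace the single text turn shown here.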
The model supports over 90 languages for real-time multimodal conversations. All audio generated by Gemini 3.1 Flash Live is watermarked with SynthID, an imperceptible marker woven into the audio output that allows detection of AI-generated content.
How Search Live Works in Practice
Search Live is designed for situations where typing a search query feels awkward or insufficient. Google's own tips highlight several intended use cases:
- Pointing the camera at something unfamiliar while traveling and asking Search to explain what it is.
- Aiming the camera at a piece of equipment while learning a new hobby and having a back-and-forth conversation about technique.
- Getting troubleshooting help by showing the camera a problem (a broken appliance, an error message on a screen, a plant that looks sick) and asking what to do about it.
- Using it as an educational tool, where a student can point the camera at a science experiment and have the AI walk through what's happening.
The camera integration is the same capability that Google Lens provides, extended into a conversational format. Users already pointing their camera at something through Google Lens can tap the Live option at the bottom of the screen to start a real-time conversation about what the camera sees.
Each conversation can mix voice and visual input. A user might start by asking a question out loud, then enable the camera to show Search what they’re looking at, then ask follow-up questions based on the response. The AI maintains context throughout the conversation, so follow-up questions don’t need to repeat the original context.
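A rough approximation of that flow is sketched below using the same Live API pattern as the earlier example: a single session carries the conversation state, so a later turn can attach an image (standing in for a camera frame) without restating the original question. This is an illustration, not how the Google app itself is built; sending still images through send_client_content and the text-only response modality are choices made here for readability.

```python
# Illustrative sketch: one live session mixing a spoken-style question with a
# later camera-frame turn, relying on the session to keep conversation context.
# Assumes a session opened with response_modalities=["TEXT"] for readable output.
from google.genai import types

async def troubleshoot(session, frame_jpeg: bytes):
    # Turn 1: ask the question in words.
    await session.send_client_content(
        turns=types.Content(
            role="user",
            parts=[types.Part(text="My houseplant's leaves are turning yellow. Why?")],
        )
    )
    async for message in session.receive():
        if message.text:
            print(message.text, end="")

    # Turn 2: add visual context. The question isn't repeated; the session
    # already holds the earlier exchange.
    await session.send_client_content(
        turns=types.Content(
            role="user",
            parts=[
                types.Part(text="Here's what it looks like right now."),
                types.Part.from_bytes(data=frame_jpeg, mime_type="image/jpeg"),
            ],
        )
    )
    async for message in session.receive():
        if message.text:
            print(message.text, end="")
```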
The Timeline So Far
Search Live has been built incrementally over the past year. Google first demonstrated the Gemini-powered conversational search concept in mid-2024. The feature launched in the US in June 2025 with voice-only interaction. Video input through the phone camera was added in July 2025. In December 2025, Google upgraded the underlying model to Gemini 2.5 Flash Native Audio, improving conversation quality in the US market.
The March 2026 expansion represents the largest single rollout: from two countries to over 200, with a new model purpose-built for multilingual real-time conversation. The inherent multilingual capability of Gemini 3.1 Flash Live is what makes the global launch possible without requiring separate localized builds for each language.
Enterprise Adoption
Gemini 3.1 Flash Live isn’t limited to consumer search. Google has made the model available through Gemini Enterprise for Customer Experience, and it’s already deployed by companies including Verizon and Home Depot. Home Depot is using it specifically in its contact center experience, where the model handles real-time voice interactions with customers.
For enterprise use cases, the most relevant improvements are in tool calling and complex instruction following. The model can call external tools and deliver information during live conversations more reliably than previous versions, which matters for customer service scenarios where the AI needs to check inventory, look up order status, or navigate multi-step troubleshooting flows.
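To illustrate the general pattern (not Google's or Home Depot's actual integration), the sketch below declares a hypothetical get_order_status function on a live session; the model can then request that call mid-conversation and fold the result into its reply. The function name, schema, and returned fields are invented for the example.

```python
# Sketch of tool calling in a live session: the model can ask the client to run
# a declared function (here, a hypothetical order-status lookup) mid-conversation.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Hypothetical tool declaration; not a real Google or retailer API.
order_status_tool = types.Tool(
    function_declarations=[
        types.FunctionDeclaration(
            name="get_order_status",
            description="Look up the shipping status of a customer order.",
            parameters=types.Schema(
                type="OBJECT",
                properties={"order_id": types.Schema(type="STRING")},
                required=["order_id"],
            ),
        )
    ]
)

config = types.LiveConnectConfig(
    response_modalities=["TEXT"],  # text shown here for readability
    tools=[order_status_tool],
)

async def handle_session(session):
    await session.send_client_content(
        turns=types.Content(
            role="user",
            parts=[types.Part(text="Where is my order 12345?")],
        )
    )
    async for message in session.receive():
        # The model may pause its answer to request a tool call.
        if message.tool_call:
            responses = []
            for call in message.tool_call.function_calls:
                # In a real deployment this would query the order database.
                responses.append(
                    types.FunctionResponse(
                        id=call.id,
                        name=call.name,
                        response={"status": "shipped", "eta": "Friday"},
                    )
                )
            await session.send_tool_response(function_responses=responses)
        elif message.text:
            print(message.text, end="")
```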
What Search Live Means for Search Behavior
The broader trend Search Live accelerates is the shift from typed queries to spoken, conversational interactions with search. When users talk to Search instead of typing, the queries become longer, more natural-language, and more contextual. The camera input adds a dimension that typed search can’t replicate: the ability to search based on what’s physically in front of the user without needing to describe it in words.
For publishers and site owners, the conversational format raises the same questions that AI Overviews and AI Mode have already introduced. When a user gets an answer through a spoken conversation, the path to clicking through to a website becomes longer and less certain. Google includes web links in Search Live responses, and users can tap through to them, but the default interaction is a voice response that may satisfy the query without a click.
The sites that appear in those web links still need the same foundation: authoritative content, strong link building profiles, and the kind of trust signals that make Google confident enough to cite a source in an AI-generated response. Whether the user typed the query, spoke it, or pointed a camera at something, the underlying ranking and citation systems draw from the same authority signals. The input format changes. The criteria for being selected as a source don’t.
For digital PR and content strategy, the practical implication is that content needs to be useful in formats that voice AI can parse and deliver. Clear, direct answers to specific questions. Structured information that an AI model can extract and present verbally. Expert-driven content that establishes the kind of authority Google’s systems look for when deciding which sources to cite in a real-time conversation.
Search Live is available now through the Google app on Android and iOS in all markets where AI Mode is active.
