
Google launches offline-first AI dictation app on iOS

Gemma models run on-device while Gemini cleanup stays optional; the privacy pitch arrives just as startups are monetising transcription

[Image: “I was saying ‘Transcription.’ Still early days for this app.” Image Credits: Screenshot by TechCrunch]

Google has quietly released an iOS app called Google AI Edge Eloquent that turns speech into edited text on-device, a design choice that puts privacy and latency at the center of a product category usually built around cloud processing. The app is free to download, and once its Gemma-based automatic speech recognition models are installed it can transcribe in real time without sending audio off the phone, according to TechCrunch.

The release lands in a market that has grown quickly as speech-to-text quality improves and “dictation as a workflow” becomes a paid subscription habit. Startups including Wispr Flow, SuperWhisper and Willow have been selling speed and convenience; Google’s entry signals that the large-platform strategy may be to commoditise the core transcription layer while keeping premium value in text refinement and ecosystem hooks.

Eloquent’s interface shows live transcription, then automatically removes filler words and self-corrections when the user pauses. It also offers rewrite modes such as “Key points”, “Formal”, “Short” and “Long”, effectively turning raw dictation into a first draft. That is where the product’s split-brain architecture matters: when cloud mode is enabled the app uses Gemini models for cleanup, but users can disable cloud mode entirely for local-only processing, TechCrunch reports.

The app also offers to import names, jargon and keywords from a user’s Gmail account, an option that illustrates how “offline-first” can still be paired with data enrichment. A local speech model reduces the need to transmit audio, but the overall experience can still be shaped by the surrounding account graph—contacts, vocabulary, and the user’s own writing patterns—if the user opts in.

Google’s App Store description originally referenced an Android version with system-wide keyboard integration and a floating dictation button, similar to Wispr Flow’s Android feature. TechCrunch notes that Google later removed references to Android from the listing while adding that an iOS keyboard is “coming soon” — suggesting the iPhone app may be a testbed for a broader rollout.

For Apple users, the immediate competitive pressure is less on iOS itself—Apple already ships dictation and voice features—and more on third-party dictation tools that justify subscription pricing by promising privacy, speed, and “better than default” output. A free Google app that runs offline narrows that differentiation, while the optional switch to Gemini for polishing hints at where Google expects defensible value to sit.

Google has not framed the launch as a major product announcement. For now it is an experimental app with an unusually clear message: the speech model can live on the device, but the business model still lives in the stack above it.