A design provocation for Gemini as ambient intelligence. Proactive and integrated across every surface.
Today, Gemini waits.
Every AI company is racing to build the smartest model. Intelligence is becoming a commodity, cheaper by the quarter. Google sits on something far more durable: the most connected device ecosystem on the planet. Android, Wear OS, Pixel, Chrome, Gmail, Calendar, Maps, Health, AR. The data is already there, across all of them. Nobody's designed the connective layer. An AI that knows before you ask.
I built this provocation around four patterns I kept noticing in how Gemini shows up today. Each one felt like a real design opportunity.
Gemini feels different on every surface. The watch, the phone, and the “Ask Gemini” and “Gemini Live” experiences in the browser all feel like separate products. The sparkle icon alone can't carry a unified identity. The ambient aura, tone of voice, and progressive reveal patterns throughout this concept are examples of the brand connective tissue that feels missing.
Today's Gemini waits to be summoned. You open the app. You type a prompt. You wait. There's an opportunity for an AI that reads the room, that synthesizes biometrics and calendar data, and acts before the thought fully forms.
Google has the richest signal mesh on the planet. Health data, location, email, search history, device state. The data is all there. The problem is that none of these signals talk to each other. So the user ends up paying for that fragmentation in small, daily frustrations: starting over in a fresh chat when they switch devices, losing context that was never saved, repeating themselves to a system that should already know. The infrastructure connecting these signals is where the opportunity lives.
“Meeting with David Okafor at 10:00 AM” and “Rough night. I’ve got you.” are two wake-up notifications that evoke completely different feelings. Gemini needs warmth, timing, and the restraint to know when starting with a confidence cue helps more than any generic reminder or data point would.
7:14 AM
Maya hasn't touched their phone, but Gemini is already working.
The idle state is the most-seen surface in Maya's day. A watch face that can't express intelligence when nothing is happening isn't doing its job. So the face itself is proactive, composed by Gemini based on who they are, what they need, and what's ahead.
Maya picks up their phone.
It already knows what they need.
Gemini ranks by intent. The generative content model here is shaped around a single question: what does Maya need in the next 90 minutes to 24 hours? The generative output changes with the unique context of each new day, offloading all of the appointments, meetings, pending to-dos, and an action plan for that side project they’ve been thinking about into one scannable flow.
The right information, the right surface. Gone when it's done.
Gemini’s advantage is the connective tissue. The signal flowing between a watch on your wrist, a phone in your pocket, glasses on your face, and a decade of context in your inbox. The technology to build this already exists inside Google: biometrics, calendar, docs, device state, email. The design work is in knowing what to do with it. On which surface? At what moment? Why? And, most importantly, how do we design for restraint, so the system knows when to say nothing at all? That’s the problem that pulled me in and inspired me to build this.
Design Principles
Gemini should already be reading the room and the calendar before you reach for your phone. The prompt should be the fallback state.
Every surface is composed in real time, shaped by who you are and what's happening right now. The same watch face looks different after a bad night's sleep than it does before a big meeting.
Across every surface, with appropriate weight. A watch gets a glance. A phone gets a minute. Glasses get three seconds. The intelligence is the same everywhere. The expression changes with the surface.
The first thing I’d want to understand is where Gemini’s ambient model breaks down today. In practice, on real devices. I’d spend the first few weeks living with the current experience across devices, documenting where the AI feels helpful and where it feels like noise. That’s the starting point for everything else.
From there, I’d focus on one cross-device moment end-to-end. Something like the morning scenario in this provocation: a single user journey that touches watch, phone, and glasses, where the quality of the handoff between surfaces is the thing being designed. I'd ship a tight version of that, learn from it, and evolve.
An ambient AI experience pulls from health data, calendar, email, maps, and device sensors. That means working closely with the Health, Android, Wear OS, and Gemini model teams from the start. I’d want to be involved early enough to influence how the AI shows up on each surface and how it feels, before those decisions get locked into technical constraints.
I’ve spent the last two years at Meta designing for AI wearables and VR, figuring out how voice, text, gesture, and camera input work together on hardware that doesn’t have a traditional screen to fall back on. Before that, I took a Web3 startup from concept to acquisition in seven months as the founding designer, and worked on AI-assisted cardiac diagnosis tooling. Every role has been a bet on emerging technology where the design language didn’t exist yet. I like being the one who builds it.