Gemini 3 improves the Gemini app with agents and views | Keryc
Google rolls out a major update to the Gemini app: Gemini 3 arrives — a smarter model, new generative interfaces, and an agent that can carry out complex tasks for you.
What Gemini 3 brings
Gemini 3 is Google's new model promising more useful answers, better formatting, and greater concision. What does that mean in practice? Clearer replies, multimodal results (text, images, audio), and stronger ability to reason about tricky problems.
More useful, better-structured answers: less noise, more action.
Better at multimodality: you can upload photos, transcribe notes, or work with different content types and get integrated responses.
Better for developers: Google calls it its strongest model yet for "vibe coding," making it easier to build more powerful apps in Canvas.
Gemini 3 Pro begins its global rollout today. To use it in the app, select Thinking in the model selector. Google AI Plus, Pro, and Ultra subscribers keep higher usage limits. Google is also offering college students in the U.S. a free year of Google AI Pro.
Generative interfaces: visual and dynamic
One of the most interesting additions is the set of generative interfaces. It's not just a new look; the model generates the interface on the fly based on what you ask.
Visual layout: creates a magazine-like view with photos and interactive modules. Ask "plan a 3-day trip to Rome" and you get something like a visual itinerary you can explore. Ever wished your travel plan looked as polished as a brochure, but tappable? This is that.
Dynamic view: here Gemini designs and codes a custom interface in real time using its agentic capabilities. If you ask "explain the Van Gogh Gallery with the life context behind each work," you'll get an interactive response you can tap, scroll through, and learn from in a way static text can't match.
These two experiments are rolling out today, but you might initially see only one of them while Google gathers feedback.
Gemini Agent: complex tasks without losing control
Gemini Agent is an experimental feature that runs multi-step tasks inside Gemini. It connects to your Google apps to manage calendar items, set reminders, or organize your email. For example, you can ask: "Research and help book a mid-size SUV for my trip next week for under $80/day using info from my emails." Gemini will search, compare options, and prepare the booking.
How are safety and control maintained? Gemini asks for confirmation before critical actions like purchases or sending messages, and you can take control at any time. The agent uses tools like Deep Research, Canvas, your connected Google Workspace apps, and live web browsing. It's built with lessons from Project Mariner and powered by Gemini 3.
Gemini Agent launches today for Google AI Ultra subscribers, starting on the web in the U.S.
Redesign, shopping, and accessibility
The app gets a cleaner redesign: starting chats is simpler, and a new "My Stuff" folder collects your images, videos, and reports. The shopping experience is now integrated with Google's Shopping Graph and its more than 50 billion listings, so comparisons and prices appear directly in the app.
In short: the app not only answers better, it now presents, organizes, and even generates interfaces based on what you need. For creators, Canvas and the model's coding capabilities open the path to more complete apps.
Brief reflection
This isn’t just a jump in answer quality: it’s a bet on interfaces built on the fly and agents that run complex flows without taking away your control. Sounds like a more personal, proactive assistant, right? That’s the idea. Now the question is how you’ll use these tools to save time, create better experiences, or solve real everyday problems.