Today I want to talk to you about real needs and how changing expectations are shaping what gets built.
In just two years, AI models have become 300 times more efficient. Not 300 percent, but 300 times. That doesn't just speed up predictions: it lets models act autonomously, adapt to different environments, and correct course when they hit an obstacle.
What's changing and why it matters to you
These advances aren't just theory: they're driving discoveries in medicine, energy, and materials science. But they also make something closer to your daily life possible: assistants that understand context and help you in the moment.
Remember the days of ten blue links in Search? Larry and Sergey imagined Search evolving from answering to suggesting and then to helping. Today, for the first time, that vision can be fulfilled with features like Personal Intelligence, which connects info from apps like Gmail and Google Photos so answers can be more proactive and personalized.
A concrete example: in Ukraine, a collaboration produced Diia.AI, a national assistant that not only answers questions but actually processes services. If you ask for an income certificate (or the equivalent request), it delivers the document to your personal account and notifies you by email when it's ready. That changes the user experience, step by step.
Trust and control: what people ask for
What people repeat most is simple: they want to be in the driver's seat. They don't want a system that decides for them without explanation.
Trust is key: people want to control how and when their data is used.
That translates into practical measures: clear controls to turn access on or off, limits for sensitive topics, and training agents only with what's necessary to provide the service. In other words, less indiscriminate collection and more purpose in data use.
Also, privacy solutions can't be just endless notices and consents. We need to innovate ways to show transparency and give control without overwhelming people. Just-in-time notifications, intuitive control panels, and contextual options are part of the answer.
How Google proposes deploying these capabilities
Google suggests a phased rollout: first trusted testers, then a responsible launch with continuous feedback. That feedback, whether people find the tool useful, defines the next steps.
There's also a role for governments: favor global standards, regulatory incentives, and benchmarks that make investing in privacy-protecting technologies economically sensible. Privacy should be a quality to compete on, not an isolated regulatory burden.
Open questions
What's reasonable to expect today from privacy when services become increasingly contextual? How do we accommodate diverse expectations across cultures and individuals? How do we measure and regulate what's reasonable in a world where technology changes fast?
These aren't questions just for engineers or regulators. They're questions for everyone who uses digital products. The answer requires ongoing dialogue and listening to the billions of people who already use these services.
The proposal I hear is clear: privacy by innovation. It's not enough to design for privacy; we must innovate so privacy is compatible with useful, personalized experiences.
Final thought
We're facing the biggest platform shift of our generation. The technology is ready to offer truly helpful personal assistants, but safe and widespread adoption depends on our data protection rules evolving alongside the tools.
If we want the promise to be fulfilled for everyone, collaboration between companies, governments, and users is necessary. Talk, test, and adjust: that's how you build trust.