Google is officially rolling out powerful new capabilities to its next-gen AI assistant, Gemini, making advanced tools available to all eligible Android users.
Features previously limited to select Pixel, Samsung, and Gemini Advanced users are now reaching a broader audience through the latest update, powered by the Gemini 2.5 models. Two major capabilities are included:
- Screen sharing with AI assistance
- Real-time video input for contextual responses
These features allow users to interact with Gemini in a more natural and intelligent way — simply by showing it what’s on their screen or through the camera in real time.
What is Gemini?
Gemini is Google’s AI-powered assistant, developed as a replacement for the traditional Google Assistant across Android devices. Unlike its predecessor, Gemini uses multimodal AI, which means it can understand and respond to:
- Text
- Voice
- Images
- Video
- Real-time screen or camera input
With this update, Gemini becomes more than just a voice assistant — it transforms into a smart, context-aware helper capable of assisting in tasks like troubleshooting tech issues, interpreting visual content, or diagnosing simple mechanical problems.
What’s New in Gemini 2.5?
The latest update is backed by Gemini 2.5 Pro and Gemini 2.5 Flash, both currently experimental. Google also announced Deep Research with 2.5 Pro, a new functionality geared toward advanced users that enables more in-depth assistance and task handling.
Key new capabilities include:
- Screen Sharing: Let Gemini see your screen and ask questions about what’s displayed.
- Camera View Support: Share real-time video through your camera to get instant answers based on what Gemini sees.
These enhancements significantly improve how users can interact with the assistant, making it useful for both casual and technical problem-solving.
Gemini Advanced: Free for US Students
In a move to support education, Google has made its Gemini Advanced subscription — normally priced at $20/month — available for free to college students in the United States. This subscription includes:
- Access to the most powerful Gemini models
- 2 TB of cloud storage
- Priority access to new AI features
iPhone Users Also Included
iPhone users can also access Gemini’s latest multimodal features — but only through the Gemini Advanced subscription. Once subscribed, they gain the same functionality as Android users, including visual and real-time video input support.
How to Access Gemini Live
To use the new Gemini Live capabilities:
- Open the Gemini app on your device.
- Tap the Gemini Live icon.
- Choose either the camera icon or the screen share icon.
- Start interacting with Gemini in real time.
Whether you’re trying to troubleshoot your phone, understand what’s happening on your laptop screen, or need help identifying something through your camera, Gemini is now equipped to assist — visually and contextually.
Final Thoughts
Google’s expansion of Gemini’s capabilities signals a big leap in how AI assistants can support users in their daily lives. With multimodal understanding, real-time visual input, and advanced model access now available to more people, Gemini is well on its way to becoming one of the most powerful AI tools in the hands of everyday users.
As Google Assistant is gradually phased out, Gemini is set to redefine the future of AI assistance on mobile devices.