Google’s Gemini “Nano Banana” phenomenon converts ordinary selfies into highly stylised images, from glossy 3D figurine portraits to retro, 90s-Bollywood saree looks. The feature set powering these edits is widely described as a rebrand of Google’s Flash image models (Flash 2.5 for the Nano-style figurine edits; Flash 2.0 for some saree variants). The viral appeal comes from easily produced, cinematic outputs: chiffon drapes, grainy film textures, warm golden-hour lighting, and exaggerated, movie-poster framing. That ease of creation is also what turned a playful experiment into a wide social trend on platforms such as Instagram.
Nano Banana: privacy claims, SynthID, and limits of watermarking
Google says images created or edited with Gemini include an invisible SynthID digital watermark and metadata tags to help identify AI-generated content. In Google’s own words: “All images created or edited with Gemini 2.5 Flash Image include an invisible SynthID digital watermark to clearly identify them as AI-generated. Build with confidence and provide transparency for your users.” At the same time, detection tools for SynthID are not yet available to the public.
Experts warn, however, that watermarking is not a panacea. As Ben Colman, CEO of the deepfake-detection firm Reality Defender, put it:
"Watermarking at first sounds like a noble and promising solution, but its real-world applications fail from the onset when they can be easily faked, removed, or ignored."
Hany Farid, professor at the UC Berkeley School of Information, also cautioned:
"Some experts think watermarking can help in AI detection, but its limitations need to be understood. Nobody thinks watermarking alone will be sufficient."
Nano Banana: user experiences and an Indian police advisory
Several users and commentators raised an alarm after testing the trend. One Instagram user, Jhalak Bhawnani, described a disturbing result after generating a saree portrait: “I generated my image and I found something creepy… so a trend is going viral on Instagram where you upload your image on Gemini with a prompt and Gemini converts it into a saree… I tried it last night and I found something very creepy on this,” she wrote. Her biggest shock: “How does Gemini know I have a mole in this part of my body? You can see this mole… this is very scary, very creepy… I am still not sure how this happened,” she said, urging followers to be careful about what they upload to AI platforms.
Indian police officer VC Sajjanar posted a public warning: “Be cautious with trending topics on the internet! Falling into the trap of the 'Nano Banana' trending craze... if you share personal information online, such scams are bound to happen. With just one click, the money in your bank accounts can end up in the hands of criminals. Never share photos or personal details with fake websites or unauthorized apps. You can share your joyful moments on social media trends, but don't forget that safety should be your top priority. If you step onto an unseen path, you're certain to fall into a pit... Think twice before uploading your photos or personal information. These trends come and make a fuss for a few days before disappearing... Once your data goes to fake websites or unauthorized apps, retrieving it is difficult. Remember... your data, your money - your responsibility." (translated).
Why Nano Banana is different from a simple filter
The underlying worry is not about poor aesthetics or bad edits. Because Gemini edits user-supplied photos, the system may be ingesting high-resolution facial data and deriving consistent facial features across edits. That derived biometric signal (facial geometry, persistent feature markers) is exactly the kind of data that can be reused for re-identification or to train recognition systems.
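To make the re-identification risk concrete, here is a minimal sketch, assuming the open-source face_recognition library and hypothetical file names; it illustrates the general technique, not Gemini’s actual pipeline. A single uploaded selfie is enough to derive a compact face embedding that can later match the same person in unrelated photos:

```python
# Hypothetical illustration: how one selfie yields a reusable biometric
# signature. Uses the open-source face_recognition library, not Gemini.
import face_recognition

# Derive a 128-dimensional face embedding from an uploaded selfie.
selfie = face_recognition.load_image_file("uploaded_selfie.jpg")
embedding = face_recognition.face_encodings(selfie)[0]

# Later, that same embedding can re-identify the person elsewhere.
other_photo = face_recognition.load_image_file("photo_from_elsewhere.jpg")
for face in face_recognition.face_encodings(other_photo):
    distance = face_recognition.face_distance([embedding], face)[0]
    if distance < 0.6:  # this library's commonly used match threshold
        print(f"Likely the same person (distance={distance:.2f})")
```

Once such an embedding exists, deleting the original photo no longer removes the biometric signal, which is why what gets uploaded matters more than what gets displayed.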
Precedent cases show how biometric image programs and insecure data practices cause real harm: the Meta / Facebook Texas settlement (mass biometric gathering; a USD 1.4B settlement); Lensa.ai / Prisma Labs litigation in Illinois over alleged collection and reuse of facial geometry data; Clearview AI’s scraping and creation of a facial database; leaks from the Tea Dating Advice app; exposed AI image databases such as GenNomis / AI-Nomis; and Madurai Police’s Copseye app leaking sensitive data. Those incidents illustrate the spectrum of harms, from private data exposure to legal and regulatory penalties, that follow when facial or image data are mismanaged.
Practical precautions users should take now
Experts and recent reporting recommend straightforward risk-mitigation steps for users who still want to experiment:
- Avoid uploading high-resolution ID photos or images containing sensitive body-area details.
- Strip metadata (location tags and other EXIF data) before uploading; a minimal sketch follows this list.
- Prefer on-device or local-only processing where offered; check whether the app explicitly says uploads are not retained.
- Review app permissions and privacy policies for terms like retain, train, sublicense, and third party (a quick keyword scan is sketched below).
- Limit sharing and avoid unofficial third-party sites that mimic the platform.
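For the metadata step, one approach is to re-save only the pixel data so the EXIF block (GPS coordinates, device identifiers) is left behind. This is a minimal sketch assuming the Pillow imaging library and hypothetical file names:

```python
# Minimal sketch: re-save only pixel data, dropping EXIF (GPS, device info).
# Assumes Pillow is installed; file names are hypothetical.
from PIL import Image

with Image.open("selfie.jpg") as img:
    clean = Image.new(img.mode, img.size)  # same size and mode, no metadata
    clean.putdata(list(img.getdata()))     # copy raw pixels only
    clean.save("selfie_clean.jpg")         # written without the EXIF block
```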
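For the policy-review step, even a crude keyword scan can surface clauses worth a closer read. The helper below is a hypothetical sketch, not a substitute for actually reading the policy:

```python
import re

# Terms that often signal retention, training, or resale of uploaded images.
RED_FLAGS = ["retain", "train", "sublicense", "third party", "third-party"]

def flag_policy(policy_text: str) -> None:
    """Print each red-flag term with a little surrounding context."""
    for term in RED_FLAGS:
        for match in re.finditer(re.escape(term), policy_text, re.IGNORECASE):
            start = max(0, match.start() - 50)
            snippet = policy_text[start:match.end() + 50].replace("\n", " ")
            print(f"[{term}] ...{snippet}...")
```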
The Nano Banana craze shows how quickly generative image tools move from experimental labs into mass social use. The mix of invisible watermarks, limited public detection tooling, and past incidents of biometric misuse creates fertile ground for both misuse and misunderstanding.