Tools & Platforms
Is Google’s Gemini Nano Banana AI tool safe? Privacy, watermarks and other safety concerns experts warn about

Google launched the Gemini Nano Banana AI tool last month. It has since taken the internet by storm, with users creating 3D figurines and, more recently, joining the Gemini Nano Banana AI Saree trend on Instagram, which turns ordinary photos into dramatic 90s Bollywood-style portraits. The trend, however, has sparked fresh warnings about privacy and security risks linked to uploading personal images online.
What is the Nano Banana AI trend: From 3D figurines to vintage sarees
The “Nano Banana” craze, powered by Google’s Gemini Nano model, allows users to transform selfies into stylised 3D figurine portraits with glossy skin and exaggerated features. Building on its popularity, a new variant, the “Banana AI Saree” trend, is making waves on Meta’s Instagram, reimagining portraits in retro Bollywood-inspired saree looks, often featuring chiffon drapes, cinematic backdrops and vintage textures.
Is using Google Gemini Nano Banana safe?
Google says images created with Gemini carry an invisible watermark known as SynthID, along with metadata tags, to help verify AI-generated content. “All images created or edited with Gemini 2.5 Flash Image include an invisible SynthID digital watermark to clearly identify them as AI-generated. Build with confidence and provide transparency for your users,” information on aistudio.google.com states.
However, detection tools for SynthID are not yet available to the public, and experts point out that watermarks can be tampered with. A report by Wired quoted Ben Colman, CEO of Reality Defender, as saying: “Watermarking at first sounds like a noble and promising solution but its real-world applications fail from the onset when they can be easily faked, removed or ignored.”
Hany Farid, professor at the UC Berkeley School of Information, told Wired that watermarking has potential but is not a standalone safeguard: “Some experts think watermarking can help in AI detection but its limitations need to be understood. Nobody thinks watermarking alone will be sufficient.”
Indian police officer’s advisory on use of Google Gemini Nano Banana
Indian Police Service officer VC Sajjanar has also cautioned users about risks tied to the Nano Banana trend. In a post on X, Sajjanar said, “Be cautious with trending topics on the internet! Falling into the trap of the ‘Nano Banana’ craze can be risky. If you share personal information online, scams are bound to happen. With just one click, the money in your bank accounts can end up in the hands of criminals” (translated).
He also urged users to avoid fake websites or unofficial apps mimicking Gemini’s platform: “Once your data reaches a fake website, retrieving it becomes very difficult. Your data, your money — your responsibility.”
How to safely use Google Gemini Nano Banana
Experts recommend that users take precautions before engaging with viral AI tools. These include avoiding the upload of sensitive or private photos, stripping metadata such as location tags, and tightening privacy settings on social media. Limiting where and how images are shared can also reduce the risk of misuse.
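The metadata advice above can be illustrated with a minimal, standard-library-only sketch that drops the APP1 segment (where EXIF data, including GPS location tags, is stored) from a JPEG before it is shared. The function name and structure are illustrative only; for real photos, a maintained tool such as exiftool or the Pillow library is the safer choice.

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Remove APP1 (EXIF, incl. GPS location tags) segments from a JPEG byte stream."""
    if jpeg_bytes[:2] != b"\xff\xd8":  # SOI marker
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            raise ValueError("corrupt segment marker")
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start-of-scan: image data follows, copy the rest verbatim
            out += jpeg_bytes[i:]
            break
        # Segment length covers the 2 length bytes plus the payload
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        segment = jpeg_bytes[i:i + 2 + length]
        if marker != 0xE1:  # drop APP1 (EXIF/XMP); keep all other segments
            out += segment
        i += 2 + length
    return bytes(out)
```

This only removes embedded metadata; it does nothing about what the image itself shows, which is why experts also advise against uploading sensitive photos in the first place.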
Tools & Platforms
Augustana University announces AI expert, bestselling author as Critical Inquiry & Citizenship Colloquium speaker

Sept. 16, 2025
This piece is sponsored by Augustana University.
Augustana University’s third annual Critical Inquiry & Citizenship Colloquium will culminate with Dr. Joy Buolamwini as the featured speaker.
Buolamwini, bestselling author, MIT researcher and founder of the Algorithmic Justice League, will give a keynote presentation to the Augustana community, alumni and friends at 4 p.m. Oct. 25 in the Elmen Center, with a book signing to follow.
Generously supported by Rosemarie and Dean Buntrock and in partnership with Augustana’s Center for Western Studies, the Critical Inquiry & Citizenship Colloquium was established in 2023. The colloquium is designed to promote civil discourse and deep reflection with the goal of enhancing students’ skills to think critically and communicate persuasively as citizens of a pluralistic society.
“In an era of unprecedented technological advancement, Dr. Buolamwini’s insights urge us to consider not only the capabilities of artificial intelligence but its ethical implications. Her participation in this year’s colloquium invites meaningful dialogue around integrity, responsibility and the human experience,” Augustana President Stephanie Herseth Sandlin said.
In addition to being a researcher, model and artist, Buolamwini is the author of the U.S. bestseller “Unmasking AI: My Mission To Protect What Is Human in a World of Machines.”
Buolamwini’s research on facial recognition technologies transformed the field of AI auditing. She advises world leaders on preventing AI harm and lends her expertise to congressional hearings and government agencies seeking to enact equitable and accountable AI policy.
Buolamwini’s TEDx Talk on algorithmic bias has almost 1.9 million views, and her TED AI Talk explores protecting human rights in an age of AI.
As the “Poet of Code,” she also creates art to illuminate the impact of AI on society, with her work featured in publications such as Time, The New York Times, Harvard Business Review, Rolling Stone and The Atlantic. Her work as a spokesmodel also has been featured in Vogue, Allure, Harper’s Bazaar and People. She is the protagonist of the Emmy-nominated documentary “Coded Bias.”
Buolamwini is the recipient of notable awards, including the Rhodes Scholarship, Fulbright Fellowship, Morals & Machines Prize, as well as the Technological Innovation Award from the King Center. She was selected as a 2022 Young Global Leader, one of the world’s most promising leaders younger than 40 as determined by The World Economic Forum, and Fortune named her the “conscience of the AI revolution.”
“Many associate AI with advancement and intrigue. Dr. Buolamwini invites cognitive dissonance by demonstrating the potentials for harm caused by the unexamined use of AI,” said Dr. Shannon Proksch, assistant professor of psychology and neuroscience at Augustana.
“Dr. Buolamwini’s visit will invite the Augustana community to engage in critical thinking and deep reflection around how algorithmic technology intersects with our lives and society as a whole. Her work embodies the goals of the Critical Inquiry & Citizenship Colloquium by challenging us to acknowledge the human impact of AI and remain vigilant about the role that we play in ensuring that these technologies do more to benefit and strengthen our communities than to harm them.”
“AI is the most powerful and disruptive technology of our time, so we’re very excited to bring Dr. Buolamwini to Sioux Falls. She’s an engaging and dynamic speaker whose research and life experience have given her deep insight into how we can ensure that AI is used to promote the flourishing of all,” said Dr. Stephen Minister, Stanley L. Olsen Chair of Moral Values and professor of philosophy at Augustana.
Tickets for the 2025 Critical Inquiry & Citizenship Colloquium are free and available to the public at augie.edu/CICCTickets.
About the Critical Inquiry & Citizenship Colloquium
In partnership with the Center for Western Studies and supported by Rosemarie and Dean Buntrock, this annual one- or two-day colloquium is intended to feature faculty scholars and students, as well as industry, research and policy experts who inspire and facilitate critical thinking, persuasive reasoning and thoughtful discussion around timely and engaging topics in areas ranging from religion, science and politics to history, technology and business. The colloquium kicks off or culminates in a keynote given by thought leaders of national or global prominence.
Tools & Platforms
Conduent Integrates AI Technologies to Modernize Government Payments, Combat Fraud and Improve Customer Experiences for Beneficiaries

Successfully completed AI pilot with Microsoft – now live – boosts fraud detection
FLORHAM PARK, N.J., September 16, 2025–(BUSINESS WIRE)–Conduent Incorporated (Nasdaq: CNDT), a global technology-driven business solutions and services company, is embedding generative AI (GenAI) and other advanced AI technologies into its suite of solutions for state and federal agencies. These technologies aim to improve the disbursement of critical government benefits, enhance the citizen experience, and fortify fraud prevention across major aid programs like Medicaid and the Supplemental Nutrition Assistance Program (SNAP).
As part of a recently completed GenAI pilot with Microsoft – originally announced in 2024 and now fully deployed – Conduent has significantly increased its fraud detection capacity for its largest open-loop payment card programs. Because these cards can be used at a wide range of merchants, monitoring for fraud is particularly complex. Leveraging AI, a small team of specialists can now surveil tens of thousands of accounts for suspicious activity, including identity theft and account takeover, with significant improvements in accuracy. This capability is in the process of being scaled to other payment card programs.
Following the pilot’s success, Conduent is now seeking to apply similar AI methodologies to help detect and prevent fraud in Medicaid and closed-loop EBT cards, including SNAP benefits – helping safeguard usage at approved retailers. A leader in government payment disbursements, Conduent currently supports electronic payments for public programs in 37 states.
“As states adapt to evolving budget constraints and eligibility requirements, AI can empower agencies to reduce fraud and improper payments while improving service delivery,” said Anna Sever, President, Government Solutions at Conduent. “With decades of experience supporting critical government programs, Conduent is deepening its investment in AI to expand these gains across multiple programs.”
Transforming Customer Support with AI
Conduent is also deploying AI to drive improvements in the contact center experience for public benefit recipients. A standout example is the Conduent GenAI-powered capability that equips agents with instant access to accurate, program-specific information – reducing call handling times.
Conduent provides U.S. agencies with solutions for healthcare claims administration, government benefit payments, eligibility and enrollment, and child support. Visit Conduent Government Solutions to learn more.
Tools & Platforms
CobaltStrike’s AI-native successor, ‘Villager,’ makes hacking too easy

Villager can be weaponized for attacks
According to Straiker, Villager integrates AI agents to perform tasks that typically require human intervention, including vulnerability scanning, reconnaissance, and exploitation. Its AI can generate custom payloads and dynamically adapt attack sequences based on the target environment, effectively reducing dwell time and increasing success rates.
The framework also includes a modular orchestration system that allows attackers, or red teamers, to chain multiple exploits automatically, simulating sophisticated attacks with minimal manual oversight.
Villager’s dual-use nature is the crux of the concern. While it can be used by ethical hackers for legitimate testing, the same automation and AI-native orchestration make it a powerful weapon for malicious actors. Randolph Barr, chief information security officer at Cequence Security, explained, “What makes Villager and similar AI-driven tools like HexStrike so concerning is how they compress that entire process into something fast, automated, and dangerously easy to operationalize.”
Straiker traced Cyberspike to a Chinese AI and software development company operating since November 2023. A quick lookup on a Chinese LinkedIn-like website, however, revealed no information about the company. “The complete absence of any legitimate business traces for ‘Changchun Anshanyuan Technology Co., Ltd,’ along with no website available, raises some concerns about who is behind running ‘Red Team Operations’ with an automated tool,” Straiker noted in the blog.
Supply chain and detection risks
Villager’s presence on a trusted public repository like PyPI, where it was downloaded more than 10,000 times in the last two months, introduces a new vector for supply chain compromise. Jason Soroko, senior fellow at Sectigo, advised that organizations “focus first on package provenance by mirroring PyPI, enforcing allow lists for pip, and blocking direct package installs from build and user endpoints.”
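The provenance controls Soroko describes can be sketched as a pip configuration that routes every install through a curated internal mirror rather than public PyPI; the hostname below is an illustrative placeholder, not a real deployment.

```ini
# pip.conf (e.g. /etc/pip.conf on build and user endpoints)
# Hypothetical internal-mirror setup; the hostname is an illustrative placeholder.
[global]
# All installs resolve against a vetted mirror instead of pypi.org
index-url = https://pypi-mirror.internal.example/simple
```

Combined with blocking direct outbound access to pypi.org at the network edge, a configuration along these lines means a package like Villager cannot reach build machines unless it has been explicitly vetted into the mirror.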