AI Insights
Exposed: xAI’s Grok app made private conversations public

[Photo caption: Grok, the artificial intelligence assistant, previously praised Nazi leader Adolf Hitler in a series of posts deemed anti-Semitic. Credit: AFP/Lionel Bonaventure]
More than 370,000 private conversations from xAI’s Grok app were exposed this week after a design flaw in its sharing feature made them searchable on Google and other search engines. The company’s ‘Share’ button created public URLs that were indexed by search crawlers, turning private chats into public records, according to reports.
What Happened
Grok’s Share button created public pages for conversations. Because those pages were neither access-controlled nor flagged “noindex,” search crawlers followed and indexed them, making ordinary chats (and, in some cases, attachments) discoverable to anyone. This mirrors a July 31 incident in which ChatGPT’s opt-in “discoverable” share links were also indexed, prompting OpenAI to disable the feature and coordinate removals.
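For readers who build or audit similar features, the missing safeguards are conceptually small. The sketch below (Python with Flask; the route, token store, and function names are all hypothetical, since neither xAI’s nor OpenAI’s actual implementation is public) illustrates the two controls the reports say were absent: an access check on the share page, and a noindex directive so crawlers skip the page even when they can reach it.

```python
# Illustrative only: a share endpoint with (1) an access check and
# (2) a noindex header. All names here are hypothetical.
from flask import Flask, abort, make_response, request

app = Flask(__name__)

# Stand-in token store; a real service would use signed, expiring
# tokens rather than treating "knows the URL" as authorization.
VALID_TOKENS = {"chat-123": "s3cret-token"}

def render_chat(chat_id: str) -> str:
    return f"<html><body>Transcript for {chat_id}</body></html>"

@app.route("/share/<chat_id>")
def share_page(chat_id: str):
    # Access control: possession of the link alone is not enough.
    if request.args.get("token") != VALID_TOKENS.get(chat_id):
        abort(403)
    resp = make_response(render_chat(chat_id))
    # Even for permitted viewers, tell crawlers not to index the page.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp
```

The equivalent page-level signal is a meta tag of the form <meta name="robots" content="noindex"> in the HTML head; either is enough to keep well-behaved crawlers from indexing the page.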
Who is Most Exposed (and why)
Anyone who has used AI tools for personal or work-related tasks could be at risk. The most exposed groups include:
- Employees using personal AI accounts for work, a major source of sensitive prompts and file uploads, especially source code.
- Users who shared a link to save or show a chat. If a link is public and not noindexed, crawlers will likely find it, and the exposure extends beyond Google to Bing and DuckDuckGo.
Speaking on the severity of the leaks, Anirudh Agarwal, CEO of OutreachX, says, “A share link is a publication, not a whisper. Once a crawler can reach it, you trigger distribution, not just disclosure; caches outlive your delete button. Set sane defaults (noindex and access controls), separate work from personal use, and keep a fast-removal playbook for Google and Bing.”
Agarwal offers the following advice for affected Digital Journal readers.
What to do now?
1) Check if Your Chats Are Public (within 2 minutes)
Open an incognito window and search:
- site:grok.com “unique phrase from your chat”
- site:grok.x.ai “unique phrase from your chat”
Repeat this process on Bing and DuckDuckGo, saving each URL you find. (Reporters verified Grok share pages were being indexed this way.)
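If you still have a share URL on hand, you can also check it directly. Below is a minimal sketch (Python with the requests library; the URL shown is a placeholder) that reports whether a link is still publicly reachable and whether it carries any noindex signal. Because a search engine’s cache can outlive the page itself, this complements rather than replaces the searches above.

```python
# Rough check of a share URL you already possess. The URL below is a
# placeholder; substitute a link you actually created.
import requests

def check_share_url(url: str) -> None:
    resp = requests.get(url, timeout=10)
    robots_header = resp.headers.get("X-Robots-Tag", "")
    # Crude text scan for a robots meta tag; a proper check would
    # parse the HTML head.
    meta_noindex = "noindex" in resp.text.lower() and "robots" in resp.text.lower()
    print(f"{url} -> HTTP {resp.status_code}")
    print(f"  X-Robots-Tag header: {robots_header or '(none)'}")
    print(f"  meta noindex hint found: {meta_noindex}")
    if resp.status_code == 200 and "noindex" not in robots_header and not meta_noindex:
        print("  Page is live and indexable: delete it and file removal requests.")

check_share_url("https://grok.com/share/your-share-id")
```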
2) Delete the Conversation at the Source (inside X/Grok)
- On X (web): Settings → Privacy & Safety → Data sharing and personalization → Grok → Delete Conversation History, then confirm to “Delete your interactions, inputs, and results.”
- Grok mobile app (iOS/Android): Open Settings → Data control → Delete all Conversations → confirm.
Following these steps, your chats will be removed from xAI’s systems within 30 days.
3) Google’s Cleanup Process
- Log in to your Google account
- Open the Refresh Outdated Content tool
- Enter the URL of the page or image in the required format. (For an image request, you must file a separate request on every page where the image appears.)
- Click Submit.
4) Do the Same for Bing/DuckDuckGo
- Log in to your Bing Webmaster Tools account.
- Go to their content removal page
- In the Content URL input box, enter the exact URL you found in the Bing web results (by using Copy Shortcut/Copy Link Address functionality in your browser).
- In the Removal type dropdown menu, select Remove page.
- Click Submit.
Because DuckDuckGo sources its traditional links largely from Bing, a single Bing removal request helps clean up both engines.
5) ChatGPT (shared links & chat deletion)
On the web: Settings → Data controls → Shared links → Manage
In the modal, click the trash icon to delete a shared link or the underlying chat; deleting the link invalidates the URL.
Deleting chats (web): Hover over a chat in the sidebar, click the three-dot menu (⋯), then choose Delete. Confirm when prompted.
On Android: Tap the menu (≡) in the top-left. Locate the chat, press and hold the title. Tap the red Delete option.
On iOS: Tap the menu (≡) in the top-left. Find the chat, press and hold its title. Tap Delete (red).
6) Prevent a Future Leak
- In X → Privacy & safety → Grok, review data-sharing/training settings and avoid posting public share links. If sharing is necessary, prefer screenshots or redacted text.
Data Privacy vs. Chat Leaks (Law vs. Outcome)
What Privacy Law Expects:
- Principles (GDPR Art. 5): Lawfulness, fairness, transparency; purpose limitation; data minimization; integrity/confidentiality.
- Privacy by design & default (GDPR Art. 25): By default, only necessary personal data should be accessible, not open to an indefinite number of people.
- Breach concept (GDPR Art. 4(12)): Includes unauthorised disclosure or access, even if accidental.
- Erasure (GDPR Art. 17): People can request deletion “without undue delay.” (Search caches may require separate refresh/removal requests.)
How the Grok Case Contrasts:
- Public-by-URL ≠ privacy-by-default: Crawlable share pages run counter to Art. 25’s expectation that personal data is not accessible to an indefinite audience by default.
- Risk of unauthorized disclosure: If shared pages include personal data and become searchable, the situation aligns with the GDPR’s breach definition, even in the absence of “hacking.”
- Deletion vs. search reality: Deleting chats is necessary but insufficient; caches/snippets often linger until you file Refresh Outdated Content (and, where relevant, Search Console Removals).
What next?
A single design flaw (public share links without index protection) turned private conversations into public records. The incidents show that sensitive material routinely flows into AI tools, and that the risk of exposure is not confined to one platform or search engine. They underscore the need for companies and individuals to clean up exposed URLs, tighten sharing defaults, and document a response plan. With new EU AI Act obligations for general-purpose AI now in effect, the bar for privacy-respecting defaults in AI products is rising.
AI Insights
University of North Carolina hiring Chief Artificial Intelligence Officer

The University of North Carolina (UNC) System Office has announced it is hiring a Chief Artificial Intelligence Officer (CAIO) to provide strategic vision, executive leadership, and operational oversight for AI integration across the 17-campus system.
Reporting directly to the Chief Operating Officer, the CAIO will be responsible for identifying, planning, and implementing system-wide AI initiatives. The role is designed to enhance administrative efficiency, reduce operational costs, improve educational outcomes, and support institutional missions across the UNC system.
The position will also act as a convenor of campus-level AI leads, data officers, and academic innovators, with a brief to ensure coherent strategies, shared best practices, and scalable implementations. According to the job description, the role requires coordination and diplomacy across diverse institutions to embed consistent policies and approaches to AI.
The UNC System Office includes the offices of the President and other senior administrators of the multi-campus system. Nearly 250,000 students are enrolled across 16 universities and the NC School of Science and Mathematics.
System Office staff are tasked with executing the policies of the UNC Board of Governors and providing university-wide leadership in academic affairs, financial management, planning, student affairs, and government relations. The office also has oversight of affiliates including PBS North Carolina, the North Carolina Arboretum, the NC State Education Assistance Authority, and University of North Carolina Press.
The new CAIO will work under a hybrid arrangement, with at least three days per week onsite at the Dillon Building in downtown Raleigh.
UNC’s move to appoint a CAIO reflects a growing trend of U.S. universities formalizing AI integration strategies at the leadership level. Last month, Rice University launched a search for an Assistant Director for AI and Education, tasked with leading faculty-focused innovation pilots and embedding responsible AI into classroom practice.
AI Insights
Pre-law student survey unmasks fears of artificial intelligence taking over legal roles

“We’re no longer talking about AI just writing contracts or breaking down legalese. It is reshaping the fundamental structure of legal work. Our future lawyers are smart enough to see that coming. We want to provide them this data so they can start thinking about how to adapt their skills for a profession that will look very different by the time they enter it,” said Arush Chandna, Juris Education founder, in a statement.
Juris Education noted that law schools are already integrating legal tech, ethics, and prompt engineering into curricula. The American Bar Association’s 2024 AI and Legal Education Survey revealed that 55 percent of US law schools were teaching AI-specific classes and 83 percent enabled students to learn effective AI tool use through clinics.
Juris Education’s director of advising Victoria Inoyo pointed out that AI could not replicate human communication skills.
“While AI is reshaping the legal industry, the rise of AI is less about replacement and more about evolution. It won’t replace the empathy, judgment, and personal connection that law students and lawyers bring to complex issues,” she said. “Future law students should focus on building strong communication and interpersonal skills that set them apart in a tech-enhanced legal landscape. These are qualities AI cannot replace.”
Juris Education’s survey gathered responses from 220 pre-law students. Maintaining work-life balance was the most cited career concern, named by 21.8 percent of respondents, while rising student debt combined with low job security ranked third, cited by 17.3 percent as their biggest career fear.
AI Insights
Trust in Businesses’ Use of AI Improves Slightly

WASHINGTON, D.C. — About a third (31%) of Americans say they trust businesses a lot (3%) or some (28%) to use artificial intelligence responsibly. Americans’ trust in the responsible use of AI has improved since Gallup began measuring this topic in 2023, when just 21% of Americans said they trusted businesses on AI. Still, about four in 10 (41%) say they do not trust businesses much when it comes to using AI responsibly, and 28% say they do not trust them at all.
These findings from the latest Bentley University-Gallup Business in Society survey are based on a web survey with 3,007 U.S. adults conducted from May 5-12, 2025, using the probability-based Gallup Panel.
Most Americans Neutral on Impact of AI
When asked about the net impact of AI — whether it does more harm than good — Americans are increasingly neutral about its impact, with 57% now saying it does equal amounts of harm and good. This figure is up from 50% when Gallup first asked this question in 2023. Meanwhile, 31% currently say they believe AI does more harm than good, down from 40% in 2023, while a steady 12% believe it does more good than harm.
The decline from 2023 to 2025 in the percentage of Americans who believe AI does more harm than good is driven by improving attitudes among older Americans. Skepticism about AI’s total impact on society exists across all age groups but tends to be higher among younger Americans.
Majority of Americans Are Concerned About AI Impact on Jobs
Those who believe AI does more harm than good may be thinking at least partly about the technology’s impact on the job market. A large majority (73%) of Americans believe AI will reduce the total number of jobs in the United States over the next 10 years, a figure that has remained stable across the three years Gallup has asked this question.
Younger Americans aged 18 to 29 are slightly more optimistic about the potential of AI to create more jobs. Fourteen percent of those aged 18 to 29 say AI will lead to an increase in the total number of jobs, compared with 9% of those aged 30 to 44, 7% of those aged 45 to 59 and 6% of those aged 60 and over.
Bottom Line
As AI becomes more common in personal and professional settings, Americans report increased confidence that businesses will use it responsibly and are more comfortable with its overall impact.
Even so, worries about AI’s effect on jobs persist, with nearly three-quarters of Americans believing the technology will reduce employment opportunities in the next decade. Younger adults are somewhat more optimistic about the potential for job creation, but they, too, remain cautious. Still, concerns about ethics, accountability and the potential unintended consequences of AI are top of mind for many Americans.
These results underscore the challenge businesses face as they deploy AI: They must not only demonstrate the technology’s benefits but also show, through transparent practices, that it will not come at the expense of workers or broader public trust. How businesses address these concerns will play a central role in shaping whether AI is ultimately embraced or resisted in the years ahead.
Learn more about how the Bentley University-Gallup Business in Society research works.
Learn more about how the Gallup Panel works.