AI Insights
TikTok is being flooded with racist AI videos generated by Google’s Veo 3

TikTok notes that it uses both technology and human moderators to identify rule-breaking content, but the sheer volume of uploads makes timely moderation difficult. While the racist videos racked up millions of views, a TikTok spokesperson tells Ars that more than half of the accounts cited in the MediaMatters report were banned for policy violations before the report was published, and the remainder have now been removed.
As for Google, it has a comprehensive Prohibited Use Policy that bans the use of its services to promote hate speech, harassment, bullying, intimidation, and abuse. The videos uncovered by MediaMatters all seem to fall under one or more of these categories. In a perfect world, Veo 3 would refuse to create these videos, but vague prompts and the AI's inability to understand the subtleties of racist tropes (e.g., the use of monkeys in place of humans in some videos) make it easy to skirt the rules.
TikTok, the world's leading social video platform, is a natural place for these videos to spread, but the problem isn't exclusive to TikTok. X (formerly Twitter) has gained a reputation for very limited moderation, leading to an explosion of hateful AI content. This problem could also get worse very soon: Google plans to integrate Veo 3 into YouTube Shorts, which could make it even easier for similar content to spread on YouTube.
TikTok and Google have clear prohibitions on this content, which should have prevented it from being seen millions of times on social media. Enforcement of those policies, however, is lacking. TikTok is seemingly unable to keep up with the flood of video uploads, and Google’s guardrails appear insufficient to block the creation of this content. We’ve reached out to Google to inquire about Veo 3’s safety features but have not yet heard back.
For as long as generative AI has existed, people have used it to create inflammatory and racist content. Google and others always talk about the guardrails to prevent misuse, but they can’t catch everything. The realism of Veo 3 makes it especially attractive for those who want to spread hateful stereotypes. Maybe all the guardrails in the world won’t stop that.
AI workers are boosting rents across the US

The newest wave of tech workers isn’t just filling office towers — it’s bidding up apartments in cities already notorious for high housing costs.
Across the US and Canada, the number of workers with artificial intelligence skills has surged by more than 50% in the past year, topping 517,000, according to CBRE.
Much of that growth is clustered in the San Francisco Bay Area, New York City, Seattle, Toronto and the District of Columbia — areas where rents were straining households even before the AI boom.
The result: a fresh wave of demand that has helped push Manhattan rents up more than 14% between 2021 and 2024, Washington more than 12% in that same span, Seattle more than 7% and San Francisco nearly 6%.
New York gained about 20,000 AI-skilled workers over the past year alone, while other hubs including Atlanta, Chicago, Dallas-Fort Worth, Toronto and Washington each logged increases of 75% or more.
High salaries in AI allow workers to shoulder those rents — CBRE found Manhattan’s AI professionals spend about 29% of their income on housing, while in San Francisco and DC the share drops closer to 19%.
That affordability for one group is adding to the squeeze on everyone else.
Colin Yasukochi, executive director of CBRE’s Tech Insights Center, said San Francisco illustrates the trend.
“With this AI revolution, it’s been a fundamental game changer for the city of San Francisco, because that’s really ground zero for the AI revolution and where most of these major high-profile firms like OpenAI are located,” he told CNBC.
Unlike other parts of the tech sector that turned to remote work, AI firms are filling office towers. In San Francisco, 1 out of every 4 square feet leased over the past two and a half years went to an AI tenant.
“AI is predominantly in-office work, and they’re sort of back to the earlier days of tech innovation, where they’re in the office five, six days a week and for long hours,” Yasukochi said.
AI chatbot users report mental health issues

A growing number of people are reporting that artificial intelligence chatbots like ChatGPT are triggering mental health crises, including delusional thinking, psychotic episodes, and even suicide.
O. Rose Broderick, who covers disability at STAT, spoke to doctors and researchers who are racing to understand this phenomenon.
This segment airs on September 10, 2025. Audio will be available after the broadcast.