Business
Silicon Valley AI company to aid safety at Super Bowl, World Cup – NBC Bay Area

With two of the biggest events in all of sports descending on the South Bay next year, crowd control and safety are big concerns.
A Silicon Valley company says artificial intelligence can help manage those issues with the hundreds of thousands of fans expected for the Super Bowl at Levi’s Stadium in February and the FIFA World Cup in June.
San Francisco-based Ouster is one of the leaders in the industry when it comes to physical AI. It has vowed to make the fan experience at Levi’s Stadium safer for those two massive, global events.
Video of out-of-control crowds rushing into the COPA Cup last year in Miami sparked a new level of concern at stadiums everywhere. Non-ticketed soccer fans stampeded their way into Hard Rock Stadium, overwhelming security.
Ouster says its technology can help prevent similar chaos during the 2026 World Cup.
“Our systems are actually used in venues today to live track the crowd density in big concert venues and alert security when that density is going to go high so you can take action and actually prevent build-up and a very dangerous situation — kind of a stampede,” Ouster CEO Angus Pacala says.
Ouster will be installing 3D lidar sensors with laser vision technology around Levi’s Stadium for the Super Bowl and the World Cup to make sure crowds are safe inside and out.
“Ouster deals with the real world,” Pacala says. “And when you deal with the real world, you need to give a machine eyesight on what’s going on.”
Lidar is the same tech used to guide Waymo self-driving vehicles. At stadiums, it detects when a crowd is reaching critical mass at a certain location, like in front of a concert stage. It also counts the exact number of people in a certain part of the stadium.
Outside a stadium, it can control a street light when necessary, such as when someone in a wheelchair is at a crosswalk and might need extra time to cross the street. The technology also is able to distinguish among pedestrians, bicyclists and vehicles.
Law enforcement analyst Michael Leininger says extra eyes always help, especially at big events.
“The ability to not only monitor but count the number of people at an event is a tremendous help for law enforcement,” Leininger says. “And frankly, law enforcement is running short. So to have an extra set of eyes and ears, especially an incredibly accurate set of eyes and ears, is an awesome opportunity.”
Pacala added: “We’re at the forefront of this at Ouster, but we’re also at the very early days.”
Business
AI agents in business create new risks & urgent security needs

Radware has released new research analysing the cybersecurity risks linked to the growing adoption of agentic artificial intelligence (AI) systems in enterprise environments.
The report, titled ‘The Internet of Agents: The Next Threat Surface,’ examines how AI agents driven by large language models (LLMs) are being integrated into business operations. These systems differ from standard chatbots by acting autonomously, executing tasks, and collaborating with other digital agents using protocols such as the Model Context Protocol (MCP) and Agent-to-Agent (A2A).
Expanded attack surface
Organisations are increasingly deploying LLM-powered AI agents into customer service, development, and operational workflows. Unlike traditional software, these agents are capable of reasoning, executing commands, and initiating actions autonomously across enterprise networks.
The report notes that as these agents interact with business systems, they establish complex transitive chains of access to sensitive resources. This can complicate tracking and securing business processes with existing cybersecurity measures. According to Radware, these pathways present “complex pathways into sensitive enterprise resources that are difficult to track or secure with existing defences.”
New protocols and exposures
The adoption of protocols such as MCP and A2A enables enhanced interoperability and scalability for AI agents across different business processes, but this also introduces new risks. The report highlights threats such as prompt injection, tool poisoning, lateral compromise, and malicious handshakes, which take advantage of these emerging protocols.
Prompt injection attacks, in particular, are identified as a growing risk. By embedding covert instructions in content like emails or web pages, attackers can manipulate AI agents to exfiltrate data or initiate unauthorised actions – often without any indication to the end user. The research states: “Adversaries can embed hidden instructions in emails, web pages, or documents. When an AI agent processes that content, it may unwittingly exfiltrate data or trigger unauthorized actions – without the user ever clicking a link or approving a request.”
Lower barriers to cybercrime
The report observes that new “dark AI ecosystems” are emerging, which lower the technical barriers to cybercrime. Black-hat platforms such as XanthoroxAI provide access to offensive AI tools that automate previously manual attacks, including malware creation and phishing campaigns. These tools, offered on a subscription basis, enable less-experienced attackers to develop and deploy exploits more easily.
Radware’s analysis also shows that AI accelerates the weaponisation of new vulnerabilities. The report references research demonstrating that GPT-4 can develop working exploits for recently disclosed vulnerabilities more rapidly than experienced human researchers, reducing the window for IT teams to patch vulnerable systems before attackers strike.
Changing digital landscape
The emergence of the so-called ‘Internet of Agents’ is likened to previous digital shifts, such as the rise of the Internet of Things. In this new ecosystem, autonomous digital actors with memory, reasoning, and action capabilities are increasingly interconnected, resulting in greater operational efficiency but also expanded risk exposure.
Radware’s report argues that organisations must adjust their security models to account for the new roles played by AI agents in the enterprise. With these systems acting as decision-makers, intermediaries, and operational partners, the need for effective governance and security oversight is heightened.
“We are not entering an AI future – we are already living in it,” said [Insert Radware Spokesperson]. “The agentic ecosystem is expanding rapidly across industries, but without strong security and oversight, these systems risk becoming conduits for cybercrime. The businesses that succeed will be those capable of delivering trustworthy, secure AI experiences.”
Security recommendations
The report sets out a series of recommendations for enterprises to protect against the unique risks posed by autonomous AI agents. These include:
- Treating LLMs and AI agents as privileged actors, subject to strict governance and access controls.
- Integrating red-teaming and prompt evaluation exercises into software development lifecycles.
- Evaluating protocols such as MCP and A2A as security-critical interfaces, rather than mere productivity tools.
- Monitoring dark AI ecosystems to stay aware of how adversaries are adapting and exploiting new tools.
- Investing in detection, sandboxing, and behavioural monitoring technologies tailored for autonomous AI systems.
- Recognising that AI-powered defensive capabilities will play an increasingly important role in combating AI-driven threats.
The report concludes by noting that AI agents represent a significant technology shift for businesses. Although these systems hold potential for efficiency and economic growth, they also introduce risks that enterprises must urgently address as the boundaries between helpful tools and security threats continue to blur.
Business
San Francisco Is Investigating Scale AI Over Its Labor Practices

The city of San Francisco is investigating Scale AI over its labor practices, a Scale AI spokesperson confirmed to Business Insider.
Scale AI, which is based in San Francisco, relies heavily on a vast army of people it considers contractors to train tech companies’ latest artificial intelligence models. Meta bought almost half of Scale AI for $14 billion in a blockbuster AI deal this summer.
The city’s investigation is being led by its Office of Labor Standards Enforcement (OLSE), which oversees sick leave, minimum wage, overtime pay, and other regulations for San Francisco workers.
Scale AI spokesperson Natalia Montalvo told Business Insider that the startup is cooperating with the OLSE to provide the information they need and that Scale AI is fully compliant with local laws and regulations.
San Francisco’s investigation into Scale AI is limited to city residents who worked for the startup — including remotely — over the last three years, according to a now-deleted notice posted by Maura Prendiville, a compliance officer at the OLSE, in a subreddit for Outlier AI, a gig work platform run by Scale AI.
While the notice didn’t specify what types of labor practices the city is investigating, it did mention that investigators are looking to speak to people who worked for Scale AI as “taskers” and “freelancers” rather than the startup’s full-time employees.
The investigation’s existence doesn’t mean Scale AI has broken the law, and the city could find it in favor of Scale AI — or drop its probe altogether.
The OLSE declined to answer further questions about the probe, citing its policy of not commenting on ongoing investigations. The agency has the authority to levy fines for labor violations.
In the Reddit post, Prendiville specified that San Francisco is seeking to speak to people who worked for Scale AI through Outlier AI and Smart Ecosystem, another Scale AI platform. She also wrote that she seeks to speak with people who worked for Scale AI through HireArt, a third-party hiring agency, and the gig work marketplace Upwork.
Upwork said it has not been contacted by OLSE.
“Worker classification and compliance with labor regulations are ultimately the responsibility of the hiring business,” an Upwork spokesperson said. “As a general matter, Upwork does not play a role in those determinations.”
Montalvo, the Scale AI spokesperson, said that the feedback the company gets from its contributors is “overwhelmingly positive.”
“Our dedicated teams work hard to ensure contributors are paid fairly, feel supported, and can access the flexible earning opportunities they value,” she said.
It’s not the first time Scale AI has been investigated by labor regulators. It was also the subject of a federal Department of Labor investigation that was dropped this summer, TechCrunch reported.
Some Scale AI workers have previously alleged that the company illegally underpaid them, denied them benefits like sick leave, and misclassified them as contractors in two lawsuits filed over the past year in San Francisco’s superior court.
Meta declined to comment. HireArt didn’t respond to a request for comment.
Have a tip? Contact this reporter via email at crollet@insider.com or on Signal and WhatsApp at 628-282-2811. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.
Business
#siliconvalley #realestate #ai | Business Insider

The AI boom has lit a fuse under San Francisco’s housing market, with prices and rents inflamed by a tech workforce increasingly returning to the office, and as the AI talent wars push salaries to dizzying highs.
Tech workers are flocking to the Bay Area to work at OpenAI, Anthropic, and Nvidia after the region was dealt a massive blow during the COVID-19 pandemic. Real estate agents say RTO is fueling the resurgence, with AI being particularly synonymous with in-person work and hardcore culture.
What’s more, tech workers, especially those working in AI, are commanding massive salaries, including multimillion-dollar compensation packages.
Read more about this real estate boom on Business Insider: https://lnkd.in/eXhNxYbY
(Credit: Getty Images/iStockphoto)
#siliconvalley #realestate #ai
-
Business2 weeks ago
The Guardian view on Trump and the Fed: independence is no substitute for accountability | Editorial
-
Tools & Platforms4 weeks ago
Building Trust in Military AI Starts with Opening the Black Box – War on the Rocks
-
Ethics & Policy2 months ago
SDAIA Supports Saudi Arabia’s Leadership in Shaping Global AI Ethics, Policy, and Research – وكالة الأنباء السعودية
-
Events & Conferences4 months ago
Journey to 1000 models: Scaling Instagram’s recommendation system
-
Jobs & Careers2 months ago
Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
-
Education2 months ago
Macron says UK and France have duty to tackle illegal migration ‘with humanity, solidarity and firmness’ – UK politics live | Politics
-
Education2 months ago
VEX Robotics launches AI-powered classroom robotics system
-
Podcasts & Talks2 months ago
Happy 4th of July! 🎆 Made with Veo 3 in Gemini
-
Funding & Business2 months ago
Kayak and Expedia race to build AI travel agents that turn social posts into itineraries
-
Podcasts & Talks2 months ago
OpenAI 🤝 @teamganassi