AI Research
California cities turn to AI to streamline permitting

Two California cities announced within days of each other that they are turning to different artificial intelligence-driven tools to streamline and speed up their building permit applications and processing.
Lancaster, a city of just over 170,000 people near Los Angeles, announced last week it has partnered with AI regulatory tech company Labrynth to prescreen, validate and optimize permit applications before they are submitted. The city and company said that means fewer errors, less back-and-forth between an applicant and the government, and approval times reduced from months to days.
Separately, San Jose announced its own effort to use AI to streamline permitting for accessory dwelling units, with a “pre-check” feature that flags missing or incomplete information before building plans are submitted. The city said it has initially turned to the AI permitting software CivCheck for the pilot program and may turn to other tools as the effort expands.
“It’s time to bring permitting into the 21st century,” San Jose Mayor Matt Mahan said in a statement. “Our residents and our city planners need to be able to move faster and build better.”
California’s building and housing codes, as well as its permitting and regulatory landscape, are “notoriously complex,” Lancaster Mayor Rex Parris said in an email. That can be frustrating for permit applicants, who may struggle to get new buildings or additions approved because of the complexity of the applications they must fill out, and who can wait months or even years for a decision. The latter is especially troubling given the state’s housing crisis.
AI can help make some of that easier, however. Labrynth CEO Stuart Lacey said in an interview that deterministic AI can be trained very carefully with rules, rights, precedent and regulations.
“We can consume thousands of pages of information in minutes, and the model can learn probably better than a human could ever recall all those requirements,” Lacey said.
That then eases the administrative burden on staff in both cities, as well as developers looking to get projects off the ground. Chris Burton, director of San Jose’s Planning, Building and Code Enforcement Department, said in a statement the city is “dedicated to taking any guesswork out of the permitting process, helping builders and residents move quickly with clarity and confidence.”
That does not mean humans will be cut out of the permitting process entirely, as some have worried since the advent of agentic AI. Parris said keeping the human in the loop is “foundational.”
“It’s not about automating decision-making,” he said. “It’s about augmenting better decisions, faster. Our city staff still sign off on every permit. But instead of spending hours flagging incomplete applications, they get structured submissions that are AI-assisted and city-tuned. We’re not replacing planners. We’re giving them a better starting point and a smarter toolkit. Lancaster believes AI means Augmented Intelligence not Artificial Intelligence.”
Lacey said he expects Labrynth to have the “lightest possible touch integration” and to act as a “bridge” between a city’s application process and the applicant. Because the technology knows what to look for in a successful application and where the gaps are, it makes it easier for applicants to go back and make amendments, and for the city to then validate their work.
“Effectively it means that someone before applying would go through the Labrynth application,” he said. “The outcome of that is a higher, pre-validated, scored, ideal application. When the city gets it, and it’s got a Labrynth logo in it, they’re going to have the validation report and they’re going to know that it meets almost every one of their requirements, which means they can now move it straight through processing, review, oversight and a risk-graded, great process.”
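Neither city has published technical details, but the pre-check idea Lacey describes maps naturally onto a validation pass: run each application against a machine-readable checklist of requirements and return a completeness score plus the gaps. Below is a minimal sketch of that pattern; the field names and rules are hypothetical and do not reflect Labrynth’s or CivCheck’s actual products.

```python
from dataclasses import dataclass, field

@dataclass
class PermitApplication:
    """Hypothetical application record; real systems track far more fields."""
    fields: dict = field(default_factory=dict)

# Illustrative checklist: (field name, human-readable requirement).
REQUIRED_FIELDS = [
    ("site_plan", "Site plan attached"),
    ("parcel_number", "Assessor's parcel number provided"),
    ("setback_ft", "Setback distances specified"),
    ("zoning_code", "Zoning designation listed"),
]

def prescreen(app: PermitApplication) -> dict:
    """Flag missing or empty fields before submission and score completeness."""
    gaps = [desc for name, desc in REQUIRED_FIELDS if not app.fields.get(name)]
    score = 1 - len(gaps) / len(REQUIRED_FIELDS)
    return {"score": round(score, 2), "gaps": gaps}

app = PermitApplication(fields={"site_plan": "plan.pdf", "parcel_number": "1234-567-890"})
print(prescreen(app))  # {'score': 0.5, 'gaps': ['Setback distances specified', 'Zoning designation listed']}
```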
Both cities anticipate their partnerships expanding as they look to further utilize AI for permitting. If this pilot is successful, San Jose said its customer readiness AI tool could be extended to permitting for single-family homes in the future. That would be especially useful in the wake of the kinds of wildfires that have ravaged California in recent years. Parris said he expects a similar expansion in time.
“We’re starting with permitting because it’s a pressure point,” he said. “Often the difference between success and failure is the time it takes to get permitted. And we see this expanding into zoning workflows, CEQA documentation, housing development pipelines; anywhere California’s regulatory complexity slows down good projects. This could serve as a model for regional collaboration where cities and counties share best practices through a unified framework, not siloed systems.”
Other cities in California may be tempted to follow suit, and Parris said they should not wait to do so.
“Treat permitting not just as paperwork but as a civic experience,” he said. “The frustration people feel with delays and opacity doesn’t just cost time. It erodes trust. California cities are being asked to do more with less, under intense scrutiny. My advice? Don’t wait for Sacramento to fix it. Partner locally, deploy ethically, and show your community that you’re serious about solving real problems with real tools. Lancaster didn’t wait and we’re already seeing what’s possible.”
AI Research
Artificial Intelligence (AI) Unicorn Anthropic Just Hit a $183 Billion Valuation. Here’s What It Means for Amazon Investors

Anthropic just closed a $13 billion Series F funding round.
It’s been about a month since OpenAI unveiled its latest model, GPT-5. In that time, rival platforms have made bold moves of their own.
Perplexity, for instance, drew headlines with a $34.5 billion unsolicited bid for Alphabet’s Google Chrome, while Anthropic — backed by both Alphabet and Amazon (AMZN) — closed a $13 billion Series F funding round that propelled its valuation to an eye-popping $183 billion.
Since debuting its AI chatbot, Claude, in March 2023, Anthropic has experienced explosive growth. The company’s run-rate revenue surged from $1 billion at the start of this year to $5 billion by the end of August.
While these gains are a clear win for the venture capital firms that backed Anthropic early on, the company’s trajectory carries even greater strategic weight for Amazon.
Let’s explore how Amazon is integrating Anthropic into its broader artificial intelligence (AI) ecosystem — and what this deepening alliance could mean for investors.
AWS + Anthropic: Amazon’s secret weapon in the AI arms race
Beyond its e-commerce dominance, Amazon’s most profitable business is its cloud computing arm — Amazon Web Services (AWS).
Much like Microsoft’s integration of ChatGPT into its Azure platform, Amazon is positioning Anthropic’s Claude as a marquee offering within AWS. Through its Bedrock service, AWS customers can access a variety of large language models (LLMs) — with Claude being a prominent staple — to build and deploy generative AI applications.
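For a sense of what that access looks like in practice, the snippet below calls Claude through the Bedrock runtime’s Converse API with boto3. The region and model ID are placeholders; Bedrock publishes the current list of Claude model identifiers.

```python
import boto3

# Bedrock exposes hosted models, including Anthropic's Claude, via a runtime client.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example Claude model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize the main drivers of our Q3 cloud spend."}],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```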
In effect, Anthropic acts as both a differentiator and a distribution channel for AWS — giving enterprise customers the flexibility to test different models while keeping them within Amazon’s ecosystem. This expands AWS’s value proposition because it helps create stickiness in a fiercely competitive cloud computing landscape.
Cutting Nvidia and AMD out of the loop
Another strategic benefit of Amazon’s partnership with Anthropic is the opportunity to accelerate adoption of its custom silicon, Trainium and Inferentia. These chips were specifically engineered to reduce dependence on Nvidia’s GPUs and to lower the cost of both training and inferencing AI workloads.
The bet is that if Anthropic can successfully scale Claude on Trainium and Inferentia, it will serve as a proof point to the broader market that Amazon’s hardware offers a viable, cost-efficient alternative to premium GPUs from Nvidia and Advanced Micro Devices.
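The developer-facing entry point for that silicon gives a feel for the pitch. AWS’s Neuron SDK lets a standard PyTorch model be compiled for Trainium or Inferentia with a one-line trace; the sketch below is a toy example and assumes a Trn1 or Inf2 instance with torch_neuronx installed.

```python
import torch
import torch_neuronx  # AWS Neuron SDK; available on Trn1/Inf2 instances

# Any eager-mode PyTorch model; a toy two-layer network stands in here.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 8),
).eval()
example_input = torch.rand(1, 128)

# Ahead-of-time compile for NeuronCores, then run inference on the device.
neuron_model = torch_neuronx.trace(model, example_input)
print(neuron_model(example_input).shape)  # torch.Size([1, 8])
```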
By steering more AI compute toward its in-house silicon, Amazon improves its unit economics — capturing more of the value chain and ultimately enhancing AWS’s profitability over time.
From Claude to cash flow
For investors, the central question is how Anthropic is translating into a tangible financial impact for Amazon. As the figures below illustrate, Amazon has not hesitated to deploy unprecedented sums into AI-related capital expenditures (capex) over the past few years. While this acceleration in spend has temporarily weighed on free cash flow, such investments are part of a deliberate long-term strategy rather than a short-term playbook.
AMZN Capital Expenditures (TTM) data by YCharts
Partnerships of this scale rarely yield immediate results. Working with Anthropic is not about incremental wins — it’s about laying the foundation for transformative outcomes.
In practice, Anthropic enhances AWS’s ability to secure long-term enterprise contracts — reinforcing Amazon’s position as an indispensable backbone of AI infrastructure. Once embedded, the switching costs for customers considering alternative models or rival cloud providers like Microsoft Azure or Google Cloud Platform (GCP) become prohibitively high.
Over time, these dynamics should enable Amazon to capture a larger share of AI workloads and generate durable, high-margin recurring fees. As profitability scales alongside revenue growth, Amazon is well-positioned to experience meaningful valuation expansion relative to its peers — making the stock a compelling opportunity to buy and hold for long-term investors right now.
Adam Spatacco has positions in Alphabet, Amazon, Microsoft, and Nvidia. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Amazon, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.
AI Research
AI co-pilot boosts noninvasive brain-computer interface by interpreting user intent, UCLA study finds

Key takeaways:
- UCLA engineers have developed a wearable, noninvasive brain-computer interface system that uses artificial intelligence as a co-pilot to help infer user intent and complete tasks.
- The team developed custom algorithms to decode electroencephalography, or EEG — a method of recording the brain’s electrical activity — and extract signals that reflect movement intentions.
- All participants completed both tasks significantly faster with AI assistance.
UCLA engineers have developed a wearable, noninvasive brain-computer interface system that utilizes artificial intelligence as a co-pilot to help infer user intent and complete tasks by moving a robotic arm or a computer cursor.
The study, published in Nature Machine Intelligence, shows that the system achieves a new level of performance among noninvasive brain-computer interface, or BCI, systems. It could lead to a range of technologies that help people with limited physical capabilities, such as those with paralysis or neurological conditions, handle and move objects more easily and precisely.
The team developed custom algorithms to decode electroencephalography, or EEG — a method of recording the brain’s electrical activity — and extract signals that reflect movement intentions. They paired the decoded signals with a camera-based artificial intelligence platform that interprets user direction and intent in real time. The system allows individuals to complete tasks significantly faster than without AI assistance.
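The article does not detail the decoder itself. A common baseline for continuous EEG decoding, though, is a regularized linear map from per-channel signal features (such as band power) to cursor velocity; the sketch below illustrates that baseline on synthetic data and is not the UCLA team’s algorithm.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in: 1,000 time windows of 64 EEG channels x 5 frequency bands.
X = rng.standard_normal((1000, 64 * 5))                # band-power features
W_true = rng.standard_normal((64 * 5, 2))
y = X @ W_true + 0.5 * rng.standard_normal((1000, 2))  # 2D cursor-velocity targets

# Ridge regression: a standard regularized linear decoder for continuous EEG.
decoder = Ridge(alpha=10.0).fit(X[:800], y[:800])
print("held-out R^2:", round(decoder.score(X[800:], y[800:]), 3))
```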
“By using artificial intelligence to complement brain-computer interface systems, we’re aiming for much less risky and invasive avenues,” said study leader Jonathan Kao, an associate professor of electrical and computer engineering at the UCLA Samueli School of Engineering. “Ultimately, we want to develop AI-BCI systems that offer shared autonomy, allowing people with movement disorders, such as paralysis or ALS, to regain some independence for everyday tasks.”
State-of-the-art, surgically implanted BCI devices can translate brain signals into commands, but the benefits they currently offer are outweighed by the risks and costs associated with neurosurgery to implant them. More than two decades after they were first demonstrated, such devices are still limited to small pilot clinical trials. Meanwhile, wearable and other external BCIs have demonstrated a lower level of performance in detecting brain signals reliably.
To address these limitations, the researchers tested their new noninvasive AI-assisted BCI with four participants — three without motor impairments and a fourth who was paralyzed from the waist down. Participants wore a head cap to record EEG, and the researchers used custom decoder algorithms to translate these brain signals into movements of a computer cursor and robotic arm. Simultaneously, an AI system with a built-in camera observed the decoded movements and helped participants complete two tasks.
In the first task, they were instructed to move a cursor on a computer screen to hit eight targets, holding the cursor in place at each for at least half a second. In the second challenge, participants were asked to activate a robotic arm to move four blocks on a table from their original spots to designated positions.
All participants completed both tasks significantly faster with AI assistance. Notably, the paralyzed participant completed the robotic arm task in approximately six and a half minutes with AI assistance, whereas without it, he was unable to complete the task.
The BCI deciphered electrical brain signals that encoded the participants’ intended actions. Using a computer vision system, the custom-built AI inferred the users’ intent — not their eye movements — to guide the cursor and position the blocks.
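Shared-autonomy systems of this kind are often formalized as a blend: the executed command is a weighted mix of the user’s decoded velocity and the co-pilot’s velocity toward its inferred target. The sketch below shows that general formulation; the weighting is illustrative, not the paper’s.

```python
import numpy as np

def blend_command(v_decoded, target, cursor, alpha=0.5):
    """Mix BCI-decoded velocity with the co-pilot's pull toward its inferred
    target. alpha=0 is pure user control; alpha=1 is pure AI assistance."""
    direction = np.asarray(target) - np.asarray(cursor)
    norm = np.linalg.norm(direction)
    v_ai = direction / norm if norm > 1e-9 else np.zeros_like(direction)
    v_ai = v_ai * np.linalg.norm(v_decoded)   # match the user's commanded speed
    return (1 - alpha) * np.asarray(v_decoded) + alpha * v_ai

v_user = np.array([0.3, 0.1])                 # noisy decoded intent
print(blend_command(v_user, target=[1.0, 1.0], cursor=[0.0, 0.0]))
```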
“Next steps for AI-BCI systems could include the development of more advanced co-pilots that move robotic arms with more speed and precision, and offer a deft touch that adapts to the object the user wants to grasp,” said co-lead author Johannes Lee, a UCLA electrical and computer engineering doctoral candidate advised by Kao. “And adding in larger-scale training data could also help the AI collaborate on more complex tasks, as well as improve EEG decoding itself.”
The paper’s authors are all members of Kao’s Neural Engineering and Computation Lab, including Sangjoon Lee, Abhishek Mishra, Xu Yan, Brandon McMahan, Brent Gaisford, Charles Kobashigawa, Mike Qu and Chang Xie. A member of the UCLA Brain Research Institute, Kao also holds faculty appointments in the computer science department and the Interdepartmental Ph.D. program in neuroscience.
The research was funded by the National Institutes of Health and the Science Hub for Humanity and Artificial Intelligence, a collaboration between UCLA and Amazon. The UCLA Technology Development Group has applied for a patent related to the AI-BCI technology.