AI Research

AI Startup Authentica Tackles Supply Chain Risk

The platform ingests emails and shipping documents and uses AI agents to automate key processes across the order-to-delivery cycle, including anomaly detection, tariff classification, supplier mapping and audit readiness. Authentica says its system generates verifiable audit trails and retains human oversight while reducing compliance risk and maintaining operational speed.
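Authentica has not published implementation details, but one of the processes named above, anomaly detection, can be illustrated with a minimal sketch. The example below flags shipments whose transit times deviate sharply from historical norms using a z-score test; the function name, data, and threshold are all hypothetical, not Authentica's actual method:

```python
from statistics import mean, stdev

def flag_anomalies(transit_days, threshold=2.0):
    """Return indices of shipments whose transit time deviates more
    than `threshold` standard deviations from the historical mean."""
    mu, sigma = mean(transit_days), stdev(transit_days)
    return [i for i, d in enumerate(transit_days)
            if sigma > 0 and abs(d - mu) / sigma > threshold]

# Mostly ~10-day shipments, with one 40-day outlier at index 7
history = [9, 11, 10, 12, 10, 9, 11, 40]
print(flag_anomalies(history))  # → [7]
```

In a production system, flagged shipments would typically be routed to a human reviewer rather than acted on automatically, consistent with the human-oversight design described above.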

“Geopolitics and AI are redefining supply chains,” said Michael Borg, founder and chief executive of Authentica. Industry trends show that AI is rapidly transitioning from experimentation to necessity in global trade operations.

Reuters reports that spending on generative AI for supply chains is expected to climb from about $2.7 billion today to $55 billion by 2029. The growth is being driven by tariff shifts, sanctions and supply disruptions, making automation and real-time intelligence essential even for firms that continue to rely on lean, just-in-time inventory models. Analysts say this acceleration reflects a broader pattern of companies investing in digital tools that allow them to maintain agility while responding to shifting regulations and rising customer expectations.
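As a back-of-the-envelope check on those figures (assuming a roughly five-year horizon between today's ~$2.7 billion and the $55 billion projected for 2029), the implied compound annual growth rate is about 83% per year:

```python
# Implied CAGR from ~$2.7B to $55B over an assumed 5-year horizon
start, end, years = 2.7, 55.0, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.0%}")  # → 83%
```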

Many companies are already integrating AI to reduce risk and improve resilience. PYMNTS recently reported that retailers such as Target, Macy’s, Kohl’s and Amazon are applying AI to optimize inventory, with Target using AI-powered systems across nearly 40% of its product assortment.

Schneider Electric has also shifted from “just-in-time” to “just-in-case” logistics, deploying AI to detect vulnerabilities across its supplier base. These moves illustrate how firms are attempting to harden supply chains against unexpected shocks while gaining better visibility into their networks.

In this context, Authentica positions its platform as both a compliance safeguard and a financial enabler. By automating audit-ready compliance and delivering proof of shipment integrity, the company says its system can help lenders and insurers underwrite trade finance and cross-border coverage with higher confidence. For importers, this could mean lower costs of capital and stronger resilience in a tightening regulatory environment.



AI Research

Artificial Intelligence (AI) Unicorn Anthropic Just Hit a $183 Billion Valuation. Here’s What It Means for Amazon Investors


Anthropic just closed on a $13 billion Series F funding round.

It’s been about a month since OpenAI unveiled its latest model, GPT-5. In that time, rival platforms have made bold moves of their own.

Perplexity, for instance, drew headlines with a $34.5 billion unsolicited bid for Alphabet's Google Chrome, while Anthropic — backed by both Alphabet and Amazon (AMZN) — closed a $13 billion Series F funding round that propelled its valuation to an eye-popping $183 billion.

Since debuting its AI chatbot, Claude, in March 2023, Anthropic has experienced explosive growth. The company’s run-rate revenue surged from $1 billion at the start of this year to $5 billion by the end of August.


While these gains are a clear win for the venture capital firms that backed Anthropic early on, the company’s trajectory carries even greater strategic weight for Amazon.

Let’s explore how Amazon is integrating Anthropic into its broader artificial intelligence (AI) ecosystem — and what this deepening alliance could mean for investors.

AWS + Anthropic: Amazon’s secret weapon in the AI arms race

Beyond its e-commerce dominance, Amazon’s largest business is its cloud computing arm — Amazon Web Services (AWS).

Much like Microsoft's integration of ChatGPT into its Azure platform, Amazon is positioning Anthropic's Claude as a marquee offering within AWS. Through its Bedrock service, AWS customers can access a variety of large language models (LLMs) — with Claude being a prominent staple — to build and deploy generative AI applications.
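To make that concrete, the sketch below builds the request body an AWS customer would pass to Bedrock's InvokeModel API to query a Claude model with boto3's `bedrock-runtime` client. The model ID and prompt are illustrative, and the network call itself is shown only in comments since it requires AWS credentials:

```python
import json

def build_claude_request(prompt, max_tokens=256):
    """Construct the Anthropic 'messages' request body that Bedrock's
    InvokeModel API expects for Claude-family models."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# With AWS credentials configured, this body would be sent via, e.g.:
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(
#       modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
#       body=build_claude_request("Summarize this contract clause."))
body = build_claude_request("Hello, Claude")
```

Because Bedrock exposes many models behind this one API, a customer can swap the `modelId` to test alternatives without leaving AWS — the stickiness dynamic described above.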

In effect, Anthropic acts as both a differentiator and a distribution channel for AWS — giving enterprise customers the flexibility to test different models while keeping them within Amazon's ecosystem. This expands AWS's value proposition because it helps create stickiness in a fiercely competitive cloud computing landscape.

Cutting Nvidia and AMD out of the loop

Another strategic benefit of Amazon’s partnership with Anthropic is the opportunity to accelerate adoption of its custom silicon, Trainium and Inferentia. These chips were specifically engineered to reduce dependence on Nvidia‘s GPUs and to lower the cost of both training and inferencing AI workloads.

The bet is that if Anthropic can successfully scale Claude on Trainium and Inferentia, it will serve as a proof point to the broader market that Amazon’s hardware offers a viable, cost-efficient alternative to premium GPUs from Nvidia and Advanced Micro Devices.

By steering more AI compute toward its in-house silicon, Amazon improves its unit economics — capturing more of the value chain and ultimately enhancing AWS’s profitability over time.

From Claude to cash flow

For investors, the central question is how Anthropic is translating into a tangible financial impact for Amazon. As the figures below illustrate, Amazon has not hesitated to deploy unprecedented sums into AI-related capital expenditures (capex) over the past few years. While this acceleration in spend has temporarily weighed on free cash flow, such investments are part of a deliberate long-term strategy rather than a short-term playbook.

AMZN capital expenditures (TTM) chart; data by YCharts.

Partnerships of this scale rarely yield immediate results. Working with Anthropic is not about incremental wins — it’s about laying the foundation for transformative outcomes.

In practice, Anthropic enhances AWS’s ability to secure long-term enterprise contracts — reinforcing Amazon’s position as an indispensable backbone of AI infrastructure. Once embedded, the switching costs for customers considering alternative models or rival cloud providers like Microsoft Azure or Google Cloud Platform (GCP) become prohibitively high.

Over time, these dynamics should enable Amazon to capture a larger share of AI workloads and generate durable, high-margin recurring fees. As profitability scales alongside revenue growth, Amazon is well-positioned to experience meaningful valuation expansion relative to its peers — making the stock a compelling opportunity to buy and hold for long-term investors right now.

Adam Spatacco has positions in Alphabet, Amazon, Microsoft, and Nvidia. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Amazon, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




AI Research

AI co-pilot boosts noninvasive brain-computer interface by interpreting user intent, UCLA study finds


Key takeaways:

  • UCLA engineers have developed a wearable, noninvasive brain-computer interface system that uses artificial intelligence as a co-pilot to help infer user intent and complete tasks.
  • The team developed custom algorithms to decode electroencephalography, or EEG — a method of recording the brain’s electrical activity — and extract signals that reflect movement intentions.
  • All participants completed both tasks significantly faster with AI assistance.

UCLA engineers have developed a wearable, noninvasive brain-computer interface system that utilizes artificial intelligence as a co-pilot to help infer user intent and complete tasks by moving a robotic arm or a computer cursor.

The study, published in Nature Machine Intelligence, shows that the interface demonstrates a new level of performance in noninvasive brain-computer interface, or BCI, systems. This could lead to a range of technologies to help people with limited physical capabilities, such as those with paralysis or neurological conditions, handle and move objects more easily and precisely.

The team developed custom algorithms to decode electroencephalography, or EEG — a method of recording the brain’s electrical activity — and extract signals that reflect movement intentions. They paired the decoded signals with a camera-based artificial intelligence platform that interprets user direction and intent in real time. The system allows individuals to complete tasks significantly faster than without AI assistance.
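The study's actual decoding algorithms are not detailed in this summary, but decoders of this kind commonly learn a linear map from EEG features (for example, per-channel band power) to 2-D movement commands. The sketch below fits such a map with ridge regression on synthetic data; every number in it is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 time steps of 8-channel EEG band-power features,
# paired with 2-D cursor-velocity targets (all synthetic).
X = rng.normal(size=(200, 8))
true_W = rng.normal(size=(8, 2))
Y = X @ true_W + 0.1 * rng.normal(size=(200, 2))

# Ridge-regression decoder: W = (X^T X + lam*I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ Y)

# Decode a new feature vector into a 2-D velocity command
velocity = rng.normal(size=8) @ W
print(velocity.shape)  # → (2,)
```

Real EEG decoders must also contend with filtering, artifact rejection, and session-to-session drift, which is part of why noninvasive BCIs have historically underperformed implanted ones.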

“By using artificial intelligence to complement brain-computer interface systems, we’re aiming for much less risky and invasive avenues,” said study leader Jonathan Kao, an associate professor of electrical and computer engineering at the UCLA Samueli School of Engineering. “Ultimately, we want to develop AI-BCI systems that offer shared autonomy, allowing people with movement disorders, such as paralysis or ALS, to regain some independence for everyday tasks.”

State-of-the-art, surgically implanted BCI devices can translate brain signals into commands, but the benefits they currently offer are outweighed by the risks and costs associated with neurosurgery to implant them. More than two decades after they were first demonstrated, such devices are still limited to small pilot clinical trials. Meanwhile, wearable and other external BCIs have demonstrated a lower level of performance in detecting brain signals reliably.

To address these limitations, the researchers tested their new noninvasive AI-assisted BCI with four participants — three without motor impairments and a fourth who was paralyzed from the waist down. Participants wore a head cap to record EEG, and the researchers used custom decoder algorithms to translate these brain signals into movements of a computer cursor and robotic arm. Simultaneously, an AI system with a built-in camera observed the decoded movements and helped participants complete two tasks.

In the first task, they were instructed to move a cursor on a computer screen to hit eight targets, holding the cursor in place at each for at least half a second. In the second challenge, participants were asked to activate a robotic arm to move four blocks on a table from their original spots to designated positions. 

All participants completed both tasks significantly faster with AI assistance. Notably, the paralyzed participant completed the robotic arm task in approximately six and a half minutes with AI assistance, whereas without it, he was unable to complete the task.

The BCI deciphered electrical brain signals that encoded the participants’ intended actions. Using a computer vision system, the custom-built AI inferred the users’ intent — not their eye movements — to guide the cursor and position the blocks.
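The paper's exact control law is not given in this summary, but the "co-pilot" description matches a common shared-autonomy formulation: blend the user's decoded velocity with a velocity toward the AI-inferred target. In this illustrative sketch, the blending weight `alpha` and all vectors are hypothetical:

```python
import numpy as np

def shared_autonomy_step(pos, decoded_vel, inferred_target, alpha=0.5):
    """Blend the EEG-decoded velocity with a unit velocity toward the
    AI-inferred target; alpha=1 is pure user control, alpha=0 pure AI."""
    to_target = inferred_target - pos
    norm = np.linalg.norm(to_target)
    ai_vel = to_target / norm if norm > 1e-9 else np.zeros_like(to_target)
    return alpha * decoded_vel + (1 - alpha) * ai_vel

pos = np.array([0.0, 0.0])
decoded = np.array([1.0, 0.0])   # noisy user intent: rightward
target = np.array([0.0, 1.0])    # AI infers the goal is upward
print(shared_autonomy_step(pos, decoded, target))  # → [0.5 0.5]
```

Lowering `alpha` gives the AI more authority, which speeds up task completion at the cost of user control — the trade-off at the heart of the "shared autonomy" goal Kao describes.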

“Next steps for AI-BCI systems could include the development of more advanced co-pilots that move robotic arms with more speed and precision, and offer a deft touch that adapts to the object the user wants to grasp,” said co-lead author Johannes Lee, a UCLA electrical and computer engineering doctoral candidate advised by Kao. “And adding in larger-scale training data could also help the AI collaborate on more complex tasks, as well as improve EEG decoding itself.”

The paper’s authors are all members of Kao’s Neural Engineering and Computation Lab, including Sangjoon Lee, Abhishek Mishra, Xu Yan, Brandon McMahan, Brent Gaisford, Charles Kobashigawa, Mike Qu and Chang Xie. A member of the UCLA Brain Research Institute, Kao also holds faculty appointments in the computer science department and the Interdepartmental Ph.D. program in neuroscience.

The research was funded by the National Institutes of Health and the Science Hub for Humanity and Artificial Intelligence, which is a collaboration between UCLA and Amazon. The UCLA Technology Development Group has applied for a patent related to the AI-BCI technology.


