Prompt Economy Recalculates Basic Math of Commerce

A year ago, it was “everything is about AI.” Months later, it was “everything is about gen AI.” Now the focus has shifted to agentic artificial intelligence (AI), and the topic is filled with gigabytes’ worth of opportunities and challenges. Even since summer’s end (just two weeks ago), this space has had its share of urgent developments, technical advances and other innovations, all of which reinforce the momentum behind agentic AI.

Let’s start with the past week, with two use cases from LinkedIn and Booking.com.

LinkedIn Formalizes Its Agentic AI Stack

InfoWorld reports that LinkedIn has gone beyond chatbots to build a full agentic AI platform that treats agents as part of its cloud-native architecture. At the center is an “agent life-cycle service” that coordinates agents, data sources and applications while storing context externally so systems can scale like any other distributed app. The design leans on messaging-based architectures that support natural language and structured content, allowing agents to carry conversation history and context across tasks — critical for workflows like recruitment, where queries evolve over time.
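
The externalized-context idea is straightforward to sketch. Below is a minimal, hypothetical Python illustration of the pattern (a dict stands in for the external store; none of these names are LinkedIn’s actual APIs): because history lives outside the agent process, any replica can pick up a task, which is what lets agents scale like ordinary distributed services.

```python
from dataclasses import dataclass, field

# A minimal sketch of the externalized-context pattern. In production
# the store would be an external service (database, cache); a dict
# stands in here. All names are hypothetical, not LinkedIn's API.

@dataclass
class ContextStore:
    """Keeps conversation history outside the agent process so any
    replica can resume a task, letting agents scale statelessly."""
    _data: dict = field(default_factory=dict)

    def append(self, task_id: str, message: str) -> None:
        self._data.setdefault(task_id, []).append(message)

    def history(self, task_id: str) -> list:
        return self._data.get(task_id, [])

store = ContextStore()

def handle_turn(task_id: str, user_message: str) -> str:
    # Load prior context from the external store, not local memory.
    context = store.history(task_id)
    reply = f"reply informed by {len(context)} prior messages"
    store.append(task_id, user_message)
    store.append(task_id, reply)
    return reply

print(handle_turn("recruiting-42", "Find ML engineers in Berlin"))
print(handle_turn("recruiting-42", "Narrow to 5+ years of experience"))
```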

The first major use case is an upgraded Hiring Assistant that helps recruiters filter job candidates in natural language while keeping humans firmly in the loop. Every step is auditable, with observability features like OpenTelemetry built in to ensure compliance and reliability. LinkedIn’s approach underscores a broader enterprise shift: treating agentic AI not as a flashy demo tool but as another piece of software infrastructure, requiring the same rigor in monitoring, governance, and trust-building as any other large-scale system.
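
For a flavor of what that observability looks like, here is a minimal sketch using the open-source OpenTelemetry Python SDK the article names; the agent step and attributes are hypothetical stand-ins, not LinkedIn’s code.

```python
# A minimal sketch of instrumenting an agent step with OpenTelemetry so
# each action is auditable. Requires: pip install opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("hiring-assistant-demo")

def screen_candidates(query: str) -> list:
    # Every agent step runs inside a span, so the who/what/when of
    # each action is recorded for compliance review.
    with tracer.start_as_current_span("screen_candidates") as span:
        span.set_attribute("recruiter.query", query)
        matches = ["candidate-1", "candidate-2"]  # placeholder result
        span.set_attribute("matches.count", len(matches))
        return matches

screen_candidates("senior data engineer, fintech background")
```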

Booking.com Bets on Agentic AI

Skift reports that Booking Holdings is building its own agentic AI platform to create travel experiences that rival, and potentially out-personalize, those offered by tech giants like ChatGPT and Google Gemini. Unlike chatbots that simply answer questions, Booking’s agentic AI is designed to take actions, such as booking a car with the right child seat or rebooking a delayed flight, while keeping travelers inside Booking’s ecosystem. CFO Ewout Steenbergen described the effort as a step toward “the real connected trip,” where entire itineraries update automatically and every service touchpoint is tailored to the traveler.

Steenbergen emphasized that the strategy is not just about personalization but also about trust and economics. By building agentic capabilities directly into its platform, Booking aims to give customers peace of mind that issues will be resolved by a provider they know, while also lowering customer service costs and boosting satisfaction. Features like natural language search are already reducing cancellations by better matching travelers to the right accommodations, and Steenbergen suggested the next wave of agentic AI could cement Booking’s lead in alternative accommodations.

Parallel Agents

Here’s a new term for an area of agentic AI that we started noticing recently. Parallel agents aren’t limited to Google, but Google’s developer blog does a good job explaining them. The parallel agent framework shows how agentic AI can split large jobs into smaller, independent tasks and run them at the same time. Instead of waiting for one agent to finish before starting the next, multiple sub-agents can work concurrently, whether that’s pulling in data from different sources, crunching heavy computations, or researching distinct topics. This approach can cut processing time dramatically, but it only works when tasks don’t need to share information while running. In practice, it means that agentic AI systems can start to look less like a single “bot” and more like an assembly line of digital workers, each handling a piece of the job in parallel and then passing the results to a synthesis step.
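
The pattern is easy to see in code. Here is a minimal, hypothetical asyncio sketch of that fan-out/fan-in shape (the sub-agent bodies are placeholders, not Google’s framework): sub-agents run concurrently, and a synthesis step merges their results.

```python
import asyncio

async def sub_agent(topic: str) -> str:
    """Simulate an independent sub-agent researching one topic."""
    await asyncio.sleep(0.1)  # placeholder for an API call or computation
    return f"findings on {topic}"

async def synthesize(results: list) -> str:
    """Combine the independent results into a single answer."""
    return " | ".join(results)

async def main() -> None:
    topics = ["market data", "competitor pricing", "customer reviews"]
    # Fan out: run every sub-agent concurrently instead of sequentially.
    results = await asyncio.gather(*(sub_agent(t) for t in topics))
    # Fan in: a final synthesis step merges the partial results.
    print(await synthesize(list(results)))

asyncio.run(main())
```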

OpenAI’s Realtime API Goes GA

OpenAI made its Realtime API generally available, giving developers tools to build more natural and reliable AI-powered voice agents. Unlike older systems that stitched together separate speech-to-text and text-to-speech models, the new setup processes audio in one step, cutting latency and preserving nuance. The upgrade makes it easier to deploy agents that can follow complex instructions, call external tools at the right time, and hold conversations that sound more human, complete with intonation, emotional cues, and even mid-sentence language switching. With features like phone calling, image inputs, and support for external servers, the technology is pitched as ready for customer service, personal assistance, and other real-world applications.
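
For a feel of the developer surface, here is a heavily hedged sketch of opening a Realtime session over WebSocket. The endpoint, model name, and event shapes are assumptions based on public documentation at the time of writing; check the official reference before relying on them.

```python
# A hedged sketch of a Realtime API session over WebSocket. The URL,
# model name, and event types are assumptions from public docs.
# Requires: pip install websockets
import asyncio
import json
import os

import websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-realtime"  # assumed

async def main() -> None:
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    # Note: older websockets versions use extra_headers= instead.
    async with websockets.connect(URL, additional_headers=headers) as ws:
        # Configure the session, then ask the model to respond.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {"instructions": "You are a concise voice agent."},
        }))
        await ws.send(json.dumps({"type": "response.create"}))
        # Print server events as they stream back (audio deltas, etc.).
        async for raw in ws:
            event = json.loads(raw)
            print(event.get("type"))
            if event.get("type") == "response.done":
                break

asyncio.run(main())
```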

Nvidia: Smarter Data Highways Will Speed Up AI

Nvidia published a blog post arguing that the real slowdown in AI today isn’t just inside the chips; it’s in how data moves in and out of them. Every time an AI system pulls customer records, grabs context from a database, or loads a large model, it depends on these “north-south” data connections. When they’re slow, the whole system feels sluggish. Nvidia’s answer is new networking technology that makes these data highways faster and more reliable, ensuring that agentic AI systems can respond instantly and at scale. For companies, this means customer-facing AI tools that are quicker, more dependable, and able to handle more complex tasks without bogging down.

Agentic AI Even Got Its Own ETF

SoFi introduced the SoFi Agentic AI ETF (AGIQ) on Sept. 3, giving investors a way to buy into companies building or using the technology. The fund tracks an index of 30 U.S.-listed firms tied to agentic AI. Current holdings include big names like Salesforce, Tesla and Nvidia, alongside companies in areas such as self-driving cars, cybersecurity, industrial automation, and enabling tech like semiconductors and cloud computing. Managed by Tidal Investments, the ETF carries a 0.69% fee and is available on SoFi Invest and other brokerages. SoFi positions AGIQ as a simple entry point for investors.

Let’s Not Forget the CFO

Boston-based FinQore announced the first direct integration of Anthropic’s Claude into a financial data platform recently, giving CFOs real-time access to reconciled, expert-validated company data inside the AI assistant. The move aims to solve one of finance’s biggest hurdles with AI — trusting the underlying data — by pairing Claude’s language capabilities with FinQore’s audit-trailed, context-rich financial records. Executives can now query models in plain English, test scenarios, and act on insights with more confidence. FinQore pitches the integration as a step toward “agentic finance,” where AI doesn’t just analyze numbers but also helps guide budget shifts, pricing moves, and growth strategies.
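
FinQore’s integration itself isn’t public code, but the plain-English querying it describes looks roughly like the sketch below, built on Anthropic’s public Python SDK; the reconciled records here are invented placeholders, not FinQore’s data feed.

```python
# A minimal sketch of plain-English querying over validated financial
# context, using Anthropic's Python SDK. The records are placeholders.
# Requires: pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

reconciled_records = """Q3 revenue: $4.2M (reconciled)
Q3 operating expenses: $3.1M (reconciled)"""  # placeholder data

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute any current Claude model
    max_tokens=500,
    system="Answer only from the reconciled records provided.",
    messages=[{
        "role": "user",
        "content": f"Records:\n{reconciled_records}\n\n"
                   "What was our Q3 operating margin?",
    }],
)
print(message.content[0].text)
```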

Let’s Not Forget Agentic Infrastructure

Visa has opened a new chapter in digital commerce by announcing tools designed to make agentic AI as seamless and trusted as swiping a Visa card. The company will give developers access to its Model Context Protocol (MCP) server, which provides a secure integration layer for plugging AI agents directly into Visa’s Intelligent Commerce APIs. Alongside this, Visa is introducing a no-code Acceptance Agent Toolkit that lets business teams generate invoices, create payment links, and run analytics with simple language prompts. Together, the moves are meant to speed up the creation of agent-enabled checkout flows while ensuring transactions remain anchored to Visa’s security standards.
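
MCP is an open protocol, so the general shape of such a server is easy to sketch. The example below uses the open-source MCP Python SDK to expose a hypothetical create_payment_link tool; it is illustrative only and is not Visa’s actual Intelligent Commerce API.

```python
# A minimal sketch of exposing a payments capability to AI agents over
# the Model Context Protocol. The tool below is hypothetical, not
# Visa's API. Requires: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("payments-demo")

@mcp.tool()
def create_payment_link(amount_cents: int, currency: str, memo: str) -> str:
    """Create a payment link an agent can hand to a shopper."""
    # A real implementation would call the payment processor's API here.
    return f"https://pay.example.com/link?amt={amount_cents}&cur={currency}"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for MCP-capable agents
```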

For Visa, the strategy hinges on trust and ubiquity. Rubail Birwadker, the company’s global head of growth, told Karen Webster that the aim isn’t to have agent-led buying work for just a fraction of purchases but to make it universal, just like the traditional card experience. The MCP server reduces the friction of coding and authentication, compressing build times from weeks to hours, while giving developers a standardized way to test how authenticated AI agents behave in live checkout environments. At the same time, the Acceptance Agent Toolkit gives merchants “quick wins” on repetitive tasks, allowing them to see immediate value in agentic commerce as peak holiday season approaches.

Visa sees merchants falling into three camps: those eager to embrace agent channels as new sources of discovery and sales, established retailers recalculating acquisition costs as traffic shifts to agents, and cautious players worried about fraud. Across all three, the company argues, agentic commerce is not a passing trend but a new paradigm. Birwadker points to identity and authenticated credentials as the foundation of trust, with enriched transaction data serving as proof of what was bought, when, and under whose authority. That, he says, is how Visa plans to make agentic commerce both scalable and reliable, even as decades of web infrastructure built for human buyers are reimagined for machine-led transactions.

In-depth: CarEdge Rewrites the Auto Buying Playbook From Lot to Bot

What does agentic AI look like in practice? For decades, buying a car in the U.S. has been one of the least-trusted consumer experiences. Opaque pricing, aggressive sales tactics, and a lack of true buyer representation have tilted the field in favor of dealerships. CarEdge, led by CEO Zach Shefska, positions itself as a consumer-first advocate, offering education, pricing insights, and concierge negotiation services that put buyers back on equal footing. Its model flips the traditional dynamic by making the consumer its core customer.

CarEdge’s next bet takes that advocacy into the realm of agentic AI. Instead of simply arming buyers with data, the company is developing AI agents that can negotiate directly with dealerships on a customer’s behalf. Consumers would be able to select a car, click “negotiate this for me,” and let the AI secure quotes while they go about their day. The system shields personal contact details and draws on six years of CarEdge’s pricing intelligence to benchmark deals, raising the prospect of algorithm-to-algorithm bargaining becoming the new normal in auto sales.

The implications extend beyond cars. If CarEdge succeeds, high-value, emotionally charged retail sectors like real estate could also shift toward AI-led negotiations. Dealers, surprisingly, may welcome the change if it lowers acquisition costs and ensures better-qualified leads, even as they experiment with their own AI-powered sales agents.

Still, competition from tech giants looms: OpenAI, Google, or others could conceivably roll out one-click car-buying services. For now, CarEdge is betting that consumers will prefer independent advocates over walled gardens, even if the next negotiation starts not with a handshake, but with two algorithms trading offers.

It’s harder than expected to implement AI in the NHS, finds study

A study led by University College London (UCL) researchers found that implementing AI in NHS hospitals is more difficult than healthcare leaders initially anticipated, with complications around governance, contracts, data collection and staff training.

The study, published on 10 September 2025 in The Lancet’s eClinicalMedicine, examined a £21 million NHS England programme launched in 2023 to introduce AI for the diagnosis of chest conditions, including lung cancer, across 66 NHS hospital trusts.

Researchers conducted interviews with hospital staff and AI suppliers to review how the diagnostic tools were procured and set up, and identify any pitfalls or factors that helped smooth the process.

They found that contracting took between four and 10 months longer than anticipated, and that by June 2025, 18 months after contracting was meant to be completed, a third (23 out of 66) of the hospital trusts were not yet using the tools in clinical practice.

First author Dr Angus Ramsey, principal research fellow at the UCL Department of Behavioural Sciences and Health, said: “Our study provides important lessons that should help strengthen future approaches to implementing AI in the NHS.

“We found it took longer to introduce the new AI tools in this programme than those leading the programme had expected.

“A key problem was that clinical staff were already very busy – finding time to go through the selection process was a challenge, as was supporting integration of AI with local IT systems and obtaining local governance approvals.

“Services that used dedicated project managers found their support very helpful in implementing changes, but only some services were able to do this.

“Also, a common issue was the novelty of AI, suggesting a need for more guidance and education on AI and its implementation.”

Challenges identified by the research included engaging clinical staff with high workloads in the project, embedding the technology in ageing and varied NHS IT systems across dozens of hospitals, and a general lack of understanding and scepticism among staff about using AI in healthcare.

The researchers concluded that while “AI tools may offer valuable support for diagnostic services, they may not address current healthcare service pressures as straightforwardly as policymakers may hope”.

They recommend that NHS staff are trained in how AI can be used effectively and safely, and that dedicated project management is used to implement schemes like this in the future.

Senior author Professor Naomi Fulop of UCL said: “The NHS is made up of hundreds of organisations with different clinical requirements and different IT systems, and introducing any diagnostic tools that suit multiple hospitals is highly complex.”

The research, funded by the National Institute for Health and Care Research, was conducted by a team from UCL, the Nuffield Trust, and the University of Cambridge.

The researchers are now studying the use of the AI tools following early deployment, once they have had a chance to become more embedded.

Researchers say the findings should provide useful lessons for implementing the government’s 10-year health plan, published on 3 July 2025, which identifies AI as key to improving the NHS.

AI system detects fires before alarms sound, NYU study shows

NYU research introduces video-based fire detection

The NYU Tandon School of Engineering has reported that its Fire Research Group has developed an artificial intelligence system that can detect fires and smoke in real time using existing CCTV cameras.

According to NYU Tandon, the system analyses video frames within 0.016 seconds, faster than a human blink, and provides immediate alerts.

The researchers explained that conventional smoke alarms activate only once smoke has reached a sensor, whereas video analysis can recognise fire at an earlier stage.

Lead researcher Prabodh Panindre, Research Associate Professor at NYU Tandon’s Department of Mechanical and Aerospace Engineering, said: “The key advantage is speed and coverage.

“A single camera can monitor a much larger area than traditional detectors, and we can spot fires in the initial stages before they generate enough smoke to trigger conventional systems.”

Ensemble AI approach improves accuracy

NYU Tandon explained that the system combines multiple AI models rather than relying on a single network.

It noted that this reduces the risk of false positives, such as mistaking a bright object for fire, and improves detection reliability across different environments.

The team reported that Scaled-YOLOv4 and EfficientDet models provided the best results, with detection accuracy rates above 78% and processing times under 0.02 seconds per frame.

By contrast, Faster-RCNN produced slower results and lower accuracy, making it less suitable for real-time IoT use.

Dataset covers all NFPA fire classes

According to the NYU researchers, the system was trained on a custom dataset of more than 7,500 annotated images covering all five fire classes defined by the National Fire Protection Association.

The dataset included Class A through K fires, with scenarios ranging from wildfires to cooking incidents.

This approach allowed the AI to generalise across different ignition types, smoke colours, and fire growth patterns.

The team explained that bounding box tracking across frames helped differentiate live flames from static fire-like objects, achieving 92.6% accuracy in reducing false alarms.

Professor Sunil Kumar of NYU Abu Dhabi said: “Real fires are dynamic, growing and changing shape.

“Our system tracks these changes over time, achieving 92.6% accuracy in eliminating false detections.”

Technical evaluation of detection models

NYU Tandon reported that it tested three leading object detection approaches: YOLO, EfficientDet and Faster-RCNN.

The group found that Scaled-YOLOv4 achieved the highest accuracy at 80.6% with an average detection time of 0.016 seconds per frame.

EfficientDet-D2 achieved 78.1% accuracy with a slightly slower response of 0.019 seconds per frame.

Faster-RCNN produced 67.8% accuracy and required 0.054 seconds per frame, making it less practical for high-throughput applications.

The researchers concluded that Scaled-YOLOv4 and EfficientDet-D2 offered the best balance of speed and reliability for real-world deployment.

Dataset preparation and training methods

The research team stated that it collected approximately 13,000 images, which were reduced to 7,545 after cleaning and annotation.

Each image was labelled with bounding boxes for fire and smoke, and the dataset was evenly distributed across the five NFPA fire classes.

The models were pre-trained on the Common Objects in Context dataset before being fine-tuned on the fire dataset for hundreds of training epochs.
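
As a concrete illustration of that transfer-learning recipe, the sketch below loads a COCO-pretrained Faster-RCNN from torchvision, one of the detector families the team evaluated, and swaps its head for fire and smoke classes. It is a hypothetical reconstruction, not the group’s actual training code.

```python
# A minimal sketch of the COCO-pretrain / fire-dataset fine-tune recipe,
# shown with torchvision's Faster-RCNN. Illustrative only.
# Requires: pip install torch torchvision
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load a detector pre-trained on Common Objects in Context (COCO).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the classification head: background + fire + smoke = 3 classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=3)

# Fine-tuning would then loop over the annotated fire dataset:
# for images, targets in data_loader:
#     losses = model(images, targets)  # returns a dict of losses in train mode
#     sum(losses.values()).backward()
```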

The team confirmed that anchor box calibration and hyperparameter tuning further improved YOLO model accuracy.

They reported that Scaled-YOLOv4 with custom training configurations provided the best results for dynamic fire detection.

IoT cloud-based deployment

The researchers outlined that the system operates in a three-layer Internet of Things architecture.

CCTV cameras stream raw video to cloud servers where AI models analyse frames, confirm detections and send alerts.

Detection results trigger email and text notifications, including short video clips, using Amazon Web Services tools.

The group reported that the system processes frames in 0.022 seconds on average when both models confirm a fire or smoke event.

This design, they said, allows the system to run on existing “dumb” CCTV cameras without requiring new hardware.

Deployment framework and false alarm reduction

The NYU team explained that fire detections are validated only when both AI models agree and the bounding box area grows over time.

This approach distinguishes real flames from static images of fire, preventing common sources of false alerts.
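
In code, that two-part rule reduces to a small check. The sketch below is an illustrative reconstruction (the box format and growth threshold are assumptions, not the paper’s published values): an alert is valid only when both detectors report fire and the tracked region is growing.

```python
# An illustrative sketch of the two-part validation rule described
# above. Box format (x1, y1, x2, y2) and threshold are assumptions.

def area(box) -> float:
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def validate(model_a_box, model_b_box, previous_box, growth=1.05) -> bool:
    """True only if both models detected fire this frame and the
    region is larger than it was in the previous frame."""
    both_agree = model_a_box is not None and model_b_box is not None
    if not both_agree or previous_box is None:
        return False
    return area(model_a_box) >= growth * area(previous_box)

# Example: a flame region growing ~10% between frames passes the check.
prev = (100, 100, 150, 150)
curr_a, curr_b = (98, 98, 155, 155), (99, 97, 154, 156)
print(validate(curr_a, curr_b, prev))  # True
```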

The deployment is based on Amazon Web Services with EC2 instances handling video ingestion and GPU-based inference.

Results and metadata are stored in S3 buckets and notifications are sent through AWS SNS and SES channels.
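
The alerting layer maps naturally onto a few boto3 calls. The sketch below is a hypothetical reconstruction (bucket name, topic ARN, and payload fields are placeholders): the confirming clip is stored in S3 and a notification is fanned out through SNS.

```python
# A hypothetical sketch of the S3 + SNS alert flow described above.
# Requires: pip install boto3 (and AWS credentials configured)
import json

import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

def send_fire_alert(clip_bytes: bytes, camera_id: str) -> None:
    key = f"alerts/{camera_id}/clip.mp4"
    # Persist the short video clip that triggered the alert.
    s3.put_object(Bucket="fire-detections-demo", Key=key, Body=clip_bytes)
    # Publish a message; SNS subscribers receive it as email/SMS.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:fire-alerts",
        Subject="Fire detected",
        Message=json.dumps({"camera": camera_id, "clip": key}),
    )

send_fire_alert(b"...", camera_id="lobby-cam-3")
```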

The researchers stated that this cloud-based framework ensures scalability and consistency across multiple camera networks.

Applications in firefighting and wildland response

NYU Tandon stated that the technology could be integrated into firefighting equipment, such as helmet-mounted cameras, vehicle cameras and autonomous robots.

It added that drones equipped with the system could provide 360-degree views during incidents, assisting fire services in locating fires in high-rise buildings or remote areas.

Capt. John Ceriello of the Fire Department of New York City said: “It can remotely assist us in confirming the location of the fire and possibility of trapped occupants.”

The researchers noted that the system could also support early wildfire detection, giving incident commanders more time to organise resources and evacuations.

Broader safety applications

Beyond fire detection, the NYU group explained that the same AI framework could be adapted for other safety scenarios, including medical emergencies and security threats.

It reported that the ensemble detection and IoT architecture provide a model for monitoring and alerting in multiple risk environments.

Relevance for fire and safety professionals

For fire and rescue services, the system demonstrates how existing CCTV infrastructure can be adapted for early fire detection without requiring new sensors.

For building managers, the research shows how AI video analysis could supplement or back up smoke alarms, particularly in settings where detector failure is a risk.

For wildland and urban response teams, the ability to embed the system into drones or helmet cameras may improve situational awareness and decision-making during fast-developing incidents.

AI system uses CCTV to detect fires in real time: Summary

The NYU Tandon School of Engineering Fire Research Group has reported an AI system that detects fires using CCTV cameras.

The research was published in the IEEE Internet of Things Journal.

The system processes video at 0.016 seconds per frame.

Scaled-YOLOv4 achieved 80.6% accuracy and EfficientDet achieved 78.1% accuracy.

False detections were reduced by tracking bounding box changes over time.

The dataset included 7,545 images covering all five NFPA fire classes.

Alerts are generated in real time through AWS cloud systems.

Applications include CCTV monitoring, drones, firefighter equipment and wildland detection.

The research suggests the same framework could support wider emergency monitoring.
