AI system detects fires before alarms sound, NYU study shows

NYU research introduces video-based fire detection

The NYU Tandon School of Engineering has reported that its Fire Research Group has developed an artificial intelligence system that can detect fires and smoke in real time using existing CCTV cameras.

According to NYU Tandon, the system analyses video frames within 0.016 seconds, faster than a human blink, and provides immediate alerts.

The researchers explained that conventional smoke alarms activate only once smoke has reached a sensor, whereas video analysis can recognise fire at an earlier stage.

Lead researcher Prabodh Panindre, Research Associate Professor at NYU Tandon’s Department of Mechanical and Aerospace Engineering, said: “The key advantage is speed and coverage.

“A single camera can monitor a much larger area than traditional detectors, and we can spot fires in the initial stages before they generate enough smoke to trigger conventional systems.”
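
To make the per-frame analysis concrete, the sketch below shows how a detection loop over an existing CCTV stream might look. The `detect` stub, stream URL and alert threshold are placeholders for illustration; the article does not describe the NYU group's actual code.

```python
# Illustrative sketch only: read frames from an existing CCTV stream and run a
# generic fire/smoke detector on each one. The detect() stub, stream URL and
# threshold are placeholders, not the NYU implementation.
import time

import cv2  # OpenCV handles RTSP streams and local camera devices


def detect(frame):
    """Stand-in for a trained fire/smoke detector.

    A real model would return a list of (label, confidence, (x1, y1, x2, y2)) tuples.
    """
    return []


def monitor(stream_url: str, alert_threshold: float = 0.5) -> None:
    cap = cv2.VideoCapture(stream_url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        start = time.perf_counter()
        detections = detect(frame)
        latency_ms = (time.perf_counter() - start) * 1000  # study reports ~16 ms per frame
        for label, confidence, box in detections:
            if label in ("fire", "smoke") and confidence >= alert_threshold:
                print(f"ALERT: {label} ({confidence:.2f}) at {box}, {latency_ms:.1f} ms")
    cap.release()


if __name__ == "__main__":
    monitor("rtsp://example.local/cctv-stream")  # placeholder address
```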

Ensemble AI approach improves accuracy

NYU Tandon explained that the system combines multiple AI models rather than relying on a single network.

It noted that this reduces the risk of false positives, such as mistaking a bright object for fire, and improves detection reliability across different environments.

The team reported that Scaled-YOLOv4 and EfficientDet models provided the best results, with detection accuracy rates above 78% and processing times under 0.02 seconds per frame.

By contrast, Faster-RCNN produced slower results and lower accuracy, making it less suitable for real-time IoT use.
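
A minimal sketch of the dual-model agreement idea, assuming each model returns (label, confidence, box) tuples and that agreement is judged by overlapping boxes with the same label; the overlap rule and threshold are illustrative assumptions, not the published ensemble logic.

```python
# Illustrative dual-model check: a detection counts only when both models report
# the same label on overlapping boxes. The IoU rule and threshold are assumptions
# for illustration, not the team's published ensemble logic.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0


def confirmed_detections(yolo_dets, effdet_dets, min_iou=0.3):
    """Keep only detections that both models agree on."""
    confirmed = []
    for label_a, conf_a, box_a in yolo_dets:
        for label_b, conf_b, box_b in effdet_dets:
            if label_a == label_b and iou(box_a, box_b) >= min_iou:
                confirmed.append((label_a, min(conf_a, conf_b), box_a))
                break
    return confirmed
```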

Dataset covers all NFPA fire classes

According to the NYU researchers, the system was trained on a custom dataset of more than 7,500 annotated images covering all five fire classes defined by the National Fire Protection Association.

The dataset included Class A, B, C, D and K fires, with scenarios ranging from wildfires to cooking incidents.

This approach allowed the AI to generalise across different ignition types, smoke colours, and fire growth patterns.

The team explained that bounding box tracking across frames helped differentiate live flames from static fire-like objects, achieving 92.6% accuracy in reducing false alarms.

Professor Sunil Kumar of NYU Abu Dhabi said: “Real fires are dynamic, growing and changing shape.

“Our system tracks these changes over time, achieving 92.6% accuracy in eliminating false detections.”
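
The temporal check can be sketched as a comparison of bounding box area across a short window of frames; the 10% growth threshold below is an assumed illustrative value, not a figure from the study.

```python
# Illustrative temporal check: live flames tend to grow across frames, while a
# poster or screen showing fire stays static. The 10% growth threshold is an
# assumed value for illustration, not a figure from the study.
def box_area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)


def is_dynamic(box_history, min_growth=1.10):
    """Return True if a tracked bounding box grew noticeably over the window of frames."""
    if len(box_history) < 2:
        return False
    first, last = box_area(box_history[0]), box_area(box_history[-1])
    return first > 0 and last / first >= min_growth
```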

Technical evaluation of detection models

NYU Tandon reported that it tested three leading object detection approaches: YOLO, EfficientDet and Faster-RCNN.

The group found that Scaled-YOLOv4 achieved the highest accuracy at 80.6% with an average detection time of 0.016 seconds per frame.

EfficientDet-D2 achieved 78.1% accuracy with a slightly slower response of 0.019 seconds per frame.

Faster-RCNN produced 67.8% accuracy and required 0.054 seconds per frame, making it less practical for high-throughput applications.

The researchers concluded that Scaled-YOLOv4 and EfficientDet-D2 offered the best balance of speed and reliability for real-world deployment.

Dataset preparation and training methods

The research team stated that it collected approximately 13,000 images, which were reduced to 7,545 after cleaning and annotation.

Each image was labelled with bounding boxes for fire and smoke, and the dataset was evenly distributed across the five NFPA fire classes.

The models were pre-trained on the Common Objects in Context (COCO) dataset before being fine-tuned on the fire dataset for hundreds of training epochs.

The team confirmed that anchor box calibration and hyperparameter tuning further improved YOLO model accuracy.

They reported that Scaled-YOLOv4 with custom training configurations provided the best results for dynamic fire detection.
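
Anchor box calibration for YOLO-family models is commonly done by clustering the width and height of annotated training boxes; the k-means sketch below illustrates that generic recipe and is an assumption about the team's procedure, which the article does not detail.

```python
# Generic anchor-box calibration: cluster the (width, height) of annotated training
# boxes with k-means and use the cluster centres as anchors, sorted by area. This is
# the standard YOLO-style recipe, shown as an assumption; the study's exact procedure
# is not detailed in the article.
import numpy as np
from sklearn.cluster import KMeans


def calibrate_anchors(boxes_wh: np.ndarray, n_anchors: int = 9) -> np.ndarray:
    """boxes_wh: array of shape (N, 2) with the width and height of each training box."""
    km = KMeans(n_clusters=n_anchors, n_init=10, random_state=0).fit(boxes_wh)
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors.prod(axis=1))]
```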

IoT cloud-based deployment

The researchers outlined that the system operates in a three-layer Internet of Things architecture.

CCTV cameras stream raw video to cloud servers where AI models analyse frames, confirm detections and send alerts.

Detection results trigger email and text notifications, including short video clips, using Amazon Web Services tools.

The group reported that the system processes frames in 0.022 seconds on average when both models confirm a fire or smoke event.

This design, they said, allows the system to run on existing “dumb” CCTV cameras without requiring new hardware.

Deployment framework and false alarm reduction

The NYU team explained that fire detections are validated only when both AI models agree and the bounding box area grows over time.

This approach distinguishes real flames from static images of fire, preventing common sources of false alerts.

The deployment is based on Amazon Web Services with EC2 instances handling video ingestion and GPU-based inference.

Results and metadata are stored in S3 buckets and notifications are sent through AWS SNS and SES channels.

The researchers stated that this cloud-based framework ensures scalability and consistency across multiple camera networks.
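
The alerting side of an S3/SNS/SES pipeline like the one described above might look roughly like the sketch below, using standard boto3 calls; bucket names, topic ARNs and addresses are placeholders, and this is not the researchers' deployment code.

```python
# Minimal sketch of the alerting side of an S3/SNS/SES pipeline like the one described
# above. Bucket names, topic ARNs and addresses are placeholders; this is illustrative,
# not the NYU deployment code.
import boto3


def send_alert(clip_bytes: bytes, camera_id: str, label: str) -> None:
    key = f"alerts/{camera_id}/latest.mp4"

    # Store the short confirmation clip.
    boto3.client("s3").put_object(Bucket="fire-alert-clips", Key=key, Body=clip_bytes)

    # Push/text notification.
    boto3.client("sns").publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:fire-alerts",
        Message=f"{label} detected on camera {camera_id}; clip at s3://fire-alert-clips/{key}",
    )

    # Email notification.
    boto3.client("ses").send_email(
        Source="alerts@example.org",
        Destination={"ToAddresses": ["duty-officer@example.org"]},
        Message={
            "Subject": {"Data": f"{label} detected on camera {camera_id}"},
            "Body": {"Text": {"Data": f"Review the clip at s3://fire-alert-clips/{key}"}},
        },
    )
```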

Applications in firefighting and wildland response

NYU Tandon stated that the technology could be integrated into firefighting equipment, such as helmet-mounted cameras, vehicle cameras and autonomous robots.

It added that drones equipped with the system could provide 360-degree views during incidents, assisting fire services in locating fires in high-rise buildings or remote areas.

Capt. John Ceriello of the Fire Department of New York City said: “It can remotely assist us in confirming the location of the fire and possibility of trapped occupants.”

The researchers noted that the system could also support early wildfire detection, giving incident commanders more time to organise resources and evacuations.

Broader safety applications

Beyond fire detection, the NYU group explained that the same AI framework could be adapted for other safety scenarios, including medical emergencies and security threats.

It reported that the ensemble detection and IoT architecture provide a model for monitoring and alerting in multiple risk environments.

Relevance for fire and safety professionals

For fire and rescue services, the system demonstrates how existing CCTV infrastructure can be adapted for early fire detection without requiring new sensors.

For building managers, the research shows how AI video analysis could supplement or back up smoke alarms, particularly in settings where detector failure is a risk.

For wildland and urban response teams, the ability to embed the system into drones or helmet cameras may improve situational awareness and decision-making during fast-developing incidents.

AI system uses CCTV to detect fires in real time: Summary

The NYU Tandon School of Engineering Fire Research Group has reported an AI system that detects fires using CCTV cameras.

The research was published in the IEEE Internet of Things Journal.

The system processes video at 0.016 seconds per frame.

Scaled-YOLOv4 achieved 80.6% accuracy and EfficientDet achieved 78.1% accuracy.

False detections were reduced by tracking bounding box changes over time.

The dataset included 7,545 images covering all five NFPA fire classes.

Alerts are generated in real time through AWS cloud systems.

Applications include CCTV monitoring, drones, firefighter equipment and wildland detection.

The research suggests the same framework could support wider emergency monitoring.





Arista touts liquid cooling, optical tech to reduce power consumption for AI networking


Both technologies will likely find a role in future AI and optical networks, experts say, as each promises to reduce power consumption and support improved bandwidth density. Each has trade-offs as well – co-packaged optics (CPO) are more complex to deploy given the amount of technology included in a CPO package, whereas linear pluggable optics (LPO) promise more simplicity.

Arista co-founder Andy Bechtolsheim said that LPO can provide an additional 20% power savings over other optical forms. Early tests show good receiver performance even under degraded conditions, though transmit paths remain sensitive to reflections and crosstalk at the connector level, Bechtolsheim added.

At the recent Hot Interconnects conference, he said: “The path to energy-efficient optics is constrained by high-volume manufacturing,” stressing that advanced optics packaging remains difficult and risky without proven production scale. 

“We are nonreligious about CPO, LPO, whatever it is. But we are religious about one thing, which is the ability to ship very high volumes in a very predictable fashion,” Bechtolsheim said at the investor event. “So, to put this in quantity numbers here, the industry expects to ship something like 50 million OSFP modules next calendar year. The current shipment rate of CPO is zero, okay? So going from zero to 50 million is just not possible. The supply chain doesn’t exist. So, even if the technology works and can be demonstrated in a lab, to get to the volume required to meet the needs of the industry is just an incredible effort.”

“We’re all in on liquid cooling to reduce power, eliminating fan power, supporting the linear pluggable optics to reduce power and cost, increasing rack density, which reduces data center footprint and related costs, and most importantly, optimizing these fabrics for the AI data center use case,” Bechtolsheim added.

“So what we call the ‘purpose-built AI data center fabric’ around Ethernet technology is to really optimize AI application performance, which is the ultimate measure for the customer in both the scale-up and the scale-out domains. Some of this includes full switch customization for customers. Other cases, it includes the power and cost optimization. But we have a large part of our hardware engineering department working on these things,” he said. 




Learning by Doing: AI, Knowledge Transfer, and the Future of Skills | American Enterprise Institute


In a recent blog, I discussed Stanford University economist Erik Brynjolfsson’s new study showing that young college graduates are struggling to gain a foothold in a job market shaped by artificial intelligence (AI). His analysis found that, since 2022, early-career workers in AI-exposed roles have seen employment growth lag 13 percent behind peers in less-exposed fields. At the same time, experienced workers in the same jobs have held steady or even gained ground. The conclusion: AI isn’t eliminating work outright, but it is affecting the entry-level rungs that young workers depend on as they begin climbing career ladders.

The potential consequences of these findings, assuming they bear out, become clearer when read alongside Enrique Ide’s recent paper, Automation, AI, and the Intergenerational Transmission of Knowledge. Ide argues that when firms automate entry-level tasks, new workers lose the chance to absorb tacit knowledge (the workplace norms and rhythms of team-based work that are rarely written down) from experienced colleagues. Productivity gains accrue to seasoned workers, while would-be novices lose the hands-on training they need to build a foundation for career progress.

This short-circuiting of early career experiences, Ide says, has macroeconomic consequences. He estimates that automating even five percent of entry-level tasks reduces long-run US output growth by 0.05 percentage points per year; at 30 percent automation, growth slows by more than 0.3 points. Over a hundred-year timeline, this would reduce total output by 20 percent relative to a world without AI automation. In other words: automating the bottom rungs might lift firms’ quarterly performance, but at the cost of generational growth.
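
As a rough way to read the per-year figures, a constant annual growth gap compounds over time; the sketch below is a back-of-the-envelope simplification (assuming the gap stays fixed), not Ide's model.

```latex
% Back-of-the-envelope compounding, assuming a constant annual growth gap \delta.
% With \delta = 0.0005 (0.05 percentage points), output ends a century roughly 5% lower.
\[
\frac{Y_T^{\text{slower}}}{Y_T^{\text{baseline}}}
  = \left(\frac{1 + g - \delta}{1 + g}\right)^{T}
  \approx e^{-\delta T},
\qquad
e^{-0.0005 \times 100} \approx 0.95 .
\]
```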

This is where we need to pause and take a breath. While Ide’s results sound dramatic, it is critical to remember that the dynamics and consequences of AI adoption are unpredictable, and that a century is a very long time. For instance, who would have said in 2022 that one of the first effects of AI automation would be to benefit less tech-savvy boomer and Gen-X managers and harm freshly minted Gen-Z coders?

Given the history of positive, automation-induced wealth and employment effects, why would this time be different? 

Finally, it’s important to remember that in a dynamic market-driven economy, skill requirements are always changing and firms are always searching for ways to improve their efficiency relative to competitors. This is doubly true as we enter the era of cognitive, as opposed to physical, automation. AI-driven automation is part of the pathway to a more prosperous economy and society for ourselves and for future generations. As my AEI colleague Jim Pethokoukis recently said, “A supposedly powerful general-purpose technology that left every firm’s labor demand utterly unchanged wouldn’t be much of a GPT.”  Said another way, unless AI disrupts our economy and lives, it cannot deliver its promised benefits.

What then should we do? I believe the most important step we can take right now is to begin “stress-testing” our current workforce development policies and programs and building scenarios for how industry and government would respond should significant AI-related job disruptions occur. Such scenario planning could be shaped into a flexible “playbook” of options, geared to the types and numbers of affected workers, to guide policymakers. Such planning didn’t occur prior to the automation and trade shocks of the 1990s and 2000s, with lasting consequences for factory workers and American society. We should try to make sure this doesn’t happen again with AI.

Pessimism is easy and cheap. We should resist the lure of social media-monetized AI doomerism and focus on building the future we want to see by preparing for and embracing change. 


