
Tools & Platforms

AI Intersection Monitoring Could Yield Safer Streets



In cities across the United States, an ambitious goal is gaining traction: Vision Zero, the strategy to eliminate all traffic fatalities and severe injuries. First implemented in Sweden in the 1990s, Vision Zero has already cut road deaths there by 50 percent from 2010 levels. Now, technology companies like Stop for Kids and Obvio.ai are trying to bring the results seen in Europe to U.S. streets with AI-powered camera systems designed to keep drivers honest, even when police aren’t around.

Local governments are turning to AI-powered cameras to monitor intersections and catch drivers who see stop signs as mere suggestions. The stakes are high: About half of all car accidents happen at intersections, and too many end in tragedy. By automating enforcement of rules against rolling stops, speeding, and failure to yield, these systems aim to change driver behavior for good. The carrot is safer roads and lower insurance rates; the stick is citations for those who break the law.

The Origins of Stop for Kids

Stop for Kids, based in Great Neck, N.Y., is one company leading the charge in residential areas and school zones. Co-founder and CEO Kamran Barelli was driven by personal tragedy: In 2018, his wife and three-year-old son were struck by an inattentive driver while crossing the street. “The impact launched them nearly 18 meters down the street, where they landed hard on the asphalt pavement,” Barelli says. Both survived, but the experience left him determined to find a solution.

He and his neighbors pressed the municipality to put up radar speed signs. But they turned out to be counterproductive. “Teenagers would race to see who could trigger the highest number,” Barelli says. “And extra police only worked until drivers texted each other to watch out.”

So Barelli and his brother, longtime software entrepreneurs, pivoted their tech business to develop an AI-enabled camera system that never takes a day off and can see in the dark. Installed at intersections, the cameras detect vehicles that fail to come to a full stop; then the system automatically issues citations. It uses AI to draw digital “bounding boxes” around vehicles to track their behavior without looking at faces or activities inside a car. If a driver stops properly, any footage is deleted immediately. Videos of violations, on the other hand, are stored securely and linked with DMV records to issue tickets to vehicle owners. The local municipality determines the amount of the fine.
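The detection logic described above — track each vehicle's bounding box, keep footage only if the car never comes to a full stop — can be sketched in a few lines. This is an illustrative reconstruction, not Stop for Kids' actual code; the frame rate and the speed threshold for "fully stopped" are assumptions.

```python
from dataclasses import dataclass, field

FPS = 30              # camera frame rate (assumed)
FULL_STOP_MPS = 0.3   # below this speed the vehicle counts as stopped (assumed)

@dataclass
class Track:
    """Bounding-box centroid positions (meters along the road), one per frame."""
    positions: list = field(default_factory=list)

def speeds(track):
    """Per-frame speed estimates from consecutive centroid positions."""
    p = track.positions
    return [abs(b - a) * FPS for a, b in zip(p, p[1:])]

def came_to_full_stop(track):
    """True if the vehicle's speed dropped to ~zero somewhere in the stop zone."""
    s = speeds(track)
    return bool(s) and min(s) < FULL_STOP_MPS

# A car that slows to a steady 1.5 m/s but never halts -> rolling stop.
rolling = Track(positions=[i * 0.05 for i in range(10)])
# A car whose centroid holds still for several frames -> compliant.
stopped = Track(positions=[0.5, 0.25, 0.1, 0.1, 0.1, 0.3])

print(came_to_full_stop(rolling))  # False -> violation clip is retained
print(came_to_full_stop(stopped))  # True  -> footage deleted immediately
```

In the real system this decision would feed the retention step: a `True` result triggers immediate deletion, a `False` result routes the clip to secure storage and the DMV lookup.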

Stop for Kids has already seen promising results. In a 2022 pilot of the tech in the Long Island town of Saddle Rock, N.Y., compliance with stop signs jumped from just 3 percent to 84 percent within 90 days of installing the cameras. Today, that figure stands at 94 percent, says Barelli. “The remaining 6 percent of non-compliance comes overwhelmingly from visitors to the area who aren’t aware that the cameras are in place.” Since then, the company has installed its camera systems in municipalities in New York and Florida, with a few cities in California up next.

In a Stop for Kids pilot project, cameras installed at intersections drastically improved drivers’ compliance with stop signs over three months.

Stop for Kids

Still, some experts say they’ll wait to pass judgment on the technology’s efficacy. “Those results are impressive,” says Daniel Schwarz, a senior privacy and technology strategist at the New York Civil Liberties Union (NYCLU). “But these marketing claims are rarely backed up by independent studies that validate what these AI technologies can really do.”

Privacy Issues in Automated Ticketing Systems

Privacy is a big concern for communities considering camera enforcement. In the Stop for Kids system, faces inside vehicles and in the rest of the scene are automatically blurred. Identifying images come only from an AI license plate reader. No personal DMV data is shared except with local authorities handling citations. The company has created an online evidence portal that allows vehicle owners to review footage and dispute tickets, helping ensure the system remains fair and transparent.

Watchdog groups remain unconvinced that this type of technology won’t be subject to mission creep. They warn that equipment originally introduced for the sympathetic goal of lowering traffic deaths may later be updated to do things well outside that scope.

“Expanding the overall goal of such a deployment is as simple as a software push,” says NYCLU’s Schwarz. “More functionalities could be introduced, additional features that raise more civil liberties concerns or present other dangers that perhaps the prior version did not.”

Obvio.ai’s Approach

Meanwhile, in San Carlos, Calif., another startup is taking a similar approach with its own twist. Founded in 2023, Obvio.ai has designed a solar-powered, AI-enabled camera system that mounts on utility poles and street lamps near intersections. Like Stop for Kids, Obvio’s system detects rolling stops, illegal turns, and failures to yield. But instead of automating the entire setup, local governments review potential infractions before any citations are issued, ensuring a human is always in the loop.

Obvio.ai co-founder and president Dhruv Maheshwari says the company’s cameras run on solar power and connect to its cloud server via 5G, making them easy to deploy without major construction. Obvio’s AI processor, installed on site with the camera, uses computer vision models to identify cars, bicycles, and pedestrians in real time. The system continuously streams footage but only stores clips when a violation is likely. Everything else is automatically deleted within hours to protect privacy. And, as with Stop for Kids’ tech, the cameras do not use facial recognition to identify drivers—just the vehicle’s license plate.

Last summer, Obvio.ai partnered with Maryland’s Prince George’s County for a pilot program across towns like Colmar Manor, Morningside, Bowie, and College Park. Within weeks, stop-sign violations were cut in half. In Bowie, local leaders avoided concerns about the camera system rollout being a “ticketing for profit” scheme by sending warning letters instead of fines during the trial period.

Vision Zero Is the Target

Though both Stop for Kids and Obvio.ai declined to offer specifics about where their cameras will appear next, Barelli told IEEE Spectrum that about 60 towns on Long Island, near where the company conducted its pilot, are interested. “They asked the state legislature to provide a clear framework governing what they can do with systems like ours,” Barelli says. “Right now, it’s being considered by the State Senate.”

“Ultimately, we hope our technology becomes obsolete,” says Maheshwari. “We want drivers to do the right thing, every time. If that means we don’t issue any tickets, that means zero revenue but complete success.”




Global movement to protect kids online fuels a wave of AI safety tech



Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to inappropriate content.

STR | Nurphoto via Getty Images

The global online safety movement has paved the way for a number of artificial intelligence-powered products designed to keep kids away from potentially harmful content on the internet.

In the U.K., a new piece of legislation called the Online Safety Act imposes a duty of care on tech companies to protect children from age-inappropriate material, hate speech, bullying, fraud, and child sexual abuse material (CSAM). Companies can face fines as high as 10% of their global annual revenue for breaches.

Further afield, landmark regulations aimed at keeping kids safer online are swiftly making their way through the U.S. Congress. One bill, known as the Kids Online Safety Act, would make social media platforms liable for preventing their products from harming children — similar to the Online Safety Act in the U.K.

This push from regulators is increasingly causing something of a rethink at several major tech players. Pornhub and other online pornography giants are blocking all users from accessing their sites unless they go through an age verification system.

Porn sites haven’t been alone in taking action to verify users’ ages, though. Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to sexually explicit or inappropriate materials.

Such regulatory measures have been met with criticisms from the tech industry — not least due to concerns that they may infringe internet users’ privacy.

Digital ID tech flourishing

At the heart of all these age verification measures is one company: Yoti.

Yoti produces technology that captures selfies and uses artificial intelligence to verify someone’s age based on their facial features. The firm says its AI algorithm, which has been trained on millions of faces, can estimate the age of 13- to 24-year-olds to within two years.
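An estimator with a stated two-year error band cannot be used as a bare threshold, or borderline minors would slip through. A common pattern for such systems is to require the estimate to clear the legal age by at least the error margin, falling back to document checks otherwise. The decision rule below is illustrative only, not Yoti's actual policy.

```python
ESTIMATOR_ERROR_YEARS = 2  # Yoti's stated accuracy band for 13- to 24-year-olds

def passes_age_gate(estimated_age: float, required_age: int) -> bool:
    """Allow access only when the estimate clears the threshold by more than
    the estimator's error band (illustrative policy, not Yoti's rule)."""
    return estimated_age >= required_age + ESTIMATOR_ERROR_YEARS

print(passes_age_gate(21.0, 18))  # True: 21 clears the 18 + 2 buffer
print(passes_age_gate(19.0, 18))  # False: inside the error band -> escalate to an ID check
```

The buffer trades convenience for safety: some adults near the threshold must verify with a document, but a 17-year-old misjudged as 19 is still blocked.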

The firm has previously partnered with the U.K.’s Post Office and is hoping to capitalize on the broader push for government-issued digital ID cards in the U.K. Yoti is not alone in the identity verification software space — other players include Entrust, Persona and iProov. However, the company has been the most prominent provider of age assurance services under the new U.K. regime.

“There is a race on for child safety technology and service providers to earn trust and confidence,” Pete Kenyon, a partner at law firm Cripps, told CNBC. “The new requirements have undoubtedly created a new marketplace and providers are scrambling to make their mark.”

Yet the rise of digital identification methods has also led to concerns over privacy infringements and possible data breaches.

“Substantial privacy issues arise with this technology being used,” said Kenyon. “Trust is key and will only be earned by the use of stringent and effective technical and governance procedures adopted in order to keep personal data safe.”

Rani Govender, policy manager for child safety online at British child protection charity NSPCC, said that the technology “already exists” to authenticate users without compromising their privacy.

“Tech companies must make deliberate, ethical choices by choosing solutions that protect children from harm without compromising the privacy of users,” she told CNBC. “The best technology doesn’t just tick boxes; it builds trust.”

Child-safe smartphones

The wave of new tech emerging to prevent children from being exposed to online harms isn’t just limited to software.

Earlier this month, Finnish phone maker HMD Global launched a new smartphone called the Fusion X1, which uses AI to stop kids from filming, sharing, or viewing nude or sexually explicit content across the camera, the screen, and all apps.

The phone uses technology developed by SafeToNet, a British cybersecurity firm focused on child safety.

Finnish phone maker HMD Global’s new smartphone uses AI to prevent children from being exposed to nude or sexually explicit images.

HMD Global

“We believe more needs to be done in this space,” James Robinson, vice president of family vertical at HMD, told CNBC. He stressed that HMD came up with the concept for children’s devices prior to the Online Safety Act entering into force, but noted it was “great to see the government taking greater steps.”

The release of HMD’s child-friendly phone follows heightened momentum in the “smartphone-free” movement, which encourages parents to avoid letting their children own a smartphone.

Going forward, the NSPCC’s Govender says that child safety will become a significant priority for digital behemoths such as Google and Meta.

The tech giants have for years been accused of worsening mental health in children and teens due to the rise of online bullying and social media addiction. The companies counter that they’ve taken steps to address these issues through increased parental controls and privacy features.

“For years, tech giants have stood by while harmful and illegal content spread across their platforms, leaving young people exposed and vulnerable,” she told CNBC. “That era of neglect must end.”





Meta to add new AI safeguards after report raises teen safety concerns




Meta is adding new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters.

A Reuters exclusive report earlier in August revealed how Meta allowed provocative chatbot behavior, including letting bots engage in “conversations that are romantic or sensual.”

Meta spokesperson Andy Stone said in an email on Friday that the company is taking these temporary steps while developing longer-term measures to ensure teens have safe, age-appropriate AI experiences.

Stone said the safeguards are already being rolled out and will be adjusted over time as the company refines its systems.

Meta’s AI policies came under intense scrutiny and backlash after the Reuters report.

U.S. Senator Josh Hawley launched a probe into the Facebook parent’s AI policies earlier this month, demanding documents on rules that allowed its chatbots to interact inappropriately with minors.

Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document which was first reviewed by Reuters.

Meta had confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions that stated it was permissible for chatbots to flirt and engage in romantic role play with children.

“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone said earlier this month.


