
Tools & Platforms

AI Intersection Monitoring Could Yield Safer Streets


In cities across the United States, an ambitious goal is gaining traction: Vision Zero, the strategy to eliminate all traffic fatalities and severe injuries. First implemented in Sweden in the 1990s, Vision Zero has already cut road deaths there by 50 percent from 2010 levels. Now, technology companies like Stop for Kids and Obvio.ai are trying to bring the results seen in Europe to U.S. streets with AI-powered camera systems designed to keep drivers honest, even when police aren’t around.

Local governments are turning to AI-powered cameras to monitor intersections and catch drivers who see stop signs as mere suggestions. The stakes are high: About half of all car accidents happen at intersections, and too many end in tragedy. By automating enforcement of rules against rolling stops, speeding, and failure to yield, these systems aim to change driver behavior for good. The carrot is safer roads and lower insurance rates; the stick is citations for those who break the law.

The Origins of Stop for Kids

Stop for Kids, based in Great Neck, N.Y., is one company leading the charge in residential areas and school zones. Co-founder and CEO Kamran Barelli was driven by personal tragedy: In 2018, his wife and three-year-old son were struck by an inattentive driver while crossing the street. “The impact launched them nearly 18 meters down the street, where they landed hard on the asphalt pavement,” Barelli says. Both survived, but the experience left him determined to find a solution.

He and his neighbors pressed the municipality to put up radar speed signs. But they turned out to be counterproductive. “Teenagers would race to see who could trigger the highest number,” Barelli says. “And extra police only worked until drivers texted each other to watch out.”

So Barelli and his brother, longtime software entrepreneurs, pivoted their tech business to develop an AI-enabled camera system that never takes a day off and can see in the dark. Installed at intersections, the cameras detect vehicles that fail to come to a full stop; then the system automatically issues citations. It uses AI to draw digital “bounding boxes” around vehicles to track their behavior without looking at faces or activities inside a car. If a driver stops properly, any footage is deleted immediately. Videos of violations, on the other hand, are stored securely and linked with DMV records to issue tickets to vehicle owners. The local municipality determines the amount of the fine.
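The decision logic described here can be sketched in a few lines. Everything below is a hypothetical reconstruction for illustration: the speed threshold, the required stop duration, and all names are assumptions, not Stop for Kids' actual parameters or code.

```python
from dataclasses import dataclass, field

FULL_STOP_SPEED = 0.1   # metres/second below which a vehicle counts as stopped (assumed)
MIN_STOP_SECONDS = 1.0  # how long the vehicle must remain at rest (assumed)

@dataclass
class Track:
    """Per-vehicle track derived from bounding-box detections, one sample per frame."""
    samples: list = field(default_factory=list)  # (timestamp_s, speed_mps) pairs

    def came_to_full_stop(self) -> bool:
        stop_started = None
        for t, speed in self.samples:
            if speed <= FULL_STOP_SPEED:
                if stop_started is None:
                    stop_started = t
                if t - stop_started >= MIN_STOP_SECONDS:
                    return True
            else:
                stop_started = None  # vehicle moved again; reset the stop timer
        return False

def process_track(track: Track, footage: list) -> str:
    """Delete footage on a proper stop; retain it only when a violation is detected."""
    if track.came_to_full_stop():
        footage.clear()       # compliant: footage is discarded immediately
        return "compliant"
    return "violation"        # footage kept for the citation workflow
```

A track whose speed never stays below the threshold long enough (a rolling stop) is flagged and its clip retained; a proper stop triggers immediate deletion of the buffered footage.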

Stop for Kids has already seen promising results. In a 2022 pilot of the tech in the Long Island town of Saddle Rock, N.Y., compliance with stop signs jumped from just 3 percent to 84 percent within 90 days of installing the cameras. Today, that figure stands at 94 percent, says Barelli. “The remaining 6 percent of non-compliance comes overwhelmingly from visitors to the area who aren’t aware that the cameras are in place.” Since then, the company has installed its camera systems in municipalities in New York and Florida, with a few cities in California up next.

  In a Stop for Kids pilot project, cameras installed at intersections drastically improved drivers’ compliance with stop signs over three months. (Image: Stop for Kids)

Still, some experts say they’ll wait to pass judgment on the technology’s efficacy. “Those results are impressive,” says Daniel Schwarz, a senior privacy and technology strategist at the New York Civil Liberties Union (NYCLU). “But these marketing claims are rarely backed up by independent studies that validate what these AI technologies can really do.”

Privacy Issues in Automated Ticketing Systems

Privacy is a big concern for communities considering camera enforcement. In the Stop for Kids system, faces inside vehicles and in the rest of the scene are automatically blurred. Identifying images come only from an AI license plate reader. No personal DMV data is shared except with local authorities handling citations. The company has created an online evidence portal that allows vehicle owners to review footage and dispute tickets, helping ensure the system remains fair and transparent.

Watchdog groups are not convinced that this type of technology will be immune to mission creep: gear originally introduced to help reach the sympathetic goal of lowering traffic deaths may later be updated to do things well outside that scope.

“Expanding the overall goal of such a deployment is as simple as a software push,” says the NYCLU’s Schwarz. “More functionalities could be introduced, additional features that raise more civil liberties concerns or present other dangers that perhaps the prior version did not.”

Obvio.ai’s Approach

Meanwhile, in San Carlos, Calif., another startup is taking a similar approach with its own twist. Founded in 2023, Obvio.ai has designed a solar-powered, AI-enabled camera system that mounts on utility poles and street lamps near intersections. Like Stop for Kids, Obvio’s system detects rolling stops, illegal turns, and failures to yield. But instead of automating the entire setup, local governments review potential infractions before any citations are issued, ensuring a human is always in the loop.

Obvio.ai co-founder and president Dhruv Maheshwari says the company’s cameras run on solar power and connect to its cloud server via 5G, making them easy to deploy without major construction. Obvio’s AI processor, installed on site with the camera, uses computer vision models to identify cars, bicycles, and pedestrians in real time. The system continuously streams footage but only stores clips when a violation is likely. Everything else is automatically deleted within hours to protect privacy. And, as with Stop for Kids’ tech, the cameras do not use facial recognition to identify drivers—just the vehicle’s license plate.
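The "stream continuously, store only likely violations" pattern Maheshwari describes can be sketched with a ring buffer. The buffer length and retention window below are assumptions for illustration, not Obvio.ai's actual values.

```python
import time
from collections import deque

class ClipStore:
    """Toy model: frames cycle through a fixed ring buffer; only clips around a
    likely violation are persisted, and those expire after a retention window."""

    def __init__(self, buffer_frames=120, retention_s=2 * 3600):
        self.ring = deque(maxlen=buffer_frames)  # recent frames, overwritten continuously
        self.stored = []                         # (expiry_time, clip) for likely violations
        self.retention_s = retention_s

    def push(self, frame, violation_likely: bool, now=None):
        now = time.time() if now is None else now
        self.ring.append(frame)
        if violation_likely:
            # Persist a copy of the buffered clip; all other footage never
            # leaves the ring buffer and is overwritten as new frames arrive.
            self.stored.append((now + self.retention_s, list(self.ring)))

    def purge_expired(self, now=None):
        """Drop stored clips older than the retention window ("within hours")."""
        now = time.time() if now is None else now
        self.stored = [(exp, clip) for exp, clip in self.stored if exp > now]
```

The design point is that non-violation footage is never written to durable storage at all, which is a stronger privacy posture than storing everything and deleting later.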

Last summer, Obvio.ai partnered with Maryland’s Prince George’s County for a pilot program across towns like Colmar Manor, Morningside, Bowie, and College Park. Within weeks, stop-sign violations were cut in half. In Bowie, local leaders avoided concerns about the camera system rollout being a “ticketing for profit” scheme by sending warning letters instead of fines during the trial period.

Vision Zero Is the Target

Though both Stop for Kids and Obvio.ai declined to offer specifics about where their cameras will appear next, Barelli told IEEE Spectrum that about 60 towns on Long Island, near where Stop for Kids conducted its pilot, are interested. “They asked the state legislature to provide a clear framework governing what they can do with systems like ours,” Barelli says. “Right now, it’s being considered by the State Senate.”

“Ultimately, we hope our technology becomes obsolete,” says Maheshwari. “We want drivers to do the right thing, every time. If that means we don’t issue any tickets, that means zero revenue but complete success.”


Tools & Platforms



“In the past, artificial intelligence (AI) implementation required all the data to be gathered in one place. NetApp provides technology that enables AI to run right on scattered data.”

The competitive edge chosen by NetApp, a global storage company that competes with Dell and Hitachi, is software for AI rather than hardware. After all, how does a storage company differentiate itself beyond the equipment used to store data?

Yoo Jae-sung, CEO of Korea NetApp, recently met with Maeil Business and emphasized, “NetApp is a solution that allows data to be accessed and managed quickly under any conditions, whether in a cloud or an on-premises environment.”

What he introduced is ‘ONTAP’, a storage operating system (OS) developed by NetApp. It provides a single view not only of data stored on NetApp storage, but also of data in cloud and on-premises environments such as Amazon Web Services (AWS) and Microsoft (MS) Azure, and it lets that data move freely between them. For example, ONTAP enables companies to transfer data generated in their own environment to cloud platforms such as AWS for AI training.
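The single-view idea can be illustrated with a toy catalog that tracks where each dataset lives and relocates it between environments. This is a sketch of the concept only, not ONTAP's actual API; every name here is invented.

```python
class DataCatalog:
    """Toy single-pane view of datasets spread across environments."""

    def __init__(self):
        self.locations = {}  # dataset name -> "on-prem" | "aws" | "azure"

    def register(self, name: str, location: str):
        self.locations[name] = location

    def view(self) -> dict:
        """Identify data in every environment in one place."""
        return dict(self.locations)

    def move(self, name: str, destination: str) -> str:
        """Relocate a dataset, e.g. on-prem sensor data to AWS for AI training."""
        self.locations[name] = destination
        return destination
```

The value of such a layer is that training pipelines address datasets by name, while the catalog decides (and can change) where the bytes actually live.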

When asked how this differs from simply storing all data in the cloud from the beginning, CEO Yoo said, “You can start managing data in the cloud, but depending on the situation, you may have to move it to an on-premises environment instead. In some cases, very sensitive data is difficult to store in the cloud at all. The two environments complement each other.”

Meanwhile, as the importance of data grows, cyberattacks targeting that data are also increasing. Preparing for data-targeting attacks such as ransomware is another challenge for storage companies. NetApp is focusing on upgrading the AI-based ransomware detection built into ONTAP.

Because it is a solution that supports data management, it learns patterns while monitoring all data entering a company’s storage; when suspicious data is found, it captures that point in time so the data can be rewound and restored, like scrubbing back through a movie. CEO Yoo said, “What is important in security is zero trust,” and emphasized, “Since even internal users should not be blindly trusted with data movement, a user can also be blocked immediately when a problem occurs.”
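The article does not say how NetApp's detection works internally. One widely used anomaly signal, shown here purely as an illustration, is the byte entropy of incoming writes (ransomware-encrypted data looks like random bytes), paired with a point-in-time snapshot so the volume can be rolled back to just before the suspicious activity. The threshold and the snapshot model below are invented for this sketch.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed data approaches the maximum of 8.0."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

class Volume:
    """Toy volume that snapshots itself before any suspiciously random-looking write."""

    def __init__(self, entropy_threshold: float = 7.5):
        self.blocks = {}          # key -> bytes currently on the volume
        self.snapshots = []       # point-in-time copies taken before suspect writes
        self.entropy_threshold = entropy_threshold

    def write(self, key: str, data: bytes):
        if shannon_entropy(data) > self.entropy_threshold:
            # Suspicious high-entropy write: capture this moment first, so the
            # data can be "rewound" to just before the attack.
            self.snapshots.append(dict(self.blocks))
        self.blocks[key] = data

    def restore_latest(self):
        """Roll the volume back to the most recent pre-attack snapshot."""
        if self.snapshots:
            self.blocks = dict(self.snapshots[-1])
```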

CEO Yoo was appointed to lead Korea NetApp in January of this year and has now headed the company for half a year. He started as a sales representative at MS Korea and worked his way up to CEO, and he is an expert with experience at various global information technology (IT) companies, including VMware as well as MS.

The areas CEO Yoo is focusing on this year are the public-sector and financial markets. NetApp has already secured big customers in Korea, including domestic telecommunications companies and major conglomerates such as Shinhan Financial Group. “Most major Korean companies are NetApp customers, but we have been relatively weak in the public and financial sectors,” he said. “We plan to invest more in this field in the future.”

[Reporter Jeong Hojun]




Tools & Platforms

RACGP releases new AI guidance






A new resource guides GPs through the practicalities of using conversational AI in their consults, how the new technology works, and what risks to be aware of.



AI is an emerging space in general practice, with more than half of GPs not familiar with specific AI tools.



Artificial intelligence (AI) is becoming increasingly relevant in healthcare, but at least 80% of GPs have reported that they are not at all, or not very, familiar with specific AI tools.

 

To help GPs broaden their understanding of the technology, and weigh up the potential advantages and disadvantages of its use in their practice, the RACGP has unveiled a comprehensive new resource focused on conversational AI.  

 

Unlike AI scribes, which convert a conversation with a patient into a clinical note that can be incorporated into a patient’s health record, conversational AI is technology that enables machines to interpret, process, and respond to human language in a natural way.

 

Examples include AI-powered chatbots and virtual assistants that can support patient interactions, streamline appointment scheduling, and automate routine administrative tasks.

 

The college resource offers further practical guidance on how conversational AI can be applied effectively in general practice and highlights key applications. These include:

  • answering patient questions about their diagnosis and the potential side effects of prescribed medicines, and simplifying jargon in medical reports
  • providing treatment/medication reminders and dosage instructions
  • providing language translation services
  • guiding patients to appropriate resources
  • supporting patients to track and monitor blood pressure, blood sugar, or other health markers
  • triaging patients prior to a consultation
  • preparing medical documentation such as clinical letters, clinical notes and discharge summaries
  • providing clinical decision support by preparing lists of differential diagnoses, supporting diagnosis, and optimising clinical decision support tools (for investigation and treatment options)
  • suggesting treatment options and lifestyle recommendations.

Dr Rob Hosking, Chair of the RACGP’s Practice and Technology Management Expert Committee, told newsGP there are several potential advantages to these tools in general practice.
 
‘Some of the potential benefits include task automation, reduced administrative burden, improved access to care and personalised health education for patients,’ he said.
 
Beyond the clinical setting, conversational AI tools can also have a range of business, educational and research applications, such as automating billing and analysing billing data, summarising the medical literature and answering clinicians’ medical questions.
 
However, while there are a number of benefits, Dr Hosking says it is important to consider some of the potential disadvantages to its use as well.
 
‘Conversational AI tools can provide responses that appear authoritative but on review are vague, misleading, or even incorrect,’ he explained.
 
‘Biases are inherent to the data on which AI tools are trained, and as such, particular patient groups are likely to be underrepresented in the data.
 
‘There is a risk that conversational AI will make unsuitable and even discriminatory recommendations, rely on harmful and inaccurate stereotypes, and/or exclude or stigmatise already marginalised and vulnerable individuals.’
 
While some conversational AI tools are designed for medical use, such as Google’s MedPaLM and Microsoft’s BioGPT, Dr Hosking pointed out that most are designed for general applications and not trained to produce a result within a clinical context.
 
‘The data these general tools are trained on are not necessarily up-to-date or from high-quality sources, such as medical research,’ he said.
 
The college resource addresses these potential problems, as well as other ethical and privacy considerations that come with using AI in healthcare.
 
For GPs deciding whether to use conversational AI, Dr Hosking notes that there are a number of considerations to ensure the delivery of safe, high-quality care, and says that patients should play a key role in deciding whether it is used in their specific consultation.
 
‘GPs should involve patients in the decision to use AI tools and obtain informed patient consent when using patient-facing AI tools,’ he said.
 
‘Also, do not input sensitive or identifying data.’
 
However, before conversational AI is brought into practice workflows, the RACGP recommends GPs are trained on how to use it safely, including knowledge around the risks and limitations of the tool, and how and where data is stored.
 
‘GPs must ensure that the use of the conversational AI tool complies with relevant legislation and regulations, as well as any practice policies and professional indemnity insurance requirements that might impact, prohibit or govern its use,’ the college resource states.
 
‘It is also worth considering that conversational AI tools designed specifically by, and for use by, medical practitioners are likely to provide more accurate and reliable information than that of general, open-use tools.
 
‘These tools should be TGA-registered as medical devices if they make diagnostic or treatment recommendations.’
 
While the college recognises that conversational AI could revolutionise parts of healthcare delivery, it recommends that GPs be ‘extremely careful’ in using the technology at this time.
 
‘Many questions remain about patient safety, patient privacy, data security, and impacts for clinical outcomes,’ the college said.
 
Dr Hosking, who has yet to implement conversational AI tools in his own clinical practice, shared the sentiment.
 
‘AI will continue to evolve and really could make a huge difference in patient outcomes and time savings for GPs,’ he said.
 
‘But it will never replace the important role of the doctor-patient relationship. We need to ensure AI does not create health inequities through inbuilt biases.
 
‘This will help GPs weigh up the potential advantages and disadvantages of using conversational AI in their practice and inform of the risks associated with these tools.’
 







