
Tools & Platforms

AI Intersection Monitoring Could Yield Safer Streets


In cities across the United States, an ambitious goal is gaining traction: Vision Zero, the strategy to eliminate all traffic fatalities and severe injuries. First implemented in Sweden in the 1990s, Vision Zero has already cut road deaths there by 50 percent from 2010 levels. Now, technology companies like Stop for Kids and Obvio.ai are trying to bring the results seen in Europe to U.S. streets with AI-powered camera systems designed to keep drivers honest, even when police aren’t around.

Local governments are turning to AI-powered cameras to monitor intersections and catch drivers who see stop signs as mere suggestions. The stakes are high: About half of all car accidents happen at intersections, and too many end in tragedy. By automating enforcement of rules against rolling stops, speeding, and failure to yield, these systems aim to change driver behavior for good. The carrot is safer roads and lower insurance rates; the stick is citations for those who break the law.

The Origins of Stop for Kids

Stop for Kids, based in Great Neck, N.Y., is one company leading the charge in residential areas and school zones. Co-founder and CEO Kamran Barelli was driven by personal tragedy: In 2018, his wife and three-year-old son were struck by an inattentive driver while crossing the street. “The impact launched them nearly 18 meters down the street, where they landed hard on the asphalt pavement,” Barelli says. Both survived, but the experience left him determined to find a solution.

He and his neighbors pressed the municipality to put up radar speed signs. But they turned out to be counterproductive. “Teenagers would race to see who could trigger the highest number,” Barelli says. “And extra police only worked until drivers texted each other to watch out.”

So Barelli and his brother, longtime software entrepreneurs, pivoted their tech business to develop an AI-enabled camera system that never takes a day off and can see in the dark. Installed at intersections, the cameras detect vehicles that fail to come to a full stop; then the system automatically issues citations. It uses AI to draw digital “bounding boxes” around vehicles to track their behavior without looking at faces or activities inside a car. If a driver stops properly, any footage is deleted immediately. Videos of violations, on the other hand, are stored securely and linked with DMV records to issue tickets to vehicle owners. The local municipality determines the amount of the fine.
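The decision logic Barelli describes (track a vehicle, check for a full stop, then either delete the footage or escalate to a citation) can be sketched in a few lines of Python. The speed threshold, field names, and DMV lookup below are illustrative assumptions, not details of Stop for Kids' actual system:

```python
from dataclasses import dataclass

FULL_STOP_MPS = 0.3  # assumed threshold: below this speed, the car counts as stopped


@dataclass
class Track:
    plate: str         # read by the license plate reader
    speeds_mps: list   # speeds sampled while the bounding box crosses the stop zone


def process_track(track, evidence_store, dmv_lookup):
    """Delete footage after a proper stop; store it and cite the owner otherwise."""
    if min(track.speeds_mps) <= FULL_STOP_MPS:
        return {"action": "delete_footage"}   # compliant: nothing is retained
    evidence_store.append(track)              # violation: clip is kept securely
    return {"action": "cite", "owner": dmv_lookup(track.plate)}
```

The key privacy property is structural: footage of compliant drivers never reaches storage at all.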

Stop for Kids has already seen promising results. In a 2022 pilot of the tech in the Long Island town of Saddle Rock, N.Y., compliance with stop signs jumped from just 3 percent to 84 percent within 90 days of installing the cameras. Today, that figure stands at 94 percent, says Barelli. “The remaining 6 percent of non-compliance comes overwhelmingly from visitors to the area who aren’t aware that the cameras are in place.” Since then, the company has installed its camera systems in municipalities in New York and Florida, with a few cities in California up next.

In a Stop for Kids pilot project, cameras installed at intersections drastically improved drivers’ compliance with stop signs over three months. Stop for Kids

Still, some experts say they’ll wait to pass judgment on the technology’s efficacy. “Those results are impressive,” says Daniel Schwarz, a senior privacy and technology strategist at the New York Civil Liberties Union (NYCLU). “But these marketing claims are rarely backed up by independent studies that validate what these AI technologies can really do.”

Privacy Issues in Automated Ticketing Systems

Privacy is a big concern for communities considering camera enforcement. In the Stop for Kids system, faces inside vehicles and in the rest of the scene are automatically blurred. Identifying images come only from an AI license plate reader. No personal DMV data is shared except with local authorities handling citations. The company has created an online evidence portal that allows vehicle owners to review footage and dispute tickets, helping ensure the system remains fair and transparent.

Watchdog groups worry that this type of technology will be subject to mission creep. They say that gear originally introduced to help reach the sympathetic goal of lowering traffic deaths may later be updated to do things outside that scope.

“Expanding the overall goal of such a deployment is as simple as a software push,” says NYCLU’s Schwarz. “More functionalities could be introduced, additional features that raise more civil liberties concerns or present other dangers that perhaps the prior version did not.”

Obvio.ai’s Approach

Meanwhile, in San Carlos, Calif., another startup is taking a similar approach with its own twist. Founded in 2023, Obvio.ai has designed a solar-powered, AI-enabled camera system that mounts on utility poles and street lamps near intersections. Like Stop for Kids, Obvio’s system detects rolling stops, illegal turns, and failures to yield. But instead of automating the entire setup, local governments review potential infractions before any citations are issued, ensuring a human is always in the loop.

Obvio.ai co-founder and president Dhruv Maheshwari says the company’s cameras run on solar power and connect to its cloud server via 5G, making them easy to deploy without major construction. Obvio’s AI processor, installed on site with the camera, uses computer vision models to identify cars, bicycles, and pedestrians in real time. The system continuously streams footage but only stores clips when a violation is likely. Everything else is automatically deleted within hours to protect privacy. And, as with Stop for Kids’ tech, the cameras do not use facial recognition to identify drivers—just the vehicle’s license plate.
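A retention policy like the one Maheshwari describes, keep clips flagged as likely violations and age everything else out within hours, might look like this in outline. The six-hour window and clip fields are assumptions for illustration, not Obvio's actual parameters:

```python
import time

RETENTION_SECONDS = 6 * 3600  # assumed window for the "deleted within hours" policy


def prune(clips, now=None):
    """Keep likely-violation clips; drop all other footage once it ages out."""
    now = time.time() if now is None else now
    return [c for c in clips
            if c["likely_violation"] or now - c["recorded_at"] < RETENTION_SECONDS]
```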

Last summer, Obvio.ai partnered with Maryland’s Prince George’s County for a pilot program across towns like Colmar Manor, Morningside, Bowie, and College Park. Within weeks, stop-sign violations were cut in half. In Bowie, local leaders avoided concerns about the camera system rollout being a “ticketing for profit” scheme by sending warning letters instead of fines during the trial period.

Vision Zero Is the Target

Though both Stop for Kids and Obvio.ai declined to offer any specifics about where their cameras will appear next, Barelli told IEEE Spectrum that about 60 towns on Long Island, near where the company conducted its pilot, are interested. “They asked the state legislature to provide a clear framework governing what they can do with systems like ours,” Barelli says. “Right now, it’s being considered by the State Senate.”

“Ultimately, we hope our technology becomes obsolete,” says Maheshwari. “We want drivers to do the right thing, every time. If that means we don’t issue any tickets, that means zero revenue but complete success.”



Searching for boundaries in the AI jungle



Stamatis Gatirdakis, co-founder and president of the Ethikon Institute, still remembers the first time he used ChatGPT. It was the fall of 2022 and a fellow student in the Netherlands sent him the link to try it out. “It made a lot of mistakes back then, but I saw how it was improving at an incredible rate. From the very first tests, I felt that it would change the world,” he tells Kathimerini. Of course, he also identified some issues, mainly legal and ethical, that could arise early on, and last year, realizing that there was no private entity that dealt exclusively with the ethical dimension of artificial intelligence, he decided to take the initiative.

He initially turned to his friends, young lawyers like him, engineers and programmers with similar concerns. “In the early days, we would meet after work, discussing ideas about what we could do,” recalls Maria Voukelatou, executive director at Ethikon and lawyer specialized in technology law and IP matters. Her master’s degree, which she earned in the Netherlands in 2019, was on the ethics and regulatory aspects of new technologies. “At that time, the European Union’s white paper on artificial intelligence had just been released, which was a first, hesitant step. But even though technology is changing rapidly, the basic ethical dilemmas and how we legislate remain constant. The issue is managing to balance innovation with citizen protection,” she explains.

Together with three other Greeks (Apostolos Spanos, Michael Manis and Nikos Vadivoulis), they made up the institute’s founding team, and sought out colleagues abroad with experience in these issues. Thus, Ethikon was created – a nonprofit company that does not provide legal services, but implements educational, research and social awareness actions on artificial intelligence.

Stamatis Gatirdakis, co-founder and president of the Ethikon Institute.

Copyrights

One of the first issues they addressed was copyright. “In order not to stop the progress of technology, exceptions were initially made so that these generative artificial intelligence models could use online content for training purposes, without citing the source or compensating the creators,” explains Gatirdakis, adding that this resulted in copyright being sidelined. “The battle between creators and the big tech giants has been lost. But because the companies don’t want the creators against them, they have started making commercial agreements, whereby every time creators’ data is used to produce answers, they receive percentages based on a calculated model.”

Beyond compensation, another key question arises: Who is ultimately the creator of a work produced through artificial intelligence? “There are already conflicting court decisions. In the US, they argue that artificial intelligence cannot produce an ‘original’ work and that the work belongs to the search engine companies,” says Voukelatou. A typical example is the comic book, ‘Zarya of the Dawn,’ authored by artist and artificial intelligence (AI) consultant Kris Kashtanova, with images generated through the AI platform Midjourney. The US Copyright Office rejected the copyright application for the images in her book when it learned that they were created exclusively by artificial intelligence. On the contrary, in China, in corresponding cases, they ruled that because the user gives the exact instructions, he or she is the creator.

Personal data

Another crucial issue is the protection of personal data. “When we upload notes or files, what happens to all this content? Does the algorithm learn from them? Does it use them elsewhere? Presumably not, but there are still no safeguards. There is no case law, nor a clear regulatory framework,” says Voukelatou, who mentions the loopholes that companies exploit to overcome obstacles with personal data. “Like the application that transforms your image into a cartoon in the style of the famous Studio Ghibli. Millions of users gave consent for their image to be processed and so this data entered the systems and trained the models. If a similar image is subsequently produced, it no longer belongs to the person who first uploaded it. And this part is legally unregulated.”

The problem, they explain, is that the development of these technologies is mainly taking place in the United States and China, which means that Europe remains on the sidelines of a meaningful discussion. The EU regulation on artificial intelligence (AI Act), which entered into force in the summer of 2024, is the first serious attempt to set a regulatory framework. Members of Ethikon participated in the consultation on the regulation and specifically focused on the categorization of artificial intelligence applications based on the level of risk. “We supported with examples the prohibition of practices such as ‘social scoring’ adopted by China, where citizens are evaluated in real time through surveillance cameras. This approach was incorporated and the regulation explicitly prohibits such practices,” says Gatirdakis, who participated in the consultation.

“The final text sets obligations and rules. It also provides for strict fines depending on turnover. However, we are in a transition period and we are all waiting for further guidelines from the European Union. It is expected to be fully applicable in the summer of 2026. However, there are already delays in the timetable and in the establishment of the supervisory authorities,” the two experts said.

Maria Voukelatou, executive director at Ethikon and lawyer specialized in technology law and IP matters.

The team’s activities

Beyond consultation, the Ethikon team is already developing a series of actions to raise awareness among users, whether they are business executives or students growing up with artificial intelligence. The team’s executives created a comic inspired by the Antikythera Mechanism that explains in a simple way the possibilities but also the dangers of this new technology. They also developed a generative AI engine based exclusively on sources from scientific libraries – however, its use is expensive and they are currently limiting it to pilot educational actions. They recently organized a conference in collaboration with the Laskaridis Foundation and published an academic article on March 29 exploring the legal framework for strengthening copyright.

In the article, titled “Who Owns the Output? Bridging Law and Technology in LLMs Attribution,” they analyze, among other things, the specific tools and techniques that allow the detection of content generated by artificial intelligence and its connection to the data used to train the model or the user who created it. “For example, a digital signature can be embedded in texts, images or videos generated by AI, invisible to the user, but recognizable with specific tools,” they explain.
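As a toy illustration of the embedding idea (not the technique described in the article, and far less robust than production provenance schemes), a short tag can be hidden in text using zero-width Unicode characters that are invisible to the reader but recoverable with a tool:

```python
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner


def embed(text, tag):
    """Append `tag` to `text` encoded as invisible zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)


def extract(text):
    """Recover the hidden tag; visible characters are ignored."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
```

A watermark like this is trivially stripped by re-typing the text, which is why the statistical and cryptographic methods the article alludes to matter in practice.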

The Ethikon team has already begun writing a second – more technical – academic article, while closely monitoring technological developments internationally. “In 2026, we believe that we will be much more concerned with the energy and environmental footprint of artificial intelligence,” says Gatirdakis. “Training and operating models requires enormous computing power, resulting in excessively high energy and water consumption for cooling data centers. The concern is not only technical or academic – it touches the core of the ethical development of artificial intelligence. How do we balance innovation with sustainability?” At the same time, he explains, serious issues of truth management and security have already arisen. “We are entering a period where we will not be able to easily distinguish whether what we see or hear is real or fabricated,” he continues.

In some countries, the adoption of technology is happening at breakneck speed. In the United Arab Emirates, an artificial intelligence system has been developed that drafts laws and monitors their implementation. At the same time, OpenAI announced a partnership with the iPhone designer to launch a new device that integrates artificial intelligence with voice, visual and personal interaction in late 2026. “A new era seems to be approaching, in which artificial intelligence will be present not only on our screens but also in the natural environment.”






How to start a career in the age of AI – Computerworld


AI will boost the value of human creativity in financial services, says AWS



shomos uddin/Getty Images

Financial services firms are making early gains from artificial intelligence (AI), which is not surprising given that finance is historically an industry that embraces new technologies aggressively.


One surprising outcome is that AI might end up making the most critical functions of banking, insurance, and trading, the creative functions that require human insight, even more valuable.

“What happens is there’s going to be a premium on creativity and judgment that goes into the process,” said John Kain, who is head of market development efforts in financial services for AWS, in an interview with ZDNET via Zoom. 


By process, he meant those areas that are most advanced, and presumably hardest to automate, such as a bank’s risk calculations.

John Kain, head of market development for financial services at AWS. Amazon AWS

“So much of what’s undifferentiated will be automated,” said Kain. “But what that means is what actually differentiates the business and the ability to serve customers better, whether that’s better understanding products or risk, or coming up with new products, from a financial perspective, the pace of that will just go so much more quickly in the future.”

Amazon formed its financial services unit 10 years ago, the first time the cloud giant organized a team around a specific industry.

For eight years, Kain has helped bring the cloud giant’s tools to banks, insurers, and hedge funds. That approach includes both moving workloads to the cloud and implementing AI, including the large language models (LLMs) of generative AI (Gen AI), in his clients’ processes.

“If you look at what we’re trying to do, we’re trying to provide our customers an environment where, from a security, compliance, and governance perspective, we give them a platform that ticks the boxes for everything that’s table stakes for financial services,” said Kain, “but also gives them the access to the latest technologies, and choice in being able to bring the best patterns to the industry.”


Kain, who started his career in operations on the trading floor, and worked at firms such as JP Morgan Chase and Nasdaq, had many examples of gains through the automation of financial functions, such as customer service and equity research.

Early use of AWS by financials included things such as back-testing portfolios of investments to predict performance, the kind of workload that is “well-suited to cloud” because it requires computer simulations “to really work well in parallel,” said Kain.

“That ability to be able to do research much more quickly in AWS meant that investment research firms could quickly see those benefits,” he said. “You’ve seen that repeated across the industry regardless of the firm.”

Taking advantage of the tech

Early implementations of Gen AI are showing many commonalities across firms. “They’ll be repeatable patterns, whether it’s document processing that could show up as mortgage automation with PennyMac, or claims processing with The Travelers Companies.”

Such processes come with an extra degree of sensitivity, Kain said, given the regulated status of finance. “Not only do they have a priority on resilience as well as security, they have evidence requirements to a far greater degree than any other industry because the regulations on financial services are typically very prescriptive,” he explained. “There’s a much higher bar in the industry.”


Finance has been an early adopter of an AI-based technology invented at AWS, originally called Zelkova, and that is now more generally referred to as “automated reasoning.” The technology combines machine-learning AI with mathematical proofs to formally validate security measures, such as who has access to resources in a bank. 

“It was an effort to allow customers to prove that the security controls they put in place were knowably effective,” said Kain. “That was important for our financial services customers,” including hedge fund Bridgewater and other early adopters.

Now, automated reasoning is also being employed to fix Gen AI.

“You’re seeing that same approach now being taken to improve the performance of large language models, particularly with hallucination reduction,” he said. 

To mitigate hallucinations, or “confabulations,” as the errors in Gen AI are more properly known, AWS’s Bedrock platform for running machine learning programs uses retrieval-augmented generation (RAG). 

The RAG approach involves connecting an LLM to a source of validated information, such as a database. The source serves as a gold standard to “anchor” the models to limit error.
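A minimal sketch of that RAG pattern: retrieve the best-matching validated documents, then constrain the model's prompt to that context. The keyword-overlap retriever below is a deliberately simple stand-in for the embedding-based search a production system such as Bedrock would use:

```python
import re


def tokens(s):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", s.lower()))


def retrieve(question, documents, k=1):
    """Rank validated documents by keyword overlap with the question."""
    return sorted(documents,
                  key=lambda d: len(tokens(question) & tokens(d)),
                  reverse=True)[:k]


def build_prompt(question, documents):
    """Anchor the model: it may answer only from the retrieved context."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```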


Once anchored, automated reasoning is applied to “actually allow you to create your own policies that will then give you an extra level of security and detail to make sure that the responses that you’re providing [from the AI model] are accurate.”

The RAG approach, and automated reasoning, are increasingly leading clients in financial services to implement “smaller, domain-specific tasks” in AI that can be connected to a set of specific data, he said. 

Surveys of enterprise use show that financial firms typically start with Gen AI use cases such as automating call centers. “From a large language model perspective, there are actually a number of use cases that we’ve seen the industry achieve almost immediate ROI [return on investment],” said Kain. “The foremost is customer interaction, particularly at the call center.”

AWS customers, including Principal Financial, Ally Financial, Rocket Mortgage, and crypto-currency exchange Coinbase, have all exploited Gen AI to “take those [customer] calls, transcribe them in real time, and then provide information to the agents that provide the context of why customers are calling, plus their history, and then guide them [the human call agents] to the right response.” 

Coinbase used that approach to automate 64% of support calls, up from 19% two years ago, with the aim of reaching 90% in the future.


Coinbase presents its findings at AWS Summit.

Tiernan Ray/ZDNET

Finding fresh opportunities

Another area where automation is being used is in monitoring alerts, such as fraud warnings. It’s a bit like AI in cybersecurity, where AI handles a flood of signals that would overwhelm a human analyst or investigator.

Fraud alerts and other warnings “generate a large number of false positives,” said Kain, which means a lot of extra work for fraud teams and other financial staff to “spend a good chunk of their day looking at things that aren’t actually fraud.” 

Instead, “customers can use large language models to help accelerate the investigation process” by summarizing the alerts, and then create a summary report to be given to the human investigator. 

Verafin specializes in anti-money laundering efforts and is an AWS customer using this approach. 

“They’ve shown they can save 80% to 90% of the time it takes to investigate an alert,” he said. 


Another automation area is “middle office processing,” including customer inquiries to a brokerage for trade confirmation. 

One AWS client, brokerage Jefferies & Co., has set up “agentic AI” where the AI model “would actually go through their inbox, saying, this is a request for confirming a price” of a securities trade. 

That agent passes the request to another agent to “go out and query a database to get the actual trade price for the customer, and then generate the email” that gets sent to the customer.
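The two-step flow can be sketched as a pair of cooperating functions; the message keywords and the in-memory trade store here are hypothetical stand-ins for Jefferies' actual agents and systems:

```python
TRADES = {("ACME", "2025-06-12"): 101.25}  # stand-in for the firm's trade store


def triage_agent(message):
    """First agent: is this inbox message a price-confirmation request?"""
    text = message.lower()
    return "confirm" in text and "price" in text


def confirmation_agent(message, symbol, trade_date):
    """Second agent: look up the trade and draft the reply email."""
    if not triage_agent(message):
        return None
    price = TRADES[(symbol, trade_date)]
    return f"Confirming your {symbol} trade on {trade_date} executed at {price:.2f}."
```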

“It’s not a huge process, it takes a human, maybe, ten, fifteen minutes to go do it themselves,” said Kain, “but you go from something that was minutes down to seconds through agents.” 

The same kinds of applications have been seen in the mortgage and insurance business, he said, and in energy, with Canada’s Total Energy Services confirming contracts. 


One of the “most interesting” areas in finance for Gen AI, said Kain, is in investment research. 

Hedge fund Bridgewater uses LLMs to “basically take a freeform text [summary] about an investment idea, break that down into nine individual steps, and, for each step, kick off an [AI] agent that would go understand what data was necessary to answer the question, build a dependency map between the various trade-offs within an investment model, and then write the code to pull real-time data from the investment data store, and then generate a report like a first-year investment professional.”

Credit rating giant Moody’s is using agents to automate memos on credit ratings. However, credit ratings are usually for public companies because only these firms must report their financial data by law. Now, Moody’s peer, S&P Global, has been able to extend ratings to private companies by amassing snippets of data here and there. 

“There’s an opportunity to leverage large language models to scour what’s publicly available to do credit information on private companies,” said Kain. “That allows the private credit market to have better-anchored information to make private credit decisions.”

These represent “just amazing capabilities,” said Kain of the AI use cases.

Moving into new areas

AI is not yet automating many core functions of banks and other financial firms, such as calculating the most complex risk profiles for securities. But, “I think it’s closer than you think,” said Kain.

“It’s not where we’ve completely moved to trusting the machine to generate, let’s say, trading strategies or risk management approaches,” said Kain. 


However, the beginnings of forecasting and analysis are present. Consider the problem of calculating the impact of new US tariffs on the cash flows of companies. That is “happening today as partially an AI function,” he said. 

Financial firms “are definitely looking at data at scale, reacting to market movements, and then seeing how they should be updating their positions accordingly,” he explained. 

“That ability to ingest data at a global scale is something that I think is so much easier than it was a year ago,” because of Gen AI.

AWS customer Crypto.com, a trading platform for cryptocurrencies, can watch news feeds in 25 different languages using a combination of multiple LLMs. 

“They are able to identify which stories are about currencies, and tell if that is a positive or negative signal, and then aggregate that as inputs to their customers,” for trading purposes. As long as two of the three models monitoring the feeds agreed, “they had conviction that there was a signal there” of value. 
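That two-of-three agreement rule is a simple majority vote over model outputs. A sketch, with sentiment labels assumed for illustration:

```python
from collections import Counter


def consensus_signal(votes, quorum=2):
    """Return the sentiment label only if at least `quorum` models agree."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= quorum else None
```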

“So, we’re seeing that use of generative AI to check generative AI, if you will, to provide confidence at scale.”


Those human-centered tasks that remain at the core of banking, insurance, and trading are probably the most valuable in the industry, including the most complex functions, such as creating new derivative products or underwriting initial public offerings. 

Those are areas that will enjoy the “premium” for creativity, in Kain’s view. Yet how much longer these tasks remain centered on human creation is an open question. 

“I wish I had a crystal ball to say how much of that is truly automatable in the next few years,” said Kain. 

“But given the tremendous adoption [of AI], and the ability for us to process data so much more effectively than even just two, three years ago, it’s an exciting time to see where this will all end up.”




