The growing use of police body cameras that use artificial intelligence is raising alarms about privacy violations, racial bias and a lack of oversight, according to a report published Tuesday by the R Street Institute, a Washington think tank.
Body-worn cameras, initially introduced in the 2010s to improve transparency and accountability during interactions with the public, are now standard equipment in many police departments across the country. But R Street’s report suggests that the addition of AI has created new risks, such as misidentifying people and collecting sensitive data without consent.
“The line between public security and state surveillance lies not in technology, but in the policies that govern it,” the report warns, citing the increasing integration of facial recognition and real-time video analytics into law enforcement tools.
To combat these risks, the report recommends stricter state regulations, including requiring warrants for facial recognition, establishing higher accuracy thresholds, limiting data retention and mandating regular audits to identify racial or systemic bias.
Logan Seacrest, one of the report’s authors, also emphasized the importance of keeping humans “in the loop” when it comes to oversight.
“Not letting the AI kind of make any final decisions by itself, before running by a team of law enforcement professionals, attorneys, software engineers, whoever needs to be there providing supervision [or] final decisions about flagging people for arrest, flagging officers,” Seacrest said. “That stuff really should remain in human hands.”
The report shows that privacy advocates are growing increasingly concerned about how footage is captured, used and stored. Body-worn cameras don’t just record crimes — they often capture people in distress, experiencing medical emergencies or inside their homes. Some police departments work with technology companies like Clearview AI and Palantir to help analyze footage in real time, often without clear rules or transparency guidelines.
“Predictive systems can also open the door to invasive surveillance, and without clear and enforceable policies to protect civil liberties, these tools could be abused by bad actors,” the report reads.
Police officers in New Orleans recently came under fire for using facial recognition technology across a private network of more than 200 surveillance cameras equipped with AI in violation of city policy. In response, the city proposed an ordinance that would allow police broad use of facial recognition technology.
Seacrest said the backlash in that case was “completely predictable.”
“This was a private facial recognition network that was operating outside the bounds of the law, [which] raises obvious civil liberty concerns, basically a warrantless algorithm dragnet here,” he said.
The R Street report also highlights the disproportionate impact AI mistakes can have on communities of color, citing a 2020 incident involving Robert Williams, a Black man who was wrongfully arrested in Michigan after being misidentified by a facial recognition system.
Several states have taken action to address these concerns. California passed legislation prohibiting the use of facial recognition on police-worn cameras, though the law expired in 2023. The Illinois legislature recently strengthened its Law Enforcement Officer-Worn Body Camera Act, mandating retention limits, prohibiting live biometric analysis and requiring officers to deactivate recordings under certain circumstances.
“There’s nothing inherently incompatible about AI and civil liberties, or AI and privacy with proper democratic oversight,” Seacrest said. “Those same tools that authoritarian regimes use to basically monitor and control the population can be applied here in the U.S., in accordance with the Constitution, benefiting all Americans. It’s really just a matter of the guardrails that we put in place for it.”
The report concludes that government oversight of AI-powered body cameras remains inconsistent, especially since there are few national standards regulating the use of AI in policing. Seacrest said that isn’t necessarily a bad thing.
“I think regulations are often best if they are created and actuated closest to the people that they affect,” said Seacrest. “So the use of body camera AI should really be done at the state and local level.”
One surprising outcome is that AI might end up making the most critical functions of banking, insurance, and trading, or the creative functions that require human insights, even more valuable.
“What happens is there’s going to be a premium on creativity and judgment that goes into the process,” said John Kain, who is head of market development efforts in financial services for AWS, in an interview with ZDNET via Zoom.
By process, he meant those areas that are most advanced, and presumably hardest to automate, such as a bank’s risk calculations.
“So much of what’s undifferentiated will be automated,” said Kain. “But what that means is what actually differentiates the business and the ability to serve customers better, whether that’s better understanding products or risk, or coming up with new products, from a financial perspective, the pace of that will just go so much more quickly in the future.”
Amazon formed its financial services unit 10 years ago, marking the first time the cloud giant took an industry-first approach.
For eight years, Kain has helped bring the cloud giant’s tools to banks, insurers, and hedge funds. That approach includes both moving workloads to the cloud and implementing AI, including the large language models (LLMs) of generative AI (Gen AI), in his clients’ processes.
“If you look at what we’re trying to do, we’re trying to provide our customers an environment where, from a security, compliance, and governance perspective, we give them a platform that ticks the boxes for everything that’s table stakes for financial services,” said Kain, “but also gives them the access to the latest technologies, and choice in being able to bring the best patterns to the industry.”
Kain, who started his career in operations on the trading floor and worked at firms such as JP Morgan Chase and Nasdaq, had many examples of gains through the automation of financial functions, such as customer service and equity research.
Early use of AWS by financials included things such as back-testing portfolios of investments to predict performance, the kind of workload that is “well-suited to cloud” because it requires computer simulations “to really work well in parallel,” said Kain.
“That ability to be able to do research much more quickly in AWS meant that investment research firms could quickly see those benefits,” he said. “You’ve seen that repeated across the industry regardless of the firm.”
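Back-testing is, at heart, an embarrassingly parallel job: each portfolio or parameter set can be simulated independently and the results compared at the end. As a rough illustration only, the strategy, returns, and portfolio count below are invented, not any firm’s actual model:

```python
# A minimal sketch of a parallel back-test: every portfolio is simulated
# independently, so the work spreads cleanly across cores or cloud nodes.
# The "strategy" here is random noise, purely for illustration.
from concurrent.futures import ProcessPoolExecutor
import random

def backtest(portfolio_id: int) -> dict:
    """Stand-in for one portfolio's historical simulation."""
    random.seed(portfolio_id)
    daily_returns = [random.gauss(0.0003, 0.01) for _ in range(252)]  # one trading year
    growth = 1.0
    for r in daily_returns:
        growth *= 1 + r
    return {"portfolio": portfolio_id, "annual_return": growth - 1}

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:                 # more workers, faster sweep
        results = list(pool.map(backtest, range(1000)))
    best = max(results, key=lambda r: r["annual_return"])
    print(f"best portfolio: {best['portfolio']} ({best['annual_return']:.2%})")
```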
Taking advantage of the tech
Early implementations of Gen AI are showing many commonalities across firms, Kain said. “They’ll be repeatable patterns, whether it’s document processing that could show up as mortgage automation with PennyMac, or claims processing with The Travelers Companies.”
Such processes come with an extra degree of sensitivity, Kain said, given the regulated status of finance. “Not only do they have a priority on resilience as well as security, they have evidence that is in a far greater degree than any other industry because the regulations on financial services are typically very prescriptive,” he explained. “There’s a much higher bar in the industry.”
Finance has been an early adopter of an AI-based technology invented at AWS that was originally called Zelkova and is now more generally referred to as “automated reasoning.” The technology combines machine-learning AI with mathematical proofs to formally validate security measures, such as who has access to resources in a bank.
“It was an effort to allow customers to prove that the security controls they put in place were knowably effective,” said Kain. “That was important for our financial services customers,” including hedge fund Bridgewater and other early adopters.
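Zelkova itself works by translating real access-control policies into logic problems a solver can settle exhaustively. As a loose, toy-scale illustration of what proving a security property means, the policy and property below are invented for the example and are not AWS’s implementation; the sketch uses the Z3 solver in Python:

```python
# Toy illustration of "automated reasoning" over an access policy, in the
# spirit of Zelkova. The policy and property are invented for the example.
from z3 import Bools, Solver, And, Not, unsat

in_finance_group, on_corp_network, access_granted = Bools(
    "in_finance_group on_corp_network access_granted")

# Toy policy: access is granted exactly when the requester is in the
# finance group and on the corporate network.
policy = access_granted == And(in_finance_group, on_corp_network)

# Property to prove: no request from outside the corporate network is ever
# granted. We search for a counterexample; "unsat" means the property holds
# for every possible request, not just the ones we thought to test.
solver = Solver()
solver.add(policy, access_granted, Not(on_corp_network))
if solver.check() == unsat:
    print("Proven: no off-network request can ever be granted.")
else:
    print("Counterexample:", solver.model())
```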
Now, automated reasoning is also being employed to fix Gen AI.
“You’re seeing that same approach now being taken to improve the performance of large language models, particularly with hallucination reduction,” he said.
To mitigate hallucinations, or “confabulations,” as the errors in Gen AI are more properly known, AWS’s Bedrock platform for running machine learning programs uses retrieval-augmented generation (RAG).
The RAG approach involves connecting an LLM to a source of validated information, such as a database. The source serves as a gold standard to “anchor” the models to limit error.
Once anchored, automated reasoning is applied to “actually allow you to create your own policies that will then give you an extra level of security and detail to make sure that the responses that you’re providing [from the AI model] are accurate.”
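Put together, the pattern is: retrieve validated facts, generate an answer anchored to them, then run a final check before anything reaches a customer. In the simplified sketch below, the retriever, the generate() call, and the policy rule are placeholders, not Bedrock’s actual APIs:

```python
# A simplified sketch of the anchor-then-check flow described above.
VALIDATED_FACTS = {
    "settlement_cycle": "U.S. equities settle on a T+1 cycle.",
    "fdic_limit": "Standard FDIC insurance covers $250,000 per depositor, per bank.",
}

def retrieve(question: str) -> list[str]:
    """Stand-in for a vector search over a curated, validated knowledge base."""
    q = question.lower()
    return [fact for key, fact in VALIDATED_FACTS.items() if key.split("_")[0] in q]

def generate(question: str, context: list[str]) -> str:
    """Stand-in for an LLM call that is 'anchored' to the retrieved context."""
    return f"Per our records, {' '.join(context)}"

def policy_check(answer: str, context: list[str]) -> bool:
    """Crude stand-in for an automated-reasoning check that the answer only
    asserts things supported by the retrieved context."""
    return bool(context) and all(fact in answer for fact in context)

question = "What is the FDIC limit on my account?"
context = retrieve(question)
answer = generate(question, context)
print(answer if policy_check(answer, context) else "Escalate to a human reviewer.")
```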
The RAG approach, and automated reasoning, are increasingly leading clients in financial services to implement “smaller, domain-specific tasks” in AI that can be connected to a set of specific data, he said.
In surveys of enterprise use, call-center automation is among the first Gen AI use cases financial firms pursue. “From a large language model perspective, there are actually a number of use cases that we’ve seen the industry achieve almost immediate ROI [return on investment],” said Kain. “The foremost is customer interaction, particularly at the call center.”
AWS customers, including Principal Financial, Ally Financial, Rocket Mortgage, and crypto-currency exchange Coinbase, have all exploited Gen AI to “take those [customer] calls, transcribe them in real time, and then provide information to the agents that provide the context of why customers are calling, plus their history, and then guide them [the human call agents] to the right response.”
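In outline, that is a small pipeline: transcribe the audio, pull the caller’s history, and draft a suggested next step for the human agent. The stripped-down sketch below uses hypothetical helper functions and data, not any vendor’s API:

```python
# A stripped-down sketch of the call-center flow: transcribe, pull history,
# suggest a next step. All functions are hypothetical stand-ins.
def transcribe(audio_chunk: bytes) -> str:
    return "Hi, I think I was charged twice for my last transfer."  # placeholder speech-to-text

def lookup_history(customer_id: str) -> dict:
    return {"customer_id": customer_id, "open_tickets": ["duplicate charge (open)"]}

def suggest_response(transcript: str, history: dict) -> str:
    # In production this step would be an LLM prompt combining transcript and history.
    if "charged twice" in transcript and history["open_tickets"]:
        return "Acknowledge the open duplicate-charge ticket and offer a provisional credit."
    return "Ask a clarifying question about the transaction."

transcript = transcribe(b"...")
history = lookup_history("cust-1042")
print(suggest_response(transcript, history))  # surfaced to the human agent, who decides
```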
Coinbase used that approach to automate 64% of support calls, up from 19% two years ago, with the aim of reaching 90% in the future.
[Image: Coinbase presents its findings at AWS Summit. Credit: Tiernan Ray/ZDNET]
Finding fresh opportunities
Another area where automation is being used is in monitoring alerts, such as fraud warnings. It’s a bit like AI in cybersecurity, where AI handles a flood of signals that would overwhelm a human analyst or investigator.
Fraud alerts and other warnings “generate a large number of false positives,” said Kain, which means a lot of extra work for fraud teams and other financial staff to “spend a good chunk of their day looking at things that aren’t actually fraud.”
Instead, “customers can use large language models to help accelerate the investigation process” by summarizing the alerts and then creating a summary report for the human investigator.
Verafin specializes in anti-money laundering efforts and is an AWS customer using this approach.
“They’ve shown they can save 80% to 90% of the time it takes to investigate an alert,” he said.
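The triage pattern itself is straightforward: have a model draft a one-line rationale and a risk label for each alert, then hand the investigator a ranked digest instead of a raw queue. In the toy sketch below, the alerts, rules and scoring are invented, not Verafin’s system:

```python
# Sketch of LLM-assisted alert triage: summarize and rank, human decides.
alerts = [
    {"id": 1, "rule": "velocity", "amount": 42.10, "note": "3 small purchases in 5 minutes"},
    {"id": 2, "rule": "geo-mismatch", "amount": 980.00, "note": "card present in two countries"},
    {"id": 3, "rule": "velocity", "amount": 12.99, "note": "recurring subscription renewal"},
]

def summarize_with_llm(alert: dict) -> dict:
    """Placeholder for an LLM that drafts a one-line rationale and a risk label."""
    risky = alert["rule"] == "geo-mismatch" or alert["amount"] > 500
    return {**alert, "risk": "high" if risky else "low",
            "summary": f"{alert['rule']}: {alert['note']} (${alert['amount']:.2f})"}

digest = sorted((summarize_with_llm(a) for a in alerts),
                key=lambda a: a["risk"] != "high")   # high-risk items first
for item in digest:                                  # the investigator still makes the call
    print(item["risk"].upper(), "-", item["summary"])
```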
Another automation area is “middle office processing,” including customer inquiries to a brokerage for trade confirmation.
One AWS client, brokerage Jefferies & Co., has set up “agentic AI” where the AI model “would actually go through their inbox, saying, this is a request for confirming a price” of a securities trade.
That agent passes the request to another agent to “go out and query a database to get the actual trade price for the customer, and then generate the email” that gets sent to the customer.
“It’s not a huge process, it takes a human, maybe, ten, fifteen minutes to go do it themselves,” said Kain, “but you go from something that was minutes down to seconds through agents.”
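Conceptually, that flow is two small steps chained together: classify the inbound email, then look up the trade and draft the reply. The minimal sketch below uses a hypothetical classifier, trade store and message text:

```python
# Two-step "agent" flow for trade confirmations: classify, then look up and reply.
TRADES = {"ORD-7781": {"symbol": "XYZ", "qty": 500, "price": 101.25}}  # hypothetical store

def classify(email_body: str) -> str:
    body = email_body.lower()
    return "price_confirmation" if "confirm" in body and "price" in body else "other"

def draft_reply(order_id: str) -> str:
    t = TRADES[order_id]
    return (f"Confirming order {order_id}: {t['qty']} shares of {t['symbol']} "
            f"executed at ${t['price']:.2f}.")

email = "Could you confirm the execution price on ORD-7781?"
if classify(email) == "price_confirmation":
    print(draft_reply("ORD-7781"))   # a human can still review before it is sent
```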
The same kinds of applications have been seen in the mortgage and insurance business, he said, and in energy, with Canada’s Total Energy Services confirming contracts.
One of the “most interesting” areas in finance for Gen AI, said Kain, is in investment research.
Hedge fund Bridgewater uses LLMs to “basically take a freeform text [summary] about an investment idea, break that down into nine individual steps, and, for each step, kick off an [AI] agent that would go understand what data was necessary to answer the question, build a dependency map between the various trade-offs within an investment model, and then write the code to pull real-time data from the investment data store, and then generate a report like a first-year investment professional.”
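The underlying pattern is decompose-and-dispatch: turn a free-form idea into a list of steps, run an agent per step, and stitch the results into a draft report. In the bare-bones sketch below, the planner and agent are placeholders for LLM calls and data lookups, not Bridgewater’s system:

```python
# Decompose an idea into steps, then run a placeholder "agent" per step.
def plan_steps(idea: str) -> list[str]:
    return [
        f"Identify the securities relevant to: {idea}",
        "List the data series needed for each security",
        "Pull the latest values from the investment data store",
        "Draft a first-pass analyst report",
    ]

def run_agent(step: str) -> str:
    return f"[done] {step}"           # placeholder for tool use plus model reasoning

idea = "Rising rates will pressure long-duration tech valuations"
report_sections = [run_agent(step) for step in plan_steps(idea)]
print("\n".join(report_sections))
```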
Credit rating giant Moody’s is using agents to automate memos on credit ratings. However, credit ratings are usually for public companies because only these firms must report their financial data by law. Now, Moody’s peer, S&P Global, has been able to extend ratings to private companies by amassing snippets of data here and there.
“There’s an opportunity to leverage large language models to scour what’s publicly available to do credit information on private companies,” said Kain. “That allows the private credit market to have better-anchored information to make private credit decisions.”
These represent “just amazing capabilities,” said Kain of the AI use cases.
Moving into new areas
AI is not yet automating many core functions of banks and other financial firms, such as calculating the most complex risk profiles for securities. But, “I think it’s closer than you think,” said Kain.
“It’s not where we’ve completely moved to trusting the machine to generate, let’s say, trading strategies or risk management approaches,” said Kain.
However, the beginnings of forecasting and analysis are present. Consider the problem of calculating the impact of new US tariffs on the cash flows of companies. That is “happening today as partially an AI function,” he said.
Financial firms “are definitely looking at data at scale, reacting to market movements, and then seeing how they should be updating their positions accordingly,” he explained.
“That ability to ingest data at a global scale is something that I think is so much easier than it was a year ago,” because of Gen AI.
AWS customer Crypto.com, a trading platform for cryptocurrencies, can watch news feeds in 25 different languages using a combination of multiple LLMs.
“They are able to identify which stories are about currencies, and tell if that is a positive or negative signal, and then aggregate that as inputs to their customers,” for trading purposes. As long as two of the three models monitoring the feeds agreed, “they had conviction that there was a signal there” of value.
“So, we’re seeing that use of generative AI to check generative AI, if you will, to provide confidence at scale.”
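The consensus rule is simple to state: act on a headline only when at least two of the three models assign it the same label. A toy version, with the three classifiers standing in for separate LLM calls, looks like this:

```python
# Two-out-of-three agreement across models before a signal is trusted.
from collections import Counter

def classify_model_a(headline: str) -> str: return "positive"   # stand-in for LLM #1
def classify_model_b(headline: str) -> str: return "positive"   # stand-in for LLM #2
def classify_model_c(headline: str) -> str: return "neutral"    # stand-in for LLM #3

def consensus_signal(headline: str) -> str | None:
    votes = Counter(f(headline) for f in (classify_model_a, classify_model_b, classify_model_c))
    label, count = votes.most_common(1)[0]
    return label if count >= 2 else None   # only act when at least two models agree

print(consensus_signal("Regulator approves new spot ETF"))   # -> "positive"
```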
Those human-centered tasks that remain at the core of banking, insurance, and trading are probably the most valuable in the industry, including the most complex functions, such as creating new derivative products or underwriting initial public offerings.
Those are areas that will enjoy the “premium” for creativity, in Kain’s view. Yet how much longer these tasks remain centered on human creation is an open question.
“I wish I had a crystal ball to say how much of that is truly automatable in the next few years,” said Kain.
“But given the tremendous adoption [of AI], and the ability for us to process data so much more effectively than even just two, three years ago, it’s an exciting time to see where this will all end up.”
A consulting firm found that tech companies are “strategically overpaying” recruits with AI experience.
It found that firms pay premiums of up to $200,000 for data scientists with machine learning skills.
The report also tracked a rise in bonuses for lower-level software engineers and analysts.
The AI talent bidding war is heating up, and the data scientists and software engineers behind the tech are benefiting from being caught in the middle.
Many tech companies are “strategically overpaying” recruits with AI experience, shelling out premiums of up to $200,000 for some roles with machine learning skills, J. Thelander Consulting, a compensation data and consulting firm for the private capital market, found in a recent report.
The report, compiled from a compensation analysis of roles across 153 companies, showed that data scientists and analysts with machine learning skills tend to receive a higher premium than software engineers with the same skills. However, the consulting firm also tracked a rise in bonuses for lower-level software engineers and analysts.
The payouts are a big bet, especially among startups. About half of the surveyed companies paying premiums for employees with AI skills had no revenue in the past year, and a majority (71%) had no profit.
Smaller firms need to stand out and be competitive among Big Tech giants — a likely driver behind the pricey recruitment tactic, a spokesperson for the consulting firm told Business Insider.
But while the J. Thelander Consulting report focused on smaller firms, some Big Tech companies have also recently made headlines for their sky-high recruitment incentives.
Meta was in the spotlight last month after Sam Altman, CEO of OpenAI, said the social media giant had tried to poach his best employees with $100 million signing bonuses.
While Business Insider previously reported that Altman later quipped that none of his “best people” had been enticed by the deal, Meta’s chief technology officer, Andrew Bosworth, said in an interview with CNBC that Altman “neglected to mention that he’s countering those offers.”