Designed alongside the UN and Interpol, the Responsible AI Toolkit has already been used to train thousands of officers and dozens of police chiefs across the world.
Police use of AI has raised concerns throughout the US. Cansu Canca says it’s vital to train police to use it ethically. (Photo by Edmund D. Fountain for The Washington Post via Getty Images)
As more and more sectors experiment with artificial intelligence, one of the areas that has most quickly adopted this new technology is law enforcement. It’s led to some problematic growing pains, from false arrests to concerns around facial recognition.
However, a new training tool is now being used by law enforcement agencies across the globe to ensure that officers understand this technology and use it more ethically.
Based largely on the work of Cansu Canca, director of responsible AI practice at Northeastern University’s Institute for Experiential AI, and designed in collaboration with the United Nations and Interpol, the Responsible AI Toolkit is one of the first comprehensive training programs for police focused exclusively on AI. At the core of the toolkit is a simple question, Canca says.
“The first thing that we start with is asking the organization, when they are thinking about building or deploying AI, do you need AI?” Canca says. “Because any time you add a new tool, you are adding a risk. In the case of policing, the goal is to increase public safety and reduce crime, and that requires a lot of resources. There’s a real need for efficiency and betterment, and AI has a significant promise in helping law enforcement, as long as the risks can be mitigated.”
Thousands of officers have already undergone training using the toolkit, and this year, Canca led a training session for 60 police chiefs in the U.S. The U.N. will soon be rolling out additional executive-level training in five European countries as well.
Uses of AI like facial recognition have attracted the most attention, but police are also using AI for simpler things like generating video-to-text transcriptions for body camera footage, deciphering license plate numbers in blurry videos and even determining patrol schedules.
All those uses, no matter how minor they might seem, come with inherent ethical risks if agencies don’t understand the limits of AI and where it’s best used, Canca says.
Understanding when to use AI, and when not to, is just as valuable as knowing how to use the technology, says Cansu Canca, director of responsible AI practice at Northeastern University’s Institute for Experiential AI. Photo by Matthew Modoono/Northeastern University
“The most important thing is making sure that every time we create an AI tool for law enforcement, we have as clear an understanding as possible of how likely this tool is to fail, where it might fail, and how we can make sure the police agencies know that it might fail in those particular ways,” Canca says.
Even if an agency claims it needs or wants to use AI, the more important question is whether it’s ready to deploy AI. The toolkit is designed to get law enforcement agencies thinking about what best suits their situation. A department might be ready to develop its own AI tool like a real-time crime center. However, most that are ready to adopt the technology are more likely to procure it from a third-party vendor, Canca explains.
At the same time, it’s important for agencies to also recognize when they aren’t yet ready to use AI.
“If you’re not ready — if you cannot keep the data safe, if you cannot ensure adequate levels of privacy, if you cannot check for bias, basically if your agency is not able to assess and monitor technology for its risks and mitigate those risks — then you probably shouldn’t go super ambitious just yet and instead start building those ethics muscles as you slowly engage with AI systems,” Canca says.
Canca notes that the toolkit is not one-size-fits-all. Each sector, whether it’s policing or education, has its own ethical framework that requires a slightly different approach that is sensitive to the specific ethical issues of that sector.
“Policing is not detached from ethics” and has its own set of ethical questions and criticisms, Canca says, including “a really long lineage of historical bias.”
Understanding those biases is key when implementing tools that could potentially re-create those very biases, creating a vicious cycle of technology and police practice.
“There are districts that have been historically overpoliced, so if you just look at that data, you’re likely to overpolice those areas again,” Canca says. “Then the question becomes, ‘If we understand that’s the case, how can we mitigate the risk of discrimination, how can we supplement the data or ensure that the tool is used for the right purposes?’”
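The feedback loop Canca describes can be made concrete with a toy simulation. This sketch is not drawn from the toolkit itself, and all numbers are hypothetical: two districts have the same true incident rate, but one starts with a higher recorded count because it was historically overpoliced. If patrols are allocated purely from recorded counts, and recorded counts in turn track patrol presence, the initial skew never corrects itself.

```python
# Toy illustration (hypothetical numbers) of a data feedback loop:
# patrols are allocated from recorded incidents, and recorded incidents
# track patrol presence rather than the true incident rate.

def allocate(recorded, total_patrols=100):
    """Split patrols across districts in proportion to recorded incidents."""
    total = sum(recorded)
    return [total_patrols * r / total for r in recorded]

# Both districts have the SAME true incident rate, but district 0 was
# historically overpoliced, so its recorded count starts higher.
recorded = [60.0, 40.0]
for _ in range(5):
    patrols = allocate(recorded)
    # Next year's recorded incidents scale with patrol presence.
    recorded = [p * 1.0 for p in patrols]

print(patrols)  # [60.0, 40.0] -- the initial 60/40 skew persists unchanged
```

The point of the sketch is that nothing in the data pipeline ever rediscovers the equal underlying rates; correcting the skew requires exactly the kind of external intervention Canca describes, such as supplementing the data or constraining how the tool is used.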
The goal of the toolkit is to avoid those ethical pitfalls by making officers aware that humans are still a vital component of AI. An AI system might be able to analyze a city and suggest which areas might need more assistance based on crime data, but it’s up to humans to decide if a specific neighborhood might need more patrol officers or maybe social workers and mental health professionals.
“Police are not trained to ask the right questions around technology and ethics,” Canca says. “We need to be there to guide them and also push the technology providers to create better technologies.”
Demis Hassabis, a top Google scientist and 2024 Nobel laureate, said Friday that the most important skill for the next generation will be “learning how to learn” to keep pace with change as artificial intelligence transforms education and the workplace.
“It’s very hard to predict the future, like 10 years from now, in normal cases. It’s even harder today, given how fast AI is changing, even week by week,” Hassabis told the audience. “The only thing you can say for certain is that huge change is coming.”
The neuroscientist and former chess prodigy said artificial general intelligence — a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can — could arrive within a decade. This, he said, will bring dramatic advances and a possible future of “radical abundance” despite acknowledged risks.
Hassabis emphasized the need for “meta-skills,” such as understanding how to learn and optimizing one’s approach to new subjects, alongside traditional disciplines like math, science and humanities.
“One thing we’ll know for sure is you’re going to have to continually learn … throughout your career,” he said.
The DeepMind co-founder, who established the London-based research lab in 2010 before Google acquired it four years later, shared the 2024 Nobel Prize in chemistry for developing AI systems that accurately predict protein folding — a breakthrough for medicine and drug discovery.
Greek Prime Minister Kyriakos Mitsotakis joined Hassabis at the Athens event after discussing ways to expand AI use in government services. Mitsotakis warned that the continued growth of huge tech companies could create great global financial inequality.
“Unless people actually see benefits, personal benefits, to this (AI) revolution, they will tend to become very skeptical,” he said. “And if they see … obscene wealth being created within very few companies, this is a recipe for significant social unrest.”
Mitsotakis thanked Hassabis, whose father is Greek Cypriot, for rescheduling the presentation to avoid conflicting with the European basketball championship semifinal between Greece and Turkey. Greece later lost the game 94-68.
Learn five key areas to target when laying the groundwork for a potential AI implementation at your facility.
Brand Insights from Easy Automation, Inc.
We are in a transformative era, marked by the increasing implementation of AI in both our personal and professional lives. We’ve already seen tools like ChatGPT make their way into our conversations, and we don’t see these new tools going away. While there are still many unknowns surrounding AI and its potential benefits in agricultural facilities, we believe there is a significant opportunity for these new technologies to enhance the efficiency, safety, and profitability of our facilities.
While there are many different levels of comfort and acceptance in implementing AI tools at our facilities, we’ve identified five key areas to target when laying the groundwork for a potential AI implementation at your facility.
Clean and Refine Existing Data
Identify Missing Data and Capture It
Modernize Technology Stack and Storage
Clarify and Enhance Data Security
Align with Forward-Moving Partners
Clean and Refine Existing Data
Where is your data being recorded and stored? How many different software programs or spreadsheets do you have that store your data? Are those individual systems talking to each other, or is there duplicate data? AI technology can only run as efficiently as the data that is provided. In the agricultural facilities we work with, we often see multiple different software programs, including accounting, formulation, order management, trucking, automation, and many others. While many of these programs are necessary for each facility to achieve its business objectives, the systems must work together to provide clean, accurate, and real-time data to be compatible with any future AI integration.
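As a minimal sketch of what this reconciliation looks like in practice, the snippet below merges order records from two systems into one deduplicated set keyed on order ID. The system names, field names, and values are all hypothetical; real facilities would pull these records from their accounting and order-management exports.

```python
# Minimal sketch (hypothetical field names): reconcile order records from
# two separate systems into one deduplicated set keyed on order ID.

accounting = [
    {"order_id": "A100", "customer": "Smith Farms", "tons": 12.5},
    {"order_id": "A101", "customer": "Jones Co-op", "tons": 8.0},
]
order_mgmt = [
    {"order_id": "A100", "customer": "Smith Farms", "tons": 12.5},  # duplicate
    {"order_id": "A102", "customer": "Valley Feed", "tons": 20.0},
]

merged = {}
for record in accounting + order_mgmt:
    key = record["order_id"]
    if key in merged and merged[key] != record:
        # Conflicting copies of the same order need human review, not a silent overwrite.
        print(f"Conflict on {key}: {merged[key]} vs {record}")
    merged[key] = record

print(sorted(merged))  # ['A100', 'A101', 'A102']
```

Even a simple pass like this surfaces the two problems named above: duplicate records across systems, and conflicting copies of the same record that need a human decision before the data can be trusted.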
Identify Missing Data and Capture It
Is there an area in your operation where you don’t have any real information or data? Consider your equipment, hazard monitoring sensors, bin levels, truck routing, fleet management, and truck flow within your facility. What comes to mind for your facility? While some newly built facilities capture all this information from the beginning, as facilities evolve, there are often areas that get missed. Without this data, you have an incomplete picture of your facility. The power of AI lies in its ability to see the complete picture and draw insights and predictions from historical data. Invest in identifying your missing data and take steps to capture it in preparation for future AI implementation.
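One concrete form of missing data is gaps in a sensor history. This hypothetical sketch (the timestamps and bin levels are made up) scans hourly bin-level readings and flags the hours where no reading was recorded, so those stretches can be investigated and backfilled before any AI use.

```python
# Hypothetical sketch: flag gaps in hourly bin-level sensor readings.

from datetime import datetime, timedelta

readings = {  # timestamp -> bin level (%), with some hours missing
    datetime(2024, 5, 1, 0): 82.0,
    datetime(2024, 5, 1, 1): 81.5,
    datetime(2024, 5, 1, 4): 78.9,
}

start, end = min(readings), max(readings)
expected = []
t = start
while t <= end:
    expected.append(t)
    t += timedelta(hours=1)

missing = [t for t in expected if t not in readings]
print(missing)  # the 02:00 and 03:00 readings are absent
```

The same idea scales to any data source with an expected cadence, such as hazard-monitoring sensors or truck-scale tickets: define what complete coverage looks like, then report the difference.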
Modernize Technology Stack and Storage
At a minimum, your facility needs to be connected to the internet, and data must be stored on an accessible platform. Unfortunately, Excel documents on a desktop won’t suffice. Our recommended criteria for modernizing your technology stack are an easily accessible database with API connectivity and cloud-based storage, so that real-time, all-inclusive facility data can be logged quickly and accurately. We aim to avoid data silos with multiple disparate data stores, as well as systems that are difficult to access or integrate with. API connectivity will be essential, so steer clear of systems that require cumbersome custom development to connect.
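To make the API-connectivity point concrete, here is a sketch of what an API-friendly stack enables: a facility reading serialized as JSON and packaged as an HTTP request to a cloud endpoint. The URL, facility ID, and field names are all hypothetical, and the request is constructed but not actually sent.

```python
# Sketch of API-based data logging. The endpoint and field names are
# hypothetical; the request is built but not sent here.

import json
from urllib import request

payload = {
    "facility_id": "plant-01",
    "timestamp": "2024-05-01T04:00:00Z",
    "bin_levels": {"bin_a": 78.9, "bin_b": 64.2},
}

req = request.Request(
    "https://api.example.com/v1/facility-data",  # hypothetical endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
# request.urlopen(req) would transmit the reading once credentials are configured.
```

The design point is that any system offering a standard HTTP/JSON interface like this can be integrated in a few lines, while a system without one forces exactly the cumbersome custom development described above.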
Clarify and Enhance Data Security
Security must be at the forefront of the AI implementation conversation. Your data is one of your most valuable assets. We want to ensure that where you place your data or who you allow to analyze it is a reputable source that has been rigorously vetted. Before placing your data in any AI program, it is essential to understand all of the data privacy and security terms and conditions.
Align with Forward-Moving Partners
Do you want to be an expert in AI implementation at your facility? Maybe. However, we recommend aligning with an industry partner who is moving in that direction and letting them build the expertise to meet your needs in this area. It is essential to ask questions that reveal where that partner is today and where they are headed. Add AI implementation to your company’s roadmap and ensure it is also on your partners’ roadmaps.
At Easy Automation, we have AI implementation on our roadmap and are actively taking steps forward to provide a solution that makes the most sense for our customers. Are you interested in seeing how we might align or learning more about this? Contact our team at 507-728-8214 or by visiting our website at www.easy-automation.com.
Written by Brian Sokoloski – CTO at Easy Automation, Inc.