Tools & Platforms
How Law Enforcement is Learning to Use AI More Ethically

Designed alongside the UN and Interpol, the Responsible AI Toolkit has already been used to train thousands of officers and dozens of police chiefs across the world.

As more and more sectors experiment with artificial intelligence, law enforcement has been among the quickest to adopt the new technology. That rapid adoption has led to some problematic growing pains, from false arrests to concerns around facial recognition.
However, a new training tool is now being used by law enforcement agencies across the globe to ensure that officers understand this technology and use it more ethically.
Based largely on the work of Cansu Canca, director of responsible AI practice at Northeastern University’s Institute for Experiential AI, and designed in collaboration with the United Nations and Interpol, the Responsible AI Toolkit is one of the first comprehensive training programs for police focused exclusively on AI. At the core of the toolkit is a simple question, Canca says.
“The first thing that we start with is asking the organization, when they are thinking about building or deploying AI, do you need AI?” Canca says. “Because any time you add a new tool, you are adding a risk. In the case of policing, the goal is to increase public safety and reduce crime, and that requires a lot of resources. There’s a real need for efficiency and betterment, and AI has a significant promise in helping law enforcement, as long as the risks can be mitigated.”
Thousands of officers have already undergone training using the toolkit, and this year, Canca led a training session for 60 police chiefs in the U.S. The U.N. will soon be rolling out additional executive-level training in five European countries as well.
Uses of AI like facial recognition have attracted the most attention, but police are also using AI for simpler things like generating video-to-text transcriptions for body camera footage, deciphering license plate numbers in blurry videos and even determining patrol schedules.
All those uses, no matter how minor they might seem, come with inherent ethical risks if agencies don’t understand the limits of AI and where it’s best used, Canca says.

“The most important thing is making sure that every time we create an AI tool for law enforcement, we have as clear an understanding as possible of how likely this tool is to fail, where it might fail, and how we can make sure the police agencies know that it might fail in those particular ways,” Canca says.
Even if an agency claims it needs or wants to use AI, the more important question is whether it is ready to deploy it. The toolkit is designed to get law enforcement agencies thinking about what best suits their situation. A department might be ready to develop its own AI tool, such as a real-time crime center. However, most agencies that are ready to adopt the technology will procure it from a third-party vendor rather than build it themselves, Canca explains.
At the same time, it’s important for agencies to also recognize when they aren’t yet ready to use AI.
“If you’re not ready — if you cannot keep the data safe, if you cannot ensure adequate levels of privacy, if you cannot check for bias, basically if your agency is not able to assess and monitor technology for its risks and mitigate those risks — then you probably shouldn’t go super ambitious just yet and instead start building those ethics muscles as you slowly engage with AI systems,” Canca says.
Canca notes that the toolkit is not one-size-fits-all. Each sector, whether it’s policing or education, has its own ethical framework that requires a slightly different approach that is sensitive to the specific ethical issues of that sector.
“Policing is not detached from ethics” and has its own set of ethical questions and criticisms, Canca says, including “a really long lineage of historical bias.”
Understanding those biases is key when implementing tools that could potentially re-create those very biases, creating a vicious cycle of technology and police practice.
“There are districts that have been historically overpoliced, so if you just look at that data, you’re likely to overpolice those areas again,” Canca says. “Then the question becomes, ‘If we understand that’s the case, how can we mitigate the risk of discrimination, how can we supplement the data or ensure that the tool is used for the right purposes?’”
The goal of the toolkit is to avoid those ethical pitfalls by making officers aware that humans are still a vital component of AI. An AI system might be able to analyze a city and suggest which areas might need more assistance based on crime data, but it’s up to humans to decide if a specific neighborhood might need more patrol officers or maybe social workers and mental health professionals.
“Police are not trained to ask the right questions around technology and ethics,” Canca says. “We need to be there to guide them and also push the technology providers to create better technologies.”
Tools & Platforms
5 Ways to Prepare Your Facility for AI Implementation

Learn five key areas to target when laying the groundwork for a potential AI implementation at your facility.
Brand Insights from Easy Automation, Inc.
We are in a transformative era, marked by the increasing implementation of AI in both our personal and professional lives. We’ve already seen tools like ChatGPT make their way into our conversations, and we don’t see these new tools going away. While there are still many unknowns surrounding AI and its potential benefits in agricultural facilities, we believe there is a significant opportunity for these new technologies to enhance the efficiency, safety, and profitability of our facilities.
While there are many different levels of comfort and acceptance in implementing AI tools at our facilities, we’ve identified five key areas to target when laying the groundwork for a potential AI implementation at your facility.
- Clean and Refine Existing Data
- Identify Missing Data and Capture It
- Modernize Technology Stack and Storage
- Clarify and Enhance Data Security
- Align with Forward-Moving Partners
Clean and Refine Existing Data
Where is your data being recorded and stored? How many different software programs or spreadsheets do you have that store your data? Are those individual systems talking to each other, or is there duplicate data? AI technology can only run as efficiently as the data it is provided. In the agricultural facilities we work with, we often see multiple software programs, including accounting, formulation, order management, trucking, automation, and many others. While many of these programs are necessary for each facility to achieve its business objectives, the systems must work together to provide clean, accurate, and real-time data to be compatible with any future AI integration.
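As a simple illustration of what it means for systems to "talk to each other," the sketch below reconciles order records exported from two hypothetical systems and reports where the data overlaps or is missing. The file names, the order_id column, and the use of Python here are assumptions for illustration only; your facility's exports will have their own formats.

```python
import csv

def order_ids(path):
    """Read a CSV export and return the set of order IDs it contains.

    Assumes the export has an 'order_id' column; adjust to your own schema.
    """
    with open(path, newline="") as f:
        return {row["order_id"] for row in csv.DictReader(f)}

if __name__ == "__main__":
    # Hypothetical exports from two systems that should agree with each other.
    accounting = order_ids("accounting_export.csv")
    order_mgmt = order_ids("order_management_export.csv")

    in_both = accounting & order_mgmt          # recorded in both systems
    only_accounting = accounting - order_mgmt  # missing from order management
    only_order_mgmt = order_mgmt - accounting  # missing from accounting

    print(f"{len(in_both)} orders recorded in both systems")
    print(f"{len(only_accounting)} orders only in accounting")
    print(f"{len(only_order_mgmt)} orders only in order management")
```

Even a quick reconciliation like this makes duplicate or missing records visible before they ever reach an AI tool.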
Identify Missing Data and Capture It
Is there an area in your operation where you don't have any real information or data? Consider your equipment, hazard monitoring sensors, bin levels, truck routing, fleet management, and truck flow within your facility. What comes to mind for your facility? While some newly built facilities capture all of this information from the beginning, areas often get missed as facilities evolve. Without this data, you are working from an incomplete picture of your whole facility. The power of AI lies in its ability to see the complete picture and draw insights and predictions from historical data. Invest in identifying your missing data and take steps to capture it in preparation for future AI implementation.
Modernize Technology Stack and Storage
At a minimum, your facility needs to be connected to the internet, and data must be stored on an accessible platform. Unfortunately, Excel documents on a desktop won't suffice. Our recommended criteria for modernizing your technology stack include an easily accessible database with API connectivity and cloud-based storage, so that real-time, all-inclusive facility data can be logged quickly and accurately. We aim to avoid data silos spread across multiple disparate storage areas, as well as systems that are difficult to access or integrate with. API connectivity is essential, and we want to avoid any system that requires cumbersome custom development to connect to.
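As a rough sketch of the API connectivity we have in mind, the example below sends one real-time bin reading to a cloud-hosted database over HTTP. The endpoint URL, API key, and payload fields are hypothetical placeholders rather than a reference to any particular product; a real integration would follow your platform's documented API.

```python
import datetime
import json
import urllib.request

# Hypothetical cloud endpoint and API key; substitute your platform's actual values.
API_URL = "https://example.com/api/v1/facility-readings"
API_KEY = "YOUR_API_KEY"

def log_reading(bin_id: str, level_pct: float, temp_f: float) -> int:
    """Send one real-time bin reading to the cloud database and return the HTTP status code."""
    payload = {
        "bin_id": bin_id,
        "level_pct": level_pct,
        "temp_f": temp_f,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "Authorization": f"Bearer {API_KEY}"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status

if __name__ == "__main__":
    status = log_reading(bin_id="B-12", level_pct=78.5, temp_f=64.2)
    print(f"Logged reading, HTTP status {status}")
```

The point is less the specific code than the pattern: readings flow to one accessible store over a documented API instead of sitting in a spreadsheet on a single desktop.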
Clarify and Enhance Data Security
Security must be at the forefront of the AI implementation conversation. Your data is one of your most valuable assets. Any platform where you place your data, and any party you allow to analyze it, should be reputable and rigorously vetted. Before placing your data in any AI program, it is essential to understand all of its data privacy and security terms and conditions.
Align with Forward-Moving Partners
Do you want to be an expert in AI implementation at your facility? Maybe. However, we recommend aligning yourself with an industry partner who is already moving in that direction and letting them become the experts who meet your needs in this area. It is essential to ask questions that reveal where that partner is today and where they are headed in the future. Add AI implementation to your company's roadmap and ensure it is also included on your partners' roadmaps.
At Easy Automation, we have AI implementation on our roadmap and are actively taking steps forward to provide a solution that makes the most sense for our customers. Are you interested in seeing how we might align or learning more about this? Contact our team at 507-728-8214 or by visiting our website at www.easy-automation.com.
Written by Brian Sokoloski – CTO at Easy Automation, Inc.
Tools & Platforms
Propel Manila introduces The Compression Zone model for AI and human collaboration – Campaign Brief Asia

As the industry shifts with the rise of Artificial Intelligence (AI), Propel Manila is positioning itself to integrate these technologies into its work. The independent agency continues to build on its track record in digital and creative services, approaching AI with both innovation and responsibility.
Propel Manila’s roadmap begins with acknowledging the significant impact AI will have on the industry and its own processes. The agency is embracing new technology while remaining aligned with its core values.
“We are not just adapting. We’re building a creative tank of the future — powered by AI, led by people,” says JC Valenzuela, Founder and Chief Executive of Propel Manila. “We embrace innovation but we remain rooted at the heart of everything, the human intervention from knowing the taste, having the instinct, the strategic thinking. That’s our edge.”
At a time when the industry is grappling with disruption, Propel Manila's AI agenda is driven not by fear or hype but by purpose, continuing its advocacy of merging creativity, technology and innovation in ways that amplify human talent.
Internally, the agency has coined the term The Compression Zone to describe its model, in which AI automation of routine tasks meets human ingenuity. This approach, now becoming a standard, paves the way for the two major shifts defining the agency's strategic direction: The Great Upskilling and The Hybrid Renaissance.
Through The Great Upskilling, Propel Manila is investing in its people by providing tools and training across disciplines, enabling them to master these new AI technologies. It is not about replacing human talent; it is about empowering every member of the team and unlocking new ways to think, create, and lead.
The Hybrid Renaissance underscores how creativity at Propel Manila lives between human insight and machine learning, fostering a collaborative environment where AI supports the unique strengths of human creativity. This hybrid approach unlocks possibilities in strategic thinking, cultural nuance and emotional intelligence, and above all supports the agency's collective success in creating high-impact, digital-first work.
“Together, we’re building a modern, AI-powered creative tank that is agile, imaginative, and most importantly, powered by the great minds and incredible talent of our people,” Valenzuela added.
The Blueprint on Responsible AI
With great technology comes great responsibility. A powerful tool like AI raises valid concerns, particularly around privacy and security. From intellectual property to privacy safeguards, Propel Manila is building strong governance around its AI programs, anchored in transparency, ethics, and accountability.
This commitment includes the creation of the Propel Manila AI x Human Charter, a codified set of guidelines on how the agency will responsibly use AI in partnership with human creativity. The charter sets clear standards, boundaries, and ethical best practices to ensure that, used right, AI becomes a force multiplier for the kind of work that moves people and brands.
Valenzuela added: “We are setting up frameworks that protect the integrity of our work and the people behind it. In fact, we are forming an AI council within the agency to help build and test new tools and provide feedback on our systems. We use AI to scale brilliance—not to diminish the originality, the ideas that matter, that defines us.”
The Horizon of Co-Creation
Propel Manila has already introduced and started integrating AI-powered tools into creative processes such as briefing, ideation, production, and performance analysis. The agency is taking a careful and measured approach, allowing teams to learn and adapt as they progress.
As Propel Manila’s journey with AI unfolds, the impact will be felt across all its stakeholders. For clients, it means being with a creative partner that not only embraces change but also delivers work that is faster, sharper, and more strategically relevant. For its people, continuous upskilling will equip them to thrive in this era while empowering them to focus on high-value pursuits such as collaboration and ideation—and ultimately, achieve a healthy work-life balance. And for the agency itself, it means deepening its commitment to future-proofing, ensuring it navigates industry shifts with both agility and responsibility.
“We’re inviting partners, clients, and the creative community to help shape what’s next,” says Arvon Fernandez, Managing Director of Propel Manila. “Explore the tools. Join the conversation. Give feedback. Share ideas for innovation and take part in learning and development opportunities. Let’s co-build and grow this movement together.”
“AI is only as powerful as the people who use it.”
Tools & Platforms
Parliament panel seeks tech, legal solutions to check AI-based fake news

The Standing Committee on Communications and Information Technology, headed by BJP MP Nishikant Dubey, in its draft report, suggested a balanced approach for deploying AI to curb fake news, noting that the technology is being used to detect misinformation but can be a source of misinformation as well.