Tools & Platforms
How to write an AI ethics policy for the workplace
If there is one common thread throughout recent research about AI at work, it’s that there is no definitive take on how people are using the technology — and how they feel about the imperative to do so.
Large language models can be used to draft policies, generative AI can be used for image creation, and machine learning can be used for predictive analytics, Ines Bahr, a senior Capterra analyst who specializes in HR industry trends, told HR Dive via email.
Still, there’s a lack of clarity around which tools should be used and when, given the broad range of applications on the market, Bahr said. Organizations have implemented these tools, but “usage policies are often confusing to employees,” she said — which leads to unsanctioned, though not always malicious, use of certain tech tools.
The result can be unethical or even unlawful actions: AI use can create data privacy concerns, run afoul of state and local laws and give rise to claims of identity-based discrimination.
Compliance and culture go hand in hand
While AI ethics policies largely address compliance, culture can be an equally important component. If employers can explain the reasoning behind AI rules, “employees feel empowered by AI rather than threatened,” Bahr said.
“By guaranteeing human oversight and communicating that AI is a tool to assist workers, not replace them, a company creates an environment where employees not only use AI compliantly but also responsibly,” Bahr added.
Kevin Frechette, CEO of AI software company Fairmarkit, emphasized similar themes in his advice for HR professionals building an AI ethics policy.
The best policies answer two questions, he said: “How will AI help our teams do their best work, and how will we make sure it never erodes trust?”
“If you can’t answer how your AI will make someone’s day better, you’re probably not ready to write the policy,” Frechette said over email.
Many policy conversations, he said, are backward, prioritizing the technology instead of the workers themselves: “An AI ethics policy shouldn’t start with the model; it should start with the people it impacts.”
Consider industry-specific issues

Industries involved in creating AI tools have additional layers to consider: Bahr pointed to research from Capterra that revealed that software vulnerabilities were the top cause of data breaches in the U.S. last year.
“AI-generated code or vibe coding can present a security risk, especially if the AI model is trained on public code and inadvertently replicates existing vulnerabilities into new code,” Bahr explained.
An AI disclosure policy should address security risks, create internal review guidelines for AI-generated code, and provide training to promote secure coding practices, Bahr said.
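Internal review guidelines like these can include an automated first pass before human review. As a minimal, hypothetical sketch (the pattern list and findings below are illustrative assumptions, not any specific company's tooling), a script might flag common risky constructs in AI-generated code for a reviewer to examine:

```python
import re

# Hypothetical example only: these patterns illustrate the idea of an
# internal review gate for AI-generated code; they are not a real
# security tool or a complete vulnerability list.
RISKY_PATTERNS = {
    "use of eval()": re.compile(r"\beval\s*\("),
    "hardcoded credential": re.compile(
        r"(?i)(password|api_key)\s*=\s*['\"][^'\"]+['\"]"
    ),
    "shell injection risk": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
}

def review_snippet(code: str) -> list[str]:
    """Return labels of risky constructs found in a snippet awaiting human review."""
    findings = []
    for label, pattern in RISKY_PATTERNS.items():
        if pattern.search(code):
            findings.append(label)
    return findings

snippet = 'password = "hunter2"\nresult = eval(user_input)'
print(review_snippet(snippet))  # → ['use of eval()', 'hardcoded credential']
```

In practice, a team would pair trained reviewers with an established static analysis tool; a hand-rolled pattern list like this only sketches where an automated check fits into the review guidelines Bahr describes.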
For companies involved in content creation, an AI disclosure policy may be warranted and should spell out workers’ responsibility for the final product or outcome, Bahr said.
“This policy not only signals to the general public that human input has been involved in published content, but also establishes responsibilities for employees to comply with necessary disclosures,” Bahr said.
“Beyond fact-checking, the policy needs to address the use of intellectual property in public AI tools,” she said. “For example, an entertainment company should be clear about using an actor’s voice to create new lines of dialogue without their permission.”
Likewise, a software sales representative should be able to explain to clients how AI is used in the company’s products, and customer data use can also be part of a disclosure policy.
The policy’s in place. What now?
Because AI technology is constantly evolving, employers must remain flexible, experts say.
“A static AI policy will be outdated before the ink dries,” according to Frechette of Fairmarkit. “Treat it like a living playbook that evolves with the tech, the regulations, and the needs of your workforce,” he told HR Dive via email.
HR also should continue to test the AI policies and update them regularly, according to Frechette. “It’s not about getting it perfect on Day One,” he said. “It’s about making sure it’s still relevant and effective six months later.”
SMARTSHOOTER Wins Innovation Award for AI-Driven Precision Fire Control Solutions
SMARTSHOOTER won the Innovation Award in the Army Technology Excellence Awards 2025 for its significant advancements in enhancing small arms accuracy and operational effectiveness through the integration of artificial intelligence and modular technology.
The Army Technology Excellence Awards honor the most significant achievements and innovations in the defense industry. Powered by GlobalData’s business intelligence, the Awards recognize the people and companies leading positive change and shaping the industry’s future.
SMARTSHOOTER’s SMASH fire control technology has been recognized in the Precision Fire Control category, reflecting the company’s approach to integrating artificial intelligence (AI), computer vision, and advanced algorithms into compact, scalable fire control systems that address evolving operational challenges for ground forces.
AI-enabled precision enhances small arms accuracy
Hitting moving or distant targets has traditionally relied on a soldier’s skill and experience. SMARTSHOOTER’s SMASH system changes that equation by using real-time image processing and AI-driven tracking. For instance, when troops face fast-moving evasive threats such as small drones (sUAS), SMASH can automatically lock onto the target, calculate ballistic trajectories, and release the shot only when a hit is assured. This improves hit accuracy during intense battle situations and reduces collateral damage.

The technology has proven valuable against aerial threats that are difficult to engage with conventional optics or unaided marksmanship. Field reports from the Israel Defense Forces (IDF) and U.S. military units show that SMASH-equipped rifles have been effective in neutralizing drones that might otherwise evade traditional countermeasures. By transforming standard infantry weapons into precision platforms, SMARTSHOOTER has addressed a critical gap in dismounted force protection.
Modular and scalable solutions for different missions
The SMASH product family is designed to fit a variety of operational needs and to integrate seamlessly with existing force structures. The SMASH 2000L and 3000 models mount directly onto standard rifles without adding much weight or bulk, making them practical for soldiers on patrol. For situations where longer range or better situational awareness is needed, the SMASH X4 adds a four-times magnifying optic and a laser rangefinder to its AI targeting features.
SMARTSHOOTER’s SMASH family of fire control systems also includes the SMASH Hopper, a remote weapon station that can be mounted on vehicles, unmanned platforms, or static defensive positions. It connects with external sensors or C4I (Command, Control, Communications, Computers & Intelligence) systems and can be operated via wired or wireless links. This flexibility means units can use SMASH technology in everything from urban patrols to border security while staying connected with larger command networks.

Operational validation and international adoption
SMARTSHOOTER’s technology is deployed across multiple armed forces, and its performance has been demonstrated in live operational environments. The IDF has deployed SMASH systems across all infantry brigades to counter drone and ground threats along sensitive borders and in battle. Similarly, U.S. Army and Marine Corps units have added SMASH to their counter-drone arsenals following rigorous evaluation by organizations such as the Joint Counter-sUAS Office (JCO) and Irregular Warfare Technical Support Directorate (IWTSD).
These deployments are not limited to controlled trials; they reflect ongoing use in active conflict zones where reliability is crucial. Users have reported higher engagement success rates against both aerial and ground targets, even when facing complex threats like drone swarms or armed quadcopters. The awarding of multi-million-dollar contracts by defense agencies further demonstrates confidence in the system’s capabilities.

Beyond Israel and the United States, SMARTSHOOTER’s solutions have been adopted by NATO partners in Europe, including Germany, the UK, and the Netherlands, as well as by security forces in Asia-Pacific. This broad uptake shows that militaries worldwide see value in this approach to modern battlefield challenges.
Human-in-the-Loop targeting supports ethical use of AI
A distinguishing aspect of SMARTSHOOTER’s approach is its focus on keeping humans in control of engagement decisions. While automation helps with tracking and aiming, operators always make the final call. The technology provides visual cues, such as target locks or shot timing indicators, but never fires autonomously.
This approach aligns with evolving international norms regarding responsible use of AI in defense applications. It ensures that accountability remains with trained personnel rather than algorithms alone, a consideration increasingly scrutinized by policymakers and military leaders alike. By embedding these safeguards into its products from inception, SMARTSHOOTER has addressed both operational needs and ethical concerns associated with next-generation fire control systems.

“We are honored to receive this recognition. This achievement reflects the proven value of our SMASH fire control systems and their ability to transform conventional small arms into precision tools against modern threats, including drones. Deployed by leading military forces worldwide, SMASH continues to enhance operational effectiveness at the squad level, and we remain committed to driving innovation that meets the evolving needs of today’s battlefield.”
– Michal Mor, CEO of SMARTSHOOTER
Company Profile
SMARTSHOOTER is a world-class designer, developer, and manufacturer of innovative fire control systems that significantly increase hit accuracy. With a rich record in designing unique solutions for the warfighter, SMARTSHOOTER technology enhances mission effectiveness through the ability to accurately engage and eliminate ground, aerial, static, or moving targets, including drones, during both day and night operations.
Designed to help military and law enforcement professionals swiftly and accurately neutralize their targets, the company’s combat-proven SMASH Family of Fire Control Systems increases assault rifle lethality while keeping friendly forces safe and reducing collateral damage. The company’s experienced engineers combine electro-optics, computer vision technologies, real-time embedded software, ergonomics, and system engineering to provide cost-effective, easy-to-use solutions for modern conflicts.
Fielded and operational by forces in the US, UK, Israel, NATO countries, and others, SMARTSHOOTER’s SMASH family of solutions provides end-users with a precise hit capability across multiple mission areas, creating a significant advantage for the infantry soldier and ultimately revolutionizing the world of small arms and optics.
SMARTSHOOTER’s headquarters are based in Yagur, Israel. The company has subsidiary companies in Europe, the US, and Australia.
Contact Details
E-mail: info@smart-shooter.com
Links
Website: www.SMART-SHOOTER.com
Augustana University announces AI expert, bestselling author as Critical Inquiry & Citizenship Colloquium speaker

Sept. 16, 2025
This piece is sponsored by Augustana University.
Augustana University’s third annual Critical Inquiry & Citizenship Colloquium will culminate with Dr. Joy Buolamwini as the featured speaker.
Buolamwini, bestselling author, MIT researcher and founder of the Algorithmic Justice League, will give a keynote presentation to the Augustana community, alumni and friends at 4 p.m. Oct. 25 in the Elmen Center, with a book signing to follow.
Generously supported by Rosemarie and Dean Buntrock and in partnership with Augustana’s Center for Western Studies, the Critical Inquiry & Citizenship Colloquium was established in 2023. The colloquium is designed to promote civil discourse and deep reflection with the goal of enhancing students’ skills to think critically and communicate persuasively as citizens of a pluralistic society.
“In an era of unprecedented technological advancement, Dr. Buolamwini’s insights urge us to consider not only the capabilities of artificial intelligence but its ethical implications. Her participation in this year’s colloquium invites meaningful dialogue around integrity, responsibility and the human experience,” Augustana President Stephanie Herseth Sandlin said.
In addition to being a researcher, model and artist, Buolamwini is the author of the U.S. bestseller “Unmasking AI: My Mission To Protect What Is Human in a World of Machines.”
Buolamwini’s research on facial recognition technologies transformed the field of AI auditing. She advises world leaders on preventing AI harm and lends her expertise to congressional hearings and government agencies seeking to enact equitable and accountable AI policy.
Buolamwini’s TEDx Talk on algorithmic bias has almost 1.9 million views, and her TED AI Talk on protecting human rights in an age of AI pushes the boundaries of the TED Talk format.
As the “Poet of Code,” she also creates art to illuminate the impact of AI on society, with her work featured in publications such as Time, The New York Times, Harvard Business Review, Rolling Stone and The Atlantic. Her work as a spokesmodel also has been featured in Vogue, Allure, Harper’s Bazaar and People. She is the protagonist of the Emmy-nominated documentary “Coded Bias.”
Buolamwini is the recipient of notable awards, including the Rhodes Scholarship, Fulbright Fellowship, Morals & Machines Prize, as well as the Technological Innovation Award from the King Center. She was selected as a 2022 Young Global Leader, one of the world’s most promising leaders younger than 40 as determined by the World Economic Forum, and Fortune named her the “conscience of the AI revolution.”
“Many associate AI with advancement and intrigue. Dr. Buolamwini invites cognitive dissonance by demonstrating the potentials for harm caused by the unexamined use of AI,” said Dr. Shannon Proksch, assistant professor of psychology and neuroscience at Augustana.
“Dr. Buolamwini’s visit will invite the Augustana community to engage in critical thinking and deep reflection around how algorithmic technology intersects with our lives and society as a whole. Her work embodies the goals of the Critical Inquiry & Citizenship Colloquium by challenging us to acknowledge the human impact of AI and remain vigilant about the role that we play in ensuring that these technologies do more to benefit and strengthen our communities than to harm them.”
“AI is the most powerful and disruptive technology of our time, so we’re very excited to bring Dr. Buolamwini to Sioux Falls. She’s an engaging and dynamic speaker whose research and life experience have given her deep insight into how we can ensure that AI is used to promote the flourishing of all,” said Dr. Stephen Minister, Stanley L. Olsen Chair of Moral Values and professor of philosophy at Augustana.
Tickets for the 2025 Critical Inquiry & Citizenship Colloquium are free and available to the public at augie.edu/CICCTickets.
About the Critical Inquiry & Citizenship Colloquium
In partnership with the Center for Western Studies and supported by Rosemarie and Dean Buntrock, this annual one- or two-day colloquium is intended to feature faculty scholars and students, as well as industry, research and policy experts who inspire and facilitate critical thinking, persuasive reasoning and thoughtful discussion around timely and engaging topics in areas ranging from religion, science and politics to history, technology and business. The colloquium kicks off or culminates in a keynote given by thought leaders of national or global prominence.
Conduent Integrates AI Technologies to Modernize Government Payments, Combat Fraud and Improve Customer Experiences for Beneficiaries

Successfully completed AI pilot with Microsoft – now live – boosts fraud detection
FLORHAM PARK, N.J., September 16, 2025–(BUSINESS WIRE)–Conduent Incorporated (Nasdaq: CNDT), a global technology-driven business solutions and services company, is embedding generative AI (GenAI) and other advanced AI technologies into its suite of solutions for state and federal agencies. These technologies aim to improve the disbursement of critical government benefits, enhance the citizen experience, and fortify fraud prevention across major aid programs like Medicaid and the Supplemental Nutrition Assistance Program (SNAP).
As part of a recently completed GenAI pilot with Microsoft – originally announced in 2024 and now fully deployed – Conduent has significantly increased its fraud detection capacity for its largest open-loop payment card programs. Because these cards can be used at a wide range of merchants, monitoring for fraud is particularly complex. Leveraging AI, a small team of specialists can now surveil tens of thousands of accounts for suspicious activity, including identity theft and account takeover, with significant improvements in accuracy. This capability is in the process of being scaled to other payment card programs.
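As an illustrative sketch only (Conduent's actual fraud models are not public, and the function and threshold below are hypothetical), account surveillance of this kind rests on a simple idea: score how far new activity deviates from an account's own history, and flag outliers for a human specialist to review.

```python
from statistics import mean, stdev

# Hypothetical sketch of anomaly flagging on transaction amounts.
# Real fraud systems use far richer features (merchant, location,
# timing) and learned models; this only illustrates the principle.
def flag_suspicious(history: list[float], new_amount: float,
                    threshold: float = 3.0) -> bool:
    """Return True if new_amount deviates strongly from the account's history."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

history = [42.0, 38.5, 45.0, 40.0, 41.5]
print(flag_suspicious(history, 39.0))   # typical purchase → False
print(flag_suspicious(history, 950.0))  # far outside the usual range → True
```

The point of such a gate is triage: a small team can review only the flagged accounts rather than all of them, which is consistent with the staffing gains the pilot describes.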
Following the pilot’s success, Conduent is now seeking to apply similar AI methodologies to help detect and prevent fraud in Medicaid and closed-loop EBT cards, including SNAP benefits – helping safeguard usage at approved retailers. A leader in government payment disbursements, Conduent currently supports electronic payments for public programs in 37 states.
“As states adapt to evolving budget constraints and eligibility requirements, AI can empower agencies to reduce fraud and improper payments while improving service delivery,” said Anna Sever, President, Government Solutions at Conduent. “With decades of experience supporting critical government programs, Conduent is deepening its investment in AI to expand these gains across multiple programs.”
Transforming Customer Support with AI
Conduent is also deploying AI to drive improvements in the contact center experience for public benefit recipients. A standout example is a Conduent GenAI-powered capability that equips agents with instant access to accurate, program-specific information – reducing call handling times.
Conduent provides U.S. agencies with solutions for healthcare claims administration, government benefit payments, eligibility and enrollment, and child support. Visit Conduent Government Solutions to learn more.