
How to write an AI ethics policy for the workplace


If there is one common thread throughout recent research about AI at work, it’s that there is no definitive take on how people are using the technology — and how they feel about the imperative to do so.

Large language models can be used to draft policies, generative AI can create images, and machine learning can power predictive analytics, Ines Bahr, a senior Capterra analyst who specializes in HR industry trends, told HR Dive via email.

Still, there’s a lack of clarity around which tools should be used and when, because of the broad range of applications on the market, Bahr said. Organizations have implemented these tools, but “usage policies are often confusing to employees,” she added, which leads to unsanctioned, though not always malicious, use of certain tech tools.

The result can be unethical, or even unlawful, conduct: AI use can create data privacy concerns, run afoul of state and local laws and give rise to claims of identity-based discrimination.

Compliance and culture go hand in hand

While AI ethics policies largely address compliance, culture can be an equally important component. If employers can explain the reasoning behind AI rules, “employees feel empowered by AI rather than threatened,” Bahr said. 

“By guaranteeing human oversight and communicating that AI is a tool to assist workers, not replace, a company creates an environment where employees not only use AI compliantly but also responsibly,” Bahr added.

Kevin Frechette, CEO of AI software company Fairmarkit, emphasized similar themes in his advice for HR professionals building an AI ethics policy.

The best policies answer two questions, he said: “How will AI help our teams do their best work, and how will we make sure it never erodes trust?”

“If you can’t answer how your AI will make someone’s day better, you’re probably not ready to write the policy,” Frechette said over email.

Many policy conversations, he said, are backward, prioritizing the technology instead of the workers themselves: “An AI ethics policy shouldn’t start with the model; it should start with the people it impacts.”

Consider industry-specific issues

[Photo: A model of IBM Quantum during the inauguration of Europe’s first IBM Quantum Data Center on Oct. 1, 2024, in Ehningen, Germany. The center provides cloud-based quantum computing for companies, research institutions and government agencies. Credit: Thomas Niedermueller via Getty Images]

Industries involved in creating AI tools have additional layers to consider: Bahr pointed to Capterra research that found software vulnerabilities were the top cause of data breaches in the U.S. last year.

“AI-generated code or vibe coding can present a security risk, especially if the AI model is trained on public code and inadvertently replicates existing vulnerabilities into new code,” Bahr explained. 

An AI disclosure policy should address security risks, create internal review guidelines for AI-generated code, and provide training to promote secure coding practices, Bahr said.
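To make the review-guideline idea concrete, here is a minimal sketch of what enforcement could look like in a developer workflow. It is a hypothetical git commit-msg hook, not anything Bahr or Capterra prescribes; the AI-GENERATED marker and the Reviewed-by: trailer are invented conventions for illustration.

```python
#!/usr/bin/env python3
# Hypothetical commit-msg hook: flag AI-assisted code for human review.
# Assumes a team convention (an illustration, not from the article) in which
# contributors tag AI-generated sections with an "AI-GENERATED" comment and
# a human reviewer signs off via a "Reviewed-by:" trailer in the commit message.
import subprocess
import sys


def staged_diff() -> str:
    # Only the changes being committed right now.
    result = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


def main() -> int:
    # git passes the commit-message file path as the hook's first argument.
    commit_msg = open(sys.argv[1], encoding="utf-8").read()
    adds_ai_code = any(
        line.startswith("+") and "AI-GENERATED" in line
        for line in staged_diff().splitlines()
    )
    if adds_ai_code and "Reviewed-by:" not in commit_msg:
        print("This commit adds AI-generated code but has no 'Reviewed-by:' trailer.")
        print("Policy sketch: AI-assisted changes require a human security review.")
        return 1  # non-zero exit aborts the commit
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In practice a team might attach this check to CI rather than local hooks, but the principle is the same: the policy names a marker for AI-generated code and a human sign-off that must accompany it.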

For companies involved in content creation, an AI disclosure could be required, and the policy should spell out workers’ responsibility for the final product or outcome, Bahr said.

“This policy not only signals to the general public that human input has been involved in published content, but also establishes responsibilities for employees to comply with necessary disclosures,” Bahr said.

“Beyond fact-checking, the policy needs to address the use of intellectual property in public AI tools,” she said. “For example, an entertainment company should be clear about using an actor’s voice to create new lines of dialogue without their permission.”

Likewise, a software sales representative should be able to explain to clients how AI is used in the company’s products, and how customer data is handled can also be part of a disclosure policy.

The policy’s in place. What now?

Because AI technology is constantly evolving, employers must remain flexible, experts say. 

“A static AI policy will be outdated before the ink dries,” said Frechette of Fairmarkit. “Treat it like a living playbook that evolves with the tech, the regulations, and the needs of your workforce,” he told HR Dive via email.

HR also should continue to test the AI policies and update them regularly, according to Frechette. “It’s not about getting it perfect on Day One,” he said. “It’s about making sure it’s still relevant and effective six months later.”




Google and California Community Colleges launch largest higher education AI partnership in the US, equipping millions of students with access to free training

In the largest higher education deal of its kind in the US, Google is investing in workforce development for the future, putting California’s community college students at the forefront of the AI-driven economy.

“This collaboration with Google is a monumental step forward for the California Community Colleges,” explains Don Daves-Rougeaux, Senior Advisor to the Chancellor of the California Community Colleges on Workforce Development, Strategic Partnerships, and GenAI. 

“Providing our students with access to world-class AI training and professional certificates ensures they have the skills necessary to thrive in high-growth industries and contribute to California’s economic prosperity. This partnership directly supports our Vision 2030 commitment to student success and workforce readiness. Additionally, offering access to AI tools with data protections and advanced functionality for free ensures that all learners have equitable access to the tools they need to leverage the skills they’re learning, and saves California’s community colleges millions of dollars in potential tool costs.”

All students, faculty, staff and classified professionals at the colleges will be able to access Gemini, Google’s generative AI tool, with data protections in place so they can use AI tools safely.

All students and faculty will also receive free access to Google Career Certificates, Google AI Essentials, and Prompting Essentials, providing practical training for in-demand jobs.

“Technology skills, especially in areas like artificial intelligence, are critical for the future workforce,” adds Bryan Lee, Vice President of Google for Education Go-to-Market. “We are thrilled to partner with the California Community Colleges, the nation’s largest higher education system, to bring valuable training and tools like Google Career Certificates, AI Essentials, and Gemini to millions of students. This collaboration underscores our commitment to creating economic opportunity for everyone.”

The ETIH Innovation Awards 2026

The EdTech Innovation Hub Awards celebrate excellence in global education technology, with a particular focus on workforce development, AI integration, and innovative learning solutions across all stages of education.

Now open for entries, the ETIH Innovation Awards 2026 recognize the companies, platforms, and individuals driving transformation in the sector, from AI-driven assessment tools and personalized learning systems, to upskilling solutions and digital platforms that connect learners with real-world outcomes.

Submissions are open to organizations across the UK, the Americas, and internationally. Entries should highlight measurable impact, whether in K–12 classrooms, higher education institutions, or lifelong learning settings.

Winners will be announced on 14 January 2026 as part of an online showcase featuring expert commentary on emerging trends and standout innovation. All winners and finalists will also be featured in our first print magazine, to be distributed at BETT 2026.




Why artificial intelligence is raising both cybersecurity defences and dangers, and why human skills are needed

An undercurrent of the Financial Review Cyber Summit was that the best firewall is only as strong as the human behind it. It’s a particularly potent message as we enter a new frontier of cyber risks that are constant and evolving.

As Home Affairs and Cyber Security Minister Tony Burke told the audience of corporate executives and cyber professionals, “it doesn’t matter how good your electronic systems are if you haven’t trained your people to be part of the human firewall”.





CISOs grapple with the realities of applying AI to security functions

Turbo boost telemetry

Security AI and automation are beginning to demonstrate significant value, especially in minimizing dwell time and accelerating triage and containment processes, says Myke Lyons, CISO at telemetry and observability pipeline software vendor Cribl.

Their success, however, depends heavily on the prioritization and accuracy of the underlying telemetry, Lyons cautions.

“Within my team, we follow a structured approach to data management: High-priority, time-sensitive telemetry — such as identity, authentication, and key application logs — is directed to high-assurance systems for real-time detection,” Lyons explains. “Meanwhile, less critical data is stored in data lakes to optimize costs while retaining forensic value.”
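A rough sketch of that routing logic might look like the following Python snippet. The source categories, sink names and priority set are illustrative assumptions for this sketch; they are not Cribl’s product, API or Lyons’ actual configuration.

```python
from dataclasses import dataclass

# Assumed priority set: identity, authentication and key application logs are
# the time-sensitive sources Lyons calls out; everything else is treated as cold.
HIGH_PRIORITY_SOURCES = {"identity", "authentication", "app_critical"}


@dataclass
class Event:
    source: str    # e.g. "authentication", "web_access"
    payload: dict


def route(event: Event) -> str:
    """Send time-sensitive telemetry to real-time detection; store the rest
    in a data lake to control cost while retaining forensic value."""
    if event.source in HIGH_PRIORITY_SOURCES:
        return "realtime_detection"  # high-assurance system, watched constantly
    return "data_lake"               # low-cost retention, queried on demand


# Usage: a failed login is routed hot; a routine access log goes cold.
assert route(Event("authentication", {"result": "failure"})) == "realtime_detection"
assert route(Event("web_access", {"status": 200})) == "data_lake"
```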


