
Ethics & Policy

The HAIP Reporting Framework: Feedback on a quiet revolution in AI transparency

Transparency in AI is no longer an option

AI is transforming our world, but who gets to look under the hood? In a world where algorithms influence elections, shape job markets, and generate knowledge, transparency is no longer just a “nice-to-have”—it’s the foundation of trust.

This is one of the pressing challenges the Hiroshima AI Process (HAIP) addresses. HAIP is a G7 initiative launched in 2023 that aims to establish a global solution for safe and trustworthy AI. As part of this effort, it has developed, with the OECD, a voluntary reporting framework that invites AI developers to disclose how they align with international guidelines for responsible AI.

Let’s look at some early insights from interviews with 11 of the first 19 participating organisations and a multistakeholder meeting held in Tokyo in June 2025. The findings reveal a picture that is both promising and complex, with lessons for the future of global AI governance.

One framework, many motivations: Why companies are joining HAIP

Why would a company voluntarily publish sensitive information about how it builds AI? It turns out the answer depends on who they are speaking to. Our interviews revealed five key audiences that shape how companies approach their HAIP reports:

| Audience | Examples | Typical motivation |
| --- | --- | --- |
| International bodies | OECD, G7 partners | Visibility in AI governance; international alignment |
| Policy stakeholders | Governments, regulators | Gaining trust; influence on regulatory frameworks |
| Business and technical partners | B2B clients, external developers, corporate partners | Contractual clarity; risk accountability |
| General public | Consumers, civil society, job-seeking students | Ethical branding; accessibility |
| Internal teams | Employees | Internal alignment and awareness of AI governance |

For some, HAIP is a diplomatic tool to show they are aligned with global norms. For others, it is a means of communicating readiness for future regulation. B2B companies use the reports to inform clients and partners. Some view the report primarily as a public-facing transparency tool, written in clear, relatable language.

Interestingly, many companies emphasise how the internal process of preparing the report—coordinating across departments, aligning terminology, clarifying roles—was just as valuable as the final publication.

The value and challenge of ambiguity

A recurring theme was uncertainty about how much to disclose or the level of detail to provide. Some companies asked: “Should we talk about specific AI models, or company-wide policy?” Others wondered: “Do we write from the perspective of a developer or a deployer?”

And yet, this ambiguity was also seen as a strength. The broad definition of “advanced AI systems” enabled a diverse group of participants to take part, including those working with small language models, retrieval-augmented generation (RAG), or open-weight AI.

This highlights a key trade-off: too much flexibility can weaken comparability, but too much standardisation might discourage participation. Future iterations of the framework will need to carefully balance these aspects.

Ranking or recognition? A cautionary note

Since HAIP employs a standard questionnaire, comparisons across organisations are possible. But should the responses be ranked?

At a stakeholder meeting in Tokyo, when researchers presented a draft scoring system, several participants strongly objected. The concern: that simplistic rankings could distort incentives, discourage participation, and shift the focus from transparency to performance signalling.

Instead, HAIP should be seen as a recognition of effort—a credit for choosing openness. While maintaining the credibility of published content is essential, evaluations must remain context-sensitive and qualitative, not one-size-fits-all.

Three proposals for HAIP’s future

Based on the feedback we collected, we would suggest the following improvements:

1. Clarify the target audience

Each organisation should clearly specify its report’s target audience. Is it aimed at policymakers, customers, or the public? This assists readers in understanding the content and prevents mismatched expectations.

2. Promote a shared vocabulary

Terms like “safety” or “robustness” are often used differently across organisations. To encourage uniformity, we suggest establishing a shared glossary based on the OECD and other international sources.

3. Raise awareness and provide support

Many interviewees noted that HAIP remains poorly understood, both inside their organisations and in the public eye. To address this, we suggest:

  • Permitting the use of a HAIP logo to indicate participation.
  • Engaging institutional investors, who increasingly value transparency in ESG.
  • Holding an annual ‘HAIP Summit’ to showcase updates and good practices.

A new culture of voluntary transparency

Besides being a reporting tool, the HAIP Reporting Framework acts as a cultural intervention. It motivates companies to reflect, coordinate, and disclose in ways they might not have previously considered. Several participants observed that the very act of publishing a report, even a modest one, should be celebrated rather than penalised.

As AI continues to shape societies and economies, voluntary transparency mechanisms like HAIP present a promising model for bottom-up governance. They are not perfect, but they are a good starting point.

By fostering an environment where disclosure is rewarded, not feared, HAIP may well become a template for the future of responsible AI.

The post The HAIP Reporting Framework: Feedback on a quiet revolution in AI transparency appeared first on OECD.AI.




Ethics & Policy

DCO launches new AI ethics tool to advance responsible technology use


RIYADH: Across the global construction sector, long considered one of the most resistant to digitization, a quiet revolution is unfolding.


Artificial intelligence is no longer a mere buzzword confined to laboratories and boardrooms. It is increasingly present in the urban fabric, embedded into scaffolding, concrete and command centers.


One company at the heart of this shift is viAct, a Hong Kong-based AI firm co-founded by Gary Ng and Hugo Cheuk. Their aim is to make construction safer, smarter and significantly more productive using a scenario-based AI engine built for complex, high-risk environments.


“Despite being one of the most labor-intensive and hazardous industries, construction remains vastly under-digitized,” Ng told Arab News. “We saw this as an opportunity to bring AI-driven automation and insights to frontline operations.”



Unlike conventional surveillance tools that simply record footage, viAct’s platform acts like a digital foreman. It interprets real-time visual data to detect unsafe practices, productivity gaps and anomalies, all without human supervision.


At the core of the platform are intelligent video analytics powered by edge computing. By processing visuals from jobsite cameras and sensors, viAct can flag whether a worker has entered a restricted zone, whether proper personal protective equipment is being worn, or if a crane is operating unsafely.
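The kind of rule layer described above can be sketched in a few lines. The code below is a hypothetical illustration only: the class names, thresholds, and rules are all assumptions for the sake of the example, not viAct's actual (proprietary) pipeline, which runs on edge devices on site.

```python
from dataclasses import dataclass

# Hypothetical sketch: rule-based safety alerting layered on top of
# per-frame object detections from a jobsite camera. All names and
# thresholds here are illustrative assumptions, not a real API.

@dataclass
class Detection:
    label: str          # e.g. "person", "hardhat", "truck"
    x: float            # bounding-box centre, normalised to 0-1
    y: float
    track_id: int = -1  # anonymised track id, not a personal identity

RESTRICTED_X = (0.8, 1.0)  # x-range of a keep-out zone in this camera view

def frame_alerts(detections):
    """Turn one frame's detections into human-readable safety alerts."""
    alerts = []
    people = [d for d in detections if d.label == "person"]
    hardhats = [d for d in detections if d.label == "hardhat"]
    for p in people:
        # Rule 1: worker inside the restricted zone.
        if RESTRICTED_X[0] <= p.x <= RESTRICTED_X[1]:
            alerts.append(f"worker {p.track_id} in restricted zone")
        # Rule 2: no hardhat detected near this worker (crude proximity check).
        if not any(abs(h.x - p.x) < 0.05 and abs(h.y - p.y) < 0.15
                   for h in hardhats):
            alerts.append(f"worker {p.track_id} without hardhat")
    return alerts

frame = [
    Detection("person", 0.90, 0.50, track_id=7),   # inside keep-out zone
    Detection("person", 0.30, 0.50, track_id=8),   # safe, wearing hardhat
    Detection("hardhat", 0.31, 0.42),
]
print(frame_alerts(frame))
```

In a production system the rules would of course be far richer and tuned per camera, but the shape is the same: a detector emits labelled boxes, and a lightweight rule engine on the edge device turns them into alerts without storing identities.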


“This is not just about object detection,” said Ng. “Our AI understands context. It recognizes behaviors — like a worker being too close to the edge without a harness or a truck reversing unsafely — and acts in real time.”


That ability to contextualize data is crucial in megaprojects, where risks multiply with size.


The firm’s technology has already been deployed across East Asia and parts of Europe. Now, the company is eyeing Saudi Arabia and the wider Gulf region, where giga-projects are transforming skylines at record speed.




Ng confirmed viAct is in active discussions to enter the Saudi market.


“Saudi Arabia’s Vision 2030 is deeply aligned with our mission,” he said. “There’s a growing demand for AI in infrastructure — not just for safety, but also for efficiency, environmental compliance, and transparency.”


From NEOM and The Line to Qiddiya and Diriyah Gate, Saudi Arabia is leading one of the most ambitious construction booms in the world. These projects involve thousands of workers, advanced logistics and constant oversight.


However, traditional safety audits and manual inspections are no longer sufficient. “With projects of this scale, real-time monitoring is not a luxury — it’s a necessity,” said Ng.


While viAct hasn’t yet launched in the Kingdom, its platform is fully prepared for Arabic localization and regional compliance standards, including Saudi labor laws and Gulf Cooperation Council safety codes.


What sets viAct apart is how seamlessly it integrates with existing infrastructure. Rather than requiring expensive proprietary equipment, the platform works with standard CCTV cameras and can be deployed in both urban and remote sites.



 


“Our system is plug-and-play,” said Ng. “You don’t need to overhaul your entire setup to use AI. That makes it ideal for companies in transition or for phased construction timelines.”


Its use of edge AI, meaning data is processed on site rather than in a distant cloud, allows viAct to deliver insights even in areas with weak internet connectivity. This feature is particularly useful in Saudi Arabia’s more isolated development zones or early-phase sites with minimal setup.


Its software is also highly customizable. For instance, a client building a hospital might prioritize fall detection and material delays, while a contractor working on an airport runway may need to monitor large machinery and perimeter access.


As automation reshapes industries, many worry that people are being replaced by machines. But Ng insists that viAct’s goal is not to eliminate workers — it is to protect them.


“We’re not building robots to take over,” he said. “We’re building tools that enhance human judgment and ensure safety. When a worker is alerted to a risk before an accident occurs, that’s AI doing its best job.”


In fact, many of viAct’s clients report that once site workers understand the system is not spying on them, but rather observing unsafe situations, adoption becomes smoother. Managers gain better oversight and laborers gain peace of mind.


“We see this as a collaboration between human intelligence and artificial intelligence,” Ng said. “Each has strengths. Together, they’re far more effective.”



Gary Ng co-founded viAct, a Hong Kong-based AI firm, with Hugo Cheuk. (Supplied)


Deploying AI in construction also brings ethical questions to the forefront, particularly in projects run by government entities or involving public infrastructure. Ng is upfront about these concerns.


“All our solutions are GDPR-compliant and privacy-first,” he said, referring to the EU’s General Data Protection Regulation, a comprehensive set of rules designed to protect the personal data of individuals.


“We don’t use facial recognition and we don’t track individuals. The focus is purely on safety, compliance and productivity.”


Workers are anonymized in the system, with all data encrypted and stored securely. Dashboards used by contractors and project leads include logs, alerts and safety scores, allowing for clear documentation and accountability without compromising personal privacy.


This is especially important in the Gulf, where projects often involve multinational labor forces and cross-border stakeholders.



Looking ahead, viAct plans to double down on its expansion in the Middle East, continue advancing its AI models and advocate for ethical AI deployment in high-risk sectors.


The company is also exploring ways to integrate predictive analytics, allowing clients to foresee and prevent incidents before they occur. This could eventually shift AI’s role from reactive to proactive, forecasting safety breaches, delivery delays or environmental compliance issues in advance.


Ng believes this kind of intelligent foresight will soon become standard across the construction industry.


“It’s not about replacing humans,” he said. “It’s about building a smarter site, one where decisions are faster, risks are fewer, and lives are safer.”


In the age of giga-projects, that is a future Saudi Arabia is already building.

 




Ethics & Policy

Brilliant women in AI Ethics 2024 and AIE Summit speaker


Nazareen Ebrahim has built a career where technology, communication, and ethics meet. As the founder of Naz Consulting and AI Ethics Lead at Socially Acceptable, she’s part strategist, part storyteller, and fully committed to making sure Africa’s voice is not just heard but leads. In 2024, she was named one of the 100 Brilliant Women in AI Ethics, a recognition of her growing influence in one of the world’s most urgent conversations.

Nazareen Ebrahim is one of the speakers at the summit. Source: Supplied.

At the AI Empowered (AIE) Summit this August at the CTICC in Cape Town, Ebrahim joins the speaker line-up to share her unique perspective.

(See how you can win tickets to attend at the end of this article.)

What inspired you to start Naz Consulting, and how has your vision for the company evolved over time?

I was a geeky 19-year-old tomboy on campus, sitting outside the library with my geek crowd. We talked about what we’d like to do when we finished university. Without skipping a beat, I said that I wanted to start a media and communications company. This was in the days before social media. I consulted, bootstrapped and worked with freelancers for a long time. Just before COVID-19 hit, I started to build a team. The dream is to build it into Africa’s premier technology communications consultancy.

Why is it important for women to take part in this conversation around AI and marketing and why now?

Women have always contributed significantly across sectors and industries, from research and development to innovation, invention, design and progressive leadership. But the status quo has been to dismiss a woman’s achievement as less significant. Amplifying women’s voices in the age of AI is of paramount importance in defining this industrial age. The leadership skills and technical prowess women bring to shaping this technology will anchor AI ethics and tech-for-good initiatives.

What do you think is getting lost in the way AI is currently being discussed in the marketing world?

The practicality of it. AI is thrown around loosely as an all-encompassing technology designed to be the world’s aha moment. It is in fact humans who direct it, as we have done in every other industrial age. Human beings need to ask the questions, train appropriately for the changes, be open and curious to learning, and see AI for what it is: a way to amplify and optimise our efforts, never to replace our values.

For marketing professionals attending the summit, what’s one mindset shift you hope they walk away with after your session?

With the confidence to ask the right questions, and with openness to change in order to stay relevant in this new and fast-changing world. Creativity is found in every facet of life. Marketers have usually held the crown for creativity. Now is the time to embrace the fullness of this industrial age. We are no longer marketers. We are business optimisation technologists – BOTS.

What role do you believe African marketers can play in shaping how AI is developed and applied globally?

We can play the role of providing world-class, leading research that presents as accurate a view as possible of our continent, cultures and peoples. We don’t need the West to tell us who we are. AI is a lifecycle comprising multiple components: models, data, training and resources. Will we allow ourselves to continue to be led by the West and the East, or will we be owners and builders of the technologies that guide and shape humanity?

Want to be part of the conversation? As a special offer to our readers, you could stand a chance to attend the AI Empowered Summit, inspired by EO Cape Town, taking place on 7–8 August 2025 at the CTICC. We’re giving away two double tickets to this thought-provoking event where innovators like Nazareen Ebrahim will share their insights on the future of AI, ethics, marketing, and beyond. Contact info@aiempowered.co.za to enter.




Ethics & Policy

15 AI Movies to Add to Your Summer Watchlist


When we think about AI, our first thought is often of a killer robot, a rogue computer system, or a humanoid child on its quest to become a real boy. Depictions of AI in film reflect how people have viewed AI and similar technologies over the past century. However, the reality of AI differs from what we see in science fiction: AI can take many forms, but it is almost never humanoid, and it most certainly isn’t always bad. Our ideas of what AI is and what it can become originate from compelling science fiction stories dating as far back as the 19th century. As technology has evolved, people’s ideas, hopes, and fears of AI have evolved with it.

As the field of AI begins to blur the line between reality and science fiction, let’s look at some films that offer a lens into intelligent machines, what it means to be human, and humanity’s quest to create an intelligence that may someday rival our own. Here are 15 must-watch films about AI to add to your summer movie watchlist:


