Ethics & Policy
The HAIP Reporting Framework: Feedback on a quiet revolution in AI transparency
Transparency in AI is no longer an option
AI is transforming our world, but who gets to look under the hood? When algorithms influence elections, shape job markets, and generate knowledge, transparency is no longer just a “nice-to-have”: it is the foundation of trust.
This is one of the pressing challenges the Hiroshima AI Process (HAIP) addresses. HAIP is a G7 initiative launched in 2023 that aims to establish a common international approach to safe, secure, and trustworthy AI. As part of this effort, it has developed, with the OECD, a voluntary reporting framework that invites AI developers to disclose how they align with international guidelines for responsible AI.
Let’s look at some early insights from interviews with 11 of the first 19 participating organisations and a multistakeholder meeting held in Tokyo in June 2025. The findings reveal a picture that is both promising and complex, with lessons for the future of global AI governance.
One framework, many motivations: Why companies are joining HAIP
Why would a company voluntarily publish sensitive information about how it builds AI? It turns out the answer depends on who they are speaking to. Our interviews revealed five key audiences that shape how companies approach their HAIP reports:
| Audience | Examples | Typical motivation |
| --- | --- | --- |
| International bodies | OECD, G7 partners | Visibility in AI governance; international alignment |
| Policy stakeholders | Governments, regulators | Gaining trust; influence on regulatory frameworks |
| Business and technical partners | B2B clients, external developers, corporate partners | Contractual clarity; risk accountability |
| General public | Consumers, civil society, job-seeking students | Ethical branding; accessibility |
| Internal teams | Employees | Internal alignment and awareness of AI governance |
For some, HAIP is a diplomatic tool to show they are aligned with global norms. For others, it is a means of communicating readiness for future regulation. B2B companies use the reports to inform clients and partners. Some view the report primarily as a public-facing transparency tool, written in clear, relatable language.
Interestingly, many companies emphasise how the internal process of preparing the report—coordinating across departments, aligning terminology, clarifying roles—was just as valuable as the final publication.
The value and challenge of ambiguity
A recurring theme was uncertainty about how much to disclose or the level of detail to provide. Some companies asked: “Should we talk about specific AI models, or company-wide policy?” Others wondered: “Do we write from the perspective of a developer or a deployer?”
And yet, this ambiguity was also seen as a strength. The broad definition of “advanced AI systems” enabled a diverse group of participants to take part, including those working with small language models, retrieval-augmented generation (RAG), or open-weight AI.
This highlights a key trade-off: too much flexibility weakens comparability, while too much standardisation might discourage participation. Future iterations of the framework will need to balance the two carefully.
Ranking or recognition? A cautionary note
Since HAIP employs a standard questionnaire, comparisons across organisations are possible. But should the responses be ranked?
At a stakeholder meeting in Tokyo, when researchers presented a draft scoring system, several participants strongly objected. The concern: that simplistic rankings could distort incentives, discourage participation, and shift the focus from transparency to performance signalling.
Instead, HAIP should be seen as a recognition of effort—a credit for choosing openness. While maintaining the credibility of published content is essential, evaluations must remain context-sensitive and qualitative, not one-size-fits-all.
Three proposals for HAIP’s future
Based on the feedback we collected, we would suggest the following improvements:
1. Clarify the target audience
Each organisation should clearly specify its report’s target audience. Is it aimed at policymakers, customers, or the public? This assists readers in understanding the content and prevents mismatched expectations.
2. Promote shared vocabulary
Terms like “safety” or “robustness” are often used differently across organisations. To encourage consistency, we suggest establishing a shared glossary based on OECD definitions and other international sources.
3. Raise awareness and provide support
Many interviewees noted that HAIP remains poorly understood, both inside their organisations and in the public eye. To address this, we suggest:
- Permitting the use of a HAIP logo to indicate participation.
- Engaging institutional investors who increasingly value transparency in ESG.
- Hosting an annual ‘HAIP Summit’ to showcase updates and good practices.
A new culture of voluntary transparency
Besides being a reporting tool, the HAIP Reporting Framework acts as a cultural intervention. It motivates companies to reflect, coordinate, and disclose in ways they might not have previously considered. Several participants observed that the very act of publishing a report, even a modest one, should be celebrated rather than penalised.
As AI continues to shape societies and economies, voluntary transparency mechanisms like HAIP present a promising model for bottom-up governance. They are not perfect, but they are a good starting point.
By fostering an environment where disclosure is rewarded, not feared, HAIP may well become a template for the future of responsible AI.
Ethics & Policy
Hong Kong AI firm viAct eyes Saudi Arabia as construction embraces digitization
RIYADH: Across the global construction sector, long considered one of the most resistant to digitization, a quiet revolution is unfolding.
Artificial intelligence is no longer a mere buzzword confined to laboratories and boardrooms. It is increasingly present in the urban fabric, embedded into scaffolding, concrete and command centers.
One company at the heart of this shift is viAct, a Hong Kong-based AI firm co-founded by Gary Ng and Hugo Cheuk. Their aim is to make construction safer, smarter and significantly more productive using a scenario-based AI engine built for complex, high-risk environments.
“Despite being one of the most labor-intensive and hazardous industries, construction remains vastly under-digitized,” Ng told Arab News. “We saw this as an opportunity to bring AI-driven automation and insights to frontline operations.”
Unlike conventional surveillance tools that simply record footage, viAct’s platform acts like a digital foreman. It interprets real-time visual data to detect unsafe practices, productivity gaps and anomalies, all without human supervision.
At the core of the platform are intelligent video analytics powered by edge computing. By processing visuals from jobsite cameras and sensors, viAct can flag whether a worker has entered a restricted zone, whether proper personal protective equipment is being worn, or if a crane is operating unsafely.
“This is not just about object detection,” said Ng. “Our AI understands context. It recognizes behaviors — like a worker being too close to the edge without a harness or a truck reversing unsafely — and acts in real time.”
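viAct’s actual engine is proprietary, but the idea of scenario-based rules layered on top of raw object detections can be sketched in a few lines of Python. Everything below — the `Detection` fields, the scenario names, the 2-metre threshold — is a hypothetical illustration of the pattern, not viAct’s logic:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Detection:
    label: str                  # e.g. "worker", "truck"
    zone: str                   # site zone inferred from camera calibration
    distance_to_edge_m: float   # distance to the nearest fall hazard
    has_ppe: bool               # PPE detected on this person, if applicable

# Each scenario pairs a name with a predicate over one detection; a real
# system would reason over object tracks and time windows, not single frames.
SCENARIOS: list[tuple[str, Callable[[Detection], bool]]] = [
    ("restricted-zone entry", lambda d: d.label == "worker" and d.zone == "restricted"),
    ("missing PPE",           lambda d: d.label == "worker" and not d.has_ppe),
    ("unprotected edge work", lambda d: d.label == "worker" and d.distance_to_edge_m < 2.0),
]

def evaluate_frame(detections: list[Detection]) -> list[str]:
    """Return an alert for every scenario that a detection triggers."""
    return [f"{name}: {d.label} in zone '{d.zone}'"
            for d in detections
            for name, rule in SCENARIOS
            if rule(d)]

print(evaluate_frame([Detection("worker", "restricted", 5.0, True)]))
```

The point of the sketch is the separation of concerns: the vision model only emits detections, while the named scenarios encode the site-specific context that turns a detection into an actionable alert.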
That ability to contextualize data is crucial in megaprojects, where risks multiply with size.
The firm’s technology has already been deployed across East Asia and parts of Europe. Now, the company is eyeing Saudi Arabia and the wider Gulf region, where giga-projects are transforming skylines at record speed.
Ng confirmed viAct is in active discussions to enter the Saudi market.
“Saudi Arabia’s Vision 2030 is deeply aligned with our mission,” he said. “There’s a growing demand for AI in infrastructure — not just for safety, but also for efficiency, environmental compliance, and transparency.”
From NEOM and The Line to Qiddiya and Diriyah Gate, Saudi Arabia is leading one of the most ambitious construction booms in the world. These projects involve thousands of workers, advanced logistics and constant oversight.
However, traditional safety audits and manual inspections are no longer sufficient. “With projects of this scale, real-time monitoring is not a luxury — it’s a necessity,” said Ng.
While viAct hasn’t yet launched in the Kingdom, its platform is fully prepared for Arabic localization and regional compliance standards, including Saudi labor laws and Gulf Cooperation Council safety codes.
What sets viAct apart is how seamlessly it integrates with existing infrastructure. Rather than requiring expensive proprietary equipment, the platform works with standard CCTV cameras and can be deployed in both urban and remote sites.
“Our system is plug-and-play,” said Ng. “You don’t need to overhaul your entire setup to use AI. That makes it ideal for companies in transition or for phased construction timelines.”
Its use of edge AI, meaning data is processed on site rather than in a distant cloud, allows viAct to deliver insights even in areas with weak internet connectivity. This feature is particularly useful in Saudi Arabia’s more isolated development zones or early-phase sites with minimal setup.
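As a rough sketch of that store-and-forward pattern, the hypothetical buffer below raises alerts locally with no network at all and uploads them only once the link is back. The function and class names are invented placeholders, not viAct’s API:

```python
import queue

def notify_on_site(alert: str) -> None:
    print(f"[on-site] {alert}")   # placeholder for a siren or site display

def upload_to_dashboard(alert: str) -> None:
    print(f"[cloud] {alert}")     # placeholder for an HTTPS upload

class EdgeAlertBuffer:
    """Store-and-forward: alerts fire on site immediately and are synced
    to the central dashboard only when connectivity is available."""
    def __init__(self) -> None:
        self.pending: "queue.Queue[str]" = queue.Queue()

    def raise_alert(self, alert: str) -> None:
        notify_on_site(alert)     # works with zero connectivity
        self.pending.put(alert)   # queued for later upload

    def sync(self, link_is_up: bool) -> None:
        while link_is_up and not self.pending.empty():
            upload_to_dashboard(self.pending.get())
```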
Its software is also highly customizable. For instance, a client building a hospital might prioritize fall detection and material delays, while a contractor working on an airport runway may need to monitor large machinery and perimeter access.
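A minimal sketch of that kind of per-site configuration, with invented scenario names standing in for real detection logic, might look like this:

```python
ALL_SCENARIOS = {  # scenario name -> description (stand-in for detection logic)
    "fall_detection": "worker near an unprotected edge",
    "material_delay": "expected delivery not observed on schedule",
    "machinery_proximity": "person inside a machine's exclusion radius",
    "perimeter_access": "entry through a non-designated gate",
}

HOSPITAL_SITE = {"enabled": ["fall_detection", "material_delay"]}
RUNWAY_SITE = {"enabled": ["machinery_proximity", "perimeter_access"]}

def active_scenarios(site: dict) -> dict:
    """Keep only the scenarios this client has switched on."""
    return {k: v for k, v in ALL_SCENARIOS.items() if k in site["enabled"]}

print(active_scenarios(HOSPITAL_SITE))
```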
As automation reshapes industries, many worry that people are being replaced by machines. But Ng insists that viAct’s goal is not to eliminate workers — it is to protect them.
“We’re not building robots to take over,” he said. “We’re building tools that enhance human judgment and ensure safety. When a worker is alerted to a risk before an accident occurs, that’s AI doing its best job.”
In fact, many of viAct’s clients report that adoption becomes smoother once site workers understand the system is flagging unsafe situations rather than spying on them. Managers gain better oversight, and laborers gain peace of mind.
“We see this as a collaboration between human intelligence and artificial intelligence,” Ng said. “Each has strengths. Together, they’re far more effective.”
Deploying AI in construction also brings ethical questions to the forefront, particularly in projects run by government entities or involving public infrastructure. Ng is upfront about these concerns.
“All our solutions are GDPR-compliant and privacy-first,” he said, referring to the EU’s General Data Protection Regulation, a comprehensive set of rules designed to protect the personal data of individuals.
“We don’t use facial recognition and we don’t track individuals. The focus is purely on safety, compliance and productivity.”
Workers are anonymized in the system, with all data encrypted and stored securely. Dashboards used by contractors and project leads include logs, alerts and safety scores, allowing for clear documentation and accountability without compromising personal privacy.
This is especially important in the Gulf, where projects often involve multinational labor forces and cross-border stakeholders.
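To make the privacy claim concrete, here is a hedged sketch of what an anonymized event record and a severity-weighted “safety score” could look like. The field names and the scoring formula are assumptions for illustration, not viAct’s schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass(frozen=True)
class SafetyEvent:
    """Anonymized record: no names, faces, or worker IDs, just the facts
    needed for documentation and accountability."""
    event_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    camera_zone: str = "unspecified"
    scenario: str = "unspecified"   # e.g. "missing PPE"
    severity: int = 1               # 1 (informational) .. 5 (critical)

def safety_score(events: list[SafetyEvent], cap: int = 50) -> float:
    """Toy site score: 100 minus the severity-weighted incident load,
    normalized against an assumed cap of 50 worst-case events."""
    load = sum(e.severity for e in events)
    return max(0.0, 100.0 - 100.0 * load / (5 * cap))

print(safety_score([SafetyEvent(scenario="missing PPE", severity=3)]))
```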
Looking ahead, viAct plans to double down on its expansion in the Middle East, continue advancing its AI models and advocate for ethical AI deployment in high-risk sectors.
The company is also exploring ways to integrate predictive analytics, allowing clients to foresee and prevent incidents before they occur. This could eventually shift AI’s role from reactive to proactive, forecasting safety breaches, delivery delays or environmental compliance issues in advance.
Ng believes this kind of intelligent foresight will soon become standard across the construction industry.
“It’s not about replacing humans,” he said. “It’s about building a smarter site, one where decisions are faster, risks are fewer, and lives are safer.”
In the age of giga-projects, that is a future Saudi Arabia is already building.
Ethics & Policy
DCO launches new AI ethics tool to advance responsible technology use
GENEVA: Saudi Arabia’s Digital Cooperation Organization has launched a pioneering policy tool designed to help governments, businesses and developers ensure artificial intelligence systems are ethically sound and aligned with human rights principles, it was announced on Friday.
Unveiled during the AI for Good Summit 2025 and the WSIS+20 conference in Geneva, the DCO AI Ethics Evaluator marks an important milestone in the organization’s efforts to translate its principles for ethical AI into practical action, it said.
The tool is a self-assessment framework enabling users to identify and mitigate ethical risks associated with AI technologies across six key dimensions.
It provides tailored reports featuring visual profiles and actionable recommendations, aiming to embed ethical considerations at every stage of AI development and deployment.
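The article does not publish the evaluator’s six dimensions or its scoring rules, so the following Python sketch is purely illustrative: invented dimension names, an assumed 0–4 self-rating scale, and a toy mapping from scores to a risk band with a recommendation:

```python
# Invented dimension names; the DCO's actual six dimensions may differ.
DIMENSIONS = ["fairness", "transparency", "privacy",
              "accountability", "safety", "human_oversight"]

RECOMMENDATIONS = {
    "high risk": "prioritize mitigation before deployment",
    "moderate": "document controls and re-assess regularly",
    "low": "maintain current practices and monitor",
}

def profile(self_ratings: dict[str, int]) -> dict[str, str]:
    """Map 0-4 self-ratings to a per-dimension risk band plus advice."""
    out = {}
    for dim in DIMENSIONS:
        score = self_ratings.get(dim, 0)   # unanswered -> worst case
        band = "high risk" if score <= 1 else "moderate" if score == 2 else "low"
        out[dim] = f"{band} ({score}/4): {RECOMMENDATIONS[band]}"
    return out

for dim, verdict in profile({"fairness": 3, "privacy": 1}).items():
    print(dim, "->", verdict)
```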
Speaking at the launch, Omar Saud Al-Omar, Kuwait’s minister of state for communication affairs and current chairman of the DCO Council, described the tool as a resource to help AI stakeholders “align with ethical standards and apply strategies to mitigate human rights impacts.”
He said it drew on extensive research and global consultation to address the growing demand for responsible AI governance.
DCO Secretary-General Deemah Al-Yahya highlighted the urgency of the initiative: “AI without ethics is not progress, it’s a threat. A threat to human dignity, to public trust, and to the very values that bind our societies together.”
She continued: “This is not just another checklist, it is a principled stand, built on best practices and rooted in human rights, to confront algorithmic bias, data exploitation and hidden ethical blind spots in AI.”
Al-Yahya emphasized the evaluator’s wide applicability: “It’s not just for governments, but for anyone building our digital future — developers, regulators, innovators. This is a compass for responsible AI, because ethical standards are no longer optional. They are non-negotiable.”
Alaa Abdulaal, the DCO’s chief of digital economy intelligence, provided a demonstration of the tool at the launch.
“The future of AI will not be shaped by how fast we code, but by the values we choose to encode,” he said.
Also in Geneva, the “AI Readiness Assessment Framework” was reviewed by the Saudi Data & AI Authority.
This key initiative was developed in collaboration with the International Telecommunication Union at the third Global AI Summit, held in Riyadh last year.
During the session, SDAIA representatives included Mohammed Al-Awad, director general of studies, and Rehab Al-Arfaj, director general of strategic partnerships and indicators. They praised the Kingdom’s global role in the governance and development of artificial intelligence technologies and emphasized its contributions to strengthening cooperation.
They also showcased several pioneering national AI initiatives and projects, including “Aynay,” one of the Kingdom’s advanced medical solutions, which accurately detects and diagnoses diabetic retinopathy.
In addition, Al-Awad and Al-Arfaj highlighted Saudi Arabia’s efforts in launching the “AI Readiness Assessment Framework,” which embodies the Kingdom’s commitment to supporting safe, responsible and sustainable use and development of AI systems.