Ethics & Policy

Sovereign Australia AI believes LLMs can be ethical

Welcome back to Neural Notes, a weekly column where I look at how AI is affecting Australia. In this edition: a challenger appears for sovereign Australian AI, and it actually wants to pay creatives.

Sovereign Australia AI is a newly launched Sydney venture aiming to build foundational large language models that are entirely Australian. Founded by Simon Kriss and Dr Troy Neilson, the company says its models are trained on Australian data, run on local infrastructure, and governed by domestic privacy standards.

Setting itself apart from offshore giants like OpenAI and Meta, the company plans to build the system for under $100 million, utilising 256 Nvidia Blackwell B200 GPUs, in what it says would be the largest domestic AI hardware deployment to date.

The startup also says it has no interest in going up against the likes of OpenAI and Perplexity.

“We are not trying to directly compete with ChatGPT or other global models, because we don’t need to. Instead, we are creating our own foundational models that will serve as viable alternatives that better capture the Australian voice,” Neilson told SmartCompany.

“There are many public and private organisations in Australia that will greatly benefit from a truly sovereign AI solution, but do not want to sacrifice real-world performance. Our Ginan and Australis models will fill that gap.”

Its other point of differentiation is the plan for ethically sourced datasets. Sovereign Australia AI says over $10 million is earmarked for licensing and compensating copyright holders whose work contributes to training, though it admits the final bill will probably be higher.

“We want to set that benchmark high. Users should be able to understand what datasets the AI they use is trained on. They should be able to be aware of how the data has been curated,” Kriss told SmartCompany.

“They should expect that copyright holders whose data was used to make the model more capable are compensated. That’s what we will bring to the table.”

The price of ethics

As we have seen in recent weeks, copyright has been at the forefront of the generative AI conversation, with both tech founders and lobbyists arguing for relaxed rules and ‘fair use’ exemptions in the name of innovation.

Kriss sees an urgent need to value Australian creativity not just as a resource but as an ethical benchmark.

AI development, he says, must avoid the “Wild West, lawless and lacking empathy” mentality that defined its early years and pursue a path that actively engages and protects local content makers.

There’s also a shift away from Silicon Valley’s ‘move fast and litigate later’ philosophy.

Neilson told SmartCompany he has watched international platforms defer creator payment until legal action forced their hand, pointing out that the “asking for forgiveness instead of seeking permission” playbook now comes with a hefty price tag.

Moving forward together with content creators, he suggests, is not only right for Australia but essential to building lasting trust and capability.

But compensation sits awkwardly with technical realities. The company is openly exploring whether a public library-like model, using meta tagging and attribution to route payments, could meaningfully support creators. 

Kriss frames this not only as a technical necessity but as a principle: paying for content actually consumed is the backbone of sustainable AI training. The team acknowledges “synthesis is a tough nut to crack,” but for Neilson, that’s a discussion the sector needs rather than something to defer to lawsuits and policy cycles.
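
To make the library analogy concrete, here is a minimal sketch, in Python, of how attribution-based payment routing could work: each training document carries a rights-holder tag, consumption is logged, and a licensing pool is split pro rata. The field names and the tokens-consumed rule are illustrative assumptions on my part, not details Sovereign Australia AI has confirmed.

```python
# Minimal sketch of library-style payment routing -- illustrative only,
# not Sovereign Australia AI's actual system.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class TaggedDocument:
    doc_id: str
    creator: str   # rights-holder identifier (assumed field)
    tokens: int    # size proxy for how much of the work was consumed

def route_payments(consumed: list[TaggedDocument], pool_aud: float) -> dict[str, float]:
    """Split a licensing pool across creators, pro rata by tokens consumed."""
    per_creator: Counter = Counter()
    for doc in consumed:
        per_creator[doc.creator] += doc.tokens
    total = sum(per_creator.values())
    return {creator: pool_aud * t / total for creator, t in per_creator.items()}

# Hypothetical usage: two rights holders sharing a $1,000 pool.
docs = [
    TaggedDocument("d1", "musician_01", tokens=120_000),
    TaggedDocument("d2", "reporter_02", tokens=60_000),
    TaggedDocument("d3", "musician_01", tokens=20_000),
]
print(route_payments(docs, pool_aud=1_000.0))
# {'musician_01': 700.0, 'reporter_02': 300.0}
```

Of course, a naive pro-rata split like this only covers raw consumption; the synthesis problem Neilson flags, where outputs blend many works, is exactly the part it sidesteps.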

As AI industry figures urge Australia to model its approach on the US’ “fair use,” creators and advocates warn this risks legitimising mass scraping and leaving local culture unpaid and unprotected.

This legal ambiguity is playing out worldwide. Anthropic’s proposed US$1.5 billion book piracy settlement, which works out to roughly $3,000 for each of about 500,000 affected titles, is now on hold as US courts question both the payout and the precedent.

Judges caution that dismissing other direct copyright claims does not resolve the lawfulness of training AI on copyrighted material, leaving creators and platforms worldwide in limbo. 

And in recent weeks, another US judge dismissed a copyright infringement lawsuit brought against Meta by 13 authors, including comedian Sarah Silverman. It was the second claim of this nature the court in San Francisco had dismissed at that point.

When funding and talent collide with ambition

The presence of multiple teams working in this area, such as Maincode’s Matilda model, suggests that Australia’s sovereign AI movement is well underway.

Neilson welcomes this competition and doesn’t see fragmentation as a risk.

“We applaud anyone in the space working on sovereign AI. We’re proud to be part of the initial ground swell standing up AI capability in Australia, and we look forward to the amazing things that the Maincode team will build,” Neilson said.

“The worst parties are the ones where you’re in the room by yourself.”

Behind the scenes, the budget calculations remain complicated. Sovereign Australia AI’s planned $100 million investment sits well below what analysts believe is required for competitive, world-class infrastructure. Industry bodies have called for $2–4 billion to ensure genuine sovereign capability. 

While Neilson maintains that local talent and expertise are up to the challenge, persistent skills gaps and global talent poaching mean only coordinated investment can bridge the distance from prototype to deployment.

Transparency and Australian investment

According to Sovereign Australia AI, transparency is a platform feature.

“We can’t control all commercial conversations, but I think it would benefit everyone if these deals were disclosed,” Kriss said.

“Somewhat selfishly, it would benefit us, as users would understand how much of what we charge is going back to creators to make them whole.”

Neilson also welcomes the idea of independent audits.

“How we use that data may be commercial in confidence, but the raw data, absolutely.”

“This is critical for several reasons. Firstly, it helps eliminate the black box nature of LLMs, where a lack of understanding of the underlying data impedes understanding of outputs. 

“Secondly, we want to provide the owners of the data the opportunity to opt out of our models if they choose. We need to tag all data to empower this process, and we need to have that process audited so every Australian can be proud of the work we do.”
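
As a toy illustration of what that tagging could enable, assuming every record carries a rights-holder tag and an opt-out register is maintained, pre-training filtering might look like the sketch below. The record fields and register are hypothetical, not the company’s actual pipeline.

```python
# Toy sketch of tag-based opt-out filtering -- not the company's pipeline.
# Assumes every record was tagged with its rights holder at ingestion.
def filter_opted_out(corpus: list[dict], opt_out_register: set[str]) -> list[dict]:
    """Drop records whose tagged rights holder has opted out of training."""
    return [rec for rec in corpus if rec["rights_holder"] not in opt_out_register]

corpus = [
    {"doc_id": "n1", "rights_holder": "news_outlet_a", "text": "..."},
    {"doc_id": "n2", "rights_holder": "indie_author_7", "text": "..."},
]
opted_out = {"indie_author_7"}
print(filter_opted_out(corpus, opted_out))  # only the news_outlet_a record remains
```

An independent auditor could re-run the same filter over the raw corpus and compare the result with what was actually trained on, which is the kind of check Neilson says he would welcome.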

The team also says its Australian and ethical focus is fundamental.

“We’re sovereign down to our corporate structure, our employees, our supply chain, and where we house our hardware. We cannot and will not change our tune just because someone is willing to write a larger cheque,” Kriss said.

He also said the startup would refuse foreign investment “without giving it a second thought”.

“For us, this is not about finding any money … it is all about finding the RIGHT money. If an investor said they would not support paying creatives, we would walk away from the deal. We need to do this right for Australia.”

Finally, the founders challenge conventional notions of what sovereignty and security mean for digital Australia. 

Kriss proposed that genuine sovereignty can’t be reduced to protection against threats or state interests alone.

“Security goes beyond guns, bombs and intelligence. It goes to a musician being secure in being able to pay their bills, to a reporter being confident that their work isn’t being unfairly ripped off. To divorce being fair from sovereignty is downright un-Australian.”

Can principle meet practicality in Australia’s sovereign AI experiment?

The aims of Sovereign Australia AI are an ambitious counterpoint to the global status quo. Whether these promises will prove practical or affordable for all Australian creators is still an open question.

And this will be all the more difficult to achieve without targeting a global market. Sovereign Australia AI has remained firm about building local for local.

The founders have indicated no plans to chase global scale or go head-to-head with US tech giants.

“No, Australia is our market, and we need this to maintain our voice on the world stage,” Neilson said. 

“Yet we hope that other sovereign nations around the globe see the amazing work that we’re doing and seek us out. We have the capability and smarts to help other nations.

“This would enable us to scale commercially beyond Australia, without jeopardising our sovereignty.”

As for paying creators, the startup is still considering different options.

It says that a sustainable model may be easier to structure with large organisations, which, as Neilson puts it, “have well-developed systems in place for content licensing and attribution”. 

But for individual artists and writers, he acknowledges, the solution could “look to systems like those used by YouTube” to engage at scale. 

“We are not saying we have all the answers, but we are open to working it out for the benefit of all Australians.”

Ethics & Policy

Heron Financial launches AI ethics committee and training programme

Heron Financial has unveiled an artificial intelligence (AI) ethics committee and an AI student cohort.

The mortgage and protection broker’s newly formed AI ethics committee will give employees the opportunity to offer input on how AI is utilised, with the aim of balancing innovation with transparency, fairness and accountability.

Heron Financial said the AI training will be a “combination of technical learning with ethical oversight” to ensure the “responsible integration” of the technology into Everglades, its digital platform.

As part of its AI student cohort, members from different teams will be upskilled through an AI development programme.

Matt Coulson (pictured), the firm’s co-founder, said: “I’m proud to be launching our first AI learning and development cohort. We see AI not just as a way to enhance our operations, but as an opportunity to enhance our people.

“By investing in their skills, we’re giving them knowledge they can use in their work and daily lives. Human and technology centricity are part of our core values; it’s only natural we bring the two together.”


‘It’s about combining technology and human expertise to raise the bar’

Alongside this, Heron Financial’s Everglades platform will be embedded with AI-driven features comprising an AI reporting assistant, an AI adviser assistant and a prediction model, which the firm said uses basic customer details to predict the likelihood of an applicant securing a mortgage.
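
The firm doesn’t describe how the prediction model works under the hood. As a rough illustration only, assuming “basic customer details” means a handful of numeric fields, a minimal approval-likelihood model in Python might look like the sketch below; the feature names, data and model choice are hypothetical, not Heron Financial’s actual implementation.

```python
# Illustrative sketch only -- not Heron Financial's Everglades model.
# Assumes "basic customer details" means a few numeric fields.
from sklearn.linear_model import LogisticRegression

# Hypothetical past cases: [annual_income_gbp, deposit_gbp, credit_score]
X = [
    [28_000, 10_000, 540],
    [42_000, 25_000, 690],
    [55_000, 40_000, 720],
    [31_000,  5_000, 580],
    [61_000, 30_000, 750],
    [26_000,  8_000, 520],
]
y = [0, 1, 1, 0, 1, 0]  # 1 = mortgage secured

model = LogisticRegression(max_iter=1_000).fit(X, y)

# Estimated probability that a new applicant secures a mortgage.
applicant = [[38_000, 15_000, 640]]
print(model.predict_proba(applicant)[0][1])
```

Whatever Heron Financial actually runs would need far richer data and a full affordability assessment behind it; the point of the sketch is only that “basic details in, likelihood out” is a standard classification setup.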

The platform aims to support prospective and existing homeowners by simplifying processes, as well as providing a referral route for clients and streamlining the sales-to-mortgage process to help sales teams and housebuilders.

Also in the pipeline is a new generation of AI ‘agents’ within Everglades, which Heron Financial said will further automate and optimise the whole adviser journey.

Coulson added: “We expect these developments to save hours in the adviser servicing journey, enabling our advisers to spend more time with clients and less time on admin. It’s about combining technology and human expertise to raise the bar for the whole industry.”

At the beginning of the year, Coulson discussed the firm’s green retrofit process in an interview with Mortgage Solutions.


Ethics & Policy

Starting my AI and higher education seminar

Greetings from early September, when fall classes have begun.  Today I’d like to share information about one of my seminars as part of my long-running practice of being open about my teaching.

It’s in Georgetown University’s Learning, Design, and Technology graduate program. I’m team-teaching it with LDT’s founding director, Professor Eddie Maloney, who first designed and led the class last year, along with some great guest presenters. The subject is the impact of AI on higher education.

Every week students dive into one topic, from pedagogy to criticism, changes in politics and economics to using LLMs to craft simulations.  During the semester students lead discussions and also teach the class about a particular AI technology.  Each student will also tackle Georgetown’s Applied Artificial Intelligence Microcredential.

Midjourney-created image of AI impacting a university.

Almost all of the readings are online. We have two books scheduled: B. Mairéad Pratschke, Generative AI and Education; and De Kai, Raising AI: An Essential Guide to Parenting Our Future.

Here’s the syllabus:

Introduction to AI in Higher Education:
Overview of AI, its history, and current applications in academia

Signing up for tech sessions (45 min max) (pair up) and discussion leading spots

Delving deeper into LLMs
Guest speakers: Molly Chehak and Ella Csarno.
Readings:

  1. AI Tools in Society & Cognitive Offloading
  2. MIT study, Your Brain on ChatGPT. (overview, since the study is 120 pages)
  3. Allie K. Miller, Practical Guide to Prompting ChatGPT 5 Differently

Macro Impacts: Economics, Culture, Politics

A broad look at AI’s societal effects—on labor, governance, and policy—with a focus on emerging regulatory frameworks and debates about automation and democracy.

Readings:

Optional: Daron Acemoglu, “The Simple Macroeconomics of AI”

Institutional Responses

This week, we will also examine how colleges and universities are responding structurally to the rise of AI, from policy and pedagogy to strategic planning and public communication.

Reading:

How Colleges and Universities Are Grappling with AI

We consider AI’s influence on teaching and learning through institutional case studies and applied practices, with guest insights on faculty development and student experience.

Guest speaker: Eddie Watson on the AAC&U Institute on AI, Pedagogy, and the Curriculum.

Reading: 

Pedagogy, Conversations, and Anthropomorphism

Through simulations and classroom case studies, we examine the pedagogical potential and ethical complications of human-AI interaction, academic integrity, and AI as a “conversational partner.”

Readings:

AI and Ethics/Humanities

This session explores ethical and philosophical questions around AI development and deployment, drawing on work in the humanities, global ethics, and human-centered design.

Guest speaker: De Kai

Readings: selections from Raising AI: An Essential Guide to Parenting Our Future (TBD)

Critiques of AI

We engage critical perspectives on AI, focusing on algorithmic bias, epistemology, and the political economy of data, while challenging dominant narratives of inevitability and neutrality.

Readings:

Human-AI Learning

This week considers how humans and AI collaborate for learning, and what this partnership means for workforce development, education, and a sense of lifelong fulfillment.

Guest Speaker: Dewey Murdick

Readings: TBD

 

Agentic Possibilities

A close look at emerging AI systems and agents, with attention to autonomy, instructional design, and how educational tools are integrating generative AI features.

Reading

  • Pratschke, Generative AI and Education, chapters 5-6

Future Possibilities

We explore visions for the future of AI in higher education, including utopian and dystopian framings, and ask how ethical leadership and equity might shape what comes next.

Readings:

One week with topic reserved for emerging issues

Topic, materials, and exercises to be determined by instructors and students.

Student final presentations

I’m very excited to be teaching it.


Ethics & Policy

A dynamic dialogue with Southeast Asia to put the OECD AI Principles into action

A policy roundtable in Tokyo and a workshop in Bangkok deepened the dialogue between Southeast Asia and the OECD, fostering collaboration on AI governance across countries, sectors, and policy communities.

A dialogue strengthened by two dynamics

Southeast Asia is rapidly emerging as a vibrant hub for artificial intelligence. From Indonesia and Thailand to Singapore and Viet Nam, governments have launched national AI strategies, and ASEAN has published a Guide on AI Governance and Ethics to promote consistency of AI frameworks across jurisdictions.

At the same time, the OECD’s engagement with Southeast Asia is strengthening. The 2024 Ministerial Council Meeting highlighted the region as a priority for OECD global relations, coinciding with the tenth anniversary of the Southeast Asia Regional Programme (SEARP) and the initiation of accession processes for Indonesia and Thailand.

Together, these dynamics open new avenues for practical cooperation on trustworthy, safe and secure AI.

In 2025, this momentum translated into two concrete engagement initiatives: a policy roundtable in Tokyo in May and a co-creation workshop for the OECD AI Policy Toolkit in Bangkok in August. Both events shaped regional dialogues on AI governance and helped to bridge the gap between technical expertise and policy design.

Japan actively supported both initiatives, demonstrating a strong commitment to regional AI governance. At the OECD SEARP Regional Forum in Bangkok, Japan expressed hope that AI would become a new pillar of OECD–Southeast Asia cooperation, highlighting the Tokyo Policy Roundtable on AI as the first of many such initiatives. Subsequently, Japan supported the co-creation workshop in Bangkok in August, helping to ensure a regional focus and high-level engagement across Southeast Asia.

The Tokyo roundtable enabled discussions on AI in agriculture, natural disaster management, national strategies and more

On 26 May 2025, the OECD and its Tokyo Office held a regional policy roundtable, bringing together over 80 experts and policymakers from Japan, Korea, Southeast Asia, and the ASEAN Secretariat, with many more joining online. The event highlighted the importance of linking technical expertise with policy to ensure AI delivers benefits responsibly, drawing on the OECD AI Principles and Policy Observatory. Speakers from ASEAN, Thailand, and Singapore shared progress on implementing their national AI strategies.

AI’s potential came into focus through two powerful examples. In agriculture, it can speed up crop breeding and enable precision farming, while in disaster management, digital twins built from satellite and telecoms data can strengthen early warnings and damage assessments. As climate change intensifies agricultural stress and natural hazards, these cases demonstrate how AI can deliver real societal benefits—while underscoring the need for robust governance and regional cooperation, supported by OECD initiatives such as the upcoming AI Policy Toolkit.

The OECD presented activities in international AI governance, including the AI Policy Observatory, the AI Incidents and Hazards Monitor and the Reporting Framework for the G7 Hiroshima Code of Conduct for Developers of Advanced AI systems.

Bangkok co-creation workshop: testing the OECD AI Policy Toolkit

Following the Tokyo roundtable, the OECD, supported by the Foreign Ministries of Japan and Thailand, hosted the first co-creation workshop for the OECD AI Policy Toolkit on 6 August 2025 in Bangkok. Twenty senior policymakers and AI experts from across the region contributed regional perspectives to shape the tool, which will feature a self-assessment module to identify priorities and gaps, alongside a repository of proven policies and practices. The initiative, led by Costa Rica as Chair of the 2025 OECD Ministerial Council Meeting, has already gained strong backing from governments and organisations, including Japan’s Ministry of Internal Affairs and Communications.

Hands-on discussion on key challenges and practical solutions

The co-creation workshop provided a space for participants to work in breakout groups and discuss concrete challenges, explore practical solutions and design effective AI policies in key domains.

Participants identified several pressing challenges for AI governance in Southeast Asia. Designing public funding programmes for AI research and development remains difficult in an environment where technology evolves faster than policy cycles, while the need for large-scale investment continues to grow.

The scarcity of high-quality, local-language data, weak governance frameworks, limited data-sharing mechanisms, and reliance on foreign compute providers further constrain progress, alongside the shortage of locally developed AI models tailored to sectoral needs.

Participants also focused on labour market transformation, digital divides, and the need to advance AI literacy across all levels of society – from citizens to policymakers – to foster awareness of both opportunities and risks.

Participants showcased promising national initiatives, from responsible data-sharing frameworks and investment incentives for data centres and venture capital, to sectoral data platforms and local-language large language models. Countries are also rolling out capacity-building programmes to strengthen AI adoption and oversight, while seeking the right balance between regulation and innovation to foster trustworthy AI.

Across groups, participants agreed on the need to strengthen engagement, foster collaboration, and create enabling conditions for the practical deployment of AI, capacity building, and knowledge sharing.

The instruments discussed during the workshop will feed into the Toolkit’s policy repository, enabling other countries to draw on these experiences and adapt them to their national contexts.

Taking AI governance from global guidance to local practice

The Tokyo roundtable and Bangkok workshop were key milestones in building a regional dialogue on AI governance with Southeast Asian countries. By combining policy frameworks with technical demonstrations, the discussions focused on turning international guidance into practical, locally tailored measures. Southeast Asia is already shaping how global principles translate into action, and with continued collaboration, the OECD AI Policy Toolkit will provide governments in the region—and beyond—with concrete tools to design and implement trustworthy AI.

The authors would like to thank the team members who contributed to the success of these projects: Hugo Lavigne, Luis Aranda, Lucia Russo, Celine Caira, Kasumi Sugimoto, Julia Carro, Nikolas Schmidt and Takako Kitahara.
