Ethics & Policy
Artificial intelligence and education with Mark Daley on London Morning

- 11 minutes ago
- News
- Duration 6:32
As artificial intelligence becomes more commonplace, London Morning will be talking about its impact on our lives. Mark Daley, the chief artificial intelligence officer at Western University, will join host Andrew Brown once a month. Their first conversation focused on AI and education.
Ethics-driven Australian AI venture launches with largest local AI infrastructure

A new ethics-driven Australian AI venture, Sovereign Australia AI, has launched with an investment in commercial Australian AI infrastructure that the founders say is designed to keep the nation in control of its digital future. The company says it is committed to setting a new benchmark for transparency and ethics in artificial intelligence, with $10 million set aside to compensate copyright holders whose data is used to train its initial models. Sovereign Australia AI users will know how its models are built, what data they are trained on, and that they reflect Australian values.
Sovereign Australia AI was founded by Troy Neilson and Simon Kriss. Troy combines more than 20 years of startup and commercialisation experience with a PhD in AI and Machine Learning and has overseen over 100 AI-driven solutions across diverse industries. Simon is an AI strategist who has advised large corporates, government bodies and the United Nations on aligning AI technology with commercial and policy objectives. Sovereign Australia AI has also added Annette Kimmet AM, former CEO of the Victorian Gambling and Casino Control Commission, to its board.
The company has placed Australia’s largest-ever order for sovereign AI capacity — 256 of the latest NVIDIA Blackwell B200 GPUs — to power model development at a scale previously unseen in Australia, creating a sovereign capability to build Large Language Models (LLMs) comparable in size to many foreign frontier models. The hardware will be hosted in secure, Australian-based data centres operated by NEXTDC, ensuring data sovereignty and compliance with Australian privacy and security standards.
Sovereign Australia AI says it has earmarked a minimum of $10 million to source copyrighted materials needed for its models. The company’s mission is to create AI models that are Australian owned, Australian controlled, and trained with ethically sourced datasets. Its upcoming Ginan and Australis models are designed to reflect the culture, language and principles of this nation, and to provide an alternative to offshore systems that may embed foreign values and biases.
To maximise transparency, Sovereign Australia AI says it will openly provide visibility of the data used to train its models. It will also open-source the Ginan research model for free public use. This open and proactive stance sets a new benchmark for what is meant by the term ‘ethical AI’.
Sovereign Australia AI has also signed a memorandum of understanding with the Australian Computer Society (ACS). Under the agreement, Sovereign Australia AI will develop a bespoke AI capability for ACS, Australia’s peak professional body for the ICT sector.
“ACS is excited to collaborate with Sovereign Australia AI to help ensure Australia’s technology workforce has access to sovereign, ethical AI built with our nation’s values at its core,” said ACS CEO Josh Griggs.
Co-founder and CEO Simon Kriss said the launch marks a critical step towards securing Australia’s digital future.
“We are already seeing how AI is shaping the way people think, work and engage with information. If the foundational AI models Australians rely on are built offshore, we risk losing control over how our national values are represented,” he said. “Sovereign Australia AI will ensure we have a home-grown alternative that is ethical, transparent, and built for trust. We believe, as an industry, we must define what we mean when we say ‘ethical AI’ — simply saying you are ethical is not enough.
“We want to set that benchmark high — users should be able to understand what datasets the AI they use is trained on. They should be able to be aware of how the data has been curated. They should expect that copyright holders whose data was used to make the model more capable are compensated. That’s what we will bring to the table.”
Co-founder and CTO Troy Neilson said the focus is on sovereignty and trust rather than competition.
“Building sovereign AI capabilities here at home is a significant technological challenge, but we have the expertise in Australia to get the job done,” he said. “We are not trying to directly compete with ChatGPT or other global models, because we don’t need to. Instead, we are creating our own foundational models that will serve as viable alternatives that better capture the Australian voice.
“There are many public and private organisations in Australia who will greatly benefit from a truly sovereign AI solution, but do not want to sacrifice real-world performance. Our Ginan and Australis models will fill that gap with a capable, ethical alternative.”
Originally published here.
Agents of Change – Partnership on AI

This summer, the generative-AI hype cycle officially moved from the peak of expectations into disillusionment. The evidence is everywhere, with studies highlighting the risks of AI companions and media reporting on the dangers of chatbots. Research has also shown that 95% of businesses have not yet seen a return from their generative-AI investments.
In response, attention has turned to agents as tools to drive both commercial returns and customer value. Unlike generative-AI applications, which produce content, agents can take direct actions.
Agents offer a win for productivity: a virtual assistant that can independently book a reservation and send calendar invitations is a step up from a generative-AI application that provides a list of top restaurants and available evenings.
Agents also raise the stakes. If a gen-AI application fails, the customer gets a list with incorrect restaurants and dates. If an AI agent fails, the impact is far greater. When it comes to calendars and reservations, the risks might seem low, such as booking the wrong date or restaurant. But left unaddressed, agent failures could wreak havoc on a restaurant’s profits or lead to the unlawful release of sensitive personal information. When applied to other sectors, such as health care or banking, the risks increase significantly.
Partnership on AI was created to tackle challenges just like this. Only through a community that brings together experts from civil society, industry, and academia can we anticipate the impact of emerging AI development on people and respond with clear recommendations for practice and policy.
When it comes to agents, we’ve already begun.
When chatbots and image generators rose in popularity, we defined what best practice looks like for the entities building, creating, and distributing AI-generated media through our Synthetic Media Framework.
Before the AI Safety Summit in Bletchley Park, we released the first framework for Safe Foundation Model Deployment which accounted for both open and closed releases of advanced models.
Building on these efforts, I shared earlier this year our intent to focus on agents. Here’s what we’ve been doing.
Agent Monitoring
PAI’s AI Safety team is developing a framework for monitoring AI agents. The first publication from this work asserts both the necessity and value of real-time failure detection in AI agent systems. Filled with useful definitions and frameworks, this collaborative paper is the foundation for our ongoing work in this area, with seed funding through a grant from Georgetown’s Center for Security and Emerging Technology. A new AI Safety Steering Committee is being formed to take this work forward.
Agent Policy Governance
With the advice of our Policy Steering Committee, PAI’s Policy Team has focused on developing upcoming policy briefs, reports, and convenings on agent governance through the lens of multilateral organizations as well as provincial, state, and local governments. They encourage policy makers to better anticipate future impacts through proactive research, capacity building, and experimentation. This work will also inform our complementary initiative on an AI Assurance Roadmap.
AI and Human Connection
Along with the safety and policy impacts of AI agents, PAI is setting forth new work on how AI affects social connection and information-sharing. Our new AI & Human Connection team is collaborating with a new Steering Committee to address the pressing question: How can AI strengthen and sustain informed and connected communities?
This work will look into how AI agents, such as chatbots, affect social connection. As we’ve seen from countless news reports over the past year, individuals and organizations are exploring AI’s potential for companionship and even mental health therapy. This new effort from PAI and our community will explore how AI is changing how we connect with each other and how we learn about the world.
What’s Next
I look forward to updating you later this year on our work in AI, Labor and Economy, where we’re also taking up the question of agents, and we’ll continue to seek advice from our Enterprise and Philanthropy Steering Committees. This is truly a team effort.
Since our founding, we have worked with our global community of experts to explore how we can create a trustworthy AI ecosystem, where stakeholders from across sectors contribute to safe and responsible AI.
This work continues today. And importantly, it evolves as the technology shifts and the world we live in changes.
Together, we are building out a robust AI agent governance ecosystem, developing sociotechnical solutions, driving forward policy research, and advancing collective action.
Join us on our mission to create positive change. We need your creativity to make all of this happen. For more information and to get involved, contact us at contact@partnershiponai.org.