
Ethics & Policy

Agents of Change – Partnership on AI

This summer, the generative-AI hype cycle officially moved from the peak of inflated expectations into the trough of disillusionment. The evidence is everywhere: studies highlighting the risks of AI companions, media reporting on the dangers of chatbots, and research showing that 95% of businesses have not yet seen a return on their generative-AI investments.

In response, attention has turned to agents as tools to drive both commercial returns and customer value. Unlike generative-AI applications, which produce content, agents can take direct actions.

Agents offer a win for productivity: a virtual assistant that can independently book a reservation and send calendar invitations is a step up from a generative-AI application that provides a list of top restaurants and available evenings.

Agents also raise the stakes. If a gen-AI application fails, the customer gets a list with incorrect restaurants and dates. If an AI agent fails, the impact is far greater. When it comes to calendars and reservations, the risks might seem low, such as booking the wrong date or restaurant. Left unaddressed, however, agents could wreak havoc on a restaurant’s profits or lead to the unlawful release of sensitive personal information. In other sectors, such as health care or banking, the risks increase significantly.

Partnership on AI was created to tackle challenges just like this. Only through a community that brings together experts from civil society, industry, and academia can we anticipate the impact of emerging AI development on people and respond with clear recommendations for practice and policy.

When it comes to agents, we’ve already begun.

When chatbots and image generators rose in popularity, our Synthetic Media Framework defined what best practice looks like for the entities building, creating, and distributing AI-generated media.

Before the AI Safety Summit at Bletchley Park, we released the first framework for Safe Foundation Model Deployment, which accounted for both open and closed releases of advanced models.

Building on these efforts, I shared earlier this year our intent to focus on agents. Here’s what we’ve been doing.

Agent Monitoring

PAI’s AI Safety team is developing a framework for monitoring AI agents. The first publication from this work asserts both the necessity and value of real-time failure detection in AI agent systems. Filled with useful definitions and frameworks, this collaborative paper is the foundation for our ongoing work in this area, seeded by a grant from Georgetown’s Center for Security and Emerging Technology. A new AI Safety Steering Committee is being formed to take this work forward.

Agent Policy Governance

With the advice of our Policy Steering Committee, PAI’s Policy Team has focused on developing policy briefs, reports, and convenings on agent governance through the lens of multilateral organizations as well as provincial, state, and local governments. These efforts encourage policymakers to better anticipate future impacts through proactive research, capacity building, and experimentation. This work will also inform our complementary initiative on an AI Assurance Roadmap.

AI and Human Connection

Along with the safety and policy impacts of AI agents, PAI is setting forth new work on how AI affects social connection and information-sharing. Our new AI & Human Connection team is collaborating with a new Steering Committee to address the pressing question: How can AI strengthen and sustain informed and connected communities?
This work will examine how AI agents, such as chatbots, affect social connection. As we’ve seen from countless news reports over the past year, individuals and organizations are exploring AI’s potential for companionship and even mental health therapy. This new effort from PAI and our community will explore how AI is changing the way we connect with one another and learn about the world.

What’s Next

I look forward to updating you later this year on our work in AI, Labor and Economy, where we’re also taking up the question of agents, and we’ll continue to seek advice from our Enterprise and Philanthropy Steering Committees. This is truly a team effort.

Since our founding, we have worked with our global community of experts to explore how we can create a trustworthy AI ecosystem, where stakeholders from across sectors contribute to safe and responsible AI.

This work continues today. And importantly, it evolves as the technology shifts and the world we live in changes.

Together, we are building out a robust AI agent governance ecosystem, developing sociotechnical solutions, driving forward policy research, and advancing collective action.

Join us on our mission to create positive change. We need your creativity to make all of this happen. For more information and to get involved, contact us at contact@partnershiponai.org.





Artificial intelligence and education with Mark Daley on London Morning

Duration: 6:32

As artificial intelligence becomes more commonplace, London Morning will be talking about its impact on our lives. Mark Daley, the chief artificial intelligence officer at Western University, will join host Andrew Brown once a month. Their first conversation focused on AI and education.





Ethics-driven Australian AI venture launches with largest local AI infrastructure

A new ethics-driven Australian AI venture, Sovereign Australia AI, has launched with an investment in commercial Australian AI infrastructure that the founders say is designed to keep the nation in control of its digital future. The company says it is committed to setting a new benchmark for transparency and ethics in artificial intelligence, with $10 million set aside to compensate copyright holders whose data is used to train its initial models. Sovereign Australia AI users will know how its models are built, what data they are trained on, and that they reflect Australian values.

Sovereign Australia AI was founded by Troy Neilson and Simon Kriss. Troy combines more than 20 years of startup and commercialisation experience with a PhD in AI and machine learning and has overseen over 100 AI-driven solutions across diverse industries. Simon is an AI strategist who has advised large corporates, government bodies and the United Nations on aligning AI technology with commercial and policy objectives. Sovereign Australia AI has also added Annette Kimmet AM, former CEO of the Victorian Gambling and Casino Control Commission, to its board.

The company has placed Australia’s largest-ever order for sovereign AI capacity: 256 of the latest NVIDIA Blackwell B200 GPUs to power model development at a scale previously unseen in Australia, creating a sovereign capability to build large language models (LLMs) comparable in size to many foreign frontier models. The hardware will be hosted in secure, Australian-based data centres operated by NEXTDC, ensuring data sovereignty and compliance with Australian privacy and security standards.

Sovereign Australia AI says it has earmarked a minimum of $10 million to source copyrighted materials needed for its models. The company’s mission is to create AI models that are Australian owned, Australian controlled, and trained with ethically sourced datasets. Its upcoming Ginan and Australis models are designed to reflect the culture, language and principles of this nation, and to provide an alternative to offshore systems that may embed foreign values and biases.

To maximise transparency, Sovereign Australia AI says it will openly provide visibility into the data used to train its models. The company will also open-source the Ginan research model for free public use. This open, proactive stance sets a new benchmark for what is meant by the term ‘ethical AI’.

Sovereign Australia AI has also signed a memorandum of understanding with the Australian Computer Society (ACS). Under the agreement, Sovereign Australia AI will develop a bespoke AI capability for ACS, Australia’s peak professional body for the ICT sector.

“ACS is excited to collaborate with Sovereign Australia AI to help ensure Australia’s technology workforce has access to sovereign, ethical AI built with our nation’s values at its core,” said ACS CEO Josh Griggs.

Co-founder and CEO Simon Kriss said the launch marks a critical step towards securing Australia’s digital future.

“We are already seeing how AI is shaping the way people think, work and engage with information. If the foundational AI models Australians rely on are built offshore, we risk losing control over how our national values are represented,” he said. “Sovereign Australia AI will ensure we have a home-grown alternative that is ethical, transparent, and built for trust. We believe, as an industry, we must define what we mean when we say ‘ethical AI’ — simply saying you are ethical is not enough.

“We want to set that benchmark high — users should be able to understand what datasets the AI they use is trained on. They should be able to be aware of how the data has been curated. They should expect that copyright holders whose data was used to make the model more capable are compensated. That’s what we will bring to the table.”

Co-founder and CTO Troy Neilson said the focus is on sovereignty and trust rather than competition.

“Building sovereign AI capabilities here at home is a significant technological challenge, but we have the expertise in Australia to get the job done,” he said. “We are not trying to directly compete with ChatGPT or other global models, because we don’t need to. Instead, we are creating our own foundational models that will serve as viable alternatives that better capture the Australian voice.

“There are many public and private organisations in Australia who will greatly benefit from a truly sovereign AI solution, but do not want to sacrifice real-world performance. Our Ginan and Australis models will fill that gap with a capable, ethical alternative.”

Image: Sovereign Australia AI founders Troy Neilson (L) and Simon Kriss (R).

Originally published here.





Philosophy Faculty Lead Ethical Conversations Surrounding AI

As artificial intelligence (AI) becomes increasingly integrated into everyday life, UCF’s Department of Philosophy has intentionally been strengthening faculty research in this area, as well as growing opportunities for students to learn more about the impact of technology on humans and the natural and social environments. A primary focus has been examining the ethical implications of AI and other emerging technologies.

Department Chair and Professor of Philosophy Nancy Stanlick emphasizes that understanding AI requires more than technical knowledge; it demands a deep exploration of ethics.

“As science and technology begin to shape more aspects of our lives, fundamental philosophical questions lie at the center of the ethical issues we face, especially with the rise of AI,” Stanlick says. “Perhaps the central [concern] is that it pulls us away from the essence of our humanity.”

Steve Fiore, a philosophy professor whose work is in the cognitive sciences program, investigates how humans interact socially with technology. In 2023, he co-authored an International Journal of Human-Computer Interaction study, titled “Six Human-Centered Artificial Intelligence Grand Challenges,” that serves as a call to the scientific community to design AI systems that prioritize human values and ethical considerations. Fiore also collaborates with the U.S. Department of Defense to explore how emerging technologies may shape national security.

Professor Jonathan Beever played a key role in developing UCF’s artificial intelligence, big data and human impacts undergraduate certificate. The interdisciplinary program equips students with the tools to critically assess and advocate for the ethical development of data-driven technologies, particularly AI and big data.

Associate Lecturer Stacey DiLiberto brings a unique perspective through her work in digital humanities, a field that merges traditional humanities with digital tools. Her research and teaching encourage students to view AI as a tool while critically examining its impact on identity and creativity. In her classes, she challenges students with questions like “What does it mean to be human when machines can mimic our creativity?” DiLiberto argues that while AI can generate art, it lacks the lived experience, memory, and emotional depth that define human expression.

While artificial intelligence has made remarkable progress, it does not replicate the depth of human connection or the ethical and moral reasoning inherent to human judgment. Department of Philosophy faculty like Stanlick, Fiore, Beever and DiLiberto provide frameworks for developing technology in ways that uphold ethical standards and preserve human values.

Visit the Department of Philosophy for more information about undergraduate and graduate programs, courses and opportunities to collaborate with the department’s faculty and students.



