Tools & Platforms
Musicians concerned by lack of regulation around fast-evolving AI technology

Jeremy Toy.
Photo: RNZ / Evie Richardson
New Zealand’s musicians are watching on with a mix of horror and wonder as artificial intelligence programmes create increasingly plausible songs, often with just a few clicks of a button.
Just weeks ago, a band called Velvet Sundown climbed the Spotify charts before it was revealed that all of its music had been generated by AI.
As the technology continues to evolve at breakneck speed, artists and producers here are concerned about the lack of regulation around the tech.
In recent years, a number of easily accessible generative AI music tools have been released that let users create complete songs from a simple text prompt.
One of these is Suno, a US-based company, which has faced a number of lawsuits from major record labels in the US and Germany over copyright issues.
Sophie Burbery, a musician and PhD student studying the topic, said companies like Suno don’t disclose what music their AI software learns from, leaving artists vulnerable.
“Suno has admitted that all of its music is trained on anything that it can scrape from the internet under fair use. No decisions have been made yet within those court systems as to whether or not they can actually rely on that as a way of getting away with not paying musicians for their music or licensing it or asking permission, because at the moment they’ve done none of those things.”
Experts say the use of AI in New Zealand resembles the ‘wild west’, with no regulation or laws in place.
Concern was sparked following the recent release of the government’s AI strategy report, which contained no mention of the implications for creative industries.
Burbery said if this continues, the consequences could be dire for our music and other creative industries.
“It’s really up to the government to be doing this work, and to be saying, hey look, if you want to have your platform up in New Zealand, Suno and Udio, you need to tell us where and how you’ve trained your AI, and the output has to be labelled, and who owns the output of the AI?”
Sophie Burbery.
Photo: Supplied / Paul Taylor
These AI programmes have generated many questions around copyright, an area where New Zealand is unique.
Under the current Copyright Act, when a person uses AI to create a piece of art, such as a song, the end product automatically belongs to them, even if it’s based on a multitude of other people’s songs.
Clive Elliott KC, a barrister at Shortland Chambers who specialises in intellectual property, told Checkpoint the current law is not fit for purpose when it comes to protecting creatives.
“We can’t use old principles that have been around for many years. We’ve got to say this is a completely different paradigm we face in here and we have to find a way which compensates people who have contributed to the learning process.”
Elliott said the Copyright Act is simply too out of date to apply to the rapidly evolving technology.
“It’s theft in a way, but it’s theft of a tiny piece of information. And the problem with copyright is you have to show that a substantial part of the work has been copied.”
“The Copyright Act has been under review for years now. [The government] needs to step up and say this is urgent.”
For some in the industry, like producer and artist Jeremy Toy, the risks are worrying.
“If it’s adopted early on with songwriters, it normalises the process of using AI to create your music. It’s completely stripping you of your creative ideas.”
“I find it offensive as a musician that people think they can train a computer to think independently like a creative.”
They said there are some things that AI will never be able to replicate.
“Connecting on the level that’s not verbal, just being in a room with someone and playing music with them, that will never be recreated.”
Although the buzz around AI has amplified in recent years, some musicians say it’s been a part of the industry for a while now.
Rodi Kirk.
Photo: RNZ / Evie Richardson
Rodi Kirk, who works in music tech, said AI is commonly used when producing music, particularly in the mixing and mastering stages of production.
“One thing that might be surprising is that tools that rely on machine learning are not super new in terms of music production.”
Kirk remained optimistic about the benefits the technology could bring.
“I wouldn’t release a song and swap my voice for somebody that was well known, but you might do things that change your voice around for creative purposes. This general suite of tools that will be enabled by AI, I think people will do really creative stuff with them.”
With no regulation or protections for artists in sight, Burbery said it is unclear where AI will take the music industry next.
“It could offer many great creative possibilities but we don’t know what they are because the way it has been developed is so unethical.”
AI tops list of edtech priorities at K-12 schools for the first time in latest SETDA annual survey — EdTech Innovation Hub

The report, which draws on survey responses from edtech directors, state leaders, CIOs and other education leaders across 47 states, shows that AI is now at the top of state edtech priorities for the first time.
Many of the leaders surveyed reported working on AI guidance, professional development, and policy frameworks, while others have already brought AI expertise into their agencies to support its responsible use.
“The rise of AI as a top state priority reflects just how quickly the education landscape is evolving,” comments Julia Fallon, Executive Director of SETDA. “But what stands out in this year’s report is the through-line of commitment: state leaders are not chasing trends, they are developing policy and building frameworks that protect students, empower educators, and make technology a true driver of equity and impact. This is the work of system change, and states are leading the way.”
AI surpassed cybersecurity, which had been the top priority for the past two years. However, SETDA says cybersecurity remains a concern, with many leaders calling for continued infrastructure investment.
Other issues highlighted in the report include device use, with ongoing debate around restricting student access to devices in classrooms. Leaders also cited professional development as an ongoing priority, with many saying it is an unmet need, particularly around the effective and safe use of AI in classrooms.
AI Darwin Awards to mock the year’s biggest failures in artificial intelligence
A new award will celebrate bad, ill-conceived, or downright dangerous uses of artificial intelligence (AI) — and its organisers are seeking the internet’s input.
The AI Darwin Awards reward the “visionaries” who “outsource our poor decision-making to machines”.
It has no affiliation with the Darwin Awards, a tongue-in-cheek award that recognises people who “accidentally remov[e] their own DNA” from the gene pool by dying in absurd ways.
To win one of the AI-centred awards, the nominated companies or people must have shown “spectacular misjudgement” with AI and “ignored obvious warning signs” before their tool or product went out.
Bonus points are given out to AI deployments that made headlines, required emergency response, or “spawned a new category of AI safety research”.
“We’re not mocking AI itself — we’re celebrating the humans who used it with all the caution of a toddler with a flamethrower,” an FAQ page about the awards reads.
Ironically, the anonymous organisers said they will verify nominations partly through an AI fact-checking system, which means they ask multiple large language models (LLMs) like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini whether the stories submitted are true.
The LLMs rate a story’s truthfulness out of 10, then the administrators of the site average the scores with an AI calculator. If the average is above five, the story is considered “verified” and eligible for an AI Darwin Award.
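Based on that description, the organisers’ “verification” amounts to a simple average-and-threshold check. A minimal sketch of how such logic might look (the function name and the model ratings below are hypothetical illustrations, not the organisers’ actual code):

```python
def is_verified(truthfulness_scores: dict[str, float]) -> bool:
    """A story counts as 'verified' when the average LLM truthfulness
    rating (each score out of 10) is above five."""
    average = sum(truthfulness_scores.values()) / len(truthfulness_scores)
    return average > 5

# Hypothetical ratings from three models for one submitted story
ratings = {"chatgpt": 7, "claude": 6, "gemini": 4}
print(is_verified(ratings))  # average is about 5.67, so True
```

The threshold comes straight from the organisers’ stated rule: an average above five out of ten makes a nomination eligible.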
OpenAI, McDonald’s among early nominees
One of the approved nominations for the first AI Darwin Awards is the American fast food chain McDonald’s.
The company built an AI recruitment chatbot called “Olivia” that was protected by an easily guessed password, 123456, exposing the hiring data of a reported 64 million applicants to hackers.
Another early nominee is OpenAI for the launch of its latest chatbot model GPT-5. French data scientist Sergey Berezin claimed he got GPT-5 to unknowingly complete harmful requests “without ever seeing direct malicious instructions”.
The winners will be determined by a public vote during the month of January, with the announcement expected in February.
The only prize: “immortal recognition for their contribution to humanity’s understanding of how not to use artificial intelligence,” the organisers said.
The organisers hope the awards will serve as “cautionary tale[s]” for future decision-makers, so that they test AI systems before deploying them.
Leading Google UK & the AI Opportunity

The UK has always had a special place in my story. Canary Wharf is where my career began in the 90s, during a period of profound transformation for the country’s financial sector. Reflecting on my first three months as Google UK lead, it’s clear that the pace of AI innovation is driving an even greater sense of historic opportunity, not just in the City, but across the entire country.
Recently, I attended a technology industry dinner at the historic Mansion House. The evening was an electric pairing of tradition and transformation – a blend that the UK has perfected. The room was filled with British business leaders, policymakers, and trailblazers across the tech sector, eager to uncover how AI-powered technologies could help solve some of the biggest challenges of our generation. This opportunity to build on the country’s rich heritage of pioneering world-leading breakthroughs is why I’m excited to be back in the UK to lead Google’s operations here.
The UK: a hub for AI innovation & cultural influence
During my 15 years at Google, I’ve held a variety of regional and global roles, partnering with a diverse range of organisations to turn complex challenges into technological opportunities. Throughout that time, the UK has always stood out as a hotbed of innovation, a global epicenter for AI research — in particular, the work of our remarkable Google DeepMind colleagues — and a pioneer in the international advertising industry.
The UK has long been a nation of early adopters. This is why the UK was one of the first countries to roll out new Gemini-powered products, such as AI Mode — a new way to search for information, developed to cater to the growing number of people asking longer and more complex queries.
UK consumer behaviour is constantly evolving, across streaming, scrolling, searching, and shopping. That’s why Google and YouTube are uniquely positioned to empower UK businesses to thrive in a dynamic digital environment. It’s been inspiring getting to know the teams here in the UK who are helping businesses of all sizes meet the moment and use AI-powered tools to turn their online presence into real-world revenue, providing a vital engine for UK economic growth.
The UK’s cultural influence is also undeniable, as evidenced by well-established homegrown British YouTube creators such as Amelia Dimoldenberg and Brandon B, who have become new-media powerhouses in their own right, or England’s Lionesses, like Lucy Bronze, who are both athletes and content creators, inspiring young female footballers to strive for excellence on and off the pitch while winning for the UK. YouTube, which celebrated its 20th birthday earlier this year, is transforming how businesses use AI to reach new audiences. I’m proud of our leadership in this space, and of the site’s potential to connect even more brands with a new generation of consumers.
Seizing the opportunity ahead
The construction of our first UK data centre in Waltham Cross, our new King’s Cross development and our AI Works initiative — our partnership with British organisations to help uncover the most effective ways to accelerate AI adoption and upskilling — are just some of the significant investments we’re making in the UK’s digital future. The UK is a country unlike any other and this is an incredible time to be back.