Michael Yezerski (Photo: Chris Prestidge)
Editor’s note: The promise and peril of artificial intelligence have captivated Washington, D.C., Silicon Valley, Wall Street and Hollywood. Composer Michael Yezerski has taken a hands-on approach: The composer behind the scores for the Oscar-winning short The Lost Thing, Blindspotting (the movie and the series), Sean Byrne’s The Devil’s Candy, this year’s Dangerous Animals and the just-released Liam Neeson starrer Ice Road: Vengeance put the tech to the test, as he details in a guest column for Deadline.
The other week at a party, I was asked by a picture editor if I was feeling the threat of AI.
I honestly replied that I was not. But then he told me that he uses AI music generators in his everyday work cutting commercials, and all of a sudden I felt threatened. I found the conversation sobering, but it spurred me to look further into the world of AI Music Generators (websites that write music for you based on a prompt). Now I have questions, but I don’t have any answers.
AI Music is here and it’s here to stay. I think that much is clear.
At the moment, the technology is still nascent, and it is impressive for what it can do already (The Velvet Sundown, anyone?). But will it ever surpass human musical achievement? I have my doubts.
Using the AIs, I generated a raft of instrumental tracks in a variety of styles (sticking to instrumentals because they are the most applicable to my work). The electronic tracks (EDM, dance pop, etc.) were quite impressive, whilst I found the cinematic and classical tracks to be less so. I have to assume that this is only temporary and that the models will soon turn their focus to more complex musical structures.
I found that the AIs were able to churn out derivative dance, pop, basic rock, metal and punk with relative ease and incredible speed. These tracks don’t feel human (yet), but you can’t exactly write them off either. I could see a world where certain filmmakers gravitate to some of these options. To my ear, however, they can’t yet replicate the very real energy that a live band or a real piano player would bring to the same scene, and harmonically they all feel a bit odd.
I can see real value in music professionals using some of these AIs as idea generators. In certain styles, they are a quick way to get around writer’s block. Even so, all the tracks contained choices that I would never make in my own style as a composer, and right now the interfaces do not allow for the kind of changes that I would want.
Of course, there are very real issues of copyright ownership and moral rights here. Whose music have these AIs been trained on? The Society of Composers & Lyricists, the Songwriters Guild of America and Music Creators North America are warning their members about the serious implications of assigning AI companies the right to train on their music. And right now, there is a fierce campaign in Washington aimed at curbing AI companies’ push to have all training content treated as “fair use” regardless of copyright ownership. It should be noted here that a 10-year moratorium on states passing their own laws regulating AI was removed from the budget bill before it passed last week.
I understand the desire to train on existing works. It’s almost human.
The dilemma for all composers is that we do start out by imitating the writers we admire. We are looking for the secret formula, convinced that there actually is one. But over time, the only secret I’ve found is that there is no secret. Does anyone really know why a particular song goes viral? Or why a great score works so well that it gets used as temp music in countless successive productions? We know great music when we hear it; creating it is hard.
James Cameron recently suggested that we should be focusing on the output of these AIs and not the training. I agree to a certain extent, but I worry that a picture editor, with a knowledge of music nowhere near that of a professional musician, may not recognize when an AI has unintentionally committed a copyright violation. I could foresee a scenario whereby a piece of music is synced to picture, broadcast, and then called out (resulting in a tricky battle over ownership and responsibility).
Music is that most human of communications.
A language built of thousands of little mistakes, accidents and inconsistencies that, at its very best, is transformative and life-affirming to the human ear. Great music triggers an emotional response that can evoke core memories and peak experiences, and it fosters feelings of community and intimacy with others. When I write, it’s often the happy accidents, mistakes and weird connections that end up defining the score (as in Dangerous Animals, where we really had to break the mold to find the exact sound for the “shark scream”: a combination of wailing strings performing a difficult glissando, accompanied by analogue synths).
‘Dangerous Animals’ (IFC/Shudder)
So while I may start in one direction, often something unexpected happens and I end up improving on the sound based on my own cultural, historical and contextual knowledge. Will an AI ever be able to do that? Can AI innovate or only emulate?
And this is where I think composers and performers have their argument.
Can an AI spend seven months with a director honing, searching, defining and redefining a sound for their narrative masterwork (not to mention providing emotional support during that time!)? Can an AI engage interesting and unusual performers to bring the music to life like Hans Zimmer does? Can an AI take all of our contemporary cultural knowledge and turn it into song lyrics that delight and surprise us like Lin-Manuel Miranda does?
As composers, we are specialists and we have immersed ourselves in an evolving language that is thousands of years old. That language thrives on innovation and falters when it becomes stale and repetitive. AI Music Generators have made it incredibly easy to “re-create” sounds on a never-before-imagined scale.
But that is never where the goalposts were.
For me at least, I’m always looking further out.
Southern California grocery chain Gelson’s is partnering with Upshop to deploy an analytical approach across its markets, using data, artificial intelligence and operational insight as it looks to punch above its weight.
By adopting Upshop’s platform, Gelson’s says it will infuse intelligence into its forecasting, total store ordering, production planning, and real-time inventory processes, ensuring every location is tuned into local demand dynamics.
This means shoppers will find what they want, when they want it, all while store teams benefit from tools that simplify workflows, reduce waste, and increase efficiency.
“In a competitive grocery landscape, scale isn’t everything – intelligence is,” says Ryan Adams, President and CEO at Gelson’s Markets. “With Upshop’s embedded platform and AI-driven capabilities, we’re empowering our stores to be hyper-responsive, efficient, and focused on the guest experience. It’s how Gelson’s can compete at the highest level.”
By Wayan Vota on July 8, 2025
Digital skills and technology solutions are increasingly critical for African economies as they embrace digital transformation, and countries across the continent are positioning themselves as major tech hubs as the world goes virtual.
Entrepreneurs need to master artificial intelligence and the advanced AI solutions available today to grow and develop their businesses. AI skills are an important tool for promoting social and economic development, creating new jobs, and driving innovation.
The MEST AI Startup Program is a bold redesign of the Meltwater Entrepreneurial School of Technology’s flagship Training Program. It is built to prepare West Africa’s most promising tech talents to build, launch, and scale world-class AI startups.
West Africa has world-class tech talent, and it’s time AI solutions built on the continent reach users everywhere.
The MEST AI Startup Program is a fully-funded, immersive experience hosted in Accra, Ghana. Over an intensive seven-month training phase, founders receive hands-on instruction, technical mentorship, and business coaching from companies such as OpenAI, Perplexity, and Google.
The top ventures then advance to a four-month incubation period, during which startups have the opportunity to pitch for pre-seed investment of up to $100,000 and join the MEST Portfolio.
Apply Now! Deadline is August 22, 2025
Gone are the days of six-fingered hands or distorted faces: AI-generated video is becoming increasingly convincing, attracting Hollywood, artists, and advertisers, while shaking the foundations of the creative industry.
To measure the progress of AI video, you need only look at Will Smith eating spaghetti. Since 2023, this unlikely sequence, entirely fabricated, has become a technological benchmark for the industry.
Two years ago, the actor appeared blurry, his eyes too far apart, his forehead exaggeratedly protruding, his movements jerky, and the spaghetti didn’t even reach his mouth.
The version published a few weeks ago by a user of Google’s Veo 3 platform showed no apparent flaws whatsoever.
“Every week, sometimes every day, a different one comes out that’s even more stunning than the last,” said Elizabeth Strickler, a professor at Georgia State University.
Between Luma Labs’ Dream Machine launched in June 2024, OpenAI’s Sora in December, Runway AI’s Gen-4 in March 2025, and Veo 3 in May, the sector has crossed several milestones in just a few months.
Runway has signed deals with Lionsgate studio and AMC Networks television group.
Lionsgate vice president Michael Burns told New York Magazine about the possibility of using artificial intelligence to generate animated, family-friendly versions of films from the “John Wick” or “Hunger Games” franchises, rather than creating entirely new projects.
“Some use it for storyboarding or previsualization,” steps that come before filming, “others for visual effects or inserts,” said Jamie Umpherson, Runway’s creative director.
Burns gave the example of a script for which Lionsgate has to decide whether to shoot a scene or not.
To help make that decision, they can now create a 10-second clip “with 10,000 soldiers in a snowstorm.”
That kind of pre-visualization would have cost millions before.
In October, the first AI feature film was released: “Where the Robots Grow” is an animated film without anything resembling live-action footage.
For Alejandro Matamala Ortiz, Runway’s co-founder, an AI-generated feature film is not the end goal, but a way of demonstrating to a production team that “this is possible.”
Still, some see an opportunity.
In March, startup Staircase Studio made waves by announcing plans to produce seven to eight films per year using AI for less than $500,000 each, while ensuring it would rely on unionised professionals wherever possible.
“The market is there,” said Andrew White, co-founder of small production house Indie Studios.
People “don’t want to talk about how it’s made,” White pointed out. “That’s inside baseball. People want to enjoy the movie because of the movie.”
But White himself refuses to adopt the technology, considering that using AI would compromise his creative process.
Jamie Umpherson argues that AI allows creators to stick closer to their artistic vision than ever before, since it enables unlimited revisions, unlike the traditional system constrained by costs.
“I see resistance everywhere” to this movement, observed Georgia State’s Strickler.
This is particularly true among her students, who are concerned about AI’s massive energy and water consumption as well as the use of original works to train models, not to mention the social impact.
But refusing to accept the shift is “kind of like having a business without having the internet,” she said. “You can try for a little while.”
In 2023, the American actors’ union SAG-AFTRA secured concessions on the use of their image through AI.
Strickler sees AI diminishing Hollywood’s role as the arbiter of creation and taste, instead allowing more artists and creators to reach a significant audience.
Runway’s founders, who are as much trained artists as they are computer scientists, have gained an edge over their AI video rivals in film, television, and advertising.
But they’re already looking further ahead, considering expansion into augmented and virtual reality, for example creating a metaverse where films could be shot.
“The most exciting applications aren’t necessarily the ones that we have in mind,” said Umpherson. “The ultimate goal is to see what artists do with technology.”