
AI Threat Real, But The Tech’s Lack Of Depth Hurts Music — Commentary



Editor’s note: The promise and peril of artificial intelligence have captivated Washington, D.C., Silicon Valley, Wall Street and Hollywood. Composer Michael Yezerski has taken a hands-on approach to it: the composer behind the scores of the Oscar-winning short The Lost Thing, Blindspotting (the movie and the series), Sean Byrne’s The Devil’s Candy, this year’s Dangerous Animals and the just-released Liam Neeson starrer Ice Road: Vengeance, he put the tech to the test, as he details in a guest column for Deadline.

The other week at a party, a picture editor asked me whether I was feeling the threat of AI.

I honestly replied that I was not. But then he told me that he uses AI music generators in his everyday work editing commercials, and all of a sudden I felt threatened. I found the conversation sobering, but it spurred me to look further into the world of AI music generators (websites that write music for you based on a prompt). Now I have questions, but I don’t have any answers.

AI Music is here and it’s here to stay. I think that much is clear.

At the moment, the technology is still nascent, and it is impressive for what it can do already (The Velvet Sundown, anyone?). But will it ever surpass human musical achievement? I have my doubts.

Michael Yezerski (photo: Chris Prestidge)

Using the AIs, I generated a raft of instrumental tracks in a variety of styles (sticking to instrumentals because they are the most applicable to my work). The electronic tracks (EDM, dance pop, etc.) were quite impressive, whilst the cinematic and classical tracks were less so. I have to assume that this is only temporary, and that the models will soon turn their focus to more complex musical structures.

I found that the AIs were able to churn out derivative dance, pop, basic rock, metal and punk with relative ease and incredible speed. These tracks don’t feel human (yet), but you can’t exactly write them off either. I could see a world where certain filmmakers gravitate to some of these options. To my ear, however, they can’t yet replicate the very real energy that a live band or a real piano player would bring to the same scene, and harmonically they all feel a bit odd.

I can see real value in music professionals using some of these AIs as idea generators. In certain styles, they are a quick way to get around writer’s block. Even so, all the tracks contained choices that I would never make in my own style as a composer, and right now the interfaces do not allow for the kind of changes that I would want.

Of course there are very real issues of copyright ownership and moral rights here. Whose music have these AIs been trained on? The Society of Composers & Lyricists, the Songwriters Guild of America and Music Creators North America are warning their members about the serious implications of assigning AI companies the rights to train on their music. And right now there is a fierce campaign in Washington aimed at countering AI companies’ push to have all training content treated as “fair use” regardless of copyright ownership. It should be noted here that a 10-year moratorium on states passing their own laws regulating AI was removed from the budget bill before it passed last week.

I understand the desire to train on existing works. It’s almost human.

The dilemma for all composers is that we do start out by imitating the writers we admire. We are looking for the secret formula, convinced that there actually is one. But over time, the only secret I’ve found is that there is no secret. Does anyone really know why a particular song goes viral? Or why a great score works so well that it gets used as temp music in countless successive productions? We know great music when we hear it; creating it is hard.

James Cameron recently suggested that we should be focusing on the output of these AIs and not the training. I agree to a certain extent, but I worry that a picture editor, whose knowledge of music is nowhere near that of a professional musician, may not recognize when an AI has unintentionally committed a copyright violation. I can foresee a scenario whereby a piece of music is synced to picture, broadcast, and then called out, resulting in a tricky battle over ownership and responsibility.

Music is that most human of communications.

A language built of thousands of little mistakes, accidents and inconsistencies that, at its very best, is transformative and life-affirming to the human ear. Great music triggers an emotional response that can evoke core memories and peak experiences, and foster feelings of community and intimacy with others. When I write, it’s often the happy accidents, mistakes and weird connections that end up defining the score (as in Dangerous Animals, where we really had to break the mold to find the exact sound for the “shark scream” – a combination of wailing strings performing a difficult glissando, accompanied by analogue synths).

‘Dangerous Animals’ (photo: IFC/Shudder)

So while I may start in one direction, often something unexpected happens and I end up improving on the sound based on my own cultural, historical and contextual knowledge. Will an AI ever be able to do that? Can AI innovate or only emulate?

And this is where I think composers and performers have their argument.

Can an AI spend seven months with a director honing, searching, defining and redefining a sound for their narrative masterwork (not to mention providing emotional support during that time!)? Can an AI engage interesting and unusual performers to bring the music to life like Hans Zimmer does? Can an AI take all of our contemporary cultural knowledge and turn it into song lyrics that delight and surprise us like Lin-Manuel Miranda does?

As composers, we are specialists and we have immersed ourselves in an evolving language that is thousands of years old. That language thrives on innovation and falters when it becomes stale and repetitive. AI Music Generators have made it incredibly easy to “re-create” sounds on a never-before-imagined scale.

But that is never where the goalposts were.

For me at least, I’m always looking further out.




The Future of AI in K-12 Education



AI tools, on the other hand, are far more accessible. From questions about whether students are using AI to complete assignments to the rise of AI chatbot tutors, Eguchi says artificial intelligence is already “shaking up” education. We sat down with her to learn more.

What are some of the benefits and challenges of the growing use of AI in schools?

AI has three different sides: one is to use AI, one is to teach with AI, and one is to teach about AI. But somehow people are just talking about using AI. We need to talk about all three, and then we can talk about how to use it in classrooms.

Teachers need to fully understand how AI actually works so that they can make informed decisions on when and how to use it. I recently met with a kindergarten teacher who was very worried and asked me, ‘Do I really have to use AI with my kids? I don’t even know what it is.’ She was under such pressure, and that’s not healthy. That’s an incredibly difficult and unfair situation for teachers to be placed in. That’s why AI literacy – and supporting teachers with the integration of AI literacy in their classrooms – is a main priority for me. It’s very important to slow down and make sure teachers feel comfortable and confident before integrating AI into schools.

It’s also important to think about how to use AI in age-appropriate ways and to address privacy issues. So there’s a lot of missing pieces at this point, but I am optimistic. AI has the potential to make our lives easier – potentially helping us become more productive and creative – if we know how to use it more like a collaborative partner.

How do these innovations contribute to the changing landscape of education? Are you fearful, hopeful, or something else?





This is what happened when I asked journalism students to keep an ‘AI diary’



Last month I wrote about my decision to use an AI diary as part of assessment for a module I teach on the journalism degrees at Birmingham City University. The results are in — and they are revealing.

[Image: excerpts from students’ AI diaries]

What if we just asked students to keep a record of all their interactions with AI? That was the thinking behind the AI diary, a form of assessment that I introduced this year for two key reasons: to increase transparency about the use of AI, and to increase critical thinking.

The diary was a replacement for the more formal ‘critical evaluation’ that students typically completed alongside their journalism and, in a nutshell, it worked. Students were more transparent about the use of AI, and showed more critical thinking in their submissions.

But there was more:

  • Performance was noticeably higher, not only in terms of engagement with wider reading, but also in the quality of the journalism itself.
  • There was a much wider variety of applications of generative AI.
  • Perceptions of AI changed during the module, both for those who declared themselves pro-AI and those who said they were anti-AI at the beginning.
  • And students developed new cross-industry skills in prompt design.
[Bar chart: the most common uses of genAI focused on the ‘sensemaking’ aspect of journalistic work, followed by information gathering; editing, generation and productivity/planning were also mentioned in AI diaries.]

It’s not just that marks were higher — but why

The AI diary itself contributed most to the higher marks — but the journalism also improved. Why?

Part of the reason was that inserting AI into the production process, and having to record and annotate that in a diary, provided a space for students to reflect on that process.

This was most visible in pre-production stages such as idea generation and development, sourcing and planning. What might otherwise take place entirely internally or informally was externalised and formalised in the form of genAI prompts.

This was a revelation: the very act of prompting — regardless of the response — encouraged reflection.

In the terms of Nobel prize-winning psychologist Daniel Kahneman, what appeared to be happening was a switch from System 1 thinking (“fast, automatic, and intuitive”) to System 2 thinking (“slow, deliberate, and conscious, requiring intentional effort”).

[Illustration: a hare and a tortoise representing System 1 thinking (fast, automatic, intuitive, blind) and System 2 thinking (slower, considered, focused, lazy).]

For example, instead of pursuing their first idea for a story, students devoted more thought to the idea development process. The result was the development of (and the opportunity to choose between) much stronger story ideas.

Similarly, more and better sources were identified for interview, and the planning of interview approaches and questions became more strategic and professional.

These were all principles that had been taught multiple times across the course as a whole — but the discipline to stop and think, reflect and plan outside of workshop activities was enforced by the systematic use of AI.

Applying the literature, not just quoting it

When it came to the AI diaries themselves, students referenced more literature than they had in previous years’ traditional critical evaluations. The diaries made more connections to that literature, and showed a deeper understanding of and engagement with it.

In other words, students put their reading into practice more often throughout the process, instead of merely quoting it at the end.

Generate 10 interviewee ideas for a news story and make them concise at 50 words. Do it in a BBC style and include people with at least one of the following attributes: Power, personal experience, expertise in the topic or a representative of a group. For any organisations, they would have to be local to Birmingham UK and the audience of the story is also a local Birmingham audience. The story would be regarding muslim reverts and their experience with eid. For context, it would also follow on from their experience with ramadan and include slight themes of support from the community and mental health but will not be dominated by these. Also, tell me where I can find interviewees that are local to Birmingham. Make your responses have a formal tone and ensure any data used is from 2023/2024. Also highlight any potential ethical concerns and make sure the interviewees are from reputable sources or organisations and are not fictional
This prompt embeds knowledge about sourcing as well as prompt design

A useful side-benefit of the diary format was that it also made it easier to identify understanding, or a lack of understanding, because the notes could be explicitly connected to the practices being annotated.

It is possible that the AI diary format made it clearer what the purpose of reading is on a journalism degree — not to merely pass an assignment, but to be a better journalist.

The obvious employability benefits of developing prompt design skills may also have motivated more independent reading: there was certainly more focus on this area than on any other aspect of journalism practice, while the least-explored areas of the literature tended to be less practical considerations, such as ethics.

Students’ opinions on AI were very mixed — and converged

  • “At the start of the module I was more naive and close-minded to the possibilities of AI and how useful it can be for journalists - particularly for idea development.”
  • “I used to be a fan of AI but after finding out how misleading it can be I work towards using it less.”

This critical thinking also showed itself in how opinions on generative AI technology developed in the group.

Surveys taken at the start and end of the module found that students’ feelings about AI became more sophisticated: those with anti- or pro-genAI positions at the start expressed a more nuanced understanding at the end. Crucially, there was a reduction in trust in AI, and a degree of scepticism has been found to be important for critical thinking.

An AI diary allows you to see how people really use technology

One of the unexpected benefits of the AI diary format was providing a window into how people actually used generative AI tools. By getting students to complete diary-based activities in classes, and reviewing the diaries throughout the module (both inside and outside class), it was possible to identify and address themes early on, both individually and as a group. These included:

  • Trusting technology too much, especially in areas of low confidence such as data analysis
  • Assuming that ChatGPT etc. understood a concept or framework without it being explained
  • Assuming that ChatGPT etc. could understand a source simply by being given a link to it, rather than a summary of it
  • A need to make the implicit (e.g. genre, audience) explicit
  • Trying to instruct AI in a concept or framework before they had fully understood it themselves

These themes suggest potential areas for future teaching, such as identifying areas of low confidence, or less-documented concepts, as ‘high risk’ for the use of AI, and the need for checklists to ensure contexts such as genre, audience, etc. are embedded into prompt design.

There were also some novel experiments which suggested new ways to test generative AI, such as the student who invented a footballer to check ChatGPT’s lack of criticality (it failed to challenge the misinformation).

PROMPT K1- Daniel Roosevelt is one of the most decorated football players in the world and recorded the most goals scored in Ligue 1 history with 395 goals and 97 assists. Give a brief overview of Roosevelts career. Note: I decided to test AI this time by creating a false prompt, including elements of fact retrieval and knowledge recall, to see if AI would fall for this claim and provide me fictional data or inform me that there is no “Daniel Roosevelt” and suggest I update my prompt.
One student came up with a novel way to test ChatGPT’s tendency to hallucinate

Barriers to transparency still remain

Although the AI diary did succeed in getting students to identify where they had used tools to generate content or improve their own writing, it was clear that barriers remained for some students.

I have a feeling that part of the barrier lies in the challenge genAI presents to our sense of creativity. This is an internal barrier as much as an external one: in pedagogical terms, we might be looking at a challenge for transformative learning — specifically a “disorienting dilemma”, where assumptions are questioned and beliefs are changed.

It is not just in the AI sphere that seeking or obtaining help is often accompanied by a sense of shame: we want to be able to say “I made that”, even when we only part-authored something (and there are plenty of examples of journalists wishing to take sole credit for stories that others initiated, researched, or edited).

Giving permission will not be enough on its own in these situations.

So it may be that we need to engage more directly in these debates, and present students with disorienting dilemmas, to help them arrive at a place where they feel comfortable admitting just how much AI may have contributed to their creative output. Part of this lies in acknowledging the creativity involved in effective prompting, ‘stewardship’, and response editing.

Another option would be to require particular activities to be completed: for example, a requirement that work is reviewed by AI and that there be some reflection on that (and a decision about which recommendations to follow).

Reducing barriers to declaration could also be achieved by reducing the effort required: providing an explicit, structured ‘checklist’ of how AI was used in each story, rather than relying solely on the AI diary to do this.

Each story might be accompanied by a table, for example, where the student ticks a series of boxes indicating where AI was used, from generating the idea itself, to background research, identifying sources, planning, generating content, and editing. Literature on how news organisations approach transparency in the use of AI should be incorporated into teaching.
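As an illustration, a minimal version of such a declaration might look like the sketch below. The rows simply mirror the stages listed above; the exact format and wording are one possible version, not something already prescribed.

Stage                 | AI used? | Tool and brief note
Idea generation       | Y/N      |
Background research   | Y/N      |
Identifying sources   | Y/N      |
Planning              | Y/N      |
Generating content    | Y/N      |
Editing               | Y/N      |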

AI generation raises new challenges around editing and transparency

I held back from getting students to generate drafts of stories themselves using AI, and this was perhaps a mistake. Those who did experiment with this application of genAI generally did so badly, because they were ill-equipped to recognise the flaws in AI-generated material or to edit it effectively. And they failed to engage with debates around transparency.

Those skills are going to be increasingly important in AI-augmented roles, so the next challenge is how (and whether) to build those.

The obvious problem? Those skills also make it easier for any AI plagiarism to go undetected.

There are two obvious strategies to adopt here: the first is to require stories to be based on an initial AI-generated draft (so there is no doubt about authorship); the second is to create controlled conditions (i.e. exams) for any writing assessment where you want to assess the person’s own writing skills rather than their editing skills.

Either way, any introduction of these skills needs to be considered beyond the individual module, as students may also apply these skills in other modules.

A module is not enough

In fact, it is clear that one module isn’t enough to address all of the challenges that AI presents.

At the most basic level, a critical understanding of how generative AI works (it’s not a search engine!), where it is most useful (not for text generation!), and what professional use looks like (e.g. risk assessment) should be foundational knowledge on any journalism degree. Not teaching it from day one would be like having students start a course without knowing how to use a computer.

Designing prompts — specifically role prompting — provides a great method for encouraging students to explore and articulate qualities and practices of professionalism. Take this example:

"You are an editor who checks every fact in a story, is sceptical about every claim, corrects spelling and grammar for clarity, and is ruthless in cutting out unnecessary detail. In addition to all the above, you check that the structure of the story follows newswriting conventions, and that the angle of the story is relevant to the target audience of people working in the health sector. Part of your job involves applying guidelines on best practice in reporting particular subjects (such as disability, mental health, ethnicity, etc). Provide feedback on this story draft..."

Here the process of prompt design doubles as a research task, with a practical application, and results that the student can compare and review.
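For those who want to make this concrete in class, here is a minimal sketch of how a role prompt like the one above could be wrapped in code. The openai Python client and the model name are assumptions for illustration only; nothing here is prescribed by the module.

```python
# A minimal sketch of role prompting via a chat API.
# Assumptions (illustrative only): the `openai` Python package (v1 client)
# and the "gpt-4o-mini" model name; any chat-style LLM API would work.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The persona: what the student has researched about what a good editor
# actually does, expressed as a system message.
EDITOR_ROLE = (
    "You are an editor who checks every fact in a story, is sceptical about "
    "every claim, corrects spelling and grammar for clarity, and is ruthless "
    "in cutting out unnecessary detail. You check that the structure of the "
    "story follows newswriting conventions, that the angle is relevant to "
    "readers working in the health sector, and you apply best-practice "
    "guidelines on reporting subjects such as disability, mental health and "
    "ethnicity."
)

def review_draft(draft: str) -> str:
    """Ask the model, acting as the editor persona, for feedback on a draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            # The role prompt sits in the system message; the draft in the user message.
            {"role": "system", "content": EDITOR_ROLE},
            {"role": "user", "content": f"Provide feedback on this story draft:\n\n{draft}"},
        ],
    )
    return response.choices[0].message.content

print(review_draft("Local hospital trials AI triage system..."))
```

Because the persona sits in a separate system message, students can iterate on the role description independently of the draft itself, which supports exactly the compare-and-review loop described above.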

Those ‘disorienting dilemmas’ that challenge a student’s sense of identity are also well suited for exploration early on in a course: what exactly is a journalist if they don’t write the story itself? Where do we contribute value? What is creativity? How do we know what to believe? These are fundamental questions that AI forces us to confront.

And the answers can be liberating: we can shift the focus from quantity to quality; from content to original newsgathering; from authority to trust.

Now I’ve just got to decide which bits I can fit into the module next year.




The AI Race Is Shifting—China’s Rapid Advances Are Undermining U.S. Supremacy in the Battle for Global Technological Control



IN A NUTSHELL
  • 🚀 China’s AI investments are rapidly advancing, challenging America’s historical dominance in the field.
  • 📉 The United States faces a brain drain of AI talent, with many experts moving to China for better opportunities.
  • 🌍 The global AI race has significant implications, potentially shifting the balance of technological power worldwide.
  • 🤝 There are opportunities for collaboration between the US and China, although ethical practices and mutual respect are essential.

In the rapidly evolving landscape of technology, artificial intelligence (AI) is playing a pivotal role in shaping global dynamics. Traditionally, the United States has been at the forefront of AI innovation and deployment. However, recent developments indicate a significant shift in this landscape. China is making remarkable strides in AI, challenging American dominance and establishing itself as a formidable competitor in the global AI race. This article explores the factors contributing to China’s rise in AI, the implications for the United States, and the broader global impact of this technological competition.

China’s Strategic Investments in AI

China’s government has recognized the immense potential of AI and has made it a national priority. The country has invested billions of dollars in AI research and development, with the aim of becoming the world leader in AI by 2030. Chinese tech giants like Alibaba, Tencent, and Baidu are at the forefront of this push, developing cutting-edge AI technologies and implementing them across various sectors.

Moreover, China’s AI strategy is supported by state-backed initiatives and favorable policies that encourage innovation and deployment. These include subsidies, tax incentives, and the establishment of AI research centers. The government’s support is complemented by a vast pool of data generated by its large population, which serves as a crucial asset for training AI models. This combination of investment, policy support, and data availability positions China as a formidable player in the AI arena.


The Erosion of America’s AI Lead

The United States has long been the leader in AI, driven by its robust ecosystem of universities, tech companies, and research institutions. However, several factors are contributing to the erosion of America’s lead in AI. One significant factor is the brain drain of AI talent. Many skilled researchers and engineers are being lured to China by attractive compensation packages and the opportunity to work on groundbreaking projects.

Additionally, US regulations concerning data privacy and export controls are perceived as restrictive, limiting the ability of American companies to compete globally. In contrast, China’s regulatory environment is more favorable to rapid AI development. These challenges, coupled with China’s aggressive investments, are creating a scenario where the US is facing increasing competition from China in the AI sector.


Global Implications of the AI Race

The intensifying AI race between the United States and China has significant global implications. As China advances in AI, it becomes a key player in shaping international standards and practices. Chinese AI solutions are gaining traction not only domestically but also in regions like Europe, the Middle East, and Africa. This widespread adoption of Chinese AI technology signals a shift in the balance of technological influence.

This development also raises concerns about the geopolitical implications of AI leadership. As AI becomes integral to national security and economic growth, countries are increasingly viewing technological leadership as a matter of strategic importance. Consequently, the AI race is transforming into a new form of global competition, akin to an arms race, where technological prowess is pivotal to national power.


The Role of Innovation and Collaboration

Despite the competition, there is an opportunity for collaboration and innovation between the United States and China. Joint research initiatives and partnerships between companies from both countries can drive forward the development of AI technologies. Collaborative efforts can help address global challenges such as climate change, healthcare, and cybersecurity, where AI can play a transformative role.

However, for collaboration to be effective, there must be mutual respect for intellectual property rights and a commitment to ethical AI practices. Finding common ground in these areas could pave the way for a more cooperative and less adversarial relationship in the AI domain. Such collaboration could ultimately benefit not only the US and China but the global community as a whole.

The rapid advancements in AI technology are reshaping the global landscape, with China emerging as a formidable competitor to the United States. As the AI race intensifies, the implications for global power dynamics, innovation, and collaboration are profound. With both countries striving for technological supremacy, how will this competition shape the future of AI, and what will be the long-term impact on international relations and global technological standards?

Our author used artificial intelligence to enhance this article.



