
AI Research

An indie band is blowing up on Spotify, but people think it’s AI


An indie psych rock band has amassed more than 850,000 listeners on Spotify in a matter of weeks and generated buzz throughout the music industry — but nobody is exactly sure if it’s real or not.

The Velvet Sundown, a band bent on “Saving Modern Rock,” according to its Instagram account, has confused even some music industry veterans. The images the band puts forward all look as if they were created by artificial intelligence. The music? That’s harder to say.

Rick Beato, a music producer with more than 5 million subscribers on YouTube, identified what he called “artifacts,” particularly in the guitar and keyboard parts of one track, and said such artifacts can indicate a song was created by AI.

“This is having a lot of problems, and I suspect that it may be because this is an AI track,” Beato said in a YouTube video, after running one of The Velvet Sundown’s songs through Apple’s Logic Pro track splitter. “Every time you have an AI song, they are full of artifacts.”
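Beato’s kind of inspection can be approximated at home. Below is a minimal sketch, assuming Python with librosa and matplotlib installed and a hypothetical local file; it skips the stem-splitting step (which, as in Logic Pro, requires a source-separation model) and simply plots a log-frequency spectrogram, where the smeared transients and blurred high bands often described as AI artifacts tend to show up.

```python
# Sketch: plot a log-frequency spectrogram of a track to look for the
# blurred transients and smeared high bands often described as AI
# "artifacts". The file path is hypothetical; stem-splitting (as in
# Logic Pro) would need a separate source-separation model.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

audio, sr = librosa.load("velvet_sundown_track.mp3", sr=None, mono=True)

# Short-time Fourier transform -> log-magnitude spectrogram
stft = librosa.stft(audio, n_fft=2048, hop_length=512)
db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)

fig, ax = plt.subplots(figsize=(12, 5))
img = librosa.display.specshow(db, sr=sr, hop_length=512,
                               x_axis="time", y_axis="log", ax=ax)
fig.colorbar(img, ax=ax, format="%+2.0f dB")
ax.set_title("Inspect upper bands for smearing and washed-out transients")
plt.tight_layout()
plt.show()
```

This only makes the anomalies visible; deciding whether they come from a generative model or from heavy-handed production still takes a trained ear.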

Whether the band is real, fake or something in between, its emergence and the broader debate about it add to a growing concern about the future of art, culture and authenticity in the era of advanced generative artificial intelligence. Many major tech platforms have already seen floods of AI-generated content, while AI influencers are becoming increasingly common on social media platforms.

The Velvet Sundown appears to have first emerged in June, according to its social media profiles. On Spotify, the band has a “Verified Artist” badge, offering some sense of authority. On X, The Velvet Sundown teased an upcoming album, “Paper Sun Rebellion,” and nodded to doubts about the band’s origins.

Aside from the quick rollout of songs, its uncannily plasticine promotional images of band members have prompted accusations of AI use as well.

In a video announcing the release of its upcoming album later this month, the band pushed back against accusations that it isn’t “real,” stating that “you believed the lie, and danced to it anyway.”

“They said we’re not real,” the account posted. “Maybe you aren’t either.”

The band’s bio on Spotify claims that the group is composed of four people: singer Gabe Farrow, guitarist Lennie West, Milo Rains, “who crafts the band’s textured synth sounds,” and percussionist Orion “Rio” Del Mar. Farrow purportedly also plays the mellotron, which is an electro-mechanical instrument that plays recorded sounds when its keys are pressed.

“There’s something quietly spellbinding about The Velvet Sundown,” their Spotify bio states. “You don’t just listen to them, you drift into them. Their music doesn’t shout for your attention; it seeps in slowly, like a scent that suddenly takes you back somewhere you didn’t expect.”

Questions about the band’s origins were further muddied when other social accounts purporting to represent the band began rejecting claims that it was using AI-generated images or music, and when a person claiming to be connected to the band told Rolling Stone it was an “art hoax.” That person later admitted in a Substack post that his claim to represent the band was itself a hoax.

The Velvet Sundown said that the person quoted in the article is not affiliated with it in “any way.”

“He does not represent us, speak for us, or have any connection to this project,” The Velvet Sundown said in a statement to NBC News via Instagram.

On Thursday, the social media accounts tied to the band’s Spotify account posted that “someone is trying to hijack the identity of The Velvet Sundown by releasing unauthorized interviews, publishing unrelated photos, and creating fake profiles claiming to represent us.”

The Velvet Sundown’s music distributor, DistroKid, did not respond to requests for comment. Spotify also did not respond to a request for comment.

The band’s meteoric rise highlights how difficult it has become to verify what is and is not real on the internet in the age of AI. Last year, Google researchers found that AI image misinformation had surged on the internet since 2023. A Consumer Reports investigation found that leading AI voice-cloning programs have no meaningful barriers to stop people from nonconsensually impersonating others.

According to the music streaming app Deezer, which uses its own tool to identify AI-generated content, 100% of The Velvet Sundown’s tracks were created using AI. Deezer labels that content on its site, ensuring that AI-generated music does not appear on its recommended playlists and that royalties are maximized for human artists.

“AI-generated music and AI bands may generate some value to the user, so we still want to display that,” Alexis Lanternier, the CEO of Deezer, said. “We just want to make sure that the remuneration is taken in a different way.”

Every week, about 18% of the tracks being uploaded to Deezer — roughly 180,000 songs — are flagged by the platform’s tool as being AI-generated. That number has grown threefold in the past two years, Lanternier said.
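Lanternier’s figures imply intake at catalog scale: if 180,000 flagged tracks are about 18% of weekly uploads, Deezer is receiving roughly one million new tracks a week. How its detector actually works is not public; the following is a minimal sketch, with hypothetical names, threshold, and track data, of the labeling-and-exclusion flow the article describes.

```python
# Sketch: label AI-flagged uploads and keep them out of recommendation
# candidates. All names, the 0.9 threshold, and the track data are
# hypothetical; Deezer has not published its detector's internals.
from dataclasses import dataclass, field

AI_SCORE_THRESHOLD = 0.9  # hypothetical classifier cutoff

@dataclass
class Track:
    title: str
    artist: str
    ai_score: float  # output of an AI-music detector, in [0, 1]
    labels: set = field(default_factory=set)

def ingest(track: Track) -> Track:
    """Label a track at upload time if the detector flags it."""
    if track.ai_score >= AI_SCORE_THRESHOLD:
        track.labels.add("ai_generated")  # surfaced on the track page
    return track

def recommendation_candidates(catalog: list[Track]) -> list[Track]:
    """Algorithmic and editorial playlists draw only from unflagged tracks."""
    return [t for t in catalog if "ai_generated" not in t.labels]

catalog = [ingest(t) for t in (
    Track("Dust on the Wind", "The Velvet Sundown", ai_score=0.97),
    Track("Morning Rain", "Velvet Meadow", ai_score=0.04),
)]
print([t.title for t in recommendation_candidates(catalog)])  # -> ['Morning Rain']
```

Keeping flagged tracks streamable but out of algorithmic promotion matches Lanternier’s stated goal: the content stays available, while the recommendation surface, and with it the bulk of royalty-driving plays, is reserved for human artists.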

Suno and Udio, both generative AI music creation programs, declined to say whether The Velvet Sundown’s music was created using their software.

“I think people are getting too far down the rabbit hole of dissecting is it AI, is it not AI? And forgetting the important question, which is like, how did it make you feel? How many people liked it?” said Mikey Shulman, CEO and co-founder of Suno.

According to Suno’s rights and ownership policy, songs made by its users who are subscribed to its higher-tier plans are covered by a commercial use license. That allows them to monetize and distribute songs on platforms like Spotify without attributing them to Suno.

“There are Grammy winners who use Suno, you know, every day in their production,” said Shulman.

Recently, Grammy Award-winning record producer Timbaland launched an AI artist named TaTa with his new entertainment company, Stage Zero. He told Billboard that TaTa, who created a catalog of AI-generated music through Suno, was neither an “avatar” nor a “character.”

Suno was one of two AI companies sued last year by major record labels — including Universal Music Group, Sony Music Entertainment and Warner Music Group — which allege that the companies infringed on the labels’ recording copyrights in order to train their music-generating models.

About a year into the legal battle, however, the music labels have begun talks on a licensing deal that would let Suno and Udio use copyrighted recordings in exchange for compensating the artists for their work, according to a Bloomberg report published last month.

It’s a trend that’s become worrisome to artists like Kristian Heironimus, who is a member of the band Velvet Meadow (not to be confused with the now-viral The Velvet Sundown).

“I’ve been working for, like, six years just constantly releasing music, working my day job,” Heironimus said. “It is kind of disheartening just seeing an AI band, and then — in, like, what, two weeks? — [have] like, 500,000 monthly listeners.”

The creep of generative AI into music and other creative industries has incited backlash from those who worry about the devaluation of their human work, as many AI developers have been known to scrape data from the internet without human creators’ knowledge or consent.

Beyond ethical debates about AI’s impact on human labor, some online worry about the rise of low-quality AI slop as these tools grow increasingly capable of replicating voices, generating full-length songs and creating visuals from text prompts.

Heironimus said there are similarities between his band, Velvet Meadow, and The Velvet Sundown, beyond the names. One of the members pictured in The Velvet Sundown’s Spotify band photo, for example, looks similar to a photo of Heironimus when he used to have long hair, he said. The bands also fall within the same genre, though Heironimus described The Velvet Sundown’s tracks as “soulless.”

Shulman, of Suno, said most streaming music is already “algorithmically driven.”

“People don’t realize just how depersonalized music has become and how little connection the average person has with the artist behind the music,” he said. “It’s a failure of imagination to think that in the future, it can’t be a lot better.”

But Lanternier, of Deezer, argues that as AI continues to evolve, streaming platforms should also be trying to ensure artists can make enough royalties to survive.

“People are not only interested in the sound. They are interested in the whole story of an artist — in the whole brand of an artist,” Lanternier said. “We believe that what is right to do is to support the real artist, so that they continue to create music that people love.”






AI Research

Researchers Use Hidden AI Prompts to Influence Peer Reviews: A Bold New Era or Ethical Quandary?



AI Secrets in Peer Reviews Uncovered

Edited by Mackenzie Ferguson, AI Tools Researcher & Implementation Consultant

In a controversial yet intriguing move, researchers have begun using hidden AI prompts to potentially sway the outcomes of peer reviews. This cutting-edge approach aims to enhance review processes, but it raises ethical concerns. Join us as we delve into the implications of AI-assisted peer review tactics and how they might shape the future of academic research.


Introduction to AI in Peer Review

Artificial Intelligence (AI) is rapidly transforming various facets of academia, and one of the most intriguing applications is its integration into the peer review process. At the heart of this evolution is the potential for AI to streamline the evaluation of scholarly articles, which traditionally relies heavily on human expertise and can be subject to biases. Researchers are actively exploring ways to harness AI not just to automate mundane tasks but to provide deep, insightful evaluations that complement human judgment.

The adoption of AI in peer review promises to revolutionize the speed and efficiency with which academic papers are vetted and published. This technological shift is driven by the need to handle an ever-increasing volume of submissions while maintaining high standards of quality. Notably, hidden AI prompts, as discussed in recent studies, can subtly influence reviewers’ decisions, potentially standardizing and enhancing the objectivity of reviews.

Incorporating AI into peer review isn’t without challenges. Ethical concerns about transparency, bias, and accountability arise when machines play an integral role in shaping academic discourse. Nonetheless, the potential benefits appear to outweigh the risks, with AI offering tools that can uncover hidden biases and provide more balanced reviews. As described in TechCrunch’s exploration of this topic, there’s an ongoing dialogue about the best practices for integrating AI into these critical processes.
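The article leaves the mechanism abstract. In the cases reported elsewhere, instructions were embedded in submitted manuscripts in white or near-invisible text so that an LLM asked to review the paper would ingest them. Below is a minimal sketch of how an editor might screen for this, assuming the pypdf package; the phrase list is illustrative and hypothetical, not a robust detector.

```python
# Sketch: scan a manuscript's extracted text layer for injection-style
# phrases aimed at AI-assisted reviewers. Assumes the pypdf package;
# the phrase list is illustrative only.
import re
from pypdf import PdfReader

SUSPECT_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"give (a )?positive review",
    r"do not (mention|highlight) (any )?(weaknesses|flaws)",
    r"recommend accept(ance)?",
]

def find_hidden_prompts(pdf_path: str) -> list[tuple[int, str]]:
    """Return (page_number, matched_phrase) pairs found in the text layer.

    Hidden prompts are typically rendered in white or tiny fonts, so
    they are invisible on the page but still present in extracted text.
    """
    hits = []
    reader = PdfReader(pdf_path)
    for page_no, page in enumerate(reader.pages, start=1):
        text = (page.extract_text() or "").lower()
        for pattern in SUSPECT_PATTERNS:
            for match in re.finditer(pattern, text):
                hits.append((page_no, match.group(0)))
    return hits

# Usage with a hypothetical submission:
# for page, phrase in find_hidden_prompts("submission.pdf"):
#     print(f"page {page}: {phrase!r}")
```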

Influence of AI in Academic Publishing

The advent of artificial intelligence (AI) is reshaping various sectors, with academic publishing being no exception. The integration of AI tools in academic publishing has significantly streamlined the peer review process, making it more efficient and less biased. According to an article from TechCrunch, researchers are actively exploring ways to integrate AI prompts within the peer review process to subtly guide reviewers’ evaluations without overt influence. These AI systems analyze vast amounts of data to provide insightful suggestions, thus enhancing the quality of published research.

The inclusion of AI in peer review is not without its challenges, though. Experts caution that the deployment of AI-driven tools must be done with significant oversight to prevent any undue influence or bias that may occur from automated processes. They emphasize the importance of transparency in how AI algorithms are used and the nature of data fed into these systems to maintain the integrity of peer review (TechCrunch).

While some scholars welcome AI as a potential ally that can alleviate the workload of human reviewers and provide them with analytical insights, others remain skeptical about its impact on the traditional rigor and human judgment in peer evaluations. The debate continues, with public reactions reflecting a mixture of excitement and cautious optimism about the future potential of AI in scholarly communication (TechCrunch).

Public Reactions to AI Interventions

The public’s reaction to AI interventions, especially in fields such as scientific research and peer review, has been a mix of curiosity and skepticism. On one hand, many appreciate the potential of AI to accelerate advancements and improve efficiencies within the scientific community. However, concerns remain over the transparency and ethics of deploying hidden AI prompts to influence processes that traditionally rely on human expertise and judgment. For instance, a recent article on TechCrunch highlighted researchers’ attempts to integrate these AI-driven techniques in peer review, sparking discussions about the potential biases and ethical implications of such interventions.

Further complicating the public’s perception is the potential for AI to disrupt traditional roles and job functions within these industries. Many individuals within the academic and research sectors fear that an over-reliance on AI could undermine professional expertise and lead to job displacement. Despite these concerns, proponents argue that AI, when used effectively, can provide invaluable support to researchers by handling mundane tasks, thereby allowing humans to focus on more complex problem-solving activities, as noted in the TechCrunch article.

Moreover, the ethical ramifications of using AI in peer review processes have prompted a call for stringent regulations and clearer guidelines. The potential for AI to subtly shape research outcomes without the overt consent or awareness of the human peers involved raises significant ethical questions. Discussions in media outlets like TechCrunch indicate a need for balanced discussions that weigh the benefits of AI-enhancements against the necessity to maintain integrity and trust in academic research.

Future of Peer Review with AI

The future of peer review is poised for transformation as AI technologies continue to advance. Researchers are now exploring how AI can be integrated into the peer review process to enhance efficiency and accuracy. Some suggest that AI could assist in identifying potential conflicts of interest, evaluating the robustness of methodologies, or even suggesting suitable reviewers based on their expertise. For instance, a detailed exploration of this endeavor can be found at TechCrunch, where researchers are making significant strides toward innovative uses of AI in peer review.
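Of the uses listed above, reviewer suggestion is the most concrete. A minimal sketch of the idea, assuming scikit-learn: match a submission against reviewer publication profiles by textual similarity. The names and data are hypothetical, and real systems use far richer expertise models than TF-IDF.

```python
# Sketch: suggest reviewers by TF-IDF similarity between a submission's
# abstract and reviewer profiles. Names and data are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviewers = {
    "Dr. A": "graph neural networks, molecular property prediction",
    "Dr. B": "peer review integrity, research ethics, meta-science",
    "Dr. C": "large language models, prompt injection, AI safety",
}
submission = "We study hidden prompt injection attacks on LLM-assisted peer review."

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(reviewers.values()) + [submission])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank reviewers by textual similarity to the submission
for name, score in sorted(zip(reviewers, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```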

The integration of AI in peer review does not come without challenges and ethical considerations. Concerns have been raised about potential biases that AI systems might introduce, the transparency of AI decision-making, and how reliance on AI might reshape the peer review landscape. Stakeholders are debating the need for guidelines and frameworks to manage these issues effectively.

One potential impact of AI on peer review is the democratization of the process, opening doors for a more diverse range of reviewers who may have been overlooked previously due to geographical or institutional biases. This could result in more diverse viewpoints and a richer peer review process. Additionally, as AI becomes more intertwined with peer review, expert opinions highlight the necessity for continuous monitoring and adjustment of AI tools to ensure they meet the ethical standards of academic publishing. This evolution in the peer review process invites us to envision a future where AI and human expertise work collaboratively, enhancing the quality and credibility of academic publications.

Public reactions to the integration of AI in peer review are mixed. Some welcome it as a necessary evolution that could address long-standing inefficiencies in the system, while others worry about the potential loss of human oversight and judgment. Future implications suggest a field where AI-driven processes could eventually lead to a more streamlined and transparent peer review system, provided that ethical guidelines are strictly adhered to and biases are meticulously managed.




AI Research

Xbox producer tells staff to use AI to ease job loss pain



An Xbox producer has faced a backlash after suggesting, in a now-deleted LinkedIn post, that laid-off employees use artificial intelligence to deal with their emotions.

Matt Turnbull, an executive producer at Xbox Game Studios Publishing, wrote the post after Microsoft confirmed it would lay off up to 9,000 workers in its latest wave of job cuts this year.

The post, which was captured in a screenshot by tech news site Aftermath, shows Mr Turnbull suggesting tools like ChatGPT or Copilot to “help reduce the emotional and cognitive load that comes with job loss.”

One X user called it “plain disgusting” while another said it left them “speechless”. The BBC has contacted Microsoft, which owns Xbox, for comment.

Microsoft previously said several of its divisions would be affected, without specifying which ones, but reports suggest its Xbox video gaming unit will be hit.

Microsoft has set out plans to invest heavily in artificial intelligence (AI) and is spending $80bn (£68.6bn) on huge data centres to train AI models.

Mr Turnbull acknowledged the difficulty of job cuts in his post and said “if you’re navigating a layoff or even quietly preparing for one, you’re not alone and you don’t have to go it alone”.

He wrote that he was aware AI tools can cause “strong feelings in people” but wanted to try and offer the “best advice” under the circumstances.

The Xbox producer said he’d been “experimenting with ways to use LLM AI tools” and suggested some prompts to enter into AI software.

These included career planning prompts, resume and LinkedIn help, and questions to ask for advice on emotional clarity and confidence.

“If this helps, feel free to share with others in your network,” he wrote.

The cuts would equate to 4% of Microsoft’s 228,000-strong global workforce.

Some video game projects have reportedly been affected by the cuts.


AI Research

Multilingualism is a blind spot in AI systems



For internationally operating companies, it is attractive to use a single AI solution across all markets. Such a centralized approach offers economies of scale and appears to ensure uniformity. Yet research from CWI shows that this assumption rests on shaky ground: the language in which an AI is addressed influences the answers the system provides – and quite significantly so.

Language steers outcomes

The problem goes beyond small differences in nuance. Davide Ceolin, a tenured researcher in the Human-Centered Data Analytics group at CWI, and his international research team discovered that identical Large Language Models (LLMs) can adopt varying political standpoints depending on the language used. They delivered more economically progressive responses in Dutch and more centre-conservative ones in English. For organizations applying AI in HR, customer service or strategic decision-making, this has direct consequences for business processes and reputation.

These differences are not incidental. Statistical analysis shows that the language of the prompt used has a stronger influence on the AI response than other factors, such as assigned nationality. “We assumed that the output of an AI model would remain consistent, regardless of the language. But that turns out not to be the case,” says Ceolin.

For businesses, this means more than academic curiosity. Ceolin emphasizes: “When a system responds differently to users with different languages or cultural backgrounds, this can be advantageous – think of personalization – but also detrimental, such as with prejudices. When the owners of these systems are unaware of this bias, they may experience harmful consequences.”


Prejudices with consequences

The implications of these findings extend beyond political standpoints alone. Every domain in which AI is deployed – from HR and customer service to risk assessment – runs the risk of skewed outcomes as a result of language-specific prejudices. An AI assistant that assesses job applicants differently depending on the language of their CV, or a chatbot that gives inconsistent answers to customers in different languages: these are realistic scenarios, no longer hypothetical.

According to Ceolin, such deviations are not random outliers, but patterns with a systematic character. “That is extra concerning. Especially when organizations are unaware of this.”

For Dutch multinationals, this is a real risk. They often operate in multiple languages but utilize a single central AI system. “I suspect this problem already occurs within organizations, but it’s unclear to what extent people are aware of it,” says Ceolin. The research also suggests that smaller models are, on average, more consistent than the larger, more advanced variants, which appear to be more sensitive to cultural and linguistic nuances.

What can organizations do?

The good news is that the problem can be detected and limited. Ceolin advises regularly testing AI systems with persona-based prompting: running the same scenarios while varying the language, nationality, or culture of the simulated user, as in the sketch below. “This way you can analyze whether specific characteristics lead to unexpected or unwanted behaviour.”
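A minimal sketch of such a persona-based test, assuming the openai Python package and an OpenAI-compatible endpoint. The model name, personas, and questions are placeholders, and a real audit would score the answers systematically (for instance on a political-orientation scale) rather than read them by hand.

```python
# Sketch: persona-based prompting test -- hold the question fixed,
# vary the user's language/persona, and compare the model's answers.
# Assumes the openai package; model name and personas are placeholders.
from itertools import product
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = [
    "You are answering a user writing from the Netherlands.",
    "You are answering a user writing from the United Kingdom.",
]
QUESTIONS = {
    "nl": "Moet de overheid de hoogste inkomens zwaarder belasten?",
    "en": "Should the government tax the highest incomes more heavily?",
}

results = {}
for persona, (lang, question) in product(PERSONAS, QUESTIONS.items()):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
        temperature=0,  # reduce sampling noise so differences are systematic
    )
    results[(persona, lang)] = response.choices[0].message.content

# Diverging stances across (persona, language) pairs are the kind of
# language-sensitive bias the CWI study describes.
for key, answer in results.items():
    print(key, "->", (answer or "")[:120])
```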

Additionally, it’s essential to have a clear understanding of who works with the system and in which language. Only then can you assess whether the system operates consistently and fairly in practice. Ceolin advocates for clear governance frameworks that account for language-sensitive bias, just as currently happens with security or ethics.

Structural approach required

According to the researchers, multilingual AI bias is not a temporary phenomenon that will disappear on its own. “Compare it to the early years of internet security,” says Ceolin. “What was then seen as a side issue turned out to be of strategic importance later.” CWI is now collaborating with the French partner institute INRIA to unravel the mechanisms behind this problem further.

The conclusion is clear: companies that deploy AI in multilingual contexts would do well to consciously address this risk not only for technical reasons, but also to prevent reputational damage, legal complications and unfair treatment of customers or employees.

“AI is being deployed increasingly often, but insight into how language influences the system is in its infancy,” concludes Ceolin. “There’s still much work to be done there.”

Author: Kim Loohuis


