Connect with us

Tools & Platforms

Video generation AI creating new niche

Published

on


A view of the booth of Kuaishou Technology during an expo in Shanghai. [LONG WEI/FOR CHINA DAILY]

Chinese video-sharing platform Kuaishou Technology is banking on artificial intelligence-powered video generation models, which boast immense application potential in fields including film, animation, mini dramas and advertising.

Experts believe text-to-video generators have the potential to revolutionize the short-video, advertising and movie trailer industries after US-based AI research company OpenAI’s Sora took the world by storm.

Since its launch in June last year, Kuaishou’s video generation model Kling has undergone more than 30 iterations and upgrades, with the number of global creators reaching more than 45 million, while generating over 200 million videos and 400 million images, and serving more than 20,000 enterprise customers.

Gai Kun, senior vice-president of Kuaishou and head of Kling AI, said 2025 is bound to be a crucial year for bolstering the in-depth application of generative AI technology, highlighting that video generation models are witnessing rapid development and application.

Gai said AI has great growth potential in video and image generation, and the goal of Kling AI is to become the new infrastructure for video creation in the era of AI, enabling “everyone to tell good stories with AI”.

The model has been leveraged in various sectors, such as marketing and advertising, film, television, animation and game production, he added. Looking forward, Kling AI will continue to focus on technological innovation and accelerate its deep application across a wide range of scenarios.

Kuaishou rolled out its Kling AI 2.0 video generation model in April. The AI model outperformed peers such as OpenAI’s Sora in some dimensions, including semantic responsiveness and visual and motion quality, marking a significant breakthrough in AI video creation, the company said.

The model can interpret prompts to generate high-quality videos that mimic the physical world and create imaginative scenes from text instructions. Multimodal editing capabilities are also available on the Kling AI platform, where users can input their ideas through images and other formats, generating creative videos that align with their concepts.

Xue Xiaolu, one of China’s most commercially successful female directors, said AI has reshaped the traditional film and television production process, as from scriptwriting to storyboarding, video generation and editing, all of the tasks could be completed quickly with the help of AI, significantly reducing production time and cost.

Ma Shicong, an analyst with Beijing-based internet consultancy Analysys, said Kuaishou has accumulated ample experience and technical strengths in AI, video, livestreaming and algorithms over the past few years.

Ma said the company hopes to seek new sources of revenue and speed up its monetization efforts by expanding its footprint in the fast-developing AI-generated content segment amid fierce competition from local rivals.

The training of video generation AI models necessitates higher requirements for computing capacity, algorithms and high-quality data, said Pan Helin, a member of the Expert Committee for Information and Communication Economy, which operates under the Ministry of Industry and Information Technology.

Chinese tech companies should beef up self-developed and proprietary abilities in underlying computing power chips and programming software, as well as increase investments in basic scientific research, to catch up with foreign counterparts in the AI chatbot race, he said.

Noting that AI-generated content or AIGC-related technologies will improve the productivity of content production and inject fresh impetus into China’s economic growth, Pan said more efforts are needed to bolster the efficient circulation of data elements, and expand application scenarios of video generation models in a wider range of segments.

Chen Duan, director of the Digital Economy Integration Innovation Development Center at the Central University of Finance and Economics, said AIGC technology will lead to a new revolution in the field of digital content production, and bolster innovation in the digital culture industry.

Chinese enterprises have unique advantages in expanding AI application scenarios compared with their foreign peers, based on China’s enormous domestic social media networks and the world’s largest number of active internet users, she said, adding that text-to-video generators have the potential to revolutionize sectors including short videos, advertising and movie trailers.



Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Tools & Platforms

OpenAI Backs AI-Animated Film for 2026 Cannes Festival

Published

on


OpenAI is backing the production of the first film largely animated with AI tools, set to Premiere at the 2026 Cannes Film Festival. Credit: Focal Foto / Wikimedia Commons / CC BY-SA 4.0

OpenAI is backing the production of the first film largely animated with AI tools, set to Premiere at the 2026 Cannes Film Festival. The tech company aims to prove its AI technology can revolutionize Hollywood filmmaking with faster production timelines and significantly lower costs. 

The movie titled “Critterz” will be about woodland creatures that go on an adventure after their village is damaged by a stranger. The film’s producers are aiming for a global theatrical release after the premiere at the Cannes Film Festival. 

The project has a budget of less than US$30 million and a production timeline of nine months. This is a comparable and significant difference, given that most mainstream animated movies have budgets in the range of US$100 to US$200 million, whilst also having a three-year development and production cycle. 

OpenAI-backed ‘Critterz’ set for release at the Cannes Film Festival

Chad Nelson, a creative specialist at OpenAI, originally began developing Critterz as a short film three years ago, using the company’s DALL-E image generation tool to develop the concept. Nelson has now partnered with the London-based Vertigo Films and studio Native Foreign in Los Angeles to expand the project into a feature film. 

In the news release that announced OpenAI’s backing of the film, Nelson said: “OpenAI can say what its tools do all day long, but it’s much more impactful if someone does it,” adding, “That’s a much better case study than me building a demo.” Crucially, however, the film’s production will not be entirely AI-generated, as it will blend AI technology with human work. 

Human artists will draw sketches that will be fed into OpenAI’s tools such as GPT-5, the Large Language Model (LLM) on which ChatGPT is built, as well as other image-generating AI models. Human actors will voice the characters. 

Critterz has some of the writing team behind the smash hit ‘Paddington in Peru’

Despite having some of the writing team behind the hit film Paddington in Peru, it comes at a time of intense legal fights between Hollywood studios and AI and other tech companies over intellectual property rights. 

Studios such as Disney, Universal, and Warner Bros. have filed for copyright infringement suits against Midjourney, another AI firm, alleging that they illegally used their characters to train its image generation engine. Critterz will be funded by Vertigo’s Paris-based parent company, Federation Studios, with some 30 contributors set to share profits. 

Crucially, however, Critterz will not be the first feature film ever made with generative AI. Last year, “DreadClub: Vampire’s Verdict” was released and is widely considered to be the first feature film entirely made by generative AI. It had a budget of US$405. 



Source link

Continue Reading

Tools & Platforms

AI Lies Because It’s Telling You What It Thinks You Want to Hear

Published

on


Generative AI is popular for a variety of reasons, but with that popularity comes a serious problem. These chatbots often deliver incorrect information to people looking for answers. Why does this happen? It comes down to telling people what they want to hear.  

While many generative AI tools and chatbots have mastered sounding convincing and all-knowing, new research conducted by Princeton University shows that the people-pleasing nature of AI comes at a steep price. As these systems become more popular, they become more indifferent to the truth. 

AI models, like people, respond to incentives. Compare the problem of large language models producing inaccurate information to that of doctors being more likely to prescribe addictive painkillers when they’re evaluated based on how well they manage patients’ pain. An incentive to solve one problem (pain) led to another problem (overprescribing).

AI Atlas art badge tag

In the past few months, we’ve seen how AI can be biased and even cause psychosis. There was a lot of talk about AI “sycophancy,” when an AI chatbot is quick to flatter or agree with you, with OpenAI’s GPT-4o model. But this particular phenomenon, which the researchers call “machine bullshit,” is different. 

“[N]either hallucination nor sycophancy fully capture the broad range of systematic untruthful behaviors commonly exhibited by LLMs,” the Princeton study reads. “For instance, outputs employing partial truths or ambiguous language — such as the paltering and weasel-word examples — represent neither hallucination nor sycophancy but closely align with the concept of bullshit.”

Read more: OpenAI CEO Sam Altman Believes We’re in an AI Bubble

Don’t miss any of CNET’s unbiased tech content and lab-based reviews. Add us as a preferred Google source on Chrome.

How machines learn to lie

To get a sense of how AI language models become crowd pleasers, we must understand how large language models are trained. 

There are three phases of training LLMs:

  • Pretraining, in which models learn from massive amounts of data collected from the internet, books or other sources.
  • Instruction fine-tuning, in which models are taught to respond to instructions or prompts.
  • Reinforcement learning from human feedback, in which they’re refined to produce responses closer to what people want or like.

The Princeton researchers found the root of the AI misinformation tendency is the reinforcement learning from human feedback, or RLHF, phase. In the initial stages, the AI models are simply learning to predict statistically likely text chains from massive datasets. But then they’re fine-tuned to maximize user satisfaction. Which means these models are essentially learning to generate responses that earn thumbs-up ratings from human evaluators. 

LLMs try to appease the user, creating a conflict when the models produce answers that people will rate highly, rather than produce truthful, factual answers. 

Vincent Conitzer, a professor of computer science at Carnegie Mellon University who was not affiliated with the study, said companies want users to continue “enjoying” this technology and its answers, but that might not always be what’s good for us. 

“Historically, these systems have not been good at saying, ‘I just don’t know the answer,’ and when they don’t know the answer, they just make stuff up,” Conitzer said. “Kind of like a student on an exam that says, well, if I say I don’t know the answer, I’m certainly not getting any points for this question, so I might as well try something. The way these systems are rewarded or trained is somewhat similar.” 

The Princeton team developed a “bullshit index” to measure and compare an AI model’s internal confidence in a statement with what it actually tells users. When these two measures diverge significantly, it indicates the system is making claims independent of what it actually “believes” to be true to satisfy the user.

The team’s experiments revealed that after RLHF training, the index nearly doubled from 0.38 to close to 1.0. Simultaneously, user satisfaction increased by 48%. The models had learned to manipulate human evaluators rather than provide accurate information. In essence, the LLMs were “bullshitting,” and people preferred it.

Getting AI to be honest 

Jaime Fernández Fisac and his team at Princeton introduced this concept to describe how modern AI models skirt around the truth. Drawing from philosopher Harry Frankfurt’s influential essay “On Bullshit,” they use this term to distinguish this LLM behavior from honest mistakes and outright lies.

The Princeton researchers identified five distinct forms of this behavior:

  • Empty rhetoric: Flowery language that adds no substance to responses.
  • Weasel words: Vague qualifiers like “studies suggest” or “in some cases” that dodge firm statements.
  • Paltering: Using selective true statements to mislead, such as highlighting an investment’s “strong historical returns” while omitting high risks.
  • Unverified claims: Making assertions without evidence or credible support.
  • Sycophancy: Insincere flattery and agreement to please.

To address the issues of truth-indifferent AI, the research team developed a new method of training, “Reinforcement Learning from Hindsight Simulation,” which evaluates AI responses based on their long-term outcomes rather than immediate satisfaction. Instead of asking, “Does this answer make the user happy right now?” the system considers, “Will following this advice actually help the user achieve their goals?”

This approach takes into account the potential future consequences of the AI advice, a tricky prediction that the researchers addressed by using additional AI models to simulate likely outcomes. Early testing showed promising results, with user satisfaction and actual utility improving when systems are trained this way.

Conitzer said, however, that LLMs are likely to continue being flawed. Because these systems are trained by feeding them lots of text data, there’s no way to ensure that the answer they give makes sense and is accurate every time.

“It’s amazing that it works at all but it’s going to be flawed in some ways,” he said. “I don’t see any sort of definitive way that somebody in the next year or two … has this brilliant insight, and then it never gets anything wrong anymore.”

AI systems are becoming part of our daily lives so it will be key to understand how LLMs work. How do developers balance user satisfaction with truthfulness? What other domains might face similar trade-offs between short-term approval and long-term outcomes? And as these systems become more capable of sophisticated reasoning about human psychology, how do we ensure they use those abilities responsibly?

Read more: ‘Machines Can’t Think for You.’ How Learning Is Changing in the Age of AI





Source link

Continue Reading

Tools & Platforms

AI: The Church’s Response to the New Technological Revolution

Published

on


Artificial intelligence (AI) is transforming everyday life, the economy, and culture at an unprecedented speed. Capable of processing vast amounts of data, mimicking human reasoning, learning, and making decisions, this technology is already part of our daily lives: from recommendations on Netflix and Amazon to medical diagnoses and virtual assistants.

But its impact goes far beyond convenience or productivity. Just as with the Industrial Revolution, the digital revolution raises social, ethical, and spiritual questions. The big question is:  How can we ensure that AI serves the common good without compromising human dignity?

A change of era

Pope Francis has described artificial intelligence as a true “epochal change,” and his successor, Pope Leo XIV, has emphasized both its enormous potential and its risks. There is even talk of a future encyclical entitled Rerum Digitalium, inspired by the historic Rerum Novarum of 1891, to offer moral guidance in the face of the “new things” of our time.

The Vatican insists that AI should not replace human work, but rather enhance it. It must be used prudently and wisely, always putting people at the centre. The risks of inequalities, misinformation, job losses, and military uses of this technology necessitate clear limits and global regulations.

The social doctrine of the Church and AI

The Church proposes applying the four fundamental principles of social doctrine to artificial intelligence  :

  • Dignity of the person: the human being should never be treated as a means, but as an end in itself.

  • Common good: AI must ensure that everyone has access to its benefits, without exclusions.

  • Solidarity: Technological development must serve the most needy in particular.

  • Subsidiarity: problems should be solved at the level closest to the people.

Added to this are the values ​​of truth, freedom, justice, and love , which guide any technological innovation towards authentic progress.

Opportunities and risks

Artificial intelligence already offers advances in medicine, education, science, and communication. It can help combat hunger, climate change, or even better convey the Gospel. However, it also poses risks:

  • Massive job losses due to automation.

  • Human relationships replaced by fictitious digital links.

  • Threats to privacy and security.

  • Use of AI in autonomous weapons or disinformation campaigns.

Therefore, the Church emphasizes that AI is not a person: it has no soul, consciousness, or the capacity to love. It is merely a tool, powerful but always dependent on the purposes assigned to it by humans.

A call to responsibility

The Antiqua et nova (2025) document   reminds us that all technological progress must contribute to human dignity and the common good. Responsibility lies not only with governments or businesses, but also with each of us, in how we use these tools in our daily lives.

Artificial intelligence can be an engine of progress, but it can never be a substitute for humankind. No machine can experience love, forgiveness, mercy, or faith. Only in God can perfect intelligence and true happiness be found.



Source link

Continue Reading

Trending