Tools & Platforms

Vidu updates Q1 AI video generation model to handle up to seven image inputs


Vidu AI, a generative artificial intelligence video platform developed by Chinese firm ShengShu Technology, today announced an update to its latest Q1 model that adds an advanced “reference-to-video” capability powered by semantic understanding.

The company is developing a generative video model that competes with OpenAI’s Sora in producing vivid video sequences. The update supplies richer context for generating scenes in which multiple elements must stay consistent from frame to frame and from clip to clip.

Users can now upload up to seven reference images along with a text prompt that combines them into a single scene. The model uses what the company calls “semantic understanding” to relate the images to the prompt and can even infer missing elements in order to generate key objects.

“This update breaks through the limits of what creators thought they could do with AI video,” said Chief Executive Luo Yihang. “We’re getting closer to enabling users to create fully realized scenes, complete with a detailed cast of characters, objects, and backgrounds, by expanding multi-image referencing to support up to seven inputs.”

For example, a user could upload images of a young woman in a green dress, an idyllic forest scene and an owl, then enter the prompt: “The woman plays the violin in the forest while the owl flies down and lands on a nearby branch at sunrise.”

Luo said the Vidu Q1 semantic core engine will generate a violin in her hands, preserving scene consistency and narrative quality throughout the clip. With this capability, creators no longer face steep technical hurdles when building complex scenes: a text prompt and a handful of reference images are all they need to produce consistent video.
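The article does not document a programming interface, but purely as an illustration, the workflow described above could be scripted as a request to a hypothetical reference-to-video endpoint. The URL, field names, model identifier and VIDEO_API_KEY environment variable below are placeholders, not Vidu’s actual API.

```python
# Hypothetical sketch of a multi-reference "reference-to-video" request.
# Endpoint, field names and auth scheme are illustrative assumptions only.
import os
import requests

API_URL = "https://api.example.com/v1/reference-to-video"  # placeholder endpoint
API_KEY = os.environ["VIDEO_API_KEY"]                       # placeholder credential

payload = {
    "model": "q1",  # placeholder model identifier
    # Up to seven reference images, per the update described above.
    "reference_images": [
        "https://example.com/woman_green_dress.png",
        "https://example.com/forest_scene.png",
        "https://example.com/owl.png",
    ],
    "prompt": (
        "The woman plays the violin in the forest while the owl flies down "
        "and lands on a nearby branch at sunrise."
    ),
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
response.raise_for_status()
print(response.json())  # e.g. a job ID to poll for the rendered clip
```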

Vidu also competes with Google LLC’s Veo 3, released in late May. Veo 3 accepts natural-language prompts and reference images, and it ships alongside a filmmaking tool called Flow, which lets users manage narrative design and develop entire short AI-generated films with visuals, special effects and audio, including speech.

ShengShu announced a partnership with Los Angeles-based animation studio Aura Productions in late March to release a 50-episode short-form sci-fi anime series fully generated by AI. The project seeks to redefine digital entertainment by using AI capabilities to augment traditional narrative techniques. It is slated for release across major social media platforms this year.

“AI is no longer just a tool; it’s a creative enhancement that allows us to scale production while maintaining artistic integrity,” D.T. Carpenter, showrunner at Aura, told Variety about the project.

Image: Vidu AI




Source link

Tools & Platforms

Anthropic Taps Higher Education Leaders for Guidance on AI


The artificial intelligence company Anthropic is working with six leaders in higher education to help guide how its AI assistant Claude will be developed for teaching, learning and research. The new Higher Education Advisory Board, announced in August, will provide regular input on educational tools and policies.

According to a news release from Anthropic, the board is tasked with ensuring that AI “strengthens rather than undermines learning and critical thinking skills” through policies and products that support academic integrity and student privacy.

As teachers adapt to AI, ed-tech leaders have called for educators to play an active role in aligning AI to educational standards.


“Teachers and educators and administrators should be in the decision-making seat at every critical decision-making point when AI is being used in education,” Isabella Zachariah, formerly a fellow at the U.S. Department of Education’s Office of Educational Technology, said at the EDUCAUSE conference in October 2024. The Office of Educational Technology has since been shuttered by the Trump administration.

To this end, advisory boards or councils involving educators have emerged in recent years among ed-tech companies and institutions seeking to ground AI deployments in classroom experiences. For example, the K-12 software company Otus formed an AI advisory board earlier this year with teachers, principals, instructional technology specialists and district administrators representing more than 20 school districts across 11 states. Similarly, software company Frontline Education launched an AI advisory council last month to allow district leaders to participate in pilots and influence product design choices.

The Anthropic board taps experts in the education, nonprofit and technology sectors, including two former university presidents and three campus technology leaders. Rick Levin, former president of Yale University and CEO of Coursera, will serve as board chair. Other members include:

  • David Leebron, former president of Rice University
  • James DeVaney, associate vice provost for academic innovation at the University of Michigan
  • Julie Schell, assistant vice provost of academic technology at the University of Texas at Austin
  • Matthew Rascoff, vice provost for digital education at Stanford University
  • Yolanda Watson Spiva, president of Complete College America

The board contributed to a recent trio of AI fluency courses for colleges and universities, according to the news release. The online courses aim to give students and faculty a foundation in the function, limitations and potential uses of large language models in academic settings.

Schell said she joined the advisory board to explore how technology can address persistent challenges in learning.

“Sometimes we forget how cognitively taxing it is to really learn something deeply and meaningfully,” she said. “Throughout my career, I’ve been excited about the different ways that technology can help accentuate best practices in teaching or pedagogy. My mantra has always been pedagogy first, technology second.”

In her work at UT Austin, Schell has focused on responsible use of AI and engaged with faculty, staff, students and the general public to develop guiding principles. She said she hopes to bring the feedback from the community, as well as education science, to regular meetings. She said she participated in vetting existing Anthropic ed-tech tools, like Claude Learning mode, with this in mind.

In the weeks since the board’s announcement, the group has met once, Schell said, and expects to meet regularly in the future.

“I think it’s important to have informed people who understand teaching and learning advising responsible adoption of AI for teaching and learning,” Schell said. “It might look different than other industries.”

Abby Sourwine is a staff writer for the Center for Digital Education. She has a bachelor’s degree in journalism from the University of Oregon and worked in local news before joining the e.Republic team. She is currently located in San Diego, California.





Source link


Tools & Platforms

Duke AI program emphasizes critical thinking for job security


Duke’s AI program is spearheaded by a professor who is not just teaching; he has also built his own AI model.

Professor Jon Reifschneider says we’ve already entered a new era of teaching and learning across disciplines.

He says, “We have folks that go into healthcare after they graduate, go into finance, energy, education, etc. We want them to bring with them a set of skills and knowledge in AI, so that they can figure out: ‘How can I go solve problems in my field using AI?'”

He wants his students to become literate in AI, which is a challenge in a field he describes as a moving target. 

“I think for most people, AI is kind of a mysterious black box that can do somewhat magical things, and I think that’s very risky to think that way, because you don’t develop an appreciation of when you should use it and when you shouldn’t use it,” Reifschneider told WRAL News.

Student Harshitha Rasamsetty said she is learning the strengths and shortcomings of AI.

“We always look at the biases and privacy concerns and always consider the user,” she said.

The students in Duke’s engineering master’s programs come from a wide range of backgrounds, countries and ages. Jared Bailey paused his insurance career in Florida to get a handle on the AI being deployed company-wide.

He was already using AI tools when he wondered, “What if I could crack them open and adjust them myself and make them better?”

John Ernest studied engineering in undergrad, but sought job security in AI.

“I hear news every day that AI is replacing this job, AI is replacing that job,” he said. “I came to a conclusion that I should be a part of a person building AI, not be a part of a person getting replaced by AI.”

Reifschneider thinks warnings about AI taking jobs are overblown.

In fact, he wants his students to come away understanding that humans have a quality AI can’t replace: critical thinking.

Reifschneider says AI “still relies on humans to guide it in the right direction, to give it the right prompts, to ask the right questions, to give it the right instructions.”

“If you can’t think, well, AI can’t take you very far,” Bailey said. “It’s a car with no gas.”

Reifschneider told WRAL that he thinks children as young as elementary school students should begin learning how to use AI, when it’s appropriate to do so, and how to use it safely.

WRAL News went inside Wake County schools to see how AI is being used and what safeguards the district has in place to protect students. Watch that story Wednesday on WRAL News.



Source link


Tools & Platforms

WA state schools superintendent seeks $10M for AI in classrooms


This article originally appeared on TVW News.

Washington’s top K-12 official is asking lawmakers to bankroll a statewide push to bring artificial intelligence tools and training into classrooms in 2026, even as new test data show slow, uneven academic recovery and persistent achievement gaps.

Superintendent of Public Instruction Chris Reykdal told TVW’s Inside Olympia that he will request about $10 million in the upcoming supplemental budget for a statewide pilot program to purchase AI tutoring tools — beginning with math — and fund teacher training. He urged legislators to protect education from cuts, make structural changes to the tax code and act boldly rather than leaving local districts to fend for themselves. “If you’re not willing to make those changes, don’t take it out on kids,” Reykdal said.

The funding push comes as new Smarter Balanced assessment results show gradual improvement but highlight persistent inequities. State test scores have ticked upward, and student progress rates between grades are now mirroring pre-pandemic trends. Still, higher-poverty communities are not improving as quickly as more affluent peers. About 57% of eighth graders met foundational math progress benchmarks — better than most states, Reykdal noted, but still leaving four in 10 students short of university-ready standards by 10th grade.

Reykdal cautioned against reading too much into a single exam, emphasizing that Washington consistently ranks near the top among peer states. He argued that overall college-going rates among public school students show they are more prepared than the test suggests. “Don’t grade the workload — grade the thinking,” he said.

Artificial intelligence, Reykdal said, has moved beyond the margins and into the mainstream of daily teaching and learning: “AI is in the middle of everything, because students are making it in a big way. Teachers are doing it. We’re doing it in our everyday lives.”

OSPI has issued human-centered AI guidance and directed districts to update technology policies, clarifying how AI can be used responsibly and what constitutes academic dishonesty. Reykdal warned against long-term contracts with unproven vendors, but said larger platforms with stronger privacy practices will likely endure. He framed AI as a tool for expanding customized learning and preparing students for the labor market, while acknowledging the need to teach ethical use.

Reykdal pressed lawmakers to think more like executives anticipating global competition rather than waiting for perfect solutions. “If you wait until it’s perfect, it will be a decade from now, and the inequalities will be massive,” he said.

With test scores climbing slowly and AI transforming classrooms, Reykdal said the Legislature’s next steps will be decisive in shaping whether Washington narrows achievement gaps — or lets them widen.

TVW News originally published this article on Sept. 11, 2025.


Paul W. Taylor is programming and external media manager at TVW News in Olympia.



Source link
