How can we create a sustainable AI future?

With innovation comes impact. The social media revolution changed how we share content and how we buy, sell and learn, but it also raised questions about technology misuse, censorship and protection. Every step forward brings new challenges to tackle, and AI is no different.

One of the major challenges for AI is its energy consumption. Data centers and AI together currently use between 1% and 2% of the world’s electricity, and this figure is rising fast.



The Future of Emerging AI Solutions

AI has captivated industries with promises to redefine efficiency, innovation and decision-making. Some of the nation’s biggest companies, including Microsoft, Meta and Amazon, are projected to pour an astonishing $320 billion into AI in 2025. As remarkable as these developments are, the technology’s swift evolution has exposed significant challenges. These issues aren’t insurmountable, but navigating them requires careful consideration and a smart strategy. Take data depletion, for example: one of the more pressing concerns fueled by AI’s rapid rise.

AI systems are trained on enormous datasets, but they’re now consuming high-quality, human-generated data faster than it can be created. A shortage of diverse, reliable content could hinder the long-term sustainability of model training. Synthetic data offers one potential solution, but it comes with its own set of risks, including quality degradation and bias reinforcement. Another emerging path is agentic AI, which learns more like humans and adapts in real time without relying solely on static datasets.

Given all the options, high-tech companies’ eagerness to explore these emerging technologies is understandable, but it’s critical to avoid the bandwagon effect when considering new solutions. Before jumping headfirst into the AI race, organizations need to understand not just what’s possible, but what’s sustainable.

Develop a Clear AI Strategy to Pursue Right-Fit Solutions

It’s not just AI itself but the diverse potential of its applications that has enticed countless companies to jump on board. Tales of instant success, however, are rare across the spectrum of AI offerings. A baby-steps approach is the rule rather than the exception: a recent Deloitte survey found that only 4% of enterprises pursuing AI are actively piloting or implementing agentic AI systems. Organizations that adopt AI for trendiness rather than with intention often find themselves stuck in the trial phase with little to show for their efforts. Scattered approaches lead to wasted resources, siloed projects and negligible ROI.

Businesses that align their initiatives with core objectives are better positioned to unlock AI’s potential. A successful strategy focuses on solving tangible problems, not indulging in alluring technology for appearance’s sake. Comprehensive plans should include solutions that automate routine tasks, such as document processing or repetitive workflows, and tools that enhance decision-making by leveraging advanced data models to predict outcomes.

AI strategies should also embrace technology as a way to strengthen the workforce by augmenting human intelligence rather than replacing it. For example, agentic AI can play a pivotal role in enhancing sales operations as agents can autonomously engage with prospects, answer questions and even close deals — all while collaborating with human colleagues. This human-AI partnership delivers greater efficiency and personalization. Unlike reactive bots, agentic models facilitate meaningful, refined outcomes while retaining emotional intelligence.

Strategies Should Combat Data Depletion and Protect Existing High-Quality Data 

AI’s ravenous appetite for data is raising alarms across industries. Researchers predict the supply of human-generated internet data suitable for training expansive AI models will be exhausted between 2026 and 2032, creating an innovation bottleneck with big potential implications.

AI strategies must recognize that the technology’s value lies in its ability to interpret complex scenarios and conditions. Without the right training data, AI’s outputs risk becoming narrow, biased or obsolete. High-quality, diverse datasets are essential to building reliable models that reflect real-world diversity and nuance.

Amid the looming data drought, synthetic data offers a glimmer of hope. Companies can generate artificial data that mirrors real-world situations, potentially offsetting proprietary content limitations and enabling task-specific datasets. While promising, synthetic data comes with its own drawbacks, chief among them quality decay, also known as model collapse: continuously training AI on AI-generated content degrades performance over time, much as repeatedly photocopying a photocopy erodes the original image.
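
To make the photocopy analogy concrete, the following is a minimal, deliberately simplified sketch of the feedback loop behind model collapse. The “model” here is just a fitted normal distribution, a toy stand-in for a real generative model; each generation trains only on samples drawn from the previous generation’s fit, so estimation error compounds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0 trains on "real" human-generated data.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(15):
    # "Train" a toy generative model: fit the sample's parameters.
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # The next generation sees only this model's synthetic output,
    # never the original data.
    data = rng.normal(loc=mu, scale=sigma, size=200)
```

Run over many generations, the fitted parameters drift and the variance tends to shrink, so the toy model progressively forgets the tails of the original distribution: the same degradation, in miniature, described above.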

Beyond exploring options to generate new data, high-tech businesses must also ensure their strategies prioritize the security of existing datasets. Poor data hygiene, errors and accidental deletions can derail AI operations and lead to costly setbacks. Samsung Securities, for example, once issued $100 billion worth of phantom shares because of a single input error. By the time the mistake was caught, employees had already sold approximately $300 million in nonexistent stock, triggering major financial and reputational fallout for the firm.

Protecting data assets means building a sturdy governance framework that includes regular backups, fail-safe protocols and continuous data audits to create an operational safety net. Additionally, investing in advanced cybersecurity mitigates risks like data breaches or external attacks, safeguarding a company’s most valued digital assets.
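
As an illustration of the fail-safe protocols described above, here is a minimal sketch of a pre-execution plausibility check. The field names, limits and share counts are illustrative assumptions, not a production rule set or Samsung’s actual systems; the point is that a guard comparing an instruction’s magnitude against known bounds can block a catastrophic input error before it executes.

```python
from dataclasses import dataclass

@dataclass
class ShareIssuance:
    ticker: str
    quantity: int

# Illustrative values; a real system would load these from governance policy.
OUTSTANDING_SHARES = {"SAMSUNG_SEC": 89_000_000}
MAX_ISSUANCE_RATIO = 0.01  # never issue more than 1% of the float in one action

def validate_issuance(order: ShareIssuance) -> None:
    """Fail-safe check: reject orders that are implausibly large."""
    outstanding = OUTSTANDING_SHARES.get(order.ticker)
    if outstanding is None:
        raise ValueError(f"unknown ticker {order.ticker!r}")
    if order.quantity <= 0:
        raise ValueError("quantity must be positive")
    if order.quantity > outstanding * MAX_ISSUANCE_RATIO:
        raise ValueError(
            f"{order.quantity:,} shares exceeds the plausibility limit; "
            "manual review required"
        )

# A fat-fingered order several orders of magnitude too large is blocked.
try:
    validate_issuance(ShareIssuance("SAMSUNG_SEC", 2_800_000_000))
except ValueError as err:
    print(f"order blocked: {err}")
```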

Preparing for an AI-Driven Future 

The incoming wave of AI success belongs to organizations that blend innovation with intentionality. Businesses that resist hype and take a grounded approach to sustainable transformation stand the best chance of maximizing emerging technology’s potential.

The development of a true, proactive AI strategy hinges on the successful alignment of innovation with clear business objectives and measurable goals. Prioritizing high-quality, diverse datasets ensures accurate, unbiased AI decision-making, while exploring solutions like synthetic data can combat various risks, such as data depletion. AI is reshaping industries with unprecedented momentum. By acting deliberately and ethically, high-tech businesses can turn this technological watershed moment into a long-term competitive advantage.

Vidu updates Q1 AI video generation model to handle up to seven image inputs

Vidu AI, a generative artificial intelligence video platform developed by Chinese firm ShengShu Technology, today announced an update to its latest Q1 model, adding an advanced “reference-to-video” capability powered by semantic understanding.

The company is developing a generative video AI model that competes with OpenAI’s Sora and can produce vivid video sequences. The update provides richer context for producing scenes involving multiple elements that must remain consistent from frame to frame and from clip to clip.

Users can now upload up to seven reference images along with a prompt that combines them into a scene. The model uses what the company calls “semantic understanding” to relate the images to the text prompt, and it can even infer missing elements to generate key objects.

“This update breaks through the limits of what creators thought they could do with AI video,” said Chief Executive Luo Yihang. “We’re getting closer to enabling users to create fully realized scenes, complete with a detailed cast of characters, objects, and backgrounds, by expanding multi-image referencing to support up to seven inputs.”

For example, a user could upload an image of a young woman in a green dress, an idyllic forest scene and an owl, then input the prompt: “The woman plays the violin in the forest while the owl flies down and lands on a nearby branch at sunrise.”

Luo said the Vidu Q1 semantic core engine will generate a violin in her hands, preserving scene consistency and narrative quality throughout the clip. With this technology, creators no longer face steep technical hurdles when building complex scenes; a text prompt and a handful of reference images are all they need to produce consistent video.
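
For readers who want to picture that workflow programmatically, here is a hypothetical sketch of what a multi-reference request could look like. Vidu’s actual developer API is not described in this article; the endpoint, field names and parameters below are invented for illustration only.

```python
import requests

# Hypothetical endpoint and schema, for illustration only; consult
# Vidu's real developer documentation for the actual API.
API_URL = "https://api.example-vidu.com/v1/reference-to-video"

payload = {
    "model": "vidu-q1",
    # Up to seven reference images, per the announcement.
    "reference_images": [
        "https://example.com/woman_green_dress.png",
        "https://example.com/forest_scene.png",
        "https://example.com/owl.png",
    ],
    "prompt": (
        "The woman plays the violin in the forest while the owl "
        "flies down and lands on a nearby branch at sunrise."
    ),
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=60,
)
response.raise_for_status()
print(response.json())  # e.g., a job ID to poll for the finished clip
```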

Vidu is competing with Google LLC’s Veo 3, released in late May. Veo 3 accepts natural English prompts and reference images, and it ships alongside a filmmaking tool called Flow that lets users manage narrative design and develop entire short AI-generated films with visuals, special effects and audio, including speech.

ShengShu announced a partnership with Los Angeles-based animation studio Aura Productions in late March to release a 50-episode short film sci-fi anime series fully generated by AI. The project seeks to redefine digital entertainment by using AI capabilities to augment traditional narrative techniques. It is slated for release across major social media platforms this year.

“AI is no longer just a tool; it’s a creative enhancement that allows us to scale production while maintaining artistic integrity,” D.T. Carpenter, showrunner at Aura, told Variety.

Image: Vidu AI

As Congress Releases the AI Regulatory Hounds, A Reminder

The centerpiece of the so-called “One Big Beautiful Bill” in tech policy circles was the “AI moratorium,” a temporary federal limit on state regulation of artificial intelligence. The loss of the AI moratorium, stripped from the bill in the Senate, elicited howls of derision from AI-focused policy experts such as the indefatigable Adam Thierer. But the moratorium debate may have distracted from an important principle: Regulation should be technology neutral. The concept of AI regulation is essentially broken, and neither states nor Congress should regulate AI as such.

Nothing is straightforward. The AI moratorium was not a moratorium at all. Contorted to fit into a budget reconciliation bill, it was meant to disincentivize regulation by withholding federal money for 10 years from states that are “limiting, restricting, or otherwise regulating artificial intelligence models.”

It is economically unwise for states to regulate products and services offered nationally or globally. When they do so unevenly, a thicket of regulations and lost innovation is likely. Compliance costs rise disproportionately relative to the benefits of protections that more efficient laws could achieve.

But I’m ordinarily a stout defender of the decentralized system created by our Constitution. I believe it is politically unwise to move power to remote levels of government. With Geoff Manne, I’ve written about avoiding burdensome state regulation through contracts rather than preemption of state law.  So before the House AI Task Force’s meeting to consider federalism and preemption, I was in the “mushy middle.”

With the moratorium gone, federal AI regulation would justify preempting the states, giving us efficient regulation, right? Nothing is straightforward.

Nobody—including at the federal level—actually knows what they are trying to regulate. Take a look at the definition of AI in the Colorado legislation, famously signed yet lamented by tech-savvy governor Jared Polis. In Colorado, “Artificial Intelligence System” means

any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.

Try excluding an ordinary light switch from the definition; you must wrestle with semantics. I’m struck by the meaningless dualities. Take “explicit or implicit objective.” Is there a third category? Or are these words meant to conjure some unidentified actor’s intent? See also “physical or virtual environments.” (Do you want to change all four tires? No, just the front two and back two.) Someone thought extra words would add meaning, but they actually confess its absence.

Defining AI is fraught because “artificial intelligence” is a marketing term, not a technology. For policymaking purposes, it’s an “anti-concept.” When “AI” took flight in the media, countless tech companies put it on their websites and in their sales pitches. That doesn’t mean that AI is an identifiable, regulatable thing.

So pieces of legislation like those in Colorado, New York, and Texas use word salads to regulate anything that amounts to computer-aided decision-making. Doing so will absorb countless hours as technologists and businesspeople consult with lawyers to parse statutes rather than building better products. And just think of the costs and complexities—and the abuses—when these laws turn out to regulate all decision-making that involves computers.

Technologies and marketing terms change rapidly. Human interests don’t. That’s why technology-neutral regulation is the best form—regulation that punishes bad outcomes no matter the means. Even before this age of activist legislatures, the law already barred killing people, whether with a hammer, an automobile, an automated threshing machine, or some machine that runs “AI.”

The Colorado legislation is a gaudy, complex, technology-specific effort to prevent wrongful discrimination. That is better done by barring discrimination as such, a complex problem even without the AI overlay. New York’s legislation is meant to help ensure that AI doesn’t kill people—a tiny but grossly hyped possibility. Delaying the adoption of AI through regulations like New York’s will probably kill more people (statistically, by denying life-extending innovations) than the regulations save.

Texas—well, who knows what the Texas bill is trying to do.

The demise of the AI moratorium will incline some to think that federal AI regulation is the path forward because it may preempt unwise state regulation. But federal regulation would not be any better. It would be worse in an important respect—slower and less likely to change with experience.

The principle of technology-neutral regulation suggests that there should not be any AI regulation at all. Rather, the law should address wrongs as wrongs no matter what instruments or technologies have a role in causing them.


