
Tools & Platforms

THG Studios unveils Virtual Production with new AI-tech investment

As the artificial intelligence (AI) revolution gathers pace, THG Studios is proving to be a major early adopter. 

The UK e-commerce firm’s creative agency has launched its latest Virtual Production (VP) capability following further investment in technology, which it says is worth seven figures so far.

It notes the investment in cutting-edge technologies “will maximise brand budgets by unlocking a new level of flexibility and creative freedom”. 

So what is it? Its VP screen “offer[s] television-quality production at a fraction of the cost and time”. 
 
Blending physical and digital elements in real time, it claims to allow brands to “visualise and create scenes in a dynamic and efficient way”. 

Further, virtual production screens can be curated and edited in real time, including elements like lighting, time of day, weather, and location. 
 
THG Studios’ director of creative services, Cat Mellor, said: “Brands can now explore outer space in the morning, walk the streets of Tokyo city at noon, and traverse the Sahara Desert before dinner – all without leaving our studio. Beyond having complete creative licence to make the impossible possible, our Virtual Production brings precision and control to shoots, supported by our skilled supervisors and technical artists.” 
 
Mellor added: “At a time when brand budgets are more stretched than ever, our suite of Virtual Production and AI services save time, cut logistics, and reduce costs. We’ve seen brands saving up to 50% in total production time, as well as up to 40% in cost savings compared to traditional on-location shoots.”   
 
Disney Store has already used the tech, with a Disney spokesperson adding: “There’s something around pairing the immersiveness of the VP screen and the depth of a traditional set build to take our storytelling to the next level. The business result? Our Haunted Mansion teaser campaign has been the most engaged with social assets in five years.” 
 
The announcement follows the launch of THG Studios’ iLab in 2024, its in-house generative AI innovation hub that offers iGen, an AI image creation tool providing brands with a low-risk, high-impact environment in which to experiment with AI. 
 

Copyright © 2025 FashionNetwork.com All rights reserved.





Community Editorial Board: Considering Colorado’s AI law

Members of our Community Editorial Board, a group of community residents who are engaged with and passionate about local issues, respond to the following question: During the recent special session, Colorado legislators failed to agree on an update to the state’s yet-to-be-implemented artificial intelligence law, despite concerns from the tech industry that the current law will make compliance onerous. Your take?

Colorado’s artificial intelligence law, passed in 2024 but not yet in effect, aims to regulate high-risk AI systems by requiring companies to assess risk, disclose how AI is used and avoid discriminatory outcomes. But as its 2026 rollout approaches, tech companies and Governor Polis argue the rules are too vague and costly to implement. Polis has pushed for a delay to preserve Colorado’s competitiveness, and the Trump administration’s AI Action Plan has added pressure by threatening to withhold federal funds from states with “burdensome” AI laws. The failure to update the law reflects a deeper tension: how to regulate fast-moving technology without undercutting economic growth.

Progressive lawmakers want people to have rights to see, correct and challenge the data that AI systems use against them. If an algorithm denies you a job, a loan or health coverage, you should be able to understand why. On paper, this sounds straightforward. In practice, it runs into the way today’s AI systems actually work.

Large language models like ChatGPT illustrate the challenge. They don’t rely on fixed rules that can be traced line by line. Instead, they are trained on massive datasets and learn statistical patterns in language. Input text is broken into words or parts of a word (tokens), converted into numbers, and run through enormous matrices containing billions of learned weights. These weights capture how strongly tokens relate to one another and generate probabilities for what word is most likely to come next. From that distribution, the model picks an output, sometimes the top choice, sometimes a less likely one. In other words, there are two layers of uncertainty: first in the training data, which bakes human biases into the model, and then in the inference process, which selects from a range of outputs. The same input can therefore yield different results, and even when it doesn’t, there is no simple way to point to a specific line of data that caused the outcome. Transparency is elusive because auditing a model at this scale is less like tracing a flowchart and more like untangling billions of connections.
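The two layers of uncertainty described above — learned weights that produce a probability distribution, and a sampling step that draws from it — can be illustrated with a toy sketch. The vocabulary and probability table here are invented for illustration, not taken from any real model:

```python
import random

# Toy "model": a hypothetical table mapping a context token to
# next-token probabilities. In a real LLM these probabilities come
# from billions of trained weights, not a hand-written dictionary.
NEXT_TOKEN_PROBS = {
    "loan": {"approved": 0.55, "denied": 0.35, "pending": 0.10},
}

def sample_next_token(context: str, seed=None) -> str:
    """Draw the next token from the model's probability distribution."""
    probs = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*probs.items())
    rng = random.Random(seed)
    # Sampling step: the model does not always pick the top choice,
    # so the same input can yield different outputs on different runs.
    return rng.choices(tokens, weights=weights, k=1)[0]

# Same input, several runs: the set of outputs can vary with the seed.
outputs = {sample_next_token("loan", seed) for seed in range(5)}
print(outputs)
```

The nondeterminism lives entirely in the final sampling line, which is why tracing a particular output back to a particular piece of training data is so hard: the weights only shape the distribution, they do not pick the answer.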

These layers of uncertainty combine with two broader challenges. Research has not yet shown whether AI systems discriminate more or less than humans making similar decisions. The risks are real, but so is the uncertainty. And without federal rules, states are locked in competition. Companies can relocate to jurisdictions with looser standards. That puts Colorado in a bind: trying to protect consumers without losing its tech edge.

Here’s where I land: Regulating AI is difficult because neither lawmakers nor the engineers who build these systems can fully explain how specific outputs are produced. Still, in sensitive areas like housing, employment, or public benefits, companies should not be allowed to hide behind complexity. Full transparency may be impossible, but clear rules are not. Disclosure of AI use should be mandatory today, and liability should follow: If a system produces discriminatory results, the company should face lawsuits as it would for any other harmful product. It is striking that a technology whose outputs cannot be traced to clear causes is already in widespread use; in most industries, such a product would never be released, but AI has become too central to economic competitiveness to wait for full clarity. And since we lack evidence on whether AI is better or worse than human decision-making, banning it outright is not realistic. These models will remain an active area of research for years, and regulation will have to evolve with them. For now, disclosure should come first. The rest can wait, but delay must not become retreat.

Hernán Villanueva, chvillanuevap@gmail.com


Years ago, during a Senate hearing into Facebook, senators were grilling Mark Zuckerberg, and it was clear they had no idea how the internet works. One senator didn’t understand why Facebook had to run ads. It took Zuckerberg a minute to understand the senator’s question, as he couldn’t imagine anyone being that ignorant on the subject of the hearing! Yet these senators write and enact laws governing Facebook.

Society does a lot of that. Boulder does this with homelessness and climate change. They understand neither, yet create and pass laws, which, predictably, do nothing, or sometimes, make the problem worse. Colorado has done it before, as well, when it enacted a law requiring renewable energy and listed hydrogen as an energy source. Hydrogen is only an energy source when it is separated from oxygen, like in the sun. On Earth, hydrogen is always bound to another element and, therefore, it is not an energy source; it is an energy carrier. Colorado continued regulating things it doesn’t understand with the Colorado AI Act (CAIA), which shows a fundamental misunderstanding of how deep learning and Large Language Models, the central technologies of AI today, work.

The incentive to control malicious AI behavior is understandable. If AI companies were creating such behavior on purpose, let’s get after them. But they aren’t. Bias does exist in AI programs, though, and it comes from the data used to train the AI model. Biased in what way? Critics contend that loan applications are biased against people of color, even when a person’s race is not represented in the data. The bias isn’t based on race. It is possibly based on the person’s address, education or credit score. Banks want to bias applicants based on these factors. Why? Because they correlate with the applicant’s ability to pay back the loan.
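The point that bias can enter through correlated features rather than race itself can be sketched with a toy example. All names, features, and thresholds here are invented for illustration:

```python
# Toy illustration of proxy bias: race never appears in the data, but a
# correlated feature (an invented ZIP-code risk score) can still produce
# a disparate outcome. All numbers are hypothetical.
applicants = [
    {"name": "A", "zip_risk": 0.2, "credit_score": 710},
    {"name": "B", "zip_risk": 0.8, "credit_score": 705},
]

def approve(applicant, max_zip_risk=0.5, min_credit=680):
    """Screen on facially neutral features only."""
    return (applicant["zip_risk"] <= max_zip_risk
            and applicant["credit_score"] >= min_credit)

for a in applicants:
    print(a["name"], approve(a))
# Two applicants with nearly identical credit scores get different
# outcomes purely because of where they live: the proxy feature,
# not race itself, does the discriminating.
```

Whether such a proxy is legitimate underwriting or unlawful discrimination is exactly the question laws like the CAIA are trying to answer.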

If the CAIA makes it impossible for banks to legally use AI to screen loan applicants, are we better off? Have we eliminated bias? Absolutely not. If a human is involved, we have bias. In fact, our only hope to eliminate bias is with AI, though we aren’t there yet because of the aforementioned data issue. So we’d still have bias, but now loans would take longer to process.

Today, there is little demand for ditch diggers. We have backhoes and bulldozers that handle most of that work. These inventions put a lot of ditch diggers out of work. Are we, as a society, better for these inventions? I think so. AI might be fundamentally different from heavy equipment, but it might not be. AI is a tool that can help eliminate drudgery. It can speed up the reading of X-rays and CT scans, thereby giving us better medical care. AI won’t be perfect. Nothing created by humans can be. But we need to be cautious in slowing the development of these life-transforming tools.

Bill Wright, bill@wwwright.com






AI-assisted coding rises among Indian tech leaders: Report

India’s tech leaders are increasingly adopting coding tools backed by artificial intelligence (AI), with concerns over security and over-reliance driving strict review protocols, as per a new survey by online design platform Canva. The survey, conducted in August 2025, included 300 full-time technology leaders spanning the United States, the United Kingdom, Germany, France, India, and Australia.

Key findings from India

  • 100% of technology leaders reported using AI coding tools personally or for their organisation.
  • About 94% of developers use AI-assisted coding every day.
  • 84% of the respondents expect their organisation’s usage to increase significantly within the next year. About 72% cited productivity gains from the use of AI tools.

Governance is a must

  • As per the survey, all respondents took a strong view of governance while using AI tools for professional purposes.
  • 98% said all AI-generated code is put through peer review before going into production.
  • 92% flagged risks when deploying AI code without human oversight, especially on maintainability and security.
  • Most oversight responsibility lies with CTOs and CIOs, according to 72% of surveyed leaders.

Skills and hiring

In terms of upskilling and developing hiring trends:

  • 98% of respondents believe AI is transforming developer skillsets, and all leaders were comfortable with candidates using AI tools during technical interviews.
  • About 28% flagged concerns such as over-reliance without accountability and compliance exposure. About 20% also said AI tools may lead to junior staff struggling to develop traditional skills.

Canva’s CTO Brendan Humphreys emphasised the need for humans to leverage AI as an enhancement, not a replacement. “When paired with human judgment and expertise, it unlocks significant benefits — from rapid prototyping to faster development cycles and greater productivity.”






AI-Enabled Heart Diagnostics Put HeartFlow (HTFL) in the Spotlight

HeartFlow, Inc. (NASDAQ:HTFL) is one of the AI Stocks to Watch Out For in 2025. On September 2, JPMorgan analyst Robbie Marcus initiated coverage of the stock with an Overweight rating and a price target of $36.00.

The firm believes that Heartflow is “one of the clearest and most pioneering downstream beneficiaries of the AI revolution in the healthcare sector.”

The company uses artificial intelligence to help diagnose heart disease. Through its technology, doctors can gain better insight into a patient’s heart while reducing costs and improving workflow.


JPMorgan believes that even though investors are rightly focused on the hardware, data, and infrastructure driving modern artificial intelligence, HTFL stands out as a clear and pioneering downstream beneficiary of the AI revolution in healthcare.

“The company operates at the unique intersection of regulated healthcare and technology through its novel, AI-enabled, diagnostic software for coronary artery disease (CAD). The novel platform uses AI and advanced computational fluid dynamics to create a personalized 3D model of a patient’s heart based off of a single coronary computed tomography angiography (CCTA). Heartflow is one of the first MedTech companies to offer a clinically meaningful and reimbursed product that leverages modern day computing power and machine learning/AI to address a clinical unmet need in coronary artery disease (CAD), while simultaneously offering the potential to streamline procedural workflow and lower overall cost of care. As one of the few pure-play software MedTech businesses, Heartflow benefits from a capital-light business model, an excellent gross margin profile, clinical/regulatory moat with healthy reimbursement, and a highly differentiated diagnostic solution in a large, untapped market.”

HeartFlow, Inc. (NASDAQ:HTFL) is a medical technology company.



Disclosure: None.




