AI Research
A model for ‘art-grade’ 3D assets
Tencent has released a model that could be quite literally game-changing for how developers create 3D assets.
The new Hunyuan3D-PolyGen model is Tencent’s first attempt at delivering what they’re calling “art-grade” 3D generation, specifically built for the professionals who craft the digital worlds we play in.
Creating high-quality 3D assets has always been a bottleneck for game developers, with artists spending countless hours perfecting wireframes and wrestling with complex geometry. Tencent reckons they’ve found a way to tackle these headaches head-on, potentially transforming how studios approach asset creation entirely.
Levelling up 3D asset generation
The secret sauce behind Hunyuan3D-PolyGen lies in what Tencent calls BPT technology. In layman’s terms, it means they’ve figured out how to compress massive amounts of 3D data without losing the detail that matters. In practice, that means it’s possible to generate 3D assets with tens of thousands of polygons that actually look professional enough to ship in a commercial game.
What’s particularly clever is how the system handles both triangular and quadrilateral faces. If you’ve ever tried to move 3D assets between different software packages, you’ll know why this matters. Different engines and tools have their preferences, and compatibility issues have historically been a nightmare for studios trying to streamline their workflows.
According to technical documentation, the system utilises an autoregressive mesh generation framework that performs spatial inference through explicit and discrete vertex and patch modelling. This approach ensures the production of high-quality 3D models that meet stringent artistic specifications demanded by commercial game development.
Hunyuan3D-PolyGen works through what’s essentially a three-step dance. First, it takes existing 3D meshes and converts them into a language the AI can understand.
Using point cloud data as a starting point, the system then generates new mesh instructions using techniques borrowed from natural language processing. It’s like teaching the AI to speak in 3D geometry, predicting what should come next based on what it’s already created.
Finally, the system translates these instructions back into actual 3D meshes, complete with all the vertices and faces that make up the final model. The whole process maintains geometric integrity while producing results that would make any technical artist nod in approval.
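For the technically curious, here is a minimal sketch of that tokenise-generate-decode flow in Python. Every name in it (the tokenizer, the model, and their methods) is a hypothetical stand-in for illustration; Tencent has not published this interface.

```python
# Hypothetical sketch of the three-step flow described above; the names are
# illustrative stand-ins, not Tencent's actual API.

def generate_mesh(point_cloud, tokenizer, model, max_tokens=4096):
    """Condition on a point cloud, predict mesh tokens, decode to geometry."""
    # Steps 1-2: existing meshes are tokenised into a discrete "vocabulary"
    # during training; at generation time the model predicts the next mesh
    # token from the point-cloud condition plus everything generated so far,
    # the way a language model predicts the next word.
    condition = model.encode(point_cloud)
    tokens = []
    for _ in range(max_tokens):
        token = model.next_token(condition, tokens)
        if token == tokenizer.end_of_mesh:
            break
        tokens.append(token)

    # Step 3: translate the token sequence back into explicit vertices and
    # triangular or quadrilateral faces.
    vertices, faces = tokenizer.decode(tokens)
    return vertices, faces
```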
Tencent isn’t just talking about theoretical improvements that fall apart when tested in real studios; they’ve put this technology to work in their own game development studios. The results? Artists report efficiency gains of over 70 percent.
The system has been baked into Tencent’s Hunyuan 3D AI creation engine and is already running across multiple game development pipelines. This means it’s being used to create actual 3D game assets that players will eventually interact with.
Teaching AI to think like an artist
One of the most impressive aspects of Hunyuan3D-PolyGen is how Tencent has approached quality control. They’ve developed a reinforcement learning system that essentially teaches the AI to recognise good work from bad work, much like how a mentor might guide a junior artist.
The system learns from feedback, gradually improving its ability to generate 3D assets that meet professional standards. This means fewer duds and more usable results straight out of the box. For studios already stretched thin on resources, this kind of reliability could be transformative.
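Tencent hasn’t published the training recipe, but the general shape of such a feedback loop is familiar: score generated meshes with a learned reward model and nudge the generator toward higher-scoring outputs. The REINFORCE-style sketch below is a generic illustration under that assumption, with hypothetical interfaces and a torch-style optimiser; it is not Tencent’s actual procedure.

```python
# Generic REINFORCE-style sketch of reward-guided fine-tuning; the generator
# and reward-model interfaces are hypothetical, not Tencent's published method.

def rl_finetune_step(generator, reward_model, point_clouds, optimizer):
    """One policy-gradient step: favour meshes the reward model scores highly."""
    loss = 0.0
    for pc in point_clouds:
        mesh, log_prob = generator.sample_with_log_prob(pc)
        reward = reward_model.score(mesh)    # learned notion of "good work"
        loss = loss - reward * log_prob      # higher reward reinforces this sample
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```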
The gaming industry has been grappling with a fundamental problem for years. While AI has made impressive strides in generating 3D models, most of the output has been, quite frankly, not good enough for commercial use. The gap between “looks impressive in a demo” and “ready for a AAA game” has been enormous.
Tencent’s approach with Hunyuan3D-PolyGen feels different because it’s clearly been designed by people who understand what actual game development looks like. Instead of chasing flashy demonstrations, they’ve focused on solving real workflow problems that have been frustrating developers for decades.
As development costs continue to spiral and timelines get ever more compressed, tools that can accelerate asset creation without compromising quality become incredibly valuable.
The release of Hunyuan3D-PolyGen hints at a future where the relationship between human creativity and AI assistance becomes far more nuanced. Rather than replacing artists, this technology appears designed to handle the grunt work of creating 3D assets, freeing up talented creators to focus on the conceptual and creative challenges that humans excel at.
This represents a mature approach to AI integration in creative industries. Instead of the usual “AI will replace everyone” narrative, we’re seeing tools that augment human capabilities rather than attempt to replicate them entirely. The 70 percent efficiency improvement reported by Tencent’s teams suggests this philosophy is working in practice.
The broader implications are fascinating to consider. As these systems become more sophisticated and reliable, we might see a fundamental shift in how game development studios are structured and how projects are scoped. The technology could democratise high-quality asset creation, potentially allowing smaller studios to compete with larger operations that traditionally had resource advantages.
The success of Hunyuan3D-PolyGen could well encourage other major players to accelerate their own AI-assisted creative tools beyond generating 3D assets, potentially leading to a new wave of productivity improvements across the industry. For game developers who’ve been watching AI developments with a mixture of excitement and scepticism, this might be the moment when the technology finally delivers on its promises.
See also: UK and Singapore form alliance to guide AI in finance
AI Research
Joint UT, Yale research develops AI tool for heart analysis – The Daily Texan
A study published on June 23 by UT and Yale researchers describes an artificial intelligence tool capable of analyzing the heart using echocardiography.
The tool, PanEcho, analyzes echocardiograms, or ultrasound images of the heart. It was developed and trained on nearly one million echocardiographic videos, and it can perform 39 echocardiographic tasks and accurately detect conditions such as systolic dysfunction and severe aortic stenosis.
“Our teammates helped identify a total of 39 key measurements and labels that are part of a complete echocardiographic report — basically what a cardiologist would be expected to report on when they’re interpreting an exam,” said Gregory Holste, an author of the study and a doctoral candidate in the Department of Electrical and Computer Engineering. “We train the model to predict those 39 labels. Once that model is trained, you need to evaluate how it performs across those 39 tasks, and we do that through this robust multi-site validation.”
Holste said that among PanEcho’s functions, one of the most impressive is its ability to measure left ventricular ejection fraction, or the proportion of blood the left ventricle of the heart pumps out, far more accurately than human experts. Additionally, Holste said PanEcho can analyze the heart as a whole, while humans are limited to looking at the heart from one view at a time.
“What is most unique about PanEcho is that it can do this by synthesizing information across all available views, not just curated single ones,” Holste said. “PanEcho integrates information from the entire exam — from multiple views of the heart to make a more informed, holistic decision about measurements like ejection fraction.”
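As a rough illustration of that idea (not the released PanEcho code), a multi-view, multi-task model can be sketched as a shared video encoder whose per-view features are pooled before 39 task-specific heads make their predictions. The shapes and names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiViewEchoModel(nn.Module):
    """Illustrative multi-view, multi-task sketch; not the PanEcho release."""

    def __init__(self, video_encoder, feature_dim=512, num_tasks=39):
        super().__init__()
        self.encoder = video_encoder          # encodes one echo video clip
        self.heads = nn.ModuleList(
            [nn.Linear(feature_dim, 1) for _ in range(num_tasks)]
        )

    def forward(self, views):                 # views: list of video tensors
        # Encode every available view, then pool across views so the
        # prediction draws on the whole exam rather than a single clip.
        features = torch.stack([self.encoder(v) for v in views])  # (V, D)
        exam_embedding = features.mean(dim=0)                      # (D,)
        return [head(exam_embedding) for head in self.heads]       # 39 outputs
```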
PanEcho is available as open source, allowing researchers to experiment with the tool in future studies. Holste said the team has already received emails from people trying to “fine-tune” the application for different uses.
“We know that other researchers are working on adapting PanEcho to work on pediatric scans, and this is not something that PanEcho was trained to do out of the box,” Holste said. “But, because it has seen so much data, it can fine-tune and adapt to that domain very quickly. (There are) very exciting possibilities for future research.”
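Adapting a pretrained model like this to a new domain typically means continuing training on a small labelled set. The generic fine-tuning sketch below makes that concrete under assumed choices of data loader, loss, and task head; it is not the team’s actual procedure.

```python
import torch

# Generic fine-tuning sketch: reuse the pretrained multi-view model above and
# continue training on a small pediatric dataset. The loader, loss, and head
# choice are illustrative assumptions.

def finetune(model, pediatric_loader, epochs=5, lr=1e-4):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()                # e.g. regressing ejection fraction
    model.train()
    for _ in range(epochs):
        for views, target in pediatric_loader:  # one exam's views plus its label
            prediction = model(views)[0]        # reuse an existing task head
            loss = loss_fn(prediction.squeeze(), target.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```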
AI Research
Google launches AI tools for mental health research and treatment
Google announced two new artificial intelligence initiatives on July 7, 2025, designed to support mental health organizations in scaling evidence-based interventions and advancing research into anxiety, depression, and psychosis treatments.
The first initiative involves a comprehensive field guide developed in partnership with Grand Challenges Canada and McKinsey Health Institute. According to the announcement from Dr. Megan Jones Bell, Clinical Director for Consumer and Mental Health at Google, “This guide offers foundational concepts, use cases and considerations for using AI responsibly in mental health treatment, including for enhancing clinician training, personalizing support, streamlining workflows and improving data collection.”
The field guide addresses the global shortage of mental health providers, particularly in low- and middle-income countries. According to analysis from the McKinsey Health Institute cited in the document, “closing this gap could result in more years of life for people around the world, as well as significant economic gains.”
Summary
Who: Google for Health, Google DeepMind, Grand Challenges Canada, McKinsey Health Institute, and Wellcome Trust, targeting mental health organizations and task-sharing programs globally.
What: Two AI initiatives including a practical field guide for scaling mental health interventions and a multi-year research investment for developing new treatments for anxiety, depression, and psychosis.
When: Announced July 7, 2025, with ongoing development and research partnerships extending multiple years.
Where: Global implementation with focus on low- and middle-income countries where mental health provider shortages are most acute.
Why: Address the global shortage of mental health providers and democratize access to quality, evidence-based mental health support through AI-powered scaling solutions and advanced research.
The 73-page guide outlines nine specific AI use cases for mental health task-sharing programs, including applicant screening tools, adaptive training interfaces, real-time guidance companions, and provider-client matching systems. These tools aim to address challenges such as supervisor shortages, inconsistent feedback, and protocol drift that limit the effectiveness of current mental health programs.
Task-sharing models allow trained non-mental health professionals to deliver evidence-based mental health services, expanding access in underserved communities. The guide demonstrates how AI can standardize training, reduce administrative burdens, and maintain quality while scaling these programs.
According to the field guide documentation, “By standardizing training and avoiding the need for a human to be involved at every phase of the process, AI can help mental health task-sharing programs effectively scale evidence-based interventions throughout communities, maintaining a high standard of psychological support.”
The second initiative represents a multi-year investment from Google for Health and Google DeepMind in partnership with Wellcome Trust. The investment, which includes research grant funding from the Wellcome Trust, will support research projects developing more precise, objective, and personalized measurement methods for anxiety, depression, and psychosis.
The research partnership aims to explore new therapeutic interventions, potentially including novel medications. This represents an expansion beyond current AI applications into fundamental research for mental health treatment development.
The field guide acknowledges that “the application of AI in task-sharing models is new and only a few pilots have been conducted.” Many of the outlined use cases remain theoretical and require real-world validation across different cultural contexts and healthcare systems.
For the marketing community, these developments signal growing regulatory attention to AI applications in healthcare advertising. Recent California guidance on AI healthcare supervision and Google’s new certification requirements for pharmaceutical advertising demonstrate increased scrutiny of AI-powered health technologies.
The field guide emphasizes the importance of regulatory compliance for AI mental health tools. Several proposed use cases, including triage facilitators and provider-client matching systems, could face classification as medical devices requiring regulatory oversight from authorities like the FDA or EU Medical Device Regulation.
Organizations considering these AI tools must evaluate technical infrastructure requirements, including cloud versus edge computing approaches, data privacy compliance, and integration with existing healthcare systems. The guide recommends starting with pilot programs and establishing governance committees before full-scale implementation.
Technical implementation challenges include model selection between proprietary and open-source systems, data preparation costs ranging from $10,000 to $90,000, and ongoing maintenance expenses of 10 to 30 percent of initial development costs annually.
The initiatives build on growing evidence that task-sharing approaches can improve clinical outcomes while reducing costs. Research cited in the guide shows that mental health task-sharing programs are cost-effective and can increase the number of people treated while reducing mental health symptoms, particularly in low-resource settings.
Real-world implementations highlighted in the guide include The Trevor Project’s AI-powered crisis counselor training bot, which trained more than 1,000 crisis counselors in approximately one year, and Partnership to End Addiction’s embedded AI simulations for peer coach training.
These organizations report improved training efficiency and enhanced quality of coach conversations through AI implementation, suggesting practical benefits for established mental health programs.
The field guide warns that successful AI adoption requires comprehensive planning across technical, ethical, governance, and sustainability dimensions. Organizations must establish clear policies for responsible AI use, conduct risk assessments, and maintain human oversight throughout implementation.
According to the World Health Organization principles referenced in the guide, responsible AI in healthcare must protect autonomy, promote human well-being, ensure transparency, foster responsibility and accountability, ensure inclusiveness, and promote responsive and sustainable development.
Timeline
- July 7, 2025: Google announces two AI initiatives for mental health research and treatment
- January 2025: California issues guidance requiring physician supervision of healthcare AI systems
- May 2024: FDA reports 981 AI and machine learning software devices authorized for medical use
- Development ongoing: Field guide created through 10+ discovery interviews, expert summit with 20+ specialists, 5+ real-life case studies, and review of 100+ peer-reviewed articles
AI Research
New Research Shows Language Choice Alone Can Guide AI Output Toward Eastern or Western Cultural Outlooks
A new study shows that the language used to prompt AI chatbots can steer them toward different cultural mindsets, even when the question stays the same. Researchers at MIT and Tongji University found that large language models like OpenAI’s GPT and China’s ERNIE change their tone and reasoning depending on whether they’re responding in English or Chinese.
The results indicate that these systems do more than translate language; they also reflect cultural patterns. These patterns appear in how the models provide advice, interpret logic, and handle questions related to social behavior.
Same Question, Different Outlook
The team tested both GPT and ERNIE by running identical tasks in English and Chinese. Across dozens of prompts, they found that when GPT answered in Chinese, it leaned more toward community-driven values and context-based reasoning. In English, its responses tilted toward individualism and sharper logic.
Take social orientation, for instance. In Chinese, GPT was more likely to favor group loyalty and shared goals. In English, it shifted toward personal independence and self-expression. These patterns matched well-documented cultural divides between East and West.
When it came to reasoning, the shift continued. The Chinese version of GPT gave answers that accounted for context, uncertainty, and change over time. It also offered more flexible interpretations, often responding with ranges or multiple options instead of just one answer. In contrast, the English version stuck to direct logic and clearly defined outcomes.
No Nudging Needed
What’s striking is that these shifts occurred without any cultural instructions. The researchers didn’t tell the models to act more “Western” or “Eastern.” They simply changed the input language. That alone was enough to flip the models’ behavior, almost like switching glasses and seeing the world in a new shade.
To check how strong this effect was, the researchers repeated each task more than 100 times. They tweaked prompt formats, varied the examples, and even changed gender pronouns. No matter what they adjusted, the cultural patterns held steady.
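The core of the method is simple to picture: the same question is posed many times in each language and the answers are compared. The sketch below shows that repeated, paired-prompt loop using the OpenAI Python client as a stand-in; the model name, prompts, and trial count are illustrative placeholders rather than the study’s actual materials, and the real experiment also varied prompt wording and examples.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative paired prompts; the real study used its own task materials.
PROMPTS = {
    "en": "Should the team's shared goal take priority over one member's personal preference?",
    "zh": "团队的共同目标应该优先于某位成员的个人偏好吗？",
}

def collect_responses(n_trials=100):
    """Ask the same question many times in each language for later comparison."""
    responses = {"en": [], "zh": []}
    for lang, prompt in PROMPTS.items():
        for _ in range(n_trials):            # repeat to average out randomness
            reply = client.chat.completions.create(
                model="gpt-4o",               # placeholder model name
                messages=[{"role": "user", "content": prompt}],
            )
            responses[lang].append(reply.choices[0].message.content)
    return responses  # compare collectivist vs. individualist framing per language
```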
Real-World Impact
The study didn’t stop at lab tests. In a separate exercise, GPT was asked to choose between two ad slogans, one that stressed personal benefit, another that highlighted family values. When the prompt came in Chinese, GPT picked the group-centered slogan most of the time. In English, it leaned toward the one focused on the individual.
This might sound small, but it shows how language choice can guide the model’s output in ways that ripple into marketing, decision-making, and even education. People using AI tools in one language may get very different advice than someone asking the same question in another.
Can You Steer It?
The researchers also tested a workaround. They added cultural prompts, telling GPT to imagine itself as a person raised in a specific country. That small nudge helped the model shift its tone, even in English, suggesting that cultural context can be dialed up or down depending on how the prompt is framed.
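In practice, that workaround amounts to prepending a persona to the prompt. The snippet below sketches the idea with the OpenAI client; the persona wording is a hypothetical paraphrase, not the researchers’ exact instruction.

```python
from openai import OpenAI

client = OpenAI()

def ask_with_persona(question, country="China"):
    """Nudge the model toward a cultural frame via a system-level persona."""
    persona = f"Imagine you are a person who was born and raised in {country}."
    reply = client.chat.completions.create(
        model="gpt-4o",                       # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

# Example: the same English question, nudged toward an Eastern outlook.
# ask_with_persona("Which ad slogan would you choose, and why?")
```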
Why It Matters
The findings show that the language of a prompt shapes how AI models present information. Differences in response patterns suggest that the input language influences how content is structured and interpreted. As AI tools become more integrated into routine tasks and decision-making processes, language-based variations in output may influence user choices over time.