
Tools & Platforms

AI Estimates Electron-Level Information for Molecular Property Prediction Without High-Cost Quantum Mechanical Calculations

Newswise — Researchers in Korea have developed an artificial intelligence (AI) technology that predicts molecular properties by learning electron-level information without requiring costly quantum mechanical calculations.

A joint research team led by Senior Researcher Gyoung S. Na from the Korea Research Institute of Chemical Technology (KRICT) and Professor Chanyoung Park from the Korea Advanced Institute of Science and Technology (KAIST) has developed a novel AI method—called DELID (Decomposition-supervised Electron-Level Information Diffusion)—that accurately predicts material properties using electron-level information without performing quantum mechanical computations. The method achieved state-of-the-art prediction accuracy on real-world datasets consisting of approximately 30,000 experimental molecular data.

Traditional computational science and AI methods have been limited in utilizing electron-level information—essential for determining molecular properties—due to the excessive cost of quantum mechanical calculations. As a result, most existing AI models rely solely on atom-level molecular descriptors, leading to limitations in prediction accuracy, particularly for complex molecules.

To address this challenge, the research team devised DELID, a generative AI method that infers the electron-level features of complex molecules by combining information from simpler molecular fragments. DELID decomposes complex molecules into chemically valid substructures, retrieves electron-level properties of these fragments from quantum chemistry databases, and uses a self-supervised diffusion model to infer the overall electronic structure. This enables accurate property prediction without the need for large-scale quantum mechanical simulations.
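
To make the pipeline above more concrete, the following is a minimal sketch in Python, assuming RDKit is available. It illustrates only the decomposition-and-lookup idea: a molecule's SMILES string is broken into BRICS substructures, and precomputed electron-level descriptors for each fragment are retrieved from a stand-in dictionary (a placeholder for the quantum chemistry databases the team describes). The descriptor values and fragment keys are invented for illustration, and the diffusion model that combines fragment features into a molecule-level prediction is deliberately omitted; this is not DELID's actual implementation.

```python
# Minimal, hypothetical sketch of the fragment-based pipeline described above.
# Assumes RDKit is installed; the descriptor table and the aggregation step are
# placeholders, not DELID's actual database or diffusion model.
from rdkit import Chem
from rdkit.Chem import BRICS

# Stand-in for a quantum chemistry database: precomputed electron-level
# descriptors (illustrative values only) keyed by fragment SMILES.
FRAGMENT_DB = {
    "[16*]c1ccccc1": {"homo_eV": -6.7, "lumo_eV": 0.1},
    "[3*]OC":        {"homo_eV": -7.2, "lumo_eV": 1.4},
}

def fragment_features(smiles: str):
    """Decompose a molecule into BRICS fragments and fetch any stored descriptors."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Invalid SMILES: {smiles}")
    fragments = sorted(BRICS.BRICSDecompose(mol))
    # In a real system, fragments missing from the database would be computed
    # once with quantum chemistry software and cached; here they map to None.
    return {frag: FRAGMENT_DB.get(frag) for frag in fragments}

if __name__ == "__main__":
    # Anisole as a toy input. A generative model (DELID uses a diffusion model)
    # would combine the fragment features into a molecule-level property
    # prediction; that step is omitted here.
    for frag, feats in fragment_features("COc1ccccc1").items():
        print(frag, feats)
```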

Uniquely, DELID allows molecular property prediction using electron-level information without actually performing quantum computations on the target molecule. This represents a significant leap forward, enabling electron-aware predictions without requiring quantum computers.

In benchmark tests on over 30,000 experimentally measured molecular property datasets—including physical, toxicological, and optical properties—DELID achieved the highest accuracy among state-of-the-art models. In particular, for optical property prediction tasks such as CH-DC and CH-AC, which are relevant to OLED and solar cell material design, existing models typically show low prediction accuracy (31–44%). DELID achieved an accuracy of 88%, more than twice the performance of top existing AI models.

Senior Researcher Na commented, “DELID enables accurate prediction of molecular properties by incorporating electron-level information without the burden of high computational cost, overcoming a major limitation of conventional AI approaches.” KRICT President Dr. Youngkuk Lee added, “We expect DELID to make significant contributions to practical AI applications in chemical industries such as drug discovery, toxicity assessment, and optoelectronic materials development.”

This research was presented at ICLR 2025, one of the top-tier AI conferences.

 

###

KRICT is a non-profit research institute funded by the Korean government. Since its foundation in 1976, KRICT has played a leading role in advancing national chemical technologies in the fields of chemistry, materials science, environmental science, and chemical engineering. KRICT is now working to become a globally leading research institute that tackles the most challenging issues in chemistry and engineering, and it will continue to develop chemical technologies that benefit the entire world and help maintain a healthy planet. More detailed information on KRICT can be found at https://www.krict.re.kr/eng/

The study was supported by the Ministry of Trade, Industry and Energy (TS241-10R), the National Research Foundation of Korea, and the Ministry of Science and ICT (NRF-2022M3J6A1063021, RS-2024-00406985).






‘AI will not love you, AI will not cry with you’: COICOM panel warns Church of technology’s limits

Arnold Enns, Vladimir Lugo, Steve Cordon, and Fabio Criales during the panel forum “Artificial Intelligence: Challenges and Opportunities for the Church” at COICOM 2025. (Photo: Christian Daily International)

Artificial intelligence is no longer a distant concept for the Church but a pressing reality that demands attention. That was the message of a panel at the 2025 Congress of the Ibero-American Confederation of Communicators, Pastors, and Christian Leaders (COICOM) held in Honduras last week, where ministry and technology experts explored both the promise and perils of AI for faith communities.

Moderated by COICOM president Arnold Enns, the session—titled “Artificial Intelligence: Challenges and Opportunities for the Church”—brought together Vladimir Lugo, Steve Cordon, and Fabio Criales. The panelists examined the nature of AI, its societal impact, and its growing yet inescapable role within Christian ministry.

The discussion began with definitions. Lugo described AI as a branch of computing that “allows machines to do things that were previously reserved for humans,” including learning, analyzing, and making decisions. He clarified that AI does not reside in a single place but operates on vast cloud servers controlled by global tech giants such as Google, Amazon, and Microsoft, each competing for dominance in the field.

The dilemma of control and inherent bias

One of the first concerns raised was the issue of control and ethics. Panelists emphasized that AI technologies are not neutral. Lugo warned that publicly available models “carry biases,” reflecting the agendas of the secular companies that train them.

“Many of these companies are woke,” he said, arguing that they promote “anti-biblical” values and that their AI creations reflect humanist and liberal ideologies.

Criales added that AI “was meant to make evident what is already present” in the human heart, citing Matthew 15:18-19. He also cautioned about the danger of “hallucination”—when AI generates incorrect or misleading information in response to poorly framed prompts.

“Be very careful with that, because it hallucinates, recreates what you ask, and if you ask incorrectly, you could end up saying heresies on stage,” Criales warned.

Digital consumers or disciples?

The panel also weighed AI’s influence on ministry content creation. With more pastors turning to tools like ChatGPT to write sermons, Lugo acknowledged that AI can be a useful “tool” for research. But he stressed that “the intelligent entity using the tool is the human” and cautioned against surrendering discernment.

Cordon posed a sharper question about the widespread adoption of AI-driven platforms, noting the 123 million daily users of ChatGPT: “Have we created more digital consumers than digital disciples?” True pastoral work, he said, cannot be automated. “People need pastors. AI will not love you, AI will not cry with you.”

He recounted a sobering personal experience with a counseling AI that not only conversed smoothly but also offered to pray for him in eloquent, detailed language. The moment highlighted for him the unsettling boundary between authentic pastoral care and technological simulation. “I believe AI will also be a test of maturity for the Church,” he reflected.

A call for training and responsibility

The panel closed with a strong call for Christian leaders to equip themselves and their congregations to engage AI critically. “Either you use it, or it uses you—there really isn’t an alternative,” Cordon said.

Criales stressed that believers must be intentional in learning how to apply these tools properly. Lugo concluded with an appeal to humility: “If there is anything we want to learn from the Lord, let us learn how to learn.”

The consensus was clear: artificial intelligence is not merely a technological development but a spiritual test. For the Church, it represents a challenge requiring maturity, ethical discernment, and above all, a reaffirmation of the irreplaceable value of human connection in ministry.

Originally published on Diario Cristiano, Christian Daily International’s Spanish edition.




MSP evolution in the age of AI and risk – ARN


Embracing consultative models

This bodes well for the right IT channel partners. Shoer said partners have to really embrace a more consultative model; they can’t look at an AI tool in the traditional mindset of just making money.

“What they really need to do [is] understand what the customer is trying to achieve and is AI the right tool to help them achieve that?” he said. “It may be part of a broader tool set, or it may be part of a change in business process.

“It could be any one of a number of things [and] they also have to be mindful of the pressure to assist a customer before they’ve addressed their own environment and how they’re using AI.”

Shoer noted the GTIA has been conservative in its own use of AI and how it’s leveraging it internally.

“We’re doing an internal case study on ourselves to try and learn where gaps and pitfalls may be so that we can help inform our members what they need to be looking at, first and foremost, in their own business before they go out and try and sell themselves,” he explained.

The temptation to go out and sell themselves as AI experts is massive right now, noted Shoer.

One of the most important areas with AI right now is looking at agreements. With the rush to AI, the message of going slow is getting lost.

When he was at GTIA’s ChannelCon conference, Shoer said there was only “one MSP in the room that was actually selling — truly selling — AI services”.

“His message to the attendees was, ‘Don’t rush in, because you could destroy your business if you move in too fast’ … it’s more important than ever,” he pointed out. “There have been so many cycles where people have jumped on and represented themselves as an MSSP [managed security service provider] — when they really weren’t an MSSP and skirted some delicate liability issues.

“But AI brings a whole new dynamic to that: many companies are leaking their IP out into these large models, and they don’t even know that they’re doing it.”

Want for change

End-customers in general need their technology partners to be advisory led and “to actually help their businesses,” said The TSP Advisory chief strategy officer James Davis.

“There’s always going to be a client base that want to be transactional,” he said. “But that level of advisory is going to scale up and down based on maturity of the client, the industry, the size, the objectives; there’s no one-size-fits-all right answer.

“But partners need direction and someone to lead them, because technology in general is too complicated.”

Davis said it’s easy to jump on and buy something, but complexity comes in when customers start looking at it from a business perspective, such as managing costs, efficiency, and risk.

“That’s when a tech partner is actually needed and where they realise where they sit in the food chain,” he noted. “They need to act how they want to be treated.”

Even if, as an MSP, all it has done is fixed-price support and it is not proactively talking to its clients, said Davis.

“They’re pretty much just trying to limit as much noise as possible,” he said. “That, in itself, is a dead and dying model, because that’s not what’s necessary in a modern client ICT environment.”

For example, when customers need help with applications rather than infrastructure, traditional MSPs don’t have the necessary experience.

“MSPs have always trained the clients, in general, to not come and talk to them about applications, because they can’t recommend anything,” said Davis. “A lot of clients won’t even come to the partners and ask for a lot of things, because they don’t think it’s what the partners do.”

Those partners that understand the need to help the customers on a business level “see the bigger picture” for them, he noted.

“They actually understand how the space works, because that’s where we all build our businesses, and that’s what we do all day, every day,” explained Davis. “This causes friction with partners who call themselves specialists but aren’t true specialists.

“They’re actually more the modern TSP [technology service provider], and they’re just leveraging AI to get in. That’s where they can legitimately take a lot of this business away from partners as well.”

According to Davis, the more modern, next generation technology partners that get it know they need to partner with others to ensure the customer gets the best out of them.

That’s all part of the consultative approach to ensuring the customer gets the right strategic advice.

“[Having] baseline relationships, providing some proactive advice, but really working operationally isn’t giving business or strategic advice,” he said. “You understand their business. For a partnership to work, you need to know where the organisation’s people fit in, what the business is trying to achieve, and then how the MSP is going to help the customer as a tech partner.”

Adding value

Dicker Data general manager of Microsoft Cloud A/NZ Sarah Loiterton told ARN value conversations should link technology investments to measurable business outcomes.

This includes reduced downtime, faster recovery, or improved compliance posture. She goes a step further and advises on the use of metrics like mean time to repair/resolve/respond/recover, total cost of ownership, and payback periods to illustrate impact in clear, financial terms.

“Industry‑specific insights and tailored collateral can support these discussions,” she said. “But the emphasis should remain on quantifiable benefits and risk reduction that resonate with both business and technical stakeholders.”
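
By way of illustration only (the figures and helper functions below are hypothetical, not drawn from the article), a partner might translate those metrics into plain numbers like this: a mean time to resolve computed from recent incidents, and a payback period derived from an upfront investment and the monthly downtime cost it avoids.

```python
# Hypothetical illustration of turning service metrics into financial terms.
# All figures are invented; they are not from the article or any real customer.

def mean_time_to_resolve(resolution_hours: list[float]) -> float:
    """Average time to resolve incidents, in hours."""
    return sum(resolution_hours) / len(resolution_hours)

def payback_period_months(upfront_cost: float, monthly_saving: float) -> float:
    """Months until cumulative savings cover the upfront investment."""
    return upfront_cost / monthly_saving

if __name__ == "__main__":
    incidents = [2.5, 4.0, 1.5, 3.0]           # hours to resolve each recent incident
    mttr = mean_time_to_resolve(incidents)

    downtime_cost_per_hour = 800.0             # assumed business cost of downtime
    incidents_per_month = 6
    hours_saved_per_incident = 1.0             # assumed improvement from the new tooling
    monthly_saving = downtime_cost_per_hour * incidents_per_month * hours_saved_per_incident

    print(f"MTTR: {mttr:.1f} hours")
    print(f"Payback: {payback_period_months(20000.0, monthly_saving):.1f} months")
```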

Partners should also start by understanding their own service identity, noted Loiterton.

For example, what they offer, where they draw the line, and which frameworks they align with, such as the Australian Signals Directorate’s Essential Eight. Then, map these capabilities to each customer’s industry context, even when regulation is light.

Transparency is key, particularly with a clear outline on what risks can be mitigated and what contingency plans exist for residual risk.

“To accelerate this process, leverage tools that help benchmark against industry standards and legislative requirements, and translate those into actionable strategies for customers,” said Loiterton. “This ensures conversations remain sector‑specific, risk‑aware, and outcome‑focused.”

When it comes to AI, partners need a deep understanding of governance and compliance requirements for each vertical.

“While many data protection principles are consistent, the nuances matter, especially for customer trust,” she said. “Establish clear policies for data classification, access control, and auditability.

“Incorporate human oversight into AI workflows and maintain transparent documentation of model usage and decision points. These steps demonstrate accountability and align AI initiatives with recognised governance standards.”

Loiterton also said distribution partnerships can bridge capability gaps by providing access to specialised expertise, managed services, and automation that let MSPs deliver enterprise‑grade outcomes without expanding headcount.

“Crucially, engage your distributor early on the entire scope of the customer’s requirement, not just the core workload, so they can help design whole‑of‑environment solutions,” she said. “That means validating dependencies across identity, devices, networks, data, applications, cloud/hybrid/edge, resilience/backup, observability, and governance.

“Taking a full‑scope view up‑front reduces integration gaps, speeds implementation, and ensures consistent policy and control coverage end‑to‑end.”

Collaborative models, such as co‑managed security operations centre services, automated device management, and pre‑built AI workloads, allow partners to scale efficiently while focusing internal teams on higher‑value activities.

“Clear SLAs (service level agreements), a documented RACI [framework], and shared accountability frameworks then keep service quality consistent across multiple customers,” she added.




Chinese Tech Giants Leverage AI for User Addiction and Global Influence



In the bustling tech hubs of Shenzhen and Beijing, Chinese companies are redefining user engagement through sophisticated algorithms and behavioral nudges that keep consumers scrolling, shopping, and interacting far longer than intended. Firms like ByteDance, Tencent, and Alibaba have mastered the art of blending entertainment, social features, and commerce into seamless experiences, turning casual app use into habitual routines. This approach, often powered by artificial intelligence, analyzes user data in real-time to deliver personalized content that exploits psychological triggers like dopamine hits from notifications and endless feeds.

These strategies extend beyond mere convenience, embedding gamification elements such as rewards, streaks, and social comparisons to foster dependency. For instance, apps like Douyin—TikTok’s Chinese counterpart—use short-form videos optimized for maximum retention, with algorithms that predict and serve content to prolong sessions. According to a report from Business Insider, this “cracked code” involves proprietary AI models that adapt to individual preferences faster than Western competitors, leading to average daily usage times that eclipse those of platforms like Instagram or YouTube.
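
As a deliberately simplified, hypothetical sketch of what “optimized for maximum retention” can mean in practice (not drawn from any company’s actual system), a feed might score candidate videos by a predicted completion rate blended with affinity to the user’s recent viewing history, then serve the highest-scoring items first:

```python
# Toy illustration of retention-driven ranking. The scoring formula and data
# are invented for this sketch; real recommender systems are far more complex.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    topic: str
    predicted_completion: float  # model's estimate that the user watches to the end

def rank_feed(candidates: list[Video], recent_topics: list[str]) -> list[Video]:
    """Order candidates so the items most likely to prolong the session come first."""
    def score(v: Video) -> float:
        affinity = recent_topics.count(v.topic) / max(len(recent_topics), 1)
        return 0.7 * v.predicted_completion + 0.3 * affinity
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    history = ["dance", "dance", "cooking"]
    feed = rank_feed(
        [
            Video("a", "dance", 0.62),
            Video("b", "news", 0.80),
            Video("c", "cooking", 0.55),
        ],
        history,
    )
    print([v.video_id for v in feed])
```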

Algorithmic Mastery and Behavioral Hooks

The core of these tactics lies in data-driven personalization, where Chinese tech giants leverage vast troves of user information to refine engagement loops. Tencent’s WeChat, for example, integrates messaging, payments, and mini-programs into a super-app ecosystem, creating a “walled garden” where users rarely need to leave. This integration not only boosts time spent but also encourages micro-transactions, blurring the lines between leisure and commerce. Recent posts on X highlight how these apps blend social networking with e-commerce, with one influential investor noting that Chinese platforms like Xiaohongshu engineer addiction by seamlessly merging entertainment and shopping, outpacing ad-focused Western models.

Moreover, emerging technologies like AI-powered virtual reality and augmented reality are amplifying these effects. A study from The Arise Society warns of rising addictions in 2025, pointing to Chinese innovations in AI relationships and VR that shape behaviors and relationships, potentially leading to over-reliance on digital interactions. In China, where 5G infrastructure supports immersive experiences, companies are pushing boundaries with features that reward prolonged immersion, such as virtual rewards in games like Genshin Impact from miHoYo.

Global Implications and Regulatory Scrutiny

As these strategies go global, they raise concerns about technology addiction worldwide. Chinese firms are exporting their models through apps like TikTok, which has billions of users addicted to its algorithmically curated content. A Brookings Institution analysis in Global China: Technology examines how China’s technological reach influences user habits abroad, often through subtle data collection that refines addiction mechanics. This has prompted backlash; U.S. policymakers, as detailed in a Carnegie Endowment report on Managing the Risks of China’s Access to U.S. Data, are advocating for frameworks to counter risks from Chinese software and connected devices that could manipulate user behavior.

Yet, within China, the government balances innovation with control. The “Made in China 2025” initiative, as critiqued in a Council on Foreign Relations backgrounder on Is ‘Made in China 2025’ a Threat to Global Trade?, subsidizes tech advancements but also enforces measures like time limits on gaming for minors to curb addiction. Despite this, a longitudinal study in Frontiers in Psychology reveals persistent smartphone addiction among Chinese youth, linked to emotion regulation strategies and cross-cultural adjustments.

Economic Drivers and Competitive Edge

Economically, these addiction strategies fuel massive revenue streams. Chinese tech companies dominate in advanced industries, with the Information Technology and Innovation Foundation noting in China Is Rapidly Becoming a Leading Innovator that policies like the 13th Five-Year Plan propel independent innovation, enabling firms to capture global market share. This is evident in the rise of AI-first strategies from Baidu and Tencent, as reported in recent news from The Piacente Group, where open-source models like DeepSeek V3 rival U.S. tech, embedding addictive features into enterprise and consumer products.

Critics argue this creates a dependency cycle, with New York Magazine’s Big Tech Still Has a Big Addiction to China highlighting how even Western giants remain hooked on Chinese supply chains and innovation tactics. X posts echo this sentiment, with users praising China’s tech prowess in everything from electric vehicles to social apps, suggesting a “Chinese century” driven by addictive digital ecosystems.

Future Trajectories and Ethical Considerations

Looking ahead to the latter half of 2025, experts predict an escalation in these strategies, incorporating more advanced AI for predictive engagement. A Frontiers in Public Health study on explainable machine learning prediction of internet addiction among Chinese youth underscores the need for positive development interventions to mitigate risks. Meanwhile, gamification in loyalty programs, as per a Business Wire intelligence report on China Loyalty Programs Market, targets younger consumers to foster long-term brand loyalty through tech-driven incentives.

For industry insiders, the challenge is navigating this high-stakes arena where innovation borders on exploitation. As Chinese companies continue to refine their hooks—drawing from vast data pools and state-backed R&D—the global tech sector must weigh the benefits of engagement against the perils of widespread addiction, potentially reshaping regulations and user protections in the years to come.


