Tools & Platforms

UAE launches new low-cost AI model, challenging OpenAI and DeepSeek. Meet K2 Think


A new, cheaper artificial intelligence (AI) model has entered the technology race, this time from the United Arab Emirates (UAE).

The Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) in Abu Dhabi on Tuesday unveiled the release of a low-cost reasoning model that it hopes will rival DeepSeek and OpenAI. 

In January, the AI industry was jolted when China-based research lab DeepSeek said it had nearly matched the achievements of the United States’s OpenAI, the maker of ChatGPT, using a fraction of its budget and energy. 

The UAE’s model, called K2 Think, is smaller in terms of parameters (the configuration variables of a machine learning model that control how it processes data and makes predictions) than its AI competitors, including DeepSeek. However, the researchers behind it say its performance is on par with OpenAI’s and DeepSeek’s reasoning models. 

The university said in a press release that its K2 Think is “a new class of reasoning model,” adding that “it employs long chain-of-thought supervised fine-tuning to strengthen logical depth, followed by reinforcement learning with verifiable rewards to sharpen accuracy on hard problems”.

“What was special about our model is we treat it more like a system than just a model,” Hector Liu, director of MBZUAI’s institute of foundation models, told CNBC in an interview.

“So, unlike a regular open source model where we can just release the model, we actually deploy the model and see how we can improve the model over time”.

MBZUAI also said that it is “one of the fastest and most efficient reasoning systems in existence”. It says K2 Think can generate 2,000 tokens, roughly 1,500 words, per second. 
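The tokens-to-words figure follows from a common rule of thumb of roughly 0.75 English words per token. The ratio below is an assumption used for illustration, not a number published by MBZUAI:

```python
# Back-of-the-envelope check of the throughput claim. Tokens are subword
# units; ~0.75 English words per token is a rough heuristic (assumed here).

TOKENS_PER_SECOND = 2_000
WORDS_PER_TOKEN = 0.75  # assumed conversion ratio for English text

words_per_second = TOKENS_PER_SECOND * WORDS_PER_TOKEN
print(words_per_second)  # 1500.0, matching the "roughly 1,500 words" figure
```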

K2 Think was built on Alibaba’s Qwen 2.5 large language model and is run on hardware from AI chipmaker Cerebras. 

Like DeepSeek’s R1 model, K2 Think is also open source, meaning its training data and weights are available to the public.

“This new level of transparency ensures that every step of how the model learns to reason can be studied, reproduced, and extended by the global research community,” the university said. 

The global AI race

The technology could have major implications for the global AI race. 

Though the US reigns supreme, followed by China, other countries are trying to make their mark in AI. 

“K2 Think is a defining moment for AI in the UAE,” MBZUAI said. “It reflects how open innovation and close public–private partnerships can position Abu Dhabi as a global leader in AI, demonstrating that the future of reasoning will be shaped not only by size, but by ingenuity and collaboration”.




Google Cloud expects strong growth thanks to demand for AI

Google Cloud CEO Thomas Kurian paints a rosy picture for the cloud service provider. During a Goldman Sachs technology conference in San Francisco, he said that the company has approximately $106 billion in contracts outstanding. According to him, more than half of that can be converted into revenue in the next two years.

In the second quarter of 2025, parent company Alphabet reported $13.6 billion in revenue for Google Cloud, an increase of 32 percent over the previous year. If the forecast is correct, according to The Register, this means that the cloud service provider could add around $53 billion in additional revenue by 2027.
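The Register's projection can be reproduced with simple arithmetic. The one-half conversion fraction below is an assumed reading of Kurian's "more than half", used only to illustrate the calculation:

```python
# Rough reconstruction of the backlog-to-revenue arithmetic.
# Figures in billions of USD; the 0.5 fraction is an assumption
# based on Kurian's "more than half" remark.

backlog = 106
convertible_fraction = 0.5

additional_revenue = backlog * convertible_fraction
print(additional_revenue)  # 53.0 billion, the "around $53 billion" by 2027
```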

Google Cloud’s market position is often compared to that of its biggest rivals. Microsoft reported annual Azure revenue of $75 billion this year, while AWS recorded $30.9 billion in the second quarter, up 17.5 percent.

Faster transition to the cloud

Kurian emphasized that many companies still run IT systems on-premises. He expects the transition to the cloud to accelerate, with artificial intelligence playing a decisive role. Increasingly, customers are looking for suppliers who can help transform their business operations with AI applications, rather than just hosting services.

Google claims to have an advantage in this regard thanks to its own investments in AI infrastructure. Its systems are said to be more energy-efficient and deliver more computing power than those of its competitors. According to Kurian, the storage and network are also designed in such a way that they can easily switch from training to inference.

For investors, the most important thing is how AI is converted into revenue. Kurian mentioned usage-based rates, subscriptions, and value-based models, such as paying per saved service request or higher ad conversions. In addition, AI use leads to increased purchases of security and data services.

According to Kurian, 65 percent of customers now use Google Cloud AI tools. On average, this group purchases more products than organizations that do not yet use AI. Examples of applications include digital product development, customer service, back-office processes, and IT support. For example, Google helped Warner Bros. re-edit The Wizard of Oz for the Las Vegas Sphere, and Home Depot uses AI to answer HR questions more quickly.

Kurian’s message: cloud infrastructure only becomes truly profitable when companies purchase AI services on top of it. With this, Google Cloud wants to position itself firmly in the next phase of the cloud market.




New AI Tool Predicts Treatments That Reverse Cell Disease

In a move that could reshape drug discovery, researchers at Harvard Medical School have designed an artificial intelligence model capable of identifying treatments that reverse disease states in cells.

Unlike traditional approaches that typically test one protein target or drug at a time in hopes of identifying an effective treatment, the new model, called PDGrapher and available for free, focuses on multiple drivers of disease and identifies the genes most likely to revert diseased cells back to healthy function.

The tool also identifies the best single or combined targets for treatments that correct the disease process. The work, described Sept. 9 in Nature Biomedical Engineering, was supported in part by federal funding.

By zeroing in on the targets most likely to reverse disease, the new approach could speed up drug discovery and design and unlock therapies for conditions that have long eluded traditional methods, the researchers noted.

“Traditional drug discovery resembles tasting hundreds of prepared dishes to find one that happens to taste perfect,” said study senior author Marinka Zitnik, associate professor of biomedical informatics in the Blavatnik Institute at HMS. “PDGrapher works like a master chef who understands what they want the dish to be and exactly how to combine ingredients to achieve the desired flavor.”

The traditional drug-discovery approach, which focuses on activating or inhibiting a single protein, has succeeded with treatments such as kinase inhibitors, drugs that block certain proteins used by cancer cells to grow and divide. However, Zitnik noted, this discovery paradigm can fall short when diseases are fueled by the interplay of multiple signaling pathways and genes. For example, many breakthrough drugs discovered in recent decades, such as immune checkpoint inhibitors and CAR T-cell therapies, work by acting on broader disease processes in cells rather than on a single protein.

The approach enabled by PDGrapher, Zitnik said, looks at the bigger picture to find compounds that can actually reverse signs of disease in cells, even if scientists don’t yet know exactly which molecules those compounds may be acting on.

How PDGrapher works: Mapping complex linkages and effects

PDGrapher is a type of artificial intelligence tool called a graph neural network. This tool doesn’t just look at individual data points but at the connections that exist between these data points and the effects they have on one another.

In the context of biology and drug discovery, this approach is used to map the relationship between various genes, proteins, and signaling pathways inside cells and predict the best combination of therapies that would correct the underlying dysfunction of a cell to restore healthy cell behavior. Instead of exhaustively testing compounds from large drug databases, the new model focuses on drug combinations that are most likely to reverse disease.
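As a rough illustration of the graph-neural-network idea described above, the sketch below runs one round of neighbour aggregation over a toy gene-interaction graph. The genes, edges, starting values, and update rule are all hypothetical choices for illustration; they are not PDGrapher's actual architecture:

```python
# Minimal sketch of message passing, the core mechanism of a graph neural
# network: each node (here, a gene) updates its state by aggregating the
# states of the nodes that signal to it. Toy data; not PDGrapher's code.

# Hypothetical gene-interaction graph: gene -> genes it signals to
graph = {
    "TP53": ["MDM2", "CDKN1A"],
    "MDM2": ["TP53"],
    "CDKN1A": [],
}

# Hypothetical scalar "activity" state per gene
state = {"TP53": 1.0, "MDM2": 0.5, "CDKN1A": 0.2}

def message_passing_step(graph, state):
    """One aggregation round: each node's new state is the mean of its
    own state and the states of its incoming neighbours."""
    incoming = {node: [] for node in graph}
    for src, targets in graph.items():
        for dst in targets:
            incoming[dst].append(state[src])
    new_state = {}
    for node in graph:
        values = [state[node]] + incoming[node]
        new_state[node] = sum(values) / len(values)
    return new_state

state = message_passing_step(graph, state)
print(state)
```

Stacking several such rounds lets information from distant genes influence each node's state, which is what lets a graph neural network reason over relationships rather than isolated data points.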

PDGrapher points to parts of the cell that might be driving disease. Next, it simulates what would happen if these cellular parts were turned off or dialed down. The AI model then predicts whether a diseased cell would return to healthy behavior if certain targets were “hit.”

“Instead of testing every possible recipe, PDGrapher asks: ‘Which mix of ingredients will turn this bland or overly salty dish into a perfectly balanced meal?’” Zitnik said.

Advantages of the new model

The researchers trained the tool on a dataset of diseased cells before and after treatment so that it could figure out which genes to target to shift cells from a diseased state to a healthy one.

Next, they tested it on 19 datasets spanning 11 types of cancer, using both genetic and drug-based experiments, asking the tool to predict various treatment options for cell samples it had not seen before and for cancer types it had not encountered.

The tool accurately predicted drug targets already known to work but that were deliberately excluded during training to ensure the model did not simply recall the right answers. It also identified additional candidates supported by emerging evidence. The model also highlighted KDR (VEGFR2) as a target for non-small cell lung cancer, aligning with clinical evidence. It also identified TOP2A — an enzyme already targeted by approved chemotherapies — as a treatment target in certain tumors, adding to evidence from recent preclinical studies that TOP2A inhibition may be used to curb the spread of metastases in non-small cell lung cancer.

The model showed superior accuracy and efficiency, compared with other similar tools. In previously unseen datasets, it ranked the correct therapeutic targets up to 35 percent higher than other models did and delivered results up to 25 times faster than comparable AI approaches.

What this AI advance spells for the future of medicine

The new approach could optimize the way new drugs are designed, the researchers said. Instead of trying to predict how every possible change would affect a cell and then looking for a useful drug, PDGrapher directly identifies which specific targets can reverse a disease trait. This makes it faster to test ideas and lets researchers focus on fewer, more promising targets.

This tool could be especially useful for complex diseases fueled by multiple pathways, such as cancer, in which tumors can outsmart drugs that hit just one target. Because PDGrapher identifies multiple targets involved in a disease, it could help circumvent this problem.

Additionally, the researchers said that after careful testing to validate the model, it could one day be used to analyze a patient’s cellular profile and help design individualized treatment combinations.

Finally, because PDGrapher identifies cause-effect biological drivers of disease, it could help researchers understand why certain drug combinations work — offering new biological insights that could propel biomedical discovery even further.

The team is currently using this model to tackle brain diseases such as Parkinson’s and Alzheimer’s, looking at how cells behave in disease and spotting genes that could help restore them to health. The researchers are also collaborating with colleagues at the Center for XDP at Massachusetts General Hospital to identify new drug targets and map which genes or pairs of genes could be affected by treatments for X-linked Dystonia-Parkinsonism, a rare inherited neurodegenerative disorder.

“Our ultimate goal is to create a clear road map of possible ways to reverse disease at the cellular level,” Zitnik said.

Reference: Gonzalez G, Lin X, Herath I, Veselkov K, Bronstein M, Zitnik M. Combinatorial prediction of therapeutic perturbations using causally inspired neural networks. Nat Biomed Eng. 2025:1-18. doi: 10.1038/s41551-025-01481-x

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.




Driving the Way to Safer and Smarter Cars

A new, scalable neural processing technology based on co-designed hardware and software IP for customized, heterogeneous SoCs.

As autonomous vehicles have only begun to appear on limited public roads, it has become clear that achieving widespread adoption will take longer than early predictions suggested. With Level 3 systems in place, the road ahead leads to full autonomy and Level 5 self-driving. However, it’s going to be a long climb. Much of the technology that got the industry to Level 3 will not scale in all the needed dimensions—performance, memory usage, interconnect, chip area, and power consumption.

This paper looks at the challenges waiting down the road, including increasing AI operations while decreasing power consumption in realizable solutions. It introduces a new, scalable neural processing technology based on co-designed hardware and software IP for customized, heterogeneous SoCs that can help solve them.

Read more here.


