Apple’s top executive in charge of artificial intelligence models, Ruoming Pang, is leaving the company for Meta Platforms, Bloomberg News reported on Monday, citing people with knowledge of the matter.
Meta and Apple did not immediately respond to Reuters requests for comment.
The development comes as tech giants such as Meta aggressively chase high-profile acquisitions and offer multi-million-dollar pay packages to attract top talent in the race to lead the next wave of AI.
Meta CEO Mark Zuckerberg has reorganised the company’s AI efforts under a new division called Meta Superintelligence Labs, Reuters reported last week.
The division will be headed by Alexandr Wang, former CEO of data labeling startup Scale AI. He will be the chief AI officer of the new initiative at the social media giant, according to a source.
Last month, Meta invested in Scale AI in a deal that valued the data-labeling startup at $29 billion and brought in its 28-year-old CEO Wang.
With innovation comes impact. The social media revolution changed how we share content, how we buy, sell and learn, but also raised questions around technology misuse, censorship and protection. Every time we take a step forward, we also need to tackle challenges, and AI is no different.
One of the major challenges for AI is its energy consumption. Together, datacenters and AI currently use between 1% and 2% of the world’s electricity, but this figure is rising fast.
To complicate matters, these estimates change as AI technologies and usage patterns evolve. In 2022, datacenters, including AI and cryptocurrency platforms, used around 460 TWh of power. In early 2024, it was projected they could use up to an additional 900 TWh by 2030. In early 2025, this figure was radically revised downwards to approximately 500 TWh, largely because of more efficient AI models and datacenter technologies. To put this in context, demand from the electric vehicle industry will likely reach 854 TWh by 2030, with domestic and industrial heating sitting at around 486 TWh.
However, this growth is still significant, and everyone – providers and users alike – has a duty to make sure their use of AI tools is as efficient as possible.
Gregory Lebourg
Global Environment Director, OVHcloud.
How is AI infrastructure getting more power-efficient?
Whether it’s Moore’s law telling us we’ll see more transistors on the same chip, or Koomey’s law telling us we’ll see more computations per joule of energy used, computing has always become more efficient over time, and GPUs, the “engines” of AI, will certainly follow that trend.
Looking back at the period between 2010 and 2018, the amount of datacenter compute performed increased by 550%, but energy use increased by only 6%. We are already seeing this kind of improvement in AI workloads, and we have many reasons to be optimistic about the future.
We are also seeing a rise in the adoption of liquid cooling technologies. According to Markets and Markets, the market for liquid cooling in datacenters will grow almost tenfold in the next seven years. Water has a thermal conductivity far greater than air, making liquid cooling techniques more power-efficient (and therefore cheaper) than air cooling. This is ideal for AI workloads, which tend to consume more power and run hotter than non-AI workloads. Water cooling dramatically increases the power usage effectiveness of datacenters.
We also see significant innovation in the liquid cooling field itself. Historically, datacenters have used direct liquid-to-chip cooling (DLTC), where cooling plates sit on CPUs or GPUs. As power (and consequently heat) loads rise, we are seeing more immersion cooling, where the entire server is immersed in a non-conductive liquid and all components can be cooled simultaneously.
This format can even be combined with DLTC cooling, ensuring that server components which usually ‘run hot’ (like the CPU and GPU) receive greater cooling power, while the rest of the server is cooled by the surrounding fluid.
How can we make AI more resource-efficient?
Alongside power, we usually consider water as a resource in its own right. Consider a standard internet search: an AI-powered search uses around 25 ml of water, whereas a non-AI-powered search uses 50 times less, around half a milliliter. On an industrial scale, a recent test case run by the National Renewable Energy Laboratory found that smart water cooling reduced water consumption by around 56%; in their case, over a million liters of water a year.
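The per-search figures above lend themselves to a quick back-of-the-envelope check. The sketch below simply restates the article’s numbers in code; the one-million-searches-per-day scenario is a hypothetical added for scale, not a figure from the text.

```python
# Back-of-the-envelope check of the per-search water figures cited above.
AI_SEARCH_ML = 25.0     # approximate figure for an AI-powered search
PLAIN_SEARCH_ML = 0.5   # "50 times less" for a conventional search

# Ratio between the two kinds of search
ratio = AI_SEARCH_ML / PLAIN_SEARCH_ML
print(f"An AI-powered search uses ~{ratio:.0f}x more water per query")

# Hypothetical scale-up: an organisation serving 1 million AI searches a day
daily_litres = 1_000_000 * AI_SEARCH_ML / 1000  # ml -> litres
print(f"1M AI searches/day ≈ {daily_litres:,.0f} litres of water daily")
```

With these figures, a million AI-powered searches a day would consume around 25,000 litres of water, which illustrates how quickly per-query differences compound at scale.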
It’s also important to think about the minerals that our infrastructure uses, because these don’t exist in isolation. Re-using components where possible, or recycling them when it’s not, can be an enormously efficient way to both avoid unnecessary purchases and reduce the environmental impact of AI.
As an example, consider lithium, a key component in electric cars. Lithium can require up to half a million liters of water and generate fifteen tonnes of CO2 for one tonne of metal. At the same time, there’s a geopolitical element to our resource usage: around a third of our nickel, which is used in heatsinks, used to come from Russia.
In many cases, it’s even possible to recover certain metals. For example, using pyrolysis you can obtain “black” copper from complex components, then, via electrolysis, separate the elements to recover pure copper, nickel, iron, palladium, titanium, silver and gold, turning e-waste into valuable assets. Although this will not be a considerable revenue stream, it’s a strong example of sustainability being a revenue generator rather than a cost center!
How can users make their AI processes more power-efficient?
It’s not enough for users to rely on datacenter operators and equipment manufacturers to reduce energy consumption and carbon footprints. All organizations need to be mindful of energy consumption and ensure their business is sustainable by design wherever possible.
To give a hands-on example, AI model training is rarely sensitive to latency, because it’s not usually a user-facing process. This means it can be done anywhere and, as a result, should be done in locations with greater access to renewable energy. A company that does model training in Canada rather than in Poland, for example, will have a carbon footprint approximately 85% lower.
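To see how grid location drives that difference, here is a minimal sketch. The grid carbon intensities and the training run’s energy use below are illustrative assumptions (Canada’s grid is largely hydro-powered, Poland’s largely coal-powered), not figures from the article; with these values the reduction comes out around 84%, close to the ~85% cited above.

```python
# Illustration of how grid carbon intensity drives a training run's footprint.
# All numbers are assumptions for the sake of the example.
GRID_INTENSITY = {      # grams of CO2 per kWh (illustrative values)
    "canada": 120,      # largely hydro-powered grid
    "poland": 750,      # largely coal-powered grid
}

TRAINING_RUN_KWH = 50_000   # hypothetical energy use of one training run

for region, intensity in GRID_INTENSITY.items():
    tonnes = TRAINING_RUN_KWH * intensity / 1_000_000  # g -> tonnes
    print(f"{region}: ~{tonnes:.1f} tonnes CO2 for the run")

reduction = 1 - GRID_INTENSITY["canada"] / GRID_INTENSITY["poland"]
print(f"Relative reduction from choosing the cleaner grid: ~{reduction:.0%}")
```

Because the energy used is identical in both cases, the footprint scales linearly with grid intensity, which is why simply relocating a latency-insensitive workload can cut emissions so sharply.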
At the same time, it’s important to be pragmatic about AI infrastructure. According to Intel PCF / OVHcloud LCA, an NVIDIA H100 has a cradle-to-gate (manufacturing) carbon footprint approximately three times higher than an NVIDIA L4, reinforcing how important it is for organizations to understand which GPUs they need for the job.
In many cases, the latest GPU will be important – in particular, when organizations are trying to bring applications to market quickly – but in some, a lower-spec and more sustainable GPU will do the same job in the same time.
AI sustainability: an exercise in attention to detail
Overall, there’s absolutely no doubt that our power and resource consumption is going to increase in future; that’s the price of progress. What we can do is ensure we set a precedent to make every single part of our AI supply chains and processes as efficient as possible from the get-go, so that future developments also incorporate this into their standard operating procedures.
If we can make fractional gains wherever possible, they’ll add up and make sure that today’s needs don’t compromise the world of tomorrow.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Apple Silently Acquires Two AI Startups To Enhance Vision Pro Realism And Strengthen Apple Intelligence With Smarter, Safer, And More Privacy-Focused Technology
Apple seems focused not only on boosting its work on the Vision Pro headset but also on escalating its AI ambitions by advancing its Apple Intelligence initiatives. To drive these efforts, it appears to be relying on a familiar technique: repeatedly acquiring smaller firms that are solely focused on excelling in a given technology. The company shows no sign of slowing down, as it has recently acquired two more companies to strengthen not only its talent pool but also its capacity for innovation through the new technology stacks these acquisitions bring.
Apple has now bought two more companies to help it strengthen its next wave of innovation and advance Apple Intelligence
MacGeneration was the first to uncover that Apple has recently taken over two additional companies, continuing its low-profile strategy of growing Apple Intelligence by slowly building up talent and technology. One of the acquired companies is TrueMeeting, a startup with expertise in AI avatars and facial scanning. All users need is an iPhone to scan their faces and see a hyper-realistic version of themselves created. While the official website has been taken down, the technology appears to align with Apple’s ambitions for the Vision Pro and its attempts at an immersive experience.
TrueMeeting’s main expertise lies in its CommonGround Human AI, which is meant to make virtual interactions feel more natural and human and can be integrated seamlessly with a wide range of applications. Although neither party has officially commented on the acquisition, it looks like Apple went ahead with it to further the development of Personas, the lifelike digital avatars in the Apple Vision Pro headset, and to refine the technology behind its spatial computing experience.
Apple has additionally acquired WhyLabs, a firm focused on improving the reliability of large language models (LLMs). It excels at dealing with issues such as bugs and AI hallucinations by helping developers maintain consistency and accuracy in AI systems. By taking over this company, Apple wants not only to advance Apple Intelligence further but also to ensure its tools are reliable and safe, which are core values of the company and something direly needed to integrate the models across varied platforms and deliver a consistent experience.
WhyLabs is not only focused on monitoring the performance of these models and ensuring reliability but also has expertise in providing safeguards that help combat misuse arising from security vulnerabilities. It can block harmful output from AI models, which again aligns completely with Apple’s stance on privacy and user trust. This acquisition is especially vital given the growing expansion of Apple Intelligence capabilities across the ecosystem.
Apple seems to be doubling down on its AI efforts, pursuing a more immersive experience without compromising on keeping the technology safe and the systems acting responsibly.