Elon Musk’s xAI lays off 500 workers amid strategy shift to specialist AI tutors: Report

Elon Musk’s artificial intelligence company, xAI, has laid off around 500 employees from its data annotation team, according to a report by Business Insider. The move, communicated late on Friday evening, affects workers who were responsible for training the company’s generative AI chatbot, Grok.

xAI lays off 500 data annotation staff

According to the report, xAI told staff in an email that it was reducing its focus on developing general AI tutors and would instead concentrate resources on specialist AI tutors. “After a thorough review of our Human Data efforts, we’ve decided to accelerate the expansion and prioritisation of our specialist AI tutors, while scaling back our focus on general AI tutor roles,” the message stated. “As part of this shift in focus, we no longer need most generalist AI tutor positions and your employment with xAI will conclude.”

Employees were told their system access would be revoked immediately; however, salaries would continue to be paid until the end of their contracts or until 30 November, the report adds.

Expansion of specialist AI roles

The company has reportedly made clear it is ramping up investment in specialist AI tutors across fields such as video games, web design, data science, medicine, and STEM. On 13 September, xAI announced plans to expand this team tenfold, saying the roles were “adding huge value”.

Notably, the layoffs follow recent reports that senior members of the data annotation team had their Slack accounts deactivated before the formal announcement was made.

In other news, earlier this month, Musk once again put the spotlight on artificial intelligence, as he highlighted the predictive abilities of X’s AI chatbot, Grok. On his official X account, the billionaire shared a link to a live benchmark platform, urging users to test Grok’s forecasting prowess.

In his first tweet, Musk wrote, “Download the @Grok app and try Grok Expert mode. For serious predictions, Grok Heavy is the best.” He followed up with, “The ability to predict the future is the best measure of intelligence.”

The link pointed to FutureX, a platform designed to evaluate how well large language models (LLMs) can predict real-world events. Developed by Jiashuo Liu and collaborators, FutureX presents AI agents with tasks spanning politics, economics, sports and cultural trends, scoring their predictions in real time.
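FutureX’s exact scoring method is not described here, but live forecasting benchmarks typically grade probabilistic predictions against events once they resolve. A minimal sketch, assuming a standard Brier-score metric (the function and sample numbers below are illustrative, not FutureX’s published methodology):

```python
# A minimal sketch of how a live forecasting benchmark might score a
# model's probabilistic predictions once real-world events resolve.
# The Brier score is a standard forecasting metric; using it here is
# an assumption for illustration, not FutureX's documented method.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities (0.0-1.0)
    and resolved outcomes (0 or 1). Lower is better; 0.0 is perfect."""
    if len(forecasts) != len(outcomes):
        raise ValueError("each forecast needs a resolved outcome")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: a model assigns probabilities to four events,
# which later resolve as 1 (happened) or 0 (did not happen).
predictions = [0.9, 0.2, 0.7, 0.5]
results = [1, 0, 0, 1]
print(f"Brier score: {brier_score(predictions, results):.3f}")  # 0.198
```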




AI, art, and sound converge in holosculpture interactive artwork

HoloSculpture integrates AI, display technology, and sound

HoloSculpture is an interactive artwork developed by Turbulence Lab and Hamza Kırbaş that integrates artificial intelligence, anamorphic display technology, and sound within a single physical object. Conceived as both a sculptural form and a digital interface, the project explores how AI can be embodied in a tangible medium.

The piece incorporates a 4D anamorphic display, which generates a sense of depth and spatial presence, and a studio-grade sound system that enhances audiovisual interaction. Activated by voice, HoloSculpture engages in dialogue, uses gesture-like movements, and presents shifting visual compositions, creating a dynamic relationship between user and object.


all images courtesy of Hamza Kırbaş and Turbulence Lab

HoloSculpture functions as both sculpture and digital interface

Each unit is produced as a signed and numbered edition and includes three distinct AI ‘characters,’ programmed with unique modes of response and expression. In addition, owners receive three certified digital artworks associated with the sculpture. Through this combination, HoloSculpture by Turbulence Lab and designer Hamza Kırbaş operates simultaneously as a functional device and a collectible artwork, positioning itself at the intersection of design, technology, and contemporary art.

the work functions as both sculpture and digital interface
a 4D anamorphic display creates depth and spatial presence
studio-grade sound enhances the audiovisual experience
shifting visuals create a dynamic interaction with the viewer

Latin America’s Digital Non-Alignment Gains Ground in the AI Crossfire

Washington offers AI built for dominance; Beijing touts cooperation and infrastructure sharing. Latin America, squeezed between standards, chips, and geopolitics, is weighing a third path—one that prizes interoperability, cultural trust, and sovereignty over lock-in. The decision will shape generations.

Two Competing Blueprints

This past summer, the United States unveiled Winning the Race: America’s AI Action Plan, a strategy that describes artificial intelligence as a zero-sum contest. The accompanying executive order launched an AI Exports Program promising a full U.S. stack—from chips and cloud to models and guardrails—and urging allies to adopt American standards. Latin American officials interviewed by Americas Quarterly compared it to a “digital Monroe Doctrine,” a bid to create spheres of influence through contracts and code.

China countered with its Action Plan on Global Governance of Artificial Intelligence, calling AI a “global public good.” Beijing pledged UN-based rule-making, knowledge transfer, and infrastructure support for the Global South. Premier Li Qiang stressed cooperation over competition. The pitch is bolstered by cheaper options and tools like DeepSeek that feel attainable for governments under budget pressure. But, as policymakers told Americas Quarterly, buying into a Chinese ecosystem wholesale could swap dependencies—updates, patches, and data pipelines—from Washington to Beijing.

Both visions are designed to bind partners tightly. Latin America must now decide whether to sign up or to draft its own.

Pressure Points in the Region

For Brazil, the tightrope is real. It has codified an AI ethics framework and built research centers like CPQD, yet still relies heavily on U.S. clouds and developer ecosystems while courting Chinese financing for telecoms and manufacturing. Officials in Brasília told Americas Quarterly the challenge is to keep space for homegrown innovation without antagonizing either superpower.

Mexico faces sharper constraints. A 2,000-mile border with the U.S. makes Chinese infrastructure a national security red flag for Washington. Still, Mexican industry remains deeply integrated into Chinese supply chains, and its digital economy requires multiple partners to prevent shortages in chips and connectivity.

Chile, with ambitions to be a neutral regional hub, is marketing itself as a meeting point for data and cloud. Argentina’s courts, meanwhile, have already slowed state surveillance tools on privacy and due-process grounds, underscoring a civic culture that shapes technology adoption. Analysts told Americas Quarterly the diversity of political systems and industrial bases makes a single alignment impossible—hence the appeal of a third way.

Digital Non-Alignment as Strategy

That third way is what scholars and officials increasingly call digital non-alignment. The idea is to create interoperable standards and diversified infrastructure that let countries draw from both ecosystems while maintaining sovereignty. Some of the pieces are already visible.

Chile’s CENIA is leading LatamGPT, a regional model project whose training data and roadmaps are steered by Latin American institutions. The hardware mix is deliberately plural: U.S. hyperscalers like AWS alongside Huawei cloud and regional providers, stitched together with the new Google-backed Humboldt undersea cable linking South America to Asia-Pacific.

Brazil’s $4 billion AI Plan aims to promote sovereignty through domestic models, public financing via BNDES, and rule-setting via the G20 and UN, while still allowing room for foreign vendors to prevent lock-in.

Scaled up, the region could form a Latin American AI consortium to pool research funds, share scarce GPU clusters, and co-own strategic datasets—from agriculture yields to climate models—under regional governance. Joint procurement would prevent any single country from being cornered by exclusivity clauses. Institutions like Mercosur, the Pacific Alliance, and the IDB could anchor a standards spine grounded in Latin American values, not imported templates. Officials told Americas Quarterly that such a move would give Brasília, Santiago, and Mexico City more leverage in negotiations with both powers.

Culture, Trust, and a Homegrown Market

Sovereignty is not just hardware and code. It is also a social license. Latin America’s legal culture leans toward human-centered tech: Brazil’s LGPD guarantees the right to review automated decisions; Chile’s AI policy is undergoing public consultation; Argentine courts suspended Buenos Aires’ facial recognition system on privacy grounds. These instincts make the region fertile ground for trustworthy AI systems with contestability and explainability built in from the start.

That ethic could become a market advantage. While U.S. and Chinese firms chase massive frontier models, Latin American developers can focus on practical breakthroughs: credit scoring with appeal rights, farm forecasting tuned for smallholders, logistics that cut port delays, or health triage tools that respect privacy. Legal scholars told Americas Quarterly that cooperative traditions—agricultural co-ops, community health networks, solidarity finance—can guide AI adoption with less resistance and clearer public value.

Culture is no guarantee. Procurement reform, data stewardship, and incentives that reward real outcomes are still needed. But trust itself can be an economic niche, one that makes Latin American AI distinct rather than derivative.


The Clock Is Ticking

The window for genuine choice is closing fast. Washington and Beijing are already hardening lanes through export controls, cloud credits, and preferred-vendor partnerships. Countries that hesitate may soon find themselves choosing not between two philosophies, but between leftovers once the map is drawn.

A regional strategy is still within reach: build an interoperable stack, treat data as a strategic asset with accountable governance, federate compute across borders, and leverage both ecosystems without being locked into either. As officials told Americas Quarterly, the real trap is believing the only options are American dominance or Chinese cooperation.


Latin America can choose differently—crafting standards that fit its development goals, embedding trust in technology, and keeping sovereignty in its own hands. The decision will determine whether AI becomes another story of dependency, or the moment the region writes its own script in the digital century.




Artificial intelligence is reshaping healthcare, but at what environmental cost?


Amidst widespread promotion of artificial intelligence (AI), the environmental impacts are not receiving enough scrutiny, including from the health sector, writes Jason Staines.

When Hume City Council recently rejected part of a proposed $2 billion data centre precinct in Melbourne’s north, it put the spotlight on the largely overlooked environmental costs of artificial intelligence (AI) and the communities most at risk of bearing these costs.

The council had originally approved planning permits for the Merrifield data centre precinct, but rescinded support for one facility after residents and campaigners raised concerns about energy and water use, local infrastructure, and consultation with Traditional Owners. The backlash may be a sign that policymakers are starting to consider AI’s ecological footprint.

As AI is rolled out in increasingly sensitive areas of public life, including healthcare, policing, and welfare, governments have focused on the need for ethical, safe, and responsible deployment. There are also fierce debates over copyright and AI’s impact on jobs.

As important as these discussions are, environmental consequences have rarely been part of the equation to date.

Missing piece of the ‘responsibility’ puzzle

Governments and stakeholders have been busy discussing how AI might help lift Australia’s sagging productivity; Treasurer Dr Jim Chalmers wrote earlier this month that he expects AI to “completely transform our economy”, and that he is optimistic AI “will be a force for good”.

However, the Treasurer and others promoting AI are less vocal about the technology’s negative externalities.

The Productivity Commission’s interim report, Harnessing data and digital technology, pointed to AI’s potential to “improve service delivery, lift productivity and help solve complex social and environmental challenges”. But it largely overlooked AI’s environmental impacts, saying “few of AI’s risks are wholly new issues” and that higher energy demand is par for the course with “the information technology revolution”.

Likewise, a Therapeutic Goods Administration (TGA) consultation paper on the regulation of AI in medical device software omitted any discussion of environmental impact, despite recommending more transparency and accountability in the way AI tools are assessed and monitored.

The absence matters. As AI becomes embedded in essential services such as healthcare, its environmental footprint becomes not just a technical issue, but a public health and equity concern, particularly for communities already facing water insecurity or climate risk.

In a sector that is trying to decarbonise and reduce its impact, uncritical adoption of AI could prove counterproductive.

There are serious equity questions when governments invest in digital transformation strategies without accounting for the cultural impacts of water-intensive technologies such as AI (Jack Kinny / Shutterstock).

First Nations perspectives

For some First Nations communities, water scarcity is not theoretical; it is a daily reality.

In remote and regional Australia, many First Nations peoples face ongoing systemic barriers to safe, reliable, and culturally appropriate water access. These needs extend far beyond infrastructure, encompassing deeply held cultural and spiritual connections with water.

Research conducted by CSIRO between 2008 and 2010 in the Daly (NT) and Fitzroy (WA) river catchments was the first of its kind to document Indigenous social and economic values tied to aquatic ecosystems, linking river flows directly to Indigenous livelihoods, resource use, and planning processes.

The Northern Australia Water Resource Assessment reinforces these insights, framing rivers as vessels of sustenance, heritage, and governance, and asserting Traditional Owners as inherently central to water and development planning.

Yet Australia’s AI reform dialogue has mostly omitted these cultural linkages, even when it does consider AI’s consumption of resources. In the context of AI-powered healthcare, this omission is especially troubling.

Innovations celebrated for improving diagnostics or service delivery often rely on energy- and water-intensive data systems, whose environmental toll is seldom disclosed or evaluated through an equity lens. When AI is embedded in healthcare services for Indigenous populations, with no accounting for its resource footprint, those least consulted risk bearing the heaviest cost.

This raises serious equity questions when governments invest in digital transformation strategies without accounting for water-intensive technologies such as AI.

As the United Nations Environment Programme has noted, policymakers must ensure the social, ethical, and environmental aspects of AI use are considered, not just the economic benefits.

“We need to make sure the net effect of AI on the planet is positive before we deploy the technology at scale,” said Golestan (Sally) Radwan, Chief Digital Officer of the United Nations Environment Programme.

Just how thirsty is AI?

Data centres remain one of the fastest-growing consumers of global electricity, using approximately 460 terawatt-hours in 2022, a figure projected to more than double by 2026 as AI and cryptocurrency activity increases.

These facilities often depend on water-intensive cooling systems, such as evaporative methods, which can consume large volumes of potable water and exacerbate stress on local supply. With AI workloads driving higher server densities and increased heat output, water demand for cooling is rising sharply, especially for hyperscale data centres, making water scarcity a growing operational risk.

For context, a study from the University of California, Riverside calculated that training GPT‑3 in Microsoft’s advanced US data centres evaporated about 700,000 litres of clean freshwater, a sobering figure for a single model’s development phase.

AI is being promoted as a climate solution, through better modelling, emissions tracking, and even water management optimisation. But the industry’s own resource use can directly undermine those goals.

As the OECD notes: “AI-enabled products and services are creating significant efficiency gains, helping to manage energy systems and achieve the deep cuts in greenhouse gas (GHG) emissions needed to meet net-zero targets. However, training and deploying AI systems can require massive amounts of computational resources with their own environmental impacts.”

According to one report, data centres in the US consumed about 4.4 percent of the country’s electricity in 2023, which could nearly triple to 12 percent by 2028. Meanwhile, Google’s US data centres went from using 12.7 billion litres of cooling water in 2021 to over 30 billion litres just three years later, and UC Riverside estimates that running just 20 to 50 ChatGPT queries uses roughly half a litre of fresh water.
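Those cooling-water and per-query figures can be sanity-checked with simple arithmetic. A back-of-envelope sketch using only the numbers cited above, all of which are published estimates rather than independent measurements:

```python
# Back-of-envelope arithmetic using the estimates cited in this article.
# These are published figures, not measurements; treat the results as
# orders of magnitude only.

google_water_2021_litres = 12.7e9  # Google US data centre cooling water, 2021
google_water_2024_litres = 30e9    # "over 30 billion litres" three years later

growth = google_water_2024_litres / google_water_2021_litres
print(f"Google cooling-water growth, 2021-2024: {growth:.1f}x")  # ~2.4x

# UC Riverside: 20 to 50 ChatGPT queries use roughly half a litre of water.
low_ml = 0.5 / 50 * 1000   # more queries per half-litre -> less water each
high_ml = 0.5 / 20 * 1000  # fewer queries per half-litre -> more water each
print(f"Implied water per query: {low_ml:.0f}-{high_ml:.0f} ml")  # 10-25 ml
```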

In response, the global tech sector has invested heavily in green branding. Microsoft, for example, has publicly committed to being “carbon negative and water positive by 2030”.

Notable absence

Australia’s healthcare system is rapidly adopting AI across clinical, administrative, and operational domains.

From diagnostic imaging to digital scribes, clinical decision support, and personalised treatment plans, AI is being held up as a core enabler of future-ready care. Federal reports, such as Our Gen AI Transition, point to AI’s potential to improve efficiency and free up clinicians for more patient-centred work.

But that optimism comes with a caveat: the integration of AI into healthcare is unfolding with limited consideration of its environmental toll. The healthcare sector is already one of the most resource-intensive in Australia, responsible for around seven percent of national greenhouse gas emissions. AI risks adding a new layer of resource demand.

While regulatory bodies are beginning to grapple with questions of safety, accountability, and clinical transparency, environmental impacts remain conspicuously absent from most discussions.

The TGA, for instance, has flagged a need for clearer regulation of AI in medical software, noting that some tools, such as generative AI scribes, may already be operating outside existing rules if they suggest diagnoses or treatments without approval. Yet neither the TGA’s consultation paper nor its updated guidance documents meaningfully address the carbon or water costs of these tools.

According to Consumer Health Forum CEO Dr Elizabeth Deveny, the TGA’s review surfaced critical issues around trust, transparency, and consent, from hidden AI tools embedded in routine software, to confusion about who is responsible when things go wrong.

She notes: “Trust is the real product being managed.” Yet environmental transparency is foundational to that trust, particularly when AI is deployed into hospitals and clinics already experiencing the impacts of climate and infrastructure strain.

Equally important is the broader policy context. A Deeble Institute Perspectives Brief cautions that AI’s success in healthcare hinges on transparent and nationally consistent implementation frameworks, shaped through co-design with clinicians and consumers.

But such frameworks must also consider the material cost of AI, not just its clinical or administrative promise. Otherwise, we risk solving one set of problems, such as workforce strain or wait times, while silently compounding others, including water insecurity, emissions, and energy grid pressure.

Global pressure is building

In Europe, data centre water use is already triggering regulatory scrutiny. The EU is finalising a Water Resilience Strategy that will impose usage limits on tech companies, with a focus on AI-related growth.

“The IT sector is suddenly coming to Brussels and saying we need a lot of high-quality water,” said Sergiy Moroz of the European Environmental Bureau. “Farmers are coming and saying, look, we cannot grow the food without water.”

Even the United Nations has weighed in, stating bluntly: “AI has an environmental problem”.

The UNEP emphasises that countries must move beyond simple transparency and “integrate sustainability goals into their digitalization and AI strategies”.

The new Coalition for Environmentally Sustainable AI, co‑led by the UNEP, brings together governments, academia, industry and civil society to ensure that “the net effect of AI on the planet is positive”.

Communities’ concerns

The Hume Council’s data centre decision is not an isolated objection. As Australia rapidly expands its digital infrastructure to support AI and other emerging technologies, the communities asked to host that infrastructure will increasingly demand to be heard.

Data centres have a real and often disruptive presence: high heat output, constant noise from cooling systems, diesel backup generators, and heavy water and energy use. Once operational, they offer few jobs and limited direct benefit to the communities that surround them.

Late last year, the Senate Select Committee on Adopting Artificial Intelligence observed that stakeholders provided extensive evidence on AI’s environmental footprint, from energy use and greenhouse gas emissions to water consumption, and also recognised AI’s potential to help mitigate these same challenges.

Yet, despite this, only one of the report’s 13 recommendations addressed the environment, and even that was disappointingly vague:

“That the Australian Government take a coordinated, holistic approach to managing the growth of AI infrastructure in Australia to ensure that growth is sustainable, delivers value for Australians and is in the national interest.”

As Dr Bronwyn Cumbo, a transdisciplinary social researcher at the University of Technology Sydney, writes in The Conversation, Australia has a unique opportunity to embed genuine community participation in the design and planning of its digital infrastructure.

“To avoid amplifying the social inequities and environmental challenges of data centres,” she argues, “the tech industry and governments across Australia need to include the communities who will live alongside these crucial pieces of digital infrastructure.”

Public trust in AI cannot be divorced from the physical and environmental contexts in which it operates. If the benefits of AI are to be shared, then the burdens, from emissions and water use to noise and land occupation, must be acknowledged and addressed. This is especially true in healthcare, where ethical use and public confidence are paramount.

The Hume Council vote is a reminder that local communities are paying attention. Whether policymakers and those with an interest in promoting wider uptake of AI are listening is another matter. Likewise, there are questions about whether the health sector is doing enough to investigate and highlight the potential environmental impacts.

Jason Staines is a communications consultant with a background spanning journalism, government, and strategic advisory roles. He has reported for outlets including AAP, Dow Jones, The Sydney Morning Herald and The Age, and later worked in government as a Senior Research Officer at the Australian Treasury’s post in New Delhi and as an analyst in Canberra. He holds a Master of International Relations from the University of Sydney and a Bachelor of Arts (Communication) from the University of Technology Sydney.

The article was written in his capacity as a Croakey editor and journalist.

This article was originally published by Croakey.



