
Artificial Intelligence & Technology Institute Launched at Fairfield University, Stressing Responsible AI Use — Connecticut by the Numbers


Along those lines, school officials highlighted “the ways in which AI can make things faster, more individualized, and more flexible, and thereby benefit myriad industries.” As part of a launch event in late April, five students showcased their AI-generated short films, produced as part of their coursework.

As the AI-generated films, tech-heavy coursework, and prominent speakers suggest, Fairfield Dolan continues to lead and explore opportunities in the AI space. Plans for the institute include events, lectures, and campus visits from speakers with expertise in AI and technology, along with “supporting impactful research projects such as providing advice and consultation, including both academic and practical research projects to improve the awareness of the latest trends in AI field, as well as the day-to-day productivity.”

“We are dedicated to building practical, freely available tools and offering consultation, positioning Fairfield Dolan as a leader in putting technology ethically in service of the common good,” Dr. Tao said. The website echoes that positioning, noting that “the Institute also provides advising not only to promote faculty to use AI and analytics, but also for students to use AI and technology responsibly.”





Artificial intelligence is reshaping healthcare, but at what environmental cost?


Amidst widespread promotion of artificial intelligence (AI), the environmental impacts are not receiving enough scrutiny, including from the health sector, writes Jason Staines.

When Hume City Council recently rejected part of a proposed $2 billion data centre precinct in Melbourne’s north, it put the spotlight on the largely overlooked environmental costs of artificial intelligence (AI) and the communities most at risk of bearing these costs.

The council had originally approved planning permits for the Merrifield data centre precinct, but rescinded support for one facility after residents and campaigners raised concerns about energy and water use, local infrastructure, and consultation with Traditional Owners. The backlash may be a sign that policymakers are starting to consider AI’s ecological footprint.

As AI is rolled out in increasingly sensitive areas of public life, including healthcare, policing, and welfare, governments have focused on the need for ethical, safe, and responsible deployment. There are also fierce debates over copyright and AI’s impact on jobs.

As important as these discussions are, environmental consequences have rarely been part of the equation to date.

Missing piece of the ‘responsibility’ puzzle

Governments and stakeholders have been busy discussing how AI might help lift Australia’s sagging productivity; Treasurer Dr Jim Chalmers wrote earlier this month that he expects AI to “completely transform our economy”, and that he is optimistic AI “will be a force for good”.

However, the Treasurer and others promoting AI are less vocal about the technology’s negative externalities.

The Productivity Commission’s interim report, Harnessing data and digital technology, pointed to AI’s potential to “improve service delivery, lift productivity and help solve complex social and environmental challenges”. But it largely overlooked AI’s environmental impacts, saying “few of AI’s risks are wholly new issues” and that higher energy demand is par for the course with “the information technology revolution”.

Likewise, a Therapeutic Goods Administration (TGA) consultation paper on the regulation of AI in medical device software omitted any discussion of environmental impact, despite recommending more transparency and accountability in the way AI tools are assessed and monitored.

The absence matters. As AI becomes embedded in essential services such as healthcare, its environmental footprint becomes not just a technical issue, but a public health and equity concern, particularly for communities already facing water insecurity or climate risk.

In a sector that is trying to decarbonise and reduce its impact, uncritical adoption of AI could prove counterproductive.


First Nations perspectives

For some First Nations communities, water scarcity is not theoretical; it is a daily reality.

In remote and regional Australia, many First Nations peoples face ongoing systemic barriers to safe, reliable, and culturally appropriate water access. Their needs extend far beyond infrastructure to deeply held cultural and spiritual connections with water.

Research conducted by CSIRO between 2008 and 2010 in the Daly (NT) and Fitzroy (WA) river catchments was the first of its kind to document Indigenous social and economic values tied to aquatic ecosystems, linking river flows directly to Indigenous livelihoods, resource use, and planning processes.

The Northern Australia Water Resource Assessment reinforces these insights, framing rivers as vessels of sustenance, heritage, and governance, and asserting Traditional Owners as inherently central to water and development planning.

Yet Australia’s AI reform dialogue has mostly omitted these cultural linkages, even when it does consider AI’s consumption of resources. In the context of AI-powered healthcare, this omission is especially troubling.

Innovations celebrated for improving diagnostics or service delivery often rely on energy- and water-intensive data systems, whose environmental toll is seldom disclosed or evaluated through an equity lens. When AI is embedded in healthcare services for Indigenous populations, with no accounting for its resource footprint, those least consulted risk bearing the heaviest cost.

This raises serious equity questions when governments invest in digital transformation strategies without accounting for water-intensive technologies such as AI.

As the United Nations Environment Programme has noted, policymakers must ensure the social, ethical, and environmental aspects of AI use are considered, not just the economic benefits.

“We need to make sure the net effect of AI on the planet is positive before we deploy the technology at scale,” said Golestan (Sally) Radwan, Chief Digital Officer of the United Nations Environment Programme.

Just how thirsty is AI?

Data centres remain one of the fastest-growing consumers of global electricity, using approximately 460 terawatt‑hours in 2022, and projected to more than double by 2026 with increasing AI and cryptocurrency activity.
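To get a sense of what that projection means year on year, here is a quick back-of-envelope sketch; the 2022 baseline and the doubling are the figures cited above, and the calculation itself is only illustrative:

```python
# Implied compound annual growth if data-centre electricity use
# (~460 TWh in 2022) at least doubles by 2026, per the projection above.
BASE_TWH = 460   # 2022 consumption cited above
YEARS = 4        # 2022 -> 2026

doubled = 2 * BASE_TWH
cagr = (doubled / BASE_TWH) ** (1 / YEARS) - 1  # rate that exactly doubles demand in 4 years
print(f"Doubling over {YEARS} years implies at least {cagr:.1%} growth per year")
# -> at least 18.9% growth per year
```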

These facilities often depend on water-intensive cooling systems, such as evaporative methods, which can consume large volumes of potable water and exacerbate stress on local supply. With AI workloads driving higher server densities and increased heat output, water demand for cooling is rising sharply, especially for hyperscale data centres, making water scarcity a growing operational risk.

For context, a study from the University of California, Riverside calculated that training GPT‑3 in Microsoft’s advanced US data centres evaporated about 700,000 litres of clean freshwater, a sobering figure for a single model’s development phase.

AI is being promoted as a climate solution, through better modelling, emissions tracking, and even water management optimisation. But the industry’s own resource use can directly undermine those goals.

As the OECD notes: “AI-enabled products and services are creating significant efficiency gains, helping to manage energy systems and achieve the deep cuts in greenhouse gas (GHG) emissions needed to meet net-zero targets. However, training and deploying AI systems can require massive amounts of computational resources with their own environmental impacts.”

According to one report, data centres in the US consumed about 4.4 percent of the country’s electricity in 2023, which could nearly triple to 12 percent by 2028. Meanwhile, Google’s US data centres went from using 12.7 billion litres of cooling water in 2021 to over 30 billion litres just three years later, and UC Riverside estimates that running just 20 to 50 ChatGPT queries uses roughly half a litre of fresh water.
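Those per-query figures are easy to turn into rough totals. A minimal sketch, assuming the UC Riverside estimate holds; the daily query volume used for scaling is an illustrative assumption, not a figure reported anywhere in this article:

```python
# Implied water use per ChatGPT query, from the UC Riverside estimate
# that 20-50 queries consume roughly half a litre of fresh water.
WATER_PER_BATCH_L = 0.5
QUERIES_LOW, QUERIES_HIGH = 20, 50

per_query_high = WATER_PER_BATCH_L / QUERIES_LOW   # 0.025 L (25 mL)
per_query_low = WATER_PER_BATCH_L / QUERIES_HIGH   # 0.010 L (10 mL)
print(f"Per query: {per_query_low*1000:.0f}-{per_query_high*1000:.0f} mL")

# Illustrative scale-up only: one billion queries per day is an
# assumption for demonstration, not a reported figure.
DAILY_QUERIES = 1_000_000_000
print(f"At {DAILY_QUERIES:,} queries/day: "
      f"{DAILY_QUERIES*per_query_low/1e6:.0f}-{DAILY_QUERIES*per_query_high/1e6:.0f} "
      f"million litres/day")
```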

In response, the global tech sector has invested heavily in green branding. Microsoft, for example, has publicly committed to being “carbon negative and water positive by 2030”.

Notable absence

Australia’s healthcare system is rapidly adopting AI across clinical, administrative, and operational domains.

From diagnostic imaging to digital scribes, clinical decision support, and personalised treatment plans, AI is being held up as a core enabler of future-ready care. Federal reports, such as Our Gen AI Transition, point to AI’s potential to improve efficiency and free up clinicians for more patient-centred work.

But that optimism comes with a caveat: the integration of AI into healthcare is unfolding with limited consideration of its environmental toll. The healthcare sector is already one of the most resource-intensive in Australia, responsible for around seven percent of national greenhouse gas emissions. AI risks adding a new layer of resource demand.

While regulatory bodies are beginning to grapple with questions of safety, accountability, and clinical transparency, environmental impacts remain conspicuously absent from most discussions.

The TGA, for instance, has flagged a need for clearer regulation of AI in medical software, noting that some tools, such as generative AI scribes, may already be operating outside existing rules if they suggest diagnoses or treatments without approval. Yet neither the TGA’s consultation paper nor its updated guidance documents meaningfully address the carbon or water costs of these tools.

According to Consumers Health Forum CEO Dr Elizabeth Deveny, the TGA’s review surfaced critical issues around trust, transparency, and consent, from hidden AI tools embedded in routine software to confusion about who is responsible when things go wrong.

She notes: “Trust is the real product being managed.” Yet environmental transparency is foundational to that trust, particularly when AI is deployed into hospitals and clinics already experiencing the impacts of climate and infrastructure strain.

Equally important is the broader policy context. A Deeble Institute Perspectives Brief cautions that AI’s success in healthcare hinges on transparent and nationally consistent implementation frameworks, shaped through co-design with clinicians and consumers.

But such frameworks must also consider the material cost of AI, not just its clinical or administrative promise. Otherwise, we risk solving one set of problems, such as workforce strain or wait times, while silently compounding others, including water insecurity, emissions, and energy grid pressure.

Global pressure is building

In Europe, data centre water use is already triggering regulatory scrutiny. The EU is finalising a Water Resilience Strategy that will impose usage limits on tech companies, with a focus on AI-related growth.

“The IT sector is suddenly coming to Brussels and saying we need a lot of high-quality water,” said Sergiy Moroz of the European Environment Bureau. “Farmers are coming and saying, look, we cannot grow the food without water.”

Even the United Nations has weighed in, stating bluntly: “AI has an environmental problem”.

The UNEP emphasises that countries must move beyond simple transparency and “integrate sustainability goals into their digitalization and AI strategies”.

The new Coalition for Environmentally Sustainable AI, co‑led by the UNEP, brings together governments, academia, industry and civil society to ensure that “the net effect of AI on the planet is positive”.

Communities’ concerns

The Hume Council’s data centre decision is not an isolated objection, and as Australia rapidly expands its digital infrastructure to support AI and other emerging technologies, the communities asked to host this infrastructure will increasingly demand to be heard.

Data centres have a real and often disruptive presence: high heat output, constant noise from cooling systems, diesel backup generators, and heavy water and energy use. Once operational, they offer few jobs and limited direct benefit to the communities that surround them.

Late last year, the Senate Select Committee on Adopting Artificial Intelligence observed that stakeholders provided extensive evidence on AI’s environmental footprint, from energy use and greenhouse gas emissions to water consumption, while also recognising AI’s potential to help mitigate these same challenges.

Yet, despite this, only one of the report’s 13 recommendations addressed the environment, and even that was disappointingly vague:

“That the Australian Government take a coordinated, holistic approach to managing the growth of AI infrastructure in Australia to ensure that growth is sustainable, delivers value for Australians and is in the national interest.”

As Dr Bronwyn Cumbo, a transdisciplinary social researcher at the University of Technology Sydney, writes in The Conversation, Australia has a unique opportunity to embed genuine community participation in the design and planning of its digital infrastructure.

“To avoid amplifying the social inequities and environmental challenges of data centres,” she argues, “the tech industry and governments across Australia need to include the communities who will live alongside these crucial pieces of digital infrastructure.”

Public trust in AI cannot be divorced from the physical and environmental contexts in which it operates. If the benefits of AI are to be shared, then the burdens, from emissions and water use to noise and land occupation, must be acknowledged and addressed. This is especially true in healthcare, where ethical use and public confidence are paramount.

The Hume Council vote is a reminder that local communities are paying attention. Whether policymakers and those with an interest in promoting wider uptake of AI are listening is another matter. Likewise, there are questions about whether the health sector is doing enough to investigate and highlight the potential environmental impacts.

Jason Staines is a communications consultant with a background spanning journalism, government, and strategic advisory roles. He has reported for outlets including AAP, Dow Jones, The Sydney Morning Herald and The Age, and later worked in government as a Senior Research Officer at the Australian Treasury’s post in New Delhi and as an analyst in Canberra. He holds a Master of International Relations from the University of Sydney and a Bachelor of Arts (Communication) from the University of Technology Sydney.

The article was written in his capacity as a Croakey editor and journalist.

This article was originally published by Croakey.






AI engineers are being deployed as consultants and getting paid $900 per hour


AI engineers are being paid a premium to work as consultants to help large companies troubleshoot, adopt, and integrate AI with enterprise data—something traditional consultants may not be able to do.

PromptQL, an enterprise AI platform created by San Francisco-based developer tooling company Hasura, is doling out $900-per-hour wages to its engineers tasked with building and deploying AI agents to analyze internal company data using large language models (LLMs).

The price point reflects the “intuition” and technical skills needed to keep pace with a rapidly changing technology, Tanmai Gopal, PromptQL’s cofounder and CEO, told Fortune.

Gopal said the company’s hourly wage for AI engineers working as consultants is “aligned with the going rate that you would see for AI engineers,” but that “it feels like we should be increasing that price even more,” as customers aren’t pushing back on the price PromptQL sets.

“MBA types… are very strategic thinkers, and they’re smart people, but they don’t have an intuition for what AI can do,” Gopal said.

Gopal declined to disclose any customers that have used PromptQL to integrate AI into their businesses, but said the list includes “the largest networking company” as well as top fast food, e-commerce, grocery and food delivery tech companies, and “one of the largest B2B companies.”

Oana Iordăchescu, founder of Deep Tech Recruitment, a boutique agency focused on AI, quantum, and frontier tech talent, told Fortune enterprises and startups are competing for senior AI engineers at “unprecedented rates,” which is leading to wage inflation.

Iordăchescu said the wages are priced “far above even Big Four consulting partners,” who often make around $400 to $600 per hour.

“Traditional management consultants can design AI strategies, but most lack the hands-on technical expertise to debug models, build pipelines, or integrate systems into legacy infrastructure,” Iordăchescu said. “AI engineers working as consultants bridge that gap. They don’t just advise, they execute.”

AI consultant Rob Howard told Fortune he wasn’t surprised at “mind-blowing numbers” like a $900-per-hour wage for AI consulting work, as he’s seen a price premium on projects that have an AI component while companies rush to adopt it into their businesses.

Howard, who is also the CEO of Innovating with AI, a program that teaches people to become AI consultants in their own right, said some of his students have sold AI trainings or two-day boot camps that net out to $400 or $500 per hour.

“The pricing for this is high in general across the market, because it’s in demand and new and relatively rare to find, you know, people who are qualified to do it,” Howard said.

A recent report published by MIT’s NANDA initiative revealed that while generative AI holds promise for enterprises, 95% of initiatives to drive rapid revenue growth failed. Aditya Challapally, the lead author of the report and a research contributor to project NANDA at MIT, previously told Fortune the pilot-program failures were due not to the quality of the AI models but to the “learning gap” for both tools and organizations.

“Some large companies’ pilots and younger startups are really excelling with generative AI,” Challapally told Fortune earlier this month. Startups led by 19- or 20-year-olds, for example, “have seen revenues jump from zero to $20 million in a year,” he said. 

“It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools,” he added.

Jim Johnson, an AI consulting executive at AnswerRocket, told Fortune the $900-per-hour wage “makes perfect sense” when considering companies have spent two years experimenting with AI and “have little to show for it.”

“Now the pressure’s on to demonstrate real progress, and they’re discovering there’s no easy button for enterprise AI,” Johnson said. “This premium won’t last forever, but right now companies are essentially buying insurance against joining that 95% failure statistic.”

Gopal said PromptQL’s business model to have AI engineers serve as both consultants and forward deployed engineers (FDEs)—hybrid sales and engineering jobs tasked with integrating AI solutions—is what makes their employees so valuable.

This new wave of AI engineer consultants is shaking up the consulting industry, Gopal said. But he sees his company as helping shift traditional consulting partnership expectations and culture. 

“The demand is there,” he said. “I think what makes it hard is that leaders, especially in some of the established companies… are kind of more used to the traditional style of consultants.” 

Gopal said the challenge for his company will be to “drive that leadership and education, and saying, ‘Folks, there is a new way of doing things.’”






ChatGPT causes outrage after it refuses to do one task for users


As the days go by, ChatGPT is becoming more and more of a useful tool for humans.

Whether it be asking for information on a subject, help in drafting an email, or even opinions on fashion choices, AI is becoming an essential part of some people’s lifestyles in 2025.

People are having genuine conversations with programmes like ChatGPT for advice on life situations, and while it will answer almost anything you ask, there is one request it refuses to carry out.

The popular AI chatbot’s capabilities seemed endless, but this newly discovered limit has baffled people on social media, who don’t understand why it says no to this one request.

To nobody’s surprise, this confusion online has stemmed from a viral TikTok video.


All the user did was demand that ChatGPT count to a million – but how did the chatbot respond?

“I know you just want that counting, but the truth is counting all the way to a million would literally take days,” it replied.

While he kept insisting, the bot kept turning the request down, with the voice saying that it ‘isn’t really practical’, ‘even for me’.

The hilarious exchange included the bot saying that it’s ‘not really possible’ either, saying that it simply won’t be able to carry the prompt out for him.

Replies included the bot repeatedly saying that it ‘understood’ and ‘heard’ what he was saying, but his frustrations grew as the clip went on.
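For what it’s worth, the bot’s estimate stands up to a quick sanity check. A minimal sketch, assuming an average of one second to say each number (generous, given how long six- and seven-digit numbers take to read aloud):

```python
# Rough check of the claim that counting to a million "would literally take days".
SECONDS_PER_NUMBER = 1.0   # illustrative assumption; long numbers take longer
TARGET = 1_000_000

days = TARGET * SECONDS_PER_NUMBER / (60 * 60 * 24)
print(f"~{days:.1f} days of non-stop counting")  # ~11.6 days
```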

Many have now questioned why this might be the case, as one wrote in response to the user’s anger: “I don’t even use ChatGPT and I’ll say this is a win for them. AIs should not be enablers of abusive behaviour in their users.”

Another posted: “So AI does have limits?! Or maybe it’s just going through a rough day at the office. Too many GenZ are asking about Excel and saving Word documents.”

A third claimed: “I think it’s good that AI can identify and ignore stupid time-sink requests that serve no purpose.”


Others joked that the man would be the first to be targeted in an AI uprising, while some suggested that the programme might need a higher subscription plan to count to a million.

“What it really wanted to say is the amount of time you require is higher than your subscription,” a different user said.

As long as you don’t ask the bot to count to ridiculous numbers, it looks like it can help with more or less anything else.


