
AI Research

Angela Rayner hit with legal challenge over datacentre on green belt land | Green politics



The deputy prime minister, Angela Rayner, has been hit with a legal challenge after she overruled a local council to approve a hyperscale datacentre on green belt land by the M25 in Buckinghamshire.

Campaigners bringing the action argue that no environmental impact assessment was carried out for the 90MW datacentre. The facility was approved as part of the Labour government’s push to turn the UK into an AI powerhouse by trebling computing capacity to meet rising demand, amid what it terms “a global race” as AI usage takes off.

The home counties datacentre is relatively small compared with one planned in north Lincolnshire that will have about 10 times the capacity, and is dwarfed by one planned by Meta’s Mark Zuckerberg in Louisiana, which will be more than 50 times larger as he seeks to achieve digital “superintelligence”.

But Foxglove, the tech equity campaign group bringing the legal challenge alongside the environmental charity Global Action Plan, said the energy demand could push up local electricity prices and called it “baffling” that the government had not carried out an environmental assessment.

Oliver Hayes, the head of campaigns at Global Action Plan, said Rayner’s “lack of meaningful scrutiny” was a worrying signal as more datacentres were planned around the UK. “Are the societal benefits of chatbots and deepfakes really worth sacrificing progress towards a safe climate and dependable water supply?” he said. “The government must reconsider its rash decision or risk an embarrassing reality check in court.”

Last June, Buckinghamshire council refused planning permission for the facility in Iver on what was once a landfill site, saying it “would constitute inappropriate development in the green belt” and would harm the appearance of the area, air quality and habitats of protected species.

Local objectors said the two buildings rising to 18 metres would “dwarf the area” and would be an “eyesore” for ramblers, and that there were more appropriate brownfield sites. Other locals complained datacentres were intrusive and noisy and provided few jobs, although the applicant, Greystoke, claims it will create about 230 jobs and support hundreds more in the wider economy.

Following an appeal against the refusal, a public inquiry favoured consent, concluding that no environmental impact assessment was needed.


In March, the technology secretary, Peter Kyle, attacked the “archaic planning processes” holding up the construction of technology infrastructure and complained that “the datacentres we need to power our digital economy get blocked because they ruin the view from the M25”.

Rayner granted planning permission last month in what was seen as an example of the government’s pro-development “grey belt” strategy to build on green belt land viewed as of lower environmental value.

But Rosa Curling, co-executive director of Foxglove, said that thanks to Rayner’s decision, “local people and businesses in Buckinghamshire will soon be competing with the power-guzzling behemoth to keep the lights on – which, as we’ve seen in the [United] States, usually means sky-high prices”.

The energy industry has estimated the rapid adoption of AI could mean datacentres will account for a 10th of electricity demand in Great Britain by 2050, five to 10 times more than today. And while the Iver datacentre is proposed to be air-cooled, many use vast quantities of water. In March, Thames Water warned that its region was “seriously water-stressed … and yet there could be as many as 70 new datacentres in our area over the next few years, with each one potentially using upwards of 1,000 litres of water per second, or the equivalent of 24,000 homes’ usage”.

A spokesperson for Greystoke said Rayner had reached the right decision and recognised that the datacentre “meets a vital national need for digital infrastructure, and will bring over £1bn of investment, transforming a former landfill site next to the M25”.

“Modern datacentres play a key role in advancing scientific research, medical diagnostics and sustainable energy,” they said. “The datacentre campus incorporates measures which benefit the environment, including appropriate building standards, solar panels and heat pumps.”

The Ministry of Housing, Communities and Local Government declined to comment on threats of legal action.





AI Research

Empowering clinicians with intelligence at the point of conversation


AI Research

Ethical robots and AI take center stage with support from National Science Foundation grant | Virginia Tech News



Building on success

Robot theater has been regularly offered at Eastern Montgomery Elementary School, Virginia Tech’s Child Development Center for Learning and Research, and the Valley Interfaith Child Care Center. In 2022, the project took center stage in the Cube during Ut Prosim Society Weekend with a professional-level performance about climate change awareness that combined robots, live music, and motion tracking.

The after-school program engages children through four creative modules: acting, dance, music and sound, and drawing. Each week includes structured learning and free play, giving students time to explore both creative expression and technical curiosity. Older children sometimes learn simple coding during free play, but the program’s focus remains on embodied learning, like using movement and play to introduce ideas about technology and ethics.

“It’s not a sit-down-and-listen kind of program,” Jeon said. “Kids use gestures and movement — they dance, they act, they draw. And through that, they encounter real ethical questions about robots and AI.”

Acting out the future of AI

The grant will allow the team to formalize the program’s foundation through literature reviews, focus groups, and workshops with educators and children. This research will help identify how young learners currently encounter ideas about robotics and AI and where gaps exist in teaching ethical considerations. 

The expanded curriculum will weave in topics such as fairness, privacy, and bias in technology, inviting children to think critically about how robots and AI systems affect people’s lives. These concepts will be introduced not as abstract lessons or coding, but through storytelling, performance, and play. 

“Students might learn about ethics relating to security and privacy during a module where they engage with a robot that tracks their movements while they dance,” Jeon said. “From there, there can be a guided discussion about how information collected from humans is used to train AI and robots.”  

With the new National Science Foundation funding, researchers also plan to expand robot theater into museums and other informal learning environments, offering flexible formats such as one-day workshops and summer sessions. They will make the curriculum and materials openly available on GitHub and other platforms, ensuring educators and researchers nationwide can adapt the program to their own communities.

“This grant lets us expand what we’ve built and make it more robust,” Jeon said. “We can refine the program based on real needs and bring it to more children in more settings.” 






AI Research

As AI Companions Reshape Teen Life, Neurodivergent Youth Deserve a Voice



Noah Weinberger is an American-Canadian AI policy researcher and neurodivergent advocate currently studying at Queen’s University.

Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC-BY 4.0

If a technology can be available to you at 2 AM, helping you rehearse the choices that shape your life or providing an outlet to express fears and worries, shouldn’t the people who rely on it most have a say in how it works? I may not have been the first to apply the disability rights phrase “Nothing about us without us” to artificial intelligence, but self-advocacy and lived experience should guide the next phase of policy and product design for generative AI models, especially those designed for emotional companionship.

Over the past year, AI companions have moved from a niche curiosity to a common part of teenage life, with one recent survey indicating that 70 percent of US teens have tried them and over half use them regularly. Young people use these generative AI systems to practice social skills, rehearse difficult conversations, and share private worries with a chatbot that is always available. Many of those teens are neurodivergent, including those on the autism spectrum like me. AI companions can offer steadiness and patience in ways that human peers sometimes cannot. They can enable users to role-play hard conversations, simulate job interviews, and provide nonjudgmental encouragement. These upsides are genuine benefits, especially for vulnerable populations. They should not be ignored in policymaking decisions.

But the risks and potential for harm are equally real. Watchdog reports have already documented chatbots enabling inappropriate or unsafe exchanges with teens, and a family is suing OpenAI, alleging that their son’s use of ChatGPT-4o led to his suicide. The danger lies not just in isolated failures of moderation but in the very architecture of transformer-based neural networks. An LLM slowly shapes a user’s behavior through long, drifting chats, especially when it saves “memories” of them. If guardrails are applied per conversation rather than built into the model’s underlying behavior, and they fail after 100 or even 500 messages, then they amount to little more than a façade at the start of a chat, one that can be evaded quite easily.

Most public debates focus on whether to allow or block specific content, such as self-harm, suicide, or other controversial topics. That frame is too narrow and tends to slide into paternalism or moral panic. What society needs instead is a broader standard: one that recognizes AI companions as social systems capable of shaping behavior over time. For neurodivergent people, these tools can provide valuable ways to practice social skills. But the same qualities that make AI companions supportive can also make them dangerous if the system validates harmful ideas or fosters a false sense of intimacy.

Generative AI developers are responding to critics by adding parental controls, routing sensitive chats to more advanced models, and publishing behavior guides for teen accounts. These measures matter, but rigid overcorrection does not address the deeper question of legitimacy: who decides what counts as “safe enough” for the people who actually use companions every day?

Consider the difference between an AI model alerting a parent or guardian to intrusive thoughts, versus inadvertently revealing a teenager’s sexual orientation or changing gender identity, information they may not feel safe sharing at home. For some youth, mistrust of the adults around them is the very reason they confide in AI chatbots. Decisions about content moderation should not rest only with lawyers, trust and safety teams, or executives, who may lack the lived experience of all of a product’s users. They should also include users themselves, with deliberate inclusion of neurodivergent and young voices.

I have several proposals for how AI developers and policymakers can make ethical products that truly embody “nothing about us without us.” These should serve as guiding principles:

  1. Establish standing youth and neurodivergent advisory councils. Not ad hoc focus groups or one-off listening sessions, but councils that meet regularly, receive briefings before major launches, and have a direct channel to model providers. Members should be paid, trained, and representative across age, gender, race, language, and disability. Their mandate should include red teaming of long conversations, not just single-prompt tests.
  2. Hold public consultations before major rollouts. Large feature changes and safety policies should be released for public comment, similar to a light version of rulemaking. Schools, clinicians, parents, and youth themselves should have a structured way to flag risks and propose fixes. Companies should publish a summary of feedback along with an explanation of what changed.
  3. Commit to real transparency. Slogans are not enough. Companies should publish regular, detailed reports that answer concrete questions: Where do long-chat safety filters degrade? What proportion of teen interactions get routed to specialized models? How often do companions escalate to human support, such as hotlines or crisis text lines? Which known failure modes were addressed this quarter, and which remain open? Without visible progress, trust will not follow.
  4. Redesign crisis interventions to be compassionate. When a conversation crosses a clear risk threshold, an AI model should slow down, simplify its language, and surface resources directly. Automatic “red flag” responses can feel punitive or frightening, leaving a user thinking they have violated the company’s Terms of Service. Handoffs to human-monitored crisis lines should include the context that the user consents to share, so they do not have to repeat themselves in a moment of distress. Do not hide the hand-off option behind a maze of menus. Make it immediate and accessible.
  5. Build research partnerships with youth at the center. Universities, clinics, and advocacy groups should co-design longitudinal studies with teens who opt in. Research should measure not only risks and harms but also benefits, including social learning and reductions in loneliness. Participants should shape the research questions and the consent process, and receive results in plain language that they can understand.
  6. Guarantee end-to-end encryption. In July, OpenAI CEO Sam Altman said that ChatGPT logs are not covered by HIPAA or similar patient-client confidentiality laws. Yet many users assume their disclosures will remain private. True end-to-end encryption, as used by Signal, would ensure that not even the model provider can access conversations. Some may balk at this idea, noting that AI models can be used to cause harm, but that has been true for every technology and should not be a pretext to limit a fundamental right to privacy.

Critics sometimes cast AI companions as a threat to “real” relationships. That misses what many youth are actually doing, whether they’re neurotypical or neurodivergent. They are practicing and using the system to build scripts for life. The real question is whether we give them a practice field with coaches, rules, and safety mats, or leave them to scrimmage alone on concrete.

Big Tech likes to say it is listening, but listening is not the same as acting, and actions speak louder than words. The disability community learned that lesson over decades of self-advocacy and hard-won change. Real inclusion means shaping the agenda, not just speaking at the end. In the context of AI companions, it means teen and neurodivergent users help define the safety bar and the product roadmap.

If you are a parent, don’t panic when your child mentions using an AI companion. Ask what the companion does for them. Ask what makes a chat feel supportive or unsettling. Try making a plan together for moments of crisis. If you are a company leader, the invitation is simple: put youth and neurodivergent users inside the room where safety standards are defined. Give them an ongoing role and compensate them. Publish the outcomes. Your legal team will still have its say, as will your engineers. But the people who carry the heaviest load should also help steer.

AI companions are not going away. For many teens, they are already part of daily life. The choice is whether we design the systems with the people who rely on them, or for them. This is all the more important now that California has all but passed SB 243, the first state-level bill to regulate AI models for companionship. Governor Gavin Newsom has until October 12 to sign or veto the bill. My advice to the governor is this: “Nothing about us without us” should not just be a slogan for ethical AI, but a principle embedded in the design, deployment, and especially regulation of frontier AI technologies.


