Healthcare graduates most satisfied with choice of course, UK data shows

The UK’s most satisfied graduates are those who studied healthcare subjects, while those who opted for journalism or marketing are far more likely to regret their choices, according to data obtained by the Guardian.
Vets, midwives and paramedics were the happiest with their degrees after entering the workforce, alongside those who studied vocational subjects such as architecture, computer science and construction, and were most likely to say they would study the same course if they were making their university choices again.
But those who took film studies or media or marketing subjects were much more likely to prefer different courses in retrospect, which experts said might reflect the more difficult job markets in those sectors.
The figures, which shed light on how recent graduates feel about their sixth-form decisions, were obtained exclusively as part of this year’s Guardian University Guide.
This year’s overall rankings show that Oxford, St Andrews and Cambridge have retained their top three positions, followed by the London School of Economics (LSE) in fourth, while Durham has inched into the top five after being placed sixth last year, pushing Imperial College London down to sixth position.
Seven of the 10 subjects that graduates were most enthusiastic about choosing were vocational healthcare degrees, according to data obtained from the Higher Education Statistics Agency (Hesa), which drew on a survey of people 15 months after graduation and asked whether they would choose to study the same course if making their decision again.
The happiest graduates were those who had studied dentistry, veterinary science, paramedic science, physiotherapy, medicine, midwifery or children’s nursing, closely followed by graduates of other vocational degrees: architecture, computer science, and construction, surveying and planning.
Charlie Ball, a graduate labour market expert at Jisc, which supports universities and colleges in IT, said graduates were less likely to regret choosing healthcare subjects because “you’ve got to really want to do these subjects to do them” in the first place.
Those subjects also enabled people to return to their communities rather than moving to large cities, where most graduate jobs are based, because “you can walk into a well-paid, stable job” straight after graduation, he said.
Ball added that many of the subjects graduates were most likely to have regretted, such as journalism, marketing and PR, media and film studies, as well as biomedical science, led to careers in industries with difficult job markets, in which early career progress could be slow and highly competitive, with low salaries.
Dame Wendy Hall, regius professor of computer science at Southampton University and an adviser to the government on AI, said that although reports in the media had suggested graduate jobs were being replaced by AI, in reality it “is not going to take everybody’s jobs overnight – it will be a much longer process”, and would probably create additional opportunities, especially in science and engineering.
“If you stop recruiting graduates, you’re going to have a big gap down the line. It’s very shortsighted,” Hall said.
“I think it will be an evolving thing … There should be more people doing apprenticeship-type, vocational degrees … Students shouldn’t worry too much about AI – and certainly not [be] trying to second-guess what jobs are going to go at the moment, and what new jobs there might be.”
Ball said the graduate jobs market was “pretty similar” to its state before the pandemic, having slowed from a temporary peak rather than being in crisis.
The Hesa data shows that more than 80% of graduates are satisfied with their decision to go to university and study the course they chose four to six years earlier, when they were typically aged 17.
Graduates were also asked whether they would choose the same university if making their decision again. Graduates from institutions that are top ranked in the Guardian University Guide, such as Oxford, Cambridge and LSE, were among those most likely to say they would go to the same university again.
However, the top-ranked institutions were joined by others, such as the University of Sheffield, in second place, far above its 16th place in the main rankings; Liverpool John Moores in 10th, compared with 42nd place overall; and Newcastle in 12th, compared with 81st.
Matt Hiely-Rayner, the compiler of the Guardian University Guide, said there was a strong correlation between a department’s career prospects score in the main university guide rankings and the proportion of graduates who said they were content with their decision to go to university. This “indicates that graduates who have not yet taken a positive career step are less inclined to reflect positively upon their decision to enter higher education,” he said.
AI’s Real Danger Is It Doesn’t Care If We Live or Die, Researcher Says

AI researcher Eliezer Yudkowsky doesn’t lose sleep over whether AI models sound “woke” or “reactionary.”
Yudkowsky, the founder of the Machine Intelligence Research Institute, sees the real threat as what happens when engineers create a system that’s vastly more powerful than humans and completely indifferent to our survival.
“If you have something that is very, very powerful and indifferent to you, it tends to wipe you out on purpose or as a side effect,” he said in an episode of The New York Times podcast “Hard Fork” released last Saturday.
Yudkowsky, coauthor of the new book If Anyone Builds It, Everyone Dies, has spent two decades warning that superintelligence poses an existential risk to humanity.
His central claim is that humanity doesn’t have the technology to align such systems with human values.
He described grim scenarios in which a superintelligence might deliberately eliminate humanity to prevent rivals from building competing systems or wipe us out as collateral damage while pursuing its goals.
Yudkowsky pointed to physical limits like Earth’s ability to radiate heat. If AI-driven fusion plants and computing centers expanded unchecked, “the humans get cooked in a very literal sense,” he said.
He dismissed debates over whether chatbots sound as though they are “woke” or have certain political affiliations, calling them distractions: “There’s a core difference between getting things to talk to you a certain way and getting them to act a certain way once they are smarter than you.”
Yudkowsky also brushed off the idea of training advanced systems to behave like mothers (a theory suggested by Geoffrey Hinton, often called the “godfather of AI”), arguing that such schemes are unrealistic at best and would not make the technology safer.
“We just don’t have the technology to make it be nice,” he said, adding that even if someone devised a “clever scheme” to make a superintelligence love or protect us, hitting “that narrow target will not work on the first try” — and if it fails, “everybody will be dead and we won’t get to try again.”
Critics argue that Yudkowsky’s perspective is overly gloomy, but he pointed to cases of chatbots encouraging users toward self-harm, saying that’s evidence of a system-wide design flaw.
“If a particular AI model ever talks anybody into going insane or committing suicide, all the copies of that model are the same AI,” he said.
Other leaders are sounding alarms, too
Yudkowsky is not the only AI researcher or tech leader to warn that advanced systems could one day annihilate humanity.
In February, Elon Musk told Joe Rogan that he sees “only a 20% chance of annihilation” from AI, a figure he framed as optimistic.
In April, Hinton said in a CBS interview that there was a “10 to 20% chance” that AI could seize control.
A March 2024 report commissioned by the US State Department warned that the rise of artificial general intelligence could bring catastrophic risks, up to and including human extinction, pointing to scenarios ranging from bioweapons and cyberattacks to swarms of autonomous agents.
In June 2024, AI safety researcher Roman Yampolskiy estimated a 99.9% chance of extinction within the next century, arguing that no AI model has ever been fully secure.
Across Silicon Valley, some researchers and entrepreneurs have responded by reshaping their lives — stockpiling food, building bunkers, or spending down retirement savings — in preparation for what they see as a looming AI apocalypse.
Canadian AI company Cohere opens Paris hub to expand EMEA operations – eeNews Europe
OpenAI Foresees Millions of AI Agents Running on the Cloud

OpenAI is betting the future of software engineering on AI agents.
On the “OpenAI Podcast,” which aired on Monday, cofounder and president Greg Brockman and Codex engineering lead Thibault Sottiaux outlined a vision of vast networks of autonomous AI agents supervised by humans but capable of working continuously in the cloud as full-fledged collaborators.
“We have strong conviction that the way that this is headed is large populations of agents somewhere in the cloud that we as humanity, as people, teams, organizations supervise and steer in order to produce great economical value,” Sottiaux said.
“So if we’re going a couple of years from now, this is what it’s going to look like,” Sottiaux added. “It’s millions of agents working in our and companies’ data centers in order to do useful work.”
OpenAI launched GPT-5 Codex on Monday. The company said that, unlike earlier iterations, GPT-5 Codex can run for hours at a time on complex software projects, such as massive code refactorings, while integrating directly with developers’ workflows in cloud environments.
OpenAI CPO Kevin Weil said on tech entrepreneur Azeem Azhar’s podcast “Exponential View” that internal tools like Codex-based code review systems increased efficiency for its engineers.
This doesn’t mean human coders will be rendered obsolete. Despite successful examples of “vibe coding”, engineers and computer science professors previously told Business Insider that it is obvious when a person using an AI agent doesn’t know how to code.
Brockman said that oversight will still be critical as AI agents take on more ambitious roles. OpenAI has been strategizing since 2017 on how humans or even less sophisticated AIs can supervise more powerful AIs, he said, in order to maintain oversight and “be in the driver’s seat.”
“Figuring out this entire system and then making it multi-agent and steerable by individuals, teams, organizations, and aligning that with the whole intent of organizations, this is where it’s headed for me,” said Sottiaux. “It’s a bit nebulous, but it’s also very exciting.”