
Sam Altman Confirms Indefinite Delay of OpenAI’s Open-Source Model Citing Safety Concerns

OpenAI CEO Sam Altman has officially confirmed that the release of the organization’s much-anticipated open-source AI model has been postponed indefinitely. The decision, according to Altman, is rooted primarily in safety concerns and the broader implications of making powerful AI tools widely accessible.

This move marks a significant moment in the evolving conversation about responsible AI development, highlighting the delicate balance between innovation, transparency, and ethical safeguards in the field.

The Original Plan and Its Significance

OpenAI had initially announced plans to release an open-source version of its advanced AI model to foster wider collaboration within the AI research community. The open-source initiative was expected to accelerate AI innovation, enabling developers, researchers, and organizations worldwide to access cutting-edge technology and contribute to its evolution.

However, as the project progressed, OpenAI’s leadership grew increasingly cautious about the potential misuse or unintended consequences of releasing such powerful AI models without robust safety nets in place.

Safety First: The Core Reason for the Delay

Sam Altman’s confirmation underscores OpenAI’s commitment to responsible AI deployment. In his statement, Altman emphasized that the company is prioritizing the safe integration of AI technologies into society over rapid dissemination.

“Safety is not a checkbox,” Altman said. “It’s a continuous process that requires thorough testing, real-world feedback, and sometimes difficult decisions about what to release and when.”

The decision to delay the open-source model release reflects concerns about potential risks, including misuse for generating disinformation, deepfakes, or other malicious applications. Moreover, there is an ongoing challenge in ensuring that AI systems operate fairly, avoid bias, and respect privacy.

Industry Reaction

The AI community and tech industry have responded with a mix of understanding and disappointment. While many acknowledge the importance of prioritizing safety, some have expressed frustration over the slowed pace of open access to advanced AI tools.

Dr. Lisa Monroe, an AI ethics researcher, remarked, “OpenAI’s caution is warranted, especially given how quickly AI can be weaponized or cause unintended harm. But transparency and community involvement remain crucial.”

Other experts have pointed out that delaying open-source releases may slow innovation but can also prevent dangerous misuse that could ultimately set the industry back.

The Broader Context: AI Safety and Regulation

OpenAI’s decision comes amid growing global scrutiny of AI technologies. Governments, policymakers, and advocacy groups are increasingly calling for clearer regulations and guidelines to govern the development and deployment of AI systems.

By delaying the release, OpenAI aligns with calls for a more measured approach to AI innovation — one that includes comprehensive safety assessments and collaboration with regulatory bodies.

Sam Altman has been vocal about the need for regulation in the AI space, advocating for international cooperation to manage risks while enabling technological progress.

While the open-source model’s release is on hold indefinitely, OpenAI continues to develop and improve its AI offerings. The company’s flagship products, including ChatGPT and GPT-4 series models, remain widely accessible through controlled APIs.

OpenAI is also reportedly investing in advanced safety research, robustness testing, and partnerships aimed at mitigating risks associated with AI misuse.

For developers and researchers eager to explore OpenAI’s technology, the current approach means continued reliance on existing APIs and tools, rather than fully open-source versions.

The delay in releasing OpenAI’s open-source model highlights the broader challenge facing the AI industry: how to balance rapid innovation with the ethical, social, and safety implications of increasingly powerful technologies.

As AI models grow more capable, ensuring they are deployed responsibly becomes paramount. OpenAI’s cautious stance may serve as a model for others in the field, reinforcing the message that safety and ethics should be integrated into every stage of AI development.




Godfather of AI Geoffrey Hinton warns AI will create ‘massive unemployment’, and these workers will be hardest hit



Geoffrey Hinton, often called the “godfather of AI,” warned that artificial intelligence will trigger “massive unemployment and a huge rise in profits,” further deepening inequality between the rich and the poor. In a recent interview with the Financial Times, the Nobel Prize winner and former Google scientist said AI will be used by the wealthy to cut jobs and maximize profits.

“What’s actually going to happen is rich people are going to use AI to replace workers,” Hinton said. “It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”

During the interview, Geoffrey Hinton said industries that rely on routine tasks will be hardest hit, while highly skilled roles and healthcare could benefit. “If you could make doctors five times as efficient, we could all have five times as much health care for the same price,” he said in an earlier interview.

Godfather of AI’s disagreement with Sam Altman

Hinton also dismissed OpenAI CEO Sam Altman’s idea of a universal basic income, arguing it “won’t deal with human dignity” or replace the sense of value people get from work.

A survey from the New York Fed recently found that companies using AI are more likely to retrain workers than fire them, though layoffs are expected to rise.

Hinton reiterated concerns about AI’s risks, saying there is a 10% to 20% chance of the technology wiping out humanity after the emergence of superintelligence. He also warned AI could be misused to create bioweapons, adding that while China is taking the threat seriously, the Trump administration has resisted tighter regulation.

“We don’t know what is going to happen, we have no idea, and people who tell you what is going to happen are just being silly,” he said. “We are at a point in history where something amazing is happening, and it may be amazingly good, and it may be amazingly bad.”

Geoffrey Hinton on leaving Google

Geoffrey Hinton also explained why he left Google in 2023, rejecting reports that he quit to speak more freely about AI’s risks. “I left because I was 75, I could no longer program as well as I used to, and there’s a lot of stuff on Netflix I haven’t had a chance to watch,” he said.

While he mostly uses AI for research, Hinton admitted OpenAI’s ChatGPT even played a role in his personal life. He revealed that a former girlfriend used the chatbot “to tell me what a rat I was” during their breakup. “I didn’t think I had been a rat, so it didn’t make me feel too bad,” he quipped.






Sam Altman or Elon Musk: AI godfather Geoffrey Hinton’s 6-word reply on who he trusts more



When asked to choose whom he trusts more between Tesla CEO Elon Musk and OpenAI chief executive Sam Altman, AI “godfather” Geoffrey Hinton offered a different kind of answer. The choice seemed so difficult that Hinton didn’t reach for an analogy from science. Instead, he recalled a quote from Republican Senator Lindsey Graham, who was once asked to pick between two presidential candidates.

Hinton remembered a moment from the 2016 presidential race when Graham was asked to choose between Donald Trump and Ted Cruz. Graham’s response, delivered with wry honesty, was a line Hinton had never forgotten: “It’s like being shot or poisoned.”

Hinton’s warning on AI’s destructive potential

Hinton was speaking to The Financial Times during an interview where he sounded the alarm about AI’s potential dangers. Once a key figure in accelerating AI development, Hinton has shifted to expressing deep concerns about its future. He believes that AI poses a grave threat to humanity, arguing that the technology could soon help an average person create bioweapons. “A normal person assisted by AI will soon be able to build bioweapons and that is terrible,” Hinton said, adding, “Imagine if an average person in the street could make a nuclear bomb.”

During the two-hour interview, Hinton discussed various topics, including AI’s “nuclear-level” threats, his own use of AI tools, and even how a chatbot contributed to his recent breakup. He also recently warned that AI could soon surpass human abilities, including the power to manipulate emotions by learning from vast datasets to influence feelings and behaviors more effectively than people.

Hinton’s concerns stem from his belief that AI is genuinely intelligent. He argues that “by any definition of intelligence, AI is intelligent.” Using several analogies, he explained that an AI’s experience of reality is not so different from a human’s. “It seems very obvious to me. If you talk to these things and ask them questions, it understands,” Hinton stated. He added that there is “very little doubt in the technical community that these things will get smarter.”


Anthropic’s Claude restrictions put overseas AI tools backed by China in limbo



An abrupt decision by American artificial intelligence firm Anthropic to restrict service to Chinese-owned entities anywhere in the world has cast uncertainty over some Claude-dependent overseas tools backed by China’s tech giants.

After Anthropic’s notice on Friday that it would upgrade access restrictions to entities “more than 50 per cent owned … by companies headquartered in unsupported regions” such as China, regardless of where they are, Chinese users have fretted over whether they could still access the San Francisco-based firm’s industry-leading AI models.

While it remains unknown how many entities could be affected and how the restrictions would be implemented, anxiety has started to spread among some users.


Singapore-based Trae, an AI-powered code editor launched by Chinese tech giant ByteDance for overseas users, is a known user of OpenAI’s GPT and Anthropic’s Claude models. A number of Trae users have raised the issue of refunds with Trae staff on developer platforms, concerned that their access to Claude would no longer be available.

Dario Amodei, CEO and co-founder of Anthropic, speaks at the International Network of AI Safety Institutes in San Francisco, November 20, 2024. Photo: AP

A Trae manager responded by saying that Claude was still available, urging users not to consider refunds “for the time being”. The company had just announced a premium “Max Mode” on September 2, which boasted access to significantly more powerful coding abilities “fully supported” by Anthropic’s Claude models.

Other Chinese tech giants offer Claude on their coding agents marketed to international users, including Alibaba Group Holding’s Qoder and Tencent Holdings’ CodeBuddy, which is still being beta tested. Alibaba owns the South China Morning Post.

ByteDance and Trae did not respond to requests for comment.

Amid the confusion, some Chinese AI companies have taken the opportunity to woo disgruntled users. Start-up Z.ai, formerly known as Zhipu AI, said in a statement on Friday that it was offering special deals to entice Claude application programming interface (API) users to move over to its models.

Anthropic’s decision to restrict access to China-owned entities is the latest evidence of an increasingly divided AI landscape.

In China, AI applications and tools for the domestic market are almost exclusively based on local models, as the government has not approved any foreign large language model for Chinese users.

Anthropic faced pressure to take action as a number of Chinese companies have established subsidiaries in Singapore to access US technology, according to a report by The Financial Times on Friday.

Anthropic’s flagship Claude AI models are best known for their strong coding capabilities. The company’s CEO Dario Amodei has repeatedly called for stronger controls on exports of advanced US semiconductor technology to China.

Anthropic completed a US$13 billion funding round in the past week that tripled its valuation to US$183 billion. On Wednesday, the company said its software development tool Claude Code, launched in May, was generating more than US$500 million in run-rate revenue, with usage increasing more than tenfold in three months.

The firm’s latest Claude Opus 4.1 coding model achieved an industry-leading score of 74.5 per cent on SWE-bench Verified, a human-validated subset of the SWE-bench large language model benchmark designed to evaluate AI models’ coding capabilities more reliably.

This article originally appeared in the South China Morning Post (SCMP). Copyright © 2025 South China Morning Post Publishers Ltd. All rights reserved.




