AI Insights
ChatGPT-5: Rapid advancement of Artificial Intelligence continues but tech still can’t replace people

As Artificial Intelligence continues to develop at a rapid pace, OpenAI has released a “significant” upgrade with the newest GPT-5 model.
The newest upgrade claims to have the “best response” times, being “great at coding”, feeling like “an expressive writing partner”, giving “more useful health answers” and being “safer and more accurate” compared to previous models.
OpenAI chief executive Sam Altman said these innovations were a “significant step forward.”
Despite ChatGPT being one of the more recognisable AI products, with nearly 700 million weekly users, OpenAI hasn’t had the industry-leading frontier model – with NVIDIA seen as the top dog – but the new developments could position OpenAI as the AI leader.
Mr Altman compared the models’ development to a student’s progression through school.
“GPT-3 sort of felt like talking to a high school student… maybe you’d get a right answer, maybe you’d get something crazy,” he said.
“GPT-4 felt like you’re talking to a college student.
“GPT-5 is the first time that it really feels like talking to a PhD-level expert.”
New features introduce individuality: users can customise the personality and colours of their chats, and the update adds voice improvements, a study mode and the ability to connect ChatGPT with a Google account.
But those worried about losing their jobs to AI need not worry yet.
Mr Altman said there is still a way to go until artificial general intelligence can think on a human level.
“This is not a model that continuously learns as it’s deployed from things it finds, which is something that, to me, feels like it should be part of AGI,” he said.
“But the level of capability here is a huge improvement.”
Though the model falls short of thinking like a person, users can now give their AI a personality: cynic, listener, nerd or the fitting choice of robot.
Critical thinking flaws aside, the developers still had plenty to praise about the advancements.
“The vibes of this model are really good,” head of ChatGPT Nick Turley said.
“I think that people are really going to feel that, especially average people who haven’t been spending their time thinking about models.”
The new model is available for all users to try, but free users face an undisclosed prompt limit before the service defaults back to the “ChatGPT-5 mini” version.
AI’s Uncertain Cost Effects in Health Care | American Enterprise Institute

The health care industry has a long history of below-average productivity gains, but there is cautious optimism that artificial intelligence (AI) will break the pattern. As in the past, the industry’s misaligned incentives might stymie progress.
A 2024 economic study found that existing AI platforms could deliver up to $360 billion in annual cost reductions without harming the quality of care delivered to patients. If realized, the financial relief for employers, consumers, and taxpayers would not be trivial.
The potential uses of AI in health care are numerous. AI could streamline the reading of diagnostic images, speed up accurate identification of complex conditions (and thus reduce the need for more testing), eliminate repetitive back-office tasks, prevent payments for unneeded services, target fraud, and less expensively identify drug compounds with potential therapeutic value. The savings from these applications are not theoretical; market participants are already using existing AI tools to pursue each of these objectives.
But there are two sides to the health care negotiating table, and the other side—hospitals, physician practices, and publicly-subsidized insurance plans looking to maximize their revenue—can leverage AI too. The net effect remains uncertain and will depend on which side of the table is most effective at leveraging the technology’s power.
AI scribes are an example of a use that could go either way. The tool will save time for doctors and their support staff by quickly and easily translating audio notes from patient encounters into data entries for electronic health records. At the same time, a recent news story noted that AI scribes also facilitate “chart reviews” aimed at ensuring no services that can be billed to insurance plans are missed. In effect, the industry is discovering that AI scribes are more effective than humans at maximizing practice revenue.
Medicare Advantage (MA) plans are sure to use AI in a similar way to boost the risk adjustment scores that affect their monthly capitated payments from the Medicare program.
While potentially powerful, AI does not solve the basic problem in health care, which is that there are weak incentives for cost control.
In employer-sponsored insurance (ESI), higher costs are partially subsidized by a federal tax break which grows in value with the expense of the plan. In traditional Medicare, hospitals and doctors get paid more when they provide more services. If AI were used to eliminate unnecessary care, provider incomes would fall dramatically, which is why facilities and clinicians are more likely to use the technology to justify providing more care at higher prices than to become more efficient.
Insurers would seem to have a stronger incentive for cost control, but their main clients—employers and workers—are mostly interested in broad provider networks, not cost control. Insurers can earn profits just as easily when costs are high as when they are low.
If AI is to lead to lower costs, the government and employers will need to deploy it aggressively to identify unnecessary spending, and then incentivize patients to migrate toward lower-cost insurance and care options.
For instance, employers could use AI to pore through pricing data made available by transparency rules to identify potential cost-cutting opportunities for their workers. That, however, is only step one. Step two should be a change in plan design that rewards workers who use the information AI uncovers to choose hospitals and doctors that can deliver the best value at the lowest cost. The savings from lower-priced care should be shared with workers through lower cost-sharing and premiums.
The government should implement similar changes in Medicare, either through existing regulatory authority or through changes in law approved by Congress.
With patients incentivized to seek out lower-cost care, hospitals and doctors would be more willing to use AI to identify cost-cutting strategies. For instance, AI could be used to design care plans for complex patients that minimize overall costs, or to offer more aggressive preventive care to patients with health risks identified by AI tools.
Health care is awash with underused data. Patient records include potentially valuable information that could be harnessed to prevent emerging problems at far less cost than would be the case for treating the conditions after they have begun to inflict harm. In other words, AI might be used to vastly improve patient outcomes while also reducing costs.
But this upending of the industry will not occur if all of the major players would rather stick with business as usual to protect their bottom lines.
Congress should keep all of this in mind when considering how best to ensure AI delivers on its potential in health care. The key is to change incentives in the market so that those providers who use AI to cut their costs are rewarded with expanded market shares rather than lost revenue.
US Senator Ted Cruz Proposes SANDBOX Act to Waive Federal Regulations for AI Developers

US Senator Ted Cruz (R-TX), chairman of the Senate Commerce Committee, at a hearing titled “AI’ve Got a Plan: America’s AI Action Plan” on Wednesday, September 10, 2025.
On Wednesday, United States Senator Ted Cruz (R-TX) unveiled the “Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and eXperimentation Act,” or the SANDBOX Act. The 41-page bill would direct the director of the White House Office of Science and Technology Policy (OSTP) to establish a federal “regulatory sandbox” for AI developers to apply for waivers or modifications on compliance with federal regulations in order to test, experiment with, or temporarily offer AI products and services.
In a statement, Cruz said the legislation is consistent with the goals of the Trump administration’s AI Action Plan, which was released in July, and is the first step toward a “new AI framework” that can “turbocharge economic activity, cut through bureaucratic red tape, and empower American AI developers while protecting human flourishing.”
The bill would create a mechanism for companies to apply to the OSTP director for a waiver or modification to rules or regulations under any federal agency “that has jurisdiction over the enforcement or implementation of a covered provision for which an applicant is seeking a waiver or modification” under the sandbox program. Waivers or modifications would be granted for a two-year period, with four potential renewals totaling up to a decade.
Applicants under the program must demonstrate “how potential benefits of the product or service or development method outweigh the risks, taking into account any mitigation measures,” including descriptions of “foreseeable risks” such as “health and safety,” “economic damage,” and “unfair or deceptive trade practices.” Applicants that receive a waiver are not immune from civil or criminal liability that may result from the deployment of their AI product or service. The bill requires mandatory incident reporting under a public disclosure mechanism.
Federal agencies are given 90 days to review applications. If an agency does not submit a decision or seek an extension by the deadline, the OSTP director is permitted to presume that the agency does not object. If an application is denied, it can be appealed.
The bill also includes a provision for Congressional review of rules and regulations that “should be amended or repealed as a result of persons being able to operate safely without those covered provisions” under the sandbox program. The OSTP director is tapped to identify any such provisions in a “special message” to Congress submitted each year.
The bill also contemplates coordination with “State programs that are similar or comparable to the Program,” including to “accept joint applications for projects benefitting from both Federal and State regulatory relief” and to harmonize other aspects of the program.
The Senate Commerce Committee’s announcement said the bill is backed by groups including the Abundance Institute, the US Chamber of Commerce, and the Information Technology Industry Council (ITI). Public Citizen, a watchdog group, said in a statement that the bill puts public safety on the “chopping block” in favor of “corporate immunity.”
The announcement of the bill was timed alongside a Senate Commerce hearing titled “AI’ve Got a Plan: America’s AI Action Plan,” which featured testimony from OSTP director Michael Kratsios. During the hearing, Cruz laid out a legislative agenda on AI, including reducing the regulatory burden on AI developers. But, he said, AI developers should still face consequences if they create harm.
“A regulatory sandbox is not a free pass,” said Cruz. “People creating or using AI still have to follow the same laws as everyone else. Our laws are adapting to this new technology.”
In response to a question from Cruz, Kratsios said he would support the approach described by the SANDBOX Act.
The new legislation follows a failed effort by Cruz and other Republicans to impose a sweeping moratorium on the enforcement of state laws regulating artificial intelligence. Earlier this year, the House passed the moratorium as part of the so-called “One Big, Beautiful” bill, or HR 1. After efforts by Cruz to move the measure through the Senate by tying it to the allocation of $42 billion in funding for the Broadband Equity and Access Deployment (BEAD) program, the chamber voted 99-1 to strip it out of the budget bill prior to passage. Still, some experts remain concerned that the administration may try to use other federal levers to restrict state AI laws.