OpenAI closes gap to artificial general intelligence with GPT-5

OpenAI has updated its large language model (LLM) in ChatGPT to GPT-5, which it says takes a significant step towards artificial general intelligence (AGI). In a blog post, the company said GPT-5 delivers leaps in accuracy, speed, reasoning, context recognition, structured thinking and problem-solving. 

“We anticipate early adoption to drive industry leadership on what’s possible with AI powered by GPT‑5, leading to better decision-making, improved collaboration and faster outcomes on high-stakes work for organisations,” said OpenAI.

From a technology perspective, OpenAI has built GPT-5 around a unified system, which it claims offers a smart, efficient model that answers most questions, combined with a deeper reasoning model for harder problems and a real‑time router that can quickly decide which to use. The company said the router is continuously trained on real signals, including when users switch models, preference rates for responses and measured correctness.

One of the features of the router, according to OpenAI, is that it allows ChatGPT to keep operating on a smaller model once usage limits are reached.
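
OpenAI has not published the router’s internals, so as an illustration only, the sketch below shows one way such a dispatcher could work: a fast default model, a deeper reasoning model for requests judged hard (or where the user explicitly asks for more thinking), and a fall-back to a smaller model once usage limits are hit. The model names, signals and threshold are hypothetical assumptions, not OpenAI’s implementation.

```python
# Minimal sketch of the routing idea described above. All names,
# signals and thresholds are hypothetical, not OpenAI's design.

from dataclasses import dataclass

@dataclass
class RoutingSignals:
    estimated_difficulty: float   # 0.0 (trivial) .. 1.0 (hard), e.g. from a small classifier
    user_requested_thinking: bool # explicit cues such as "think hard about this"
    usage_limit_reached: bool     # the account has exhausted its quota for the larger models

def route(signals: RoutingSignals) -> str:
    """Pick which model variant should answer a request."""
    if signals.usage_limit_reached:
        # Fall back to a smaller model so the conversation can continue.
        return "gpt-5-mini"          # hypothetical fallback model
    if signals.user_requested_thinking or signals.estimated_difficulty > 0.7:
        return "gpt-5-thinking"      # deeper reasoning model for harder problems
    return "gpt-5-main"              # fast default model for most questions

# Example: a hard request gets routed to the reasoning model.
print(route(RoutingSignals(0.9, False, False)))  # -> gpt-5-thinking
```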

Another change concerns safety. With GPT‑5, OpenAI has introduced a new form of safety training called “safe completions”, which teaches the model to give the most helpful answer where possible, while still staying within safety boundaries.

Describing the approach, the company said that sometimes GPT-5 may offer a partial answer to a user’s question or only answer at a high level. “If the model needs to refuse, GPT‑5 is trained to transparently tell you why it is refusing, as well as provide safe alternatives,” said OpenAI.

It also claims to offer better AI coding compared to previous models. “We’ve found GPT‑5 is excellent at digging deep into codebases to answer questions about how various pieces work or interoperate,” said OpenAI. “In a codebase as complicated as OpenAI’s reinforcement learning stack, we’re finding that GPT‑5 can help us reason about and answer questions about our code, accelerating our own day-to-day work.”

On the SWE-bench Verified benchmark, a model is given a code repository and an issue description, and must generate a patch that resolves the issue. GPT-5 achieved 75% accuracy using around 10,000 tokens, compared with OpenAI’s o3 model, which scored 69% using 13,741 tokens.

Commenting on the launch, Grant Farhall, chief product officer at Getty Images, said GPT-5 would further reshape its relationship with content, creativity and imagery.

“As AI content becomes more convincing, we need to ask ourselves, ‘Are we protecting the people and creativity behind what we see every day?’ Authenticity matters, but it doesn’t come for free. It’s more important now that we look at exactly how AI models are being trained – if it is on permissioned content and that creators are being compensated for their works being trained,” he said.

“In addition, our global consumer research has found that people increasingly crave authenticity and transparency, especially in visual content. With AI evolving at pace, the real question is: Will GPT-5-generated content feel relatable and real, or will it further fuel demand for genuinely human, nuanced work?”

As AI gets closer to AGI, the risk of such systems being used fraudulently also increases significantly.

Gary Hall, chief product officer at Medius, warned: “We’re at a tipping point. GPT-5 promises even more realism, more precision and more ease for the user. That’s great for innovation, but it’s also a gift to fraudsters. When AI-generated documents are indistinguishable from the real thing, legacy finance systems simply can’t cope. This is no longer a niche IT issue – it’s a frontline finance challenge.”

Along with access via OpenAI, GPT-5 is also available across Microsoft platforms, including Microsoft 365 Copilot, Copilot Studio, Microsoft Copilot, GitHub Copilot, Visual Studio Code and Azure AI Foundry.



Future-proofing the enterprise: Cultivating 3 essential leadership skills for the agentic AI era

The agentic AI era is here, and it will reshape how businesses operate. The questions are: Is your leadership team equipped to handle it? And how quickly can you give leaders and the workforce the capabilities to harness the power of these autonomous agents?

This isn’t just about integrating more automation; it’s about leading organizations through a paradigm shift where autonomous AI agents will increasingly define workflows, decision-making and competitive advantage. This necessitates a strategic focus on three core leadership skills, designed not just to future-proof individual careers, but to ensure the enduring resilience, transformation and innovative capacity of your entire enterprise.

1. The “agent architect”: Mastering prompt engineering and strategic oversight

The Challenge: In the traditional IT landscape, leadership defines requirements and teams build to spec. In the Agentic Era, the “spec” becomes a high-level goal, and the “build” is largely executed by autonomous agents. Without effective guidance, these agents can stray, underperform or even introduce new risks.
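
The article stops short of showing what such guidance looks like in practice. As a purely hypothetical sketch of the “agent architect” idea, the snippet below turns a high-level goal into an explicit agent brief with constraints, forbidden areas and a human review checkpoint; every field name and rule here is an illustrative assumption, not something prescribed by the author.

```python
# Hypothetical illustration only: a high-level business goal expressed as a
# constrained agent brief with a simple guardrail check and a human checkpoint.

agent_brief = {
    "goal": "Reduce invoice-processing turnaround time by 30%",
    "constraints": [
        "Do not change payment approval thresholds",
        "Read only from the ERP's reporting replica, never production",
        "Escalate any supplier-facing communication to a human reviewer",
    ],
    "forbidden_terms": ["approval threshold", "production database", "supplier email"],
    "review_checkpoint": "Weekly summary to finance leadership before rollout",
}

def within_constraints(proposed_action: str, brief: dict) -> bool:
    """Crude guardrail: block any action that touches an explicitly forbidden area."""
    return not any(term in proposed_action.lower() for term in brief["forbidden_terms"])

print(within_constraints("Summarise last month's invoice backlog", agent_brief))        # True
print(within_constraints("Send supplier email chasing overdue invoices", agent_brief))  # False
```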



AI can predict which patients need treatment to preserve their eyesight

Researchers have successfully used artificial intelligence (AI) to predict which patients need treatment to stabilize their corneas and preserve their eyesight, in a study presented today (Sunday) at the 43rd Congress of the European Society of Cataract and Refractive Surgeons (ESCRS).

The research focused on people with keratoconus, a visual impairment that generally develops in teenagers and young adults and tends to worsen into adulthood. It affects up to 1 in 350 people. In some cases, the condition can be managed with contact lenses, but in others it deteriorates quickly and, if left untreated, patients may need a corneal transplant. Currently, the only way to tell who needs treatment is to monitor patients over time.

The researchers used AI to assess images of patients’ eyes, combined with other data, and to successfully predict which patients needed prompt treatment and which could continue with monitoring.

The study was by Dr. Shafi Balal and colleagues at Moorfields Eye Hospital NHS Foundation Trust, London, and University College London (UCL), UK. He said: “In people with keratoconus, the cornea – the eye’s front window – bulges outwards. Keratoconus causes visual impairment in young, working-age patients and it is the most common reason for corneal transplantation in the Western world.

“A single treatment called ‘cross-linking’ can halt disease progression. When performed before permanent scarring develops, cross-linking often prevents the need for corneal transplantation. However, doctors cannot currently predict which patients will progress and require treatment, and which will remain stable with monitoring alone. This means patients need frequent monitoring over many years, with cross-linking typically performed after progression has already occurred.”

The study involved a group of patients who were referred to Moorfields Eye Hospital NHS Foundation Trust for keratoconus assessment and monitoring, including scanning the front of the eye with optical coherence tomography (OCT) to examine its shape. Researchers used AI to study 36,673 OCT images of 6,684 different patients along with other patient data.

The AI algorithm could accurately predict whether a patient’s condition would deteriorate or remain stable using images and data from the first visit alone. Using AI, the researchers could sort two-thirds of patients into a low-risk group, who did not need treatment, and the other third into a high-risk group, who needed prompt cross-linking treatment. When information from a second hospital visit was included, the algorithm could successfully categorise up to 90% of patients.
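
The study’s actual model architecture is not described in this article, but the general approach it reports – learning progression risk from first-visit scan features plus patient data, then thresholding that risk to split patients into a monitoring group and a prompt-treatment group – can be sketched as a conventional supervised-learning pipeline. The synthetic data, feature names, model choice and 0.5 threshold below are illustrative assumptions only, not the Moorfields/UCL method.

```python
# Illustrative sketch only: a generic progression-risk classifier trained on
# OCT-derived features plus patient data, then thresholded to triage patients.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical first-visit features: corneal thickness, curvature (Kmax),
# patient age, and a summary shape score extracted from the OCT image.
X = np.column_stack([
    rng.normal(480, 40, n),   # thinnest corneal pachymetry (micrometres)
    rng.normal(50, 5, n),     # maximum keratometry, Kmax (dioptres)
    rng.integers(14, 40, n),  # age (years)
    rng.normal(0, 1, n),      # OCT-derived shape score (arbitrary units)
])
# Synthetic progression labels for the demo (1 = condition deteriorates).
y = (rng.random(n) < 1 / (1 + np.exp(-(0.4 * X[:, 3] - 0.05 * (X[:, 0] - 480))))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 2))

# Threshold the predicted risk to triage patients, as the study does conceptually:
# low risk -> routine monitoring, high risk -> prompt cross-linking treatment.
high_risk = risk > 0.5
print("flagged for prompt treatment:", int(high_risk.sum()), "of", len(risk))
```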

Cross-linking treatment uses ultraviolet light and vitamin B2 (riboflavin) drops to stiffen the cornea, and it is successful in more than 95% of cases.

“Our research shows that we can use AI to predict which patients need treatment and which can continue with monitoring. This is the first study of its kind to obtain this level of accuracy in predicting the risk of keratoconus progression from a combination of scans and patient data, and it uses a large cohort of patients monitored over two years or more. Although this study is limited to using one specific OCT device, the research methods and AI algorithm used can be applied to other devices. The algorithm will now undergo further safety testing before it is deployed in the clinical setting.

“Our results could mean that patients with high-risk keratoconus will be able to receive preventative treatment before their condition progresses. This will prevent vision loss and avoid the need for corneal transplant surgery with its associated complications and recovery burden. Low-risk patients will avoid unnecessary frequent monitoring, freeing up healthcare resources. The effective sorting of patients by the algorithm will allow specialists to be redirected to areas with the greatest need.”

Dr. Shafi Balal, Moorfields Eye Hospital NHS Foundation Trust

The researchers are now developing a more powerful AI algorithm, trained on millions of eye scans, that can be tailored for specific tasks, including predicting keratoconus progression, but also other tasks such as detecting eye infections and inherited eye diseases.

Dr. José Luis Güell, ESCRS Trustee and Head of the Cornea, Cataract and Refractive Surgery Department at the Instituto de Microcirugía Ocular, Barcelona, Spain, who was not involved in the research, said: “Keratoconus is a manageable condition, but knowing who to treat, and when and how to give treatment is challenging. Unfortunately, this problem can lead to delays, with many patients experiencing vision loss and requiring invasive implant or transplant surgery.

“This research suggests that we can use AI to help predict who will progress, even from their first routine consultation, meaning we could treat patients early before progression and secondary changes. Equally, we could reduce unnecessary monitoring of patients whose condition is stable. If it consistently demonstrates its effectiveness, this technology would ultimately prevent vision loss and more difficult management strategies in young, working-age patients.”



Billionaire Dan Loeb Just Changed His Mind on This Incredible Artificial Intelligence (AI) Stock

After eliminating it from his fund’s portfolio in the first quarter, Loeb made this stock one of his biggest purchases in the second quarter.

Billionaire Dan Loeb is one of the most-followed activist investors on Wall Street. His hedge fund, Third Point, manages $21.1 billion, with around one-third of that invested in a public equity portfolio.

He is supported by a team of over 60 people, but ultimately, Loeb is in charge of the moves in Third Point’s portfolio. He said that by mid-April, he had sold out of most of the “Magnificent Seven” stocks, taking gains off the table early in 2025 before the market crashed amid tariff concerns.

By the end of the first quarter, he’d sold off significant pieces of his stakes in Microsoft and Amazon while completely eliminating positions in Tesla, Apple, and Meta Platforms (META). But Loeb was a buyer of most of those again in the second quarter, including Meta. Here’s why Loeb may have changed his mind on the AI leader.

Why did Loeb sell Meta in the first place?

Loeb’s decision to sell Meta shares seemed mostly to have been driven by its rising valuation. Shares of Meta reached a forward P/E ratio of 26.5 during the first quarter.
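
For readers unfamiliar with the metric: a forward P/E divides the current share price by expected earnings per share over the next 12 months, so a ratio of 26.5 means paying $26.50 for every dollar of earnings analysts expect. The sketch below uses made-up round numbers to show the arithmetic; they are not Meta’s actual price or estimates.

```python
# Illustrative only: how a forward P/E is computed. These inputs are
# hypothetical round numbers, not Meta's actual price or EPS estimates.

share_price = 600.00           # hypothetical current share price ($)
expected_eps_next_12m = 22.64  # hypothetical consensus forward EPS ($)

forward_pe = share_price / expected_eps_next_12m
print(f"Forward P/E: {forward_pe:.1f}")  # -> Forward P/E: 26.5
```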

“We realized gains earlier in the year through opportunistic sales near the highs in Meta,” Loeb said in his first-quarter letter to Third Point investors.

It’s very likely that Loeb was concerned about that valuation as uncertainty grew about President Donald Trump’s trade policies. Meta’s core advertising business relies on business confidence. If businesses aren’t confident in their ability to source their products or in the consumer’s willingness to spend, they’re going to be less willing to pay up for advertising on Meta’s apps.

Meanwhile, Meta is investing heavily in artificial intelligence infrastructure. Management said it plans to spend $60 billion to $65 billion on capital expenditures this year, up from $39 billion in 2024. Given the growing uncertainty about what the near-term returns on those investments might be, Loeb took an opportunity to take some money off the table.

Tiptoeing back in

Third Point ended the second quarter with 150,000 shares of Meta. While that only accounted for about 1.5% of its public equity portfolio at the time, it was still enough to make it one of the hedge fund’s biggest purchases in the quarter.

So, what led to the reversal?

It may have been the strong first-quarter earnings report Meta delivered at the end of April. The company saw strong revenue growth, expanded its operating margin, and expressed a lot of confidence about the next quarter and beyond. It raised its capital expenditure plans as well.

Management also made it clear that Meta’s investments in artificial intelligence are already paying off. That assertion was supported by growth in both ad impressions and average price per ad, which the company boosted by consistently improving its content and ad recommendation algorithms. The long-term potential for AI to make it easier for marketers to advertise on Meta’s properties and for it to expand advertising opportunities remains a key focus of the company’s spending.

But Meta shares are once again trading at a high valuation. In fact, the stock now carries a higher earnings multiple than it did when Loeb and his team sold the stock in the first quarter.

Should retail investors buy Meta Platforms now?

Meta’s first-quarter results gave investors like Loeb confidence in the stock, and its second-quarter results were arguably even better.

Revenue growth accelerated, and its operating margin expanded once again. The operating margin gains are perhaps the most impressive facet of the narrative, as management has warned about an increase in depreciation expenses from all of its AI investments.

But those AI investments may be the differentiating factor between Meta and other digital advertising platforms. Meta is able to offer marketers higher returns on their ad spending, even while charging them premium prices. As a result, Meta grew its revenue faster than smaller social media platforms did last quarter.

That should give investors confidence that its AI strategy is already paying off. Combine that with the long-term potential for AI to transform the business, and it makes sense for the stock to trade at a premium price. With shares currently trading at just over 27 times expected forward earnings, it may still be underpriced. We won’t know whether or not Loeb took profits once again until November, when Third Point files its next 13F disclosure with the Securities and Exchange Commission. But for most retail investors, Meta shares are worth buying or holding onto right now.

Adam Levy has positions in Amazon, Apple, Meta Platforms, and Microsoft. The Motley Fool has positions in and recommends Amazon, Apple, Meta Platforms, Microsoft, and Tesla. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.


