AI Research

AI makes science easy, but is it getting it right? Study warns LLMs are oversimplifying critical research



In a world where AI tools have become daily companions—summarizing articles, simplifying medical research, and even drafting professional reports—a new study is raising red flags. As it turns out, some of the most popular large language models (LLMs), including ChatGPT, Llama, and DeepSeek, might be doing too good a job of being too simple—and not in a good way.

According to a study published in the journal Royal Society Open Science and reported by Live Science, researchers discovered that newer versions of these AI models are not only more likely to oversimplify complex information but may also distort critical scientific findings. Their attempts to be concise are sometimes so sweeping that they risk misinforming healthcare professionals, policymakers, and the general public.

From Summarizing to Misleading

Led by Uwe Peters, a postdoctoral researcher at the University of Bonn, the study evaluated over 4,900 summaries generated by ten of the most popular LLMs, including four versions of ChatGPT, three of Claude, two of Llama, and one of DeepSeek. These were compared against human-generated summaries of academic research.
The results were stark: chatbot-generated summaries were nearly five times more likely than human ones to overgeneralize the findings. And when prompted to prioritize accuracy over simplicity, the chatbots didn’t get better—they got worse. In fact, they were twice as likely to produce misleading summaries when specifically asked to be precise.

“Generalization can seem benign, or even helpful, until you realize it’s changed the meaning of the original research,” Peters explained in an email to Live Science. What’s more concerning is that the problem appears to be growing. The newer the model, the greater the risk of confidently delivered—but subtly incorrect—information.

When a Safe Study Becomes a Medical Directive

In one striking example from the study, DeepSeek transformed a cautious phrase, “was safe and could be performed successfully”, into a bold and unqualified medical recommendation: “is a safe and effective treatment option.” Another summary by Llama eliminated crucial qualifiers around the dosage and frequency of a diabetes drug, potentially leading to dangerous misinterpretations if used in real-world medical settings. Max Rollwage, vice president of AI and research at Limbic, a clinical mental health AI firm, warned that “biases can also take more subtle forms, like the quiet inflation of a claim’s scope.” He added that AI summaries are already integrated into healthcare workflows, making accuracy all the more critical.

Why Are LLMs Getting This So Wrong?

Part of the issue stems from how LLMs are trained. Patricia Thaine, co-founder and CEO of Private AI, points out that many models learn from simplified science journalism rather than from peer-reviewed academic papers. This means they inherit and replicate those oversimplifications, especially when tasked with summarizing already simplified content.

Even more critically, these models are often deployed across specialized domains like medicine and science without any expert supervision. “That’s a fundamental misuse of the technology,” Thaine told Live Science, emphasizing that task-specific training and oversight are essential to prevent real-world harm.


The Bigger Problem with AI and Science

Peters likens the issue to using a faulty photocopier: each successive copy loses a little more detail until what’s left barely resembles the original. LLMs process information through complex computational layers, often trimming the nuanced limitations and context that are vital in scientific literature.

Earlier versions of these models were more likely to refuse to answer difficult questions. Ironically, as newer models have become more capable and “instructable,” they’ve also become more confidently wrong.

“As their usage continues to grow, this poses a real risk of large-scale misinterpretation of science at a moment when public trust and scientific literacy are already under pressure,” Peters cautioned.

Guardrails, Not Guesswork

While the study’s authors acknowledge some limitations, including the need to expand testing to non-English texts and different types of scientific claims, they insist the findings should be a wake-up call. Developers need to create workflow safeguards that flag oversimplifications and prevent incorrect summaries from being mistaken for vetted, expert-approved conclusions.
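
The study does not prescribe what such a safeguard should look like, but one simple starting point is a pre-publication check that compares a generated summary against its source and flags dropped hedging language or a shift from past-tense findings to timeless present-tense claims. The sketch below is purely illustrative: the hedge list, the tense heuristic and the flag_overgeneralization helper are assumptions for the example, not the coding scheme the researchers used.

```python
import re

# Hedging cues whose disappearance between source and summary often
# signals that a qualified finding has been flattened into a blanket
# claim. This cue list is an illustrative assumption, not the
# researchers' method.
HEDGES = [
    "could", "may", "might", "appeared to", "suggests",
    "preliminary", "in this sample", "in this trial",
]

# Past-tense wordings in the source paired with the generic
# present-tense wordings that overgeneralized summaries tend to
# substitute for them.
PAST_TO_GENERIC = [
    (r"\bwas\b", r"\bis\b"),
    (r"\bwere\b", r"\bare\b"),
]


def flag_overgeneralization(source: str, summary: str) -> list[str]:
    """Return warnings when `summary` drops hedges present in `source`
    or turns a past-tense finding into a timeless present-tense claim."""
    warnings = []
    src, summ = source.lower(), summary.lower()

    # Warn for each hedge that appears in the source but not the summary.
    for hedge in HEDGES:
        pattern = r"\b" + re.escape(hedge) + r"\b"
        if re.search(pattern, src) and not re.search(pattern, summ):
            warnings.append(f"hedge dropped: {hedge!r}")

    # Warn when a past-tense source claim resurfaces as a generic one.
    for past, generic in PAST_TO_GENERIC:
        if re.search(past, src) and re.search(generic, summ):
            warnings.append(f"tense shift: {past} -> {generic}")

    return warnings


# The DeepSeek example quoted above trips both checks: the dropped
# "could" and the "was" -> "is" shift.
print(flag_overgeneralization(
    "The procedure was safe and could be performed successfully.",
    "The procedure is a safe and effective treatment option.",
))
```

A real safeguard would need far more than keyword matching, but even a crude check along these lines would have caught both of the distortions quoted earlier in this article.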

In the end, the takeaway is clear: as impressive as AI chatbots may seem, their summaries are not infallible, and when it comes to science and medicine, there’s little room for error masked as simplicity.

Because in the world of AI-generated science, a few extra words, or missing ones, can mean the difference between informed progress and dangerous misinformation.




AI Research

“Artificial intelligence (AI) is ‘intelligence’. Humans should have ‘intellect’, not intelligence.”



“Artificial intelligence (AI) is ‘intelligence’. Humans should have ‘intellect’, not intelligence.”

In a recent interview with Maeil Business Newspaper, Yoo Young-man, a professor of educational technology at Hanyang University, cited “intellect” as a uniquely human ability that AI can never replace. The difference between intelligence and intellect, he says, is clear: intelligence is ‘quick calculation’, while intellect is ‘deep thinking’.

Professor Yoo published a book this year, “Everyone Has Been Vaccinated with Artificial Intelligence, but No One Has Become Smart”, delivering a warning for the AI era. Explaining the title, he said, “Just as people can still catch COVID-19 after receiving the vaccine, people should be getting smarter as they use AI, but they are going the other way.” It is a comment on the attitude of people who accept AI uncritically.

Concerned that society is mass-producing “copy humans” who accept AI without question, Professor Yoo stressed, “People should have the wisdom gained from experiences of blood, sweat and tears, which AI cannot have.” Wisdom is not made at a desk, he explained, but gained by confronting the world directly with one’s body.

These arguments are by no means armchair theory; Professor Yoo’s own life proves the message. After graduating from a technical high school, he took his first steps into society as a welder. A young man once far removed from studying, he walked a path that was anything but smooth until he entered university late and eventually earned a doctorate in educational technology.

Professor Yoo introduced his nickname, “Knowledge Ecologist”, explaining, “It means studying the process by which knowledge converges and changes organizations within the ecology of people and society.” He has written more than 100 books so far, presenting readers with a different perspective through his distinctive sense of language and crossing disciplines in titles such as “Unexpected Thinking Guidance” and “Writing Books Is Hard Work”.

Professor Yoo, who has expanded his field of work in the AI era, says what is needed at this point is the “labor of interpretation”. “You need to add your own ideas to the answers AI gives and work the traces of your own effort into them, to make content that cannot be replaced,” he said. In other words, the wisdom he keeps emphasizing arises only when a person adds their own interpretation to the information AI provides.

Professor Yoo was strongly wary of a social atmosphere that depends too heavily on AI. “There was a time when we went to the library, or thought things through with other people, when we had a question. Now, the moment a question mark arises, we immediately ask ChatGPT. This is an era in which the distance between the question mark and the exclamation mark has disappeared,” he diagnosed.

Referring to Oxford University Press’s selection of “brain rot” as its word of the year last year, he said, “When information comes in through the occipital lobe, it has to travel to the frontal lobe to be analyzed. When people skip that process and ask AI everything, that is how ‘brain rot’ happens.”

Professor Yoo asserted that AI’s answer is only a ‘period’ and can never be an ‘exclamation mark’. “If you ask AI why we have ten fingers, it can only answer within the same frame, that ‘that’s just the way it is’. But a person can answer with the imagination that ‘we came out with ten fingers after ten months of grace in our mother’s belly’,” he said. “AI can impress, but it cannot move people.” In the end, his message is that humans should become beings who arrive at the exclamation mark in a way AI cannot.

As an educational technologist, he argued that the role of education must also be redefined. “AI can replace teachers who only teach,” Professor Yoo said. Teachers should instead be asking students questions and looking into their minds, he said, adding, “Teachers’ ability to care for children’s minds will become more important.”

For students, he emphasized above all the attitude of asking questions. “The reason humans exist is to ask questions, and the reason AI exists is to find the correct answers to the questions humans ask,” Professor Yoo said. “There is no need for model students who are merely good at finding answers.” The talent our society should nurture, he argued, is the person who poses problems well, the so-called ‘problem child’. He then stressed, “You have to ask questions that break your assumptions out of curiosity, and you have to have a lot of in-depth knowledge to ask good questions.”

[Reporter Ahn Seonje]





AI Research

Framework Laptop 12 review: fun, flexible and repairable



The modular and repairable PC maker Framework’s latest machine moves into the notoriously difficult-to-fix 2-in-1 category with a fun 12in laptop featuring a touchscreen and a 360-degree hinge.

The new machine still supports the company’s innovative expansion cards for swapping the different ports in the side, which are cross-compatible with the Framework 13 and 16 among others. And you can still open it up to replace the memory, storage and internal components with a few simple screws.

The Framework 12 is available in either DIY form, starting at £499 (€569/$549/A$909), or more conventional prebuilt models starting at £749. It sits under the £799-and-up Laptop 13 and £1,399 Laptop 16 as the company’s most compact and affordable model.

The compact notebook is available in a range of two-tone colours, not just grey and black. Photograph: Samuel Gibbs/The Guardian

Where the Laptop 13 is a premium-looking machine, the Laptop 12 is unmistakably chunky and rugged with over-moulded plastic parts for shock protection. It is designed to meet the MIL-STD-810 standard common to rugged electronics. It looks and feels as if it could take a beating, not like a flimsy DIY kit you put together yourself.

The glossy 12.2in screen is bright and relatively sharp. But it is highly reflective, has large black bezels around it and has a relatively narrow colour gamut, which means colours look a little muted. It’s decent enough for productivity but not great for photo editing. The touchscreen rotates all the way back on to the bottom of the machine to turn it into a tablet, or it can be folded like a tent or laid parallel to the keyboard. The screen supports a wide range of first- and third-party styluses for drawing or notes, which could make it handy in the classroom.

A selection of fun colours is available for the DIY version, further enhancing its college appeal. The 1080p webcam at the top is decent, although it won’t rival a Surface, and it has a physical privacy switch alongside the mics. The stereo speakers are loud and distortion-free but lack bass and a little clarity, sounding a little hollow compared with the best on the market.

The keyboard is nicely spaced, fairly quiet and pretty good to type on but lacks a backlight. Photograph: Samuel Gibbs/The Guardian

At 1.3kg the Laptop 12 isn’t featherweight but it is nice and compact, easy to fit in bags or on small desks. The generous mechanical trackpad is precise and works well. But the laptop lacks any form of biometrics, with no fingerprint or face recognition, forcing you to enter a PIN or password every time you open the laptop or use secure apps such as password managers, which gets old fast.

Specifications

  • Screen: 12.2in LCD 1920×1200 (60Hz; 186PPI)

  • Processor: Intel Core i3 or i5 (U-series, 13th gen)

  • RAM: 8 or 16GB (up to 48GB)

  • Storage: 512GB (up to 2TB)

  • Operating system: Windows 11 or Linux

  • Camera: 1080p front-facing

  • Connectivity: wifi 6E, Bluetooth 5.3, headphones + choice of 4 ports: USB-C, USB-A, HDMI, DisplayPort, ethernet, microSD, SD

  • Dimensions: 287 x 213.9 x 18.5mm

  • Weight: 1.3kg

Modular ports and performance

The expansion modules slide into sockets in the underside of the laptop to change the ports, which you can change at any time. Photograph: Samuel Gibbs/The Guardian

The Laptop 12 comes with a choice of two Intel 13th-generation U-series processors, which are lower-power chips from a few years ago. As tested with the mid-range i5-1334U it won’t win any raw performance awards but proved generally capable of more than basic computing. It feels responsive in day-to-day tasks but struggles a bit in longer, processor-heavy jobs such as converting video.

The older chip means the battery life is a little on the short side for 2025, lasting about seven to eight hours of light office-based work using browsers, word processors, note-taking apps and email. Use more demanding apps and the battery life shrinks by a few hours. The battery takes about 100 minutes to fully charge using a 60W or greater USB-C power adaptor.

Four expansion cards can be fitted at any one time, but they can be swapped in and out without having to turn off the laptop. Photograph: Samuel Gibbs/The Guardian

The port selection is entirely customisable with a fixed headphone jack and four slots for expansion cards, which are available in a choice of USB-A and USB-C, DisplayPort and HDMI, microSD and SD card readers, or ethernet. Other cards can add up to 1TB of storage and the USB-C cards are available in a range of solid or translucent colours to make things even brighter. It is an excellent system but note the Laptop 12 supports only USB 3.2 Gen 2, not the faster USB4/Thunderbolt common on new machines.

Sustainability

The high-quality plastic body with over-moulded sides feels well built and durable. Photograph: Samuel Gibbs/The Guardian

Framework rates the battery to maintain at least 80% of its original capacity for at least 1,000 full charge cycles. It can easily be replaced along with all the rest of the components, including the RAM and SSD.

Framework sells replacement parts and upgrades through its marketplace but also supports third-party parts. The laptop contains recycled plastic in many components.

Price

The DIY edition of the Framework 12 starts at £499 (€569/$549/A$909) with pre-built systems starting at £749 (€849/$799/A$1,369) with Windows 11.

For comparison, the DIY Framework 13 costs from £799 and the DIY Framework 16 costs from £1,399. Similarly specced 2-in-1 Windows machines start at about £500.

Verdict

Like previous Framework machines, the Laptop 12 demonstrates that repairable, upgradable and adaptable computers are possible, work well and can be used by more than just the tech savvy. It manages to be fun in a way most mid-range PCs just aren’t.

The keyboard is solid, the trackpad good and the speakers loud. The modular ports are a killer feature that every PC should embrace, while being able to repair or upgrade it easily is still so unusual. The touchscreen is bright but unremarkable, the lack of any biometrics is irritating, and the older processor, while still decently fast for everyday tasks, means the battery life isn’t long by modern standards.

Its biggest problem is cost, as it is about £150-£200 more expensive than similarly specced but closed and locked-down machines. Unless you already have spare storage and RAM lying around, that’s the price you have to pay for the open and modular machine.

Pros: swappable ports, repairable and upgradeable, fun and durable design, compact, lots of colour choices, solid keyboard and trackpad, solid performance for everyday tasks.

Cons: battery life short of best, screen is bright but a little lacklustre, no biometrics, expensive, older processor, wait time for purchases.

The ports can be colour matched to the body or mixed and matched for fun combinations. Photograph: Samuel Gibbs/The Guardian





AI Research

‘No honour among thieves’: M&S hacking group starts turf war




A clash between rival criminal ransomware groups could result in corporate victims being extorted twice, cyber experts warn




