AI Research

New Libyan artificial intelligence system ‘‘LIBIGPT’’ to be launched soon

The Libyan Authority for Scientific Research announced last Sunday (17 August) the imminent launch of the new Libyan Artificial Intelligence system, “LibiGPT”.

The Authority said the system, developed by one of its leading Libyan scientific advisors, Dr. Ali Othman Al-Baji, uses the latest deep learning technologies and will put Libya at the heart of artificial intelligence.

Dr. Ali Othman Al-Baji, meanwhile, says the key features of LibiGPT are:

  • Higher accuracy and intelligence in understanding and analysis compared to other models.
  • Support for multiple languages to expand global communication.
  • A strategic step to enhance Libya’s position and leadership in the field of artificial intelligence.
  • More than a technical project, LibiGPT is a national vision aimed at highlighting Libya’s strength on the global map of artificial intelligence.





Prediction: This Artificial Intelligence (AI) Semiconductor Stock Will Join Nvidia, Microsoft, Apple, Alphabet, and Amazon in the $2 Trillion Club by 2028. (Hint: Not Broadcom)


This company is growing quickly, and its stock is a bargain at the current price.

Big tech companies are set to spend $375 billion on artificial intelligence (AI) infrastructure this year, according to estimates from analysts at UBS, who expect that figure to climb to $500 billion next year.

The biggest expense item in building out AI data centers is semiconductors. Nvidia (NVDA -3.38%) has been by far the biggest beneficiary of that spending so far. Its GPUs offer best-in-class capabilities for general AI training and inference. Other AI accelerator chipmakers have also seen strong sales growth, including Broadcom (AVGO -3.70%), which makes custom AI chips as well as networking chips that ensure data moves efficiently from one server to another, keeping downtime to a minimum.

Broadcom’s stock price has increased more than fivefold since the start of 2023, and the company now sports a market cap of $1.4 trillion. Another year of spectacular growth could easily place it in the $2 trillion club. But another semiconductor stock looks like a more likely candidate to reach that vaunted level, joining Nvidia and the four other members of the club by 2028.


Is Broadcom a $2 trillion company?

Broadcom is a massive company with operations spanning hardware and software, but its AI chips business is currently steering the ship.

To that end, AI revenue climbed 46% year over year last quarter to reach $4.4 billion. Management expects the current quarter to produce $5.1 billion in AI semiconductor revenue, accelerating growth to roughly 60%. AI-related revenue now accounts for roughly 30% of Broadcom’s sales, and that’s set to keep climbing over the next few years.

Broadcom’s acquisition of VMware last year is another growth driver. The software company is now fully integrated into Broadcom’s larger operations, and it’s seen strong success in upselling customers to the VMware Cloud Foundation, enabling enterprises to run their own private clouds. Over 87% of its customers have transitioned to the new subscription, resulting in double-digit growth in annual recurring revenue.

But Broadcom shares are extremely expensive. The stock trades at a forward P/E ratio of 45. While its AI chip sales are growing quickly and it’s seeing strong margin improvement from VMware, it’s important not to lose sight of how broad a company Broadcom is. Despite the stellar growth in those two businesses, the company is still only growing its top line at about 20% year over year. Investors should expect only incremental margin improvements going forward as it scales the AI accelerator business. That means the business is set up for strong earnings growth, but not enough to justify its 45 times earnings multiple.

Another semiconductor stock trades at a much more reasonable multiple, and is growing just as fast.

The semiconductor giant poised to join the $2 trillion club by 2028

Both Broadcom and Nvidia rely on another company to ensure they can create the most advanced semiconductors in the world for AI training and inference. That company is Taiwan Semiconductor Manufacturing (TSM -3.05%), which fabricates and packages both companies’ designs. Almost every company designing leading-edge chips relies on TSMC for its technological capabilities. As a result, its share of the semiconductor foundry market has climbed to more than two-thirds.

TSMC benefits from a virtuous cycle, ensuring it maintains and grows its massive market share. Its technology lead helps it win big contracts from companies like Nvidia and Broadcom. That gives it the capital to invest in expanding capacity and research and development for its next-generation process. As a result, it maintains its technology lead while offering enough capacity to meet the growing demand for manufacturing.

TSMC’s leading-edge process node, dubbed N2, will reportedly charge a 66% premium per silicon wafer over the previous generation (N3). That’s a much bigger step-up in price than it’s historically managed, but the demand for the process is strong as companies are willing to spend whatever it takes to access the next bump in power and energy efficiency. While TSMC typically experiences a significant drop off in gross margin as it ramps up a new expensive node with lower initial yields, its current pricing should help it maintain its margins for years to come as it eventually transitions to an even more advanced process next year.

Management expects AI-related revenue to average mid-40% growth per year from 2024 through 2029. While AI chips are still a relatively small part of TSMC’s business, that should produce overall revenue growth of about 20% for the business. Its ability to maintain a strong gross margin as it ramps up the next two manufacturing processes should allow it to produce operating earnings growth exceeding that 20% mark.

TSMC’s stock trades at a much more reasonable earnings multiple of 24 times expectations. Considering the business could generate earnings growth in the low 20% range, that’s a great price for the stock. If it can maintain that earnings multiple through 2028 while growing earnings at about 20% per year, the stock will be worth well over $2 trillion at that point.
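The math behind that projection is simple compounding: if the earnings multiple holds, the market cap grows at the same rate as earnings. A minimal sketch of the arithmetic, where the starting market cap is a hypothetical input (the article gives the growth rate and multiple but not TSMC's current valuation):

```python
# Simple compounding sketch for the article's $2 trillion scenario.
# ASSUMPTION: the starting market cap below is hypothetical; the article
# only supplies the ~20% annual earnings growth and the premise that the
# 24x earnings multiple holds through 2028.

def projected_market_cap(current_cap_trillions: float,
                         annual_growth: float,
                         years: int) -> float:
    """If the earnings multiple holds, market cap compounds with earnings."""
    return current_cap_trillions * (1 + annual_growth) ** years

# A hypothetical $1.2T cap growing ~20% per year over three years (2025 -> 2028)
print(round(projected_market_cap(1.2, 0.20, 3), 2))  # 2.07
```

Under those assumptions, three years of ~20% compounding multiplies the valuation by about 1.73, which is what carries a company in the low-trillion range past the $2 trillion mark.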

Adam Levy has positions in Alphabet, Amazon, Apple, Microsoft, and Taiwan Semiconductor Manufacturing. The Motley Fool has positions in and recommends Alphabet, Amazon, Apple, Microsoft, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Broadcom and recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




Physicians Lose Cancer Detection Skills After Using Artificial Intelligence



Artificial intelligence shows great promise in helping physicians improve their diagnostic accuracy for important patient conditions. In the realm of gastroenterology, AI has been shown to help human physicians better detect small polyps (adenomas) during colonoscopy. Although adenomas are not yet cancerous, they are at risk of turning into cancer. Thus, early detection and removal of adenomas during routine colonoscopy can reduce a patient’s risk of developing future colon cancers.

But as physicians become more accustomed to AI assistance, what happens when they no longer have access to AI support? A recent European study has shown that physicians’ skills in detecting adenomas can deteriorate significantly after they become reliant on AI.

The European researchers tracked the results of over 1,400 colonoscopies performed at four different medical centers. They measured the adenoma detection rate (ADR) for physicians working normally without AI vs. those who used AI to help them detect adenomas during the procedure. They also tracked the ADR of physicians who had used AI regularly for three months, then resumed performing colonoscopies without AI assistance.

The researchers found that the ADR before AI assistance was 28% and with AI assistance was 28.4%. (This was a slight increase, but not statistically significant.) However, when physicians accustomed to AI assistance ceased using AI, their ADR fell significantly to 22.4%. Assuming the patients in the various study groups were medically similar, that suggests that physicians accustomed to AI support might miss over a fifth of adenomas without computer assistance!
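The "over a fifth" figure depends on which rate is taken as the baseline; a quick calculation with the study's reported numbers shows both readings:

```python
# Relative drop in adenoma detection rate (ADR) using the figures
# reported in the study.
baseline_adr = 0.280  # ADR before AI assistance
with_ai_adr = 0.284   # ADR with AI assistance
post_ai_adr = 0.224   # ADR after AI assistance was withdrawn

drop_vs_baseline = (baseline_adr - post_ai_adr) / baseline_adr
drop_vs_ai = (with_ai_adr - post_ai_adr) / with_ai_adr

print(f"{drop_vs_baseline:.1%}")  # 20.0%
print(f"{drop_vs_ai:.1%}")        # 21.1%
```

Measured against the pre-AI baseline the relative decline is exactly one fifth; measured against the AI-assisted rate it is slightly more, which is presumably where "over a fifth" comes from.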

This is the first published example of so-called medical “deskilling” caused by routine use of AI. The study authors summarized their findings as follows: “We assume that continuous exposure to decision support systems such as AI might lead to the natural human tendency to over-rely on their recommendations, leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance.”

Consider the following non-medical analogy: Suppose self-driving car technology advanced to the point that cars could safely decide when to accelerate, brake, turn, change lanes, and avoid sudden unexpected obstacles. If you relied on self-driving technology for several months, then suddenly had to drive without AI assistance, would you lose some of your driving skills?

Although this particular study took place in the field of gastroenterology, I would not be surprised if we eventually learn of similar AI-related deskilling in other branches of medicine, such as radiology. At present, radiologists do not routinely use AI while reading mammograms to detect early breast cancers. But when AI becomes approved for routine use, I can imagine that human radiologists could succumb to a similar performance loss if they were suddenly required to work without AI support.

I anticipate more studies will be performed to investigate the issue of deskilling across multiple medical specialties. Physicians, policymakers, and the general public will want to ask the following questions:

1) As AI becomes more routinely adopted, how are we tracking patient outcomes (and physician error rates) before AI, after routine AI use, and whenever AI is discontinued?

2) How long does the deskilling effect last? What methods can help physicians minimize deskilling, and/or recover lost skills most quickly?

3) Can AI be implemented in medical practice in a way that augments physician capabilities without deskilling?

Deskilling is not always bad. My 6th grade schoolteacher kept telling us that we needed to learn long division because we wouldn’t always have a calculator with us. But because of the ubiquity of smartphones and spreadsheets, I haven’t done long division with pencil and paper in decades!

I do not see AI completely replacing human physicians, at least not for several years. Thus, it will be incumbent on the technology and medical communities to discover and develop best practices that optimize patient outcomes without endangering patients through deskilling. This will be one of the many interesting and important challenges facing physicians in the era of AI.




AI exposes 1,000+ fake science journals



A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out “questionable” scientific journals.

The study, published Aug. 27 in the journal “Science Advances,” tackles an alarming trend in the world of research.

Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: These spam messages come from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, and offer to publish his papers — for a hefty fee.

Such publications are sometimes referred to as “predatory” journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.

“There has been a growing effort among scientists and organizations to vet these journals,” Acuña said. “But it’s like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name.”

His group’s new AI tool automatically screens scientific journals, evaluating their websites and other online data for certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?

Acuña emphasizes that the tool isn’t perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable.

But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said.

“In science, you don’t start from scratch. You build on top of the research of others,” Acuña said. “So if the foundation of that tower crumbles, then the entire thing collapses.”

The shakedown

When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality — or, at least, that’s the goal.

A growing number of companies have sought to circumvent that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase “predatory” journals to describe these publications.

Often, they target researchers outside of the United States and Europe, such as in China, India and Iran — countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high.

“They will say, ‘If you pay $500 or $1,000, we will review your paper,'” Acuña said. “In reality, they don’t provide any service. They just take the PDF and post it on their website.”

A few different groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ). Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)

But keeping pace with the spread of those publications has been daunting for humans.

To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ’s data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet.

Among those journals, the AI initially flagged more than 1,400 as potentially problematic.

Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
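Treating the human reviewers' judgments as ground truth, those figures imply a rough precision for the screener. Both counts are approximate in the article ("more than 1,400" flagged, "an estimated 350" false positives), so this is only a back-of-the-envelope estimate:

```python
# Rough precision estimate from the article's round numbers.
flagged = 1400          # "more than 1,400" journals initially flagged
false_positives = 350   # "an estimated 350" judged likely legitimate

true_positives = flagged - false_positives
precision = true_positives / flagged
print(f"{precision:.0%}")  # 75%
```

In other words, roughly three out of four journals the system flags appear to be genuinely questionable, which is consistent with Acuña's framing of the tool as a prescreening helper rather than a final arbiter.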

“I think this should be used as a helper to prescreen large numbers of journals,” he said. “But human professionals should do the final analysis.”

A firewall for science

Acuña added that the researchers didn’t want their system to be a “black box” like some other AI platforms.

“With ChatGPT, for example, you often don’t understand why it’s suggesting something,” Acuña said. “We tried to make ours as interpretable as possible.”

The team discovered, for example, that questionable journals published an unusually high number of articles. They also included authors with a larger number of affiliations than more legitimate journals, and authors who cited their own research, rather than the research of other scientists, to an unusually high level.

The new AI system isn’t publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data — what he calls a “firewall for science.”

“As a computer scientist, I often give the example of when a new smartphone comes out,” he said. “We know the phone’s software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science.”

Co-authors on the study included Han Zhuang at the Eastern Institute of Technology in China and Lizheng Liang at Syracuse University in the United States.


