
AI Research

Alibaba, ByteDance, and Others Remain Keen on NVIDIA (NVDA)’s AI Chips


NVIDIA Corporation (NASDAQ:NVDA) is one of the Best Reddit Stocks to Invest in Now. On September 4, Reuters reported that Alibaba, ByteDance, and other Chinese tech firms remain keen on NVIDIA's AI chips even though regulators in Beijing have been strongly discouraging their use. The firms are seeking reassurance that their orders for Nvidia's H20 model, which the AI giant has regained permission to sell in China, are being processed. They are also monitoring the company's plans for a more powerful chip based on its Blackwell architecture, likely to be named the B30A, Reuters reported.


Reuters further reported that US President Donald Trump has also reached a deal with NVIDIA Corporation (NASDAQ:NVDA) under which the company will pay the US government 15% of its H20 revenue. For Q3 of fiscal 2026, NVIDIA expects revenue of $54.0 billion, plus or minus 2%, with no H20 shipments to China assumed in the outlook. Loomis Sayles, an investment management company, released its Q2 2025 investor letter. Here is what the fund said:

“NVIDIA Corporation (NASDAQ:NVDA) is the world leader in artificial intelligence (AI) computing, which enables computers to mimic human-like intelligence for problem solving and decision making capabilities. Founded in 1993 to develop faster and more-realistic graphics for PC-based video games, Nvidia created the first graphics processing unit (GPU), a dedicated semiconductor that employs a proprietary parallel processing architecture to perform superior graphics rendering outside of a computer’s standard central processing unit (CPU). The parallel processing capability of Nvidia’s GPUs, which contrasts with the linear processing requirement of CPUs, can accelerate computing functions performed by standard CPUs by greater than ten times. As a result, Nvidia extended its visual computing expertise beyond its legacy gaming market into innovative new and larger markets, including data centers, autos, and professional visualization. The parallel processing capability facilitates pattern recognition and machine learning functions that have enabled Nvidia to be at the forefront of growth in artificial intelligence applications. As a result, the data center business, which first surpassed the gaming business to become Nvidia’s largest revenue and profit generator in its 2023 fiscal year, grew to represent over 88% of revenue in the company’s most recent fiscal year. The company is also focused on building out its GPU-computing-based ecosystem and is helping to enable breakthroughs in autonomous driving, and virtual reality.”




AI Research

UC Berkeley researchers use Reddit to study AI’s moral judgements | Research And Ideas


A study published by UC Berkeley researchers used the Reddit forum, r/AmITheAsshole, to determine whether artificial intelligence, or AI, chatbots had “patterns in their moral reasoning.”

The study, led by researchers Pratik Sachdeva and Tom van Nuenen at campus’s D-Lab, asked seven AI large language models, or LLMs, to judge more than 10,000 social dilemmas from r/AmITheAsshole.  

The LLMs used were Claude Haiku, Mistral 7B, Google’s PaLM 2 Bison and Gemma 7B, Meta’s LLaMa 2 7B and OpenAI’s GPT-3.5 and GPT-4. The study found that different LLMs showed unique moral judgement patterns, often giving dramatically different verdicts from other LLMs. These results were self-consistent, meaning that when presented with the same issue, the model seemed to judge it with the same set of morals and values. 

Sachdeva and van Nuenen began the study in January 2023, shortly after ChatGPT came out. According to van Nuenen, as people increasingly turned to AI for personal advice, they were motivated to study the values shaping the responses they received.

r/AmITheAsshole is a Reddit forum where people can ask fellow users if they were the “asshole” in a social dilemma. The forum was chosen by the researchers due to its unique verdict system, as subreddit users assign their judgement of “Not The Asshole,” “You’re the Asshole,” “No Assholes Here,” “Everyone Sucks Here” or “Need More Info.” The judgement with the most upvotes, or likes, is accepted as the consensus, according to the study. 

“What (other) studies will do is prompt models with political or moral surveys, or constrained moral scenarios like a trolley problem,” Sachdeva said. “But we were more interested in personal dilemmas that users will also come to these language models for like, mental health chats or things like that, or problems in someone’s direct environment.”

According to the study, the LLM models were presented with the post and asked to issue a judgement and explanation. Researchers compared their responses to the Reddit consensus and then judged the AI’s explanations along a six-category moral framework of fairness, feelings, harms, honesty, relational obligation and social norms. 

The researchers found that out of the LLMs, GPT-4’s judgments agreed with the Reddit consensus the most, though agreement was generally low. According to the study, GPT-3.5 assigned people “You’re the Asshole” at a comparatively higher rate than GPT-4.
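As a toy illustration of the comparison the researchers describe, a model's agreement with the Reddit consensus can be computed as the fraction of posts where its verdict matches the crowd's. The verdict data below is hypothetical, not taken from the study:

```python
# Hypothetical data: the crowd's consensus verdict for four posts, and
# each model's verdicts on the same posts. Labels abbreviate the
# subreddit's categories (e.g. NTA = "Not The Asshole").
consensus = ["NTA", "YTA", "NTA", "ESH"]
model_verdicts = {
    "gpt-4":   ["NTA", "YTA", "YTA", "ESH"],
    "gpt-3.5": ["YTA", "YTA", "YTA", "NTA"],
}

def agreement_rate(predictions, consensus):
    """Fraction of posts where the model's verdict matches the crowd's."""
    matches = sum(p == c for p, c in zip(predictions, consensus))
    return matches / len(consensus)

for name, verdicts in model_verdicts.items():
    print(name, agreement_rate(verdicts, consensus))
```

With these made-up verdicts, "gpt-4" matches the consensus on three of four posts (0.75) and "gpt-3.5" on one (0.25).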

“Some models are more fairness forward. Others are a bit harsher. And the interesting thing we found is if you put them together, if you look at the distribution of all the evaluations of these different models, you start approximating human consensus as well,” van Nuenen said. 

The researchers found that even though the verdicts of the LLM models generally disagreed with each other, the consensus of the seven models typically aligned with the Redditors’ consensus.
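The ensemble effect described here can be sketched as a simple plurality vote over the models' verdicts; the votes below are hypothetical stand-ins for the study's data:

```python
from collections import Counter

def ensemble_verdict(verdicts):
    """Return the plurality verdict across models. Ties go to whichever
    label appeared first, per Counter.most_common's stable ordering."""
    label, _ = Counter(verdicts).most_common(1)[0]
    return label

# Seven hypothetical model verdicts on a single post.
votes = ["NTA", "YTA", "NTA", "ESH", "NTA", "YTA", "NTA"]
print(ensemble_verdict(votes))  # NTA
```

The idea is that each model's individual biases partially cancel out in the vote, which is how a collection of disagreeing models can still approximate the human consensus.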

One model, Mistral 7B, assigned almost no posts “You’re the Asshole” verdicts, because it interpreted the word “asshole” literally rather than by the forum’s accepted definition, which refers to whoever is at fault.

When asked if he believed the chatbots had moral compasses, van Nuenen instead described them as having “moral flavors.” 

“There doesn’t seem to be some kind of unified, directional sense of right and wrong (among the chatbots). And there’s diversity like that,” van Nuenen said. 

Sachdeva and van Nuenen have begun two follow-up studies. One examines how the models’ stances adjust when deliberating their responses with other chatbots, while the other looks at how consistent the models’ judgments are as the dilemmas are modified. 





AI Research

Imperial researchers develop AI stethoscope that spots fatal heart conditions in seconds


Experts hope the new technology will help doctors spot heart problems earlier

Researchers at Imperial College London and Imperial College Healthcare NHS Trust have developed an AI-powered stethoscope that can detect serious heart conditions, including heart failure, heart valve disease, and irregular heart rhythms, in just 15 seconds.

The device, manufactured by US firm Eko Health, uses a microphone to record heartbeats and blood flow, while simultaneously taking an ECG (electrocardiogram). The data is then analysed by trained AI software, allowing doctors to detect abnormalities beyond the range of the human ear or the traditional stethoscope.

In a trial involving 12,000 patients from 96 GP practices throughout the UK, the AI stethoscope proved accurate in diagnosing illnesses that usually require lengthy periods of examination.

Results revealed that patients examined with the device were twice as likely to be diagnosed with heart failure, and 3.5 times as likely to be diagnosed with atrial fibrillation – a condition linked to strokes. The trial also found that patients were almost twice as likely to be diagnosed with heart valve disease.


The AI stethoscope was trialled on those with more subtle signs of heart failure, including breathlessness, fatigue, or swelling of the lower legs and feet. Retailing at £329 on the Eko Health website, the stethoscope can also be purchased for home use.

Professor Mike Lewis, Scientific Director for Innovation at the National Institute for Health and Care Research (NIHR), described the AI-stethoscope as a “real game-changer for patients.”

He added: “The AI stethoscope gives local clinicians the ability to spot problems earlier, diagnose patients in the community, and address some of the big killers in society.”

Dr Sonya Babu-Narayan, Clinical Director at the British Heart Foundation, further praised this innovation: “Given an earlier diagnosis, people can access the treatment they need to help them live well for longer.”

Imperial College London’s research is a significant breakthrough in rapid diagnosis technology. British Heart Foundation figures show that more than 7.6 million people in the UK live with cardiovascular disease, which causes around 170,000 deaths each year.

Often called a “silent killer”, heart conditions can go unnoticed for years, particularly in young people. The charity Cardiac Risk in the Young reports that 12 young people die each week from undiagnosed heart problems, with athletes at particular risk. Experts hope this new technology will allow these conditions to be identified far earlier.

The NHS has also welcomed these findings. Heart failure costs the NHS more than £2 billion per year, equating to 4 per cent of the annual budget. The NHS estimates that by enabling earlier diagnosis, the AI tool could save up to £2,400 per patient.

Researchers now plan to roll out the stethoscope across GP practices in Wales, South London and Sussex – a move they say could transform how heart conditions are diagnosed throughout the country.






AI Research

From pilot to profitability: How to approach enterprise AI adoption


From central authority to shared ownership

In conversations with other IT leaders, I’ve noticed a common pattern in how AI programs evolve. Most began with a centralized team — a logical first step to establish standards, consistency and a safe space for early experiments. But over time, it became clear that no central group could keep pace with every business request or understand each domain deeply enough to deliver the best solutions.

Many organizations have since shifted toward a hub-and-spoke model. The hub — often an AI center of excellence — takes responsibility for governance, education, best practices and the technically complex use cases. The spokes, led by product or functional teams, experiment with AI features embedded in the tools they use every day. Because they’re closer to the business, these teams can test, iterate and deliver solutions at speed.

When I look across industries, the majority of AI innovation is now happening at the edge, not the center. That’s largely because so much intelligence is already embedded into enterprise software. A CRM platform, for instance, might now offer AI-based lead scoring or predictive churn models — capabilities a team can enable and deploy with little to no involvement from the center of excellence.




