AI Research Firm Reka Valued at $1 Billion 


Artificial intelligence (AI) research and product development firm Reka AI has raised $110 million.

The company’s new funding round, announced Tuesday (July 22), included contributions from Nvidia and data cloud company Snowflake, and will help Reka scale its multimodal platform for more widespread enterprise adoption.

“Reka is known for its ultra-efficient multimodal models developed by a world-class research team,” the company said in a news release.

“The company’s focus on efficient training and serving infrastructure has enabled it to develop market-leading models at a fraction of the cost. Reka Flash—a multimodal model that understands video, image, text, and audio—is the workhorse of Reka’s product offerings.”

The round values Reka at $1 billion, according to a report by Bloomberg News. The same report notes that Snowflake had held talks to acquire Reka last year, though those discussions ended when “both companies decided it made sense to move independently,” CEO Dani Yogatama told Bloomberg.

Vivek Raghunathan, vice president of AI engineering at Snowflake, said the company would offer Reka AI’s models and other tools to its clients.

“Very few teams in the world have the capability to build what they’ve built,” Raghunathan said. “Almost everyone at that level of talent is at OpenAI, Meta or Anthropic. Reka is one of the rare independents — and they’ve proven they can compete.”

Snowflake earlier this year announced plans for a new Silicon Valley “AI hub,” as well as a goal, alongside its startup accelerator and its venture capital partner, of investing up to $200 million in early-stage startups.

In other artificial intelligence news, PYMNTS wrote earlier this week about how AI benchmarks, the standardized scores touted each time companies like Google or OpenAI roll out a new model, can help guide vendor decisions, identify growth areas and determine whether a model is suitable for a given task.

“The first path to discernment is understanding the nature of these benchmarks,” the report said. “These benchmarks are standardized tests that measure an AI model’s proficiency in several areas: math, science, language understanding, coding and reasoning, among other topics.”

Without benchmarks, companies would need to depend on marketing claims or one-sided case studies when figuring out which AI system to use.

“Benchmarks orient AI,” Percy Liang, director of Stanford’s Center for Research on Foundation Models, said at a 2023 Fellows Fund event. “They give the community a North Star.”




Oxford University and Ellison Institute link for AI vaccine research


The University of Oxford has announced the launch of a new vaccine research programme in collaboration with the Ellison Institute of Technology (EIT), following the receipt of £118m ($159.2m) in research funding.

The initiative, named CoI-AI (Correlates of Immunity-Artificial Intelligence), will be led by the Oxford Vaccine Group.

This programme aims to integrate Oxford’s expertise in human challenge studies, immune science and vaccine development with EIT’s advancements in AI technology.

The objective is to enhance the understanding of the immune response to infections and the protective effects of vaccines.

The CoI-AI programme will focus on the immune system’s reaction to pathogens that lead to severe infections and contribute to antibiotic resistance, including Streptococcus pneumoniae, Staphylococcus aureus and E. coli.

These pathogens are responsible for widespread illnesses and have proven challenging for traditional vaccine strategies.

Researchers will employ human challenge models, where volunteers are safely exposed to specific bacteria in controlled environments, alongside modern immunology and AI methodologies to identify immune responses that correlate with protection.

Oxford Vaccine Group director Professor Sir Andrew Pollard stated: “This programme addresses one of the most urgent problems in infectious disease by helping us to understand immunity more deeply to develop innovative vaccines against deadly diseases that have so far evaded our attempts at prevention.

“By combining advanced immunology with artificial intelligence, and using human challenge models to study diseases, CoI-AI will provide the tools we need to tackle serious infections and reduce the growing threat of antibiotic resistance.”

In December 2024, Oxford and EIT formalised a long-term strategic partnership aimed at addressing some of the most pressing challenges faced by humanity.

This collaboration encompasses a range of disciplines: generative biology, clinical medicine, plant science, sustainable energy and public policy.

The initiative is supported by computational resources provided by Oracle, which include a dedicated AI team and a scholars programme designed to cultivate the next generation of scientists.

EIT chairman Larry Ellison stated: “Researchers in the CoI-AI programme will use artificial intelligence models developed at EIT to identify and better understand the immune responses that predict protection.

“This vaccine development programme combines Oxford’s leadership in immunology and human challenge models with cutting-edge AI, laying the groundwork for a new era of vaccine discovery – one that is faster, smarter and better able to respond to infectious disease outbreaks throughout the world.”

The Oxford Vaccine Group operates within the Department of Paediatrics at the University of Oxford’s Medical Sciences Division.

“Oxford University and Ellison Institute link for AI vaccine research” was originally created and published by Pharmaceutical Technology, a GlobalData-owned brand.

 



The Smartest Artificial Intelligence (AI) Stocks to Buy With $1,000


AI investing is still one of the most promising trends on the market.

Buying artificial intelligence (AI) stocks after the run they’ve had over the past few years may seem silly. However, the reality is that many of these companies are still experiencing rapid growth and anticipate even greater gains on the horizon.

By investing now, you can get in on the second wave of AI investing success before it hits. While it won’t be nearly as lucrative as the first round that occurred from 2023 to 2024, it should still provide market-beating results, making these stocks great buys now.


AI Hardware: Taiwan Semiconductor and Nvidia

The demand for AI computing power appears to be insatiable. All of the AI hyperscalers are spending record amounts on building data centers in 2025, and they’re projecting to spend even more in 2026. This bodes well for the companies supplying the products that fill those data centers with the computing power needed to process AI workloads.

Two of my favorites in this space are Nvidia (NVDA) and Taiwan Semiconductor Manufacturing (TSM). Nvidia makes graphics processing units (GPUs), which have been the primary computing muscle for AI workloads so far. Thousands of GPUs are connected in clusters due to their ability to process multiple calculations in parallel, creating a powerful computing machine designed for training and processing AI workloads.

The chips inside these GPUs are manufactured by Taiwan Semiconductor, the world’s leading contract chip manufacturer. TSMC also supplies chips to Nvidia’s competitors, such as Advanced Micro Devices, so it’s playing both sides of the arms race. This is a great position to be in, and it has led to impressive growth for TSMC.

Both Taiwan Semiconductor and Nvidia are capitalizing on massive data center demand, and have the growth to back it up. In Q2 FY 2026 (ending July 27), Nvidia’s revenue increased by 56% year over year. Taiwan Semiconductor’s revenue rose by 44% in its corresponding Q2, showcasing the strength of both of these businesses.

With data center demand only expected to increase, both of these companies make for smart buys now.

AI Hyperscalers: Amazon, Alphabet, and Meta Platforms

The AI hyperscalers are companies that spend a significant amount of money on AI computing capacity for internal use and to provide tools for consumers. Three major players in this space are Amazon (AMZN), Alphabet (GOOG) (GOOGL), and Meta Platforms (META).

Amazon makes this list due to the boost its cloud computing division, Amazon Web Services (AWS), is experiencing. Cloud computing is benefiting from the AI arms race because it allows clients to rent computing power from companies that have more resources than they do. AWS is the market leader in this space, and it is a huge part of Amazon’s business. Despite making up only 18% of Q2 revenue, it generated 53% of Amazon’s operating profits. AWS is a significant beneficiary of AI and is helping drive the stock higher.

Alphabet also has a cloud computing wing in Google Cloud, and it’s developing one of the highest-performing generative AI models: Gemini. Alphabet has integrated Gemini into nearly all of its products, including its most important one, Google Search.

With the integration of generative AI into traditional Google Search, Alphabet has addressed a threat that many investors feared would be the end of Google. That hasn’t been the case, and Alphabet’s impressive 12% growth in Google Search revenue in Q2 supports it. Despite its strong growth, Alphabet is by far the cheapest stock on this list, trading for less than 21 times forward earnings.

Chart: AMZN forward P/E ratio. Data by YCharts.

With Alphabet’s strong competitive position and cheap valuation, it’s an excellent stock to buy now.

To round out this list, Meta Platforms is another smart pick. It’s the parent company of social media platforms Facebook and Instagram, and it generates the vast majority of its revenue from ads. As a result, it’s investing significant resources into improving how AI designs and targets ads, and it’s already seeing results: AI has increased the amount of time users spend on Facebook and Instagram, and it is driving more ad conversions.

We’re just scratching the surface of what AI can do for Meta’s business, and with Meta spending a significant amount of money on top AI talent, it should be able to convert that into some substantial business wins.

AI is a significant boost for the world’s largest companies, and I wouldn’t be surprised to see them outperform the broader market in the coming year as a result.

Keithen Drury has positions in Alphabet, Amazon, Meta Platforms, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has positions in and recommends Alphabet, Amazon, Meta Platforms, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has a disclosure policy.




Therapists say AI can help them help you, but some see privacy concerns


Therapists in Winnipeg have started using artificial intelligence-powered tools to listen in on and transcribe sessions, which some say helps them provide better patient care, but the practice is raising privacy and ethical concerns for some patients and experts.

Wildwood Wellness Therapy director Gavin Patterson has been using a tool called Clinical Notes AI at his Osborne Village practice for the past 11 months to summarize and auto-generate patient assessments and treatment plans with the click of a button.

Once he has consent from clients to use the software, he turns it on and it transcribes the sessions in real time. Patterson said it’s improving care for his 160 patients.

“Notes before were good, but now it’s so much better,” he said. “When I’m working with clients one-on-one, I’m able to free myself of writing down everything and be fully present in the conversation.”

Patterson sees up to 10 patients daily, making it difficult to remember every session in detail. But AI lets him capture the entire appointment.

“It gives me a lot of brain power back, and it helps me deliver a higher product of service,” he said.

The software also cuts down on the time it would normally take to write clinical notes, letting him provide care to more patients on average.

Tools like Clinical Notes AI can listen to, and transcribe, therapy sessions in real time. (Jeff Stapleton/CBC)

Once patient notes are logged, Patterson said the transcripts from the session are deleted.

As an extra layer of security, he makes sure to record only information the AI absolutely needs.

“I don’t record the client’s name,” he said. “There’s no identifying marks within the note,” which is intended to protect patients from possible security breaches.

But 19-year-old Rylee Gerrard, who has been going to therapy for years, says while she appreciates that an AI-powered tool can help therapists with their heavy workloads, she has concerns about privacy.

“I don’t trust that at all,” said Gerrard, noting she shares “very personal” details in therapy. Her therapist does not currently use AI, she says.

Rylee Gerrard, 19, says she has concerns about privacy when it comes to AI use in therapy. (Travis Golby/CBC)

“I just don’t know where they store their information, I don’t know who owns that information … where all of that is kind of going,” Gerrard said, adding she’s more comfortable knowing that her therapist is the only person with details from her sessions.

Unlike the tool Patterson uses, some software, such as Jane, which is used by some clinics in Winnipeg, can record audio and video of a patient’s session in addition to making transcriptions.

Recordings are stored until a clinician deletes them, and even then they remain in the system for seven days before being permanently erased, according to the software’s website.

CBC News reached out to the company multiple times asking about its security protocols but didn’t receive a reply prior to publication. A section on security on the Jane website says it has a team whose “top priority is to protect your sensitive data.”

Caution, regulation needed: privacy expert 

Ann Cavoukian, the executive director of Global Privacy and Security by Design Centre — an organization that helps people protect their personal data — says there are privacy risks when AI is involved.

“AI can be accessed by so many people in an unauthorized manner,” said Cavoukian, a former privacy commissioner for the province of Ontario.

“This is the most sensitive data that exists, so you have to ensure that no unauthorized third parties can gain access to this information.”

Privacy expert Ann Cavoukian says the use of AI increases the risk of sensitive information ending up in the wrong hands. (Dave MacIntosh/CBC)

She says most, if not all, AI transcription technologies used in health care lack adequate security measures to protect against external access to data, leaving sensitive information vulnerable.

“You should have it in your back pocket, meaning in your own system — your personal area where you get personal emails … and you are in control,” she said.

In Manitoba, there are no provincial regulations governing AI scribes used in health-care or therapy settings, according to a statement from the province.

Cavoukian said she understands the workload strain therapists face, but thinks the use of AI in therapy should be met with caution and regulated.

“This is the ideal time, right now, to embed privacy protective measures into the AI from the outset,” she said.

She wants governments and health-care systems to proactively create regulations to protect sensitive information from getting into the wrong hands.

“That can cause enormous harm to individuals,” she said. “That’s what we have to stop.”

Recording sessions not a new technology

The concept of recording therapy sessions is not new to Peter Bieling, a clinical psychologist and a professor of psychiatry and behavioural neurosciences at McMaster University in Hamilton. Therapists have been doing that in other ways, with the consent of patients, for years, he said.

“It used to be old magnetic tape and then it was cassettes,” said Bieling, adding there was always the risk of recordings falling into the wrong hands.

He understands the apprehension around the use of AI in therapy, but encourages people to see it for what it is: a tool, and an updated version of what already exists.

Clinical psychologist Peter Bieling agrees there are concerns around AI and security, but says its use could improve patient care. (CBC)

The use of scribing tools will not change therapy sessions, nor will it replace therapists, he said. Artificial intelligence cannot diagnose a patient or submit documentation, he noted, so practitioners still have the final say.

“Electronic health records have been making recommendations and suggestions, as have the authors of guidelines and textbook writers, for many years,” said Bieling.

But like Cavoukian, he believes more regulations are needed to guide the use of AI. Failing to implement those may lead to problems in the future, he said.

“These agencies are way too late and way too slow,” said Bieling.

For now, Cavoukian advises patients to advocate for themselves.

“When they go in for therapy or any kind of medical treatment, they should ask right at the beginning, ‘I want to make sure my personal health information is going to be protected. Can you tell me how you do that?'”

Asking those types of questions may put pressure on systems to regulate the AI they use, she said.


