
Bears vs. Vikings NFL props, best SportsLine Machine Learning Model AI predictions: Williams under 214.5 yards

NFL Week 1 concludes with a Monday Night Football matchup at 8:15 p.m. ET between the Chicago Bears and Minnesota Vikings (-1, 43.5). Quarterback J.J. McCarthy will make his regular-season debut after missing last year due to injury, and he’ll see a member of his draft class on the other side of the field in Caleb Williams. Prop bettors will likely target the two young quarterbacks, in addition to proven playmakers like Justin Jefferson, D.J. Moore and Aaron Jones. Jefferson, who torched the Bears earlier in his career, has been contained in recent matchups, which could influence MNF prop picks. He dealt with a hamstring injury in the preseason but is not on the injury report for Monday.

The two-time All-Pro has been held under 75 receiving yards in three straight games versus Chicago, and his MNF prop total is 77.5 receiving yards. Both the Over and Under would return -112, per the latest NFL prop odds, and his early chemistry with McCarthy will be a focal point. Before betting any Vikings vs. Bears props for Monday Night Football, you need to see the Bears vs. Vikings prop predictions powered by SportsLine’s Machine Learning Model AI.

Built using cutting-edge artificial intelligence and machine learning techniques by SportsLine’s Data Science team, AI Predictions and AI Ratings are generated for each player prop. 

For Vikings vs. Bears NFL betting on Monday Night Football, the Machine Learning Model has evaluated the NFL player prop odds and provided Bears vs. Vikings prop picks. You can only see the Machine Learning Model player prop predictions for Minnesota vs. Chicago here.

Top NFL player prop bets for Bears vs. Vikings

After analyzing the Vikings vs. Bears props and examining the dozens of NFL player prop markets, SportsLine’s Machine Learning Model says Bears quarterback Williams goes Under 214.5 passing yards (-112 at FanDuel). Primetime games like the one he’ll play on Monday night weren’t favorable to Williams as a rookie: he lost all three he appeared in, threw just one passing touchdown across them, was sacked an average of 5.3 times per game and, most relevant to this NFL prop, failed to reach even 200 passing yards in any of the three.

One of those games came against the Vikings in Week 15, when Williams finished with just 191 yards through the air. Minnesota terrorized quarterbacks a year ago, leading the NFL with 24 defensive interceptions, holding opposing QBs to the second-lowest passer rating (82.4) and racking up the fourth-most sacks (49). Given Minnesota’s prowess in defending the pass, and Williams’ primetime struggles, the SportsLine Machine Learning Model forecasts him to finish with just 172 passing yards, making Under 214.5 a 4.5-star NFL prop. See more NFL props here, and new users can also target the FanDuel promo code, which offers $300 in bonus bets if their first $5 bet wins.

How to make NFL player prop bets for Chicago vs. Minnesota

In addition, the SportsLine Machine Learning Model says another star sails past his total and has four additional NFL props that are rated four stars or better. You need to see the Machine Learning Model analysis before making any Vikings vs. Bears prop bets for Monday Night Football.

Which Bears vs. Vikings prop bets should you target for Monday Night Football? Visit SportsLine now to see the top Vikings vs. Bears props, all from the SportsLine Machine Learning Model.






Patients turn to AI to interpret lab tests, with mixed results

People are turning to chatbots like Claude for help interpreting their lab test results.

Smith Collection/Gado/Archive Photos/Getty Images

When Judith Miller had routine blood work done in July, she got a phone alert the same day that her lab results were posted online. So, when her doctor messaged her the next day that overall her tests were fine, Miller wrote back to ask about the elevated carbon dioxide and something called “low anion gap” listed in the report.

While the 76-year-old Milwaukee resident waited to hear back, Miller did something patients increasingly do when they can’t reach their health care team. She put her test results into Claude and asked the AI assistant to evaluate the data.

“Claude helped give me a clear understanding of the abnormalities,” Miller said. The generative AI model didn’t report anything alarming, so she wasn’t anxious while waiting to hear back from her doctor, she said.

Patients have unprecedented access to their medical records, often through online patient portals such as MyChart, because federal law requires health organizations to immediately release electronic health information, such as notes on doctor visits and test results.

And many patients are using large language models, or LLMs, like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini, to interpret their records. That help comes with some risk, though. Physicians and patient advocates warn that AI chatbots can produce wrong answers and that sensitive medical information might not remain private.

But does AI know what it’s talking about?

Yet most adults are cautious about AI and health. Fifty-six percent of those who use or interact with AI are not confident that information provided by AI chatbots is accurate, according to a 2024 KFF poll. (KFF is a health information nonprofit that includes KFF Health News.)

That instinct is borne out in research.

“LLMs are theoretically very powerful and they can give great advice, but they can also give truly terrible advice depending on how they’re prompted,” said Adam Rodman, an internist at Beth Israel Deaconess Medical Center in Massachusetts and chair of a steering group on generative AI at Harvard Medical School.

Justin Honce, a neuroradiologist at UCHealth in Colorado, said it can be very difficult for patients who are not medically trained to know whether AI chatbots make mistakes.

“Ultimately, it’s just the need for caution overall with LLMs. With the latest models, these concerns are continuing to get less and less of an issue but have not been entirely resolved,” Honce said.

Rodman has seen a surge in AI use among his patients in the past six months. In one case, a patient took a screenshot of his hospital lab results on MyChart, then uploaded them to ChatGPT to prepare questions ahead of his appointment. Rodman said he welcomes patients’ showing him how they use AI, and that their research creates an opportunity for discussion.

Roughly 1 in 7 adults over 50 use AI to receive health information, according to a recent poll from the University of Michigan, while 1 in 4 adults under age 30 do so, according to the KFF poll.

Using the internet to advocate for better care for oneself isn’t new. Patients have traditionally used websites such as WebMD, PubMed, or Google to search for the latest research and have sought advice from other patients on social media platforms like Facebook or Reddit. But AI chatbots’ ability to generate personalized recommendations or second opinions in seconds is novel.

What to know: Watch out for “hallucinations” and privacy issues

Liz Salmi, communications and patient initiatives director at OpenNotes, an academic lab at Beth Israel Deaconess that advocates for transparency in health care, had wondered how good AI is at interpretation, specifically for patients.

In a proof-of-concept study published this year, Salmi and colleagues analyzed the accuracy of ChatGPT, Claude, and Gemini responses to patients’ questions about a clinical note. All three AI models performed well, but how patients framed their questions mattered, Salmi said. For example, telling the AI chatbot to take on the persona of a clinician and asking it one question at a time improved the accuracy of its responses.
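
To make that framing concrete, here is a minimal sketch of the “clinician persona, one question at a time” pattern, written in Python against the OpenAI SDK. The model choice, prompt wording and the ask_one_question helper are illustrative assumptions, not the study’s actual protocol.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Persona framing: answer as a clinician would, and stay within
    # the single question being asked.
    SYSTEM_PROMPT = (
        "You are a clinician explaining lab results to a patient in plain "
        "language. Answer only the question asked and do not speculate "
        "beyond the data provided."
    )

    def ask_one_question(note: str, question: str) -> str:
        """Send one focused question about one clinical note per call."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": f"Clinical note:\n{note}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content

    # One question per call, rather than a single multi-part prompt:
    # print(ask_one_question(my_note, "What does a low anion gap mean?"))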

Privacy is a concern, Salmi said, so it’s critical to remove personal information like your name or Social Security number from prompts. Data goes directly to tech companies that have developed AI models, Rodman said, adding that he is not aware of any that comply with federal privacy law or consider patient safety. Sam Altman, CEO of OpenAI, warned on a podcast last month about putting personal information into ChatGPT.
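
As a concrete illustration of that advice, the sketch below strips a few obvious identifiers from text before it is pasted into a chatbot. The patterns are deliberately simple and illustrative; real de-identification requires far more than a handful of regular expressions.

    import re

    def redact(text: str, name: str) -> str:
        # Replace the patient's name wherever it appears, case-insensitively.
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
        # Mask U.S. Social Security numbers written as 123-45-6789.
        text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
        # Mask 10-digit phone numbers in common formats.
        text = re.sub(r"\(?\b\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b", "[PHONE]", text)
        return text

    # Example with made-up data:
    # redact("Jane Q. Sample, SSN 123-45-6789, CO2 is elevated.", "Jane Q. Sample")
    # -> "[NAME], SSN [SSN], CO2 is elevated."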

“Many people who are new to using large language models might not know about hallucinations,” Salmi said, referring to a response that may appear sensible but is inaccurate. For example, OpenAI’s Whisper, an AI-assisted transcription tool used in hospitals, introduced an imaginary medical treatment into a transcript, according to a report by The Associated Press.

Using generative AI demands a new type of digital health literacy that includes asking questions in a particular way, verifying responses with other AI models, talking to your health care team, and protecting your privacy online, said Salmi and Dave deBronkart, a cancer survivor and patient advocate who writes a blog devoted to patients’ use of AI.

Physicians must be cautious with AI too

Patients aren’t the only ones using AI to explain test results. Stanford Health Care has launched an AI assistant that helps its physicians draft interpretations of clinical tests and lab results to send to patients.

Colorado researchers studied the accuracy of ChatGPT-generated summaries of 30 radiology reports, along with patients’ satisfaction with them. Of the 118 valid responses from patients, 108 indicated the ChatGPT summaries clarified details about the original report.

But ChatGPT sometimes overemphasized or underemphasized findings, and a small but significant number of responses indicated patients were more confused after reading the summaries, said Honce, who participated in the preprint study.

Meanwhile, after four weeks and a couple of follow-up messages in MyChart, Miller’s doctor ordered a repeat of her blood work and an additional test that Miller had suggested. The results came back normal. Miller was relieved and said she was better informed because of her AI inquiries.

“It’s a very important tool in that regard,” Miller said. “It helps me organize my questions and do my research and level the playing field.”

KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF.




Artificial intelligence can predict risk of heart attack – mydailyrecord.com


China’s Open-Source Models Are Testing US AI Dominance

While the AI boom seemingly began in Silicon Valley with OpenAI’s ChatGPT three years ago, 2025 has proved that China is highly competitive in the artificial intelligence field, if not the frontrunner. The Eastern superpower is building its own open-source AI programs that have demonstrated high performance, prioritizing ubiquity and effectiveness over profitability (while still managing to make quite a bit of money), the Wall Street Journal reports.

DeepSeek, whose R1 reasoning model became popular at the start of the year, is probably the best-known Chinese AI company in the U.S. Being open-source, as opposed to proprietary, means these programs are free and their source code can be downloaded, used and tinkered with by anyone. Qwen, Moonshot, Z.ai and MiniMax are other such programs.

This is in contrast to U.S. offerings like ChatGPT, which, though free to use (up to a certain level of compute), are not made available to be modified or extracted by users. (OpenAI did debut its first open-source model, GPT-OSS, last month.)

American companies like OpenAI are racing to catch up, since the most accessible and customizable technologies are often the ones that become monopolies and industry standards. In July, the Trump administration wagered that open-source models “could become global standards in some areas of business and in academic research.”

Want to join the conversation on how the security of information and data is impacting our global power struggle with China? Attend the 2025 Intel Summit on Oct. 2, hosted by the Potomac Officers Club. This GovCon-focused event will include a must-attend panel discussion called “Guarding Innovation: Safeguarding Research and IP in the Era of Strategic Competition With China.” Register today!


China’s Tech Progress Has Big Implications

The Intel Summit panel will feature, among other distinguished guests, David Shedd, a highly experienced intel community official who was acting director of the Defense Intelligence Agency (after serving as its deputy director for four years) and deputy director of national intelligence for policy, plans and procedures.

Shedd spoke to GovCon Wire in an exclusive interview about China-U.S. competition ahead of his appearance on the panel. He said that China’s progress in areas like AI should not be taken lightly and could portend greater problems and tension in the future.

“Sensitive IP or technological breakthroughs in things like AI, stealth fighter jets, or chemical formulas lost to an adversary do not happen in a vacuum. They lead, instead, to the very direct and very serious loss of the relative capabilities that define and underpin the balance and symbiosis of relationships within the international system,” Shedd commented.

Open-source models are attractive to organizations, WSJ said, because they can customize the programs, run them internally and protect sensitive data. In their Intel Summit panel session, Shedd and his counterparts will explore how the U.S. might embrace open-source more firmly as a way to stay agile in the realm of research and IP protection.

Who Is Stronger, America or China?

“The Great Heist,” coming Dec. 2025

Shedd, along with co-author Andrew Badger, is publishing a book, “The Great Heist: China’s Epic Campaign to Steal America’s Secrets,” on December 2. Published through HarperCollins, the volume will focus on the campaign of intellectual property theft the Chinese government is waging against the U.S.

Shedd elaborated for us:

“The PRC/CCP’s unrelenting pursuit of stolen information from the West and the U.S. in particular has propelled China’s economic and military might to heights previously unimaginable. Yet we collectively continue to underestimate the scale of this threat. It’s time for the world to fully comprehend the depth and breadth of China’s predatory behavior.

“Our national security depends on how we respond—and whether we finally wake up to the reality that China has already declared an economic war on the West using espionage at the forefront of its campaign. It already has a decades-long head start.”

Don’t miss former DIA Acting Director David Shedd, as well as current IC leaders like Deputy Director of National Intelligence Aaron Lukas and CIA’s AI office Deputy Director Israel Soong at the 2025 Intel Summit on Oct. 2! Save your spot before it’s too late.
