AI Research
Commanders vs. Packers NFL props, SportsLine Machine Learning Model AI picks: Jordan Love Over 223.5 passing yards

The NFL Week 2 schedule gets underway with a Thursday Night Football matchup between NFC playoff teams from a year ago. The Washington Commanders battle the Green Bay Packers beginning at 8:15 p.m. ET from Lambeau Field in Green Bay. Second-year quarterback Jayden Daniels led the Commanders to a 21-6 opening-day win over the New York Giants, completing 19 of 30 passes for 233 yards and one touchdown. Jordan Love, meanwhile, helped propel the Packers to a dominating 27-13 win over the Detroit Lions in Week 1. He completed 16 of 22 passes for 188 yards and two touchdowns.
NFL prop bettors will likely target the two young quarterbacks with NFL prop picks, in addition to proven playmakers like Terry McLaurin, Tucker Kraft and Josh Jacobs. Green Bay’s Jayden Reed has been dealing with a foot injury, but still managed to haul in a touchdown pass in the opener. The Packers enter as a 3.5-point favorite with Green Bay at -187 on the money line. Before betting any Commanders vs. Packers props for Thursday Night Football, you need to see the Commanders vs. Packers prop predictions powered by SportsLine’s Machine Learning Model AI.
Built using cutting-edge artificial intelligence and machine learning techniques by SportsLine’s Data Science team, AI Predictions and AI Ratings are generated for each player prop.
For Packers vs. Commanders NFL betting on Thursday Night Football, the Machine Learning Model has evaluated the NFL player prop odds and provided Commanders vs. Packers prop picks. You can only see the Machine Learning Model player prop predictions for Washington vs. Green Bay here.
Top NFL player prop bets for Commanders vs. Packers
After analyzing the Commanders vs. Packers props and examining the dozens of NFL player prop markets, SportsLine’s Machine Learning Model says Packers quarterback Love goes Over 223.5 passing yards (-112 at FanDuel). Love passed for 224 or more yards in eight games a year ago, despite an injury-filled season. In 15 regular-season games in 2024, he completed 63.1% of his passes for 3,389 yards and 25 touchdowns with 11 interceptions.
In a 30-13 win over the Seattle Seahawks on Dec. 15, he completed 20 of 27 passes for 229 yards and two touchdowns. Love completed 21 of 28 passes for 274 yards and two scores in a 30-17 victory over the Miami Dolphins on Nov. 28. The model projects Love to pass for 259.5 yards, giving this prop bet a 4.5 rating out of 5. See more NFL props here, and new users can also target the FanDuel promo code, which offers $300 in bonus bets if their first $5 bet wins.
How to make NFL player prop bets for Washington vs. Green Bay
In addition, the SportsLine Machine Learning Model says another star sails past his total and has four additional NFL props that are rated four stars or better. You need to see the Machine Learning Model analysis before making any Commanders vs. Packers prop bets for Thursday Night Football.
Which Commanders vs. Packers prop bets should you target for Thursday Night Football? Visit SportsLine now to see the top Commanders vs. Packers props, all from the SportsLine Machine Learning Model.
AI Research
Coming for Your Job or Improving Your Performance?

Epic’s Electronic Medical Record and Ancillary Systems Release AI Upgrades
Epic Systems announced a host of artificial intelligence (AI) tools last month at its annual conference. With more than 300 million patient records (in a country of fewer than 400 million people) and more than 3500 hospital customers, Epic has either released or is currently working on more than 200 different AI tools.1
Epic’s vast data stores are being used to create predictive models and train its own AI tools. The scale of Epic, of AI, and of what the combination can be used for is both exciting and frightening.
Optimizing Tools to Reduce the Burden of Health Care Administration
Typically, Epic Systems and other technology solution providers’ first entry into AI implementation comes in the form of reducing menial tasks and attempting to automate patient-customer interactions. In my discussions with health care administrators, it is not uncommon for them to attribute 40% to 60% of the cost of health care to administrative tasks or supports. For every physician, there are generally at least 3 full-time equivalents (workforce members) needed to support that physician’s work, from scheduling to rooming patients to billing and a host of other support efforts.
About the Author
Troy Trygstad, PharmD, PhD, MBA, is the executive director of CPESN USA, a clinically integrated network of more than 3500 participating pharmacies. He received his PharmD and MBA degrees from Drake University and a PhD in pharmaceutical outcomes and policy from the University of North Carolina. He has recently served on the board of directors for the Pharmacy Quality Alliance and the American Pharmacists Association Foundation. He also proudly practiced in community pharmacies across the state of North Carolina for 17 years.
Reducing Cost and Improving Patient-Customer Experience
Reducing administration should reduce costs. Early-entry AI tools are generally aimed at reducing administrative cost while simultaneously improving the patient experience extramural to the care delivery process (the bump in customer experience is the side benefit, not the motivation). In fact, at this juncture, all of us are more likely to be assigned an AI assistant as customers than to use one as health care providers. AI is already deployed across multiple sectors and customer service scenarios, interacting with us indirectly in recent years and now more directly as time passes. Remember that Siri and Alexa continue to learn and grow, and there are now thousands upon thousands of these AI bots. There is a strong possibility that if you answer a spam phone call, the “person” on the other end of the line is an AI tool (being?) and not a human.
Will AI Be an Antidote to Health Care Professional Burnout?
Charting is a drag. Ask any medical provider: one of their least favorite tasks is writing up patient encounters for documentation’s sake rather than for the sake of patient care. I’ve personally known hundreds of physicians over my 2 decades of collaborating with them, and the majority do most of their charting during off hours (thanks to technology), eating into their work-life balance and well-being. A recent study of providers using AI tools for charting found a 40% reduction in documentation burden and better, more complete charting.2 Could AI reduce the burden of menial pharmacy tasks that take away from patient care as well? Very likely yes.
But what if the business model doesn’t change? The biggest lie ever told in pharmacy was that technology would free pharmacists to provide patient care, without a subsequent workflow and economic model to support it. Instead, our profession went from filling 150 prescriptions a day to 300 a day to, in some cases, up to 500 per day per pharmacist. The only thing the technology did was increase the throughput of the existing business model. It didn’t support a new model at all.
That is the concern of many physicians as well. Will AI merely increase the number of encounters expected of them or will it actually improve their care delivery and practice satisfaction? That’s a question explored in a recent Harvard Business School article that points to upcoding bias (documentation of higher levels of care to bill more revenue), reduction in administrative cost, and reduced clerical full-time equivalents as the seeming “wins” for health systems administrators thus far, rather than better and more cost-efficient care delivery overall.3 Unsurprisingly to pharmacists, the business model is driving AI use, not the desired practice model.
AI as the New “Peripheral Brain” and Decision Support System
Those of us of a certain age remember a time in pharmacy school when we first entered the practice world under the supervision of a preceptor. At that time, the “peripheral brain” was a notebook that contained the latest prescribing guidelines, infectious disease–drug matches, and other clinical information. Then along came a handheld electronic version of it. Then came Google. Then the implementation of cloud computing. And now AI.
AI is already in place for many physicians and other health care providers, and I fear pharmacy may actually be late to the game in an arms race to make the drug assessment–prescribing–filling process even more efficient. But efficient at what? Administrative tasks? Order entry? Prior authorization documentation?
What About the Effects on Health System and Community Pharmacy Practice?
What if the rest of the world views the practice of pharmacy as consisting entirely of administrative tasks and not assessment and care delivery? If the AI tool is the physician’s peripheral brain, why is there a need for the pharmacist to make recommendations or find drug therapy problems? If the AI tool is instructing the care manager on which medications the care team needs to gather information about and report back to the peripheral brain, why have a pharmacist on the team? There will be many who say, “Oh, AI will absolutely replace the need for pharmacists because they don’t (actually) deliver care. They are a means of medication distribution and a great source of knowledge of medications, but AI will be better at that.”
Too Little Discussion and Planning Underway in Pharmacy Circles
The AI takeover is not some distant future reality; it is weeks and months away, not years and decades. Nvidia (the chipmaker essential for AI processing) has seen its stock price rise more than 900% in the past 3 years as investors awaken to the speed with which AI is moving. AI is already starting to move from helper to replacement for many jobs, and we could see AI agents doing research autonomously within 6 to 18 months and becoming experts in every field of study known to humans by 2030 (or sooner).
What are we doing in the pharmacy world to prepare, take advantage of, and plant our flag as the medication optimization experts that utilize AI better than anyone else? As far as I can tell at this juncture, we’ve given AI a passing glance and are waiting for AI to come to us, rather than aligning and integrating with AI at the outset.
AI Could Be the Best and Worst Thing for Pharmacy. We Must Learn Lessons From the Past.
There is so much work to do, from regulatory discussions with our state boards of pharmacy to scoping the future of practice alongside technology solution providers to teaching the next generation of pharmacists as well as those already in practice about how to use AI to deliver safer, more effective, and more innovative care.
And above all, practice follows the business model. If provider status was important pre-AI, it has become critical post-AI. If we are a profession of clerical work, we will be replaced. If we are a profession of providers, we will harness the immense capabilities of our future AI assistants. No more “This will save you time so you can care for patients” baloney when there is no economic support model for care delivery sizable enough to employ a quarter of a million pharmacists. We should all be demanding to see evidence of the billable time from our employers, policy makers, and regulators. That is the only sustainable path when the peripheral brain is in the cloud and is the known universe’s best version of it.
REFERENCES
1. What health care provisions of the One Big Beautiful Bill Act mean for states. National Academy for State Health Policy. July 8, 2025. Accessed July 21, 2025. https://nashp.org/what-health-care-provisions-of-the-one-big-beautiful-bill-act-mean-for-states/
2. Graham J. The big, beautiful health care squeeze is here: what that means for your coverage. Investor’s Business Daily. July 18, 2025. Accessed July 21, 2025. https://www.investors.com/news/big-beautiful-bill-trump-budget-health-care-coverage/
3. Constantino AK. Bristol Myers Squibb, Pfizer to sell blockbuster blood thinner Eliquis at 40% discount. CNBC. July 17, 2025. Accessed July 21, 2025. https://www.cnbc.com/2025/07/17/bristol-myers-squibb-pfizer-to-sell-eliquis-at-40percent-discount.html
AI Research
NVIDIA AI Releases Universal Deep Research (UDR): A Prototype Framework for Scalable and Auditable Deep Research Agents

Why do existing deep research tools fall short?
Deep Research Tools (DRTs) like Gemini Deep Research, Perplexity, OpenAI’s Deep Research, and Grok DeepSearch rely on rigid workflows bound to a fixed LLM. While effective, they impose strict limitations: users cannot define custom strategies, swap models, or enforce domain-specific protocols.
NVIDIA’s analysis identifies three core problems:
- Users cannot enforce preferred sources, validation rules, or cost control.
- Specialized research strategies for domains such as finance, law, or healthcare are unsupported.
- DRTs are tied to single models, preventing flexible pairing of the best LLM with the best strategy.
These issues restrict adoption in high-value enterprise and scientific applications.
What is Universal Deep Research (UDR)?
Universal Deep Research (UDR) is an open-source system (in preview) that decouples strategy from model. It allows users to design, edit, and run their own deep research workflows without retraining or fine-tuning any LLM.
Unlike existing tools, UDR works at the system orchestration level:
- It converts user-defined research strategies into executable code.
- It runs workflows in a sandboxed environment for safety.
- It treats the LLM as a utility for localized reasoning (summarization, ranking, extraction) instead of giving it full control.
This architecture makes UDR lightweight, flexible, and model-agnostic.
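To make the “LLM as a utility” idea concrete, here is a minimal sketch of what a model-agnostic reasoning interface could look like. The names (`ReasoningBackend`, `summarize`, `rank`) are illustrative assumptions, not UDR’s actual API; they only show the orchestrator calling a model for small, bounded tasks.

```python
# Minimal sketch of a model-agnostic "LLM as a utility" interface.
# ReasoningBackend, summarize, and rank are illustrative assumptions,
# not part of the actual UDR codebase.
from typing import Protocol


class ReasoningBackend(Protocol):
    """Any LLM that can answer a localized reasoning request."""

    def complete(self, prompt: str) -> str: ...


def summarize(backend: ReasoningBackend, text: str, max_sentences: int = 3) -> str:
    """Bounded utility call: the orchestrator decides when to call the model,
    and the model only performs the small job it is handed."""
    return backend.complete(
        f"Summarize the following in at most {max_sentences} sentences:\n{text}"
    )


def rank(backend: ReasoningBackend, items: list[str], criterion: str) -> list[str]:
    """Another bounded utility call: ask the model to order items, then parse."""
    response = backend.complete(
        f"Rank these items by '{criterion}', one per line, best first:\n"
        + "\n".join(items)
    )
    return [line.strip() for line in response.splitlines() if line.strip()]
```

Because the workflow only depends on a small completion interface, the same strategy can, in principle, be paired with any hosted or local model.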

How does UDR process and execute research strategies?
UDR takes two inputs: the research strategy (step-by-step workflow) and the research prompt (topic and output requirements).
- Strategy Processing
  - Natural-language strategies are compiled into Python code with enforced structure.
  - Variables store intermediate results, avoiding context-window overflow.
  - All functions are deterministic and transparent.
- Strategy Execution
  - Control logic runs on the CPU; only reasoning tasks call the LLM.
  - Notifications are emitted via `yield` statements, keeping users updated in real time.
  - Reports are assembled from stored variable states, ensuring traceability.
This separation of orchestration vs. reasoning improves efficiency and reduces GPU cost.
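As a rough illustration of what a compiled strategy could look like, the sketch below mirrors the description above: control flow is plain Python, intermediate results live in ordinary variables, notifications are yielded as structured events, and the LLM is only called for bounded reasoning steps. The helper names (`call_llm`, `web_search`, `notify`) are assumptions for illustration, not the code UDR actually generates.

```python
# Hypothetical sketch of a compiled "Minimal"-style UDR strategy.
# call_llm, web_search, and notify are assumed helpers, not the real UDR API.
from datetime import datetime, timezone


def notify(kind: str, description: str) -> dict:
    """Structured notification with type, timestamp, and description."""
    return {
        "type": kind,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "description": description,
    }


def minimal_strategy(prompt: str, web_search, call_llm):
    """A few queries, gathered results, and a compiled report.

    Control flow is plain Python running on the CPU; the LLM is only called
    for bounded reasoning (query generation, summarization, report writing).
    """
    yield notify("progress", "Generating search queries")
    queries = call_llm(f"Write 3 web search queries for: {prompt}").splitlines()

    # Intermediate results live in ordinary variables, not in the LLM context.
    snippets: list[str] = []
    for query in queries:
        yield notify("progress", f"Searching: {query}")
        snippets.extend(web_search(query))

    yield notify("progress", "Summarizing results")
    summaries = [call_llm(f"Summarize for the report:\n{s}") for s in snippets]

    yield notify("progress", "Assembling final report")
    report = call_llm(
        "Write a Markdown report with sections and references on "
        f"'{prompt}' using these notes:\n" + "\n".join(summaries)
    )
    yield notify("final_report", report)
```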
What example strategies are available?
NVIDIA ships UDR with three template strategies:
- Minimal – Generate a few search queries, gather results, and compile a concise report.
- Expansive – Explore multiple topics in parallel for broader coverage.
- Intensive – Iteratively refine queries using evolving subcontexts, ideal for deep dives.
These serve as starting points, but the framework allows users to encode entirely custom workflows.
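For example, a user-defined strategy is just structured natural language handed to the system alongside the research prompt. The `run_udr` entry point below is a hypothetical name used only to show how the two inputs fit together; it is not the project’s actual interface.

```python
# Hypothetical illustration of UDR's two inputs: a natural-language strategy
# and a research prompt. run_udr is an assumed entry point, not the real API.
custom_strategy = """
1. Generate five search queries restricted to peer-reviewed sources.
2. For each query, fetch results and discard anything without a DOI.
3. Summarize each remaining source in three sentences.
4. Compile a Markdown report with a 'Limitations' section and a reference list.
"""

research_prompt = (
    "Survey recent work on retrieval-augmented generation for clinical "
    "decision support, and include a comparison table of evaluation benchmarks."
)

# for event in run_udr(strategy=custom_strategy, prompt=research_prompt):
#     handle(event)  # streamed notifications and, finally, the report
```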

What outputs does UDR generate?
UDR produces two key outputs:
- Structured Notifications – Progress updates (with type, timestamp, and description) for transparency.
- Final Report – A Markdown-formatted research document, complete with sections, tables, and references.
This design gives users both auditability and reproducibility, unlike opaque agentic systems.
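A consumer of these outputs might look like the following sketch, which assumes each emitted event is a dictionary carrying the type, timestamp, and description fields described above; the event shape and the `events` iterable are assumptions for illustration.

```python
# Sketch of consuming UDR's two outputs: streamed notifications and a final
# Markdown report. The event shape (type, timestamp, description) is an
# assumption based on the fields described above.
from pathlib import Path


def consume(events) -> None:
    """Print progress notifications as they arrive and save the final report."""
    for event in events:
        if event["type"] == "final_report":
            Path("report.md").write_text(event["description"], encoding="utf-8")
            print("Report written to report.md")
        else:
            print(f"[{event['timestamp']}] {event['type']}: {event['description']}")
```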
Where can UDR be applied?
UDR’s general-purpose design makes it adaptable across domains:
- Scientific discovery: structured literature reviews.
- Enterprise due diligence: validation against filings and datasets.
- Business intelligence: market analysis pipelines.
- Startups: custom assistants built without retraining LLMs.
By separating model choice from research logic, UDR supports innovation in both dimensions.
Summary
Universal Deep Research signals a shift from model-centric to system-centric AI agents. By giving users direct control over workflows, NVIDIA enables customizable, efficient, and auditable research systems.
For startups and enterprises, UDR provides a foundation for building domain-specific assistants without the cost of model retraining—opening new opportunities for innovation across industries.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
AI Research
GAO Review Finds 94 Federal AI Adoption Requirements