AI Research
Garmin Forerunner 970 review: the new benchmark for running watches
Garmin’s new top running watch, the Forerunner 970, has very big shoes to fill as it attempts to replace one of the best training and race companions available. Can a built-in torch, a software revamp and voice control really make a difference?
The new top-of-the-line Forerunner takes the body of the outgoing Forerunner 965 and squeezes in a much brighter display, useful new running analytics and more of the advanced tech from Garmin’s flagship adventure watch the Fenix 8.
These upgrades come at a steep cost of £630 (€750/$750/A$1,399) – £30 more than its predecessor – placing it right at the top of the running and triathlon watch pile, although less than the £780 Fenix 8.
The 970 is about the same size as the outgoing 965 with a 47mm case and a beautiful, crisp and very bright 1.4in OLED screen. The touchscreen is covered in super-hard sapphire glass similar to luxury watches, while the titanium bezel finishes off the polycarbonate body in a choice of three colours.
Quite a lot of the upgrades are trickle-downs from the Fenix 8 and make the 970 a better everyday smartwatch. It has Garmin’s new offline voice control system, which allows you to quickly set timers and alarms, access settings or start activities. The watch also connects to your phone’s voice assistant and takes calls on your wrist via Bluetooth.
A revamped interface speeds up access to notifications from your smartphone by swiping down from the top of the screen. With an iPhone you can view and dismiss text-only notifications but connected to an Android phone you can also see images in notifications and directly reply to them from the watch. The 970 has Garmin Pay for contactless payments, although bank support is limited, and can control music on your phone or download playlists from Spotify, Amazon Music, YouTube Music and others for phone-free music on runs.
The best new feature is the LED torch built into the 970’s top edge. It was invaluable on the Fenix 8 and is my favourite new addition to the Forerunner. It is bright enough to light your way on the street at night or find things buried in dark cupboards but can be turned a dim red to avoid waking everyone at home. It can also be used as a strobe light for running to help keep you visible at night.
The battery lasts about six days with general smartwatch usage, including having the screen on all the time, all-day and night monitoring of health, plenty of notifications and copious use of the torch. The screen has automatic brightness but turning it down one notch in settings, which was still plenty bright enough to see outdoors, added a couple of days to the battery life. Turning the always-on display setting off extended it further to about 12 to 15 days.
Specifications
- Screen: 1.4in AMOLED (454×454)
- Case size: 47mm
- Case thickness: 13.2mm
- Band size: standard 22mm
- Weight: 56g
- Storage: 32GB
- Water resistance: 50 metres (5ATM)
- Sensors: GNSS (Multiband GPS, Glonass, Galileo), compass, thermometer, heart rate, pulse Ox
- Connectivity: Bluetooth, ANT+, wifi
Running and activity tracking
Its predecessor was a fantastic running watch filled to the brim with metrics, helpful analysis and buckets of customisation options, and the 970 only builds on that. The screen is large enough to clearly show up to eight data fields at once. Maps look particularly good and are easy to use with touch.
It has the latest dual-band GPS, while Garmin’s algorithms consistently have higher tracking accuracy than its rivals, even with similar systems. The new Gen 5 Elevate heart rate sensor on the back improves pulse monitoring in tricky conditions, and provides ECG (arrhythmia) readings.
The 970 has Garmin’s suite of industry-leading fitness, recovery and training metrics, which are joined by a few new and interesting statistics, including two that attempt to help you prevent injury.
Impact load quantifies how hard a run is on your body based on its intensity and difficulty compared with an easy, flat run at slower speeds. One fast, hard 7km run was rated as equivalent to a gentler 12km run, which felt about right in my feet and legs and made me consider taking a longer recovery time before the next workout.
In addition, the new running tolerance feature tracks your mileage over a seven-day period and advises how much more you can run without increasing your chance of injury. Many runners, myself included, have injured themselves by ramping up their weekly distance too fast when training for a race; this new stat attempts to prevent that by giving you suggested guard rails.
The 970 also has a new running economy feature that tracks the efficiency of your form, including how much speed you lose as your foot hits the ground, but it relies on Garmin’s latest heart rate monitor strap, the HRM 600 – a £150 separate purchase.
Running battery life is a solid 11-plus hours with the highest accuracy settings while listening to offline music via Bluetooth headphones, or about 16 hours without music. Turning down the screen brightness a bit added several hours to the running battery life, while reducing the GPS accuracy mode extends it to up to 26 hours.
Solid general health monitoring
The Garmin isn’t entirely about running, triathlon and its 30-plus sport tracking features. It also has a comprehensive suite of general health monitoring tools, including good sleep, activity, stress, women’s health and heart health tracking rivalling an Apple Watch or similar.
Many of Garmin’s most advanced training tools also monitor your recovery from exercise during the rest of the day and night, advising you in the morning and during the day how your body is doing. It has a built-in sleep coach, a running or triathlon coach and various advisers for activity, suggesting when to do a hard workout and when to take it easy. The daily suggested workouts are dynamic and based on your sleep and recovery, so it will never prompt you to do a hardcore workout when you’ve had a terrible night. These automatic workouts can be replaced by a coaching plan, either using Garmin’s solid tools or third-party ones placed on a calendar before a race.
Sustainability
The watch is generally repairable with options available via support. The battery is rated to maintain at least 90% of its original capacity after two years of weekly charging. The watch does not contain any recycled materials. Garmin guarantees security updates until at least 21 May 2027 but typically supports its devices far longer. It offers recycling schemes on new purchases.
Price
The Garmin Forerunner 970 costs £629.99 (€749.99/$749.99/A$1,399).
For comparison, the Garmin Fenix 8 costs from £780, the Forerunner 570 costs £460, the Garmin Forerunner 965 costs £499.99, the Apple Watch Ultra 2 costs £799, the Coros Pace Pro costs £349, the Suunto Race S costs £299 and the Polar Vantage V3 costs £519.
Verdict
Garmin continues to set the bar for running watches with the Forerunner 970. It isn’t a dramatic leap over the outgoing Forerunner 965, instead adding a few bits to the already excellent formula.
The screen is brighter, covered in scratch-resistant sapphire and ringed by a titanium bezel, which gives it a premium look and feel alongside a more modern and responsive interface. The added bells and whistles of voice control and faster access to notifications make using it as a smartwatch alternative much easier, though wearing it is still a statement about your sporty priorities compared with an Apple or Pixel Watch.
The upgraded heart rate sensor helps keep things locked during more difficult exercises and adds ECG readings for more comprehensive heart health tracking. But it is the built-in torch that is the best addition for daily life. Every watch should have one.
Meanwhile, the new impact load and running tolerance features could be very useful for avoiding strain and injury, adding to the already excellent training and recovery tracking. Plus it has market-leading running accuracy and detailed onboard maps for routes or if you get lost.
If you want a premium running and triathlon watch with all the bells and whistles, the Forerunner 970 is the best you can get. It just comes at a very high cost.
Pros: super bright OLED screen, built-in torch, phone and offline voice control, Garmin Pay, extensive tracking and recovery analysis for running and many other sports, full offline mapping, offline Spotify, buttons and touch, most accurate GPS, ECG.
Cons: expensive, limited Garmin Pay bank support, still limited smartwatch features compared with Apple/Google/Samsung watches, battery life shorter than LCD rivals.
Cyber Command creates new AI program in fiscal 2026 budget
U.S. Cyber Command’s budget request for fiscal 2026 includes funding to begin a new project specifically for artificial intelligence.
While the budget proposal would allot just $5 million for the effort — a small portion of Cybercom’s $1.3 billion research and development spending plan — the stand-up of the program follows congressional direction to prod the command to develop an AI roadmap.
In the fiscal 2023 defense policy bill, Congress charged Cybercom and the Department of Defense chief information officer — in coordination with the chief digital and artificial intelligence officer, director of the Defense Advanced Research Projects Agency, director of the National Security Agency and the undersecretary of defense for research and engineering — to jointly develop a five-year guide and implementation plan for rapidly adopting and acquiring AI systems, applications, supporting data and data management processes for cyber operations forces.
Cybercom created its roadmap shortly thereafter along with an AI task force.
The new project within Cybercom’s R&D budget aims to develop core data standards, then curate and tag collected data that meets those standards so it can be integrated effectively into AI and machine learning solutions, allowing the command to develop artificial intelligence capabilities more efficiently to meet operational needs.
The effort is directly related to the task of furthering the roadmap.
As a result of that roadmap, the command decided to house its task force within its elite Cyber National Mission Force.
The command created the program by pulling funds from its operations and maintenance budget and moving them to the R&D budget from fiscal 2025 to fiscal 2026.
The command outlined five categories of various AI applications across its enterprise and other organizations, including vulnerabilities and exploits; network security, monitoring, and visualization; modeling and predictive analytics; persona and identity; and infrastructure and transport.
Specifically, the command’s AI project, Artificial Intelligence for Cyberspace Operations, will aim to develop and conduct pilots while investing in infrastructure to leverage commercial AI capabilities. The command’s Cyber Immersion Laboratory will develop, test and evaluate cyber capabilities and host operational assessments performed by third parties, the budget documents state.
In fiscal 2026, the command plans to spend the $5 million to support the CNMF in piloting AI technologies through an agile 90-day pilot cycle, according to the documents, designed to quickly establish each solution’s success or failure. That fast-paced methodology allows the CNMF to rapidly test and validate solutions against operational use cases, with the flexibility to adapt to evolving cyber threats.
The CNMF will also look to explore ways to improve threat detection, automate data analysis, and enhance decision-making processes in cyber operations, according to budget documents.
Researchers Use Hidden AI Prompts to Influence Peer Reviews: A Bold New Era or Ethical Quandary?
AI Secrets in Peer Reviews Uncovered
Edited by Mackenzie Ferguson, AI Tools Researcher & Implementation Consultant
In a controversial yet intriguing move, researchers have begun using hidden AI prompts to potentially sway the outcomes of peer reviews. This cutting-edge approach aims to enhance review processes, but it raises ethical concerns. Join us as we delve into the implications of AI-assisted peer review tactics and how they might shape the future of academic research.
Introduction to AI in Peer Review
Artificial Intelligence (AI) is rapidly transforming various facets of academia, and one of the most intriguing applications is its integration into the peer review process. At the heart of this evolution is the potential for AI to streamline the evaluation of scholarly articles, which traditionally relies heavily on human expertise and can be subject to biases. Researchers are actively exploring ways to harness AI not just to automate mundane tasks but to provide deep, insightful evaluations that complement human judgment.
The adoption of AI in peer review promises to revolutionize the speed and efficiency with which academic papers are vetted and published. This technological shift is driven by the need to handle an ever-increasing volume of submissions while maintaining high standards of quality. Notably, hidden AI prompts, as discussed in recent studies, can subtly influence reviewers’ decisions, potentially standardizing and enhancing the objectivity of reviews.
Incorporating AI into peer review isn’t without challenges. Ethical concerns about transparency, bias, and accountability arise when machines play an integral role in shaping academic discourse. Nonetheless, the potential benefits appear to outweigh the risks, with AI offering tools that can uncover hidden biases and provide more balanced reviews. As described in TechCrunch’s exploration of this topic, there’s an ongoing dialogue about the best practices for integrating AI into these critical processes.
Influence of AI in Academic Publishing
The advent of artificial intelligence (AI) is reshaping various sectors, with academic publishing being no exception. The integration of AI tools in academic publishing has significantly streamlined the peer review process, making it more efficient and less biased. According to an article from TechCrunch, researchers are actively exploring ways to integrate AI prompts within the peer review process to subtly guide reviewers’ evaluations without overt influence. These AI systems analyze vast amounts of data to provide insightful suggestions, thus enhancing the quality of published research.
Moreover, AI applications in academic publishing extend beyond peer review management. AI algorithms can analyze and summarize large datasets, providing researchers with new insights and enabling faster discoveries. As TechCrunch suggests, these technologies are becoming integral to helping researchers manage the ever-increasing volume of scientific literature. The future of academic publishing might see AI serving as co-authors, providing accurate data analysis and generating hypotheses based on trends across studies.
Public reactions to the influence of AI in academic publishing are mixed. Some view it as a revolutionary tool that democratizes knowledge production by reducing human errors and biases. Others, however, raise concerns over ethical implications, fearing that AI could introduce new biases or be manipulated to favor particular agendas. As TechCrunch highlights, the key challenge will be to implement transparent AI systems that can be held accountable and ensure ethical standards in academic publishing.
Looking ahead, the influence of AI in academic publishing is poised to grow, potentially transforming various aspects of research dissemination. AI-powered platforms could revolutionize the accessibility and dissemination of knowledge by automating the proofreading and formatting processes, making academic work more readily available and understandable globally. However, as TechCrunch notes, the future implications of such developments require careful consideration to balance innovation with ethical integrity, especially in how AI technologies are governed.
Challenges and Concerns in AI Implementation
Implementing AI technologies across various sectors presents numerous challenges and concerns, particularly regarding transparency, ethics, and reliability. As researchers strive to integrate AI into processes like peer review, hidden AI prompts can sometimes influence decisions subtly. According to TechCrunch’s article about researchers influencing peer review processes with hidden AI prompts, such practices raise questions about the integrity of AI systems. Ensuring AI operates within ethical boundaries becomes crucial, as we must balance innovation with maintaining trust in automated systems.
Furthermore, the opacity of AI algorithms often leads to public and expert concerns about accountability. When AI systems make decisions without clear explanations, it can diminish users’ trust. In exploring the future implications of AI in peer review settings, it becomes apparent that refinements are needed to enhance transparency and ethical considerations. As noted in the TechCrunch article, there is an ongoing debate about the extent to which AI should be allowed to influence decisions that have traditionally been human-centric. This calls for a framework that sets clear standards and guidelines for AI implementation, ensuring its role supplements rather than overrides human judgment.
In addition to transparency and ethics, reliability is another significant concern when implementing AI. The technological robustness of AI systems is continuously tested by real-world applications. Errors or biases in AI can lead to unintended consequences that may affect public perception and acceptance of AI-driven tools. As industries increasingly rely on AI, aligning these systems with societal values and ensuring they are error-free is paramount to gaining widespread acceptance. The TechCrunch article also highlights these reliability issues, suggesting that developers need to focus more on creating accurate, unbiased algorithms.
Experts Weigh in on AI-driven Peer Review
In recent years, the academic community has seen a growing interest in integrating artificial intelligence into the peer review process. Experts believe that AI can significantly enhance this critical phase of academic publishing by bringing in efficiency, consistency, and unbiased evaluation. According to a report on TechCrunch, researchers are exploring ways to subtly incorporate AI prompts into the peer review mechanism to improve the quality of feedback provided to authors (TechCrunch).
The inclusion of AI in peer review is not without its challenges, though. Experts caution that the deployment of AI-driven tools must be done with significant oversight to prevent any undue influence or bias that may occur from automated processes. They emphasize the importance of transparency in how AI algorithms are used and the nature of data fed into these systems to maintain the integrity of peer review (TechCrunch).
While some scholars welcome AI as a potential ally that can alleviate the workload of human reviewers and provide them with analytical insights, others remain skeptical about its impact on the traditional rigor and human judgment in peer evaluations. The debate continues, with public reactions reflecting a mixture of excitement and cautious optimism about the future potential of AI in scholarly communication (TechCrunch).
Public Reactions to AI Interventions
The public’s reaction to AI interventions, especially in fields such as scientific research and peer review, has been a mix of curiosity and skepticism. On one hand, many appreciate the potential of AI to accelerate advancements and improve efficiencies within the scientific community. However, concerns remain over the transparency and ethics of deploying hidden AI prompts to influence processes that traditionally rely on human expertise and judgment. For instance, a recent article on TechCrunch highlighted researchers’ attempts to integrate these AI-driven techniques in peer review, sparking discussions about the potential biases and ethical implications of such interventions.
Further complicating the public’s perception is the potential for AI to disrupt traditional roles and job functions within these industries. Many individuals within the academic and research sectors fear that an over-reliance on AI could undermine professional expertise and lead to job displacement. Despite these concerns, proponents argue that AI, when used effectively, can provide invaluable support to researchers by handling mundane tasks, thereby allowing humans to focus on more complex problem-solving activities, as noted in the TechCrunch article.
Moreover, the ethical ramifications of using AI in peer review processes have prompted a call for stringent regulations and clearer guidelines. The potential for AI to subtly shape research outcomes without the overt consent or awareness of the human peers involved raises significant ethical questions. Discussions in media outlets like TechCrunch indicate a need for balanced discussions that weigh the benefits of AI-enhancements against the necessity to maintain integrity and trust in academic research.
Future of Peer Review with AI
The future of peer review is poised for transformation as AI technologies continue to advance. Researchers are now exploring how AI can be integrated into the peer review process to enhance efficiency and accuracy. Some suggest that AI could assist in identifying potential conflicts of interest, evaluating the robustness of methodologies, or even suggesting suitable reviewers based on their expertise. For instance, a detailed exploration of this endeavor can be found at TechCrunch, where researchers are making significant strides toward innovative uses of AI in peer review.
The integration of AI in peer review does not come without its challenges and ethical considerations. Concerns have been raised regarding potential biases that AI systems might introduce, the transparency of AI decision-making, and how reliance on AI might impact the peer review landscape. As discussed in recent events, stakeholders are debating the need for guidelines and frameworks to manage these issues effectively.
One potential impact of AI on peer review is the democratization of the process, opening doors for a more diverse range of reviewers who may have been overlooked previously due to geographical or institutional biases. This could result in more diverse viewpoints and a richer peer review process. Additionally, as AI becomes more intertwined with peer review, expert opinions highlight the necessity for continuous monitoring and adjustment of AI tools to ensure they meet the ethical standards of academic publishing. This evolution in the peer review process invites us to envision a future where AI and human expertise work collaboratively, enhancing the quality and credibility of academic publications.
Public reactions to the integration of AI in peer review are mixed. Some welcome it as a necessary evolution that could address long-standing inefficiencies in the system, while others worry about the potential loss of human oversight and judgment. Future implications suggest a field where AI-driven processes could eventually lead to a more streamlined and transparent peer review system, provided that ethical guidelines are strictly adhered to and biases are meticulously managed.
Xbox producer tells staff to use AI to ease job loss pain

An Xbox producer has faced a backlash after suggesting, in a now-deleted LinkedIn post, that laid-off employees should use artificial intelligence to deal with their emotions.
Matt Turnbull, an executive producer at Xbox Game Studios Publishing, wrote the post after Microsoft confirmed it would lay off up to 9,000 workers in its latest wave of job cuts this year.
The post, which was captured in a screenshot by tech news site Aftermath, shows Mr Turnbull suggesting tools like ChatGPT or Copilot to “help reduce the emotional and cognitive load that comes with job loss.”
One X user called it “plain disgusting” while another said it left them “speechless”. The BBC has contacted Microsoft, which owns Xbox, for comment.
Microsoft previously said several of its divisions would be affected, without specifying which ones, but reports suggest that its Xbox video gaming unit will be hit.
Microsoft has set out plans to invest heavily in artificial intelligence (AI) and is spending $80bn (£68.6bn) on huge data centres to train AI models.
Mr Turnbull acknowledged the difficulty of job cuts in his post and said “if you’re navigating a layoff or even quietly preparing for one, you’re not alone and you don’t have to go it alone”.
He wrote that he was aware AI tools can cause “strong feelings in people” but wanted to try and offer the “best advice” under the circumstances.
The Xbox producer said he’d been “experimenting with ways to use LLM AI tools” and suggested some prompts to enter into AI software.
These included career planning prompts, resume and LinkedIn help, and questions to ask for advice on emotional clarity and confidence.
“If this helps, feel free to share with others in your network,” he wrote.
The cuts would equate to 4% of Microsoft’s 228,000-strong global workforce.
Some video game projects have reportedly been affected by the cuts.