

AAMI Aligns With National Academy of Medicine AI Code of Conduct

Arlington, VA, July 03, 2025 (GLOBE NEWSWIRE) — The Association for the Advancement of Medical Instrumentation (AAMI) today announces its alignment with the National Academy of Medicine’s (NAM) recently published AI Code of Conduct (AICC). AAMI intends to leverage the Code to inform its work addressing the use of artificial intelligence and machine learning in medical devices.

Entitled An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action, the AICC is a non-binding resource that organizations in the medical field can use as a baseline for evaluating AI adoption and deployment. The Code identifies six core commitments for the ethical application and implementation of AI:

  1. Advance Humanity
  2. Ensure Equity
  3. Engage Impacted Individuals
  4. Improve Workforce Well-Being
  5. Monitor Performance
  6. Innovate and Learn

“AI has the potential to further transform health care, but where there is opportunity there is also risk,” noted Robert Burroughs, AAMI’s Chief Learning and Development Officer. “As the leading developer of voluntary, consensus standards for the medical device industry, our mission is to help ensure the safe and effective use of health technology. The AICC’s six core principles provide a framework we can use to align our diverse efforts across standards development, education, and certification. We intend to use the AICC as a key reference document and will share it widely with our members and stakeholders.”

AAMI Vice President of Standards, Matt Williams, also anticipates clear benefits for AAMI. “We fully expect the AICC will provide a welcome resource for AAMI Committee members as they produce regulatory-ready guidance documents for the entire medical device sector.”

Laura Adams, Senior Advisor at the National Academy of Medicine, describes the AICC as a “definitive answer to a disjointed landscape” of AI guidelines and frameworks. Describing its six commitments, she noted that readers “find them to be practical. We clarified them, we elevated what was out in the field, and we are finding that people are using [the AICC] as a touchstone.”

In an effort to facilitate the adoption of these essential guidelines, Adams has authored a guide titled “Why and How to Use the NAM AICC.” To further accelerate adoption, AAMI is also producing an executive summary of the 189-page AICC later this year.

The rigorous NAM development process involves an extensive field review that brings together the world’s leading experts. Among the experts who contributed to the AICC were Bakul Patel, Google’s Senior Director of Global Digital Health Strategy & Regulatory and an AAMI board member; Pat Baird, Regulatory Head of Global Software Standards at Philips and co-chair of AAMI’s AI Committee; and Timothy Hsu, AAMI’s Vice President of Industry and Emerging Technologies.

Do you have questions for the AAMI staffers or members who contributed to the National Academy of Medicine’s AI Code of Conduct? Get in touch with AAMI’s communications team at dvisnovsky@aami.org.




Tech Companies Pay $200,000 Premiums for AI Experience: Report

  • A consulting firm found that tech companies are “strategically overpaying” recruits with AI experience.
  • The firm found that companies pay premiums of up to $200,000 for data scientists with machine learning skills.
  • The report also tracked a rise in bonuses for lower-level software engineers and analysts.

The AI talent bidding war is heating up, and the data scientists and software engineers behind the tech are benefiting from being caught in the middle.

Many tech companies are “strategically overpaying” recruits with AI experience, shelling out premiums of up to $200,000 for some roles with machine learning skills, J. Thelander Consulting, a compensation data and consulting firm for the private capital market, found in a recent report.

The report, compiled from a compensation analysis of roles across 153 companies, showed that data scientists and analysts with machine learning skills tend to receive a higher premium than software engineers with the same skills. However, the consulting firm also tracked a rise in bonuses for lower-level software engineers and analysts.

The payouts are a big bet, especially among startups. About half of the surveyed companies paying premiums for employees with AI skills had no revenue in the past year, and a majority (71%) had no profit.

Smaller firms need to stand out and be competitive among Big Tech giants — a likely driver behind the pricey recruitment tactic, a spokesperson for the consulting firm told Business Insider.

But while the J. Thelander Consulting report focused on smaller firms, some Big Tech companies have also recently made headlines for their sky-high recruitment incentives.

Meta was in the spotlight last month after Sam Altman, CEO of OpenAI, said the social media giant had tried to poach his best employees with $100 million signing bonuses.

While Business Insider previously reported that Altman later quipped that none of his “best people” had been enticed by the deal, Meta’s chief technology officer, Andrew Bosworth, said in an interview with CNBC that Altman “neglected to mention that he’s countering those offers.”






From software engineers to CEO: OpenAI VP Srinivas Narayanan says AI redefining engineering field

In a recent comment on the role of AI in the job market, OpenAI’s VP of Engineering, Srinivas Narayanan, said that AI can turn software engineers into CEOs. The role of software engineers is undergoing a fundamental transformation, with artificial intelligence pushing them to adopt a strategic, “CEO-like” mindset, Narayanan said at the IIT Madras Alumni Association’s Sangam 2025 conference.

Narayanan emphasised that AI will increasingly handle the “how” of execution, freeing engineers to focus on the “what” and “why” of problem-solving. “The job is shifting from just writing code to asking the right questions and defining the ‘what’ and ‘why’ of a problem,” Narayanan stated on Saturday. “For every software engineer, the job is going to shift from being an engineer to being a CEO. You now have the tools to do so much more, so I think that means you should aspire bigger,” he said.

“Of course, software is interesting and exciting, but just the ability to think bigger is going to be incredibly empowering for people, and the people who succeed (in the future) are the ones who are going to be able to think bigger,” he added.

Joining Narayanan on stage, Microsoft’s Chief Product Officer Aparna Chennapragada echoed this sentiment, cautioning against simply retrofitting AI onto existing tools. “AI isn’t a feature you can just add on. We need to start building with an AI-first mindset,” she asserted, highlighting how natural language interfaces are replacing traditional user experience layers. Chennapragada also coined the phrase, “Prompt sets are the new PRDs,” referring to how product teams are now collaborating closely with AI models for faster and smarter prototyping.

Narayanan shared a few examples of AI’s expanding capabilities, including a reasoning model developed by OpenAI that successfully identified rare genetic disorders in a Berkeley-linked research lab. He said AI has enormous potential as a collaborator, even in complex research fields.

Not all is good with AI

While acknowledging AI’s transformative power, Narayanan also addressed its inherent risks, such as misinformation and unsafe outputs. He mentioned OpenAI’s iterative deployment philosophy, citing a recent instance where a model exhibiting “sycophancy” traits was rolled back during testing. Both speakers underscored the importance of accessibility and scale, with Narayanan noting a significant 100-fold drop in model costs over the past two years, aligning with OpenAI’s mission to “democratise intelligence.”


