

Virtual panel: How software engineers and team leaders can excel with artificial intelligence


Key Takeaways

  • AI impacts the way that software is being developed. We can use AI to automate repetitive coding tasks and boost productivity, while maintaining human oversight and implementing guardrails to ensure code quality.
  • To cope with the challenges and exploit the opportunities of artificial intelligence, we need to equip developers with foundational AI/ML knowledge, prompt engineering skills, and critical thinking skills to evaluate and manage AI-generated outputs.
  • Engineering leaders can support software teams by encouraging collaboration between developers and AI tools, fostering a clean code culture, and establishing governance frameworks for responsible AI use.
  • Companies can promote resilience through psychological safety, open communication, transparency about AI strategies, and ongoing opportunities for upskilling.
  • To keep software development sustainable and ensure the mental well-being of software developers and team leaders, companies should address AI-related anxieties by positioning AI as a supportive tool, reinforcing job security, and giving developers time and space to adapt.

Introduction

Artificial intelligence is now generally available and is being used by many software developers in their daily work. It affects not only individual work, but also the way that professionals work together in teams and how software teams are managed.

In this panel, we’ll discuss how artificial intelligence is reshaping the way that software is being developed, and what mindset and skills are required for software developers and engineering leaders to become adaptable and resilient in the age of AI.

The panelists:

  • Courtney Nash – Internet Incident Librarian & Research Analyst at The VOID
  • Mandy Gu – Senior Software Development Manager at Wealthsimple
  • Hien Luu – Senior Engineering Manager at Zoox and author of MLOps with Ray

InfoQ: How has the rise of artificial intelligence impacted the way that software is being developed?

Courtney Nash: From what we hear in the media and product pitches, AI is making development seemingly quicker and more productive (though the jury is still out on this objectively), but in doing so it is adding unforeseen complexity and the likelihood of unexpected surprises later on. This added complexity is in part due to our inability to peel off the top of the AI black box and see how or why it’s doing what it’s doing. We can’t inspect how an AI arrived at the code or solutions that it did, and AI tools can’t model the broader complexity of the systems they interact with, often without any awareness of that context.


This knowledge is most critical when things don’t go as planned. When AI-generated software fails, how will we know where to look or what to investigate as we try to stop the bleeding, get things back up and running, learn from what happened, and feed that learning back into the system?


When it comes to AI and automation in software systems, my research focuses mainly on our own mental models of these tools. Those mental models tend to view AI as a way to replace human work, rather than to support and augment it, and they create unrealistic dichotomies (“Machines are better at these tasks/Humans are better at those tasks”) that don’t reflect the realities of software development for today’s modern complex systems. Research from other domains has shown that automation (and now, AI) is built on a “substitution myth”, which stems from the belief that people and computers have fixed strengths and weaknesses, and that therefore all we need to do is give separate tasks to each agent (computer/person) according to their strengths.


As long as software developers and AI designers continue to fall prey to the substitution myth, we’ll continue to develop systems and tools that, instead of making humans’ lives easier or better as promised, will require unexpected new skills and interventions from humans that weren’t factored into the system or tool design (Wrong, Strong, and Silent: What Happens when Automated Systems With High Autonomy and High Authority Misbehave?, Dekker & Woods, 2024).

Mandy Gu: A lot more code is being written by AI (or with AI assistance). Anthropic’s CEO predicted that AI-generated code will account for ninety percent of the code being written within the next six months.


On one hand, this change could be a huge productivity boost for developers, potentially accelerating timelines for software delivery and reducing development cost. With the time they get back, developers can focus on high-level design, architecture, and more complex problem-solving. On the other hand, companies and organizations will need to make sure they have the right checks and guardrails so that code quality standards are still being met. There will also be a shift towards better documentation and stronger contextual awareness to take advantage of these tools.
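
To make the idea of checks and guardrails slightly more concrete, here is a minimal sketch of a pre-merge quality gate for AI-assisted changes. It assumes a Python project that uses pytest with the pytest-cov plugin; the coverage threshold and paths are illustrative assumptions, not something the panel prescribes.

```python
"""Minimal sketch of a pre-merge guardrail for AI-assisted changes.

Assumptions (illustrative, not from the panel): a Python project that
uses pytest with the pytest-cov plugin, and a team policy that rejects
changes when tests fail or coverage drops below a threshold.
"""
import subprocess
import sys

COVERAGE_THRESHOLD = 80  # assumed team policy, not a universal standard


def run_quality_gate() -> int:
    """Run the test suite and enforce a minimum coverage percentage."""
    result = subprocess.run(
        [
            "pytest",
            "--cov=src",  # measure coverage of the (assumed) src package
            f"--cov-fail-under={COVERAGE_THRESHOLD}",  # fail if coverage is too low
        ],
        check=False,
    )
    if result.returncode != 0:
        print("Quality gate failed: tests failed or coverage below threshold.")
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_quality_gate())
```

A gate like this runs the same way whether a human or an AI assistant wrote the change, which is the point: the standard stays constant while the authoring process varies.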

Hien Luu: Software development covers a lot of ground: understanding requirements, architecting, designing, coding, writing tests, reviewing code, debugging, building new skills and knowledge, and more. AI has now reached a point where it can automate or speed up almost every part of the process.


This is an exciting time to be a builder. A lot of the routine, repetitive, and frankly boring parts of the job, the “cognitive grunt work”, can now be handled by AI. Developers especially appreciate the help in areas like generating test cases, reviewing code, and writing documentation. When those tasks are off our plate, we can spend more time on the things that really add value: solving complex problems, designing great systems, thinking strategically, and growing our skills.


The recent advancements of AI coding agents have pushed them far beyond simple autocomplete. Tools like Cursor, Claude Code, and others are becoming standard components of the modern developer’s toolkit. However, developers still need to provide oversight, making sure the generated code meets quality standards, does not introduce new bugs, and is secure. Careful review, solid testing, and good test coverage are still non-negotiable requirements.

InfoQ: What skills do software developers need to cope with the challenges and exploit the opportunities of artificial intelligence?

Courtney Nash: First and foremost, they need to be empowered to trust their hard-earned knowledge and expertise in the face of AI tools that often evade direct introspection or a clear explanation of how the model works. In order to cope with the glut of AI tools and models, they’ll need knowledge of the domain in question so they know when the model is not working as intended (or worse, is hallucinating). In particular, they’ll need a clear understanding of how the model operates and what it was trained on. They’ll need time to build experience with the model and learn how to work effectively with it, e.g., whether to handhold it through small steps or have it respond to everything at once.


In their recent paper “Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance”, researchers Robert R. Hoffman, Shane T. Mueller, Gary Klein, and Jordan Litman propose a trust scale for AI that might be helpful for developers and their teammates when it comes to assessing how confident they are with a given AI tool or model:


  • I am confident in the [tool]. I feel that it works well.
  • The outputs of the [tool] are very predictable.
  • The tool is very reliable. I can count on it to be correct all the time.
  • I feel safe that when I rely on the [tool] I will get the right answers.
  • The [tool] is efficient in that it works very quickly.
  • I am wary of the [tool].
  • The [tool] can perform the task better than a novice human user.
  • I like using the system for decision making.


Ultimately, a skill that will be essential for developers, and difficult to replace with AI, will be knowing how to detect and listen to the subtle signals that point to a system (or a team with AI involved) working suboptimally. For example, have you been using LLMs/AI to paste over areas of friction or tooling gaps that you never revisit and end up glossing over, and how will you know that is happening?

Mandy Gu: The elephant in the room is the question, “Will AI take over my job one day?” Until this year, I always thought no, but the recent technological advancements and new product offerings in this space are beginning to change my mind. The reality is that we should be prepared for AI to change the software development role as we know it.


To adapt to these changes, software developers should embrace AI as a tool. As AI becomes a more effective tool, the benefits of learning how to use it will grow:


  • A rudimentary understanding of prompt engineering, its dos and don’ts, can go a long way (a brief sketch follows this list).
  • Every software developer should try an AI code assistant at least once.
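
As a rough illustration of those prompt-engineering basics, the sketch below contrasts a vague request with a structured one that supplies role, context, task, constraints, and an expected output format. It is model-agnostic Python; the helper function and the wording of the prompts are illustrative assumptions rather than guidance from the panel.

```python
"""Sketch: a vague prompt vs. a structured prompt for a code assistant.

The structure (role, context, task, constraints, output format) is a common
prompt-engineering pattern; the exact wording here is illustrative only.
"""

# A "don't": vague, no context, no constraints, no expected output format.
VAGUE_PROMPT = "Fix my function."


def build_structured_prompt(code: str, error: str) -> str:
    """Compose a prompt that gives the model context, a task, and constraints."""
    return "\n".join([
        "You are reviewing Python code for a production service.",    # role
        "Here is the function and the error it raises:",              # context
        code,
        f"Error: {error}",
        "Task: explain the likely cause and propose a minimal fix.",  # task
        "Constraints: keep the public signature unchanged; do not add dependencies.",
        "Output: a short explanation followed by a unified diff.",    # output format
    ])


if __name__ == "__main__":
    snippet = "def mean(xs):\n    return sum(xs) / len(xs)"
    print(build_structured_prompt(snippet, "ZeroDivisionError on empty input"))
```

The same structure applies regardless of which assistant or API ultimately receives the prompt.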


However, we should also be aware of the pitfalls and risks of using AI. How do we review AI-written code and guarantee that it is held to the same quality standards as code written by a human? What do we do if an AI assistant asks for a secret to assist with debugging?


Lastly, software developers should lean into the critical thinking abilities that make us human. As AI makes coding more accessible, a developer’s impact will shift towards architecture design and translating business problems into technical requirements (as opposed to purely focusing on execution).

Hien Luu: The skills software developers need to thrive in the age of AI span three main areas: AI technical skills, systems thinking, and soft skills. To stay competitive, developers should evolve beyond a “T-shaped” profile (deep expertise in one area with broad general knowledge) toward a “Pi-shaped” profile, with depth in multiple areas and the ability to bridge them effectively.


A solid understanding of AI/ML concepts, especially how large language models (LLMs) are trained, how they behave, and where their limitations lie, is becoming essential. Knowing the strengths of LLMs and their weaknesses, such as bias and hallucinations, helps developers use them effectively while guarding against errors. One particularly valuable skill is prompt engineering, the ability to clearly and precisely communicate intent to AI systems. Developers who master this communication will be more productive, more effective, and better equipped to build AI-powered applications. As Andrew Ng said at the Interrupt conference in May 2025, “The ability to tell a computer exactly what you want it to do will be a crucial skill for developers”.


While AI is excellent at repetitive software development tasks like coding, test writing, and code reviews, it struggles with higher-level systems thinking, system design, architecture decisions, and solving complex problems. These areas are now more valuable than ever, and developers who invest in strengthening their skills here will increase their career resilience and amplify their value in an AI-driven world.


Soft skills, particularly critical thinking and problem analysis, are also crucial. The ability to break down complex issues, apply logical reasoning, and weigh trade-offs allows developers to evaluate AI-generated output with a discerning eye. These soft skills are key to maintaining quality, preventing subtle bugs, and avoiding the accumulation of technical debt. In short, these skills act as a safeguard against the risks of being overly reliant on AI.

InfoQ: How can engineering leaders support their software teams in using techniques and tools based on artificial intelligence in their daily work?

Courtney Nash: Leaders supporting teams that are using or adopting AI must acknowledge and invest in the hard-earned human expertise that their employees possess. Instead of viewing AI as a replacement for perceived human weaknesses, they can build what are called Joint Cognitive Systems (JCS), which provide a new view of how computers and people can not only co-exist, but also support each other’s work in novel and advantageous ways.


In their 2002 paper “MABA-MABA or Abracadabra? Progress on Human–Automation Co-ordination”, Dekker and Woods identified three critical aspects of a JCS:


Mutual Predictability


In highly interdependent activities like software development and operations, planning our own actions (including coordination actions) becomes possible only when we can accurately predict what others (including automation/AI) will do. Skilled teams become mutually predictable through shared knowledge and coordination activities developed through extended experience in working together. How will AI fit into those activities?


Directability


For AI to be a good team member in a JCS, it must also be directable. Directability refers to the capacity for deliberately assessing and modifying actions as conditions and priorities change. Effective coordination requires AI and humans to remain adequately responsive to each other’s influence as the work unfolds, especially when the unexpected happens.


Common Ground


Effective coordination requires establishing and maintaining common ground, including the pertinent knowledge, beliefs, and assumptions that the involved parties share (see my answer to the next question for more on this topic!). Common ground enables everyone to comprehend the messages and signals that help coordinate work. Team members must be alert for signs of possible erosion of common ground and take preemptive action to forestall a potentially disastrous breakdown of team functioning.


Leaders will have a real challenge, because most AI systems are not designed with these considerations in mind, so they will have to be creative and flexible to support their team working with an AI that is incapable of true joint cognitive work.

Mandy Gu: Engineering leaders should encourage adoption of AI tools and make it easy for software developers to try out these tools. Instead of waiting for the industry to align on a winning tool, we should move quickly on reversible decisions.


Leaders should also make sure the secure path is the default, with the right configurations, checks, and balances in place from day one. In addition, engineering leaders need to invest in education and training to help their teams leverage this new technology effectively.


Lastly, leaders should continue to build a culture of writing clean code that is simple to understand, which will go a long way for humans and AI alike.

Hien Luu: Engineering leaders need to take an active role in guiding their teams through the AI transformation. This starts with clear communication of expectations, establishing a strong AI governance framework, and implementing comprehensive measurement systems.


Leaders should emphasize that AI is there to augment, not replace, human developers, and that the productivity gains it brings should be reinvested into higher-value, creative, and critical-thinking tasks, not simply used to increase workload.


A robust AI governance framework is essential to avoid security and compliance pitfalls. It should define clear guidelines for AI tool usage, outline adoption criteria, and prevent the uncontrolled spread of unvetted tools within the organization.


Measurement should also be holistic. In addition to traditional metrics like utilization, code acceptance rates, and developer satisfaction, leaders should track indicators of long-term health such as code quality, test coverage, maintainability, and feature delivery velocity. Good tracking ensures AI adoption is driving sustainable productivity and not introducing hidden technical debt.
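
As a rough sketch of what such holistic measurement might look like in practice, the snippet below pairs adoption-oriented metrics with long-term health metrics and flags combinations that suggest hidden technical debt. The field names mirror the metrics Luu lists; the dataclass, thresholds, and warning rules are illustrative assumptions, not a prescribed system.

```python
"""Sketch: tracking AI-adoption metrics alongside long-term health metrics.

The metric names mirror the ones mentioned above; the thresholds and the
simple risk rules below are illustrative assumptions only.
"""
from dataclasses import dataclass


@dataclass
class TeamMetrics:
    # Adoption-oriented metrics
    tool_utilization_pct: float        # share of developers using the AI tooling
    suggestion_acceptance_pct: float   # accepted AI suggestions / total suggestions
    developer_satisfaction: float      # e.g., survey score from 1 to 5
    # Long-term health metrics
    test_coverage_pct: float
    change_failure_rate_pct: float     # rough proxy for code quality / hidden debt
    lead_time_days: float              # feature delivery velocity


def flag_risks(m: TeamMetrics) -> list[str]:
    """Flag combinations that suggest adoption is outpacing sustainability."""
    risks = []
    if m.suggestion_acceptance_pct > 70 and m.test_coverage_pct < 60:
        risks.append("High acceptance of AI output with weak test coverage.")
    if m.tool_utilization_pct > 80 and m.change_failure_rate_pct > 15:
        risks.append("Broad adoption alongside rising change failures.")
    return risks


if __name__ == "__main__":
    sample = TeamMetrics(85, 75, 4.1, 55, 18, 3.5)
    for risk in flag_risks(sample):
        print("WARNING:", risk)
```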

InfoQ: What can companies do to cultivate a culture of resilience and enable their software developers to thrive in chaos and uncertainty?

Courtney Nash: This question can’t possibly be answered properly in a few paragraphs, but I’ll do my best and suggest some further reading and resources for people who want to dive deeper. I’ll start with a few definitions.


Resilience is the opposite of brittleness; it is the ability to bounce back, to adapt and respond when situations go awry and exceed known solutions, all without breaking down or experiencing catastrophic failure. Resilience is neither reliability (delivering the same outcome every time) nor redundancy (backups and similar methods for effectively supporting reliability).


As system safety and human factors researcher Dr. David Woods has wisely said, “Resilience is a verb”.


MIT Professor Edgar Schein defines organizational culture as “the pattern of basic assumptions that a given group has invented, discovered, or developed in learning to cope with its problems of external adaptation and internal integration, and that have worked well enough to be considered valid, and, therefore, to be taught to new members as the correct way to perceive, think, and feel in relation to those problems”. (Organizational Culture and Leadership, Edgar Schein, 2010.)


In this regard, culture is a collection of values (derived from assumptions and experience) upon which its constituent members act. Like resilience, it is active. Existing members of a culture must be explicit and transparent about what those values and associated behaviors are, and new members must be actively taught these values and behaviors by those existing members. Culture is alive. It is not a handbook or a one-off onboarding training video. This transparency is a critical piece of building a sustainable culture that includes AI: how will leaders factor AI into a living, changing, active culture?


Combining these two views, cultivating a culture of resilience hinges on investing in expertise. It succeeds when leaders trust their employees and give them autonomy to express their expertise and disseminate it throughout the organization without fear of blame or retribution. A culture of resilience celebrates learning from failure.


Some suggested further reading on this topic includes:



Mandy Gu: In response to the uncertainty introduced by AI, companies should make sure these technologies are embraced safely and securely. They can make good data security practices easy with the right checks and balances, and leverage the GenAI deployment options offered by their cloud providers (e.g., Bedrock on AWS) to ensure that data from AI integrations stays within their own cloud tenant.


Companies should also be transparent about their AI strategies to address potential anxieties and give their developers the space and resources to continuously learn and upskill in these changing technological environments.

Hien Luu: A culture of resilience starts with psychological safety and open communication, creating an environment where developers can share their learning journeys and mistakes, express ideas, ask questions, and voice concerns without fear of judgment or retaliation.


Leaders play a key role here. When they openly share their own challenges and learning curves with AI, it signals to the team that vulnerability is not only acceptable but valued. Hack days, regular developer meetups, and other forums for sharing lessons learned (including mistakes) help developers see they’re not alone in navigating change. These practices foster connection, mutual support, and the confidence to experiment, even in times of uncertainty.

InfoQ: What’s needed to keep software development sustainable and ensure the mental well-being of software developers and team leaders?

Courtney Nash: System safety researcher Sidney Dekker notes a few key aspects of organizations that foster resilience and a culture that best supports the people doing that work. These organizations:


  • Never take past success as a guarantee of future success.
  • Keep risk discussions alive even when everything is fine and dandy.
  • Continuously update their mental models of how their system(s) work.
  • Actively consider diverse inputs, seeking out minority opinions and staying open-minded about what they hear in those opinions.
  • Unilaterally allow individuals to “stop the production line” without consequence if they feel something is going to go awry.


The other research I always point people towards, when the topic of sustainable work, mental well-being, and burnout comes up, is from Dr. Christine Maslach. She’s a leading expert on burnout in a variety of industries, and has spoken at a number of tech conferences over the past decade. Her research captures key areas in which leaders should invest to keep work sustainable.


Maslach identified six main strategic areas that leaders need to invest in to keep work sustainable and avoid burnout:


  • A manageable workload
  • Agency or control over one’s work
  • Feeling appropriately rewarded for the work one does
  • A sense of belonging to a community
  • Fairness in the work environment
  • Work that feels aligned with the shared values of the organization


A mismatch along any of those axes can lead to burnout down the road. The more mismatches there are, the more likely it’s going to happen. Having alignment along all of these is like having “money in the bank” for when people do need to stretch and work a bit harder when called for. Factoring AI into this model is a new frontier that is largely ignored in the gold rush to new products, models, and computing power.


AI poses a unique set of challenges to all six of these areas of investment for leaders and their teams. Leaders who work towards helping AI and software developers co-exist as “team players” will be more likely to have sustainable, higher performing teams than those who view AI as a substitution for human performance and expertise.

Mandy Gu: Engineering leaders need to address any AI anxieties that may be lingering. Leaders need to be transparent about AI strategies and their expectations, and make it easy for anyone in the company to share feedback about these strategies.


Companies will also need to give teams space to learn and adapt to these new technologies, and lay the foundations to leverage AI effectively. In some cases, companies may also need to reposition their productivity metrics to reflect the work being completed. While it’s tempting to equate a ninety percent reduction in the code developers write by hand with a ninety percent cut in delivery time, there is much more beneath the tip of the iceberg than just writing the code.

Hien Luu: Clear communication about what AI can and cannot do is essential. When AI is positioned as a powerful tool to augment developers rather than replace them, it helps reduce anxiety about job security and fosters greater acceptance and adoption.


Providing accessible, ongoing training that fits into a developer’s busy schedule is equally important. Pairing training with regular “office hours” or open Q&A sessions creates a safe space for learning and troubleshooting. Together, these measures help ease feelings of being overwhelmed, support continuous growth, and keep software development sustainable in the long run.

Conclusions

Using artificial intelligence, software can be developed more quickly, and productivity can increase because repetitive parts of the work can be handled by AI. But AI can add unforeseen complexity and increase the likelihood of unexpected surprises later on. Developers still need to provide oversight, and checks and guardrails need to be in place, to ensure that generated code meets quality standards.

To cope with the challenges and exploit the opportunities of artificial intelligence, software developers need knowledge of the domain in question so they know when the model is not working as intended, or is hallucinating. To embrace AI as a tool, software developers need to understand AI/ML concepts and prompt engineering. However, they should also be aware of the pitfalls and risks of using AI. Critical thinking and problem analysis are crucial for evaluating AI-generated output.

Engineering leaders can support software teams by treating developers and AI as joint cognitive systems in which each supports the other’s work in novel and advantageous ways. They should encourage adoption of AI tools, make it easy for software developers to try them out, and continue to build a culture of writing clean code that is simple to understand. Leaders can also establish an AI governance framework and implement comprehensive measurement systems to support the use of AI.

To cultivate a culture of resilience, companies should invest in expertise. A culture of resilience starts with psychological safety and open communication. Leaders should trust their employees and give them autonomy to express their expertise and disseminate it throughout the organization without fear of blame or retribution. Companies should be transparent about their AI strategies to address potential anxieties and give developers space and resources to continuously learn and upskill in these changing technological environments.

To ensure the mental well-being of software developers, engineering leaders need to address any AI anxieties that may be lingering, and give teams space to learn and adapt to new technologies. When AI is positioned as a powerful tool to augment developers rather than replace them, it helps reduce anxiety about job security and fosters greater acceptance and adoption. Leaders who work towards helping AI and software developers co-exist as “team players” will be more likely to have sustainable, higher performing teams than those who view AI as a substitution for human performance and expertise.







AI can be a great equalizer, but it remains out of reach for millions of Americans; the Universal Service Fund can expand access


In an age defined by digital transformation, access to reliable, high-speed internet is not a luxury; it is the bedrock of opportunity. It impacts the school classroom, the doctor’s office, the town square and the job market.

As we stand on the cusp of a workforce revolution driven by the “arrival technology” of artificial intelligence, high-speed internet access has become the critical determinant of our nation’s economic future. Yet, for millions of Americans, this essential connection remains out of reach.

This digital divide is a persistent crisis that deepens societal inequities, and we must rally around one of the most effective tools we have to combat it: the Universal Service Fund. The USF is a long-standing national commitment built on a foundation of bipartisan support and born from the principle that every American, regardless of their location or income, deserves access to communications services.

Without this essential program, over 54 million students, 16,000 healthcare providers and 7.5 million high-need subscribers would lose the service that connects classrooms, rural communities (including their hospitals) and libraries to the internet.


The discussion about the future of USF has reached a critical juncture: Which communities will have access to USF, how it will be funded and whether equitable access to connectivity will continue to be a priority will soon be decided.

Earlier this year, the Supreme Court found the USF’s infrastructure to be constitutional — and a backbone for access and opportunity in this country. Congress recently took a significant next step by relaunching a bicameral, bipartisan working group devoted to overhauling the fund. Now they are actively seeking input from stakeholders on how to best modernize this vital program for the future, and they need our input.

I’m urging everyone who cares about digital equity to make their voices heard. The window for our input in support of this vital connectivity infrastructure is open through September 15.

While Universal Service may appear as only a small fee on our monthly phone bills, its impact is monumental. The fund powers critical programs that form a lifeline for our nation’s most vital institutions and vulnerable populations. The USF helps thousands of schools and libraries obtain affordable internet — including the school I founded in downtown Brooklyn. For students in rural towns, the E-Rate program, funded by the USF, allows access to the same online educational resources as those available to students in major cities. In schools all over the country, the USF helps foster digital literacy, supports coding clubs and enables students to complete homework online.

By wiring our classrooms and libraries, we are investing in the next generation of innovators.

The coming waves of technological change — including the widespread adoption of AI — threaten to make the digital divide an unbridgeable economic chasm. Those on the wrong side of this divide experienced profound disadvantages during the pandemic. To get connected, students at my school ended up doing homework in fast-food parking lots. Entire communities lost vital connections to knowledge and opportunity when libraries closed.

But that was just a preview of the digital struggle. This time, we have to fight to protect the future of this investment in our nation’s vital infrastructure to ensure that the rising wave of AI jobs, opportunities and tools is accessible to all.

AI is rapidly becoming a fundamental tool for the American workforce and in the classroom. AI tools require robust bandwidth to process data, connect to cloud platforms and function effectively.

The student of tomorrow will rely on AI as a personalized tutor that enhances teacher-led classroom instruction, explains complex concepts and supports their homework. AI will also power the future of work for farmers, mechanics and engineers.


Without access to AI, entire communities and segments of the workforce will be locked out. We will create a new class of “AI have-nots,” unable to leverage the technology designed to propel our economy forward.

The ability to participate in this new economy, to upskill and reskill for the jobs of tomorrow, is entirely dependent on the one thing the USF is designed to provide: reliable connectivity.

The USF is also critical for rural health care by supporting providers’ internet access and making telehealth available in many communities. It makes internet service affordable for low-income households through its Lifeline program and the Connect America Fund, which promotes the construction of broadband infrastructure in rural areas.

The USF is more than a funding mechanism; it is a statement of our values and a strategic economic necessity. It reflects our collective agreement that a child’s future shouldn’t be limited by their school’s internet connection, that a patient’s health outcome shouldn’t depend on their zip code and that every American worker deserves the ability to harness new technology for their career.

With Congress actively debating the future of the fund, now is the time to rally. We must engage in this process, call on our policymakers to champion a modernized and sustainably funded USF and recognize it not as a cost, but as an essential investment in a prosperous, competitive and flourishing America.

Erin Mote is the CEO and founder of InnovateEDU, a nonprofit that aims to catalyze education transformation by bridging gaps in data, policy, practice and research.


This story about the Universal Service Fund was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.







Examining the Evolving Landscape of Medical AI


I. Glenn Cohen discusses the risks and rewards of using artificial intelligence in health care.

In a discussion with The Regulatory Review, I. Glenn Cohen offers his thoughts on the regulatory landscape of medical artificial intelligence (AI), the evolving ways in which patients may encounter AI in the doctor’s office, and the risks and opportunities of a rapidly evolving technological landscape.

The use of AI in the medical field poses new challenges and tremendous potential for scientific and technological advancement. Cohen highlights how AI is increasingly integrated into health care through tools such as ambient scribing and speaks to some of the ethical concerns around data bias, patient privacy, and gaps in regulatory oversight, especially for underrepresented populations and institutions lacking resources. He surveys several of the emerging approaches to liability for the use of medical AI and weighs the benefits and risks of permitting states to create their own AI regulations in the absence of federal oversight. Despite the challenges facing regulators and clinicians looking for ways to leverage these new technologies, Professor Cohen is optimistic about AI’s potential to expand access to care and improve health care quality.

A leading expert on bioethics and the law, Cohen is the James A. Attwood and Leslie Williams Professor of Law at Harvard Law School. He is an elected member of the National Academy of Medicine. He has addressed the Organisation for Economic Co-operation and Development, members of the U.S. Congress, and the National Assembly of the Republic of Korea on medical AI policy, as well as the North Atlantic Treaty Organization on biotechnology and human advancement. He has provided bioethical advising and consulting to major health care companies.

The Regulatory Review is pleased to share the following interview with I. Glenn Cohen.

The Regulatory Review: In what ways is the average patient today most likely to encounter artificial intelligence (AI) in the health care setting?

Cohen: Part of it will depend on what we mean by “AI.” In a sense, using Google Maps to get to the hospital is the most common use, but that’s probably not what you have in mind. I think one very common use we are already seeing deployed in many hospitals is ambient listening or ambient scribing. I wrote an article on that a few months ago with some colleagues. Inbox management—drafting initial responses to patient queries that physicians are meant to look over—is another way that patients may encounter AI soon. Finally, in terms of more direct usage in clinical care, AI involvement in radiology is one of the more typical use cases. I do want to highlight your use of “encounter,” which is importantly ambiguous between “knowingly” or “unknowingly” encounter. As I noted several years ago, patients may never be told about AI’s involvement in their care. That is even more true today.

TRR: Are some patient populations more likely to encounter or benefit from AI than others?

Cohen: Yes. There are a couple of ethically salient ways to press this point. First, because of contextual bias, those who are closer demographically or in other ways to the training data sets are more likely to benefit from AI. I often note that, as a middle-aged Caucasian man living in Boston, I am well-represented in most training data sets in a way that, say, a Filipino-American woman living in rural Arkansas may not be. There are many other forms of bias, but this form of missing data bias is pretty straightforward as a barrier to receiving the benefits from AI.

Second, we have to follow the money. Absent charitable investment, what gets built depends on what gets paid for. That may mean, to use the locution of my friend and co-author W. Nicholson Price II, that AI may be directed primarily toward “pushing frontiers” (making excellent clinicians in the United States even better) rather than “democratizing expertise” (taking pretty mediocre physician skills and scaling access to them up via AI to improve access across the world and in parts of the United States without good access to healthcare).

Third, ethically and safely implementing AI requires significant evaluation, which requires expertise and imposes costs. Unless there are good clearinghouses for expertise or other interventions, this evaluation is something that leading academic medical centers can do, but many other kinds of facilities cannot.

TRR: What risks does the use of AI in the medical context pose to patient privacy? How should regulators address such challenges?

Cohen: Privacy definitely can be put at risk by AI. There are a couple of ways that come to mind. One is just the propensity to share information that AI invites. Take, for example, large language models such as ChatGPT. If you are a hospital system getting access for your clinicians, you are going to want to get a sandboxed instance that does not share queries back to OpenAI. Otherwise, there is a concern you may have transmitted protected information in violation of the Health Insurance Portability and Accountability Act (HIPAA), as well as your ethical obligations of confidentiality. But if the hospital system makes it too cumbersome to access the LLM, your clinicians are going to start using their phones to access it, and there goes your HIPAA protections. I do not want to make it sound like this is a problem unique to medical AI. In one of my favorite studies—now a bit dated—someone rode in elevators at a hospital and recorded the number of privacy and other violations.

A different problem posed by AI in general is that it worsens a problem I sometimes call data triangulation: the ability to reidentify users by stitching together our presence in multiple data sets, even if we are not directly identified in some of the sensitive data sets. I have discussed this issue in an article, where I include a good illustrative real-life example involving Netflix.

As for solutions, although I think there is space for improving HIPAA—a topic I have discussed along with the sharing of data with hospitals—I have not written specifically about AI privacy legislation in any great depth.

TRR: What are some emerging best practices for mitigating the negative effects of bias in the development and use of medical AI?

Cohen: I think the key starting point is to be able to identify biases. Missing data bias is a pretty obvious one to spot, though it is often hard to fix if you do not have resources to try to diversify the population represented in your data set. Even if you can diversify, some communities might be understandably wary of sharing information. But there are also many harder-to-spot biases.

For example, measurement or classification bias is where practitioner bias is translated into what is in the data set. What this may look like in practice is that women are less likely to receive lipid-lowering medications and procedures in the hospital compared to men, despite being more likely to present with hypertension and heart failure. Label bias is particularly easy to overlook, and it occurs when the outcome variable is differentially ascertained or has a different meaning across groups. A paper published in Science by Ziad Obermeyer and several coauthors has justifiably become the locus classicus example.

A lot of the problem is in thinking very hard at the front end about design and what is feasible given the data and expertise you have. But that is no substitute for auditing on the back end because even very well-intentioned designs may prove to lead to biased results on the back end. I often recommend a paper by Lama H. Nazer and several coauthors, published in PLOS Digital Health, to folks as a summary of the different kinds of bias.

All that said, I often finish talks by saying, “If you have listened carefully, you have learned that medical AI often makes errors, is bad at explaining how it is reaching its conclusion and is a little bit racist. The same is true of your physician, though. The real question is what combination of the two might improve on those dimensions we care about and how to evaluate it.”

TRR: You have written about the limited scope of the U.S. Food and Drug Administration (FDA) in regulating AI in the medical context. What health-related uses of AI currently fall outside of the FDA’s regulatory authority?

Cohen: Most is the short answer. I would recommend a paper written by my former post-doc and frequent coauthor, Sara Gerke, which does a nice job of walking through it. But the punchline is: if you are expecting medical AI to have been FDA-reviewed, your expectations are almost always going to be disappointed.

TRR: What risks, if any, are associated with the current gaps in FDA oversight of AI?

Cohen: The FDA framework for drugs is aimed at showing safety and efficacy. With devices, the way that review is graded by device classes means that some devices skirt by because they can show a predicate device—in an AI context, sometimes quite unrelated—or they are classified as devices rather than general wellness products. Then there is the stuff that FDA never sees—most of it. For all these products, there are open questions about safety and efficacy. All that said, some would argue that the FDA premarket approval process is a bad fit for medical AI. These critics may defend FDA’s lack of review by comparing it to areas such as innovation in surgical techniques or medical practices, where FDA largely does not regulate the practice of medicine. Instead, we rely on licensure of physicians and tort law to do a lot of the work, as well as on in-house review processes. My own instinct as to when to be worried—to give a lawyerly answer—is it depends. Among other things, it depends on what non-FDA indicia of quality we have, what is understood by the relevant adopters about how the AI works, what populations it does or does not work for, what is tracked or audited, what the risk level in the worst-case scenario looks like, and who, if anyone, is doing the reviewing.

TRR: You have written in the past about medical liability for harms caused to patients by faulty AI. In the current technological and legal landscape, who should be liable for these injuries?

Cohen: Another lawyerly answer: it’s complicated, and the answer will be different for different kinds of AI. Physicians ultimately are responsible for a medical decision at the end of the day, and there is a school of thought that treats AI as just another tool, such as an MRI machine, and suggests that physicians are responsible even if the AI is faulty.

The reality is that few reported cases have succeeded against physicians for a myriad of reasons detailed in a paper published last year by Michelle M. Mello and Neel Guha. W. Nicholson Price II and I have focused on two other legs of the stool in the paper you asked about: hospital systems and developers. In general, and this may be understandable given that, in tort, liability for hospital systems is not all that common, it seems to me that most policy analyses place too little emphasis on the hospital system as a potential locus of responsibility. We suggest “the application of enterprise liability to hospitals—making them broadly liable for negligent injuries occurring within the hospital system—with an important caveat: hospitals must have access to the information needed for adaptation and monitoring. If that information is unavailable, we suggest that liability should shift from hospitals to the developers keeping information secret.”

Elsewhere, I have also mused as to whether this is a good space for traditional tort law at all and whether instead we ought to have something more like the compensation schemes we see for vaccine injuries or workers’ compensation. In those schemes, we would have makers of AI pay into a fund that could pay for injuries without showing fault. Given the cost and complexity of proving negligence and causation in cases involving medical AI, this might be desirable.

TRR: The U.S. Senate rejected adding a provision to the recently passed “megalaw” that would have set a 10-year moratorium on any state enforcing a law or regulation affecting “artificial intelligence models,” “artificial intelligence systems,” or “automated decision systems.” What are some of the pros and cons of permitting states to develop their own AI regulations?

Cohen: This is something I have not written about, so I am shooting from the hip here. Please take it with an even larger grain of salt than what I have said already. The biggest con to state regulation is that it is much harder for an AI maker to develop something subject to differential standards or rules in different states. One can imagine the equivalent of impossibility-preemption type effects: state X says do this, state Y says do the opposite. But even short of that, it will be difficult to design a product to be used nationally if there are substantial variations in the standards of liability.

On the flip side, this is a feature of tort law and choice of law rules for all products, so why should AI be so different? And unlike physical goods that ship in interstate commerce, it is much easier to geolocate and either alter or disable AI running in states with different rules if you want to avoid liability.

On the pro side for state legislation, if you are skeptical that the federal government is going to be able to do anything in this space—or anything you like, at least—due to the usual pathologies of Congress, plus lobbying from AI firms, action by individual states might be attractive. States have innovated in the privacy space. The California Consumer Privacy Act is a good example. For state-based AI regulation, maybe there is a world where states fulfill the Brandeisian ideal of laboratories of experimentation that can be used to develop federal law.

Of course, a lot of this will depend on your prior beliefs about federalism. People often speak about the “Brussels Effect,” relating to the effects of the General Data Protection Regulation on non-European privacy practices. If a state the size of California was to pass legislation with very clear rules that differ from what companies do now, we might see a similar California effect with companies conforming nationwide to these standards. This is particularly true given that much of U.S. AI development is centered in California. One’s views about whether that is good or bad depend not only on the content of those rules but also on the views of what American federalism should look like.

TRR: Overall, what worries you most about the use of AI in the medical context? And what excites you the most?

Cohen: There is a lot that worries me, but the incentives are number one. What gets built is a function of what gets paid for. We may be giving up on some of what has the highest ethical value, the democratization of expertise and improving access, for lack of a business model that supports it. Government may be able to step in to some extent as a funder or for reimbursement, but I am not that optimistic.

Although your questions have led me to the worry side of the house, I am actually pretty excited. Much of what is done in medicine is unanalyzed, or at least not rigorously so. Even the very best clinicians have limited experience, and even if they read the leading journals, go to conferences, and complete other standard means of continuing education for physicians, the amount of information they can synthesize is orders of magnitude smaller than that of AI. AI may also allow scaling of the delivery of some services in a way that can serve underrepresented people in places where providers are scarce.






AI and machine learning for engineering design | MIT News


Artificial intelligence optimization offers a host of benefits for mechanical engineers, including faster and more accurate designs and simulations, improved efficiency, reduced development costs through process automation, and enhanced predictive maintenance and quality control.

“When people think about mechanical engineering, they’re thinking about basic mechanical tools like hammers and … hardware like cars, robots, cranes, but mechanical engineering is very broad,” says Faez Ahmed, the Doherty Chair in Ocean Utilization and associate professor of mechanical engineering at MIT. “Within mechanical engineering, machine learning, AI, and optimization are playing a big role.”

In Ahmed’s course, 2.155/156 (AI and Machine Learning for Engineering Design), students use tools and techniques from artificial intelligence and machine learning for mechanical engineering design, focusing on the creation of new products and addressing engineering design challenges.


Cat Trees to Motion Capture: AI and ML for Engineering Design

Video: MIT Department of Mechanical Engineering

“There’s a lot of reason for mechanical engineers to think about machine learning and AI to essentially expedite the design process,” says Lyle Regenwetter, a teaching assistant for the course and a PhD candidate in Ahmed’s Design Computation and Digital Engineering Lab (DeCoDE), where research focuses on developing new machine learning and optimization methods to study complex engineering design problems.

First offered in 2021, the class has quickly become one of the Department of Mechanical Engineering (MechE)’s most popular non-core offerings, attracting students from departments across the Institute, including mechanical and civil and environmental engineering, aeronautics and astronautics, the MIT Sloan School of Management, and nuclear and computer science, along with cross-registered students from Harvard University and other schools.

The course, which is open to both undergraduate and graduate students, focuses on the implementation of advanced machine learning and optimization strategies in the context of real-world mechanical design problems. From designing bike frames to city grids, students participate in contests related to AI for physical systems and tackle optimization challenges in a class environment fueled by friendly competition.

Students are given challenge problems and starter code that “gave a solution, but [not] the best solution …” explains Ilan Moyer, a graduate student in MechE. “Our task was to [determine], how can we do better?” Live leaderboards encourage students to continually refine their methods.

Em Lauber, a system design and management graduate student, says the process gave students space to explore applications of what they were learning and to practice the skill of “literally how to code it.”

The curriculum incorporates discussions on research papers, and students also pursue hands-on exercises in machine learning tailored to specific engineering issues including robotics, aircraft, structures, and metamaterials. For their final project, students work together on a team project that employs AI techniques for design on a complex problem of their choice.

“It is wonderful to see the diverse breadth and high quality of class projects,” says Ahmed. “Student projects from this course often lead to research publications, and have even led to awards.” He cites the example of a recent paper, titled “GenCAD-Self-Repairing,” that went on to win the American Society of Mechanical Engineers Systems Engineering, Information and Knowledge Management 2025 Best Paper Award.

“The best part about the final project was that it gave every student the opportunity to apply what they’ve learned in the class to an area that interests them a lot,” says Malia Smith, a graduate student in MechE. Her project used “markered motion captured data” and looked at predicting ground force for runners, an effort she called “really gratifying” because it worked so much better than expected.

Lauber took the framework of a “cat tree” design with different modules of poles, platforms, and ramps to create customized solutions for individual cat households, while Moyer created software for designing a new type of 3D printer architecture.

“When you see machine learning in popular culture, it’s very abstracted, and you have the sense that there’s something very complicated going on,” says Moyer. “This class has opened the curtains.” 




