Tools & Platforms
Advice from J&J MedTech’s digital head on building trust in AI

We recently spoke with Johnson & Johnson MedTech SVP and Global Head of Digital Shan Jegatheeswaran about the medtech developer’s new Polyphonic AI Fund for Surgery with Nvidia and Amazon Web Services (AWS).
During that interview, we also asked what advice he could offer to help medtech developers and manufacturers better understand device user needs, build trust in artificial intelligence, and advance digitization efforts within their organizations.
The following has been lightly edited for clarity and space.
MDO: What have you learned in your time in medtech about understanding user needs?
Johnson & Johnson MedTech SVP and Global Head of Digital Shan Jegatheeswaran [Photo courtesy of J&J]
Jegatheeswaran: “It’s having your software work in the wild. We can create the best digital experience in a lab in the basement of the building where we work, but there’s nothing like dropping it into an OR and seeing what works and what doesn’t. We’re pretty lucky that our relationships allow us to enter ORs that are dynamic and arm’s length from us, so it’s truly being tested. We have an early access program for Polyphonic where we selected a few hospitals globally — not just within the U.S., because the environments are somewhat different — for testing and getting feedback. Voice of customer at a feature level, while they’re in the heat of battle, is really important. The second thing is I’m personally lucky from a team perspective: we have functioning ORs in the building where we are. Working closely with the Ottava teams and the Monarch teams, we can actually see a product used in a semi-real environment. That’s the starting point. And bringing in outside influence: we take inspiration from the automotive industry, from the capital markets industry, from life sciences or our J&J Innovative Medicine friends, because a lot of these problems have been solved at the first-principles level. We don’t want to reinvent the wheel.”
What technology do we need in terms of infrastructure or next-generation components like sensors to achieve J&J’s vision for AI?
Jegatheeswaran: “Technology isn’t the limiting factor. The technology is sufficiently mature for us to add value, whether it’s through AI or not. Ultimately, the end user doesn’t really care whether AI is involved if it makes things better; they want an outcome. The limitations we’re working through, and for which I think we’re uniquely positioned, are in (no pun intended) the soft tissue around making technology work at scale. That’s how you think about regulation globally, not just at a hospital level; how you think about a trusted experience when it comes to things like AI; how you think about managing risk and contracts; and the change management for the folks who are ultimately going to use this software output in an OR, a very dynamic, human-first setting. Those are things that require time, patience, and study, and that’s what we’ve done in the past with devices and human-centered design and human factors. We have to do the same with software and approach it in the same way with digital.”
Related: What J&J MedTech’s new Dualto says about the OR of the future — and Ottava
The term artificial intelligence is being thrown around a lot right now, covering everything from algorithms to large language models. How do you define AI, and what kinds do you think have the most potential for medtech?
Jegatheeswaran: “When we talk to customers, there are typically two flavors of value pools. One is clinical in nature, and we’re hearing a lot of that from surgeons themselves and their teams. The other is more administrative, on the business side of the hospital, and it’s more around efficiency. Both are valid value pools.

Surgeons today are facing an almost impossible situation when they’re going into a procedure. Patients are living longer, comorbidities are more complex or more prevalent, there’s a ton of new technology coming out, and the workforce is thinning out. It’s a perfect storm. Surgeons are asking for help, and AI can accelerate, augment, and automate steps within the procedure process, at least initially, that make it simpler. Every surgeon, before they go into a procedure, is doing some sort of pre-thinking. They’re speaking with peers, looking at imaging reports, looking at patient health records and histories, and looking at their own notes on procedures with this patient or previous patients of a similar nature. All of that is manual and not recorded anywhere. Why can’t we collect that dataset and run an AI model on top of it to give them the salient outcome: the top three risks, things you can do preemptively, things you want to make sure your patient does before they come in for the procedure, comments to your team on how they can prep best the night before and the day of? That is something that can be done for surgeons with the tech that exists today, and that’s something we’re working on. That’s a big outcome just in terms of preparing in the best way for a complex or normal procedure.

On the efficiency side, we’ve seen this happen in the movement of people and the optimization of the movement of people and activities within a confined space. The OR is a dynamic experience: people coming in and out of the room, a lot of equipment working, a lot of sounds. The efficiency side is essentially how many procedures you can do in a day and how you can decrease the level of complications coming out of procedures. You think about technology like ambient AI in the OR, laparoscopic video, and then connecting the dots with patient outcomes and the EMR coming in. That for me is an efficiency play that many companies do a good job of today. And AI has a role to play because you can optimize at an OR level, you can optimize at a hospital level, but with AI, you can optimize at a system-of-systems level, and those best practices can then be fed back to nurses, administrators, etc. That’s why I see the value of AI in the short term. In the long term, the jury’s out on what AI can help surgery with. I actually think surgery is probably the most personal and sensitive use of AI. It’s literally within someone’s body. And so while speed is important, so is trust and quality. We want to approach this responsibly.”
Related: J&J MedTech arms its Monarch robot for futuristic lung cancer therapies
How do you build trust in AI?
Jegatheeswaran: “We don’t start by looking for a problem to solve with AI. It’s not a hammer looking for a nail. The first step is being authentic about what you’re trying to solve. We have very close, intimate relationships with surgeons and clinical teams. We’ve been in surgery for over 100 years, and we’re proud that we’re part of the craft. So the first step is starting with the problem we’re looking to solve. The second is meeting the user where they are. We can drop a Ferrari into the middle of a desert, but it’s absolutely useless. We need to build the infrastructure consistently and piecemeal, change along with our end users and the market, and have that evolve. We’ve been through that sort of curve going back to sterilization and laparoscopy. Coming forward to digital, we’re going to have to have that same change management curve. Net-net, it involves careful design and co-creation with our end users — which we’re pretty strong at — working backwards, understanding what it is we’re trying to solve and whether AI or digital is the actual path. In many cases the answer is yes. In some cases, no. And third is education and training, from new residents all the way up to very experienced surgeons and teams. There’s an element of education and training that often gets overlooked, and it’s really important and near and dear to us.”
Related: Philips offers recommendations for building trust in medtech AI
Can you tell us a little about the digital improvements you led at Baker Hughes before you joined J&J?
Jegatheeswaran: “I came from the oil and gas industry, which was great training in some ways for surgery. It’s a regulated environment, global business, the stakes are high, similar to surgery. … It was about efficiency: How do you make your production field more efficient, create the least environmental impact when you’re drilling, make sure you’re drilling in the right areas? It was a lot around workflow efficiency, training and safety for the workers, accuracy — whether you’re drilling or producing oil or gas — and doing it at scale. There’s no space for one-offs in oil and gas, similar to surgery. Being able to do that at scale in very remote places was our focus.”
Do you have tips for device developers and manufacturers trying to make similar moves toward digitization?
Jegatheeswaran: “Embrace it, because if it’s not here, it’s coming. Be humble enough to understand that no one company, no one entity, is going to figure it out. This is going to have to be a coalition of the willing, and that’s how we’re approaching it. Stay true to the starting point, which is that patients come first. This is another technology wave. It’s not changing the fundamentals of what medtech needs to deliver, which is better patient outcomes, and that’s always going to hold true.”
Do you have a mantra or motto that your team would say you repeat all the time?
Jegatheeswaran: “Ultimately, the user comes first. That’s really the guiding principle. It could be a patient, it could be a surgeon, it could be anyone. That moment of truth when they’re engaging with the product we’ve built, and their willingness to use and reuse that product, is what makes or breaks a good solution. Because we can have the best science in the world, but if it’s not adopted, it doesn’t really matter. Technology is one leg of the stool, but data and design are the other two legs. Data has to be high quality, compliant, etc. Design has a technical component, which serves cyber and cost, and a user component, which is your interface and your human design. If you don’t have those three legs of the stool, you don’t have a stool. The user comes first.”
Tools & Platforms
Driving the Way to Safer and Smarter Cars

A new, scalable neural processing technology based on co-designed hardware and software IP for customized, heterogeneous SoCs.
As autonomous vehicles have only begun to appear on limited public roads, it has become clear that achieving widespread adoption will take longer than early predictions suggested. With Level 3 systems in place, the road ahead leads to full autonomy and Level 5 self-driving. However, it’s going to be a long climb. Much of the technology that got the industry to Level 3 will not scale in all the needed dimensions—performance, memory usage, interconnect, chip area, and power consumption.
This paper looks at the challenges waiting down the road, including increasing AI operations while decreasing power consumption in realizable solutions. It introduces a new, scalable neural processing technology based on co-designed hardware and software IP for customized, heterogeneous SoCs that can help solve them.
Read more here.
Tools & Platforms
Tech companies are stealing our books, music and films for AI. It’s brazen theft and must be stopped | Anna Funder and Julia Powles

Today’s large-scale AI systems are founded on what appears to be an extraordinarily brazen criminal enterprise: the wholesale, unauthorised appropriation of every available book, work of art and piece of performance that can be rendered digital.
In the scheme of global harms committed by the tech bros – the undermining of democracies, the decimation of privacy, the open gauntlet to scams and abuse – stealing one Australian author’s life’s work and ruining their livelihood is a peccadillo.
But stealing all Australian books, music, films, plays and art as AI fodder is a monumental crime against all Australians, as readers, listeners, thinkers, innovators, creators and citizens of a sovereign nation.
The tech companies are operating as imperialists, scouring foreign lands whose resources they can plunder. Brazenly. Without consent. Without attribution. Without redress. These resources are the products of our minds and humanity. They are our culture, the archives of our collective imagination.
If we don’t refuse and resist, not just our culture but our democracy will be irrevocably diminished. Australia will lose the wondrous, astonishing, illuminating outputs of human creative toil that delight us by exploring who we are and what we can be. We won’t know ourselves any more. The rule of law will be rendered dust. Colony indeed.
Tech companies have valorised the ethos “move fast and break things” — in this case, the law and all it binds. To “train” AI, they started by “scraping” the internet for publicly available text, a lot of which is rubbish. They quickly realised that to get high-quality writing, thinking and words they would have to steal our books. Books, as everyone knows, are property. They are written, often over years, and licensed for production to publishers, and the rental returns to authors are called royalties. No one will write them if they can be immediately stolen.
Copyright law rightfully has its critics, but its core protections have enabled the flourishing of book creation and the book business, and the wide (free but not “for free”) transmission of ideas. Australian law says you can quote a limited amount from a book, which must be attributed (otherwise it’s plagiarism). You cannot take a book, copy it entirely and become its distributor. That is illegal. If you did, the author and the publisher would take you to court.
Yet what is categorically disallowed for humans is being seriously discussed as acceptable for the handful of humans behind AI companies and their (not yet profit-making) machines.
To the extent they care, tech companies try to argue the efficiency or necessity of this theft rather than having to negotiate consent, attribution, appropriate treatment and a fee, as copyright and moral rights require. No kidding. If you are setting up a business, in farming or mining or manufacturing or AI, it will indeed be more efficient if you can just steal what you need – land, the buildings someone else constructed, the perfectly imperfect ideas honed and nourished through dedicated labour, the four corners of a book that ate a decade.
Under the banner of progress, innovation and, most recently, productivity, the tech industry’s defence distils to “we stole because we could, but also because we had to”. This is audacious and scandalous, but it is not surprising. What is surprising is the credulity and contortions of Australia’s political class in seriously considering retrospectively legitimising this flagrantly unlawful behaviour.
The Productivity Commission’s proposal for legalising this theft is called “text and data mining” or TDM. Socialised early in the AI debate by a small group of tech lobbyists, TDM carries an open secret: even its proponents considered it an absolute long shot that would not be taken seriously by Australian policymakers.
Devised primarily as a mechanism to support research over large volumes of information, TDM is entirely ill-suited to the context of unlawful appropriation of copyright works for commercial AI development. Especially when it puts at risk the 5.9% of Australia’s workforce in creative industries and, speaking of productivity, the $160bn national contribution they generate. The net effect, if adopted, would be that the tech companies could continue to take our property without consent or payment, but additionally without the threat of legal action for breaking the law.
Let’s look at just who the Productivity Commission would like to give this huge free-kick to.
Big Tech’s first fortunes were made by stealing our personal information, click by click. Now our emails can be read, our conversations eavesdropped on, our whereabouts and spending patterns tracked, our attention frayed, our dopamine manipulated, our fears magnified, our children harmed, our hopes and dreams plundered and monetised.
The values of the tech titans are not only undemocratic, they are inhumane. Mark Zuckerberg’s empathy atrophied as his algorithm expanded. He has said, “A squirrel dying in front of your house may be more relevant to you right now than people dying in Africa.” He now openly advocates “a culture that celebrates aggression” and for even more “masculine energy” in the workplace. Eric Schmidt, former head of Google, has said, “We don’t need you to type at all. We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”
The craven, toadying, data-thieving, unaccountable broligarchs we saw lined up on inauguration day in the US have laid claim to our personal information, which they use for profit, for power and for control. They have amply demonstrated that they do not have the flourishing of humans and their democracies at heart.
And now, to make their second tranche of fortunes under the guise of AI, this sector has stolen our work.
Our government should not legalise this outrageous theft. It would be the end of creative writing, journalism, long-form nonfiction and essays, music, screen and theatre writing in Australia. Why would you work if your work can be stolen, degraded, stripped of your association, and made instantly and universally available for free? It will be the end of Australian publishing, a $2bn industry. And it will be the end of us knowing ourselves by knowing our own stories.
Copyright is in the sights of the technology firms because it squarely protects Australian creators and our national engine of cultural production, innovation and enterprise. We should not create tech-specific regulation to give it away to this industry – local or overseas – for free, and for no discernible benefit to the nation.
The rub for the government is that much of the mistreatment of Australian creators involves acts outside Australia. But this is all the more reason to reinforce copyright protection at home. We aren’t satisfied with “what happens overseas stays overseas” in any other context – whether we’re talking about cars or pharmaceuticals or modern slavery. Nor should we be when it comes to copyright.
Over the last quarter-century, tech firms have honed the art of win-win legal exceptionalism. Text and data mining is a win if it becomes law, but it’s a win even if it doesn’t — because the debate itself has very effectively diverted attention, lowered expectations, exhausted creators, drained already meagrely resourced representatives and, above all, delayed copyright enforcement in a case of flagrant abuse.
So what should the government do? It should strategise, not surrender. It should insist that any AI product made available to Australian consumers demonstrate compliance with our copyright and moral rights regime. It should require the deletion of stolen work from AI offerings. And it should demand the negotiation of proper – not token or partial – consent and payment to creators. This is a battle for the mind and soul of our nation – let’s imagine and create a future worth having.
Tools & Platforms
AI-related court cases surge in Beijing

A Beijing court has observed an increasing number of cases related to artificial intelligence in recent years, highlighting the need for collaborative efforts to strengthen oversight in the development and application of this advanced technology.
Since the Beijing Internet Court was established in September 2018, it has concluded more than 245,000 lawsuits. “Among them, cases involving AI have been growing rapidly, primarily focusing on issues such as the ownership of copyright for AI-generated works and whether the use of AI-powered products or services constitutes infringement,” Zhao Changxin, vice-president of the court, said on Wednesday.
He told a news conference that as AI empowers more industries, disputes involving this technology are no longer limited to the internet sector but are now widely permeating fields including culture, entertainment, finance, and advertising.
“The fast development of the technology has not only introduced new products and services, but also brought about new legal risks such as AI hallucinations and algorithmic problems,” he said, adding that judicial decisions should seek a balance between encouraging technological innovation and upholding social ethics.
In the handling of AI-related disputes, he emphasized that priority needs to be given to safeguarding individual dignity and rights. For example, the court last year issued a landmark ruling that imitating someone’s voice through AI without their permission constitutes an infringement of their personal rights.
He suggested that internet users enhance their legal awareness, and urged technology developers to strictly abide by the law to ensure the legality of their data sources and foundation models.
Meanwhile, he said that AI service providers should fulfill their information security obligations by promptly taking measures to halt the generation and transmission of illegal content, eliminate it, and make necessary corrections.
In addition, he called on judicial bodies to work with other authorities, including those overseeing cyberspace management, market regulation, and public security, to tighten supervision of AI applications, drawing clear boundaries of responsibility for technology developers and service providers.