AI Research

Ketryx Closes $39M Series B Round to Power the Future of Regulated Artificial Intelligence for Life Sciences


Insider Brief

  • Ketryx raised $39M Series B led by Transformation Capital, with participation from Lightspeed, MIT’s E14 Fund, Ubiquity Ventures, and 53 Stations, bringing total funding to over $55M; Vinay Shah of Transformation Capital joins the board.
  • Its AI-native compliance platform automates validation, traceability, and regulatory workflows (FDA/EU MDR-ready), enabling life sciences teams to achieve up to 90% faster documentation and 10x quicker release cycles without sacrificing safety.
  • Already used by three of the top five global medtech companies and innovators like DeepHealth and Heartflow, Ketryx is positioning itself as the key AI infrastructure layer for regulated product development in healthcare and beyond.

Ketryx, the AI-powered compliance platform helping life sciences companies ship safer products faster, has announced a $39 million Series B led by Transformation Capital, with participation from existing investors including Lightspeed Venture Partners, MIT’s E14 Fund, Ubiquity Ventures, and 53 Stations. This latest round brings the company’s total funding to over $55 million, and Vinay Shah, Partner and Founding Team Member at Transformation Capital, will join Ketryx’s board.

Ketryx is solving one of the most difficult challenges in the life sciences: the need to accelerate product innovation without compromising safety or compliance. This challenge is more urgent than ever with teams racing to incorporate AI into regulated workflows and products.

“I’ve spent the last decade at the intersection of AI and life sciences, watching it evolve from an emerging tool to a critical application for patients,” said Erez Kaminski, CEO and founder of Ketryx. “It’s now time to accelerate adoption and ensure AI is safe, reliable, and ready for regulated environments.”

Life sciences teams are struggling to balance rigorous compliance requirements with the rapidly accelerating pace of innovation. While cloud-based tools and rapidly evolving LLMs are transforming what’s possible, these regulated teams are still operating on infrastructure not designed for this velocity of change.

Ketryx is an AI-native compliance platform built to meet this challenge. It automates validation, traceability, and regulatory workflows — including FDA/EU MDR-ready documentation — across the product development lifecycle to help teams release safer products faster. Customers report up to a 90% reduction in documentation time and over 10x faster release cycles.

“In Medtech, long-term success depends on balancing innovation with the uncompromising demands of safety and compliance,” said Bill Hawkins, former CEO of Medtronic and new Ketryx investor. “This balance has historically been hard to achieve. Ketryx has built the infrastructure that allows both to advance together. Their ability to deliver this level of rigor at true enterprise scale is why I’m proud to support them as they shape the future of regulated software.”

The company’s platform is built for the enterprise and already used by three of the top five global medtech companies, several Fortune 500 organizations, and AI-powered companies such as DeepHealth, Heartflow, and Aignostics. With adoption accelerating, Ketryx is emerging as the key AI infrastructure layer for product development in regulated industries.

“Medtech teams are leading the way in applying artificial intelligence to improve patient outcomes, creating products that meet the highest safety and regulatory standards,” said Vinay Shah, Partner and Founding Team Member at Transformation Capital. “In our diligence, Fortune 500 giants and fast-growing innovators consistently praised Ketryx for proving that compliance can accelerate, rather than slow, technological progress. We believe Ketryx is defining the future of regulated infrastructure across industries and are proud to back them in their next stage of growth.”

Kaminski continued, “Having Transformation Capital, the pre-eminent digital health VC & growth equity firm, as our lead partner, gives us more than just capital. They understand exactly what it takes to build and scale healthcare technology companies. With their backing and industry connections, we’re continuing our global expansion, accelerating our product roadmap, and hiring rapidly in both Boston and Austria.”

With real-time traceability and documentation, Ketryx brings zero-lag compliance to the heart of product development, empowering teams to release more products, faster and more safely than ever before.

About Ketryx
Ketryx transforms the product lifecycle of life science teams to deliver safer products, faster. Trusted by three of the world’s top five medical device manufacturers, its AI-powered compliance platform overlays existing tools to automate documentation, create traceability, and accelerate release cycles — without disrupting existing workflows. Ketryx AI Agents cut manual work by 90 percent and close compliance gaps, elevating speed and quality across the entire product lifecycle. For more information, visit www.ketryx.com.


AI Research

As they face conflicting messages about AI, some advice for educators on how to use it responsibly



When it comes to the rapid integration of artificial intelligence into K-12 classrooms, educators are being pulled in two very different directions.

One prevailing media narrative stokes such profound fears about the emerging strengths of artificial intelligence that it could lead one to believe it will soon be “game over” for everything we know about good teaching. At the same time, a sweeping executive order from the White House and tech-forward education policymakers paint AI as “game on” for designing the educational system of the future.

I work closely with educators across the country, and as I’ve discussed AI with many of them this spring and summer, I’ve sensed a classic “approach-avoidance” dilemma — an emotional stalemate in which they’re encouraged to run toward AI’s exciting new capabilities while also made very aware of its risks.

Even as educators are optimistic about AI’s potential, they are cautious and sometimes resistant to it. These conflicting urges to approach and avoid can be paralyzing.

Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education.

What should responsible educators do? As a learning scientist who has been involved in AI since the 1980s and who conducts nationally funded research on issues related to reading, math and science, I have some ideas.

First, it is essential to keep teaching students core subject matter — and to do that well. Research tells us that students cannot learn critical thinking or deep reasoning in the abstract. They have to reason and critique on the basis of deep understanding of meaningful, important content. Don’t be fooled, for example, by the notion that because AI can do math, we shouldn’t teach math anymore.

We teach students mathematics, reading, science, literature and all the core subjects not only so that they will be well equipped to get a job, but because these are among the greatest, most general and most enduring human accomplishments.

You should use AI when it deepens learning of the instructional core, but you should also ignore AI when it’s a distraction from that core.

Second, don’t limit your view of AI to a focus on either teacher productivity or student answer-getting.

Instead, focus on your school’s “portrait of a graduate” — highlighting skills like collaboration, communication and self-awareness as key attributes that we want to cultivate in students.

Much of what we know in the learning sciences can be brought to life when educators focus on those attributes, and AI holds tremendous potential to enrich those essential skills. Imagine using AI not to deliver ready-made answers, but to help students ask better, more meaningful questions — ones that are both intellectually rigorous and personally relevant.

AI can also support student teams by deepening their collaborative efforts — encouraging the active, social dimensions of learning. And rather than replacing human insight, AI can offer targeted feedback that fuels deeper problem-solving and reflection.

When used thoughtfully, AI becomes a catalyst — not a crutch — for developing the kinds of skills that matter most in today’s world.

In short, keep your focus on great teaching and learning. Ask yourself: How can AI help my students think more deeply, work together more effectively and stay more engaged in their learning?

Related: PROOF POINTS: Teens are looking to AI for information and answers, two surveys show

Third, seek out AI tools and applications that are not just incremental improvements, but let you create teaching and learning opportunities that were impossible to deliver before. At the same time, look for education technologies that are committed to managing risks around student privacy, inappropriate or inaccurate content and data security.

Such opportunities for a “responsible breakthrough” will be a bit harder to find in the chaotic marketplace of AI in education, but they are there and worth pursuing. Here’s a hint: They don’t look like popular chatbots, and they may arise not from the largest commercial vendors but from research projects and small startups.

For instance, some educators are exploring screen-free AI tools designed to support early readers in real-time as they work through physical books of their choice. One such tool uses a hand-held pointer with a camera, a tiny computer and an audio speaker — not to provide answers, but to guide students as they sound out words, build comprehension and engage more deeply with the text.

I am reminded: Strong content remains central to learning, and AI, when thoughtfully applied, can enhance — not replace — the interactions between young readers and meaningful texts without introducing new safety concerns.

Thus, thoughtful educators should continue to prioritize core proficiencies like reading, math, science and writing, and use AI only when it helps to develop the skills and abilities prioritized in their desired portrait of a graduate. By adopting ed-tech tools that are focused on novel learning experiences and committed to student safety, educators will lead us to a responsible future for AI in education.

Jeremy Roschelle is the executive director of Digital Promise, a global nonprofit working to expand opportunity for every learner.

Contact the opinion editor at opinion@hechingerreport.org.

This story about AI in the classroom was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.

The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

Join us today.




AI Research

Now Artificial Intelligence (AI) for smarter prison surveillance in West Bengal – The CSR Journal





AI Research

OpenAI business to burn $115 billion through 2029, The Information reports



OpenAI CEO Sam Altman walks on the day of a meeting of the White House Task Force on Artificial Intelligence (AI) Education in the East Room at the White House in Washington, D.C., U.S., September 4, 2025.

Brian Snyder | Reuters

OpenAI has sharply raised its projected cash burn through 2029 to $115 billion as it ramps up spending to power the artificial intelligence behind its popular ChatGPT chatbot, The Information reported on Friday.

The new forecast is $80 billion higher than the company previously expected, the news outlet said, without citing a source for the report.

OpenAI, which has become one of the world’s biggest renters of cloud servers, projects it will burn more than $8 billion this year, some $1.5 billion higher than its projection from earlier this year, the report said.

The company did not immediately respond to Reuters’ request for comment.

To control its soaring costs, OpenAI will seek to develop its own data center server chips and facilities to power its technology, The Information said.

OpenAI is set to produce its first artificial intelligence chip next year in partnership with U.S. semiconductor giant Broadcom, the Financial Times reported on Thursday, saying OpenAI plans to use the chip internally rather than make it available to customers.

The company deepened its tie-up with Oracle in July with a planned 4.5 gigawatts of data center capacity, building on its Stargate initiative, a project of up to $500 billion and 10 gigawatts that includes Japanese technology investor SoftBank. OpenAI has also added Alphabet’s Google Cloud among its suppliers for computing capacity.

The company’s cash burn will more than double to over $17 billion next year, $10 billion higher than OpenAI’s earlier projection, with a burn of $35 billion in 2027 and $45 billion in 2028, The Information said.
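The reported figures can be sanity-checked with quick arithmetic. The per-year numbers below are the minimums quoted above (each is given as “more than” or “over”), and the 2029 figure is not stated in the report, so it is inferred here only as the residual needed to reach the $115 billion total:

```python
# Minimum annual cash burn reported for OpenAI, in billions of USD,
# per The Information's figures quoted above.
burn = {2025: 8, 2026: 17, 2027: 35, 2028: 45}

total_through_2028 = sum(burn.values())
# The $115B projection runs through 2029, so at these minimums roughly
# $10B or more of additional burn would fall in 2029 (not stated in the report).
implied_2029 = 115 - total_through_2028

print(total_through_2028, implied_2029)  # prints: 105 10
```

Since each reported figure is a floor, the actual implied 2029 burn could be lower than this residual suggests.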

Read the complete report by The Information here.


