MedTech AI Monitor W43


Welcome, MedTech Professionals.


Hey there,

Welcome to the inaugural edition of the MedTech AI Monitor, where you get a quick update on what is going on in the AI world.

What's important, and what's not.

If you want to hear more about a certain topic, just reply to this email. It goes to my inbox, and I will respond.

Have a great rest of the week!

-Greg


"too long; didn't read":

  • FDA's AI council formed, brings the band together
  • FDA might be OK if you don't know how AI works
  • AI-generated data can be used for clinical trials
  • LLMs are hard to evaluate, future guidance needed

Walking the Tightrope: The FDA's AI balancing act.

The FDA has been thrown headfirst into AI device regulation, and it's clear they’re taking this responsibility seriously. As AI becomes more ingrained in medical devices and drug development, the FDA is building a network of experts and councils to guide how this technology is used.

Who's in charge?

Right now, multiple FDA centers are contributing to AI regulation. You’ve got the Center for Drug Evaluation and Research (CDER), the Center for Devices and Radiological Health (CDRH), and even the Digital Health Center of Excellence. All of these groups are bringing their own expertise to the table, which sounds great in theory—but it also creates the risk of overlapping roles and a lot of red tape. The recent formation of CDER’s AI Council is supposed to help keep everything on track, but with so many players involved, I wonder if this might slow things down more than help.

I can't tell you...

One of the most interesting debates within the FDA is how to handle “black box” AI models—those where even the developers can’t fully explain how they work. The FDA has signaled that they’re open to these kinds of models as long as there’s other evidence to back them up. So, if an AI model produces reliable results that align with real-world data, it might be good enough, even if we don’t fully understand the process behind it.

“If you don’t understand how the model works, but you see concurrence between the output of the model and the data that you already have, then this is some sort of a validation, even though you don’t really understand how the model works.” -Hussein Ezzeldin, Senior Staff Fellow, CBER.

AI-generated clinical data

I attended an FDA workshop in August and was surprised at how much leeway drug development and clinical "digital twins" were given. The CEO of Unlearn.ai spoke about their AI-generated data, their PROCOVA statistical methodology, and how a trial participant’s digital twin offers a probabilistic forecast of that participant’s future clinical outcomes if they were assigned to the control group.
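For the statistically curious, the core idea behind prognostic covariate adjustment is simple: use the digital twin's forecast of each participant's control-group outcome as a covariate when estimating the treatment effect. Here is a minimal sketch on simulated data (the numbers, variable names, and the plain least-squares fit are my own illustration, not Unlearn.ai's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated trial: each participant has a digital-twin "prognostic score",
# i.e. the model's forecast of their outcome if assigned to control.
prognostic_score = rng.normal(50.0, 10.0, n)
treatment = rng.integers(0, 2, n)        # 1 = treated, 0 = control
true_effect = 5.0
outcome = prognostic_score + true_effect * treatment + rng.normal(0.0, 5.0, n)

# Prognostic-covariate-adjusted estimate: ordinary least squares of the
# outcome on the treatment indicator plus the prognostic score.
X = np.column_stack([np.ones(n), treatment, prognostic_score])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
effect_estimate = beta[1]                # adjusted treatment-effect estimate
print(f"estimated treatment effect: {effect_estimate:.2f}")
```

Because the twin's forecast soaks up much of the between-patient variance, the adjusted estimate is more precise than a plain difference in group means, which is the pitch for running smaller control arms.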

Tweet of the week

After reading the JAMA paper, my main takeaway:

"Special mechanisms to evaluate large language models and their uses are needed."

Think ChatGPT that talks to patients or helps with a medical device's development. The FDA still has a way to go when it comes to evaluating GenAI LLMs, including post-market performance monitoring.

i.e., what happens after approval/clearance when the LLM changes?

Balancing act

AI/ML devices and development in MedTech are only getting more complex, and the FDA is moving quickly to ensure the technology is used responsibly. But with so many initiatives popping up, they’ll need to make sure everyone’s on the same page. The key will be finding that sweet spot where regulation encourages innovation without creating unnecessary barriers.

Let’s see how they manage to strike that balance.


If you haven't checked out these resources since signing up, take a look.

Pressure-test your Data Compliance Measures (exercise) [pdf]

StackSafe ChatGPT Prompt Method [pdf]

MIT AI Risk Repository [excel]
