Welcome, MedTech Professionals.
W43 Edition of The MedTech AI Monitor
Hey there,
Welcome to the inaugural edition of The MedTech AI Monitor, your quick update on what's going on in the AI world.
What's important, and what's not.
If you want to hear more about a certain topic, just reply to this email. It goes to my inbox, and I will respond.
Have a great rest of the week!
-Greg
"too long; didn't read":
- FDA's AI council formed, brings the band together
- FDA might be OK if you can't explain how your AI works
- AI-generated data can be used for clinical trials
- LLMs are hard to evaluate, future guidance needed
Walking the Tightrope: The FDA's AI Balancing Act
The FDA has been thrown headfirst into AI device regulation, and it's clear they’re taking this responsibility seriously. As AI becomes more ingrained in medical devices and drug development, the FDA is building a network of experts and councils to guide how this technology is used.
Who's in charge?
Right now, multiple FDA centers are contributing to AI regulation. You’ve got the Center for Drug Evaluation and Research (CDER), the Center for Devices and Radiological Health (CDRH), and even the Digital Health Center of Excellence. All of these groups are bringing their own expertise to the table, which sounds great in theory—but it also creates the risk of overlapping roles and a lot of red tape. The recent formation of CDER’s AI Council is supposed to help keep everything on track, but with so many players involved, I wonder if this might slow things down more than help.
I can't tell you...
One of the most interesting debates within the FDA is how to handle “black box” AI models—those where even the developers can’t fully explain how they work. The FDA has signaled that they’re open to these kinds of models as long as there’s other evidence to back them up. So, if an AI model produces reliable results that align with real-world data, it might be good enough, even if we don’t fully understand the process behind it.
“If you don’t understand how the model works, but you see concurrence between the output of the model and the data that you already have, then this is some sort of a validation, even though you don’t really understand how the model works.” -Hussein Ezzeldin, Senior Staff Fellow, CBER.
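To make that concrete, here's a minimal sketch of what a concurrence check could look like in practice. Everything in it (the agreement metric, the 0.9 threshold, the function and variable names) is my illustrative assumption, not an FDA-prescribed method:

```python
import numpy as np

def concurrence_check(model_predict, inputs, observed, threshold=0.9):
    """Compare a black-box model's outputs against real-world data we
    already trust. Strong agreement is evidence of validity even when
    the model's internals are opaque."""
    preds = model_predict(inputs)  # opaque model: we only see outputs
    agreement = float(np.mean(preds == observed))
    return agreement, agreement >= threshold

# Hypothetical usage with any opaque classifier and real-world records:
# agreement, passed = concurrence_check(black_box.predict, x_real, y_real)
```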
AI-generated clinical data
I attended an FDA workshop in August and was surprised by how much leeway was given to drug development and clinical "digital twins". The CEO of Unlearn.ai spoke about their AI-generated data, their PROCOVA statistical methodology, and how a trial participant's digital twin offers a probabilistic forecast of the clinical outcomes that participant would have if assigned to the control group.
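For the curious, here's my back-of-the-envelope reading of the core idea, sketched in Python on synthetic data: use the twin's forecast as a prognostic covariate in the primary analysis. The numbers and the simple OLS setup are my assumptions for illustration, not Unlearn's actual implementation:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for real trial measurements.
treatment = rng.integers(0, 2, n)    # 1 = treated, 0 = control
twin_forecast = rng.normal(size=n)   # digital twin's predicted control outcome
outcome = 0.5 * treatment + 0.8 * twin_forecast + rng.normal(scale=0.5, size=n)

df = pd.DataFrame({"treatment": treatment,
                   "twin_forecast": twin_forecast,
                   "outcome": outcome})

# ANCOVA with the twin's forecast as a prognostic covariate: the forecast
# absorbs between-patient variance, tightening the treatment-effect estimate.
fit = smf.ols("outcome ~ treatment + twin_forecast", data=df).fit()
print(f"treatment effect: {fit.params['treatment']:.2f} "
      f"(SE {fit.bse['treatment']:.2f})")
```

The appeal, as I understand it: because the forecast explains a chunk of outcome variability, a trial can detect the same effect with fewer control-arm patients.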
Tweet of the week
My main takeaway after reading the JAMA paper:
"Special mechanisms to evaluate large language models and their uses are needed."
Think ChatGPT-style tools that talk to patients or help with a medical device's development. The FDA still has a way to go when it comes to evaluating GenAI LLMs, including post-market performance monitoring.
i.e., what happens after approval/clearance when the underlying LLM changes?
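If I were scoping that problem, one starting point might be a frozen regression set that gets re-run whenever the model updates. A toy sketch, where every prompt, scorer, and threshold is a hypothetical assumption on my part rather than an FDA-required process:

```python
# Bare-bones post-market harness: re-run a frozen prompt set against each
# new model version and flag answers that drift from the cleared baseline.

REFERENCE_PROMPTS = {
    "dose_question": "What is the maximum daily dose described in the IFU?",
    "triage_question": "A patient reports chest pain. What should they do?",
}

def score_response(expected: str, actual: str) -> float:
    """Toy scorer: word overlap with the baseline answer. A real harness
    would use clinical review or a validated rubric."""
    e, a = set(expected.lower().split()), set(actual.lower().split())
    return len(e & a) / max(len(e), 1)

def audit(llm_call, baseline: dict, threshold: float = 0.8) -> list:
    """Return the prompts where the new model version diverges from the
    answers captured at clearance time."""
    return [name for name, prompt in REFERENCE_PROMPTS.items()
            if score_response(baseline[name], llm_call(prompt)) < threshold]
```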
Balancing act
AI/ML in MedTech devices and development is only getting more complex, and the FDA is moving quickly to ensure it's used responsibly. But with so many initiatives popping up, they'll need to make sure everyone's on the same page. The key will be finding that sweet spot where regulation encourages innovation without creating unnecessary barriers.
Let’s see how they manage to strike that balance.
If you haven't checked out these sign-up resources yet, take a look.
Pressure-test your Data Compliance Measures (exercise) [pdf]
StackSafe ChatGPT Prompt Method [pdf]
MIT AI Risk Repository [excel]