FAMEcast

A Faculty Development Podcast from The Ohio State University College of Medicine


The Practical Use of AI for Clinician-Scientists – FAMEcast 010

August 19, 2025 by FAMEcast

Show Notes

Description

Dr Carmen Quatman visits the studio as we consider the evolving role of artificial intelligence in clinical research. Discover how AI can help us write letters, articles, and chapters… summarize long documents, structure presentations, improve communications, and collaborate with data teams. We also explore the ethical use—and limitations—of your AI research assistant. Please join us!

Topics

Artificial Intelligence
Clinical Research

Learning Objectives

At the end of this activity, participants should be able to:

  1. Describe the practical applications of AI tools in the context of clinical research tasks.
  2. Demonstrate strategies for using AI to assist with writing, summarizing, and organizing content.
  3. Analyze the benefits and limitations of AI in supporting data interpretation and collaboration.
  4. Develop an individualized plan to experiment with AI tools in a safe, productive, and ethical manner.

Guest

Dr Carmen Quatman
Associate Professor of Orthopedic Surgery and Emergency Medicine
The Ohio State University College of Medicine

Links

Further Reading
NIH: Artificial Intelligence for Biomedical Research
NIH: Ethical Considerations of Using ChatGPT in Healthcare
NIH: Bridge2AI Program
JAMA: Comparing Physician and AI Chatbot Responses to Patient Questions
Johns Hopkins: Machine Learning and Artificial Intelligence in Healthcare
Nature: How ChatGPT and Other Tools Could Change Scientific Writing
Stanford: Human-Centered Artificial Intelligence

AI Tools and Platforms
ChatGPT (OpenAI)
Copilot (Microsoft)
Claude (Anthropic)
Research Rabbit
Elicit – The AI Research Assistant
Yomu.ai
Scite.ai

Episode Transcript

[Dr Mike Patrick]
This episode of FAMEcast is brought to you by the Center for Faculty Advancement, Mentoring and Engagement at the Ohio State University College of Medicine.

Hello everyone, and welcome once again to FAMEcast. We are a faculty development podcast from the Ohio State University College of Medicine.

This is Dr. Mike. It’s episode 10. We’re calling this one the Practical Use of AI for Clinician Scientists.

I want to welcome all of you to the program. We are continuing our exploration of the evolving role of artificial intelligence in academic medicine. You will recall that last time we met back in episode nine, we covered the use of AI in medical education.

Today, we shine a light on clinical research and discover how AI tools can help clinician scientists in their everyday work. Artificial intelligence is transforming clinical research, but not by replacing you and me, thankfully, at least not yet. And hopefully that won’t happen.

Instead, AI can help faculty get unstuck in their writing. It can help summarize long documents, structure presentations, improve communication, and partner more effectively with data teams. In this episode of FAMEcast, we’re going to explore how academic medical faculty can use AI as a thinking partner across the many stages of the research process, while remaining mindful of ethical considerations and AI’s limitations.

Of course, in our usual FAMEcast fashion, we have a terrific guest joining us in the studio to discuss the topic. Dr. Carmen Quatman is an Associate Professor of Orthopedic Surgery and Emergency Medicine at Ohio State. Before we get to her, I do want to remind you that the information presented in our podcast is for general educational purposes only, and your use of this audio program is subject to the FAMEcast Terms of Use Agreement, which you can find at FAMEcast.org.

So, let’s take a quick break. We’ll get Dr. Carmen Quatman settled into the studio, and then we will be back to talk about the evolving role of artificial intelligence in clinical research. It’s coming up right after this.

Dr. Carmen Quatman is an orthopedic surgeon at the Ohio State University Wexner Medical Center and an Associate Professor of Orthopedics and Emergency Medicine at the Ohio State University College of Medicine. In addition to specializing in orthopedic trauma care and geriatric orthopedics, she is also passionate about clinical research and the appropriate and ethical use of AI to support her work. That is what she’s here to talk about, the practical use of AI for clinician scientists.

Before we dive into our topic, let’s pause and offer a warm FAMEcast welcome to Dr. Carmen Quatman. Thank you so much for stopping by the studio today.

[Dr Carmen Quatman]
Yeah, thank you so much for the opportunity. I’m really excited to participate in this.

[Dr Mike Patrick]
Yeah, we are excited to have you here as well. In our last episode, we talked about just sort of in general using AI in academic medicine, with a particular focus on education and advocacy and clinical care. So, let’s talk a little bit about researchers and scientists.

What is AI and why should clinicians and researchers care about it?

[Dr Carmen Quatman]
Yeah, I love this topic. The caveat is I would definitely not dub myself an expert by any means, but I'm a highly enthusiastic experimenter, having a lot of fun learning new techniques as things evolve. So, for me, rather than diving into a really deep conversation about it, I really think AI is a tool, just like any other tool I use in my clinical and research practice, whether it's a stats tool or some other type of thing.

But what’s really neat is it’s very dynamic, and there’s a lot of opportunity to experiment even within the tools, whether you’re using them for writing or learning purposes. I also use AI in my clinical practice with ambient listening, and I love how efficient that helps me be. I’m not sure if that dovetailed with the last session, but I pretty much use it all day every day, all the way down to very simplistic things such as asking for advice about making a recipe.

So, I do use AI in every capacity that I can possibly think of. If I don’t know if it will work, I give it a try and see what it might look like. And sometimes I’ve really used it a lot in my own environment.

I love to laser engrave, for example. So, like sometimes I like to come up with fun sayings and I love iterating in different types of large language models to see what types of things I can come up with beyond my own creative thinking. So, I just think it’s a really fun tool that I leverage every day.

[Dr Mike Patrick]
Yeah, absolutely. And I love the idea of just sort of experimenting, because it makes it easier when you actually have a task you need to do. If you have some background, you’ve used it, you’ve played with it on maybe some not so serious things just to get comfortable with the way that it works and how it responds and that sort of thing.

As a clinician scientist, I’m sure you do a lot of writing, and it seems that AI is really well-suited to helping writers in terms of fine-tuning things or outlining things or writing the first draft. How do you use AI in your scientific writing?

[Dr Carmen Quatman]
You know, I think of myself as a well-trained writer. I’ve been writing for years, but I’ve always used different techniques, including asking others. I love writing in a team room where you can basically shout out, I’m trying to think of a word that says this and captures this concept.

Throw me some ideas. I would use that resource a lot to really enhance my writing. But the way I use AI and different large language models specifically for my writing is more like a copy editor, a brainstorming buddy, a conceptual whiteboard where I can basically throw ideas up there and see what sticks.

I think a lot of people have misconceptions about the ways you can abuse it, and there’s a lot of nervousness being put out there about the way the technology is iterating. I think the biggest piece of this is the human oversight, and I trust myself in how I apply that oversight. I think deeply about what it’s producing, so most of the time I’m writing on my own, and then I’ll take that and put it into, for example, ChatGPT and say, please make this more concise and succinct.

So, it’s my words with a copy editor on top of it, and I do it in small batches. I would never upload an entire document and say, edit this. I usually do it in small batches because, one, I think it’s a lot more concise and accurate when I do that, and two, it makes sure that I don’t abuse the situation.

So, making sure that it stays my phrasing, my words. Like I said, that writing buddy piece is huge. And I think it’s so important, because one of the areas I’ve struggled with the most is perfectionism.

And so now I can essentially use my own words and thought processes, and have it edit in the moment, so that I don’t get all tripped up for an hour trying to go back and rework my sentences because one is a run-on or the wording isn’t quite right. It just allows me to be a lot more free with my thought process as I go.

[Dr Mike Patrick]
Yeah, yeah, absolutely. As we talked about in our last episode, AI is only as good as the data that it trains on. And with ChatGPT training basically on the Internet and billions of documents, it’s going to come across things that are evidence-based and trustworthy, and things that are not so evidence-based and trustworthy.

Have you played with any of the tools whose training database is actually PubMed and other research sources? So it’s a little smarter, and not as prone to hallucination, I should say.

[Dr Carmen Quatman]
I have really tried to experiment a little bit with each different tool. And the hard part is, if you don’t like one, you always have to go back later and try it again to see if it still behaves that way. But I think my biggest worry is hallucination.

So, I have rarely used it for doing full scoping reviews or pulling papers. I use it almost like you would use Google: you can get a little bit of information, but you have to do your own fact checking. So, if it does pull a paper, for example, you must fact check that it actually pulled it.

It’s only as good as that. Even when you use PubMed for a keyword search, you have to apply human oversight to say, yes, this abstract fits or it does not fit.

You must do that in any of these tools. One of the examples I’ve given: I was trying to create a patient-facing presentation, and I was using a large language model to help generate the imagery.

And no matter how hard I tried, it would not generate a diverse-looking group of people. It was always a white male physician. It was always a white male patient.

And every time I would try to iterate, I’d have to go down that same pipeline, because I think a lot of what was generated came from previous images like that. It created a missed opportunity, had I not been really diligent about redoing it. If I wanted to create a natural, healthy-looking person, I would get a picture of Barbie, right?

It was not matching what I feel is representative. And I think that was a really good example of how it will only do what it’s trained on. For years and years, a lot of the training data either did not include diverse populations or drew on very narrow populations.

And that’s what always comes out: it’s only as good as that training data. It’s great in so many ways, but if we don’t use a realistic, pragmatic approach, we’re going to continue to evolve bad science. I have a love-hate relationship with meta-analyses for that exact reason.

Meta-analyses can be really great, because we can generate a larger sample to really ask hard questions. At the same time, if we didn’t collect that data ourselves, we don’t really know its limitations. We could just be perpetuating bad findings, and so it really requires us to use that level of scrutiny in application.

As a clinician, I would never take a randomized controlled trial as my only basis for decision making with a patient. Many of those randomized controlled trials were literally designed to be super rigorous so that we can understand the mechanism, and many of my patients never fall into the sampling used for the trial. So, if I only applied that rigorous trial research, I would not only miss an opportunity, but I could be applying that information wrong.

So, no matter how I use these tools, that’s how I try to think about it, is that it’s really only as good as my oversight and clinical judgment that I apply to that, that information that comes out.

[Dr Mike Patrick]
Yeah. As you’re doing research in the literature before you embark on something, when you have a clinical question that you want to answer, have you used AI to summarize long documents? Like putting in the PDF and saying, hey, give me a brief synopsis. Then, if you decide, hey, this is going to be a helpful paper, doing the thorough reading, rather than spending half an hour reading something and realizing it’s worthless.

[Dr Carmen Quatman]
So, I actually haven’t used it a lot myself. A lot of my trainees have been doing that and trying to find more efficient techniques, but I secretly have a love for language. And sometimes I really like looking at the way authors phrase something in a sentence.

And I use that almost as a guide myself. I’m still the print-it-out-and-highlight-it person. I can’t get rid of that love for language sometimes.

So, I think I’ve just had years of training where I’ve trained my eye to look for the things I’m looking for. It’s just like reading an abstract. An abstract can help me screen whether a paper is going to be useful or not.

But a lot of times I want to look at the figures myself. Some of the richest things are in the figure designs that people put together. If I lose the ability to see the deeper way they’ve constructed the storytelling… I just haven’t been able to give that up.

But I do fall into the trap of sometimes getting down in the deep, deep weeds. And I think there would be an opportunity for me to stay more focused if I did use the technology in that way.

[Dr Mike Patrick]
Yeah. You make a good point about your trainees being more on board with using these tools. Through our training and experience, our brains have settled into a particular way of approaching the literature. But for folks for whom this is what they know right out of the gate, it’s really going to be a game changer, don’t you think?

[Dr Carmen Quatman]
I, as long as…

[Dr Mike Patrick]
Love it or hate it.

[Dr Carmen Quatman]
Yeah, I think as long as we give them the opportunity to think critically first. You know, there’s even functional MRI research coming out about how people are losing their critical thinking skills. We need to make sure they understand that part of being a trained scientist is that critical thinking piece. It could make us much more effective, much more streamlined.

One of the things I think about is the review processes, or even writing my IRBs and things like that, which are just a slog to get through. If we could reverse engineer it so that most of it gets screened first, and then human oversight comes in, that would be really powerful. But I think it still starts at the heart of what we know. You can’t provide good human oversight if you don’t have that critical thinking, reasoned approach, to know that what is being proposed is actually practical, pragmatic, and applicable.

And so, I love it for our trainees, but I still start with: you must know the limitations, and that has to be at the number one forefront. As a surgeon, I think about how you must know the anatomy and all the danger zones before you start making your incision smaller or trying to do the cutesy types of things. I feel the same way about science.

We must know the critical junctions and the biggest limitations we have before we carve in deeper. So as long as trainees know that, and they have that foundational expectation and ethics approach, I feel like it’s an amazing opportunity.

[Dr Mike Patrick]
What about brainstorming research questions or aim statements or have you had an opportunity where AI maybe switched your thinking a little bit on a particular topic?

[Dr Carmen Quatman]
All the time. Again, I’ve always been a fan of the whiteboard in the background to keep me on track, to mind map and think things through. I would say the thing that’s really helped me the most: there’s such an art to creating an aim.

If you just say, make an aims page about this topic, it will generate something that at first glance might look really decent, but if you actually carve into it, you’ll realize the things proposed are not only sometimes not possible, they’re very theoretical, or a very far-down-the-road possibility. But it’s great at taking a sentence I’m trying to make more active and diving a little deeper. Instead of just saying, I’m going to process the data, as your aim, you say, I’m going to develop this critical strategy for answering this question. So, I’m able to phrase it in a way where, in my heart, I know I’m really just processing the data, but I can make it a much more active aim.

For example, I love using it for coming up with the titles or the acronyms I need to tell the story. I think it’s really powerful in iterating. And I often use the phrase, tell me 10 examples, iterate 10 more, because if you use it enough, you’ll see that it has, like any human being, very clear tendencies to use the same phrasing.

I think the classic em dash has been totally terrorized on social media: don’t put in em dashes. And I actually love em dashes. I use them a lot in writing.

I find them to be a very effective way of transitioning. Now I never put them in anything, because I don’t want anybody to use that as their screening tool to say, they used AI for this. And I encourage nobody to put em dashes in if they’re applying to things, because I think people are using that as the number one signal to screen things out.

Or it loves the same five words that come up, like breaking through barriers and things like that. I can almost tell now, if I look at an abstract, whether they used AI for it. And it’s not that I’m opposed to it, because I love more exciting titles; they draw me in. But you definitely start to see the same ones.

So, you have to be a lot more thoughtful and go deeper into the iterations to make sure that you’re getting fun stuff. But I do the same thing for aims.

[Dr Mike Patrick]
Yeah. You know, speaking of voice, and being able to tell, oh, AI must have written this because of these patterns: if you have a collection of things that you have written, and you subscribe to an AI service, for example ChatGPT, which I subscribe to, it keeps a memory of what it’s doing for me, and I can sort of train it. You can have it read all of the abstracts that you have written in the past, and it will start to mimic your voice.

And the next thing you know, it really does sound like you wrote it.

[Dr Carmen Quatman]
It does a really good job. It’s impressively good for letter writing. I write a lot of letters of recommendation and a lot of support letters for grants; for the last grant, I had to do 20 letters of support.

And most of the time you’re drafting those for your teammates, because you know the grant and what you need. But I have loved it, because I find it still remains in my voice, but I can say, make it different. It’s the same letter draft, but then it’s made different, even down to the sentence structure, and I’m like, wow, this still sounds like I wrote it.

I have an identical twin sister, and it’s so funny when we team write, because we have very, very different writing styles.

You can definitely tell where she wrote and where I wrote, but I feel like if I fed that writing into ChatGPT, it would make it sound like one voice. I love that even though we’re identical twins who write very differently, ChatGPT can make us sound like the same person. One of the areas where I’ve found it to be incredibly effective: I partner with international collaborators for whom English is a second language, and when they send their first drafts, the translation can be really choppy and hard for us to team write with.

So, a lot of times I have them run it through ChatGPT before it comes to my table, so that we’re on the same page in terms of the structure they want. And then I can basically double check their thought process for what they were writing in each sentence, because they still might not catch everything in the translation from their native language. That’s been a real time saver for me and for them.

And I think it’s made them much better writers as they work in English, with English being a second language for them. So, it’s been very effective. I do recommend, if people haven’t seen how to make that persona in ChatGPT: in the settings, you can go in and customize it and make it sound like you.

It’s a beautiful way to have that persona match your expertise, your background, and your own writing.

[Dr Mike Patrick]
Yes, yes. And just to be fair, the other platforms do a similar thing, so Copilot and Grok and all the other ones that are out there, and the list is growing by the day. Often you do need a paid subscription for the more advanced training, so you can get it to do exactly what you want without having to refine it every time.

[Dr Carmen Quatman]
That’s, that’s a great point.

[Dr Mike Patrick]
Have you used AI to help understand statistics? I think for a lot of clinician scientists, like often that is a weakness for us is understanding statistical packages and really relying on our statisticians on our research teams. Have you used it to help you understand data a little better?

[Dr Carmen Quatman]
Yeah, I’ve done very simple things, like: make a table about this particular thing, contrast this versus this in terms of which stats package I should use, or the limitations of this, or which test would be right for this question. So, almost like a key collaborator in my decision-making process. And even more recently, I was in the deep weeds on a big, big grant, and the questions that came back in the critiques from the reviewers, clearly from scientists, were tripping me up.

So, just trying to understand what they were asking was extremely powerful. I’m like, oh, what they’re really asking is this, because my language doesn’t always match their language in terms of what I think they’re trying to get at. It allowed me to at least understand what I thought their questions were, then go back to what I needed to do, and even ask how I should respond.

It has been a very powerful way of redoing the power analysis, for example. I can dabble; I’m a translator at best.

I’m certainly not an expert in how to use stats, and that’s how I’ve used AI: to help educate me, and also to make sure that my responses will make sense to the person reading them. I’m responding back to a biostatistician whose expertise is in this; we’re talking and trying to come to consensus about what might be the right model to use. Here are my thoughts.

Here was the response I got. How would you respond back? That was a really powerful way for me to dive deep into the weeds and make sure I’m meeting them in the middle on what we need to do.

[Dr Mike Patrick]
Yeah. And then I think having a little humility, because if it does hallucinate, or it gives you something that’s incorrect but not in your wheelhouse, you’re far less likely to catch it.

When you’re writing about something that has to do with your own research, even if it’s going out and pulling a reference, you’re going to recognize that something is probably a hallucination, because it just doesn’t make sense. But that’s a lot harder when it’s something you’re not an expert in. For me, that would definitely be biostatistics.

[Dr Carmen Quatman]
Yeah. And in fact, I was super proud of myself because I did that. And then I took it to my biostatistician expert, and they were like, this is great.

How did you come up with that? I’m like, well, I had a buddy that helped me get to that sentence.

[Dr Mike Patrick]
Does yours have a name? Do you, do you have a name for your, for your iteration of ChatGPT?

[Dr Carmen Quatman]
I did at one point. I forget what it was. I did for a while, but I just refer to it as my buddy these days. I did name it at the beginning, because I thought it was so fun.

Yeah.

[Dr Mike Patrick]
Mine is Rose.

[Dr Carmen Quatman]
I love it. Well, you know, it was great in the beginning. We have a really large lab, and we actually had a little hackathon where we asked, how could we use this technology in the things we were doing?

And we were trying all kinds of things. We ordered pizza. We all had a lot of fun with it.

It just got us exploring and trying to think differently. We’ve written a couple of grants after our brainstorming that asked, how could we use this for learners in a way that would allow them to explore difficult scenarios? And we’re in the middle of revising one publication where we were basically like, this is a really hard, complex systems problem.

And so, we asked ChatGPT, with these prompts, how would you solve this problem? And then we took it, almost like a Delphi study, to other people and said, how do you rate these responses? And one of the coolest things that came out of it was the response.

It wasn’t even so much about the responses themselves, but how much people were drawn to other people’s reactions to the ChatGPT output. For example, one thing that really came out that I thought was fascinating: we were looking at how we can make sure we’re helping people feel like they belong. Orthopedics is a pretty non-diverse field.

We’re one of the lowest in parity. The senior people participating said, we would like to hear more about what trainees want to experience. And the trainees kept saying, we want to hear what advice is coming down from the senior people.

So, it was a real opportunity to demonstrate that there’s a unification. But the only reason we got there was that some of the prompts that came back from ChatGPT made them start going down that pathway. It was really, really cool to see how we were using it as a brain board to unite people, and then come back around for the dialogue we had amongst each other.

And so that was really, really fun. It made us explore things that never even came to mind for us, I think.

[Dr Mike Patrick]
Yeah, that is so interesting. I love that. And without AI, you may, I mean, you may have come to the same conclusion, but it might have taken a lot longer before you, before you figured that out.

[Dr Carmen Quatman]
I think it came up with really unique questions that might not have come top of mind for us, because we tend to follow some very standard, validated questionnaires. What it really did was make us think differently. It came up with strategies that I would never have thought we would come up with, you know?

[Dr Mike Patrick]
You know, you talked about struggling with IRB stuff and just the drudgery of it. How can AI help researchers with IRB protocols and, you know, getting things approved? Because that can take a lot of time.

[Dr Carmen Quatman]
One of the things I wish we would get to, and I was just having this discussion earlier today: IRB is really about ethics and safe research. And yet, with the human oversight, we end up in the weeds about something that’s not necessarily make or break. And if you get a new reviewer, you may submit the exact same protocol and get different feedback.

So not only is the review not systematic, even if I were just flipping populations, but if we’re diving into the weeds on things that don’t really matter, we miss opportunities. I wish our system here would let us do this: if they had a regulatory tool, you could put in your aims pages, for example, and have it spit out a draft.

And then with human oversight, I could go in, and we would be matching: I know what the IRB wants, I know what I want, it comes together, and then the human oversight comes in. They could basically say, are you meeting the minimum criteria? I almost hope that we come to an opportunity in the very near future where I create a bot, for example, for some of the aims pages, and for making our letters of support: we upload somebody’s biosketch, we upload the aims page, it mirrors it, and it creates a fairly good template.

That’s where I would like to see us get, so that when we are diving into the weeds, at least we’re doing it systematically, instead of relying on human oversight alone. There’s so much human error in that, too, right? If I submitted the same IRB, we’d get feedback from somebody else about something we didn’t hit.

Well, they may be absolutely right. But it wasn’t caught the first time. So, we have an opportunity to make sure all that stuff is caught up front.

And we could even run it through before we submit, so we’d know. I think that would really, really improve efficiency. So, almost reverse engineer the process.

[Dr Mike Patrick]
Yeah.

[Dr Carmen Quatman]
And that’s something that comes at the end.

[Dr Mike Patrick]
Yeah. And that’s something an IRB board could actually do, if they had an ambassador to set it up: put in the things that people submit, and then also put in what was changed, what they liked, what they didn’t like. Then the GPT can start to look at those relationships, analyze the data, and learn what this particular IRB board is looking for and what it’s not.

But that’s, gosh, that’s a research paper right there.

[Dr Carmen Quatman]
Yeah, yeah. Yeah, there’s so many weeds. I mean, I also see this, and I think it’s already going there.

In the journal review process, you're starting to see that happen. They're having a hard time getting people to say yes to reviews anymore, because, one, it's not incentivized, and time and bandwidth are our scarcest commodities.

So, if you're asking the expert without giving them some type of return on investment, we're seeing it get harder and harder to get reviews done. And I've gotten responses from decent journals where they come back and say, we haven't been able to find reviewers; sorry, we're going to reject it because we're out of time.

I mean, how devastating is that? I had never really seen that happen until recently. And so, we're going to have to come up with strategies that allow for at least a first screen, so that people's time and bandwidth are not wasted.

Because along with the rise of AI-assisted writing, journals are getting more and more submissions to keep up with. And so, we're going to have to find a way through this, for sure.

[Dr Mike Patrick]
Could reviewers use ChatGPT to help them write their review?

[Dr Carmen Quatman]
You know, now even on grants they're saying, don't use it. The problem is that the minute you're uploading something, the safety, compliance, and security of it get tough. I personally do use it for wordsmithing.

Like, I'll say, this is a harsh critique; I need to make this a little less harsh. So, I'm not uploading anything sensitive, but I am policing myself a little bit to make sure that my temperament, and how hangry I was when I attacked this review, don't come into the review process.

And I often find myself really happy with how the phrasing comes out. It's not so blunt. It's a lot more critique-like and not judgment-like.

And so, I really think there's a real opportunity there. We definitely need to make sure the security checks are in place to do it safely, with really strict guidelines. I almost imagine journals or grant agencies offering a platform, completely supervised by them, that allows you to at least make a review feel more professional and respectful.

Because we've all been on the receiving end; there are all kinds of memes about Reviewer 2. And I think there's a way to make things a lot more respectful and critique-like, getting rid of some of our human bias, if we have these techniques available. So, I don't know what that will look like, but I'm excited for what it could look like.

[Dr Mike Patrick]
Yeah. Yeah. Privacy kind of flows right from that in terms of the security checks.

And we also want to maintain patient privacy. If you just stick patient information into ChatGPT, it gets shared with the larger ChatGPT community, because now it's part of that GPT's training. But then there are closed systems. For example, at our institution at Nationwide Children's, we have a version of Copilot that is not just open to the internet.

So it's a little safer to put more sensitive data into, because it's a closed system, so to speak. In terms of using clinical data and looking through the EHR, how do you maintain privacy while also realizing that it's important data to consider in many research projects, especially clinical research?

[Dr Carmen Quatman]
I think that's an excellent point. You absolutely have to know your institution's guidelines, and you must know the platform you're using and the safety behind it. I never enter patient information into anything other than our approved platform; we have the same approved platform for Copilot.

And Copilot has some major strengths. It also has some major weaknesses. I have not found it to be as effective as a writing buddy, in my experience.

So, just know what the rules are. As researchers, we are supposed to be ethical. We go through our modules every year, we're ethically trained, we do CITI training; nothing has changed. As long as you're following the rules and you can feel safe about it, I think that's really important.

But I think that's what I mean about what our trainees need to know at the very beginning: what's the ethical, critical way of thinking through this before you apply it, before you just blanketly apply it to everything? We actually had a whole brainstorming session about the ins and outs of using it. For example, we used to just transcribe focus groups.

Then the Zoom platforms came around, where you could actually have it auto-transcribe for you, which is still not that great and still requires a lot of human oversight. And now we have these AI summaries that come out of that.

Whether that's in patient capture in the clinic or elsewhere, there are great things now getting captured in the EMR that were not, because documentation used to be based entirely on my recall, 12 to 24 hours later, because we're jam-packing our clinics. And I love the ability of the ambient listening to put out things I didn't even think about, and it does it in a way more professional way than I probably would have documented. When I did transcription, it didn't capture that way.

Transcription caught my errors and what I said out loud. For some reason, the AI summary seems to be really, really good, but you have to watch it. As an orthopedic surgeon, I'll say, there's a fracture line still present, callus remaining, and the AI summary says, bone healed, right? So there are some nuances that are still definitely missing.

And if you don't use the human oversight, you miss it. But man, does it capture great things, like the fact that the patient's going on a trip in a couple weeks, and that's really important to their improvement. It's a great reminder that I can ask them how that trip went, or if they went; things I probably wouldn't get in my documentation. And I think about it from a research perspective, and what that means.

And if I only use the AI summary for a focus group, I lose the richness and the ability to directly quote what was said. So, there's an opportunity for both. We have to acknowledge that when you're really trying to capture human emotion and the experiences you're getting, the AI summary is a summary at best, and the transcription can have such richness to it, but it has to be there.

So I even think about that with the clinical uses I have: even in how we capture the EHR, we're going to miss some richness that might have come out in a direct transcription. They're both augments, they're both tools, they're both opportunities that we just have to keep in mind as we move into this.

[Dr Mike Patrick]
And when you have AI components in your EMR, that does make it a little safer, you know? So, if I'm saying, hey, find relationships between this or that or the other thing, and Epic, for example, has an AI tool that allows you to do that, that can make it a lot easier. And you feel more secure about the fact that you're analyzing patient data.

[Dr Carmen Quatman]
Totally agree. In the world of medicine, we've been really good about chart reviews, but there's still so much human error in that extraction, whereas AI will be able to find the signal through the noise, I hope, in a way that allows us to really explore harder questions and more complex situations.

[Dr Mike Patrick]
As we've gone along, we've talked a lot about the downside in addition to the upside of using AI in clinical research. What are some of the risks that folks really need to keep top of mind when they're using these tools?

[Dr Carmen Quatman]
Well, most importantly, if you're using it in the clinical setting, it's making sure we get permissions right up front from the patients. I have yet to have a patient say no; not one. In fact, I make them part of the process, and I let the note generate, and it looks like magic to them.

But I think we have to keep that front of mind: the choice of the patient, if we're using it in the clinical setting. The same thing goes for research. There's definitely discomfort.

I work a lot with EMS research out in the field, with people who call 911. Even when we look at that data, there's a lot of missing data, because there's so much going on in the moment of decision making: missed blood pressure cuff readings, or things providers probably did that helped in their judgment but didn't get captured. And there are all kinds of tools coming out; they call it black box technology, like the black box they have on airplanes. It could be incredible from a learning perspective, what we can do there. They're bringing them into our own health system around ORs and EDs, to understand how we make decisions. And it's going to make us so much better.

And at the same time, being watched has a real impact. There's actually a phrase for that, the Hawthorne effect: we change our behavior the minute we know we're being watched and judged, and it can almost be paralyzing if you think you're being judged at that level. So, all these technologies coming out offer such a unique opportunity for us to think differently.

But in the moment of talking to my EMS providers about it, they were really nervous. They're like, oh my gosh, we're regulated so much that that will never happen. And I was kind of laughing to myself, because I used to think the same thing, and it's already happening.

So, I hope we all get to be part of the voice of how it's used and what it's used for, and the regulations and safety and ethics that go around it, rather than it just being applied blanketly, without thinking, in that way. We're really good at just applying new policies without understanding the ramifications. So be really thoughtful upstream, before you even use it as a research tool: what are the limitations and the barriers that we're seeing?

Where are the ethics coming out that we haven’t thought about? What is the harm in publishing this information without really strong oversight? What are we perpetuating forward that could hurt our community public health?

One of the things I always think about is that we put out all kinds of papers saying PT is not helpful, that physical therapy is not helpful for patients compared to just an exercise sheet. And I'm like, that's when everything falls to the mean. There are outliers in all directions.

And if we aren't really thoughtful about how we say that, everything will fall to the mean. The same thing goes for how we apply AI. We're training it on data that falls to the mean; everything that surfaces to the top does so because the algorithm intentionally surfaces it.

And yet we still have patients who will never be in those published studies; we have patients who are so complex. The research methods we've applied for years and years have been focused on mechanistically diving down to one narrow thing. So, we have to be extremely thoughtful about how we apply this, whether it's patient care, education, or teaching, and we have to make sure we are the voices that help design it.

And it's not just this blanket approach that it's going to save the world. There are going to be all these claims about how we're not going to need doctors. And as someone who is a doctor, I can't imagine what that means for my own care.

Like I just would be frightened. AI is not going to hold my hand when we’re making hospice decisions for my family. You know what I mean?

Like, it’s not going to do that.

[Dr Mike Patrick]
Yeah, absolutely. And the other point that you had previously brought up, about losing the ability to think critically, is frightening, especially in healthcare.

How can we continue, even though we're using AI, to make sure that we're still exercising our critical thinking?

[Dr Carmen Quatman]
I think it's a really important thought game. One way one of my colleagues is using it, for example: we don't have enough time and oversight to train people in critical decision making, but it sure could be amazing for scenario making and iterating on those scenarios. One of the earliest exercises we did for fun was: you have to have a conversation providing bad news to a patient; they now have a cancer diagnosis.

How would you go about approaching that? And then we asked the AI to grade us and give us feedback. There is so much opportunity for simulation in these tools, for things we may never be able to replicate, like what we call the after-action debrief in the OR. Imagine if learners had the opportunity to explore a tough situation afterward, or a thought process, or to say, grade me on my pre-plan for this surgery. Giving learners an opportunity to explore things they might feel really uncomfortable doing in real-world practice could truly transform how we do things.

But it's only as good as what we train it on, and we have to be very thoughtful about it. But it was really fun.

We did things like experimenting with a harsh answer versus a beautifully written answer, and the feedback would come back: you know, this is really long, and you used jargon that may not be at the expected reading level; try these types of things. And it generates that very quickly, right? So there is a huge opportunity in that space, whether it's critical thinking about your aims or an almost safe training platform. I don't even know that we're hitting the edge of how we could potentially use it.

But we still have to be very thoughtful. That's one of my favorite questions to ask: I'm trying to think about doing this; what are the things I should be thinking about, right?

And then thinking deeply: I didn't even think about that as a possible way to test that, or question that, or tie together very rare lab findings we're having. It will be able to think so much deeper than what we are doing, at the risk of us not using our common sense, you know?

[Dr Mike Patrick]
Yeah, I wonder what this conversation is like five years from now. It’s going to be a completely different conversation, and both exciting and maybe frightening as well. We’ll have to just wait and see.

[Dr Carmen Quatman]
I think, even in my own surgical practice, orthopedics is really known for constantly iterating implants and re-innovating. And it's a fascinating thing as a scientist, because we were so incremental in our thinking, and yet we're a little more liberal in our clinical practice for some of the things we do. A lot of times for good, I think, because it allows us to provide top-notch patient care, but it requires us to constantly be reevaluating: was that the right thing?

We can't just assume it's the right thing. We're going to be in this new battle of, this is great, and then retesting our assumptions will be critical.

[Dr Mike Patrick]
Yeah, absolutely. For someone who has not used AI at all in the past and is a clinical researcher, what are some good first steps for dipping your toes in the water?

[Dr Carmen Quatman]
Yeah, well, what I love, for example, is what we've been doing at Ohio State. There are some training opportunities in the Copilot world right now that are coming out, so you can jump right in; or try DAX. If you're a clinician or a researcher, pull up one of the platforms you've always wanted to try and just test it out in an environment where it's safe. You don't even have to tell anybody you're experimenting.

And maybe don't even take a big bite. Like, I'm planning my daughter's birthday party, and the theme is going to be this; give me 10 ideas, right? Just so you can see the types of things that come out of it, and how fast they come out.

Or try it as a writing buddy. It's a really simple thing to do. Like, I have an abstract that didn't get accepted, and I want to iterate on it.

It was 1,000 words there, and it needs to be 500 here. Cut this to 500 words, right? There are really easy ways to just see how well it can do for you.

And you might be really pleasantly surprised. So, my best advice is: if you're a scientist, that's your job; you're supposed to be experimenting, so have some fun experimenting. As a clinician, if you're really uncomfortable with it, find a friend who's using it and have them explain to you why they're sold on the technology.

And maybe just pick up a few strategies from them. It's one of my favorite things to do, to have a little five-minute coaching session with friends who haven't tried it. Like, let me just ask: what's the problem you're facing?

Let me just show you what this can do. Then, just by getting them to flip their mindset a little, and past their intimidation, all of a sudden they take off and they're showing me techniques: yeah, I learned this today.

And I think that that’s a really fun thing, too. So, find a friend. Yeah.

[Dr Mike Patrick]
So, one of the fun things that I have done is asking it to tell me about myself. Like, what do you know about me? And it's like, oh, wow.

Its insight regarding yourself is eye-opening.

[Dr Carmen Quatman]
Now we get to see, yeah. You know, I've been using it since early on; I was a very early adopter. So, I think that would be really fun. I'm going to try that today.

[Dr Mike Patrick]
Yeah. Yeah.

[Dr Carmen Quatman]
It's now had two years of my brain. It is eye-opening. That's fascinating.

I haven’t tried that. I’m going to try it.

[Dr Mike Patrick]
Well, this has been a really wonderful conversation. We are going to have a lot of resources over in the show notes at Famecast.org for this episode, which is episode 10. So, a lot of further reading if you’re interested in diving a little deeper into all of this.

And then we're also going to have some of the AI tools and platforms that we have talked about. Just as an example, we have some links to the National Institutes of Health: they have a page on artificial intelligence for biomedical research, one on ethical considerations of using ChatGPT in healthcare, and one on the Bridge2AI program.

We also have a link to a JAMA article comparing physician and AI chatbot responses to patient questions. That's definitely an interesting article. Then from Johns Hopkins, machine learning and artificial intelligence in healthcare, and from Nature, how ChatGPT and other tools could change scientific writing.

And from Stanford, human-centered artificial intelligence. So, all links and resources to help you take a deeper dive into the use of AI in healthcare, in particular as it relates to clinical research. Then we'll also have links, of course, to ChatGPT, Copilot, and Claude.

We also have some of the research-specific AI platforms that you can check out. YOMU.AI is one that I have used; that's a pretty good one.

Elicit is another site and Research Rabbit. So, we’ll have links to all of these things. So, you can just take an afternoon, hopefully, and explore.

Once again, Dr. Carmen Quatman, associate professor of orthopedic surgery and emergency medicine at The Ohio State University College of Medicine. Thank you so much for joining us today.

[Dr Carmen Quatman]
Thank you. What a great fun day.

[Dr Mike Patrick]
We are back with just enough time to say thanks once again to all of you for taking time out of your day and making FAMEcast a part of it. Really do appreciate that. And of course, thank you to our guests this week.

Again, Dr. Carmen Quatman, associate professor of orthopedic surgery and emergency medicine at The Ohio State University College of Medicine. Don’t forget, you can find FAMEcast wherever podcasts are found. There may be an easier way for you to listen and subscribe if you have not already done so.

We are in the Apple podcast app, Spotify, iHeart Radio, Amazon Music, Audible, and most other podcast apps for iOS and Android. Our landing site is Famecast.org. You’ll find our entire archive of past programs there, along with show notes for each of the episodes, our terms of use agreement and our handy contact page.

If you would like to suggest a topic for the program or if you just want to say hi, I do read each and every one of those that come through and would love to hear from you. Reviews are also helpful wherever you get your podcasts. We always appreciate when you share your thoughts about the show.

And you can also find additional resources on our website for faculty development. So, if you head over to Famecast.org, click on the resources tab, that’s up at the top of the page. We do have two links to faculty development modules on Scarlet Canvas.

One set of modules is on advancing your clinical teaching. And another is FD4Me, which is faculty development for medical educators. And there are scores of learning modules on Scarlet Canvas.

So be sure to follow those links to find lots more useful information, specifically targeting academic medical faculty. A couple of additional podcasts that I host. If you are a pediatric provider, we have PediaCast CME, that stands for continuing medical education.

We do offer free category one credit for those who listen. And those include doctors, nurse practitioners, physician assistants, nurses, pharmacists, psychologists, social workers, and dentists. And since Nationwide Children’s Hospital is jointly accredited by all of those professional organizations, it’s likely we offer the exact credits you need to fulfill your state’s continuing medical education requirements.

Shows and details are available at the landing site for that program, PediacastCME.org. You can also listen wherever podcasts are found. Simply search for PediaCast CME.

And you may be medical faculty, but if you are not trained in pediatrics and you are a mom or a dad, you may have questions about pediatric healthcare. I also host PediaCast. It’s an evidence-based podcast for moms and dads.

Lots of pediatricians and other medical providers also tune in as we cover pediatric news and interview pediatric and parenting experts. Shows are available at the landing site for that program, Pediacast.org. Also available wherever podcasts are found, simply search for PediaCast.

Thanks again for stopping by. And until next time, this is Dr. Mike saying, stay focused, stay balanced, and keep reaching for the stars. So long everybody.

