Show Notes
Description
This week on FAMEcast, we consider the evolving role of artificial intelligence in academic medicine. We explore how AI works, the opportunities and risks of using these tools, and the ethical considerations faculty need to know. We hope you can join us!
Topic
Artificial Intelligence in Medical Education
Learning Objectives
At the end of this activity, participants should be able to:
- Explain the basic functioning of AI technologies.
- Evaluate the opportunities and risks of incorporating AI into teaching and learning.
- Design AI-integrated teaching practices that promote critical thinking.
- Formulate classroom policies that reflect the complexities and limitations of AI tools.
FD-ED Credit
This episode is approved for FD-ED credit through the Center for Faculty Development
at Nationwide Children’s Hospital. FD-ED credit expires 3 years from this episode’s release date.
Guest
Michael Flierl
Associate Professor of Library and Information Science
Student Learning Librarian
The Ohio State University
Links
ChatGPT (OpenAI)
Copilot (Microsoft)
Gemini (Google)
Meta AI
Claude
DeepSeek
Psych AI
Research Rabbit
Yomu AI
Additional Episodes with FD-ED credit
Mentorship and Coaching in Academic Medicine – FAMEcast 001
Teaching on a Busy Clinical Service – FAMEcast 007
Episode Transcript
[Dr Mike Patrick]
This episode of FAMEcast is brought to you by the Center for Faculty Advancement, Mentoring, and Engagement at The Ohio State University College of Medicine.
Hello everyone, and welcome once again to FAMEcast. We are a faculty development podcast from The Ohio State University College of Medicine.
I’m Dr. Mike. This is episode 9, and we’re calling this one the Evolving Role of Artificial Intelligence in Medical Education. I want to welcome all of you to the program.
So artificial intelligence is transforming many aspects of our lives, and that includes medical education, along with our roles of teaching, research, clinical care, all of those things in academic medicine. There is a place for AI, and it is powering transformations much faster than many of us expected. In this episode of FAMEcast, we’re going to explore what faculty need to know regarding AI tools and technologies, using them wisely and ethically, and the opportunities and risks that they pose for teaching, learning, and assessment.
So, whether you have leaped in and begun using AI, or you are AI curious, or maybe you’re one of the many skeptics, our goal today is to expand your awareness, help you think critically, and discover how AI can support your journey in academic medicine. Of course, in our usual FAMEcast fashion, we have a terrific guest joining us in the studio to discuss the topic. Michael Flierl is an associate professor and student learning librarian at The Ohio State University.
He will be with us shortly. I do want to warn you that this episode is going to go a little bit longer than our typical episodes, and there’s just so much to cover. And we didn’t want to skip covering the basics of AI, like looking inside the box.
How does it work? Because as you know, as we think about medical topics and medical education, really getting to the basics and understanding things from the ground level up is immensely important. And so, we do talk a little bit about, you know, what exactly is artificial intelligence?
What’s machine learning? What are large language models? A lot of the buzzwords that you’ve heard, we kind of break down, you know, why is it called ChatGPT?
How exactly does this thing work? And I think that’s important as we embrace any new technology. So, lots of that coming.
I just wanted to warn you; it’s a little bit of a longer episode. It might take a couple of commutes or walks or gardening or whatever you do while you’re listening to podcasts. This one may span a little longer than the others.
I also want to let you know if your institution or department requires Faculty Development Education Credit, also known as FD-ED, we have good news for you. Select episodes of FAMEcast, ones that deal with teaching and learners, come with free FD-ED credit from the Center for Faculty Development at Nationwide Children’s Hospital. It’s easy to claim that credit.
Simply listen to the podcast, which you are about to do, and then look for the FD-ED link in the show notes, and that’s over at famecast.org. Again, this is episode 9. Follow that link to Cloud CME.
You can register or sign in for your free account. You’ll want to click the materials tab once you’re there, and there’ll be a brief survey for you to fill out, and then that’ll score you the credit. You can even download a transcript of your credits to share with your institution or department.
We have had a couple of past episodes with that credit. One was Mentorship and Coaching in Academic Medicine. That was FAMEcast number one, and then Teaching on a Busy Clinical Service was FAMEcast seven, and then we’ll add this one to the lineup as well on the Evolving Use of AI in Medical Education.
I also want to remind you the information presented in every episode of our podcast is for general educational purposes only. Your use of this audio program is subject to the FAMEcast Terms of Use Agreement, which you can find at famecast.org. So, let’s take a quick break.
We’ll get Michael Flierl settled into the studio, and then we will be back to talk about the evolving role of AI in medical education. It’s coming up right after this.
Michael Flierl is an Associate Professor of Library and Information Science and Student Learning Librarian at The Ohio State University.
He earned a Master’s in Library and Information Science from the University of Illinois at Urbana-Champaign. He’s here to help us explore the evolving role of artificial intelligence in learning, and specifically in medical education. But before we dive into this fascinating topic, let’s offer a warm FAMEcast welcome to our guest, Michael Flierl.
Thank you so much for being here today.
[Michael Flierl]
Excited to be here, Mike. Thanks for the invitation, and just ready to talk AI all day.
[Dr Mike Patrick]
Yeah, let’s do it, because this is really going to be a sort of a basic introduction to the use of AI in medical education, because literally there are subtopics that we could easily do additional episodes on, and in the future maybe we will. Let’s just start with a foundation to get everybody up to speed. What exactly is artificial intelligence?
[Michael Flierl]
Yeah, that’s a great question, because the answer is so variable. AI, artificial intelligence, is not a monolith. It’s not a singular thing.
In fact, it’s really kind of a family of related technologies, and you likely use AI every day. If you’re using Google Maps, spam filtering for email, plane autopilot, social media, they all use AI in one way or another. What’s different though now is generative AI, so that’s kind of what’s often referred to currently when people are referring to AI.
In general, AI just means attempting to mimic human intelligence. Generative AI is a type of AI that’s a form called machine learning, specifically a neural network, where these models kind of analyze patterns and structures of training data and then output some type of text or image or video or audio.
[Dr Mike Patrick]
Yeah, absolutely, and so artificial intelligence, just in the broadest form, is just a computer system capable of performing tasks that normally would require human intelligence. Precisely. And that’s why it’s artificial intelligence.
Now, machine learning, I think that is really an important thing to have at our foundation, and so this is where you take a computer system and feed data into it. And again, correct me if I’m wrong, it’s easiest for me when I break things down into their simplest components, especially when I’m learning something new. So, you feed data in, and then it is going to use algorithms and statistical models to try to find a pattern in the data.
And so, then when you interact with it, you give it some feedback, and it begins to learn from your feedback. In other words, like, okay, well this pattern didn’t work, so let’s try a different pattern. And then it starts to realize what patterns exist in that data and then starts to get better and better at predicting what comes next.
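To make that pattern-finding idea concrete, here is a toy Python sketch of next-word prediction: it counts which word follows which in a tiny training text and then predicts the most frequent follower. Real models use neural networks trained on billions of words rather than simple counts, so treat this purely as an illustration of learning patterns from data.

```python
from collections import Counter, defaultdict

# Toy training data; a real model ingests billions of words.
training_text = "the flag is red white and blue the sky is blue the ocean is blue the grass is green"

# Count how often each word follows each other word (a simple bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Predict the most likely next word from the patterns seen in training."""
    if word not in follows:
        return None  # the model knows nothing outside its training data
    return follows[word].most_common(1)[0][0]

print(predict_next("is"))  # "blue" follows "is" most often in the toy data
```

A large language model does the same thing in spirit, but over tokens rather than whole words, and with a neural network in place of a frequency table.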
[Michael Flierl]
Precisely. That’s an excellent way to put it, and I would differentiate it from good old-fashioned AI, which is called GOFAI in computer science.
But really, that’s rules-based AI. So, like video games are a great example. You do X, and character, non-player character in the video game does Y.
And that is according to a very strict rule that is hard-coded into the video game, as opposed to machine learning, where there are no strict rules. Rather, as you said, a lot of data is thrown at it, mathematical models are created, and it kind of creates its own rules or algorithms to make sense of the relationships.
[Dr Mike Patrick]
Yeah, and really all it’s doing is predicting what ought to come next. And so, when we use a generative AI, we put a prompt in, and then what comes out, having done the machine learning, the system is then able to spit out an answer that makes sense to us, because it’s been trained on that data. And it’s still really just statistical algorithms of figuring out what’s next.
[Michael Flierl]
Precisely. And there’s supervised versions of this and unsupervised versions, meaning you take a chess-playing computer, and it could play itself without human intervention, unsupervised, and get better. Versus current generative, like large-language models, generative AI, in that sense, typically are supervised.
And so, there is human input required. For instance, you’ll see like a thumbs-up, thumbs-down on some models, or did you like this, or did you not? That is you providing feedback to the model, strengthening the relationships, or weakening the relationships between artificial neurons, or within that kind of mathematical model it’s creating.
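That feedback loop can be sketched in miniature. In a real system, thumbs-up and thumbs-down signals train a reward model that adjusts billions of weights; in this illustration, a single made-up score per answer style stands in for those connection strengths, and every name and number is invented.

```python
# Illustrative scores standing in for connection strengths inside the model.
answer_scores = {"concise": 0.5, "rambling": 0.5}

def record_feedback(style, thumbs_up, learning_rate=0.1):
    """Strengthen a behavior on thumbs-up, weaken it on thumbs-down."""
    answer_scores[style] += learning_rate if thumbs_up else -learning_rate

# Simulated users prefer concise answers.
for _ in range(3):
    record_feedback("concise", thumbs_up=True)
    record_feedback("rambling", thumbs_up=False)

# The model now favors the style humans rewarded.
preferred = max(answer_scores, key=answer_scores.get)
print(preferred, answer_scores)
```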
[Dr Mike Patrick]
And it’s only as good as the data set that it has to work with, right? So, like, these systems don’t just automatically know everything about everything. Like, they’re trained on data, and that data is going to have some not-so-good stuff in it, and, you know, maybe some things that aren’t necessarily true, but it’s learning from that data.
And the big ones that we’re going to talk about, like ChatGPT and Copilot and such, use the internet as their data set. But really, you can have a closed data set, and the artificial intelligence is not going to know anything outside of that data set.
[Michael Flierl]
Correct. So, there’s kind of two forms. There’s like a two-step process to create something like a chatbot.
There’s the pre-training, which is understanding, getting it to understand human language, being able to fill in the blank. And then there’s kind of the chatbot training, which is where, like, the human feedback comes in, to where it’s like, these are the types of behaviors or answers that humans tend to like. And so, going back to the data piece, it’s garbage in, garbage out.
If there’s a bias in the data set, that will likely be represented in the model, because the model is using said data to kind of compress the internet down or publicly available information down into a smaller format. So, garbage in, garbage out.
[Dr Mike Patrick]
So, it’s important what it initially trains on, and then the ongoing training with feedback into the system kind of helps to fine-tune it. And then natural language processing is another term that we hear. And I know we talk about the ChatGPT and Copilot as being large language models.
What is natural language processing?
[Michael Flierl]
Great. So that’s a great question. So, large language models, you input text and it outputs text, right?
The question is, how does a computer, which operates on math, process that information? Computers don’t think in language. And so, there’s techniques involved, basically, to convert text to numerical values.
That’s a process called tokenization, to where they take words or even snippets of words, convert them to numbers or a series of numbers called tokens, and then they’re able to do mathematical operations. They reverse that process and output text. So really, it’s a way to interact with these models or with artificial intelligence through human language as opposed to a computer programming language.
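Here is a toy sketch of that round trip. Production tokenizers, such as the byte-pair encoders behind large language models, split text into learned subword pieces; this simplified version just maps whole words to integer IDs to show text becoming numbers and back.

```python
# Build a toy vocabulary mapping each word to an integer token ID.
text = "the flag is red white and blue"
vocab = {word: token_id for token_id, word in enumerate(sorted(set(text.split())))}
inverse_vocab = {token_id: word for word, token_id in vocab.items()}

def encode(sentence):
    """Convert text into a list of token IDs the math can operate on."""
    return [vocab[word] for word in sentence.split()]

def decode(token_ids):
    """Reverse the process: token IDs back to text."""
    return " ".join(inverse_vocab[t] for t in token_ids)

tokens = encode("red white and blue")
print(tokens)
print(decode(tokens))  # round-trips back to the original text
```

The mathematical operations of the model happen entirely on those integer IDs (and on the vectors derived from them); decoding is what turns the result back into language we can read.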
[Dr Mike Patrick]
Because at the end of the day, the chip in the computer running the thing still runs on zeros and ones, right? Precisely. And so, it has to figure out how a certain combination of zeros and ones equals a particular word.
And so, you can see this is really complicated. And yet, it’s kind of surprising we didn’t figure out how to do this sooner. I guess we needed faster computers.
[Michael Flierl]
Yes, that’s really interesting. Conceptually, neural networks, which is just another specific form of machine learning, were theorized in the 1950s. They just weren’t practical yet.
We didn’t have enough data in order to train a model sufficiently. We also didn’t have the hardware. So, a lot of the boom, recent boom in generative AI came as a result of increasingly powerful GPUs, graphical processing units, which historically have been used for video game rendering, for video games, for making the artificial worlds look really cool and detailed.
That requires a lot of parallel processing. And basically, NVIDIA included a part on their GPUs that allowed AI researchers to do generative AI research, to process information much more effectively and efficiently. So, you have increasing hardware, you have access to a lot more data.
And that combined with theoretical understandings of how to arrange these neural networks, and experience with machine learning, and that all kind of combined to create, in one sense or other, this recent boom with generative AI.
[Dr Mike Patrick]
And you mentioned a transformer neural network. That’s, I think, another piece of the puzzle that is helpful to understand exactly what’s happening and why prompts and the information that you put into a prompt is so important, because of this concept of a neural network, and in particular, a transformer neural network. Tell us what that is.
[Michael Flierl]
Yeah, that’s a great question, too. That’s the T in GPT. The transformer neural network architecture is, in a very basic sense, an attempt to mimic the human brain, though some people go overboard with that comparison, I think.
So, it breaks down all these different bits, you know, these tokens. So, it takes words, phrases, paragraphs. As we’re recording in June 2025, the most recent Gemini model from Google can ingest all of Moby Dick. So, it could read all of Moby Dick, and it could break it into these smaller parts.
And then it goes forward in this neural network. When you query it or ask a question about that, it basically performs all these mathematical operations. And based off of its previous training data, based off of reinforcement learning with human feedback, it outputs a token, a fragment of a word. It takes that output fragment, which may be a word, a chunk of a word, a space, or a special character, and it adds it back into another round of input.
And then it keeps on chugging along, it creates another token output. And it puts that back into the process as a new input, and it keeps going. And what’s really interesting about this is the attention mechanism.
So that’s really the breakthrough paper from Google, from 2017, called “Attention Is All You Need.” And what they found was, say your cell phone: when you go on your text messaging app, it has a word prediction feature that tries to recommend the next word, but it’s only looking at the previous word. It would be way too computationally intensive to look at the previous three or four or five words. What the transformer architecture does specifically, it’s called multi-headed attention, with various layers of different transformers, sometimes hundreds or even thousands of these, is that it’ll take one word in a sentence.
And it tries to understand the meaning of that word depending upon the other words in that sentence. And the mathematical models change, change the representation, the numerical representation of that word dependent upon the other words. So, for instance, the flag, the American flag is red, white, and blank.
This attention mechanism is going to see like American and flag as being very important for each other. And it’s going to see red and white as important for filling in the next word, but a comma or the word “the” won’t be as important as filling in the blank. Yeah.
And so, each word can theoretically modify every other word.
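A toy numerical sketch of that attention idea, using tiny hand-made word vectors (real models learn vectors with hundreds or thousands of dimensions): similarity scores between words are passed through a softmax to produce attention weights, so related words weigh on each other more than filler words do. All the vectors below are invented for illustration.

```python
import math

# Tiny hand-made word vectors; real models learn these during training.
vectors = {
    "american": [1.0, 0.2],
    "flag":     [0.9, 0.3],
    "the":      [0.1, 0.1],
}

def dot(a, b):
    """Dot product: a simple similarity score between two word vectors."""
    return sum(x * y for x, y in zip(a, b))

def softmax(scores):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query_word, sentence):
    """How much the query word 'attends' to each word in the sentence."""
    scores = [dot(vectors[query_word], vectors[w]) for w in sentence]
    return dict(zip(sentence, softmax(scores)))

weights = attention_weights("flag", ["american", "flag", "the"])
print(weights)  # "american" and "flag" matter far more to "flag" than "the" does
```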
[Dr Mike Patrick]
And that’s where the prompt which we’re going to get to is so important what you put in it, because it’s really going to call attention to all the relationship of all of your words. And so, you know, if you think I want to produce a particular product, it’s aimed at this target audience, I want it in this form, these are the things I want to highlight, like it’s going to be able to see all of that in your prompt, and then give you a great answer because it took all of that into consideration.
[Michael Flierl]
Precisely, although there are some nuances to that. If you give it a generic prompt with not a lot of context or supporting information, you’re likely to get a very average answer, or one that sounds average to you. Like, you ask it to just write a poem, and it’s just going to be pretty mediocre.
And the reason for that is because you have this large artificial network of neurons, billions upon billions; you know, the smallest currently are hundreds of millions. But you’re talking about hundreds of millions, billions of mathematical calculations, and they just kind of tend to average out. The more context and specificity and intent you’re able to communicate through your prompting, the more specialized or specific areas of that neural network are going to get attention.
So, you’ll get better answers with more specificity. But the newest models, some of them, are called, quote, unquote, thinking models or reasoning models. Those models are a little bit different.
They have an internal budget of thinking time. Say, I’ve recently gotten a model to, quote, unquote, think for 10 minutes. And for those, you actually want the prompt to be more intensive, because it’s thinking for 10 minutes and you only get so much opportunity; it’s expensive, relatively speaking.
And so, for those you want a really long and detailed prompt, in fact, the more detailed the better, versus a current traditional large language model, a ChatGPT 3.5 or 4, or what have you, where a back and forth with the model could be just as useful as a really detailed prompt up front. But prompt engineering techniques are important in general for model performance.
[Dr Mike Patrick]
Yeah, yeah, absolutely. As we think about all of this, of course, we see the tremendous benefit and advantages just right out of the gate, you can see how this could be useful in many, many ways. I just want to point out to that with all of this processing power comes a lot of energy consumption.
And that is certainly not the topic of today’s podcast. But something that we ought to keep in mind, we are thinking about it from all angles, and there are great things, and then there are not so great things.
[Michael Flierl]
Yes, absolutely. It’s a tool at the end of the day. It is fundamentally a tool.
It is not inherently good or bad. There are costs for using any tool, there are costs for streaming video, there are costs to driving to work and back, there are costs to the HVAC in your home or the office, and there are environmental costs for generative AI. And they are currently quite substantial because of the compute required, you know, reading all of Moby Dick every time before you ask it another question about it, that just gets computationally intensive.
There are millions and billions of calculations going on every time you interact with it. By 2028, it’s hypothesized that the energy usage for generative AI will be equivalent to about 22% of home electricity use. By 2030, arguably, the entire energy output of Japan.
On the other hand, ways to find efficiencies in these models are increasing, and there is new hardware custom-designed for generative AI. At the same time, though, it’s not just electricity but resources like freshwater; you need a lot of freshwater in order to cool down these systems, which tend to run hot.
And a lot of AI companies aren’t forthright about all of their energy expenditures either. What we do know is that Google, for instance, somewhat recently gave up their renewable energy commitment, or something like that, because they’ve realized just how energy-intensive generative AI will be in the years moving forward.
[Dr Mike Patrick]
Yeah, yeah. One more thing before we talk about AI in education in particular: sort of what we’ve run through, I think, suddenly really does make the name ChatGPT make sense. Because the chat, of course, it’s a large language model.
So that’s where chat comes from. The G is generative. So, it is creating something new by predicting the next element of a sequence.
It’s been pre-trained. That’s the P. And in the case of ChatGPT, it was pre-trained on the Internet.
And then it is a transformer in that it incorporates context and meaning in the prompt, looking at various aspects of it to decide which are the important words and which aren’t. And so that’s how you get ChatGPT. It’s a large language model that is a generative pre-trained transformer.
And that’s the GPT.
[Michael Flierl]
Yes, that’s precisely correct. And I think what’s useful to understand with that, or why that’s important, is that you understand model behaviors a lot better when you understand some of the fundamentals of the technology. You understand why it might hallucinate, which is make something up or assert something is factually true with a lot of confidence when in fact it is not, because the model has no concept of a tree or of a medical condition per se.
These are all tokens that are mathematically represented in an internal model. And the output of that is not deterministic. It’s probabilistic.
It comes up with a long list, in fact, before it outputs anything, of possible words or tokens, and then chooses one. And it doesn’t always choose what it thinks most likely fits, based on its pre-training and its reinforcement learning with human feedback.
These models are designed to not always give the most likely choice because humans just don’t like that output. It’s too flat. It’s too deterministic.
And so, if you understand the G, the P, the T in ChatGPT and how these large language models work, then suddenly hallucinations kind of make sense. It’s no longer this magical thing. By understanding the fundamentals like that, you can treat it more appropriately. You understand its nature better, and therefore you can handle it, use it, or be more informed about your decisions to use it or not.
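That probabilistic choice can be sketched directly. The candidate tokens and their probabilities below are made up; the point is that the model samples from a ranked list rather than always emitting the single most likely token, which is why the same prompt can yield different outputs on different runs.

```python
import random

# Made-up next-token candidates for "The American flag is red, white, and ..."
candidates = {"blue": 0.90, "gold": 0.06, "green": 0.04}

def sample_next_token(probabilities, rng):
    """Sample a token according to its probability instead of always taking the top one."""
    tokens = list(probabilities)
    weights = [probabilities[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so this run is reproducible
samples = [sample_next_token(candidates, rng) for _ in range(1000)]

# "blue" dominates, but the less likely tokens still appear sometimes.
print(samples.count("blue"), samples.count("gold"), samples.count("green"))
```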
[Dr Mike Patrick]
Yeah. And that’s why it’s going to be really important as we move forward with talking about how we can incorporate AI into education, that we understand that so that we’re kind of looking out for those inaccuracies. And as experts in a particular field, it’s going to be easier for us to do that because we know, hey, wait, that’s just not right.
We just know it from our own training. And so, we can recognize what’s correct and what’s not correct. Whereas someone who is not an expert in medical education may get output that really isn’t correct or isn’t as useful as they might think, because they don’t have the knowledge themselves to be able to check it.
[Michael Flierl]
Precisely. Which in one sense is no different from finding information online previously. In that sense, it’s very comparable of you are given information and you have to vet it, and you have to discern how much can you trust this information.
But now instead of finding information that was likely written by a human, now it is generated and there’s a lot of implications for this. We’re likely going to have to sift through a lot more information and data because it’s so much easier and cheaper and faster to create now more so than ever.
[Dr Mike Patrick]
Yeah. So, what then are some of the most promising opportunities for AI as we think about medical education?
[Michael Flierl]
That’s an excellent question. So, the first thing that comes to mind is Bloom’s two-sigma problem. So, for those who aren’t familiar, it’s kind of a research study that found that one-on-one tutoring with an individual student yielded a two-standard deviation improvement in outcomes and learning outcomes.
So fundamentally, generative AI could offer a personalized tutor that has input everything you’ve ever written or worked on within a subject. And it could hold all of that in its context window, a kind of internal memory, while it’s interacting with you. And it won’t get frustrated with you.
It won’t say you’re totally wrong. You can answer the same question wrong 99 times and it’ll still encourage you on the hundredth attempt. It’ll still like you.
Yes, yes. In some sense, though, there’s a sycophancy bias: these models are also designed to output things that humans approve of or like. So, there’s a double-edged sword there.
But at the same time, it is a personalized object to interact with that can give you feedback and make recommendations. And, you know, how much time have you spent formatting a paper or research data instead of actually analyzing or writing? It could theoretically get rid of the drudgery, the lower-order things that we really don’t care about, and allow students to spend more time on the higher-order thinking skills that we really want them to come away with, as opposed to formatting or data cleaning, et cetera. So that’s the opportunity: a personalized experience with a tutor that theoretically could be grounded in knowledge through RAG, Retrieval-Augmented Generation, which basically directs the large language model or other AI to search for information within a set of documents or a database.
So that’s potentially extremely powerful. And then on top of that, one more thing, Mike: it allows for new ways for learners to be creative in their own learning process. So, for instance, there’s this new phenomenon called vibe coding.
I hate the term, but here we are: you don’t even have to know anything about programming. Just yesterday, I was able to make three or four different apps strictly through natural language, strictly through me saying, I want an app that helps me focus for 30 minutes at a time, and that gamifies this for me so that I am continually incentivized to focus in 30-minute chunks throughout my workday.
And it created an app for me. And I knew Python like 10 years ago or something, but not so much anymore. So, imagine learners working with multiple AI models capable of programming tools on their phones or computers, working together to customize a learning journey, a learning experience that they think will be most likely to yield success.
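The retrieval-augmented generation (RAG) idea Michael mentioned can be sketched as well. Real RAG systems embed documents as vectors and search by similarity; this toy version scores documents by word overlap, and the document text is invented for illustration.

```python
# A small trusted document set; the contents here are made up for illustration.
documents = [
    "Bloom's two-sigma problem: one-on-one tutoring improved outcomes by two standard deviations.",
    "Transformers use attention to weigh relationships between the words in a prompt.",
    "GPUs enabled the recent boom in generative AI through massive parallel processing.",
]

def tokenize(text):
    """Lowercase words with surrounding punctuation stripped."""
    return {word.strip(".,:;!?") for word in text.lower().split()}

def retrieve(query, docs, top_k=1):
    """Rank documents by words shared with the query (a stand-in for vector search)."""
    query_words = tokenize(query)
    scored = sorted(docs, key=lambda d: len(query_words & tokenize(d)), reverse=True)
    return scored[:top_k]

def build_prompt(query, docs):
    """Prepend the retrieved context so the model answers from these documents."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the two-sigma problem in tutoring?", documents))
```

Because the retrieved text is pasted into the prompt, the model’s answer is grounded in the document set rather than in whatever its training data happened to contain.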
[Dr Mike Patrick]
And so, you had the idea, and you wanted to use the tool to help you create an app. There’s also opportunity from an education standpoint of using that to create a slide deck, or to create a test on a particular topic. And as medical professionals, we’re lifelong learners and lifelong teachers.
And so even when we’re practicing medicine and we think, oh, there’s a handout that could be helpful, or an infographic, or a blog post, or an opinion piece for the local newspaper, you really can use your ideas. You know, before, you weren’t going to be able to turn your idea into an app. But now we can also use it to create a blog post or a podcast or any number of education-related consumables.
[Michael Flierl]
Absolutely. And the world is kind of our oyster. We really are, in one sense, limited only by our imagination with this.
I used to work at Purdue in a faculty development program, and I recall one agronomist who took off the whole summer to write 800 quiz questions. Now you could probably do that in an afternoon.
You could probably task an AI, give it the appropriate information from, say, the textbook you wrote, say, write me 100 questions on each chapter. You review them. You kick out some.
But all of a sudden, instead of having one high stakes quiz for your course, you have 10 times the amount of quizzes that you previously had. And students can take them until they get them right. And now you’re providing more opportunities for students to learn.
They will engage more because they’ll be quizzing themselves more, because they want higher grades. That’s just much more efficient to do now with generative AI than doing it by hand.
[Dr Mike Patrick]
So, these are wonderful things, but there are also potential drawbacks and challenges that are unique to education. As we think about AI, what are some of those drawbacks and challenges? Yeah, there’s a host.
[Michael Flierl]
And in fact, I think we’re doing ourselves a disservice as educators if we’re not at the same time discussing costs and benefits. So, what are some of the costs? One that I find particularly important to consider is inequality of access.
So, some models might cost 20 bucks a month. Others might cost 200 bucks a month.
There are some now coming out that are thousands of dollars a month. So, you could imagine a learning activity or a project or what have you. And student X has access to the best, newest model and can accomplish that much more efficiently and effectively than student Y, who only has access to a free model.
Or imagine a similar scenario where both students have access to the same model, but student X is substantially better at prompting, at interacting with this model, than student Y. So that’s a challenge, both of resources but also of technical ability.
If we are embedding AI into the classroom, that is something to consider. These models are not deterministic. This isn’t like who can program the best or who knows the most about X.
It could simply be that you used the model on a different day, and it gave a slightly different output, and that fundamentally changed the whole meaning of the text or the PowerPoint slide deck or what have you. So that’s one issue.
The other issue is that under this guise of empowerment, of using this new, powerful technology, we cede the ground of critical thinking and engagement, of struggle. Learning necessitates friction. It involves frustration in some way, shape, or form.
Eventually, you get to a point in your education, and you see this all the time at Ohio State, where you were a big fish in a little pond, and now all of a sudden you come to Ohio State and there are a lot more smart folks and the content is a lot more sophisticated and complex. You encounter that friction, which necessitates you changing the way you think about something, working harder, et cetera. Now you have this powerful tool that will summarize a whole book chapter that you have to read to understand X, Y, and Z.
You’re tired. It’s Friday night. Your friends are going out.
Do you really need to study for your exam, or do you just get the AI to summarize it for you? And sometimes doing that grunt work is necessary to achieve higher-order thinking skills.
Sometimes the lower-order grunt work of reading something or trying to memorize it yields genuine insight, yields a positive learning experience. And so, there’s a genuine concern, a temptation, in ceding that friction. If you’re a student, there’s every incentive in the world to avoid it.
You want to get an A in the course. You want to progress in your career. You want to get that job.
You want to get that certification. And AI provides a lot of potential opportunities to achieve the desired outcome without the necessary friction.
[Dr Mike Patrick]
And so, we may be doing ourselves a disservice by not learning the old-fashioned way, in terms of keeping stuff in our brains that hopefully is going to be retained for a long period of time. So, the more we use AI, it would seem, the more reliant on it we are going to be moving forward.
[Michael Flierl]
Right. At the same time, I’m of two minds on this. Socrates, in Plato’s Phaedrus, lamented writing because people weren’t going to memorize things anymore.
They really prioritized verbal communication; writing was going to ruin our memories. So, I don’t mean to be a Luddite. You know, there were all these people worried about jazz music, the off-tempo and staccato elements of it, just ruining Western civilization.
Right. So, on the one hand, we’ve had calculators, and we still do mathematical education. Yes, there have been code repositories.
We still do computer science education. But now, with writing or with analysis, you can use AI, and it can become a proxy for learning or a proxy for thinking for yourself.
And so, there’s going to be a temptation there, but also what is lost as well. We have to keep both of those things in mind.
[Dr Mike Patrick]
Yeah. And writing has not gone anywhere. Calculators have not gone anywhere.
And jazz. I love jazz. It’s not going anywhere.
Right. And AI is not going anywhere at all. So, what difficulties can arise from a faculty standpoint when we want to just turn our backs to it and say, OK, this is not allowed in my classroom, I don’t want anything to do with it?
When we do that, we are going to have all sorts of difficulties, because we have learners who have been using it, and asking them not to can create difficulty. How can educators best adapt to the presence of AI?
[Michael Flierl]
That’s an excellent question. I have so much empathy for faculty who are teaching, because a lot of the time you’re paid to research. The majority of your focus is research, not education; or if it is teaching, it’s teaching about your expertise, not AI. On top of that, generative AI has been exploding over the last few years, rapidly evolving and changing over the course of weeks or months. It is becoming more multimodal. It is becoming smaller.
It’s becoming more powerful. If you want to get your mind blown a little bit, I’d recommend looking up Veo 3, V-E-O-3, which is a generative AI video system. And it is incredible.
All the little tells that were previously used to know something was AI generated are now gone. How can you possibly keep up?
Your job isn’t to keep up with AI. That being said, the answer to feeling overwhelmed by AI, and to honestly feeling a lack of expertise, is engagement. We know that students are using generative AI. They’re using it at an increasing clip, and, generally speaking, they’re not being forthright and transparent about that use. But we know they’re using it, and all the trends for students in higher education point toward increasing use.
So, it’s happening and it’s likely to continue happening. And as you said, Mike, A.I. is very unlikely to go away. If anything, it’s going to grow and become more complex and more embedded in our everyday life.
[Dr Mike Patrick]
What can educators then share with their students, more so than saying you can’t use it because, as you say, there’s no way to police that. And, you know, it’s everywhere and your learners are going to use it and more and more as time marches forward. But we can talk about ethical concerns with our learners.
And so maybe we don’t understand exactly, you know, what’s inside the box and how it works. But what we can do is share ethical concerns and talk about appropriate uses and then maybe uses that are going to be detrimental to the learning process. And, you know, and just kind of being honest and open about what those disadvantages are with your learners.
[Michael Flierl]
Precisely. And what a great opportunity. Think of this as an opportunity to demonstrate, to concretely model what you as a scholar, as a practitioner in medical education or what have you, in approaching a new and emerging technology, something you don’t know.
So, for me, I would strongly suggest talking with your students about AI, like, the first day of class. What a great opening topic to bring everyone in, to recognize and demonstrate empathy for them, because I don’t know how I would have reacted as a student with this tremendously powerful technology readily and cheaply available that could functionally do a lot of my work for me. That is a hell of a temptation.
So as an instructor, this is a great opportunity to model empathizing with your students and to say, well, this is my thinking on it.
This is my understanding. Do you know something? Do you have something to share about this that would inform me?
Because I can learn from you. What a great way to build a relationship with students, to model communication, to model scholarly inquiry, and to demonstrate that the answer, I think, is engagement and transparency. When I work with faculty on the OSU campus, transparency and documentation are two really big terms, and that cuts both ways.
Instructors who are using generative AI should be transparent and say, we’re using AI. And many instructors I’ve worked with will require an accompanying worksheet or some type of audit trail: if AI is allowed to be used, here’s how I used it. Here’s the input.
Here’s what I put into it. Here’s the whole chat. Here’s the model output.
And then ideally you task the learner to reflect on that experience. So, transparency and documentation are useful, but for me only if you have that third element of metacognition, or reflection, thinking about your learning. Did the AI help?
Did it not? In which ways was it useful? In which ways was it not?
That elicits an intentionality with generative AI that I think we all want to see. We do not want to see learners use generative AI technology reflexively and unintentionally. We want them to use it creatively and ethically and intentionally.
So being transparent about that, engaging with learners, modeling the types of behaviors you want to see, I think it presents a tremendous opportunity to break down kind of traditional instructor-student barriers and to really approach a new subject content kind of on more equal terms and to show your students how you approach this, how you question this, to model not necessarily falling on one side or the other of the issue, but maintaining an open mind and embracing the uncertainty and complexity of the issue.
[Dr Mike Patrick]
Yeah. And I think when you’re open and transparent like that and say, hey, this is a two-way street, documenting your breadcrumbs is also going to help me. Maybe there are students who aren’t using AI as easily or as well, and when I look at the folks who are, it may give me nuggets to pass along to those students who might be having difficulty with it.
[Michael Flierl]
Precisely, precisely. And, at least this is my best guess for any educator in higher education, you would want students, if they are allowed to use it or if they are using it, to articulate why they’re using it and what the benefit of using it in a specific way is, or to articulate why they may not want to use it.
I think either side is valid, but what matters is they’ve been given an opportunity, or they’ve been challenged to be intentional about its use or lack of use.
[Dr Mike Patrick]
I want to talk about actually using the AI as educators ourselves. So, we’ve been talking about students using it and what policies should we have in the classroom? How should we approach the fact that students are going to use it and how can we best incorporate that?
But from a faculty standpoint, what are some ways that we can actually use AI to help with education?
[Michael Flierl]
Yeah, that’s an excellent question. I honestly think there’s a lot of ways in which it could make things more efficient, make things more enjoyable, put a fresh new coat of paint on an old subject area. So, on the one hand, hallucinations are still an issue.
It’s useful to you as an expert because you can vet that information. So, you’ll still need to vet the output regardless. There’s no silver bullet with that.
But I could say how I’ve used it in some capacities to where I’ve used it as a brainstorming partner. And sometimes creativity is a function of quantity over quality. Give me 50 ideas for how to do this.
Give me 10 possible learning activities that are interesting and unique and different. Go. And then one of them works.
And then you could say, give me 10 variations of this one that works. And so, you can evaluate or generate hundreds of ideas much more quickly and efficiently. It’s like having your own personal friend that you can kind of tweak to your own preferences.
So, there’s a lot of opportunities, I think, to be creative, to allow generative AI, to do kind of the grunt work of thinking of all these different variations. And you get to function more as an editor. So, to create worksheets, to create accompanying documents, to organize things for yourself, basically kind of ideally get rid of some of the grunt work and kind of allow you to exercise your more creative or your more instructional design side.
So, you’re functioning less as a manuscript writer and more as the editor, which can open up your energy toward really thinking through, for instance, what assessment means in this context. Is a written text useful, or are there other, more creative ways to allow students to demonstrate their understanding?
[Dr Mike Patrick]
Yeah, you know, it’s so interesting, the many ways that we can use AI in education. As you say, a lot of us have slide decks that we use year after year after year, and you can feed the information in those slides to AI.
And maybe, like you said, it’ll look a little bit different and be more engaging, and you’ll say, oh gosh, that’s a great way to do it. In my own use of AI, there have been so many times when I’m expecting one thing to come out and something else comes out that makes me scratch my head and think, oh, I wish I had thought of that.
[Michael Flierl]
Yes, precisely. And you might not realize it: people use it like Google, where you do one input, take the information, and maybe say, ah, that’s not very good. But you can go back and forth. I have saved conversations with dozens and dozens of exchanges, back and forth, continually refining, continually giving more context to the model so that it provides different outputs, or outputs that I like.
If you do a muddiest-point exercise or something like that, where students say, this is the one thing I don’t truly get after this lecture, this is the one thing that, looking through the slide deck, still confounds me, you can input that into an AI system and ask, what are 10 different ways I can explain X more effectively?
You can have students do that. You can have students create this, use it to supplement materials, or create apps for themselves and share them. Imagine a world in which students create multiple AI agents, multiple apps, interacting with one another, and constantly tweak them according to their preferences.
And they could share those out, or there could be a competition over who can create the best app or the best, fastest way to learn X. Again, I think we’re limited by our imagination, and we’re still treating this new technology like an old technology. It’s fundamentally altering the rules of the game.
And then we need to allow ourselves to be more creative in its use. There’s a lot of different potential applications here.
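The "back and forth, back and forth" refinement Michael describes works because chat models are stateless per request: each new prompt resends the entire conversation, which is why the model appears to remember earlier refinements. A minimal sketch in Python, assuming the common role/content message format; the helper names and example prompts are hypothetical, and the actual network call is deliberately left out:

```python
# Sketch of an iterative refinement loop. Only the bookkeeping is
# shown; in practice the assistant replies would come from a model.

def make_conversation(system_prompt):
    """Start a conversation with a framing instruction."""
    return [{"role": "system", "content": system_prompt}]

def add_turn(history, user_msg, assistant_reply):
    """Record one exchange so the next request carries full context."""
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": assistant_reply})
    return history

history = make_conversation("You are helping me redesign a lecture on sepsis.")
add_turn(history, "Give me 10 unique learning activities.", "...10 ideas...")
add_turn(history, "Idea 3 works. Give me 10 variations of it.", "...variations...")
# The next request would send all five messages above plus the new
# user prompt, so every earlier refinement stays in play.
```

Keeping the whole history in each request is also why long refinement sessions eventually bump into the context-window limits discussed later in the episode.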
[Dr Mike Patrick]
And the more that we use it, it really is learning your preferences. Just as an example, I write a lot of educational articles as part of my job in media, educating the public, and I had written probably 50 or 60 blog posts before AI really got going and was widely available.
So, one of the things that I did was have it go and look at all my blog posts. That pre-trained it, basically, to say: hey, I’m going to write a blog post on an educational topic; go look at all the ones I’ve written before, and let’s see what you come up with on this particular topic, for this target audience, with these goals in mind.
And it was like I had written it. I was just dumbfounded. And it took like three minutes, if that.
Right.
[Michael Flierl]
Right. And you can go back and forth with that. You can say, make this more of a New Yorker type of vibe, as opposed to, you know, Wired or a New York Times article.
It’s really good with style, mimicking different styles. And the context window, how much you can input into the model, keeps on increasing. Now we’re up to around a million tokens, which is something like 300,000 or 400,000 words, up to 10 hours of audio, at least an hour of video.
Probably we’re going to get close to 10 hours of video pretty quickly. So that’s a lot of content that you can upload to provide context. And again, I think it’s really useful if you don’t care so much about the veracity of the output as about it presenting new ideas, new information, in an area in which you’re an expert.
And the 5 or 8 percent of the time it hallucinates, you’re able to see, nope, that’s a problem, strike that. It can really be a partner in crime, so to speak, more so than just providing an answer. And I think we’re only limited by our creativity.
Currently, there are so many different ways. My number one suggestion would be to sit down with a colleague or a friend when you have a pedagogical problem or something you want to improve.
Sit down with an AI chatbot and just play with it for 30 minutes. See what you can come up with. Ask the AI to ask you clarifying questions.
That’s another great use of this. I’ve been forced to reflect on my learning process. I have been forced to challenge some of my assumptions, because I’ve asked the AI to do that for me.
[Dr Mike Patrick]
Another way is that you can train it yourself. Let’s say you don’t have 50 examples of things you created on your own for it to look at. It gives you the framework of something, you edit it, and you put your creativity into it. When you’re finished, you can put that finished product back into the AI and say, hey, this is what I came up with based on what you started with.
And it will learn over time that maybe it ought to just give you something closer to what you did first, making things more efficient and more in your voice than previously.
[Michael Flierl]
Yes, that’s precisely it. It’s called multi-shot prompting, where you give it multiple examples: hey, X looks really good to me.
Y looks really good to me. Z looks really good. Do things more like that.
But a lot of people don’t realize you can create those examples using AI, as you just stated. In one sense, that’s quote-unquote synthetic data. So, you can do that and then get better model outputs.
I think it’s a problem when people interact with generative AI for 20 minutes, it hallucinates or provides a sketchy output, they don’t quite get what they wanted, and that’s where they stop. That’s really only the beginning.
It takes some time, some finesse. There’s an art and a skill to getting the outputs you’d like, and including examples is an excellent way to do it.
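The multi-shot prompting just described boils down to pasting examples you like ahead of the new task, so the model imitates their style. A minimal sketch, with a hypothetical helper name and placeholder examples, not the interface of any particular tool:

```python
# Minimal multi-shot (few-shot) prompt assembly: instruction first,
# then numbered examples, then the new task to imitate.

def build_multishot_prompt(instruction, examples, task):
    """Assemble instruction + numbered examples + the new task."""
    parts = [instruction, ""]
    for i, example in enumerate(examples, start=1):
        parts.append(f"Example {i}:\n{example}\n")
    parts.append(f"Now do the same for:\n{task}")
    return "\n".join(parts)

prompt = build_multishot_prompt(
    "Write a patient-education blog post in the style of the examples.",
    ["<blog post X I liked>", "<blog post Y I liked>"],
    "hand washing for preschool families",
)
```

The same assembly works whether the examples are your own past writing or, as Michael notes, synthetic examples the model generated and you curated.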
And AI, just as it can help you write more sophisticated, structured prompts, can also provide examples that you can modify and tweak and feed back to it, saying, these are examples of what I’d like.
[Dr Mike Patrick]
I named mine Rose. And the reason I did is because once I figured out how she can help me with lots of educational tasks, including preparing for this podcast, by the way, I decided she’s just beautiful like a rose. She rose above all the misinformation and the stuff that misleads people on the Internet to help create evidence-based content. So, Rose it is.
And, you know, we exchange pleasantries and say good morning to one another, because she knows what humans like. Let’s talk a little bit about scholarship and academic writing. Hallucinations are really important to consider as we talk about academics. There are AI platforms that are pre-trained on databases like PubMed.
And so, you’re less likely to get hallucinations when you’re using an AI that’s trained on medical research.
[Michael Flierl]
Yeah, there’s a whole bunch of potential applications, like Research Rabbit. Some are free. Some are paid.
Some offer a variety. Oftentimes they give you a free taste and try to upsell you. I think that’s an excellent application of generative AI technology.
At the same time, though, I would also argue, as an information professional, that it’s one tool among many. This is part of the black box problem, where we don’t fully understand how these models work; it’s not quite transparent why the model chose to retrieve X and Y and Z.
So I would suggest it be one tool among many. It can likely retrieve hundreds of articles that are relevant to you. And in fact, the research models, or the thinking models, can now go out and find publicly available information when a journal is open access.
I just ran one of these today, and it retrieved over 500 sources of information. And that’s not even a specialized tool like Research Rabbit.
[Dr Mike Patrick]
Another one that I have used, and I’m not getting any kickback from this, is a paid one that I have found really helpful. It’s called YOMU, Y-O-M-U, and it was trained on PubMed and other academic databases.
So, it’s not medical-specific; it was trained on lots of journals across lots of disciplines.
I love it because, as I type my paper or my book chapter, and I’ve used it for several of these, it just leaves me alone. Then if I pause, it’ll suggest another sentence.
And sometimes things go in a different direction, and I think, I hadn’t thought of that, but that’s a really good direction to go. It makes sense.
Or I can just keep typing and completely ignore what it’s done. The other thing that it does is I can highlight any sentence or any paragraph and then it will give me a list of references that support that. Now, some of those are going to be stronger than others.
Some aren’t necessarily going to be what those authors were really saying. So, it’s still on me to vet those references. But rather than starting with a literature search, I’m starting with my thoughts and my ideas and then checking myself: is that true?
Is there evidence to back up what I’ve said? YOMU just helps me find those references, which I find really terrific. And when I have submitted things using this, if the journal asks, did you use AI?
I check yes. And I’ve had things accepted and published, and they’ve never asked how I used AI. I’m just being transparent.
Yes, I did. And at the end of the day, it’s still up to me to make sure the manuscript makes sense, is well supported, all of those things. But the speed at which you can get to an end product is so much faster using these kinds of tools.
And even in scholarship and academic writing.
[Michael Flierl]
And I can imagine. Well, first off, that’s fascinating. I’ve never heard of that tool, but I’m also not surprised.
There are so many tools, so many different ways to use generative AI. We’re just scratching the surface. You know, it took roughly 10 years for the Internet to really kick off, which included infrastructure, UIs that most average humans could interact with, et cetera.
In one sense, we’re still in the early days of generative AI. But that’s mind-blowing, right? Yes, yes.
It’s trite to say, but the AIs we’re using today are likely the worst AIs we’ll ever use in our lifetimes. Who knows what the future is going to bring? But I also think this is indicative of an emerging trend around the large foundation models, as they’re sometimes called, something like Copilot, Gemini, or ChatGPT.
Those are large models meant to do a whole slew of different things, as opposed to much narrower AI applications, which are much more precise and have a much more specific goal. They’re not able to do everything, but they can do one thing really well. And as we progress, I think it’s much more likely that there’s not going to be one big AI solution for everything; rather, we’re going to be able to customize or find smaller generative AI solutions that work for a specific aspect of writing or a specific aspect of citation.
And on that, too: AIs will still hallucinate. But if you are a scholar, you are ultimately responsible for your scholarly product. So that’s less a generative AI problem than a matter of your own responsibility.
You could have copied and pasted a reference without ever having engaged with it. That’s just as bad as an AI hallucinating something out of whole cloth because it looks good.
So ultimately, the responsibility still lies with you and with how you use the tool. And I think we’re going to have to engage in disciplinary discussions about what is and is not appropriate AI usage. It used to be that ChatGPT could be listed as a co-author.
That has since been deemed inappropriate by most publishers. Who knows what the future is going to bring? But I think we need to have serious discussions: could something like this augment or improve scholarly workflows or scholarship in an area?
I think the answer is yes. And I think we’d be foolish, while acknowledging the costs, not to attempt to use it toward something that would be absolutely worthwhile.
[Dr Mike Patrick]
Yeah. One thing I always do with a product I’m going to put out there, whether academic or education for patients and the general public, is run it through a plagiarism checker. Grammarly, for example, has one; so does ProWritingAid.
There are multiple ones out there that will look through the work you’ve created along with your AI and let you know, hey, this exact sentence was someplace else. And that might be OK if it’s standard knowledge: if we’re talking about CPR and it’s telling you how deep to push when you’re doing chest compressions, that doesn’t really matter.
But if there are larger chunks, then you can say, oh, I need to rework this so that it’s not someone else’s exact words.
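At its core, the exact-match check described above is shared n-gram detection: flag word runs that appear verbatim in both a draft and another source. This toy version is only illustrative; real checkers like Grammarly or ProWritingAid compare against large web and journal indexes:

```python
# Toy illustration of exact-match plagiarism checking via n-gram
# overlap. The core idea is just set intersection of word sequences.
import string

def ngrams(text, n):
    """All n-word sequences in text, lowercased, punctuation stripped."""
    words = text.lower().translate(
        str.maketrans("", "", string.punctuation)).split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlapping_phrases(draft, source, n=6):
    """n-word phrases that appear verbatim in both texts."""
    return ngrams(draft, n) & ngrams(source, n)

draft = "Push hard and fast in the center of the chest at 100 to 120 per minute."
source = "Rescuers should push hard and fast in the center of the chest."
shared = overlapping_phrases(draft, source)
# `shared` holds the common 6-word runs, e.g.
# ('in', 'the', 'center', 'of', 'the', 'chest')
```

As the hosts note, a match on a phrase like this standard CPR guidance may be fine; the judgment call about reworking larger chunks stays with the author.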
[Michael Flierl]
And I think it’s important to differentiate between a plagiarism checker and an AI checker, because AI checkers have serious problems. If you noticed, OpenAI used to offer an AI checker, and they don’t anymore because it just wasn’t effective.
They couldn’t trust it.
[Dr Mike Patrick]
So, I do run my stuff through that, too. And it never really flags much of it as AI, because it’s been so customized and writes so closely to my voice that it really can’t tell it’s AI.
[Michael Flierl]
And it’s pretty trivial to fool: you can do translation tricks, like English to German and back to English. So it’s really kind of a fool’s errand to try to determine whether or not something is AI generated.
Also, there are a lot of inherent biases in those checkers. For instance, recent research suggests that non-native English speakers are more likely than native speakers to have their writing flagged as AI generated. So there’s also the potential for bias to creep in.
[Dr Mike Patrick]
Well, this has been a really fascinating conversation, and I feel like we could easily go another hour, so we’ll have to schedule another time. I do want to go into prompts in a little more detail.
Maybe we’ll make that an additional episode in the future. I kind of teased that we were going to do that, and I feel like we have talked about prompts along the way.
But it’s really important not to write one-sentence prompts. You really want a paragraph, because, as we think about how these AIs work, we want to give the model as much information and as many specifics as possible to get out what we’re after.
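One way to make the "paragraph, not a sentence" habit stick is a fill-in-the-blanks template that forces you to supply role, audience, task, constraints, and format every time. A sketch; the field values below are placeholders, not recommendations:

```python
# Sketch of a reusable prompt template that demands a paragraph's
# worth of detail. Every field must be supplied to build the prompt.

PROMPT_TEMPLATE = """You are {role}.
Audience: {audience}
Task: {task}
Constraints: {constraints}
Format: {fmt}"""

def build_prompt(role, audience, task, constraints, fmt):
    """Fill in the template; missing fields raise an error by design."""
    return PROMPT_TEMPLATE.format(
        role=role, audience=audience, task=task,
        constraints=constraints, fmt=fmt,
    )

prompt = build_prompt(
    role="an experienced medical educator",
    audience="third-year medical students",
    task="explain the pathophysiology of sepsis",
    constraints="flag anything uncertain; stick to widely accepted mechanisms",
    fmt="a 300-word summary followed by three self-check questions",
)
```

The specific fields are a judgment call; the point is that a template turns the vague advice "be specific" into a checklist you run every time.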
[Michael Flierl]
Yeah, absolutely. I will just say maybe one concluding note is that we have agency with generative AI as educators. I think it’s easy to feel overwhelmed and vulnerable, but we can decide what AI looks like in higher education.
And so that’s maybe a parting challenge to all the listeners of how can you incorporate it and how can you help move the ball forward, enabling students to be more intentional and creative with generative AI use. Even if you’re not an AI expert, you can still make that happen.
[Dr Mike Patrick]
Yeah, really important. So, thank you so much for stopping by and chatting with us today. For all the listeners out there, all the resources that we have talked about during the course of this conversation will be available as links in the show notes.
And you can find that over at Famecast.org. This is episode nine, and we’ll have all the links there in the show notes for you. So once again, Michael Flierl, Associate Professor and Student Learning Librarian at The Ohio State University.
Thank you so much for stopping by today.
[Michael Flierl]
Thanks so much, Mike. Appreciate it.
[Dr Mike Patrick]
We are back with just enough time to say thanks once again to all of you for taking time out of your day and making FAMEcast a part of it. Really do appreciate that. Also, thanks again to our guest this week, Michael Flierl, Associate Professor and Student Learning Librarian at The Ohio State University.
Don’t forget, you can find FAMEcast wherever podcasts are found. We’re in the Apple Podcast app, Spotify, iHeartRadio, Amazon Music, Audible, and most other podcast apps for iOS and Android. Our landing site is Famecast.org.
You’ll find our entire archive of past programs there, along with show notes for each of the episodes, our terms of use agreement, and that handy contact page, if you would like to suggest a future topic for the program. Reviews are also helpful wherever you get your podcasts. We always appreciate when you share your thoughts about the show.
If your institution or department requires Faculty Development Education Credit, also known as FDED, we have good news for you. Select episodes of FAMEcast, ones that deal with teaching and learners, come with free FDED credit from the Center for Faculty Development at Nationwide Children’s Hospital. Easy to claim your credit.
Simply listen to the podcast, which you’ve already done. Look for the FDED link in the show notes over at Famecast.org. Again, this is episode nine.
Follow that link to Cloud CME, register or sign in to your free account. Take a brief survey. You have to click the materials tab to find the survey.
And once you submit the survey, you get the credit. You can even download a transcript of your credit to share with your institution or department. We have a couple of other episodes that had FDED credit.
FAMEcast number one, Mentorship and Coaching in Academic Medicine. And FAMEcast number seven, Teaching on a Busy Clinical Service. Also, we have additional resources that you can find on our website over at Famecast.org.
Click on the resources tab up at the top of the page. And we have two links to faculty development modules on Scarlet Canvas. One is a set of modules on advancing your clinical teaching.
And another one is FD4ME, or Faculty Development for Medical Educators. Thanks again for stopping by. And until next time, this is Dr. Mike saying stay focused, stay balanced and keep reaching for the stars. So long, everybody.

