You’re Not Too Late: How Clinicians Can Embrace and Master AI Tools Today
The AI-Ready Doctor · June 21, 2025
00:43:26 · 29.85 MB




Welcome back to The AI-Ready Doctor, the podcast that demystifies artificial intelligence and shows you how to put it to work in your clinical practice right now. In today’s episode, Dr. Hassan Bencheqroun, an experienced critical care physician and digital medicine researcher, tackles one of the biggest hurdles facing clinicians today: the fear that AI is just too complicated to learn or use. Dr. Bencheqroun breaks down what makes AI accessible even to those without a technical background and shares personal stories from the ICU where AI became not a replacement, but a powerful assistant that restored his purpose in medicine.

From busting myths about AI’s capabilities, to offering real-world examples of how AI can streamline rounds and improve patient care, this episode is packed with practical advice, relatable anecdotes, and empowering takeaways. Whether you’re hesitant to try new tools or feel like you’re already behind, Dr. Bencheqroun explains why you’re right on time to join this digital transformation. Plus, he shares essential AI terms every clinician should know, reveals the skill of prompt writing as the new “SOAP note,” and answers the big question: What can AI never replace in medicine?

If you’ve ever wondered how to actually talk to AI, when to trust it and when to double-check, this conversation is for you. Stay tuned as we make AI feel less like science fiction, and more like your next step toward better, more fulfilling practice.

Timestamps:

00:00 AI Can't Replace Human Care

03:38 AI's Limitations in Medicine

08:09 "Mastering AI Prompting"

12:11 "AI: A Tireless Smart Assistant"

16:10 AI Interaction Basics: Prompting & Models

20:14 Synthetic Data: AI's Creative Limitation

22:43 AI Responsibility and Privacy Concerns

27:09 AI Tool Surpasses Expectations

28:49 AI Prompt: Clarifying Intentions

32:12 AI-Assisted ICU Rounds

36:02 AI Revolutionizing Information Retrieval

41:20 AI in Medicine: Caution Advised

44:53 AI Revolutionizing Medical Diagnostics

47:21 Key Skills for AI Utilization

50:39 AI's Impact on Medical Community

52:58 Spark & Share: Inspire Colleagues


How AI is Revolutionizing Medicine: Key Takeaways from “The AI Ready Doctor” Podcast

Artificial Intelligence (AI) is rapidly transforming healthcare, but many clinicians still feel apprehensive and sometimes overwhelmed by the idea of integrating AI into daily medical practice. In the latest episode of “The AI-Ready Doctor” podcast, Dr. Hassan Bencheqroun breaks down these barriers, highlighting that AI doesn’t have to be intimidating or complex. Instead, it can be a powerful tool for any clinician, regardless of their technical background.

Here are some insightful lessons from the episode on how AI can empower medical professionals, improve patient care, and why you’re not too late to the party.

AI Isn’t Here to Replace You: It’s Here to Help

One of the biggest fears among healthcare professionals is that AI might one day replace them. Dr. Bencheqroun tackles this misconception head-on. He emphasizes that while AI can process and synthesize vast amounts of information, it lacks the human touch: empathy, contextual judgment, and the ability to build authentic relationships with patients.

“No AI can do what we do,” Dr. Bencheqroun says. “We deal with emotion, memory, and real energy exchanges. AI deals in data and algorithms.”

Prompts Are the New Clinical Skill

Early in his AI journey, Dr. Bencheqroun admits, he expected AI to be smarter than it was, almost like the scanners from Star Trek. He quickly learned that the way you ask AI (known as “prompting”) is critical.

Think of AI as an intern who never sleeps: it can provide immense knowledge, but only if you give precise instructions. Prompt engineering, learning how to communicate effectively with AI, is fast becoming one of the top skills in medicine.

For example, instead of simply asking “What is a Swan-Ganz catheter?” Dr. Bencheqroun recommends a more tailored prompt, such as, “Act as an experienced interventional cardiologist and explain the Swan-Ganz catheter in three sentences.” The specificity of your prompt produces better, more practical answers, much like crafting a great SOAP note.
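The layered structure of that prompt (persona, then task, then constraint) can be sketched as a small helper that assembles the pieces before you paste them into any AI tool. The function name and structure here are illustrative assumptions, not something from the episode:

```python
def build_prompt(role: str, task: str, constraint: str = "") -> str:
    """Compose a role-based prompt: persona first, then the task, then an optional constraint."""
    parts = [f"Act as {role}.", task]
    if constraint:
        parts.append(constraint)
    return " ".join(parts)

# A vague prompt versus the layered version described above:
vague = "What is a Swan-Ganz catheter?"
layered = build_prompt(
    role="an experienced interventional cardiologist",
    task="Explain what a Swan-Ganz catheter is.",
    constraint="Answer in three sentences.",
)
```

The point of the sketch is only that each added layer (role, constraint) narrows where the model "looks," which is exactly the behavior described in the episode.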

Essential AI Terms Every Clinician Should Know

Dr. Bencheqroun offers five critical terms clinicians should understand:

  1. Prompt/Prompting: The questions or instructions you provide to AI. Better prompts yield better results.

  2. Model and Dataset: The AI’s “brain,” meaning what it has been trained on and how it generates answers.

  3. Hallucination: When AI makes up information that sounds plausible but isn’t true. Always verify critical data.

  4. Synthetic Data: Artificially generated information that may mimic but not perfectly reflect real-world data.

  5. Shadow AI: The use of AI tools without oversight, potentially risking privacy and compliance, especially with sensitive health data.
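The Shadow AI caution above implies a concrete habit: strip obvious identifiers from any text before it reaches an external model. The sketch below is purely illustrative (the patterns and function name are assumptions, and real de-identification requires a vetted, institution-approved tool, not a few regexes):

```python
import re

# Illustrative patterns only; production de-identification needs far broader coverage.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US Social Security numbers
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),         # dates like 03/15/1962
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # phone numbers
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tags before prompting an AI tool."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Pt DOB 03/15/1962, SSN 123-45-6789, callback 619-555-0142."
print(redact(note))
```

Even a minimal habit like this makes the "you are still responsible for what goes into the model" point actionable rather than abstract.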

Real-world AI in the ICU and Beyond

Dr. Bencheqroun shares powerful practical examples of AI in action:

  • Using AI to help draft compassionate, customized conversations for patients and families, saving cognitive load during emotionally charged moments.

  • Leveraging AI during ICU rounds to instantly synthesize the latest literature tailored to complex cases, freeing up more time for critical decision-making.

  • Creating presentations and educational content quickly, enabling more focus on high-value activities like patient care and teaching.

AI Is for Everyone, and Now Is the Best Time to Start

You’re not behind. In fact, Dr. Bencheqroun asserts, “You’re exactly on time.” Just as clinicians had to adapt to EMRs and new medical technologies, AI is simply the next evolution in healthcare tools, one that works with natural language and augments, rather than replaces, your expertise.

Top Takeaways for Clinicians:

  1. Don’t fear imperfection. Practice using AI tools like any clinical skill; you’ll get better with time.

  2. Learn to prompt. The quality of your input determines the quality of your output.

  3. Always be the human in the loop. Use critical judgment and verify AI’s output, keeping patient care and safety paramount.

The message is clear: AI can empower physicians, reduce burnout, and sharpen clinical thinking when used thoughtfully. Now is the time to experiment, iterate, and make AI an assistant that elevates, rather than replaces, your unique human skills.

Want to dive deeper? Listen to the full episode of “The AI Ready Doctor” for real-world examples and practical tips on making AI your daily medical ally.


Dr. B's LinkedIn page: https://www.linkedin.com/in/drbmedicalai/

Dr. B's website: https://drbmedicalai.com/med-ai-academy/

The AI-Ready Doctor website: https://aireadydoctor.com/

TopHealth Media: https://www.tophealth.care/


[00:00:12] Welcome back to another episode of The AI-Ready Doctor, the show that simplifies AI so you can use it your way right away. Today's episode tackles one of the biggest mental roadblocks out there, the fear that AI is just too complex. But as we'll hear from Dr. Hassan Bencheqroun, critical care physician, researcher, someone deep in the trenches of digital medicine,

[00:00:38] AI isn't inherently complicated. It's just rarely taught in ways that actually stick. Let's fix that. Right, Dr. Bencheqroun? Right. I think it's the fact that AI isn't inherently overwhelming. It's just under-explained. And that's the goal, to enhance AI literacy. How do I establish AI in my day-to-day practice?

[00:01:04] Awesome. So let's rewind for a second. When you heard about AI in medicine, what was your honest reaction? Were you excited? Skeptical? What was it like for you? I think when I heard AI at first, I almost recoiled, because most of us physicians deal with the human body. We don't think of ourselves as techies. So when I heard AI, I rolled my eyes and I said, let's see what you got, but it's probably going to be one of those techie things that they want to put into medicine.

[00:01:34] I didn't expect it to be this practical, this usable in our life. So you've mentioned before in the last episode that AI didn't replace medicine for you. It restored your purpose. Can you take us back to that moment where everything shifted? I think I'm going to blend that moment with today's moment. We sort of have to migrate away from the AI is going to replace us. No one can do what we do.

[00:02:03] No one can sit at the bed of the patient, hold their hand. No one can have that nuance. Well, I don't want to say no one. I meant no AI can do what we do. And so to think that AI could replace contextual, nuance, back and forth, energy exchange, communication exchange, and being there for your patient is going to be pretty difficult. We deal with emotion. We deal with memory.

[00:02:32] We deal with exchange of energies. AI deals with tokens and simple, you know, half a word or a whole word and predictability and algorithms. We took an oath. AI didn't. So the fact that it's going to replace us, I remain very, very skeptical of that. But that moment where I realized that AI is not going to replace me, it came pretty close.

[00:03:00] I mean, we all remember Watson, the IBM computer that won a chess game and reasoned and did all the things that we wished we could do. And that moment was pretty sobering. Take that and compare it to now taking care of patients. At some point when I started to understand what AI can do, but mainly what it cannot do.

[00:03:30] How did I do that? It was one day where I was in the intensive care unit and I was going to see a patient who was a bit of a head scratcher. I had just finished a morning report session whereby we present a patient slowly as they come in.

[00:03:50] And your audience made out of medical students and residents and fellow colleagues ask you questions and you give information about the patient slowly in order to guide them through the maze of what's the next step. Are we able to draft a list of possibilities, which we call differential diagnosis, and then the next piece of data.

[00:04:13] And then you go in and rearrange that list all the way to getting feedback from the testing that we've done and see if we can arrive to the diagnosis. And I went to, after de-identifying everything, pose it as a question to, at that time, ChatGPT and then Gemini and then Claude. And I looked at the answers that were given. They were pretty generic.

[00:04:40] They were pretty not quite as reasoned as when we put it in a context. You had to feed it certain things in order to guide it to where it went. Granted, since then, clinical, not clinical, but deep reasoning has been added to those models and it is much better than it was back then. But still, that's the moment where I identified that it might be able to give me some very cool sounding diagnoses.

[00:05:09] But at the end of the day, putting it in the context of the patient to arrive to the conclusion was ours and ours only. And that's the day. It's not what it could do. I realized what it could not. I think that we talked about it in our last episode, but it's really important to land back into that idea that how AI is different from humans. And I love the way that you touch up on that.

[00:05:36] Like we exchange, we communicate in a way that builds connection and the emotional part. And that's just something that a machine or a computer or a tool like this cannot do. But it can be an extension maybe of our brain and help us rather than do things for us. So having said that, what did you get wrong about AI at the beginning and what helped you in learning? That's an interesting question. What did I get wrong about AI in the beginning?

[00:06:06] Many things. But one of the most redundant and repetitive items was that somehow I ascribed to it my own bias towards how smart it is. Almost like a Star Trek episode. I don't know if some of us remember the Star Trek scanner, which was passed over one of the actors of Star Trek Enterprise. And it just spits out everything that you need. We have that bias in us.

[00:06:35] And I expected it almost to read my mind. With AI, unless you specifically tell it what you want, it may not give you what you want. Case in point, if I were to think of an AI as an intern that never sleeps, yes, they have the knowledge, but I can't just tell them, go start antibiotics and then be mad at them for not telling them which ones.

[00:07:01] So I learned that how you ask AI is fundamental. It doesn't matter which AI. I learned AI in filmmaking. I learned AI in image generation. I learned AI in data analysis. Across the board, the first thing to learn is how to ask AI, which is now known as how to prompt AI or prompt engineering.

[00:07:26] It is a skill and one of the top 10 skills in every industry using AI in 2025.

[00:07:33] If I were to ask AI to give me an explanation about what a Swan-Ganz catheter is versus if I asked it to think of itself as an experienced interventional cardiologist, then asked for, tell me what a Swan-Ganz catheter is. The responses are completely different. The first one is wide open.

[00:08:02] The second one narrows the field in which it looks and the output that it gives you. If I added a third thing, think of yourself as an interventional cardiologist. Tell me what a Swan-Ganz catheter is. Now I want you to give it to me in three sentences only. Then that narrows it down even further to give me what I want. So it's not wandering everywhere to look for that answer.

[00:08:30] Prompting is absolutely essential. And the second thing is how I unlearned how I did it is by trying several things to see which one gives me what I want. And we call that iteration. You need to send back the answer with a new parameter. Sometimes the joke is, if only you can just say, is that the best you can do? It is hilarious.

[00:08:57] It actually apologizes to you and then redrafts its answer by saying, I'm going to give you a bit of a better answer. And wow, if you even augment that by saying, I want you to give it to me in 250 characters or less, or as in a fifth graders language, or half of it in English and half of it in Spanish. You can do whatever you want.

[00:09:23] But that's how I unlearned it, is by sending back and forth that conversation. And if you think about it, it's similar to a human conversation. Not to say that AI is human, but that's a behavior that we're used to, is what I'm trying to reference. That's super interesting. Like, it grabs the information, it processes the information, and it communicates the information based on the character that you provide for it to enact, almost. Exactly. Exactly. That's fun, actually.

[00:09:54] So if, in your own words, Dr. B, if you had to explain to your mom or a tired resident at 2 a.m. what AI is, how would you break it down? I would say, well, to my mom and to a tired resident, it's two different things. I remember, actually, both my parents, but especially my dad, wanted to learn AI. It was funny because whenever I was posting on my LinkedIn about AI, you could find him reading it and then making a comment.

[00:10:21] And I came home one day and I'm like, Dad, get off my social media. But it was more along the lines of, this is a super smart assistant at your fingertips at any time, day or night, that never gets tired, but still needs supervision. And I think I could say the same thing to a colleague.

[00:10:43] It is certainly almost like an intern that never sleeps, but still requires supervision.

[00:10:51] Imagine if we had an incredible tool that we use all the time, such as either UpToDate or now Open Evidence, and you put it together with the smartest resident or fellow and make that entity drink five Red Bulls and made it do all of the hard clerical stuff that you don't enjoy doing.

[00:11:16] So you can go back to being the curator rather than the secretary, if you wish. That is how I would explain it to a colleague. And for the people that think they are too late to experiment or to learn about AI, especially those clinicians, what would you say? I love this. It makes me smile when I hear that because you're not too late. You're literally exactly on time. A lot of us did not grow up learning how to code or how to program.

[00:11:45] We grew up basically in the hospital triage in chaos. We worked with so many complex things every single day. We had to learn how MRI works in order for us to interpret how the images fall into the context of the patient. We had to learn how ultrasound works. God, we had to learn EMR and how to put in orders.

[00:12:11] And I remember when we had to put in orders for sputum culture and I couldn't get it until one day somebody says, try respiratory culture. Nobody who works in the hospital would put it as a respiratory culture. Things have gotten better in all of these technologies. And now we have various courses that we can go to in order to get a leg up. But AI is still developing. So you're exactly on time.

[00:12:39] AI just needs your clinical brain to be its compass. And you need a tool that you can talk to in natural language. You've learned EMR. You won't have to learn that complex for this one because it works with natural language. All you have to do is be more specific. Beautiful, Dr. B. So now that you've inspired us all to learn about AI and now that we feel like we're not behind but right on time,

[00:13:07] give us five AI terms every clinician should know. And please explain each one in one sentence. Wonderful. I'm sure you've heard me say more than once prompting. Prompt is a keyword. And what it means is how you talk to AI, like your question and your request. Whether you want a meta-analysis, whether you want a literature review,

[00:13:32] whether you want help with drafting an H&P or a summary, whether you want AI to give you sentences in order to have a family meeting so that you can relate to them and show compassion, you can also ask it to just communicate with you in a natural conversation during your five-minute break. The way you talk to AI is prompting, and that has to be learned via trial and error.

[00:14:01] Term number two would be the model and the data set. What that means is AI is a program that has been trained on a data set. And how that data set training has been programmed and what data set has it required to train that model could make all the difference in the world. Example, one of my friends was having difficulties and conflicts as we would with her spouse.

[00:14:29] And she brought that situation to two types of AI different from each other. One of them told her, this is how you resolve a conflict, gave her keywords and methods by which to do that. The other one told her to leave him completely because she deserves better. And if you looked at the training data set and the model for each one of them,

[00:14:54] the first one was trained on social work, literature, and how to resolve conflicts and mediate. The other one was trained on romance novels and looked at relationships in that utopic, idealistic way. And the answer was completely different. So if you're not careful as to what model and data set it was trained on, you could get various answers.

[00:15:22] And if you are someone, as we are seeing now, over-relying on AI as if it's the smartest thing there is, you may miss this. So model and training data set is important. The third one is called hallucination. It kind of stuck now, but it's not the best word that I like for what I'm about to explain. It's AI making up stuff.

[00:15:46] And in medicine, we already know this phenomenon in patients that live with the condition of Alzheimer's, dementia, for example, whereby when you ask them something and they can't remember, they make up something, a name for somebody, or that they went somewhere, or that, yes, I remember you telling me that story. We call that confabulation. I think it's a much better word for it because hallucination is when you ask AI to give you,

[00:16:16] for example, references about a certain topic, and it literally makes up references with authors and a name and a journal. It is so almost accurate that it makes you believe it is. So I call it lying with confidence. And that's why we always have to double check on the output. So hallucination is a big deal.

[00:16:40] The fourth one, I would probably want to bring up the word that is synthetic data. And what that means is if you ask AI to generate new text or a new picture, it mixes things that it has learned from its data set and model and brings you a new image that's never existed before.

[00:17:07] Well, this is fine and dandy for a picture, but when you are starting to use AI for something more elaborate, such as processing healthcare data, it makes new data. That's why it's called generative AI. But after a while, we found out that synthetic data is only regurgitated data that was already there. It doesn't give creative new things that the genius of humanity has come up with. We're starting to see some edges of that.

[00:17:37] For example, a research in idiopathic pulmonary fibrosis, for example, that generated based on chemistry, generative AI and medical AI, a new molecule for the receptors and the chemistry of idiopathic pulmonary fibrosis. And now that protein is being tested in phase two and phase three studies to see if it can be a new medication. This is still very few and far between.

[00:18:06] Overall synthetic data is feared to not be as creative as we thought it would be. And then the last name is shadow AI. And shadow AI means that your residents, medical students, employees using AI tools without oversight or approval. And why it matters is because it can lead to security gaps, compliance risks, and let's face it, HIPAA violations in healthcare.

[00:18:35] I want to share with my colleagues that no matter what you say about AI, you're still responsible for the output that you're using and responsible for making sure no health protected information makes it into a model of AI because they're not yet, they've not given us the security that they are confidential enough and that they protect the patient's privacy and ethics.

[00:19:02] So those are the five words I want to say today. Prompting, the way you talk to AI. Model and training set, that is the brain of the system, what it was trained on to give you the answers you wish for. Hallucination, where AI says something that sounds right, but it's totally made up. And the fourth one is synthetic data. And that is when data is artificially generated, mimicking real world information,

[00:19:31] but without containing actual real world data. And then the fifth one is shadow AI, which is using AI tools by individuals without organizational approval or oversight. And why it matters is because it may lead to HIPAA violation and compliance risk and ultimately risk patient health and ethics being violated. Wow. So many things to learn, Dr. B. And now something a little bit more of your personal experience.

[00:20:01] What was the first AI tool or prompt that actually impressed you? Ooh, that is a good question. So the first prompt that made my jaw drop is when we all had those patients. And it was a case of a patient that was still young, had a young husband and a teenage daughter. And I was walking in the room to tell them that she is terminal and that the chances of her reaching the end of the day alive

[00:20:31] were extremely slim, let alone for her to come home. I was not looking forward to that conversation. It was a particularly busy day where I had five ICU admissions and we needed to do rounds. I had two residents to supervise, a noon lecture to go to. So I felt like I did not have enough space to retreat to in order to compose myself and put together a compassionate, but honest conversation.

[00:20:58] And so I basically went, compared three tools and I gave them the prompt. I want you to give me emotionally compelling, authentic, but honest words and sentences to use in the meeting I will have with an unfortunate young patient dying of multi-organ failure due to septic shock from pneumonia. Her husband and her teenage daughter are in the room. I am an intensive care physician.

[00:21:26] I want you to act as a specialist in these conversations. And you may derive your information from reputable sources on the internet and provide me with five or six sentences that I could read quickly and it would help me. And it was spot on. Now, many would say, well, is AI now who's going to talk to the patients? That means you are not really genuine.

[00:21:53] At the end of the day, I am the person who used those sentences. And I said to myself, that doesn't make it inauthentic. I could have used my social worker and talked to him or her and asked them to give me these sentences and it would not have been seen out of the norm. So why is this any different? And when I walked in the room and I was kind of tired and needing to connect with that family, it allowed me sentences that I found quickly

[00:22:23] and I reached out quickly to in order to make that meeting meaningful, even as I was tired. So that's the prompt that made my jaw drop. It was pretty spot on. The AI tool that made my jaw drop is an AI tool that we will probably come up with at some point and do a demo that allows you to do PowerPoint presentations. And I just asked it to put together a presentation of 15 slides about COPD,

[00:22:51] where we talk about patient compliance, where we talk about COPD exacerbations, future research and patient-centric recommendations. And I watched it in less than a minute create very compelling 15 slides that I had very little to update because it had access to the internet. And in my prompt, I told it where to go. I've used it and taught it to countless people since. Wow. So that prompt,

[00:23:20] like it ties back to everything that you've mentioned before of like, it was such a specific and complete prompt, the one you used. So I think this is very clear on how would you have given AI three words or just like, tell me how to, you know, just tell me how to tell this family that they're losing somebody they love. Something else would have come up and not exactly what you wanted or what you meant to share with this family.

[00:23:50] Because at the end of the day, you are the one that's setting the tone and the intention. So it's just helping you articulate or come up with the right words maybe, but not actually doing it for you. It just made it much easier for me to do the job I want rather than feel burned out or tired. And then later on, think back and say, I should have done this. I should have said that. That's really, that's really good. And I can't wait to see that other AI tool with a PowerPoint presentation.

[00:24:20] I've never seen that. There are a few that do the same. One of them is called Gamma. That's the one that I use a lot. The other one is called Beautiful.ai. That allows you to literally take your slides to a top level. The third one is called Decktopus, which has templates of decks that you could use. And AI chooses what to show you. Recently, I used one that is called Typeset.

[00:24:49] And that allowed me to create either PowerPoint presentations or even carousels for social media. And what I liked about them is that you can prompt each slide. If you don't like the slide and the picture that it showed, you can go back to that picture and prompt a new picture and it's generating it using AI. I had a presentation for the National Arab American Medical Association where I wanted it to tie up to the Arab identity.

[00:25:17] So I asked it to have an Arab flavor to the pictures. And then I changed it for a different one where it was more for medical students. Another one, it was more for AI in nursing. So it allows you to hyper-customize your presentations. And I'm sure we'll have one of our podcasts to do a demo for it. Really looking forward to that, Dr. B. And going back into the ICU,

[00:25:48] what's a real world example in the ICU, in education or at home where AI saved you or made your job easier? Saved you time or made your job easier? I'll tell you one that I use all the time. Now, a lot of people have already heard about ambient scribes. And usually that is the real world example, but that's not the one I'm going to use today. The one I'm going to use today is something that I incorporated in the past year in my ICU rounds.

[00:26:16] And that is when we are looking at a patient, we cannot have all of our knowledge right there with us in our brain. And what we're used to right now is, I'll look it up later. Because questions are generated during rounds and you're making decisions at that time and you're entering orders and you have your multidisciplinary team with you and everything is happening at that moment. Maybe even the family is there and you convey the plan to them.

[00:26:44] What I have now added is the ability to ask questions from AI regarding a medical topic. The last case was a patient who came in with a cardiac arrest, was on a ventilator, resuscitated, discovered that she had a blood clot in her lungs, which we assumed that that's what caused the cardiac arrest. And when we looked at where that comes from, almost invariably, it comes from the legs.

[00:27:14] We call that a deep vein thrombosis in the lower extremity or lower extremity DVT. Well, this time her lower extremity had none, but she had a large upper extremity deep vein thrombosis. For those to cause pulmonary embolism, it's possible, but we don't encounter it a lot. And then this person, when we gave her blood thinners and a thrombectomy

[00:27:40] where we removed that blood clot, the question became, what is the incidence of pulmonary embolism due to upper extremity DVT? And especially that the patient started to bleed, so we had to stop the blood thinner. A lower extremity DVT, we would put an IVC filter. You can't use an IVC filter to protect the patients from this blood clot in the upper extremity or the arm.

[00:28:10] So we were in that discussion and we did not have evidence in our brain. We know we've heard of it before, so what we would have done in the past is we'll look at it later or why don't you go and look it up and tomorrow present it to us? Well, we didn't. Now my residents know that I asked them to pull up their phones right away and I tell them it's no longer cheating. We shouldn't be embarrassed to do that. Medicine is not always a quiz and recall knowledge,

[00:28:38] while it's important for us to have, is no longer the only source by which you make decisions. You can aid your recall knowledge with real-time, customized, specific answers to your specific questions to that specific patient right at the bedside and you actually, we got all of the literature immediately. I asked each resident to use a different tool and they compared and contrasted and within three, four minutes, we moved on

[00:29:08] after we've looked at the literature that is specific to that patient. Before that, you would actually look on UpToDate or you would look on Medscape or you'd look on, and you have to read the entire thing. Even if you Google, you have to click on each link. I almost sometimes marvel what would be the generation in 10 years thinking of us? Would they sit and think? You mean to say that if you wanted a piece of information, you went to a website

[00:29:38] and you put in words and that website was controlled by algorithms that are nebulous to you by a certain company and then it gives you 160,000 answers and you have to click on each one of them, read it, and in your mind, synthesize them to get your answer? Is that what you're telling me? It would, now that's what we do and it's normal, but it would sound so archaic then

[00:30:08] because AI synthesizes all of that and gives you a hyper-customized answer within seconds so you can help the patient in ways we couldn't even dream of before. Wow. It's almost like you have to get rid of the taboo to move forward. That's really interesting. AI is pretty new. Like you said, we're in the present moment, but we still have to remove the taboo in order to, you know,

[00:30:37] help us move forward and grow, in any field. But it's really interesting. I say: normalize the conversation around using AI in everyday tasks, including rounds. They don't have to be quizzes anymore. And I think I've given some examples so far for anyone who asks, "Where do I start?" This is how you start. Use AI in rounds. Use AI to make

your lectures for your students or residents. Use AI to augment the discussion with your patients, especially if you're tired. Use AI to help you on your night shift, when your brain is tired, to come up with possibilities for why the patient is sick. Use AI to come up with hyper-specialized, customized answers to your questions that are evidence-based. That is something we always wanted to do; we just did not know how to operationalize it. And I think that's really important, because it's like

making AI something that empowers you rather than something that limits you. When it comes to quizzes, for example, in the past people would get really nervous if everything was a quiz, if everything was testing you. But if you have a tool that is there to help you, you feel more empowered; therefore you make better decisions; therefore you're more confident doing your job. Very much so. Okay, so you teach AI like it's a clinical skill. Why do you say

[00:32:06] prompt writing is the new SOAP note? Prompt writing is the new SOAP note because a progress note on a patient has a structure: you start very messily, and you get used to it and better at it over time. And it is the backbone of how you take care of your patients. If you learn the wrong way

to write a SOAP note, you will confuse everybody else on your team. That is why I feel prompt writing is the new SOAP note. But I have updated that recently. I now say that prompt writing is a skill as important to learn as a procedure, and it should follow the same rule: see one, do one, teach one. You also shouldn't be afraid of it the first time, because it's going to be clumsy initially. Eventually,

[00:33:06] by trial and error, you will get better at it. So prompt writing is the new procedural skill. Procedural skill, I like that. So you've said, "If it helps you think better, use it. If it makes you stop thinking, question it." Can you please expand on that? I like to think there are two components to our medical life when it comes to knowledge. On the one hand, we teach it as multiple-choice questions and answers, and that's how we test it.

But in real life, it is actually critical thinking: a pathway in your brain that you still have to apply to actual real-world scenarios. So AI is going to help you think and perhaps put certain suggestions in context. But AI still has the potential to hallucinate. Therefore, over-relying on it without questioning

it is a danger and a violation of our oath to our patients. So if you ask an ambient AI scribe to listen to your conversation with a patient, and it creates a note for you, and you just copy and paste it and move on without reading it, without changing certain things, without reorganizing and reprioritizing, then that is a place where it makes you stop thinking, and that's when you should question it.

[00:34:35] Right now, ambient scribes are just taking a note and organizing it, but we are already seeing the AI companies move into coding and into medical decision assistance in the progress note. So over-reliance on AI is a big deal, and we have to have organizational safeguards in how we coach our physicians, residents, and even nurses

to be the human in the loop. That's your sixth term of the day: human in the loop. Always be the supervisor; don't just rely on it to do the thinking for you. Remember that you are the human using the AI. And what part of medicine do you think AI will transform the most in the next five years? And what part of medicine will never belong to machines? The part of medicine that is now tedious and repetitive.

We see AI shine in scanning data and recognizing patterns. In the ICU, predictive analytics is incredible at picking up sepsis. It can already look at context, draw from the labs, draw from your note, draw from the vitals, and order patients by whom to round on first, not just going by bed number but looking for the sickest

one. It already has predictive analytics for who might have a cardiac arrest in the next few hours, for who might be fluid responsive versus not, and so on and so forth. So predictive analytics is huge. We see it shine in diagnostics: in radiology, in dermatology, in pathology. And we're already seeing multiple studies showing how AI is going to be

interceding in the electronic medical record, which now has a patient portal, so patients see those results before we as physicians have the chance to speak to them and explain what that biopsy means, what that spot on the x-ray means. AI is now able to translate those reports for patients in an explainable fashion and connect them to reputable evidence, so that if they want to look things up, the answers are at their fingertips before they come to

you, and you don't feel like you have to undo some of the harm already caused by Dr. Google. So we can see a lot of these things already. I mentioned AI in research, where we're already seeing synthetic proteins being tested for drug development, perhaps for problems we did not pay attention to or see before, because no human eye can scan data and pick out patterns this way. So there are multiple ways

that I see it helping humanity. An exciting present and future ahead of us. Dr. B, I'd like to ask you for three takeaways from this episode. What would you share with our listeners today? Three takeaways. First: nothing we started in our lives or in our clinical training went right the first time. Otherwise, if we had stopped the first time we tried to stand up from crawling and fell, we would never have walked. So know

that you will be a bit clunky, a bit clumsy, a bit messy. Please divorce yourself from perfection. Use a tool and try it for one week; then use a second one and try it for another week. That's the Dr. AI B method. The second takeaway applies to anything AI, whether in education, in law, in medicine, anywhere. Two things to remember. One: how you ask it

will determine what you get out of it. So prompting is a skill, not a magic trick. It will teach us to be specific and not skip things. Two: when you are handed an AI tool and you don't know much about it, ask what model it was trained on, so you know what kind of output it's going to give you. We're starting to see AI in the EMR, in billing, in dictation.

If you don't get curious about what model it has been trained on, you will be over-trusting. And the third takeaway: always be the supervisor of that AI. Anything you get from it, check it before you give it. It's the same as with a resident on their first day in the ICU: I'm going to check the orders they put in, and they want me to check those orders, in order to

make sure they don't harm a patient. So always be the curator, the person who verifies. AI is your assistant, not your replacement. I want us to use it to sharpen our thinking. I want it to remove all the automated busywork that burdens us and increases our burnout, but the clinical judgment is ours. And isn't that the fun part of medicine anyway? Thank you, Dr. B. Well, finally,

who are you really speaking to when you record this podcast? Who is in your mind when we hit record? It's amazing that you ask, because people think I speak to doctors. I am speaking to patients. I am speaking to caretakers of patients. I am speaking to the LGBTQ+ community, who are looking for a physician who sees them and can help them with their specific healthcare needs. I am

speaking to the investor in the next AI solution. I'm speaking to the physician researcher. I'm speaking to the fellow in their last year, who has to publish, look for a job, figure out a move, and still finish their third year. I'm speaking to the medical student who did not grow up in an English-speaking environment and feels they may need a leg up in order to step up

their game. I am speaking to the medical educator who used to need a whole video production team to make a video, and who can now use an AI tool to create it. I'm speaking to administrators and policymakers, so they understand how much AI can magnify biases and magnify problems if we're not careful. I'm speaking to women

in medicine who feel they are always nurturing and are always told they don't negotiate their worth well; who, when presented with an opportunity they should decline, don't know how to say no but at that moment have to say no. There are plenty of people I speak to. And maybe lastly, I'm speaking to the spouses of healthcare professionals, because we bring a lot of this home, and they may

need some support and help understanding what we say, because we talk about our day and how it wore us out, but they don't know what a central line is or what it means that somebody has a pneumomediastinum. Yet they want to help us and be supportive of us. So these things are incredibly important, and these

best and maybe the only chance for our patients. So if there is something out there to help them, why not learn how to use it? Thank you so much, Dr. B. So much to look forward to in the next episode. Well, if you're still with us, you've taken another step toward becoming an AI-ready doctor, and that's no small thing. Dr. B reminded us today: you don't need to master every algorithm to get started. You just need questions, judgment, and

[00:43:05] a willingness to try. If this episode gave you a spark, or even just a new way to think about your own practice, send it to a colleague who's still on the fence. Until next time: stay curious, stay thoughtful, and stay AI-ready. Thank you very much from The AI-Ready Doctor. Have a wonderful day.