“I am sure I can’t tolerate uncertainty”.

    “Medicine is a science of uncertainty and an art of probability”

                    -Sir William Osler

    There were many people who were saddened at the passing of Alex Trebek. My family was no exception. My parents and I would watch “Jeopardy!” at 6 pm, yelling out answers while eating dinner. Without fail we would yell at the players to bet it all on the Daily Double; my mom and dad would marvel at how I could miss most of the questions in the History section and yet run the category on Potent Potables. Decades later, I still watch it with my family. And I always feel a little bad when I miss a question.

    Alex’s fellow Canadian Dr. Osler, one of the founders of Johns Hopkins Hospital and a renowned diagnostician, probably would have rocked “Jeopardy!”. Interesting, given his quote above. Are we, as a profession, comfortable with being uncertain? 

“Tolerating Uncertainty - The Next Medical Revolution?”, linked below, was published in the NEJM in 2016. It’s only two pages and totally worth the read.

https://areasoci.sirm.org/uploads/Documenti/SDS/828b6893d23b878ccb2be267ac322d6a1c91085c.pdf

It really doesn’t even question whether we as medical professionals are comfortable with uncertainty. It just assumes we aren’t. We are constantly presented during the educational process with messages that we are either “right” or “stupid”.

            Uncertainty = ignorance

In our minds, the path to success begins with the correct answer. Anything else is unacceptable. Jumping to a diagnosis just to get to an answer may be just as problematic, as we may miss key information or exercise cognitive bias for no other reason than that we want to push that button and get our Daily Double correct.

Trying to achieve a sense of certainty too quickly

                                 |
                                 |  <-- cognitive bias
                                 v

Premature closure of the decision-making process.

Ok. So how do we change this process? The authors suggest that the first thing to do is to stop equating uncertainty with a lack of knowledge or being bad at what we do. Uncertainty is not the final destination. It’s a rest stop. If uncertainty is the rest stop, then what’s the car we drive to get there?

Tolerance of uncertainty ---> CURIOSITY

Wanting to know the answer, being transparent about not knowing it, and committing to the patient that we truly desire to get to it for their benefit. Maybe that’s why the authors link a decreasing ability to tolerate uncertainty with an increased risk of burnout. The pressure to be “right” all the time is not realistic.

I teach martial arts in my spare time (it’s therapy for me. Anti-burnout meds. And sparring kids half my age without getting beat up bolsters my ego), and we traditionally ask students questions at belt rank exams about martial arts history, techniques, etc. We do our best as instructors to teach students the information they need for this, but of course sometimes they freeze up at testing. So we teach them one simple phrase to use if they forget.

“I don’t know, but I will find out”. 

We teach them that that is a perfectly acceptable answer. AND IT IS. We are not taught to do this in medical school or residency. And we don’t realize that when we say this to our patients, because we really don’t know and we need to take more history, do another physical, do more research, maybe order another test, the patient is absolutely fine with it. In my experience, patients are much more likely to have a collaborative relationship with a physician who honestly expresses his or her humanity.

It’s not about right or wrong. Maybe that’s the best bit of evidence that lets us know it’s OK?

I want to be a Jedi. 🙂

Next up. Is there a way to assess our natural tolerance of uncertainty? 

Thanks for reading. Peace.

What kind of stats game do you play?

Quick post today. 

Anyone who reads medical literature on the regular has seen “p” values. We have a basic understanding of the statistics used (I’m totally referring to myself; I have relearned statistics in a short-term form for almost every board exam I’ve ever taken).

It wasn’t until I started the deep dive into medical diagnosis and the fits and foibles thereof that I learned about Bayes theorem.

Here’s an easy link. 

https://stats.stackexchange.com/q/22

The easy link looks a lot nicer than the actual equation. 

https://wikimedia.org/api/rest_v1/media/math/render/svg/87c061fe1c7430a5201eef3fa50f9d00eac78810
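If the rendered image doesn’t load, the equation itself is short. In its standard form:

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```

Here P(A) is the prior (what we believed before seeing anything), P(B | A) is the likelihood of the new evidence given the hypothesis, and P(A | B) is the updated, posterior belief.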

Basically, the frequentist says “common things are common”. “When you hear hoofbeats, think horses”.

The Bayesian says, “I hear hoofbeats. It must be horses”, but then looks out the window and says “but I see black and white stripes on those horses. Sooo, maybe those hoofbeats are zebras”. Bayesians update the hypothesis based on the new data collected.
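As a toy sketch of that update (all of the numbers here are invented for illustration), the hoofbeats example might look like this in code:

```python
# Toy Bayesian update for the hoofbeats example. All numbers are made up.
# Priors: horses are far more common than zebras.
prior = {"horse": 0.99, "zebra": 0.01}

# Likelihood of seeing black-and-white stripes, given each animal.
likelihood_stripes = {"horse": 0.001, "zebra": 0.95}

# Bayes' theorem: posterior is proportional to likelihood * prior, then normalize.
unnormalized = {h: likelihood_stripes[h] * prior[h] for h in prior}
evidence = sum(unnormalized.values())
posterior = {h: unnormalized[h] / evidence for h in prior}

print(posterior)  # stripes flip the odds heavily toward zebra
```

Even with zebras at a 1-in-100 prior, stripes are so much more likely under “zebra” that the posterior lands above 90% zebra. The prior does the frequentist’s work; the update does the Bayesian’s.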

I wonder if there are many truly excellent diagnosticians out there who are Bayesians, and don’t even know it. 

I also wonder how often we miss the diagnosis in front of us because of frequentism.

Thanks for reading and considering. More to come. Peace.

Shared Decision Making, grass roots style

    In my experience, transparency is not something discussed with regard to clinical decision making. Below is a link posted by the Patient and Client Council to promote a project from the Clinical Education Center in Northern Ireland. They are asking for patients to participate in a Zoom conference to discuss Shared Decision Making between patients and clinicians.

    “Shared decision making is a process in which individuals and clinicians work together to understand and decide what tests, treatments, or support packages are most suitable bearing in mind a person’s own circumstances. It brings together the individual’s expertise about themselves and what is important to them together with the clinician’s knowledge about the benefits and risks of the options”.

https://patientclientcouncil.hscni.net/events/shared-decision-making/

    How awesome is this? Do you think this is something that would be viable in the US? 

Thanks for reading. Peace.

In SPADEs


    My dad had a great saying for his medical students and residents. 

“Eighty percent of the patients we see will get better no matter what we do, 10 percent will get worse no matter what we do. It’s the ten percent in between that we have a true opportunity to help”. 

And his students probably never said this to him, but I was his daughter. I could get away with it.

“Dad, how do we know which is the ten percent we can help?”

How do we identify those patients? And even if we could do that, how do we make sure we don’t screw it up and miss something? 

This one concept leads us most readily into the importance of understanding the diagnostic process, and ourselves within it; the cognitive nature of medical diagnosis, even within the larger framework of decision making as a whole. 

We could actually go through each chart in which a diagnosis is missed, reviewing the clinician’s process, looking at the data and deciding where the breakdown in decision making occurred. There are LOTS of problems with this.

  1. It’s a hard-fought [sporting event of your choice] and we already know who won and lost. Knowing the end absolutely skews data analysis. We are looking for confirmation of a diagnosis that we are confident is correct.
  2. Because we know the end, small pieces of information that would have been central to making the diagnosis are more easily recognized. The narrative changes because we are telling the story with the documented conclusion in mind.
  3. We have oodles and oodles of time to review a chart. The clinician had maybe 5 minutes to see the patient and review the data. We are looking at the chart at our comfy desk with a cup of pour-over and fuzzy socks. The guy or girl who did the work was seeing upwards of thirty patients in clinic and was trying to get to his or her kid’s soccer game at 5:30.

Chances are, if a chart is being reviewed, something bad happened. While we all seek to be excellent diagnosticians, the main reason this whole topic even matters is that we are trying to avoid the catastrophic outcome.

A paper published in BMJ Quality & Safety in 2018 sought to take a broader scope of data review to identify problems with medical decision making. It discussed using Symptom-Disease Pair Analysis of Diagnostic Error (SPADE) to look at large blocks of clinical data and identify missed diagnoses (Liberman and Newman-Toker, 2018).

Let’s say you have a 25-year-old otherwise healthy female patient who comes in to an urgent care with shortness of breath. The patient is worked up, and the physician who sees her diagnoses her with bronchitis, gives her antibiotics, and sends her home. Two days later, the patient presents to the Emergency Department, bottle of antibiotics in hand, with significant worsening of her symptoms. At that time, she undergoes a VQ scan which is read as high probability, and she is diagnosed with a pulmonary embolus. On review of her records, we find she was started on combination oral contraceptives in the last three months, which increased her risk of thrombotic events. She is started on anticoagulants, changed to a progestin-only oral contraceptive, and is discharged home in good condition.

In our SPADE analysis, we would use the symptom-disease pair of shortness of breath and pulmonary embolus.

https://qualitysafety.bmj.com/content/qhc/27/7/557/F1.large.jpg?download=true

“The framework shown here illustrates differences in structure and goals of the ‘look back’ (disease to symptoms) and ‘look forward’ (symptoms to disease) analytical pathways. These pathways can be thought of as a deliberate sequence that begins with a target disease known to cause poor patient outcomes when a diagnostic error occurs: (1) the ‘look back’ approach defines the spectrum of high-risk presenting symptoms for which the target disease is likely to be missed or misdiagnosed; (2) the ‘look forward’ approach defines the frequency of diseases missed or misdiagnosed for a given high-risk symptom presentation.” (Liberman and Newman-Toker, 2018).

Ideally, we can collect SPADE data on a multiplicity of symptom-diagnosis dyads and look at the number of times a diagnosis is missed by “looking back”. We can then use that data to “look forward”, based on a patient’s symptom presentation. 
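As a rough sketch of that bookkeeping (the visit records, the pairing of an initial presenting symptom with a final diagnosis, and the helper names are all invented for illustration):

```python
# Hypothetical records pairing an initial presenting symptom with the
# eventual (final) diagnosis. In a real SPADE analysis these would come
# from ICD-coded health system data, not a hand-typed list.
visits = [
    ("shortness of breath", "bronchitis"),
    ("shortness of breath", "pulmonary embolus"),
    ("shortness of breath", "bronchitis"),
    ("headache", "migraine"),
    ("shortness of breath", "pulmonary embolus"),
    ("headache", "subarachnoid hemorrhage"),
]

def look_back(visits, disease):
    """Disease -> symptoms: which presentations preceded this diagnosis?"""
    symptoms = [s for s, d in visits if d == disease]
    return {s: symptoms.count(s) / len(symptoms) for s in set(symptoms)}

def look_forward(visits, symptom):
    """Symptom -> diseases: how often does this presentation end in each diagnosis?"""
    diseases = [d for s, d in visits if s == symptom]
    return {d: diseases.count(d) / len(diseases) for d in set(diseases)}

print(look_back(visits, "pulmonary embolus"))   # {'shortness of breath': 1.0}
print(look_forward(visits, "shortness of breath"))
```

Run over millions of real records instead of six toy ones, the same two tallies give the “look back” symptom spectrum for a target disease and the “look forward” disease frequencies for a presenting symptom.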

The authors point out that because of the ease of identifying symptoms and diagnoses in different health system databases (ICD-10 might be our friend after all 🙂), collecting large amounts of data and calculating the frequency of symptom-to-diagnosis pairs should not only show us how often we miss, it should allow us to see when we improve.

Cool. “Cool cool cool”. (Abed, “Community” 2009). 

But wait. If SPADE can show us symptom-diagnosis dyads, indicating the frequency with which the two may be associated, is it possible to calculate the likelihood of having a disease given a presenting symptom? And if so, are there other associated factors that may make the disease more likely in the presence of said symptom (i.e., new-start oral contraceptive use and pulmonary emboli)?
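One way to sketch that last question: start from a baseline probability of pulmonary embolus given shortness of breath, as might be estimated from SPADE-style counts, then fold in a risk factor such as a new oral contraceptive start as a likelihood ratio. Every number below is invented for illustration; real values would have to come from the data.

```python
# Made-up baseline probability of PE given shortness of breath, as might be
# estimated from SPADE-style symptom-disease counts.
p_pe_given_sob = 0.02

# A risk factor shifts the odds. Suppose (hypothetically) a new start of
# combined oral contraceptives carries a likelihood ratio of 4 for PE.
lr_new_ocp = 4.0

def update_with_risk_factor(prior_prob, likelihood_ratio):
    """Convert probability to odds, apply the likelihood ratio, convert back."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

p_updated = update_with_risk_factor(p_pe_given_sob, lr_new_ocp)
print(round(p_updated, 3))  # 0.075
```

Which is just the Bayesian hoofbeats move again: the SPADE-derived frequency is the prior, and each additional factor in the chart is a chance to update it.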

Stay tuned.  We are just getting started on our journey. Thanks for reading. Peace. 

“Pilot.” Community, season 1, episode 1, NBC, September 17, 2009.

Liberman AL, Newman-Toker DE. Symptom-Disease Pair Analysis of Diagnostic Error (SPADE): a conceptual framework and methodological approach for unearthing misdiagnosis-related harms using big data. BMJ Quality & Safety 2018;27:557-566.