Guest Commentary

Medical Education: Knowledge, Skepticism, and the Central Role of the Individual Patient

Gabriel M. Aisenberg, MD, and Gus W. Krucke, MD

Aisenberg GM, Krucke GW. Medical education: knowledge, skepticism, and the central role of the individual patient. Consultant. 2020;60(1):12-14. doi:10.25270/con.2020.01.00002


There are two extremes that snuff out the desire for knowledge: dogmatism and radical skepticism. Dogmatism asserts that the search is unnecessary because we already have the truth. Radical skepticism says the search is pointless because there is no truth. Two conditions, then, are necessary for a real philosophical conversation: belief that there is truth, and an absence of certainty about it.

Micah Goodman, in Maimonides and the Book That Changed Judaism1

Since the days of Plato and ancient Greece, the dichotomy between opinion (doxa) and knowledge or science (episteme) has progressively tilted toward the latter. Epistemology is the study of knowledge, its limits, and its validity.2 It does not teach us how to think, but it makes us aware of how we do so.

To understand what follows, let us try to agree upon the following concepts:

1. What is to be known—that is, knowledge—exists even in the absence of someone who can acquire it.

2. There is a limit to how much a learner can learn.

3. There is a limit to the quality of what a learner can learn.

     a. The quality of what is learned depends on its veracity.

     b. The quality of what is learned depends on how our minds connect simpler concepts into complex ideas.

Since physicians commonly act upon their knowledge, it is fair to state that when they make mistakes, the following faults should be considered: a dearth of knowledge (ignorance), acceptance of incorrect theories, and organization of those theories or conclusions in a manner that leads to improper action (cognitive error).

We present a collection of circumstances that, in our view, in isolation or together can lead to error.


Most of our patients come to us with symptoms, not with diagnoses. Symptoms are by definition subjective; they are the manifestations of the myriad diseases of which we are aware. Through the common thread of early medical education, we are taught to consider symptoms before disease processes. However, experienced clinicians encountering their patients tend to think in terms of diseases, especially the diseases they know best.

On the journey of maturation into experienced clinicians, it is not uncommon to attempt to make pieces of the diagnostic puzzle fit our preconceived understanding of a given disease process. We argue that first identifying the patient’s problem avoids the folly of anchoring and confirmation bias.

The problem is the crucible of medical thinking. In as few words as possible, the physician formulates an objective, unbiased descriptor of the patient’s condition, one that best frames the diagnostic challenge.

To illustrate our hypothesis, consider congestive heart failure (CHF) as a problem. CHF is not a disease; rather, it is a problem of various possible etiologies (diagnoses). The advantage of recognizing CHF as a problem rather than a diagnosis is that, while we treat its manifestations, we consider that landing upon the right diagnostic runway can potentially lead to a diverse therapeutic path with the possibility of improved outcomes.

In sum, if the problem is accurate, the diagnosis will eventually also be valid. The corollary is troublesome: if the problem is incorrect, the diagnosis will generally remain elusive.


In medical school, we frequently teach and learn about diseases and their common risk factors. Consider toxic shock syndrome. This pleomorphic syndrome manifests with fever, a diffuse rash followed by desquamation, hypotension, mucositis, confusion, thrombocytopenia, and liver and kidney dysfunction. During the late 1970s, a substantial number of cases occurred among women who used highly absorbent tampons during menstruation. The association was so strong that, within less than 2 years, a consumer movement led to the removal of that particular type of tampon from the market. The heuristic link, however, has persisted in our minds. While teaching on rounds, the experienced physician asks students, “A young woman presents to you with signs and symptoms of toxic shock. Is she likely using a tampon?” The attending expects the answer to be “Yes.” What we know is that the syndrome is caused by a toxin (toxic shock syndrome toxin-1) released by certain strains of Staphylococcus aureus, not by the tampon itself.4,5

Another clinical fallacy results from the assumption that pulmonary embolism (PE) must be considered in patients with cancer and dyspnea. While certain types of cancer increase the risk of PE, if chest radiography reveals obvious findings, such as multiple metastases or significant pleural effusion accounting for the shortness of breath, there is no guilt by association, and the diagnosis of PE should no longer be pursued.


Odds ratios measure the odds that an event occurs in the presence of an exposure, divided by the odds of the event in the absence of that exposure.6 Because the odds ratio approximates the relative risk when the studied disease is uncommon, this approach is instrumental in the study of rare conditions. We argue that most physicians in training do not have a clear understanding of what frequency means in medicine. For instance, breast and lung cancers have a prevalence measured per 100,000 people, whereas obesity, tobacco use, and type 2 diabetes mellitus are measured per 100 people.

The experienced clinician asks the students, “What is more common: to smoke and have lung cancer, or to smoke and not to have lung cancer?” The obvious answer is “not to have lung cancer.” This is true in spite of the multiple studies linking cause and effect and the molecular mechanisms that explain that association. Therefore, the next time an attending physician shows a student a chest radiograph revealing a hilar lung mass, it is wise to recall that the mass itself represents the real problem, whether or not the patient is a smoker.
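A rough back-of-the-envelope calculation makes the point; the figure below is an illustrative assumption, not an epidemiologic estimate. Even if a smoker’s lifetime risk of lung cancer were as high as 15 per 100, the great majority of smokers would still never develop the disease:

```python
# Illustrative assumption: lifetime lung cancer risk among smokers
# on the order of 15 per 100 (the true figure varies by cohort).
risk_cancer_given_smoker = 15 / 100

smokers = 1000  # a hypothetical cohort
with_cancer = smokers * risk_cancer_given_smoker
without_cancer = smokers - with_cancer

print(f"smokers with lung cancer:    {with_cancer:.0f}")
print(f"smokers without lung cancer: {without_cancer:.0f}")
# "Smoke and not have lung cancer" outnumbers "smoke and have
# lung cancer" by more than 5 to 1, despite the causal link.
```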


Evidence-based medicine integrates physicians’ clinical experience, patients’ values and opinions, and the best available scientific evidence in a manner focused on improved patient care. In practice, this means that the patient is at the center of decision-making, with the foundations of medical literature acting as a guide to proper care. We argue that the algorithmic approach to the care of the patient at the bedside is not truly patient-centered. This is conspicuously evident in the setting of emergency care.

Consider that the best available evidence is not always the best evidence; that we tend to trust mostly blinded, randomized, controlled studies, even when the results of less-systematic studies can still be valid and useful7,8; and that we frequently accept the absence of evidence as equal to evidence of absence. Thus, the phrase often used in the medical literature, “There is no evidence to support an intervention,” is a poor descriptor, since it cannot distinguish among interventions that benefit some (level C), those that benefit none (level D), and those for which the evidence of benefit is insufficient (level I).9


Our bodies (cells, tissues, organs) work in certain ways. Physiology is the study of these functions; pathophysiology, accordingly, is the study of bodily dysfunction. Signs and symptoms tend to be our descriptors of dysfunction, which experienced physicians use to delineate a problem and a diagnosis. These concepts are usually learned so early in medical school that they are not readily available in the minds of our learners when they first see a patient.10


A moviegoer arrives at the theater 20 minutes late. There is darkness, the perfect metaphor for ignorance. The characters dance to the beat of a drum with which the viewer is unfamiliar; lacking the proper context, the viewer can neither enjoy nor understand the movie. The moviegoer was not there for the opening scene and, as a consequence, completely misses the point. We feel this is analogous to every new physician–patient encounter.10

Physicians feel pressured for time during the patient visit. There is an urge to have an answer for every complaint, and so tests get ordered “just in case.” This type of diagnostic and therapeutic incontinence leads to premature recommendations and treatments without a full understanding of the problem. These errors in thinking and reasoning detract from a full recognition of the natural history of disease. Moreover, the undeniable business aspect of the practice of medicine frequently leads to discharging patients prematurely, preventing learners from discovering the way to recovery, the end of that movie.

Another conundrum is a proper understanding of the manner in which interventions modify the natural history of disease. For instance, the patient with CHF receives furosemide and an angiotensin-converting enzyme inhibitor but does not improve in a solidly predictable pattern. Is it time to consider another diagnosis? How long does it usually take to correct hypoxemia in patients being treated for a pulmonary embolus or Pneumocystis jirovecii pneumonia? The impatient physician, unaware that it may take weeks to months for the oxygen level to normalize, may be tempted to consider other diagnoses too early. Knowledge of the natural history of disease as modified by treatment illuminates the proper course of action, eliminating the temptation to jump into a different rabbit hole.11,12


In health care, career advancement frequently depends on the opinion of others. Many learners prioritize obtaining high grades and being liked by their teachers, sometimes sacrificing the insatiable curiosity expected of them. On the other hand, many insecure teachers offer answers to questions without deeply judging their origin and strength. Learning to say “I don’t know” is a complex exercise that requires strength of character and integrity.

Occasionally we relinquish decision-making authority to other sources:

Consultants. In our opinion, the more someone knows about something, the more likely that person will access their rather narrow perspective when rendering an opinion (if you are a hammer, everything looks like a nail). In the inverted pyramid of proper medical care, consultants have overtaken the practice of medicine in the United States. It is also our opinion that it is common for clinicians to blindly follow a consultant’s recommendations.

Laboratory and imaging. When a study is ordered and its indication is questionable, the finding is often termed “incidental” if the result is unexpected. Our view is that if a finding does not fit the clinical picture, in the best-case scenario, we might have encountered a second problem without having solved the first one.

Medical literature. Physicians should not forget that guidelines are like crutches: they should be used primarily by those who otherwise cannot walk. Moreover, what is published does not always reflect what is best for the patient in front of us. While we must diligently review the literature pertaining to our case, it is the patient who should always be the center of our decision-making universe, guiding the selection of available information that is valid for the problem at hand.


In many scientific studies, especially those weighing the efficacy of new drugs, the perceived impact of an effect depends on how the results are framed. For example, in the treatment of pancreatic adenocarcinoma, the FOLFIRINOX chemotherapy regimen was shown to be significantly more effective than gemcitabine in prolonging life, by up to 1 year. The study also showed more toxicity among patients in the FOLFIRINOX group. Both groups included patients in relatively good health, even when they had metastatic cancer. A rapid read of the study would lead us to opt for FOLFIRINOX. Taken in context, however, when both quality of life and length of life are considered, the impact on any given individual patient could be negligible.13
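The same effect can sound dramatic or modest depending on the frame. With hypothetical numbers, not those of the FOLFIRINOX trial, a sketch of relative versus absolute framing:

```python
# Hypothetical trial (illustrative numbers only): 1-year mortality
# of 20% with the old regimen vs 15% with the new one.
risk_old = 0.20
risk_new = 0.15

absolute_risk_reduction = risk_old - risk_new                  # 5 percentage points
relative_risk_reduction = absolute_risk_reduction / risk_old   # 25%
number_needed_to_treat = 1 / absolute_risk_reduction           # ~20 patients

print(f"relative risk reduction: {relative_risk_reduction:.0%}")  # sounds large
print(f"absolute risk reduction: {absolute_risk_reduction:.0%}")  # sounds small
print(f"number needed to treat:  {number_needed_to_treat:.0f}")
```

A 25% relative reduction and a 5-percentage-point absolute reduction describe the very same result; weighed against added toxicity, the balance for an individual patient may look quite different.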

Moreover, many in the scientific community have proposed redefining the threshold of statistical significance from P < .05 (which would be relabeled as merely suggestive of rejecting the null hypothesis) to P < .005 (the new significant). Most medical journals also demand that authors provide confidence intervals and uncertainty measures.14

Finally, another pitfall to consider is “HARKing” (hypothesizing after the results are known). In practical terms, if a database has enough columns, each representing a possible exposure for a studied outcome, a statistically significant association will inevitably be found between some exposure and the outcome. Even when the association makes no biological sense, a potential author might be tempted to believe the fallacy and publish it. This type of publication often flies under the radar and, in our opinion, can harm the evolving minds and critical thinking skills of physicians in training.15
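The multiplicity problem behind HARKing can be demonstrated with pure noise. In this sketch, which uses entirely simulated data, 100 random “exposure” columns are tested against a random outcome; although no real association exists, some comparisons will look statistically significant at P < .05, and far fewer at the stricter P < .005:

```python
import math
import random

random.seed(42)

N_PATIENTS = 200
N_EXPOSURES = 100

def two_proportion_p_value(outcome, exposure):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    exposed = [o for o, e in zip(outcome, exposure) if e]
    unexposed = [o for o, e in zip(outcome, exposure) if not e]
    if not exposed or not unexposed:
        return 1.0
    p1 = sum(exposed) / len(exposed)
    p2 = sum(unexposed) / len(unexposed)
    pooled = sum(outcome) / len(outcome)
    se = math.sqrt(pooled * (1 - pooled) * (1 / len(exposed) + 1 / len(unexposed)))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# Pure noise: a random binary outcome and 100 random binary "exposures".
outcome = [random.random() < 0.5 for _ in range(N_PATIENTS)]
exposures = [[random.random() < 0.5 for _ in range(N_PATIENTS)]
             for _ in range(N_EXPOSURES)]

p_values = [two_proportion_p_value(outcome, exp) for exp in exposures]
print(f"'significant' at P < .05:  {sum(p < 0.05 for p in p_values)}")
print(f"'significant' at P < .005: {sum(p < 0.005 for p in p_values)}")
```

On average, about 5 of the 100 tests fall below .05 by chance alone; any one of them, written up after the fact, would be a publishable-looking but biologically meaningless “finding.”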


Cognitive errors arise from adaptive processes of human cognition that allow rapid decision-making but can result in judgments and actions that are improperly or shallowly considered. In general, these errors stem from reliance on prior experiences and behaviors rather than on analytic reasoning.16 More than 40 types of cognitive error have been described; logically, some are more common pitfalls than others.17 Numerous strategies have been described with the goal of reducing these errors or increasing our ability to recognize them.17

In summary, both what we do not know and what we learn wrongly can lead us to error. The academic environment is supposed to foster interaction among scholars, welcome opposing opinions, and temper extreme positions, all the while promoting open learning free of absolute terms (institutional disconfirmation).18 Remaining skeptical is an obligation and an opportunity to promote authentic intellectual growth among those who genuinely wish to learn.

Gabriel Aisenberg, MD, is an associate professor of general internal medicine and an associate director of the Internal Medicine Residency Program at the McGovern Medical School at the University of Texas Health Science Center at Houston.

Gus W. Krucke, MD, is an associate professor of general internal medicine at the McGovern Medical School at the University of Texas Health Science Center at Houston.


  1. Goodman M. Maimonides and the Book That Changed Judaism: Secrets of The Guide for the Perplexed. Philadelphia, PA: Jewish Publication Society; 2015:244.
  2. Carpio AP. Principios de Filosofía: Una Introducción a su Problemática. 2nd ed. Buenos Aires, Argentina: Glauco; 2004.
  3. Joudah F, Fred HL. The morning report card. Resid Staff Physician. 2003;49(9):22-26.
  4. Vostral SL. Rely and toxic shock syndrome: a technological health crisis. Yale J Biol Med. 2011;84(4):447-459.
  5. Vostral S. Toxic shock syndrome, tampons and laboratory standard-setting. CMAJ. 2017;189(20):E726-E728. doi:10.1503/cmaj.161479
  6. Norton EC, Dowd BE, Maciejewski ML. Odds ratios—current best practice and use. JAMA. 2018;320(1):84-85. doi:10.1001/jama.2018.6971
  7. Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312(7023):71-72. doi:10.1136/bmj.312.7023.71
  8. Smith GCS, Pell JP. Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ. 2003;327(7429):1459-1461. doi:10.1136/bmj.327.7429.1459
  9. Braithwaite RS. EBM’s six dangerous words. JAMA. 2013;310(20):2149-2150. doi:10.1001/jama.2013.281996
  10. Aisenberg GM. You Can Tell: Reflections From a Medical Educator. Amazon Press; 2017.
  11. Wilson JE III, Pierce AK, Johnson RL, et al. Hypoxemia in pulmonary embolism, a clinical study. J Clin Invest. 1971;50(3):481-491.
  12. Aisenberg GM, Ocazionez-Trujillo D, Arduino R. Duration of hypoxemia in Pneumocystis jiroveci pneumonia. J Microbiol Genet. 2019;4:120. doi:10.29011/2574-7371.000120
  13. Conroy T, Hammel P, Hebbar M, et al; Canadian Cancer Trials Group and the Unicancer-GI–PRODIGE Group. FOLFIRINOX or gemcitabine as adjuvant therapy for pancreatic cancer. N Engl J Med. 2018;379(25):2395-2406. doi:10.1056/NEJMoa1809775
  14. Ioannidis JPA. The proposal to lower P value thresholds to .005. JAMA. 2018;319(14):1429-1430. doi:10.1001/jama.2018.1536
  15. Kerr NL. HARKing: hypothesizing after the results are known. Pers Soc Psychol Rev. 1998;2(3):196-217. doi:10.1207/s15327957pspr0203_4
  16. Molony DA. Cognitive bias and the creation and translation of evidence into clinical practice. Adv Chronic Kidney Dis. 2016;23(6):346-350. doi:10.1053/j.ackd.2016.11.018
  17. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78(8):775-780. doi:10.1097/00001888-200308000-00003
  18. Lukianoff G, Haidt J. The Coddling of the American Mind. New York, NY: Penguin Press; 2018.