
Maintaining competence in general internal medicine

Authors :
Marianne M. Green
Source :
Journal of General Internal Medicine, 30(2)
Publication Year :
2015

Abstract

I believe that I am a competent physician. While I suppose that many people have an inflated sense of their own competence (and this is certainly true of physicians, who consistently overestimate their performance on clinical quality indicators), this assessment is based, at least in part, on information from three reliable sources. First, our university-based internal medicine practice provides quarterly feedback to individual physicians on a number of quality measures. My results, compared both to my peers and to national standards, are comfortably above the mean. Second, the medical center provides us with patient satisfaction scores based on patient surveys. My scores have been consistently near the top. Third, as an active clinical teacher and preceptor, I have a thick portfolio of relatively decent teaching evaluations. I believe that they too provide insight into my clinical competence; certainly, physician-teachers with a poor grasp of medical reasoning, or who have not kept up with the medical literature, are unlikely to be assessed favorably. All of these measures are flawed and subject to one bias or another, but collectively, they give a consistently positive view of my competence as a general internist.

What brings about this moment of self-reflection? Not anything as profound or potentially life-altering as a mid-life crisis; instead, it was the more prosaic experience of taking the Maintenance of Certification (MOC) in Internal Medicine board exam. Every 10 years, all internal medicine physicians who were board certified in 1990 or later must complete requirements for the MOC, including passing a comprehensive multiple-choice exam, in order to maintain their certification. I last had to re-certify about 10 years ago and had high hopes that there had been substantial improvement in the process in the intervening years; alas, I was disappointed to discover that not much had changed.

I can now tell you what the ideal tidal volume is for an intubated patient with Acute Respiratory Distress Syndrome (ARDS). No doubt this is crucial clinical information—just not for me or for most other physicians like me. As a general internist who practices entirely in the ambulatory setting, I will never be responsible for ventilator settings, and in fact, I never was. Likewise, shortly before the test, on the advice of a colleague, I committed to memory the formula for calculating serum osmolality (reproduced below for reference). While it was no great feat to re-install this formula in my short-term memory bank (though I do find memorization a bit more challenging than I did 20 years ago), with the easy availability of formulas, clinical algorithms and guidelines on smartphones and tablets, having to commit these sorts of facts to memory for a multiple-choice test is just plain dumb.

My disappointment with the MOC exam extends beyond what we are expected to know, though I found much of it either irrelevant to my clinical practice or easy to look up. I was even more disappointed by the absence of what I consider to be the true core competencies of a general internist: the essential knowledge and skills required to take good care of medical outpatients. As a primarily outpatient-based general medicine physician, I spend much of my time working with patients to change their behavior—assessing and improving their adherence to medication, increasing their exercise, modifying their diet, etc.
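For reference, the serum osmolality formula mentioned above is not quoted in the editorial itself; the version below is the commonly taught calculation (sodium in mmol/L, glucose and BUN in mg/dL) and is included here only as context:

\[
\text{Osm}_{\text{calc}} \;\approx\; 2\,[\text{Na}^{+}] \;+\; \frac{\text{glucose}}{18} \;+\; \frac{\text{BUN}}{2.8} \quad \text{(mOsm/kg)}
\]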
Of the 180 questions I had to answer in the daylong exam, not one tested my competency on that topic—or on any one of a dozen other topics essential to my clinical practice. Of the handful of questions on mental and behavioral health topics, not one asked about the diagnosis and management of common depression or anxiety disorders, also core skills of a competent outpatient general internist. I suspect that my generalist colleagues who practice primarily in the inpatient setting also felt that much of the material to be mastered was not relevant to their clinical practice. This is one of the challenges that the American Board of Internal Medicine (ABIM) faces in trying to develop a ‘one size fits all’ approach to the MOC exam. Over the past decade, the rapid growth in medical information and the huge increase in patients with multiple, complex chronic medical problems, coupled with the increasing specialization of general internists as hospital medicine specialists or ambulatory medicine specialists, have made it more difficult to identify one common internal medicine knowledge base.

Instead of sticking with an old paradigm that no longer reflects the current practice of internal medicine, the ABIM should continue to develop new ways of promoting and assessing engagement in clinical practice, as it has done with the Practice Improvement Modules, the Point of Care Clinical Question Module and a few others. While imperfect, these modules are more effective in engaging adult learners and in making the MOC relevant to their clinical practice than the current exam-based process. The ABIM states on its website that MOC “promotes lifelong learning and the enhancement of the clinical judgment and skills essential for high quality patient care.” While I (and others [1]) agree with the importance of lifelong learning for physicians and with the imperative of having an objective process by which a physician’s knowledge and skills are periodically assessed, it seems clear that the current MOC system, and the multiple-choice exam in particular, falls far short of this goal. Multiple-choice questions are the wrong instrument for the assessment of diagnostic reasoning and other higher-level cognitive skills essential for competent patient care.

Several articles in this issue of JGIM focus on physician education and clinical competence. One such paper, authored by Colla et al. [2], examines clinical practice through the lens of the ABIM Foundation’s successful Choosing Wisely campaign. Using Medicare data from 2006 to 2011, they attempt to estimate the prevalence of 11 Choosing Wisely services by creating claims-based algorithms to identify low-value services and to examine geographic variation across regions. Perhaps not surprisingly, they found significant overuse of low-value services and substantial variation across different hospital systems. Low-value services, such as the use of antipsychotics in patients with dementia and preoperative cardiac evaluation (a favorite topic, by the way, of the MOC exam), were overused by both generalists and specialists. Many professional societies participated in the Choosing Wisely initiative, and in this issue, Riggs and Ubel [3] reflect on the role of professional societies in limiting indication creep.
In their provocative Perspective, they state that indication creep occurs when an intervention meant to benefit patients with a specific health condition is expanded to encompass either a new condition or a new population of patients. They go on to argue that, similar to the efforts of organized medicine to reduce waste through the Choosing Wisely campaign, professional societies should take the lead in preventing indication creep, either by not recommending interventions that go beyond existing evidence or by advocating for and facilitating new clinical trials when feasible.

Perhaps the best strategy to get physicians to practice competent, high-value care is to train them earlier in the educational pipeline. One important aspect of quality of care in residency education has to do with patient ‘handoffs,’ either in the hospital or in the clinic. In this issue of JGIM, Pincavage et al. [4] address the issue of handoffs in the outpatient setting. They describe an intervention conducted at an academic medicine residency clinic to improve the process for patients when they transition to a new resident physician. Two months prior to the transfer of care, they gave patients a packet of information that included a welcome letter and a photograph from their new primary care provider, as well as a visit preparation tool to help facilitate communication with their new doctor. The second phase of the intervention included a hand-drawn “comic” titled “Mrs. B. Changes Doctors,” which they found more effectively captured patients’ attention and engaged them in the handoff process. In an accompanying editorial, Bump [5] points out that patient handoffs between residents in the clinic or on the wards may be detrimental to patient care. What remains to be seen, however, is whether this sort of ‘comic’ intervention can lead to a decrease in errors and adverse events.

But perhaps the most important leverage point to improve patient care in the 21st century is described by Lin, Schillinger and Irby [6] in a short piece on “value-added” medical education. They assert that value-added education has the potential to transform our approach to medical education by adhering to a set of five principles, including early integrated workplace learning for all medical students and the fusion of robust experiential learning with the delivery of high-performing, patient-centered primary care. This is a bold approach, one that could reshape our concept of what it means to be a competent physician in the 21st century. With this concept of “value-added” education in mind, perhaps someday soon we will be assessing the competence of ambulatory-based general internists with a process that examines their ability to function effectively as members of a multi-disciplinary team in a patient-centered medical home. But for now, if asked about the tidal volume for a patient with ARDS, I’d go with 6–8 mL/kg of ideal body weight. Break a leg.
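A brief note on that closing figure: the per-kilogram tidal volume is conventionally applied to predicted (ideal) body weight estimated from height and sex, not to actual weight. Assuming the widely used ARDSNet predicted body weight formula (not quoted in the editorial itself), the calculation works out as:

\[
\text{PBW}_{\text{male}} = 50 + 0.91\,(\text{height in cm} - 152.4), \qquad
\text{PBW}_{\text{female}} = 45.5 + 0.91\,(\text{height in cm} - 152.4)
\]

For example, a 175 cm man has a predicted body weight of roughly 70.6 kg, so a 6 mL/kg tidal volume comes to about 420 mL.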

Details

ISSN :
1525-1497
Volume :
30
Issue :
2
Database :
OpenAIRE
Journal :
Journal of General Internal Medicine
Accession number :
edsair.doi.dedup.....937aa5d2a3ba1bc957e48f5d72e234e8