Monday, June 28, 2010


If you want to understand medical doctors better, you need to know that they constitute a tribe--with its own traditions, customs, language, and perspectives.

We physicians call our professional education “training.” That's an appropriate term, because in addition to learning medicine we get steadily inducted into the tribe. The tribal curriculum is implicit, yet almost any physician anywhere in the United States will agree that his or her medical education was characterized by a hefty degree of
  • Rigid hierarchy. Freshman medical students occupy a niche just below mollusks. Sophomores and juniors are noticed, but mainly as nuisances, like rodents. Seniors find that the summit they’ve attained is no higher than dirt on the shoes of interns, who in turn are nameless drudges to resident physicians. And so on up to the pyramid’s apex, the chair of the department, who glows with success while fretting about the associate professors clawing at his or her ankles.

Within such a system, one learns one’s exact place through the principle that abuse flows downhill. As a senior student, I saw a resident loudly upbraid my intern. An hour later, that same intern berated me for a minor omission. And that evening, I’m sorry to say, I joked with my roommates about how dirt-ignorant one of my patients was.

  • Inadequacy. There’s far more for doctors-in-training to know than anyone can possibly learn; even worse, what was so yesterday ain’t necessarily so today. Little wonder students continually feel deficient. They compensate with “roundsmanship,” casually citing obscure medical facts that send their classmates into seizures of self-doubt. Years of such defensiveness tend to breed feigned confidence just a tick short of arrogance.

  • Overwork and sleep deprivation. Ripping along on caffeine all night to prepare for next morning’s freshman physiology exam is grueling, but it’s only practice for later training, when the junior student sees patients around the clock. In addition, sleeplessness dulls the edge of intellect, leaving the student ready, even longing, to accept instruction unquestioningly.

The overwork issue blossoms in American media every ten years, like a cicada cycle. Learning that a surgical resident about to operate on us hasn’t slept since last Whitsuntide, we jump off the gurney, write letters to editors, and call for Congressional hearings. Headlines blare outrage for a week, and then the issue burrows underground again for a decade. That this pattern has persisted so long with so little reform might suggest to a visiting anthropologist that we regard healing as a zero-sum transfusion, the patient’s improvement being roughly equivalent to the healer’s depletion.

In any case, overwork is likely to remain mainstream healthcare’s modus operandi indefinitely. I’ve heard from more than one medical educator, for example, that many resident physicians view any limits on their work week as insults to their overachievement ethic. (Overwork isn’t limited to doctors, by the way: it’s not uncommon for relatives of sick people to work themselves into sickness.)

  • No free time. Do it now, “stat,” and when you’re finished, draw that blood down in 121. Thinking, wondering, and chatting with patients are extraneous to the work at hand. The relentless activity obsession is rewarded later, in practice, when third parties pay doctors for physical procedures, never for simply sitting and listening to their patients.

  • Unhesitating dedication. When medical students’ personal and family needs compete with training obligations, they’re sternly reminded that a physician’s devotion to duty must be virtually monastic (an apt term, as non-medical people are often referred to as “lay”). By their third year, students know that a meaningful personal life must remain on the back burner indefinitely. One of my classmates, an orthodox Jew, said he couldn’t be available to work on Saturdays, the Sabbath. The dean had heard that one before. He handed the student a photocopy from his files, a letter he’d secured years before from a rabbi stating that “saving lives” justified working on the Sabbath.

  • Social isolation. Medical students spend their off-time either studying in hermetic seclusion or restricting themselves to the tight circle of their classmates. During years cloistered together, away from worldly competing views, they succumb to pressures toward clinical detachment while their skills in plain old personal relations atrophy.

Last year I asked a young pathologist about his work. He told me one of its advantages was his ongoing patient contact.

I took heart. “That’s unusual,” I said. “Most pathologists are notorious medical homebodies. How exactly do you contact patients?”

He said, “Well, sometimes I’ll examine a tissue slide and it doesn’t quite add up. But when I go to the ward and read the patient’s chart, things almost always fall into place.”

I was puzzled. “And you see the patients, then?”

Now he looked puzzled. “See the patients? Why would I do that? All I need is the chart.”

If my list of implicit curricula—hierarchy, inadequacy, sleep deprivation, enforced busyness, hyperdedication and isolation—sounds familiar, it’s probably because it recalls the “brainwashing” techniques that cults and totalitarian governments use to “re-educate” captive audiences. Immersed for years in this ambiance, it’s a rare medical student whose personality doesn’t accommodate to it. The physicians I know who truly are skilled in personal contact aren’t that way because of their training, but despite it.

A few years ago I attended a medical conference on quacks and cults. One of the presenters, a professor of internal medicine, described the features of a cult. They matched my list. I raised my hand and suggested that medical training might actually constitute a cult. Obviously he’d considered this before. Smiling, he said, “Well, what you say does pertain to surgical training.”

That got a laugh from everyone, partly, I suspect, because his gentle refutation was actually an admission. We docs are a tribe, exclusive and internally consistent, and we’re evidently satisfied with it.

Wednesday, June 23, 2010


I heard a story today that’s a carbon copy of dozens I’ve encountered. In a meeting with hospice staff, the patient asked that a particular pain med regimen be continued. Her family members, though, vociferously demanded much stronger meds. The hospice people countered that their patient’s wishes in the matter were clear. Tempers flared.

What does one do here?

The hospice people decided that emotions were running too high to allow a reasonable discussion at the moment, so withdrew with a promise to talk with the family the next day. The next morning, the hospice team interviewed these relatives outside the presence of the patient. It turned out they were frantic from witnessing their loved one hurting, and so wanted the maximum done for her. During this conversation they came to realize their motivation was based less in her than in their own suffering. They opened up, talked about it, cried, and finally relinquished their demands for different treatment.

I thought the hospice team handled this beautifully. It seems a common ethic in hospice circles is to constantly ask the question, “Who’s suffering here, and from what?”

When we practitioners assume the patient is our only mission, the suffering of those around that person shrinks in our view to near invisibility. But if we can widen our scope at the bedside and ask that question, we’ll not only help heal others who are hurting, but certainly make our own day easier.

Tuesday, June 15, 2010


Okay, one of my curmudgeon rants...

I’ve written plenty here about how life-threatening illnesses cajole us into examining our lives and making changes. But seldom can we make these changes all by ourselves, in a vacuum. One reason we end up staying a stale course is that we’re embedded in a social matrix that favors inertia. Make a change, and it’s likely those around you will advise, “Hey, we liked you better when you were neurotic.” In other words, significant change requires social support. We thrive when we’re encouraged. The genuine community that supports creative change, though, is in trouble.

My wife Ronnie and I were enjoying a quiet morning sipping coffee recently, on the patio of our favorite hangout. The sun glinted off its metal roof, surreally illuminating the sycamore leaves overhead. A puff of wind caressed my cheek, and I was at peace. Suddenly a full marching band struck up “Stars and Stripes Forever.” As I scanned Broad Street for the glint of brass, the man at the next table reached into his pocket and answered his cell phone.

Ronnie whispered, “Disgusting. Not just bothersome. He’s invading public space with a private act. He might as well just stand up and pee in the flowers.”

Cell phones can irritate me, too, but what exactly is that about? I recalled an incident a few years earlier, while I awaited a flight in the vast expanse of Los Angeles International Airport. LAX teems with solicitors, people collecting for missionary churches, the elderly, the homeless, and, no doubt, themselves. I noticed that they shied away from people who were on their cell phones, which in those days were about the size of a shoe. I didn’t feel like dealing with a solicitor, so as one headed toward me, I whipped off my sandal and held it to my ear. He took one perfunctory glance and veered off. I figured he either fell for the ploy or thought I was flat-out nuts. In any case, I’d implicitly announced I wasn’t there. I’d shown him I had at least one foot in cyberspace. I was in my own virtual gated community rather than in the commons.

Civilization, though, is people living together, behaving consensually with one another, enacting inherited, evolved manners and mores. And it’s decaying.

My farmer friend Alan hires interns every summer. His operation is rather famous, so he receives far more intern applications than there are positions. He told me a surprising number of people apply via text messages. He ignores these, but fears they’re becoming the norm. I suspect this is occurring across the board. It seems an increasing number of young people, raised in this digital age, have little notion of conversation. Transmitting and receiving sound bites, they are indeed exchanging information, but the transaction remains superficial—facts without feeling, depthless data.

Sure, I may have become one of the geezers who sit on the porch complaining about “kids these days.” I remember older relatives a half-century ago, bemoaning the fact that Elvis and his ilk were destroying civilization. (“And my God, have you seen this, what’s his name, ‘Little Richard’?”) So maybe the social dissolution I’m describing isn’t actually so, or exists but is ultimately harmless. But why let all this gray hair go to waste?

Wednesday, June 9, 2010


As it does regularly, the issue of survival prediction arose today in our support group meeting.

When we’re informed that we have a life-threatening disease, it’s natural for us to ask how life-threatening it is. Is the Reaper at the front door now, or what? “How much time do I have, Doc?” is probably the most common way of asking the question.

The answer, of course, is that no one knows. No one can know. We’re aware that's the case, but still desire some guideline, a frame. So the doc gives us statistics, maybe saying something like, “Well, eighty percent of people with your diagnosis survive two years.”

There’s something essential to know about disease statistics: by definition, they describe the characteristics of a group, never an individual. How does a group picture apply to you? Do you have an eighty percent chance of surviving two years? No. In two years you’ll either be alive or you won’t—one hundred percent or zero.

How should you interpret these numbers, then? More often than not, patients remember only the time factor—in this case two years—and carve that into their memory, forgetting the percentage: “Doc gave me two years.”

That’s one problem these statistics can generate. Another one is that once you hear “two years” you can’t forget it. Think of a judge admonishing the jury to “ignore the previous testimony.” Impossible, so that duration pronouncement, sitting in the back of your mind like a dour raven, can become voodoo. I’ve seen too many people succumb right on time. Hasn’t much of world literature taught us how self-fulfilling prophecies can be?

There’s another problem, too. We traditionally describe survival rates with a mathematical model called a “bell curve.” You’ve seen plenty of these: a symmetric hump, with most cases clustered in the middle and thin tails at either end.

The curve says a small number of patients die soon, a similar number survive a long time, and the majority fall in between. When you hear statistics involving your life-threatening disease, all you know for sure is that you’re somewhere, anywhere, in that curve. (In fact, even when you’re as fit as a fiddle, you, being mortal, are somewhere in that curve.)

Bell curves are based on the results of large-scale studies that take just a few categories into account: type of cancer, age of patient, and gender, for example. The studies seldom consider less easily measurable characteristics, like the patient’s degree of interest and involvement, quality of relationships, exercise, diet, stress management, spiritual skills, and so on.

A few do, though. Several years ago a study of women with stage four breast cancer compared their survival with the number of “confidants” they enjoyed. A confidant was defined here as a close personal friend, someone with whom the patient felt comfortable discussing anything. Women with no confidants evinced a particular average survival time. Those with one confidant lived longer, on the average, and those with two even longer. The correlation held for up to six confidants.

I know oncologists who, even when pressed, offer no numbers. They prefer that their patients spend their energy moving themselves rightward under the curve rather than fretting about their location in it.

Saturday, June 5, 2010


I needed to speak with a medical specialist last week. I’d never met him and had no appointment. I just dropped into his office. The receptionist wasn’t at her desk. Soon a young woman passed by in the hallway. She smiled at me and waved.

“Hi. I’ll be with you in a second.”

She took her seat, introduced herself, and asked how she could help me. I explained that I just wanted to talk with the doc for a minute. She walked back to his office, returned, and said, “Sure. He’ll be right out.”

He emerged, we chatted, I got the info I needed, and I left.

Contrast this with an experience in another doctor’s office a couple of months ago, when I was sick. I called ahead and was told to just drop in. When I arrived at the desk, the young woman receptionist was on the phone. She offered no sign of recognition. After she finally hung up and did a little paperwork, she took notice of me.


“I need to see the doctor. I was told to come in.”

“What’s wrong?”

“Bronchitis. I might need to be on an antibiotic.”

“It’s probably viral,” she said. “When it’s viral, antibiotics don’t help.”

“Yeah, right. But I’d like to see the doctor anyway.”

Two very different receptions, right? The first felt welcoming; the second, as though I’d been caught trespassing.

Some doctors don’t appreciate the considerable influence their front desk can have. After all, it’s where patients, who are by definition suffering, first make medical contact. Being sick, they’re unusually sensitive and vulnerable, and commonly anxious and angry. They’re also hopeful, as they seek relief in any form. They’ll even accept the placebo effect.

That’s why I find it important, for example, to hold our cancer support group meetings in the hospital, which is itself a placebo. Because it’s an edifice dedicated to competent responses to sickness, we can begin to feel treated just by being in it.

Doctors, too, are placebos. Wearing a symbol of their craft—a stethoscope, a white coat—and radiating knowledgeable confidence, they tell us implicitly that we’ve entrusted our problem to good hands. Their offices, too, can serve the same function, but only if the docs consciously manage in that direction.

Too many medical offices simulate factories in which patients are considered interchangeable units in need of repair. That’s too bad, not only for the patients whose humanity is minimized, but for the employees, who, human themselves, yearn for more contact, but learned somewhere that the preferred “professional” style is detachment.

Nothing changes without new information. Our support group members continually advise one another to give feedback to their doctors--to praise them for the healing strokes they receive from the docs or their office staff, and to tell them honestly when contact is less than healing. If patients don’t communicate these feelings, nothing will change. And more humanity in the office doesn’t raise the overhead a dime.

Tuesday, June 1, 2010


A couple of weeks ago, while embarking on a vacation trip, I became sick. I was hugely fatigued, disoriented, and insomniac, and had a degree of fever. My pathology-dominated doctor mind kicked in: was this flu? Lyme disease? Lymphoma? I fretted mightily, and finally fell into a solid two-day sleep. When I awoke I was as good as new.

Or maybe better. My wife Ronnie (Veronica, literally “speaker of truth”) commented, “You’re different now.”


“You’re more chatty, communicative.”

Hm. So I paid closer attention, and of course she was right. I was indeed more interested in engaging, and really enjoying it.

Relatives whom we visited asked, “So what were you sick with?”

I replied, “I don’t think I was sick. I molted.”

Remember molting? As snakes grow, they periodically shed their skins. The new snake emerges from the old snake, and I’m convinced it’s a painful process. I realized I’d experienced this before, in other events I’d labeled sickness. I also recalled that our kids’ sicknesses seemed to be uniformly followed by what we called “growth spurts.” In their convalescence they’d suddenly know how to read, or to ride their two-wheeler.

So here’s a fantasy I’d like to run by you. Might it be that sickness, as real and debilitating as it is, has an adaptive function, too, in removing us from our regular lives? By temporarily separating us from our habits, it gives us the opportunity to see ourselves anew. I compare the process to hockey’s penalty box. When you’re playing, you can only see part of the game. When you’re off the ice, in the box, you miss playing, but on the other hand you can see the whole game.

A life-threatening illness like cancer is superb at grabbing our attention in this way. A recurring conversational component within our cancer support group is the unexpected benefits that this particular penalty box confers. People regularly say things like, “Cancer’s no pleasure, but if I hadn’t had it, I wouldn’t have quit that awful job in the cube farm,” or, “I’d have stayed indefinitely in that toxic marriage.” In fact, a couple of group members recently proposed that we write an anthology about cancer’s “benefits.” This sounds perverse outside a cancer group, so I don’t think the book would be a best-seller.

Anyway, I want to ask: have you also considered your episodes of sickness to be events—however uncomfortable—that ultimately improved your life?