Physicians consistently overestimate their patients’ level of health literacy. In a 2005 randomized controlled trial, researchers at the University of California, San Francisco tested whether telling doctors about a patient’s limited health literacy altered the way the physician interacted with that patient. At an urban, academic public hospital, they screened 441 patients with type 2 diabetes for health literacy using the standardized short form of the Test of Functional Health Literacy in Adults (s-TOFHLA). Based on those results, they chose 182 patients with low scores and then divided those patients’ primary care physicians into two groups, only one of which was told about the patients’ limited health literacy. The researchers found that the physicians notified of their patients’ limited health literacy had originally overestimated health literacy in 62% of those patients; in practice, that means doctors often miss the opportunity to take steps to improve the situation.
In such studies, researchers can use s-TOFHLA or other tools to diagnose patients’ health literacy. The s-TOFHLA gauges whether patients can read such things as prescription-bottle labels by asking them to fill in missing words. Another standard research questionnaire, REALM (Rapid Estimate of Adult Literacy in Medicine), scores patients’ ability to read and pronounce common medical words, usually a list of 66 in increasing order of complexity (from “eye” to “antibiotics”). But that process is anything but rapid, and neither REALM nor s-TOFHLA is practical in the clinic, where caregivers rarely have enough time during a rushed office visit to administer them.
There might, however, be time to ask a single question, and Lisa Chew, associate professor of medicine at the University of Washington, and her colleagues have devised one-question screens that may help determine whether a patient will be able to understand medical information. A doctor could ask, “How confident are you in filling out medical forms by yourself?” or “How often do you have someone help you read hospital materials?” A 2012 review of studies that compared those single questions to REALM and TOFHLA found that the much simpler tests often sufficed to determine health literacy. For example, a patient who says he “always” gets help reading hospital materials (rather than “often,” “sometimes,” “occasionally” or “never”) tends to fit the TOFHLA definition of inadequate health literacy.
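The rule of thumb described above amounts to a very simple classifier. As a minimal sketch only, the snippet below maps a patient’s answer to the “How often do you have someone help you read hospital materials?” question to a rough flag; the response scale is taken from the article, but the function name and the strictness of the cutoff (flagging only “always”) are illustrative assumptions, not a validated clinical instrument.

```python
# Illustrative sketch, not a validated screening tool. The response options
# come from the article; the cutoff (flag only "always") reflects the 2012
# review's observation that patients answering "always" tend to fit the
# TOFHLA definition of inadequate health literacy.

RESPONSES = ["never", "occasionally", "sometimes", "often", "always"]

def flag_limited_literacy(answer: str) -> bool:
    """Return True when the answer suggests inadequate health literacy."""
    normalized = answer.strip().lower()
    if normalized not in RESPONSES:
        raise ValueError(f"unrecognized response: {answer!r}")
    # Only the strongest response is flagged; all others pass unflagged.
    return normalized == "always"

print(flag_limited_literacy("always"))     # flags limited literacy
print(flag_limited_literacy("sometimes"))  # does not flag
```

In a real clinical workflow the cutoff would need to be calibrated against a reference standard such as TOFHLA, which is exactly what the 2012 review evaluated.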
Yet not everyone believes patients should be screened at all. “It’s extremely controversial,” says Dean Schillinger, a researcher at the University of California, San Francisco and a co-author of the 2005 study that found physicians overestimated the health literacy of patients even when they knew a significant proportion of their patients had limited literacy. “There’s a reluctance to stigmatize patients as having limited literacy skills—a scarlet letter effect.” Rather than try to determine how literate a patient may be, he prefers to put the onus on doctors to communicate clearly at all times, as if every patient had limited literacy. “Straightforward communication is better for everyone,” he says. Then, if patients use technical terms, doctors can ratchet up the complexity of an explanation.
Research does suggest that the communication skills of physicians and other caregivers have room for improvement. In a 2007 study of diabetics, for example, researchers audiotaped 74 doctor-patient encounters and found that eight out of 10 were characterized by medical jargon the doctors didn’t explain, with an average of four unclarified terms per visit. When patients in the study were later asked the meaning of various diabetes terms, their comprehension tended to be low, even though many had indicated that they had been able to follow their doctors’ explanations.