If you’ve tried, over the past several years, to pay close attention to health news as reported online, on TV and in newspapers, you’re probably exhausted from the constantly changing “expert” medical advice.
Just the passionate debate about best practices surrounding the biggest health headline of recent memory—Covid-19—has been enough to keep a diligent observer perplexed. Soap or hand sanitizer? To mask or not to mask? How many days of isolation? Hydroxychloroquine, ivermectin and bleach? And there’s constant back and forth about plenty of other non-Covid issues: Perhaps you learned about antioxidants (notably vitamins E and C and beta carotene), which you can get both from foods and supplements. These antioxidants may help lower the risk of heart disease, cancer, cataracts, and other ills. The scientist who confidently told you this on Live with Somebody! was a charismatic charmer who led the groundbreaking study that had just appeared in The Impeccable Journal of Medicine. Not only was the evidence “very exciting,” but she was taking hefty amounts of antioxidant supplements herself. So you started taking the pills. Next thing, you read that a study conducted in Finland showed that not only was beta carotene not protective against lung cancer, it actually seemed to increase the risk of getting it. Feeling deceived, you stopped taking your supplements and even gave up your daily carrot.
You may have seen changing medical advice with oat bran (good one week, outmoded the next), margarine (you switched to this supposed health food a few years ago, and now it has been tagged as an artery-clogger), DDT and breast cancer (first linked, then not), hot dogs and childhood leukemia (a headline-maker that soon petered out, since even the researchers had a hard time explaining their findings), and household electromagnetic fields (cancer again—but by then you had gotten bored). Do these folks just not know what they are talking about?
In fact, the experts don’t change their minds as often as it may seem. Here at the Wellness Letter, for example, we never told you that margarine was a health food or that a little oat bran would solve your cholesterol problems. Both these foods were hyped by the media and by manufacturers—but most nutritionists never thought or said there was anything magic about them. A few researchers and journalists eagerly spread the idea that your power line and your electric toaster and clock could give you cancer. Most experts thought all along that the evidence was pretty thin. Often it’s the headline writers, not the scientists, who keep changing their minds.
Science is a process, not a product, a work in progress rather than a book of rules. Scientific evidence accumulates bit by bit. This doesn’t mean scientists are bumblers (though perhaps a few are), but that they are trying to accumulate enough data to get at the truth, which is always a difficult job. Within the circle of qualified, well-informed scientists, there is bound to be disagreement, too. The same data look different to different people. A good scientist is often his/her own severest critic. Because science is dynamic, it is always a possibility, indeed a likelihood, that tomorrow’s data and wisdom will truly supersede today’s. That’s a dynamic we’ve seen play out over and over with regard to Covid, and it’s a good thing: Who would want their doctors to be unwilling to change their minds as science provides new information?
The ongoing search for truth and the resulting changing medical advice is also complicated by:
- Intense public interest in health
- Hunger for quick solutions
- Journalists trying to make a routine story sound exciting
- Publishers and TV producers looking for audiences
- Scientists looking for fame and grants
- Medical journals thirsting for prestige
- Entrepreneurs thirsting for profits
- Politicians seeking wins in the polls
With all the changing “expert” medical advice, it pays to keep your wits about you as you listen, watch, and read.
The search for evidence
In general, there are three ways to look for evidence about health:
Basic research is conducted in a laboratory, involving “test tube” or “in vitro” (within glass) experiments, or experiments with animals such as mice. Such work is vital for many reasons. For one, it can confirm observations or hunches and provide what scientists call plausible mechanisms for a theory. If a link between heart disease and smoking is suspected, laboratory experiments might show how nicotine affects blood vessels.
The beauty of lab research is that it can be tightly controlled. Its limitation is that what happens in a test tube or a laboratory rat may not happen in a free-living human being.
Clinical or interventional trials are founded on observation and treatment of human beings. As with basic research, the “gold standard” clinical trial can and must be rigorously controlled. There’ll be an experimental group or groups (receiving a bona fide drug or treatment) and a control group (receiving a placebo, or dummy, treatment). A valid experiment must also be “blinded,” meaning that no subject knows whether he/she is in the experimental or the control group. In a double-blind trial, the researchers don’t know either.
But clinical trials have their limitations, too. The researchers must not knowingly endanger human life and health—there are ethics committees these days to make sure of this. Also, selection criteria must be set up. If the research is about heart disease, maybe the researchers will include only men, since middle-aged men are more prone to heart disease than women the same age. Or maybe they’ll include only nurses, because nurses can be reliably tracked and are also good reporters. But these groups are not representative of the whole population. It may or may not be possible to generalize the findings. The study that determined aspirin’s efficacy against heart attacks, for instance, was a well-designed interventional trial. But, for various reasons, nearly all the participants were middle-aged white men. No one is sure that aspirin works the same way for other people.
Epidemiologic studies. These generate the most news because so many of them have potential public appeal. An indispensable arm of research, epidemiology looks at the distribution of disease (“epidemics”) and risk factors for disease in a human population in an attempt to find disease determinants. Compared with clinical trials or basic research, epidemiology is beset with pitfalls. That’s because it deals with people in the real world and with situations that are hard to control.
The two most common types of epidemiologic research are:
Case-control studies. Let’s say you’re studying lung cancer. You select a group of lung cancer patients and match them (by age, gender, and other criteria) with a group of healthy people. You try to identify which factors distinguish the healthy subjects (the “controls”) from those who got sick.
Cohort studies. You select a group and question them about their habits, exposures, nutritional intake, and so forth. Then you see how many of your subjects actually develop lung cancer (or whatever you are studying) over the years, and you try to identify the factors associated with lung cancer.
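The difference between the two designs can be shown with a toy calculation (all numbers below are hypothetical, made up purely for illustration): a case-control study looks backward from disease to exposure and yields an odds ratio, while a cohort study looks forward from exposure to disease and yields a relative risk.

```python
# Toy illustration of the two epidemiologic designs.
# All numbers are hypothetical.

# Case-control: start from 100 lung-cancer patients ("cases") and
# 100 matched healthy people ("controls"), then look backward at
# an exposure such as smoking.
cases_exposed, cases_unexposed = 80, 20
controls_exposed, controls_unexposed = 30, 70

# Odds ratio: odds of exposure among cases vs. among controls.
odds_ratio = (cases_exposed / cases_unexposed) / (
    controls_exposed / controls_unexposed
)
print(f"odds ratio: {odds_ratio:.1f}")  # (80/20) / (30/70) ≈ 9.3

# Cohort: start from exposed and unexposed groups, then follow them
# over the years and count who develops the disease.
exposed_total, exposed_sick = 1000, 15
unexposed_total, unexposed_sick = 1000, 2

# Relative risk: disease rate among the exposed vs. the unexposed.
relative_risk = (exposed_sick / exposed_total) / (
    unexposed_sick / unexposed_total
)
print(f"relative risk: {relative_risk:.1f}")  # (15/1000) / (2/1000) = 7.5
```

Either number far above 1 suggests an association with the exposure, but, as the next section explains, an association alone does not prove cause and effect.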
Pitfalls and dead ends
Epidemiologic studies cannot usually prove cause and effect, but can identify associations and risk factors. Furthermore, epidemiology is best at identifying very powerful risk factors—smoking for lung cancer, for example. It is less good at risk assessment when associations are weak—between radon gas in homes and lung cancer, for example.
No matter how well done, any epidemiologic study may be open to criticism. Here are just a few of the problems:
- People may not reliably report their eating and exercise habits. (How many carrots did you eat each month as an adolescent? How many last month? Few of us could say.) People aware of the benefits of eating vegetables may unconsciously exaggerate their vegetable consumption on a questionnaire. That’s known as “recall bias.”
- Hidden variables or “confounders” may cloud results. A study might indicate that eating broccoli reduces the risk of heart disease. But broccoli eaters may be health-conscious and get a lot of exercise. Was it the broccoli or the exercise?
- Those included in a study may seem to be a randomly selected, unbiased sample and then turn out not to be. For example, searching for a control group in one study, a researcher picked numbers out of the telephone book at random and called his subjects in the daytime. But people who stay home during the day may not be a representative sample. Those at home in the daytime might tend to be very young or very old, ill, or recovering from illness.
- Health effects, especially where cancer is concerned, may take 20 years or more to show up. It’s not always financially or humanly possible to keep a study running that long.
Reading health news in an imperfect world
And this is only the half of it. Sometimes the flaws lie in the study, sometimes in the way it has been promoted and reported. Science reporters may be deluged with data. Many are expected to cover all science, from physics and astronomy to the health effects of hair dyes. Sometimes health reporters may not even have read the studies in question or may not understand the statistics.
Many medical organizations issue press releases. Some of these are excellent, and some aren’t. Some deliberately try to manipulate the press, overstating the case, failing to provide context, and so forth. Researchers, institutions, and corporations often hire public relations people to promote their work. These people may actually know less than the enterprising reporter who calls to interview them.
Finally, people tend to draw their own conclusions, no matter what the article says. It helps to know what the most common watchwords do—and don’t—mean:
- “May”: does not mean “will.”
- “Contributes to,” “is linked to,” or “is associated with”: does not mean “causes.”
- “Proves”: scientific studies gather evidence in a systematic way, but one study, taken alone, seldom proves anything.
- “Breakthrough”: this happens only now and then—for example, the discovery of penicillin or the polio vaccine. But today the word is so overworked as to be meaningless.
- “Doubles the risk” or “triples the risk”: may or may not be meaningful. Do you know what the risk was in the first place? If the risk was 1 in a million, and you double it, that’s still only 1 in 500,000. If the risk was 1 in 100 and doubles, that’s a big increase.
- “Significant”: a result is “statistically significant” when the association between two factors has been found to be greater than might occur at random (this is worked out by a mathematical formula). But people often take “significant” to mean “major” or “important.”
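The “doubles the risk” point is really about the difference between relative and absolute risk, and the arithmetic is worth spelling out (the numbers below are the hypothetical ones from the list above):

```python
# "Doubles the risk" is a relative change; whether it matters
# depends on the absolute risk you started from.

def doubled_risk(baseline):
    """Return (new risk, absolute increase) when a risk doubles."""
    new_risk = baseline * 2
    return new_risk, new_risk - baseline

# A 1-in-a-million risk that doubles is still only 1 in 500,000:
# an absolute increase of one case per million people.
rare_new, rare_increase = doubled_risk(1 / 1_000_000)
print(f"rare risk becomes 1 in {1 / rare_new:,.0f}")

# A 1-in-100 risk that doubles becomes 1 in 50: an absolute
# increase of one case per hundred people — a big deal.
common_new, common_increase = doubled_risk(1 / 100)
print(f"common risk becomes 1 in {1 / common_new:,.0f}")
```

The same headline phrase covers both situations, which is why it pays to ask what the baseline risk was.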
However, the bottom line is pretty good…
None of this means epidemiology doesn’t work. One study may not prove anything, but a body of research, in which evidence accumulates bit by bit, can uncover the truth. Research into human health has made enormous strides and is still making them. There may be no such thing as a perfect study, but here is only the briefest list of discoveries that came out of epidemiologic research:
- Smoking is the leading cause of premature death in developed countries.
- High blood cholesterol is a major cause of coronary artery disease and heart attack.
- Exercise is important for good health.
- Good nutrition offers protection against cancer; or, conversely, poor nutrition is a factor in the development of cancer.
- Obesity is a risk factor for heart disease, cancer, and diabetes.
The list could go on and on. Amid the ever-present changing “expert” medical advice, we suggest that you retain a spirit of inquiry and a healthy skepticism, but not lapse into cynicism. The “flip-flops” you perceive are often not flip-flops at all, except in the mind of some headline writer.
There is a great deal of good reporting, and it’s an interesting challenge to follow health news. You don’t believe everything you read or see on TV about politics, business, or foreign relations, so it should be no surprise that some health news doesn’t deserve your belief either. Luckily, there are many sources for health news—none infallible, but some a lot better than others.
- Don’t jump to conclusions. A single study is no reason for changing your health habits. Distinguish between an interesting finding and a broad-based public health recommendation.
- Always look for context. A good reporter—and a responsible scientist—will always place findings in the context of other research. Yet the typical news report seldom alludes to other scientific work.
- If it was an animal study or some other kind of laboratory study, be cautious about generalizing. Years ago lab studies suggested that saccharin caused cancer in rats, but epidemiologic studies later showed it didn’t cause cancer in humans.
- Beware of press conferences and other hype. Scientists, not to mention the editors of medical journals, love to make the front page of major newspapers and hear their studies mentioned on the evening news. The fact that the study in question may have been flawed or inconclusive or old news may not seem worth mentioning. This doesn’t mean you shouldn’t believe anything. Truth, too, may be accompanied by hype.
- Notice the number of study participants and the study’s length. The smaller the number of subjects and the shorter the time, the greater the possibility that the findings are erroneous or misleading.