You raise a good question, particularly since results often seem to conflict. One week a study finds that coffee, chocolate, or some other food or drug is good for you, and the next week another study concludes the opposite.
One of the first questions to ask when you come across information about a “new study” is whether it is really a “study” at all, or an analysis, which looks at previously published studies to address a new issue. Analyses and meta-analyses pool data from selected studies to investigate and identify relationships. Statistically, an analysis can really only ask questions, or identify new questions to be studied; it can’t answer them. Unfortunately, the popular press frequently calls analyses “studies” and often presents their findings as study conclusions, which they are not. If the news item is actually a study, you need to consider whether it took place in humans, in animals, or in a test tube. While studies in animals or test tubes can suggest important effects of a drug or a type of food, we really can’t say that the same thing will happen in people without human studies.
Human studies mostly fall into two categories – observational studies and randomized clinical trials. Observational studies follow a large group of people over time to see how their diets, habits (such as exercise), or a drug they may be taking affect their health. Scientists can get a lot of valuable information this way, but not proof of cause and effect. If results suggest that people who ate the most vegetables, ran three miles a day, or took a certain drug lived longer or avoided a specific disease, you can’t be sure that the vegetables, the exercise, or the drug was responsible. Something else might have affected the results – family history, for example, if that wasn’t considered when the data were analyzed.
With randomized clinical trials, researchers recruit volunteers who have a specific health problem and assign them randomly to two or more groups. One group gets the drug or other intervention under study, while the other gets a placebo. Neither the researchers nor the volunteers know who is taking the real thing. Over time, the investigators collect information on how the condition is affected and what side effects occur. At the study’s end, the results are “unblinded” so that the researchers can determine how well the drug being studied worked – or didn’t. When reading about these studies and deciding whether they apply to you, consider how similar the participants were to you. Were they of a similar age, sex, education level, income group, and ethnic background, with the same health concerns and lifestyle?
You also need to know how to interpret risk. If a study finds that a drug reduces the risk of a disease by, say, 50 percent, be aware that this is the “relative” risk reduction; it doesn’t mean that 50 percent of everyone who takes the drug won’t get sick. Relative risk reduction compares how often the disease occurs among people taking the drug with how often it occurs among people who don’t. For example, if one person out of 100 could be expected to have a heart attack, a 50 percent decreased risk would mean that a heart attack could be expected in only 0.5 out of 100 people on the drug. The term “absolute risk” refers to that 0.5-per-100 figure, and it can give you a better picture of how helpful taking the drug – or drinking the coffee or eating the chocolate – really would be.
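Since the two kinds of percentages are easy to mix up, here is a minimal sketch of that arithmetic in Python. The numbers (a baseline of 1 heart attack per 100 people and a 50 percent relative reduction) are just the illustrative figures from the example above, not data from any actual trial:

```python
# Baseline risk: 1 heart attack expected per 100 people without the drug
# (illustrative figure from the example, not real trial data).
baseline_risk = 1 / 100

# The headline claims a 50 percent *relative* risk reduction.
relative_risk_reduction = 0.50

# Risk for people taking the drug: the baseline, cut by the relative reduction.
treated_risk = baseline_risk * (1 - relative_risk_reduction)

# Absolute risk reduction: the actual difference the drug makes.
absolute_risk_reduction = baseline_risk - treated_risk

print(f"Expected heart attacks on the drug: {treated_risk * 100:.1f} per 100 people")
print(f"Absolute risk reduction: {absolute_risk_reduction * 100:.1f} per 100 people")
```

The point the code makes concrete: a dramatic-sounding 50 percent relative reduction here amounts to an absolute change of only half a case per 100 people.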
Another consideration is who paid for the research. If the sponsor was a drug company or another party with a financial interest in the outcome, you might want to see results from independently funded studies of the same intervention before you accept the findings.
Andrew Weil, M.D.
“Understanding Risk: What Do Those Headlines Really Mean?” National Institute on Aging, accessed August 18, 2015, https://www.nia.nih.gov/health/publication/understanding-risk#same