The problem with: Nutrition Science
Welcome to the second post in my 'The problem with:' series. In these posts I try to take an objective look at some of my own personal beliefs and interests. This post is all about nutrition science, and more specifically nutrition studies. I know I have cited studies in the past while defending some of my own dietary practices without fully understanding the details of that study. There is a lot to this one so it is a long enough post, but I learned a lot researching it and hopefully it contains some useful information for others.
TL;DR: Nutrition studies are hard both to conduct and to interpret. Furthermore, the public's only exposure to the results of those studies is often through reductionist and sensationalist news headlines and fad diets.
I've always thought of myself as a man of science. By 'man of science' I mean I have an interest in science, not that I have any formal education in any of the major disciplines (I'm not counting computer science as a science in and of itself). I read popular science books and blog posts that give me a decent but very high level understanding of physics, biology and chemistry. More importantly, what I get from them is a basic understanding of the scientific method. Basically I'm scientifically literate, but by literate I'm more at the Harry Potter end of the spectrum than the Ulysses end.
What inspired this post were my attempts at researching the positive and negative implications of different dietary choices. In doing so (as I have mentioned in a previous post) I was confronted by a myriad of conflicting articles and scientific studies. I've always viewed science as being somewhat absolute. That it's backed by math and logic, meaning it can guide you to the true nature of the world without ambiguity. So why then, when it comes to health, does it feel like such a chore to find accurate and trustworthy information?
For the sake of this post I won't be talking about any specific studies or their implications, but rather how studies are performed. Understanding this most basic of concepts from the world of science is invaluable when looking to science for guidance in any area of your life. This for me is the problem with science, and health science in particular. We generally don't understand how science is done and the impact that this has on the information that eventually makes it to our social media feeds or news articles.
The Scientific Method
So what even is science? At its core it is simply the study of the physical world through observation and experiment. We are curious creatures and so naturally we ask questions about the things we interact with every day. Science is the attempt to answer those questions through standardized means. The method by which scientists come up with those answers can be summarized as follows:
Ask a question
Step one of the scientific method is figuring out what question you want to answer. Why does the moon only come out at night? And where does the sun go? Why do things always fall to the ground rather than up to the sky? And why does water sometimes fall from the sky? Perhaps most importantly, why the hell do I feel the need to yawn just by reading the word yawn?
Do background research
Once you know what question you are asking, it makes sense to do some research to see what information is already out there. You could start by searching online for existing research papers on the topic or by visiting a local library. After all, the question you are asking might already have a satisfactory answer. The word 'satisfactory' is important here - there may be answers, but they could be wrong or incomplete.
Formulate a hypothesis
'Hypothesis' is one of those really sciency sounding words that gets thrown around when someone wants to sound smart. All it is is an educated guess. Something that seems logical and can be tested. It usually takes the following form: 'If <something> then <something else>'. For example:
If I lift weights regularly then my muscles will get bigger
If I eat processed food then I will gain weight
Design and run an experiment
This is the part where you test your hypothesis. You are trying to figure out whether your educated guess was right or wrong. The design of the experiment is crucial, as a biased experiment could lead to a false positive - i.e. you confirm your hypothesis, but the test was set up in a way that favored that outcome in the first place. For example, with our hypothesis 'If I eat processed food then I will gain weight':
If our experiment has a single person eat processed food for 12 hours straight and then sleep for 12 hours straight, we will likely see the expected result - but this doesn't prove our hypothesis, it proves that eating excessive amounts of processed food causes weight gain. What we would really need is several groups of people eating different amounts of calories' worth of processed food, so we can see the effect it has on their weight.
Analyze the results
Once you have completed your experiment the results must of course be checked. Did the experiment confirm your initial hypothesis? Perhaps the data shows that the experiment needs adjustments in order to provide more meaningful results. The data could even completely disprove your original hypothesis - which is not necessarily a bad thing. Proving that something is not true can be just as valuable as proving that something is true. You can rule out that hypothesis and search for a different answer to your question. For example with our hypothesis 'If I lift weights regularly then my muscles will get bigger':
Sure, your muscles will get bigger if you lift weights. However, if you don't consistently add more weight then your muscle growth will stop pretty quickly. You will also not experience muscle growth without consuming adequate dietary calories to support it. So while there is some truth to the hypothesis, it is important to analyze the results carefully.
Report the results
It is important to share the results and data from any scientific experiment or study. The scientific community at large can then offer feedback or re-run experiments to confirm the findings. This is known as peer review. Without peer review you have to take the word of the person who ran the original experiment or study, so you should always look for peer-reviewed studies.
When it comes to nutrition studies most of us only ever see what the mainstream media choose to report on (usually with sensationalist headlines). This often leads to a barrage of confusing and conflicting information. Eggs are healthy one day and deadly the next. Butter is back! Or is it? Does it still cause heart disease? What about red meat? Does an apple a day really keep the doctor away? Is lack of exercise the root of weight gain or is it dietary choices?
Why all the conflicting information? Who can we trust when it comes to advice about how to live a healthy life? To answer these questions I think it is important to have a basic understanding how scientists test the above claims.
Let's take eggs, for example - a food known to be high in cholesterol, which in turn is linked to heart disease. We can start with the hypothesis 'If eggs increase cholesterol in the body and high cholesterol is linked to heart disease, then eliminating eggs from the diet should reduce the incidence of heart disease.' That seems like a pretty logical hypothesis, right? Now let's try to figure out how we could test it.
There are a bunch of different approaches we can take to test our hypothesis but before we explore our options it is worth calling attention to a couple of important points:
The Placebo Effect
A beneficial effect produced by a placebo drug or treatment, which cannot be attributed to the properties of the placebo itself
The placebo effect is real, and no one really understands why. A doctor could give a patient a pill made only of sugar and tell them that this pill will cure their headache. Sugar of course cannot cure a headache, but because the patient believes they are taking an actual drug, it can work as the real drug would have. It gets even more bizarre than that: a patient can even be told that the pill is a placebo and it will still work.
Correlation does not equal causation
Correlation: a mutual relationship or connection between two or more things.
Causation: the action of causing something.
When we see a relationship between two sets of statistics it is often tempting to jump to the conclusion that one caused the other, but this isn't always the case. For example, one could claim that drinking milk causes death, because everyone who dies has drunk milk at some point in their life. When we analyze data it is vitally important that we do not fall into this trap. For more (sometimes hilarious) examples of this in effect, check out this website, which constructs graphs from statistics with the express purpose of correlating completely unrelated phenomena (for example, a correlation can be found between the number of people drowned in swimming pools and the number of films Nicolas Cage appeared in that year).
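The Nicolas Cage example can be sketched in a few lines of code. This is just an illustration with made-up numbers (the yearly figures below are invented, not real statistics): two quantities that merely happen to trend in the same direction will produce a strong Pearson correlation despite having no causal connection whatsoever.

```python
# Hypothetical yearly figures: two unrelated quantities that both
# happen to drift upward over five years.

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

films = [1, 2, 2, 3, 4]             # invented: films released per year
drownings = [98, 102, 104, 109, 115]  # invented: pool drownings per year

r = pearson(films, drownings)
print(f"correlation: {r:.2f}")  # strongly positive, yet clearly not causal
```

A correlation close to 1.0 here tells us nothing about causation - only that both series rose over the same period.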
Publication bias
Publication bias occurs when the result of an experiment affects whether or not the research is published or distributed.
Researchers who disprove a hypothesis or obtain negative results may not want to share them, as this can be seen as a failure. Furthermore, journals may prefer to publish research that garners more attention, meaning that important but less eye-catching research is never published. This has a knock-on effect in that researchers may favor sensationalist or hot topics that are more likely to be published.
Funding
Science doesn't pay for itself, and studies are often funded by parties with vested interests. This is especially prevalent in health science. For example, it is in the interest of companies that sell sugar-laden foodstuffs to fund research that downplays the health consequences of high sugar intake and to suppress results that reflect negatively on the industry. It is always worth taking a look at who is funding research when considering the results of a study.
It's important to understand these concepts because they directly impact the design and outcome of any scientific study. We need to actively account for the placebo effect while also ensuring that our results demonstrate causation rather than mere correlation. Research is useless if no one ever sees it, so publication is rightly a concern, and studies cannot be carried out without funding, which could itself compromise the research. Complicated, right? These are the challenges that scientists face on a daily basis, and factors that anyone reading a study needs to be aware of. If you find a study that does not account for them, you can basically discard any conclusions it contains.
With that in mind, let's consider how we can begin to understand the impact of our food choices on our health. The first step is to understand the various ways that a study can be designed:
Meta-analysis: Combine and analyze the data from multiple research studies to reach a single conclusion with more statistical weight behind it. Researchers don't perform any new research but aggregate existing data, which can lead to higher statistical power. However, researchers must carefully select the studies they include in the analysis. With meta-analysis it is important to be mindful of study design and publication bias, as both can skew the statistics.
Systematic review: A systematic review is a critical assessment of research conducted on a particular topic. It's kind of like a film critic reviewing all movies within a specific genre. The review itself will contain an overview of the findings of the combined studies. Often a prerequisite for a meta-analysis. The data extracted from a systematic review can be used in the statistics based meta analysis.
Randomized controlled trial: A study in which participants are randomly assigned to different groups. This reduces bias, as the researcher cannot, intentionally or otherwise, organize the participants in a way that favors one outcome over another. For example, if you want to study the effect of a certain food on athletic performance, you shouldn't assign all of the more athletic-looking participants to the group that will be consuming the food, as they would likely perform better regardless of said food.
Cohort study: Groups of people are studied relative to other groups. For example, one 'cohort' could be people between thirty and forty years of age who smoke ten or more cigarettes a day. Another cohort could be non-smokers in the same age bracket. The study would then seek to identify the effects of smoking by comparing the smoking cohort to the non-smokers.
Case-control study: Start with the result and work backwards. For example researchers could interview a group of people with a particular illness in the hope of identifying a common thread that could have caused those people to end up suffering with that illness. One could take a large group of people suffering from diabetes for example and try to find what, if anything, they all have in common.
Animal research: Studies conducted on animals, common when testing possible side effects of new drugs.
In vitro studies: Studies are performed on isolated cells or pieces of tissue in a petri dish or test tube. Such studies are convenient but may not fully predict how cells will react to treatment in their natural environment. For example a drug may successfully kill cancer cells in the test tube but fail to do so in the human body.
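The randomization idea above can be made concrete with a minimal sketch (the participant IDs and seed below are hypothetical). Shuffling the pool before dealing it into groups means the researcher has no influence over who ends up where:

```python
import random

def randomize(participants, n_groups, seed=None):
    """Shuffle participants, then deal them into groups round-robin."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    # Slicing with a stride deals the shuffled pool out round-robin,
    # so group sizes differ by at most one.
    return [pool[i::n_groups] for i in range(n_groups)]

# Hypothetical: 12 participants split into 3 groups of 4.
groups = randomize(range(12), n_groups=3, seed=42)
for i, g in enumerate(groups):
    print(f"group {i}: {sorted(g)}")
```

Real trials use more careful schemes (stratified or block randomization, for instance), but the core idea is the same: chance, not the researcher, decides the assignment.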
But wait, there's more. As if that isn't enough there are a couple of important concepts and study variations that also need to be understood:
Control groups: In a study, a control group is a group of participants that does not receive the treatment. At the end of the study they can be compared against the group that did receive it. In an experiment it is often important to track change, so you need a baseline for comparison - which is exactly what a control group provides.
Double blind method: The double blind method is when neither the participants nor the researchers know who is actually receiving the treatment. For example, when testing a new drug there would be a control group that receives sugar pills and another group that receives the actual drug. If neither the researcher nor the participant knows who is receiving the real drug, the risk of the placebo effect skewing the research is reduced. It has been found that researchers who know who is receiving the real drug can inadvertently and subtly give clues to the participants, thereby affecting the outcome.
Empirical research: Research based on direct observation, statistics and experiment. Generally it means that there is some evidence that is directly observable by our senses.
Qualitative studies: Results derive from interviews, observations and interpretations, not what you would call hard fact. This type of research is often preliminary and exploratory in nature.
Quantitative studies: Research that uses numerical analysis. Usually produces hard data that can be used in statistical analysis.
Epidemiological studies: Study of health and disease in specific populations (with focus on who, when and where).
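The double blind method can be sketched too. In this hypothetical setup, an independent coordinator generates the treatment assignments and hands out opaque codes; the decoding key stays with the coordinator until the study ends, so neither the researcher nor the participants can tell drug from placebo:

```python
import random

def blind_allocation(participants, seed=None):
    """Assign each participant to an arm, but expose only opaque codes."""
    rng = random.Random(seed)
    key = {}     # code -> treatment arm; held only by the coordinator
    labels = {}  # participant -> code; all the researcher ever sees
    for i, person in enumerate(participants):
        arm = rng.choice(["drug", "placebo"])
        code = f"P{i:03d}"
        key[code] = arm
        labels[person] = code
    return labels, key

# Hypothetical participants; the key would be sealed until unblinding.
labels, key = blind_allocation(["alice", "bob", "carol", "dave"], seed=7)
print(labels)  # codes only - no treatment information leaks here
```

The design choice worth noting is the separation of the two dictionaries: blinding works only because nobody involved in running the study ever sees the key.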
There's a lot more I could have covered here, but I felt these were the most important and a great starting point. Armed with this newfound knowledge of the scientific method and study design, I think we are ready to tackle our earlier hypothesis once again:
'If eggs increase cholesterol in the body and high cholesterol is linked to heart disease then eliminating eggs from the diet should reduce the incidence of heart disease.'
The ideal test might be to have two groups of people, one that consumes eggs and one that doesn't. We would then need to monitor both groups until the end of their lives and review the most common causes of death in each. If the group that was allowed to eat eggs suffered a higher incidence of heart disease, then we could make the naive assumption that the eggs contributed directly to it.
But now that we know what we do about scientific studies this clearly won't stand. After all, correlation doesn't equal causation and there could be many other factors at play that we are not accounting for. This study of ours would also be impractical. It would need to happen over such a long period of time that it would be difficult to find participants willing to commit to it. Funding would also likely be an issue, so it seems like we might have to look more towards cohort and case control studies.
So what if we look at groups of people that have a high risk for cardiovascular disease (CVD) and compare them to groups of people that don't? In Ireland CVD accounts for 33% of all deaths. One of the countries with the lowest rates for CVD is France. So what if we were to compare the consumption of eggs in both those countries? Clearly this wouldn't do because it would fail to take into account a myriad of other factors, such as rates of smoking and exercise.
Unfortunately it seems that while we could get some useful information about CVD from a cohort or case control study it would be hard to narrow down the causes to any one factor, largely due to the fact that there are so many variables at play.
The ideal study would be to take 1,000 infants at birth and prescribe the same diet and exercise regime for all of them, but with one group eating two eggs a day, another eating two a week, and another eating no eggs at all (our control). We could then monitor them throughout the course of their lives, ensuring that they stick to this strict regime of eating and exercise. This is obviously not feasible, but even if it were, it would still fail to take genetic factors into account (see this study, which shows that different people can react very differently to the same diet). So maybe what we would actually need is 1,000 clones...
I think you get the point. This stuff is hard to do and rarely provides a concrete answer to the questions being asked. This is why you will so often see phrases like 'recent studies suggest that...' or 'excessive consumption of <something> may increase the risk of <some disease>'. Certainty in the field of nutrition science simply does not exist.
Let's look at a real-world example. A recent study in the American Journal of Clinical Nutrition claimed to show that the cholesterol measured in people eating two eggs a day was much the same as that of people eating two eggs a week. Now, we know that eggs contain about as much cholesterol as an 8oz steak, so how could this be possible? Well, we need to look at the study design. The participants in the low egg consumption group were essentially told to make up the protein by eating meat... which is also high in cholesterol, thereby negating the fact that they were abstaining from eggs. Also, the study was performed over just a few months - a short time when you consider the long-term impact of diet on the body. A more reliable way to perform that study would surely have been to have two groups on a plant-based diet (zero dietary cholesterol), with only one of them eating eggs. That would provide a much cleaner control with fewer variables. It's also worth noting that this study was funded by the Australian Egg Corporation.
Now just think how easy it would be to make a compelling headline out of that study while completely ignoring the fine print. The design of the study and the source of its funding are easily lost in a tabloid headline. For the record, I'm not suggesting that eggs are good or bad - I don't know. Excessive consumption is likely bad, especially when coupled with a low fiber diet (I have linked a video on this below from Dr. Rhonda Patrick, who has done interesting research in this area).
It is important to understand that none of what I have discussed in this post means nutrition science is a waste of time. Important information can still be garnered from well designed studies. The problem is how the findings of different studies are presented. We all need to be mindful of the information and advice we incorporate into our daily lives. There are people out there that genuinely care about your health, but many care far more about making money. And this, for me, is the problem with nutrition science.
Further reading/watching that might be of interest
Dr. Rhonda Patrick on cholesterol and heart disease