`I quite agree with you,’ said the Duchess; `and the moral of that is–Be what you would seem to be–or if you’d like it put more simply–Never imagine yourself not to be otherwise than what it might appear to others that what you were or might have been was not otherwise than what you had been would have appeared to them to be otherwise.’
`I think I should understand that better,’ Alice said very politely, `if I had it written down: but I can’t quite follow it as you say it.’
If you tell a lie big enough and keep repeating it, people will eventually come to believe it.
Dr. Joseph Goebbels, Nazi minister of propaganda
I’m starting this post with two apropos quotes. The first, from Alice in Wonderland, because the post will be a little difficult to understand; the second because I just read for the umpteenth time the Big Lie about low-carb diets and wanted to blog about it but couldn’t until I wrote this post first.
Intention-to-treat analysis (ITT) has become the de rigueur way of looking at experimental results, even though it more often than not gives erroneous results. These erroneous results are then reported as gospel. When unbiased, intelligent people (the readers of this blog, for example) consider ITT, they cannot understand how it can be used by scientists trying to make sense of their data, but, unfortunately, it is used in almost every experiment. Here is how it works.
Let’s say we’re going to do an experiment comparing two different diets. We round up 100 subjects and randomize them into two groups of 50. We put one group, Group A, on one diet, Diet A, and we put the other, Group B, on a different diet, Diet B. We keep both groups on their respective diets for 8 weeks to see what happens.
At the end of the 8 weeks we find that 30 members of Group A dropped out, but those who hung in there lost an average of 3 pounds per week for a total of 24 pounds each over the course of the study. We look at Group B and find that no one dropped out of the study and that all the subjects lost an average of 1.2 pounds per week.
What do these data tell us? It’s pretty simple. They tell us that Diet A is much more effective, but more difficult to follow. They tell us that Diet B is less effective but easier to follow. Right? All intelligent people could agree on that. So that’s how this study would be presented if it were published in a journal, right? Uh, no.
No. If published, the conclusion would be that both diets are exactly the same.
Yep. That’s what the authors would conclude. Why? Because they would use an intention-to-treat analysis. In fact, the peer-review process would probably demand it.
An intention-to-treat analysis demands that all subjects remain in the data pool, even if some have dropped out. The intention was to treat all the subjects, so the analysis should contain all the subjects, even if some left the study after the first day. In an ITT, researchers pretend that subjects who chose to abandon the study really didn’t and include them in their final data. Sounds like something from Through the Looking Glass, doesn’t it?
Let’s look at how this would work in our dietary study above. The 20 subjects in Group A who followed Diet A lost 24 pounds each. Multiply this 24 pounds times the 20 subjects who stayed in the study and you find that the group lost 480 pounds over the course of the 8 weeks. Now divide this 480 pounds by the 50 subjects who started the study, and you get a weight loss of 9.6 pounds for the 8 weeks. Dividing by 8 gives us an average weight loss of 1.2 pounds per week for all 50 subjects in Group A. Which is exactly the same as the weight loss in the subjects in Group B. So, according to the dictates of ITT, the study would show that both diets were equally effective. But, as we’ve seen, they’re not.
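For the arithmetically inclined, the ITT calculation above can be sketched in a few lines of Python. The numbers are the ones from the example; the variable names are mine:

```python
# ITT arithmetic for the Group A / Group B example.
weeks = 8

# Group A: 50 started, 30 dropped out, the 20 completers lost 3 lb/week.
started_a, completers_a = 50, 20
loss_per_completer_a = 3 * weeks                      # 24 lb each
total_loss_a = completers_a * loss_per_completer_a    # 480 lb for the group

# ITT divides the group's total loss by everyone who STARTED, not who finished.
itt_weekly_a = total_loss_a / started_a / weeks       # 1.2 lb/week

# Group B: all 50 completed and lost 1.2 lb/week.
itt_weekly_b = 1.2

print(itt_weekly_a, itt_weekly_b)  # both come out to 1.2 -> ITT calls it a tie
```

Diluting Group A’s 3 pounds per week across the 30 dropouts is exactly what makes the two diets look identical.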
If a doctor were recommending a diet to his/her patients based on the actual findings of the study, he/she could reasonably say: Diet A is very effective but tough to follow, so if you think you can do it, Diet A is definitely the fastest way to lose weight. If you want something that will help you lose a little weight and is easy to stick to, then try Diet B.
If the same doctor recommends a diet to his/her patients based on the ITT results, he/she would say: Follow whichever diet you want – they’re both the same.
Why, you may ask, would seemingly intelligent people do something so stupid as to use ITT to evaluate data? There is a reason, although it has its own problems.
We all know from experience and from talking to a lot of people who have lost weight that a lot of different diets work. People lose weight on the Ornish diet and they lose weight on the infinitely preferable Protein Power diet. And many other diets as well. So, we can reasonably assume that almost any diet will help some people lose weight. But we want to compare two diets to see which one is really the best. So, let’s do another experiment.
Let’s take another 100 people and randomize them into two groups of 50, Group C and Group D. Those subjects in Group C go on Diet C and all of them do well. They lose an average of 2 pounds per week and all of them stay on the diet. The subjects in Group D go on Diet D, and most don’t do very well. As we all know from experience, it’s tough to stay motivated to stay on a diet if you’re not losing weight. So, 30 of the subjects in Group D drop out because they’re not losing. We know that any diet will work for some people, and Diet D is no different. The 20 who stay in the study are those who are losing on Diet D. And those 20 Group D subjects lose an average of 2 pounds per week.
In analyzing our data, if we remove from the pool of subjects all those who dropped out of the study, we are left with all 50 people in Group C, who lost an average of 2 pounds per week and only 20 people in Group D, who lost an average of 2 pounds per week. We would then find that both diets are exactly the same. Subjects in both groups lost 2 pounds per week. Therefore both diets are equally effective.
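The completers-only arithmetic can be sketched the same way, again using the example’s numbers:

```python
# Completers-only ("per-protocol") view of the Group C / Group D example.
# Both groups' completers lost 2 lb/week, so a completers-only analysis
# calls the diets identical even though 60 percent of Group D quit.
started = 50
completers_c, weekly_c = 50, 2.0   # everyone on Diet C finished
completers_d, weekly_d = 20, 2.0   # only the 20 Diet D responders finished

looks_like_a_tie = (weekly_c == weekly_d)      # True: the averages match
dropout_rate_d = 1 - completers_d / started    # 0.6: the hidden story

print(looks_like_a_tie, dropout_rate_d)
```

The per-week averages match, but only because the 30 Diet D subjects who weren’t losing have been quietly removed from the denominator.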
But is that true? Clearly not. And that is the problem that ITT was designed to deal with. But, as we’ve seen above, it brings its own errors.
So, how do we deal with the issue honestly and effectively? Easy. By explaining the data in two ways. Most people – researchers included – want to boil an issue down to a single answer, when two answers are required. ITT allows one answer – often incorrect – to two different questions. ITT is like the old TV show in which the clown Bozo always asked the little kids he interviewed something like this:
So, Bobby, tell me: Do you walk to school or carry your lunch?
Were Bozo to insist on an ITT-type analysis of the question, he could accept only one answer.
Going back to our Group A/Group B diet study we can look at the data in two ways:
1. Diet A is extremely effective for those who stick with it. (Called the adherence effect.)
2. Only 40 percent of those attempting Diet A achieve the desired effect. (Called the assignment effect.)
Both of these statements are true. Both contain valuable information. But they answer two different questions. The first answers the question: what happens to people who stick to the diet? The second answers the question: What happens to people who are placed on the diet?
As Dr. Gerard Dallal writes about ITT:
The fraud occurs when the answer to the question of assignment is given as though it were the answer to the question of adherence!
Instead of concluding that Diet A and Diet B show the same results (when, clearly, they don’t), which is how the data would be presented in a scientific paper demanding ITT, why not present them this way?
The adherence effect: Subjects following Diet A for 8 weeks lost an average of 3 pounds per week whereas those following Diet B lost 1.2 pounds per week.
The assignment effect: 40 percent of those attempting Diet A remained in the study whereas 100 percent of those following Diet B remained in the study.
Conclusion: Diet A is significantly more effective (3 pounds per week vs 1.2 pounds per week) for those able to remain on the diet. Diet B is less effective but significantly less difficult to follow than Diet A. (100 percent of subjects on Diet B remained on the diet throughout the study whereas 60 percent of those on Diet A dropped out).
It just ain’t that hard to present it that way. It provides much more information than the ITT, which attempts to answer two questions with one answer.
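The two-answer presentation is easy enough to mechanize. Here is a minimal sketch; the function name and structure are my own, not from any study:

```python
# Report the adherence effect and the assignment effect separately,
# instead of collapsing them into a single ITT number.
def summarize(started, completed, weekly_loss_of_completers):
    adherence = weekly_loss_of_completers   # what those who stuck with it achieved
    assignment = completed / started        # fraction able to stick with it
    return adherence, assignment

diet_a = summarize(started=50, completed=20, weekly_loss_of_completers=3.0)
diet_b = summarize(started=50, completed=50, weekly_loss_of_completers=1.2)

print(diet_a)  # (3.0, 0.4): very effective, hard to follow
print(diet_b)  # (1.2, 1.0): less effective, easy to follow
```

Two numbers per diet, two questions answered, no information thrown away.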
Now, let’s look at the big low-carb lie that launched me into this post. I was reading a book that I intended to review for this blog and came across the following statement:
There is evidence from a variety of sources that [low-carb diets] work for short-term weight loss. One year after starting a diet, however, there appears to be no significant difference in success rate than that seen on any other common diet plan.
Have you heard that one before? It’s a specific variant of the old: Studies show that while effective in the short term low-carb diets show no difference in weight loss after one year than do low-fat diets. It’s the Big Lie.
It’s the last refuge argument of low-fat advocates who are getting hammered with all the data showing low-carb diets to be more effective. Yeah, well, they say, Protein Power may work in the short term, but over a year studies show it’s no better than low-fat. It’s like a cross thrust in a vampire’s face.
But is it true? It is if you believe in intention-to-treat analysis. But what if you believe in a more accurate way of presenting the data?
Let’s briefly look at a few studies published that confirm the idea that there is no difference between low-carb diets and low-fat diets after one year.
The first was published in the Annals of Internal Medicine in 2004. The conclusion of the authors was that after one year subjects
had more favorable triglyceride and high-density lipoprotein cholesterol levels on the low-carbohydrate diet than on the conventional diet. However, weight loss and the other metabolic parameters were similar in the 2 diet groups.
In the body of the paper, however, one can read the following:
The final 1-year weight change (mean ± SD) was -5.1 ± 8.7 kg in the low-carbohydrate group and -3.1 ± 8.4 kg in the conventional diet group (Figure). The difference in weight loss between the 2 diet groups was not significant (-2.0 kg [CI, -4.9 kg to 1.0 kg]; P = 0.195 before and P > 0.2 after adjustment for baseline variables). The difference in weight loss between the 2 diet groups between 6 months and 1 year was not statistically significant (P = 0.063).
But that’s all ITT blather. Let’s read the next couple of sentences:
Persons on the low-carbohydrate diet who dropped out lost less weight than those who completed the study (change, -0.2 ± 7.6 kg vs. -7.3 ± 8.3 kg, respectively; mean difference, -7.1 kg [CI, -11.6 kg to -2.8 kg]; P = 0.003). In contrast, weight loss was not significantly different for those on the conventional diet, whether they dropped out or completed the study (change, -2.2 ± 9.5 kg vs. -3.7 ± 7.7, respectively; mean difference, -1.5 kg [CI, -5.7 kg to 2.7 kg]; P > 0.2).
Let’s translate. Those who dropped out of the low-carb diet, but were counted as if they hadn’t, lost only 0.2 kg (about 0.4 pounds), whereas those who completed the study lost 7.3 kg (about 16 pounds). Do you think the dropouts skewed the numbers? I guess so. And look at the next astounding sentence. “In contrast, weight loss was not significantly different for those on the conventional diet, whether they dropped out or completed the study…” So, there was no difference in the results of those following the low-fat diet whether they dropped out or stayed in. Had the subjects who dropped from the low-fat arm not been included, the results for that diet would have been the same. Including the subjects who dropped from the low-carb arm, however, dramatically lowered the overall weight loss of the group, making it appear equal to that of the low-fat arm.
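To see how counting dropouts drags a group mean down, here is an illustrative sketch. The per-person losses (7.3 kg and 0.2 kg) come from the paper quoted above; the head counts are hypothetical, since they aren’t given here:

```python
# Illustration only: how folding dropouts into the denominator shrinks
# the low-carb group's mean loss. Head counts below are HYPOTHETICAL;
# the kg figures are the completer/dropout means from the quoted paper.
completers, dropouts = 35, 28               # hypothetical head counts
loss_completers, loss_dropouts = 7.3, 0.2   # kg lost, from the paper

itt_mean = (completers * loss_completers + dropouts * loss_dropouts) \
           / (completers + dropouts)

print(round(itt_mean, 1))  # well below the 7.3 kg the completers actually lost
```

Whatever the real head counts, the direction of the effect is the same: the near-zero losses of the dropouts pull the ITT average far below what the people who stayed on the diet achieved.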
It could be accurately stated that those who remained on the low-carb diet for one year lost significantly more weight than those who remained on the low-fat diet, which, of course, refutes the Big Lie that low-carb and low-fat diets provide equal weight loss at one year.
The two other studies used to perpetrate the Big Lie that low-carb diets show no difference in weight loss after one year are the ones by Foster et al and Samaha et al in the May 2003 New England Journal of Medicine.
When analyzed by ITT, both of these studies show no significant difference between low-carb and low-fat diets after a year. But when looked at from the perspective of those subjects remaining in the study, we see a big difference between the low-carb and the low-fat arms.
In the Foster et al study using a modified version of the Atkins diet, we find a statistically insignificant 1.9 kg difference in weight loss between the two groups by ITT. But when we eliminate the drop outs and look instead at the data from those subjects who remained on the diets for the entire one year, we find a statistically significant 2.8 kg (over 6 pounds) greater weight loss in those following the low-carb diet.
In the Samaha et al study using the diet from the Protein Power LifePlan, those following the low-carb diet lost a statistically insignificant 2 kg more weight than those following the low-fat diet by ITT. Eliminating the dropouts, however, gives us a statistically significant 3.6 kg (almost 8 pounds) greater weight loss on the low-carb versus the high-carb diet after one year.**
Intention-to-treat analysis gives us the Big Lie: Low-carb diets are no more effective than low-fat diets after one year. Dr. Goebbels would have been proud.
The truth, however, is a little different and can be stated thus:
Those who follow low-carb diets for a year lose significantly more weight than those who follow low-fat diets for a year.
After reading this post you should know more about intention-to-treat analysis than 99.9 percent of the physicians and dietitians practicing in the world today. Don’t let this knowledge go to waste. Next time you hear the Big Lie, point out the truth.
** Thanks to Richard Feinman, Ph.D. for the tabulation of these data and for our many conversations on this subject.