Wednesday, July 28, 2010

Too much complexity! I like the simplicity of Ricky’s Weather Forecasting Stone

Too much complexity in the last few posts and related comments: multivariate analyses, path coefficients, nonparametric statistics, competing and interaction effects, explained variance, plant protein and colorectal cancer, the China Study, raw plant foods possibly giving people cancer unless they don’t …

I like simplicity though, and so does my mentor. I really like the simplicity of Ricky’s Weather Forecasting Stone. (See photo below, from … I will tell you in the comments section. Click on it to enlarge. Use the "CTRL" and "+" keys to zoom in, and "CTRL" and "-" to zoom out.)


Can you guess who the gentleman in the photo is?

A few hints. He is a widely read and very smart blogger. He likes to eat a lot of saturated fat, and yet is very lean. If you do not read his blog, you should. Reading his blog is like heavy resistance exercise, for the brain. It is not unlike doing an IQ test with advanced biology and physiology material mixed in, and a lot of joking around.

Like heavy resistance exercise, reading his blog is hard, but you feel pretty good after doing it.

Saturday, July 24, 2010

The China Study one more time: Are raw plant foods giving people cancer?

In this previous post I analyzed some data from the China Study that included counties where there were cases of schistosomiasis infection. Following one of Denise Minger’s suggestions, I removed all those counties from the data. I was left with 29 counties, a much smaller sample size. I then ran a multivariate analysis using WarpPLS (warppls.com), like in the previous post, but this time I used an algorithm that identifies nonlinear relationships between variables.

Below is the model with the results. (Click on it to enlarge. Use the "CTRL" and "+" keys to zoom in, and "CTRL" and "-" to zoom out.) As in the previous post, the arrows explore associations between variables. The variables are shown within ovals. The meaning of each variable is the following: aprotein = animal protein consumption; pprotein = plant protein consumption; cholest = total cholesterol; crcancer = colorectal cancer.


What is total cholesterol doing at the right part of the graph? It is there because I am analyzing the associations between animal protein and plant protein consumption with colorectal cancer, controlling for the possible confounding effect of total cholesterol.

I am not hypothesizing anything regarding total cholesterol, even though this variable is shown as pointing at colorectal cancer. I am just controlling for it. This is the type of thing one can do in multivariate analyses. This is how you “control for the effect of a variable” in an analysis like this.

Since the sample is fairly small, we end up with nonsignificant beta coefficients that would likely be statistically significant with a larger sample. But it helps that we are using nonparametric statistics, because they remain robust with small samples and deviations from normality. Also, the nonlinear algorithm is more sensitive to relationships that do not fit a classic linear pattern. We can summarize the findings as follows:

- As animal protein consumption increases, plant protein consumption decreases significantly (beta=-0.36; P<0.01). This is to be expected and helpful in the analysis, as it differentiates somewhat animal from plant protein consumers. Those folks who got more of their protein from animal foods tended to get significantly less protein from plant foods.

- As animal protein consumption increases, colorectal cancer decreases, but not in a statistically significant way (beta=-0.31; P=0.10). The beta here is certainly high, and the likelihood that the relationship is real is 90 percent, even with such a small sample.

- As plant protein consumption increases, colorectal cancer increases significantly (beta=0.47; P<0.01). The small sample size was not enough to make this association insignificant. The reason is that the distribution pattern of the data here is very indicative of a real association, which is reflected in the low P value.

Remember, these results are not confounded by schistosomiasis infection, because we are only looking at counties where there were no cases of schistosomiasis infection. These results are not confounded by total cholesterol either, because we controlled for that possible confounding effect. Now, control variable or not, you would be correct to point out that the association between total cholesterol and colorectal cancer is high (beta=0.58; P=0.01). So let us take a look at the shape of that association:


Does this graph remind you of the one on this post, the one with several U curves? Yes. And why is that? Maybe it reflects a tendency among the folks who had low cholesterol to have more cancer because the body needs cholesterol to fight disease, and cancer is a disease. And maybe it reflects a tendency among the folks who had high total cholesterol to do so because total cholesterol (and particularly its main component, LDL cholesterol) is in part a marker of disease, and cancer is often a culmination of various metabolic disorders (e.g., the metabolic syndrome) that are nothing but one disease after another.

To believe that total cholesterol causes colorectal cancer is nonsensical because total cholesterol is generally increased by consumption of animal products, of which animal protein consumption is a proxy. (In this reduced dataset, the linear univariate correlation between animal protein consumption and total cholesterol is a significant and positive 0.36.) And animal protein consumption seems to be protective against colorectal cancer in this dataset (negative association on the model graph).

Now comes the part that I find the most ironic about this whole discussion in the blogosphere that has been going on recently about the China Study; and the answer to the question posed in the title of this post: Are raw plant foods giving people cancer? If you think that the answer is “yes”, think again. The variable that is strongly associated with colorectal cancer is plant protein consumption.

Do fruits, veggies, and other plant foods that can be consumed raw have a lot of protein?

With a few exceptions, like nuts, they do not. Most raw plant foods have trace amounts of protein, especially when compared with foods made from refined grains and seeds (e.g., wheat grains, soybean seeds). So raw fruits and veggies in general could not have had much influence on the plant protein consumption variable. To put this in perspective, the average plant protein consumption per day in this dataset was 63 g; even if they were eating 30 bananas a day, the study participants would not get half that much protein from bananas.

Refined foods made from grains and seeds are made from those plant parts that the plants absolutely do not “want” animals to eat. They are the plants’ “children” or “children’s nutritional reserves”, so to speak. This is why they are packed with nutrients, including protein and carbohydrates, but also often toxic and/or unpalatable to animals (including humans) when eaten raw.

But humans are so smart; they learned how to industrially refine grains and seeds for consumption. The resulting human-engineered products (usually engineered to sell as many units as possible, not to make you healthy) normally taste delicious, so you tend to eat a lot of them. They also tend to raise blood sugar to abnormally high levels, because industrial refining makes their high carbohydrate content easily digestible. Refined foods made from grains and seeds also tend to cause leaky gut problems, and autoimmune disorders like celiac disease. Yep, we humans are really smart.

Thanks again to Dr. Campbell and his colleagues for collecting and compiling the China Study data, and to Ms. Minger for making the data available in easily downloadable format and for doing some superb analyses herself.

Thursday, July 22, 2010

The China Study again: A multivariate analysis suggesting that schistosomiasis rules!

In the comments section of Denise Minger’s post on July 16, 2010, which discusses some of the data from the China Study (as a follow up to a previous post on the same topic), Denise herself posted the data she used in her analysis. This data is from the China Study. So I decided to take a look at that data and do a couple of multivariate analyses with it using WarpPLS (warppls.com).

First I built a model that explores relationships with the goal of testing the assumption that the consumption of animal protein causes colorectal cancer, via an intermediate effect on total cholesterol. I built the model with various hypothesized associations to explore several relationships simultaneously, including some commonsense ones. Including commonsense relationships is usually a good idea in exploratory multivariate analyses.

The model is shown on the graph below, with the results. (Click on it to enlarge. Use the "CTRL" and "+" keys to zoom in, and "CTRL" and "-" to zoom out.) The arrows explore causative associations between variables. The variables are shown within ovals. The meaning of each variable is the following: aprotein = animal protein consumption; pprotein = plant protein consumption; cholest = total cholesterol; crcancer = colorectal cancer.


The path coefficients (indicated as beta coefficients) reflect the strength of the relationships; they are a bit like standard univariate (or Pearson) correlation coefficients, except that they take into consideration multivariate relationships (they control for competing effects on each variable). A negative beta means that the relationship is negative; i.e., an increase in a variable is associated with a decrease in the variable that it points to.
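The idea of a path coefficient as a standardized coefficient that controls for competing effects can be sketched with a plain multiple regression. This is a rough analogy, not the WarpPLS algorithm itself, and the data below are made up purely for illustration:

```python
import numpy as np

# hypothetical data loosely mimicking the model's variables (NOT the China Study data)
rng = np.random.default_rng(1)
n = 500
aprotein = rng.normal(size=n)
pprotein = -0.4 * aprotein + rng.normal(size=n)
crcancer = -0.2 * aprotein + 0.4 * pprotein + rng.normal(size=n)

def standardize(v):
    # subtract the mean and divide by the standard deviation
    return (v - v.mean()) / v.std()

# with standardized variables, the least-squares slopes play the role of
# beta (path) coefficients: each one controls for the other predictor
X = np.column_stack([standardize(aprotein), standardize(pprotein)])
y = standardize(crcancer)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas)  # first beta negative (aprotein), second positive (pprotein)
```

Note how the sign of each beta matches the sign used to generate the data, even though the two predictors are themselves correlated; that is what “controlling for competing effects” buys you over simple pairwise correlations.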

The P values indicate the statistical significance of the relationship; a P lower than 0.05 means a significant relationship (95 percent or higher likelihood that the relationship is real). The R-squared values reflect the percentage of explained variance for certain variables; the higher they are, the better the model fit with the data. Ignore the “(R)1i” below the variable names; it simply means that each of the variables is measured through a single indicator (or a single measure; that is, the variables are not latent variables).

I should note that the P values have been calculated using a nonparametric technique, a form of resampling called jackknifing, which does not require the assumption that the data is normally distributed to be met. This is good, because I checked the data, and it does not look like it is normally distributed. So what does the model above tell us? It tells us that:

- As animal protein consumption increases, colorectal cancer decreases, but not in a statistically significant way (beta=-0.13; P=0.11).

- As animal protein consumption increases, plant protein consumption decreases significantly (beta=-0.19; P<0.01). This is to be expected.

- As plant protein consumption increases, colorectal cancer increases significantly (beta=0.30; P=0.03). This is statistically significant because the P is lower than 0.05.

- As animal protein consumption increases, total cholesterol increases significantly (beta=0.20; P<0.01). No surprise here. And, by the way, the total cholesterol levels in this study are quite low; an overall increase in them would probably be healthy.

- As plant protein consumption increases, total cholesterol decreases significantly (beta=-0.23; P=0.02). No surprise here either, because plant protein consumption is negatively associated with animal protein consumption; and the latter tends to increase total cholesterol.

- As total cholesterol increases, colorectal cancer increases significantly (beta=0.45; P<0.01). Big surprise here!
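The jackknifing mentioned above can be sketched in a few lines: the statistic is recomputed with each data point left out in turn, and the spread of those leave-one-out replicates yields a standard error (and hence a P value) that does not lean on a normality assumption. This is a generic illustration with made-up data, not the WarpPLS implementation:

```python
import numpy as np

# made-up data with the same sample size as the county-level dataset
rng = np.random.default_rng(0)
n = 65
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_full = corr(x, y)
# leave-one-out (jackknife) replicates of the correlation
reps = np.array([corr(np.delete(x, i), np.delete(y, i)) for i in range(n)])
# jackknife standard error of the statistic
se = np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))
t = r_full / se  # compare against a t distribution to obtain a P value
```

The same recipe works for path coefficients: resample, re-estimate, and read the significance off the spread of the replicates.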

Why the big surprise with the apparently strong relationship between total cholesterol and colorectal cancer? The reason is that it does not make sense, because animal protein consumption seems to increase total cholesterol (which we know it usually does), and yet animal protein consumption seems to decrease colorectal cancer.

When something like this happens in a multivariate analysis, it usually is due to the model not incorporating a variable that has important relationships with the other variables. In other words, the model is incomplete, hence the nonsensical results. As I said before in a previous post, relationships among variables that are implied by coefficients of association must also make sense.

Now, Denise pointed out that the missing variable here possibly is schistosomiasis infection. The dataset that she provided included that variable, even though there were some missing values (about 28 percent of the data for that variable was missing), so I added it to the model in a way that seems to make sense. The new model is shown on the graph below. In the model, schisto = schistosomiasis infection.


So what does this new, and more complete, model tell us? It tells us some of the things that the previous model told us, but a few new things, which make a lot more sense. Note that this model fits the data much better than the previous one, particularly regarding the overall effect on colorectal cancer, which is indicated by the high R-squared value for that variable (R-squared=0.73). Most notably, this new model tells us that:

- As schistosomiasis infection increases, colorectal cancer increases significantly (beta=0.83; P<0.01). This is a MUCH STRONGER relationship than the previous one between total cholesterol and colorectal cancer; even though some data on schistosomiasis infection for a few counties is missing (the relationship might have been even stronger with a complete dataset). And this strong relationship makes sense, because schistosomiasis infection is indeed associated with increased cancer rates. More information on schistosomiasis infections can be found here.

- Schistosomiasis infection has no significant relationship with these variables: animal protein consumption, plant protein consumption, or total cholesterol. This makes sense, as the infection is caused by a worm that is not normally present in plant or animal food, and the infection itself is not specifically associated with abnormalities that would lead one to expect major increases in total cholesterol.

- Animal protein consumption has no significant relationship with colorectal cancer. The beta here is very low, and negative (beta=-0.03).

- Plant protein consumption has no significant relationship with colorectal cancer. The beta for this association is positive and nontrivial (beta=0.15), but the P value is too high (P=0.20) for us to discard chance within the context of this dataset. A more targeted dataset, with data on specific plant foods (e.g., wheat-based foods), could yield different results – maybe more significant associations, maybe less significant.

Below is the plot showing the relationship between schistosomiasis infection and colorectal cancer. The values are standardized, which means that the zero on the horizontal axis is the mean of the schistosomiasis infection numbers in the dataset. The shape of the plot is the same as the one with the unstandardized data. As you can see, the data points are very close to a line, which suggests a very strong linear association.
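Standardization itself is simple: subtract the mean and divide by the standard deviation, so the variable ends up centered on zero. A minimal sketch, with made-up infection numbers:

```python
import numpy as np

schisto = np.array([0.0, 1.2, 3.5, 0.4, 7.9, 2.2])  # hypothetical raw values
z = (schisto - schisto.mean()) / schisto.std()
# the zero on the plot's horizontal axis corresponds to the mean of the raw values
print(abs(z.mean()) < 1e-9)  # True: standardized values are centered on zero
```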


So, in summary, this multivariate analysis vindicates pretty much everything that Denise said in her July 16, 2010 post. It even supports Denise’s warning about jumping to conclusions too early regarding the possible relationship between wheat consumption and colorectal cancer (previously highlighted by a univariate analysis). Not that those conclusions are wrong; they may well be correct.

This multivariate analysis also supports Dr. Campbell’s assertion about the quality of the China Study data. The data that I analyzed was already grouped by county, so the sample size (65 cases) was not so high as to cast doubt on P values. (Having said that, small samples create problems of their own, such as low statistical power and an increase in the likelihood of error-induced bias.) The results summarized in this post also make sense in light of past empirical research.

It is very good data; data that needs to be properly analyzed!

Tuesday, July 20, 2010

My transformation: I cannot remember the last time I had a fever

The two photos below (click to enlarge) were taken 4 years apart. The one on the left was taken in 2006, when I weighed 210 lbs (95 kg). Since my height is 5 ft 8 in, at that weight I was an obese person, with over 30 percent body fat. The one on the right was taken in 2010, at a weight of 150 lbs (68 kg) and about 13 percent body fat. I think I am a bit closer to the camera on the right, so the photos are not exactly on the same scale. For a more recent transformation update, see this post.


My lipids improved from borderline bad to fairly good numbers, as one would expect, but the two main changes that I noticed were in terms of illnesses and energy levels. I have not had a fever in a long time. I simply cannot remember when it was the last time that I had to stay in bed because of an illness. I only remember that I was fat then. Also, I used to feel a lot more tired when I was fat. Now I seem to have a lot of energy, almost all the time.

In my estimation, I was obese or overweight for about 10 years, and was rather careless about it. A lot of that time I weighed in the 190s, with a peak weight of 210 lbs. Given that, I consider myself lucky not to have had major health problems by now, like diabetes or cancer. A friend of mine who is a doctor told me that I probably had some protection due to the fact that, when I was fat, I was fat everywhere. My legs, for example, were fat. So were my arms and face. In other words, a lot of the fat was subcutaneous, and reasonably distributed. In fact, most people do not believe me when I say that I weighed 210 lbs when that photo was taken in 2006; but maybe they are just trying to be nice.

If you are not obese, you should do everything you can to avoid reaching that point. Among other things, your chances of having cancer will skyrocket.

So, I lost a whopping 60 lbs (27 kg) over about 2-3 years. That is not so radical; about 1.6-2.5 lbs per month. There were plateaus with no weight loss, and even a few periods with weight gain. Perhaps because of that and the slow weight loss, I had none of the problems usually associated with body responses to severe calorie restriction, such as hypothyroidism. I remember a short period when I felt a little weak and miserable; I was doing exercise after long fasts (20 h or so), and not eating enough afterwards. I did that for a couple of weeks and decided against the idea.

There are no shortcuts with body fat loss, it seems. Push it too hard and the body will react; compensatory adaptation at work.

My weight has been stable, at around 150 lbs, for a little less than 2 years now.

What did I do to lose 60 lbs? I did a number of things at different points in time. I measured various variables (e.g., intake of macronutrients, weight, body fat, HDL cholesterol etc.) and calculated associations, using a prototype version of HealthCorrelator for Excel (HCE). Based on all that, I am pretty much convinced that the main factors were the following:

- Complete removal of foods rich in refined carbohydrates and sugars from my diet, plus almost complete removal of plant foods that I cannot eat raw. (I do cook some plant foods, but avoid the ones I cannot eat raw; with a few exceptions like sweet potato.) That excluded most seeds and grains from my diet, since they can only be eaten after cooking.

- Complete removal of vegetable oils rich in omega-6 fats from my diet. I cook primarily with water. I also use butter and organic coconut oil. I occasionally use olive oil, often with water, for steam cooking.

- Consumption of plenty of animal products, with emphasis on eating the animal whole. All cooked. This includes small fish (sardines and smelts) eaten whole about twice a week, and offal (usually beef liver) about once or twice a week. I also eat eggs, about 3-5 per day.

- Practice of moderate exercise (2-3 sessions a week) with a focus on resistance training and high-intensity interval training (e.g., sprints). Also becoming more active, which does not necessarily mean exercising but doing things that involve physical motion of some kind (e.g., walking, climbing stairs, moving things around), to the tune of 1 hour or more every day.

- Adoption of more natural eating patterns; by eating more when I am hungry, usually on days I exercise, and less (including fasting) when I am not hungry. I estimate that this leads to a caloric surplus on days that I exercise, and a caloric deficit on days that I do not (without actually controlling caloric intake).

- A few minutes (15-20 min) of direct skin exposure to sunlight almost every day, when the sun is high, to get enough of the all-important vitamin D. This is pre-sunburn exposure, usually in my backyard. When traveling I try to find a place where people jog, and walk shirtless for 15-20 min.

- Stress management, including some meditation and power napping.

- Face-to-face social interaction, in addition to online interaction. Humans are social animals, and face-to-face social interaction contributes to promoting the right hormonal balance.

When I was fat, my appetite was a bit off. I was hungry at the wrong times, it seemed. Then slowly, after a few months eating essentially whole foods, my hunger seemed to start “acting normally”. That is, my hunger slowly fell into a pattern of increasing after physical exertion, and decreasing with rest. Protein and fat are satiating, but so seem to be fruits and vegetables. Never satiating for me were foods rich in refined carbohydrates and sugars – white bread, bagels, doughnuts, pasta etc.

Looking back, it almost seems too easy. Whole foods taste very good, especially if you are hungry.

But I will never want to eat a peach after I have a doughnut. The peach will be tasteless!

Saturday, July 17, 2010

Subcutaneous versus visceral fat: How to tell the difference?

The photos below, from Wikipedia, show two patterns of abdominal fat deposition. The one on the left is predominantly of subcutaneous abdominal fat deposition. The one on the right is an example of visceral abdominal fat deposition, around internal organs, together with a significant amount of subcutaneous fat deposition as well.


Body fat is not an inert mass used only to store energy. Body fat can be seen as a “distributed organ”, as it secretes a number of hormones into the bloodstream. For example, it secretes leptin, which regulates hunger. It secretes adiponectin, which has many health-promoting properties. It also secretes tumor necrosis factor-alpha (more recently referred to as simply “tumor necrosis factor” in the medical literature), which promotes inflammation. Inflammation is necessary to repair damaged tissue and deal with pathogens, but too much of it does more harm than good.

How does one differentiate subcutaneous from visceral abdominal fat?

Subcutaneous abdominal fat shifts position more easily as one’s body moves. When one is standing, subcutaneous fat often tends to fold around the navel, creating a “mouth” shape. Subcutaneous fat is easier to hold in one’s hand, as shown on the left photo above. Because subcutaneous fat tends to “shift” more easily as one changes the position of the body, if you measure your waist circumference lying down and standing up, and the difference is large (a one-inch difference can be considered large), you probably have a significant amount of subcutaneous fat.

Waist circumference is a variable that reflects individual changes in body fat percentage fairly well. This is especially true as one becomes lean (e.g., around 14-17 percent or less of body fat for men, and 21-24 for women), because as that happens abdominal fat contributes to an increasingly higher proportion of total body fat. For people who are lean, a 1-inch reduction in waist circumference will frequently translate into a 2-3 percent reduction in body fat percentage. Having said that, waist circumference comparisons between individuals are often misleading. Waist-to-fat ratios tend to vary a lot among different individuals (like almost any trait). This means that someone with a 34-inch waist (measured at the navel) may have a lower body fat percentage than someone with a 33-inch waist.

Subcutaneous abdominal fat is hard to mobilize; that is, it is hard to burn through diet and exercise. This is why it is often called the “stubborn” abdominal fat. One reason for the difficulty in mobilizing subcutaneous abdominal fat is that the network of blood vessels is not as dense in the area where this type of fat occurs, as it is with visceral fat. Another reason, which is related to degree of vascularization, is that subcutaneous fat is farther away from the portal vein than visceral fat. As such, it has to travel a longer distance to reach the main “highway” that will take it to other tissues (e.g., muscle) for use as energy.

In terms of health, excess subcutaneous fat is not nearly as detrimental as excess visceral fat. Excess visceral fat typically happens together with excess subcutaneous fat; but not necessarily the other way around. For instance, sumo wrestlers frequently have excess subcutaneous fat, but little or no visceral fat. The more health-detrimental effect of excess visceral fat is probably related to its proximity to the portal vein, which amplifies the negative health effects of excessive pro-inflammatory hormone secretion. Those hormones reach a major transport “highway” rather quickly.

Even though excess subcutaneous body fat is more benign than excess visceral fat, excess body fat of any kind is unlikely to be health-promoting. From an evolutionary perspective, excess body fat impaired agile movement and decreased circulating adiponectin levels; the latter leading to a host of negative health effects. In modern humans, negative health effects may be much less pronounced with subcutaneous than visceral fat, but they will still occur.

Based on studies of isolated hunter-gatherers, it is reasonable to estimate “natural” body fat levels among our Stone Age ancestors, and thus optimal body fat levels in modern humans, to be around 6-13 percent in men and 14-20 percent in women.

If you think that being overweight probably protected some of our Stone Age ancestors during times of famine, here is one interesting factoid to consider. It will take over a month for a man weighing 150 lbs and with 10 percent body fat to die from starvation, and death will not be typically caused by too little body fat being left for use as a source of energy. In starvation, normally death will be caused by heart failure, as the body slowly breaks down muscle tissue (including heart muscle) to maintain blood glucose levels.
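The “over a month” figure can be checked with a back-of-envelope calculation. All of the numbers below (lipid fraction of adipose tissue, daily energy expenditure during starvation) are rough assumptions for illustration, not measurements:

```python
LB_TO_KG = 0.4536

weight_lbs = 150
body_fat_fraction = 0.10
fat_kg = weight_lbs * LB_TO_KG * body_fat_fraction  # about 6.8 kg of body fat
# roughly 9 kcal per gram of lipid; assume ~85% of adipose tissue is lipid
usable_kcal = fat_kg * 1000 * 9 * 0.85
daily_kcal = 1800  # assumed reduced expenditure during starvation
days = usable_kcal / daily_kcal
print(round(days))  # roughly a month of energy from fat stores alone
```

Under these assumptions the fat stores alone cover about a month, which is consistent with the point above: death from starvation typically comes from muscle breakdown, not from running out of fat.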

References:

Arner, P. (2005). Site differences in human subcutaneous adipose tissue metabolism in obesity. Aesthetic Plastic Surgery, 8(1), 13-17.

Brooks, G.A., Fahey, T.D., & Baldwin, K.M. (2005). Exercise physiology: Human bioenergetics and its applications. Boston, MA: McGraw-Hill.

Fleck, S.J., & Kraemer, W.J. (2004). Designing resistance training programs. Champaign, IL: Human Kinetics.

Taubes, G. (2007). Good calories, bad calories: Challenging the conventional wisdom on diet, weight control, and disease. New York, NY: Alfred A. Knopf.

Wednesday, July 14, 2010

The China Study: With a large enough sample, anything is significant

There have been many references recently on diet and lifestyle blogs to the China Study. Except that they are not really references to the China Study, but to a blog post by Denise Minger. This post is indeed excellent, and brilliant, and likely to keep Denise from “having a life” for a while. That it caused so much interest is a testament to the effect that a single brilliant post can have on the Internet. Many thought that the Internet would lead to a depersonalization and de-individualization of communication. Yet, most people are referring to Denise’s post, rather than to “a great post written by someone on a blog.”

Anyway, I will not repeat what Denise said on her post here. My goal with this post is a bit more general, and applies to the interpretation of quantitative research results in general. This post is a warning regarding “large” studies. These are studies whose main claim to credibility is that they are based on a very large sample. The China Study is a good example. It prominently claims to have covered 2,400 counties and 880 million people.

There are many different statistical analysis techniques that are used in quantitative analyses of associations between variables, where the variables can be things like dietary intakes of certain nutrients and incidence of disease. Generally speaking, statistical analyses yield two main types of results: (a) coefficients of association (e.g., correlations); and (b) P values (which are measures of statistical significance). Of course there is much more to statistical analyses than these two types of numbers, but these two are usually the most important ones when it comes to creating or testing a hypothesis. The P values, in particular, are often used as a basis for claims of significant associations. P values lower than 0.05 are normally considered low enough to support those claims.

In analyses of pairs of variables (known as "univariate", or "bivariate" analyses), the coefficients of association give an idea of how strongly the variables are associated. The higher these coefficients are, the more strongly the variables are associated. The P values tell us whether an apparent association is likely to be due to chance, given a particular sample. For example, if a P value is 0.05, or 5 percent, the likelihood that the related association is due to chance is 5 percent. Some people like to say that, in a case like this, one has a 95 percent confidence that the association is real.

One thing that many people do not realize is that P values are very sensitive to sample size. For example, with a sample of 50 individuals, a correlation of 0.6 may be statistically significant at the 0.01 level (i.e., its P value is lower than 0.01). With a sample of 50,000 individuals, a much smaller correlation of 0.06 may be statistically significant at the same level. Both correlations may be used by a researcher to claim that there is a significant association between two variables, even though the first association (correlation = 0.6) is 10 times stronger than the second (correlation = 0.06).
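The point about sample size can be verified directly. The t statistic for testing whether a correlation differs from zero is r·sqrt(n−2)/sqrt(1−r²), so a similar significance level can come from a strong correlation in a small sample or a tiny correlation in a huge one. A quick sketch (the P value below uses a normal approximation, which is adequate at these sample sizes):

```python
import math

def t_stat(r, n):
    # t statistic for testing H0: the true correlation is zero
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

def approx_p(t):
    # two-sided P value via the normal approximation to the t distribution
    return math.erfc(abs(t) / math.sqrt(2))

# strong association in a small sample vs. weak association in a huge sample
for r, n in [(0.6, 50), (0.06, 50000)]:
    print(r, n, round(t_stat(r, n), 2), approx_p(t_stat(r, n)))
# both come out "statistically significant" (P well below 0.05),
# even though the first correlation is 10 times stronger than the second
```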

So, with very large samples, cherry-picking results is very easy. It has been argued sometimes that this is not technically lying, since one is reporting associations that are indeed statistically significant. But, by doing this, one may be omitting other associations, which may be much stronger. This type of practice is sometimes referred to as “lying with statistics”.

With a large enough sample one can easily “show” that drinking water causes cancer.

This is why I often like to see the coefficients of association together with the P values. For simple variable-pair correlations, I generally consider a correlation around 0.3 to be indicative of a reasonable association, and a correlation at or above 0.6 to be indicative of a strong association. These conclusions are regardless of P value. Whether these would indicate causation is another story; one has to use common sense and good theory.

If you take my weight from 1 to 20 years of age, and the price of gasoline in the US during that period, you will find that they are highly correlated. But common sense tells me that there is no causation whatsoever between these two variables.

There are a number of other issues to consider which I am not going to cover here. For example, relationships may be nonlinear, and standard correlation-based analyses are “blind” to nonlinearity. This is true even for advanced correlation-based statistical techniques such as multiple regression analysis, which control for competing effects of several variables on one main dependent variable. Ignoring nonlinearity may lead to misleading interpretations of associations, such as the association between total cholesterol and cardiovascular disease.
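That “blindness” to nonlinearity is easy to demonstrate: a variable that depends on another through a U-shaped curve can show a near-zero linear correlation. A small simulation with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, size=500)
y = x ** 2 + rng.normal(scale=0.5, size=500)  # U-shaped dependence plus noise

linear_r = np.corrcoef(x, y)[0, 1]        # near zero: the two arms of the U cancel out
u_shape_r = np.corrcoef(x ** 2, y)[0, 1]  # strong once the curve is modeled
print(round(linear_r, 2), round(u_shape_r, 2))
```

A purely linear analysis would report “no association” here, even though y is almost completely determined by x.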

Note that this post is not an indictment of quantitative analyses in general. I am not saying “ignore numbers”. Denise’s blog post in fact uses careful quantitative analyses, with good ol’ common sense, to debunk several claims based on, well, quantitative analyses. If you are interested in this and other more advanced statistical analysis issues, I invite you to take a look at my other blog. It focuses on WarpPLS-based robust nonlinear data analysis.

Tuesday, July 13, 2010

Free running and primal workouts: Both look awesome, and dangerous

The other day I showed a YouTube MovNat video clip to one of my sons, noting the serious fitness of Erwan Le Corre. I also noted that the stunts were somewhat dangerous, and that they replicated some of the movements that our Paleolithic ancestors had to perform on a regular basis. That is, those movements are part of what one could call a primal workout.

My son looked at me and laughed, as if asking me if I was really being serious. Why? Well, he is into breakdancing (a.k.a. b-boying), and also does a bit of something called "free running". If you don’t know what free running is, take a look at this Wikipedia article.

Here are a couple of YouTube video clips on free running: clip 1, and clip 2. The moves do look a lot more hardcore than the ones on the MovNat video clip. (The reason for my son's reaction.) But, to be fair, the environments and goals are different. And, in terms of danger, some of these free running moves are really at the high end of the scale.

And, if you are interested, here are a couple of instructional YouTube video clips prepared by my sons: this one by my oldest, and this by my second oldest. (We have four children.) I have been telling them to be careful with those “airchairs” – the moves where all the weight is placed on one hand. It just looks like too much pressure on the joints of one single arm.

Two of the things that I like the most about primal workouts like the MovNat ones are the variety of movements, and the proximity to nature. Those two elements can potentially help with sticking to an exercise program in the long run, which is what matters most. Most people get very bored of exercising after a few months. Free running seems to be more competitive, and more dangerous.

Both free running and primal workouts are practiced by some people as their main form of exercise. In those cases, they appear to lead to body types that are similar to those of the hunter-gatherers on this post. I cannot help but notice that those body types are more like that of a sprinter than that of a typical bodybuilder.

The feats that those body types enable are feats of relative, not absolute, strength. This makes sense, as our Paleolithic ancestors were too smart to hunt prey or fight off predators (or even each other) with their bare hands. Spears and stones were formidable weapons. Paleolithic ancestors who were very adept at using weapons would probably be like skilled gunfighters in the American Old West – menacing, with the advantage of being able to use their skills to feed themselves and others.

Being lean, strong, and agile – all at the same time – arguably was one of the keys to survival in the Paleolithic.

Thursday, July 8, 2010

Our body’s priority is preventing hypoglycemia, not hyperglycemia

An adult human has about 5 l of blood in circulation. Considering a blood glucose concentration of 100 mg/dl, this translates into a total amount of glucose in the blood of about 5 g (5 l x 0.1 g / 0.1 l). That is approximately a teaspoon of glucose. If a person’s blood glucose goes down to about half of that, the person will enter a state of hypoglycemia. Severe and/or prolonged hypoglycemia can cause seizures, coma, and death.

In other words, the disappearance of about 2.5 g of glucose from the blood will lead to hypoglycemia. Since 2.5 g of glucose yields about 10 calories, it should be easy to see that it does not take much to make someone hypoglycemic in the absence of compensatory mechanisms. An adult will consume on average 6 to 9 times as many calories per hour just sitting quietly, and a proportion of those calories will come from glucose.
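The arithmetic above can be checked in a few lines of Python, using the standard 4 calories per gram of carbohydrate:

```python
blood_volume_l = 5.0        # total blood volume, liters
glucose_mg_per_dl = 100.0   # normal blood glucose concentration

# total glucose circulating in the blood, in grams
# (10 dl per liter, 1000 mg per gram)
total_glucose_g = blood_volume_l * 10 * glucose_mg_per_dl / 1000  # 5.0 g

kcal_per_g = 4.0                       # calories per gram of carbohydrate
hypo_deficit_g = total_glucose_g / 2   # losing about half means hypoglycemia
hypo_deficit_kcal = hypo_deficit_g * kcal_per_g  # about 10 calories
```

Ten calories is a trivially small amount of energy, which is why the compensatory mechanisms discussed below are so important.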

While hypoglycemia has severe negative health effects in the short term, including the most severe of all - death, hyperglycemia has primarily long-term negative health effects. Given this, it is no surprise that our body’s priority is to prevent hypoglycemia, not hyperglycemia.

The figure below, from the outstanding book by Brooks and colleagues (2005), shows two graphs. The graph at the top shows the variation of arterial glucose in response to exercise. The graph at the bottom shows the variation of whole-body and muscle glucose uptake, plus hepatic glucose production, in response to exercise. The full reference to the Brooks and colleagues book is at the end of this post.


Note how blood glucose increases dramatically as the intensity of the exercise session increases, which means that muscle tissue consumption of glucose is also increasing. This is particularly noticeable as arm exercise is added to leg exercise, bringing the exercise intensity to 82 percent of maximal capacity. This blood glucose elevation is similar to the elevation one would normally see in response to all-out sprinting and weight training within the anaerobic range (with enough weight to allow only 6 to 12 repetitions, or a time under tension of about 30 to 70 seconds).

The dashed line in the bottom graph represents whole-body glucose uptake, including what would be necessary for the body to function in the absence of exercise. This is why whole-body glucose uptake is higher than muscle glucose uptake induced by exercise; the latter was measured through a glucose tracing method. The tops of the error bars above the points on the dashed line represent hepatic glucose production, which always stays ahead of whole-body glucose uptake. This is our body doing what it needs to do to prevent hypoglycemia.

One point that is important to make here is that at the beginning of an anaerobic exercise session muscle uses up primarily local glycogen stores (not liver glycogen stores), and can completely deplete them in a very localized fashion. Muscle glycogen stores add up to 500 g, but intense exercise depletes glycogen stores locally, only within the muscles being used. Still, muscle glycogen use generates lactate as a byproduct, which is then used by the liver to produce glucose (gluconeogenesis) to prevent hypoglycemia. The liver also makes some glycogen (glycogenesis) during this time. This means that it is not only pre-exercise liver glycogen that is being used to maintain blood glucose levels above whole-body glucose uptake. This makes sense, since the liver stores only about 100 g of glycogen.

The need to prevent hypoglycemia at all costs is the main reason why there are several hormones that increase blood glucose, while apparently there is only one that decreases blood glucose. Examples of hormones that increase blood glucose are cortisol, adrenaline, noradrenaline, growth hormone, and, notably, glucagon. The only hormone that decreases blood glucose levels in a significant way is insulin. These hormones do not increase or decrease blood glucose directly; they signal to various tissues to either secrete or absorb glucose.

Evolution typically prioritizes processes that have a higher impact on reproductive success, and one must be alive to successfully reproduce. Hypoglycemia causes death. Often those processes that have a significant effect on reproductive success rely on redundant mechanisms. So our evolved mechanisms to deal with hypoglycemia are redundant. Evolution is not an engineer; it is a tinkerer!

What about hyperglycemia – doesn’t it cause death? Well, not in the short term, so related selection pressures were fairly small compared to those associated with hypoglycemia. Besides, there were no foods rich in refined carbohydrates and sugars in the Paleolithic - e.g., white bread, bagels, doughnuts, pasta, cereals, fruit juices, regular sodas, table sugar. Those are the foods that contribute the most to hyperglycemia.

Reference:

Brooks, G.A., Fahey, T.D., & Baldwin, K.M. (2005). Exercise physiology: Human bioenergetics and its applications. Boston, MA: McGraw-Hill.

Saturday, July 3, 2010

Power napping, stress management, and jet lag

Many animals take naps during the day. Our ancestors probably napped during the day too. They certainly did not spend as many hours as we do under mental stress. In fact, the lives of our Paleolithic ancestors would look quite boring to a modern human. Mental stress can be seen as a modern poison. We need antidotes for that poison. Power napping seems to be one of them.

(Source: Squidoo.com)

Power napping is a topic that I have done some research on, but unfortunately I do not have access to the references right now. I am posting this from Europe, where I arrived a few days ago. Thus I am labeling this post “my experience”. Hopefully I will be able to write a more research-heavy post on this topic in the near future. I am pretty sure that there is a strong connection between power napping and stress hormones. Maybe our regular and knowledgeable commenters can help me fill this gap in their comments on this post.

Surprisingly, jet lag has been very minor for me this time. The time difference between most of Europe and Texas is about 8 hours, which makes adaptation very difficult, especially coming over to Europe. In spite of that, I slept during much of my first night here. The same happened in the following nights, even though I can feel that my body is still not fully adapted to the new time zone.

How come? I am all but sure that this is a direct result of my recent experience with power napping.

I have been practicing power napping for several months now. Usually in the middle of the afternoon, between 3 and 4 pm, I lie down for about 15 minutes in a sleeping position on a yoga mat. I use a pillow for the head. I close my eyes and try to clear my mind of all thoughts, focusing on my breathing, as in meditation. When I feel like I am about to enter deep sleep, I get up. This usually happens 15 minutes after I lie down. The sign that I am about to enter deep sleep is having incoherent thoughts, like in dreaming. Often I have muscle jerks, called hypnic jerks, which are perfectly normal. Hypnic jerks are also a sign that it is time for me to get up.

After getting up I always feel very refreshed and relaxed. My ability to do intellectual work is also significantly improved. If I make the mistake of going further, and actually entering a deep sleep stage, I get up feeling very groggy and sleepy. So the power nap has to end at around 15 minutes for me. For most people, this time ranges from 10 to 20 minutes. It seems that once one enters a deep sleep phase, it is better to then sleep for at least a few hours.

Power napping is not as easy as it sounds. If one cannot enter a state of meditation at the beginning, the onset of sleep does not happen. You have to be able to clear your mind of thoughts. Focusing on your breathing helps. Interestingly, once you become experienced at power napping, you can then induce actual sleep in almost any situation – e.g., on a flight or when you arrive in another country. That is what happened with me during this trip. Even though I have been waking up at night since I arrived in Europe, I have been managing to go right back to sleep. Previously, on other trips to Europe, I would be unable to go back to sleep after I woke up in the middle of the night.

Power napping seems to also be an effective tool for stress management. In our busy modern lives, with many daily stressors, it is common for significant mental stress to set in around 8 to 9 hours after one wakes up in the morning. For someone waking up at 7 am, this will be around 3 to 4 in the afternoon. Power napping, when done right, seems to be very effective at relieving that type of stress.