Thursday, December 30, 2010

How much protein does one need to be in nitrogen balance?

This post has been revised and re-published. The original comments are preserved below.

Thursday, December 23, 2010

38 g of sardines or 2 fish oil softgels? Let us look at the numbers

The bar chart below shows the fat content of 1 sardine (38 g) canned in tomato sauce, and 2 fish oil softgels of the Nature Made brand. (The sardine is about 1/3 of the content of a typical can, and the data is from Nutritiondata.com. The two softgels are listed as the “serving size” on the Nature Made bottle.) Both the sardine and softgels have some vegetable oil added; presumably to increase their vitamin E content and form a more stable oil mix. This chart is a good reminder that looking at actual numbers can be quite instructive sometimes. Even though the chart focuses on fat content, it is worth noting that the 38 g sardine also contains 8 g of high quality protein.


If your goal with the fish oil is to “neutralize” the omega-6 fat content of your diet, which is most people’s main goal, you should consider this. A rough measure of the omega-6 neutralization “power” of a food portion is, by definition, its omega-3 minus omega-6 content. For the 1 canned sardine, this difference is 596 mg; for the 2 fish oil softgels, 440 mg. The reason is that the two softgels have more omega-6 than the sardine.
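The "neutralization power" measure defined above is just a subtraction. As a minimal sketch, the function below encodes it; the example inputs are hypothetical label values, chosen only so the result matches the post's sardine figure:

```python
def neutralization_power(omega3_mg, omega6_mg):
    """Omega-6 neutralization 'power' of a food portion, as defined
    in the post: omega-3 content minus omega-6 content, in mg."""
    return omega3_mg - omega6_mg

# Hypothetical label values, for illustration only:
print(neutralization_power(1000, 404))  # 596 mg, the post's figure for 1 sardine
print(neutralization_power(600, 160))   # 440 mg, the post's figure for 2 softgels
```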

In case you are wondering, the canning process does not seem to have much of an effect on the nutrient composition of the sardine. There is some research suggesting that adding vegetable oil (e.g., soy) helps preserve the omega-3 content during the canning process. There is also research suggesting that not much is lost even without any vegetable oil being added.

Fish oil softgels, when taken in moderation (e.g., two of the type discussed in this post, per day), are probably okay as “neutralizers” of omega-6 fats in the diet, and sources of a minimum amount of omega-3 fats for those who do not like seafood. For those who can consume 1 canned sardine per day, which is only 1/3 of a typical can of sardines, the sardine is not only a more effective source of omega-3, but also a good source of protein and many other nutrients.

As far as balancing dietary omega-6 fats is concerned, you are much better off reducing your consumption of foods rich in omega-6 fats in the first place. Apparently nothing beats avoiding industrial seed oils in that respect. It is also advisable to eat certain types of nuts with high omega-6 content, like walnuts, in moderation.

Both omega-6 and omega-3 fats are essential; they must be part of one’s diet. The actual minimum required amounts are fairly small, probably much lower than the officially recommended amounts. Chances are they would be met by anyone on a balanced diet of whole foods. Too much of either type of fat in synthetic or industrialized form can cause problems. A couple of instructive posts on this topic are this post by Chris Masterjohn, and this one by Chris Kresser.

Even if you don’t like canned sardines, it is not much harder to gulp down 38 g of sardines than it is to gulp down 2 fish oil softgels. You can get the fish oil for $12 per bottle with 300 softgels; or 8 cents per serving. You can get a can of sardines for 50 cents; which gives 16.6 cents per serving. The sardine is twice as expensive, but carries a lot more nutritional value.
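The cost arithmetic above can be checked in a few lines, using the prices and serving sizes quoted in the post (the small difference from 16.6 cents is just rounding):

```python
# Prices and serving sizes as quoted in the post.
softgel_cost = 12.00 / (300 / 2)   # $12 bottle, 300 softgels, 2 per serving
sardine_cost = 0.50 / 3            # $0.50 can, 1 serving = 1/3 can (38 g)

print(round(softgel_cost * 100, 1))           # 8.0 cents per serving
print(round(sardine_cost * 100, 1))           # 16.7 cents per serving
print(round(sardine_cost / softgel_cost, 1))  # about 2x the cost
```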

You can also buy wild caught sardines, like I do. I also eat canned sardines. Wild caught sardines cost about $2 per lb, and are among the least expensive fish variety. They are not difficult to prepare; see this post for a recipe.

I don’t know how many sardines go into the industrial process of making 2 fish oil softgels, but I suspect that it is more than one. So it is also probably more ecologically sound to eat the sardine.

Monday, December 20, 2010

Nuts by numbers: Should you eat them, and how much?

Nuts are generally seen as good sources of protein and magnesium. The latter plays a number of roles in the human body, and is considered critical for bone health. Nuts are also believed to be good sources of vitamin E. While there is a lot of debate about vitamin E’s role in health, it is considered by many to be a powerful antioxidant. Vitamin E is not easily found in foods other than nuts, seeds, and seed oils.

Some of the foods that we call nuts are actually seeds; others are legumes. For simplification, in this post I am calling nuts those foods that are generally protected by shells (some harder than others). This protective layer is what makes most people call them nuts.

Let us see how different nuts stack up against each other in terms of key nutrients. The quantities listed below are per 1 oz (28 g), and are based on data from Nutritiondata.com. All are raw. Roasting tends to reduce the vitamin content of nuts, often by half, and has little effect on the mineral content. Protein and fat content are also reduced, but not as much as the vitamin content.

These two figures show the protein, fat, and carbohydrate content of nuts (on the left); and the omega-6 and omega-3 fat content (on the right).


When we talk about nuts, walnuts are frequently presented in a very positive light. The reason normally given is that walnuts have a high omega-3 content; the plant form of omega-3, alpha-linolenic acid (ALA). That is true. But look at the large amount of omega-6 in walnuts. The difference between the omega-6 and omega-3 content in walnuts is about 8 g! And this is in only 1 oz of walnuts. That is 8 g of possibly pro-inflammatory omega-6 fats to be “neutralized”. It would take many fish oil softgels to achieve that.

Walnuts should be eaten in moderation. Most studies looking at the health effects of nuts, including walnuts, show positive results in short-term interventions. But they usually involve moderate consumption, often of 1 oz per day. Eat several ounces of walnuts every day, and you are entering industrial seed oil territory in terms of omega-6 fat consumption. Maybe other nutrients in walnuts have protective effects, but still, this looks like dangerous territory; “diseases of civilization” territory.

A side note. Focusing too much on the omega-6 to omega-3 ratio of individual foods can be quite misleading. The reason is that a food with a very small amount of omega-6 (e.g., 50 mg) but close to zero omega-3 will have a very high ratio. (As the omega-3 content approaches zero, the ratio grows without bound.) Yet, that food will contribute little omega-6 to a person’s diet. It is the ratio at the end of the day that matters, when all foods that have been eaten are considered.
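The point is easy to see with a few numbers. In the sketch below, the trace food is hypothetical, and the walnut figures are approximate values consistent with the amounts discussed above:

```python
def ratio(omega6_mg, omega3_mg):
    # Guard the zero case: the ratio grows without bound as omega-3 -> 0.
    return float("inf") if omega3_mg == 0 else omega6_mg / omega3_mg

trace_food = (50, 0)          # 50 mg omega-6, no omega-3 (hypothetical)
walnuts_1oz = (10800, 2600)   # mg, roughly the walnut figures discussed above

print(ratio(*trace_food))                # inf -- terrible ratio, negligible omega-6
print(round(ratio(*walnuts_1oz), 1))     # 4.2

# What matters is the ratio over everything eaten in the day:
total6 = trace_food[0] + walnuts_1oz[0]
total3 = trace_food[1] + walnuts_1oz[1]
print(round(total6 / total3, 1))         # 4.2 -- barely moved by the trace food
```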

The figures below show the magnesium content of nuts (on the left); and the vitamin E content (on the right).


Let us say that you are looking for the best combination of protein, magnesium, and vitamin E. And you also want to limit your intake of omega-6 fats, which is a very wise thing to do. Then what is the best choice? It looks like it is almonds. And even they should be eaten in small amounts, as 1 oz has more than 3 g of omega-6 fats.

Macadamia nuts don’t have much omega-6; their fats are mostly monounsaturated, which are very good. Their protein to fat ratio is very low, and they don’t have much magnesium or vitamin E. Coconuts (i.e., their meat) have mostly medium-chain saturated fats, which are also very good. Coconuts have little protein, magnesium, and vitamin E. If you want to increase your intake of healthy fats, both macadamia nuts and coconuts are good choices, with macadamia nuts providing about 3 times more fat.

There are many other dietary sources of magnesium around. In fact, magnesium is found in many foods. Examples are, in approximate descending order of content: salmon, spinach, sardine, cod, halibut, banana, white potato, sweet potato, beef, chicken, pork, liver, and cabbage. This is by no means a comprehensive list.

As for vitamin E, it likes to hide in seeds. While it may be a powerful antioxidant, I wonder whether Mother Nature really had it “in mind” as she tinkered with our DNA for the last few million years.

Thursday, December 16, 2010

Maknig to mayn tipos? Myabe ur teh boz

Undoubtedly one of the big differences between life today and in our Paleolithic past is the level of stress that modern humans face on a daily basis. Much stress happens at work, which is very different from what our Paleolithic ancestors would call work. Modern office work, in particular, would probably be seen as a form of slavery by our Paleolithic ancestors.

Some recent research suggests that organizational power distance is a big factor in work-related stress. Power distance is essentially the degree to which bosses and subordinates accept wide differences in organizational power between them (Hofstede, 2001).

(Source: talentedapps.wordpress.com)

I have been studying the topic of information overload for a while. It is a fascinating topic. People who experience it have the impression that they have more information to process than they can handle. They also experience significant stress as a result of it, and both the quality of their work and their productivity go down.

Recently some colleagues and I conducted a study that included employees from companies in New Zealand, Spain, and the USA (Kock, Del Aguila-Obra & Padilla-Meléndez, 2009). These are countries whose organizations typically display significant differences in power distance. We found something unexpected. Information overload was much more strongly associated with power distance than with the actual amount of information employees had to process on a daily basis.

While looking for explanations to this paradoxical finding, I recalled an interview I gave way back in 2001 to the Philadelphia Inquirer, commenting on research by Dr. David A. Owens. His research uncovered an interesting phenomenon. The higher up in the organizational pecking order one was, the less the person was concerned about typos on emails to subordinates.

There is also some cool research by Carlson & Davis (1998) suggesting that bosses tend to pick the communication media that are the most convenient for them, and don’t care much about convenience for the subordinates. One example would be calling a subordinate on the phone to assign a task, and then demanding a detailed follow-up report by email.

As a side note, writing a reasonably sized email takes a lot longer than conveying the same ideas over the phone or face-to-face (Kock, 2005). To be more precise, it takes about 10 times longer when the word count is over 250 and the ideas being conveyed are somewhat complex. For very short messages, a written medium like email is fairly convenient, and the amount of time to convey ideas may be even shorter than by using the phone or doing it face-to-face.

So a picture started to emerge. Bosses choose the communication media that are convenient for them when dealing with subordinates. If the media are written, they don’t care about typos at all. The subordinates use the media that are imposed on them, and if the media are written they certainly don’t want something with typos coming from them to reach their bosses. It would make them look bad.

The final result is this. Subordinates experience significant information overload, particularly in high power distance organizations. They also experience significant stress. Work quality and productivity go down, and they get even more stressed. They get fat, or sickly thin. Their health deteriorates. Eventually they get fired, which doesn’t help a bit.

What should you do, if you are not the boss? Here are some suggestions:

- Try to tactfully avoid letting communication media be imposed on you all the time by your boss (and others). Explicitly state, in a polite way, the media that would be most convenient for you in various circumstances, both as a receiver and sender. Generally, media that support oral speech are better for discussing complex ideas. Written media are better for short exchanges. Want an evolutionary reason for that? As you wish: Kock (2004).

- Discuss the ideas in this post with your boss; assuming that the person cares. Perhaps there is something that can be done to reduce power distance, for example. Making the work environment more democratic seems to help in some cases.

- And ... dot’n wrory soo mach aobut tipos ... which could be extrapolated to: don’t sweat the small stuff. Most bosses really care about results, and will gladly take an email with some typos telling them that a new customer signed a contract. They will not be as happy with an email telling them the opposite, no matter how well written it is.

Otherwise, your organizational demise may come sooner than you think.

References

Carlson, P.J., & Davis, G.B. (1998). An investigation of media selection among directors and managers: From "self" to "other" orientation. MIS Quarterly, 22(3), 335-362.

Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions, and organizations across nations. Thousand Oaks, CA: Sage.

Kock, N. (2004). The psychobiological model: Towards a new theory of computer-mediated communication based on Darwinian evolution. Organization Science, 15(3), 327-348.

Kock, N. (2005). Business process improvement through e-collaboration: Knowledge sharing through the use of virtual groups. Hershey, PA: Idea Group Publishing.

Kock, N., Del Aguila-Obra, A.R., & Padilla-Meléndez, A. (2009). The information overload paradox: A structural equation modeling analysis of data from New Zealand, Spain and the U.S.A. Journal of Global Information Management, 17(3), 1-17.

Monday, December 13, 2010

What is a reasonable vitamin D level?

The figure and table below are from Vieth (1999); one of the most widely cited articles on vitamin D. The figure shows the gradual increase in blood concentrations of 25-hydroxyvitamin D, or 25(OH)D, following the start of daily vitamin D3 supplementation of 10,000 IU/day. The table shows the average levels for people living and/or working in sun-rich environments; vitamin D3 is produced by the skin based on sun exposure.


25(OH)D is also referred to as calcidiol. It is a pre-hormone that is produced by the liver based on vitamin D3. To convert from nmol/L to ng/mL, divide by 2.496. The figure suggests that levels start to plateau at around 1 month after the beginning of supplementation, reaching a point of saturation after 2-3 months. Without supplementation or sunlight exposure, levels should go down at a comparable rate. The maximum average level shown on the table is 163 nmol/L (65 ng/mL), and refers to a sample of lifeguards.
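The unit conversion mentioned above is worth having as two small helpers; both factors come straight from the text:

```python
# Convert 25(OH)D concentrations between the two common units.
def nmol_l_to_ng_ml(x):
    return x / 2.496

def ng_ml_to_nmol_l(x):
    return x * 2.496

print(round(nmol_l_to_ng_ml(163)))  # 65 ng/mL, the lifeguard average
print(round(nmol_l_to_ng_ml(130)))  # 52 ng/mL, the supplementation plateau
```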

From the figure we can infer that people on average will plateau at approximately 130 nmol/L, after months of 10,000 IU/d supplementation. That is 52 ng/mL. Assuming a normal distribution with a standard deviation of about 20 percent of the mean, we can expect about 68 percent of those taking that level of supplementation to be in the 42 to 63 ng/mL range.

This might be the range most of us should expect to be in at an intake of 10,000 IU/d. This is roughly equivalent to the body’s own natural production through sun exposure.

Approximately 32 percent of the population can be expected to be outside this range. A person who is two standard deviations (SDs) above the mean (i.e., average) would be at around 73 ng/mL. Three SDs above the mean would be 83 ng/mL. Two SDs below the mean would be 31 ng/mL.
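The arithmetic above is a one-liner each way; here is a sketch using the mean and the assumed 20-percent standard deviation:

```python
# Mean plateau level and assumed standard deviation, in ng/mL.
mean_ng_ml = 52.0
sd = 0.20 * mean_ng_ml       # ~10.4 ng/mL

print(round(mean_ng_ml - 2 * sd))  # 31 (two SDs below the mean)
print(round(mean_ng_ml + 2 * sd))  # 73 (two SDs above)
print(round(mean_ng_ml + 3 * sd))  # 83 (three SDs above)
```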

There are other factors that may affect levels. For example, being overweight tends to reduce them. Excess cortisol production, from stress, may also reduce them.

Supplementing beyond 10,000 IU/d to reach levels much higher than those in the range of 42 to 63 ng/mL may not be optimal. Interestingly, one cannot overdose through sun exposure, and the idea that people do not produce vitamin D3 after 40 years of age is a myth.

One would be taking in about 14,000 IU/d of vitamin D3 by combining sun exposure with a supplemental dose of 4,000 IU/d. Clear signs of toxicity may not occur until one reaches 50,000 IU/d. Still, one may develop other complications, such as kidney stones, at levels significantly above 10,000 IU/d.

See this post by Chris Masterjohn, which makes a different argument, but with somewhat similar conclusions. Chris points out that there is a point of saturation above which the liver is unable to properly hydroxylate vitamin D3 to produce 25(OH)D.

How likely it is that a person will develop complications like kidney stones at levels above 10,000 IU/d, and what the danger threshold could be, are hard to guess. Kidney stone incidence is a sensitive measure of possible problems, but it is, by itself, an unreliable one. The reason is that kidney stones can be caused by factors that are merely correlated with high vitamin D levels, in which case those levels may not themselves be the problem.

There is some evidence that kidney stones are associated with living in sunny regions. This is not, in my view, due to high levels of vitamin D3 production from sunlight. Kidney stones are also associated with chronic dehydration, and populations living in sunny regions may be at a higher than average risk of chronic dehydration. This is particularly true for sunny regions that are also very hot and/or dry.

Reference

Vieth, R. (1999). Vitamin D supplementation, 25-hydroxyvitamin D concentrations, and safety. American Journal of Clinical Nutrition, 69(5), 842-856.

Saturday, December 11, 2010

Strength training: A note about Scooby and comments by Anon

Let me start this post with a note about Scooby, who is a massive bodybuilder who has a great website with tips on how to exercise at home without getting injured. Scooby is probably as massive a bodybuilder as anyone can get naturally, and very lean. He says he is a natural bodybuilder, and I am inclined to believe him. His dietary advice is “old school” and would drive many of the readers of this blog crazy – e.g., plenty of grains, and six meals a day. But it obviously works for him. (As far as muscle gain is concerned, a lot of different approaches work. For some people, almost any reasonable approach will work; especially if they are young men with high testosterone levels.)

The text below is all from an anonymous commenter’s notes on this post discussing the theory of supercompensation. Many thanks to this person for the detailed and thoughtful comment, which is a good follow-up on the note above about Scooby. At first I thought the comment might have been from Scooby himself, but I don’t believe it was. My additions are within “[ ]”. While the comment is there under the previous post for everyone to see, I thought that it deserved a separate post.

***

I love this subject [i.e., strength training]. No shortages of opinions backed by research with the one disconcerting detail that they don't agree.

First one opening general statement. If there was one right way we'd all know it by now and we'd all be doing it. People's bodies are different and what motivates them is different. (Motivation matters as a variable.)

My view on one set vs. three is based on understanding what you're measuring and what you're after in a training result.

Most studies look at one rep max strength gains as the metric but three sets [of repetitions] improves strength/endurance. People need strength/endurance more typically than they need maximal strength in their daily living. The question here becomes what is your goal?

The next thing I look at in training is neural adaptation. Not from the point of view of simple muscle strength gain but from the point of view of coordinated muscle function, again, something that is transferable to real life. When you exercise the brain is always learning what it is you are asking it to do. What you need to ask yourself is how well does this exercise correlate with a real life requirements.

[This topic needs a separate post, but one can reasonably argue that your brain works a lot harder during a one-hour strength training session than during a one-hour session in which you are solving a difficult mathematical problem.]

To this end single legged squats are vastly superior to double legged squats. They invoke balance and provoke the activation of not only the primary movers but the stabilization muscles as well. The brain is acquiring a functional skill in activating all these muscles in proper harmony and improving balance.

I also like walking lunges at the climbing wall in the gym (when not in use, of course) as the instability of the soft foam at the base of the wall gives an excellent boost to the basic skill by ramping up the important balance/stabilization component (vestibular/stabilization muscles). The stabilization muscles protect joints (inner unit vs. outer unit).

The balance and single leg components also increase core activation naturally. (See single legged squat and quadratus lumborum for instance.) [For more on the quadratus lumborum muscle, see here.]

Both [of] these exercises can be done with dumbbells for increased strength[;] and though leg exercises strictly speaking, they ramp up the core/full body aspect with weights in hand.

I do multiple sets, am 59 years old and am stronger now than I have ever been (I have hit personal bests in just the last month) and have been exercising for decades. I vary my rep ranges between six and fifteen (but not limited to just those two extremes). My total exercise volume is between two and three hours a week.

Because I have been at this a long time I have learned to read my broad cycles. I push during the peak periods and back off during the valleys. I also adjust to good days and bad days within the broader cycle.

It is complex but natural movements with high neural skill components and complete muscle activation patterns that have moved me into peak condition while keeping me from injury.

I do not exercise to failure but stay in good form for all reps. I avoid full range of motion because it is a distortion of natural movement. Full range of motion with high loads in particular tends to damage joints.

Natural, functional strength is more complex than the simple study designs typically seen in the literature.

Hopefully these things that I have learned through many years of experimentation will be of interest to you, Ned, and your readers, and will foster some experimentation of your own.

Anonymous

Monday, December 6, 2010

Pressure-cooked meat: Top sirloin

Pressure cooking takes advantage of the high temperatures that liquids and vapors reach in a sealed container: the pressure-cooking pan. Since the sealed container does not allow liquids or vapors to escape, the pressure inside it increases as heat is applied to the pan. This significantly raises the temperature of the liquids and vapors inside the container, which speeds up cooking.
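The temperature effect is easy to estimate. The sketch below uses the textbook Clausius-Clapeyron relation with standard constants for water; it is a rough approximation, not something from the post or from any cooker specification:

```python
import math

R = 8.314          # J/(mol*K), gas constant
H_VAP = 40660.0    # J/mol, heat of vaporization of water (approx.)
T1 = 373.15        # K, boiling point of water at 1 atm (14.7 psi)

def boiling_point_k(gauge_psi):
    """Estimated boiling point of water at the given gauge pressure,
    via the Clausius-Clapeyron relation."""
    p_ratio = (14.7 + gauge_psi) / 14.7
    return 1.0 / (1.0 / T1 - R * math.log(p_ratio) / H_VAP)

# At the usual home-cooker setting of 15 psi above atmospheric:
print(round(boiling_point_k(15.0) - 273.15))  # roughly 121 degrees C
```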

Pressure cooking is essentially a version of high-heat steaming. The food inside the cooker tends to be very evenly cooked. Pressure cooking is also considered to be one of the most effective cooking methods for killing food-borne pathogens. Since high pressure reduces cooking time, pressure cooking is usually employed in industrial food processing.

When cooking meat, the amount of pressure used tends to affect amino-acid digestibility; more pressure decreases digestibility. High pressures in the cooker cause high temperatures. The content of some vitamins in meat and plant foods is also affected; they go down as pressure goes up. Home pressure cookers are usually set at 15 pounds per square inch (psi). Significant losses in amino-acid digestibility occur only at pressures of 30 psi or higher.

My wife and I have been pressure-cooking for quite some time. Below is a simple recipe, for top sirloin.

- Prepare some dry seasoning powder by mixing sea salt, garlic powder, chili powder, and a small amount of cayenne pepper.
- Season the top sirloin pieces at least 2 hours prior to placing them in the pressure cooking pan.
- Place the top sirloin pieces in the pressure cooking pan, and add water, almost to the point of covering them.
- Cook on very low heat, after the right amount of pressure is achieved, for 1 hour. The point at which the right amount of pressure is reached is signaled by the valve at the top of the pan making a whistle-like noise.

As with slow cooking in an open pan, the water around the cuts should slowly turn into a fatty and delicious sauce, which you can pour on the meat when serving, to add flavor. The photos below show the seasoned top sirloin pieces, the (old) pressure-cooking pan we use, and some cooked pieces ready to be eaten together with some boiled yam.




A 100 g portion will have about 30 g of protein. (That is a bit less than 4 oz, cooked.) The amount of fat will depend on how trimmed the cuts are. Like most beef cuts, the fat will be primarily saturated and monounsaturated, with approximately equal amounts of each. It will provide good amounts of the following vitamins and minerals: iron, magnesium, niacin, phosphorus, potassium, zinc, selenium, vitamin B6, and vitamin B12.

Thursday, December 2, 2010

How lean should one be?

Loss of muscle mass is associated with aging. It is also associated with the metabolic syndrome, together with excessive body fat gain. It is safe to assume that having low muscle and high fat mass, at the same time, is undesirable.

The extreme opposite of that, achievable through natural means, would be to have as much muscle as possible and as low body fat as possible. People who achieve that extreme often look a bit like “buff skeletons”.

This post assumes that increasing muscle mass through strength training and proper nutrition is healthy. It looks into body fat levels, specifically how low body fat would have to be for health to be maximized.

I am happy to acknowledge that quite often I am working on other things and then become interested in a topic that is brought up by Richard Nikoley, and discussed by his readers (I am one of them). This post is a good example of that.

Obesity and the diseases of civilization

Obesity is strongly associated with the diseases of civilization, of which the prototypical example is perhaps type 2 diabetes. So much so that sometimes the impression one gets is that without first becoming obese, one cannot develop any of the diseases of civilization.

But this is not really true. For example, diabetes type 1 is also one of the diseases of civilization, and it often strikes thin people. Diabetes type 1 results from the destruction of the beta cells in the pancreas by a person’s own immune system. The beta cells in the pancreas produce insulin, which regulates blood glucose levels.

Still, obesity is undeniably a major risk factor for the diseases of civilization. It seems reasonable to want to move away from it. But how much? How lean should one be to be as healthy as possible? Given the ubiquity of U-curve relationships among health variables, there should be a limit below which health starts deteriorating.

Is the level of body fat of the gentleman on the photo below (from: ufcbettingtoday.com) low enough? His name is Fedor; more on him below. I tend to admire people who excel in narrow fields, be they intellectual or sport-related, even if I do not do anything remotely similar in my spare time. I admire Fedor.


Let us look at some research and anecdotal evidence to see if we can answer the question above.

The buff skeleton look is often perceived as somewhat unattractive

Being in the minority is not being wrong, but should make one think. Like Richard Nikoley’s, my own perception of the physique of men and women is that, the leaner they are, the better; as long as they also have a reasonable amount of muscle. That is, in my mind, the look of a stage-ready competitive natural bodybuilder is close to the healthiest look possible.

The majority’s opinion, however, seems different, at least anecdotally. The majority of women that I hear or read voicing their opinions on this matter seem to find the “buff skeleton” look somewhat unattractive, compared with a more average fit or athletic look. The same seems to be true for perceptions of males about females.

A little side note. From an evolutionary perspective, perceptions of ancestral women about men must have been much more important than perceptions of ancestral men about women. The reason is that the ancestral women were the ones applying sexual selection pressures in our ancestral past.

For the sake of discussion, let us define the buff skeleton look as one of a reasonably muscular person with a very low body fat percentage; pretty much only essential fat. That would be 10-13 percent for women, and 5-8 percent for men.

The average fit look would be 21-24 percent for women, and 14-17 percent for men. Somewhere in between, would be what we could call the athletic look, namely 14-20 percent for women, and 6-13 percent for men. These levels are exactly the ones posted on this Wikipedia article on body fat percentages, at the time of writing.
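The ranges above, taken as given from the Wikipedia article the post cites, can be expressed as a small lookup. Note that the men’s essential and athletic ranges overlap, so a percentage can match more than one label:

```python
# Body fat percentage ranges, as quoted in the post (from Wikipedia).
RANGES = {
    "women": {"essential (buff skeleton)": (10, 13),
              "athletic": (14, 20),
              "average fit": (21, 24)},
    "men":   {"essential (buff skeleton)": (5, 8),
              "athletic": (6, 13),
              "average fit": (14, 17)},
}

def categories(sex, pct):
    """All labels whose range contains the given body fat percentage."""
    return [label for label, (lo, hi) in RANGES[sex].items() if lo <= pct <= hi]

print(categories("women", 22))  # ['average fit']
print(categories("men", 7))     # matches both essential and athletic
```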

From an evolutionary perspective, attractiveness to members of the opposite sex should be correlated with health. Unless we are talking about a costly trait used in sexual selection by our ancestors; something analogous to the male peacock’s train.

But costly traits are usually ornamental, and are often perceived as attractive even in exaggerated forms. What prevents male peacock trains from becoming the size of a mountain is that they also impair survival. Otherwise they would keep growing. The peahens find them sexy.

Being ripped is not always associated with better athletic performance

Then there is the argument that if you carried some extra fat around the waist, then you would not be able to fight, hunt etc. as effectively as you could if you were living 500,000 years ago. Evolution does not “like” that, so it is an unnatural and maladaptive state achieved by modern humans.

Well, certainly the sport of mixed martial arts (MMA) is not the best point of comparison for Paleolithic life, but it is not such a bad model either. Look at this photo of Fedor Emelianenko (on the left, clearly not so lean) next to Andrei Arlovski (fairly lean). Fedor is also the one on the photo at the beginning of this post.

Fedor weighed about 220 lbs at 6’; Arlovski 250 lbs at 6’4’’. In fact, Arlovski is one of the leanest and most muscular MMA heavyweights, and also one of the most highly ranked. Now look at Fedor in action (see this YouTube video), including what happened when Fedor fought Arlovski, at around the 4:28 mark. Fedor won by knockout.

Both Fedor and Arlovski are heavyweights; which means that they do not have to “make weight”. That is, they do not have to lose weight to abide by the regulations of their weight category. Since both are professional MMA fighters, among the very best in the world, the weight at which they compete is generally the weight that is associated with their best performance.

Fedor was practically unbeaten until recently, even though he faced a very high level of competition. Before Fedor there was another professional fighter that many thought was from Russia, and who ruled the MMA heavyweight scene for a while. His name is Igor Vovchanchyn, and he is from Ukraine. At 5’8’’ and 230 lbs in his prime, he was a bit chubby. This YouTube video shows him in action; and it is brutal.

A BMI of about 25 seems to be the healthiest for long-term survival

Then we have this post by Stargazey, a blogger who likes science. Toward the end of the post she discusses a study suggesting that a body mass index (BMI) of about 25 seems to be the healthiest for long-term survival. That BMI is between normal weight and overweight. The study suggests that being either underweight or obese is unhealthy, in terms of long-term survival.

The BMI is calculated as an individual’s body weight in kilograms divided by the square of the individual’s height in meters. A limitation of its use here is that the BMI is a more reliable proxy for body fat percentage for women than for men, and can be particularly misleading when applied to muscular men.
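The formula is simple enough to sketch. The weight and height plugged in below are Fedor’s listed stats from earlier in the post (220 lbs at 6 ft), converted with standard factors; the result illustrates exactly the limitation just mentioned:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kg over height in meters, squared."""
    return weight_kg / height_m ** 2

# Fedor's listed stats, converted from 220 lb and 6 ft:
print(round(bmi(220 * 0.4536, 6 * 0.3048), 1))  # ~29.8, despite being a top athlete
```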

The traditional Okinawans are not super lean

The traditional Okinawans (here is a good YouTube video) are the longest living people in the world. Yet, they are not super lean, not even close. They are not obese either. The traditional Okinawans are those who kept to their traditional diet and lifestyle, which seems to be less and less common these days.

There are better videos on the web that could be used to illustrate this point, some even showing shirtless traditional karate instructors and students from Okinawa, which I had seen before but could not find again. Nearly all of those karate instructors and students were a bit chubby, but not obese. By the way, karate was invented in Okinawa.

The fact that the traditional Okinawans are not ripped does not mean that the level of fat that is healthy for them is also healthy for someone with a different genetic makeup. It is important to remember that the traditional Okinawans share a common ancestry.

What does this all mean?

Some speculation below, but before that let me say this: as counterintuitive as it may sound, excessive abdominal fat may be associated with higher insulin sensitivity in some cases. This post discusses a study in which the members of a treatment group were more insulin sensitive than the members of a control group, even though the former were much fatter; particularly in terms of abdominal fat.

It is possible that the buff skeleton look is often perceived as somewhat unattractive because of cultural reasons, and that it is associated with the healthiest state for humans. However, it seems a bit unlikely that this applies as a general rule to everybody.

Another possibility, which appears to be more reasonable, is that the buff skeleton look is healthy for some, and not for others. After all, body fat percentage, like fat distribution, seems to be strongly influenced by our genes. We can adapt in ways that go against genetic pressures, but that may be costly in some cases.

There is a great deal of genetic variation in the human species, and much of it may be due to relatively recent evolutionary pressures.

Life is not that simple!


Sunday, November 28, 2010

HealthCorrelator for Excel 1.0 (HCE): Call for beta testers

This call is closed. Beta testing has been successfully completed. HealthCorrelator for Excel (HCE) is now publicly available for download and use on a free trial basis. For those users who decide to buy it after trying, licenses are available for individuals and organizations.

To download a free trial version – as well as get the User Manual, view demo YouTube videos, and download and try sample datasets – visit the HealthCorrelator.com web site.

Monday, November 22, 2010

Human traits are distributed along bell curves: You need to know yourself, and HCE can help

Most human traits (e.g., body fat percentage, blood pressure, propensity toward depression) are influenced by our genes; some more than others. The vast majority of traits are also influenced by environmental factors, the “nurture” part of the “nature-nurture” equation. Very few traits are “innate”, such as blood type.

This means that manipulating environmental factors, such as diet and lifestyle, can strongly influence how the traits are finally expressed in humans. But each individual tends to respond differently to diet and lifestyle changes, because each individual is unique in terms of his or her combination of “nature” and “nurture”. Even identical twins are different in that respect.

When plotted, traits that are influenced by our genes are distributed along a bell-shaped curve. For example, a trait like body fat percentage, when measured in a population of 1000 individuals, will yield a distribution of values that looks bell-shaped. This type of distribution is also known in statistics as a “normal” distribution.

Why is that?

The additive effect of genes and the bell curve

The reason is purely mathematical. A measurable trait, like body fat percentage, is usually influenced by several genes. (Sometimes individual genes have a very marked effect, as in genes that “switch on or off” other genes.) Those genes appear at random in a population, and their various combinations spread in response to selection pressures. Selection pressures usually cause a narrowing of the bell-shaped curve distributions of traits in populations.

The genes interact with environmental influences, which also have a certain degree of randomness. The result is a massive combined randomness. It is this massive randomness that leads to the bell-curve distribution. The bell curve itself is not random at all, which is a fascinating aspect of this phenomenon. From “chaos” comes “order”. A bell curve is a well-defined curve that is associated with a function, the probability density function.

The underlying mathematical reason for the bell shape is the central limit theorem. The genes are combined in different individuals as combinations of alleles, where each allele is a variation (or mutation) of a gene. An allele set, for genes in different locations of the human DNA, forms a particular allele combination, called a genotype. The alleles combine their effects, usually in an additive fashion, to influence a trait.

Here is a simple illustration. Let us say one generates 1000 random variables, each storing the sum of 10 random values going from 0 to 1. This mimics the additive effect of 10 genes with random allele combinations. The result is a set of numbers ranging from 0 to 10, in a population of 1000 individuals; each number is analogous to an allele combination. The resulting histogram, which plots the frequency of each allele combination (or genotype) in the population, is shown in the figure below. Each allele configuration will “push for” a particular trait range, making the trait distribution take the same bell shape.
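This illustration is easy to reproduce. Below is a quick sketch in Python; the variable names, the fixed seed, and the bin width are my own choices for the sketch, not part of any original simulation:

```python
import random

# Simulate 1000 "individuals". Each is the sum of 10 random values in [0, 1],
# mimicking the additive effect of 10 genes with random allele combinations.
random.seed(42)  # fixed seed so the sketch is reproducible
sums = [sum(random.random() for _ in range(10)) for _ in range(1000)]

# Crude text histogram: the counts peak near the middle (around 5),
# tracing out the bell shape predicted by the central limit theorem.
bins = [0] * 10
for s in sums:
    bins[min(int(s), 9)] += 1
for i, count in enumerate(bins):
    print(f"{i}-{i + 1}: {'#' * (count // 10)}")
```

Even though each of the 10 underlying values is uniformly distributed, their sums pile up around the middle; that is the central limit theorem at work.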


The bell curve, research studies, and what they mean for you

Studies of the effects of diet and exercise on health variables usually report their results in terms of average responses in a group of participants. Frequently two groups are used, one control and one treatment. For example, in a diet-related study the control group may follow the Standard American Diet, and the treatment group may follow a low carbohydrate diet.

However, you are not the average person; the average person is an abstraction. The properties of bell curve distributions tell us that there is about a 68 percent chance that you will fall within 1 standard deviation of the average, to the left or the right of the “middle” of the bell curve. Still, even 0.5 standard deviation above the average is not the average. And there is approximately a 32 percent chance that you will not fall within the -1 to 1 standard deviation range at all. If this is the case, the average results reported may be close to irrelevant for you.
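These percentages follow directly from the properties of the normal distribution, and are easy to verify with Python’s standard library (the variable names below are my own):

```python
from statistics import NormalDist  # available in Python 3.8+

# Probability that a normally distributed trait falls within 1 standard
# deviation of the average -- the "68 percent" figure.
within_1sd = NormalDist().cdf(1) - NormalDist().cdf(-1)
print(round(within_1sd, 4))      # 0.6827

# And the chance of falling outside that range -- roughly 32 percent.
print(round(1 - within_1sd, 4))  # 0.3173
```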

Average results reported in studies are a good starting point for people who are similar to the studies’ participants. But you need to generate your own data, with the goal of “knowing yourself through numbers” by progressively analyzing it. This is akin to building a “numeric diary”. It is not exactly an “N=1” experiment, as some like to say, because you can generate multiple data points (e.g., N=200) on how your body alone responds to diet and lifestyle changes over time.

HealthCorrelator for Excel (HCE)

I think I have finally been able to develop a software tool that can help people do that. I have been using it myself for years, initially as a prototype. You can see the results of my transformation on this post. The challenge for me was to generate a tool that was simple enough to use, and yet powerful enough to give people good insights on what is going on with their body.

The software tool is called HealthCorrelator for Excel (HCE). It runs on Excel, and generates coefficients of association (correlations, which range from -1 to 1) among variables and graphs at the click of a button.

This 5-minute YouTube video shows how the software works in general, and this 10-minute video goes into more detail on how the software can be used to manage a specific health variable. These two videos build on a very small sample dataset, and their focus is on HDL cholesterol management. Nevertheless, the software can be used in the management of just about any health-related variable – e.g., blood glucose, triglycerides, muscle strength, muscle mass, depression episodes etc.

You have to enter data about yourself, and then the software will generate coefficients of association and graphs at the click of a button. As you can see from the videos above, it is very simple. The interpretation of the results is straightforward in most cases, and a bit more complicated in a smaller number of cases. Some results will probably surprise users, and their doctors.

For example, a user who is a patient may be able to show to a doctor that, in the user’s specific case, a diet change influences a particular variable (e.g., triglycerides) much more strongly than a prescription drug or a supplement. More posts will be coming in the future on this blog about these and other related issues.

Monday, November 15, 2010

Your mind as an anabolic steroid

The figure below, taken from Wilmore et al. (2007), is based on a classic 1972 study conducted by Ariel and Saville. The study demonstrated the existence of what is referred to in exercise physiology as the “placebo effect on muscular strength gains”. The study had two stages. In the first stage, fifteen male university athletes completed a 7-week strength training program. Gains in strength occurred during this period, but were generally small as these were trained athletes.


In the second stage the same participants completed a 4-week strength training program, very much like the previous one (in the first stage). The difference was that some of them took placebos they believed to be anabolic steroids. Significantly greater gains in strength occurred during this second stage for those individuals, even though this stage was shorter in duration (4 weeks). The participants in this classic study increased their strength gains due to one main reason. They strongly believed it would happen.

Again, these were trained athletes; see the maximum weights lifted on the left, which are not in pounds but in kilograms. For trained athletes, gains in strength are usually associated with gains in muscle mass. The gains may not look like much, and seem to be mostly in movements involving big muscle groups. Still, if you look carefully, you will notice that the bench press gain is around 10-15 kg. This is a gain of 22-33 lbs, in a little less than one month!

This classic study has several implications. One is that if someone tells you that a useless supplement will lead to gains from strength training, and you believe that, maybe the gains will indeed happen. This study also provides indirect evidence that “psyching yourself up” for each strength training session may indeed be very useful, as many serious bodybuilders do. It is also reasonable to infer from this study that if you believe that you will not achieve gains from strength training, that belief may become reality.

As a side note, androgenic-anabolic steroids, better known as “anabolic steroids” or simply “steroids”, are synthetic derivatives of the hormone testosterone. Testosterone is present in males and females, but it is usually referred to as a male hormone because it is found in much higher concentrations in males than females.

Steroids have many negative side effects, particularly when taken in large quantities and for long periods of time. They tend to work only when taken in doses above a certain threshold (Wilmore et al., 2007); results below that threshold may actually be placebo effects. The effective thresholds for steroids tend to be high enough to lead to negative health side effects for most people. Still, they are used by bodybuilders as an effective aid to muscle gain, because they do lead to significant muscle gain in high doses. Adding to the negative side effects, steroids do not usually prevent fat gain.

References

Ariel, G., & Saville, W. (1972). Anabolic steroids: The physiological effects of placebos. Medicine and Science in Sports and Exercise, 4(2), 124-126.

Wilmore, J.H., Costill, D.L., & Kenney, W.L. (2007). Physiology of sport and exercise. Champaign, IL: Human Kinetics.

Monday, November 8, 2010

High-heat cooking will AGE you, if you eat food deep-fried with industrial vegetable oils

As I said before on this blog, I am yet to be convinced that grilled meat is truly unhealthy in the absence of leaky gut problems. I am referring here to high heat cooking-induced Maillard reactions (browning) and the resulting advanced glycation endproducts (AGEs). Whenever you cook a food in high heat, to the point of browning it, you generate a Maillard reaction. Searing and roasting meat usually leads to that.

Elevated levels of serum AGEs presumably accelerate the aging process in humans. This is supported by research with uncontrolled diabetics, who seem to have elevated levels of serum AGEs. In fact, a widely used measure in the treatment of diabetes, the HbA1c (or percentage of glycated hemoglobin), is actually a measure of endogenous AGE formation. (Endogenous = generated by our own bodies.)

Still, evidence that a person with an uncompromised gut can cause serum levels of AGEs to go up significantly by eating AGEs is weak, and evidence that any related serum AGE increases lead the average person to develop health problems is pretty much nonexistent. The human body can handle AGEs, as long as their concentration is not too high. We cannot forget that a healthy HbA1c in humans is about 5 percent; meaning that AGEs are created and dealt with by our bodies. A healthy HbA1c in humans is not 0 percent.

Thanks again to Justin for sending me the full text version of the Birlouez-Aragon et al. (2010) article, which is partially reviewed here. See this post and the comments under it for some background on this discussion. The article is unequivocally titled: “A diet based on high-heat-treated foods promotes risk factors for diabetes mellitus and cardiovascular diseases.”

This article is recent, and has already been cited by news agencies and bloggers as providing “definitive” evidence that high-heat cooking is bad for one’s health. Interestingly, quite a few of those citations are in connection with high-heat cooking of meat, which is not even the focus of the article.

In fact, the Birlouez-Aragon et al. (2010) article provides no evidence that high-heat cooking of meat leads to AGEing in humans. If anything, the article points at the use of industrial vegetable oils for cooking as the main problem. And we know already that industrial vegetable oils are not healthy, whether you cook with them or drink them cold by the tablespoon.

But there are a number of good things about this article. For example, the authors summarize past research on AGEs. They focus on MRPs, which are “Maillard reaction products”. One of the summary statements supports what I have said on this blog before:

"The few human intervention trials […] that reported on health effects of dietary MRPs have all focused on patients with diabetes or renal failure."

That is, there is no evidence from human studies that dietary AGEs cause health problems outside the context of preexisting conditions that themselves seem to be associated with endogenous AGE production. To that I would add that gut permeability may also be a problem, as in celiacs ingesting large amounts of AGEs.

As you can see from the quote below, the authors decided to focus their investigation on a particular type of AGE, namely CML or carboxymethyllysine.

"...we decided to specifically quantify CML, as a well-accepted MRP indicator ..."

As I noted in my comments under this post (the oven roasted pork tenderloin post), one particular type of diet seems to lead to high serum CML levels – a vegetarian diet.

So let us see what the authors studied:

"... we conducted a randomized, crossover, intervention trial to clarify whether a habitual diet containing high-heat-treated foods, such as deep-fried potatoes, cookies, brown crusted bread, or fried meat, could promote risk factors of type 2 diabetes or cardiovascular diseases in healthy people."

Well, “deep-fried potatoes” is a red flag, don’t you think? They don’t say what oil was used for deep-frying, but I bet it was not coconut or olive oil. Cheap industrial vegetable oils (corn, safflower etc.) are the ones normally used (and re-used) for deep-frying. This is in part because these oils are cheap, and in part because they have high “smoke points” (the temperature at which the oil begins to generate smoke).

Let us see what else the authors say about the dietary conditions they compared:

"The STD was prepared by using conventional techniques such as grilling, frying, and roasting and contained industrial food known to be highly cooked, such as extruded corn flakes, coffee, dry cookies, and well-baked bread with brown crust. In contrast, the STMD comprised some raw food and foods that were cooked with steam techniques only. In addition, convenience products were chosen according to the minimal process applied (ie, steamed corn flakes, tea, sponge cakes, and mildly baked bread) ..."

The STD diet was the one with high-heat preparation of foods; in the STMD diet the foods were all steam-cooked at relatively low temperatures. Clearly both diets were composed mostly of plant-based foods, and of the unhealthy kind!

The following quote, from the results, pretty much tells us that the high omega-6 content of industrial oils used for deep frying was likely to be a major confounder, if not the main culprit:

"... substantial differences in the plasma fatty acid profile with higher plasma concentrations of long-chain omega-3 fatty acids […] and lower concentrations of omega-6 fatty acids […] were analyzed in the STMD group compared with in the STD group."

That is, the high-heat cooking group had higher plasma concentrations of omega-6 fats, which is what you would expect from a group consuming a large amount of industrial vegetable oils. One single tablespoon per day is already a large amount; these folks were probably consuming more than that.

Perhaps a better title for this study would have been: “A diet based on foods deep-fried in industrial vegetable oils promotes risk factors for diabetes mellitus and cardiovascular diseases.”

This study doesn’t even get close to indicting charred meat as a major source of serum AGEs. And it is no exception; the same can be said of most other studies that many claim do just that.

Reference

Birlouez-Aragon, I., Saavedra, G., Tessier, F.J., Galinier, A., Ait-Ameur, L., Lacoste, F., Niamba, C.-N., Alt, N., Somoza, V., & Lecerf, J.-M. (2010). A diet based on high-heat-treated foods promotes risk factors for diabetes mellitus and cardiovascular diseases. The American Journal of Clinical Nutrition, 91(5), 1220-1226.

Tuesday, October 19, 2010

Slow-cooked meat: Round steak, not grilled, but slow-cooked in a frying pan

I am yet to be convinced that grilled meat is truly unhealthy in the absence of leaky gut problems. I am referring here to high heat cooking-induced Maillard reactions and the resulting advanced glycation endproducts (AGEs). If you are interested, see this post and the comments under it, where I looked into some references provided by an anonymous commenter. In short, I am more concerned about endogenous (i.e., inside the body) formation of AGEs than with exogenous (e.g., dietary) intake.

Still, the other day I had to improvise when cooking meat, and used a cooking method that is considered by many to be fairly healthy – slow-cooking at a low temperature. I seasoned a few pieces of beef tenderloin (filet mignon) for the grill, but it started raining, so I decided to slow-cook them in a frying pan with water and some olive oil. After about 1 hour of slow-cooking, and somewhat to my surprise, they tasted more delicious than grilled!

I have since been using this method more and more, with all types of cuts of meat. It is great for round steak and top sirloin, for example, as well as cuts that come with bone. The pieces of meat come off the bone very easily, are soft, and taste great. So does much of the marrow. You also end up with a delicious sauce. Almost any cut of beef ends up very soft when slow-cooked, even cuts that would normally come off a grill a bit hard. Below is a simple recipe, for round steak (a.k.a. eye round).

- Prepare some dry seasoning powder by mixing sea salt, black pepper, dried garlic bits, chili powder, and a small amount of cayenne pepper.
- Season the round steak pieces at least 2 hours prior to placing them in the pan.
- Add a bit of water and olive oil to one or more frying pans. Two frying pans may be needed, depending on their size and the amount of meat.
- Place the round steak pieces in the frying pan, and add more water, almost to the point of covering them.
- Cook on low fire covered for 2-3 hours.

Since you will be cooking with low fire, the water will probably not evaporate completely even after 3 h. Nevertheless it is a good idea to check it every 15-30 min to make sure that this is the case, because in dry weather the water may evaporate rather fast. The water around the cuts should slowly turn into a fatty and delicious sauce, which you can pour on the meat when serving, to add flavor. The photos below show seasoned round steak pieces in a frying pan before cooking, and some cooked pieces served with sweet potatoes, orange pieces and a nectarine.



A 100 g portion will have about 34 g of protein. (A 100 g portion is a bit less than 4 oz, cooked.) The amount of fat will depend on how trimmed the cuts are. Like most beef cuts, the fat will be primarily saturated and monounsaturated (both very healthy), with approximately equal amounts of each. It will provide good amounts of the following vitamins and minerals: iron, niacin, phosphorus, potassium, zinc, selenium, vitamin B6, and vitamin B12.

Monday, October 11, 2010

Blood glucose levels in birds are high yet HbA1c levels are low: Can vitamin C have anything to do with this?

Blood glucose levels in birds are often 2-4 times higher than those in mammals of comparable size. Yet birds often live 3 times longer than mammals of comparable size. This is paradoxical. High glucose levels are generally associated with accelerated senescence, but birds seem to age much slower than mammals. Several explanations have been proposed for this, one of which is related to the formation of advanced glycation endproducts (AGEs).

Glycation is a process whereby sugar molecules “stick” to protein or fat molecules, impairing their function. Glycation leads to the formation of AGEs, which seem to be associated with a host of diseases, including diabetes, and to be implicated in accelerated aging (or “ageing”, with British spelling).

The graphs below, from Beuchat & Chong (1998), show the glucose levels (at rest and prior to feeding) and HbA1c levels (percentage of glycated hemoglobin) in birds and mammals. HbA1c is a measure of the degree of glycation of hemoglobin, a protein found in red blood cells. As such HbA1c (given in percentages) is a good indicator of the rate of AGE formation within an animal’s body.


The glucose levels are measured in mmol/l; they should be multiplied by 18 to obtain the respective measures in mg/dl. For example, the 18 mmol/l glucose level for the Anna’s (a hummingbird species) is equivalent to 324 mg/dl. Even at that high level, well above the level of a diabetic human, the Anna’s hummingbird species has an HbA1c of less than 5, which is lower than that for most insulin sensitive humans.
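The conversion is simple enough to sketch in code (the function name is my own; the factor of 18 is the approximate glucose conversion used above):

```python
MGDL_PER_MMOLL = 18  # approximate conversion factor for blood glucose

def mmoll_to_mgdl(mmol_l):
    """Convert a blood glucose reading from mmol/L to mg/dL."""
    return mmol_l * MGDL_PER_MMOLL

# The Anna's hummingbird reading cited above:
print(mmoll_to_mgdl(18))  # 324 mg/dL
```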

How can that be?

There are a few possible reasons. Birds seem to have evolved better mechanisms to control cell permeability to glucose, allowing glucose to enter cells very selectively. Birds also seem to have a higher turnover of the cells in which glycation, and thus AGE formation, occurs. The lifespan of red blood cells in birds, for example, is only 50 to 70 percent that of mammals.

But one of the most interesting mechanisms is vitamin C synthesis. Not only is vitamin C a powerful antioxidant, but it also has the ability to reversibly bind to proteins at the sites where glycation would occur. That is, vitamin C has the potential to significantly reduce glycation. The vast majority of birds and mammals can synthesize vitamin C. Humans are an exception. They have to get it from their diet.

This may be one of the many reasons why isolated human groups with traditional diets high in fruits and starchy tubers, which lead to temporary blood glucose elevations, tend to have good health. Fruits and starchy tubers in general are good sources of vitamin C.

Grains and seeds are not.

References

Beuchat, C.A., & Chong, C.R. (1998). Hyperglycemia in hummingbirds and its consequences for hemoglobin glycation. Comparative Biochemistry and Physiology Part A, 120(3), 409–416.

Holmes D.J., Flückiger, R., & Austad, S.N. (2001). Comparative biology of aging in birds: An update. Experimental Gerontology, 36(4), 869-883.

Tuesday, October 5, 2010

The China Study II: Does calorie restriction increase longevity?

The idea that calorie restriction extends human life comes largely from studies of other species. The most relevant of those studies have been conducted with primates, where it has been shown that primates that eat a restricted calorie diet live longer and healthier lives than those that are allowed to eat as much as they want.

There are two main problems with many of the animal studies of calorie restriction. One is that, as natural lifespan decreases, it becomes progressively easier to experimentally obtain major relative lifespan extensions. (That is, it seems much easier to double the lifespan of an organism whose natural lifespan is one day than an organism whose natural lifespan is 80 years.) The second, and main problem in my mind, is that the studies often compare obese with lean animals.

Obesity clearly reduces lifespan in humans, but that is a different claim than the one that calorie restriction increases lifespan. It has often been claimed that Asian countries and regions where calorie intake is reduced display increased lifespan. And this may well be true, but the question remains as to whether this is due to calorie restriction increasing lifespan, or because the rates of obesity are much lower in countries and regions where calorie intake is reduced.

So, what can the China Study II data tell us about the hypothesis that calorie restriction increases longevity?

As it turns out, we can conduct a preliminary test of this hypothesis based on a key assumption. Let us say we compared two populations (e.g., counties in China), based on the following ratio: number of deaths at or after age 70 divided by number of deaths before age 70. Let us call this the “ratio of longevity” of a population, or RLONGEV. The assumption is that the population with the higher RLONGEV would be the population with the higher longevity of the two. The reason is that, as longevity goes up, one would expect to see a shift in death patterns, with progressively more people dying old and fewer people dying young.

The 1989 China Study II dataset has two variables that we can use to estimate RLONGEV. They are coded as M005 and M006, and refer to the mortality rates from 35 to 69 and 70 to 79 years of age, respectively. Unfortunately there is no variable for mortality after 79 years of age, which limits the scope of our results somewhat. (This does not totally invalidate the results because we are using a ratio as our measure of longevity, not the absolute number of deaths from 70 to 79 years of age.) Take a look at these two previous China Study II posts (here, and here) for other notes, most of which apply here as well. The notes are at the end of the posts.
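A minimal sketch of how such a ratio could be computed from the two available age bands; the function name and the county figures below are hypothetical illustrations, not actual China Study II values:

```python
def rlongev(deaths_35_69, deaths_70_79):
    """Ratio of longevity: deaths at or after age 70 divided by deaths before
    age 70, approximated here with the two age bands available (M005, M006)."""
    return deaths_70_79 / deaths_35_69

# Hypothetical mortality counts for two counties -- illustration only.
county_a = rlongev(deaths_35_69=120, deaths_70_79=90)   # 0.75
county_b = rlongev(deaths_35_69=150, deaths_70_79=60)   # 0.4
print(county_a > county_b)  # prints True: deaths in county A skew older
```

Under the assumption above, county A, whose deaths are shifted toward older ages, would be judged the longer-lived of the two.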

All of the results reported here are from analyses conducted using WarpPLS. Below is a model with coefficients of association; it is a simple model, since the hypothesis that we are testing is also simple. (Click on it to enlarge. Use the "CTRL" and "+" keys to zoom in, and the "CTRL" and "-" keys to zoom out.) The arrows explore associations between variables, which are shown within ovals. The meaning of each variable is the following: TKCAL = total calorie intake per day; RLONGEV = ratio of longevity; SexM1F2 = sex, with 1 assigned to males and 2 to females.



As one would expect, being female is associated with increased longevity, but the association is just shy of being statistically significant in this dataset (beta=0.14; P=0.07). The association between total calorie intake and longevity is trivial, and statistically indistinguishable from zero (beta=-0.04; P=0.39). Moreover, even though this very weak association is overall negative (or inverse), the sign of the association here does not fully reflect the shape of the association. The shape is that of an inverted J-curve; a.k.a. U-curve. When we split the data into total calorie intake terciles we get a better picture:


The second tercile, which refers to a total daily calorie intake of 2193 to 2844 calories, is the one associated with the highest longevity. The first tercile (with the lowest range of calories) is associated with a higher longevity than the third tercile (with the highest range of calories). These results need to be viewed in context. The average weight in this dataset was about 116 lbs. A conservative estimate of the number of calories needed to maintain this weight without any physical activity would be about 1740. Add about 700 calories to that, for a reasonable and healthy level of physical activity, and you get 2440 calories needed daily for weight maintenance. That is right in the middle of the second tercile.

In simple terms, the China Study II data seems to suggest that those who eat well, but not too much, live the longest. Those who eat little have slightly lower longevity. Those who eat too much seem to have the lowest longevity, perhaps because of the negative effects of excessive body fat.

Because these trends are all very weak from a statistical standpoint, we have to take them with caution. What we can say with more confidence is that the China Study II data does not seem to support the hypothesis that calorie restriction increases longevity.

Reference

Kock, N. (2010). WarpPLS 1.0 User Manual. Laredo, Texas: ScriptWarp Systems.

Notes

- The path coefficients (indicated as beta coefficients) reflect the strength of the relationships; they are a bit like standard univariate (or Pearson) correlation coefficients, except that they take into consideration multivariate relationships (they control for competing effects on each variable). Whenever nonlinear relationships were modeled, the path coefficients were automatically corrected by the software to account for nonlinearity.

- Only two data points per county were used (for males and females). This increased the sample size of the dataset without artificially reducing variance, which is desirable since the dataset is relatively small (each county, not each individual, is a separate data point in this dataset). This also allowed for the test of commonsense assumptions (e.g., the protective effects of being female), which is always a good idea in multivariate analyses because violation of commonsense assumptions may suggest data collection or analysis error. On the other hand, it required the inclusion of a sex variable as a control variable in the analysis, which is no big deal.

- Mortality from schistosomiasis infection (MSCHIST) does not confound the results presented here. Only counties where no deaths from schistosomiasis infection were reported have been included in this analysis. The reason for this is that mortality from schistosomiasis infection can severely distort the results in the age ranges considered here. On the other hand, removal of counties with deaths from schistosomiasis infection reduced the sample size, and thus decreased the statistical power of the analysis.

Tuesday, September 28, 2010

Income, obesity, and heart disease in US states

The figure below combines data on median income by state (bottom-left and top-right), as well as a plot of heart disease death rates against percentage of population with body mass index (BMI) greater than 30. The data are recent, and have been provided by CNN.com and creativeclass.com, respectively.


Heart disease deaths and obesity are strongly associated with each other, and both are inversely associated with median income. US states with lower median income tend to have generally higher rates of obesity and heart disease deaths.

The reasons are probably many, complex, and closely interconnected. Low income is usually associated with high rates of stress, depression, smoking, alcoholism, and poor nutrition. Compounding the problem, these are normally associated with consumption of cheap, addictive, highly refined foods.

Interestingly, this is primarily an urban phenomenon. If you were to use hunter-gatherers as your data sources, you would probably see the opposite relationship. For example, non-westernized hunter-gatherers have no income (at least not in the “normal” sense), but typically have a lower incidence of obesity and heart disease than mildly westernized ones. The latter have some income.

Tragically, the first few generations of fully westernized hunter-gatherers usually find themselves in the worst possible spot.

Wednesday, September 22, 2010

Low nonexercise activity thermogenesis: Uncooperative genes or comfy furniture?

The degree of nonexercise activity thermogenesis (NEAT) seems to be a major factor influencing the amount of fat gained or lost by an individual. It also seems to be strongly influenced by genetics, because NEAT is largely due to involuntary activities like fidgeting.

But why should this be?

The degree to which different individuals will develop diseases of civilization in response to consumption of refined carbohydrate-rich foods can also be seen as influenced by genetics. After all, there are many people who eat those foods and are thin and healthy, and that appears to be in part a family trait. But whether we consume those products or not is largely within our control.

So, it is quite possible that NEAT is influenced by genetics, but the fact that NEAT is low in so many people should be a red flag, in the same way that the obesity of so many people who eat refined carbohydrate-rich foods should be a red flag. Moreover, modern isolated hunter-gatherers tend to have low levels of body fat. Given the importance of NEAT for body fat regulation, it is not unreasonable to assume that NEAT is elevated in hunter-gatherers, compared with modern urbanites. Hunter-gatherers live more like our Paleolithic ancestors than modern urbanites do.

True genetic diseases, caused by recent harmful mutations, are usually rare. If low NEAT were truly a genetic “disease”, those with low NEAT should be a small minority. That is not the case. It is more likely that the low NEAT that we see in modern urbanites is due to a maladaptation of our Stone Age body to modern life, in the same way that our Stone Age body is maladapted to the consumption of foods rich in refined grains and seeds.

What could have increased NEAT among our Paleolithic ancestors, and among modern isolated hunter-gatherers?

One thing that comes to mind is lack of comfortable furniture, particularly comfortable chairs (photo below from: prlog.org). It is quite possible that our Paleolithic ancestors invented some rudimentary forms of furniture, but they would have been much less comfortable than modern furniture used in most offices and homes. The padding of comfy office chairs is not very easy to replicate with stones, leaves, wood, or even animal hides. You need engineering to design it; you need industry to produce that kind of thing.


I have been doing a little experiment with myself, where I do things that force me to sit tall and stand while working in my office, instead of sitting back and “relaxing”. Things like putting a pillow on the chair so that I cannot rest my back on it, or placing my computer on an elevated surface so that I am forced to work while standing up. I tend to move a lot more when I do those things, and the movement is largely involuntary. These are small but constant movements, a bit like fidgeting. (It would be interesting to tape myself and actually quantify the amount of movement.)

It seems that one can induce an increase in NEAT, which is largely due to involuntary activities, by doing some voluntary things like placing a pillow on a chair or working while standing up.

Is it possible that the unnaturalness of comfy furniture, and particularly of comfy chairs, is contributing (together with other factors) to not only making us fat but also having low-back problems?

Both obesity and low-back problems are widespread among modern urbanites. Yet, from an evolutionary perspective, they should not be. They likely impaired survival success among our ancestors, and thus impaired their reproductive success. Evolution “gets angry” at these things; over time it wipes them out. In my reading of studies of hunter-gatherers, I don’t recall a single instance in which obesity and low-back problems were described as being widespread.

Friday, September 17, 2010

Strong causation can exist without any correlation: The strange case of the chain smokers, and a note about diet

Researchers like to study samples of data and look for associations between variables. Often those associations are represented in the form of correlation coefficients, which go from -1 to 1. Another popular measure of association is the path coefficient, which usually has a narrower range of variation. What many researchers seem to forget is that the associations they find depend heavily on the sample they are looking at, and on the ranges of variation of the variables being analyzed.

A forgotten warning: Causation without correlation

Often those who conduct multivariate statistical analyses on data are unaware of certain limitations. Many times this is due to lack of familiarity with statistical tests. One warning we do see a lot though is: Correlation does not imply causation. This is, of course, absolutely true. If you take my weight from 1 to 20 years of age, and the price of gasoline in the US during that period, you will find that they are highly correlated. But common sense tells me that there is no causation whatsoever between these two variables.
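
A quick way to see how two causally unrelated series can correlate strongly is to simulate them; the weights and gasoline prices below are made up, but both series merely trend upward over time:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two made-up series over 20 "years": a child's weight (lbs) and the
# price of gasoline (USD). Both simply trend upward over time; there
# is no causal link between them.
years = np.arange(1, 21)
weight = 8 + 7.0 * years + rng.normal(0, 3, 20)
gas_price = 0.30 + 0.05 * years + rng.normal(0, 0.05, 20)

r = np.corrcoef(weight, gas_price)[0, 1]
print(round(r, 2))  # a high positive correlation, despite no causation
```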

So correlation does not imply causation alright, but there is another warning that is rarely seen: There can be strong causation without any correlation. Of course this can lead to even more bizarre conclusions than the “correlation does not imply causation” problem. If there is strong causation between variables B and Y, and it is not showing as a correlation, another variable A may “jump in” and “steal” that “unused correlation”; so to speak.

The chain smokers “study”

To illustrate this point, let us consider the following fictitious case, a study of “100 cities”. The study focuses on the effect of smoking and genes on lung cancer mortality. Smoking significantly increases the chances of dying from lung cancer; it is a very strong causative factor. Here are a few more details. Between 35 and 40 percent of the population are chain smokers. And there is a genotype (a set of genes), found in a small percentage of the population (around 7 percent), which is protective against lung cancer. All of those who are chain smokers die from lung cancer unless they die from other causes (e.g., accidents). Dying from other causes is a lot more common among those who have the protective genotype.

(I created this fictitious data with these associations in mind, using equations. I also added uncorrelated error into the equations, to make the data look a bit more realistic. For example, random deaths occurring early in life would reduce slightly any numeric association between chain smoking and cancer deaths in the sample of 100 cities.)

The table below shows part of the data, and gives an idea of the distribution of percentage of smokers (Smokers), percentage with the protective genotype (Pgenotype), and percentage of lung cancer deaths (MLCancer). (Click on it to enlarge. Use the "CTRL" and "+" keys to zoom in, and "CTRL" and "-" to zoom out.) Each row corresponds to a city. The rest of the data, up to row 100, has a similar distribution.


The graphs below show the distribution of lung cancer deaths against: (a) the percentage of smokers, at the top; and (b) the percentage with the protective genotype, at the bottom. Correlations are shown at the top of each graph. (They can vary from -1 to 1. The closer they are to -1 or 1, the stronger is the association, negative or positive, between the variables.) The correlation between lung cancer deaths and percentage of smokers is slightly negative and statistically insignificant (-0.087). The correlation between lung cancer deaths and percentage with the protective genotype is negative, strong, and statistically significant (-0.613).


Even though smoking significantly increases the chances of dying from lung cancer, the correlations tell us otherwise. The correlations tell us that smoking does not seem to cause lung cancer deaths, and that having the protective genotype seems to significantly decrease cancer deaths. Why?

If there is no variation, there is no correlation

The reason is that the “researchers” collected data only about chain smokers. That is, the variable “Smokers” includes only chain smokers. If this was not a fictitious case, focusing the study on chain smokers could be seen as a clever strategy employed by researchers funded by tobacco companies. The researchers could say something like this: “We focused our analysis on those most likely to develop lung cancer.” Or, this could have been the result of plain stupidity when designing the research project.

By restricting their study to chain smokers the researchers dramatically reduced the variability in one particular variable: the extent to which the study participants smoked. Without variation, there can be no correlation. No matter what statistical test or software is used, no significant association will be found between lung cancer deaths and percentage of smokers based on this dataset. No matter what statistical test or software is used, a significant and strong association will be found between lung cancer deaths and percentage with the protective genotype.
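
The point can be reproduced in a few lines of code. The numbers below are simulated, roughly following the fictitious setup described above, with the smoking variable confined to a very narrow range:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100  # fictitious cities

# Percentage of chain smokers: note the severely restricted range
# (roughly 36-38 percent), as if only chain smokers were sampled.
smokers = rng.uniform(36, 38, n)

# Percentage with the protective genotype: around 7 percent.
pgenotype = rng.uniform(4, 10, n)

# Lung cancer mortality: strongly caused by smoking and reduced by
# the protective genotype, plus uncorrelated noise (random deaths).
mlcancer = 0.8 * smokers - 2.0 * pgenotype + rng.normal(0, 2, n)

r_smoke = np.corrcoef(smokers, mlcancer)[0, 1]
r_gene = np.corrcoef(pgenotype, mlcancer)[0, 1]
print(round(r_smoke, 3), round(r_gene, 3))
# Smoking causes the deaths, yet its correlation comes out weak;
# the genotype's correlation comes out strong and negative.
```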

Of course, this could lead to a very misleading conclusion. Smoking does not cause lung cancer; the real cause is genetic.

A note about diet

Consider the analogy between smoking and consumption of a particular food, and you will probably see what this means for the analysis of observational data regarding dietary choices and disease. This applies to almost any observational study, including the China Study. (Studies employing experimental control manipulations would presumably ensure enough variation in the variables studied.) In the China Study, data from dozens of counties were collected. One may find a significant association between consumption of food A and disease Y.

There may be a much stronger association between food B and disease Y, but that association may not show up in statistical analyses at all, simply because there is little variation in the data regarding consumption of food B. For example, all those sampled may have eaten food B; about the same amount. Or none. Or somewhere in between, within a rather small range of variation.

Statistical illiteracy, bad choices, and taxation

Statistics is a “necessary evil”. It is useful to go from small samples to large ones when we study any possible causal association. By doing so, one can find out whether an observed effect really applies to a larger percentage of the population, or is actually restricted to a small group of individuals. The problem is that we humans are very bad at inferring actual associations from simply looking at large tables with numbers. We need statistical tests for that.

However, ignorance about basic statistical phenomena, such as the one described here, can be costly. A group of people may eliminate food A from their diet based on coefficients of association resulting from what seem to be very clever analyses, replacing it with food B. The problem is that food B may be equally harmful, or even more harmful. And, that effect may not show up on statistical analyses unless they have enough variation in the consumption of food B.

Readers of this blog may wonder why we explicitly use terms like “suggests” when we refer to a relationship that is suggested by a significant coefficient of association (e.g., a linear correlation). This is why, among other reasons.

One does not have to be a mathematician to understand basic statistical concepts. And doing so can be very helpful in one’s life in general, not only in diet and lifestyle decisions. Even in simple choices, such as what to bet on. We are always betting on something. For example, any investment is essentially a bet. Some outcomes are much more probable than others.

Once I had an interesting conversation with a high-level officer of a state government. I was part of a consulting team working on an information technology project. We were talking about the state lottery, which was a big source of revenue for the state, comparing it with state taxes. He told me something to this effect:

Our lottery is essentially a tax on the statistically illiterate.

Sunday, September 12, 2010

The China Study II: Wheat flour, rice, and cardiovascular disease

In my last post on the China Study II, I analyzed the effect of total and HDL cholesterol on mortality from all cardiovascular diseases. The main conclusion was that total and HDL cholesterol were protective. Total and HDL cholesterol usually increase with intake of animal foods, and particularly of animal fat. The lowest mortality from all cardiovascular diseases was in the highest total cholesterol range, 172.5 to 180; and the highest mortality in the lowest total cholesterol range, 120 to 127.5. The difference was quite large; the mortality in the lowest range was approximately 3.3 times higher than in the highest.

This post focuses on the intake of two main plant foods, wheat flour and rice, and their relationships with mortality from all cardiovascular diseases. After many exploratory multivariate analyses, wheat flour and rice emerged as the plant foods with the strongest associations with mortality from all cardiovascular diseases. Moreover, wheat flour and rice have a strong and inverse relationship with each other, which suggests a “consumption divide”. Since the data is from China in the late 1980s, it is likely that consumption of wheat flour is even higher now. As you’ll see, this picture is alarming.

The main model and results

All of the results reported here are from analyses conducted using WarpPLS. Below is the model with the main results of the analyses. (Click on it to enlarge. Use the "CTRL" and "+" keys to zoom in, and "CTRL" and "-" to zoom out.) The arrows explore associations between variables, which are shown within ovals. The meaning of each variable is the following: SexM1F2 = sex, with 1 assigned to males and 2 to females; MVASC = mortality from all cardiovascular diseases (ages 35-69); TKCAL = total calorie intake per day; WHTFLOUR = wheat flour intake (g/day); and RICE = rice intake (g/day).


The variables to the left of MVASC are the main predictors of interest in the model. The one to the right is a control variable – SexM1F2. The path coefficients (indicated as beta coefficients) reflect the strength of the relationships. A negative beta means that the relationship is negative; i.e., an increase in a variable is associated with a decrease in the variable that it points to. The P values indicate the statistical significance of the relationship; a P lower than 0.05 generally means a significant relationship (95 percent or higher likelihood that the relationship is “real”).

In summary, the model above seems to be telling us that:

- As rice intake increases, wheat flour intake decreases significantly (beta=-0.84; P<0.01). This relationship would be the same if the arrow pointed in the opposite direction. It suggests that there is a sharp divide between rice-consuming and wheat flour-consuming regions.

- As wheat flour intake increases, mortality from all cardiovascular diseases increases significantly (beta=0.32; P<0.01). This is after controlling for the effects of rice and total calorie intake. That is, wheat flour seems to have some inherent properties that make it bad for one’s health, even if one doesn’t consume that many calories.

- As rice intake increases, mortality from all cardiovascular diseases decreases significantly (beta=-0.24; P<0.01). This is after controlling for the effects of wheat flour and total calorie intake. That is, this effect is not entirely due to rice being consumed in place of wheat flour. Still, as you’ll see later in this post, this relationship is nonlinear. Excessive rice intake does not seem to be very good for one’s health either.

- Increases in wheat flour and rice intake are significantly associated with increases in total calorie intake (betas=0.25, 0.33; P<0.01). This may be due to wheat flour and rice intake: (a) being themselves, in terms of their own caloric content, main contributors to the total calorie intake; or (b) causing an increase in calorie intake from other sources. The former is more likely, given the effect below.

- The effect of total calorie intake on mortality from all cardiovascular diseases is insignificant when we control for the effects of rice and wheat flour intakes (beta=0.08; P=0.35). This suggests that neither wheat flour nor rice exerts an effect on mortality from all cardiovascular diseases by increasing total calorie intake from other food sources.

- Being female is significantly associated with a reduction in mortality from all cardiovascular diseases (beta=-0.24; P=0.01). This is to be expected. In other words, men are women with a few design flaws, so to speak. (This situation reverses itself a bit after menopause.)
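
I cannot reproduce WarpPLS's PLS-based estimation here, but the idea of a coefficient "after controlling for" another variable can be sketched with ordinary least squares on standardized, synthetic data. All numbers below are made up to echo the betas above; this is an illustration, not a re-analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 138  # e.g., two data points (male, female) per county

def standardize(v):
    return (v - v.mean()) / v.std()

# Synthetic standardized intakes with a strong negative association
# between rice and wheat flour (the "consumption divide").
rice = rng.normal(0, 1, n)
wheat = -0.84 * rice + rng.normal(0, 0.55, n)

# Outcome constructed to echo the reported betas (illustrative only).
mvasc = 0.32 * wheat - 0.24 * rice + rng.normal(0, 0.6, n)

# Standardized OLS coefficients play the role of path coefficients:
# each is the effect of one predictor controlling for the other.
X = np.column_stack([standardize(wheat), standardize(rice)])
betas, *_ = np.linalg.lstsq(X, standardize(mvasc), rcond=None)
print(betas)  # wheat's beta positive, rice's beta negative
```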

Wheat flour displaces rice

The graph below shows the shape of the association between wheat flour intake (WHTFLOUR) and rice intake (RICE). The values are provided in standardized format; e.g., 0 is the mean (a.k.a. average), 1 is one standard deviation above the mean, and so on. The curve is the best-fitting U curve obtained by the software. It actually has the shape of an exponential decay curve, which can be seen as a section of a U curve. This suggests that wheat flour consumption has strongly displaced rice consumption in several regions in China, and also that wherever rice consumption is high wheat flour consumption tends to be low.
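
As a rough illustration of what fitting such a curve involves, here is a generic nonlinear least-squares fit of an exponential decay, using scipy rather than WarpPLS's warping algorithm; the data points are simulated, not the actual intakes:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

# Simulated rice intake (x) and wheat flour intake (y) following an
# exponential decay, plus noise; the values are illustrative only.
x = np.sort(rng.uniform(0, 4, 80))
y = 1.5 * np.exp(-1.2 * x) - 0.5 + rng.normal(0, 0.1, 80)

def decay(x, a, b, c):
    # Exponential decay: high at low x, flattening out as x grows.
    return a * np.exp(-b * x) + c

(a, b, c), _ = curve_fit(decay, x, y, p0=(1.0, 1.0, 0.0))
print(a, b, c)  # should recover roughly a=1.5, b=1.2, c=-0.5
```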


As wheat flour intake goes up, so does cardiovascular disease mortality

The graphs below show the shapes of the association between wheat flour intake (WHTFLOUR) and mortality from all cardiovascular diseases (MVASC). In the first graph, the values are provided in standardized format; e.g., 0 is the mean (or average), 1 is one standard deviation above the mean, and so on. In the second graph, the values are provided in unstandardized format and organized in terciles (each of three equal intervals).



The curve in the first graph is the best-fitting U curve obtained by the software. It is a quasi-linear relationship. The higher the consumption of wheat flour in a county, the higher seems to be the mortality from all cardiovascular diseases. The second graph suggests that mortality in the third tercile, which represents a consumption of wheat flour of 501 to 751 g/day (a lot!), is 69 percent higher than mortality in the first tercile (0 to 251 g/day).
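
For the curious, the tercile comparison can be mimicked on simulated data. The intake and mortality numbers below are invented, not the actual China Study II values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented county-level data: wheat flour intake (g/day) and
# cardiovascular mortality (deaths per 1,000), with mortality
# rising roughly linearly with intake.
wheat = rng.uniform(0, 751, 100)
mvasc = 5 + 0.004 * wheat + rng.normal(0, 0.5, 100)

# Terciles as used above: three equal-width intake intervals.
edges = np.linspace(wheat.min(), wheat.max(), 4)
tercile = np.digitize(wheat, edges[1:-1])  # bin labels 0, 1, 2

means = [mvasc[tercile == k].mean() for k in range(3)]
print([round(m, 2) for m in means])  # mortality rises across terciles
```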

Rice seems to be protective, as long as intake is not too high

The graphs below show the shapes of the association between rice intake (RICE) and mortality from all cardiovascular diseases (MVASC). In the first graph, the values are provided in standardized format. In the second graph, the values are provided in unstandardized format and organized in terciles.



Here the relationship is more complex. The lowest mortality is clearly in the second tercile (206 to 412 g/day). There is a lot of variation in the first tercile, as suggested by the first graph with the U curve. (Remember, as rice intake goes down, wheat flour intake tends to go up.) The U curve here looks similar to the exponential decay curve shown earlier in the post, for the relationship between rice and wheat flour intake.

In fact, the shape of the association between rice intake and mortality from all cardiovascular diseases looks a bit like an “echo” of the shape of the relationship between rice and wheat flour intake. Here is what is creepy. This echo looks somewhat like the first curve (between rice and wheat flour intake), but with wheat flour intake replaced by “death” (i.e., mortality from all cardiovascular diseases).

What does this all mean?

- Wheat flour displacing rice does not look like a good thing. Wheat flour intake seems to have strongly displaced rice intake in the counties where it is heavily consumed. Generally speaking, that does not seem to have been a good thing. It looks like this is generally associated with increased mortality from all cardiovascular diseases.

- High glycemic index food consumption does not seem to be the problem here. Wheat flour and rice have very similar glycemic indices (but generally not glycemic loads; see below). Both lead to blood glucose and insulin spikes. Yet, rice consumption seems protective when it is not excessive. This is true in part (but not entirely) because it largely displaces wheat flour. Moreover, neither rice nor wheat flour consumption seems to be significantly associated with cardiovascular disease via an increase in total calorie consumption. This is a bit of a blow to the theory that high glycemic carbohydrates necessarily cause obesity, diabetes, and eventually cardiovascular disease.

- The problem with wheat flour is … hard to pinpoint, based on the results summarized here. Maybe it is the fact that it is an ultra-refined carbohydrate-rich food; less refined forms of wheat could be healthier. In fact, the glycemic loads of less refined carbohydrate-rich foods tend to be much lower than those of more refined ones. (Also, boiled brown rice has a glycemic load that is about three times lower than that of whole wheat bread; whereas the glycemic indices are about the same.) Maybe the problem is wheat flour's gluten content. Maybe it is a combination of various factors, including these.

Reference

Kock, N. (2010). WarpPLS 1.0 User Manual. Laredo, Texas: ScriptWarp Systems.

Acknowledgment and notes

- Many thanks are due to Dr. Campbell and his collaborators for collecting and compiling the data used in this analysis. The data is from this site, created by those researchers to disseminate their work in connection with a study often referred to as the “China Study II”. It has already been analyzed by other bloggers. Notable analyses have been conducted by Ricardo at Canibais e Reis, Stan at Heretic, and Denise at Raw Food SOS.

- The path coefficients (indicated as beta coefficients) reflect the strength of the relationships; they are a bit like standard univariate (or Pearson) correlation coefficients, except that they take into consideration multivariate relationships (they control for competing effects on each variable). Whenever nonlinear relationships were modeled, the path coefficients were automatically corrected by the software to account for nonlinearity.

- The software used here identifies non-cyclical and mono-cyclical relationships such as logarithmic, exponential, and hyperbolic decay relationships. Once a relationship is identified, data values are corrected and coefficients calculated. This is not the same as log-transforming data prior to analysis, which is widely used but only works if the underlying relationship is logarithmic. Otherwise, log-transforming data may distort the relationship even more than assuming that it is linear, which is what is done by most statistical software tools.

- The R-squared values reflect the percentage of explained variance for certain variables; the higher they are, the better the model fit with the data. In complex and multi-factorial phenomena such as health-related phenomena, many would consider an R-squared of 0.20 as acceptable. Still, such an R-squared would mean that 80 percent of the variance for a particular variable is unexplained by the data.

- The P values have been calculated using a nonparametric technique, a form of resampling called jackknifing, which does not require the assumption that the data is normally distributed. This and other related techniques also tend to yield more reliable results for small samples, and for samples with outliers (as long as the outliers are “good” data, and not the result of measurement error).
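
As an aside, a generic leave-one-out jackknife for the standard error of a correlation can be sketched as follows; this is a textbook version of the technique, not necessarily the exact resampling scheme implemented in WarpPLS:

```python
import numpy as np

def jackknife_se_corr(x, y):
    """Leave-one-out jackknife standard error of a Pearson correlation."""
    n = len(x)
    loo = np.array([
        np.corrcoef(np.delete(x, i), np.delete(y, i))[0, 1]
        for i in range(n)
    ])
    # Standard jackknife variance formula.
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

rng = np.random.default_rng(3)
x = rng.normal(size=60)
y = 0.5 * x + rng.normal(size=60)

r = np.corrcoef(x, y)[0, 1]
se = jackknife_se_corr(x, y)
print(round(r, 2), round(se, 3))  # no normality assumption needed
```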

- Only two data points per county were used (for males and females). This increased the sample size of the dataset without artificially reducing variance, which is desirable since the dataset is relatively small. This also allowed for the test of commonsense assumptions (e.g., the protective effects of being female), which is always a good idea in a complex analysis because violation of commonsense assumptions may suggest data collection or analysis error. On the other hand, it required the inclusion of a sex variable as a control variable in the analysis, which is no big deal.

- Since all the data was collected around the same time (late 1980s), this analysis assumes a somewhat static pattern of consumption of rice and wheat flour. Even if variations in consumption of a particular food do lead to variations in mortality, that effect will typically take years to manifest itself. This is a major limitation of this dataset and any related analyses.

- Mortality from schistosomiasis infection (MSCHIST) does not confound the results presented here. Only counties where no deaths from schistosomiasis infection were reported have been included in this analysis. Mortality from all cardiovascular diseases (MVASC) was measured using the variable M059 ALLVASCc (ages 35-69). See this post for other notes that apply here as well.