Sunday, September 22, 2019

How long does it take for a food-related trait to evolve?

Often in discussions about Paleolithic nutrition, and in books on the subject, we see speculations about how long it would take for a population to adapt to a particular type of food. Many of these speculations are way off the mark; some assume that even 10,000 years are not enough for evolution to take place.

This post addresses the question: How long does it take for a food-related trait to evolve?

We need a bit of Genetics 101 first, discussed below. For more details see, e.g., Hartl & Clark (2007) and one of my favorites, Maynard Smith (1998). Full references are provided at the end of this post.

New gene-induced traits, including traits that affect nutrition, appear in populations through a deceptively simple process. A new genetic mutation appears in the population, usually in one single individual, and one of two things happens: (a) the genetic mutation disappears from the population; or (b) the genetic mutation spreads in the population. Evolution is a term that is generally used to refer to a gene-induced trait spreading in a population.

Traits can evolve via two main processes. One is genetic drift, where neutral traits evolve by chance. This process dominates in very small populations (e.g., 50 individuals). The other is selection, where fitness-enhancing traits evolve by increasing the reproductive success of the individuals that possess them. Fitness, in this context, is measured as the number of surviving offspring (or grand-offspring) of an individual.

Yes, traits can evolve by chance, and often do so in small populations.

Say a group of 20 human ancestors became isolated for some reason; e.g., they traveled to an island and got stranded there. Let us assume that the group had the common sense to include at least a few women; ideally more women than men, because women are really the reproductive bottleneck of any population.

In a new generation one individual develops a sweet tooth, which is a neutral mutation because the island has no supermarket. Or, what would be more likely, one of the 20 individuals already had that mutation prior to reaching the island. (Genetic variability is usually high among any group of unrelated individuals, so divergent neutral mutations are usually present.)

By chance alone, that new trait may spread to the whole (now larger) population in 80 generations, or around 1,600 years, assuming a new generation every 20 years. That whole population then grows even further, and gets somewhat mixed up with other groups in a larger population (they find a way out of the island). The descendants of the original island population all have a sweet tooth. That leads to increased diabetes among them, compared with other groups. They find out that the problem is genetic, and wonder how evolution could have made them like that.

The panel below shows the formulas for calculating the time it takes for a trait to evolve to fixation in a population. It is taken from a set of slides I used in a presentation (PowerPoint file here). To evolve to fixation means to spread to all individuals in the population. The results of some simulations are also shown. For example, a trait that provides a minute selective advantage of 1% in a population of 10,000 individuals will possibly evolve to fixation in 1,981 generations, or 39,614 years. Not the millions of years often mentioned in discussions about evolution.
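For those who want to check the arithmetic, below is a minimal Python sketch. It assumes the standard approximations that reproduce the numbers used in this post (see Hartl & Clark, 2007): roughly 4N generations for a neutral trait to reach fixation by drift, and roughly (2/s)ln(2N) generations for a trait with selective advantage s; one generation is taken as 20 years, as above.

import math

GENERATION_YEARS = 20  # assumed generation time, as in the examples above

def drift_fixation_generations(n):
    """Approximate time for a neutral trait to reach fixation by drift: ~4N generations."""
    return 4.0 * n

def selection_fixation_generations(n, s):
    """Approximate fixation time for a trait with selective advantage s: ~(2/s) * ln(2N) generations."""
    return (2.0 / s) * math.log(2.0 * n)

# Island example: drift in a population of 20 -> 80 generations (~1,600 years).
print(drift_fixation_generations(20) * GENERATION_YEARS)

# Selection examples in a population of 10,000: s = 1% -> ~1,981 generations
# (~39,614 years); s = 100% -> ~20 generations (~396 years).
for s in (0.01, 1.0):
    generations = selection_fixation_generations(10_000, s)
    print(round(generations), round(generations * GENERATION_YEARS))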


I say “possibly” above because traits can also disappear from a population by chance, and often do so at the early stages of evolution, even if they increase the reproductive success of the individuals that possess them. For example, a new beneficial metabolic mutation appears, but its host falls off a cliff, or contracts an unrelated disease and dies, before leaving any descendants.

How come the fossil record suggests that evolution usually takes millions of years? The reason is that it usually takes a long time for new fitness-enhancing traits to appear in a population. Most genetic mutations are either neutral or detrimental in terms of reproductive success. It also takes time for the right circumstances to fall into place for genetic drift to happen; e.g., massive extinctions that leave only a few survivors. Once the right elements are in place, evolution can happen fast.

So, what is the implication for traits that affect nutrition? Or, more specifically, can a population that starts consuming a particular type of food evolve to become adapted to it in a short period of time?

The answer is yes. And that adaptation can take a very short amount of time to happen, relatively speaking.

Let us assume that all members of an isolated population start on a particular diet, which is not the optimal diet for them. The exception is one single lucky individual that has a special genetic mutation, and for whom the diet is either optimal or quasi-optimal. Let us also assume that the mutation leads the individual and his or her descendants to have, on average, twice as many surviving children as other unrelated individuals. That translates into a selective advantage (s) of 100%. Finally, let us conservatively assume that the population is relatively large, with 10,000 individuals.

In this case, the mutation will spread to the entire population in approximately 396 years.

Descendants of individuals in that population (e.g., descendants of the Yanomamö) may possess the trait, even after some fair mixing with descendants of other populations, because a trait that goes into fixation has a good chance of being associated with dominant alleles. (Alleles are the different variants of the same gene.)

This Excel spreadsheet (link to a .xls file) is for those who want to play a bit with numbers, using the formulas above, and perhaps speculate about what they could have inherited from their not so distant ancestors. Download the file, and open it with Excel or a compatible spreadsheet system. The formulas are already there; change only the cells highlighted in yellow.

References:

Hartl, D.L., & Clark, A.G. (2007). Principles of population genetics. Sunderland, MA: Sinauer Associates.

Maynard Smith, J. (1998). Evolutionary genetics. New York, NY: Oxford University Press.

Monday, August 26, 2019

How much alcohol is optimal? Maybe less than you think

I have been regularly recommending to users of the software HCE () to include a column in their health data reflecting their alcohol consumption. Why? Because I suspect that alcohol consumption is behind many of what we call the “diseases of affluence”.

A while ago I recall watching an interview with a centenarian, a very lucid woman. When asked about her “secret” to living a long life, she said that she added a little bit of whiskey to her coffee every morning. It was something like a tablespoon of whiskey, or about 15 g, which amounted to approximately 6 g of ethanol every single day.

Well, she might have been drinking very close to the optimal amount of alcohol per day for the average person, if the study reviewed in this post is correct.

Studies of the effect of alcohol consumption on health generally show results in terms of averages within fixed ranges of consumption. For example, they will show average mortality risks for people consuming 1, 2, 3 etc. drinks per day. These studies suggest that there is a J-curve relationship between alcohol consumption and health (). That is, drinking a little is better than not drinking; and drinking a lot is worse than drinking a little.

However, using “rough” ranges of 1, 2, 3 etc. drinks per day prevents those studies from getting to a more fine-grained picture of the beneficial effects of alcohol consumption.

Contrary to popular belief, the positive health effects of moderate alcohol consumption have little, if anything, to do with polyphenols such as resveratrol. Resveratrol, once believed to be the fountain of youth, is found in the skin of red grapes.

It is in fact the alcohol content that has positive effects, apparently reducing the incidence of coronary heart disease, diabetes, hypertension, congestive heart failure, stroke, dementia, and Raynaud’s phenomenon, as well as all-cause mortality. Raynaud’s phenomenon is associated with poor circulation in the extremities (e.g., toes, fingers), which in some cases can progress to gangrene.

In most studies of the effects of alcohol consumption on health, the J-curves emerge from visual inspection of the plots of averages across ranges of consumption. Rarely do you find studies where nonlinear relationships are “discovered” by software tools such as WarpPLS (), with effects being adjusted accordingly.

You do find, however, some studies that fit reasonably justified functions to the data. Di Castelnuovo and colleagues’ study, published in JAMA Internal Medicine in 2006 (), is probably the most widely cited among these studies. This study is a meta-analysis; i.e., a study that builds on various other empirical studies.

I think that the journal in which this study appeared was formerly known as Archives of Internal Medicine, a fairly selective and prestigious journal, even though this did not seem to be reflected in its Wikipedia article at the time of this writing ().

What Di Castelnuovo and colleagues found is interesting. They fitted a bunch of nonlinear functions to the data, all with J-curve shapes. The results suggest a lot of variation in the maximum amount one can drink before mortality becomes higher than not drinking at all; that maximum amount ranges from about 4 to 6 drinks per day.
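To make the curve-fitting idea concrete, here is a small illustrative Python sketch. It is not the authors’ method or data; it simply fits one common J-curve form, log(relative risk) = b1*sqrt(dose) + b2*dose, to made-up dose-response points and locates the nadir of the fitted curve.

import numpy as np

# Hypothetical dose-response points (NOT the meta-analysis data):
# grams of ethanol per day vs. relative risk of death (nondrinkers = 1.0).
dose = np.array([2.0, 5.0, 10.0, 20.0, 30.0, 45.0, 60.0, 75.0])
rr = np.array([0.88, 0.82, 0.83, 0.88, 0.93, 1.00, 1.10, 1.22])

# Least-squares fit of log(RR) = b1*sqrt(dose) + b2*dose; with no intercept,
# the fitted relative risk is exactly 1.0 at zero intake.
X = np.column_stack([np.sqrt(dose), dose])
(b1, b2), *_ = np.linalg.lstsq(X, np.log(rr), rcond=None)

# For a J shape (b1 < 0 < b2), the derivative b1/(2*sqrt(x)) + b2 is zero
# at the nadir, i.e., at x = (b1 / (2*b2))**2.
nadir = (b1 / (2.0 * b2)) ** 2
print(f"estimated optimal intake: about {nadir:.0f} g of ethanol per day")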

But there is little variation in one respect. The optimal amount of alcohol is somewhere between 5 and 7 g/d, which translates into about the following every day: half a can of beer, half a glass of wine, or half a “shot” of spirits. This is clearly a common trait of all of the nonlinear functions that they generated. This is illustrated in the figure below, from the article.



As you can see from the curves above, a little bit of alcohol every day seems to have an acute effect on mortality reduction. And it seems that taking small doses every day is much better than taking the equivalent total amount at longer intervals; for instance, a whole week’s worth taken in a single sitting. This is suggested by other studies as well ().

The curves above do not clearly reflect a couple of problems with alcohol consumption. One is that alcohol seems to be treated by the body as a toxin, causing some harm and some good at the same time, with the good often being ascribed to hormesis (). Someone who is more sensitive to alcohol’s harmful effects, on the liver for example, may not benefit as much from its positive effects.

The curves are averages fitted through data points; once the curves are drawn, the points tend to be forgotten, even though they represent real people.

The other problem with alcohol is that most people who are introduced to it in highly urbanized areas (where most people live) tend to drink it because of its mood-altering effects. This leads to a major danger of addiction and abuse. And drinking a lot of alcohol is much worse than not drinking at all.

Interestingly, in traditional Mediterranean cultures where wine is consumed regularly, people generally frown upon drunkenness ().

Wednesday, July 24, 2019

Ketosis, methylglyoxal, and accelerated aging: Probably more fiction than fact

This is a follow-up to this post. Just to recap, an interesting hypothesis has been around for quite some time about a possible negative effect of ketosis. This hypothesis argues that ketosis leads to the production of an organic compound called methylglyoxal, which is believed to be a powerful agent in the formation of advanced glycation end-products (AGEs).

In vitro research, and research with animals (e.g., mice and cows), indeed suggests negative short-term effects of increased ketosis-induced methylglyoxal production. These studies typically deal with what appears to be severe ketosis, not the mild type induced in healthy people by very low carbohydrate diets.

However, the bulk of methylglyoxal is produced via glycolysis, a multi-step metabolic process that uses sugar to produce the body’s main energy currency – adenosine triphosphate (ATP). Ketosis is a state whereby ketones are used as a source of energy instead of glucose.

(Ketones also provide an energy source that is distinct from lipoprotein-bound fatty acids and albumin-bound free fatty acids. Those fatty acids appear to be the preferred vehicles for the use of dietary or body fat as a source of energy. Yet it seems that small amounts of ketones are almost always present in the blood, even if they do not show up in the urine.)

Thus it follows that ketosis is associated with reduced glycolysis and, consequently, reduced methylglyoxal production, since the bulk of this substance (i.e., methylglyoxal) is produced through glycolysis.

So, how can one argue that ketosis is “a recipe for accelerated AGEing”?

One guess is that ketosis is being confused with ketoacidosis, a pathological condition in which the level of circulating ketones can be as much as 40 to 80 times that found in ketosis. De Grey (2007) refers to “diabetic patients” when he talks about this possibility (i.e., the connection with accelerated AGEing), and ketoacidosis is an unfortunately common condition among those with uncontrolled diabetes.

A gentle body massage is relaxing, and thus health-promoting. Increase the pressure 40-fold, and the massage becomes a form of physical torture; certainly unhealthy. That does not mean that a gentle body massage is unhealthy.

Interestingly, ketoacidosis often happens together with hyperglycemia, so at least part of the damage associated with ketoacidosis is likely to be caused by high blood sugar levels. Ketosis, on the other hand, is not associated with hyperglycemia.

Finally, if ketosis led to accelerated AGEing to the same extent as, or worse than, chronic hyperglycemia does, where is the long-term evidence?

Since the late 1800s people have been experimenting with ketosis-inducing diets, and documenting the results. The Inuit and other groups have adopted ketosis-inducing diets for much longer, although evolution via selection might have played a role in these cases.

No one seems to have lived to be 150 years of age, but where are the reports of conditions akin to those caused by chronic hyperglycemia among the many who have gone “banting” in a stricter way since the late 1800s?

The arctic explorer Vilhjalmur Stefansson, who is reported to have lived much of his adult life in ketosis, died in 1962, in his early 80s. After reading about his life, few would disagree that he lived a rough life, with long periods without access to medical care. I doubt that Stefansson would have lived that long if he had suffered from untreated diabetes.

Severe ketosis, to the point of large amounts of ketones being present in the urine, may not be a natural state in which our Paleolithic ancestors lived most of the time. In modern humans, even a 24 h water fast, during an already low carbohydrate diet, may not induce ketosis of this type. Milder ketosis states, with slightly elevated concentrations of ketones showing up in blood tests, can be achieved much more easily.

In conclusion, the notion that ketosis causes accelerated aging to the same extent as chronic hyperglycemia seems more like fiction than fact.

Reference:

De Grey, A. (2007). Ending aging: The rejuvenation breakthroughs that could reverse human aging in our lifetime. New York, NY: St. Martin’s Press.

Sunday, June 23, 2019

Vitamin D production from UV radiation: The effects of total cholesterol and skin pigmentation

Our body naturally produces as much as 10,000 IU of vitamin D based on a few minutes of sun exposure when the sun is high. Getting that much vitamin D from dietary sources is very difficult, even after “fortification”.

The above refers to pre-sunburn exposure. Sunburn is not associated with increased vitamin D production; it is associated with skin damage and cancer.

Solar ultraviolet (UV) radiation is generally divided into two main types: UVB (wavelength: 280–320 nm) and UVA (320–400 nm). Vitamin D is produced primarily based on UVB radiation. Nevertheless, UVA is much more abundant, amounting to about 90 percent of the sun’s UV radiation.

UVA seems to cause the most skin damage, although there is some debate on this. If this is correct, one would expect skin pigmentation to be our body’s defense primarily against UVA radiation, not UVB radiation. If so, one’s ability to produce vitamin D based on UVB should not go down significantly as one’s skin becomes darker.

Also, vitamin D and cholesterol seem to be closely linked. Some argue that one is produced based on the other; others that they have the same precursor substance(s). Whatever the case may be, if vitamin D and cholesterol are indeed closely linked, one would expect low cholesterol levels to be associated with low vitamin D production based on sunlight.

Bogh et al. (2010) published a very interesting study; one of those studies that remain relevant as time goes by. The link to the study was provided by Ted Hutchinson in the comments section of another post on vitamin D. The study was published in a refereed journal with a solid reputation, the Journal of Investigative Dermatology.

The study by Bogh et al. (2010) is particularly interesting because it investigates a few issues on which there is a lot of speculation. Among the issues investigated are the effects of total cholesterol and skin pigmentation on the production of vitamin D from UVB radiation.

The figure below depicts the relationship between total cholesterol and vitamin D production based on UVB radiation. Vitamin D production is referred to as “delta 25(OH)D”. The univariate correlation is a fairly high and significant 0.51.


25(OH)D is the abbreviation for calcidiol, a prehormone that is produced in the liver based on vitamin D3 (cholecalciferol), and then converted in the kidneys into calcitriol, which is usually abbreviated as 1,25-(OH)2D3. The latter is the active form of vitamin D.
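As a side note on the statistic itself, the “univariate correlation” reported above is a plain Pearson correlation between the two variables. Below is a minimal Python sketch, using made-up numbers purely for illustration (not the study’s data).

import numpy as np
from scipy import stats

# Hypothetical paired observations: baseline total cholesterol (mg/dL)
# and the post-UVB rise in 25(OH)D ("delta 25(OH)D").
total_chol = np.array([152, 164, 178, 190, 201, 215, 229, 244, 260, 281])
delta_25ohd = np.array([9.0, 14.0, 11.0, 18.0, 15.0, 22.0, 19.0, 26.0, 21.0, 30.0])

# Univariate Pearson correlation and its p-value.
r, p = stats.pearsonr(total_chol, delta_25ohd)
print(f"r = {r:.2f}, p = {p:.4f}")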

The table below shows 9 columns; the most relevant ones are the last pair at the right. They are the delta 25(OH)D levels for individuals with dark and fair skin after exposure to the same amount of UVB radiation. The difference in vitamin D production between the two groups is statistically indistinguishable from zero.


So there you have it. According to this study, low total cholesterol seems to be associated with impaired ability to produce vitamin D from UVB radiation. And skin pigmentation appears to have little effect on the amount of vitamin D produced.

The study has a few weaknesses, as do almost all studies. For example, if you take a look at the second pair of columns from the right on the table above, you’ll notice that the baseline 25(OH)D is lower for individuals with dark skin. The difference was just short of being significant at the 0.05 level.

What is the problem with that? Well, one of the findings of the study was that lower baseline 25(OH)D levels were significantly associated with higher delta 25(OH)D levels. Still, the baseline difference does not seem to be large enough to fully explain the lack of difference in delta 25(OH)D levels for individuals with dark and fair skin.

A widely cited dermatology researcher, Antony Young, published an invited commentary on this study in the same journal issue (Young, 2010). The commentary points out some weaknesses in the study, but is generally favorable. The weaknesses include the use of small sub-samples.

References:

Bogh, M.K.B., Schmedes, A.V., Philipsen, P.A., Thieden, E., & Wulf, H.C. (2010). Vitamin D production after UVB exposure depends on baseline vitamin D and total cholesterol but not on skin pigmentation. Journal of Investigative Dermatology, 130(2), 546–553.

Young, A.R. (2010). Some light on the photobiology of vitamin D. Journal of Investigative Dermatology, 130(2), 346–348.

Monday, May 27, 2019

The theory of supercompensation: Strength training frequency and muscle gain

Moderate strength training has a number of health benefits, and is viewed by many as an important component of a natural lifestyle that approximates that of our Stone Age ancestors. It increases bone density and muscle mass, and improves a number of health markers. Done properly, it may decrease body fat percentage.

Generally one would expect some muscle gain as a result of strength training. Men seem to be keen on upper-body gains, while women appear to prefer lower-body gains. Yet, many people do strength training for years, and experience little or no muscle gain.

Paradoxically, those people, both men and women, experience major strength gains, especially in the first few months after they start a strength training program. However, those gains are due primarily to neural adaptations, and come without any significant gain in muscle mass. This can be frustrating, especially for men. Most men are after some noticeable muscle gain as a result of strength training. (Whether that is healthy is another story, especially as one gets to extremes.)

After the initial adaptation period of “beginner” gains, strength gains typically do not occur without muscle gains.

The culprits for the lack of anabolic response are often believed to be low levels of circulating testosterone and other hormones that seem to interact with testosterone to promote muscle growth, such as growth hormone. This leads many to resort to anabolic steroids, which are drugs that mimic the effects of androgenic hormones, such as testosterone. These drugs usually increase muscle mass, but have a number of negative short-term and long-term side effects.

There seems to be a better, less harmful, solution to the lack of anabolic response. Through my research on compensatory adaptation I often noticed that, under the right circumstances, people would overcompensate for obstacles posed to them. Strength training is a form of obstacle, which should generate overcompensation under the right circumstances. From a biological perspective, one would expect a similar phenomenon; a natural solution to the lack of anabolic response.

This solution is predicted by a theory that also explains a lack of anabolic response to strength training, and that unfortunately does not get enough attention outside the academic research literature. It is the theory of supercompensation, which is discussed in some detail in several high-quality college textbooks on strength training. (Unlike popular self-help books, these textbooks summarize peer-reviewed academic research, and also provide the references that are summarized.) One example is the excellent book by Zatsiorsky & Kraemer (2006) on the science and practice of strength training.

The figure below, from Zatsiorsky & Kraemer (2006), shows what happens during and after a strength training session. The level of preparedness could be seen as the load in the session, which is proportional to: the number of exercise sets, the weight lifted (or resistance overcome) in each set, and the number of repetitions in each set. The restitution period is essentially the recovery period, which must include plenty of rest and proper nutrition.


Note that toward the end there is a sideways S-like curve with a first stretch above the horizontal line and another below the line. The first stretch is the supercompensation stretch; a window in time (e.g., a 20-hour period). The horizontal line represents the baseline load, which can be seen as the baseline strength of the individual prior to the exercise session. This is where things get tricky. If one exercises again within the supercompensation stretch, strength and muscle gains will likely happen. (In men, noticeable upper-body muscle gain usually happens, because of higher levels of testosterone and of other hormones that seem to interact with testosterone.) Exercising outside the supercompensation time window may lead to no gain, or even to some loss, of both strength and muscle.

Timing strength training sessions correctly can over time lead to significant gains in strength and muscle (see middle graph in the figure below, also from Zatsiorsky & Kraemer, 2006). For that to happen, one has not only to regularly “hit” the supercompensation time window, but also progressively increase load. This must happen for each muscle group. Strength and muscle gains will occur up to a point, a point of saturation, after which no further gains are possible. Men who reach that point will invariably look muscular, in a more or less “natural” way depending on supplements and other factors. Some people seem to gain strength and muscle very easily; they are often called mesomorphs. Others are hard gainers, sometimes referred to as endomorphs (who tend to be fatter) and ectomorphs (who tend to be skinnier).


It is not easy to identify the ideal recovery and supercompensation periods. They vary from person to person. They also vary depending on types of exercise, numbers of sets, and numbers of repetitions. Nutrition also plays a role, and so do rest and stress. From an evolutionary perspective, it would seem to make sense to work all major muscle groups on the same day, and then do the same workout after a certain recovery period. (Our Stone Age ancestors did not do isolation exercises, such as bicep curls.) But this will probably make you look more like a strong hunter-gatherer than a modern bodybuilder.

To identify the supercompensation time window, one could employ a trial-and-error approach, by trying to repeat the same workout after different recovery times. Based on the literature, it would make sense to start at the 48-hour period (one full day of rest between sessions), and then move back and forth from there. A sign that one is hitting the supercompensation time window is becoming a little stronger at each workout, by performing more repetitions with the same weight (e.g., 10, from 8 in the previous session). If that happens, the weight should be incrementally increased in successive sessions. Most studies suggest that the best range for muscle gain is that of 6 to 12 repetitions in each set, but without enough time under tension gains will prove elusive.
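As an illustration of that trial-and-error bookkeeping, here is a small Python sketch. The decision rule follows the paragraph above: more repetitions at the same weight suggests the supercompensation window is being hit, so the load is nudged up; otherwise the recovery period, not the load, is what gets adjusted first. The 5 percent increment is an arbitrary illustrative choice, not a figure from the strength training literature.

def suggest_next_session(weight, reps_previous, reps_latest):
    """Suggest what to do next for one exercise, given two consecutive
    sessions performed with the same weight."""
    if reps_latest > reps_previous:
        # More reps at the same weight: likely hitting the supercompensation
        # window, so increase the load incrementally and keep the same timing.
        new_weight = round(weight * 1.05, 1)  # illustrative ~5% increment
        return f"increase the weight to about {new_weight}; keep the same recovery period"
    # No progression: the sessions are probably falling outside the window;
    # change the recovery period (starting near 48 hours) before changing the load.
    return "keep the weight; try a longer or shorter recovery period"

# Example from the text: 10 repetitions, up from 8 in the previous session.
print(suggest_next_session(60.0, 8, 10))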

The discussion above is not aimed at professional bodybuilders. There are a number of factors that can influence strength and muscle gain other than supercompensation. (Still, supercompensation seems to be a “biggie”.) Things get trickier over time with trained athletes, as returns on effort get progressively smaller. Even natural bodybuilders appear to benefit from different strategies at different levels of proficiency. For example, changing the workouts on a regular basis seems to be a good idea, and there is a science to doing that properly. See the “Interesting links” area of this web site for several more focused resources on strength training.

Reference:

Zatsiorsky, V., & Kraemer, W.J. (2006). Science and practice of strength training. Champaign, IL: Human Kinetics.

Sunday, April 28, 2019

Subcutaneous versus visceral fat: How to tell the difference?

The photos below, from Wikipedia, show two patterns of abdominal fat deposition. The one on the left shows predominantly subcutaneous abdominal fat deposition. The one on the right is an example of visceral abdominal fat deposition, around internal organs, together with a significant amount of subcutaneous fat deposition as well.


Body fat is not an inert mass used only to store energy. Body fat can be seen as a “distributed organ”, as it secretes a number of hormones into the bloodstream. For example, it secretes leptin, which regulates hunger. It secretes adiponectin, which has many health-promoting properties. It also secretes tumor necrosis factor-alpha (more recently referred to as simply “tumor necrosis factor” in the medical literature), which promotes inflammation. Inflammation is necessary to repair damaged tissue and deal with pathogens, but too much of it does more harm than good.

How does one differentiate subcutaneous from visceral abdominal fat?

Subcutaneous abdominal fat shifts position more easily as one’s body moves. When one is standing, it often tends to fold around the navel, creating a “mouth” shape. Subcutaneous fat is also easier to hold in one’s hand, as shown in the left photo above. Because subcutaneous fat shifts more easily as one changes the position of the body, if you measure your waist circumference lying down and standing up and the difference is large (a one-inch difference can be considered large), you probably have a significant amount of subcutaneous fat.

Waist circumference is a variable that reflects individual changes in body fat percentage fairly well. This is especially true as one becomes lean (e.g., around 14-17 percent or less of body fat for men, and 21-24 for women), because as that happens abdominal fat contributes to an increasingly higher proportion of total body fat. For people who are lean, a 1-inch reduction in waist circumference will frequently translate into a 2-3 percent reduction in body fat percentage. Having said that, waist circumference comparisons between individuals are often misleading. Waist-to-fat ratios tend to vary a lot among different individuals (like almost any trait). This means that someone with a 34-inch waist (measured at the navel) may have a lower body fat percentage than someone with a 33-inch waist.

Subcutaneous abdominal fat is hard to mobilize; that is, it is hard to burn through diet and exercise. This is why it is often called the “stubborn” abdominal fat. One reason for the difficulty in mobilizing subcutaneous abdominal fat is that the network of blood vessels in the area where this type of fat occurs is not as dense as it is in visceral fat. Another reason, related to the degree of vascularization, is that subcutaneous fat is farther away from the portal vein than visceral fat. As such, it has to travel a longer distance to reach the main “highway” that will take it to other tissues (e.g., muscle) for use as energy.

In terms of health, excess subcutaneous fat is not nearly as detrimental as excess visceral fat. Excess visceral fat typically happens together with excess subcutaneous fat, but not necessarily the other way around. For instance, sumo wrestlers frequently have excess subcutaneous fat, but little or no visceral fat. The more health-detrimental effect of excess visceral fat is probably related to its proximity to the portal vein, which amplifies the negative health effects of excessive pro-inflammatory hormone secretion. Those hormones reach a major transport “highway” rather quickly.

Even though excess subcutaneous body fat is more benign than excess visceral fat, excess body fat of any kind is unlikely to be health-promoting. From an evolutionary perspective, excess body fat impaired agile movement and decreased circulating adiponectin levels; the latter leading to a host of negative health effects. In modern humans, negative health effects may be much less pronounced with subcutaneous than visceral fat, but they will still occur.

Based on studies of isolated hunter-gatherers, it is reasonable to estimate “natural” body fat levels among our Stone Age ancestors, and thus optimal body fat levels in modern humans, to be around 6-13 percent in men and 14-20 percent in women.

If you think that being overweight probably protected some of our Stone Age ancestors during times of famine, here is one interesting factoid to consider. It would take over a month for a man weighing 150 lbs and with 10 percent body fat to die from starvation, and death would typically not be caused by too little body fat being left for use as a source of energy. In starvation, death is normally caused by heart failure, as the body slowly breaks down muscle tissue (including heart muscle) to maintain blood glucose levels.
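A rough back-of-the-envelope version of that factoid, in Python, using assumed round numbers (about 3,500 kcal per pound of adipose tissue and a reduced expenditure of about 1,700 kcal per day during starvation; both are common textbook approximations, not figures taken from this post):

# Assumed round numbers, for illustration only.
body_weight_lb = 150
body_fat_fraction = 0.10
kcal_per_lb_adipose = 3500   # approximate energy yield of adipose tissue
kcal_per_day = 1700          # expenditure drops as metabolism slows in starvation

fat_lb = body_weight_lb * body_fat_fraction            # 15 lb of body fat
days_on_fat_alone = fat_lb * kcal_per_lb_adipose / kcal_per_day
print(round(days_on_fat_alone), "days from fat reserves alone")  # roughly 30

Muscle catabolism supplies additional energy on top of that, which is consistent with survival stretching beyond a month, and with death ultimately coming from the breakdown of muscle rather than from running out of fat.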


Friday, March 22, 2019

Total cholesterol and cardiovascular disease: A U-curve relationship

The hypothesis that blood cholesterol levels are positively correlated with heart disease (the lipid hypothesis) dates back to Rudolf Virchow in the mid-1800s.

One famous study that supported this hypothesis was Ancel Keys's Seven Countries Study, conducted between the 1950s and 1970s. This study eventually served as the foundation on which much of the advice we receive today from doctors is based, even though several other studies published since then provide little support for the lipid hypothesis.

The graph below (from O Primitivo) shows the results of one study, involving many more countries than Keys's Seven Countries Study, that actually suggests a NEGATIVE linear correlation between total cholesterol and cardiovascular disease.


Now, most relationships in nature are nonlinear, with quite a few following a pattern that looks like a U-curve (plain or inverted); sometimes called a J-curve pattern. The graph below (also from O Primitivo) shows the U-curve relationship between total cholesterol and mortality, with cardiovascular disease mortality indicated through a dotted red line at the bottom.

This graph was obtained through a nonlinear analysis, and I think it provides a better picture of the relationship between total cholesterol (TC) and mortality. Based on this graph, the best TC range to be in is somewhere between 210 mg/dL, where cardiovascular disease mortality is minimized, and 220 mg/dL, where total mortality is minimized.

The total mortality curve is the one indicated through the full blue line at the top. In fact, it suggests that mortality increases sharply as TC decreases below 200.

Now, these graphs relate TC with disease and mortality, and say nothing about LDL cholesterol (LDL). In my own experience, and that of many people I know, a TC of about 200 will typically be associated with a slightly elevated LDL (e.g., 110 to 150), even if one has a high HDL cholesterol (i.e., greater than 60).

Yet, most people who have an LDL greater than 100 will be told by their doctors, usually with the best of intentions, to take statins so that they can "keep their LDL under control". (LDL levels are usually calculated, not measured directly, which itself creates a whole new set of problems.)
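The calculation labs usually rely on is the Friedewald equation, LDL = TC - HDL - TG/5 (all in mg/dL). Below is a minimal Python sketch of it; the 400 mg/dL triglyceride cutoff is the commonly cited limit beyond which the estimate is considered unreliable.

def friedewald_ldl(total_chol, hdl, triglycerides):
    """Estimate LDL cholesterol (mg/dL) as TC - HDL - TG/5 (Friedewald equation)."""
    if triglycerides > 400:
        # The estimate is generally considered unreliable above ~400 mg/dL.
        raise ValueError("triglycerides too high for a reliable Friedewald estimate")
    return total_chol - hdl - triglycerides / 5.0

# Example: TC of 200, HDL of 60, triglycerides of 100 -> estimated LDL of 120.
print(friedewald_ldl(200, 60, 100))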

Alas, reducing LDL to 100 or less will typically reduce TC to below 200. If we go by the graphs above, especially the one showing the U-curves, these folks' risk of cardiovascular disease and mortality will go up: exactly the opposite of the effect that they and their doctors expected. And that will cost them financially as well, as statin drugs are expensive, in part to pay for all those TV ads.