Sunday, November 24, 2019

The China Study II: Does calorie restriction increase longevity?

The idea that calorie restriction extends human life comes largely from studies of other species. The most relevant of those studies have been conducted with primates, in which animals fed a calorie-restricted diet have been shown to live longer and healthier lives than those allowed to eat as much as they want.

There are two main problems with many of the animal studies of calorie restriction. One is that, as natural lifespan decreases, it becomes progressively easier to experimentally obtain major relative lifespan extensions. (That is, it seems much easier to double the lifespan of an organism whose natural lifespan is one day than an organism whose natural lifespan is 80 years.) The second, and main problem in my mind, is that the studies often compare obese with lean animals.

Obesity clearly reduces lifespan in humans, but that is a different claim than the one that calorie restriction increases lifespan. It has often been claimed that Asian countries and regions where calorie intake is reduced display increased lifespan. And this may well be true, but the question remains as to whether this is due to calorie restriction increasing lifespan, or because the rates of obesity are much lower in countries and regions where calorie intake is reduced.

So, what can the China Study II data tell us about the hypothesis that calorie restriction increases longevity?

As it turns out, we can conduct a preliminary test of this hypothesis based on a key assumption. Let us say we compared two populations (e.g., counties in China) based on the following ratio: the number of deaths at or after age 70 divided by the number of deaths before age 70. Let us call this the “ratio of longevity” of a population, or RLONGEV. The assumption is that the population with the higher RLONGEV would be the population with the higher longevity of the two. The reason is that, as longevity goes up, one would expect to see a shift in death patterns, with progressively more people dying old and fewer people dying young.

The 1989 China Study II dataset has two variables that we can use to estimate RLONGEV. They are coded as M005 and M006, and refer to the mortality rates from 35 to 69 and 70 to 79 years of age, respectively. Unfortunately there is no variable for mortality after 79 years of age, which limits the scope of our results somewhat. (This does not totally invalidate the results because we are using a ratio as our measure of longevity, not the absolute number of deaths from 70 to 79 years of age.) Take a look at these two previous China Study II posts (here, and here) for other notes, most of which apply here as well. The notes are at the end of the posts.
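
As a rough illustration of the computation, here is a minimal Python sketch; the county names and rate values are made-up placeholders, not actual China Study II data, and the ratio of the two available rates is only a proxy for RLONGEV:

    # Estimating RLONGEV from the two available mortality variables.
    # M005 = mortality rate at ages 35-69; M006 = mortality rate at ages 70-79.
    # The values below are hypothetical placeholders for two counties.
    counties = {
        "county_A": {"M005": 12.0, "M006": 65.0},
        "county_B": {"M005": 15.0, "M006": 55.0},
    }

    for name, rates in counties.items():
        # Proxy for: deaths at or after age 70 divided by deaths before age 70.
        rlongev = rates["M006"] / rates["M005"]
        print(f"{name}: estimated RLONGEV = {rlongev:.2f}")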

All of the results reported here are from analyses conducted using WarpPLS. Below is a model with coefficients of association; it is a simple model, since the hypothesis that we are testing is also simple. (Click on it to enlarge. Use the "CTRL" and "+" keys to zoom in, and the "CTRL" and "-" keys to zoom out.) The arrows explore associations between variables, which are shown within ovals. The meaning of each variable is the following: TKCAL = total calorie intake per day; RLONGEV = ratio of longevity; SexM1F2 = sex, with 1 assigned to males and 2 to females.



As one would expect, being female is associated with increased longevity, but the association is just shy of being statistically significant in this dataset (beta=0.14; P=0.07). The association between total calorie intake and longevity is trivial, and statistically indistinguishable from zero (beta=-0.04; P=0.39). Moreover, even though this very weak association is overall negative (or inverse), the sign of the association here does not fully reflect the shape of the association. The shape is that of an inverted J-curve; a.k.a. U-curve. When we split the data into total calorie intake terciles we get a better picture:


The second tercile, which refers to a total daily calorie intake of 2193 to 2844 calories, is the one associated with the highest longevity. The first tercile (with the lowest range of calories) is associated with a higher longevity than the third tercile (with the highest range of calories). These results need to be viewed in context. The average weight in this dataset was about 116 lbs. A conservative estimate of the number of calories needed to maintain this weight without any physical activity would be about 1740. Add about 700 calories to that, for a reasonable and healthy level of physical activity, and you get 2440 calories needed daily for weight maintenance. That is right in the middle of the second tercile.
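
The arithmetic above can be made explicit in a short sketch. This is a back-of-the-envelope calculation, assuming the roughly 15 calories per pound sedentary maintenance figure implied by the numbers in this post:

    weight_lbs = 116        # average weight in the dataset
    kcal_per_lb = 15        # rough sedentary maintenance estimate
    activity_kcal = 700     # assumed cost of a reasonable level of physical activity

    sedentary = weight_lbs * kcal_per_lb        # about 1740 kcal/day
    maintenance = sedentary + activity_kcal     # about 2440 kcal/day

    second_tercile = (2193, 2844)
    print(maintenance, second_tercile[0] <= maintenance <= second_tercile[1])  # 2440 True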

In simple terms, the China Study II data seems to suggest that those who eat well, but not too much, live the longest. Those who eat little have slightly lower longevity. Those who eat too much seem to have the lowest longevity, perhaps because of the negative effects of excessive body fat.

Because these trends are all very weak from a statistical standpoint, we have to take them with caution. What we can say with more confidence is that the China Study II data does not seem to support the hypothesis that calorie restriction increases longevity.

Reference

Kock, N. (2019). WarpPLS User Manual: Version 6.0. Laredo, Texas: ScriptWarp Systems.

Notes

- The path coefficients (indicated as beta coefficients) reflect the strength of the relationships; they are a bit like standard univariate (or Pearson) correlation coefficients, except that they take into consideration multivariate relationships (they control for competing effects on each variable). Whenever nonlinear relationships were modeled, the path coefficients were automatically corrected by the software to account for nonlinearity.

- Only two data points per county were used (for males and females). This increased the sample size of the dataset without artificially reducing variance, which is desirable since the dataset is relatively small (each county, not each individual, is a separate data point in this dataset). This also allowed for the test of commonsense assumptions (e.g., the protective effects of being female), which is always a good idea in a multivariate analysis because violation of commonsense assumptions may suggest data collection or analysis error. On the other hand, it required the inclusion of a sex variable as a control variable in the analysis, which is no big deal.

- Mortality from schistosomiasis infection (MSCHIST) does not confound the results presented here. Only counties where no deaths from schistosomiasis infection were reported have been included in this analysis. The reason for this is that mortality from schistosomiasis infection can severely distort the results in the age ranges considered here. On the other hand, removal of counties with deaths from schistosomiasis infection reduced the sample size, and thus decreased the statistical power of the analysis.

Monday, October 21, 2019

Lipotoxicity or tired pancreas? Abnormal fat metabolism as a possible precondition for type 2 diabetes

The term “diabetes” is used to describe a wide range of diseases of glucose metabolism; diseases with a wide range of causes. The diseases include type 1 and type 2 diabetes, type 2 ketosis-prone diabetes (which I know exists thanks to Michael Barker’s blog), gestational diabetes, various MODY types, and various pancreatic disorders. The possible causes include genetic defects (or adaptations to very different past environments), autoimmune responses, exposure to environmental toxins, as well as viral and bacterial infections; in addition to obesity, and various other apparently unrelated factors, such as excessive growth hormone production.

Type 2 diabetes and the “tired pancreas” theory

Type 2 diabetes is the one most commonly associated with the metabolic syndrome, which is characterized by central obesity in middle age, and with the “diseases of civilization” brought about by Neolithic inventions. Evidence is mounting that a Neolithic diet and lifestyle play a key role in the development of the metabolic syndrome. In terms of diet, major suspects are engineered foods rich in refined carbohydrates and refined sugars. In this context, one widely touted idea is that the constant insulin spikes caused by consumption of those foods lead the pancreas (figure below from Wikipedia) to get “tired” over time, losing its ability to produce insulin. The onset of insulin resistance mediates this effect.



Empirical evidence against the “tired pancreas” theory

This “tired pancreas” theory, which refers primarily to the insulin-secreting beta-cells in the pancreas, conflicts with a lot of empirical evidence. It is inconsistent with the existence of isolated semi/full hunter-gatherer groups (e.g., the Kitavans) that consume large amounts of natural (i.e., unrefined) foods rich in easily digestible carbohydrates from tubers and fruits, which cause insulin spikes. These groups are nevertheless generally free from type 2 diabetes. The “tired pancreas” theory conflicts with the existence of isolated groups in China and Japan (e.g., the Okinawans) whose diets also include a large proportion of natural foods rich in easily digestible carbohydrates, which cause insulin spikes. Yet these groups are generally free from type 2 diabetes.

Humboldt (1995), in his personal narrative of his journey to the “equinoctial regions of the new continent”, states on page 121 about the natives as a group that: "… between twenty and fifty years old, age is not indicated by wrinkling skin, white hair or body decrepitude [among natives]. When you enter a hut is hard to differentiate a father from son …" A large proportion of these natives’ diets included plenty of natural foods rich in easily digestible carbohydrates from tubers and fruits, which cause insulin spikes. Still, there was no sign of any condition that would suggest a prevalence of type 2 diabetes among them.

At this point it is important to note that the insulin spikes caused by natural carbohydrate-rich foods are much less pronounced than the ones caused by refined carbohydrate-rich foods. The reason is that there is a huge gap between the glycemic loads of natural and refined carbohydrate-rich foods, even though the glycemic indices may be quite similar in some cases. Natural carbohydrate-rich foods are not made mostly of carbohydrates. Even an Irish (or white) potato is 75 percent water.
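
The gap can be made concrete with the standard glycemic load formula: GL = GI x grams of available carbohydrate per serving / 100. Below is a minimal sketch; the GI and carbohydrate figures are approximate and for illustration only:

    def glycemic_load(gi, carb_grams):
        # Standard definition: GL = GI * available carbohydrate (g) / 100.
        return gi * carb_grams / 100.0

    # Approximate, illustrative values per 100 g serving. A boiled potato is
    # mostly water, so it carries far fewer carbohydrates per serving than a
    # refined-flour product, even at a similar glycemic index.
    print(glycemic_load(gi=78, carb_grams=17))  # potato: GL around 13
    print(glycemic_load(gi=72, carb_grams=50))  # bagel: GL around 36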

More insulin may lead to abnormal fat metabolism in sedentary people

The more pronounced spikes may lead to abnormal fat metabolism because more body fat is force-stored than it would have been with the less pronounced spikes, and stored body fat is not released just as promptly as it should be to fuel muscle contractions and other metabolic processes. Typically this effect is a minor one on a daily basis, but adds up over time, leading to fairly unnatural patterns of fat metabolism in the long run. This is particularly true for those who lead sedentary lifestyles. As for obesity, nobody gets obese in one day. So the key problem with the more pronounced spikes may not be that the pancreas is getting “tired”, but that body fat metabolism is not normal, which in turn leads to abnormally high or low levels of important body fat-derived hormones (e.g., high levels of leptin and low levels of adiponectin).

One common characteristic of the groups mentioned above is absence of obesity, even though food is abundant and often physical activity is moderate to low. Repeat for emphasis: “… even though food is abundant and often physical activity is moderate to low”. Note that having low levels of activity is not the same as spending the whole day sitting down in a comfortable chair working on a computer. Obviously caloric intake and level of activity among these groups were/are not at the levels that would lead to obesity. How could that be possible? See this post for a possible explanation.

Excessive body fat gain, lipotoxicity, and type 2 diabetes

There are a few theories that implicate the interaction of abnormal fat metabolism with other factors (e.g., genetic factors) in the development of type 2 diabetes. Empirical evidence suggests that this is a reasonable direction of causality. One of these theories is the theory of lipotoxicity.

Several articles have discussed the theory of lipotoxicity. The article by Unger & Zhou (2001) is a widely cited one. The theory seems to be widely based on the comparative study of various genotypes found in rats. Nevertheless, there is mounting evidence suggesting that the underlying mechanisms may be similar in humans. In a nutshell, this theory proposes the following steps in the development of type 2 diabetes:

    (1) Abnormal fat mass gain leads to an abnormal increase in fat-derived hormones, of which leptin is singled out by the theory. Some people seem to be more susceptible than others in this respect, with lower triggering thresholds of fat mass gain. (What leads to exaggerated fat mass gains? The theory does not go into much detail here, but empirical evidence from other studies suggests that major culprits are refined grains and seeds, as well as refined sugars; other major culprits seem to be trans fats, and vegetable oils rich in linoleic acid.)

    (2) Resistance to fat-derived hormones sets in. Again, leptin resistance is singled out as the key here. (This is a bit simplistic. Other fat-derived hormones, like adiponectin, seem to clearly interact with leptin.) Since leptin regulates fatty acid metabolism, the theory argues, leptin resistance is hypothesized to impair fatty acid metabolism.

    (3) Impaired fat metabolism causes fatty acids to “spill over” to tissues other than fat cells, and also causes an abnormal increase in a substance called ceramide in those tissues. These include tissues in the pancreas that house beta-cells, which secrete insulin. In short, body fat should be stored in fat cells (adipocytes), not outside them.

    (4) Initially fatty acid “spill over” to beta-cells enlarges them and makes them become overactive, leading to excessive insulin production in response to carbohydrate-rich foods, and also to insulin resistance. This is the pre-diabetic phase where hypoglycemic episodes happen a few hours following the consumption of carbohydrate-rich foods. Once this stage is reached, several natural carbohydrate-rich foods also become a problem (e.g., potatoes and bananas), in addition to refined carbohydrate-rich foods.

    (5) Abnormal levels of ceramide induce beta-cell apoptosis in the pancreas. This is essentially “death by suicide” of beta cells in the pancreas. What follows is full-blown type 2 diabetes. Insulin production is impaired, leading to very elevated blood glucose levels following the consumption of carbohydrate-rich foods, even if they are unprocessed.

It is widely known that type 2 diabetics have impaired glucose metabolism. What is not so widely known is that usually they also have impaired fatty acid metabolism. For example, consumption of the same fatty meal is likely to lead to significantly more elevated triglyceride levels in type 2 diabetics than non-diabetics, after several hours. This is consistent with the notion that leptin resistance precedes type 2 diabetes, and inconsistent with the “tired pancreas” theory.

Weak and strong points of the theory of lipotoxicity

A weakness of the theory of lipotoxicity is its strong lipophobic tone; at least in the articles that I have read. There is ample evidence that eating a lot of the ultra-demonized saturated fat, per se, is not what makes people obese or type 2 diabetic. Yet overconsumption of trans fats and vegetable oils rich in linoleic acid does seem to be linked with obesity and type 2 diabetes. (So does the consumption of refined grains and seeds, and refined sugars.) The theory of lipotoxicity does not seem to make these distinctions.

In defense of the theory of lipotoxicity, it does not argue that there cannot be thin diabetics. Many type 1 diabetics are thin. Type 2 diabetics can also be thin, although this is much less common. In certain individuals, the threshold of body fat gain that will precipitate lipotoxicity may be quite low. In others, the same amount of body fat gain (or more) may in fact increase their insulin sensitivity under certain circumstances – e.g., when growth hormone levels are abnormally low.

Autoimmune disorders, perhaps induced by environmental toxins, or toxins found in certain refined foods, may cause the immune system to attack the beta-cells in the pancreas. This may lead to type 1 diabetes if all beta cells are destroyed, or something that can easily be diagnosed as type 2 (or type 1.5) diabetes if only a portion of the cells are destroyed, in a way that does not involve lipotoxicity.

Nor does the theory of lipotoxicity predict that all those who become obese will develop type 2 diabetes. It only suggests that the probability will go up, particularly if other factors are present (e.g., genetic propensity). There are many people who are obese during most of their adult lives and never develop type 2 diabetes. On the other hand, some groups, like Hispanics, tend to develop type 2 diabetes more easily (often even before they reach the obese level). One only has to visit the South Texas region near the Rio Grande border to see this first hand.

What the theory proposes is a new way of understanding the development of type 2 diabetes; a way that seems to make more sense than the “tired pancreas” theory. The theory of lipotoxicity may not be entirely correct. For example, there may be other mechanisms associated with abnormal fat metabolism and consumption of Neolithic foods that cause beta-cell “suicide”, and that have nothing to do with lipotoxicity as proposed by the theory. (At least one fat-derived hormone, tumor necrosis factor-alpha, is associated with abnormal cell apoptosis when abnormally elevated. Levels of this hormone go up immediately after a meal rich in refined carbohydrates.) But the link that it proposes between obesity and type 2 diabetes seems to be right on target.

Implications and thoughts

Some implications and thoughts based on the discussion above are the following. Some are extrapolations based on the discussion in this post combined with those in other posts. At the time of this writing, there were hundreds of posts on this blog, in addition to many comments stemming from over 2.5 million page views. See under "Labels" at the bottom-right area of this blog for a summary of topics addressed. It is hard to ignore things that were brought to light in previous posts.

    - Let us start with a big one: Avoiding natural carbohydrate-rich foods in the absence of compromised glucose metabolism is unnecessary. Those foods do not “tire” the pancreas significantly more than protein-rich foods do. While carbohydrates are not essential macronutrients, protein is. In the absence of carbohydrates, protein will be used by the body to produce glucose to supply the needs of the brain and red blood cells. Protein elicits an insulin response that is comparable to that of natural carbohydrate-rich foods on a gram-adjusted basis (but significantly lower than that of refined carbohydrate-rich foods, like doughnuts and bagels). Usually protein does not lead to a measurable glucose response because glucagon is secreted together with insulin in response to ingestion of protein, preventing hypoglycemia.

    - Abnormal fat gain should be used as a general measure of one’s likelihood of being “headed south” in terms of health. The “fitness” levels for men and women shown on the table in this post seem like good targets for body fat percentage. The problem here, of course, is that this is not as easy as it sounds. Attempts at getting lean can lead to poor nutrition and/or starvation. These may make matters worse in some cases, leading to hormonal imbalances and uncontrollable hunger, which will eventually lead to obesity. Poor nutrition may also depress the immune system, making one susceptible to a viral or bacterial infection that may end up leading to beta-cell destruction and diabetes. A better approach is to place emphasis on eating a variety of natural foods, which are nutritious and satiating, and avoiding refined ones, which are often addictive “empty calories”. Generally fat loss should be slow to be healthy and sustainable.

    - Finally, if glucose metabolism is compromised, one should avoid any foods in quantities that cause an abnormally elevated glucose or insulin response. All one needs is an inexpensive glucose meter to find out what those foods are. The following are indications of abnormally elevated glucose and insulin responses, respectively: an abnormally high glucose level 1 hour after a meal (postprandial hyperglycemia); and an abnormally low glucose level 2 to 4 hours after a meal (reactive hypoglycemia). What is abnormally high or low? Take a look at the peaks and troughs shown on the graph in this post; they should give you an idea. Some insulin resistant people using glucose meters will probably realize that they can still eat several natural carbohydrate-rich foods, but in small quantities, because those foods usually have a low glycemic load (even if their glycemic index is high).
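
For those who like to log their meter readings, below is a minimal sketch of the two checks just described. The cutoffs are illustrative assumptions, not clinical guidance; they should be calibrated against one's own baseline and the graph mentioned above:

    def flag_responses(glucose_1h, glucose_2to4h, high_cutoff=140, low_cutoff=70):
        # Cutoffs in mg/dL; illustrative assumptions only.
        flags = []
        if glucose_1h > high_cutoff:
            flags.append("possible postprandial hyperglycemia")
        if glucose_2to4h < low_cutoff:
            flags.append("possible reactive hypoglycemia")
        return flags or ["no abnormal response flagged"]

    print(flag_responses(glucose_1h=165, glucose_2to4h=62))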

Lucy was a vegetarian and Sapiens an omnivore. We apparently have not evolved to be pure carnivores, even though we can be if the circumstances require. But we absolutely have not evolved to eat many of the refined and industrialized foods available today, not even the ones marketed as “healthy”. Those foods do not make our pancreas “tired”. Among other things, they “mess up” fat metabolism, which may lead to type 2 diabetes through a complex process involving hormones secreted by body fat.

References

Humboldt, A.V. (1995). Personal narrative of a journey to the equinoctial regions of the new continent. New York, NY: Penguin Books.

Unger, R.H., & Zhou, Y.-T. (2001). Lipotoxicity of beta-cells in obesity and in other causes of fatty acid spillover. Diabetes, 50(1), S118-S121.

Sunday, September 22, 2019

How long does it take for a food-related trait to evolve?

Often in discussions about Paleolithic nutrition, and books on the subject, we see speculations about how long it would take for a population to adapt to a particular type of food. Many speculations are way off the mark; some assume that even 10,000 years are not enough for evolution to take place.

This post addresses the question: How long does it take for a food-related trait to evolve?

We need a bit of Genetics 101 first, discussed below. For more details see, e.g., Hartl & Clark, 2007; and one of my favorites: Maynard Smith, 1998. Full references are provided at the end of this post.

New gene-induced traits, including traits that affect nutrition, appear in populations through a deceptively simple process. A new genetic mutation appears in the population, usually in one single individual, and one of two things happens: (a) the genetic mutation disappears from the population; or (b) the genetic mutation spreads in the population. Evolution is a term that is generally used to refer to a gene-induced trait spreading in a population.

Traits can evolve via two main processes. One is genetic drift, where neutral traits evolve by chance. This process dominates in very small populations (e.g., 50 individuals). The other is selection, where fitness-enhancing traits evolve by increasing the reproductive success of the individuals that possess them. Fitness, in this context, is measured as the number of surviving offspring (or grand-offspring) of an individual.

Yes, traits can evolve by chance, and often do so in small populations.

Say a group of 20 human ancestors became isolated for some reason; e.g., traveled to an island and got stranded there. Let us assume that the group had the common sense of including at least a few women in it; ideally more than men, because women are really the reproductive bottleneck of any population.

In a new generation one individual develops a sweet tooth, which is a neutral mutation because the island has no supermarket. Or, what would be more likely, one of the 20 individuals already had that mutation prior to reaching the island. (Genetic variability is usually high among any group of unrelated individuals, so divergent neutral mutations are usually present.)

By chance alone, that new trait may spread to the whole (larger now) population in 80 generations, or around 1,600 years; assuming a new generation emerging every 20 years. That whole population then grows even further, and gets somewhat mixed up with other groups in a larger population (they find a way out of the island). The descendants of the original island population all have a sweet tooth. That leads to increased diabetes among them, compared with other groups. They find out that the problem is genetic, and wonder how evolution could have made them like that.

The panel below shows the formulas for the calculation of the amount of time it takes for a trait to evolve to fixation in a population. It is taken from a set of slides I used in a presentation (PowerPoint file here). To evolve to fixation means to spread to all individuals in the population. The results of some simulations are also shown. For example, a trait that provides a minute selective advantage of 1% in a population of 10,000 individuals will possibly evolve to fixation in 1,981 generations, or 39,614 years. Not the millions of years often mentioned in discussions about evolution.


I say “possibly” above because traits can also disappear from a population by chance, and often do so at the early stages of evolution, even if they increase the reproductive success of the individuals that possess them. For example, a new beneficial metabolic mutation appears, but its host fatally falls off a cliff by accident, contracts an unrelated disease and dies etc., before leaving any descendant.
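
For those who prefer code to a panel of formulas, the standard approximations behind these numbers can be written as a minimal Python sketch. The two expressions below (about 4N generations to fixation for a neutral trait under drift, and about (2/s)ln(2N) generations for a trait with selective advantage s) reproduce the figures used in this post:

    import math

    def drift_fixation_generations(n):
        # Neutral mutation that does reach fixation: roughly 4N generations.
        return 4 * n

    def selection_fixation_generations(n, s):
        # Advantageous mutation: roughly (2/s) * ln(2N) generations.
        return (2.0 / s) * math.log(2 * n)

    years_per_generation = 20
    # Island example: population of 20, neutral "sweet tooth" mutation.
    print(drift_fixation_generations(20) * years_per_generation)               # 1600 years
    # Selective advantage of 1% in a population of 10,000.
    print(selection_fixation_generations(10000, 0.01))                         # ~1981 generations
    print(selection_fixation_generations(10000, 0.01) * years_per_generation)  # ~39,614 years
    # Selective advantage of 100% in a population of 10,000.
    print(selection_fixation_generations(10000, 1.0) * years_per_generation)   # ~396 years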

How come the fossil record suggests that evolution usually takes millions of years? The reason is that it usually takes a long time for new fitness-enhancing traits to appear in a population. Most genetic mutations are either neutral or detrimental, in terms of reproductive success. It also takes time for the right circumstances to come into place for genetic drift to happen – e.g., massive extinctions, leaving a few surviving members. Once the right elements are in place, evolution can happen fast.

So, what is the implication for traits that affect nutrition? Or, more specifically, can a population that starts consuming a particular type of food evolve to become adapted to it in a short period of time?

The answer is yes. And that adaptation can take a very short amount of time to happen, relatively speaking.

Let us assume that all members of an isolated population start on a particular diet, which is not the optimal diet for them. The exception is one single lucky individual that has a special genetic mutation, and for whom the diet is either optimal or quasi-optimal. Let us also assume that the mutation leads the individual and his or her descendants to have, on average, twice as many surviving children as other unrelated individuals. That translates into a selective advantage (s) of 100%. Finally, let us conservatively assume that the population is relatively large, with 10,000 individuals.

In this case, the mutation will spread to the entire population in approximately 396 years.

Descendants of individuals in that population (e.g., descendants of the Yanomamö) may possess the trait, even after some fair mixing with descendants of other populations, because a trait that goes into fixation has a good chance of being associated with dominant alleles. (Alleles are the different variants of the same gene.)

This Excel spreadsheet (link to a .xls file) is for those who want to play a bit with numbers, using the formulas above, and perhaps speculate about what they could have inherited from their not so distant ancestors. Download the file, and open it with Excel or a compatible spreadsheet system. The formulas are already there; change only the cells highlighted in yellow.

References:

Hartl, D.L., & Clark, A.G. (2007). Principles of population genetics. Sunderland, MA: Sinauer Associates.

Maynard Smith, J. (1998). Evolutionary genetics. New York, NY: Oxford University Press.

Monday, August 26, 2019

How much alcohol is optimal? Maybe less than you think

I have been regularly recommending that users of the software HCE include a column in their health data reflecting their alcohol consumption. Why? Because I suspect that alcohol consumption is behind many of what we call the “diseases of affluence”.

A while ago I recall watching an interview with a centenarian, a very lucid woman. When asked about her “secret” to live a long life, she said that she added a little bit of whiskey to her coffee every morning. It was something like a tablespoon of whiskey, or about 15 g, which amounted to approximately 6 g of ethanol every single day.

Well, she might have been drinking very close to the optimal amount of alcohol per day for the average person, if the study reviewed in this post is correct.

Studies of the effect of alcohol consumption on health generally show results in terms of averages within fixed ranges of consumption. For example, they will show average mortality risks for people consuming 1, 2, 3 etc. drinks per day. These studies suggest that there is a J-curve relationship between alcohol consumption and health. That is, drinking a little is better than not drinking; and drinking a lot is worse than drinking a little.

However, using “rough” ranges of 1, 2, 3 etc. drinks per day prevents those studies from getting to a more fine-grained picture of the beneficial effects of alcohol consumption.

Contrary to popular belief, the positive health effects of moderate alcohol consumption have little, if anything, to do with polyphenols such as resveratrol. Resveratrol, once believed to be the fountain of youth, is found in the skin of red grapes.

It is in fact the alcohol content that has positive effects, apparently reducing the incidence of coronary heart disease, diabetes, hypertension, congestive heart failure, stroke, dementia, Raynaud’s phenomenon, and all-cause mortality. Raynaud's phenomenon is associated with poor circulation in the extremities (e.g., toes, fingers), which in some cases can progress to gangrene.

In most studies of the effects of alcohol consumption on health, the J-curves emerge from visual inspection of the plots of averages across ranges of consumption. Rarely do you find studies where nonlinear relationships are “discovered” by software tools such as WarpPLS, with effects being adjusted accordingly.

You do find, however, some studies that fit reasonably justified functions to the data. Di Castelnuovo and colleagues’ study, published in JAMA Internal Medicine in 2006, is probably the most widely cited among these studies. This study is a meta-analysis; i.e., a study that builds on various other empirical studies.

I think that the journal in which this study appeared was formerly known as Archives of Internal Medicine, a fairly selective and prestigious journal, even though this did not seem to be reflected in its Wikipedia article at the time of this writing.

What Di Castelnuovo and colleagues found is interesting. They fitted a bunch of nonlinear functions to the data, all with J-curve shapes. The results suggest a lot of variation in the maximum amount one can drink before mortality becomes higher than not drinking at all; that maximum amount ranges from about 4 to 6 drinks per day.

But there is little variation in one respect. The optimal amount of alcohol is somewhere between 5 and 7 g/d, which translates into about the following every day: half a can of beer, half a glass of wine, or half a “shot” of spirit. This is clearly a common trait of all of the nonlinear functions that they generated. This is illustrated in the figure below, from the article.



As you can see from the curves above, a little bit of alcohol every day seems to have a marked effect on mortality reduction. And it seems that taking small doses every day is much better than taking the equivalent dose over a longer period of time; for instance, the equivalent per week, taken once a week. This is suggested by other studies as well.
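
To illustrate how an optimal dose falls out of a fitted J-curve, here is a minimal sketch. The quadratic functional form and the data points are made up for illustration; they are not the functions or data used by Di Castelnuovo and colleagues:

    import numpy as np

    # Hypothetical (dose, relative risk) points with a J-curve shape;
    # dose in grams of ethanol per day, risk relative to non-drinkers.
    dose = np.array([0, 5, 10, 20, 30, 40, 50], dtype=float)
    rr = np.array([1.00, 0.80, 0.84, 0.95, 1.08, 1.22, 1.40])

    # Fit a quadratic: rr ~ a*dose**2 + b*dose + c.
    a, b, c = np.polyfit(dose, rr, 2)

    # The nadir (lowest-risk dose) of a quadratic is at -b / (2a).
    print("lowest-risk dose (g/day):", -b / (2 * a))  # ~7.6 with these made-up points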

The curves above do not clearly reflect a couple of problems with alcohol consumption. One is that alcohol seems to be treated by the body as a toxin, which causes some harm and some good at the same time, the good being often ascribed to hormesis. Someone who is more sensitive to alcohol’s harmful effects, on the liver for example, may not benefit as much from its positive effects.

The curves are also averages fitted through data points; once the curves are drawn, the individual points tend to be forgotten, even though they stand for real people.

The other problem with alcohol is that most people who are introduced to it in highly urbanized areas (where most people live) tend to drink it because of its mood-altering effects. This leads to a major danger of addiction and abuse. And drinking a lot of alcohol is much worse than not drinking at all.

Interestingly, in traditional Mediterranean cultures where wine is consumed regularly, people generally tend to frown upon drunkenness.

Wednesday, July 24, 2019

Ketosis, methylglyoxal, and accelerated aging: Probably more fiction than fact

This is a follow up on this post. Just to recap, an interesting hypothesis has been around for quite some time about a possible negative effect of ketosis. This hypothesis argues that ketosis leads to the production of an organic compound called methylglyoxal, which is believed to be a powerful agent in the formation of advanced glycation endproducts (AGEs).

In vitro research, and research with animals (e.g., mice and cows), indeed suggests negative short-term effects of increased ketosis-induced methylglyoxal production. These studies typically deal with what appears to be severe ketosis, not the mild type induced in healthy people by very low carbohydrate diets.

However, the bulk of methylglyoxal is produced via glycolysis, a multi-step metabolic process that uses sugar to produce the body’s main energy currency – adenosine triphosphate (ATP). Ketosis is a state whereby ketones are used as a source of energy instead of glucose.

(Ketones also provide an energy source that is distinct from lipoprotein-bound fatty acids and albumin-bound free fatty acids. Those fatty acids appear to be the preferred vehicles for the use of dietary or body fat as a source of energy. Yet it seems that small amounts of ketones are almost always present in the blood, even if they do not show up in the urine.)

Thus it follows that ketosis is associated with reduced glycolysis and, consequently, reduced methylglyoxal production, since the bulk of this substance (i.e., methylglyoxal) is produced through glycolysis.

So, how can one argue that ketosis is “a recipe for accelerated AGEing”?

One guess is that ketosis is being confused with ketoacidosis, a pathological condition in which the level of circulating ketones can be as much as 40 to 80 times that found in ketosis. De Grey (2007) refers to “diabetic patients” when he talks about this possibility (i.e., the connection with accelerated AGEing), and ketoacidosis is an unfortunately common condition among those with uncontrolled diabetes.

A gentle body massage is relaxing, and thus health-promoting. Increase the pressure 40-fold, and the massage becomes a form of physical torture; certainly unhealthy. That does not mean that a gentle body massage is unhealthy.

Interestingly, ketoacidosis often happens together with hyperglycemia, so at least part of the damage associated with ketoacidosis is likely to be caused by high blood sugar levels. Ketosis, on the other hand, is not associated with hyperglycemia.

Finally, if ketosis led to accelerated AGEing to the same extent as, or worse than, chronic hyperglycemia does, where is the long-term evidence?

Since the late 1800s people have been experimenting with ketosis-inducing diets, and documenting the results. The Inuit and other groups have adopted ketosis-inducing diets for much longer, although evolution via selection might have played a role in these cases.

No one seems to have lived to be 150 years of age, but where are the reports of conditions akin to those caused by chronic hyperglycemia among the many that went “banting” in a more strict way since the late 1800s?

The arctic explorer Vilhjalmur Stefansson, who is reported to have lived much of his adult life in ketosis, died in 1962, in his early 80s. After reading about his life, few would disagree that he lived a rough life, with long periods without access to medical care. I doubt that Stefansson would have lived that long if he had suffered from untreated diabetes.

Severe ketosis, to the point of large amounts of ketones being present in the urine, may not be a natural state in which our Paleolithic ancestors lived most of the time. In modern humans, even a 24 h water fast, during an already low carbohydrate diet, may not induce ketosis of this type. Milder ketosis states, with slightly elevated concentrations of ketones showing up in blood tests, can be achieved much more easily.

In conclusion, the notion that ketosis causes accelerated aging to the same extent as chronic hyperglycemia seems more like fiction than fact.

Reference:

De Grey, A. (2007). Ending aging: The rejuvenation breakthroughs that could reverse human aging in our lifetime. New York, NY: St. Martin’s Press.

Sunday, June 23, 2019

Vitamin D production from UV radiation: The effects of total cholesterol and skin pigmentation

Our body naturally produces as much as 10,000 IU of vitamin D based on a few minutes of sun exposure when the sun is high. Getting that much vitamin D from dietary sources is very difficult, even after “fortification”.

The above refers to pre-sunburn exposure. Sunburn is not associated with increased vitamin D production; it is associated with skin damage and cancer.

Solar ultraviolet (UV) radiation is generally divided into two main types: UVB (wavelength: 280–320 nm) and UVA (320–400 nm). Vitamin D is produced primarily based on UVB radiation. Nevertheless, UVA is much more abundant, amounting to about 90 percent of the sun’s UV radiation.

UVA seems to cause the most skin damage, although there is some debate on this. If this is correct, one would expect skin pigmentation to be our body’s defense primarily against UVA radiation, not UVB radiation. If so, one’s ability to produce vitamin D based on UVB should not go down significantly as one’s skin becomes darker.

Also, vitamin D and cholesterol seem to be closely linked. Some argue that one is produced based on the other; others that they have the same precursor substance(s). Whatever the case may be, if vitamin D and cholesterol are indeed closely linked, one would expect low cholesterol levels to be associated with low vitamin D production based on sunlight.

Bogh et al. (2010) published a very interesting study; one of those studies that remain relevant as time goes by. The link to the study was provided by Ted Hutchinson in the comments section of another post on vitamin D. The study was published in a refereed journal with a solid reputation, the Journal of Investigative Dermatology.

The study by Bogh et al. (2010) is particularly interesting because it investigates a few issues on which there is a lot of speculation. Among the issues investigated are the effects of total cholesterol and skin pigmentation on the production of vitamin D from UVB radiation.

The figure below depicts the relationship between total cholesterol and vitamin D production based on UVB radiation. Vitamin D production is referred to as “delta 25(OH)D”. The univariate correlation is a fairly high and significant 0.51.


25(OH)D is the abbreviation for calcidiol, a prehormone that is produced in the liver based on vitamin D3 (cholecalciferol), and then converted in the kidneys into calcitriol, which is usually abbreviated as 1,25-(OH)2D3. The latter is the active form of vitamin D.

The table below shows 9 columns; the most relevant ones are the last pair at the right. They are the delta 25(OH)D levels for individuals with dark and fair skin after exposure to the same amount of UVB radiation. The difference in vitamin D production between the two groups is statistically indistinguishable from zero.


So there you have it. According to this study, low total cholesterol seems to be associated with an impaired ability to produce vitamin D from UVB radiation. And skin pigmentation appears to have little effect on the amount of vitamin D produced.

The study has a few weaknesses, as do almost all studies. For example, if you take a look at the second pair of columns from the right on the table above, you’ll notice that the baseline 25(OH)D is lower for individuals with dark skin. The difference was just short of being significant at the 0.05 level.

What is the problem with that? Well, one of the findings of the study was that lower baseline 25(OH)D levels were significantly associated with higher delta 25(OH)D levels. Still, the baseline difference does not seem to be large enough to fully explain the lack of difference in delta 25(OH)D levels for individuals with dark and fair skin.

A widely cited dermatology researcher, Antony Young, published an invited commentary on this study in the same journal issue (Young, 2010). The commentary points out some weaknesses in the study, but is generally favorable. The weaknesses include the use of small sub-samples.

References

Bogh, M.K.B., Schmedes, A.V., Philipsen, P.A., Thieden, E., & Wulf, H.C. (2010). Vitamin D production after UVB exposure depends on baseline vitamin D and total cholesterol but not on skin pigmentation. Journal of Investigative Dermatology, 130(2), 546–553.

Young, A.R. (2010). Some light on the photobiology of vitamin D. Journal of Investigative Dermatology, 130(2), 346–348.

Monday, May 27, 2019

The theory of supercompensation: Strength training frequency and muscle gain

Moderate strength training has a number of health benefits, and is viewed by many as an important component of a natural lifestyle that approximates that of our Stone Age ancestors. It increases bone density and muscle mass, and improves a number of health markers. Done properly, it may decrease body fat percentage.

Generally one would expect some muscle gain as a result of strength training. Men seem to be keen on upper-body gains, while women appear to prefer lower-body gains. Yet, many people do strength training for years, and experience little or no muscle gain.

Paradoxically, those people experience major strength gains, both men and women, especially in the first few months after they start a strength training program. However, those gains are due primarily to neural adaptations, and come without any significant gain in muscle mass. This can be frustrating, especially for men. Most men are after some noticeable muscle gain as a result of strength training. (Whether that is healthy is another story, especially as one gets to extremes.)

After the initial adaptation period, of “beginner” gains, typically no strength gains occur without muscle gains.

The culprits for the lack of anabolic response are often believed to be low levels of circulating testosterone and other hormones that seem to interact with testosterone to promote muscle growth, such as growth hormone. This leads many to resort to anabolic steroids, which are drugs that mimic the effects of androgenic hormones, such as testosterone. These drugs usually increase muscle mass, but have a number of negative short-term and long-term side effects.

There seems to be a better, less harmful, solution to the lack of anabolic response. Through my research on compensatory adaptation I often noticed that, under the right circumstances, people would overcompensate for obstacles posed to them. Strength training is a form of obstacle, which should generate overcompensation under the right circumstances. From a biological perspective, one would expect a similar phenomenon; a natural solution to the lack of anabolic response.

This solution is predicted by a theory that also explains a lack of anabolic response to strength training, and that unfortunately does not get enough attention outside the academic research literature. It is the theory of supercompensation, which is discussed in some detail in several high-quality college textbooks on strength training. (Unlike popular self-help books, these textbooks summarize peer-reviewed academic research, and also provide the references that are summarized.) One example is the excellent book by Zatsiorsky & Kraemer (2006) on the science and practice of strength training.

The figure below, from Zatsiorsky & Kraemer (2006), shows what happens during and after a strength training session. The level of preparedness could be seen as the load in the session, which is proportional to: the number of exercise sets, the weight lifted (or resistance overcome) in each set, and the number of repetitions in each set. The restitution period is essentially the recovery period, which must include plenty of rest and proper nutrition.


Note that toward the end there is a sideways S-like curve with a first stretch above the horizontal line and another below the line. The first stretch is the supercompensation stretch; a window in time (e.g., a 20-hour period). The horizontal line represents the baseline load, which can be seen as the baseline strength of the individual prior to the exercise session. This is where things get tricky. If one exercises again within the supercompensation stretch, strength and muscle gains will likely happen. (Usually noticeable upper-body muscle gain happens in men, because of higher levels of testosterone and of other hormones that seem to interact with testosterone.) Exercising outside the supercompensation time window may lead to no gain, or even to some loss, of both strength and muscle.

Timing strength training sessions correctly can over time lead to significant gains in strength and muscle (see middle graph in the figure below, also from Zatsiorsky & Kraemer, 2006). For that to happen, one has not only to regularly “hit” the supercompensation time window, but also progressively increase load. This must happen for each muscle group. Strength and muscle gains will occur up to a point, a point of saturation, after which no further gains are possible. Men who reach that point will invariably look muscular, in a more or less “natural” way depending on supplements and other factors. Some people seem to gain strength and muscle very easily; they are often called mesomorphs. Others are hard gainers, sometimes referred to as endomorphs (who tend to be fatter) and ectomorphs (who tend to be skinnier).


It is not easy to identify the ideal recovery and supercompensation periods. They vary from person to person. They also vary depending on types of exercise, numbers of sets, and numbers of repetitions. Nutrition also plays a role, and so do rest and stress. From an evolutionary perspective, it would seem to make sense to work all major muscle groups on the same day, and then do the same workout after a certain recovery period. (Our Stone Age ancestors did not do isolation exercises, such as bicep curls.) But this will probably make you look more like a strong hunter-gatherer than a modern bodybuilder.

To identify the supercompensation time window, one could employ a trial-and-error approach, by trying to repeat the same workout after different recovery times. Based on the literature, it would make sense to start at the 48-hour period (one full day of rest between sessions), and then move back and forth from there. A sign that one is hitting the supercompensation time window is becoming a little stronger at each workout, by performing more repetitions with the same weight (e.g., 10, from 8 in the previous session). If that happens, the weight should be incrementally increased in successive sessions. Most studies suggest that the best range for muscle gain is that of 6 to 12 repetitions in each set, but without enough time under tension gains will prove elusive.
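
The bookkeeping involved in that trial-and-error approach is simple enough to sketch in a few lines of Python. The session format and the progression rule below are illustrative assumptions:

    # Log for one exercise: (rest_hours_before_session, weight, reps).
    # Illustrative numbers only.
    sessions = [
        (48, 100, 8),
        (48, 100, 10),  # more reps at the same weight: window likely hit
        (48, 105, 8),   # weight bumped after beating the previous rep count
    ]

    for prev, curr in zip(sessions, sessions[1:]):
        _, w_prev, r_prev = prev
        rest, w_curr, r_curr = curr
        if w_curr == w_prev and r_curr > r_prev:
            print(f"{rest}h rest: reps up at {w_curr} -> likely inside the window; add weight")
        elif w_curr > w_prev:
            print(f"{rest}h rest: load increased to {w_curr}")
        else:
            print(f"{rest}h rest: no progress -> try a different recovery period")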

The discussion above is not aimed at professional bodybuilders. There are a number of factors that can influence strength and muscle gain other than supercompensation. (Still, supercompensation seems to be a “biggie”.) Things get trickier over time with trained athletes, as returns on effort get progressively smaller. Even natural bodybuilders appear to benefit from different strategies at different levels of proficiency. For example, changing the workouts on a regular basis seems to be a good idea, and there is a science to doing that properly. See the “Interesting links” area of this web site for several more focused resources on strength training.

Reference:

Zatsiorsky, V., & Kraemer, W.J. (2006). Science and practice of strength training. Champaign, IL: Human Kinetics.

Sunday, April 28, 2019

Subcutaneous versus visceral fat: How to tell the difference?

The photos below, from Wikipedia, show two patterns of abdominal fat deposition. The one on the left is predominantly of subcutaneous abdominal fat deposition. The one on the right is an example of visceral abdominal fat deposition, around internal organs, together with a significant amount of subcutaneous fat deposition as well.


Body fat is not an inert mass used only to store energy. Body fat can be seen as a “distributed organ”, as it secretes a number of hormones into the bloodstream. For example, it secretes leptin, which regulates hunger. It secretes adiponectin, which has many health-promoting properties. It also secretes tumor necrosis factor-alpha (more recently referred to as simply “tumor necrosis factor” in the medical literature), which promotes inflammation. Inflammation is necessary to repair damaged tissue and deal with pathogens, but too much of it does more harm than good.

How does one differentiate subcutaneous from visceral abdominal fat?

Subcutaneous abdominal fat shifts position more easily as one’s body moves. When one is standing, subcutaneous fat often tends to fold around the navel, creating a “mouth” shape. Subcutaneous fat is easier to hold in one’s hand, as shown on the left photo above. Because subcutaneous fat tends to “shift” more easily as one changes the position of the body, if you measure your waist circumference lying down and standing up, and the difference is large (a one-inch difference can be considered large), you probably have a significant amount of subcutaneous fat.
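
The lying-versus-standing check can be expressed as a one-line rule. A minimal sketch, using the one-inch difference mentioned above as the threshold:

    def mostly_subcutaneous(waist_standing_in, waist_lying_in, threshold_in=1.0):
        # A large standing-minus-lying difference suggests a significant
        # amount of subcutaneous (easily shifting) abdominal fat.
        return (waist_standing_in - waist_lying_in) >= threshold_in

    print(mostly_subcutaneous(36.0, 34.8))  # True: a 1.2-inch difference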

Waist circumference is a variable that reflects individual changes in body fat percentage fairly well. This is especially true as one becomes lean (e.g., around 14-17 percent or less of body fat for men, and 21-24 for women), because as that happens abdominal fat contributes to an increasingly higher proportion of total body fat. For people who are lean, a 1-inch reduction in waist circumference will frequently translate into a 2-3 percent reduction in body fat percentage. Having said that, waist circumference comparisons between individuals are often misleading. Waist-to-fat ratios tend to vary a lot among different individuals (like almost any trait). This means that someone with a 34-inch waist (measured at the navel) may have a lower body fat percentage than someone with a 33-inch waist.

Subcutaneous abdominal fat is hard to mobilize; that is, it is hard to burn through diet and exercise. This is why it is often called the “stubborn” abdominal fat. One reason for the difficulty in mobilizing subcutaneous abdominal fat is that the network of blood vessels is not as dense in the area where this type of fat occurs, as it is with visceral fat. Another reason, which is related to degree of vascularization, is that subcutaneous fat is farther away from the portal vein than visceral fat. As such, it has to travel a longer distance to reach the main “highway” that will take it to other tissues (e.g., muscle) for use as energy.

In terms of health, excess subcutaneous fat is not nearly as detrimental as excess visceral fat. Excess visceral fat typically happens together with excess subcutaneous fat; but not necessarily the other way around. For instance, sumo wrestlers frequently have excess subcutaneous fat, but little or no visceral fat. The more health-detrimental effect of excess visceral fat is probably related to its proximity to the portal vein, which amplifies the negative health effects of excessive pro-inflammatory hormone secretion. Those hormones reach a major transport “highway” rather quickly.

Even though excess subcutaneous body fat is more benign than excess visceral fat, excess body fat of any kind is unlikely to be health-promoting. From an evolutionary perspective, excess body fat impaired agile movement and decreased circulating adiponectin levels; the latter leading to a host of negative health effects. In modern humans, negative health effects may be much less pronounced with subcutaneous than visceral fat, but they will still occur.

Based on studies of isolated hunter-gatherers, it is reasonable to estimate “natural” body fat levels among our Stone Age ancestors, and thus optimal body fat levels in modern humans, to be around 6-13 percent in men and 14-20 percent in women.

If you think that being overweight probably protected some of our Stone Age ancestors during times of famine, here is one interesting factoid to consider. It will take over a month for a man weighing 150 lbs and with 10 percent body fat to die from starvation, and death will not be typically caused by too little body fat being left for use as a source of energy. In starvation, normally death will be caused by heart failure, as the body slowly breaks down muscle tissue (including heart muscle) to maintain blood glucose levels.
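
The month-plus figure is easy to reproduce with back-of-the-envelope numbers. A minimal sketch, assuming the common approximation of about 3,500 calories per pound of body fat and an assumed reduced daily expenditure during starvation:

    weight_lbs = 150
    body_fat = 0.10
    kcal_per_lb_fat = 3500   # common approximation for body fat's energy content
    daily_kcal = 1600        # assumed reduced expenditure during starvation

    fat_kcal = weight_lbs * body_fat * kcal_per_lb_fat  # 52,500 kcal stored as fat
    print(fat_kcal / daily_kcal)                        # ~33 days from fat alone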


Friday, March 22, 2019

Total cholesterol and cardiovascular disease: A U-curve relationship

The hypothesis that blood cholesterol levels are positively correlated with heart disease (the lipid hypothesis) dates back to Rudolf Virchow in the mid-1800s.

One famous study that supported this hypothesis was Ancel Keys's Seven Countries Study, conducted between the 1950s and 1970s. This study eventually served as the foundation on which much of the advice that we receive today from doctors is based, even though several other studies have been published since that provide little support for the lipid hypothesis.

The graph below (from O Primitivo) shows the results of one study, involving many more countries than Keys's Seven Countries Study, that actually suggests a NEGATIVE linear correlation between total cholesterol and cardiovascular disease.


Now, most relationships in nature are nonlinear, with quite a few following a pattern that looks like a U-curve (plain or inverted); sometimes called a J-curve pattern. The graph below (also from O Primitivo) shows the U-curve relationship between total cholesterol and mortality, with cardiovascular disease mortality indicated through a dotted red line at the bottom.

This graph was obtained through a nonlinear analysis, and I think it provides a better picture of the relationship between total cholesterol (TC) and mortality. Based on this graph, the best TC range to be in is somewhere between 210, where cardiovascular disease mortality is minimized, and 220, where total mortality is minimized.

The total mortality curve is the one indicated through the full blue line at the top. In fact, it suggests that mortality increases sharply as TC decreases below 200.

Now, these graphs relate TC with disease and mortality, and say nothing about LDL cholesterol (LDL). In my own experience, and that of many people I know, a TC of about 200 will typically be associated with a slightly elevated LDL (e.g., 110 to 150), even if one has a high HDL cholesterol (i.e., greater than 60).

Yet, most people who have an LDL greater than 100 will be told by their doctors, usually with the best of intentions, to take statins, so that they can "keep their LDL under control". (LDL levels are usually calculated, not measured directly, which itself creates a whole new set of problems.)

Alas, reducing LDL to 100 or less will typically reduce TC below 200. If we go by the graphs above, especially the one showing the U-curves, these folks' risk for cardiovascular disease and mortality will go up - exactly the opposite effect that they and their doctors expected. And that will cost them financially as well, as statin drugs are expensive, in part to pay for all those TV ads.

Wednesday, February 27, 2019

Want to improve your cholesterol profile? Replace refined carbs and sugars with saturated fat and cholesterol in your diet

An interesting study by Clifton and colleagues (1998; full reference and link at the end of this post) looked at whether LDL cholesterol particle size distribution at baseline (i.e., at the beginning of the study) for different people was a determinant of lipid profile changes under each of two diets – one low and the other high in fat. This study highlights a few interesting points made in a previous post; points that are largely unrelated to the study’s main goal and findings, but that are supported by its side findings:

- As one increases dietary cholesterol and fat consumption, particularly saturated fat, circulating HDL cholesterol increases significantly. This happens whether one is taking niacin or not, although niacin seems to help, possibly as an independent (not moderating) factor. Increasing serum vitamin D levels, which can be done through sunlight exposure and supplementation, is also known to increase circulating HDL cholesterol.

- As one increases dietary cholesterol and fat consumption, particularly saturated fat, triglycerides in the fasting state (i.e., measured after an 8-hour fast) decrease significantly, particularly on a low carbohydrate diet. Triglycerides in the fasting state are negatively correlated with HDL cholesterol; they go down as HDL cholesterol goes up. This happens whether one is taking niacin or supplementing omega 3 fats or not, although these seem to help, possibly as independent factors.

- If one increases dietary fat intake, without also decreasing carbohydrate intake (particularly in the form of refined grains and sugars), LDL cholesterol will increase. Even so, LDL particle sizes will shift to more benign forms, which are the larger forms. Not all LDL particles change to benign forms, and there seem to be some genetic factors that influence this. LDL particles larger than 26 nm in diameter simply cannot pass through the gaps in the endothelium, which is a thin layer of cells lining the interior surface of arteries, and thus do not induce plaque formation.

The study by Clifton and colleagues (1998) involved 54 men and 51 women with a wide range of lipid profiles. They first underwent a 2-week low fat period, after which they were given two liquid supplements in addition to their low fat diet, for a period of 3 weeks. One of the liquid supplements contained 31 to 40 g of fat, and 650 to 845 mg of cholesterol. The other was fat and cholesterol free.

Studies that adopt a particular diet at baseline have the advantage of departing from a uniform diet across conditions. They also typically have one common characteristic: the baseline diet reflects the authors’ beliefs about what an ideal diet is. That is not always the case, but if it was indeed the case here, we have a particularly interesting study, because the side findings discussed below contradicted the authors’ beliefs.

The table below shows the following measures for the participants in the study: age, body mass index (BMI), waist-to-hip ratio (WHR), total cholesterol, triglycerides, low-density lipoprotein (LDL) cholesterol, and three subtypes of high-density lipoprotein (HDL) cholesterol. LDL cholesterol is colloquially known as the “bad” type, and HDL as the “good” one (which is an oversimplification). In short, the participants were overweight, middle-aged men and women with relatively poor lipid profiles.


At the bottom of the table is the note “P < 0.001”, following a small “a”. This essentially means that, on the rows indicated by an “a”, like the “WHR” row, the difference between the averages (e.g., 0.81 for women and 0.93 for men, in the WHR row) is unlikely to be due to chance alone. More precisely, the likelihood that the difference was due to chance is lower than 0.001, or 0.1 percent, in the case of P < 0.001. Usually a difference between averages (a.k.a. means) associated with a P < 0.05 will be considered statistically significant.
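To give an idea of how a P value like this can be obtained, below is a minimal sketch using Python's scipy library. The waist-to-hip ratio samples are simulated around the reported averages, purely for illustration; they are not the study’s actual data:

    # Hypothetical illustration of a two-sample t-test; the values
    # below are simulated, NOT the data from Clifton et al. (1998).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    whr_women = rng.normal(0.81, 0.05, 51)  # 51 women in the study
    whr_men = rng.normal(0.93, 0.05, 54)    # 54 men in the study

    t, p = stats.ttest_ind(whr_men, whr_women)
    print(f"t = {t:.2f}, P = {p:.2g}")  # P far below 0.001 here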

Since the LDL cholesterol concentrations (as well as other lipoprotein concentrations) are listed on the table in mmol/L, and many people receive those measures in mg/dL in blood lipid profile test reports, below is a conversion table for LDL cholesterol (from: Wikipedia).


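For those who prefer to convert the numbers directly, the sketch below (in Python) uses the standard conversion factor for cholesterol, which follows from cholesterol’s molar mass of roughly 386.7 g/mol. Note that triglycerides convert with a different factor (about 88.57):

    # Converting cholesterol readings between mmol/L and mg/dl.
    CHOL_MGDL_PER_MMOLL = 38.67  # from cholesterol's molar mass

    def chol_mmoll_to_mgdl(mmol_l):
        return mmol_l * CHOL_MGDL_PER_MMOLL

    def chol_mgdl_to_mmoll(mg_dl):
        return mg_dl / CHOL_MGDL_PER_MMOLL

    print(round(chol_mmoll_to_mgdl(5.2)))    # ~201 mg/dl
    print(round(chol_mgdl_to_mmoll(100), 1)) # ~2.6 mmol/L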
The table below shows the dietary intake in the low fat and high fat diets. Note that in the high fat diet not only is the fat intake higher, but so is the cholesterol intake. The latter is more than 4 times the intake in the low fat diet, and about 2.5 times the daily value recommended by the U.S. Food and Drug Administration. The total calorie intake is reported as slightly lower in the high fat diet than in the low fat diet.


Note that the largest increase was in saturated fat, followed by an almost equally large increase in monounsaturated fat. This, together with the increase in cholesterol, mimics a move to a diet where fatty meat and organs are consumed in higher quantities, with a corresponding reduction in the intake of refined carbohydrates (e.g., bread, pasta, sugar, potatoes) and lean meats.

Finally, the table below shows the changes in lipid profiles in the low and high fat diets. Note that all subtypes of HDL (or "good") cholesterol concentrations were significantly higher in the high fat diet, which is very telling, because HDL cholesterol concentrations are much better predictors of cardiovascular disease than LDL or total cholesterol concentrations. The higher the HDL cholesterol, the lower the risk of cardiovascular disease.


In the table above, we also see that triglycerides are significantly lower in the high fat diet, which is also good, because high fasting triglyceride concentrations are associated with cardiovascular disease and also insulin resistance (which is associated with diabetes).

However, the total and LDL cholesterol were also significantly higher in the high fat compared to the low fat diet. Is this as bad as it sounds? Not when we look at other factors that are not clear from the tables in the article.

One of those factors is the likely change in LDL particle size. LDL particle sizes almost always increase with significant increases in HDL cholesterol; frequently going beyond 26 nm in diameter, and thus crossing the threshold beyond which an LDL particle can no longer penetrate the endothelium and help form a plaque.

Another important factor to take into consideration is the somewhat strange decision by the authors to use the Friedewald equation to estimate the LDL concentrations in the low and high fat diets. Through the Friedewald equation, LDL is calculated as follows (where TC is total cholesterol):

    LDL = TC – HDL – Triglycerides / 5     (with all values in mg/dl)

Here is one of the problems with the Friedewald equation. Let us assume that an individual has the following lipid profile numbers: TC = 200, HDL = 50, and trigs. = 150. The calculated LDL will be 120. Let us assume that this same individual reduces trigs. to 50, from the previous 150, keeping all of the other measures constant. This is a major improvement. Yet, the calculated LDL will now be 140, and a doctor will tell this person to consider taking statins!
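To make the arithmetic explicit, here is the same calculation written as a small Python function, with the two scenarios just described:

    # The Friedewald calculation, with all values in mg/dl.
    def friedewald_ldl(tc, hdl, trigs):
        return tc - hdl - trigs / 5.0

    print(friedewald_ldl(200, 50, 150))  # 120.0
    # Triglycerides improve from 150 to 50, everything else constant:
    print(friedewald_ldl(200, 50, 50))   # 140.0 -- calculated LDL goes UP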

By the way, most people who do a blood test and get their lipid profile report also get their LDL calculated through the Friedewald equation. Usually this is indicated through a "CALC" note next to the description of the test or the calculated LDL number.

Finally, total cholesterol is not a very useful measure, because an elevated total cholesterol may be primarily reflecting an elevated HDL, which is healthy. Also, a slightly elevated total cholesterol seems to be protective, as it is associated with reduced overall mortality and also reduced mortality from cardiovascular disease, according to U-curve regression studies comparing mortality and total cholesterol levels in different countries.

We do not know for sure that the participants in this study were consuming a lot of refined carbohydrates and/or sugars at baseline. But it is a safe bet that they were, since they were consuming 214 g of carbohydrates per day. It is difficult, although not impossible, to eat that many carbohydrates per day by eating only vegetables and fruits, which are mostly water. Consumption of starches makes it easier to reach that level.

This is why, when one goes on a paleo diet, he or she significantly reduces the amount of dietary carbohydrates; even more so on a targeted low carbohydrate diet, such as the Atkins diet. Richard K. Bernstein, who is a type 1 diabetic and has followed a strict low carbohydrate diet for most of his adult life, had the following lipid profile at 72 years of age: HDL = 118, LDL = 53, trigs. = 45. His fasting blood sugar was reportedly 83 mg/dl. Click here to listen to an interview with Dr. Bernstein on The Livin' La Vida Low-Carb Show.

The lipid profile improvement observed (e.g., a 14 percent increase in HDL from baseline for men, and about half that for women, in only 3 weeks) was very likely due to an increase in dietary saturated fat and cholesterol combined with a decrease in refined carbohydrates and sugars. The improvement would probably have been even more impressive with a higher increase in saturated fat, as long as it was accompanied by the elimination of refined carbohydrates and sugars from the participants’ diets.

Reference:

Clifton, P.M., Noakes, M., & Nestel, P.J. (1998). LDL particle size and LDL and HDL cholesterol changes with dietary fat and cholesterol in healthy subjects. Journal of Lipid Research, 39, 1799–1804.

Monday, January 28, 2019

What should be my HDL cholesterol?

HDL cholesterol levels are a rough measure of HDL particle quantity in the blood. They actually tell us next to nothing about HDL particle type, although HDL cholesterol increases are usually associated with increases in LDL particle size. This is a good thing, since small-dense LDL particles are associated with increased cardiovascular disease.

Most blood lipid panels reviewed by family doctors with patients give information about HDL status through measures of HDL cholesterol, provided in one of the standard units (e.g., mg/dl).

Study after study shows that HDL cholesterol levels, although imprecise, are a much better predictor of cardiovascular disease than LDL or total cholesterol levels. How high should one’s HDL cholesterol be? The answer is somewhat dependent on each individual’s health profile, but most data suggest that a level greater than 60 mg/dl (1.55 mmol/l) is close to optimal for most people.

The figure below (from Eckardstein, 2008; full reference at the end of this post) plots incidence of coronary events in men (on the vertical axis), over a period of 10 years, against HDL cholesterol levels (on the horizontal axis). Note: IFG = impaired fasting glucose. This relationship is similar for women, particularly post-menopausal women. Pre-menopausal women usually have higher HDL cholesterol levels than men, and a low incidence of coronary events.


From the figure above, one can say that a diabetic man with about 55 mg/dl of HDL cholesterol will have approximately the same chance, on average, of having a coronary event (a heart attack) as a man with no risk factors and about 20 mg/dl of HDL cholesterol. That chance will be about 7 percent. With 20 mg/dl of HDL cholesterol, the chance of a diabetic man having a coronary event would approach 50 percent.

We can also conclude from the figure above that a man with no risk factors will have a 5 percent chance of having a coronary event if his HDL cholesterol is about 25 mg/dl, and about a 2 percent chance if his HDL cholesterol is greater than 60 mg/dl. That is a 60 percent reduction (from 5 to 2 percent) in a risk that was low to start with because of the absence of risk factors.

HDL cholesterol levels greater than 60 are associated with significantly reduced risks of coronary events, particularly for those with diabetes (the graph does not take diabetes type into consideration). Much higher levels of HDL cholesterol (beyond 60) do not seem to be associated with much lower risk of coronary events.

Conversely, a very low HDL cholesterol level (below 25) is a major risk factor when other risk factors are also present, particularly: diabetes, hypertension (high blood pressure), and familial hypercholesterolemia (gene-induced, very elevated LDL cholesterol).

It is not yet clear whether HDL cholesterol is a cause of reduced cardiovascular disease, or just a marker of other health factors that lead to reduced risk for cardiovascular disease. Much of the empirical evidence suggests a causal relationship, and if this is the case then it may be a good idea to try to increase HDL levels. Even if HDL cholesterol is just a marker, the same strategy that increases it may also have a positive impact on the real causative factor of which HDL cholesterol is a marker.

What can one do to increase his or her HDL cholesterol? One way is to replace refined carbs and sugars with saturated fat and cholesterol in one’s diet. (I know that this sounds counterintuitive, but it seems to work.) Another is to increase one’s vitamin D status, through sun exposure or supplementation.

Other therapeutic interventions can also be used to increase HDL; some more natural than others. The figure below (also from Eckardstein, 2008) shows the maximum effects of several therapeutic interventions to increase HDL cholesterol.


Among the therapeutic interventions shown in the figure above, taking nicotinic acid (niacin) in pharmacological doses of 1 to 3 g per day (higher dosages may be toxic) is by far the most effective way of increasing one’s HDL cholesterol. Only the form of niacin that causes flushing is effective in this respect. No-flush niacin preparations may have some anti-inflammatory effects, but do not increase HDL cholesterol.

Rimonabant, which is second to niacin in its effect on HDL cholesterol, is an appetite suppressant that has been associated with serious side effects and, to the best of my knowledge, has been largely banned from use in pharmaceutical drugs.

Third in terms of effectiveness, among the factors shown in the figure, is moderate alcohol consumption. Running about 19 miles per week (2.7 miles per day) and taking fibrates are tied for fourth place.

Many people think that they are having a major allergic reaction, and have a panic attack, when they experience the niacin flush. This usually happens several minutes after taking niacin, and depends on the dose and whether niacin was consumed with food or not. It is not uncommon for one’s entire torso to turn hot red, as though the person had had major sunburn. This reaction is harmless, and usually disappears after several minutes.

One could say that, with niacin: no “pain” (i.e., flush), no gain.

Reference:

von Eckardstein, A. (2008). HDL – a difficult friend. Drug Discovery Today: Disease Mechanisms, 5(3), 315-324.