Monday, October 21, 2019

Lipotoxicity or tired pancreas? Abnormal fat metabolism as a possible precondition for type 2 diabetes

The term “diabetes” is used to describe a wide range of diseases of glucose metabolism; diseases with a wide range of causes. The diseases include type 1 and type 2 diabetes, type 2 ketosis-prone diabetes (which I know exists thanks to Michael Barker’s blog), gestational diabetes, various MODY types, and various pancreatic disorders. The possible causes include genetic defects (or adaptations to very different past environments), autoimmune responses, exposure to environmental toxins, as well as viral and bacterial infections; in addition to obesity, and various other apparently unrelated factors, such as excessive growth hormone production.

Type 2 diabetes and the “tired pancreas” theory

Type 2 diabetes is the one most commonly associated with the metabolic syndrome, which is characterized by middle-age central obesity and the “diseases of civilization” brought about by Neolithic inventions. Evidence is mounting that a Neolithic diet and lifestyle play a key role in the development of the metabolic syndrome. In terms of diet, the major suspects are engineered foods rich in refined carbohydrates and refined sugars. In this context, one widely touted idea is that the constant insulin spikes caused by consumption of those foods lead the pancreas to get “tired” over time, losing its ability to produce insulin. The onset of insulin resistance mediates this effect.



Empirical evidence against the “tired pancreas” theory

This “tired pancreas” theory, which refers primarily to the insulin-secreting beta-cells in the pancreas, conflicts with a lot of empirical evidence. It is inconsistent with the existence of isolated semi- and full hunter-gatherer groups (e.g., the Kitavans) that consume large amounts of natural (i.e., unrefined) foods rich in easily digestible carbohydrates from tubers and fruits, which cause insulin spikes; these groups are nevertheless generally free from type 2 diabetes. It is equally inconsistent with the existence of isolated groups in China and Japan (e.g., the Okinawans) whose diets also include a large proportion of natural carbohydrate-rich foods, and who are likewise generally free from type 2 diabetes.

Humboldt (1995), in his personal narrative of his journey to the “equinoctial regions of the new continent”, states on page 121, about the natives as a group: "… between twenty and fifty years old, age is not indicated by wrinkling skin, white hair or body decrepitude [among natives]. When you enter a hut it is hard to differentiate a father from son …" A large proportion of these natives’ diets consisted of natural foods rich in easily digestible carbohydrates from tubers and fruits, which cause insulin spikes. Still, there was no sign of any condition that would suggest a prevalence of type 2 diabetes among them.

At this point it is important to note that the insulin spikes caused by natural carbohydrate-rich foods are much less pronounced than the ones caused by refined carbohydrate-rich foods. The reason is that there is a huge gap between the glycemic loads of natural and refined carbohydrate-rich foods, even though the glycemic indices may be quite similar in some cases. Natural carbohydrate-rich foods are not made mostly of carbohydrates. Even an Irish (or white) potato is 75 percent water.
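To make the glycemic load point concrete, here is a minimal Python sketch. The formula (glycemic load = glycemic index × grams of available carbohydrate per serving / 100) is the standard one; the GI and carbohydrate figures below are rough illustrative values, not measurements from this post.

    # Why glycemic LOAD separates natural from refined carbohydrate-rich foods
    # even when their glycemic INDEX values are similar. The GI and carbohydrate
    # numbers below are approximate, for illustration only.

    def glycemic_load(gi: float, carbs_per_serving_g: float) -> float:
        """Standard formula: GL = GI x available carbs per serving (g) / 100."""
        return gi * carbs_per_serving_g / 100.0

    foods = {
        # food: (approximate GI, approximate available carbs per serving, in g)
        "boiled white potato, 150 g (~75% water)": (80, 26),
        "plain bagel, 100 g": (72, 53),
    }

    for food, (gi, carbs) in foods.items():
        print(f"{food}: GI = {gi}, GL = {glycemic_load(gi, carbs):.0f}")

Similar glycemic indices, very different glycemic loads: the potato serving is mostly water, so it delivers far less carbohydrate, and a correspondingly smaller insulin spike.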

More insulin may lead to abnormal fat metabolism in sedentary people

The more pronounced spikes may lead to abnormal fat metabolism because more body fat is force-stored than it would have been with the less pronounced spikes, and stored body fat is not released as promptly as it should be to fuel muscle contractions and other metabolic processes. Typically this effect is minor on any given day, but it adds up over time, leading to fairly unnatural patterns of fat metabolism in the long run. This is particularly true for those who lead sedentary lifestyles. As with obesity, nobody gets obese in one day. So the key problem with the more pronounced spikes may not be that the pancreas is getting “tired”, but that body fat metabolism becomes abnormal, which in turn leads to abnormally high or low levels of important body fat-derived hormones (e.g., high levels of leptin and low levels of adiponectin).

One common characteristic of the groups mentioned above is the absence of obesity, even though food is abundant and physical activity is often moderate to low. I repeat, for emphasis: “… even though food is abundant and physical activity is often moderate to low”. Note that having a low level of activity is not the same as spending the whole day sitting in a comfortable chair working on a computer. Obviously caloric intake and level of activity among these groups were/are not at levels that would lead to obesity. How could that be possible? See this post for a possible explanation.

Excessive body fat gain, lipotoxicity, and type 2 diabetes

There are a few theories that implicate the interaction of abnormal fat metabolism with other factors (e.g., genetic factors) in the development of type 2 diabetes. Empirical evidence suggests that this is a reasonable direction of causality. One of these theories is the theory of lipotoxicity.

Several articles have discussed the theory of lipotoxicity. The article by Unger & Zhou (2001) is a widely cited one. The theory seems to be largely based on the comparative study of various genotypes found in rats. Nevertheless, there is mounting evidence suggesting that the underlying mechanisms may be similar in humans. In a nutshell, this theory proposes the following steps in the development of type 2 diabetes:

    (1) Abnormal fat mass gain leads to an abnormal increase in fat-derived hormones, of which leptin is singled out by the theory. Some people seem to be more susceptible than others in this respect, with lower triggering thresholds of fat mass gain. (What leads to exaggerated fat mass gains? The theory does not go into much detail here, but empirical evidence from other studies suggests that major culprits are refined grains and seeds, as well as refined sugars; other major culprits seem to be trans fats, and vegetable oils rich in linoleic acid.)

    (2) Resistance to fat-derived hormones sets in. Again, leptin resistance is singled out as the key here. (This is a bit simplistic. Other fat-derived hormones, like adiponectin, seem to clearly interact with leptin.) Since leptin regulates fatty acid metabolism, the theory argues, leptin resistance is hypothesized to impair fatty acid metabolism.

    (3) Impaired fat metabolism causes fatty acids to “spill over” to tissues other than fat cells, and also causes an abnormal increase in a substance called ceramide in those tissues. These include tissues in the pancreas that house beta-cells, which secrete insulin. In short, body fat should be stored in fat cells (adipocytes), not outside them.

    (4) Initially the fatty acid “spill over” to beta-cells enlarges them and makes them overactive, leading to excessive insulin production in response to carbohydrate-rich foods, and also to insulin resistance. This is the pre-diabetic phase, in which hypoglycemic episodes happen a few hours after the consumption of carbohydrate-rich foods. Once this stage is reached, several natural carbohydrate-rich foods (e.g., potatoes and bananas) also become a problem, in addition to refined carbohydrate-rich foods.

    (5) Abnormal levels of ceramide induce beta-cell apoptosis in the pancreas; essentially the programmed “suicide” of beta-cells. What follows is full-blown type 2 diabetes. Insulin production is impaired, leading to very elevated blood glucose levels following the consumption of carbohydrate-rich foods, even unrefined ones.

It is widely known that type 2 diabetics have impaired glucose metabolism. What is not so widely known is that they usually also have impaired fatty acid metabolism. For example, consumption of the same fatty meal is likely to lead to significantly more elevated triglyceride levels, several hours later, in type 2 diabetics than in non-diabetics. This is consistent with the notion that leptin resistance precedes type 2 diabetes, and inconsistent with the “tired pancreas” theory.

Weak and strong points of the theory of lipotoxicity

A weakness of the theory of lipotoxicity is its strong lipophobic tone; at least in the articles that I have read. There is ample evidence that eating a lot of the ultra-demonized saturated fat, per se, is not what makes people obese or type 2 diabetic. Yet overconsumption of trans fats and vegetable oils rich in linoleic acid does seem to be linked with obesity and type 2 diabetes. (So does the consumption of refined grains and seeds, and refined sugars.) The theory of lipotoxicity does not seem to make these distinctions.

In defense of the theory of lipotoxicity, it does not argue that there cannot be thin diabetics. Many type 1 diabetics are thin. Type 2 diabetics can also be thin, although this is much less common. In certain individuals, the threshold of body fat gain that will precipitate lipotoxicity may be quite low. In others, the same amount of body fat gain (or more) may in fact increase their insulin sensitivity under certain circumstances – e.g., when growth hormone levels are abnormally low.

Autoimmune disorders, perhaps induced by environmental toxins, or toxins found in certain refined foods, may cause the immune system to attack the beta-cells in the pancreas. This may lead to type 1 diabetes if all beta cells are destroyed, or something that can easily be diagnosed as type 2 (or type 1.5) diabetes if only a portion of the cells are destroyed, in a way that does not involve lipotoxicity.

Nor does the theory of lipotoxicity predict that all those who become obese will develop type 2 diabetes. It only suggests that the probability will go up, particularly if other factors are present (e.g., genetic propensity). There are many people who are obese during most of their adult lives and never develop type 2 diabetes. On the other hand, some groups, like Hispanics, tend to develop type 2 diabetes more easily (often even before they reach the obese level). One only has to visit the South Texas region near the Rio Grande border to see this first hand.

What the theory proposes is a new way of understanding the development of type 2 diabetes; a way that seems to make more sense than the “tired pancreas” theory. The theory of lipotoxicity may not be entirely correct. For example, there may be other mechanisms associated with abnormal fat metabolism and consumption of Neolithic foods that cause beta-cell “suicide”, and that have nothing to do with lipotoxicity as proposed by the theory. (At least one fat-derived hormone, tumor necrosis factor-alpha, is associated with abnormal cell apoptosis when abnormally elevated. Levels of this hormone go up immediately after a meal rich in refined carbohydrates.) But the link that it proposes between obesity and type 2 diabetes seems to be right on target.

Implications and thoughts

Some implications and thoughts based on the discussion above are the following. Some are extrapolations based on the discussion in this post combined with those in other posts. At the time of this writing, there were hundreds of posts on this blog, in addition to many comments stemming from over 2.5 million page views. See under "Labels" at the bottom-right area of this blog for a summary of topics addressed. It is hard to ignore things that were brought to light in previous posts.

    - Let us start with a big one: Avoiding natural carbohydrate-rich foods in the absence of compromised glucose metabolism is unnecessary. Those foods do not “tire” the pancreas significantly more than protein-rich foods do. While carbohydrates are not essential macronutrients, protein is. In the absence of carbohydrates, protein will be used by the body to produce glucose to supply the needs of the brain and red blood cells. Protein elicits an insulin response that is comparable to that of natural carbohydrate-rich foods on a gram-adjusted basis (but significantly lower than that of refined carbohydrate-rich foods, like doughnuts and bagels). Usually protein does not lead to a measurable glucose response because glucagon is secreted together with insulin in response to ingestion of protein, preventing hypoglycemia.

    - Abnormal fat gain should be used as a general measure of one’s likelihood of being “headed south” in terms of health. The “fitness” levels for men and women shown on the table in this post seem like good targets for body fat percentage. The problem here, of course, is that this is not as easy as it sounds. Attempts at getting lean can lead to poor nutrition and/or starvation. These may make matters worse in some cases, leading to hormonal imbalances and uncontrollable hunger, which will eventually lead to obesity. Poor nutrition may also depress the immune system, making one susceptible to a viral or bacterial infection that may end up leading to beta-cell destruction and diabetes. A better approach is to place the emphasis on eating a variety of natural foods, which are nutritious and satiating, and avoiding refined ones, which are often addictive “empty calories”. Generally, fat loss should be slow to be healthy and sustainable.

    - Finally, if glucose metabolism is compromised, one should avoid any foods in quantities that cause an abnormally elevated glucose or insulin response. All one needs is an inexpensive glucose meter to find out what those foods are. The following are indications of abnormally elevated glucose and insulin responses, respectively: an abnormally high glucose level 1 hour after a meal (postprandial hyperglycemia); and an abnormally low glucose level 2 to 4 hours after a meal (reactive hypoglycemia). What is abnormally high or low? Take a look at the peaks and troughs shown on the graph in this post; they should give you an idea. (A simple way to flag such readings is sketched right after this list.) Some insulin-resistant people using glucose meters will probably realize that they can still eat several natural carbohydrate-rich foods, but in small quantities, because those foods usually have a low glycemic load (even if their glycemic index is high).
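For those who like to keep records, here is a minimal Python sketch of how one might flag the two abnormal responses just described from meter readings. The cutoffs are assumptions for illustration only; the post itself points to its graphs rather than to fixed numbers. Commonly used reference points are roughly 140 mg/dL for an exaggerated 1-hour rise and 70 mg/dL for a later hypoglycemic trough.

    # Flag the two abnormal meal responses described above. The cutoffs below
    # are assumed reference points, not values taken from this post.

    POSTPRANDIAL_HIGH_MG_DL = 140  # assumed cutoff, 1 hour after the meal
    REACTIVE_LOW_MG_DL = 70        # assumed cutoff, 2 to 4 hours after the meal

    def classify_meal_response(glucose_1h: float, glucose_2to4h: float) -> list:
        """Return flags for one meal, given glucose meter readings in mg/dL."""
        flags = []
        if glucose_1h > POSTPRANDIAL_HIGH_MG_DL:
            flags.append("postprandial hyperglycemia (high 1-hour reading)")
        if glucose_2to4h < REACTIVE_LOW_MG_DL:
            flags.append("reactive hypoglycemia (low 2-to-4-hour reading)")
        return flags or ["no abnormal response detected"]

    # Example: a meal that spikes to 165 mg/dL and later crashes to 62 mg/dL.
    for flag in classify_meal_response(165, 62):
        print(flag)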

Lucy was a vegetarian and Sapiens an omnivore. We apparently have not evolved to be pure carnivores, even though we can be if the circumstances require. But we absolutely have not evolved to eat many of the refined and industrialized foods available today, not even the ones marketed as “healthy”. Those foods do not make our pancreas “tired”. Among other things, they “mess up” fat metabolism, which may lead to type 2 diabetes through a complex process involving hormones secreted by body fat.

References

Humboldt, A.V. (1995). Personal narrative of a journey to the equinoctial regions of the new continent. New York, NY: Penguin Books.

Unger, R.H., & Zhou, Y.-T. (2001). Lipotoxicity of beta-cells in obesity and in other causes of fatty acid spillover. Diabetes, 50(1), S118-S121.

Sunday, September 22, 2019

How long does it take for a food-related trait to evolve?

Often in discussions about Paleolithic nutrition, and in books on the subject, we see speculations about how long it would take for a population to adapt to a particular type of food. Many speculations are way off the mark; some assume that even 10,000 years are not enough for evolution to take place.

This post addresses the question: How long does it take for a food-related trait to evolve?

We need a bit of Genetics 101 first, discussed below. For more details see, e.g., Hartl & Clark, 2007; and one of my favorites: Maynard Smith, 1998. Full references are provided at the end of this post.

New gene-induced traits, including traits that affect nutrition, appear in populations through a deceptively simple process. A new genetic mutation appears in the population, usually in one single individual, and one of two things happens: (a) the genetic mutation disappears from the population; or (b) the genetic mutation spreads in the population. Evolution is a term that is generally used to refer to a gene-induced trait spreading in a population.

Traits can evolve via two main processes. One is genetic drift, where neutral traits evolve by chance. This process dominates in very small populations (e.g., 50 individuals). The other is selection, where fitness-enhancing traits evolve by increasing the reproductive success of the individuals that possess them. Fitness, in this context, is measured as the number of surviving offspring (or grand-offspring) of an individual.

Yes, traits can evolve by chance, and often do so in small populations.

Say a group of 20 human ancestors became isolated for some reason; e.g., they traveled to an island and got stranded there. Let us assume that the group had the common sense of including at least a few women; ideally more women than men, because women are really the reproductive bottleneck of any population.

In a new generation one individual develops a sweet tooth, which is a neutral mutation because the island has no supermarket. Or, what would be more likely, one of the 20 individuals already had that mutation prior to reaching the island. (Genetic variability is usually high among any group of unrelated individuals, so divergent neutral mutations are usually present.)

By chance alone, that new trait may spread to the whole (now larger) population in 80 generations, or around 1,600 years, assuming a new generation every 20 years. That whole population then grows even further, and gets somewhat mixed up with other groups in a larger population (they find a way out of the island). The descendants of the original island population all have a sweet tooth. That leads to increased diabetes among them, compared with other groups. They find out that the problem is genetic, and wonder how evolution could have made them like that.

The panel below shows the formulas for the calculation of the amount of time it takes for a trait to evolve to fixation in a population. It is taken from a set of slides I used in a presentation (PowerPoint file here). To evolve to fixation means to spread to all individuals in the population. The results of some simulations are also shown. For example, a trait that provides a minute selective advantage of 1% in a population of 10,000 individuals will possibly evolve to fixation in 1,981 generations, or 39,614 years. Not the millions of years often mentioned in discussions about evolution.
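The panel itself is not reproduced here, but the numbers in the text match the standard population genetics approximations (see, e.g., Hartl & Clark, 2007): a neutral mutation that drifts to fixation takes on average about 4N generations to do so, and a mutation with selective advantage s takes about (2/s)·ln(2N) generations, where N is the population size. The Python sketch below, under the assumption that these are the formulas in the panel, reproduces the figures cited in this post.

    import math

    GENERATION_YEARS = 20  # generation time assumed throughout the post

    def drift_fixation_generations(n):
        """Mean time to fixation of a neutral mutation by drift: ~4N generations."""
        return 4 * n

    def selection_fixation_generations(n, s):
        """Approximate time to fixation with selective advantage s: (2/s) * ln(2N)."""
        return (2 / s) * math.log(2 * n)

    # Neutral drift in a tiny isolated group of 20: 80 generations, ~1,600 years.
    g = drift_fixation_generations(20)
    print(f"drift, N=20: {g:.0f} generations, ~{g * GENERATION_YEARS:,.0f} years")

    # A minute 1% selective advantage in a population of 10,000:
    # ~1,981 generations, or ~39,614 years -- the example in the text.
    g = selection_fixation_generations(10_000, 0.01)
    print(f"selection, N=10,000, s=1%: {g:,.0f} generations, "
          f"~{g * GENERATION_YEARS:,.0f} years")

    # A 100% selective advantage (twice as many surviving children) in the
    # same population: ~20 generations, or ~396 years -- the later example.
    g = selection_fixation_generations(10_000, 1.0)
    print(f"selection, N=10,000, s=100%: {g:.0f} generations, "
          f"~{g * GENERATION_YEARS:.0f} years")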


I say “possibly” above because traits can also disappear from a population by chance, and often do so at the early stages of evolution, even if they increase the reproductive success of the individuals that possess them. For example, a new beneficial metabolic mutation appears, but its host falls off a cliff by accident, or contracts an unrelated disease and dies, before leaving any descendants.

How come the fossil record suggests that evolution usually takes millions of years? The reason is that it usually takes a long time for new fitness-enhancing traits to appear in a population. Most genetic mutations are either neutral or detrimental in terms of reproductive success. It also takes time for the right circumstances to fall into place for genetic drift to happen – e.g., massive extinctions, leaving a few surviving members. Once the right elements are in place, evolution can happen fast.

So, what is the implication for traits that affect nutrition? Or, more specifically, can a population that starts consuming a particular type of food evolve to become adapted to it in a short period of time?

The answer is yes. And that adaptation can happen in a very short amount of time, relatively speaking.

Let us assume that all members of an isolated population start on a particular diet, which is not the optimal diet for them. The exception is one single lucky individual that has a special genetic mutation, and for whom the diet is either optimal or quasi-optimal. Let us also assume that the mutation leads the individual and his or her descendants to have, on average, twice as many surviving children as other unrelated individuals. That translates into a selective advantage (s) of 100%. Finally, let us conservatively assume that the population is relatively large, with 10,000 individuals.

In this case, the mutation will spread to the entire population in approximately 396 years.

Descendants of individuals in that population (e.g., descendants of the Yanomamö) may possess the trait, even after some fair mixing with descendants of other populations, because a trait that goes into fixation has a good chance of being associated with dominant alleles. (Alleles are the different variants of the same gene.)

This Excel spreadsheet (link to a .xls file) is for those who want to play a bit with numbers, using the formulas above, and perhaps speculate about what they could have inherited from their not so distant ancestors. Download the file, and open it with Excel or a compatible spreadsheet system. The formulas are already there; change only the cells highlighted in yellow.

References:

Hartl, D.L., & Clark, A.G. (2007). Principles of population genetics. Sunderland, MA: Sinauer Associates.

Maynard Smith, J. (1998). Evolutionary genetics. New York, NY: Oxford University Press.

Monday, August 26, 2019

How much alcohol is optimal? Maybe less than you think

I have been regularly recommending to users of the software HCE to include a column in their health data reflecting their alcohol consumption. Why? Because I suspect that alcohol consumption is behind many of what we call the “diseases of affluence”.

A while ago I recall watching an interview with a centenarian, a very lucid woman. When asked about her “secret” to live a long life, she said that she added a little bit of whiskey to her coffee every morning. It was something like a tablespoon of whiskey, or about 15 g, which amounted to approximately 6 g of ethanol every single day.

Well, she might have been drinking very close to the optimal amount of alcohol per day for the average person, if the study reviewed in this post is correct.

Studies of the effect of alcohol consumption on health generally show results in terms of averages within fixed ranges of consumption. For example, they will show average mortality risks for people consuming 1, 2, 3 etc. drinks per day. These studies suggest that there is a J-curve relationship between alcohol consumption and health. That is, drinking a little is better than not drinking; and drinking a lot is worse than drinking a little.

However, using “rough” ranges of 1, 2, 3 etc. drinks per day prevents those studies from getting to a more fine-grained picture of the beneficial effects of alcohol consumption.

Contrary to popular belief, the positive health effects of moderate alcohol consumption have little, if anything, to do with polyphenols such as resveratrol. Resveratrol, once believed to be the fountain of youth, is found in the skin of red grapes.

It is in fact the alcohol content that has positive effects, apparently reducing the incidence of coronary heart disease, diabetes, hypertension, congestive heart failure, stroke, dementia, Raynaud’s phenomenon, and all-cause mortality. Raynaud's phenomenon is associated with poor circulation in the extremities (e.g., toes, fingers), which in some cases can progress to gangrene.

In most studies of the effects of alcohol consumption on health, the J-curves emerge from visual inspection of the plots of averages across ranges of consumption. Rarely do you find studies where nonlinear relationships are “discovered” by software tools such as WarpPLS, with effects being adjusted accordingly.

You do find, however, some studies that fit reasonably justified functions to the data. Di Castelnuovo and colleagues’ study, published in JAMA Internal Medicine in 2006, is probably the most widely cited among these studies. This study is a meta-analysis; i.e., a study that builds on various other empirical studies.

I think that the journal in which this study appeared was formerly known as Archives of Internal Medicine, a fairly selective and prestigious journal, even though this did not seem to be reflected in its Wikipedia article at the time of this writing.

What Di Castelnuovo and colleagues found is interesting. They fitted a bunch of nonlinear functions to the data, all with J-curve shapes. The results suggest a lot of variation in the maximum amount one can drink before mortality becomes higher than not drinking at all; that maximum amount ranges from about 4 to 6 drinks per day.

But there is little variation in one respect. The optimal amount of alcohol is somewhere between 5 and 7 g/d, which translates into about the following every day: half a can of beer, half a glass of wine, or half a “shot” of spirits. This is clearly a common trait of all of the nonlinear functions that they generated. This is illustrated in the figure below, from the article.



As you can see from the curves above, a little bit of alcohol every day seems to have a marked effect on mortality reduction. And it seems that taking small doses every day is much better than taking the equivalent dose over a longer period of time; for instance, the weekly equivalent taken once a week. This is suggested by other studies as well.
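For readers who want to see how the nadir of such a curve can be located, here is a small Python sketch. The functional form and coefficients below are invented for illustration (chosen so the minimum lands near the 5–7 g/d range); they are not the fitted functions from Di Castelnuovo and colleagues’ article.

    import math
    from scipy.optimize import minimize_scalar

    def relative_risk(dose):
        """Hypothetical J-curve: RR = exp(-0.098*sqrt(d) + 0.02*d), d in g/day.
        Coefficients are made up for illustration, not taken from the study."""
        return math.exp(-0.098 * math.sqrt(dose) + 0.02 * dose)

    # Find the dose at the bottom of the J-curve numerically.
    result = minimize_scalar(relative_risk, bounds=(0.01, 60.0), method="bounded")
    print(f"lowest mortality risk at ~{result.x:.1f} g of ethanol/day "
          f"(RR ~{relative_risk(result.x):.2f}, vs. 1.00 for abstainers)")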

The curves above do not clearly reflect a couple of problems with alcohol consumption. One is that alcohol seems to be treated by the body as a toxin, which causes some harm and some good at the same time, the good being often ascribed to hormesis. Someone who is more sensitive to alcohol’s harmful effects, on the liver for example, may not benefit as much from its positive effects.

The curves are averages fitted through data points; once the curves are drawn, the points tend to be forgotten, even though they represent real people.

The other problem with alcohol is that most people who are introduced to it in highly urbanized areas (where most people live) tend to drink it because of its mood-altering effects. This leads to a major danger of addiction and abuse. And drinking a lot of alcohol is much worse than not drinking at all.

Interestingly, in traditional Mediterranean cultures where wine is consumed regularly, people tend to frown upon drunkenness.

Wednesday, July 24, 2019

Ketosis, methylglyoxal, and accelerated aging: Probably more fiction than fact

This is a follow-up on this post. Just to recap, an interesting hypothesis has been around for quite some time about a possible negative effect of ketosis. This hypothesis argues that ketosis leads to the production of an organic compound called methylglyoxal, which is believed to be a powerful agent in the formation of advanced glycation endproducts (AGEs).

In vitro research, and research with animals (e.g., mice and cows), indeed suggests negative short-term effects of increased ketosis-induced methylglyoxal production. These studies typically deal with what appears to be severe ketosis, not the mild type induced in healthy people by very low carbohydrate diets.

However, the bulk of methylglyoxal is produced via glycolysis, a multi-step metabolic process that uses sugar to produce the body’s main energy currency – adenosine triphosphate (ATP). Ketosis is a state whereby ketones are used as a source of energy instead of glucose.

(Ketones also provide an energy source that is distinct from lipoprotein-bound fatty acids and albumin-bound free fatty acids. Those fatty acids appear to be the preferred vehicles for the use of dietary or body fat as a source of energy. Yet it seems that small amounts of ketones are almost always present in the blood, even if they do not show up in the urine.)

Thus it follows that ketosis is associated with reduced glycolysis and, consequently, reduced methylglyoxal production, since the bulk of this substance (i.e., methylglyoxal) is produced through glycolysis.

So, how can one argue that ketosis is “a recipe for accelerated AGEing”?

One guess is that ketosis is being confused with ketoacidosis, a pathological condition in which the level of circulating ketones can be as much as 40 to 80 times that found in ketosis. De Grey (2007) refers to “diabetic patients” when he talks about this possibility (i.e., the connection with accelerated AGEing), and ketoacidosis is an unfortunately common condition among those with uncontrolled diabetes.

A gentle body massage is relaxing, and thus health-promoting. Increase the pressure 40-fold, and the massage becomes a form of physical torture; certainly unhealthy. That does not mean that a gentle body massage is unhealthy.

Interestingly, ketoacidosis often happens together with hyperglycemia, so at least part of the damage associated with ketoacidosis is likely to be caused by high blood sugar levels. Ketosis, on the other hand, is not associated with hyperglycemia.

Finally, if ketosis led to accelerated AGEing to the same extent as, or worse than, chronic hyperglycemia does, where is the long-term evidence?

Since the late 1800s people have been experimenting with ketosis-inducing diets, and documenting the results. The Inuit and other groups have adopted ketosis-inducing diets for much longer, although evolution via selection might have played a role in these cases.

No one seems to have lived to be 150 years of age, but where are the reports of conditions akin to those caused by chronic hyperglycemia among the many that went “banting” in a more strict way since the late 1800s?

The arctic explorer Vilhjalmur Stefansson, who is reported to have lived much of his adult life in ketosis, died in 1962, in his early 80s. After reading about his life, few would disagree that he lived a rough life, with long periods without access to medical care. I doubt that Stefansson would have lived that long if he had suffered from untreated diabetes.

Severe ketosis, to the point of large amounts of ketones being present in the urine, may not be a natural state in which our Paleolithic ancestors lived most of the time. In modern humans, even a 24 h water fast, during an already low carbohydrate diet, may not induce ketosis of this type. Milder ketosis states, with slightly elevated concentrations of ketones showing up in blood tests, can be achieved much more easily.

In conclusion, the notion that ketosis causes accelerated aging to the same extent as chronic hyperglycemia seems more like fiction than fact.

Reference:

De Grey, A. (2007). Ending aging: The rejuvenation breakthroughs that could reverse human aging in our lifetime. New York, NY: St. Martin’s Press.

Sunday, June 23, 2019

Vitamin D production from UV radiation: The effects of total cholesterol and skin pigmentation

Our body naturally produces as much as 10,000 IU of vitamin D based on a few minutes of sun exposure when the sun is high. Getting that much vitamin D from dietary sources is very difficult, even after “fortification”.

The above refers to pre-sunburn exposure. Sunburn is not associated with increased vitamin D production; it is associated with skin damage and cancer.

Solar ultraviolet (UV) radiation is generally divided into two main types: UVB (wavelength: 280–320 nm) and UVA (320–400 nm). Vitamin D is produced primarily based on UVB radiation. Nevertheless, UVA is much more abundant, amounting to about 90 percent of the sun’s UV radiation.

UVA seems to cause the most skin damage, although there is some debate on this. If this is correct, one would expect skin pigmentation to be our body’s defense primarily against UVA radiation, not UVB radiation. If so, one’s ability to produce vitamin D based on UVB should not go down significantly as one’s skin becomes darker.

Also, vitamin D and cholesterol seem to be closely linked. Some argue that one is produced based on the other; others that they have the same precursor substance(s). Whatever the case may be, if vitamin D and cholesterol are indeed closely linked, one would expect low cholesterol levels to be associated with low vitamin D production based on sunlight.

Bogh et al. (2010) published a very interesting study; one of those studies that remain relevant as time goes by. The link to the study was provided by Ted Hutchinson in the comments section of another post on vitamin D. The study was published in a refereed journal with a solid reputation, the Journal of Investigative Dermatology.

The study by Bogh et al. (2010) is particularly interesting because it investigates a few issues on which there is a lot of speculation. Among the issues investigated are the effects of total cholesterol and skin pigmentation on the production of vitamin D from UVB radiation.

The figure below depicts the relationship between total cholesterol and vitamin D production based on UVB radiation. Vitamin D production is referred to as “delta 25(OH)D”. The univariate correlation is a fairly high and significant 0.51.


25(OH)D is the abbreviation for calcidiol, a prehormone that is produced in the liver based on vitamin D3 (cholecalciferol), and then converted in the kidneys into calcitriol, which is usually abbreviated as 1,25-(OH)2D3. The latter is the active form of vitamin D.

The table below shows 9 columns; the most relevant ones are the last pair at the right. They are the delta 25(OH)D levels for individuals with dark and fair skin after exposure to the same amount of UVB radiation. The difference in vitamin D production between the two groups is statistically indistinguishable from zero.


So there you have it. According to this study, low total cholesterol seems to be associated with an impaired ability to produce vitamin D from UVB radiation. And skin pigmentation appears to have little effect on the amount of vitamin D produced.

The study has a few weaknesses, as do almost all studies. For example, if you take a look at the second pair of columns from the right on the table above, you’ll notice that the baseline 25(OH)D is lower for individuals with dark skin. The difference was just short of being significant at the 0.05 level.

What is the problem with that? Well, one of the findings of the study was that lower baseline 25(OH)D levels were significantly associated with higher delta 25(OH)D levels. Still, the baseline difference does not seem to be large enough to fully explain the lack of difference in delta 25(OH)D levels for individuals with dark and fair skin.

A widely cited dermatology researcher, Antony Young, published an invited commentary on this study in the same journal issue (Young, 2010). The commentary points out some weaknesses in the study, but is generally favorable. The weaknesses include the use of small sub-samples.

References

Bogh, M.K.B., Schmedes, A.V., Philipsen, P.A., Thieden, E., & Wulf, H.C. (2010). Vitamin D production after UVB exposure depends on baseline vitamin D and total cholesterol but not on skin pigmentation. Journal of Investigative Dermatology, 130(2), 546–553.

Young, A.R. (2010). Some light on the photobiology of vitamin D. Journal of Investigative Dermatology, 130(2), 346–348.

Monday, May 27, 2019

The theory of supercompensation: Strength training frequency and muscle gain

Moderate strength training has a number of health benefits, and is viewed by many as an important component of a natural lifestyle that approximates that of our Stone Age ancestors. It increases bone density and muscle mass, and improves a number of health markers. Done properly, it may decrease body fat percentage.

Generally one would expect some muscle gain as a result of strength training. Men seem to be keen on upper-body gains, while women appear to prefer lower-body gains. Yet, many people do strength training for years, and experience little or no muscle gain.

Paradoxically, those people, both men and women, experience major strength gains, especially in the first few months after they start a strength training program. However, those gains are due primarily to neural adaptations, and come without any significant gain in muscle mass. This can be frustrating, especially for men. Most men are after some noticeable muscle gain as a result of strength training. (Whether that is healthy is another story, especially as one gets to extremes.)

After the initial adaptation period of “beginner” gains, typically no strength gains occur without muscle gains.

The culprits for the lack of anabolic response are often believed to be low levels of circulating testosterone and other hormones that seem to interact with testosterone to promote muscle growth, such as growth hormone. This leads many to resort to anabolic steroids, which are drugs that mimic the effects of androgenic hormones, such as testosterone. These drugs usually increase muscle mass, but have a number of negative short-term and long-term side effects.

There seems to be a better, less harmful, solution to the lack of anabolic response. Through my research on compensatory adaptation I often noticed that, under the right circumstances, people would overcompensate for obstacles posed to them. Strength training is a form of obstacle, which should generate overcompensation under the right circumstances. From a biological perspective, one would expect a similar phenomenon; a natural solution to the lack of anabolic response.

This solution is predicted by a theory that also explains a lack of anabolic response to strength training, and that unfortunately does not get enough attention outside the academic research literature. It is the theory of supercompensation, which is discussed in some detail in several high-quality college textbooks on strength training. (Unlike popular self-help books, these textbooks summarize peer-reviewed academic research, and also provide the references that are summarized.) One example is the excellent book by Zatsiorsky & Kraemer (2006) on the science and practice of strength training.

The figure below, from Zatsiorsky & Kraemer (2006), shows what happens during and after a strength training session. The level of preparedness could be seen as the load in the session, which is proportional to: the number of exercise sets, the weight lifted (or resistance overcome) in each set, and the number of repetitions in each set. The restitution period is essentially the recovery period, which must include plenty of rest and proper nutrition.


Note that toward the end there is a sideways S-like curve with a first stretch above the horizontal line and another below the line. The first stretch is the supercompensation stretch; a window in time (e.g., a 20-hour period). The horizontal line represents the baseline load, which can be seen as the baseline strength of the individual prior to the exercise session. This is where things get tricky. If one exercises again within the supercompensation stretch, strength and muscle gains will likely happen. (Usually noticeable upper-body muscle gain happens in men, because of higher levels of testosterone and of other hormones that seem to interact with testosterone.) Exercising outside the supercompensation time window may lead to no gain, or even to some loss, of both strength and muscle.

Timing strength training sessions correctly can over time lead to significant gains in strength and muscle (see middle graph in the figure below, also from Zatsiorsky & Kraemer, 2006). For that to happen, one has not only to regularly “hit” the supercompensation time window, but also progressively increase load. This must happen for each muscle group. Strength and muscle gains will occur up to a point, a point of saturation, after which no further gains are possible. Men who reach that point will invariably look muscular, in a more or less “natural” way depending on supplements and other factors. Some people seem to gain strength and muscle very easily; they are often called mesomorphs. Others are hard gainers, sometimes referred to as endomorphs (who tend to be fatter) and ectomorphs (who tend to be skinnier).


It is not easy to identify the ideal recovery and supercompensation periods. They vary from person to person. They also vary depending on types of exercise, numbers of sets, and numbers of repetitions. Nutrition also plays a role, and so do rest and stress. From an evolutionary perspective, it would seem to make sense to work all major muscle groups on the same day, and then do the same workout after a certain recovery period. (Our Stone Age ancestors did not do isolation exercises, such as bicep curls.) But this will probably make you look more like a strong hunter-gatherer than a modern bodybuilder.

To identify the supercompensation time window, one could employ a trial-and-error approach, by trying to repeat the same workout after different recovery times. Based on the literature, it would make sense to start at the 48-hour period (one full day of rest between sessions), and then move back and forth from there. A sign that one is hitting the supercompensation time window is becoming a little stronger at each workout, by performing more repetitions with the same weight (e.g., 10, from 8 in the previous session). If that happens, the weight should be incrementally increased in successive sessions. Most studies suggest that the best range for muscle gain is that of 6 to 12 repetitions in each set, but without enough time under tension gains will prove elusive.
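As a concrete illustration of this trial-and-error logic, here is a minimal Python sketch. The 6-to-12 repetition range is from the discussion above; the 5 percent weight increment and the decision rules are assumptions for illustration, not prescriptions from Zatsiorsky & Kraemer.

    # Trial-and-error progression sketch. The 6-12 rep range is from the post;
    # the 5% increment and the decision rules are illustrative assumptions.

    def plan_next_session(weight, reps_last, reps_previous):
        """Suggest next session's weight from the last two sessions' rep counts."""
        if reps_last > reps_previous and reps_last >= 12:
            # Top of the 6-12 range reached: progressively increase the load.
            return round(weight * 1.05, 1), "reps climbing; add ~5% weight"
        if reps_last > reps_previous:
            # More reps with the same weight: likely hitting the
            # supercompensation window; keep weight and recovery interval.
            return weight, "getting stronger; keep weight and recovery interval"
        # No improvement: the recovery interval may be missing the window.
        return weight, "no gain; adjust rest days (move back and forth from 48 h)"

    weight, advice = plan_next_session(weight=60.0, reps_last=10, reps_previous=8)
    print(weight, "->", advice)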

The discussion above is not aimed at professional bodybuilders. There are a number of factors that can influence strength and muscle gain other than supercompensation. (Still, supercompensation seems to be a “biggie”.) Things get trickier over time with trained athletes, as returns on effort get progressively smaller. Even natural bodybuilders appear to benefit from different strategies at different levels of proficiency. For example, changing the workouts on a regular basis seems to be a good idea, and there is a science to doing that properly. See the “Interesting links” area of this web site for several more focused resources on strength training.

Reference:

Zatsiorsky, V., & Kraemer, W.J. (2006). Science and practice of strength training. Champaign, IL: Human Kinetics.

Sunday, April 28, 2019

Subcutaneous versus visceral fat: How to tell the difference?

The photos below, from Wikipedia, show two patterns of abdominal fat deposition. The one on the left shows predominantly subcutaneous abdominal fat deposition. The one on the right is an example of visceral abdominal fat deposition, around internal organs, together with a significant amount of subcutaneous fat deposition as well.


Body fat is not an inert mass used only to store energy. Body fat can be seen as a “distributed organ”, as it secretes a number of hormones into the bloodstream. For example, it secretes leptin, which regulates hunger. It secretes adiponectin, which has many health-promoting properties. It also secretes tumor necrosis factor-alpha (more recently referred to as simply “tumor necrosis factor” in the medical literature), which promotes inflammation. Inflammation is necessary to repair damaged tissue and deal with pathogens, but too much of it does more harm than good.

How does one differentiate subcutaneous from visceral abdominal fat?

Subcutaneous abdominal fat shifts position more easily as one’s body moves. When one is standing, subcutaneous fat often tends to fold around the navel, creating a “mouth” shape. Subcutaneous fat is easier to hold in one’s hand, as shown on the left photo above. Because subcutaneous fat tends to “shift” more easily as one changes the position of the body, if you measure your waist circumference lying down and standing up, and the difference is large (a one-inch difference can be considered large), you probably have a significant amount of subcutaneous fat.

Waist circumference is a variable that reflects individual changes in body fat percentage fairly well. This is especially true as one becomes lean (e.g., around 14-17 percent or less of body fat for men, and 21-24 for women), because as that happens abdominal fat contributes to an increasingly higher proportion of total body fat. For people who are lean, a 1-inch reduction in waist circumference will frequently translate into a 2-3 percent reduction in body fat percentage. Having said that, waist circumference comparisons between individuals are often misleading. Waist-to-fat ratios tend to vary a lot among different individuals (like almost any trait). This means that someone with a 34-inch waist (measured at the navel) may have a lower body fat percentage than someone with a 33-inch waist.
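The two rules of thumb above translate directly into code. A minimal Python sketch follows; the one-inch threshold and the 2–3 percentage points per inch figure are the heuristics stated in this post, to be treated as rough indications only.

    # Rough heuristics from the discussion above, not clinical measurements.

    def subcutaneous_fat_hint(waist_standing_in, waist_lying_in):
        """A large standing-vs-lying waist difference suggests subcutaneous fat."""
        diff = waist_standing_in - waist_lying_in
        if diff >= 1.0:  # the post treats a one-inch difference as large
            return f"difference of {diff:.1f} in: significant subcutaneous fat likely"
        return f"difference of {diff:.1f} in: no strong subcutaneous-fat signal"

    def fat_pct_change_estimate(waist_change_in):
        """For lean people: ~2-3 percentage points of body fat per inch of waist."""
        return 2.0 * waist_change_in, 3.0 * waist_change_in

    print(subcutaneous_fat_hint(36.0, 34.5))
    low, high = fat_pct_change_estimate(1.0)
    print(f"1-inch waist reduction: roughly {low:.0f}-{high:.0f} percentage points")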

Subcutaneous abdominal fat is hard to mobilize; that is, it is hard to burn through diet and exercise. This is why it is often called the “stubborn” abdominal fat. One reason for the difficulty in mobilizing subcutaneous abdominal fat is that the network of blood vessels is not as dense in the area where this type of fat occurs, as it is with visceral fat. Another reason, which is related to degree of vascularization, is that subcutaneous fat is farther away from the portal vein than visceral fat. As such, it has to travel a longer distance to reach the main “highway” that will take it to other tissues (e.g., muscle) for use as energy.

In terms of health, excess subcutaneous fat is not nearly as detrimental as excess visceral fat. Excess visceral fat typically happens together with excess subcutaneous fat; but not necessarily the other way around. For instance, sumo wrestlers frequently have excess subcutaneous fat, but little or no visceral fat. The more health-detrimental effect of excess visceral fat is probably related to its proximity to the portal vein, which amplifies the negative health effects of excessive pro-inflammatory hormone secretion. Those hormones reach a major transport “highway” rather quickly.

Even though excess subcutaneous body fat is more benign than excess visceral fat, excess body fat of any kind is unlikely to be health-promoting. From an evolutionary perspective, excess body fat impaired agile movement and decreased circulating adiponectin levels; the latter leading to a host of negative health effects. In modern humans, negative health effects may be much less pronounced with subcutaneous than visceral fat, but they will still occur.

Based on studies of isolated hunter-gatherers, it is reasonable to estimate “natural” body fat levels among our Stone Age ancestors, and thus optimal body fat levels in modern humans, to be around 6–13 percent in men and 14–20 percent in women.

If you think that being overweight probably protected some of our Stone Age ancestors during times of famine, here is one interesting factoid to consider. It will take over a month for a man weighing 150 lbs and with 10 percent body fat to die from starvation, and death will not be typically caused by too little body fat being left for use as a source of energy. In starvation, normally death will be caused by heart failure, as the body slowly breaks down muscle tissue (including heart muscle) to maintain blood glucose levels.
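A quick back-of-the-envelope calculation supports the “over a month” figure. The 9 kcal per gram energy density of fat is standard; the 2,000 kcal/day expenditure and the assumption that nearly all the fat is usable are simplifications not stated in the post.

    # Back-of-the-envelope check of the starvation factoid above.

    LBS_TO_KG = 0.4536
    KCAL_PER_G_FAT = 9.0     # standard energy density of body fat
    DAILY_KCAL = 2000.0      # assumed average daily energy expenditure

    weight_lbs, body_fat_fraction = 150.0, 0.10
    fat_g = weight_lbs * LBS_TO_KG * body_fat_fraction * 1000.0
    days = fat_g * KCAL_PER_G_FAT / DAILY_KCAL
    print(f"{fat_g / 1000:.1f} kg of body fat -> roughly {days:.0f} days of energy")
    # ~6.8 kg of fat -> roughly 31 days: consistent with "over a month", with
    # death typically arriving via muscle (including heart muscle) breakdown.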
