Sunday, June 23, 2019

Vitamin D production from UV radiation: The effects of total cholesterol and skin pigmentation

Our body naturally produces as much as 10,000 IU of vitamin D after a few minutes of sun exposure when the sun is high. Getting that much vitamin D from dietary sources is very difficult, even with “fortified” foods.

The above refers to pre-sunburn exposure. Sunburn is not associated with increased vitamin D production; it is associated with skin damage and cancer.

Solar ultraviolet (UV) radiation is generally divided into two main types: UVB (wavelength: 280–320 nm) and UVA (320–400 nm). Vitamin D is produced primarily in response to UVB radiation. Nevertheless, UVA is much more abundant, accounting for about 90 percent of the sun’s UV radiation.

UVA seems to cause the most skin damage, although there is some debate on this. If this is correct, one would expect skin pigmentation to be our body’s defense primarily against UVA radiation, not UVB radiation. If so, one’s ability to produce vitamin D from UVB should not go down significantly as one’s skin becomes darker.

Also, vitamin D and cholesterol seem to be closely linked. Some argue that one is produced from the other; others, that they share the same precursor substance(s). Whatever the case may be, if vitamin D and cholesterol are indeed closely linked, one would expect low cholesterol levels to be associated with low vitamin D production from sunlight.

Bogh et al. (2010) published a very interesting study; one of those studies that remain relevant as time goes by. The link to the study was provided by Ted Hutchinson in the comments section of another post on vitamin D. The study was published in a refereed journal with a solid reputation, the Journal of Investigative Dermatology.

The study by Bogh et al. (2010) is particularly interesting because it investigates a few issues on which there is a lot of speculation. Among the issues investigated are the effects of total cholesterol and skin pigmentation on the production of vitamin D from UVB radiation.

The figure below depicts the relationship between total cholesterol and vitamin D production based on UVB radiation. Vitamin D production is referred to as “delta 25(OH)D”. The univariate correlation is a fairly high and significant 0.51.


25(OH)D is the abbreviation for calcidiol, a prehormone that is produced in the liver from vitamin D3 (cholecalciferol), and then converted in the kidneys into calcitriol, which is usually abbreviated as 1,25-(OH)2D3. The latter is the active form of vitamin D.

The table below shows 9 columns; the most relevant are the two at the far right. They show the delta 25(OH)D levels for individuals with dark and fair skin after exposure to the same amount of UVB radiation. The difference in vitamin D production between the two groups is statistically indistinguishable from zero.


So there you have it. According to this study, low total cholesterol seems to be associated with an impaired ability to produce vitamin D from UVB radiation. And skin pigmentation appears to have little effect on the amount of vitamin D produced.

The study has a few weaknesses, as do almost all studies. For example, if you take a look at the second pair of columns from the right on the table above, you’ll notice that the baseline 25(OH)D is lower for individuals with dark skin. The difference was just short of being significant at the 0.05 level.

What is the problem with that? Well, one of the findings of the study was that lower baseline 25(OH)D levels were significantly associated with higher delta 25(OH)D levels. Still, the baseline difference does not seem to be large enough to fully explain the lack of difference in delta 25(OH)D levels for individuals with dark and fair skin.

A widely cited dermatology researcher, Antony Young, published an invited commentary on this study in the same journal issue (Young, 2010). The commentary points out some weaknesses in the study, but is generally favorable. The weaknesses include the use of small sub-samples.

References

Bogh, M.K.B., Schmedes, A.V., Philipsen, P.A., Thieden, E., & Wulf, H.C. (2010). Vitamin D production after UVB exposure depends on baseline vitamin D and total cholesterol but not on skin pigmentation. Journal of Investigative Dermatology, 130(2), 546–553.

Young, A.R. (2010). Some light on the photobiology of vitamin D. Journal of Investigative Dermatology, 130(2), 346–348.

Monday, May 27, 2019

The theory of supercompensation: Strength training frequency and muscle gain

Moderate strength training has a number of health benefits, and is viewed by many as an important component of a natural lifestyle that approximates that of our Stone Age ancestors. It increases bone density and muscle mass, and improves a number of health markers. Done properly, it may decrease body fat percentage.

Generally one would expect some muscle gain as a result of strength training. Men seem to be keen on upper-body gains, while women appear to prefer lower-body gains. Yet, many people do strength training for years, and experience little or no muscle gain.

Paradoxically, those same people, both men and women, experience major strength gains, especially in the first few months after they start a strength training program. However, those gains are due primarily to neural adaptations, and come without any significant gain in muscle mass. This can be frustrating, especially for men; most men are after some noticeable muscle gain as a result of strength training. (Whether that is healthy is another story, especially as one gets to extremes.)

After this initial adaptation period of “beginner” gains, strength gains typically do not occur without muscle gains.

The culprits for the lack of anabolic response are often believed to be low levels of circulating testosterone and other hormones that seem to interact with testosterone to promote muscle growth, such as growth hormone. This leads many to resort to anabolic steroids, which are drugs that mimic the effects of androgenic hormones, such as testosterone. These drugs usually increase muscle mass, but have a number of negative short-term and long-term side effects.

There seems to be a better, less harmful, solution to the lack of anabolic response. Through my research on compensatory adaptation I often noticed that, under the right circumstances, people would overcompensate for obstacles posed to them. Strength training is a form of obstacle, which should generate overcompensation under the right circumstances. From a biological perspective, one would expect a similar phenomenon; a natural solution to the lack of anabolic response.

This solution is predicted by a theory that also explains a lack of anabolic response to strength training, and that unfortunately does not get enough attention outside the academic research literature. It is the theory of supercompensation, which is discussed in some detail in several high-quality college textbooks on strength training. (Unlike popular self-help books, these textbooks summarize peer-reviewed academic research, and also provide the references that are summarized.) One example is the excellent book by Zatsiorsky & Kraemer (2006) on the science and practice of strength training.

The figure below, from Zatsiorsky & Kraemer (2006), shows what happens during and after a strength training session. The level of preparedness can be seen as the load in the session, which is proportional to: the number of exercise sets, the weight lifted (or resistance overcome) in each set, and the number of repetitions in each set. The restitution period is essentially the recovery period, which must include plenty of rest and proper nutrition.
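To make that notion of load concrete, a common proxy in the training literature is “volume load”: sets times repetitions times weight, summed over the exercises in a session. Below is a minimal sketch in Python; the exercises and numbers are made up for illustration, and this is just one of several possible load measures.

    # Volume load: a rough proxy for session load, computed as
    # sets x repetitions x weight, summed across exercises.
    def volume_load(session):
        return sum(sets * reps * weight for sets, reps, weight in session)

    # Hypothetical session: (sets, reps, weight in lbs) per exercise.
    session = [
        (3, 8, 185),  # squat
        (3, 8, 135),  # bench press
        (3, 8, 225),  # deadlift
    ]

    print(volume_load(session))  # 13080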


Note that toward the end there is a sideways S-like curve with a first stretch above the horizontal line and another below the line. The first stretch is the supercompensation stretch; a window in time (e.g., a 20-hour period). The horizontal line represents the baseline load, which can be seen as the baseline strength of the individual prior to the exercise session. This is where things get tricky. If one exercises again within the supercompensation stretch, strength and muscle gains will likely happen. (Usually noticeable upper-body muscle gain happens in men, because of higher levels of testosterone and of other hormones that seem to interact with testosterone.) Exercising outside the supercompensation time window may lead to no gain, or even to some loss, of both strength and muscle.

Timing strength training sessions correctly can over time lead to significant gains in strength and muscle (see middle graph in the figure below, also from Zatsiorsky & Kraemer, 2006). For that to happen, one has not only to regularly “hit” the supercompensation time window, but also progressively increase load. This must happen for each muscle group. Strength and muscle gains will occur up to a point, a point of saturation, after which no further gains are possible. Men who reach that point will invariably look muscular, in a more or less “natural” way depending on supplements and other factors. Some people seem to gain strength and muscle very easily; they are often called mesomorphs. Others are hard gainers, sometimes referred to as endomorphs (who tend to be fatter) and ectomorphs (who tend to be skinnier).


It is not easy to identify the ideal recovery and supercompensation periods. They vary from person to person. They also vary depending on types of exercise, numbers of sets, and numbers of repetitions. Nutrition also plays a role, and so do rest and stress. From an evolutionary perspective, it would seem to make sense to work all major muscle groups on the same day, and then do the same workout after a certain recovery period. (Our Stone Age ancestors did not do isolation exercises, such as bicep curls.) But this will probably make you look more like a strong hunter-gatherer than a modern bodybuilder.

To identify the supercompensation time window, one could employ a trial-and-error approach, by trying to repeat the same workout after different recovery times. Based on the literature, it would make sense to start at the 48-hour period (one full day of rest between sessions), and then move back and forth from there. A sign that one is hitting the supercompensation time window is becoming a little stronger at each workout, by performing more repetitions with the same weight (e.g., 10, from 8 in the previous session). If that happens, the weight should be incrementally increased in successive sessions. Most studies suggest that the best range for muscle gain is that of 6 to 12 repetitions in each set, but without enough time under tension gains will prove elusive.
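To make the trial-and-error idea concrete, here is a toy sketch of the decision rule in Python. The 48-hour starting point comes from the discussion above; the 12-hour adjustment step and the 5-lb increment are illustrative assumptions, not values from the literature.

    # Toy version of the trial-and-error search for the
    # supercompensation window. All thresholds are assumptions.
    def next_session(rest_hours, reps_now, reps_before, weight):
        if reps_now > reps_before:
            # More reps with the same weight: likely inside the
            # window; keep the rest period and raise the load.
            return rest_hours, weight + 5
        # No improvement: likely outside the window; probe a longer
        # rest period next time (shorter periods can be probed too).
        return rest_hours + 12, weight

    rest, weight = 48, 100  # start at 48 hours, as suggested above
    rest, weight = next_session(rest, reps_now=10, reps_before=8, weight=weight)
    print(rest, weight)  # 48 105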

The discussion above is not aimed at professional bodybuilders. There are a number of factors other than supercompensation that can influence strength and muscle gain. (Still, supercompensation seems to be a “biggie”.) Things get trickier over time with trained athletes, as returns on effort get progressively smaller. Even natural bodybuilders appear to benefit from different strategies at different levels of proficiency. For example, changing the workouts on a regular basis seems to be a good idea, and there is a science to doing that properly. See the “Interesting links” area of this web site for several more focused resources on strength training.

Reference:

Zatsiorsky, V., & Kraemer, W.J. (2006). Science and practice of strength training. Champaign, IL: Human Kinetics.

Sunday, April 28, 2019

Subcutaneous versus visceral fat: How to tell the difference?

The photos below, from Wikipedia, show two patterns of abdominal fat deposition. The one on the left is predominantly of subcutaneous abdominal fat deposition. The one on the right is an example of visceral abdominal fat deposition, around internal organs, together with a significant amount of subcutaneous fat deposition as well.


Body fat is not an inert mass used only to store energy. Body fat can be seen as a “distributed organ”, as it secretes a number of hormones into the bloodstream. For example, it secretes leptin, which regulates hunger. It secretes adiponectin, which has many health-promoting properties. It also secretes tumor necrosis factor-alpha (more recently referred to as simply “tumor necrosis factor” in the medical literature), which promotes inflammation. Inflammation is necessary to repair damaged tissue and deal with pathogens, but too much of it does more harm than good.

How does one differentiate subcutaneous from visceral abdominal fat?

Subcutaneous abdominal fat shifts position more easily as one’s body moves. When one is standing, subcutaneous fat often tends to fold around the navel, creating a “mouth” shape. Subcutaneous fat is easier to hold in one’s hand, as shown on the left photo above. Because subcutaneous fat tends to “shift” more easily as one changes the position of the body, if you measure your waist circumference lying down and standing up, and the difference is large (a one-inch difference can be considered large), you probably have a significant amount of subcutaneous fat.

Waist circumference is a variable that reflects individual changes in body fat percentage fairly well. This is especially true as one becomes lean (e.g., around 14-17 percent or less of body fat for men, and 21-24 for women), because as that happens abdominal fat contributes to an increasingly higher proportion of total body fat. For people who are lean, a 1-inch reduction in waist circumference will frequently translate into a 2-3 percent reduction in body fat percentage. Having said that, waist circumference comparisons between individuals are often misleading. Waist-to-fat ratios tend to vary a lot among different individuals (like almost any trait). This means that someone with a 34-inch waist (measured at the navel) may have a lower body fat percentage than someone with a 33-inch waist.

Subcutaneous abdominal fat is hard to mobilize; that is, it is hard to burn through diet and exercise. This is why it is often called the “stubborn” abdominal fat. One reason for the difficulty in mobilizing subcutaneous abdominal fat is that the network of blood vessels serving it is not as dense as the one serving visceral fat. Another reason, related to the degree of vascularization, is that subcutaneous fat is farther away from the portal vein than visceral fat. As such, the fat it releases has to travel a longer distance to reach the main “highway” that will take it to other tissues (e.g., muscle) for use as energy.

In terms of health, excess subcutaneous fat is not nearly as detrimental as excess visceral fat. Excess visceral fat typically happens together with excess subcutaneous fat; but not necessarily the other way around. For instance, sumo wrestlers frequently have excess subcutaneous fat, but little or no visceral fat. The more health-detrimental effect of excess visceral fat is probably related to its proximity to the portal vein, which amplifies the negative health effects of excessive pro-inflammatory hormone secretion. Those hormones reach a major transport “highway” rather quickly.

Even though excess subcutaneous body fat is more benign than excess visceral fat, excess body fat of any kind is unlikely to be health-promoting. From an evolutionary perspective, excess body fat impaired agile movement and decreased circulating adiponectin levels; the latter leading to a host of negative health effects. In modern humans, negative health effects may be much less pronounced with subcutaneous than visceral fat, but they will still occur.

Based on studies of isolated hunter-gatherers, it is reasonable to estimate “natural” body fat levels among our Stone Age ancestors, and thus optimal body fat levels in modern humans, to be around 6–13 percent in men and 14–20 percent in women.

If you think that being overweight probably protected some of our Stone Age ancestors during times of famine, here is one interesting factoid to consider. It will take over a month for a man weighing 150 lbs and with 10 percent body fat to die from starvation, and death will typically not be caused by too little body fat being left for use as a source of energy. In starvation, death is normally caused by heart failure, as the body slowly breaks down muscle tissue (including heart muscle) to maintain blood glucose levels.
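Here is the back-of-the-envelope arithmetic behind that factoid, assuming roughly 3,500 kcal of usable energy per pound of body fat and a 2,000 kcal/day energy expenditure (both round-number assumptions):

    # Rough starvation arithmetic for a 150-lb man at 10 percent body fat.
    # The 3,500 kcal/lb and 2,000 kcal/day figures are round-number
    # assumptions, not measurements.
    fat_lbs = 150 * 0.10                   # 15 lbs of body fat
    days_from_fat = fat_lbs * 3500 / 2000  # ~26 days from fat alone
    print(days_from_fat)
    # Muscle breakdown (to keep blood glucose up) stretches survival
    # past a month, at a serious cost to the heart.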

References:

Arner, P. (1984). Site differences in human subcutaneous adipose tissue metabolism in obesity. Aesthetic Plastic Surgery, 8(1), 13-17.

Brooks, G.A., Fahey, T.D., & Baldwin, K.M. (2005). Exercise physiology: Human bioenergetics and its applications. Boston, MA: McGraw-Hill.

Fleck, S.J., & Kraemer, W.J. (2004). Designing resistance training programs. Champaign, IL: Human Kinetics.

Taubes, G. (2007). Good calories, bad calories: Challenging the conventional wisdom on diet, weight control, and disease. New York, NY: Alfred A. Knopf.

Friday, March 22, 2019

Total cholesterol and cardiovascular disease: A U-curve relationship

The hypothesis that blood cholesterol levels are positively correlated with heart disease (the lipid hypothesis) dates back to Rudolph Virchow in the mid-1800s.

One famous study that supported this hypothesis was Ancel Keys's Seven Countries Study, conducted between the 1950s and 1970s. This study eventually served as the foundation on which much of the advice that we receive today from doctors is based, even though several other studies have been published since that provide little support for the lipid hypothesis.

The graph below (from O Primitivo) shows the results of one study, involving many more countries than Keys's Seven Countries Study, that actually suggests a NEGATIVE linear correlation between total cholesterol and cardiovascular disease.


Now, most relationships in nature are nonlinear, with quite a few following a pattern that looks like a U-curve (plain or inverted), sometimes called a J-curve. The graph below (also from O Primitivo) shows the U-curve relationship between total cholesterol and mortality, with cardiovascular disease mortality indicated by the dotted red line at the bottom.

This graph was obtained through a nonlinear analysis, and I think it provides a better picture of the relationship between total cholesterol (TC) and mortality. Based on this graph, the best TC range is somewhere between 210, where cardiovascular disease mortality is minimized, and 220, where total mortality is minimized.

The total mortality curve is the one indicated by the solid blue line at the top. It suggests that mortality increases sharply as TC decreases below 200.

Now, these graphs relate TC with disease and mortality, and say nothing about LDL cholesterol (LDL). In my own experience, and that of many people I know, a TC of about 200 will typically be associated with a slightly elevated LDL (e.g., 110 to 150), even if one has a high HDL cholesterol (i.e., greater than 60).

Yet, most people who have an LDL greater than 100 will be told by their doctors, usually with the best of intentions, to take statins, so that they can "keep their LDL under control". (LDL levels are usually calculated, not measured directly, which itself creates a whole new set of problems.)

Alas, reducing LDL to 100 or less will typically reduce TC to below 200. If we go by the graphs above, especially the one showing the U-curves, these folks' risk of cardiovascular disease and mortality will go up - exactly the opposite of the effect that they and their doctors expected. And that will cost them financially as well, as statin drugs are expensive, in part to pay for all those TV ads.

Wednesday, February 27, 2019

Want to improve your cholesterol profile? Replace refined carbs and sugars with saturated fat and cholesterol in your diet

An interesting study by Clifton and colleagues (1998; full reference and link at the end of this post) looked at whether LDL cholesterol particle size distribution at baseline (i.e., at the beginning of the study) was a determinant of lipid profile changes in each of two diets – one low and the other high in fat. This study highlights a few interesting points made in a previous post; points that are largely unrelated to the main goal or findings of the study, but that are supported by its side findings:

- As one increases dietary cholesterol and fat consumption, particularly saturated fat, circulating HDL cholesterol increases significantly. This happens whether one is taking niacin or not, although niacin seems to help, possibly as an independent (not moderating) factor. Increasing serum vitamin D levels, which can be done through sunlight exposure and supplementation, is also known to increase circulating HDL cholesterol.

- As one increases dietary cholesterol and fat consumption, particularly saturated fat, triglycerides in the fasting state (i.e., measured after an 8-hour fast) decrease significantly, particularly on a low carbohydrate diet. Triglycerides in the fasting state are negatively correlated with HDL cholesterol; they go down as HDL cholesterol goes up. This happens whether or not one is taking niacin or supplementing omega 3 fats, although these seem to help, possibly as independent factors.

- If one increases dietary fat intake, without also decreasing carbohydrate intake (particularly in the form of refined grains and sugars), LDL cholesterol will increase. Even so, LDL particle sizes will shift toward the more benign, larger forms. Not all LDL particles change to benign forms, and there seem to be some genetic factors that influence this. LDL particles larger than 26 nm in diameter simply cannot pass through the gaps in the endothelium, the thin layer of cells lining the interior surface of arteries, and thus do not induce plaque formation.

The study by Clifton and colleagues (1998) involved 54 men and 51 women with a wide range of lipid profiles. They first underwent a 2-week low fat period, after which they were given two liquid supplements in addition to their low fat diet, for a period of 3 weeks. One of the liquid supplements contained 31 to 40 g of fat, and 650 to 845 mg of cholesterol. The other was fat and cholesterol free.

Studies that adopt a particular diet at baseline have the advantage of starting all participants from a uniform diet across conditions. They also typically have one common characteristic: the baseline diet reflects the authors’ beliefs about what an ideal diet is. That is not always the case, of course. If it was indeed the case here, then we have a particularly interesting study, because the side findings discussed below contradicted the authors’ beliefs.

The table below shows the following measures for the participants in the study: age, body mass index (BMI), waist-to-hip ratio (WHR), total cholesterol, triglycerides, low-density lipoprotein (LDL) cholesterol, and three subtypes of high-density lipoprotein (HDL) cholesterol. LDL cholesterol is colloquially known as the “bad” type, and HDL as the good one (which is an oversimplification). In short, the participants were overweight, middle-aged men and women with relatively poor lipid profiles.


At the bottom of the table is the note “P < 0.001”, following a small “a”. This essentially means that on the rows marked with an “a”, like the “WHR” row, the difference between the averages (e.g., 0.81 for women and 0.93 for men, in the WHR row) is very unlikely to be due to chance alone. More precisely, in the case of a P < 0.001, the likelihood that the difference is due to chance is lower than 0.001, or 0.1 percent. Usually a difference between averages (a.k.a. means) associated with a P < 0.05 is considered statistically significant.
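For readers curious about what is behind such P values, here is a minimal sketch of a two-sample comparison of means in Python. The WHR-like numbers are made up for illustration; they are not the study’s data.

    # Two-sample t-test: the P value estimates the probability of seeing
    # a difference in means at least this large if chance alone were at
    # work. Numbers below are illustrative, not from Clifton et al.
    from scipy import stats

    whr_women = [0.79, 0.82, 0.80, 0.83, 0.81]
    whr_men = [0.92, 0.94, 0.91, 0.95, 0.93]

    t_stat, p_value = stats.ttest_ind(whr_women, whr_men)
    print(p_value)  # far below 0.001: chance alone is very unlikely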

Since the LDL cholesterol concentrations (as well as other lipoprotein concentrations) are listed on the table in mmol/L, and many people receive those measures in mg/dL in blood lipid profile test reports, below is a conversion table for LDL cholesterol (from: Wikipedia).
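If the table is not handy, the conversion is a single fixed factor: cholesterol in mg/dL is the mmol/L value multiplied by about 38.67, which follows from cholesterol’s molar mass of roughly 386.7 g/mol. A minimal sketch:

    # Cholesterol unit conversion: 1 mmol/L = ~38.67 mg/dL.
    # (Triglycerides use a different factor, ~88.5, because their
    # molar mass differs.)
    CHOLESTEROL_FACTOR = 38.67

    def mmol_to_mgdl(mmol_per_l):
        return mmol_per_l * CHOLESTEROL_FACTOR

    def mgdl_to_mmol(mg_per_dl):
        return mg_per_dl / CHOLESTEROL_FACTOR

    print(round(mmol_to_mgdl(3.0)))  # ~116 mg/dL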


The table below shows the dietary intake in the low and high fat diets. Note that in the high fat diet, not only is the fat intake higher, but so is the cholesterol intake. The latter is significantly higher: more than 4 times the intake in the low fat diet, and about 2.5 times the daily value recommended by the U.S. Food and Drug Administration. The total calorie intake is reported as slightly lower in the high fat diet than in the low fat diet.


Note that the largest increase was in saturated fat, followed by an almost equally large increase in monounsaturated fat. This, together with the increase in cholesterol, mimics a move to a diet where fatty meat and organs are consumed in higher quantities, with a corresponding reduction in the intake of refined carbohydrates (e.g., bread, pasta, sugar, potatoes) and lean meats.

Finally, the table below shows the changes in lipid profiles in the low and high fat diets. Note that all subtypes of HDL (or "good") cholesterol concentrations were significantly higher in the high fat diet, which is very telling, because HDL cholesterol concentrations are much better predictors of cardiovascular disease than LDL or total cholesterol concentrations. The higher the HDL cholesterol, the lower the risk of cardiovascular disease.


In the table above, we also see that triglycerides are significantly lower in the high fat diet, which is also good, because high fasting triglyceride concentrations are associated with cardiovascular disease and also insulin resistance (which is associated with diabetes).

However, the total and LDL cholesterol were also significantly higher in the high fat compared to the low fat diet. Is this as bad as it sounds? Not when we look at other factors that are not clear from the tables in the article.

One of those factors is the likely change in LDL particle size. LDL particle sizes almost always increase with significant increases in HDL; frequently going up in diameter beyond 26 nm, and thus passing the threshold beyond which an LDL particle can penetrate the endothelium and help form a plaque.

Another important factor to take into consideration is the somewhat strange decision by the authors to use the Friedewald equation to estimate the LDL concentrations in the low and high fat diets. Through the Friedewald equation, LDL is calculated as follows (where TC is total cholesterol, and all concentrations are in mg/dL):

    LDL = TC – HDL – Triglycerides / 5

Here is one of the problems with the Friedewald equation. Let us assume that an individual has the following lipid profile: TC = 200, HDL = 50, and trigs. = 150. The calculated LDL will be 120. Now let us assume that this same individual reduces trigs. to 50, from the previous 150, keeping all other measures constant. This is a major improvement. Yet, the calculated LDL will now be 140, and a doctor will tell this person to consider taking statins!
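The arithmetic of that example, spelled out in a few lines:

    # Friedewald equation (all values in mg/dL).
    def friedewald_ldl(tc, hdl, trigs):
        return tc - hdl - trigs / 5

    print(friedewald_ldl(200, 50, 150))  # 120.0
    print(friedewald_ldl(200, 50, 50))   # 140.0: trigs improved by 100,
                                         # yet calculated LDL went UP by 20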

By the way, most people who do a blood test and get their lipid profile report also get their LDL calculated through the Friedewald equation. Usually this is indicated through a "CALC" note next to the description of the test or the calculated LDL number.

Finally, total cholesterol is not a very useful measure, because an elevated total cholesterol may be primarily reflecting an elevated HDL, which is healthy. Also, a slightly elevated total cholesterol seems to be protective, as it is associated with reduced overall mortality and also reduced mortality from cardiovascular disease, according to U-curve regression studies comparing mortality and total cholesterol levels in different countries.

We do not know for sure that the participants in this study were consuming a lot of refined carbohydrates and/or sugars at baseline. But it is a safe bet that they were, since they were consuming 214 g of carbohydrates per day. It is difficult, although not impossible, to eat that many carbohydrates per day by eating only vegetables and fruits, which are mostly water. Consumption of starches makes it easier to reach that level.

This is why when one goes on a paleo diet, he or she significantly reduces the amount of dietary carbohydrates; even more so on a targeted low carbohydrate diet, such as the Atkins diet. Richard K. Bernstein, who is a type 1 diabetic and has followed a strict low carbohydrate diet for most of his adult life, had the following lipid profile at 72 years of age: HDL = 118, LDL = 53, trigs. = 45. His fasting blood sugar was reportedly 83 mg/dl. Click here to listen to an interview with Dr. Bernstein on The Livin' La Vida Low-Carb Show.

The lipid profile improvement observed (e.g., a 14 percent increase in HDL from baseline for men, and about half that for women, in only 3 weeks) was very likely due to an increase in dietary saturated fat and cholesterol combined with a decrease in refined carbohydrates and sugars. The improvement would probably have been even more impressive with a higher increase in saturated fat, as long as it was accompanied by the elimination of refined carbohydrates and sugars from the participants’ diets.

Reference:

Clifton, P.M., Noakes, M., & Nestel, P.J. (1998). LDL particle size and LDL and HDL cholesterol changes with dietary fat and cholesterol in healthy subjects. Journal of Lipid Research, 39, 1799–1804.

Monday, January 28, 2019

What should be my HDL cholesterol?

HDL cholesterol levels are a rough measure of HDL particle quantity in the blood. They actually tell us next to nothing about HDL particle type, although HDL cholesterol increases are usually associated with increases in LDL particle size. This is a good thing, since small-dense LDL particles are associated with increased cardiovascular disease.

Most blood lipid panels reviewed by family doctors with patients give information about HDL status through measures of HDL cholesterol, provided in one of the standard units (e.g., mg/dl).

Study after study shows that HDL cholesterol levels, although imprecise, are a much better predictor of cardiovascular disease than LDL or total cholesterol levels. How high should one’s HDL cholesterol be? The answer is somewhat dependent on each individual’s health profile, but most data suggest that a level greater than 60 mg/dl (1.55 mmol/l) is close to optimal for most people.

The figure below (from Eckardstein, 2008; full reference at the end of this post) plots incidence of coronary events in men (on the vertical axis), over a period of 10 years, against HDL cholesterol levels (on the horizontal axis). Note: IFG = impaired fasting glucose. This relationship is similar for women, particularly post-menopausal women. Pre-menopausal women usually have higher HDL cholesterol levels than men, and a low incidence of coronary events.


From the figure above, one can say that a diabetic man with about 55 mg/dl of HDL cholesterol will have approximately the same chance, on average, of having a coronary event (a heart attack) as a man with no risk factors and about 20 mg/dl of HDL cholesterol. That chance will be about 7 percent. With 20 mg/dl of HDL cholesterol, the chance of a diabetic man having a coronary event would approach 50 percent.

We can also conclude from the figure above that a man with no risk factors will have a 5 percent chance of having a coronary event if his HDL cholesterol is about 25 mg/dl, and about a 2 percent chance if his HDL cholesterol is greater than 60 mg/dl. This is a 60 percent reduction in a risk that was low to start with because of the absence of risk factors.
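The 60 percent figure is simple arithmetic on the two risks read off the graph:

    # Relative risk reduction when HDL goes from ~25 to >60 mg/dl,
    # for a man with no other risk factors (risks read off the figure).
    risk_low_hdl = 0.05   # ~5 percent chance of a coronary event
    risk_high_hdl = 0.02  # ~2 percent chance

    rrr = (risk_low_hdl - risk_high_hdl) / risk_low_hdl
    print(round(rrr, 2))  # 0.6, i.e., a 60 percent relative reduction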

HDL cholesterol levels greater than 60 are associated with significantly reduced risks of coronary events, particularly for those with diabetes (the graph does not take diabetes type into consideration). Much higher levels of HDL cholesterol (beyond 60) do not seem to be associated with much lower risk of coronary events.

Conversely, a very low HDL cholesterol level (below 25) is a major risk factor when other risk factors are also present, particularly diabetes, hypertension (high blood pressure), and familial hypercholesterolemia (gene-induced, very elevated LDL cholesterol).

It is not yet clear whether HDL cholesterol is a cause of reduced cardiovascular disease, or just a marker of other health factors that lead to reduced risk for cardiovascular disease. Much of the empirical evidence suggests a causal relationship, and if this is the case then it may be a good idea to try to increase HDL levels. Even if HDL cholesterol is just a marker, the same strategy that increases it may also have a positive impact on the real causative factor of which HDL cholesterol is a marker.

What can one do to increase his or her HDL cholesterol? One way is to replace refined carbs and sugars with saturated fat and cholesterol in one’s diet. (I know this sounds counterintuitive, but it seems to work.) Another is to increase one’s vitamin D status, through sun exposure or supplementation.

Other therapeutic interventions can also be used to increase HDL; some more natural than others. The figure below (also from Eckardstein, 2008) shows the maximum effects of several therapeutic interventions to increase HDL cholesterol.


Among the therapeutic interventions shown in the figure above, taking nicotinic acid (niacin) in pharmacological doses, of 1 to 3 g per day (higher dosages may be toxic), is by far the most effective way of increasing one’s HDL cholesterol. Only the niacin that causes flush is effective in this respect. No-flush niacin preparations may have some anti-inflammatory effects, but do not cause increases in HDL cholesterol.

Rimonabant, which is second to niacin in its effect on HDL cholesterol, is an appetite suppressant that has been associated with serious side effects and, to the best of my knowledge, has been largely banned from use in pharmaceutical drugs.

Third in terms of effectiveness, among the factors shown in the figure, is moderate alcohol consumption. Running about 19 miles per week (2.7 miles per day) and taking fibrates are tied for fourth place.

Many people think that they are having a major allergic reaction, and have a panic attack, when they experience the niacin flush. This usually happens several minutes after taking niacin, and depends on the dose and on whether the niacin was consumed with food. It is not uncommon for one’s entire torso to turn hot red, as though the person had a major sunburn. This reaction is harmless, and usually disappears after several minutes.

One could say that, with niacin: no “pain” (i.e., flush), no gain.

Reference:

von Eckardstein, A. (2008). HDL – a difficult friend. Drug Discovery Today: Disease Mechanisms, 5(3), 315-324.

Saturday, December 22, 2018

Applied evolutionary thinking: Darwin meets Washington

Charles Darwin, perhaps one of the greatest scholars of all time, thought about his theory of mutation, inheritance, and selection of biological traits for more than 20 years, and finally published it as a book in 1859.  At that time, many animal breeders must have said something like this: “So what? We knew this already.”

In fact, George Washington, who died in 1799 (many years before Darwin’s famous book came out), had tried his hand at what today would be called “genetic engineering.” He produced at least a few notable breeds of domestic animals through selective breeding, including a breed of giant mules – the “Mammoth Jackstock” breed. Those mules are so big and strong that they were used to pull large boats filled with coal along artificial canals in Pennsylvania.

Washington learned the basic principles of animal breeding from others, who learned it from others, and so on. Animal breeding has a long tradition.

So, not only did animal breeders like George Washington know about the principles of mutation, inheritance, and selection of biological traits; they had also been putting that knowledge into practice for quite some time before Darwin’s famous book “The Origin of Species” was published.

Yet, Darwin’s theory has applications that extend well beyond animal breeding. There are thousands of phenomena that would look very “mysterious” today without Darwin’s theory. Many of those phenomena apply to nutrition and lifestyle, as we have been seeing lately with the paleo diet movement. Among the most amazing and counterintuitive are those in connection with the design of our brain.

Recent research, for instance, suggests that “surprise” improves cognition. Let me illustrate this with a simple example. If you were studying a subject online that required memorization of key pieces of information (say, historical facts) and a surprise stimulus was “thrown” at you (say, a video clip of an attacking rattlesnake was shown on the screen), you would remember the key pieces of information (about historical facts) much better than if the surprise stimulus was not present!

The underlying Darwinian reason for this phenomenon is that it is adaptively advantageous for our brain to enhance our memory in dangerous situations (e.g., an attack by a venomous snake), because that would help us avoid those situations in the future (Kock et al., 2008; references listed at the end of this post). Related mental mechanisms increased our ancestors’ chances of survival over many generations, and became embedded in our brain’s design.

Animal breeders knew that they could apply selection, via selective breeding, to any population of animals, and thus make certain traits evolve in a matter of a few dozen generations or less. This is known as artificial selection. Among those traits were metabolic traits. For example, a population of lambs may be bred to grow fatter on the same amount of food as leaner breeds.

Forced natural selection may have been imposed on some of our ancestors, as I argue in this post, leading metabolic traits to evolve in as little as 396 years, or even less, depending on the circumstances.

In a sense, forced selection would be a bit like artificial selection. If a group of our ancestors became geographically isolated from others, in an environment where only certain types of food were available, physiological and metabolic adaptations to those types of food might evolve. This is also true for the adoption of cultural practices; culture can also strongly influence evolution (see, e.g., McElreath & Boyd, 2007).

This is why it is arguably a good idea for people to look at their background (i.e., learn about their ancestors): they may have inherited genes that predispose them to function better with certain types of diets and lifestyles. That can help them better tailor their diets to their genetic makeup, and also understand why certain diets work for some people but not for others. (This is essentially what medical doctors do, on a smaller time scale, when they take a patient’s parents’ health history into consideration when dispensing medical advice.)

By ancestors I am not talking about Homo erectus here, but about ancestors that lived 3,000, 1,000, or even 500 years ago – at times when medical care and other modern amenities were not available, and selection pressures were thus stronger. For example, if your not-so-distant ancestors consumed plenty of dairy, chances are you are better adapted to consume dairy than people whose ancestors did not.

Very recent food inventions, like refined carbohydrates, refined sugars, and hydrogenated fats, are too new to have influenced the genetic makeup of anybody living today. So, chances are, they are bad for the vast majority of us. (A small percentage of the population may not develop any hint of the diseases of civilization after consuming them for years, but they are not going to be as healthy as they could be.) Other, not so recent, food inventions, such as olive oil, certain types of bread, and certain types of dairy, may be better for some people than for others.

References:

Kock, N., Chatelain-Jardón, R., & Carmona, J. (2008). An experimental study of simulated web-based threats and their impact on knowledge communication effectiveness. IEEE Transactions on Professional Communication, 51(2), 183-197.

McElreath, R., & Boyd, R. (2007). Mathematical models of social evolution: A guide for the perplexed. Chicago, IL: The University of Chicago Press.