Wednesday, July 28, 2021

What is a reasonable vitamin D level?

The figure and table below are from Vieth (1999), one of the most widely cited articles on vitamin D. The figure shows the gradual increase in blood concentrations of 25-hydroxyvitamin D, or 25(OH)D, following the start of daily vitamin D3 supplementation of 10,000 IU/day. The table shows the average levels for people living and/or working in sun-rich environments; vitamin D3 is produced by the skin based on sun exposure.


25(OH)D is also referred to as calcidiol. It is a pre-hormone produced by the liver from vitamin D3. To convert from nmol/L to ng/mL, divide by 2.496. The figure suggests that levels start to plateau at around 1 month after the beginning of supplementation, reaching a point of saturation after 2-3 months. Without supplementation or sunlight exposure, levels should go down at a comparable rate. The maximum average level shown in the table is 163 nmol/L (65 ng/mL), and refers to a sample of lifeguards.
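For readers who want to do the unit conversion themselves, here is a minimal sketch in Python (the function names are my own; the 2.496 factor follows from the molar mass of 25(OH)D, about 400.6 g/mol):

```python
# Convert 25(OH)D concentrations between nmol/L and ng/mL.
# 1 ng/mL corresponds to about 2.496 nmol/L.

def nmol_l_to_ng_ml(nmol_l):
    return nmol_l / 2.496

def ng_ml_to_nmol_l(ng_ml):
    return ng_ml * 2.496

# The lifeguard average from the table: 163 nmol/L is about 65 ng/mL.
print(round(nmol_l_to_ng_ml(163)))  # 65
```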

From the figure we can infer that people on average will plateau at approximately 130 nmol/L (52 ng/mL) after months of 10,000 IU/d supplementation. Assuming a normal distribution with a standard deviation of about 20 percent of the mean, we can expect about 68 percent of those taking that level of supplementation to be in the 42 to 63 ng/mL range.

This might be the range most of us should expect to be in at an intake of 10,000 IU/d. This is equivalent to the body’s own natural production through sun exposure.

Approximately 32 percent of the population can be expected to be outside this range. A person who is two standard deviations (SDs) above the mean (i.e., average) would be at around 73 ng/mL. Three SDs above the mean would be 83 ng/mL. Two SDs below the mean would be 31 ng/mL.
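As an illustration, the back-of-the-envelope numbers above can be reproduced with a few lines of Python (assuming, as in the text, a mean of 52 ng/mL and a standard deviation of about 20 percent of that mean):

```python
# Reproduce the rough calculation in the text: plateau mean of 52 ng/mL,
# standard deviation assumed to be about 20 percent of the mean.
mean = 52.0
sd = 0.20 * mean  # about 10.4 ng/mL

print(round(mean - sd), round(mean + sd))  # about 68 percent fall in this range
print(round(mean + 2 * sd))  # two SDs above the mean: 73
print(round(mean + 3 * sd))  # three SDs above the mean: 83
print(round(mean - 2 * sd))  # two SDs below the mean: 31
```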

There are other factors that may affect levels. For example, being overweight tends to reduce them. Excess cortisol production, from stress, may also reduce them.

Supplementing beyond 10,000 IU/d to reach levels much higher than those in the range of 42 to 63 ng/mL may not be optimal. Interestingly, one cannot overdose through sun exposure, and the idea that people do not produce vitamin D3 after 40 years of age is a myth.

One would be taking in about 14,000 IU/d of vitamin D3 by combining sun exposure with a supplemental dose of 4,000 IU/d. Clear signs of toxicity may not occur until one reaches 50,000 IU/d. Still, one may develop other complications, such as kidney stones, at levels significantly above 10,000 IU/d.

In an earlier post by Chris Masterjohn (the link no longer works), which made a different argument, somewhat similar conclusions were reached. Chris pointed out that there is a point of saturation above which the liver is unable to properly hydroxylate vitamin D3 to produce 25(OH)D.

How likely it is that a person will develop complications like kidney stones at levels above 10,000 IU/d, and what the danger threshold level could be, are hard to guess. Kidney stone incidence is a sensitive measure of possible problems; but it is, by itself, an unreliable measure. The reason is that it is caused by factors that are correlated with high levels of vitamin D, where those levels may not be the problem.

There is some evidence that kidney stones are associated with living in sunny regions. This is not, in my view, due to high levels of vitamin D3 production from sunlight. Kidney stones are also associated with chronic dehydration, and populations living in sunny regions may be at a higher than average risk of chronic dehydration. This is particularly true for sunny regions that are also very hot and/or dry.

Reference

Vieth, R. (1999). Vitamin D supplementation, 25-hydroxyvitamin D concentrations, and safety. American Journal of Clinical Nutrition, 69(5), 842-856.

Tuesday, June 22, 2021

Blood glucose control before age 55 may increase your chances of living beyond 90

I have recently read an interesting study by Yashin and colleagues (2009) at Duke University’s Center for Population Health and Aging. (The full reference to the article, and a link, are at the end of this post.) This study is a gem with some rough edges, and some interesting implications.

The study uses data from the Framingham Heart Study (FHS). The FHS, which started in the late 1940s, recruited 5209 healthy participants (2336 males and 2873 females), aged 28 to 62, in the town of Framingham, Massachusetts. At the time of Yashin and colleagues’ article publication, there were 993 surviving participants.

I rearranged figure 2 from the Yashin and colleagues article so that the two graphs (for females and males) appeared one beside the other. The result is shown below (click on it to enlarge); the caption at the bottom-right corner refers to both graphs. The figure shows the age-related trajectory of blood glucose levels, grouped by lifespan (LS), starting at age 40.


As you can see from the figure above, blood glucose levels increase with age, even for long-lived individuals (LS > 90). The increases follow a U-curve (a.k.a. J-curve) pattern; the beginning of the right side of a U curve, to be more precise. The main difference in the trajectories of the blood glucose levels is that as lifespan increases, so does the width of the U curve. In other words, in long-lived people, blood glucose increases slowly with age; particularly up to 55 years of age, when it starts increasing more rapidly.

Now, here is one of the rough edges of this study. The authors do not provide standard deviations. You can ignore the error bars around the points on the graph; they are not standard deviations. They are standard errors, which are much lower than the corresponding standard deviations. Standard errors are calculated by dividing the standard deviations by the square root of the sample sizes for each trajectory point (which the authors do not provide either), so they go up with age since progressively smaller numbers of individuals reach advanced ages.
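To make the distinction concrete, here is a small sketch (with hypothetical numbers, since the article reports neither SDs nor per-point sample sizes) of how a standard error relates to a standard deviation:

```python
import math

# Standard error of the mean: SE = SD / sqrt(n).
# The error bars in the figure are SEs, which are much smaller than the
# corresponding SDs whenever the sample size n is large.
def standard_error(sd, n):
    return sd / math.sqrt(n)

sd = 12.0  # hypothetical SD of blood glucose (mg/dl) at one trajectory point
print(standard_error(sd, 400))  # 0.6 -> many survivors, small error bar
print(standard_error(sd, 25))   # 2.4 -> few survivors at advanced ages, larger error bar
```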

So, no need to worry if your blood glucose levels are higher than those shown on the vertical axes of the graphs. (I will comment more on those numbers below.) Not everybody who lived beyond 90 had a blood glucose of around 80 mg/dl at age 40. I wouldn't be surprised if about 2/3 of the long-lived participants had blood glucose levels in the range of 65 to 95 at that age.

Here is another rough edge. It is pretty clear that the authors’ main independent variable (i.e., health predictor) in this study is average blood glucose, which they refer to simply as “blood glucose”. However, the measure of blood glucose in the FHS is a very rough estimation of average blood glucose, because they measured blood glucose levels at random times during the day. These measurements, when averaged, are closer to fasting blood glucose levels than to average blood glucose levels.

A more reliable measure of average blood glucose levels is glycated hemoglobin (HbA1c). Glucose, like most sugary substances, glycates (i.e., sticks to) hemoglobin, a protein found in red blood cells. Since red blood cells are relatively long-lived, with a turnover of about 3 months, HbA1c (given as a percentage) is a good indicator of average blood glucose levels (if you don’t suffer from anemia or a few other blood abnormalities). Based on HbA1c, one can then estimate his or her average blood glucose level for the 3 months preceding the test, using one of the following equations, depending on whether the measurement is in mg/dl or mmol/l.

    Average blood glucose (mg/dl) = 28.7 × HbA1c − 46.7

    Average blood glucose (mmol/l) = 1.59 × HbA1c − 2.59
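The two equations above are easy to implement; here is a quick sketch in Python (function names are my own):

```python
# Estimated average blood glucose from HbA1c (the two equations above).
def avg_glucose_mg_dl(hba1c_percent):
    return 28.7 * hba1c_percent - 46.7

def avg_glucose_mmol_l(hba1c_percent):
    return 1.59 * hba1c_percent - 2.59

# For example, an HbA1c of 5.0 percent corresponds to an estimated
# average blood glucose of about 97 mg/dl (about 5.4 mmol/l).
print(round(avg_glucose_mg_dl(5.0), 1))   # 96.8
print(round(avg_glucose_mmol_l(5.0), 2))  # 5.36
```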

The table below, from Wikipedia, shows average blood glucose levels corresponding to various HbA1c values. As you can see, they are generally higher than the corresponding fasting blood glucose levels would normally be (the latter is what the values on the vertical axes of the graphs above from Yashin and colleagues’ study roughly measure). This is to be expected, because blood glucose levels vary a lot during the day, and are often transiently high in response to food intake and fluctuations in various hormones. Growth hormone, cortisol and noradrenaline are examples of hormones that increase blood glucose. Only one hormone effectively decreases blood glucose levels, insulin, by stimulating glucose uptake and storage as glycogen and fat.


Nevertheless, one can reasonably expect fasting blood glucose levels to have been highly correlated with average blood glucose levels in the sample. So, in my opinion, the graphs above showing age-related blood glucose trajectories are still valid, in terms of their overall shape, but the values on the vertical axes should have been measured differently, perhaps using the formulas above.

Ironically, those who achieve low average blood glucose levels (measured based on HbA1c) by adopting a low carbohydrate diet (one of the most effective ways) frequently have somewhat high fasting blood glucose levels because of physiological (or benign) insulin resistance. Their body is primed to burn fat for energy, not glucose. Thus when growth hormone levels spike in the morning, so do blood glucose levels, as muscle cells are in glucose rejection mode. This is a benign version of the dawn effect (a.k.a. dawn phenomenon), which happens with quite a few low carbohydrate dieters, particularly with those who are deep in ketosis at dawn.

Yashin and colleagues also modeled relative risk of death based on blood glucose levels, using a fairly sophisticated mathematical model that takes into consideration U-curve relationships. What they found is intuitively appealing, and is illustrated by the two graphs at the bottom of the figure below. The graphs show how the relative risks (e.g., 1.05, on the topmost dashed line on both graphs) associated with various ranges of blood glucose levels vary with age, for both females and males.


What the graphs above are telling us is that once you reach old age, controlling your blood sugar levels is not as effective as doing it earlier, because you are more likely to die from what the authors refer to as “other causes”. For example, at the age of 90, having a blood glucose of 150 mg/dl (corrected for the measurement problem noted earlier, this would be perhaps 165 mg/dl, from HbA1c values) is likely to increase your risk of death by only 5 percent. The graphs account for the facts that: (a) blood glucose levels naturally increase with age, and (b) fewer people survive as age progresses. So having that level of blood glucose at age 60 would significantly increase relative risk of death at that age; this is not shown on the graph, but can be inferred.

Here is a final rough edge of this study. From what I could gather from the underlying equations, the relative risks shown above do not account for the effect of high blood glucose levels earlier in life on relative risk of death later in life. This is a problem, even though it does not completely invalidate the conclusion above. As noted by several people (including Gary Taubes in his book Good Calories, Bad Calories), many of the diseases associated with high blood sugar levels (e.g., cancer) often take as much as 20 years of high blood sugar levels to develop. So the relative risks shown above underestimate the effect of high blood glucose levels earlier in life.

Do the long-lived participants have some natural protection against accelerated increases in blood sugar levels, or was it their diet and lifestyle that protected them? This question cannot be answered based on the study.

Assuming that their diet and lifestyle protected them, it is reasonable to argue that: (a) if you start controlling your average blood sugar levels well before you reach the age of 55, you may significantly increase your chances of living beyond the age of 90; (b) it is likely that your blood glucose levels will go up with age, but if you can manage to slow down that progression, you will increase your chances of living a longer and healthier life; (c) you should focus your control on reliable measures of average blood glucose levels, such as HbA1c, not fasting blood glucose levels (postprandial glucose levels are also a good option, because they contribute a lot to HbA1c increases); and (d) it is never too late to start controlling your blood glucose levels, but the more you wait, the bigger is the risk.

References:

Taubes, G. (2007). Good calories, bad calories: Challenging the conventional wisdom on diet, weight control, and disease. New York, NY: Alfred A. Knopf.

Yashin, A.I., Ukraintseva, S.V., Arbeev, K.G., Akushevich, I., Arbeeva, L.S., & Kulminski, A.M. (2009). Maintaining physiological state for exceptional survival: What is the normal level of blood glucose and does it change with age? Mechanisms of Ageing and Development, 130(9), 611-618.

Sunday, May 16, 2021

The Friedewald and Iranian equations: Fasting triglycerides can seriously distort calculated LDL

Standard lipid profiles provide LDL cholesterol measures based on equations that usually have the following as their inputs (or independent variables): total cholesterol, HDL cholesterol, and triglycerides.

Yes, LDL cholesterol is not measured directly in standard lipid profile tests! This is indeed surprising, since cholesterol-lowering drugs with negative side effects are usually prescribed based on estimated (or "fictitious") LDL cholesterol levels.

The most common of these equations is the Friedewald equation. Through the Friedewald equation, LDL cholesterol is calculated as follows (where TC = total cholesterol, and TG = triglycerides). The inputs and result are in mg/dl.

    LDL = TC – HDL – TG / 5

Here is one of the problems with the Friedewald equation. Let us assume that an individual has the following lipid profile numbers: TC = 200, HDL = 50, and TG = 150. The calculated LDL will be 120. Let us assume that this same individual reduces triglycerides to 50, from the previous 150, keeping all of the other measures constant except for HDL, which goes up a bit to compensate for the small loss in total cholesterol associated with the decrease in triglycerides (there is always some loss, because the main carrier of triglycerides, VLDL, also carries some cholesterol). This would normally be seen as an improvement. However, the calculated LDL will now be 140, and a doctor will tell this person to consider taking statins!

There is evidence that, for individuals with low fasting triglycerides, a more precise equation is one that has come to be known as the “Iranian equation”. The equation has been proposed by Iranian researchers in an article published in the Archives of Iranian Medicine (Ahmadi et al., 2008), hence its nickname. Through the Iranian equation, LDL is calculated as follows. Again, the inputs and result are in mg/dl.

    LDL = TC / 1.19 + TG / 1.9 – HDL / 1.1 – 38
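To see how the two equations diverge, here is a small sketch implementing both, applied to the worked example from earlier in the post:

```python
# LDL cholesterol estimation; all inputs and results in mg/dl.
def ldl_friedewald(tc, hdl, tg):
    return tc - hdl - tg / 5.0

def ldl_iranian(tc, hdl, tg):
    return tc / 1.19 + tg / 1.9 - hdl / 1.1 - 38.0

# The example from the text: TC = 200, HDL = 50.
print(ldl_friedewald(200, 50, 150))       # 120.0
print(ldl_friedewald(200, 50, 50))        # 140.0 -> lower TG, yet higher calculated LDL
print(round(ldl_iranian(200, 50, 50), 1)) # 110.9 -> the Iranian estimate at low TG
```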

The Iranian equation is based on linear regression modeling, which is a good sign, although I would have liked it even better if it was based on nonlinear regression modeling. The reason is that relationships between variables describing health-related phenomena are often nonlinear, leading to biased linear estimations. With a good nonlinear analysis algorithm, a linear relationship will also be captured; that is, the “curve” that describes the relationship will default to a line if the relationship is truly linear (see: warppls.com).

Anyway, an online calculator that implements both equations (Friedewald and Iranian) is linked here; it was the top Google hit on a search for “Iranian equation LDL” at the time of this post’s writing.

As you will see if you try it, the online calculator linked above is useful in showing the difference in calculated LDL cholesterol, using both equations, when fasting triglycerides are very low (e.g., below 50).

The Iranian equation yields high values of LDL cholesterol when triglycerides are high; much higher than those generated by the Friedewald equation. If those are not overestimations (and there is some evidence that, if they are, it is not by much), they describe an alarming metabolic pattern, because high triglycerides are associated with small-dense LDL particles. These particles are the most potentially atherogenic of the LDL particles, in the presence of other factors such as chronic inflammation.

In other words, the Iranian equation gives a clearer idea than the Friedewald equation about the negative health effects of high triglycerides. You need a large number of small-dense LDL particles to carry a high amount of LDL cholesterol.

An even more precise measure of LDL particle configuration is the VAP test; this post has a discussion of a sample VAP test report.

Reference:

Ahmadi SA, Boroumand MA, Gohari-Moghaddam K, Tajik P, Dibaj SM. (2008). The impact of low serum triglyceride on LDL-cholesterol estimation. Archives of Iranian Medicine, 11(3), 318-21.

Sunday, April 11, 2021

Want to make coffee less acidic? Add cream to it

The table below is from a 2008 article by Ehlen and colleagues, showing the amount of erosion caused by various types of beverages when teeth were exposed to them for 25 h in vitro. Erosion depth is measured in microns. The third row shows the chance probabilities (i.e., P values) associated with the differences in erosion of enamel and root.


As you can see, even diet drinks may cause tooth erosion. That is not to say that if you drink a diet soda occasionally you will destroy your teeth, but regular drinking may be a problem. I discussed this study in a previous post. After that post was published here, some folks asked me about coffee, so I decided to do some research.

Unfortunately coffee by itself can also cause some erosion, primarily because of its acidity. Generally speaking, you want a liquid substance that you are interested in drinking to have a pH as close to 7 as possible, as this pH is neutral. Tap and mineral water have a pH that is very close to 7. Black coffee seems to have a pH of about 4.8.

Also problematic are drinks containing fermentable carbohydrates, such as sucrose, fructose, glucose, and lactose. These are fermented by acid-producing bacteria. Interestingly, when fermentable carbohydrates are consumed as part of foods that require chewing, such as fruits, acidity is either neutralized or significantly reduced by large amounts of saliva being secreted as a result of the chewing process.

So what to do about coffee?

One possible solution is to add heavy cream to it. A small amount, such as a teaspoon, appears to bring the pH in a cup of coffee to a little over 6. Another advantage of heavy cream is that it has no fermentable carbohydrates; it has no carbohydrates, period. You will have to get over the habit of drinking sweet beverages, including sweet coffee, if you were unfortunate enough to develop that habit (like so many people living in cities today).

It is not easy to find reliable pH values for various foods. I guess dentistry researchers are more interested in ways of repairing damage already done, and there doesn't seem to be much funding available for preventive dentistry research. Some pH testing results from a University of Cincinnati college biology page were available at the time of this writing; they appeared to be reasonably reliable the last time I checked them.

Thursday, March 11, 2021

The steep obesity increase in the USA in the 1980s: In a sense, it reflects a major success story

Obesity rates have increased in the USA over the years, but the steep increase starting around the 1980s is unusual. Wang and Beydoun do a good job of discussing this puzzling phenomenon, and a blog post by Discover Magazine provides a graph (see below) that clearly illustrates it.



What is the reason for this?

You may be tempted to point at increases in calorie intake and/or changes in macronutrient composition, but neither can explain this sharp increase in obesity in the 1980s. The differences in calorie intake and macronutrient composition are simply not large enough to fully account for such a steep increase. And the data is actually full of oddities.

For example, an article by Austin and colleagues (which ironically blames calorie consumption for the obesity epidemic) suggests that obese men in a NHANES (2005–2006) sample consumed only 2.2 percent more calories per day on average than normal weight men in a NHANES I (1971–1975) sample.

So, what could be the main reason for the steep increase in obesity prevalence since the 1980s?

The first clue comes from an interesting observation. If you age-adjust obesity trends (by controlling for age), you end up with a much less steep increase. The steep increase in the graph above is based on raw, unadjusted numbers. There is a higher prevalence of obesity among older people (no surprise here). And older people are people that have survived longer than younger people. (Don’t be too quick to say “duh” just yet.)

This age-obesity connection also reflects an interesting difference between humans living “in the wild” and those who do not, which becomes more striking when we compare hunter-gatherers with modern urbanites. Adult hunter-gatherers, unlike modern urbanites, do not gain weight as they age; they actually lose weight.

Modern urbanites gain a significant amount of weight, usually as body fat, particularly after age 40. The table below, from an article by Flegal and colleagues, illustrates this pattern quite clearly. Obesity prevalence tends to be highest between ages 40-59 in men; and this has been happening since the 1960s, with the exception of the most recent period listed (1999-2000).



In the 1999-2000 period obesity prevalence in men peaked in the 60-74 age range. Why? With progress in medicine, it is likely that more obese people in that age range survived (however miserably) in the 1999-2000 period. Obesity prevalence overall tends to be highest between ages 40-74 in women, which is a wider range than in men. Keep in mind that women tend to also live longer than men.

Because age seems to be associated with obesity prevalence among urbanites, it would be reasonable to look for a factor that significantly increased survival rates as one of the main reasons for the steep increase in the prevalence of obesity in the USA in the 1980s. If significantly more people were surviving beyond age 40 in the 1980s and beyond, this would help explain the steep increase in obesity prevalence. People don’t die immediately after they become obese; obesity is a “disease” that first and foremost impairs quality of life for many years before it kills.

Now look at the graph below, from an article by Armstrong and colleagues. It shows a significant decrease in mortality from infectious diseases in the USA since 1900, reaching a minimum point between 1950 and 1960 (possibly 1955), and remaining low afterwards. (The spike in 1918 is due to the influenza pandemic.) At the same time, mortality from non-infectious diseases remains relatively stable over the same period, leading to a similar decrease in overall mortality.



When proper treatment options are not available, infectious diseases kill disproportionately at ages 15 and under. Someone who was 15 years old in the USA in 1955 would have been 40 years old in 1980, if he or she survived. Had this person been obese, this would have been just in time to contribute to the steep increase in obesity trends in the USA. This increase would be cumulative; if this person were to live to the age of 70, he or she would be contributing to the obesity statistics up to 2010.

Americans are clearly eating more, particularly highly palatable industrialized foods whose calorie-to-nutrient ratio is high. Americans are also less physically active. But one of the fundamental reasons for the sharp increase in obesity rates in the USA since the early 1980s is that Americans have been surviving beyond age 40 in significantly greater numbers.

This is due to the success of modern medicine and public health initiatives in dealing with infectious diseases.

PS: It is important to point out that this post is not about the increase in American obesity in general over the years, but rather about the sharp increase in obesity since the early 1980s. A few alternative hypotheses have been proposed in the comments section, of which one seems to have been favored by various readers: a significant increase in consumption of linoleic acid (not to be confused with linolenic acid) since the early 1980s.

Sunday, February 21, 2021

The China Study II: Wheat flour, rice, and cardiovascular disease

In my last post on the China Study II, I analyzed the effect of total and HDL cholesterol on mortality from all cardiovascular diseases. The main conclusion was that total and HDL cholesterol were protective. Total and HDL cholesterol usually increase with intake of animal foods, and particularly of animal fat. The lowest mortality from all cardiovascular diseases was in the highest total cholesterol range, 172.5 to 180; and the highest mortality in the lowest total cholesterol range, 120 to 127.5. The difference was quite large; the mortality in the lowest range was approximately 3.3 times higher than in the highest.

This post focuses on the intake of two main plant foods, namely wheat flour and rice, and their relationships with mortality from all cardiovascular diseases. After many exploratory multivariate analyses, wheat flour and rice emerged as the plant foods with the strongest associations with mortality from all cardiovascular diseases. Moreover, wheat flour and rice have a strong and inverse relationship with each other, which suggests a “consumption divide”. Since the data is from China in the late 1980s, it is likely that consumption of wheat flour is even higher now. As you’ll see, this picture is alarming.

The main model and results

All of the results reported here are from analyses conducted using WarpPLS. Below is the model with the main results of the analyses. (Click on it to enlarge. Use the "CTRL" and "+" keys to zoom in, and "CTRL" and "-" to zoom out.) The arrows explore associations between variables, which are shown within ovals. The meaning of each variable is the following: SexM1F2 = sex, with 1 assigned to males and 2 to females; MVASC = mortality from all cardiovascular diseases (ages 35-69); TKCAL = total calorie intake per day; WHTFLOUR = wheat flour intake (g/day); and RICE = rice intake (g/day).


The variables to the left of MVASC are the main predictors of interest in the model. The one to the right is a control variable – SexM1F2. The path coefficients (indicated as beta coefficients) reflect the strength of the relationships. A negative beta means that the relationship is negative; i.e., an increase in a variable is associated with a decrease in the variable that it points to. The P values indicate the statistical significance of the relationship; a P lower than 0.05 generally means a significant relationship (95 percent or higher likelihood that the relationship is “real”).
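For readers unfamiliar with standardized coefficients, here is an illustration with made-up numbers. In the simple one-predictor case, the standardized coefficient equals the Pearson correlation between the two variables; the coefficients in the model above come from a multivariate estimation in WarpPLS, but the interpretation is the same:

```python
import math

# Hypothetical data, for illustration only.
x = [100, 200, 300, 400, 500]  # e.g., wheat flour intake, g/day
y = [5, 7, 6, 9, 10]           # e.g., a mortality measure

def standardize(v):
    m = sum(v) / len(v)
    sd = math.sqrt(sum((a - m) ** 2 for a in v) / len(v))
    return [(a - m) / sd for a in v]

# With both variables standardized (mean 0, SD 1), the slope of the
# best-fitting line is the standardized (beta) coefficient.
zx, zy = standardize(x), standardize(y)
beta = sum(a * b for a, b in zip(zx, zy)) / len(zx)
print(round(beta, 2))  # 0.91 -> a strong positive association
```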

In summary, the model above seems to be telling us that:

- As rice intake increases, wheat flour intake decreases significantly (beta=-0.84; P<0.01). This relationship would be the same if the arrow pointed in the opposite direction. It suggests that there is a sharp divide between rice-consuming and wheat flour-consuming regions.

- As wheat flour intake increases, mortality from all cardiovascular diseases increases significantly (beta=0.32; P<0.01). This is after controlling for the effects of rice and total calorie intake. That is, wheat flour seems to have some inherent properties that make it bad for one’s health, even if one doesn’t consume that many calories.

- As rice intake increases, mortality from all cardiovascular diseases decreases significantly (beta=-0.24; P<0.01). This is after controlling for the effects of wheat flour and total calorie intake. That is, this effect is not entirely due to rice being consumed in place of wheat flour. Still, as you’ll see later in this post, this relationship is nonlinear. Excessive rice intake does not seem to be very good for one’s health either.

- Increases in wheat flour and rice intake are significantly associated with increases in total calorie intake (betas=0.25, 0.33; P<0.01). This may be due to wheat flour and rice intake: (a) being themselves, in terms of their own caloric content, main contributors to the total calorie intake; or (b) causing an increase in calorie intake from other sources. The former is more likely, given the effect below.

- The effect of total calorie intake on mortality from all cardiovascular diseases is insignificant when we control for the effects of rice and wheat flour intakes (beta=0.08; P=0.35). This suggests that neither wheat flour nor rice exerts an effect on mortality from all cardiovascular diseases by increasing total calorie intake from other food sources.

- Being female is significantly associated with a reduction in mortality from all cardiovascular diseases (beta=-0.24; P=0.01). This is to be expected. In other words, men are women with a few design flaws, so to speak. (This situation reverses itself a bit after menopause.)

Wheat flour displaces rice

The graph below shows the shape of the association between wheat flour intake (WHTFLOUR) and rice intake (RICE). The values are provided in standardized format; e.g., 0 is the mean (a.k.a. average), 1 is one standard deviation above the mean, and so on. The curve is the best-fitting U curve obtained by the software. It actually has the shape of an exponential decay curve, which can be seen as a section of a U curve. This suggests that wheat flour consumption has strongly displaced rice consumption in several regions in China, and also that wherever rice consumption is high wheat flour consumption tends to be low.


As wheat flour intake goes up, so does cardiovascular disease mortality

The graphs below show the shapes of the association between wheat flour intake (WHTFLOUR) and mortality from all cardiovascular diseases (MVASC). In the first graph, the values are provided in standardized format; e.g., 0 is the mean (or average), 1 is one standard deviation above the mean, and so on. In the second graph, the values are provided in unstandardized format and organized in terciles (each of three equal intervals).



The curve in the first graph is the best-fitting U curve obtained by the software. It is a quasi-linear relationship. The higher the consumption of wheat flour in a county, the higher seems to be the mortality from all cardiovascular diseases. The second graph suggests that mortality in the third tercile, which represents a consumption of wheat flour of 501 to 751 g/day (a lot!), is 69 percent higher than mortality in the first tercile (0 to 251 g/day).

Rice seems to be protective, as long as intake is not too high

The graphs below show the shapes of the association between rice intake (RICE) and mortality from all cardiovascular diseases (MVASC). In the first graph, the values are provided in standardized format. In the second graph, the values are provided in unstandardized format and organized in terciles.



Here the relationship is more complex. The lowest mortality is clearly in the second tercile (206 to 412 g/day). There is a lot of variation in the first tercile, as suggested by the first graph with the U curve. (Remember, as rice intake goes down, wheat flour intake tends to go up.) The U curve here looks similar to the exponential decay curve shown earlier in the post, for the relationship between rice and wheat flour intake.

In fact, the shape of the association between rice intake and mortality from all cardiovascular diseases looks a bit like an “echo” of the shape of the relationship between rice and wheat flour intake. Here is what is creepy. This echo looks somewhat like the first curve (between rice and wheat flour intake), but with wheat flour intake replaced by “death” (i.e., mortality from all cardiovascular diseases).

What does this all mean?

- Wheat flour displacing rice does not look like a good thing. Wheat flour intake seems to have strongly displaced rice intake in the counties where it is heavily consumed. Generally speaking, that does not seem to have been a good thing. It looks like this is generally associated with increased mortality from all cardiovascular diseases.

- High glycemic index food consumption does not seem to be the problem here. Wheat flour and rice have very similar glycemic indices (but generally not glycemic loads; see below). Both lead to blood glucose and insulin spikes. Yet, rice consumption seems protective when it is not excessive. This is true in part (but not entirely) because it largely displaces wheat flour. Moreover, neither rice nor wheat flour consumption seems to be significantly associated with cardiovascular disease via an increase in total calorie consumption. This is a bit of a blow to the theory that high glycemic carbohydrates necessarily cause obesity, diabetes, and eventually cardiovascular disease.

- The problem with wheat flour is … hard to pinpoint, based on the results summarized here. Maybe it is the fact that it is an ultra-refined carbohydrate-rich food; less refined forms of wheat could be healthier. In fact, the glycemic loads of less refined carbohydrate-rich foods tend to be much lower than those of more refined ones. (Also, boiled brown rice has a glycemic load that is about one-third of that of whole wheat bread, whereas the glycemic indices are about the same.) Maybe the problem is wheat flour's gluten content. Maybe it is a combination of various factors, including these.

Reference

Kock, N. (2010). WarpPLS 1.0 User Manual. Laredo, Texas: ScriptWarp Systems.

Acknowledgment and notes

- Many thanks are due to Dr. Campbell and his collaborators for collecting and compiling the data used in this analysis. The data is from this site, created by those researchers to disseminate their work in connection with a study often referred to as the “China Study II”. It has already been analyzed by other bloggers. Notable analyses have been conducted by Ricardo at Canibais e Reis, Stan at Heretic, and Denise at Raw Food SOS.

- The path coefficients (indicated as beta coefficients) reflect the strength of the relationships; they are a bit like standard univariate (or Pearson) correlation coefficients, except that they take into consideration multivariate relationships (they control for competing effects on each variable). Whenever nonlinear relationships were modeled, the path coefficients were automatically corrected by the software to account for nonlinearity.
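The "controlling for competing effects" idea can be sketched with a toy partial-correlation computation: correlate what is left of two variables after regressing each on a control. The data and variable roles below are hypothetical, and WarpPLS computes its path coefficients differently; this only illustrates the intuition.

```python
# Minimal sketch of controlling for a competing effect: the partial
# correlation between rice intake and mortality, controlling for wheat
# flour intake. Hypothetical data; not the WarpPLS algorithm itself.
from statistics import mean

def residuals(y, x):
    """Residuals of a simple least-squares regression of y on x."""
    mx, my = mean(x), mean(y)
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)
    return [b - my - beta * (a - mx) for a, b in zip(x, y)]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

wheat = [100, 200, 300, 400, 500, 600]  # control variable (g/day)
rice  = [600, 500, 420, 300, 200, 100]  # predictor (g/day)
mort  = [4.0, 3.5, 3.6, 4.2, 5.0, 5.8]  # outcome (hypothetical rate)

plain = corr(rice, mort)
partial = corr(residuals(rice, wheat), residuals(mort, wheat))
print(round(plain, 2), round(partial, 2))
```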

- The software used here identifies non-cyclical and mono-cyclical relationships such as logarithmic, exponential, and hyperbolic decay relationships. Once a relationship is identified, data values are corrected and coefficients calculated. This is not the same as log-transforming data prior to analysis, which is widely used but only works if the underlying relationship is logarithmic. Otherwise, log-transforming data may distort the relationship even more than assuming that it is linear, which is what is done by most statistical software tools.
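The caveat about log-transforming can be seen directly: a log transform linearizes an exponential decay exactly, but leaves a hyperbolic decay curved, so a linear fit on the transformed data still misrepresents the shape. A minimal sketch with synthetic data:

```python
# Synthetic illustration: log-transforming helps only when the underlying
# relationship actually is exponential/logarithmic.
import math

xs = [0, 1, 2, 3, 4, 5]
expo = [10 * math.exp(-0.5 * x) for x in xs]  # exponential decay
hyper = [10 / (1 + 0.5 * x) for x in xs]      # hyperbolic decay

log_expo = [math.log(y) for y in expo]
log_hyper = [math.log(y) for y in hyper]

def second_diffs(seq):
    """Second differences; zero everywhere for an exactly linear sequence."""
    return [seq[i + 2] - 2 * seq[i + 1] + seq[i] for i in range(len(seq) - 2)]

print(second_diffs(log_expo))   # ~0 everywhere: the transform linearized it
print(second_diffs(log_hyper))  # clearly nonzero: still curved after the log
```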

- The R-squared values reflect the percentage of explained variance for certain variables; the higher they are, the better the model fit with the data. In complex and multi-factorial phenomena such as health-related phenomena, many would consider an R-squared of 0.20 as acceptable. Still, such an R-squared would mean that 80 percent of the variance for a particular variable is unexplained by the data.
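For reference, R-squared is one minus the ratio of residual variance to total variance; a minimal sketch with hypothetical numbers:

```python
# R-squared as the share of explained variance; hypothetical values.
from statistics import mean

actual    = [2.0, 3.1, 4.2, 4.8, 6.1]
predicted = [2.2, 3.0, 4.0, 5.0, 5.9]

ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
ss_tot = sum((a - mean(actual)) ** 2 for a in actual)
r_squared = 1 - ss_res / ss_tot

print(round(r_squared, 3))
print(f"unexplained variance: {round((1 - r_squared) * 100, 1)}%")
```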

- The P values have been calculated using a nonparametric technique, a form of resampling called jackknifing, which does not require the data to be normally distributed. This and other related techniques also tend to yield more reliable results for small samples, and for samples with outliers (as long as the outliers are “good” data, and are not the result of measurement error).
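The leave-one-out idea behind jackknifing can be sketched as follows, using hypothetical data and the simplest possible statistic, the mean:

```python
# Jackknife (leave-one-out) resampling: recompute the statistic with each
# observation dropped, then estimate its standard error without assuming
# normality. Hypothetical data.
from statistics import mean

data = [5.1, 4.8, 6.2, 5.5, 4.9, 7.0, 5.3, 5.8]
n = len(data)

# Leave-one-out estimates of the mean
loo = [mean(data[:i] + data[i + 1:]) for i in range(n)]
jk_mean = mean(loo)

# Jackknife standard error of the statistic
se = (((n - 1) / n) * sum((e - jk_mean) ** 2 for e in loo)) ** 0.5

print(round(jk_mean, 3), round(se, 3))
```

For the mean, the jackknife standard error coincides with the familiar standard error of the mean; its value is that the same recipe works for statistics with no simple closed-form error formula, such as path coefficients.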

- Only two data points per county were used (for males and females). This increased the sample size of the dataset without artificially reducing variance, which is desirable since the dataset is relatively small. This also allowed for the test of commonsense assumptions (e.g., the protective effects of being female), which is always a good idea in a complex analysis because violation of commonsense assumptions may suggest data collection or analysis error. On the other hand, it required the inclusion of a sex variable as a control variable in the analysis, which is no big deal.

- Since all the data was collected around the same time (late 1980s), this analysis assumes a somewhat static pattern of consumption of rice and wheat flour. In other words, even if variations in consumption of a particular food do lead to variations in mortality, that effect will typically take years to manifest itself. This is a major limitation of this dataset and of any related analyses.

- Mortality from schistosomiasis infection (MSCHIST) does not confound the results presented here. Only counties where no deaths from schistosomiasis infection were reported have been included in this analysis. Mortality from all cardiovascular diseases (MVASC) was measured using the variable M059 ALLVASCc (ages 35-69). See this post for other notes that apply here as well.

Sunday, January 17, 2021

Has COVID led to an increase in all-cause mortality? A look at US data from 2015 to 2020


Has COVID led to an increase in all-cause mortality? The figure below shows mortality data in the US for the 2015-2020 period. The top chart shows the absolute numbers of deaths per 1,000 people. The bottom chart shows the annual percentage changes, i.e., how much the absolute numbers have changed from the previous year.
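The bottom chart's computation is just a year-over-year percent change; the sketch below uses hypothetical placeholder values, not the actual US figures:

```python
# Year-over-year percent change in deaths per 1,000 people.
# Hypothetical placeholder values; NOT the actual US figures.
deaths_per_1000 = {2015: 8.2, 2016: 8.4, 2017: 8.5,
                   2018: 8.7, 2019: 8.8, 2020: 9.0}

years = sorted(deaths_per_1000)
pct_change = {
    y: 100 * (deaths_per_1000[y] - deaths_per_1000[y - 1])
       / deaths_per_1000[y - 1]
    for y in years[1:]
}
for y, c in sorted(pct_change.items()):
    print(y, f"{c:+.1f}%")
```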



As you can see in the top chart, the absolute numbers of deaths per 1,000 people have been going up steadily since 2015, at a rate of around 10 percent per year. This is due primarily to population ageing, which has been increasing in a very similar fashion. Since life expectancy has been generally stable in the US over the 2015-2020 period, an increase in the number of deaths is to be expected due to population ageing.

What I mean by population ageing is an increase in the average age of the population due to an increase in the proportion of older individuals (e.g., aged 65 or more) in the population. In any population where there are no immortals, this population ageing phenomenon is normally expected to cause a higher number of deaths per 1000 people.
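A two-group sketch shows why ageing alone raises the crude death rate even when age-specific mortality rates stay constant (the rates and population shares below are hypothetical):

```python
# Hypothetical age-specific death rates per 1,000 people; held constant
# while only the age composition of the population shifts.
young_rate, old_rate = 2.0, 40.0

def crude_rate(share_old):
    """Crude death rate per 1,000 for a given share of older individuals."""
    return (1 - share_old) * young_rate + share_old * old_rate

print(crude_rate(0.15))  # 15% older individuals -> 7.7 per 1,000
print(crude_rate(0.17))  # 17% older individuals -> 8.46 per 1,000
```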

Now look at the bottom chart in the figure. It shows no increase in the rate of change from 2019 to 2020. This is not what you would expect if COVID had led to an increase in all-cause mortality in 2020. In fact, based on media reports, one would expect to see a visible spike in the rate of change for 2020. If these numbers are correct, we have to conclude that COVID has NOT led to an increase in all-cause mortality.