Monday, November 10, 2014

Can salmon be a rich source of calcium?


Removing the bones from cooked fish before eating the flesh is not only a waste of mineral nutrients. In some cases it is also difficult, and leads to a lot of wasted meat.

We know that many ancestral cultures employed slow-cooking techniques and tools, such as earth ovens (a.k.a. cooking pits; see ). Slow-cooking fish over a long time tends to soften the bones to the point that they can be eaten with the flesh.

The photo below shows the leftovers of a whole salmon that we cooked recently. We baked it with vegetables on a tray covered with aluminum foil. We set the oven at 300 degrees Fahrenheit, and baked the salmon for about 5 hours.



The end result is that we can eat the salmon, a rich source of omega-3 fat, with the bones. No need to remove anything. Just take a chunk, as you can see in the photo, and eat it whole.

It is a good idea to marinate the salmon for a few hours prior to baking. This will create enough moisture to ensure that the salmon does not dry out during the baking process.

If you are a carnivore, you can make a significant contribution to sustainability by eating the whole animal, or as much of the animal as possible. This applies to fish, as I discussed here before (, , ).

Add eating less to this habit, and your health will benefit greatly.

Monday, October 13, 2014

Will the aluminum pan and foil give you Alzheimer’s?


Aluminum (or aluminium) is a silvery metal that is both ductile and light. It is abundant in nature. These characteristics make it a favorite in many industries. Food utensils, such as pans and pots, are often made of aluminum. This use is dwarfed by aluminum’s widespread use in the canning of foods and drinks (e.g., sodas and beers).

Based on a systematic literature review published in 2008, Ferreira et al. argued that there is credible evidence of an “association” between Alzheimer’s disease and aluminum intake (). This argument has been challenged by other researchers, but has nevertheless gained media attention. Associations, positive and negative, will always be found wherever correlations are nonzero, and correlation does not guarantee causation.

A research report commissioned by the U.S. Environmental Protection Agency, authored by Krewski et al. and published in 2007, reviewed a number of studies on the health effects of aluminum (). Several interesting findings emerged from this extensive review of the literature.

For example, a targeted study conducted in the late 1980s and published in the early 1990s suggested that the daily aluminum intake of a 14-16 year old male in the U.S. was about 11.5 mg; the main sources being additives to the following refined foods: cornbread (36.6% of total intake), American processed cheese (17.2%), pancakes (9.0%), yellow cake with icing (8%), taco/tostada (3.5%), cheeseburger (2.7%), tea (2.0%), hamburger (1.8%), and fish sticks (1.5%).
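To put those percentages into milligram terms, here is a quick back-of-the-envelope sketch; the 11.5 mg/day total and the percentage shares come from the study above, and the per-food milligram figures are simply derived from them for illustration.

```python
# Rough arithmetic behind the intake figures quoted above. The 11.5
# mg/day total and the percentage shares are from the study cited;
# the per-food milligram amounts are derived here for illustration.
total_intake_mg = 11.5

shares = {
    "cornbread": 36.6, "American processed cheese": 17.2, "pancakes": 9.0,
    "yellow cake with icing": 8.0, "taco/tostada": 3.5, "cheeseburger": 2.7,
    "tea": 2.0, "hamburger": 1.8, "fish sticks": 1.5,
}

for food, pct in shares.items():
    print(f"{food}: {total_intake_mg * pct / 100:.2f} mg/day")
# cornbread alone accounts for about 4.2 mg/day
```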

The meat that goes into the manufacturing of industrial hamburgers is not a significant source of aluminum. The same goes for the fish in the fish sticks. It is the industrial refining that makes the above-mentioned foods non-negligible sources of aluminum. One could argue that processed cheese should not be called “cheese” at all, as it is far removed from “real” cheese – particularly aged raw-milk cheese – in terms of nutrient composition.

Aluminum-treated water is widely believed to be a major source of aluminum to the body, with the potential of leading to health-detrimental accumulation. It appears that this is a myth based on several of the studies reviewed by Krewski et al.

One study concluded that humans drinking aluminum-treated water over a period of 70 to 80 years would have a total accumulation of approximately 1.5 mg of aluminum in their brain (1 mg/kg; the average adult human brain weighs about 1.5 kg). That is at the high end of normal levels, and not much compared with the 34 mg found in some of those exposed to the Camelford water pollution incident (). And here is something else to consider: the study made two unlikely worst-case assumptions, namely that all of the ingested aluminum was absorbed, and that those exposed suffered from a condition that entirely prevented excretion of the excess ingested aluminum.

Krewski et al.’s report and virtually all empirical studies I reviewed for this post suggest that the intake of aluminum from cooking utensils is negligible.

Is aluminum intake via food additives, arguably one of the main sources for most people living in urban environments today, likely to cause neurological diseases such as Alzheimer's disease?

My review of the evidence left me with the impression that most of the studies suggesting that aluminum intake can lead to neurological diseases make causal mistakes. One representative example is Rifat et al.’s study published in 1990 in The Lancet ().

This old study is interesting because it looked at the effects of ingestion of finely ground aluminum by miners between 1944 and 1977; the miners ingested the aluminum because it was believed to be protective against silicotic lung disease (caused by inhalation of crystalline silica dust).

As a side note, I should say that the intake levels reported in Rifat et al.’s study seem lower than what one would expect to see from a modern diet of refined foods. This seems odd. The levels may have been underestimated by Rifat et al. Or, what is more worrying, they may be quite high in a modern diet of refined foods.

Having said that, Rifat et al.’s article reports “… no significant differences between exposed and non-exposed miners in reported diagnoses of neurological disorder …” However, the tables below from their article show significant differences between exposed and non-exposed miners in their performance in cognitive tests. Those exposed to aluminum performed worse.





Two major variables that one would expect Rifat et al. to have controlled for are age and lung disease. They did control for age and a few other factors, with the corresponding results indicated as “adjusted” in the tables. However, they did not control for lung disease – the very factor that motivated aluminum intake.

Lung disease is likely to limit the supply of oxygen to the brain, and thus cause cognitive problems in the short and long term. Therefore, the cognitive impairments suggested by Rifat et al.'s study may have been caused by lung disease, and not by exposure to aluminum. This type of problem is a common feature of studies of the health effects of aluminum.

Will cooking in aluminum pans and aluminum foils give you Alzheimer’s? I doubt it.

Monday, September 15, 2014

Will your wireless router give you cancer?


If you pick up a magnet and move it up and down with your hand, you will be creating electromagnetic radiation. The faster you move the magnet, the higher the frequency of the radiation you create. The higher the frequency of the radiation, the shorter its wavelength. The strength, or power, of the radiation emitted by a source is a separate property, and can be measured in watts (W).

We are constantly bombarded by electromagnetic radiation, which is usually classified based on its frequency (and also wavelength, since frequency and wavelength are inversely proportional). The main types of electromagnetic waves, in order of increasing frequency, are: radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays, and gamma rays.
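For readers who want to see the inverse relationship between frequency and wavelength in numbers, here is a minimal sketch; the example frequencies (including the 2.4 GHz commonly used by WiFi) are typical values chosen only for illustration.

```python
# Wavelength = speed of light / frequency. Example frequencies only.
C = 299_792_458  # speed of light in m/s

examples = [
    ("AM radio (1 MHz)", 1e6),
    ("WiFi router (2.4 GHz)", 2.4e9),
    ("Visible light (~545 THz)", 545e12),
]

for label, freq_hz in examples:
    print(f"{label}: wavelength = {C / freq_hz:.3g} m")
```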

There has been a large amount of research on the health effects of wireless equipment, including wireless routers (figure below from Bestwirelessrouterreview.com), because of the electromagnetic radiation that they emit. Wireless equipment uses electromagnetic radiation of the radio waves type.



In developed countries, wireless routers are ubiquitous. They are found everywhere – at home, in hotels and businesses, and even in public parks. They allow wireless devices to connect to the Internet, by creating one or more “WiFi hotspots”.

The strength of the radiation emitted by wireless routers, by the time it reaches humans, is much lower than that emitted by mobile phones. One of the reasons is that routers emit at lower power to begin with: from 30 to 500 milliwatts (mW), versus 125 mW to 2 W for mobile phones.

But the main reason for the lower strength of the radiation emitted by wireless routers, when it reaches humans, is that wireless routers are normally located farther away from humans than mobile phones. Radiation strength goes down according to the inverse-square law; i.e., proportionally to 1 divided by the square of the distance between source and destination.
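As a rough illustration of the combined effect of emission power and distance, consider the sketch below. The power figures are the ones quoted above; the distances (3 m to a router, 2 cm from a phone held at the ear) are my own assumptions, chosen only to show how quickly the inverse-square law dominates.

```python
import math

def power_density(power_w, distance_m):
    """Free-space power density in W/m^2 at a given distance,
    assuming the source radiates equally in all directions."""
    return power_w / (4 * math.pi * distance_m ** 2)

router = power_density(0.5, 3.0)   # 500 mW router, 3 m away (assumed)
phone = power_density(2.0, 0.02)   # 2 W phone, 2 cm from the head (assumed)

print(f"router: {router:.2e} W/m^2")
print(f"phone:  {phone:.2e} W/m^2")
print(f"phone-to-router ratio: {phone / router:,.0f}")
```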

Given this, it has been estimated () that the exposure to 1 full year of radiation from a wireless router at home is equivalent, in terms of radiation reaching the body, to 20 minutes of exposure to the radiation emitted by a mobile phone.

If the radiation from wireless routers were to cause cancer, so should the radiation from mobile phones. So, what about mobile phones? Do they cause cancer?

In spite of a large amount of research conducted on the subject, no conclusive evidence has been found that the radiation from mobile phones causes cancer. A representative example of this research is a large Danish study (), whose results have recently been replicated.

Mobile phone radiation, like wireless router radiation, is currently classified by the International Agency for Research on Cancer (IARC) in Group 2B, namely “possibly carcinogenic”. This carries a recommendation of “more research”. Caffeic acid, found in coffee, is also in this group. It is useful to note that neither mobile phone nor wireless router radiation is classified in Group 2A, which is the “probably carcinogenic” IARC group.

When one considers the accumulated evidence regarding cancer risk associated with all types of electromagnetic radiation, the biggest concern by far is sunburn from ultraviolet radiation. The evidence suggests that it causes skin cancer. Chronic non-sunburn exposure to natural ultraviolet radiation, on the other hand, seems protective against most types of cancer (skin cancer included).

Will your wireless router give you cancer? I don’t think so.

Monday, August 11, 2014

Slow versus slow-brisk walking: Effects on type 2 diabetics


I am not a big fan of reviewing new studies published in refereed journals, particularly those that make it to the news. I prefer studies that have been published for a while, so that I can look at citations to them – both positive and negative.

But I am making an exception here to a study by Kristian Karstoft and colleagues (the senior author is diabetes researcher Thomas Solomon: ), accepted for publication on 30 June 2014 in the fairly targeted and selective journal Diabetologia (full text freely available in a .zip file at the time of this writing: ).

This is a small study. Individuals diagnosed with type 2 diabetes, and who were not being treated for the condition, were allocated to three groups: a control group (CON), an “interval” walking group (IWT), and a slow walking group (CWT).

The groups had 8, 12, and 12 people in them, respectively. Those people in the IWT group alternated between walking briskly and slowly for 1 hour five times a week. Those in the CWT group only walked slowly. Those in the CON group supposedly did not do any targeted exercise.

One of the interesting findings of this study was that there was no difference in terms of health effects between the CWT and the CON groups. The only group that benefited was the IWT group. That is, those who alternated between walking briskly and slowly benefited in a way that was observable from the exercise, but those who walked slowly did not.

This study highlights two facts that I have mentioned here before, but that are often overlooked by those who suffer from type 2 diabetes or are on their way to developing the condition. They refer to visceral fat and are listed below. Visceral fat accumulates around the abdominal organs ().

- Type 2 diabetes is strongly associated with visceral fat accumulation, and is somewhat unrelated to subcutaneous fat accumulation (see the case of sumo wrestlers: ).

- Visceral fat is very easy to burn via glycolytic exercise, but does not seem to respond well to non-glycolytic exercise.

Glycolytic exercise burns sugar stored in muscle, in the form of glycogen, while it is being performed. This form of exercise raises growth hormone levels acutely. Weight training and sprints are types of glycolytic exercise, which also takes other names, such as glycogen-depleting and anaerobic exercise.

Often one sees prediabetics and type 2 diabetics avoiding this type of exercise because it pushes their blood glucose levels through the roof. That happens, however, only during the exercise. Afterward, the benefits are tremendous and appear to clearly outweigh the possible problems associated with the temporary exercise-induced hyperglycemia.

Take a look at the last line of this cropped version of Table 1 from the study, shown below. The relevant line for the point made above is the one that refers to visceral fat volume. As you can see, those in the IWT group had the greatest reduction in visceral fat. This was also the only statistically significant reduction among the three groups; according to an analysis of variance (ANOVA) test, the probability that it was due to chance was lower than one tenth of one percent.



The ANOVA test is "parametric", in the sense that it assumes that the data is normally distributed. However, the authors did not report conducting a test of normality. Also, the sample is very small. Given these limitations, "non-parametric" tests, such as multiple one-group-two-conditions tests run with WarpPLS (link to specific page of the .pdf file of a relevant academic paper: ), would not only be more advisable but would also provide much more information to readers.
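WarpPLS is what I would use, but for readers who want a quick non-parametric check in an open-source tool, here is a sketch using Python's scipy. The group values are simulated stand-ins for the CON, CWT and IWT visceral fat changes, since the raw data is not available to me; only the procedure is the point.

```python
import numpy as np
from scipy import stats

# Simulated stand-ins for the three groups' visceral fat changes
rng = np.random.default_rng(0)
con = rng.normal(0.0, 0.3, 8)    # control, n = 8
cwt = rng.normal(-0.1, 0.3, 12)  # slow walking, n = 12
iwt = rng.normal(-0.7, 0.3, 12)  # interval walking, n = 12

# Normality check of the kind the authors did not report conducting
for name, group in [("CON", con), ("CWT", cwt), ("IWT", iwt)]:
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(group).pvalue, 3))

# Non-parametric alternative to the one-way ANOVA
print("Kruskal-Wallis:", stats.kruskal(con, cwt, iwt))
```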

If you compare the line showing visceral fat with the other two above it, within the body composition section of the table, you will notice another interesting pattern. In the IWT group the changes in average total body mass and total fat mass were also the greatest, but the largest change in percentage terms was the one in average visceral fat mass. Visceral fat mass is often correlated with total fat mass, with this correlation being a function of how sedentary individuals are, and it does not take much visceral fat to cause serious problems.

Sumo wrestlers tend to have large ratios of total to visceral fat mass. Virtually all of their body fat is subcutaneous. They also carry a lot of muscle mass. They achieve these through intense glycolytic exercise alternated with periods of rest and consumption of large amounts of calorie-dense food. To these they add another ingredient: exercise in the fasted state, usually in the morning prior to a large breakfast. Exercise in the fasted state seems particularly conducive to visceral fat mobilization.

By the way, sumo wrestlers consume enormous amounts of carbohydrates, but as noted by Karam () have "low visceral fat, absent hyperglycemia and absent dyslipidemia despite massive subcutaneous obesity".

In my opinion the folks in the study by Karstoft and colleagues would have benefited even more, possibly a lot more, if they had alternated between sprinting and regular walking.

Monday, July 28, 2014

What is “relative risk” (RR)? The case of alcohol frequency and its impact on mortality from stroke


This post is in response to an inquiry by Ivor (sorry for the delayed response). It refers to a recent study by Rantakömi and colleagues on the effect of alcohol consumption frequency on mortality from stroke (). The study followed men who consumed alcohol to different degrees, including no consumption at all, over a period of a little more than 20 years.

The study purportedly controlled for systolic blood pressure, smoking, body mass index, diabetes, socioeconomic status, and total amount of alcohol consumption. That is, its results are presented as holding regardless of those factors.

The main results were reported in terms of “relative risk” (RR) ratios. Here they are, quoted from the abstract:

“0.71 (95% CI, 0.30–1.68; P = 0.437) for men with alcohol consumption <0.5 times per week and 1.16 (95% CI, 0.54–2.50; P = 0.704) among men who consumed alcohol 0.5–2.5 times per week. Among men who consumed alcohol >2.5 times per week compared with nondrinkers, RR was 3.03 (95% CI, 1.19–7.72; P = 0.020).”

Note the P values reported within parentheses. They are the probabilities that the results are due to chance and thus “not real”, or not due to actual effects. By convention, P values equal to or lower than 0.05 are considered statistically significant. In consequence, P values greater than 0.05 are seen as referring to effects that cannot be unequivocally considered real.

This means that, of the results reported, only one seems to be due to a real effect, and that is the one that: “Among men who consumed alcohol >2.5 times per week compared with nondrinkers, RR was 3.03 …”

Why the authors report the statistically non-significant results as if they were noteworthy is unclear to me.

Before we go any further, let us look at what “relative risk” (RR) means. RR is given by the following ratio:

(Probability of an event when exposed) / (Probability of an event when not exposed)

In the study by Rantakömi and colleagues, the event is death from stroke. The exposure refers to alcohol consumption at a certain level, compared to no alcohol consumption (no exposure).

Now, let us go back to the result regarding consumption of alcohol more than 2.5 times per week. That result sounds ominous. It is helpful to keep in mind that the study by Rantakömi and colleagues followed a total of 2609 men with no history of stroke, of whom only 66 died from stroke.

Consider the following scenario. Let us say that 1 person in a group of 1,000 people who consumed no alcohol died from stroke. Let us also say that 3 people in a group of 1,000 people who consumed alcohol more than 2.5 times per week died from stroke. Given this, the RR would be: (3/1,000) / (1/1,000) = 3.

One could say, based on this, that: “Consuming alcohol more than 2.5 times per week increases the risk of dying from stroke by 200%”. Based on the RR, this is technically correct. It is rather misleading nevertheless.

If you think that increasing sample size may help ameliorate the problem, think again. The RR would be the same if it were 3 people versus 1 person in 1,000,000 (one million). With these numbers, the RR would be even less credible, in my view.
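The arithmetic is simple enough to script. Here is a minimal sketch of the RR calculation in the two scenarios above:

```python
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """RR = P(event | exposed) / P(event | not exposed)."""
    return (events_exposed / n_exposed) / (events_unexposed / n_unexposed)

# 3 deaths per 1,000 exposed vs. 1 death per 1,000 not exposed
print(relative_risk(3, 1_000, 1, 1_000))          # 3.0
# Same RR with 1,000,000 people per group
print(relative_risk(3, 1_000_000, 1, 1_000_000))  # still 3.0
```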

This makes the findings by Rantakömi and colleagues look a lot less ominous, don’t you think? This post is not really about the study by Rantakömi and colleagues. It is about the following question, which is in the title of this post: What is “relative risk” (RR)?

Quite frankly, given what one sees in RR-based studies, the answer is arguably not far from this:

RR is a ratio used in statistical analysis that makes minute effects look enormous; the effects in question would not normally be noticed by anyone in real life, and may be due to chance after all.

The reason I say that the effects “may be due to chance after all” is that when effects are such that 1 event in 1,000 would make a big difference, a researcher would have to control for practically everything in order to rule out confounders.

If one single individual with a genetic predisposition toward death from stroke falls into the group that consumes more alcohol, falling in that group entirely by chance (or due to group allocation bias), the RR-based results would be seriously distorted.

This highlights one main problem with epidemiological studies in general, where RR is a favorite ratio to be reported. The problem is that epidemiological studies in general refer to effects that are tiny.

One way to put results in context and present them more “honestly” would be to provide more information to readers, such as graphs showing data points and unstandardized scales, like the one below. This graph is from a previous post on latitude and cancer rates in the USA (), and has been generated with the software WarpPLS ().



This graph clearly shows that, while there seems to be an association between latitude and cancer rates in the USA, the total variation in cancer rates in the sample is only of around 3 in 1,000. This graph also shows outliers (e.g., Alaska), which call for additional explanations.

As for the issue of alcohol consumption frequency and mortality, I leave you with the results of a 2008 study by Breslow and Graubard, with more citations and published in a more targeted journal ():

“Average volume obscured effects of quantity alone and frequency alone, particularly for cardiovascular disease in men where quantity and frequency trended in opposite directions.”

In other words, quantity and frequency of alcohol consumption appear to matter in their own right, and looking only at total volume (quantity multiplied by frequency) can obscure their separate effects. We can state this even more simply: drinking two bottles of whiskey in one sitting, but only once every two weeks, is not going to be good for you.

In the end, providing more information to readers so that they can place the results in context is a matter of scientific honesty.

Monday, June 30, 2014

A case of a very large salivary stone


Salivary stones are the most common type of salivary gland disease. Having said that, they are very rare – less than 1 in 200 people will develop a symptomatic salivary stone. Usually they occur on one side of the mouth only. They seem to be more common in men than in women. Most of the evidence suggests that they are not strongly correlated with kidney stones, although some factors can increase both (e.g., dehydration).

Singh and Singh () discuss a case of a 55-year-old man who went to the Udaipur Dental Clinic with mild fever, pain, and swelling in the floor of the mouth. External examination, visually and through palpation, found no swelling or abnormal mass. The man’s oral hygiene was rather poor. The figures below show the extracted salivary stone, the stone perforating the base of the mouth prior to extraction, and an X-ray image of the stone.





I am not a big fan of X-ray tests in dental clinics, as they are usually done to convince patients to have dental decay treated in the conventional way – drilling and filling. Almost ten years ago, based on X-ray tests, I was told that I needed to treat some cavities urgently. I refused and instead completely changed my diet. Those cavities either reversed or never progressed. As the years passed, my dentist eventually became convinced that I had done the right thing, but told me that my case was very rare; unique in fact. Well, I know of a few cases like mine already. I believe that the main factors in my case were the elimination of unnatural foods (e.g., wheat-based foods), and consumption of a lot of raw-milk cheese.

However, as the case described here suggests, an X-ray test may be useful when a salivary stone is suspected.

Tuesday, May 6, 2014

Why red meat consumption may appear unhealthy in scientific studies


There have been many academic articles in the past linking red meat intake with increased mortality, and there will be more in the future. I discussed one such article before here (, ). The findings in this article, which received an enormous amount of media attention, are the basis for my discussion in this post. I am interested in answering the question: why might red meat consumption appear unhealthy in scientific studies?

This question leads to other questions, which are also addressed in this post. Can red meat intake be associated with increases and decreases in mortality, in the same study? Can red meat intake possibly cause increased mortality, at least for a percentage of the population?

All of the analyses discussed below have been conducted with the software WarpPLS (). This software supports multivariate analyses where relationships can be modeled as linear or nonlinear, with or without moderating effects included.

The ubiquitous J curve

The graph below shows how mortality varies with red meat intake. As you can see, the relationship is overall flat, meaning that red meat intake is overall unrelated with mortality. However, when we look at the two sets of points above and below the relationship line, for males and females, we see a different pattern. It appears that red meat intake and mortality are indeed significantly associated with one another, but in a J-curve pattern. That is, red meat intake is associated with increases and decreases in mortality, in the same study.



Each serving of red meat corresponds to approximately 84 g. Therefore, we could say, based on the graph above, that mortality would be minimized with consumption of approximately 67 g/d of red meat (0.80*84) for males, and a little more than 115 g/d (1.37*84) for females. Not zero consumption, simply not a lot.
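For those who want to check the serving arithmetic, a two-line sketch:

```python
# Serving-to-grams arithmetic behind the figures above;
# 84 g per serving is the value given in the post.
GRAMS_PER_SERVING = 84

print(0.80 * GRAMS_PER_SERVING)  # 67.2 g/d, males
print(1.37 * GRAMS_PER_SERVING)  # 115.08 g/d, females
```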

Now, one may say that this is very reasonable: a little bit of red meat is fine, but not too much. Generally females lose blood periodically, so they need a bit more than males. However, based on a number of other studies, it seems that the optimal intake amounts that we are seeing here are unusually low. If this is the case, what could be biasing the results?

Multivariate associations

Multivariate associations can distort results quite a lot. Such associations arise from correlations among multiple variables; correlations that should not per se be taken as strong indications of causality. Below are the correlations between “Red meat intake (servings/d)” and other relevant variables in the dataset taken from the study being considered here.

- Physical activity (MET-h/wk): -0.696. That is, increases in red meat intake are very strongly associated with decreases in physical activity in this study. One MET is the rate of energy expenditure of an average person seated at rest.

- Diabetes (%): 0.781. Increases in red meat intake are very strongly associated with increases in the percentages of individuals with diabetes.

- Food intake (cal/d): 0.604. Increases in red meat intake are strongly associated with increases in food intake in general.

- Current smoker (%): 0.519. Increases in red meat intake are strongly associated with increases in the percentages of smokers.

Let us take the physical activity variable, for example. It is inversely correlated with red meat intake, with a strong correlation coefficient, and it is unlikely that this correlation is due to direct causation - one way or the other. Below is the same graph as above, but now with labels indicating physical activity levels.



You can see that physical activity levels tend to be lower among females, which is in part due to them being on average smaller than males and thus burning fewer calories. Here you can see that physical activity is associated with mortality in a pattern that is pretty much the reverse of red meat intake. The reason for this is the strong inverse correlation between physical activity and red meat intake.

The highest mortality is associated with the lowest physical activity at the highest red meat intake. Interestingly, mortality goes up as one reaches the point at which physical activity is the highest at the lowest red meat intake.

Now take a look at the two graphs below. Both show the relationship between diabetes incidence and mortality. The first has biological sex indicated through legends. The second has physical activity levels indicated through labels.





One way to untangle the messy nature of the relationships above is to try to look for possible moderating effects, based on reasonable causal assumptions. One such assumption is that physical activity moderates the relationship between red meat intake and mortality.

The moderating effect of physical activity

The two graphs below show the relationships between red meat intake and mortality with (first graph) and without (second graph) the moderating effect of physical activity. Basically and with minimum statistical jargon, the numbers next to the arrows indicate the strengths of the associations (betas) and the probabilities that the associations are not real (Ps). By convention, a P value lower than 0.05 is normally seen as an indication that the association is strong enough to be considered real – i.e., not due to chance.





What the graphs above suggest is that increases in physical activity tend to make the relationship between red meat intake and mortality go from flat (or nonexistent) to negative. This is the meaning of the negative moderating coefficient next to the dashed arrow. In other words, as physical activity levels go up, more red meat intake is associated with less mortality.
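The analyses here were done with WarpPLS, but the general pattern of a moderation test can be sketched with an ordinary least squares interaction term, as below. The data is simulated, so only the modeling pattern, not the coefficients, should be taken from it.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: red meat intake is protective only at high
# physical activity levels, mirroring the pattern described above.
rng = np.random.default_rng(1)
n = 200
activity = rng.uniform(0, 40, n)  # MET-h/wk
meat = rng.uniform(0, 3, n)       # servings/d
mortality = (10 - 0.05 * activity
             - 0.8 * meat * (activity / 40)
             + rng.normal(0, 1, n))

df = pd.DataFrame({"mortality": mortality, "meat": meat,
                   "activity": activity})
fit = smf.ols("mortality ~ meat * activity", data=df).fit()

# A negative, significant meat:activity coefficient is the analogue
# of the negative moderating effect reported in the post
print(fit.params["meat:activity"], fit.pvalues["meat:activity"])
```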

The role of genetics

While being male or female means having different genetic profiles, with a full chromosome difference, the effect of biological sex on mortality appears to be confounded by the effect of physical activity. That is, physical activity, as measured in this study (using METs), is strongly correlated with biological sex, and also with mortality. As noted earlier, physical activity levels tend to be lower among females, which is in part due to them being on average smaller than males and thus burning fewer calories.

But another genetic factor that may influence the results and that is not included in this analysis is HFE hereditary haemochromatosis, a hereditary disease that leads to excessive intestinal absorption of dietary iron, resulting in iron overload. This genetic condition is relatively common in northern Europeans and their descendants, with a prevalence of 1 in 200 in this group. Factoid: it is quite common in Australia.

This level of prevalence matters when you are looking at mortality levels that vary along a range of only approximately 2 in 1,000, as in this study. That translates to 0.4 in 200; much less than the prevalence of HFE hereditary haemochromatosis in northern Europeans and their descendants. That is, HFE hereditary haemochromatosis may be a major confounder in our analyses above, one that has not been controlled for. The study included 37,698 men from the Health Professionals Follow-up Study (1986-2008) and 83,644 women from the Nurses' Health Study (1980-2008). There must have been many individuals with HFE hereditary haemochromatosis in the sample.

In summary …

Based on all of the above, I think it is quite possible that for those who suffer from HFE hereditary haemochromatosis, both biological sex and physical activity affect the relationship between red meat intake and mortality.

Past menopause, women who suffer from HFE hereditary haemochromatosis should consider reducing their red meat intake, as well as intake of iron from other sources (particularly from pills). The same goes for men with the condition. Male and post-menopausal female sufferers should consider regularly donating blood.

Both men and women who suffer from HFE hereditary haemochromatosis should consider significantly increasing their level of physical activity to reduce the likelihood of iron overload. (This would be good for anyone.)

Why physical activity? Because iron is used to transport oxygen and in biological redox reactions, both of which are significantly increased during and after physical activity. In those who tend to accumulate iron in tissues, physical activity creates an increase in demand for iron that can balance the increased supply from iron-rich sources.

Our bodies evolved in the context of physical activity, often intense physical activity, and are thus maladapted for sedentary behavior.

Monday, April 21, 2014

Often acquired tastes are acquired genes: Probiotics and prebiotics


Gut flora is found in many areas of our digestive tract, particularly in the colon. Whenever we eat anything we feed the microbes that make up our gut flora and/or add new microbes. Much of this flora is made up of bacteria. Not all of it is made up of bacteria though. The much talked about Candida albicans (a.k.a. “the American parasite”) is a fungus that is found predominantly in our digestive tract and mouths.

Candida’s recent fame is more a testament to the power of well-orchestrated Internet campaigns to sell products than to the actual importance of the fungus in determining the health of non-immunodepressed individuals. Claims about Candida, including dubious ones, have been made many times in the past ().

The relationship between the human gut flora and health was a topic of much interest to Élie Metchnikoff (photo below from Wikipedia), who received the Nobel Prize in Medicine in 1908 for his research on phagocytosis (). Metchnikoff was also a pioneer in the study of aging.



Gut flora discussions often refer to foods and supplements that fall into one of two main categories: probiotics and prebiotics (). Probiotics are generally defined as foods and supplements that include health-promoting live microbes. Prebiotics are non-digestible foods and supplements that feed health-promoting microbes living primarily in the human colon.

Food fermentation, under the appropriate conditions, leads to the formation of natural probiotics. This applies to both animal foods (e.g., cheese, cured meats) and plant foods (e.g., sauerkraut, pickles). Prebiotics occur naturally in many raw plant foods as fiber and resistant starch, and can also be produced through starch retrogradation ().

Again, whenever we eat anything we feed our gut flora. This gut flora is reportedly made up of 10 to the power of 14 bacterial cells, 10 times the number of cells in the human body (), plus other types of microbes (e.g., fungi). Different species of microbes in our gut have genomes that are markedly different from ours. Thus we carry in our gut significantly more genes than our own; and genes are selfish.

Genes are selfish in the sense that they seek to propagate themselves. From the perspective of our gut microbes, this can be achieved by inducing the secretion of chemicals that will make us crave foods that will also feed the microbes, whether this will lead to an improvement in our health or not. Even unhealthy human hosts can live long enough to sustain a large number of generations of microbes.

Killing the host human organism may seem like a suicidal strategy for gut microbes, but not if the host organism passes the microbes to other host organisms before the microbes themselves die. Microbes can pass from one human to another through many mechanisms.

So how can we improve our gut flora?

Supplementation and transplantation of microbes have been attempted, with mixed but generally positive results ().

Few approaches combine the effectiveness and simplicity of avoiding highly processed industrialized foods. The emphasis here is on inhibiting the growth of unnatural gut flora; that is, flora that would not have been carried regularly by our Paleolithic ancestors.

Once you have done that for a while, which can be difficult due to cravings induced by unnatural gut flora, your own body may become very effective at telling you what is good for you and what is not.

As a side note, just because a food is fermented one cannot assume that it is health-promoting. Bread is a fermented food.

Over the years I have noticed that I prefer eating certain meat dishes cold, and several days after they have been prepared. I wonder if this has anything to do with a small amount of fermentation bringing to life probiotic microbes.

Monday, March 31, 2014

Another kind of meatza: Ham, salami and cheese


A few years ago I wrote about a meatza made with lean ground beef and bison (). This post is about another kind of meatza, one that takes a lot less time to prepare. In fact, this one is very quick, and still very nutritious.

The recipe below is for a meal that feeds 3-6 people. If you are preparing this for an opinionated family, and you do not want to be accused of preparing “grilled ham and cheese” for them, you can always add some sautéed vegetables to the ham.

- Place 2 to 3 lbs of folded ham into a sheet pan. There is no need to coat the pan, as some of the water and fat in the ham will seep out and prevent sticking.
- Add some dry seasoning and butter. For the dry seasoning, I suggest a mix of garlic powder and cayenne pepper.
- Add a layer of genoa salami, and another layer of swiss cheese.
- Preheat oven to 375 degrees Fahrenheit.
- Bake the meatza for about 15 minutes.



The photo montage above shows the different stages of preparation and the final product. Since ham cuts tend to be very lean, the amount of fat in the entire meatza will normally depend heavily on the amount of added butter, salami, and cheese.

In this kind of meatza, the protein-to-fat ratio will normally be greater than 1. I think a ratio closer to 2 is ideal for those semi-sedentary office workers who do moderate exercise. The reason is that fat is the most calorie-dense macronutrient, while protein is effectively the least calorie-dense, in part because of the energy spent digesting it (its high thermic effect).
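To see why the ratio matters for calorie density, here is the standard Atwater arithmetic (approximately 4 kcal/g for protein and 9 kcal/g for fat); actual caloric values vary by food, as noted above, and the 100 g serving is hypothetical.

```python
# Approximate Atwater factors; actual caloric values vary by food.
def calories(protein_g, fat_g, carb_g=0):
    return 4 * protein_g + 9 * fat_g + 4 * carb_g

# Hypothetical serving with a protein-to-fat ratio of 2
print(calories(protein_g=30, fat_g=15))  # 255 kcal
# Same protein with a ratio of 1: fat now dominates the calories
print(calories(protein_g=30, fat_g=30))  # 390 kcal
```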

You do lose something with this dish, as you do with hot dishes in general. You lose the probiotic bacteria that would normally be found in significant amounts in the ham, salami, and cheese. These are all fermented foods that are better consumed raw.

Tuesday, March 18, 2014

Should you do resistance exercise to failure?


Doing resistance exercise to failure is normally recommended for those who want to maximize strength and muscle mass gains from the exercise. Yet, going to failure tends to significantly increase the chances of injury, after which the ability to do resistance exercise is impaired – also impairing gains, in the long term.

From an evolutionary perspective, getting injured is clearly maladaptive. Prey animals that show signs of injury, for example, tend to be targeted by predators. There is also functional loss, which would be reflected in impaired hunting and gathering ability.

So, assuming that going to failure is at least somewhat unnatural, because of a higher likelihood of subsequent injuries, how can it be advisable in the context of resistance exercise?

The graph below is from a study by Izquierdo and colleagues (). They randomly assigned several athletes to two exercise conditions, namely resistance training to failure (RF) and not to failure (NRF). A control group of athletes did not do any resistance exercise. The athletes were tested at four points in time: before the initiation of training (T0), after 6 wk of training (T1), after 11 wk of training (T2), and after 16 wk of training (T3).



The graph above shows the gains in terms of weight lifted in two exercises, the bench press and squat. It is similar to other graphs from the study in that it clearly shows: (a) improvements in the amount of weight lifted over time for both the RF and NRF groups, which reflect gains in strength; and (b) no significant differences in the improvements for the RF and NRF groups.

When you look at the results of the study as a whole, it seems that RF and NRF are associated with slightly greater or lesser gains, depending on the type of exercise and the measure of gains employed. The differences are small, and one can reasonably conclude that no significant difference in overall gains exists between RF and NRF.

It is clear that going to failure leads to increased metabolic stress, and that increased metabolic stress is associated with greater secretion of anabolic hormones (). How can this be reconciled with the lack of a significant difference in gains in the RF and NRF groups?

The graph below provides a hint as to the answer to this question. It shows resting serum cortisol concentrations in the participants. As you can see, after 16 wk of training (T3) cortisol levels are higher in the RF group, which is particularly interesting because the NRF group had higher cortisol levels at baseline (T0). Cortisol is a catabolic hormone, which may in this case counter the effects of the anabolic hormones, even though going to failure is expected to lead to greater anabolic hormone secretion.



It seems that cortisol levels tend to go up over time for those who go to failure, and go down for those who do not. I am not sure if this is a strictly metabolic effect. There may be a psychological component to it, as strength and power gains over time tend to be increasingly more difficult to achieve (see schematic graph below); perhaps leading to some added mental stress as well, as one tries to continue increasing resistance (or weight) while regularly going to failure.



And, of course, it is also possible that the results of the study reviewed here are a statistical “mirage”. The authors explain how they controlled for various possible confounders by adjusting the actual measures. This approach is generally less advisable than controlling for the effects of confounders by including the confounders in a multivariate analysis model ().

Nevertheless, in light of the above I am not so sure that regularly doing resistance exercise to failure is such a good idea.

Wednesday, March 5, 2014

Can intermittent very-low-calorie dieting cure diabetes?


The health effects of very-low-calorie diets (VLCDs) adopted for short periods of time (e.g., 5 days) have been the target of much research in the recent past. Consuming 400-600 kcal/day would be considered VLCDing. VLCDing for significantly longer periods of time than 5 days can be dangerous, and in some cases potentially fatal. Nevertheless, there is speculation that it can also cure type II diabetes ().

Intermittent VLCDs mimic in part what probably happened with our ancestors in our evolutionary past. Successful hunting and gathering would lead to weight-maintenance food intake most of the time, with occasional periods of severe food scarcity. This has probably been a regular pattern in our evolutionary history, leading to health-promoting adaptations that are triggered by VLCDs.

The part that VLCDs alone do not mimic is the “hunting and gathering part”, or the exercise required to obtain food when it is scarce. This is an important point, because VLCDs are likely to induce lean body mass loss without exercise, together with body fat loss. VLCDs without exercise are not very natural, even though they can have very positive effects on one’s health, as we’ll see below.

An interesting and well-cited study of the effects of VLCDs in participants with type II diabetes was published in 1998 in an article authored by Katherine V. Williams and colleagues (). The study included 54 participants, and lasted 20 weeks in total. The site of the study was the University of Pittsburgh School of Medicine. The participants were split into three groups, referred to as:

- Standard behavioral therapy (SBT). The participants received a 1,500−1,800 kcal/day diet throughout, with the goal of inducing gradual weight loss.

- Intermittent 1 day/week VLCD (one-day). The participants received a VLCD for 5 consecutive days during week 2, followed by an intermittent VLCD therapy for 1 day/week for 15 weeks, with a 1,500−1,800 kcal/day diet at other times.

- Intermittent 5 day/week VLCD (five-day). The participants received a VLCD for 5 consecutive days during week 2, followed by an intermittent VLCD therapy for 5 consecutive days every 5 weeks (5-day), with a 1,500−1,800 kcal/day diet at other times.

There is a reason behind this complicated arrangement. The researchers wanted to make sure that the average caloric intake for the two VLCD groups was identical, but 18,000-28,000 kcal lower than for the SBT group. The SBT group served as a baseline group.
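The arithmetic behind that caloric gap checks out, under the assumption (mine, consistent with the 20-week duration) that the five-day group completed three of its 5-day VLCD cycles after the week-2 run-in:

```python
# Both VLCD groups log the same number of VLCD days over 20 weeks.
one_day_group = 5 + 1 * 15   # week-2 run-in + 1 day/week for 15 weeks
five_day_group = 5 + 5 * 3   # week-2 run-in + 3 cycles of 5 days (assumed)
assert one_day_group == five_day_group == 20

# Each VLCD day replaces a 1,500-1,800 kcal diet with 400-600 kcal
print(20 * (1500 - 600))   # 18000 kcal, low end of the stated gap
print(20 * (1800 - 400))   # 28000 kcal, high end of the stated gap
```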

All of the three diets were designed to make the participants lose weight. Exercise was not manipulated as part of the experiment. The one-day and five-day groups consumed 400-600 kcal/day while VLCDing, with the majority of the calories coming from high-protein-low-fat minimally processed food items – notably lean meat, fish, and fowl.

The graphs below show results in terms of weight loss and fasting plasma glucose (FPG) reduction. They suggest that, while there were significant differences in weight loss between the VLCD groups and the SBT group, the differences in FPG reduction were relatively minor across the three groups.





Glucose was measured in mmol/l and weight in kg. One mmol/l is equivalent to approximately 18 mg/dl (), and one kg is equivalent to about 2.2 lbs.
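Here are those conversions as code, for readers who want to translate the graphs into U.S. units; the 18 mg/dl-per-mmol/l factor follows from glucose's molar mass of roughly 180 g/mol.

```python
def mmol_l_to_mg_dl(glucose_mmol_l):
    """Convert blood glucose from mmol/l to mg/dl (factor ~18)."""
    return glucose_mmol_l * 18.0

def kg_to_lbs(weight_kg):
    return weight_kg * 2.2

print(mmol_l_to_mg_dl(7.0))  # 126 mg/dl, a common diabetes cutoff
print(kg_to_lbs(90))         # ~198 lbs
```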

The graph below, however, shows a different picture. It shows results in terms of the percentages of participants with HbA1c below 6 percent. The HbA1c is a measure of average blood glucose over a period of a few months ().



The graph above tells us that the intermittent VLCD interventions, particularly the second (five-day), were reasonably successful at promoting average blood glucose control. A threshold normally used to characterize poor blood glucose control is 7.3 percent (), which is based on studies of HbA1c levels associated with diabetes complications.
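As an aside, HbA1c maps to an estimated average glucose through a published regression (the ADAG study equation); this mapping is my addition here, not something used in the Williams et al. article.

```python
def estimated_average_glucose(hba1c_percent):
    """Estimated average glucose in mg/dl from HbA1c (%), per the
    ADAG study regression (an addition here, not from the post)."""
    return 28.7 * hba1c_percent - 46.7

print(estimated_average_glucose(6.0))  # ~125 mg/dl
print(estimated_average_glucose(7.3))  # ~163 mg/dl, near the poor-control threshold
```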

The graph below, which is probably the most telling of all, shows long-term FPG changes (at the 20-week mark) plotted against short-term changes (at the 3-week mark). What this graph tells us is that those who experienced the most improvement right away were the ones with the most improvement in the long term.



This study tells us a few interesting things. Firstly, intermittent VLCDing with a focus on high-protein foods (lean meats) seems to be a powerful way of controlling average blood glucose levels in diabetics. It is essentially a low carbohydrate diet that is also low in calories (). Secondly, results with respect to FPG levels are not as telling as those in terms of HbA1c levels, even though HbA1c and FPG are highly correlated.

Thirdly, intermittent VLCDing may not actually “cure” diabetes when significant beta cell damage has already occurred (). This conclusion is speculative, but it follows from the short-term versus long-term results.

It seems that intermittent VLCDing helps diabetics in general with glucose control, but is truly curative only for those in whom enough beta cell function has been preserved. At least this is one explanation for the fact that those with immediate positive results (at the 3-week mark) tend to be the ones who retain those results over the long term.

The immediate positive results may well be due to those individuals not having reached the point at which significant and irreversible beta cell damage occurred. In other words, this study suggests that intermittent VLCDing can be particularly helpful in the long term for prediabetics.

This third, and speculative, conclusion may have to be revisited in light of the excellent discussion by Roy Taylor on the etiology and reversibility of type II diabetes (), linked by Evelyn (see comments under this post). This refers to the effects of an extended and more extreme version of VLCD than discussed here, where uninterrupted VLCD would last as long as 8 weeks.

For those who are not diabetic, I personally think it would be better to alternate VLCD with glycogen depleting exercise (e.g., sprints, weight training), every other day or so, with a lot more food consumed on exercise days (). After excess body fat is lost, it would be advisable to stick to weight-maintenance calorie intake, averaged over a week.

Monday, February 17, 2014

The megafat could be the healthiest


Typically obesity leads to health problems via insulin resistance (). Excess calories are stored as fat in fat cells up to a certain point. Beyond this point fat cells start rejecting fat. This is the point where fat cells become insulin resistant.

When they become insulin resistant, fat cells no longer respond to the insulin-mediated signal that they should store fat. Fat then increases in circulation and starts getting stored in tissues other than fat cells, including organ tissues (visceral fat). When the organ in question is the liver, this is called non-alcoholic fatty liver disease.

This progression happens with most people, but not with those who can progress to extremely high body fat levels (). Those people are the “megafat-prone” (MP). In the MP, fat cells take a long time to start rejecting fat. So the MP can keep on gaining body fat, often with no sign of diabetes at body fat levels that would have caused serious harm to most people.

One could say that the MP are extremely metabolically resilient. By not becoming insulin resistant as they gain more and more body fat, the MP are somewhat similar to sumo wrestlers (photo below from Nationalgeographic.com); although the main reason why sumo wrestlers do not develop insulin resistance is vigorous exercise. Visceral fat is very easy to "mobilize" through vigorous exercise; this being the basis for the "fat-but-fit" phenomenon (). There are two interesting, and also speculative, inferences that can be made based on all of this.



One is that the MP could potentially be the healthiest people among us. This is due to their extreme metabolic resilience, which should be fairly protective as long as they avoid reaching the body fat level that is unhealthy for them. In fact, they could be overweight or even obese and fairly healthy, at least in terms of degenerative diseases. This is a genetic predisposition, which is likely to run in families.

The other inference is that the MP would probably not look “ripped” at relatively low weights. Since their body fat cells have above average insulin sensitivity at high body fat levels, one would expect that high insulin sensitivity to remain at low body fat levels. Insulin sensitivity is strongly associated with longevity ().

So, bringing all of this together, here are two apparent paradoxes. That person who already gained a lot of body fat and is an MP, showing no health problems at or near obesity, could be the healthiest among us. And that person who cannot look ripped at low body fat levels, no matter how hard he or she tries, may be one of the 2 percent or so of the population who will live beyond 90.

Unfortunately it is hard to tell whether someone is MP or not until the person actually becomes megafat. And if you are MP and actually become megafat, the afterlife will very likely arrive sooner rather than later.

Monday, February 3, 2014

Beef heart


I have posted here before about the nutrition value of beef liver, nature’s “super-multivitamin”. I have even speculated that grain-fed beef liver could be particularly nutritious (). What I should have done also was to post about beef liver’s equal in terms of nutrition value – beef heart. In this post I am correcting the omission.

Contrary to popular belief, not all organ meats are inherently fatty. The fat that is attached to an animal’s heart after slaughter, even if from grain-fed cattle, can be easily removed. The resulting cut will have a very low fat-to-protein ratio; often significantly lower than that of fat-trimmed non-organ muscle cuts.

I don't say this because I consider fat to be unhealthy. In fact, dietary fat is necessary for the absorption of fat-soluble vitamins, and can thus be uniquely healthy. However, fat also is the most calorie-dense macronutrient. Even though the caloric values of macronutrients vary based on a number of factors, excess calories tend to be stored as excess body fat.

A 100 g portion of cooked beef heart, as in the photos below, will have 28 g of protein and only 5 g of fat (see this link, you may have to reset the serving size field: ). The photos below show two different beef heart dishes I have prepared. In the first the beef heart was barbecued. In the second it was simmered in a pan with vegetables for about 8 h.





Below is a simple recipe for the barbecued beef heart, which I recommend cutting into steaks. For the simmered beef heart I suggest cutting it into chunks that resemble cubes; then you can just add the dry seasoning powder mentioned below, some vegetables, and enough water to last about 8 h, and leave it simmering.

- Prepare some dry seasoning powder by mixing salt, garlic powder, chili powder, and a small amount of cayenne pepper.
- Season the beef heart steaks at least 2 hours prior to placing them on the grill.
- Grill with the lid on, checking the meat every 10 minutes or so. (I use charcoal, one layer only to avoid burning the surface of the meat.) Turn it frequently, always putting the lid back on.
- If you like it rare, 20 minutes (or a bit less) may be enough.

Beef heart is a very good source of vitamins and minerals, and is one of the least expensive cuts of meat (in meat sections of grocery stores, not in paleo restaurants). Many people prefer beef heart over beef liver because of beef heart’s texture.

While I have restricted my comments in this post to “beef” heart, the hearts of most animals that are eaten by humans (e.g., chicken, duck, deer, turkey) are fairly nutritious, and they seem to have that uniformly meaty texture that many people like.

Here is an interesting factoid. The largest known carnivorous marsupial of modern times was the now extinct Tasmanian tiger. It was an elusive and solitary animal, and the subject of the beautiful film "The Hunter" (2011) (). The Tasmanian tiger was known to frequently eat only the hearts of prey. I hope this is not why it became extinct!

Tuesday, January 21, 2014

Waist-to-weight ratio vs. body mass index


The optimal waist / weight ratio (WWR) theory () is one of the most compatible with evidence regarding the lowest mortality body mass index (BMI).

But why do we need the WWR when we already have the BMI? This was a question that a reader asked me in connection with a post on the John Stone transformation ().

The montage below shows photos of the John Stone transformation with the respective WWR and BMI measures.



Well, which one is the most useful measure, WWR or BMI?
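For reference, here is how the two measures are computed; the example numbers are hypothetical, and for the WWR any units work as long as they are used consistently.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight over height squared."""
    return weight_kg / height_m ** 2

def wwr(waist, weight):
    """Waist-to-weight ratio; unit choice is a convention,
    as long as it is applied consistently across comparisons."""
    return waist / weight

print(round(bmi(80, 1.78), 1))  # 25.2
print(round(wwr(85, 80), 2))    # 1.06 (waist in cm, weight in kg)
```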

Monday, January 6, 2014

Doing crossfit and looking like a bodybuilder?


Top crossfit athletes like Annie Thorisdottir and Rich Froning Jr. (pictured below; photos from Crossfitthestables.com and List09.com) look like bodybuilders even though their training practices are markedly different from those of most top natural bodybuilders. It is instructive, from a human physiology perspective, to try to understand why.





First of all we should make it clear that what makes Annie Thorisdottir and Rich Froning Jr. look the way they do is not only crossfit training. Genetics plays a key role here. Some people don’t accept this argument at all. Can you imagine someone arguing that top basketball players are generally tall because the stretching and reaching moves inherent in playing basketball make them tall? Top basketball players are not tall because they play basketball; the causality is stronger in the opposite direction: they play basketball because they are tall. The situation is not all that different with top crossfit competitors.

Often people will point at before and after photos as evidence that anyone can achieve the level of muscularity of a champion natural bodybuilder, if they do the right things. The problem with these before and after photos is that one can “go down” in terms of muscularity and definition quite a lot, but there is a clear ceiling in terms of “going up”. For example, if one goes from competitive marathon running to competitive bodybuilding, after a few years the difference will be dramatic if the person has the genetics necessary to gain a lot of muscle.

In other words, those who have the genetics to become very muscular can lose muscle and/or gain body fat to the point that they would look like they don’t have much genetic potential for muscle gain. Someone who doesn’t have the required genetics, on the other hand, will also be very effective at losing muscle and/or gaining body fat, but will be much more limited at the upper end of the scale.

The table below is from a widely cited and classic study by Fryburg and colleagues on the effects of insulin, insulin-like growth factor 1, and amino acid infusion on muscle accretion of protein. The article is available online as a PDF file (). The measurements shown in the table were taken basally (BAS) and at 3 h and 6 h after the start of the infusions, one of which was of a balanced amino acid mixture that raised arterial phenylalanine concentration to about twice what it was before the infusion. Phenylalanine is one of the essential amino acids present in muscle ().



There were four experimental conditions, two with only amino acid infusion, one with insulin and amino acid infusions, and one with insulin-like growth factor 1 (IGF-1) and amino acid infusions. Protein synthesis and breakdown numbers are based on phenylalanine kinetics inferences. The balance number is based on the synthesis and breakdown numbers; the former minus the latter. Note that at BAS the balance is always negative; this implies a net amino acid loss from muscle. At BAS the measurements were taken after a 12 h fast.

All infusions – of insulin, IGF-1, and amino acids – were continuously applied during the 6 h period. There was no exercise involved in this infusion study, and the amino acid mixture was balanced; as opposed to focused on certain amino acids, such as BCAAs.

The numbers in the table suggest that insulin infusion brings the balance to positive territory at the 3-h mark, with the effect wearing down at 6 h. IGF-1 infusion brings the balance to positive territory at 3 h, with the effect increasing and almost doubling at 6 h. Amino acid infusion alone brings the balance to positive territory a bit at 3 h and 6 h, and much less than when it is combined with insulin or IGF-1 infusions.

The effects of these infusions were due to both reductions in breakdown (amino acid loss) and increases in synthesis. We see that insulin exerts its effect on the balance primarily by suppressing breakdown. IGF-1 exerts its effect on the balance primarily by increasing synthesis. The effect of IGF-1 on the balance is significantly stronger than those of insulin and amino acid infusions, even when these latter two are taken together.

While this is an infusion study, one can derive conclusions about what would happen in response to different types of exercise and nutrients. Under real life conditions, insulin will increase in response to ingestion of carbohydrates and/or protein. IGF-1 will increase in response to growth hormone (GH) elevation, of which a major trigger is intense exercise.

The type of exercise that leads to the highest elevation of GH levels is intense exercise that raises heart rate significantly and rapidly. Examples are sprints, large-muscle resistance exercise, and resistance exercise involving multiple muscles at the same time. At the very high end of GH secretion are exercises that use large upper and lower body muscles at the same time, such as the deadlift. At the low end of GH secretion are localized small-muscle exercises, such as calf raises and isolated curls.

Anecdotally it seems that, at least for beginners, those exercises that lead to the highest GH secretion are the least “comfortable” for them. That is, those are the exercises that cause the most “huffing and puffing”. So next time you do an exercise like that, use this as a motivator: these are the exercises with the biggest return on investment; whether you are looking for health improvement, muscle gain, or both.

Competitive crossfit practitioners tend to favor variations of high-intensity interval training (HIIT), with an emphasis on a blend of endurance and strength exercises. Endurance and strength are both needed in crossfit competition. Competitive bodybuilders tend to focus more on strength, often exercising with more resistance or weight than competitive crossfit practitioners.

Extrapolating from the infusion study, one could argue that high GH secretion exercises are critical for amino acid accretion in muscle. Both groups mentioned above – competitive crossfit practitioners and competitive bodybuilders – exercise in ways that lead to high GH secretion. Surprising as this may sound (to some), if you do chin-ups, you’ll probably have better results in terms of biceps hypertrophy than if you do isolated bicep curls. This will happen even though the overall load on the bicep muscles will be lower with the chin-ups. The reason is that the GH secretion will be significantly higher with the chin-ups, because more muscles are involved at the same time, including large ones (e.g. the lats).

It is interesting to see competitive crossfit practitioners talking about needing to lose some weight but not being able to (). The reason is that they do not have much body fat to lose, and the types of exercise that they do create such a powerful stimulus toward positive nitrogen balance () that they end up gaining weight even as they restrict calorie intake.

Carbohydrate ingestion prior to exercise may raise insulin levels, but will blunt GH secretion; protein without carbohydrate, on the other hand, will raise insulin levels without blunting GH secretion (). Whether ingesting protein immediately before exercising is necessarily good in the long run is an open question, however, because GH secretion is likely to be greater for someone who is exercising in the fasted state, as GH secretion is in part a response to glycogen depletion (, ). And, as we have seen from the infusion study, GH secretion is disproportionately important as a positive nitrogen balance factor.

Compensatory adaptation applied to human biology () suggests that the body responds to challenges over time, in a compensatory way. Which scenario poses the bigger challenge: (a) high GH exercise with more amino acid loss during the exercise, or (b) high GH exercise with less amino acid loss during the exercise? I think it is (a), because the message being sent to the body is that “we need more muscle to do all of this and still compensate for the loss during exercise”.

Maybe this is why top crossfit practitioners end up looking like bodybuilders, and cannot lose muscle even when a slightly lighter frame would make them more competitive in crossfit games. Their bodies are just responding to the stimuli they are getting.