In a world with no antibiotics, how did doctors treat infections?

Bloodletting was a treatment for infection in the past. Wellcome Library, London, CC BY

The development of antibiotics and other antimicrobial therapies is arguably the greatest achievement of modern medicine. However, overuse and misuse of antimicrobial therapy predictably leads to resistance in microorganisms. Antibiotic-resistant bacteria such as methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococcus species (VRE) and carbapenem-resistant Enterobacteriaceae (CRE) have emerged. Certain CRE species are resistant to multiple antibiotics, and have been deemed “superbugs” in the news.

Alternative therapies have been used to treat infections since antiquity, but none are as reliably safe and effective as modern antimicrobial therapy.

Unfortunately, due to increasing resistance and lack of development of new agents, the possibility of a return to the pre-antimicrobial era may become a reality.

So how were infections treated before antimicrobials were developed in the early 20th century?

Blood, leeches and knives

Bloodletting was used as a medical therapy for over 3,000 years. It originated in Egypt in 1000 B.C. and was used until the middle of the 20th century.

Medical texts from antiquity all the way up until the 1940s recommend bloodletting for a wide variety of conditions, but particularly for infections. As late as 1942, the 14th edition of William Osler’s Principles and Practice of Medicine, historically the preeminent textbook of internal medicine, included bloodletting as a treatment for pneumonia.

Bloodletting is based on an ancient medical theory that the four bodily fluids, or “humors” (blood, phlegm, black bile and yellow bile), must remain in balance to preserve health. Infections were thought to be caused by an excess of blood, so blood was removed from the afflicted patient. One method was to make an incision in a vein or artery, but it was not the only one. Cupping was another common method, in which heated glass cups were placed on the skin, creating a vacuum, breaking small blood vessels and resulting in large areas of bleeding under the skin. Most infamously, leeches were also used as a variant of bloodletting.

A man sitting in chair, arms outstretched, streams of blood pouring out as a nun places leeches on his body.
Images from the History of Medicine (NLM)

Interestingly, though bloodletting was recommended by physicians, the practice was actually performed by barbers, or “barber-surgeons.” The red and white striped pole of the barbershop originated as “advertising” their bloodletting services, the red symbolizing blood and the white symbolizing bandages.

There may actually have been some benefit to the practice – at least for certain kinds of bacteria in the early stages of infection. Many bacteria require iron to replicate, and iron is carried on heme, a component of the hemoglobin in red blood cells. In theory, fewer red blood cells meant less available iron to sustain the bacterial infection.

Some mercury for your syphilis?

Naturally occurring chemical elements and chemical compounds have historically been used as therapies for a variety of infections, particularly for wound infections and syphilis.

A woodcut from 1689 showing various methods of syphilis treatment including mercury fumigation.
Images from the History of Medicine (NLM)

Topical iodine, bromine and mercury-containing compounds were used to treat infected wounds and gangrene during the American Civil War. Bromine was used most frequently, but was very painful when applied topically or injected into a wound, and could cause tissue damage itself. These treatments inhibited bacterial cell replication, but they could also harm normal human cells.

Mercury compounds were used to treat syphilis from about 1363 to 1910. The compounds could be applied to skin, taken orally or injected. But the side effects could include extensive damage to skin and mucous membranes, kidney and brain damage, and even death. Arsphenamine, an arsenic derivative, was also used in the first half of the 20th century. Though it was effective, side effects included optic neuritis, seizures, fever, kidney injury and rash.

Thankfully, in 1943, penicillin supplanted these treatments and remains the first-line therapy for all stages of syphilis.

Looking in the garden

Over the centuries, a variety of herbal remedies evolved for the treatment of infections, but very few have been evaluated by controlled clinical trials.

One of the more famous herbally derived therapies is quinine, which was used to treat malaria. It was originally isolated from the bark of the cinchona tree, which is native to South America. Today we use a synthetic form of quinine to treat the disease. Before that, cinchona bark was dried, ground into powder, and mixed with water for people to drink. The use of cinchona bark to treat fevers was described by Jesuit missionaries in the 1600s, though it was likely used in native populations much earlier.

An engraving of a Quinine plant, 1880.
Wellcome Library, London, CC BY

Artemisinin, which is derived from the Artemisia annua (sweet wormwood) plant, is another effective malaria treatment. A Chinese scientist, Dr. Tu Youyou, and her team analyzed ancient Chinese medical texts and folk remedies, identifying extracts from Artemisia annua that effectively inhibited the replication of the malaria parasite in animals. Tu shared the 2015 Nobel Prize in Physiology or Medicine for the discovery of artemisinin.

You probably have a botanically derived therapy against wound infection in your kitchen cupboard. The use of honey in wound healing dates back to the Sumerians in 2000 B.C. Honey’s high sugar content can dehydrate bacterial cells, while its acidity can inhibit the growth and division of many bacteria. Honey also contains an enzyme, glucose oxidase, that reduces oxygen to hydrogen peroxide, which kills bacteria.

The most potent naturally occurring honey is thought to be Manuka honey. It is derived from the flower of the tea tree bush, which has additional antibacterial properties.

Like other botanically derived therapies, honey has inspired the creation of pharmaceuticals. MEDIHONEY®, a medical grade product developed by Derma Sciences, is used to promote healing in burns as well as other types of wounds.

Combating antimicrobial resistance

While some of these ancient therapies proved effective enough that they are still used in some form today, on the whole they just aren’t as good as modern antimicrobials at treating infections. Sadly, thanks to overuse and misuse, antibiotics are becoming less effective.

Each year in the United States, at least two million people become infected with bacteria that are resistant to antibiotics, and at least 23,000 die as a direct result of these infections.

While resistant bacteria are most commonly reported, resistance also can arise in other microorganisms, including fungi, viruses and parasites. Increasing resistance has raised the possibility that certain infections may eventually be untreatable with the antimicrobials we currently have.

The race is on to find new treatments for these infections, and researchers are exploring new therapies and new sources for antibiotics.

Besides using antibiotics as directed and only when necessary, you can avoid infections in the first place with appropriate immunization, safe food-handling practices and washing your hands.

Tracking resistant infections so we can learn more about them and their risk factors, as well as limiting the use of antibiotics in humans and animals, could also help curb the risk of resistant bacteria.

Cristie Columbus does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.


Explainer: Where did Zika virus come from and why is it a problem in Brazil?

From October 2015 to January 2016, there were almost 4,000 cases of babies born with microcephaly in Brazil. Before then, there were just 150 cases per year.

The suspected culprit is a mosquito-borne virus called Zika. Officials in Colombia, Ecuador, El Salvador and Jamaica have suggested that women delay becoming pregnant. And the Centers for Disease Control and Prevention has advised pregnant women to postpone travel to countries where Zika is active.

Countries and territories with active Zika virus transmission.
Centers for Disease Control and Prevention

The World Health Organization says it is likely that the virus will spread, as the mosquitoes that carry the virus are found in almost every country in the Americas.

Zika virus was discovered almost 70 years ago, but wasn’t associated with outbreaks until 2007. So how did this formerly obscure virus wind up causing so much trouble in Brazil and other nations in South America?

Where did Zika come from?

Zika virus was first detected in Zika Forest in Uganda in 1947 in a rhesus monkey, and again in 1948 in the mosquito Aedes africanus, which is the forest relative of Aedes aegypti. Aedes aegypti and Aedes albopictus can both spread Zika. Sexual transmission between people has also been reported.

Aedes aegypti. Emil August Goeldi (1859-1917).
via Wikimedia Commons.

Zika has a lot in common with dengue and chikungunya, another emergent virus. All three originated in West and Central Africa and Southeast Asia, but have recently expanded their range to include much of the tropics and subtropics globally. And they are all spread by the same species of mosquitoes.

Until 2007 very few cases of Zika in humans were reported. Then an outbreak occurred on Yap Island of Micronesia, infecting approximately 75 percent of the population. Six years later, the virus appeared in French Polynesia, along with outbreaks of dengue and chikungunya viruses.

How did Zika get to the Americas?

Genetic analysis of the virus revealed that the strain in Brazil was most similar to one that had been circulating in the Pacific.

Brazil had been on alert for an introduction of a new virus following the 2014 FIFA World Cup, because the event concentrated people from all over the world. However, no Pacific island nation with Zika transmission had competed at this event, making it less likely to be the source.

There is another theory that Zika virus may have been introduced following an international canoe event held in Rio de Janeiro in August of 2014, which hosted competitors from various Pacific islands.

Another possible route of introduction was overland from Chile, since that country had detected a case of Zika disease in a traveler returning from Easter Island.

Most people with Zika don’t know they have it

According to research after the Yap Island outbreak, the vast majority of people (80 percent) infected with Zika virus will never know it – they do not develop any symptoms at all. A minority who do become ill tend to have fever, rash, joint pains, red eyes, headache and muscle pain lasting up to a week. And no deaths had been reported.

However, in the aftermath of the Polynesian outbreak it became evident that Zika was associated with Guillain-Barré syndrome, a life-threatening neurological paralyzing condition.

In early 2015, Brazilian public health officials sounded the alert that Zika virus had been detected in patients with fevers in northeast Brazil. Then there was a similar uptick in the number of cases of Guillain-Barré in Brazil and El Salvador. And in late 2015 in Brazil, cases of microcephaly started to emerge.

At present, the link between Zika virus infection and microcephaly isn’t confirmed, but the virus has been found in amniotic fluid and brain tissue of a handful of cases.

How Zika might affect the brain is unclear, but a study from the 1970s revealed that the virus could replicate in neurons of young mice, causing neuronal destruction. Recent genetic analyses suggest that strains of Zika virus may be undergoing mutations, possibly accounting for changes in virulence and its ability to infect mosquitoes or hosts.

The Swiss cheese model for system failure

The Swiss cheese model of accident causation.
Davidmack via Wikimedia Commons, CC BY-SA

One way to understand how Zika spread is to use something called the Swiss cheese model. Imagine a stack of Swiss cheese slices. The holes in each slice are a weakness, and throughout the stack, these holes aren’t the same size or the same shape. Problems arise when the holes align.

With any disease outbreak, multiple factors are at play, and each may be necessary but not sufficient on its own to cause it. Applying this model to our mosquito-borne mystery makes it easier to see how many different factors, or layers, coincided to create the current Zika outbreak.

A hole through the layers

The first layer is a fertile environment for mosquitoes. That’s something my colleagues and I have studied in the Amazon rain forest. We found that deforestation followed by agriculture and regrowth of low-lying vegetation provided a much more suitable environment for the malaria mosquito carrier than pristine forest.

Increasing urbanization and poverty create a fertile environment for the mosquitoes that spread dengue by creating ample breeding sites. In addition, climate change may raise the temperature and/or humidity in areas that previously have been below the threshold required for the mosquitoes to thrive.

The second layer is the introduction of the mosquito vector. Aedes aegypti and Aedes albopictus have expanded their geographic range in the past few decades. Urbanization, changing climate, air travel and transportation, and waxing and waning control efforts that are at the mercy of economic and political factors have led to these mosquitoes spreading to new areas and coming back in areas where they had previously been eradicated.

For instance, in Latin America, continental mosquito eradication campaigns led by the Pan American Health Organization in the 1950s and 1960s to battle yellow fever dramatically shrank the range of Aedes aegypti. Following this success, however, interest in maintaining these mosquito control programs waned, and between 1980 and the 2000s the mosquito made a full comeback.

The third layer, susceptible hosts, is critical as well. For instance, chikungunya virus has a tendency to infect very large portions of a population when it first invades an area. But once it blows through a small island, the virus may vanish because there are very few susceptible hosts remaining.

Since Zika is new to the Americas, there is a large population of susceptible hosts who haven’t previously been exposed. In a large country, Brazil for instance, the virus can continue circulating without running out of susceptible hosts for a long time.

The fourth layer is the introduction of the virus. It can be very difficult to pinpoint exactly when a virus is introduced in a particular setting. However, studies have associated increasing air travel with the spread of certain viruses such as dengue.

When these multiple factors are in alignment, it creates the conditions needed for an outbreak to start.

Putting the layers together

My colleagues and I are studying the role of these “layers” as they relate to the outbreak of yet another mosquito-borne virus, Madariaga virus (formerly known as Central/South American eastern equine encephalitis virus), which has caused numerous cases of encephalitis in the Darien jungle region of Panama.

There, we are examining the association between deforestation, mosquito vector factors, and the susceptibility of migrants compared to indigenous people in the affected area.

In our highly interconnected world, which is being subjected to massive ecological change, we can expect ongoing outbreaks of viruses originating in far-flung regions with names we can barely pronounce – yet.

Amy Y. Vittor has received funding from the Institute for Social and Environmental Transition International, NIH Fogarty International Center, NIH NIAID Global Health training grant, New York Community Trust Fund, and the University of Florida.


Not all psychopaths are criminals – some psychopathic traits are actually linked to success

Some psychopathic traits can lead to success, at least in the short term.

Tom Skeyhill was an acclaimed Australian war hero, known as “the blind solider-poet.” During the monumental World War I battle of Gallipoli, he was a flag signaler, among the most dangerous of all positions. After being blinded when a bomb shell detonated at his feet, he was transferred out.

After the war he penned a popular book of poetry about his combat experience. He toured Australia and the United States, reciting his poetry to rapt audiences. Former President Theodore Roosevelt appeared on stage with him and said, “I am prouder to be on the stage with Tom Skeyhill than with any other man I know.” His blindness suddenly disappeared following a medical procedure in America.

But, according to biographer Jeff Brownrigg, Skeyhill wasn’t what he seemed. The poet had, in fact, faked his blindness to escape danger.

That’s not all. After a drunken performance, he blamed his slurred speech on an unverifiable war injury. He claimed to have met Lenin and Mussolini (there is no evidence that he did), and spoke of his extensive battle experience at Gallipoli, when he had been there for only eight days.

You have to be pretty bold to spin those kinds of self-aggrandizing lies and to carry them off as long as Skeyhill did. Although he never received a formal psychological examination (at least to our knowledge), we suspect that most contemporary researchers would have little trouble recognizing him as a classic case of psychopathic personality, or psychopathy.

What’s more, Skeyhill embodied many elements of a controversial condition sometimes called successful psychopathy.

Despite the popular perception, most psychopaths aren’t coldblooded or psychotic killers. Many of them live successfully among the rest of us, using their personality traits to get what they want in life, often at the expense of others.

All psychopaths are criminals if you look for them only behind bars

Psychopathy is not easily defined, but most psychologists view it as a personality disorder characterized by superficial charm conjoined with profound dishonesty, callousness, guiltlessness and poor impulse control. According to some estimates, psychopathy is found in about one percent of the general population, and for reasons that are poorly understood, most psychopaths are male.

That number probably doesn’t capture the full number of people with some degree of psychopathy. Data suggest that psychopathic traits lie on a continuum, so some individuals possess marked psychopathic traits but don’t fulfill the criteria for full-blown psychopathy.

Not surprisingly, psychopathic individuals are more likely than other people to commit crimes. They almost always understand that their actions are morally wrong – it just doesn’t bother them. Contrary to popular belief, only a minority are violent.

Because researchers tend to seek out psychopaths where they can locate them in plentiful numbers, much research on the condition has taken place in prisons and jails. That’s why until fairly recently, the lion’s share of theory and research on psychopathy focused on decidedly unsuccessful individuals – such as convicted criminals.

But a lot of people on the psychopathic continuum aren’t in jail or prison. In fact, some individuals may be able to use psychopathic traits, like boldness, to achieve professional success.

A profoundly disturbed core

The very existence of successful psychopathy has been controversial, perhaps in part because many scholars insist they have never seen it. Some say the concept is illogical, with others going so far as to term it an oxymoron.

Successful psychopathy is a controversial idea, but it’s not a new one. In 1941, American psychiatrist Hervey Cleckley was among the first to highlight this paradoxical condition in his classic book “The Mask of Sanity.” According to Cleckley, the psychopath is a hybrid creature, donning an engaging veil of normalcy that conceals an emotionally impoverished and profoundly disturbed core.

In Cleckley’s eyes, psychopaths are charming, self-centered, dishonest, guiltless and callous people who lead aimless lives devoid of deep interpersonal attachments. But Cleckley also alluded to the possibility that some psychopathic individuals are successful interpersonally and occupationally, at least in the short term.

In a 1946 article, he wrote that the typical psychopath will have often:

outstripped 20 rival salesmen over a period of 6 months, or married the most desirable girl in town, or, in a first venture into politics, got himself elected into the state legislature.

Charming, aggressive and looking out for number one

In 1977, Catherine Widom published a study about “noninstitutionalized psychopaths.” To find these individuals, she placed an advertisement in underground Boston newspapers calling for “charming, aggressive, carefree people who are impulsively irresponsible but are good at handling people and looking out for number one.”

The individuals she recruited exhibited a personality profile similar to those of incarcerated psychopaths, and about two-thirds of them had been arrested.

What’s the difference between the psychopaths who get arrested and the ones who don’t? Research from Adrian Raine, now at the University of Pennsylvania, conducted in the 1990s sheds some light.

Raine and his colleagues recruited men from temporary employment agencies in the Los Angeles area. After first identifying those who met the criteria for psychopathy, they compared the 13 participants who had been convicted of one or more crimes with the 26 who had not. Raine provisionally regarded these 26 men as successful psychopaths.

Each man gave a videotaped speech about his personal flaws. Raine and his colleagues found that the men they considered successful psychopaths displayed significantly greater heart rate increases, suggesting an increase in social anxiety. These men also performed better on a task requiring them to modulate their impulses.

The bottom line: having a modicum of social anxiety and impulse control may explain why some psychopathic people manage to stay out of trouble.

The psychopath at the stock exchange

More recently, some researchers, ourselves included, have speculated that people with pronounced psychopathic traits may be found disproportionately in certain professional niches, such as politics, business, law enforcement, firefighting, special operations military services and high-risk sports. Most of those with psychopathic traits probably aren’t classic “psychopaths,” but nonetheless exhibit many features of the condition.

Perhaps their social poise, charisma, audacity, adventurousness and emotional resilience lend them a performance edge over the rest of us when it comes to high-stakes settings. As Canadian psychologist Robert Hare, the world’s premier psychopathy expert, quipped, “If I weren’t studying psychopaths in prison, I’d do it at the stock exchange.”

Our lab at Emory University, and that of our collaborators at Florida State University, are investigating whether some psychopathic traits, such as boldness, predispose to certain successful behaviors.

What do we mean by boldness? It encompasses poise and charm, physical risk-taking and emotional resilience, and it is a trait that is well-represented in many widely used psychopathy measures.

For instance, in studies on college students and people in the general community, we have found that boldness is modestly tied to impulsive heroic behaviors, such as intervening in emergencies. It’s also linked to a higher likelihood of assuming leadership and management positions, and to certain professions, such as law enforcement, firefighting and dangerous sports.

Want to be president? Having some psychopathic traits could help

There’s one job in particular in which boldness may make a difference: president of the United States.

In a study of the 42 American presidents up to and including George W. Bush, we asked biographers and other experts to complete a detailed set of personality items – including items assessing boldness – about the president of their expertise. Then, we connected these data with independent surveys of presidential performance by prominent historians.

We found that boldness was positively, although modestly, associated with better overall presidential performance. And several specific facets of such performance, such as crisis management, agenda setting and public persuasiveness, were associated with boldness too. This may be something to keep in mind the next time you see presidential candidates talk about how bold they’ll be in the White House.

Theodore Roosevelt, the boldest of them all.
National Archives and Records Administration

In an interesting coincidence, the boldest president in our study was the one who said he was proud to share a stage with Tom Skeyhill. Theodore Roosevelt was described by a recent biographer as possessing a “robust, forceful, naturalistic, bombastic, teeth-clapping, animal-skinning, keen-eyed, avalanche-like persona.”

The boldest presidents were not necessarily extreme or pathological on this dimension, but boldness was markedly elevated relative to the average person.

Although boldness was tied to some successful actions, we generally found that other psychopathic features, such as callousness and poor impulse control, were unrelated or negatively related to professional success.

Boldness may be associated with certain positive life outcomes, but full-fledged psychopathy generally is not.

Where’s the line between success and criminality?

Could psychopathic traits be adaptive? Few investigators have explored this “Goldilocks” hypothesis. Moreover, we know surprisingly little about how psychopathic traits forecast real-world behavior over extended stretches of time.

The charm of the psychopath is shallow and superficial. With that in mind, we would argue that boldness and allied traits may be linked to successful behaviors in the short term, but that their effectiveness almost always fizzles out in the long term. After all, Tom Skeyhill was able to fool people for only so long.

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.


Take a chill pill if you want to avoid the flu this year

Avoiding stress could help stave off the flu.

Along with snow and frigid temperatures, the winter months also bring coughs, colds and the flu. Lower respiratory tract infections, the ones that cause feelings of chest congestion despite the deepest coughs, are one of the top 10 causes of death in the United States and around the world. In the U.S. the flu alone kills thousands of people each year.

Besides causing poor health, the flu and other respiratory illness also have a huge impact on the economy. A study published in 2007 suggests that flu epidemics account for over US$10 billion per year in direct medical care costs, while lost earnings due to illness account for an additional $16.3 billion per year. And that doesn’t cover run-of-the-mill colds and coughs. The total economic impact of non-influenza viral respiratory tract infections is estimated at another $40 billion per year.

Avoiding the flu or catching a cold in the winter months can be tough, but there is something you can do in addition to getting the flu shot and washing your hands.

Relax. There’s strong evidence that stress affects the immune system and can make you more susceptible to infections.

Big doses of stress can hurt your immune system

Health psychologist Andrew Baum defined stress as “a negative emotional experience accompanied by predictable biochemical, physiological, and behavioral changes that are directed toward adaptation.” Scientists can actually measure the body’s stress response – the actions the body takes to fight through arduous situations ranging from difficult life events to infections.

In most stress responses, the body produces chemicals called pro-inflammatory cytokines. They activate the immune system, and without them the body would not be able to fight off bacteria, viruses or fungi. Normally the stress response is helpful because it preps your body to deal with whatever challenge is coming. When the danger passes, this response is turned off with help from anti-inflammatory cytokines.

However, if the stress response cannot be turned off, or if there is an imbalance between pro-inflammatory and anti-inflammatory cytokines, the body can be damaged. This extra wear and tear due to the inflammation from a heightened stress response has been termed allostatic load. A high allostatic load has been associated with multiple chronic illnesses, such as cardiovascular disease and diabetes. This partly explains the focus on taking anti-inflammatory supplements to prevent or treat disease.

Short-term stress hurts too

An inappropriate stress response can do more than cause chronic illness down the road. It can also make you more susceptible to acute infections by suppressing the immune system.

For example, when mice are subjected to different environmental stressors, there is an increase in a molecule in their blood called corticosterone, which is known to have immunosuppressive effects on the body. This type of response is mirrored in research on humans. In a study of middle-aged and older women, stress from being instructed to complete a mental math or speech test was associated with higher levels of similar immunosuppressive molecules.

A similar response has been documented among medical students. A 1995 study showed that the students who reported “feeling stressed” the most during exam periods also had the highest levels of molecules with immunosuppressant characteristics.

Stress won’t make you healthier.

Stress makes it easier to get sick

There is also direct evidence that stress can increase risk of infection. For instance, a group of scientists in Spain used surveys to assess stress in 1,149 people for a year and then measured how many colds occurred within the group. They found that every dimension of stress they measured was associated with an increased risk for getting the common cold. While this study’s large sample size and design make it particularly noteworthy, the relationship between colds and stress has been reported since the 1960s.

More recently, we presented a study that calculated allostatic load scores in over 10,000 people who were part of the National Health and Nutrition Examination Survey between 1999 and 2002. We searched for associations between those allostatic load scores and the likelihood of having reported symptoms of a communicable disease, like the common cold. We found that the higher the score, the more likely an individual was to have reported symptoms of illness.

While causality cannot be completely confirmed in the type of analysis we conducted, our calculations included multiple biological and clinical markers that would not likely have been significantly impacted by short-term illnesses alone. This suggests that the correlation between allostatic load score and disease symptoms was not simply due to the stress of having an illness.

Our results mirror what is generally accepted in the field. There are whole book chapters dedicated to describing the impact of stress on infection risk. All this evidence suggests that stress reduction might lead to a healthier cold and flu season.

A prescription for relaxation

While there are medications that can treat the flu, the latest evidence suggests they are only marginally effective at relieving symptoms and may have no impact on reducing the rate of hospitalizations. And Vitamin C, which is often touted as an over-the-counter cold remedy, has little impact on the incidence of the common cold according to the latest compilation of studies from the Cochrane Collaboration, an independent network of scientific researchers.

So keeping stress at bay might be a better bet for staying healthy. But besides just remembering to take deep breaths, participating in activities to reduce stress during the winter months has been shown to help reduce the burden of respiratory illnesses. This may include making good on that New Year’s resolution to get to the gym. In fact, a recent randomized controlled trial concluded that those who exercised or meditated had fewer severe acute respiratory illnesses than did a control group that did neither.

It may also help to talk to somebody, such as your physician or a psychologist, about techniques to manage stress. In a clinical trial with children between the ages of 8 and 12, those who talked with therapists about relaxation management had improved mood and a decreased frequency of colds. On a cellular level, those in the therapist group had increased levels of secretory immunoglobulin A, one of the molecules responsible for protecting mucosal surfaces, like the lung, from infection. These relaxation techniques are not just for kids: review articles conclude that relaxation techniques are an important therapeutic strategy for stress-related diseases.

Cold and flu season is here, but getting worried about it might only hurt your chances of staying healthy. Instead, consider how stress hurts your immune system, and write yourself a prescription for relaxation.

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.

Read the Original Article at

Veterans’ health care: doctors outside the VA need to know more about the veterans they treat

Each year the military discharges over 240,000 veterans to reintegrate into civilian society. It’s a professional transition, but it’s also a personal one.

Civilian doctors might not know that their patients have served in the military. In this photo Marines march around the World Trade Center memorial after participating in a memorial run in 2012. MarineCorps NewYork/Flickr, CC BY

Each year the military discharges over 240,000 veterans to reintegrate into civilian society. It’s a professional transition, but it’s also a personal one.

Veterans go from TRICARE, the Department of Defense’s own health care system, to navigating the ins and outs of the civilian health care system. Under TRICARE, military service members are cared for in a manner that meets their needs. When they’re discharged, their new health care providers might not know that they were ever in the military.

Asking “Have you served in the military?” may seem like a minor issue, but it’s actually much more important than you might think. And it’s a question that few doctors make a point of asking, even though many medical residents and medical students receive all or part of their clinical training at VA medical centers and hospitals.

In fact, Jeffrey Brown, a professor at Weill Cornell Medical College and a Vietnam veteran, has called it the “unasked question.” When physicians don’t ask, they may miss critical parts of their patient’s medical history, making it harder to provide the best possible care.

Health care providers need to understand how diverse today’s veterans are. In this photo, Vietnam veteran Euretha Shropshire salutes during the national anthem at an interfaith vigil for the victims of the Tennessee shooting.
Tami Chappell/Reuters

Why ‘Have you served?’ is a critical question

Contrary to popular belief, most veterans do not receive care from the Veterans Health Administration (VHA) health care system. While eligibility criteria for VA care have become more flexible for combat veterans in recent years, most veterans don’t go to the VA. Only about 30 percent receive health care from the VA after they are discharged. Some might not be eligible for VA health care or don’t live near a VA health care center. Others might simply prefer to go elsewhere for care.

Civilian health care providers might not be aware of the chemicals, infections and injuries that military personnel can encounter. Veterans may have been exposed to chemical pollutants or solvents (such as jet fuel, nerve agents or radiation) as well as infectious diseases and blood-borne pathogens. During their careers, they may have sustained blast injuries, burns or shrapnel wounds, and they may face reproductive or dermatologic issues related to their service. Some have physical injuries, mental and emotional issues, or any combination of these.

Beyond the familiar visible wounds of war – loss of limb or damage to some other body part – veterans also suffer from invisible wounds such as post-traumatic stress disorder (PTSD), military sexual trauma (MST) and traumatic brain injury (TBI), a common wound of Operation Enduring Freedom/Operation Iraqi Freedom.

Veterans may also have experienced physical and emotional trauma, as well as stress from adjustment back to civilian life. For instance, 17 percent of all veterans seeking care at the VA have depression, and 13 percent of Operation Iraqi Freedom combat veterans screen positive for depression within six months of returning from combat.

So asking “Did you ever serve in the military?” is an important start. If a patient says yes, providers should follow up by asking when and where they served and what they did. This can help health care providers arrive at the cause of symptoms and pinpoint sources of support and barriers to wellness. This kind of background information can help providers identify the best medical approaches and develop an effective care plan for veterans and their families.

Part of making sure that doctors ask their patients this critical question is to teach them who veterans are.

Using photographs to teach doctors about veterans

As medical educators and researchers working at the University of Michigan Medical School who also have experience working with veterans, we know how important it is for health care providers to be aware of their patients’ military service.

While serving.
Paula T Ross, Author provided

So we developed a massive open online course (MOOC) called Lessons in Veteran-Centered Care aimed at teaching health professionals about providing veteran-centered care. We cover military culture, focusing on patients’ positive capabilities and strengths, and military health history, as well as highlighting available patient resources.

Participants learn how to use and apply principles from the course to improve assessment and triage for patients with PTSD, MST, TBI, anxiety and depression – all the conditions that are more prevalent in the military population than in the general civilian one.

…and as a veteran.
Paula T Ross, Author provided

But caring for veterans isn’t just about being able to diagnose PTSD or depression. It’s also about understanding who they are and where they’ve been. To do this we use coursework and key moments from the documentary “Where Soldiers Come From,” which follows five young men as they join the National Guard, undergo military training, deploy to Afghanistan to search for improvised explosive devices and return home as combat veterans dealing with PTSD and TBI.

To help health professionals understand how diverse today’s veterans are, we also use 30 pairs of photographs of people during and after their time in service. The collection includes veterans from World War II, the Korean War, the Vietnam War, Desert Storm/Desert Shield and the more recent conflicts in Iraq and Afghanistan. This exercise alone helps demonstrate the age diversity of today’s veterans, most of whom fall between 35 and 74 years old.

The collection also includes several women, who make up an increasing share of the veteran population – more than two million today, or about nine percent of all veterans.

Although men have dominated the veteran population in the past, a growing number of women have served in recent conflicts. The photographs also illustrate the racial diversity of today’s veterans: about 21 percent are minorities.

These photos help medical students and physicians in our course visualize the trajectory of US service members from soldier to civilian. We challenge learners to think deeply about the experiences, concerns and perspectives of U.S. military veterans, and to reshape previously held unconscious biases, stereotypes or attitudes toward veterans.

Our goal is to use our photographs to improve veteran care by asking health care professionals to take a similar yet important step, and consider military service or exposure to the military culture during their encounters with all patients.

Medical students who’ve taken the course said it inspired them to be more likely to ask patients if they’ve served and to understand the importance of veteran-centered care. Faculty physicians who took the course said it made them reexamine their own biases about veterans.

Ultimately, we challenge assumptions about what veterans look like and help health care professionals recognize that most veterans look just like other patients. Unless you ask the question, you may never find out this valuable piece of information that can lead to improving health outcomes.

Monica L. Lypson, MD MHPE is the Associate Chief of Staff for Education at the Ann Arbor Veteran Healthcare System. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.

Paula Thompson Ross does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.


If you’re going to drink, make it part of your Mediterranean diet

The British government’s new guidelines advise reducing alcohol consumption to 14 units a week for both men and women and bluntly state that, for some cancers of the mouth, throat and breast, “risk increases with any amount you drink”.

It’s not just what you drink, but the way that you drink it. merc67/

The British government’s new guidelines advise reducing alcohol consumption to 14 units a week for both men and women and bluntly state that, for some cancers of the mouth, throat and breast, “risk increases with any amount you drink”. The message is clear: for the good of our health, the government would rather we not drink at all.

So what about the many millions of people of the Mediterranean, whose diet is one of the healthiest in the world and which includes a drink or two as an integral part? The answer may lie not just in the amount of alcohol consumed, as the UK government’s guidelines would have it, but the manner in which it is drunk and what it is drunk with.

There is now good evidence that many foods in the Mediterranean diet including vegetables, pulses, whole grains and olive oil contain protective substances that help counter alcohol’s harmful effects.

For example, a number of studies suggest that even low amounts of alcohol increase the risk of breast cancer. But a recent trial, part of the highly regarded Predimed Study, found that women who ate a Mediterranean diet had a reduced risk of breast cancer, even though almost half were drinking up to two units of alcohol (a 175ml glass of wine) a day.

The extra virgin olive oil in their diet was thought to have played a role. Alcohol increases breast cancer risk by raising oestrogen levels, but extra virgin olive oil contains various anti-oestrogens that block the carcinogenic actions of oestrogens. In another large European study involving 368,000 women, it was convincingly shown that folates – found in large quantities in the green, leafy vegetables and pulses of the Mediterranean diet – also provide a protective action against the effects of alcohol.

Although these are important findings, women with a family history of breast cancer are still advised to avoid drinking.

The link between mouth and throat cancers and low alcohol consumption, which the guidelines declare to hold true “for any amount you drink”, also deserves closer scrutiny. Again, the Mediterranean diet comes up trumps: even when low to moderate alcohol is consumed as part of the diet, the risk of these cancers decreases.

How we drink matters

Food and wine: the ancient Greeks knew what they were doing.
Caravaggio/Uffizi Gallery

It’s well established that combining smoking with drinking dramatically increases the risk of mouth and throat cancers. Some studies, such as the Million Women Study (which really did involve well over a million women), found no increased risk of these cancers for women drinking up to two units a day, so long as they were non-smokers. It’s thought that alcohol acts as a solvent that increases the absorption of carcinogens in cigarette smoke. If most drinking occurs during a meal, this hazard is likely to be reduced.

So it’s clear that the way we drink is very important. Drinking with food is the typical pattern in Mediterranean countries, whereas in the UK binge drinking – in which alcohol is not just drunk excessively but also without food – is far more common. A full stomach slows the rate of alcohol absorption, limiting dangerous spikes in blood alcohol levels that are linked to high blood pressure and strokes. In Mediterranean countries, even alcohol consumed without a meal is usually accompanied by some food: a few olives with an ouzo in Greece, or tapas or a piece of tortilla to accompany a beer in a Spanish bar. What a shame that so few pubs in the UK provide these protective mouthfuls.

A scoring system was developed to capture the Mediterranean way of drinking: moderate alcohol intake spread out over the week, a preference for red wine drunk with meals, little intake of spirits, and an avoidance of binge drinking. Scoring highly on these criteria correlated with significantly reduced mortality.

Of course there are many other benefits to a Mediterranean diet: it is the leading diet for risk reduction of cardiovascular disease, with many studies confirming the cardio-protective effects of moderate drinking, especially as part of a Mediterranean diet, and increasing evidence that links the Mediterranean diet with a decreased risk of dementia. Considering how few other options there are to counter this devastating disease, these are important findings.

Just as eating guidelines now recognise that diet must be considered as a whole, rather than isolating individual foods or nutrients such as sugar or saturated fat, there is good reason to apply the same thinking to weighing up the risks and benefits of drinking alcohol. Heavy drinking increases the risk of various cancers, of this there is no doubt – and even low alcohol consumption may do so with certain diets such as those high in processed foods. But the evidence suggests that one or two glasses of wine, so long as they are accompanied by a tasty Mediterranean meal, won’t hurt you – whatever the government guidelines say.

Richard Hoffman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.


We can avoid weight creep – here’s how

Many of us enter a new year reflecting on where we have been and our plans for the future. For some, this will mean acknowledging that a couple more kilos have crept on over the past year.

Walking briskly for at least 30 minutes on most days of the week is a good start. etorres/Shutterstock

Many of us enter a new year reflecting on where we have been and our plans for the future. For some, this will mean acknowledging that a couple more kilos have crept on over the past year. Others will have health on their hit list for 2016; resolving to eat better and lose weight could be part of that.

It can be difficult to lose weight and keep it off in the long term. So how can we support communities to avoid weight gain over time?

New research published today in PLOS Medicine suggests that simple lifestyle programs can help prevent weight gain. But GPs, communities and individuals also have a role to play.

It’s OK to aim low

Even a small weight loss can have positive health impacts. It has been estimated that a 1% reduction in body mass index (BMI) – equivalent to approximately 1kg for an average adult – across the United States population would avoid 2 million cases of diabetes, 1.5 million cases of cardiovascular disease, and more than 73,000 cases of cancer.
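To see why a 1% BMI reduction works out to roughly 1kg, the arithmetic can be sketched as follows. Note that the height and weight used here are illustrative assumptions for an "average adult", not figures from the study cited above:

```python
# Sketch: why a 1% drop in BMI corresponds to roughly 1 kg for an average adult.
# The height and weight below are illustrative assumptions, not from the article.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

height = 1.70   # assumed average height in metres
weight = 75.0   # assumed average weight in kilograms

current_bmi = bmi(weight, height)            # about 26
target_bmi = current_bmi * 0.99              # a 1% reduction
target_weight = target_bmi * height ** 2     # weight needed to reach that BMI

print(f"Weight loss for a 1% BMI reduction: {weight - target_weight:.2f} kg")
# prints 0.75 kg for these assumed figures -- on the order of 1 kg
```

Because BMI is weight divided by a fixed height squared, a 1% BMI reduction is simply a 1% weight reduction, so the exact figure scales with starting weight.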

It is the norm to be overweight or obese in Australia. Figures released last month showed 63% of adults (71% men, 56% women) and 27% of children were in this category in 2014/15. Further, rates of obesity in women in Australasia are growing faster than anywhere else in the world.

A challenge to the accepted dogma that we will gain weight as we age was put to a recent meeting of the Queensland Clinical Senate, which helps set the agenda for long-term health strategies. The meeting, convened with Health Consumers Queensland, resolved to focus on preventing weight gain in the community rather than on weight loss, particularly given the difficulty of achieving the latter.

Lifestyle programs

In today’s study, the researchers randomly assigned 649 women in 41 rural Australian towns to either the intervention group or the control group.

Women in the intervention group took part in information sessions, received a personalised self-management plan, were sent monthly text message reminders and undertook a 20-minute phone-based coaching session.

Women in the control group attended a general session on women’s health.

Over 12 months, the women in the towns that received the targeted intervention program lost almost half a kilogram, while those in the control towns gained almost half a kilogram.

This shows that programs built on community integration, small changes in behaviour, self-management and minimal participant burden, delivered through a mix of personal and electronic channels, can be feasible, cheap and effective.

Role of GPs

General practitioners and nurse practitioners play an important role in providing advice and strategies on healthy and active lifestyles to prevent and manage obesity.

However, a Monash University study, published in The Medical Journal of Australia, found GPs recorded the weight of only 25.8% of a sample of 270,426 patients. Barriers to recording this information include difficulty in broaching the discussion and a perceived lack of available training.

It is important for governments to support all health-care providers to be able to raise the issue of weight control – not just with those who are overweight or obese, but also to encourage those who are a healthy weight to remain in that category.

Guidelines for health professionals already exist. However, better integration with community programs (particularly those that offer social benefits), referral to tailored services and alignment with mass media campaigns are likely to add enormous value at relatively low cost.

There is no single strategy that will address excess weight and obesity in our community. But health professionals are important influencers. Empowering this group with effective, low-intensity strategies and programs is one element of a comprehensive approach to address poor diets and weight issues.

Community response

Another key element is to support communities to create healthy environments, to make the healthy choice the easy choice. Schools, workplaces, sports and community centres are all environments that should support healthy eating and active lifestyles.

If communities are funded and empowered, such as through the OPAL (Obesity prevention and lifestyle) program in South Australia and Healthy Together Victoria, they can link into statewide programs but also develop local solutions to solve the unique issues that exist in their catchment.

Recently we saw funding removed from the National Partnership Agreement on Preventive Health, which provided valuable investment for the implementation of policies and programs to support healthy lifestyles. Funding to support community-based initiatives so local populations can engage with this issue is critically important, along with the implementation of policies such as reducing junk food marketing to children, mandatory health star labels and taxing sugary drinks.

Individual action

In the meantime, how can individuals who regularly pledge to get fit and lose weight make sustainable and significant healthy changes, as the women in today’s rural Australian study have done?

Aiming to avoid weight gain is a good starting point, followed by small lifestyle changes, such as:

  • reducing serving sizes
  • aiming for two serves of fruit and five serves of vegetables a day
  • reducing sugary drinks
  • walking briskly for at least 30 minutes on most days of the week.

These changes can make a big difference to your risk of weight gain and developing serious health problems in the future.

Jane Martin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.


Eating well – it’s more than just what you eat

As the new year rolls on and people consider the resolutions they have already broken, we’re being flooded with advice on what to eat.

The way we eat now

As the new year rolls on and people consider the resolutions they have already broken, we’re being flooded with advice on what to eat. The US has released its revised dietary guidelines, Public Health England has launched its new sugar app and there are endless new books, TV shows, magazine articles and blogs advising us on how to lose weight, stay healthy, avoid disease and live longer. Although the health experts’ views on how to achieve these goals may differ, they have one thing in common: they focus only on what to eat. But eating well is about much more than what you eat; it’s about when, where, why and how you eat as well.

When to eat

We live in a culture where being busy is valued. We’re too busy for breakfast, too busy for lunch and too busy for a proper meal in the evening. And so the traditional three-meals-a-day structure of our lives is disappearing, and people are getting fatter and fatter as more snacks are consumed than ever before. But if we have specified meal times, we will eat those meals and nothing else in between, because we will remember: “I’ve had that meal.”

Where to eat

Not only are meal times disappearing, designated meal places are also on the way out. And so people eat in the car, at their desks, walking down the street or on the sofa in front of the TV. Yet much research shows that eating on the go or eating when distracted can make people eat more as they aren’t focusing on how much food they’re consuming. It can also make people eat more later on as they “forget” that they’ve eaten. But if you have a designated café, table or common room then the meal becomes an event; the food is the focus; the meal box can be ticked as “done” and you become not only more full there and then, as you’re thinking about eating, but you also remain full in the gap until the next meal as you know that the meal has taken place.

Why to eat

If you ask people why they eat they tend to say “I’m hungry” or “I enjoy eating”. But for the majority of people food is far more complicated than that. Eating is about regulating emotions. We eat when we’re fed up, bored or in need of a treat.

It’s also about social interaction. So we eat more at a birthday dinner or festive celebration than during a simple night in, and it’s about communicating who we are to the rest of the world.

Imagine a first date – what would you cook? A roast dinner might be too maternal, beans on toast too student-like and oysters too desperate. Food can talk and it’s used to show the world the kind of person you are. But as a result people lose track of hunger and food fills many more roles in their lives than just preventing hunger.

We need to rediscover the feeling of hunger; learn that it feels nice to be hungry before a meal and that this hunger goes away once we have eaten. We also need to learn other ways to manage our emotions and other ways to socialise that don’t revolve around food. And this is helped by planning not only what to eat but also when and where to eat. And it’s also helped by planning how to eat.

We use food to show the world what kind of person we are

How to eat

Fullness is a perception, like pain or tiredness. So, in the same way that a headache hurts less if we drag ourselves off the sofa and into work to be distracted by our colleagues, we feel less full if we’re distracted when we eat. And so we eat more because we haven’t properly processed that we are eating. But if we eat at a designated time in the day called “a mealtime”, at a designated place called a “meal place” and tell ourselves “this is a meal” then this mindful approach to eating can make us feel fuller after meals and this fullness can sustain us until the next meal.

Dietitians, nutritionists and celebrity chefs are right to focus on what to eat. But eating well is also about when, where, why and how food is consumed. And if we can eat well then we can feel full again and food can be put back in its rightful place so that we can start to eat to live, rather than live to eat.


Mental health care for prisoners could prevent rearrest, but prisons aren’t designed for rehabilitation

Mental health conditions are more common among prisoners than in the general population. Estimates suggest that as many as 26 percent of state and federal prisoners suffer from at least one mental illness, compared with nine percent or less in the general population.

Mental health conditions are more common among prisoners than in the general population. Estimates suggest that as many as 26 percent of state and federal prisoners suffer from at least one mental illness, compared with nine percent or less in the general population. And prisoners with untreated mental illness are more likely to be arrested again after they are released.

But prisoners’ access to health care, including mental health care, varies from prison to prison. This is partly because funding varies annually due to budget restrictions and changing policies requiring use of funds for other purposes. And public support for rehabilitation is constantly fluctuating. As you can imagine, many people consider mental health treatment among prisoners to be a low funding priority compared to other federal programs, such as college student financial aid.

As a researcher in the emerging field of correctional health, I have spent many hours with inmates and the physicians who treat them. With mental illness so prevalent in U.S. prisoners, the ability to access quality mental health care is critical. It can help inmates regain control over their lives, and may lead to better individual and public safety outcomes upon release from prison.

But even though mental illness is consistently associated with criminal behavior, these conditions are largely undertreated in our prison system. Prisons were designed to incapacitate inmates, not to rehabilitate them. They are underfunded, and they provide poor working conditions for health care providers and environments that can exacerbate (or perhaps even lead to) mental illnesses.

Health care is a right for prisoners

In the 1970s, the Supreme Court supported the rights of prisoners to receive physical health and mental health care. In fact, this right is now law, and denial of care would be considered “cruel and unusual punishment,” which is prohibited under the Eighth Amendment.

This law came about because prisoners were contracting contagious and communicable diseases from one another. Infectious disease screenings are now commonplace in prisons. While prisoners have access to basic health care, treatment for mental health conditions is less broadly provided. The quality of treatment that is available in the penal system, including counseling and medication for chronic mental illnesses, remains poor.

Unfortunately, the screening and treatment procedures that should constitute minimal provision of “mental health care” are not clear and tremendous variation exists from one prison to another.

How big a difference can good mental health care make?

Imagine that you are a prisoner housed in a relatively well-funded state-run facility. You have a mental illness, and have regular counseling sessions and receive antipsychotic medications that help you function in your day-to-day routine. When you are released, you will likely receive comprehensive discharge plans and direct links to services in the community to make sure you can continue therapy and get access to medication. Your ability to control your condition might lead to better employment prospects, not to mention less involvement in criminal behavior. As a result, you aren’t rearrested.

But, if you are transferred to a poorly funded institution, you may be immediately taken off your medication and receive very limited counseling or none at all for your condition.

Transfers from one institution to another are common and may explain why there is such inconsistency in treatment nationwide. According to a national survey of department of corrections staff across 48 states, medical treatment was identified anecdotally as a reason for transfer, but no percentages were reported to shed light on how many prison transfers occur for medical or psychological reasons.

This may help explain why prisoners with mental health conditions return to prison 50 to 230 percent more frequently than those without mental health conditions. Given the high cost of the average prison stay (US$31,286 per person per year), an ounce of mental health treatment may result in pounds of cost savings.

For physicians in prisons, low morale and high turnover

As you can imagine, recruiting quality physicians to work in prisons can be challenging given the work environment. Although prison physicians are relatively well-paid, they are exposed to infectious diseases like tuberculosis or influenza more than the general population is. Threats or fear of physical violence are ever-present in the prison setting. This is not to say that the doctors employed by prisons are not highly qualified – they are. However, in my anecdotal experience, there is high turnover and low morale. And many prisons employ only one primary care doctor who is responsible for treating all inmates’ physical and mental health conditions – a challenge in a facility that houses hundreds or thousands.

The absence of mental health care sets prisoners up for failure when they reenter their communities and social circles. They may leave prison unequipped to handle their mental health condition and continue through the “revolving door” of incarceration for much of their life. This costly cycle is difficult to stop, as is exceedingly clear from decades of research in criminal justice. To make mental health care in state and federal prisons a national priority, a transformation in how we view the role of prisons is needed.

Given the investment that taxpayers make in the criminal justice system, it is reasonable for the public to expect a return on their investment in the form of lower repeat criminal activity. One step in this direction would be using time spent in prison to address as many risk factors for crime as possible, including mental health conditions.

Jennifer Reingle Gonzalez does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Five things that happen to your body in space

Tim Peake is the first official British astronaut to walk in space. The former Army Air Corps officer has spent a month in space, after blasting off on a Russian Soyuz rocket to the International Space Station on December 15 last year, but the spacewalk will doubtless be his most gruelling test.

Nice night for a stroll: Scott Kelly working outside the International Space Station in 2015. NASA

Tim Peake prepares his Extravehicular Mobility Unit spacesuit.

But what exactly will he be going through, during his remarkable spell aboard the space station? Space travel leads to many changes in the human body, and these have been investigated since Yuri Gagarin made the first manned spaceflight in 1961 – an extensive team provides guidance and preparation for astronauts before, during and after any spaceflight. But if you’re planning an out-of-this-world trip, here are some of the things to expect.

1) You get weaker

The skeletal muscle system is the largest organ system of the human body. Hundreds of muscles are used for maintaining posture – sitting, standing – and performing a wide range of movements, with different loading conditions imposed by the forces of gravity on Earth.

Skeletal muscles have the ability to adapt to different purposes and the different loads placed on them, a quality known as plasticity. But like inactivity, space flight leads to loss of both skeletal muscle mass (atrophy) and strength.

During long spaceflights on the ISS, research found that 37 crew members experienced a decrease in mean isokinetic strength of between 8% and 17%. Men and women were similarly affected. In fact, this degradation occurs even when astronauts follow a strict exercise regime, meaning that it has profound implications for humans embarking on even longer journeys, such as to Mars. Data suggests that around 30% of muscle strength is lost after spending 110 to 237 days in microgravity.

2) So does your heart

Many parts of the cardiovascular system (including the heart) are influenced by gravity. On Earth, for example, the veins in our legs work against gravity to get blood back to the heart. Without gravity, however, the heart and blood vessels change – and the longer the flight, the more severe the changes.

NASA astronauts Scott Kelly and Tim Kopra prepare for a spacewalk, December 2015.

The size and shape of the heart, for example, change with microgravity, and the right and left ventricles decrease in mass. This may be because of a decrease in fluid volume (blood) and changes in myocardial mass. The human heart rate (beats per minute) is lower in space than on Earth, too. In fact, it has been found that the heart rate of individuals standing upright on the ISS is similar to their rate while lying down pre-flight on Earth. Blood pressure is also lower in space than on Earth.

The cardiac output of the heart – the amount of blood pumped out of the heart each minute – decreases in space, too. Without gravity, there is also a redistribution of the blood – more blood stays in the legs and less blood is returned to the heart, which leads to less blood being pumped out of the heart. Muscle atrophy also contributes to reduced blood flow to the lower limbs.

This reduced blood flow to the muscles, combined with the loss of muscle mass, impacts aerobic capacity (see below).

3) Fitness suffers

Aerobic capacity is a measure of aerobic fitness – the maximum amount of oxygen that the body can use during exercise. This can be measured by VO2max and VO2peak tests. Changes to both the muscles and cardiovascular system caused by spaceflight contribute to reduced aerobic fitness.

Feeling good: Tim Peake aboard the ISS.

After nine to 14 days of spaceflight, for example, research shows that aerobic capacity (VO2peak) is reduced by 20%-25%. But the trends are interesting. During longer spells in space – say, five to six months – after the initial reduction in aerobic capacity, the body appears to compensate and the numbers begin improving – although they never return to pre-trip levels.

4) You lose bone

On Earth, the effects of gravity and mechanical loading are needed to maintain our bones. In space, this doesn’t happen. Bone normally undergoes continual remodelling and two types of cells are involved: osteoblasts (these make and regulate the bone matrix) and osteoclasts (these absorb bone matrix). During spaceflight, however, the balance of these two processes is altered which leads to reduced bone mineral density. Research shows that a 3.5% loss of bone occurs after 16 to 28 weeks of spaceflight, 97% of which is in weight-bearing bones, such as the pelvis and legs.

ISS: a healthy home?

5) Your immune system suffers

The immune system, which protects the body against disease, is also affected. A number of variables contribute to this, including radiation, microgravity, stress, isolation and alterations in the circadian rhythm, the 24-hour cycle of sleep and wakefulness that we follow on Earth. Also, while in space, astronauts interact with microbes from themselves, other crew members, their food and their environment, and these can alter their immune response, which may lead to challenging situations and increase the potential for infections among the crew as well as contamination of extraterrestrial sites.

Naomi Brooks does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

If we want medicine to be evidence-based, what should we think when the evidence doesn’t agree?

To understand if a new treatment for an illness is really better than older treatments, doctors and researchers look to the best available evidence.

Weighing the evidence. Maggie Villiger, CC BY-ND

To understand if a new treatment for an illness is really better than older treatments, doctors and researchers look to the best available evidence. Health professionals want a “last word” in evidence to settle questions about what the best modes of treatment are.

But not all medical evidence is created equal. And there is a clear hierarchy of evidence: expert opinion and case reports about individual events are at the lowest tier, and well-conducted randomized controlled trials are near the top. At the very top of this hierarchy are meta-analyses – studies that combine the results from multiple studies that asked the same question. And at the very, very top of this hierarchy are meta-analyses performed by a group called the Cochrane Collaboration.

To be a member of the Cochrane Collaboration, individual researchers or research groups are required to adhere to very strict guidelines about how meta-analyses are to be reported and conducted. That’s why Cochrane reviews are generally considered to be the best meta-analyses.

However, no one has ever asked if the results in meta-analyses performed by the Cochrane Collaboration are different from meta-analyses from other sources. In theory, if you compared a Cochrane and non-Cochrane meta-analysis, both published within a similar time frame, you’d tend to expect that they’d have chosen the same studies to analyze, and that their results and interpretation would more or less match up.

Our team at Boston University’s School of Public Health decided to find out. And surprisingly, that’s not what we found.

What is a meta-analysis, anyway?

Imagine you have five small clinical trials that all found a generally positive benefit for, let’s say, taking aspirin to prevent heart attacks. But because each of the studies only had a small number of study subjects, none could confidently state that the beneficial effects weren’t simply due to chance. In statistical-speak, such studies would be deemed “underpowered.”

There is a good way to increase the statistical power of those studies: combine those five smaller studies into one. That’s what a meta-analysis does. Combining several smaller studies into one analysis and taking the average of those studies can sometimes tip the scales, and let the medical community know with confidence whether a given intervention works, or not.
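
As a sketch of how that averaging works: in a standard fixed-effect (inverse-variance) meta-analysis, each study's effect estimate is weighted by the inverse of its variance, so larger, more precise studies count for more, and the pooled estimate ends up more precise than any single study. The trial numbers below are invented purely for illustration:

```python
import math

# Fixed-effect (inverse-variance) meta-analysis: weight each study's
# effect estimate by 1 / (standard error)^2, then take the weighted mean.
def pooled_effect(effects, std_errors):
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))  # precision of the pooled estimate
    return pooled, pooled_se

# Five hypothetical small aspirin trials (log odds ratios and standard errors;
# negative values favor the treatment). These numbers are invented.
effects = [-0.30, -0.15, -0.45, -0.20, -0.25]
std_errors = [0.40, 0.35, 0.50, 0.30, 0.45]

est, se = pooled_effect(effects, std_errors)
print(f"pooled effect: {est:.3f}, SE: {se:.3f}")
```

The pooled standard error (about 0.17 here) is smaller than that of any individual trial (0.30 at best), which is exactly the gain in statistical power that can turn five "underpowered" studies into a confident answer.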

Taking the average.

Meta-analyses are efficient and cheap because they don’t require running new trials. Rather, it’s a matter of finding all of the relevant studies that have already been published, and this can be surprisingly difficult. Researchers have to be persistent and methodical in their search. Finding studies and deciding whether they are good enough to trust is where the art – and error – of this science becomes a critical issue.

That’s actually a major reason why the Cochrane Collaboration was founded. Archie Cochrane, a health services researcher, recognized the power of meta-analyses, but also the tremendous importance of doing them right. The Cochrane Collaboration meta-analyses must adhere to very high standards of transparency and methodological rigor and reproducibility.

Unfortunately, few can commit the time and effort to join the Cochrane Collaboration, and that means that the vast majority of meta-analyses are not conducted by the Collaboration, and are not bound to adhere to their standards. But does this actually matter?

Not quite the same.

How different can two meta-analyses be?

To find out, we started by identifying 40 pairs of meta-analyses, one from Cochrane and one not, that covered the same intervention (e.g., aspirin) and outcome (e.g., heart attacks), and then compared and contrasted them.

First, we found that almost 40 percent of the Cochrane and non-Cochrane meta-analyses disagreed in their bottom-line statistical answers. That means that typical readers, doctors or health policymakers, for instance, would come up with a fundamentally different interpretation of whether the intervention was effective or not, depending on which meta-analyses they happened to read.

Second, these differences appeared to be systematic. The non-Cochrane reviews, on average, tended to suggest that the interventions they were testing were more potent, more likely to cure the condition or avert some medical complication than the Cochrane reviews suggested. At the same time, the non-Cochrane reviews were less statistically precise, meaning there was a higher chance that their findings were merely due to chance.

A meta-analysis is nothing more than just a fancy weighted average of its component studies. We were surprised to find that approximately 63 percent of the included studies were unique to one or the other set of meta-analyses. In other words, despite the fact that the two sets of meta-analyses would presumably look for the same papers, using similar search criteria, over a similar period of time and from similar databases, only about a third of the papers the two sets had included were the same.

It seems likely that most or all of these differences come down to the fact that Cochrane insists on tougher criteria. A meta-analysis is only as good as the studies it includes, and taking the average of poor research can lead to a poor result. As the saying goes, “garbage in, garbage out.”

Interestingly, the analyses that reported much higher effect sizes tended to get cited again in other papers at a much higher rate than the analyses reporting the lower effect size. This is a statistical embodiment of the old journalistic saying “If it bleeds, it leads.” Big and bold effects get more attention than results showing marginal or equivocal outcomes. The medical community is, after all, just human.

Why does this matter?

At its most basic level, this shows that Archie Cochrane was absolutely correct. Methodological consistency and rigor and transparency are essential. Without that, there’s a risk of concluding that something works when it doesn’t, or even just overhyping benefits.

But at a higher level this shows us, yet again, how very difficult it is to generate a unified interpretation of the medical literature. Meta-analyses are often used as the final word on a given subject, as the arbiters of ambiguity.

Clearly that role is challenged by the fact that two meta-analyses, ostensibly on the same topic, can reach different conclusions. If we view the meta-analysis as the “gold standard” in our current era of “evidence-based medicine,” how is the average doctor or policymaker or even patient to react when two gold standards contradict each other? Caveat emptor.

Christopher J. Gill does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

‘Like a piranha’: how midwives’ descriptions of breastfeeding affect women’s attitudes

The World Health Organisation (WHO) promotes exclusive breastfeeding as the optimal way to feed infants. Most Australian babies – 96% –start out breastfeeding.

If health professionals’ interpretations of a baby’s behaviours are negative, a woman may question whether breastfeeding is meeting her baby’s needs. Justin McGregor/Flickr, CC BY-NC-ND

The World Health Organisation (WHO) promotes exclusive breastfeeding as the optimal way to feed infants. Most Australian babies – 96% – start out breastfeeding. But this figure drops to 61% exclusive breastfeeding at one month, 39% at three months and a very low 15% at five months.

The reasons women stop breastfeeding are widespread. They include pain and discomfort during early establishment, lack of support, fear the baby is not getting enough milk, plans to return to work, and worry about the baby’s enjoyment or fulfilment.

A woman’s confidence with breastfeeding can be impacted by her baby’s behaviour and the perceived quality and quantity of milk. Mothers often look to health professionals in the first few days after birth for help in making these assessments.

But a study my colleagues and I conducted in New South Wales found that the sometimes negative language that health professionals use, when describing normal behaviour while feeding, is far from helpful.

If health professionals’ interpretations of a baby’s behaviours are negative, a woman may question whether breastfeeding is meeting her baby’s needs. The language used to describe the baby matters. Women who are not enjoying breastfeeding, or think their baby is not enjoying breastfeeding, are more likely to wean early.

Blaming the baby

Published in the journal Maternal and Child Nutrition, our research observed the breastfeeding interactions between 77 women and 36 midwives or lactation consultants at two New South Wales hospitals in the first week after the women gave birth. We also interviewed some of the midwives and the women separately.

At times health professionals attempted to shift blame for breastfeeding difficulties away from the mother. But in so doing they inadvertently placed blame onto the baby.

Midwives used terms such as “impatient” and “lazy” to describe the infant. Babies were deemed impatient, for example, if they were crying at the breast and not sucking. This was attributed to inheriting an “impatient personality”, demonstrated when the milk was not flowing fast enough for them at their first sucking efforts.

Some babies were considered “lazy” if they were not sucking for long enough or not acquiring sufficient breastmilk at each breastfeed.

In the first week after birth, health professionals took on the role of “infant interpreter” and offered what they thought the baby was “thinking”. The implication was that newborn babies had the capacity to think, make decisions and choose whether to cooperate with breastfeeding or not.

There was a definite impression that the baby had a “job” to do during breastfeeding. In this setting, a baby who “cooperated” with breastfeeding, and performed their “job” properly, was labelled “good”, “clever” and “smart”. Yet if the staff member felt the baby had “decided” not to “cooperate”, they used negative language.

Babies who were unsettled and “uncooperative” were described as being “cross”, “cranky” and “angry” during breastfeeding because the milk was not flowing fast enough for them. Babies were described as “complaining”, having “temper tantrums”, getting themselves into a “tizz” or using their mother as a “dummy”.

These kinds of repeated negative references to personality and unfavourable interpretations of baby behaviour ultimately influenced how some women perceived their babies.

The following quote demonstrates how the words health professionals use can become embedded in a woman’s own language. While this woman was in hospital, she told the midwife that she had sore nipples. The midwife replied:

Your nipples are a bit tender because you’re not used to having this little piranha hanging off them every five minutes.

Six weeks later, I interviewed the same woman at home and asked her to describe her early breastfeeding experience. She replied:

With the latching on and that, she’s a bit like a piranha. She grabs straight on…

Comparing the newborn baby to a harmful creature with a voracious appetite could have significant implications for the mother-baby breastfeeding relationship.

Mother and baby are both learning

We found that more positive language and interpretations of baby behaviour during breastfeeding emerged when health professionals viewed the mother and baby as two participants in a reciprocal relationship.

In these interactions, the baby was seen as an instinctual being who was learning how to breastfeed, and so was the mother.

The language that emerged normalised baby behaviours and reflected more positive interpretations. It also facilitated the mother “tuning in” to the needs of her baby.

At times when women themselves used negative language to describe their babies, the midwives focused on the relationship and encouraged a different interpretation. In one example, a mother interpreted her baby as “a stubborn little bugger” who “doesn’t make decisions real quick”.

The midwife shifted the focus to a more positive reading of the baby: “he just may not be quite ready yet” and “just do some skin-to-skin with him”.

When it comes to supporting women to breastfeed, language is very important. It can positively, or negatively, influence the developing relationship between mother and baby. Language should aim to enhance, rather than undermine, the mother-baby relationship and should facilitate the mother “tuning in” to her baby by identifying normal newborn behaviours.

Elaine Burns received funding for this project from an Australian Research Council Linkage Grant.

If being too clean makes us sick, why isn’t getting dirty the solution?

Today rates of allergic, autoimmune and other inflammatory diseases are rising dramatically in Western societies. If that weren’t bad enough, we are beginning to understand that many psychiatric disorders, including depression, migraine headaches and anxiety disorders, are associated with inflammation.

Wash up. Riccardo Meneghini/Flickr, CC BY-NC-ND

Today rates of allergic, autoimmune and other inflammatory diseases are rising dramatically in Western societies. If that weren’t bad enough, we are beginning to understand that many psychiatric disorders, including depression, migraine headaches and anxiety disorders, are associated with inflammation. Perhaps the most startling observation is that our children are afflicted with the same inflammatory problems, contributing to the fact that over 40 percent of US children are on medications for some chronic condition.

And the cause, according to the “hygiene hypothesis,” is that being too clean causes a malformation of the immune system, leading to a wide range of inflammatory diseases. The original idea was that decreased infections in childhood due to hygiene led to a weak immune system, prone to become allergic and inflamed.

If the problem is that we are too clean, then, hypothetically, the issue can be easily resolved. We just need to get dirty, right? Wrong.

Getting dirty doesn’t help our immune system and generally makes inflammation worse. Much worse. That means there is something very wrong with the hygiene hypothesis.

Biodiversity is the real issue

What we actually have is a biodiversity problem. Our clean, indoor-centered lives and a Western diet rich in processed foods have depleted our biomes – the bacteria and worms that naturally live in our bodies, our guts in particular. These organisms play a role in the development and regulation of our immune systems, and scientists have identified the loss of biodiversity as being central to the high rates of inflammatory disease in the developed world.

Giving up soap won’t help your biome.

The hygiene hypothesis was right…in its day

An increase in inflammatory disorders, like allergies, was first observed about 150 years ago among the aristocracy in Europe, then reached the entire population of the industrialized world by the 1960s, and seems only to have climbed steadily since then.

When trying to understand why inflammatory diseases increased in the late 1800s and throughout the 20th century, scientists put their finger on things such as toilets and water treatment facilities. In those days, having a toilet was “hygiene.”

But times change. After generations of living with toilets and water treatment facilities, some of the wildlife in our bodies has been driven to the point of extinction. Our loss of contact with the soil due to indoor working environments has further depleted the wildlife of our bodies. And the typical Western diet doesn’t help either.

Even if you were to never use soap again for the rest of your life, you would not recover the wildlife your body is missing. Many of the lost organisms of our body don’t exist in North America in the wild, and others you simply won’t come across in your daily life.

On top of tremendous social difficulties imposed by a lack of soap, you’d likely increase your exposure to a lot of aggravating and even dangerous germs. The bacteria and viruses deposited on your shopping cart handle or the light switch at a hotel are generally not good. Those are often the germs of modern society that cause infection and inflammation. Your immune system would remain inflamed, and perhaps be even more agitated than before.

So what exactly are we missing? For practical purposes, it’s important to divide the wildlife of our bodies into two groups: microbes and more complex organisms such as worms. Microbes and worms affect our immune systems in different ways and both are important to be healthy. Biodiversity is the key.

A healthy crop of microbes and a few good worms

What would the gut biomes of our hunter-gatherer ancestors have looked like? A study by Jeffrey Gordon at Washington University in St. Louis showed that people living in modern preindustrial societies had more diverse microbiome compositions than people living in the United States today. Seventy bacterial species Gordon found in preindustrial people’s biomes were present in very different amounts from those found in the modern U.S. participants.

While each group may have been exposed to different kinds of bacteria in their day-to-day life, the primary reason for the difference in diversity was attributed to diet. The preindustrial folks ate a diet rich in corn and cassava, compared to a US diet rich in animal fat and protein.

And you might think that antibiotics are an issue, but they are usually less of a long-term problem for biodiversity. They can deplete bacteria in the gut microbiome, but the dangerous, disease-inducing tailspin is generally temporary. The microbiome recovers for the most part, although some lingering effects can remain.

The second group of organisms that we need are intestinal worms called helminths. These worms are called mutualists, because they benefit from us and we benefit from having them hanging around in our intestines. They used to naturally live in our gut. In fact, only 150 years ago most people in the West had intestinal worms that helped regulate immune function and prevent inflammatory disease. The culprit here isn’t diet, but cleanliness and sanitation.

Eat some fiber.
Ali Karimian/Flickr, CC BY-SA

If getting dirty won’t help your biome, what can you do?

When it comes to bacteria, a healthy diet is the critical ingredient. We can actually achieve a good mixture of gut bacteria very similar to that of our hunter-gatherer ancestors by adopting a good diet high in fiber and low in processed foods. The right diet helps the good bacteria in your gut flourish, and might make it easier for new varieties of good bacteria to take root.

In addition, there are some products that might, in theory, support a more hunter-gatherer-like bacterial flora, by exposing us to the kind of bacteria we don’t encounter anymore, but they haven’t been tested in clinical trials.

Probiotics, generally formulations of bacteria such as bifidobacteria and lactobacilli that grow readily in milk, are safe to use unless patients are severely ill. They could help support biodiversity in our guts if we need to take antibiotics.

Worms are a bit more challenging. There are two schools of thought on how to help helminth-less guts: one is to figure out what makes good worms good for us, and develop a drug that can do the same thing. The other is just to have these good worms living in your intestines.

Personally, I don’t think we can replicate complex biological relationships using a drug. My view is that modern medicine will eventually embrace the actual worm or maybe complex single-celled organisms called protozoans that work the same way, but research in this field is still in the early stages of development.

In the meantime, some intrepid people are going straight for the worm. As in actually acquiring worms in their gut. The challenge for these adventurers is to find a worm that has more benefits than disadvantages.

For instance, the same species of worm can have different effects in different people. The human hookworm is commercially available and easily cultured at home. It has been found to treat multiple sclerosis and severe airway hypersensitivity, but can also cause severe gastrointestinal distress in many patients.

For now, most individuals interested in immune health will focus on those factors that are risk-free, like avoiding chronic psychological stress, eating well and exercising, and watching out for vitamin D deficiency. These factors, all within our control, are important for avoiding a wide range of inflammation-related diseases, including allergy, autoimmunity, depression and cancer.

William Parker does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Four common myths about exercise and weight loss

It’s that time of year when many are trying, and some are failing, to live up to their New Year’s resolution of losing weight.

Exercise isn’t the best way to lose weight, in fact it’s one of the hardest. Nottingham Trent University/Flickr, CC BY-SA

It’s that time of year when many are trying, and some are failing, to live up to their New Year’s resolution of losing weight. Many of these resolutions probably include being more physically active in striving for this goal. But first, there are some common misconceptions about exercise and weight loss that need to be addressed.

Myth 1. Exercise is the best way to lose weight

While there is plenty of evidence showing people can lose weight just by being physically active, it is also one of the hardest ways to go about it.

Our energy balance is mostly determined by what we eat and by our metabolic rate (the energy you burn when you do nothing); how active we are accounts for only a small part of it. That means losing weight just by being active is very hard work.

The American College of Sports Medicine recommends accumulating 250 to 300 minutes of moderate intensity exercise per week for weight loss. That is twice the amount of physical activity recommended for good health (30 minutes on most days), and most Australians don’t even manage that.

The best way to lose weight is through combining a nutritious, low-calorie diet with regular physical activity.

Just exercising is an extremely difficult way to shed kilos.
Nina Hale/Flickr, CC BY

Myth 2. You can’t be fat and fit

Inactive people of a healthy weight may look healthy, but this isn’t necessarily the case. When you’re not active you have a higher risk of heart disease, diabetes, high blood pressure, osteoporosis, some cancers, depression and anxiety. Several studies have demonstrated that the association between premature death and being overweight or obese disappears when fitness is taken into account (although another study disputed this).

This means you can still be metabolically healthy while being overweight, but only if you’re regularly active. Of course, people who are fit and of normal weight have the best health outcomes, so there are still plenty of reasons to try to shed some weight.

Myth 3. No pain, no gain

Or in other words, “no suffering, no weight loss”. As mentioned earlier, if you want to lose weight by being active, you will need to do a lot of it. But while physical activity of a moderate intensity is recommended, guidelines do not say activity needs to be of vigorous intensity.

Moderate intensity physical activity makes you breathe harder and may make it more difficult to talk, but you should still be able to carry on a conversation (such as brisk walking, riding a bicycle at a moderate pace). This is unlike vigorous physical activity, which will make you completely out of breath and will make you sweat profusely regardless of the weather conditions (such as running).

Moderate intensity physical activity is not painful and does not require excessive suffering to meet your goals. A study comparing groups doing higher-intensity, lower-volume activity with groups doing lower-intensity, higher-volume activity found no significant differences in weight loss.

Myth 4. Only resistance training will help you lose weight

Resistance or strength training is good for you for several reasons. It increases functional capacity (the ability to perform tasks safely and independently) and lean body mass, and prevents falls and osteoporosis. But the main idea for promoting it to lose weight is that muscle mass needs more energy than fat mass, even when at rest. Therefore the more muscular you are, the higher your metabolic rate, which makes it easier to expend the energy you’re taking on board.

However, building muscle mass takes serious effort, and you need to keep doing resistance training or you will lose significant muscle mass within weeks.

Not everyone enjoys weight lifting, so do what you prefer.
Sherri Abendroth/Flickr, CC BY

More importantly though, aerobic or endurance training is also good to help you lose weight. In fact, a recent study demonstrated that endurance training was more effective in producing weight loss compared to resistance training. It’s also likely many people will get more enjoyment out of a brisk walk than a session of weight-lifting, so the most important thing is to pick an exercise routine you enjoy and thus will actually stick to.

To help you get started on your journey to a more active and potentially leaner lifestyle, you can sign up for free physical activity programs. If you want to take part in our web-based physical activity research study, you can register your interest here.

Corneel Vandelanotte receives funding from Queensland Health (for maintaining the 10,000 Steps Australia program), the National Health and Medical Research Council (project funding) and the National Heart Foundation of Australia (salary support).

Read the Original Article at

Explainer: Why can’t anyone tell me how much this surgery will cost?

Thanks to rising annual deductibles and a push toward consumer-driven health care, people are increasingly encouraged to shop around for medical care.

Why is it so hard to figure out what medical care costs? Bill image via

Thanks to rising annual deductibles and a push toward consumer-driven health care, people are increasingly encouraged to shop around for medical care. Many states or state hospital associations have price transparency initiatives, and there are a number of private companies that also purport to help consumers find value for their health care dollar.

But the search for the best price is often stymied, not necessarily by a lack of information, but by a lack of relevant information.

Price in health care is a squishy concept. Different words relating to cost – charge, price and out-of-pocket cost – all have different meanings and there is no standard among consumer transparency websites about which of these prices to report. So, while the price variation between hospitals is well-recognized, less often discussed is that when consumers search for price, the variation in information reported means they may see wide variation within the same hospital for the same procedure. The lack of standards in this respect can leave consumers confused and means some price transparency efforts may be doing more harm than good.

Sometimes, searching for hospital prices adds to confusion about what a procedure will cost.
Surgery image via

Searching for a price

As an example of how confusing things can get, in mid-December 2015 I searched for the price of spinal fusion surgery, a common procedure, at a hospital near my Michigan home, the Henry Ford Health System.

My first stop was the website run by the Michigan Health & Hospital Association, the trade association representing hospitals in the state. There, I found out that the average charge at Henry Ford was about US$71,000. Then I looked for other sources of price information for consumers. The first result that came up in a Google search for “compare hospital prices” was a site called OpsCost. That site showed me a billed price of about $67,000 at Henry Ford and also told me Medicare reimbursed about $33,000 for the procedure. I looked for something on the site that would explain why there was a difference between these numbers, and how they relate to other insurers, but couldn’t find it.

Then, I tried Healthcare Bluebook, which allowed me to narrow my search to a zip code but not a specific hospital. That website said that the “fair price” for my spinal fusion procedure in the zip code where Henry Ford is located would be about $39,000. I tried another, Fair Health, which also let me search just by zip code. That website said my procedure cost $9,350.

It’s easy to see how a well-intentioned consumer would get frustrated.

Why is there so much variation?

None of the prices in the above examples are wrong, per se. They just give the cost of different things. And, most importantly, none of them likely reflect the cost that someone with insurance would pay for the procedure.

The first two examples, from the hospital association and OpsCost, show the billed, or chargemaster, amounts at Henry Ford. That is akin to a “sticker price” for the service. It is rare that anyone with health insurance would pay an amount that high if the hospital is included in their insurer’s network. Just as a car buyer might haggle down from the sticker price of a vehicle, an insurance company negotiates a lower price for its members.

People with insurance pay less than the chargemaster amount, but it’s hard to tell just how much less. This is known as the negotiated price, or sometimes the actual paid amount. In some instances, the insurer pays very close to the chargemaster price, while in others they pay much less. That can vary based on the insurer or by the hospital, making the chargemaster price virtually meaningless for comparing hospital prices for those with commercial insurance.

The prices quoted by Healthcare Bluebook and Fair Health are both meant to estimate actual amounts paid by insurers to hospitals. These prices are disclosed in an explanation of benefits statement (it’s the amount after the insurance discount is removed), but you usually don’t see that until after the procedure is done and you get the statement.

The negotiated price is usually a closely guarded secret. Because of this fact, the websites do not have or do not reveal Henry Ford’s or any other hospitals’ actual negotiated prices. So unless you know someone with the same insurance who just had the same procedure done at the same hospital, you’d have a hard time finding that number. In addition, neither website asked about the generosity of my insurance benefit, which determines my out-of-pocket cost, the actual amount I would owe.
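To see how these price concepts relate, here is an illustrative sketch of how an insured patient’s out-of-pocket cost is typically derived from the negotiated price, the remaining deductible and the coinsurance rate. All dollar amounts are hypothetical and the cost-sharing rules are simplified; they are not any hospital’s or insurer’s actual figures.

```python
# Illustrative only: simplified insurance cost-sharing math.
# All dollar amounts are hypothetical, not actual hospital rates.

def out_of_pocket(negotiated_price, deductible_remaining,
                  coinsurance_rate, out_of_pocket_max):
    """Patient pays the remaining deductible first, then a coinsurance
    share of the rest, capped at the plan's out-of-pocket maximum."""
    deductible_part = min(negotiated_price, deductible_remaining)
    coinsurance_part = (negotiated_price - deductible_part) * coinsurance_rate
    return min(deductible_part + coinsurance_part, out_of_pocket_max)

chargemaster = 67_000   # "sticker price" -- rarely what an insured patient pays
negotiated = 33_000     # insurer's (usually secret) negotiated rate
patient_owes = out_of_pocket(negotiated, deductible_remaining=1_500,
                             coinsurance_rate=0.20, out_of_pocket_max=6_000)
print(f"Chargemaster: ${chargemaster:,}")
print(f"Negotiated:   ${negotiated:,}")
print(f"Patient owes: ${patient_owes:,.0f}")
```

The point of the sketch is that the patient’s actual cost depends on plan details (deductible, coinsurance, out-of-pocket maximum) that none of the price-lookup websites asked about, which is why neither the chargemaster nor the negotiated price tells a consumer what they will owe.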

Then, there’s the issue of what is encompassed in the quoted price, which is likely the source of the large discrepancy between what Fair Health reported as a fair cost ($9,350) and what Healthcare Bluebook reported ($39,000). Healthcare Bluebook estimated the hospital’s facility fee, physician fee and anesthesia fee based on typical recovery time and prices. The Fair Health price is a bit unclear, but it seems to include only the price of the actual surgery, not taking anesthesia or the cost of the hospital stay into account.

What if you don’t have insurance? In some cases, patients are billed chargemaster prices. However, many hospitals will work with these people to lower large bills. Additionally, thanks to the Affordable Care Act, anyone without insurance who is eligible for financial assistance must be billed a lower amount, usually based on average insurer payments. Uninsured people with higher incomes, however, may still legally be billed chargemaster prices.

Too many prices.
Man at computer image via

What’s a consumer to do?

The best thing you can do if you know you have a major medical expense coming up is call your insurer. Most large insurers now have tools that help consumers shop around for health care providers, and they can often give you an idea of the variation in costs you would face at different providers in your network and specific to your plan.

Next, as a policy recommendation, we need to be careful about releasing information on billed charges under the guise of price transparency, and particularly about calling these numbers prices. They bear little relevance to what the vast majority of consumers will pay and simply distract from finding relevant information on actual prices facing patients.

Price transparency is undoubtedly hard to implement. But it doesn’t have to be as hard as we are making it.

Betsy Q Cliff has received funding from the Robert Wood Johnson Foundation, the Agency for Healthcare Research and Quality and the Centers for Medicare and Medicaid Services.


Can we curb the opioid abuse epidemic by rethinking chronic pain?

Over the last few decades, medicine has witnessed a sea change in attitudes toward chronic pain, and particularly toward opioids. While these changes were intended to bring relief to many, they have also fed an epidemic of prescription opioid and heroin abuse.

Rethinking chronic pain. Doctor and patient image via

Over the last few decades, medicine has witnessed a sea change in attitudes toward chronic pain, and particularly toward opioids. While these changes were intended to bring relief to many, they have also fed an epidemic of prescription opioid and heroin abuse.

Curbing abuse is a challenge spilling over into the 2016 political campaigns. Amid calls for better addiction treatment and prescription monitoring, it might be time for doctors to rethink how to treat chronic pain.

Ancient roots, modern challenges

A class of drugs that includes morphine and hydrocodone, opioids get their name from opium, Greek for “poppy juice,” the source from which they are extracted.

In fact, one of the earliest accounts of narcotic addiction is found in Homer’s Odyssey. One of the first places Odysseus and his beleaguered crew land on their voyage home from Troy is the land of the Lotus-Eaters. Some of his men eat of the Lotus, lapsing into somnolent apathy. Soon the listless addicts care for nothing but the drug and weep bitterly when Odysseus forces them back to their ships.

Odysseus removing his men from the company of the Lotus-Eaters.
via Wikimedia Commons

For decades in the U.S., physicians resisted prescribing opioids, in part for fear that patients would develop dependency and addiction. Beginning in the 1980s and 1990s, this began to change.

Based on experiences with end-of-life care, some physicians and drug companies began saying that opioids should be used more liberally to relieve chronic pain. They argued that the risks of addiction had been overstated.

Since 2001, the Joint Commission, an independent group that accredits hospitals, has required that pain be assessed and treated, leading to numerical pain rating scales and the promotion of pain as medicine’s “fifth vital sign.” Doctors and nurses now routinely ask patients to rate the severity of their pain on a scale of zero to 10.

While it is impossible to measure the burden of pain strictly in dollars, it has been estimated that the total health care cost attributable to pain ranges from US$560 billion to $635 billion annually, making it an important source of revenue for many health professionals, hospitals and drug companies.

More prescriptions for opioids have fed abuse

Today it is estimated that 100 million people in the U.S. suffer from chronic pain – more than the number with diabetes (26 million), heart disease (16 million) and cancer (12 million). Many who suffer from chronic pain will be treated with opioids.

In 2010 enough prescription painkillers were prescribed to medicate every American adult every four hours for one month. The nation is now in the midst of an epidemic of opioid abuse, and prescription medications far outrank illicit drugs as causes of drug overdose and death.

It is estimated that 5.1 million Americans abuse painkillers, and nearly two million Americans suffer from opioid addiction or dependence. Between 1999 and 2010, the number of women dying annually of opioid overdose increased five times. The number of fatalities each day from opioid overdoses exceeds that of car accidents and homicides.

In response, the Drug Enforcement Administration and a number of state legislatures have tightened restrictions on opioid prescribing.

For instance, patients must have a written prescription to obtain Vicodin and doctors can’t call prescriptions in. The downside, of course, is that many patients must visit their physicians more often, a challenge for those who are seriously ill.

Some patients seek multiple prescriptions for opioids so that they can turn a profit selling extra pills. The increase in prescription opioid misuse is also linked to an increase in the number of people using heroin.

A sea change in pain treatment helped create the opioid abuse epidemic, and another sea change in how doctors view chronic pain could help curb it.

Doctors should examine the strengths and weaknesses of opioids.
Pill bottles image via

Looking beyond physical pain

In a recent article in the New England Journal of Medicine, two physicians from the University of Washington, Jane Ballantyne and Mark Sullivan, argue that physicians need to reexamine the real strengths and weaknesses of opioids. While these drugs can be very effective in relieving short-term pain associated with injuries and surgery, the authors say “there is little evidence supporting their long-term benefit.”

One of the reasons opioids have become so widely used today, the authors suggest, has been the push to lower pain intensity scores, which often requires “escalating doses of opioids at the expense of worsening function and quality of life.” Merely lowering a pain score does not necessarily make the patient better off.

They point out that the experience of pain is not always equal to the amount of tissue damage. In some cases, such as childbirth or athletic competition, individuals may tolerate even excruciating degrees of pain in pursuit of an important goal. In other situations, lesser degrees of pain – particularly chronic pain – can prove unbearable, in part because it is experienced in the setting of helplessness and hopelessness.

Instead of focusing strictly on pain intensity, they say, physicians and patients should devote greater attention to suffering. For example, when patients better understand what is causing their pain, no longer perceive pain as a threat to their lives and know that they are receiving effective treatment for their underlying condition, their need for opioids can often be reduced. This means focusing more on the meaning of pain than its intensity.

This helps to explain why one group of patients, those with preexisting mental health and substance abuse problems (“dual diagnosis patients”), are particularly poorly served by physicians who base opioid doses strictly on pain intensity scores. Such patients are more likely to be treated with opioids on a long-term basis, to misuse their medications, and to experience adverse drug effects leading to emergency room visits, hospitalizations, and death – often with no improvement in their underlying condition.

The point is that pain intensity scores are an imperfect measure of what the patient is experiencing. When it comes to chronic pain, say the authors, “intensity isn’t a simple measure of something that can be easily fixed.” Instead patients and physicians need to recognize the larger psychological, social and even spiritual dimensions of suffering.

For chronic pain, Ballantyne and Sullivan argue, one of the missing links is conversation between doctor and patient, “which allows the patient to be heard and the clinician to appreciate the patient’s experiences and offer empathy, encouragement, mentorship, and hope.”

If the authors are right, in other words, patients and physicians need to strike a new and different balance between relying on the prescription pad and developing stronger relationships with patients.

One problem, of course, is that many physicians are not particularly eager to develop strong relationships with patients suffering from chronic pain, substance abuse and/or mental illness. One reason is the persistent widespread stigma associated with such conditions.

It takes a doctor with a special sense of calling to devote the time and energy necessary to connect with such patients, many of whom can prove particularly difficult to deal with.

In too many cases today, it proves easier just to numb the suffering with a prescription for an opioid.


Why isn’t learning about public health a larger part of becoming a doctor?

Chronic conditions, such as Type II diabetes and hypertension, account for seven in 10 deaths in the United States each year. And by some estimates, public health factors, such as the physical environment we live in, socioeconomic status and ability to access health services, determine 90% of our health.

Public health isn’t a standard part of medical school curricula. Medical school class images via

Chronic conditions, such as Type II diabetes and hypertension, account for seven in 10 deaths in the United States each year. And by some estimates, public health factors, such as the physical environment we live in, socioeconomic status and ability to access health services, determine 90% of our health. Biomedical sciences and actual medical care – the stuff doctors do – determine the remaining 10%.

Clinical medicine can treat patients when they are sick, but public health provides an opportunity to prevent disease and poor health. But too often, medical students don’t get to learn about public health, or how to use it when they become doctors. That means many of today’s students aren’t learning about health care in a broader context.

Why doctors need to know about public health

What should a physician do if patients are unable to visit a physician because their workplace doesn’t give them sick days? What about an obese individual who has trouble following healthy eating recommendations because their neighborhood doesn’t have a grocery store?

If we want the next generation of medical professionals to understand why some patients have an easier time following a care plan than others, or understand what causes these conditions so we can prevent them, medical schools need to look toward public health.

Epidemiology, a core discipline within public health, emphasizes the study and application of treatment to disease and other health-related issues within a population. It is focused on prevention, which means understanding what makes people sick or unwell.

You might hear about epidemiologists who work on figuring out how infectious diseases spread. But they also study obesity, cancer, how our environments affect our health and more.

So a doctor with training in public health would have an understanding of how environmental, social and behavioral factors impact their patients’ health. These physicians might also draw on other medical professionals to treat individuals who are sick, and prevent sickness from occurring in the first place.

Medical schools recognize that their students should learn more about public health. But according to the Association of American Medical Colleges (AAMC), about one-fourth of 2015 medical school graduates report that they intend to participate in public health-related activities during their career, and nearly one-third of graduates report that training related to community health and social service agencies was inadequate.

Putting public health into medicine

But this is slowly starting to change.

For instance, the Medical College Admission Test (MCAT), which all medical school applicants in the US take, used to focus on just physical and biological sciences and verbal reasoning. But in 2014 the MCAT added a new section on the psychological, social and biological foundations of behavior. The idea is to provide students with a foundation to learn about what public health scholars call the social determinants of health. These are the conditions and environments in which we are born, work, live and interact with others.

Students are expected to know more about public health.
Medical students image via

Expectations for students transitioning from medical school to their postgraduate residency are also starting to change.

The AAMC has a list of 13 activities that medical school graduates are expected to be able to do on their first day of residency. The activities (called Entrustable Professional Activities, or EPAs) integrate, among other core competencies, principles of public health into everyday practice. They include guidelines for working with individuals who have different belief systems, patient-centered practice and understanding how to access and use information about the needs individuals have and the community resources available to them.

Having students make house calls

At the University of Florida, where I teach, population health-based topics are integrated into our medical school curriculum, and also into curricula for other health professions.

Each fall, 700 first-year health science students studying everything from dentistry to clinical psychology, health administration, pharmacy, nursing and more take part in a service learning project with local families.

Students complete coursework about public health, but they are also assigned to work with a family through the year. Students make a series of home visits, which means that they can see, firsthand, how the family’s home environment shapes their health. Because the project includes students from all the health professions, it helps them understand each other’s roles and responsibilities in providing care.

In these visits, students get a chance to see the myriad factors that can make it easier or harder for a patient to follow the care plan their doctor prescribes. Students may learn that their patients have priorities in life that come before monitoring their own health. And for many students, this may be the only home visit that they make during their entire career.

For instance, a team of our students was humbled to learn that one of the patients they visited, a woman with severe hypertension and Type II diabetes, put her desire to provide Christmas presents for the six grandchildren she was raising ahead of her medication adherence and glucose monitoring.

These home visits show students how complex their patients’ lives really are. And that gives these future doctors a perspective on their patients that they may never get in a clinical visit.

Other medical schools putting public health on the agenda

The University of Florida isn’t the only medical school investing time and energy to explore new methods to teach students about public health.

Some are adopting dual-degree models that allow medical students to earn degrees in both public health and medicine. Often, these programs extend students’ training by 12 months, but some institutions, like the University of Miami and the University of Texas Health Science Center at San Antonio, have developed four-year dual-degree programs.

Other institutions, such as the University of Illinois and Florida International University, are integrating population and public health perspectives throughout their curricula, to make sure that all students learn about public health.

Erik Black does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.


Can pharmacists help fill the growing primary care gap?

By 2020, 157 million people in the US will be living with at least one chronic health condition. As the number of Americans managing diseases such as diabetes, hypertension and high cholesterol increases, the ranks of primary care providers (PCPs) who currently perform the majority of chronic disease management are dwindling.

Safeway pharmacist Ronak Amin is shown at his work station at the store in Wheaton, Maryland, February 13 2015. Gary Cameron/Reuters

By 2020, 157 million people in the US will be living with at least one chronic health condition. As the number of Americans managing diseases such as diabetes, hypertension and high cholesterol increases, the ranks of primary care providers (PCPs) who currently perform the majority of chronic disease management are dwindling.

Within the next 10 years, there is estimated to be a 27% shortage of PCPs in the US – about 90,000 fewer PCPs than the US health care system requires.

But there are approximately 300,000 pharmacists in the US, and the number of pharmacists is going up. Between 2003 and 2013, the number of pharmacists in the US increased by approximately 19%.

Pharmacists are trained to do much more than dispense medication, and they could help plug the growing gaps in chronic care management in the United States.

The trouble is that state pharmacy practice statutes were written in a different era, and haven’t caught up with the training pharmacists receive today. There’s a chasm between what pharmacists are trained to do and what they are allowed to do by law.

A child has her blood pressure checked by a pharmacist at the Mayor’s Back to School Fair in Dallas, Texas, August 2009.
Jessica Rinaldi/Reuters

What does your pharmacist know how to do?

Your local pharmacist is a highly trained medical professional. Before pharmacy students even start school, they have to take and pass the standardized Pharmacy College Admission Test (PCAT), which covers topics like chemistry, biology and mathematics. Before entering pharmacy school (which is a four-year program), most students will have completed a bachelor’s degree or a rigorous two-year program of prerequisites. That means graduates of pharmacy schools have doctoral-level training.

Before they can practice, students have to pass a licensure exam (North American Pharmacist Licensure Examination, NAPLEX). Some will go on to receive board certification in cardiology, pediatrics, infectious disease or other specialties by the Board of Pharmaceutical Sciences (BPS).

Of course, pharmacists receive extensive training in drug therapy management – medical care aimed at optimizing drug therapy and improving therapeutic outcomes for patients – and in the subtle differences between medications.

But pharmacists are also well-versed in preventative care, patient counseling, and health and wellness. They know how to manage chronic diseases, including high blood pressure, diabetes and high cholesterol. A pharmacist can manage a treatment plan initiated by a physician, order basic laboratory tests and adjust medication dosages, adding or subtracting medications as needed. These are things that many patients with chronic disease need to schedule an appointment with their PCP to do.

Pharmacists are often more accessible to patients than PCPs. No appointment is needed, and pharmacists are generally available for consultation during hours, day and night, when most physician offices are closed.

But in most states, pharmacists stick with drug therapy management and don’t get to use the rest of the skills that they learn throughout their pharmacy education.

In some states, pharmacists are allowed to administer certain immunizations or to participate in preventative care and wellness. But only a minority of states have progressive pharmacy statutes that allow pharmacists to interact with patients, take medical histories and order appropriate laboratory tests under certain conditions.

Why aren’t pharmacists doing more?

Outdated pharmacy statutes aren’t the only thing blocking pharmacists from doing more than dispensing medication.

Pharmacists are often assisted by pharmacy technicians who perform routine tasks, like counting pills and labeling bottles, so they can devote more time to patients. Despite that division of labor, almost 70% of a pharmacist’s time is still spent on tasks that can be performed by technicians.

Pharmacists are paid based on the number of prescriptions filled. Even though they can do a lot more than dispense medication, that’s what they get paid to do, with a few exceptions.

For instance, Medicare reimburses pharmacists for medication therapy management – where a pharmacist manages and adjusts the medication to suit an individual patient’s needs.

Because pharmacists don’t get paid for other services they provide, the end result is that patients receive less care than they could and should when visiting the pharmacy.

Safeway Pharmacist Sonya Safaie works behind the prescription drop-off window at a Safeway Pharmacy in Great Falls, Virginia, July 29 2009.
Hyungwon Kang/Reuters

Letting pharmacists play a bigger role in care is a boon for patients

Even if there were enough PCPs to take care of the explosion in chronic diseases, there is evidence that PCPs aren’t doing a good job at managing their patients’ chronic diseases.

Fifty percent of patients walk out of appointments not understanding what they were told by their physician. Patients actively participate in their own clinical decision-making less than 10% of the time. Just one-third of US patients with diabetes, hypertension and elevated cholesterol have their conditions under good control.

And patients are taking more medication than ever. The number of prescriptions written in the US has increased from 700 million in 1989 to 4 billion in 2014. Since 2002 there has been a 15% increase in the number of 55-64-year-olds taking five or more medications. Ninety percent of adults over the age of 65 years take at least one prescription drug.

Taking more medication makes it more likely that a person won’t take them as directed. This can lead to medical complications, higher costs and even death. And more medication means a greater likelihood of harmful interactions.

But research shows that when pharmacists are part of patient care teams they can help avoid these problems and result in better patient care. This is called a collaborative care model.

For example, the physician in charge of the care team would assign activities to a pharmacist, like monitoring blood pressure, ordering lab tests, evaluating and changing medication or doses. This lets the pharmacist act more independently while still working closely with the physician who is leading the care.

Collaborative care models have been shown to improve outcomes in patients with hypertension, diabetes, clotting disorders and high cholesterol. Putting a pharmacist on the care team can reduce adverse drug reactions and lower costs. If patients can go to a pharmacist for day-to-day management of their condition, physicians can spend more time seeing the patients that really need their expertise.

Change is happening…slowly

There are bills in both the House and Senate proposing an amendment to the Social Security Act authorizing the secretary of health and human services to develop pharmacist-specific codes for insurance reimbursement.

These efforts are necessary and long overdue, but even if these bills are passed and signed into law, what pharmacists can do is still restricted by antiquated state statutes that have little connection to how pharmacists are trained today.

Once laws catch up to what pharmacists are really trained to do, it will be the patients who benefit the most.

John Gums does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
