In the 1950s, American agricultural researchers Thomas Jukes and Robert Stokstad discovered that giving small doses of antibiotics to chickens could speed up their growth and prevent infection. Their observation profoundly changed animal food production. It enabled the rise of concentrated animal feeding operations, where farmers could inexpensively raise hundreds of animals — chickens, pigs and cows — and keep them healthy, even in close confinement. With the rise of industrial farming, millions of people have gained access to cheap meat. Approximately 70 percent of the medically important antibiotics sold in the United States go to food animals — animals humans will eat.
The routine use of antibiotics has also exposed quintillions of bacterial pathogens to these drugs, increasing the evolutionary pressure on bacteria to develop resistance. Antibiotic-resistant bacteria can then pass from animals to people through direct contact, contaminated meat or environmental pathways such as water runoff or airborne dust.
Since the 1970s, a growing number of public health advocates have warned that feeding antibiotics to otherwise healthy animals will result in more humans contracting antibiotic-resistant pathogens. Indeed, a 2013 study found that the closer people lived to pig farms, the greater the likelihood they would contract MRSA, a drug-resistant staphylococcus bacterium that can kill. As of 2013, about 23,000 people in the U.S. die each year from illnesses caused by antibiotic-resistant bacteria, according to the Centers for Disease Control and Prevention (CDC).
The CDC has stated that “much of the antibiotic use in animals is unnecessary and inappropriate and makes everyone less safe.”
Globally, antibiotic use in food production outweighs human use by about three times. If left unchecked, antibiotic use in food farming is expected to rise 53 percent by 2030 as the world’s growing population demands more meat. China, the U.S., Brazil, India and Spain are among the top users of antibiotics in food production, according to the Center for Disease Dynamics, Economics and Policy.
There are some efforts in the U.S. to change course on antibiotic consumption in food. The Food and Drug Administration in 2017 banned the use of antibiotics for growth promotion in food animal farming, but still allows their use for disease prevention, as long as that use is overseen by a veterinarian.
Some consumer groups have argued that the FDA action hasn’t gone far enough. They say routine antibiotic use should be banned entirely and point to other countries, like Denmark, that have successfully reduced antibiotic use in farming through good animal husbandry practices, such as more air, more space and better food. Total antibiotic consumption in animals in Denmark, for example, fell 47 percent between 1994 and 2015, which led to a decrease in resistance to certain antibiotics. The food-producing sector argues that it has been cooperating with the government and that veterinarian oversight will ensure that antibiotics are used appropriately.
Some food companies are reducing antibiotic use on their own. Perdue Farms and Tyson Foods are raising most of their birds without antibiotics. A half dozen restaurant chains — including McDonald's and KFC — have also committed to phasing out routine antibiotic use in some or all of their supply chains, which is expected to put more pressure on farmers to figure out how to raise animals without antibiotics.
Consumer groups say such reductions will slow the spread of antibiotic resistance.
First introduced to the public in 1944, antibiotics – drugs that kill harmful bacteria – have all but eliminated the threat of diseases that once killed millions, including sepsis, tuberculosis, plague and cholera. But overuse of these drugs in people and in livestock farming has resulted in the breeding of “superbugs,” germs that are resistant to most or all existing antibiotics.
Bacteria become resistant to antibiotics naturally. When an antibiotic is used, most bacteria are eliminated. A few resistant strains survive, however, and continue to multiply and spread. Some of these bacteria are resistant because of a genetic mutation; others acquire resistance from other bacteria. Mutations, rare spontaneous changes in a bacterium’s genetic material, enable bacteria to inactivate an antibiotic or shut it out of the cell. Bacteria can also acquire resistance by “mating” with one another, transferring genetic material that carries resistance genes. A bacterium can collect multiple resistance traits, making it resistant to many if not all antibiotics.
Until the early 1980s, pharmaceutical companies invested in developing new antibiotics to keep ahead of evolving resistance, but the scientific and economic challenges of developing new antibiotics have led to a steep drop-off in the availability of new types of antibiotics, even as superbugs have spread. In the 1950s, drug companies introduced at least nine new types of antibiotics. Since 1984, no new classes of antibiotics have been registered.
Most antibiotic-resistance cases have been associated with health care settings such as hospitals and nursing homes. Increasingly, however, cases of antibiotic resistance – methicillin-resistant Staphylococcus aureus (MRSA) infections or drug-resistant tuberculosis, for example – are occurring outside health care settings. These cases are hard to treat with existing antibiotics.
At least 2 million people in the U.S. contract an antibiotic-resistant bacterium annually, and 23,000 die as a result. The threat has become so dire that in 2013 the Centers for Disease Control and Prevention (CDC) warned that the globe was approaching a “post-antibiotic” world.
A British report, The Review on Antimicrobial Resistance, estimates that as many as 700,000 people die each year around the world from infections that antibiotics can’t cure. The same report estimated that if nothing is done to stem the growing number of antibiotic-resistant bugs, as many as 10 million people could lose their lives annually by 2050.
Public health leaders have been working on strategies for reversing the trend. The CDC is pushing physicians to stop overprescribing antibiotics and farmers to stop using antibiotics in raising livestock. A large percentage of antibiotics is prescribed for farm animals, which many experts believe is accelerating the spread of antibiotic resistance. For example, colistin, an antibiotic of last resort, was widely used in animal farming in China. Then colistin-resistant bacteria showed up in some patients, most recently in the U.S., where a Pennsylvania woman was found to have a colistin-resistant bacterium that caused a urinary tract infection. China has since banned the use of colistin in animal farming.
To spur drug company investment, Congress in 2012 passed the Generating Antibiotic Incentives Now (GAIN) Act as part of a reauthorization of funding for the Food and Drug Administration, which grants longer market exclusivity for antibiotics and streamlines the process for regulatory approval of new antibiotics.
Antibiotic stewardship in hospitals
About 20 percent to 50 percent of all antibiotics prescribed in U.S. acute care hospitals are unnecessary or inappropriate, the Centers for Disease Control and Prevention estimates.
Overuse of antibiotics in hospital settings is one factor in the rise of “superbugs,” the term for bacteria that have become resistant to many if not all antibiotics. Inappropriate use of antibiotics has also caused some patients to develop serious side effects, such as illness from Clostridium difficile, an infection of the intestine that is hard to treat. C. diff is a bacterium that can live harmlessly in the gut, kept in check by healthy bacteria. When a person gets too many antibiotics, healthy bacteria are killed off, enabling C. diff to flourish.
Antibiotic stewardship programs in hospitals aim to curb the inappropriate use of antibiotics to ensure that existing drugs remain effective, and patients don’t experience adverse events.
In 2014, the CDC published infection control and antibiotic stewardship guidelines for hospitals to follow. Only about half of hospitals had implemented stewardship programs by 2017, when the Joint Commission, which accredits U.S. hospitals, began requiring all hospitals to have one. As of early 2018, almost all do.
Generally, stewardship programs include support for physicians to help them determine whether a patient really needs an antibiotic and, if so, what type would be most appropriate. Support includes providing physicians with rapid diagnostic testing to determine whether a patient is sick with a bacterium or a virus (viruses cannot be treated with antibiotics) and to identify the type of bacterium making a patient ill. Once the bacterium is identified, the hospital can help the physician choose the best antibiotic to treat the patient. To do so, a hospital program might assign a hospital pharmacist with a specialty in antibiotics to work with the physician. See the CDC’s guidelines for details.
Culturally, hospital doctors tend to prescribe an antibiotic to a patient out of an abundance of caution, even if they don’t know what is wrong with the person. Often physicians will also give patients a broad-spectrum antibiotic, a type that kills many kinds of bacteria. The problem is that these medications also kill healthy bacteria that a patient needs to stay well, and a narrow-spectrum antibiotic might have been better suited for the patient.
These programs are so new that there isn’t much data yet to determine whether they have been effective in improving the use of antibiotics.
Phage therapy involves the use of bacterial viruses (bacteriophages) to treat bacterial infections. The therapy was discovered more than a century ago but never caught on as a worldwide topic of medical research because antimicrobial medications became the preferred and easier method for treating bacterial infections. With the rise of antimicrobial resistance, there is growing interest in harnessing this natural enemy of bacteria as an alternative to antibiotics.
Phages exist anywhere there are bacteria and are one of nature’s ways of keeping bacteria from growing out of control. They hijack a bacterium’s genetic machinery and then destroy the cell.
Scientists at the Pasteur Institute in Paris first discovered phages in the early 20th century. A Georgian scientist, George Eliava, took the discovery back to his country, where the Eliava Institute was created. Phages were used throughout the Soviet Union during and after World War Two to treat infections, but the scientific community outside the USSR mostly ignored the work. The challenge is that phages are particular to the bacteria they infect. There is a phage for each strain of Escherichia coli (E. coli) or Clostridium difficile (C. diff), but they aren’t interchangeable. Antibiotics, by contrast, can kill many strains of bacteria, and they became the preferred method for treating infections globally.
With the rise of antimicrobial resistance, phage therapy is getting more attention. In 2010, Texas A&M opened a Center for Phage Technology; the University of California at San Diego founded the Center for Innovative Phage Applications and Therapeutics; and the University of Pittsburgh and the U.S. Naval Medical Research Center have built research teams around phage therapy.
Researchers are now making use of advances in genetic technology to build phages that can attack bacteria. The process is time-consuming and expensive: it requires scientists to find the exact phage that will kill the bacterial strain. Among the places scientists look for phages that attack pathogens dangerous to humans is sewage. The research has advanced far enough to have saved several people who were dying from antibiotic-resistant infections.
The obstacles to phage therapy aren’t only scientific; there are also huge regulatory challenges to using viruses as treatment. In February 2019, the Food and Drug Administration accepted the first phage therapy trial as an investigational new drug, at the University of California, San Diego.
Bioterrorism, the act of turning biological agents like microbes or toxins into weapons, has been used by military leaders for more than 2,000 years. One of the first known uses occurred in 184 B.C., when Hannibal, the leader of Carthage (modern-day Tunisia), directed his sailors to fill earthen pots with serpents and launch them at enemy ships led by King Eumenes of Pergamum (in present-day Turkey).
During the 1960s, the U.S. military had a biological arsenal that included numerous weaponized pathogens, as did Canada, France, Britain and the Soviet Union. But concerns about the risks such programs posed to society led to a 1972 U.N. convention prohibiting the development, production and stockpiling of biological weapons. The agreement was signed by 103 countries, and the U.S. destroyed its arsenal of bioweapons. But the convention had no enforcement mechanism. The Soviet Union, for example, kept working on bioweapons through a program it called Biopreparat. Not until the 1990s, after the Soviet Union dissolved, did the U.S. learn the extent of the program.
The U.S. helped dismantle the Biopreparat program, but security experts believe some countries, like North Korea, are still developing biological weapons. Further, terrorists no longer need the backing or resources of a nation-state to build a bioweapon. With advances in technology, only determination and access to medical supplies and a laboratory are needed, as the U.S. learned in the 2001 anthrax attack.
Anthrax isn’t communicable between people, but the bacterium can be altered to spread easily through the air. When inhaled, anthrax can cause lung damage and death. Not long after the Sept. 11 terror attacks, someone altered anthrax spores and sent them through the mail to the media and members of Congress. After a years-long investigation, the Federal Bureau of Investigation declared that Bruce Ivins, an Army microbiologist, was responsible for sending the spores, which infected 22 people and killed five. Ivins killed himself before he was charged, and doubts remain about whether he was the culprit.
The anthrax attack remains the worst bioterrorism attack in U.S. history, and national security experts worry another one could occur.
Since 2001, the government has spent billions on programs to respond to a potential biological attack. One of the programs, BioShield, directs the federal government to stockpile medical countermeasures to respond to a chemical, biological or nuclear attack. Another program is the Biomedical Advanced Research and Development Authority (BARDA), which operates within the Department of Health and Human Services’ Office of Preparedness and Response. Among other measures, BARDA has developed anthrax drugs and diagnostic tools.
Despite all the money that has been spent, biodefense has been scattered across various parts of the federal government, and national security experts are concerned that the U.S. isn’t positioned to respond effectively to a bioterrorist attack. Currently, there isn’t a single coordinated interagency strategy for responding to a bioterrorist attack or a naturally occurring pandemic.
To change that, Congress at the end of 2016 directed the departments of Agriculture, Defense, Health and Human Services, and Homeland Security to work together on a national biodefense strategy. Though security experts worried that the Trump administration didn’t have a team in place to produce such a strategy, Tim Ziemer, the White House National Security Council senior director for global health security and biothreats, told a nonprofit security group, the Blue Ribbon Study Panel on Biodefense, in November 2017 that the administration would release a biodefense strategy by the end of 2017 or early in 2018. As of January 2018, a strategy had yet to be released.
Disease elimination vs. eradication
The elimination of rubella in the Americas was announced in April 2015, followed by the elimination of measles across the Americas in September 2016. Yet measles cases still occur in North and South America, and news continues to be reported about the eradication of polio, which has not been seen in the Americas in years. The key here is that “elimination” and “eradication” are different things, though they are often confused by readers and sometimes even by journalists.
Eradication refers to a disease being completely, literally eradicated from the earth: no cases occur at all, from any source. The best-known example is the eradication of smallpox in 1980. A lesser-known eradication success is the livestock virus rinderpest. Campaigns to eradicate polio and Guinea worm are officially underway, and it could be argued that public health officials are — so far unofficially — working toward eradication of hookworm, measles, rubella, malaria and other diseases. Only certain diseases can be eradicated with current tools: generally, those that require a human host. A disease that persists in the environment or in animal reservoirs cannot be wiped out from every source.
Elimination, however, refers to a permanent interruption in indigenous transmission of a disease, making it no longer endemic, but the disease can still be introduced by a case from another geographical region. Or, as it was put in an article about the measles elimination, “Measles no longer lives in the Americas though it occasionally visits.” For example, measles has been eliminated from the U.S. since 2000, but there have been a number of measles outbreaks in the U.S. since then. All of those outbreaks, however, were introduced by a person visiting from outside the U.S. None of them began with a person already living in the U.S. because the virus no longer circulates on its own in the U.S., thanks to the effectiveness of the measles vaccine.
The distinction is important because an eliminated disease can always return if conditions allow for it, such as a sufficiently deep, sustained drop in immunization rates that allows measles to begin circulating again.
When infectious disease experts are asked about which disease everyone should be worrying about, after the influenza virus, they often answer “Disease X.”
Disease X stands for an unknown bacterium or virus that might be lurking in animals or humans, with the potential to suddenly become virulent and contagious, spreading around the world. As Wired magazine put it, “Disease X is a contagion requiring immediate attention – but which we don’t yet know about.” And because we don’t know about it, there is no treatment or vaccine for it.
“Disease X” emerged on infectious disease experts’ minds after the World Health Organization named it as one of eight pathogens that public health officials believe are at high risk of exploding into an epidemic. The WHO list, called the “Blueprint Priority Diseases,” was developed to spur research investment in vaccines, treatments and medical countermeasures for these pathogens, where few or none currently exist.
The list includes Ebola, other hemorrhagic fevers, Middle East Respiratory Syndrome (MERS), Severe Acute Respiratory Syndrome (SARS), the mosquito-borne diseases Zika and Rift Valley fever, and now Disease X.
Though there is no “Disease X” per se, the WHO’s inclusion of it was meant to remind public health leaders and health care providers to be ready for any new and dangerous emerging illness. Recent history shows how that can happen: the WHO declared a public health emergency to respond to three unexpected outbreaks in the past decade, including the swine flu pandemic in 2009, the Ebola virus outbreak in West Africa in 2014 and the Zika virus in 2016.
The WHO’s list has spurred investment in vaccine research in the form of the Coalition for Epidemic Preparedness Innovations, a joint public and private organization funded by Norway, Germany, Japan, the Bill & Melinda Gates Foundation and the Wellcome Trust. The effort has so far raised $630 million towards its goal of reaching $1 billion in vaccine research investment.
One effort the organization is exploring is a universal vaccine platform that, in an emergency, could be used to quickly tailor vaccine production based on the genomic sequence of the pathogen.
In the U.S., public health officials are better prepared than five years ago to handle a health emergency like a “Disease X,” but regional gaps remain, the Robert Wood Johnson Foundation said in a 2018 report.
To understand how an infectious disease spreads, public health officials ask questions reminiscent of a journalist's. They want to know: What was the agent that caused the disease? Who was the host? Where did the transmission occur? These three questions make up the “Epidemiologic Triangle” model, which is used to determine the nature of an outbreak.
To end the outbreak, epidemiologists seek to answer, and then break, at least one of the “sides” of the triangle.
Question one, “What is the agent?” refers to the microbe — a bacterium, virus, fungus or parasite — that is causing the disease. How did it invade, leave or transmit to the host? Was the transmission direct (from a person coughing on another person)? Indirect (from eating contaminated food or drinking dirty water)? Or through an animal or insect (from the bite of a mosquito)?
Question two, “Who is the host?” refers to the human or animal that is exposed to and harboring the disease. What was the person's risk of, or susceptibility to, getting the disease? The answers may be biological (does the person have a genetic predisposition for the disease, or a weak immune system?), behavioral (what are the person’s eating habits?), or demographic and cultural (does the person have access to clean water? Live on a farm?)
Question three, “Where did the transmission occur?” refers to the environment, which affects the risk of an animal's or person’s exposure to a pathogen. For example, what is the climate, geology and habitat of the person or animal? Is the person living in a nursing home? What is the biological environment? Does the person live near a jungle or river, where there are mosquitos? What is the person’s economic status and occupation? Was there a natural disaster, like a hurricane, that caused mold to grow? Other environmental factors include the weather: in the winter, flu viruses spread more quickly than in the warm summer months.
In the middle of the triangle is time: the period between exposure and the first signs of symptoms, known as the incubation period. That interval can give epidemiologists information about the nature of the pathogen and its source, and help identify those who were likely infected.
These are all questions journalists may ask public health officials as well when they are covering a disease outbreak and want to know more about what it means for their community.
This lapse in responsible reporting refers to using outliers’ voices to state opinions that contradict the facts simply to provide “balance” to a story. Stories about any topic certainly should include as many perspectives on an issue as possible, as long as those perspectives are purely opinion-based (something that science cannot show to be true or untrue either way) or are supported by some scientific evidence, even if that evidence diverges from other evidence. However, if such a strong consensus emerges from the evidence that something is regarded as a fact, then including a person who doesn’t believe that fact does not provide accurate or appropriate balance to a story — it just confuses the reader about what the facts are. A flippant example would be including a quote from someone who believes the earth is flat in a story related to weather or the curvature of the earth, or quoting someone who believes the moon landing was a hoax in a memorial story about the moonwalk. In reporting on medical research, this becomes tricky because scientists are learning more all the time, and it’s reasonable for journalists to seek countering opinions, particularly on new research, such as new findings about the gut microbiome or a new treatment. Other topics, such as breast cancer screening, may have contradictory evidence or involve controversial opinions about what to do about the evidence, all of which should be considered for a story.
One of the most common examples of a topic that falls prey to false balance, or false equivalency, in reporting is vaccines, mostly in smaller markets or by general assignment reporters who are less familiar with the health or science beat. The way the media’s falsely balanced vaccine reporting damaged public health reporting (and consequently public health) is such a well-worn case study that CJR featured outstanding coverage of it in Curtis Brainard’s Sticking with the Truth. Quoting “both sides” on concerns about a safety issue in vaccines that has been demonstrably shown not to exist makes it appear that there is a controversy among experts when there is not. The group Voices for Vaccines offers an excellent primer to false balance and how to avoid it in accurate news stories about vaccines.
The danger of false equivalence remains for any issue on which a broad medical or scientific consensus exists based on the evidence and a handful of outliers attempt to discredit that information for various reasons, often motivated by personal financial gain. Avoiding false balance doesn’t mean journalists take off their skeptical hat in covering these issues, but they should report scientifically outlier positions only if solid evidence supports them, not just because someone somewhere believes them.
Hand hygiene – Health care settings and non-health care settings
Washing one’s hands is among the most effective ways of reducing the spread of infections.
In health care settings, those providing care wash their hands less than half of the times they should, which contributes to the spread of disease in hospitals, where patients are already sick. On any given day, about one in 31 hospital patients has at least one infection acquired during their hospital care, according to the Centers for Disease Control and Prevention.
The CDC recommends health care providers use an alcohol-based sanitizer on their hands immediately before touching a patient and wash their hands with soap and water if their hands are visibly soiled. The agency also recommends using sanitizer before handling medical equipment, before moving from a soiled body site to a clean site on the same patient, after touching a patient’s immediate environment and after removing gloves. Providers should wash their hands with soap and water after touching a patient with diarrhea or suspected exposure to spore-forming bacteria such as C. difficile.
In non-clinical settings, the CDC recommends everyone wash their hands at certain times to prevent disease: before, during and after preparing food; before eating; before and after caring for someone who is sick; before and after treating a wound; after using the toilet; after changing a diaper; after blowing your nose or coughing; after touching a pet; and after handling garbage. Proper hand washing includes wetting the hands, applying soap, scrubbing for at least 20 seconds, rinsing, and then drying the hands with a clean towel or letting them air-dry. If soap and water aren’t available, the Food and Drug Administration recommends using hand sanitizer; however, the agency says hand sanitizer should be a second option. As of April 2019, the FDA is studying the chemicals in hand sanitizers to ensure their safety.
Incidence vs. prevalence
Many of the medical studies journalists cover are epidemiological, which are observational studies focusing on the health of populations. These studies tend to report on the incidence and prevalence of diseases and other conditions, so it’s important that journalists understand the difference between these two commonly confused terms in epidemiology.
In the plainest terms, “incidence” refers to new cases of a disease, injury or condition. “Prevalence” refers to the total existing cases of a disease, injury or condition – whether newly occurring or ongoing from a previous diagnosis or occurrence. Although these terms can refer to any condition studied, such as gunshot wounds, short-term infectious diseases or chronic conditions, this section will focus primarily on diseases for the sake of simplicity.
Whether a study, or a journalist, uses incidence or prevalence depends on what’s being communicated. For example, to have a sense of how quickly a disease is spreading through a population, incidence is more relevant because it describes new cases.
But to understand the burden of a disease, especially a chronic condition, in a population, prevalence is more relevant because it focuses on how many people are suffering, regardless of whether they were diagnosed yesterday or ten years ago.
These concepts involve more complexity, but first, here is a visual analogy to make sense of the difference: imagine a bathtub with the faucet turned on and the drain open. The water pouring into the bathtub is the incidence – the new cases being diagnosed. The water in the bathtub is the prevalence – how many people currently have the condition. The water exiting through the drain represents people leaving the prevalence, either because they died from the condition or because they recovered from it.
Two more examples: If 500 people are diagnosed with diabetes each year, that refers to incidence, but if 15 million people are currently living with diabetes, that refers to prevalence. If 6 million people caught the flu in the first week of February, that’s incidence, but if only 4 million people are currently suffering from symptoms of the flu on February 7, that refers to the prevalence of influenza; the other 2 million recovered or died from the flu during that week.
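The bathtub analogy can be written as simple bookkeeping: prevalence changes by new cases minus exits. A minimal Python sketch, using the illustrative flu numbers above (the function name is ours, and it assumes nobody already had the flu at the start of the week):

```python
def update_prevalence(current_prevalence, new_cases, exits):
    """Bathtub bookkeeping: water in the tub (prevalence) rises with
    the faucet (incidence) and falls through the drain (people who
    recovered or died)."""
    return current_prevalence + new_cases - exits

# Illustrative flu numbers from the text: 6 million new cases in the
# first week of February, 2 million of whom recovered or died by Feb. 7.
# Starting prevalence of 0 is an assumption for the sake of the example.
prevalence_feb_7 = update_prevalence(0, 6_000_000, 2_000_000)
print(prevalence_feb_7)  # 4000000 people currently sick
```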
Incidence is typically described in one of two ways: incidence proportion or incidence rate. The incidence proportion is also called cumulative incidence, attack rate, or risk of a condition — the probability of developing it. Incidence proportion is expressed as a ratio where the numerator (top number) is the total number of new cases of a condition during a specified time interval, and the denominator (bottom number) is the population of people who are at risk for the condition.
For example, the incidence proportion of HIV in a particular country might be 25 people per 100,000 individuals per year. Similarly, the incidence of cervical cancer in the same country might be 10 per 50,000 women. Even though it’s the same population, the denominator must reflect the population that is at risk. Both males and females can get HIV, but only females can get cervical cancer, so the denominator can include only women in the second example. (The second example would probably actually be expressed as 20/100,000, but it’s important to know that the denominator still contains only women and that the HIV rate and the cervical cancer rate given here cannot be directly compared, since the denominators refer to different populations within the same country.)
Even incidence proportion can be described different ways. For example, the overall incidence proportion, or attack rate, of a listeriosis outbreak refers to the total number of individuals getting newly diagnosed with the foodborne illness out of the total population. But if the source of the outbreak is determined to be cantaloupe, the food-specific attack rate refers to the number of new cases of illness among people who ate that food. (It can get even more specific if the denominator is limited to the people who ate the cantaloupe from the farm where the outbreak originated.)
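The ratios described above are straightforward to compute: new cases divided by the population actually at risk, scaled to a standard denominator. A sketch using the numbers from the examples above (the helper function name is ours):

```python
def incidence_proportion(new_cases, population_at_risk, per=100_000):
    """New cases divided by the population actually at risk,
    scaled to a standard denominator (per 100,000 by default)."""
    return new_cases / population_at_risk * per

# HIV: 25 new cases among 100,000 people at risk.
print(incidence_proportion(25, 100_000))  # 25.0 per 100,000

# Cervical cancer: 10 new cases among 50,000 women. Only women are
# at risk, so only women go in the denominator.
print(incidence_proportion(10, 50_000))   # 20.0 per 100,000 women
```

Note that the two printed rates can't be compared directly, because the denominators describe different at-risk populations.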
The incidence rate is less familiar to journalists even though they will come across it in studies; it refers to the number of newly diagnosed cases in the population over a set amount of time. It’s often expressed in “person-years,” which incorporates time into the denominator. In writing about this type of incidence in layperson terms, one way to express it is to do a quick division and use “cases per year” (or whatever the unit of time is, usually days or years). For example, if the rate of norovirus in Pleasantville over a 10-year period is 25,000 cases per 1 million person-years, that actually means the population is approximately 100,000 people (100,000 people times 10 years is 1 million person-years), and 2,500 people a year got sick (the annual rate is estimated by dividing the 25,000 cases by the 10 years). The reason researchers might express a condition in person-years instead of an annual rate is that the population might change over that time, and person-years is more precise and accurate for researchers. Usually, for a journalist’s purposes, that level of precision is not necessary, and the estimate of 2,500 cases per year is sufficient.
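The person-years arithmetic in the Pleasantville norovirus example works out like this (all figures are the hypothetical ones above):

```python
cases = 25_000            # new cases reported over the study period
person_years = 1_000_000  # denominator of the published rate
years_observed = 10       # length of the study period

population = person_years / years_observed  # approximate population at risk
annual_cases = cases / years_observed       # rough "cases per year" figure
rate_per_person_year = cases / person_years

print(population)             # 100000.0 people
print(annual_cases)           # 2500.0 cases per year
print(rate_per_person_year)   # 0.025 cases per person-year
```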
Prevalence can also be discussed in two different ways: point prevalence and period prevalence. Just as it sounds, point prevalence refers to the number of people with a certain condition at a precise moment in time, such as a day or “right now” throughout the U.S. The numerator is the number of current cases, and the denominator is the total current population. The estimated point prevalence of HIV in the U.S. is 1.2 million people. Meanwhile, period prevalence refers to the number of current cases over a period of time, such as over a year. The point prevalence of the flu in February may be 300,000 cases, but the period prevalence of the flu over the entire year might be 9 million (which includes the 300,000 cases in February). Whether this refers to the cases in a nation, a state, a county, a city, a school or some other group depends on the study and the needs of the journalist’s story.
For additional discussion of incidence versus prevalence, review this Paediatric Nursing primer or this explanation from the University of North Carolina School of Public Health (which includes an illustration of the bathtub analogy).
Despite great strides in medication, sanitation, hygiene, and animal and pest control, infectious diseases remain an enormous threat to human and animal health. New and reemerging infectious diseases are surfacing around the world at an accelerating rate. The total number of reported infectious disease outbreaks worldwide has tripled since 1980, according to the World Health Organization.
How these infectious diseases spread and become epidemics depends on a range of interconnected dynamics of pathogens, people and animals. Some microbes are transmitted between people, or between people and animals; some circulate among multiple hosts before they are transmitted, and others must be carried in an insect vector before spreading. Many factors, including increasing antibiotic resistance, human connectivity and behavior, population growth, climate change, land-use change, farming, urbanization and global travel, also affect the emergence and spread of infectious disease, as well as pose challenges for prevention and control.
Given these huge complexities, scientists have increasingly turned to mathematical models to understand epidemiological patterns and develop evidence for making public health decisions. These models all hark back to Isaac Newton, who had the fundamental insight that there are unchanging universal laws that govern the actions of natural phenomena. The hope is that with more data and more computing power, even the most complex outcomes can be predicted.
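A classic example of such a model is the SIR (susceptible-infectious-recovered) framework, which divides a population into compartments and steps them forward in time with simple difference equations. The sketch below is a minimal, illustrative version; the population size and the transmission (beta) and recovery (gamma) parameters are assumptions, not fitted to any real outbreak:

```python
# Minimal SIR (susceptible-infectious-recovered) epidemic model.
# All parameter values below are illustrative assumptions.

def sir_step(s, i, r, beta, gamma, n):
    """Advance the compartments by one day using simple difference equations."""
    new_infections = beta * s * i / n   # contacts between S and I
    new_recoveries = gamma * i          # people leaving the infectious pool
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(days=160, n=100_000, i0=10, beta=0.3, gamma=0.1):
    s, i, r = n - i0, i0, 0.0
    peak_infectious = float(i)
    for _ in range(days):
        s, i, r = sir_step(s, i, r, beta, gamma, n)
        peak_infectious = max(peak_infectious, i)
    return s, i, r, peak_infectious

s, i, r, peak = simulate()
print(f"Peak infectious: {peak:,.0f}; eventually recovered: {r:,.0f}")
```

With beta/gamma = 3, the model’s basic reproduction number is 3, and most of the simulated population is eventually infected. Real epidemic models layer age structure, contact networks, seasonality and behavioral change on top of this skeleton, which is what makes their data demands so heavy.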
In the past 20 years, growing computing power and infectious disease surveillance have enabled scientists to gather more data for developing these models. Researchers are collecting volumes of data from epidemiology, evolutionary biology, immunology, sociology, climate and public health resources to develop models that mimic how infections might evolve and spread.
The challenge with modeling infectious disease, however, is that pathogens, the environment, the rate of contagiousness, the rate of transmission, the availability of vaccines, and the climate are ever changing, yet most models rely on data from past events to predict the future.
Scientists are trying to develop models for specific pathogens that also take many real-time variables into account, for example, by using information on social media to try to predict the magnitude of an upcoming flu season. These systems could quickly alert public health officials when too many people are complaining about a certain symptom, signaling a potential outbreak.
The Centers for Disease Control and Prevention, the National Institutes of Health, the U.S. State Department, private organizations like the EcoHealth Alliance, health systems, and other organizations are all working on developing these models.
The CDC funds the Epidemic Prediction Initiative, which monitors dengue, influenza and mosquito activity as part of efforts to better predict disease flare-ups. The NIH funds the Models of Infectious Disease Agent Study (MIDAS), which has supported projects ranging from simulation models of measles spread to models of Ebola outbreaks.
The CDC’s predictive flu project, called FluSight, has been developing different mathematical algorithms matched with particular data sources, such as an algorithm for data gathered through Google searches, or a model built with data on flu-related hospitalizations. No one model has yet emerged that is significantly better than the others, researchers say.
Are non-communicable diseases “infectious”?
Infectious diseases are a primary focus of public health officials. The traditional definition of “infectious” is a disease that can be transmitted between people. To prevent their spread, public health officials emphasize clean water, clean living conditions, and access to vaccines and drugs that prevent or kill the bacteria, parasites and viruses that make people sick.
As medicine has become sophisticated enough to eradicate many infectious diseases, non-communicable diseases have become the leading cause of death. Traditionally, non-communicable diseases have been defined as illnesses that cannot be transmitted between humans physically. There is no pathogen causing the sickness. Usually the diseases are chronic, lasting three months or longer, and cannot be completely cured by medicine or prevented by vaccines. Examples are cancer, chronic obstructive pulmonary disease, heart disease, diabetes and mental health conditions. About 75 percent of deaths worldwide are due to a non-communicable disease, according to the World Health Organization.
But there is a growing number of public health experts calling for the definition of “non-communicable” to be reconsidered, arguing that many of these diseases are indeed transmitted between people through social networks, cultural and economic conditions and genetics. Redefining these diseases as ones that are communicable could create more of a sense of urgency within governments and better focus public health strategies on preventing and treating them, these experts argue. A change in definition could help focus attention on the social determinants of health like economic inequality, education and housing.
Efforts to shift the definition have been apparent with the current opioid crisis, in which thousands of Americans are dying from overdoses of opioids. Many public officials are calling it an “epidemic,” though traditionally, the definition of "epidemic" is the widespread occurrence of an infectious disease. For more on the debate, click on the links provided in this concept brief and AHCJ’s social determinants core topics page.
Lyme disease controversies
Lyme disease in the U.S. is caused by the bacterium Borrelia burgdorferi and is transmitted through the bite of a tick. About 300,000 Americans develop Lyme disease annually, making it the most common vector-borne disease in the U.S.
B. burgdorferi bacteria can cause tissue and immune system damage. In most people, a course of antibiotics can kill the bacteria, but for 10 percent to 20 percent of people, antibiotics don’t work, and the symptoms of Lyme, such as nerve tingling, fever, fatigue and headaches, persist for months or years.
The medical community is divided over whether these non-responding patients ever had Lyme, and there is also debate over the best options for continuing to treat them.
The argument over the treatment of Lyme disease is “one of the biggest controversies that medicine has seen,” Dr. John Aucott, a physician and director of the Johns Hopkins Lyme Disease Clinical Research Center, said in a September 2019 article in the Atlantic magazine.
Since the 1990s, the mainstream medical consensus has been that Lyme disease is fairly easy to diagnose. If a person develops a bull’s-eye rash and flulike symptoms after being bitten by a tick, they have Lyme. The prescribed treatment protocol is 10 days to a few weeks of oral antibiotics, usually doxycycline, and the disease clears up. If the disease is found at a later stage, a month of intravenous antibiotics may be necessary. This remains the treatment recommended by the Infectious Diseases Society of America (IDSA), the organization that represents the consensus of infectious disease specialists in the country.
For patients who failed to respond to antibiotics, some physicians began to refer to the symptoms as chronic Lyme disease or post-treatment Lyme disease (PTLD). The International Lyme and Associated Disease Society and the U.S. Centers for Disease Control and Prevention now recognize PTLD as a diagnosis for people with symptoms of Lyme that persist after treatment. In many cases, physicians using this diagnosis recommend long-term use of antibiotics.
The IDSA, however, concerned about overuse of antibiotics, doesn’t recognize this diagnosis. Often diagnostic tests that look for Lyme disease don’t find it, or contradict one another. Many physicians believe patients who complain of PTLD may actually have an undiagnosed autoimmune disease, or that the Lyme infection triggered an autoimmune disease, in which case antibiotics are useless. Still other physicians think patients may have been infected with other tick-borne illnesses, or perhaps a virus no one has been looking for.
Until research is more definitive in being able to answer the question of why some people don’t respond to Lyme disease treatment, controversy over how to best respond will continue.
The microbiome is the army of microbes - bacteria, viruses, yeasts and fungi - that live on and in the body. There are trillions of these microorganisms in humans. They outnumber human cells 10 to 1, and make up about 1 percent to 3 percent of the body’s mass. Most of the time they live in harmony with the body, and don’t cause disease. In fact, the microbiome plays a key role in the proper functioning of the immune system and human health. Scientists are still trying to figure out why it is so important to health, and why sometimes microbes can turn deadly to their human hosts.
Research into the microbiome is one of the hottest new areas in health and has flourished since 2007, when the National Institutes of Health launched the NIH Human Microbiome Project to map the microbial makeup of the human body. Through that work, researchers learned there are more than 10,000 microbial species in the body. Scientists also discovered that the makeup of the microbiome varies drastically throughout the body. “Each body site can be inhabited by organisms as different as those in the Amazon Rainforest and the Sahara Desert,” says the National Institutes of Health.
Most scientists now believe microbes live in and on the body to help extract energy from food and store it in the body. They also may help “train” the immune system to determine which pathogens may be deadly for a human host and which are harmless. More than 500 clinical studies registered at ClinicalTrials.gov are looking at the microbiome and its role in disease. One of the most promising areas of microbiome research involves fecal transplants to treat recurring illness from Clostridium difficile. C. diff is a bacterium that causes symptoms ranging from diarrhea to life-threatening inflammation of the colon. The evidence clearly indicates that around 90 percent of people with severe C. diff can be healed if they receive a fecal transplant from someone healthy.
Most microbiome research, however, is preliminary and has been conducted on mice. Hence scientists urge journalists to use caution when writing about the microbiome.
Mosquito lifecycle and disease risks in U.S.
There are about 3,500 species of mosquitos in the world and about 150 of them live in the U.S. Most of these species are just a nuisance. They bite people and leave behind an itchy bump, but nothing harmful. Most people don’t get sick from their bite.
However, there are a number of mosquito species that are capable of carrying dangerous diseases, from dengue to West Nile virus, and their populations have been growing, increasing disease risks. For disease prevention, it’s important to understand how mosquitos reproduce and if any mosquito in a given area has been found with disease. State health departments periodically test mosquitos for disease so the public is aware of potential risk.
All mosquitos have a four-stage life cycle. The cycle requires two elements: blood from an animal or a human, and water. Without water, mosquitos won’t reproduce, which is why emptying any type of water container near a home is so important. Mosquitos lay their eggs in water; the eggs develop into larvae, then pupae, then adults, which fly off to feed on animals or people and start the cycle of reproduction again.
Temperature is a key aspect of mosquito development. The warmer the water, the faster a mosquito moves through its developmental stages. The level of precipitation also matters because more rain means new sources of water. Vegetation and bird life matter because mosquitos may choose to feed on humans if they can’t find enough plants or birds for food. Temperature also affects mosquitos’ ability to survive: at temperatures below 50 degrees or above 95 degrees Fahrenheit, most mosquitos don’t survive.
Mosquitos living in the U.S. that can carry disease include the Culex, Aedes aegypti, Aedes albopictus (also called the Asian tiger mosquito) and Aedes triseriatus species. Culex mosquitos (found in all but two states) can carry West Nile virus. Aegypti (found mostly in the southern U.S.) and albopictus (found along the East Coast and in the South) can carry Zika, dengue and eastern equine encephalitis. Triseriatus can carry La Crosse encephalitis virus. While Americans still rarely develop diseases from the bites of these mosquitos, more people are developing mosquito-borne diseases. The Centers for Disease Control and Prevention says the number of vector-borne disease cases has tripled since 2004.
Mosquito control programs that test, spray and eliminate standing water sources exist in every state, but many are under-resourced, and the public cannot rely on these programs to eliminate disease risk. Some states are testing biological alternatives to spraying. In 2017, 20 states were permitted by the federal government to experiment with releasing mosquitoes infected with a naturally occurring insect bacterium that is safe for humans, pets and the environment and may reduce Aedes aegypti populations.
Neglected tropical diseases
About 1.4 billion people around the world live below the poverty line. These individuals, mostly the world’s subsistence farmers and their families as well as the urban poor, often do not have access to clean water and adequate sanitation. They also may live in close contact with livestock and with vectors that carry disease, such as flies and mosquitos. As a result, they are at risk of becoming infected with almost two dozen parasitic and related infectious diseases, such as amebiasis, Chagas disease, cysticercosis, echinococcosis, hookworm, leishmaniasis and schistosomiasis. These diseases produce an enormous burden of global disability and human suffering. See the World Health Organization's complete list.
These diseases are considered “neglected” because while many of them could be controlled or even eliminated through mass administration of medication or vaccination, they haven’t been because of the extreme poverty of the environment. The world’s poor receive little attention from governments and health organizations and don’t have access to much of modern medicine. Further, because of the absence of financial incentives, multinational pharmaceutical companies have not embarked on substantive research and development programs to develop vaccines for many of these neglected tropical diseases.
Still, the WHO in 2012 announced an ambitious plan, with the help of global health donors, to reduce the burden of these diseases. There has been some progress, but many people are still sickened by these diseases annually.
One Health is a growing field within public health that embraces the connection between animals, humans and the environment to solve complex health problems such as emerging infectious diseases, food safety and antibiotic resistance.
The medical community observed that human and animal health were closely linked back in the late 1800s, but the concept of One Health has risen in prominence as the world’s population has exploded. By 2025, there are expected to be more than 8 billion people living on the planet, up from about 7.4 billion at the end of 2017.
Scientists estimate that more than 60 percent of all new emerging infections are zoonoses, meaning they come from animals. Commonly known zoonoses include avian influenza, Ebola, rabies, Middle East Respiratory Syndrome and Lyme disease. Worldwide, the number of infectious disease outbreaks has tripled since 1980. More than a dozen new infectious diseases have emerged over the past 25 years in the U.S. alone.
Outbreaks are associated with economic and agricultural turmoil. The 2014 outbreak of the Ebola virus, for example, cost Guinea, Liberia and Sierra Leone about $2.2 billion and a 2014 pathogenic avian influenza outbreak cost U.S. farmers about $3.3 billion.
Population growth is spurring antibiotic resistance. The increase in people is driving rising demand for animal protein and an increase in animal production operations. In turn, antibiotics are being used on more animals, accelerating the rise of antibiotic-resistant bacteria, or “superbugs,” in the environment. In the U.S., about 2 million people annually contract a superbug infection, and 23,000 die. By 2050, as many as 10 million people worldwide could die annually as the result of superbug infections if humanity does nothing.
For all of these reasons, public health momentum surrounding One Health has grown. The U.S. Centers for Disease Control and Prevention created the first One Health office in 2009, to foster collaboration between international, federal, state and local governments, as well as the academic, health and private sectors.
As One Health is relatively new in the public health field, the definition of the term is imprecise. One Health has been defined as an initiative, a movement, a strategy, a framework, an agenda, an approach and a collaborative effort. In general, One Health involves the intersection of biology, comparative medicine, earth sciences, ecology, engineering, human medicine, social sciences, humanities and veterinary medicine. One Health programs link physicians, nurses, public health professionals, veterinarians, agricultural scientists, ecologists, social scientists, engineers, biologists and other professionals, to develop holistic solutions for keeping humans, animals and the environment healthy.
In the U.S., the CDC’s 10-person One Health office regularly coordinates discussions between the U.S. Department of Agriculture, the Department of Health and Human Services, the Interior Department, state health and agriculture departments, health systems, and health providers to discuss emerging infectious diseases that may impact communities.
Outbreak culture is a term describing the collective mindset that develops within communities and among public health and humanitarian responders as a disease outbreak unfolds, and the ways that mindset can inhibit initial action and even worsen an epidemic. The mindset can grow out of challenges in communication and coordination among individuals, agencies, organizations and governments, resistance by local people, uncertainty about the cause and source of a disease, gaps in health providers and infrastructure, media coverage and politics.
Outbreak culture during the Ebola outbreak in West Africa in 2014 is detailed in a 2018 book: “Outbreak Culture,” co-authored by Pardis Sabeti, head of Sabeti Lab, a research group at Harvard University’s Faculty of Arts and Sciences Center for Systems Biology and journalist Lara Salahi.
Some of the challenges they describe include: a lack of staff and medical equipment to treat Ebola, governments that didn’t want to admit to the world that there was an Ebola outbreak, communities that didn’t trust medical providers, communities that didn’t trust the government, government officials who tried to take over for local health providers, multiple humanitarian aid organizations that weren’t communicating with one another, and health care providers who weren’t sharing information with one another. Combined, these and other challenges likely led to more people being infected with and dying from Ebola than would have been the case had an outbreak culture not developed.
To prepare for outbreaks in the future, Sabeti and Salahi suggest in their book that public health leaders focus on ways to enhance collaboration between individuals, providers, responders, communities and governments.
Among their suggestions is for the world to agree to the creation of a centralized governing structure that can step in during public health emergencies and make rapid decisions, in the vein of a military approach to a conflict. The leading global health governmental organization, the World Health Organization, cannot play that role because it is ultimately part of a diplomatic organization, the United Nations, they say.
Other suggestions include investing in building health infrastructures in resource challenged nations so that communities have more trust in providers and the government when there is an outbreak; developing systems for rapid sharing of information, data and medical samples during an outbreak and developing a unified approach to research during a public health emergency.
“We need to shift outbreak response to a mode that favors collaboration instead of competition and readiness instead of reaction,” say Sabeti and Salahi.
Influenza, a respiratory disease caused by a virus, is endemic to humanity. The virus is always around, most often striking populations in the late fall or winter seasons. The flu virus attacks not only the respiratory system but also can cause headaches, muscle and joint pain and other complications. The overwhelming majority of the time, people who are infected with the flu recover within about ten days, which is why influenza is rarely viewed with great concern.
Still, the flu can seriously sicken and kill people, particularly older individuals or those with weak immune systems or who are in poor health. In the U.S., seasonal flu can kill between 12,000 and 56,000 people and can hospitalize between 140,000 and 710,000, according to the Centers for Disease Control and Prevention. Because so many people do get sick annually, public health officials urge the public to get an annual flu vaccine. The vaccine isn’t 100 percent effective, however, because flu viruses mutate.
The genetic code of a virus is always mutating to outwit a host’s immune system, and influenza is among the fastest-mutating viruses. Mutation occurs when the virus makes a “mistake” while reproducing itself. These mistakes are often minor and are called antigenic drift. Sometimes several different influenza viruses come together and swap and rearrange their genetic material, creating a new hybrid virus, a change called antigenic shift. Historically, several times a century, the flu virus shifts to become more lethal. Because flu spreads so easily (the virus travels in respiratory droplets expelled by breathing, coughing and sneezing), public health officials worry most about a pandemic flu outbreak. Pandemic means the flu is spreading quickly through multiple countries.
Viruses aren’t strictly alive or dead. They are biological particles that lurk in the environment until they find a cell to latch onto and enter. The virus replicates inside the cell until there are so many viruses that the cell bursts, and the newly made viruses go on to enter other healthy cells. Influenza viruses originated in the gut of wild aquatic birds. Though bird viruses don’t spread easily among humans, they do spread between animal species, including domestic birds like chickens and turkeys, as well as pigs. Because swine and humans have similar respiratory cell receptors, a mutating flu virus in a pig can jump more easily to and between people, as was the case in 2009 with the pandemic swine flu.
There are three types of influenza virus: A, B and C. The different letters stem from the different proteins that contain the virus’s genetic code. The seasonal flu comes from either Type A or B. Type C typically doesn’t cause disease in humans. Type A is found in humans, birds, pigs and other animals, while Type B is only found in humans. Research has shown that only Type A causes pandemics. Proteins that jut out of the virus’s outer covering identify Type A viruses. There are 18 different hemagglutinin, or H, proteins and 11 different neuraminidase, or N, proteins. The H proteins give the virus the ability to enter host cells, and the N proteins govern the release of newly minted viruses from cells. Scientists classify flu viruses based on their H and N proteins. The 2009 flu virus, for example, was H1N1.
Between 1918 and 1919, the world was struck by the largest flu outbreak in modern history. Though it was called the “Spanish” flu, it may actually have started in the U.S., in an agricultural district in Kansas. It was called the Spanish flu because Spain was the first country to report on a widespread flu. Epidemiologists say the 1918 strain spread from an army base in Kansas, where recruits brought it to Europe and then the rest of the world. What made the flu particularly dangerous is that it sickened young people and pregnant women more often than the old and immune-compromised. The virus caused otherwise healthy people’s immune systems to overreact, damaging organs and killing them, a process called a cytokine storm. Estimates of how many people died worldwide vary, with the largest suggesting about 100 million deaths. About 500,000 to 650,000 people are estimated to have died in the U.S.
Since 1500, there have been more than a dozen flu pandemics recorded, with at least five occurring in the past 140 years - in 1889, 1918, 1957, 1968, and 2009.
None have been as deadly as the outbreak in 1918, but scientists fear another deadly outbreak is inevitable because they see signs that influenza is evolving more rapidly than ever. The explosion in worldwide bird and pig farming is giving flu viruses lots of opportunities to evolve and become dangerous to humans. Pharmaceutical companies can develop vaccines to prevent the spread of a dangerous flu, but it takes time - in the range of four to six months - to identify a flu and then manufacture a vaccine. By that time, a pandemic flu may have killed millions.
Pertussis vaccine and vaccine hesitancy
Whooping cough is a contagious and potentially dangerous respiratory disease, especially for young children. To prevent the disease, which is caused by the bacterium Bordetella pertussis, a vaccine was developed in the early 20th century. The whooping cough vaccine is now part of the diphtheria, tetanus and pertussis (DTaP) vaccine, given in five doses to children between 2 months and 6 years of age. Researchers believe, however, that the pertussis bacterium has mutated and that protection from the DTaP vaccine is waning for some children by the time they are teenagers.
A June 2019 study, published in the journal Pediatrics, showed that between January 2006 and June 2017, about 80 percent of pertussis cases occurred in fully vaccinated children, suggesting that the vaccine’s protection wanes in the years after vaccination. The Centers for Disease Control and Prevention says the vaccine remains effective in about 70 percent of children who have been fully vaccinated, five years after they received their last dose, and it now recommends booster shots for teenagers and adults.
The history of the pertussis vaccine is important because it is one of the origins of the current state of vaccine hesitancy. In the 1970s, a study published in the United Kingdom in the journal Archives of Disease in Childhood suggested that there was a connection between the whole-cell whooping cough vaccine and neurological damage in children. The report led to a steep decline in vaccine coverage in the UK and a rise in distrust of the medical establishment regarding vaccine safety, according to a 2005 Health Affairs report.
Researchers retooled the vaccine, and adverse events declined. However, protection from the retooled vaccine, which uses parts of a pertussis bacterial cell instead of the whole cell, doesn’t last as long, and scientists are now working to develop a new vaccine.
“The pertussis vaccine is not optimal,” Dr. William Schaffner, professor of preventive medicine at Vanderbilt University Medical Center, told NBC News. “We’re making the best use of the vaccine, while we’re frantically doing research to make a better one.” A new one is still many years away, he said.
Quarantine and isolation
In the event of an outbreak of a contagious disease, health authorities may deploy several strategies to protect healthy people from getting sick, including implementing quarantines and isolation.
A quarantine involves restricting the movement of a person suspected of exposure to a communicable disease, even though the person isn’t yet showing any signs of illness or doesn’t know whether they might have been exposed. The person is kept apart from the community until he or she can no longer transmit the disease to others. The length of a quarantine depends on how long a pathogen remains infectious.
Isolation involves separating someone who is already ill from anyone who isn’t sick, and keeping them apart until they are well.
The history of quarantines goes back to the Middle Ages, when the plague was sweeping through Europe. Venice, a major port, tried to stop the disease from entering the city by requiring ships suspected of harboring plague to wait offshore for 40 days before people or goods could come ashore. The city also built a hospital off its coast, where sailors who came off ships with the plague were sent. The 40-day waiting period was named for the Italian word for forty, quaranta. Hence the word “quarantine.”
Quarantines can be important when there is no vaccine or drug to treat a rapidly spreading disease. They are, however, very controversial because they involve separating healthy people from the community, and they raise civil liberties questions. Forced quarantines and isolation can cause societal panic, as people worry about getting food, losing work, or becoming isolated from others.
Historically, quarantines have been used to target vulnerable populations in society, such as ethnic groups and the poor. In 2014, during the outbreak of Ebola (a viral hemorrhagic fever with no cure or vaccine), Liberia tried to impose a quarantine for 21 days (the incubation period for Ebola), causing people to flee into the jungle. The quarantine also sparked intense protests, leading the government to end the quarantine in ten days.
In the U.S., the federal government, through the Centers for Disease Control and Prevention, has the legal authority to quarantine and isolate a person, for up to 72 hours, at a U.S. airport, port or the border if the person is known to be infected or possibly infected with one of nine quarantinable diseases. The nine include: cholera, diphtheria, infectious tuberculosis, smallpox, yellow fever, viral hemorrhagic fevers, severe acute respiratory syndrome, new types of flu that could cause a pandemic, or a disease that has been designated by order of the President.
Health providers living in the U.S. who traveled to West Africa to care for Ebola patients in 2014 were asked to voluntarily remain at home and monitor themselves for any signs of illness during the 21-day incubation period. Ebola isn’t contagious until a person shows symptoms of illness. But some states went further than the federal government during the outbreak.
Under emergency preparedness powers, every state, the District of Columbia and most territories have laws authorizing the use of quarantines and isolation, usually through the state’s health authority. Some states, including New York and New Jersey, imposed forced quarantines on some returning health care workers, including Laura Skrip, a public health graduate student who had been in Liberia providing computer technology support during the outbreak. She was forced to stay in her apartment in isolation, enforced by a police officer patrolling her building.
The CDC has recently updated its community guidelines on quarantine in the event of a flu pandemic, which could provide guidance in future outbreaks of other diseases.
R0
R0 (pronounced R-naught) is a number epidemiologists use to gauge the infectiousness of a disease and a community’s susceptibility to an epidemic. The “R” stands for “reproductive number” and acts as a kind of epidemiological threshold. If the “R” number of a bacterium, virus, fungus or parasite is greater than 1, the pathogen has a greater chance of spreading through a population and causing an epidemic. If it is less than 1, the disease will likely die out. If you want to geek out on the mathematical formula, check out the Mathematical and Statistical Estimation Approaches in Epidemiology or this detailed description.
Understanding the susceptibility of a population is important for helping public health officials determine strategies for controlling the spread of an infectious disease, such as vaccinating the population or quarantining sick individuals if no vaccine is available.
The “R” number for a disease is a range and changes with conditions in the community at the time. Many factors affect the “R” number: the period of time for which a disease is contagious (the longer a person is contagious, the more likely the disease is to spread); the number of people a sick person comes in contact with (a sick person who stays home may spread the disease more slowly); how the disease is transmitted between people (diseases that spread through the air, like measles, can travel quickly, while those that are sexually transmitted spread more slowly); the immunity level of the population (whether people have been vaccinated against the disease or survived a version of it in the past); and whether there is a strong health and legal system in the community (hospitals to treat people and law enforcement to impose quarantines can reduce the spread of a disease).
Ebola is a good example of how conditions within a community affect the “R” number. Ebola is spread between people through an infected person’s blood or bodily fluids. The World Health Organization says the average mortality rate from Ebola is about 50 percent, but it can range from 20 to 90 percent. The R number for Ebola is estimated at around 1.5 to 2.5.
During the 2014 Ebola outbreak in West Africa, the R number was 1.51 in Guinea, 2.53 in Sierra Leone and 1.59 in Liberia. In Nigeria, the number was below 1, because as soon as one Ebola patient was identified, the country implemented a tracing program to isolate exposed individuals. There was no outbreak in the country.
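The threshold idea behind those country-by-country numbers can be sketched in a few lines of Python. This is an illustrative calculation only, assuming a simple branching model in which every case infects R others on average; `expected_cases` is a hypothetical helper written for this sketch, not a formula taken from the epidemiology references above.

```python
def expected_cases(r, generations, initial_cases=1):
    """Expected new infections per generation, assuming each
    case infects r others on average (a simple branching model)."""
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(cases[-1] * r)
    return cases

# R above 1 (Sierra Leone's estimated 2.53): cases grow each generation.
print([round(c) for c in expected_cases(2.53, 5)])
# R below 1 (as in Nigeria after its contact-tracing program): cases shrink.
print([round(c, 2) for c in expected_cases(0.9, 5)])
```

With R even slightly below 1, the expected case count fades toward zero after a few generations, which is the mathematical reason Nigeria’s tracing-and-isolation program prevented an outbreak.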
The R-naught number entered popular culture with the 2011 movie “Contagion,” in which a virus causes a deadly epidemic in the U.S. Actress Kate Winslet plays a Centers for Disease Control and Prevention official who writes the R-naught formula on a whiteboard to try to determine how quickly the disease is spreading. Here’s an interesting Q & A from Wired magazine on the science behind the movie, which includes a conversation about the R-naught figure.
Relationship between urbanization and antibiotic resistance
Bacterial resistance to antibiotics has existed since antibiotics were created. That’s because, though antibiotics kill most bacteria, there are always a few that have the genetic ability to survive and then replicate. Some bacteria divide every 20 minutes, and at that rate a single cell can generate about 2 million bacteria within seven hours.
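The arithmetic behind that figure is straightforward doubling; the short sketch below assumes a single starting cell and an exact 20-minute division time.

```python
# One bacterium dividing every 20 minutes doubles 21 times in seven hours.
doublings = (7 * 60) // 20      # 420 minutes / 20 minutes = 21 divisions
population = 2 ** doublings     # descendants of a single cell
print(doublings, population)    # 21 doublings yield 2,097,152 cells (~2 million)
```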
In cities, where more people live close to one another, there are more opportunities for diseases to spread between humans. With more illness, there is a greater demand for antibiotics. The more antibiotics that are taken, the greater the likelihood that more bacteria will survive doses of antibiotics. As people crowd together in cities, the concentration of antibiotic-resistant bacteria therefore rises. It spreads in water, in food, on objects, between animals and between people.
Multiple studies show that resistance to many antibiotics is greater in highly populated areas. The chances of antibiotic-resistant bacteria spreading are particularly high in areas with unregulated access to antibiotics, poor sanitation and poor environmental planning.
For example, a 2016 study of major waterways in Thailand found bacteria resistant to eight classes of antibiotics. The study connected the resistant bacteria to high concentrations of people and to poor urban land use and animal husbandry practices.
Ring vaccination
This is a public health strategy aimed at halting the spread of a viral infection. It involves vaccinating all the people in a specific area during an outbreak who may be susceptible to contracting the virus. In essence, health providers identify patients who are infected and then vaccinate everyone around those individuals, forming a “ring,” or a kind of buffer, of immune individuals to prevent the spread of the disease.
The strategy was most famously deployed to eradicate smallpox. When an infection of smallpox was identified, public health providers identified everyone who was or might have been exposed to the virus and then vaccinated those individuals. Then a second “ring” of people who might have been exposed to the first ring was identified and vaccinated. The last naturally occurring case of smallpox occurred in Somalia in 1977, and the World Health Organization declared smallpox eradicated in 1980.
More recently, ring vaccination is being used in the Ebola outbreak in the Democratic Republic of Congo. An experimental vaccine has been used to help protect health care workers, who are the most vulnerable to contracting the virus as they care for the sick. It has also been used to protect people who have been in contact with Ebola patients, but identifying people who have been exposed has been challenging because the country is in the middle of a civil war. While there has been no formal assessment of the vaccine, WHO officials said at the end of 2018 that they believe the “ring” strategy has been helping.
Vaccine platform
Vaccination involves stimulating the immune system with an infectious organism, or components of the organism, without causing disease. For more than a century, vaccines have been delivered by introducing either a weakened or dead version of an infectious agent so the body can develop immunity without becoming ill.
A platform is a method of manufacturing a vaccine for broad use. The seasonal flu vaccine, for example, is made by growing a flu virus in fertilized chicken eggs. Scientists inject the virus into an egg, where it replicates. After it grows, it is inactivated with formaldehyde so it can’t cause illness but still prompts the body to build immunity to the flu. The process of developing an annual flu vaccine and then growing it takes around six to nine months. If there were a pandemic flu, scientists wouldn’t be able to create a vaccine fast enough to stop its spread before it killed millions.
With the advent of genetic engineering, scientists have been working on new platform technologies to speed up the production of a vaccine once it is developed. The term platform has several meanings, but in vaccine production it usually refers to a single method for generating vaccines that can work for multiple pathogens.
A faster vaccine platform would help global leaders to respond to deadly emerging infectious diseases for which there aren’t existing vaccines, like Nipah and Lassa viruses.
One example of a new platform is DNA (deoxyribonucleic acid) vaccine delivery. The approach involves taking a piece of the DNA of an infectious agent and engineering it so that it can be injected into the body to produce an immune response. Scientists at the National Institutes of Health began testing a DNA vaccine to protect against the Zika virus in 2016. Early stage testing found the vaccine to be safe, and researchers are now assessing its efficacy with a larger group of people.
The only DNA vaccine currently approved by the Food and Drug Administration is one aimed at protecting horses from West Nile virus.
Vaccines
Vaccines are agents (usually dead or weakened microorganisms, or a genetic piece of the organism) that elicit a specific immune response protecting individuals from the pathogen should they be exposed to it at a later date. Vaccines stimulate protection without triggering the disease. They are considered the most important and powerful tools for preventing the spread of infectious diseases. Vaccines are estimated to save about 3 million lives worldwide annually.
The term vaccine stems from the work of Edward Jenner, an English physician, who noted in the late 18th century that people exposed to cowpox, a mild relative of smallpox, were then immune to smallpox. Smallpox, a virus that causes fever and severe and debilitating skin blisters, was killing about 400,000 Europeans annually during the 18th century. The Latin translation of cowpox was “variolae vaccinae.”
In 1796, Jenner took a scab from a milkmaid with cowpox and inserted it into a cut on an 8-year-old boy. The boy became immune to smallpox, proving a person could be protected from smallpox without being directly exposed to it. Jenner’s method was recognized as the first scientific attempt to control an infectious disease. Breakthroughs in science after Jenner’s experiment led to the development of vaccines for rabies (1885), plague (1897), typhoid (1899), cholera (1911), diphtheria (1914), tuberculosis (1921), whooping cough (1940), polio (1955), measles (1963), mumps (1967) and rubella (1969).
The impact on public health is obvious. A little more than a century ago, the U.S. infant mortality rate was 20 percent and childhood mortality before the age of five was 20 percent. Many Americans knew the distinct sound of a child with whooping cough gasping for breath, and saw children with congenital defects caused by rubella or paralyzed by polio. Public health campaigns in the 1960s to require all children to be vaccinated succeeded spectacularly. Only a handful of children get diphtheria and rubella annually, and polio has been eradicated in the U.S. Among the top causes of death in the U.S. in 1900 were tuberculosis and diphtheria. Cancer and strokes had replaced them as of 2014.
The World Health Organization, the public health arm of the United Nations, in the 1960s made a push to eliminate most childhood diseases by vaccinating as much of the world as possible. One of the organization’s most successful achievements was the elimination of smallpox. The last naturally occurring case of smallpox occurred in Somalia in 1977.
Each type of vaccine is different in terms of composition and formula. The differences reflect not only the pathogen from which the vaccine is derived but also how the vaccine is used and how the body responds. Some vaccines are living, weakened versions of the pathogen. These are called “attenuated” vaccines. Measles, mumps, rubella and some polio vaccines are examples of attenuated vaccines.
Other vaccines are inactivated and consist of either dead bacteria or viruses – often referred to as whole-cell vaccines – or pieces of the microorganism. They are often given in multiple doses. Inactivated vaccines include some flu vaccines and those for rabies, whooping cough, tetanus, and hepatitis A and B.
Some vaccines are composed of an inactivated toxin extracted from the microorganism. Vaccines for diphtheria are an example. Researchers are also working on vaccines that use just portions of a pathogen’s DNA to stimulate immunity.
All 50 states and the District of Columbia require children to be vaccinated – typically for diphtheria, measles, rubella and polio – before they can attend public school, but some states allow exemptions for medical, religious or philosophical reasons.
Vaccines have worked so well that many people take them for granted. In the past two decades, a small but growing number of parents have been refusing vaccinations for their children due to fears that vaccines cause autism. The fears stem from a now discredited study that a British scientist said demonstrated an association between autism and vaccination. Countless studies have shown the safety of vaccines. Nevertheless, childhood vaccination rates have fallen. Measles outbreaks have flared, most recently in Minnesota, where unvaccinated children in a Somali community contracted the disease.
Some celebrities and prominent leaders have supported parental fears. President Trump invigorated the anti-vaccine movement when he suggested during a 2015 Republican presidential debate that there is a link between vaccines and autism. He said in January 2017 that he is considering creating a commission to look into vaccine safety. As of August 2017, no such commission had been created.
Vectors
Vectors are organisms that pass diseases from animals to humans or between humans. Many vectors are insects that suck infected blood from an animal or human host and then pass the disease-causing microorganism to other animals or humans. Mosquitos are the most widely known vector. Others include ticks, fleas, sand flies, freshwater snails and triatomine insects (like kissing bugs). Vector-borne diseases account for about 17 percent of infectious diseases globally and cause as many as 1 million deaths. The World Health Organization has a good primer here.
There are a number of vector-borne diseases in the U.S., including Zika, Chikungunya and West Nile, which are spread by mosquitos. Lyme disease is spread by ticks. In May 2018, the CDC warned that illnesses caused by the bites of fleas, mosquitos and ticks tripled to 640,000 between 2004 and 2016. Nine new germs spread by mosquitos and ticks were discovered or introduced since 2004, the agency said. Among the reasons they are on the rise in the U.S. are a warming climate, travel and global trade.
Other vector-borne diseases found globally (but rare in the U.S.) include malaria, dengue fever, Rift Valley fever and yellow fever (all spread by mosquitos), leishmaniasis (spread by sand flies), Crimean-Congo hemorrhagic fever and tularemia (spread by ticks), plague (spread by fleas), river blindness (spread by black flies) and typhus (spread by lice).
The best way to slow the spread of these diseases is to eliminate the organisms in the places where they breed. Mosquitos, for example, breed in standing water and thrive in places with poor sanitation. In the U.S., state and local vector control programs work to eliminate standing water and to spray areas where vectors may be breeding, but 84 percent of these programs lack at least one core competency, the CDC says. “Better control of mosquitos and ticks is needed to protect people from these costly and deadly diseases,” the CDC says.
The CDC also urges individuals to use insect repellent, especially at prime biting times (dawn and dusk) for mosquitos, to wear protective clothing and to check for ticks on one’s body and pets.