Nursing students training in the classroom, United States, 1930s.
The 1940 Provisions of the Sanitary Code of the City of New York and Regulations Relative to Reportable Diseases and Conditions and Control of Communicable Diseases, written by the City of New York’s Department of Health, notified medical personnel that “certain diseases and conditions must be reported immediately and others within twenty-four hours.” Some had to be reported in writing and some immediately “by telephone or messenger in addition to the written report.” This spring, students in History 236: Medicine and Disease in Modern Society were each assigned a disease from that list and charged to write a paper describing the biological and social experience of having that disease in the United States in the 1930s. It was not an easy assignment. Students had to search hard for primary sources that could give them the information they needed to make a persuasive argument, but many of them wrote excellent papers, and we are delighted to share some of them in this special issue.
First Typhoid Fever Inoculations.
By Andreas Van Dijck
Note: Essay 7 in a series, all from Dr. Amanda McVety’s Spring 2019 class on Medicine and Disease in Modern Society
As one of the City of New York Department of Health’s
designated “reportable diseases” in 1940, typhoid fever was viewed as a serious
health hazard by United States health officials, and anyone showing possible
symptoms of the disease was required to report those symptoms immediately. An
infectious and contagious bacterial disease that spreads via contaminated food
and water[1],
typhoid fever caused great suffering to those who contracted it in the 1930s. Having the disease was an arduous biological and social experience: it presented painful physical symptoms, and sufferers tended to be viewed with suspicion and contempt. In addition, outbreaks of typhoid
fever disproportionately impacted poor, rural communities across North America,
which exposed the growing economic divide between cities and rural areas.
Typhoid fever’s symptoms are painful and physical in nature: those who suffer from the disease come down with a prolonged fever, nausea, headaches, vomiting, abdominal pain, rashes, diarrhea, and loss of appetite[2]. In the 1930s, typhoid fever was sometimes confused with other diseases that caused persistent fevers[3], such as malaria and yellow fever, which indicates that diagnosing the disease sometimes proved difficult. Typhoid fever was by then scientifically understood to be transmitted by a bacterium, Salmonella Typhi, that can only be carried by humans[4]. It was also understood that
the disease was primarily spread via contaminated water, and when cities such
as New York City made improvements to their sewage systems and facilitated
easier access to clean water, instances of typhoid fever decreased
dramatically; the death rate from typhoid fever per 100,000 people in the United States dropped from 35.8 in 1900 to 4.9 by 1928[5]. In addition, several treatments existed for people affected by typhoid fever, including administering calomel, saline draught, and a spoonful of hot water for hydration, with mixed results[6].
Nonetheless, cases of typhoid fever continued to crop up across the country in the 1930s, particularly in rural and poor communities: 65 percent of typhoid outbreaks in the United States and 77.5 percent of those in Canada during that decade occurred in communities with a population of less than 5,000 people[7]. Thus, rural residents who did not have ready access to sanitation and clean water were the most likely to be affected by typhoid fever, a fact which also highlights the economic and
developmental disparities of North America in the 1930s. American cities with
over one million residents were noted to have nearly eliminated the disease by
1931 due to being better funded and having modern sewage systems[8].
Rural communities did not have the resources, funds, or expertise to update
their sewage systems and curb the spread of typhoid fever. In fact, a 1938
health report estimated that deaths from typhoid fever were 30 to 40 percent
higher than reported in Mississippi, a state where only around 30 percent of
residents had access to running water in the 1930s and where most residents
lived in towns of less than 1,000 people[9].
While outbreaks of typhoid fever
were more prevalent in rural towns, the disease still appeared in more affluent
areas as well, as in the case of “Typhoid Mary.” Mary Mallon was a poor Irish immigrant who worked as a cook for several wealthy New York families in the early 1900s, most of which saw members contract typhoid fever while employing Mary. By the
1930s, it was understood that Mary was a carrier of typhoid fever[10];
she did not present any symptoms but was still a host to the bacteria causing
the disease. As a carrier, Mary could still transmit the disease by handling
and then serving food or water, which is how most of the families she worked
for became ill. After a series of investigations, Mary was apprehended by authorities and forcibly quarantined[11]. This incident reveals how typhoid fever could still be an isolating social experience even if one was not suffering from the disease’s physical symptoms, and it also highlights how typhoid fever is a uniquely human affliction with human carriers and transmitters; Salmonella Typhi cannot be transmitted by animals, unlike the agents of most other diseases[12]. In addition, the story of Mary Mallon infecting the families she worked for
intensified prejudice against Irish immigrants, who were seen by some Americans
as “potentially dirty and hazardous”[13]. Because Mary Mallon was
one of the most notorious carriers of typhoid fever, she likely became the
image of a carrier in the eyes of the public, and that image was extended to
Irish immigrants as a whole. As a result, the incidence of typhoid fever and
the association of carriers with Mary Mallon further exposed prejudice against
Irish immigrants, which was already prevalent during this time.
While typhoid fever was a known entity by the 1930s, and officials knew how to prevent it, the disease persisted among the poorer parts of society. Considering typhoid fever as a biological and social experience in the 1930s is important because the disease exposed rifts in American society: poor, rural parts of the country were much more likely to experience outbreaks than bustling metropolitan areas were, which reflected the growing divide between urban and rural prospects in the United States. Indeed, cities benefited more from the Roaring Twenties and the technological advancements of the early 20th century than places like rural Mississippi did, and thus were able to limit the spread of the disease. The divide between urban and rural areas arguably still exists in the United States today, albeit in a different context, and examining how different parts of the country were impacted by typhoid fever in the 1930s helps expose the origins of that divide. Furthermore, the case of Mary Mallon shows how the outbreak of typhoid fever was used to justify prejudices against Irish immigrants, revealing how outbreaks of infectious disease can exacerbate existing social tensions and justify biases. While typhoid fever is not a major cause for concern today, the social problems of inequality and anti-immigrant sentiment still exist in the United States.
Andreas Van Dijck is a Junior Political Science Major and History minor from the Cleveland area.
[1] Centers for Disease Control and Prevention, “Typhoid Fever,” https://wwwnc.cdc.gov/travel/diseases/typhoid
[2] World Health Organization, “Typhoid,” https://www.who.int/immunization/diseases/typhoid/en/
[3] F. F. Russell, “The Prevention and Treatment of Typhoid Fever,” Boston Medical and Surgical Journal 164, no. 1 (1911), 1
Wounded Australian soldiers receiving tetanus antitoxin outside a medical dressing station. 1918, Australian War Memorial E05242, Campbell, Australia. From: Shanks, Dennis. How World War 1 changed global attitudes to war and infectious diseases. New York: The Lancet, 2014.
By Karley Carter
Note: Essay 6 in a series, all from Dr. Amanda McVety’s Spring 2019 class on Medicine and Disease in Modern Society
Tetanus has been a well-known disease for thousands of years, and its effects were devastating in eras when no treatment was known. Developments of the late nineteenth and early twentieth centuries revolutionized the way the disease was handled. While tetanus carried little social stigma due to its non-communicable nature, the fear of tetanus during wartime prior to the twentieth century was real. With the discovery of both the tetanus antitoxin and toxoid, tetanus transformed from a killer in war into little more than an afterthought in the minds of soldiers and the general population.
Tetanus has been noted throughout recorded history: documents describing tetanus symptoms have been found from 1500 BC in Ancient Egypt and are thought to have been copied from texts as early as 3000 BC.[1] While there was a general understanding that the disease came from something infecting an open wound, many ideas for treatment were not beneficial, such as early Chinese physicians needling patients above the ears around 300 BC, Hippocrates’ ideas in Ancient Greece of promoting sweating through drinking strong wines and being wrapped in oil-soaked cloths, and Renaissance ideas of covering the patient with manure.[2] The 19th century was revolutionary for tetanus research: the disease was first reproduced in animals in 1884, and pure cultures of tetanus bacillus were acquired soon after for study.[3] These studies led to Kitasato and Emil von Behring, among others, discovering the tetanus antitoxin in 1891, which greatly reduced deaths from tetanus after being administered in World War I.[4] In 1924, the first tetanus toxoid was developed; it was given to all U.S. soldiers prior to entering World War II and was eventually widely administered as the tetanus vaccine in the late 1940s.[5] National reporting of tetanus cases also began in the 1940s, allowing the decline in tetanus cases over the next half century to be tracked.[6]
Tetanus was relatively well understood in the early twentieth century. With the new discoveries made between the 1880s and 1920s, tetanus was known to be caused by the bacterium tetanus bacillus, an anaerobic organism that enters the body through subsurface wounds.[7] It was also known that tetanus was contracted through contamination of a wound with soil, whether via puncture wounds, wounds entering joints, or other subsurface wounds, such as surgical incision sites, that were not properly treated.[8] While the bacteria could be destroyed with antiseptics, it was known that they could not be destroyed in spore form, owing to the spores’ ability to survive a wide range of temperatures.[9] This understanding remains largely accurate today, with most discrepancies between the eras being small; for example, many articles of the early to mid-twentieth century refer to the organism as tetanus bacillus, with few calling it Clostridium tetani, as it is officially known today. There is also a wider description of causes today, as risks and dangers in society have changed, such as contraction through non-sterile needles in drug use, body piercing, and tattooing.[10] Recent articles also provide more information on the different kinds of tetanus (general, local, cephalic, and neonatal), describing the specifics of each as well as how common each one is.[11]
The experience of having tetanus is very painful and incessant. After contraction of the disease, the incubation period is around 2-21 days, with symptoms tending to start around the seventh or eighth day.[12] The first symptoms are spasms in the muscles near the location of the wound, or tightness in the jaw, after which the spasms spread throughout the body as the bacteria travel through the bloodstream.[13] Swallowing can become difficult, and stiffness and pain may occur in the muscles of the shoulders, neck, and back, with additional spasms possibly spreading to the muscles of the arms, legs, and abdomen.[14] There can be other symptoms too, including fever, sweating, high blood pressure, and rapid heart rate.[15] In some cases, the spasms can be so strong that they cause fractures and muscle tears, as well as spasms in the throat that cause difficulty breathing and can sometimes lead to brain damage.[16] These symptoms tend to lessen after around 17 days, but spasms can continue for three to four weeks, and in some cases recovery can take months.[17] The prognosis for the disease can be dire: twenty-five percent of people with the disease die if not properly treated, and around ten percent die even when properly treated, a figure that holds into the modern day.[18]
The treatments for tetanus created in the early twentieth century completely altered the prevalence of the disease. The discovery of the tetanus antitoxin transformed its effects in war, soldiers in battle having previously been the disease’s primary victims. In the Civil War, one of every 500 men wounded in battle died after becoming infected with tetanus.[19] In World War I, fewer than one case occurred for every 5,000 wounded, because every wounded soldier in the U.S. troops received a prophylactic injection of the tetanus antitoxin.[20] To create the antitoxin that was distributed, tetanus toxin was injected into horses, which form antitoxins to protect themselves from the poison.[21] Serum containing the antitoxin could then be obtained from the horse and used for treatment in humans.[22] This was the primary way to treat tetanus until the development of the tetanus toxoid in 1924.[23] While it was not commonly used in the thirties, the toxoid was administered to all U.S. soldiers in World War II to protect them from contracting the disease.[24] It was then used for the vaccine administered to the public, most commonly combined with the diphtheria and pertussis vaccines as the DTP vaccine.[25]
Unlike many communicable diseases, tetanus did not carry a strong social stigma, as it was not contagious from person to person. Although there was no stigma, there was still fear of tetanus until the antitoxin and toxoid became widely available: for soldiers in battle, in cases of surgical procedures gone wrong, and even on the Fourth of July.[26] Tetanus cases on the Fourth of July were extremely prevalent due to injuries from blank cartridges, firecrackers, and other holiday festivities.[27] The number of tetanus deaths earned the holiday the nickname “the Bloody Fourth.”[28] While this was devastating in the 1800s due to the lack of treatment, articles of the 1900s urged those injured on the Fourth to seek treatment to prevent the onset of tetanus, eventually reducing the number of deaths.[29] In addition, public health officials imposed little regulation on tetanus because it was not contagious. In the sanitary code from New York City in 1940, tetanus was mentioned as a communicable disease, but there were no specific regulations for it, unlike for the majority of other communicable diseases.[30]
The discovery of the tetanus antitoxin and toxoid transformed tetanus from a devastating presence in war into one less fear in the minds of soldiers and the general population. This was tremendously helpful during both World Wars, as it greatly reduced deaths and improved morale for both soldiers and their families. The scientific advancements of the late nineteenth and early twentieth centuries made this disease extremely uncommon in places where vaccines are easily accessible, helping to shape the health system we know today.
Karley Carter is a freshman majoring in Architecture with a minor in History.
Bibliography
Chalian, William. “An Essay on the History of Lockjaw.” Bulletin of the History of Medicine 8, no. 2. Baltimore: The Johns Hopkins University Press, 1940. JSTOR.
City of New York’s Department of Health. Provisions of the Sanitary Code of the City of New York and Regulations Relative to Reportable Diseases and Conditions and Control of Communicable Diseases. Washington, DC, 1940.
Coleman, George E. “Investigating Tetanus (Lockjaw).” The Scientific Monthly 31, no. 6. Washington, DC: American Association for the Advancement of Science, 1930.
Faulkner, Amanda E., and Tejpratap S. P. Tiwar. “Manual for the Surveillance of Vaccine-Preventable Diseases.” Centers for Disease Control and Prevention. November 17, 2017. https://www.cdc.gov/vaccines/pubs/surv-manual/chpt16-tetanus.html
Huber, John B. “Tetanus and the Glorious Fourth.” Scientific American 101, no. 1. New York: Scientific American, a division of Nature America, Inc., 1909. JSTOR.
Krantz, John C. Fighting Disease with Drugs. Baltimore: Williams & Wilkins Co., 1931.
Lerner, Brenda Wilmoth, and K. Lee Lerner. Infectious Diseases: In Context. Detroit: Gale, 2008. Gale Virtual Reference Library.
Spaeth, Ralph. “Tetanus.” The American Journal of Nursing 42, no. 7. New York: Lippincott Williams & Wilkins, 1942. JSTOR.
United States Surgeon General’s Office. FM 21-10 Military Sanitation and First Aid. Washington, 1940.
Wounded Australian soldiers receiving tetanus antitoxin outside a medical dressing station. 1918, Australian War Memorial E05242, Campbell, Australia. From: Shanks, Dennis. How World War 1 changed global attitudes to war and infectious diseases. New York: The Lancet, 2014.
[1] William Chalian, “An Essay on the History of Lockjaw,” Bulletin of the History of Medicine 8, no. 2 (1940): 173. JSTOR.
[5] Amanda E. Faulkner and Tejpratap S. P. Tiwar, “Manual for the Surveillance of Vaccine-Preventable Diseases,” Centers for Disease Control and Prevention.
[7] United States Surgeon General’s Office, FM 21-10 Military Sanitation and First Aid (Washington, 1940), 115. https://archive.org/details/FM2110/page/n119
[30] City of New York’s Department of Health, Provisions of the Sanitary Code of the City of New York and Regulations Relative to Reportable Diseases and Conditions and Control of Communicable Diseases (Washington, D.C., 1940).
Hookworm treatment at the Chapel Hill School, Alabama, 1923
By Matt Narbutis
Note: Essay 5 in a series, all from Dr. Amanda McVety’s Spring 2019 class on Medicine and Disease in Modern Society
Imagine yourself as a child. You are trying to live a normal life, but a mysterious organism inside of you regularly manifests both physical and mental problems. Almost half of your friends and family suffer similarly, yet for the most part, no one is even talking about it, let alone trying anything to get rid of it. You may end up free from this condition, but most likely it will be present with you until your death. You are not living in some sort of apocalyptic disease-ridden world; you are one of the millions of Americans who suffered from Hookworm at the turn of the twentieth century. Though having Hookworms was rarely fatal, or even significantly impactful on one’s life, the experience of having the parasitic disease in the 1930s brought physical discomfort and social stigmatization, which were treated with archaic medicines and often vague preventative measures.
In the 1930s, Ancylostomiasis or, as it is still commonly known today, Hookworm, was a disease scientists and health practitioners fought and researched with regularity. The disease was known to be a parasitic infection of the body caused by millimeters-long worms. Throughout the first two decades of the twentieth century, some adamant scientists believed Hookworm infections were caused either by the consumption of contaminated meat or by hereditary transmission. However, by the 1930s it was nearly universally accepted that the infection entered the body almost exclusively through skin penetration, with rare cases stemming from ingesting Hookworm-contaminated food[1].
In the 1930s, the known history of the disease was relatively comprehensive. The disease was first documented in 1838, when an Italian physician performed an autopsy on a peasant woman. By the mid-1800s, the disease had been documented across the world, and known cases existed on nearly every continent. Hookworm’s presence in the United States was first recognized in 1902, though the condition had most likely been in the country for centuries before. Around the turn of the century, many Americans considered the disease to be nothing more than a myth. This line of thinking, however, was halted in the 1910s, when various health organizations and the federal government recognized Hookworms to be a prevalent parasitic infection within the country. By the 1920s, the disease was thought to have largely disappeared from the U.S. as a result of aggressive treatment and public education.[2]
In
the United States one group suffered more from the disease than any other:
rural Southerners. Hookworm infections
were so rampant in the American South that estimates concluded roughly “30
percent of the rural southern population”[3] was
afflicted by the disease. Among rural
Southerners, those who had frequent interaction with soils and sand, such as
farmers and children, were most likely to have the condition.
The physical experience of having Hookworms was a wearying one. Those suffering from the infection experienced anemia, sluggishness, “Delayed pilosity, aches, dizziness, epigastric tenderness, lassitude, insomnia, constipation, irregular menses, [and] frigidity.”[4] Despite these symptoms, infections were rarely lethal, with the few actual Hookworm-caused deaths primarily a result of anemia in children. This gave rise to the notion among many that Hookworms did not necessitate treatment, as the infection was perceived to be an inconvenient condition rather than a possibly life-threatening one.
The social experiences of having Hookworms were similar to the physical ones: they ranged from uncomfortable to debilitating. Those with Hookworms were stigmatized and often seen as impoverished, low-class, and uneducated due to the disease’s prevalence in the rural southern states. The children who suffered from this disease were thought to be “dull, apathetic, unable to concentrate” and, in extreme cases, “mentally retarded” due to their infections[5]. Those who were afflicted by Hookworms and resided in the South generally had easier social experiences than sufferers in the Northern states, who were even more heavily stigmatized. This makes sense given the disease’s relatively rare occurrence in the North compared to the South.
Unfortunately, both the treatments given to sufferers and the preventative measures taken were relatively archaic in the 1930s. Carbon tetrachloride, a sweet-smelling, volatile liquid closely related to chloroform, which had previously been used primarily as an industrial cleaner, was the standard of care in treating Hookworm infections. Though it was effective in treating patients afflicted by the condition, it could be toxic and cause damage to the nervous system, liver, and kidneys in high dosages. Another common treatment was Chenopodium, a flowering plant that was made into a liquid. However, those who received this treatment often experienced lethargy, and the dose had to be administered multiple times before it had any positive effect, thereby drawing out the side effects[6]; this made carbon tetrachloride, which required only one dose, the preferred choice. The preventive measures that were recommended to combat the spread of the disease were fairly vague. Among them were “Proper disposal of human excreta” and the recommendation to “implement sanitary measures.”[7] These non-specific recommendations make sense given that Hookworms had the potential to live in nearly any soil or sand, making specific preventive measures nearly impossible.
Public discourse regarding the disease changed dramatically over time. At the turn of the twentieth century, a practitioner’s suggestion that a patient had Hookworms often resulted in the patient being offended. This is no surprise given the negative connotations and social stigma the disease carried. However, generous funding from the Rockefeller Foundation for education and treatment of the disease greatly expanded the discourse and the legitimacy with which people spoke of it. Additionally, through the help of travelling Hookworm educators, who often spoke at schools, the disease was discussed within communities even more.[8]
In the Southern United States, where the disease was the most prevalent, public health officials did not enforce specific health requirements for the disease, as they lacked the resources to do so. Trying to implement specific guidelines for how cases of the disease would be reported to public health officials and managed by physicians would have been impossible, given how frequently the condition presented. However, in Northern cities such as New York City, public health officials enforced much stricter regulations, due to Hookworm’s lack of prevalence there. Those with Hookworms were to be removed from hospitals unless they could be properly isolated and quarantined, and their movement was restricted so that they would not spread the disease. In addition, physicians attending to cases of the disease had to file official reports noting them or else face heavy penalties. These regulations, combined with the prevailing environmental conditions, helped limit the prevalence of Hookworms in the North.[9]
The scientific and historical understandings of Hookworms in the 1930s were quite similar to today’s. The developments and insights made in that era laid the foundation for current research on the condition. Presently, it is common knowledge that Hookworm infections in humans are caused by the nematode parasites Necator americanus and Ancylostoma duodenale, which are transmitted through contact with contaminated soil. The worms subsequently migrate to the lungs, where productive coughing sends them into the gastrointestinal tract; there they can cause intestinal blood loss and, in some cases, anemia. Historically speaking, it is now known that in the decades leading up to the 1910s, when education and treatment began to take place, Hookworms were, and most likely had been for decades prior, an epidemic in the American South. It is also accepted as fact that the treatments of the early 20th century did not come nearly as close to eradicating Hookworms as previously thought. Though much has changed since the 1930s, for the nearly 700 million people who suffer from Hookworm today, the feelings of physical discomfort and social stigmatization are akin to those experienced by Americans in the 1930s.[10]
Matt Narbutis is a second year student majoring in History, with a co-major in premedical studies. Outside of class he participates in cell signaling and cancer related research.
Bibliography
“The Life-History Of The Hookworm.” The British Medical Journal 1, no. 2670 (1912): 499-500. http://www.jstor.org/stable/25296276.
Hotez, Peter J., Simon Brooker, Jeffrey M. Bethony, Maria Elena Bottazzi, Alex Loukas, and Shuhua Xiao. “Hookworm Infection.” New England Journal of Medicine 351, no. 8 (2004): 799-807.
New York (N.Y.) Department of Health. “Provisions of the Sanitary Code of the City of New York and Regulations Relative to Reportable Diseases and Conditions and Control of Communicable Diseases.” (1940): 13-42.
Nicholls, Lucius, and G. G. Hampton. “Treatment Of Human Hookworm Infection With Carbon Tetrachloride.” The British Medical Journal 2, no. 3209 (1922): 8-11. http://www.jstor.org/stable/20420412.
Power, Helen J. “History of Hookworm.” In eLS. Chichester: John Wiley & Sons, 2001. http://www.els.net. doi:10.1038/npg.els.0003582.
Smillie, W. G., and Cassie R. Spencer. “Mental Retardation in School Children Infested with Hookworms.” Journal of Educational Psychology 17, no. 5 (1926): 314.
Stiles, C. W. “Decrease of Hookworm Disease in the United States.” Public Health Reports (1896-1970) 45, no. 31 (1930): 1763-81.
Stiles, Ch. Wardell. “Some Practical Considerations in Regard to Control of Hookworm Disease in the United States under Present Conditions.” The Journal of Parasitology 18, no. 3 (1932): 169-72.
“The Rockefeller Foundation.” The British Medical Journal 2, no. 3493 (1927): 1154-55. http://www.jstor.org/stable/25327296.
[1] Ch. Wardell Stiles, “Some Practical Considerations in Regard to Control of Hookworm Disease in the United States under Present Conditions,” The Journal of Parasitology 18, no. 3 (1932): 80.
[2] C. W. Stiles, “Decrease of Hookworm Disease in the United States,” Public Health Reports (1896-1970) 45, no. 31 (1930): 1763-81, doi:10.2307/4579737; “The Life-History Of The Hookworm,” The British Medical Journal 1, no. 2670 (1912): 499-500; Ch. Wardell Stiles (1932), “Some Practical Considerations in Regard to Control of Hookworm Disease in the United States under Present Conditions.”
[3] C. W. Stiles, “Decrease of Hookworm Disease in the United States,” Public Health Reports (1896-1970) 45, no. 31 (1930): 1763.
[4] C. W. Stiles (1930), “Decrease of Hookworm Disease in the United States,” 1770.
[5] W. G. Smillie and Cassie R. Spencer, “Mental Retardation in School Children Infested with Hookworms,” Journal of Educational Psychology 17, no. 5 (1926): 314.
[6] Lucius Nicholls and G. G. Hampton, “Treatment Of Human Hookworm Infection With Carbon Tetrachloride,” The British Medical Journal 2, no. 3209 (1922): 8-9.
[7] “The Prevention and Cure of Hookworm,” Scientific American 120, no. 14 (1919): 334.
[8] “The Rockefeller Foundation,” The British Medical Journal 2, no. 3493 (1927): 1154.
[9] New York (N.Y.) Department of Health, “Provisions of the Sanitary Code of the City of New York and Regulations Relative to Reportable Diseases and Conditions and Control of Communicable Diseases” (1940): 28.
[10] Peter J. Hotez et al., “Hookworm Infection,” New England Journal of Medicine 351, no. 8 (2004): 799-807; Helen J. Power, “History of Hookworm,” in eLS (Chichester: John Wiley & Sons, 2001).
Note: Essay 4 in a series, all from Dr. Amanda McVety’s Spring 2019 class on Medicine and Disease in Modern Society
The air is warm and muggy. A faint buzzing echoes in the air, and neck hairs tingle. The acrid smell of smoke fills the nostrils as bark nests are burned in an attempt to ward off an impending illness: malaria. This is what people may have experienced in the 1930s American southeast, where the disease devastated many towns near the waters where mosquitos flourished. Biologically, the disease was understood to be a cycle of chills and fevers, a parasitic infection caused by the bite of an Anopheles mosquito or the drinking of infected waters where the mosquitos resided and bred. Socially, many people lived in fear because it was difficult to be sure whether or not a given mosquito or water source was infectious. There was not a classist or isolationist attitude associated with malaria, but there was a geographic and regional predisposition surrounding who contracted the disease.
Between 1930 and 1940, the majority of what people knew about malaria came from abroad, because that was where the disease originated and primarily attacked. Africa, India, and South Asia were common places to contract malaria, and people were infected in droves, resulting in hundreds of thousands of deaths.[1] When malaria first reached America, the government questioned whether or not there should be a quarantine, because officials were unsure whether the disease was contagious.[2] Soon, doctors and citizens alike knew that the disease was spread through the bite of the Anopheles mosquito, but it was difficult for the average person to differentiate between it and the common American Culex variety. The Anopheles mosquito’s body is much narrower and sharper, and only this species carries the parasites that cause infection.[3] People who drank water from sources where these insects mated were also believed to be at risk of contracting the disease, as it was thought that the parasites could be secreted into the water during mating.
Once bitten, a victim begins to feel chills, to which the body responds with feverishness. This sequence repeats, and often induces nausea, vomiting, and jaundice, or yellowing of the skin. The main reason people die from malaria is these repeated, exhausting departures from homeostasis, which cannot be sustained and which wear out the immune system.[4] Early 20th-century doctors would primarily prescribe bed rest for 10-15 days, which is usually how long recovery took if the patient survived. Many victims, however, could face up to five years of relapses, and at the time doctors had no explanation for why this occurred in some cases and not others.[5] The peak mortality rate of malaria in America was 3.3 deaths per 100,000 persons, in 1933.[6] Though the death rate was not massive, it was higher than typical, and this fact, combined with the unavoidable reality of often being outside, terrified the public.
The fact that there was no real cure did not help curb this paranoia. Treatment for malaria was largely limited to taking quinine, a substance extracted from the bark of cinchona trees that is also found in tonic water. It was first isolated from the South American bark in 1820, when bark was a main source of medicinal products for various diseases.[7] At this time, it was recommended in encapsulated pills, since it absorbed better that way than through an injection. The recommended dosage was 30 grains per day to break the chills, then 10 grains daily at bedtime to break the attack, though relapse was still possible.[8] Plasmochin was also effective for killing the parasite, but not for alleviating symptoms. Taking quinine daily as a preventive was not advised; it was recommended only once the disease had been contracted. Throughout the history of malaria, drugs like quinine were often misused as preventives instead of as symptomatic relief.[9]
In order to prevent malaria before it began, many infrastructural precautions were taken in areas of the southeast, such as Tennessee, Mississippi, and Alabama, where it was most prevalent. Water reservoirs were seized by health departments and inspected, and persons living within a two-mile radius were tested regularly. Special bureaus were commissioned explicitly for the investigation and prevention of malaria, most notably by the TVA in Tennessee.[10] Mosquito nets covered many people as they went outside, and were also placed over food and other high-risk items. This became an issue, however, because the majority of malaria casualties were among children, those with outdoor professions, and persons living and working in rural areas. Water treatments and net coverings were of little use to those whose livelihood required being out among the marshes and cotton fields. One solution proposed at the time was to grow legume plants, such as beans and alfalfa, as it had been observed in other countries that such crops somehow fended off the mosquitoes.[11]
Now, scientists know about many different factors that contribute to who gets malaria, why, and how to prevent it. Several antimalarial drugs can treat it, quinine still being one of them. Other drugs, including chloroquine, doxycycline, and mefloquine, are used to treat the disease as well, sometimes in conjunction with quinine.[12] Much of this depends on the type of parasite the mosquito hosts and infects the person with, as well as other illnesses the patient may have, allergies, area of contraction, and so on. In addition, a much wider array of insecticides and bug zappers is available to protect people day-to-day from these potentially deadly insects. Doctors are also aware of certain genes carried by people hailing from Africa and parts of the Middle East, which alter their blood cells in a way that protects them from malaria (as with sickle cell trait).[13]
Altogether, malaria is and was a lasting, horrific disease that still affects millions of people today. Even though scientists know much more about it at the molecular, chemical, and human levels, it still kills and is still being investigated. No permanent treatment or vaccine yet exists, and many children and adults suffer immensely even now, especially in underfunded and underdeveloped countries. By looking at how people experienced the disease earlier in history, researchers can compare the information they have now and perhaps learn from both mistakes and advancements, in order to try to eradicate the menace that is malaria.
Bibliography
Copeland, Royal S., M.D. “Guarding Your Health: Control of Malaria.” The Cincinnati Enquirer, June 27, 1931. Accessed March 1, 2019. https://search.proquest.com/docview/1882058259/1067A9A0CC8E46C4PQ/5?accountid=12434.
Evans, Dr. W. A., M.D. “How To Keep Well: Treatment for Control of Malaria.” The Washington Post (Washington, D.C.), October 29, 1932. Accessed February 28, 2019. https://search.proquest.com/hnpwashingtonpost/docview/150244363/6D227B296E294483PQ/3?accountid=12434.
Krysto, Theo. “Can the World Banish Malaria?” Scientific American 142 (April 1930): 270-72. Accessed February 28, 2019. https://web.b.ebscohost.com/ehost/detail/detail?vid=0&sid=9c050e8e-219b-490f-a767-6a23adca6093%40pdc-v-sessmgr05&bdata=JnNpdGU9ZWhvc3QtbGl2ZSZzY29wZT1zaXRl#AN=514363711&db=rgr
“Quinine, an Old Anti-malarial Drug in a Modern World: Role in the Treatment of Malaria.” Malaria Journal, via US National Library of Medicine, May 24, 2011. Accessed April 19, 2019. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3121651/
“Malaria: MedlinePlus Medical Encyclopedia.” MedlinePlus. January 28, 2019. Accessed March 01, 2019. https://medlineplus.gov/ency/article/000621.htm.
“Mighty Malaria.” Time Magazine, January 14, 1935. Accessed March 1, 2019. https://web.b.ebscohost.com/ehost/detail/detail?vid=11&sid=51785a60-de75-41e4-b5cf-c234fa1fdf0a@pdc-v-sessmgr01&bdata=JnNpdGU9ZWhvc3QtbGl2ZSZzY29wZT1zaXRl#AN=54805106&db=a9h.
Snowden, Frank M. The Global Challenge of Malaria: Past Lessons and Future Prospects. New Jersey: World Scientific, 2014.
U.S. Congress. Senate. Committee on Public Health and National Quarantine. Malaria and Typhoid Fever: Hearings before the United States Senate Committee on Public Health and National Quarantine, Sixty-Third Congress, Second Session, on Mar. 5, 6, 1914. 63rd Cong., 2d sess. Washington: U.S. G.P.O., 1914.
U.S. Public Health Service. American Red Cross. “Quinine Kills Malaria Germs.” Library of Congress, September 9, 1920. https://www.loc.gov/item/2017677870/
[1] Frank Snowden, The Global Challenge of Malaria: Past Lessons and Future Prospects (New Jersey: World Scientific, 2014), 29-30.
[2] US Congress, Malaria and Typhoid Fever: Hearings before the United States Senate Committee on Public Health and National Quarantine (Washington, U.S., 1914).
[3] Theo Krysto, “Can the World Banish Malaria?” Scientific American 142 (1930): 270-72.
[4] “Malaria: MedlinePlus Medical Encyclopedia” (MedlinePlus, 2019).
[5] Royal Copeland, “Guarding Your Health: Control of Malaria,” The Cincinnati Enquirer, June 27, 1931.
Children’s ward with nurses and visitors in a nursing institute in Java
By Gretchen Blackwell
Note: Essay 3 in a series, all from Dr. Amanda McVety’s Spring 2019 class on Medicine and Disease in Modern Society
Beginning in 1894 and lasting until around 1950, a pandemic of plague spread across much of the developing world, wreaking havoc. This pandemic was the third outbreak of its kind, harkening back to the days of the Plague of Justinian, which occurred around 541-542 AD, and the Black Death, which desolated much of the world between 1347 and 1500 AD; both killed millions and reshaped the political and social spheres of the world. When the plague spread in the early 1900s, clinicians, politicians, and researchers were no more prepared for its destruction than those of outbreaks before germ theory was developed. Despite the various methods of prevention, cure, and treatment employed by public health officials, the outbreak worsened drastically in colonized countries, where the cities were overcrowded and unsanitary. The actions of officials were met with much protest and resistance from citizens, as many did not trust Western medical interventions. Though the third outbreak of plague ravaged the majority of the world, the rich and powerful West was unscathed. Consequently, the memory of this outbreak faded in the minds of the West; yet its devastation marked the beginning of a clear inequality in health care between the countries affected by the plague and the developed world.
The plague, a contagion caused by Yersinia pestis, is transmitted by rodents and their fleas. There are three types of plague: bubonic, septicemic, and pneumonic, the last being the most deadly. A person can catch the plague from being bitten by an infected flea, from handling carcasses of rodents with the disease or, in the case of pneumonic plague, from Y. pestis particles transmitted person to person. The plague kills as its bacteria reproduce rapidly and overload the immune system until the organs fail. Symptoms occur around six days after infection and vary depending on the type of plague.[1] In bubonic plague infections, the patient’s lymph nodes swell, and the patient experiences fever, aches, and chills. When infected with septicemic plague, the patient develops fever, chills, shock, and bleeding under the skin that causes the blackening of skin tissue, a characteristic typical of this type of the disease. When either bubonic or septicemic plague is left untreated, it can develop into pneumonic plague, which causes pneumonia.[2] There is no vaccine available for the plague; however, if it is treated with antibiotics, the patient has an increased rate of survival.[3] If the plague is left untreated, the patient has a 50 percent survival rate.[4]
During the third outbreak of plague, no cure or treatment was known, and there was a lack of understanding of the exact mode of transmission. The theory that rats spread the plague developed during this time; however, many accepted that human-to-human transmission occurred in every case of the plague, and some even attributed the outbreak to miasma theory.[5] Ultimately, public health officials resorted to the methods used in past pandemics. These methods included quarantining victims and burning their homes and belongings, forcing them to relocate, in order to stop the human-to-human transmission they assumed was causing infection. Additionally, officials would round up and poison rats in order to control the spread of disease. Further, vaccination, a method that has since been shown to be very dangerous, was compulsory for citizens of Senegal.[6] None of these efforts did much, however, to fight the pandemic.
Many actions taken by health officials during the third plague pandemic were met with resistance in colonized nations, as the officials were ignorant of or apathetic towards the cultural and religious traditions of the colonized people. For instance, in India, citizens reacted with violent protests when forced to conform to Western medical practices, leading “to the death of four Britons, and helped accelerate the growth of Indian nationalism.”[7] Further, in Bombay in 1936, it was reported that, after objections from citizens, officials would stop the trapping of rats because of religious beliefs.[8] The title of an article from the Sunday Times of London read “Plague Preferred,” showing a clear disregard for the values of the society that Britain was occupying. In the West, it was thought that only the ignorant caught the plague. In fact, when there was an outbreak in Scotland, officials were embarrassed by its presence.[9] It was generally accepted that backwardness and uncleanliness caused the plague. A medical professional in the American Journal of Public Health and the Nation’s Health claimed in 1934 that it was “ignorance and fear” that worsened the fatality of the disease.[10] In India, this became “an excuse for letting the plague epidemic… burn out”.[11] Officials decided to stop intervention and let the disease run its course, resulting in 10 million deaths in the country.
By the end of the outbreak around 1950, nearly 15 million were dead, primarily in the port cities of India, China, and other Asian countries. There were many deaths in Africa, South America, and Australia as well, but, comparatively, Europe and North America experienced very few casualties. The breeding ground for plague epidemics is overcrowded and unsanitary spaces, a description that fit many seaports in the developing world during the early 1900s. The threat of death by the plague struck so much panic into citizens of India that two men from Calcutta were sentenced to death for murdering a man with a “bacilli serum” containing the plague microbe.[12] In comparison, the West had a developed public health system that enforced sanitary codes and prevention efforts. The New York sanitary code from 1940 lists the rules and regulations to be followed in the case of plague, including notifying authorities, isolating the patient and the patient’s family, and quarantining the patient’s home.[13] Ultimately, these measures led to the disparate effects of the plague pandemic. An article in The Science News-Letter from 1938 described “the horrors of the plague of the Orient” as being far off and distant from the minds of Americans.[14] Further, an article from The Sunday Times discusses the public health efforts to prevent infection and how the threat of plague had been “happily” reduced in Britain because of them.[15] While the majority of the world was being ravaged by the plague, the disease was barely on the radar of Westerners.
The disparities in public health interventions that existed during the third plague pandemic led to the death of millions from what was, as the low mortality in Western countries shows, a preventable disease. While North America and Europe were barely touched by the disease as a result of the regulations and precautions set in place by their bureaucracies, poor and often colonized countries such as India and China experienced an unbridled pandemic that struck fear into their populations and left the countries in political and social turmoil. This sharp contrast between formerly colonized countries and the Western world is still evident today. According to the World Health Organization, as of 2017 “nearly 9 million children under the age of five die every year” and “around 70% of these early child deaths are due to conditions that could be prevented or treated”.[16] Most of these deaths are concentrated in developing countries, the same regions ravaged by the plague around 100 years ago.
Gretchen Blackwell is a freshman from Huron, Ohio planning on majoring in history and political science with a minor in computer science.
Works Cited
Centers for Disease Control and Prevention. “CDC Plague | Frequently Asked Questions (FAQ) About Plague.” Centers for Disease Control and Prevention, 2018. https://emergency.cdc.gov/agent/plague/faq.asp.
Echenberg, Myron. “Pestis Redux: The Initial Years of the Third Bubonic Plague Pandemic, 1894-1901.” Journal of World History 13, no. 2 (2002a): 429-49. http://www.jstor.org/stable/20078978.
Echenberg, Myron J. Black Death, White Medicine: Bubonic Plague and the Politics of Health in Colonial Senegal, 1914-1945. Oxford: James Currey, 2002b.
Larkey, Sanford Vincent. “Public Health in Tudor England.” American Journal of Public Health 24 (November 1934): 1099-1102. doi:10.2105/AJPH.24.11.1099.
“Murder by Germs.” Sunday Times, 1935, p. 21. The Sunday Times Digital Archive, http://tinyurl.galegroup.com/tinyurl/9M3dZ9. Accessed 4 Mar. 2019.
Our Agricultural Correspondent. “Plague Peril from Rats.” Sunday Times, 1937, p. 31. The Sunday Times Digital Archive, http://tinyurl.galegroup.com/tinyurl/9M3Aq2.
[1] Myron Echenberg, “Pestis Redux: The Initial Years of the Third Bubonic Plague Pandemic, 1894-1901,” Journal of World History 13, no. 2 (2002a): 429-49, http://www.jstor.org/stable/20078978, 435.
[2] U.S. National Library of Medicine, “Plague” (MedlinePlus, 2018), https://medlineplus.gov/plague.html.
[3] Centers for Disease Control and Prevention, “CDC Plague | Frequently Asked Questions (FAQ) About Plague” (Centers for Disease Control and Prevention, 2018), https://emergency.cdc.gov/agent/plague/faq.asp.
[6] Myron J. Echenberg, Black Death, White Medicine: Bubonic Plague and the Politics of Health in Colonial Senegal, 1914-1945 (Oxford: James Currey, 2002b), 103.
[8] Our Own Correspondent, “Indian City Not to Trap Rats,” Sunday Times, 1936, p. 24, The Sunday Times Digital Archive, http://tinyurl.galegroup.com/tinyurl/9Lzbq6.
[10] Sanford Vincent Larkey, “Public Health in Tudor England,” American Journal of Public Health 24 (November 1934): 1099-1102, doi:10.2105/AJPH.24.11.1099, 1102.
[12] “Murder by Germs,” Sunday Times, 1935, p. 21, The Sunday Times Digital Archive, http://tinyurl.galegroup.com/tinyurl/9M3dZ9, accessed 4 Mar. 2019.
[13] “Provisions of the Sanitary Code of the City of New York and Regulations Relative to Reportable Diseases and Conditions and Control of Communicable Diseases” (Department of Health, 1940).
[14] Jane Stafford, “Death Rides a Rat” (Science News Letter, 1938), 134-35, doi:10.2307/3914747.
[15] Our Agricultural Correspondent, “Plague Peril from Rats,” Sunday Times, 1937, p. 31, The Sunday Times Digital Archive, http://tinyurl.galegroup.com/tinyurl/9M3Aq2.
[16] World Health Organization, “Child Mortality” (World Health Organization, 2011), https://www.who.int/pmnch/media/press_materials/fs/fs_mdg4_childmortality/en/.
1996 series 200 Deutsche Mark banknote featuring Dr. Paul Ehrlich
By Ashlee Mosley
Note: Essay 2 in a series, all from Dr. Amanda McVety’s Spring 2019 class on Medicine and Disease in Modern Society
In the 1930’s syphilis was known as a sexually transmitted
disease. Even then syphilis was a
preventable disease, but it still caused a worldwide panic. Syphilis has been referred to as the “third
great plague”, due to its significance in affecting the population all around
the world. The symptoms have stayed the
same within 90 years. Sores or lesions
all over the body which vary in size and placement, they are usually
painless. Treatment and the social
stigma has changed within this time frame.
In the past 90 years, the social perception of syphilis changed due to a
shift in social acceptance and scientific understanding of the disease which
resulted in more effective treatment.
In the 1930s, syphilis was an STD caused by an organism called a spirochete or treponeme; which one caused syphilis was long debated. Schaudinn, unable to determine the membrane that characterized a spirochete, suggested the name Treponema pallidum, though the organism was also called Spirochaeta pallida during this time. The organism is delicate and can be killed by the mildest of antiseptics and by drying.[1] During this time, people did know that syphilis was preventable, and curable if caught and handled in time.[2]
Syphilis today is understood as a systemic disease caused by the spirochaete Treponema pallidum. It can be transmitted the same ways as in the 1930s: sexual contact, blood transfusion, or from mother to fetus in utero. Syphilis is broken up into four stages: primary, secondary, and early latent, which are the early stages, and late latent syphilis. To classify cases, infection of less than two years is considered early latent, while more than two years without clinical evidence is referred to as late syphilis.[3]
Syphilis was believed to be contracted not just from sexual acts but in other, innocent ways. People thought that by using contaminated and dirty dishes and utensils they would contract syphilis; infected money and simple kissing were also thought to spread it.[4] Other works claimed that prostitution was a main driver of the spread of syphilis. Since prostitution was so widespread, people believed that everyone who was a prostitute or was with a prostitute had syphilis. Everyone was at risk for syphilis: men, women, children, and even a fetus still in utero. Children born with syphilis were more likely to be physically and mentally handicapped for the rest of their lives.[5]
To determine whether someone had syphilis, doctors would administer a blood test. Treatment for syphilis was handled only at home until Dr. Ehrlich came up with his “bullets.” The new treatment required administration by a trained professional at the hospital, delivering Dr. Ehrlich’s “bullets” through an IV, but patients could be discharged after a week. Patients were kept on a continuous drip for five days, receiving about ten quarts of the solution; poorer, malnourished patients would gain up to ten pounds during the week of care.[6] Another type of treatment was the use of arsenic, bismuth, and mercury, nephrotoxic drugs that can cause irritation to the kidneys, as well as arsphenamines, which can cause severe damage to the liver.[7]
Today, to determine if someone has syphilis, doctors administer a blood test or test the cerebrospinal fluid. The treatment for syphilis today is a single dose of penicillin if the disease is caught early; in all such cases, syphilis is curable. The penicillin stops the sexually transmitted disease from progressing, and this works for people who have been infected for less than a year. For pregnant patients, doctors will recommend only penicillin, and the newborn child must receive antibiotic treatment as well. Follow-up care consists of periodic blood tests to make sure the patient is responding well to the dosage of penicillin the doctor prescribed. People should avoid sex until their sores have healed, and even then they should always use condoms when engaging in sexual activity, because even after being cured they can get syphilis again.[8] Though syphilis can be cured, treatment cannot reverse any damage that has already been done.
Public health officials proposed several actions to help curb the spread of syphilis: tighter control of prostitution; restricting the marriage of anyone who had syphilis and was not receiving treatment; punishing people who did not receive treatment; providing good treatment at the expense of the state; earlier detection of syphilis; and the reporting of all cases of sexual disease.2
The social stigma around syphilis has changed in that the disease is more accepted. In the 1930s, Kaempffert wrote, “Nice people don’t talk about syphilis, nice people don’t have syphilis and nice people shouldn’t do anything about having syphilis.”4 At first, works about this disease were only for professionals; they were published and put into libraries until many people were reading and talking about the disease. During this time, if someone got syphilis, it was as if they “deserved it.” In The Third Great Plague, Stokes raised the question of why other sexually transmitted diseases were not seen as a sign of shame; other STDs were not seen as bad or as marking bad people.2
Today, people see syphilis as a risk. Syphilis is not something that happens because of bad behavior or to bad people; STDs in general are a risk for everyone who engages in sexual activity. People are now trying not to brand those who have a sexually transmitted disease or make them feel bad, because shame causes people not to get tested and to spread the disease to others without knowing. If people are not being safe or not getting tested, then many more people are at risk for syphilis.
The social stigma of syphilis has changed for the better. There will still be people who see infection as a sign that a person is bad, but with the new understanding, that is simply not the case. The new treatment that has been developed has also helped to reduce the stigma: since the disease is easier to cure, it is not such a scary topic to talk about. People are no longer as ashamed to have syphilis, because anyone is at risk any time they engage in sexual activity. Within the past 90 years, the stigma has changed because of the new treatments and a new moral understanding of what the disease is.
References
Kaempffert, Waldemar. “The Battle Against Syphilis: Dr. Parran’s ‘Shadow on the Land’ Is A…” New York Times, August 1, 1937.
Kazanjian, Kaiden. “5 Day Treatment for Syphilis.” New York Times, April 13, 1940.
Lehman. “Lehman Urges War against Syphilis.” New York Times, February 5, 1937.
Moulton, Forest Ray. Syphilis: Presented by the Section on the Medical Sciences. American Association for the Advancement of Science by The Science Press, 1938.
Nelson, Nels A., and Gladys L. Crain. Syphilis, Gonorrhea and the Public Health. New York: Macmillan Company, 1938.
Stokes, John H. The Third Great Plague: A Discussion of Syphilis for Everyday People. W.B. Saunders Company, 1917.
[4] Kaempffert, Waldemar. “The Battle Against Syphilis: Dr. Parran’s ‘Shadow on the Land’ Is A…” New York Times, August 1, 1937.
[5] Lehman. “Lehman Urges War against Syphilis.” New York Times, February 5, 1937.
[6] Kazanjian, Kaiden. “5 Day Treatment for Syphilis.” New York Times, April 13, 1940.
[7] Moulton, Forest Ray. Syphilis: Presented by the Section on the Medical Sciences. American Association for the Advancement of Science by The Science Press, 1938.
Rat Collecting Station. Shortly after 1900. Philadelphia.
By Alex Gregory
Note: Essay 1 in a series, all from Dr. Amanda McVety’s Spring 2019 class on Medicine and Disease in Modern Society
The bubonic plague ravaged Asia and Europe during the 14th century and produced major economic and social paradigm shifts. Fear and a poor understanding of how the disease spread allowed epidemics to recur for the next five centuries. Faster transportation, increased immigration, and worldwide trade led to a fifty-one-year outbreak of the bubonic plague across the globe. Particularly in port cities, such as San Francisco, New Orleans, and Honolulu, the plague proved to be a dangerous and isolating experience for the population. Denial, racial tensions, and attempts at quarantine during the outbreaks of 1901-1910 shaped the social understanding of, and the regulations implemented around, the bubonic plague. Throughout the 1920s and 1930s the scientific understanding of the disease developed, with the identification of Bacillus pestis and a link being drawn between the historical Black Death and the outbreaks of the 19th and 20th centuries.
During the 1930s it was understood that the bacterium that caused the plague was Bacillus pestis, but the ways in which the disease entered the body were still being debated[1]. Known as a disease of rats[2], the bubonic plague was thought to contaminate food and water, a belief consistent with the limited early-20th-century knowledge of how bacteria and viruses spread[3]. With the adoption of germ theory and the discovery of Bacillus pestis, it is now known that plague spreads through bodily fluids and vectors, such as fleas, rats, and other small rodents. The 1900-1924 outbreaks in India and China allowed scientists to diagnose the Black Death of the medieval period[4] and to explain a long history of fear and death. Because so little was known before the Chinese and Indian outbreaks, several government laboratories were established in those countries, leading bacteriologists to discover essential information about the plague[5]. Even though the plague was almost nonexistent in the United States during the 1930s, the history of mass graves and quick deaths kept fear alive and kept the plague on the City of New York’s list of reportable diseases.
Today’s understanding of the plague divides the illness into three categories: bubonic, septicemic, and pneumonic. Bubonic is the most common, with approximately three-quarters of all cases from the 1900-1924 Chinese and Indian outbreaks falling into this category[6]. The septicemic and pneumonic forms are the deadliest and have the most serious effects. Septicemic plague infects the bloodstream and causes death within 24-72 hours. Pneumonic plague can spread directly, quickly, and efficiently from one individual to another through coughing and other bodily fluids[7]. Early-20th-century societies knew that the plague was common among small mammals; it is now understood to be difficult to eradicate because it can survive indefinitely in its hosts and because wild animal populations cannot be inoculated[8].
Outbreaks that occurred between 1900 and 1924 in California[9] gave rise to racism and anti-immigrant attitudes, which carried on into the 1930s and 40s. It is now known that the plague arrived on a steamship from Hong Kong, carried by the rats on board, but Americans at the time blamed Chinese immigrants for bringing the disease to North America. The Japanese internment camps of World War II were preceded by the quarantining and unfair treatment of the Chinese during the plague outbreak. Immigrants, the homeless, and the lower class were the primary sufferers of the plague, and the denial of San Francisco’s mayor did nothing to reduce the disease or help those who suffered. These outbreaks also revealed the staggering differences in the upper and lower classes’ access to public health initiatives, and how the indifference of the upper class could devastate the lower class. Even after the surgeon general of the U.S. Public Health Service attempted to implement anti-plague regulations, there was concern about causing alarm over the disease.
Common preventative measures included rodent control, incineration, isolation, and inoculation. Incineration was used heavily in Honolulu: in 1926 the U.S. government paid out approximately $171,950 in compensation to insurance companies for the fire-suppression methods used during the 1899-1900 outbreak[10]. An attempt at quarantining Chinatown grew out of the racism and anti-Chinese sentiment common at the time: immigrants were forced to stay in Chinatown, while white individuals could move freely throughout the city. Although rodent control could have been the most effective method of containment, the disinfection campaigns failed to eradicate the disease quickly. When carbolic acid was poured into the sewers in an attempt to kill the bacteria, the rats fled and began to live among the homeless and those in poor living conditions[11]. Because of the disinfection campaign’s failure, the lower classes became even more exposed to the disease, since rodents were its primary carriers. A plague vaccine was created in the late 19th century, but its effectiveness has never been fully studied[12]. There is currently no government-approved plague vaccine in the United States.
Hawaii was the only part of the United States with human cases of plague during the 1930s. Between 1931 and 1932 there were five instances of plague on the island of Maui, four of them fatal; after these cases, no further human plague was recorded in the United States through the 1930s[13]. Rats and other small rodents, however, were repeatedly found infected with plague during the decade[14]. Maps included in the United States Public Health Service report of 1936 reveal the extent of infected rats across three Hawaiian islands. The cases were concentrated around waterways and main roads[15], leading to the belief that the plague was being spread by transportation vehicles: rats, fleas, or other small mammals would stow away on these vehicles, allowing the disease to spread to other populations.
Although the disease produced symptoms similar to those of past epidemics (high fevers, convulsions, vomiting, pain in the limbs, and the appearance of buboes), the social experience of having it changed. Cartoons, newspapers, and caricatures were used to target the Chinese and other immigrants[16]. A poster written in Chinese, showing a Chinese immigrant injecting “common sense serum” into a government official’s head[17], reveals that the methods being implemented by the government, or the “white man,” were seen as clearly ineffective. The image also shows that the targeted groups, particularly the Chinese, were not passive bystanders in this hazardous environment of illness, racism, and denial. By showing that lower-class residents understood what was happening to them and around them, the artist of the poster makes a statement about the poor and ignorant treatment of the immigrant population. “Plague phobia”[18] left other illnesses untreated, such as appendicitis, because of the fear that plague would spread through any contact between the sick and the healthy. Some individuals dismissed the plague as a “distant, tropical, exotic disease”[19] not worth worrying about, a sharp contrast to those who compared the disease to Bolshevism[20]. These highly variable views of the plague were a product of the denial and cover-ups that occurred in California in the early 20th century.
Most diseases carry social, economic, or scientific consequences, but the racial tensions left by the bubonic plague remained evident for decades and shaped the experience of Asian Americans and Asian immigrants during World War II. A lack of understanding about how the disease spread led to the outbreaks being blamed on a specific racial group, the Chinese. Even though the plague was not an active epidemic in the 1930s, the fear left by past epidemics resulted in mandatory reporting and in extreme measures being prescribed should a case appear.
Alex Gregory is double majoring in English Literature and History with double minors in Archaeology and Museum Studies. A native of Liberty Twp., Ohio, she hopes to attend graduate school and study public history or museum studies.
Bibliography
Bollet, Alfred. Plagues and Poxes: The Impact of Human History on Epidemic Disease. New York: Demos Medical Publishing Inc., 2004.
California State Board of Health. Ground Squirrel Eradication. Sacramento, CA, 1911.
Chase, Marilyn. The Barbary Plague: The Black Death in Victorian San Francisco. New York: Random House, 2003.
Herlihy, David. The Black Death and the Transformation of the West. Cambridge, MA: Harvard University Press, 1997.
Kellogg, Williams, and Simpson. “Present Status of Plague.” American Journal of Public Health 11 (1920): 844.
U.S. Congress, House, Special Committee to Investigate Communist Activities in the United States. Investigation of Communist Propaganda. Chicago, IL, 1930.
U.S. Congress, Senate, Committee on the District of Columbia. Experiments on Living Dogs. Washington, DC: GPO, 1930.
United States Public Health Service, United States Treasury Department. Public Health Reports. Washington, DC, 1936. 1537.
[1] Marilyn Chase, The Barbary Plague: The Black Death in Victorian San Francisco (New York: Random House, 2003), 44.
[2] U.S. Congress, Senate, Committee on the District of Columbia, Experiments on Living Dogs (Washington, DC: GPO, 1930), 174.
[3] Alfred Bollet, Plagues and Poxes: The Impact of Human History on Epidemic Disease (New York: Demos Medical Publishing Inc., 2004), 25.
[4] David Herlihy, The Black Death and the Transformation of the West (Cambridge, MA: Harvard University Press, 1997), 20.
[5] California State Board of Health, Ground Squirrel Eradication (Sacramento, CA, 1911), 513.
[6] David Herlihy, The Black Death and the Transformation of the West (Cambridge, MA: Harvard University Press, 1997), 21.
[7] David Herlihy, The Black Death and the Transformation of the West (Cambridge, MA: Harvard University Press, 1997), 21.
[8] David Herlihy, The Black Death and the Transformation of the West (Cambridge, MA: Harvard University Press, 1997), 21.
[9] Alfred Bollet, Plagues and Poxes: The Impact of Human History on Epidemic Disease (New York: Demos Medical Publishing Inc., 2004), 25.
[10] U.S. Congress, Senate, Committee on the District of Columbia, Experiments on Living Dogs (Washington, DC: GPO, 1930), 174.
[11] Alfred Bollet, Plagues and Poxes: The Impact of Human History on Epidemic Disease (New York: Demos Medical Publishing Inc., 2004), 25.
[13] United States Public Health Service, United States Treasury Department, Public Health Reports (Washington, DC, 1936), 1537.
[14] United States Public Health Service, United States Treasury Department, Public Health Reports (Washington, DC, 1936), 1537.
[15] United States Public Health Service, United States Treasury Department, Public Health Reports (Washington, DC, 1936), 1537.
[16] Marilyn Chase, The Barbary Plague: The Black Death in Victorian San Francisco (New York: Random House, 2003), 46.
[17] Kellogg, Williams, and Simpson, “Present Status of Plague,” American Journal of Public Health 11 (1920): 844.
[18] Marilyn Chase, The Barbary Plague: The Black Death in Victorian San Francisco (New York: Random House, 2003), 50-51.
[19] U.S. Congress, Senate, Committee on the District of Columbia, Experiments on Living Dogs (Washington, DC: GPO, 1930), 174.
[20] U.S. Congress, House, Special Committee to Investigate Communist Activities in the United States, Investigation of Communist Propaganda (Chicago, IL, 1930), 89.
Book Title: The Six-Day War: The Breaking of the Middle East (New Haven: Yale University Press, 2017)
Author: Guy Laron
By Terry Tait
Our understanding of one of the briefest yet most consequential wars of the 20th century is still evolving. The Six-Day War began on June 5, 1967, after several months of political tension between Israel and its Arab neighbors: Egypt, Syria, and Jordan. Border clashes and hostile rhetoric led many in Israel to feel that the state’s existence hung in the balance. After much deliberation, the Israeli cabinet authorized an offensive campaign, which dramatically expanded the country’s borders and reshaped the physical and political landscape of the region. The war continues to define the contours of the “Arab-Israeli conflict” today.
Guy Laron’s The Six-Day War: The Breaking of the Middle East, published on the fiftieth anniversary of the war, is a significant contribution that expands our understanding of the conflict. Drawing on new archival materials from former Soviet-bloc countries such as Bulgaria, Czechoslovakia, and East Germany, including diplomatic cables, letters, and government reports, Laron sheds light on new aspects of the conflict and provides a clearly written analysis of the economic and political factors that led to the outbreak of hostilities in June 1967. In The Six-Day War, Laron seeks to explain how the region as we know it today was shaped by the conflict, when several historical forces converged at a single moment. For Laron, the various crises that led to the June war were produced by internal political divisions as well as by regional and Cold War politics.
Laron points to the collapse of the Bretton Woods international monetary system as the impetus for a balance-of-payments crisis that exacerbated tensions between “weak civilian leaderships” and “trigger-happy generals” in the 1960s. The economic situation in each state empowered its military establishment, as successive crises diminished and divided support for civilian authorities. For instance, the Syrian Ba‘ath party, which took power in 1963, had long maintained a tense but stable border with Israel along the disputed Golan Heights. But when the regime came under pressure from internal economic and political crises, Salah Jadid, the country’s strongman, shifted the public’s attention by playing the “Israel card” to gain popular support and distract people from other events in the country. This military posturing, however, enabled Jadid’s political rival Hafez al-Asad to gain a greater foothold in the country’s tumultuous politics.
Syria’s military posturing toward Israel was mirrored by growing support for an offensive, expansionist policy among Israel’s general staff. Those ambitions were at odds with the country’s older civilian leadership, who wanted to pursue diplomatic solutions to any potential conflict. Whereas these elder statesmen and women had mostly been born and raised in Europe, the younger, Israeli-born generation of generals thought of themselves as “the brave new Jews who fought to make Israel a reality, while all the politicians had done was sit and talk” (110). In pursuit of their more hawkish aims, the general staff built a large arsenal of military technology in the years before 1967, intending to defend the nation by expanding its borders.
For Laron, the tense civilian-military relationship is best illustrated in Egypt, in the struggle between Nasser and his partner-turned-rival, Abdel Hakim Amer. The combination of Nasser’s ineffective economic policies, the collapse of Bretton Woods, and the ongoing war in Yemen created an unstable situation. Nasser became an object of ridicule, while Amer’s political influence grew to new heights. Once partners in overthrowing the monarchy in 1952, the two had drastically different approaches to Israel: Nasser is depicted as dovish and steady-handed, while his counterpart is portrayed as aggressive and nearly unstable. The divergence in their personalities matched their goals as they plotted a way forward in relation to Israel.
In an effort to deter conflict, Nasser signed a joint military agreement with the Syrian government. But after a series of escalating border skirmishes, Egypt was compelled to move troops into the Sinai Peninsula. To show, or feign, his willingness to use force, Nasser expelled the UN peacekeeping force that had been preserving the peace between Egypt and Israel since 1956. The action was a political victory and restored Nasser’s reputation throughout the region. But Amer was not satisfied, and he eventually moved the Egyptian military to close the Straits of Tiran. Severed from vital oil shipments coming from Iran, Israel now had a casus belli for offensive operations. Inside Israel, the lengthy debates centered on whether U.S. support would come and on whether the country should wait to strike.
The mixed signals coming from Moscow and Washington did nothing to resolve the crisis unfolding in the spring of 1967. Political divisions in the U.S.S.R. led to two competing Middle Eastern policies, pursued simultaneously by Leonid Brezhnev and Alexei Kosygin. With agendas designed to appeal to their respective conservative and liberal constituencies within the Kremlin, Moscow’s approach to the Middle East often seemed vague and unclear to its Arab allies. (Given Laron’s Soviet source base, however, one would have hoped for a more extensive discussion of the U.S.S.R.’s relationships in the Middle East.)
Meanwhile, President Lyndon Johnson took an equally inconsistent approach to Israel, saying one thing publicly and something very different in private. Laron effectively conveys the tense and confused situation on the eve of war, as well as the resignation Israeli politicians felt on June 4 when they approved the offensive that began the next day.
Although Laron does not discuss the war itself extensively, his account of the events, negotiations, and power struggles that produced this short, destructive conflict is a valuable narrative for anyone who wants to understand how the events of 1967 fit into the region’s longer history. He concludes by pointing out that the Six-Day War cemented the control of generals in Middle Eastern politics, and muses that “perhaps that is the reason why there the sound of gunfire never quite dies down” (313). Indeed, in all of the states that participated in the Six-Day War, the military remains among the strongest and most durable institutions. The protracted war in Syria is but one recent example in a region where conflict, rather than diplomacy, has become the norm.
By connecting the Six-Day War to current developments in the region, Laron forces the reader to think about this fifty-year-old war beyond its immediate aftermath. Despite the small drawback noted above, Laron’s work, with its unique set of archival materials and long-term framework, is an objective and well-argued read, and a welcome addition to the study of the Six-Day War and the Middle East as a whole. He makes it clear that we cannot understand the region’s current political instability without understanding the role of this short but important war.
The Rivers family of Alabama typifies the wealthy and successful American colonists who founded American states and territories by displacing Native peoples and using slave labor to pave the way to today’s nation. In 1842, Richard Reno Rivers was a freshman at Miami University, documented as living in Claiborne, Alabama at the time of his enrollment (The Seventeenth Annual Catalogue of Miami University). In this essay, he will be referred to as Richard Reno because of the many duplicate names in the Rivers family. Richard Reno’s father, Richard Harwell, and his mother, Lucy Gibbs, met and married in North Carolina, where several of their nine children were born. Richard Reno was born in Alabama in 1822, after the family and relatives relocated there in 1816 (West 574). Although he died in 1856 at the age of 34, Richard Reno had received a higher education, married, had children, and continued his family’s tale. He was descended from a large, wealthy, established family that immigrated to colonial America from England in the mid-1700s. His grandfather, Reverend Joel Thomas, was the leader of the Rivers family, a family that accrued wealth and success and played a major role in establishing a slave-owning dynasty and religion in the South.
The Rivers family was at the forefront of establishing the United States as a free and sovereign nation during the American Revolution. Both Virginia and England are documented as the birthplace of Richard Harwell’s father, Reverend Joel Thomas (Weaversons1). In 1773, Reverend Joel Thomas and Rhoda Harwell married in Virginia and later had ten children (Morgan). Reverend Joel Thomas dedicated himself to fighting for the freemen of North Carolina and to establishing America’s independence from England (Barnes). According to Greensville County Court records, on August 13, 1783, the court paid Joel Rivers £3 for a gun he furnished for the Southern Expedition Militia (Barnes). For this service, his family members were later able to gain membership in the Sons and Daughters of the American Revolution. His signature and support appear on many documents in the effort to free America from Britain’s rule (Morgan).
“One of the most paradoxical and disheartening developments in U.S. history is the emergence of virulent racism alongside the full flowering of democratic ideals after the American Revolution” (Ford 90). After gaining independence, colonists sought to expand beyond the original colonies; in the process they displaced Native tribes, relied on slave labor, and “the national government… reserved land to support institutions of higher education that will prepare leaders of the expanding nation” (Ellison 10). Documents and relatives’ testimonials provide evidence that Reverend Joel Thomas owned a large plantation of 310 acres, with slaves, in Dinwiddie, Virginia (Morgan). The six documented slaves and the land were sold to James Greenway in 1784, when Reverend Joel Thomas and his family relocated to North Carolina (Morgan). Thirty years later, in 1816, the Rivers family relocated again: “The Rev. Joel Rivers, a local preacher… moved from the town of Fayetteville, North Carolina, to Fort Claiborne, Alabama, accompanied by his children, all then grown, and purchased land, the lot being at Claiborne” (West 573). In Ebony & Ivy, Wilder explains the process by which white settlers like the Rivers family removed the Choctaws, and other Native tribes, to western lands in order to establish the new, “uncharted” territory of Alabama: “[Colonists] surrounded and segregated the last of the Indian nations as they laid claim to their entitlements” (Wilder 178). Reverend Joel Thomas spearheaded the relocation to Alabama in 1816, around the time the Choctaws were “forcibly relocated” to Oklahoma (“Choctaw Indian Language”). This was also prior to the official founding of Alabama in 1819, providing contextual evidence that Reverend Joel Thomas uprooted his adult family from an established state for the unestablished land of Alabama, removing Native tribes and disrupting the land in the process (“Choctaw Indian Language”). By doing so, he positioned his already wealthy family as Southern leaders on the small plot of land soon to be known as Claiborne, Alabama. The Rivers family illustrates “Christian rule over Native peoples”: the family moved from an established state to land newly opened to white settlers, and its presence forced out the Native Americans in order to establish the family’s wealth, success, and power in untouched territories (Wilder 17).
Censuses show that the brothers Richard Harwell and Joel Thomas, with their wives and children, later joined their parents in Claiborne, Alabama, with Richard Harwell as head of the household (Morgan). By the time the family was settled in the not-yet-established state of Alabama around 1816, Reverend Joel Thomas had used his own funds to build a “house of worship for the Methodist Episcopal Church. The first Society of Claiborne, organized just prior to the erection of the house of worship there, consisted of the Reverend Joe Rivers, Rhoda Rivers, his wife, and a number of their children” (West 574). The establishment stood as a physical representation of “Christian rule over Native peoples,” as the Natives were removed to make space for the Rivers family’s house of worship (Wilder 17).
The political power that Reverend Joel Thomas enjoyed in Virginia in the 1770s persisted through the move to Alabama. In 1817, Joel and his son Mason signed the petition to Congress to prevent Mississippi from extending its boundary into the Alabama territory and to clear Native land in an effort to establish Alabama as a state (Barnes). “White southerners were now poised to claim tens of millions of acres from multiple Native nations,” and the Rivers family was at the forefront of Native displacement in its effort to establish itself as a successful Southern family and prolong the attack on non-white, non-Christian peoples (Wilder 250). This ideology extended beyond the Choctaws of Claiborne, Alabama into the South-wide displacement and marginalization wrought by slavery.
“As slave traders and planters came to power in colonial society, they took guardianship over education”; thus, the assumed next step for the Rivers family after settling and establishing themselves in Alabama was to pursue higher education (Wilder 75). Richard Harwell and Lucy Gibbs’s oldest son, Thomas Buxton, was twelve years older than his brother Richard Reno and came to Alabama with his father, his uncle Joel Thomas, and his grandfather Reverend Joel Thomas in 1815. As the child of a slave owner, Thomas Buxton sought higher education. “Profits from the sale and purchase of human beings paid for campuses and swelled college trusts… and cultivated a social environment to the sons of wealthy families” (Wilder 77). In 1832, Thomas Buxton began studying medicine in Alabama (Ball 369). After earning his degree, he settled in Suggsville, Alabama in 1836 to partake in “farming besides attending to the duties of his profession” (Ball 369). One observer noted that he was “not giving his chief attention to his profession” (Ball 370). It can thus be inferred that his college education and doctorate were pursued not for the profession or the community, but for the status of attaining knowledge. After moving to Texas to build mills, Thomas Buxton returned to Alabama and built a “large family mansion” (Ball 370). Documents show that he owned multiple slaves at his mansion and fought in the Confederate Army (1850 United States Federal Census). Thomas B. Rivers also served as an Alabama state representative in 1847 (Ball 713).
While Richard Reno attended Miami University for only two years (1840-1843), he was very much a Southern student at “Old Miami” in the decades before the Civil War (1909 General Catalog of the Graduates and Former Students of Miami University). There is no evidence that Richard Reno was a member of either a literary society or a Greek organization; he was, however, decidedly a student from the South attending school in a northern, albeit border, state. “As the United States began to unravel in sectional conflict, young men studying the classics in Oxford were confronted with the fate of their country, and many, including those from Southern states, were forced to examine the foundations of American democracy” (Ellison 63). “The late 1840s were rife with rebellion,” as new rules were imposed on the student body and rivalry between students and faculty erupted (Ellison 14). Richard Reno attended Miami University at the peak of the removal of the Miami tribe to the West and at a time of one of the lowest student enrollments in the university’s history (Ellison 12).
Richard Reno Rivers and his family represent the elite American colonists who helped found the nation and its racist ideologies. Generations of Rivers family members aided in winning America’s independence, in displacing the Choctaws, and in fueling the slave economy. Reverend Joel Thomas Rivers, Richard Reno’s grandfather, is the epitome of a wealthy Christian colonist who established himself and his family for the future. His investments, loyalties, strategic move to Alabama, and relationship with the church positioned the Rivers family as wealthy, successful individuals, which in turn set his children and grandchildren up for success in their own lives. College education was a plausible next step for Thomas Buxton and Richard Reno. Historical research provided insight into the lives of the symbolic college students Wilder describes in Ebony & Ivy, as well as into how America’s independence, the displacement of Native tribes, and the slave economy led to the higher education we know and attend today.
Katy O’Neill is a senior majoring in Strategic Communication and American Studies.
Works Cited
“1850 United States Federal Census.” From Ancestry.com Operations, Inc. 2009. Accessed on