The pre-Columbian indigenous population of Brazil was widely scattered and probably numbered no more than 1 million when Pedro Cabral, the Portuguese explorer, reached the coast of Brazil on April 22, 1500. The first permanent Portuguese settlement was founded at Sao Vicente, in the state of Sao Paulo, in 1532. Initially, development was slow, based upon a feudal system in which favored individuals received title to large blocks of land called capitanias. Because of the great demand for sugar in Europe, the first major economic cycle in Brazil was based upon sugarcane, grown on plantations along the northeast coast. To work the fields, the early settlers used native labor, often furnished by the Bandeirantes, as the pioneers from the state of Sao Paulo were known. When the Indians proved insufficient in numbers, or unable to withstand the hard labor, depending upon the story, the importation of millions of slaves from Africa began.

DUTCH and FRENCH INVADERS
During this period the Dutch and the French briefly settled in the Northeast and Rio de Janeiro, building forts and leaving blue-eyed, brown-skinned Brazilians. Under Estacio de Sa and others the Portuguese and Brazilians expelled the invaders, who, in the case of many of the Dutch from Recife and Sao Luis, moved to their new colony in a place called New Amsterdam on the island of Manhattan. That is how Brazil settled New York City.

SLAVES and QUILOMBOS
Another interesting fact from this period was the founding of the Quilombos by slaves who escaped from the plantations. The Quilombos were built in remote areas and could have hundreds of people living in them, raising families, growing crops and fighting to keep their independence. Of course the former owners took a dim view of this, but the military expeditions they sent against the ex-slaves were usually defeated. What to do? Call in the Paulistas and Bandeirantes from Sao Paulo in the south of Brazil, even at that time known to be the most efficient, hard-working and organized of Brazilians. The Paulistas soon destroyed the Quilombos, including the most famous one at Palmares, which required cannon and a long siege.

Gold and diamonds were discovered in Minas Gerais shortly after 1700, beginning what is called the gold cycle and leading to the development and occupation of the interior. Rio de Janeiro supplanted Bahia as the capital in 1763. In 1807-08, during the Napoleonic Wars, King John VI of Portugal took refuge in Rio de Janeiro. Brazil, now the seat of government for its mother country, witnessed tremendous economic growth. Life was so good in Rio that after Napoleon had been defeated, the royal family stayed on until a threatened revolt in Portugal forced John VI to return to Lisbon. Popular pressure in Brazil compelled his son, Dom Pedro, to declare Brazil independent in 1822, and so Brazil became an empire with a monarchy while the rest of North and South America became republics. Pedro's personality was enigmatic and his rule erratic. After a disastrous war (1825-28) with Argentina and a revolt in Rio de Janeiro, Pedro abdicated (1831) in favor of his son, Pedro II. He then returned to Portugal, where he succeeded in having his daughter crowned as queen.

The EMPIRE and the REPUBLIC
Pedro II (1825-1891), second and last emperor of Brazil, was a reformist best remembered for overseeing the abolition of slavery in Brazil, in 1888, and for bringing millions of Italian, German and Polish immigrants to the south of Brazil.
Pedro II was far more successful as a scholar and scientist than he was as a ruler; his reign was marred by a number of internal revolts and conflicts with neighboring countries. Unrest among planters, the military, and the republicans finally culminated in a coup that overthrew the emperor and established (1889) the first republic. Pedro, by all accounts a decent, kindly gentleman, was poorly treated by the new government and spent the last two years of his life in exile. With the new republican government came the rubber cycle, which produced great profits, an opera house and the great Caruso singing in the middle of the Amazon jungle. But, as every Brazilian schoolboy knows, an English rascal stole a rubber plant and the boom collapsed, unable to compete with the stolen rubber from Asia. The next cycle was that of the coffee bean, and for more than 50 years politics was expressed in terms of cafe com leite, or coffee and milk, representing the coffee growers from Sao Paulo and the cattle ranchers from Minas.

VARGAS and World War II
This period was ended by a little gaucho from the south of Brazil named Getulio Vargas. Unsuccessful in his bid for the presidency in 1930, Vargas led a revolt that overthrew the government. Over the next 15 years, he effected massive transformations in the public and private sectors. His style was authoritarian and his appeal populist: unionization, industrialization, and social welfare programs gained him working- and middle-class backing. Vargas gave support to the Allies during World War II, but his popularity declined as democratic sentiment grew. In 1945 he was ousted by the army. Vargas returned to power in 1950, democratically elected as president, but his second tenure was beset with scandals and economic difficulties. Faced with growing opposition and expecting a coup, he resigned and then committed suicide in 1954. Vargas's tenure marked the start of modern industrialization for Brazil. Vargas was a strange guy - a mixture of Mussolini and FDR. Today he is the hero of all left-wing activists and politicians, but his secret police brutally tortured communists in the 30s. The book OLGA paints a good picture of this period (there is an English translation). Olga was a German Jewish communist who met and married Luis Carlos Prestes in Russia and returned with him to bring the joys of Stalinism to Brazil. Prestes -- who lived to a ripe old age and whom I once saw in Rio -- is best known for a long march undertaken in the 30s, traveling thousands of kilometers, holding off government forces and proving that most people did not care for either Vargas or Prestes. What happened to Olga? She was captured by Vargas' police and shipped back to Nazi Germany -- not a good place for a Jew. She died in a concentration camp, but not before delivering a baby girl, now a university professor in Rio.

BRASILIA and MILITARY REGIME
In 1960 a new capital was established at Brasilia to encourage development of the interior, but the concern of the military and business leaders turned to the pressing problems of social unrest and excessive inflation. In 1964 the military overthrew President Joao Goulart, who was rapidly moving to the left. For the next 21 years, Brazil was ruled by a succession of military governments. Although the country's economy prospered, the military suspended constitutional guarantees and imposed press censorship. Civilian government was restored in 1985 when an electoral college chose the very popular Tancredo de Almeida Neves as president.
He died before taking office and was succeeded by Jose Sarney, a well-connected and powerful politician from the North of Brazil. Brazil got a new constitution in October 1988. A year later Fernando Collor de Mello was elected, after a close electoral race with Luis Inacio da Silva (always called LULA) representing the always very vocal left. Lula might have won, except for: (1) Eastern Europe deciding they had had enough of the very thing Lula wanted for Brazil. This was very embarrassing for Lula and his supporters, who went on TV to try to convince the people that the PT (Workers' Party) had nothing to do with Communism in Europe. (2) Roberto Marinho, the owner of the Globo network and the most powerful man in Brazil, was afraid that a left-wing government would nationalize his property, so he backed Mello. (3) Some of the usual dirty tricks all politicians use. Mello was elected and soon launched a "shock" program to reduce inflation and government spending (these programs are called pacotes, meaning packages, a term you must learn if living in Brazil). People soon found that Collor was corrupt, and so he lost all support, even that of Marinho. Out went Collor, under a cloud of impeachment. These last two presidents are representative of everything that is bad in traditional Brazilian politics, where nice words are used to cover the ugly face of power, privilege, self-interest and corruption. This may be changing with Fernando Henrique Cardoso, elected in 1994, whose pacote, called the Real Plan, named after the new currency, has held inflation under control and generated growth.

Pedro II was an able ruler, and the country prospered and grew during his long reign, which continued until 1889. His government helped overthrow neighboring dictatorships and took a series of steps to end slavery, completing that process in 1888. By then large sections of the population favored a republic. A military revolt led by Manuel Deodoro da Fonseca forced Pedro II to abdicate. Brazil was proclaimed a republic with official separation of church and state. A constitution like that of the United States was adopted in 1891, and Brazil officially became the United States of Brazil. Fonseca was elected its first president but soon ruled as a dictator, only to yield to another. Order was restored during the administration of the first civilian president, Prudente José de Moraes Barros, and succeeding administrations struggled to strengthen the troubled Brazilian economy. World War I (1914-1918) caused an increase in demand for Brazilian products on the world market, and the Brazilian economy improved. Brazil contributed ships and supplies to the success of the Allied forces. After the war, a continually deepening economic crisis led to unrest, a large-scale revolt, and martial law under President Artur da Silva Bernardes. Continued economic trouble and an upsurge in radicalism prompted his successor, Washington Luiz Pereira de Souza, to ban labor strikes and repress communism. Brought to power by military revolt in 1930, Getúlio Dornelles Vargas ruled for the next 15 years. His government followed mixed policies of social reform and repression, and the economy continued to struggle. Woman suffrage and social security were established, but by 1937 Brazil was a totalitarian state. During this period, Brazil was friendly with the United States and other democracies but broke ties with the Nazi Third Reich because of German political activity in Brazil, including support of an open revolt.
Brazil sided with the Allies in World War II (1939-1945), again using increased world demand for raw materials to expand its economy. It contributed direct military support, access to bases, and vital supplies to the defeat of the Axis powers. After the war, the Vargas regime loosened its political grip. National elections were scheduled for late 1945. Amid fears that Vargas would retain his dictatorship, opponents ousted him by a military coup. Elections proceeded, and former Minister of War Eurico Gaspar Dutra won the presidency. Vargas was elected president in 1950, and his coalition government at once moved to balance the budget while improving the standard of living. It did not succeed. In 1954 military leaders forced Vargas to resign; he then committed suicide. Over the next three decades, Brazil suffered a series of unstable governments followed by military rule. Attempts to stimulate the economy with foreign loans foundered on sinking coffee prices. Rigorous austerity measures were abandoned. Pressured by the military, the legislature amended the constitution in 1961 to strip the presidency of most powers. Two years later the legislature restored presidential powers. Opposition parties were outlawed or refused to enter candidates in elections. Despite repression, unrest became widespread. During this time, the economy grew, but the plight of the poor worsened. The Roman Catholic clergy criticized government failure to help the disadvantaged. Economic growth also brought inflation, high energy costs, and difficulties with loan payments. Brazil returned to civilian rule with the election of Tancredo Neves in 1985. However, he died before taking office, and José Sarney became president. Faced with rising inflation and a huge foreign debt, Sarney imposed an austerity program that included introducing a new unit of currency. A new constitution restoring civil liberties and providing for direct presidential elections was enacted in 1988. Fernando Collor de Mello was elected president in 1989. His term was marked by an anti-inflationary recession and by allegations of financial corruption. Shortly after Brazil hosted the United Nations Conference on Environment and Development, also known as the Earth Summit, in 1992, Collor was impeached. He resigned his post to Vice President Itamar Franco. In 1994 a plan to restructure and reduce Brazil's foreign debt was implemented. In the same year, Brazil joined other Latin American and Caribbean nations by declaring itself free of nuclear weapons. Fernando Henrique Cardoso, a former finance minister responsible for much of Brazil's economic recovery, won the 1994 presidential elections. Soon afterward, Collor was acquitted of corruption charges. The Cardoso administration found itself caught up in issues of land ownership and land use. By a 1995 presidential decree, Cardoso redistributed tracts of land from large, private estates to poor families. In 1996 he signed a decree allowing people other than Native Americans to appeal land allocation decisions made by Brazil's Indian Affairs Bureau. The law was widely condemned by human rights, Native American, and religious organizations.
http://www.southtravels.com/america/brazil/history.html
But how big is bad? How does this spill compare with spills in the past? And what happens if, and when, it surpasses the size of the 1989 Exxon Valdez? It turns out that, in the media's preoccupation with measuring this spill against the Exxon Valdez, reporters may have distorted the actual history (and frequency) of catastrophic oil spills. To begin, we need to know how volumes of oil are measured. Unfortunately, this volume is measured in a variety of ways, and the media show no consistency in the units they report. First, there's the gallon. We know gallons because we fill our gas tanks in gallons. We also buy milk in gallons. Oil is also measured in barrels. Oil was actually shipped in barrels in the 19th and early 20th centuries (image courtesy of the Library of Congress's American Memory digital archive). In reality, the actual size of oil barrels varied. In the 19th century, crude oil was poured into whatever was available, regardless of the size. Eventually, though, the standard American barrel of oil came to contain 42 gallons. Finally, especially in international contexts, oil quantities are often reported as tonnes, or metric tons. This is actually a measure of mass: 1 tonne = 1,000 kilograms. It's just one more way to add to the confusion. How many tonnes are in a barrel varies depending on what's in the oil and where it's produced: some oils are thick and heavy, others light -- you can find many conversions for specific countries and years here. The global average is about 7.3 barrels per tonne; you can find a calculator here. Confusingly, in coverage of the Gulf spill, reporters have been using both barrels and gallons, even while reporting for the same news organizations. What does this mean for the Gulf spill? Initially, BP reported the leaks amounted to about 1,000 barrels per day -- that's 42,000 gallons. By April 28, government estimates pushed that figure up to roughly 5,000 barrels per day -- or 210,000 gallons. And to make matters worse, today NPR reported revised estimates that put the spill at between 56,000 and 84,000 barrels per day (that's 2,352,000 to 3,528,000 gallons). This criticism was similarly voiced in the New York Times, which went on to note how bad things might still get: BP later acknowledged to Congress that the worst case, if the leak accelerated, would be 60,000 barrels a day, a flow rate that would dump a plume the size of the Exxon Valdez spill into the gulf every four days. BP's chief executive, Tony Hayward, has estimated that the reservoir tapped by the out-of-control well holds at least 50 million barrels of oil. For most of us, these big numbers start to lose their meaning. And here's where history comes in. The Times' reference to the Exxon Valdez is representative of many news organizations, which seek to put the size of this leak into perspective. We can't easily imagine how much oil is contained in thousands of barrels (or gallons), but we all remember the 1989 disaster of the Exxon Valdez, which ultimately dumped about 270,000 barrels (11,340,000 gallons) of crude oil into the sea. The images out of Valdez have become iconic, with some 270,000 birds killed. But was the Exxon Valdez the worst oil spill in history? Not by a long shot. Since 1967, it ranks as only the 35th worst spill. The actual worst was in 1979, when the Atlantic Empress sank off Tobago in the West Indies, spilling some 2,104,000 BARRELS into the water: that's over seven and a half Exxon Valdezes.
The ABT Summer, 700 nautical miles off the coast of Angola, was a close second in 1991, spilling 1,906,000 BARRELS. The list, as you can see, goes on (graph courtesy of the International Tanker Owners Pollution Federation's 2009 Oil Tanker Spill Information Pack -- the red bar is the relative size of the Exxon Valdez). Most Americans have never heard of the Atlantic Empress, ABT Summer, Castillo de Bellver, Amoco Cadiz, or any of the other largest spills. That's probably because they took place far from the United States, either in deep water or somewhere without much US press coverage. The Exxon Valdez was a tragedy, but in the United States it has grossly overshadowed much larger and more catastrophic spills. The Exxon Valdez has become so widely known because it was so close to shore, affecting so many Americans, and producing such disturbing images of oil-coated wildlife. But much more oil has spilled into the world's oceans with less outrage but no less ecological consequence. This won't come as much comfort, of course. The Gulf catastrophe is a leak, not a spill, and it will keep pouring oil into the water until it's plugged. The real problem is that so many spills have gone unnoticed for so long -- and until now, we've actually had a pretty good decade.
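Because the coverage keeps switching between gallons, barrels, and tonnes, a small sketch of the conversions described above may help. This is only an illustration in Python (the function names are my own), assuming the figures cited in the post: 42 US gallons per barrel, a rough global average of 7.3 barrels per tonne, and about 270,000 barrels for the Exxon Valdez.

```python
# Rough oil-volume conversions used in the post above.
# Assumptions: 42 US gallons per standard barrel; roughly 7.3 barrels per
# tonne as a global average (the exact figure varies by crude type).

GALLONS_PER_BARREL = 42
BARRELS_PER_TONNE = 7.3
EXXON_VALDEZ_BARRELS = 270_000  # total spill size cited in the post


def barrels_to_gallons(barrels: float) -> float:
    return barrels * GALLONS_PER_BARREL


def barrels_to_tonnes(barrels: float) -> float:
    return barrels / BARRELS_PER_TONNE


def exxon_valdez_equivalents(barrels: float) -> float:
    """How many Exxon Valdez spills a given number of barrels amounts to."""
    return barrels / EXXON_VALDEZ_BARRELS


if __name__ == "__main__":
    # Daily flow-rate estimates quoted above, in barrels per day.
    for per_day in (1_000, 5_000, 56_000, 84_000):
        print(f"{per_day:>7,} bbl/day = {barrels_to_gallons(per_day):>10,.0f} gal/day "
              f"= {barrels_to_tonnes(per_day):,.0f} t/day")
    # Atlantic Empress, 1979: roughly 2,104,000 barrels.
    print(f"Atlantic Empress = {exxon_valdez_equivalents(2_104_000):.1f} Exxon Valdezes")
```

Running it reproduces the post's figures: 1,000 barrels per day is 42,000 gallons, 5,000 is 210,000, and the Atlantic Empress works out to a bit under eight Exxon Valdezes.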
http://www.crimesagainstclio.com/2010_05_01_archive.html
Authors: Nancy Mead and Brent Sandene

Knowledge of economics is important for individuals to function effectively as citizens in an increasingly connected world economy. Economic literacy includes understanding how economies and markets work, what the benefits and costs of economic interaction and interdependence are, and that people have to make choices because resources are limited. In recent decades, the focus on economics content in the school curriculum has increased. In this first NAEP assessment of economics at grade 12, the overall average economics score, set at 150, fell within the Basic achievement level. Seventy-nine percent of students performed at the Basic level or higher, and 42 percent performed at the Proficient level or higher, including 3 percent at the Advanced level. Results are based on a nationally representative sample of 11,500 twelfth-grade students from 590 public and nonpublic high schools. Students answered questions representing a wide range of content from three areas: market, national, and international economics. Market economy—traditionally described as microeconomics—covers how individuals, businesses, and institutions make decisions about allocating resources in the marketplace. National economy—traditionally described as macroeconomics—encompasses the sum of decisions made by individuals, businesses, and government. International economy concentrates on international trade—how individuals and businesses interact in foreign markets. The questions described below and presented in this report illustrate the knowledge and skills assessed in these three content areas. The full assessment includes questions that cover a range of topics and difficulty levels within each content area.

What students know about Economics
- 72% described a benefit and a risk of leaving a full-time job to further one's education
- 52% identified how commercial banks use money deposited into customers' checking accounts
- 46% interpreted a supply and demand graph to determine the effect of establishing a price control
- 36% used marginal analysis to determine how a business could maximize its profits
- 60% identified factors that lead to an increase in the national debt
- 36% identified the federal government's primary source of revenue
- 33% explained the effect of an increase in real interest rates on consumers' borrowing
- 11% analyzed how a change in the unemployment rate affects income, spending, and production
- 63% determined the impact of a decrease in oil production on oil-importing countries
- 51% determined a result of removing trade barriers between two countries
- 40% determined why industries can successfully lobby for tariff protection
- 32% identified how investment in education can impact economic growth

NCES 2007-475. Mead, N., and Sandene, B. (2007). The Nation's Report Card: Economics 2006 (NCES 2007-475). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education, Washington, D.C. For more information, see the results of the 2006 Economics assessment on the Nation's Report Card website.
http://nces.ed.gov/nationsreportcard/pubs/main2006/2007475.asp
Classical Latin is the form of the Latin language used by the ancient Romans in what is usually regarded as "classical" Latin literature. Its use spanned the Golden Age of Latin literature—broadly the 1st century BC and the early 1st century AD—possibly extending to the Silver Age—broadly the 1st and 2nd centuries. The spoken Latin of the common people of the Roman Empire, especially from the 2nd century onward, is generally called Vulgar Latin. Vulgar Latin differed from Classical Latin in its vocabulary and grammar, and as time passed, it came to differ in pronunciation as well. Good Latin in philology is "classical" Latin literature. The term refers to the canonicity of works of literature written in Latin in the late Roman republic and the early to middle Roman empire: "that is to say, that of belonging to an exclusive group of authors (or works) that were considered to be emblematic of a certain genre." (Citroni (2006), p.204.) The term classicus (masculine plural classici) was devised by the Romans themselves to translate Greek ἐγκριθέντες (egkrithentes), "select", referring to authors who wrote in Greek that was considered model. Prior to then, classis, in addition to being a naval fleet, was a social class in one of the diachronic divisions of Roman society according to property ownership by the Roman constitution. The word is a transliteration into the Latin alphabet of Greek κλῆσις (klēsis), a "calling" of draftees for the army by property: first class, second class, etc., down to fifth class. Classicus is anything primae classis, "first class", such as the authors of the polished works of Latinitas, or sermo urbanus. It had nuances of the certified and the authentic: testis classicus, "reliable witness." It was in this sense that Marcus Cornelius Fronto (an African-Roman lawyer and language teacher) in the 2nd century AD used scriptores classici, "first-class" or "reliable authors" whose works could be relied upon as models of good Latin. This is the first known reference, possibly innovated at this time, to classical applied to authors by virtue of the authentic language of their works. In imitation of the Greek grammarians, the Roman ones, such as Quintilian, drew up lists termed indices or ordines on the model of the Greek lists, termed pinakes, considered classical: the recepti scriptores, "select writers." Aulus Gellius includes many authors, such as Plautus, who are currently considered writers of Old Latin and not strictly in the period of classical Latin. The classical Romans distinguished Old Latin as prisca Latinitas and not sermo vulgaris. Each author (and work) in the Roman lists was considered equivalent to one in the Greek; for example Ennius was the Latin Homer, the Aeneid was a new Iliad, and so on. The lists of classical authors were as far as the Roman grammarians went in developing a philology. The topic remained at that point while interest in the classici scriptores declined in the medieval period as the best Latin yielded to medieval Latin, somewhat less than the best by classical standards. The Renaissance brought a revival of interest in restoring as much of Roman culture as could be restored and with it the return of the concept of classic, "the best." Thomas Sebillet in 1548 (Art Poétique) referred to "les bons et classiques poètes françois", meaning Jean de Meun and Alain Chartier, which was the first modern application of the word.
According to Merriam Webster's Collegiate Dictionary, the term classical, from classicus, entered modern English in 1599, some 50 years after its re-introduction on the continent. Governor William Bradford in 1648 referred to synods of a separatist church as "classical meetings" in his Dialogue, a report of a meeting between New-England-born "young men" and "ancient men" from Holland and England. In 1715 Laurence Echard's Classical Geographical Dictionary was published. In 1736 Robert Ainsworth's Thesaurus Linguae Latinae Compendarius turned English words and expressions into "proper and classical Latin." In 1768 David Ruhnken (Critical History of the Greek Orators) recast the mold of the view of the classical by applying the word canon to the pinakes of orators, after the Biblical canon or list of authentic books of the Bible. Ruhnken had a kind of secular catechism in mind.

The ages of Latin

In 1870 Wilhelm Sigismund Teuffel in Geschichte der Römischen Literatur (A History of Roman Literature) innovated the definitive philological classification of classical Latin based on the metaphoric uses of the ancient myth of the Ages of Man, a practice then universally current: a Golden Age and a Silver Age of classical Latin were to be presumed. The practice and Teuffel's classification, with modifications, are still in use. His work was translated into English as soon as it was published in German by Wilhelm Wagner, who corresponded with Teuffel. Wagner published the English translation in 1873. Teuffel divides the chronology of classical Latin authors into several periods according to political events, rather than by style. Regarding the style of the literary Latin of those periods he had but few comments. Teuffel was to go on with other editions of his history, but meanwhile it had come out in English almost as soon as it did in German and found immediate favorable reception. In 1877 Charles Thomas Cruttwell produced the first English work along the same lines. In his Preface he refers to "Teuffel's admirable history, without which many chapters in the present work could not have attained completeness" and also gives credit to Wagner. Cruttwell adopts the same periods with minor differences; however, where Teuffel's work is mainly historical, Cruttwell's work contains detailed analyses of style. Nevertheless, like Teuffel, he encounters the same problem of trying to summarize the voluminous detail in a way that captures in brief the gist of a few phases of writing styles. Like Teuffel, he has trouble finding a name for the first of the three periods (the current Old Latin phase), calling it mainly "from Livius to Sulla." The language, he says, is ..."marked by immaturity of art and language, by a vigorous but ill-disciplined imitation of Greek poetical models, and in prose by a dry sententiousness of style, gradually giving way to a clear and fluent strength...." These abstracts have little meaning to those not well-versed in Latin literature. In fact, Cruttwell admits "The ancients, indeed, saw a difference between Ennius, Pacuvius, and Accius, but it may be questioned whether the advance would be perceptible by us." Some of Cruttwell's ideas have become stock in Latin philology for better or for worse. While praising the application of rules to classical Latin, most intensely in the Golden Age, he says "In gaining accuracy, however, classical Latin suffered a grievous loss. It became cultivated as distinct from a natural language ....
Spontaneity, therefore, became impossible and soon invention also ceased.... In a certain sense, therefore, Latin was studied as a dead language, while it was still a living." These views are certainly debatable; one might ask how the upper classes of late 16th century Britain, who shared the Renaissance zealousness for the classics, managed to speak spontaneous Latin to each other officially and unofficially after being taught classical Latin by tutors hired for the purpose. Latinitas in the Golden Age was in fact sermo familiaris, the spoken Latin of the Roman upper classes, who sent their children to school to learn it. The debate continues. A second problem is the appropriateness of Teuffel's scheme to the concept of classical Latin, which Teuffel does not discuss. Cruttwell addresses the problem, however, altering the concept of the classical. As the best Latin is defined as golden Latin, the second of the three periods, the other two periods considered classical are left hanging. While on the one hand assigning to Old Latin the term pre-classical, and by implication the term post-classical (or post-Augustan) to silver Latin, Cruttwell realizes that this construct is not according to ancient usage and asserts "... the epithet classical is by many restricted to the authors who wrote in it [golden Latin]. It is best, however, not to narrow unnecessarily the sphere of classicity; to exclude Terence on the one hand or Tacitus and Pliny on the other, would savour of artificial restriction rather than that of a natural classification." (This from a scholar who had just been complaining that golden Latin was not a natural language.) The contradiction remains; Terence is and is not a classical author depending on context.

Authors of the Golden Age

Catullus wrote at a slightly later date. He pioneered the naturalization of Greek lyric verse forms in Latin. The poetry of Catullus was personal, sometimes erotic, sometimes playful, and frequently abusive. He wrote exclusively in Greek metres. The heavy hand of Greek prosody would continue to have a pronounced influence on the style and syntax of Latin poetry until the rise of Christianity necessitated a different sort of hymnody. The Hellenizing tendencies of Golden Age Latin reached their apex in Virgil, whose Aeneid was an epic poem after the manner of Homer. Similar tendencies are noted in Horace, whose odes and satires were after the manner of the Greek anthology, and who used almost all of the fixed forms of Greek prosody in Latin. Ovid likewise wrote long and learned poems on mythological subjects, as well as such semi-satirical pieces as the Art of Love (Ars Amatoria). Tibullus and Propertius also wrote poems that were modelled after Greek antecedents. In prose, Golden Age Latin is exemplified by Julius Caesar, whose Commentaries on the Gallic War display a laconic, precise, military style; and by Marcus Tullius Cicero, a practicing lawyer and politician, whose judicial arguments and political speeches, most notably the Catiline Orations, were considered for centuries to be the best models for Latin prose. Cicero also wrote many letters which have survived, and a few philosophical tracts in which he gives his version of Stoicism. Historiography was an important genre of classical Latin prose; it includes Sallust, who wrote of the Conspiracy of Catiline and the War Against Jugurtha, his only works that have been preserved complete. Another historian, Livy, wrote the Ab Urbe Condita, a history of Rome "from the Founding of the City."
Though it originally comprised 142 books, only 35 books of this history have been preserved. The foremost technical work which survives is the De Architectura of Vitruvius, a compilation of building construction methods, design and layout of all public and domestic buildings as well as descriptions of the machines which aided construction. He also gives a detailed description of many other machines, such as the ballista used in war, surveying instruments, water mills and dewatering devices such as the reverse overshot water-wheel.

Silver Age Latin

Classical Latin continued to be used into the "Silver Age" of Latin literature, which spans the 1st and 2nd centuries, and directly follows the Golden Age. Literature from the Silver Age has traditionally, perhaps unfairly, been considered inferior to that of the Golden Age, although contemporary historians have voiced legitimate criticisms concerning perhaps too great a reliance on trying to emulate the Golden Age and a 'messy' style of teaching rhetoric as possible causes for this alleged decline in quality. Silver Age Latinity is sometimes called "Post-Augustan". Among the works which survive, those of Pliny the Elder and Pliny the Younger inspired later generations, especially during the Renaissance. Writers of the silver age include:
- Phaedrus (c. 15 BC-50)
- Seneca the Younger (c. 4 BC-65)
- Pliny the Elder (23-79)
- Petronius Arbiter (c. 27-66)
- Persius (34-62)
- Quintilian (c. 35-c. 100)
- Lucan (39-65)
- Martial (40-c. 103)
- Statius (45-96)
- Tacitus (c. 56-c. 117)
- Pliny the Younger (63-c. 113)
- Suetonius (c. 70-c. 130 or later)
- Juvenal (fl. 127)
- Aulus Gellius (c. 125-c. 180 or later)
- Apuleius (c. 125-c. 180)

Silver Age Latin itself may be subdivided further into two periods: a period of radical experimentation in the latter half of the first century AD, and a renewed Neoclassicism in the second century AD. Under the reigns of Nero and Domitian, poets like Seneca the Younger, Lucan and Statius pioneered a unique style that has alternately delighted, disgusted and puzzled later critics. Stylistically, Neronian and Flavian literature shows the ascendance of rhetorical training in late Roman education. The style of these authors is unfailingly declamatory — at times eloquent, at times bombastic. Exotic vocabulary and sharply-polished aphorisms glimmer everywhere, though at times to the detriment of thematic coherence. Thematically, late 1st century literature is marked by an interest in terrible violence, witchcraft, and extreme passions. Under the influence of Stoicism, the gods recede in importance, while the physiology of emotions looms large. Passions like anger, pride and envy are painted in almost anatomical terms of inflammation, swelling, upsurges of blood or bile. For Statius, even the inspiration of the Muses is described as a calor ("fever"). While their extremity in both theme and diction has earned these poets the disapproval of Neoclassicists both ancient and modern, they were favorites during the European Renaissance, and underwent a revival of interest among the English Modernist poets. By the end of the 1st century, a reaction against this form of poetry had set in, and Tacitus, Quintilian and Juvenal all testify to the resurgence of a more restrained, classicizing style under Trajan and the Antonine emperors.
http://www.artandpopularculture.com/Silver_Age_of_Latin_literature
Division or consolidation of communal lands in Western Europe into the carefully delineated and individually owned farm plots of modern times. Before enclosure, farmland was under the control of individual cultivators only during the growing season; after harvest and before the next growing season, the land was used by the community for the grazing of livestock and other purposes. In England the movement for enclosure began in the 12th century and proceeded rapidly from 1450 to 1640; the process was virtually complete by the end of the 19th century. In the rest of Europe, enclosure made little progress until the 19th century. Common rights over arable land have now been largely eliminated. Enclosure or inclosure (the latter is used in legal documents and place names) is the term used in England and Wales for the process by which arable farming in open field systems was ended. It is also applied to the process by which some commons (a piece of land owned by one person, but over which other people could exercise certain traditional rights, such as allowing their livestock to graze upon it) were fenced (enclosed) and deeded or entitled to one or more private owners, who would then enjoy the possession and fruits of the land to the exclusion of all others. The process of enclosure was sometimes accompanied by force, resistance, and bloodshed and remains among the most controversial areas of agricultural and economic history in England. Marxist and neo-Marxist historians argue that rich landowners used their control of state processes to appropriate public land for their private benefit. This created a landless working class that provided the labour required in the new industries developing in the north of England. For example: "In agriculture the years between 1760 and 1820 are the years of wholesale enclosure in which, in village after village, common rights are lost." "Enclosure (when all the sophistications are allowed for) was a plain enough case of class robbery." On the other hand, others have argued that this is perhaps an oversimplification, and economic studies have found that the better-off members of the European peasantry had encouraged and participated actively in enclosure, seeking to end the perpetual poverty of subsistence farming. "We should be careful not to ascribe to (enclosure) developments that were the consequence of a much broader and more complex process of historical change." "The impact of eighteenth and nineteenth century enclosure has been grossly exaggerated." Throughout the medieval and modern periods, piecemeal enclosure took place in which adjacent strips were fenced off from the common field. This was sometimes undertaken by small landowners, but more often by large landowners and lords of the manor. Significant enclosures (or emparkments) took place to establish deer parks. Some (but not all) of these enclosures took place with local agreement. There was a significant rise in enclosure during the Tudor period. These enclosures largely resulted in conversion of land use from arable to pasture – usually sheep farming. These enclosures were often undertaken unilaterally by the landowner. Enclosures during the Tudor period were often accompanied by a loss of common rights and could result in the destruction of whole villages. During the 18th and 19th centuries, enclosures were by means of local acts of Parliament, called the Inclosure Acts.
These "parliamentary" enclosures consolidated strips in the open fields into more compact units, and enclosed much of the remaining pasture commons or wastes. Parliamentary enclosures usually provided commoners with some other land in compensation for the loss of common rights, although often of poor quality and limited extent. Parliamentary enclosure was also used for the division and privatisation of common wastes (in the original sense of "uninhabited places"), such as fens, marshes, heathland, downland, moors. These enclosures turned common land into owned land, whereas field enclosures only segregated land that was already owned, and removed the common rights. But I do not think that this necessity of stealing arises only from hence; there is another cause of it, more peculiar to England.' 'What is that?' said the Cardinal: 'The increase of pasture,' said I, 'by which your sheep, which are naturally mild, and easily kept in order, may be said now to devour men and unpeople, not only villages, but towns; for wherever it is found that the sheep of any soil yield a softer and richer wool than ordinary, there the nobility and gentry, and even those holy men, the abbots not contented with the old rents which their farms yielded, nor thinking it enough that they, living at their ease, do no good to the public, resolve to do it hurt instead of good. They stop the course of agriculture, destroying houses and towns, reserving only the churches, and enclose grounds that they may lodge their sheep in them. The loss of agricultural labour also hurt others like millers whose livelihood relied on agricultural produce. Fynes Moryson reported on these problems in An Itinerary (1617): England abounds with corn [wheat and other grains], which they may transport, when a quarter (in some places containing six, in others eight bushels) is sold for twenty shillings, or under; and this corn not only serves England, but also served the English army in the civil wars of Ireland, at which time they also exported great quantity thereof into foreign parts, and by God's mercy England scarce once in ten years needs a supply of foreign corn, which want commonly proceeds of the covetousness of private men, exporting or hiding it. Yet I must confess, that daily this plenty of corn decreaseth, by reason that private men, finding greater commodity in feeding of sheep and cattle than in the plow, requiring the hands of many servants, can by no law be restrained from turning cornfields into enclosed pastures, especially since great men are the first to break these laws. By some accounts, 3/4ths to 9/10ths of the tenant farmers on some estates were evicted in the late medieval period. Other economic historians argue that forced evictions were probably rare. Landlords would turn to enclosure as an option when lands went unused. Initially, enclosure was not itself an offence, but where it was accompanied by the destruction of houses, half the profits would go to the Crown until the lost houses were rebuilt. (The 1489 act gave half the profits to the superior landlord who might not be the crown, but an act of 1536 allowed the Crown to receive this half share if the superior landlord had not taken action.) In 1515, conversion from arable to pasture became an offence. Once again, half the profits from conversion would go to the Crown until the arable land was restored. 
Neither the 1515 act nor the previous legislation was effective in stopping enclosure, so in 1517 Cardinal Wolsey established a commission of enquiry to determine where offences had taken place – and to ensure the Crown received its half of the profits. In 1607, beginning on May Eve in Haselbech, Northamptonshire and spreading to Warwickshire and Leicestershire throughout May, riots took place as a protest against the enclosure of common land. Known as The Midland Revolt, it drew considerable support and was led by Captain Pouch, otherwise known as John Reynolds, a tinker said to be from Desborough, Northamptonshire. He told the protestors he had authority from the King and the Lord of Heaven to destroy enclosures and promised to protect protesters by the contents of his pouch, carried by his side, which he said would keep them from all harm. (After he was captured, his pouch was opened - all that was in it was a piece of green cheese.) Thousands of people were recorded at Hillmorton, Warwickshire and at Cotesbach, Leicestershire. A curfew was imposed in the city of Leicester, as it was feared citizens would stream out of the city to join the riots. A gibbet was erected in Leicester as a warning, and was pulled down by the citizens. The Newton Rebellion was one of the last times that the peasantry of England and the gentry were in open armed conflict. Things had come to a head in early June. James I issued a Proclamation and ordered his Deputy Lieutenants in Northamptonshire to put down the riots. It is recorded that women and children were part of the protest. Over a thousand had gathered at Newton, near Kettering, pulling down hedges and filling ditches, to protest against the enclosures of Thomas Tresham. The Treshams were unpopular for their voracious enclosing of land - both the family at Newton and their better-known Roman Catholic cousins at nearby Rushton, the family of Francis Tresham, who had been involved two years earlier in the Gunpowder Plot and had apparently died in the Tower. Sir Thomas Tresham of Rushton was known as "the most odious man in the county". The old Roman Catholic gentry family of the Treshams had long argued with the emerging Puritan gentry family, the Montagus of Boughton, about territory. Now Tresham of Newton was enclosing common land - The Brand - that had been part of Rockingham Forest. Edward Montagu, one of the Deputy Lieutenants, had stood up against enclosure in Parliament some years earlier, but was now placed by the King in the position effectively of defending the Treshams. The local armed bands and militia refused the call-up, so the landowners were forced to use their own servants to suppress the rioters on 8 June 1607. The Royal Proclamation was read twice. The rioters continued in their actions, although at the second reading some ran away. The gentry and their forces charged. A pitched battle ensued. 40-50 people were killed and the ringleaders were hanged and quartered. No memorial to the event or to those killed exists. The Tresham family declined soon after. The Montagu family went on through marriage to become the Dukes of Buccleuch, one of the biggest landowners in Britain. Note that at this time "field" meant only the unenclosed open arable land – most of what would now be called "fields" would then have been called "closes". The only boundaries would be those separating the various types of land, and around the closes. In each of the two waves of enclosure, two different processes were used.
One was the division of the large open fields and meadows into privately controlled plots of land, usually hedged and known at the time as severals. In the course of enclosure, the large fields and meadows were divided and common access restricted. Most open-field manors in England were enclosed in this manner, with the notable exception of Laxton, Nottinghamshire and parts of the Isle of Axholme in North Lincolnshire. The history of enclosure in England is different from region to region. Not all areas of England had open-field farming in the medieval period. Parts of south-east England (notably parts of Essex and Kent) retained a pre-Roman system of farming in small enclosed fields. Similarly in much of west and north-west England, fields were either never open, or were enclosed early. The primary area of open field management was in the lowland areas of England in a broad band from Yorkshire and Lincolnshire diagonally across England to the south, taking in parts of Norfolk and Suffolk, Cambridgeshire, large areas of the Midlands, and most of south central England. These areas were most affected by the first type of enclosure, particularly in the more densely-settled areas where grazing was scarce and farmers relied on open field grazing after the harvest and on the fallow to support their animals. The second form of enclosure affected those areas, such as the north, the far south-west, and some other regions such as the East Anglian Fens, and the Weald, where grazing had been plentiful on otherwise marginal lands, such as marshes and moors. Access to these common resources had been an essential part of the economic life in these strongly pastoral regions, and in the Fens, large riots broke out in the seventeenth century, when attempts to drain the peat and silt marshes were combined with proposals to partially enclose them. Both economic and social factors drove the enclosure movement. In particular, the demand for land in the seventeenth century, increasing regional specialisation, engrossment in landholding and a shift in beliefs regarding the importance of "common wealth" (usually implying common livelihoods) as opposed to the "public good" (the wealth of the nation or the GDP) all laid the groundwork for a shift of support among elites to favour enclosure. Enclosures were conducted by agreement among the landholders (not necessarily the tenants) throughout the seventeenth century; enclosure by Parliamentary Act began in the eighteenth century. Enclosed lands normally could demand higher rents than unenclosed, and thus landlords had an economic stake in enclosure, even if they did not intend to farm the land directly. While many villagers received plots in the newly enclosed manor, for small landholders this compensation was not always enough to offset the costs of enclosure and fencing. Many historians believe that enclosure was an important factor in the reduction of small landholders in England, as compared to the Continent, though others believe that this process had already begun from the seventeenth and eighteenth centuries. Enclosure faced a great deal of popular resistance because of its effects on the household economies of smallholders and landless labourers. Common rights had included not just the right of cattle or sheep grazing, but also the grazing of geese, foraging for pigs, gleaning, berrying, and fuel gathering. During the period of parliamentary enclosure, employment in agriculture did not fall, but failed to keep pace with the growing population. 
Consequently, large numbers of people left rural areas to move into the cities, where they became labourers in the Industrial Revolution. By the end of the 19th century the process of enclosure was largely complete, in most areas just leaving a few pasture commons and village greens. Many landowners became rich through the enclosure of the commons, while many ordinary folk had a centuries-old right taken away. Land enclosure has been condemned as a gigantic swindle on the part of large landowners, and Oliver Goldsmith wrote "The Deserted Village" in 1770 deploring rural depopulation. An anonymous protest poem from the 17th century summed up the anti-enclosure feeling:

They hang the man, and flog the woman,
That steals the goose from off the common;
But let the greater villain loose,
That steals the common from the goose.

From 1450 to 1630, economies expanded alongside increasing poverty. The social framework of the manorial estate – and that of medieval society in general, including the town guilds of the burghers – was falling away. The old order had been centered on religious, theocentric values of continuity, stability, security and cooperative effort. These goods were accompanied by the ills of intolerance of change, rigid social stratification, little development, and a high degree of poverty. The debasement of the coinage was not seen as a cause of inflation (and therefore enclosures) until Somerset's reign as Protector of Edward VI. Up to this point enclosures were seen as the cause of inflation, not the outcome. When Thomas Smith tried to advise Edward Seymour (the 1st Duke of Somerset) on his response to enclosure (that it was a result of inflation, not a cause), he was ignored. It took until John Dudley (the 1st Duke of Northumberland)'s time as Protector for his finance minister William Cecil to realise and act on debasement to stop enclosure. The English Civil War spurred a major acceleration of enclosures. The parliamentary leaders supported the rights of landlords vis-a-vis the King, whose Star Chamber court, abolished in 1641, had provided the primary legal brake on the enclosure process. By dealing an ultimately crippling blow to the monarchy (which, even after the Restoration, no longer posed a significant challenge to enclosures) the Civil War paved the way for the eventual rise to power in the 18th century of what has been called a "committee of Landlords", a prelude to the UK's parliamentary system. The economics of enclosures also changed. Whereas earlier land had been enclosed in order to make it available for sheep farming, by 1650 the steep rise in wool prices had come to an end. Thereafter, the focus shifted to implementation of new agricultural techniques, including fertilizer, new crops, and crop rotation, all of which greatly increased the profitability of large-scale farms. The enclosure movement probably peaked from 1760 to 1832; by the latter date it had essentially completed the destruction of the medieval peasant community.
http://www.reference.com/browse/enclosure+movement
United Nations Framework Convention on Climate Change

The United Nations Framework Convention on Climate Change (UNFCCC or FCCC) is an international environmental treaty produced at the United Nations Conference on Environment and Development (UNCED), informally known as the Earth Summit, held in Rio de Janeiro in 1992. The treaty aimed at reducing emissions of greenhouse gases, pursuant to its supporters' belief in the global warming hypothesis. The treaty as originally framed set no mandatory limits on greenhouse gas emissions for individual nations and contained no enforcement provisions; it is therefore considered legally non-binding. Rather, the treaty included provisions for updates (called "protocols") that would set mandatory emission limits. The principal update is the Kyoto Protocol, which has become much better known than the UNFCCC itself. The FCCC was opened for signature on May 9, 1992. It entered into force on March 21, 1994. Its stated objective is "to achieve stabilization of greenhouse gas concentrations in the atmosphere at a low enough level to prevent dangerous anthropogenic interference with the climate system."

Parties (189): Afghanistan, Albania, Algeria, Angola, Antigua and Barbuda, Argentina, Armenia, Australia, Austria, Azerbaijan, The Bahamas, Bahrain, Bangladesh, Barbados, Belarus, Belgium, Belize, Benin, Bhutan, Bolivia, Bosnia and Herzegovina, Botswana, Brazil, Bulgaria, Burkina Faso, Burma, Burundi, Cambodia, Cameroon, Canada, Cape Verde, Central African Republic, Chad, Chile, China, Colombia, Comoros, Democratic Republic of the Congo, Republic of the Congo, Cook Islands, Costa Rica, Côte d'Ivoire, Croatia, Cuba, Cyprus, Czech Republic, Denmark, Djibouti, Dominica, Dominican Republic, Ecuador, Egypt, El Salvador, Equatorial Guinea, Eritrea, Estonia, Ethiopia, European Union, Fiji, Finland, France, Gabon, The Gambia, Georgia, Germany, Ghana, Greece, Grenada, Guatemala, Guinea, Guinea-Bissau, Guyana, Haiti, Honduras, Hungary, Iceland, India, Indonesia, Iran, Ireland, Israel, Italy, Jamaica, Japan, Jordan, Kazakhstan, Kenya, Kiribati, North Korea, South Korea, Kuwait, Kyrgyzstan, Laos, Latvia, Lebanon, Lesotho, Liberia, Libya, Liechtenstein, Lithuania, Luxembourg, The Former Yugoslav Republic of Macedonia, Madagascar, Malawi, Malaysia, Maldives, Mali, Malta, Marshall Islands, Mauritania, Mauritius, Mexico, Federated States of Micronesia, Moldova, Monaco, Mongolia, Morocco, Mozambique, Namibia, Nauru, Nepal, Netherlands, New Zealand, Nicaragua, Niger, Nigeria, Niue, Norway, Oman, Pakistan, Palau, Panama, Papua New Guinea, Paraguay, Peru, Philippines, Poland, Portugal, Qatar, Romania, Russia, Rwanda, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Samoa, San Marino, São Tomé and Príncipe, Saudi Arabia, Senegal, Serbia and Montenegro, Seychelles, Sierra Leone, Singapore, Slovakia, Slovenia, Solomon Islands, South Africa, Spain, Sri Lanka, Sudan, Suriname, Swaziland, Sweden, Switzerland, Syria, Tajikistan, Tanzania, Thailand, Togo, Tonga, Trinidad and Tobago, Tunisia, Turkey, Turkmenistan, Tuvalu, Uganda, Ukraine, United Arab Emirates, United Kingdom, United States, Uruguay, Uzbekistan, Vanuatu, Venezuela, Vietnam, Yemen, Zambia, Zimbabwe

Annex I and Annex II Countries, and Developing Countries

Signatories to the UNFCCC are split into three groups:
- Annex I countries (industrialised countries)
- Annex II countries (developed countries which pay for costs of developing countries)
- Developing countries.
Annex I countries agree to reduce their emissions (particularly carbon dioxide) to target levels below their 1990 emissions levels. If they cannot do so, they must buy emission credits or invest in conservation. Developing countries have no immediate restrictions under the UNFCCC. This serves three purposes:
- It avoids restrictions on growth, because pollution is strongly linked to industrial growth and developing economies can potentially grow very fast.
- It means that they cannot sell emissions credits to industrialised nations to permit those nations to over-pollute.
- They get money and technologies from the developed countries in Annex II.

Developing countries might become Annex I countries when they are sufficiently developed. Some opponents of the Convention argue that the split between Annex I and developing countries is unfair, and that both developing countries and developed countries need to reduce their emissions. Some countries claim that their costs of following the Convention requirements will stress their economy. These were some of the reasons given by George W. Bush, President of the United States, for doing as his predecessor did and not forwarding the signed Kyoto Protocol to the Senate.

U.N. Framework Convention on Climate Change (UNFCCC)

The United Nations Framework Convention on Climate Change (UNFCCC) was opened for signature at the 1992 United Nations Conference on Environment and Development (UNCED) conference in Rio de Janeiro (known by its popular title, the Earth Summit). On June 12, 1992, 154 nations signed the UNFCCC, which upon ratification committed signatories' governments to a voluntary "non-binding aim" to reduce atmospheric concentrations of greenhouse gases with the goal of "preventing dangerous anthropogenic interference with Earth's climate system." These actions were aimed primarily at industrialized countries, with the intention of stabilizing their emissions of greenhouse gases at 1990 levels by the year 2000; and other responsibilities would be incumbent upon all UNFCCC parties. The parties agreed in general that they would recognize "common but differentiated responsibilities," with greater responsibility for reducing greenhouse gas emissions in the near term on the part of developed/industrialized countries, which were listed and identified in Annex I of the UNFCCC and thereafter referred to as "Annex I" countries. On September 8, 1992, President Bush transmitted the UNFCCC for advice and consent of the U.S. Senate to ratification. The Foreign Relations Committee approved the treaty and reported it (Senate Exec. Rept. 102-55) October 1, 1992. The Senate consented to ratification on October 7, 1992, with a two-thirds majority vote. President Bush signed the instrument of ratification October 13, 1992, and deposited it with the U.N. Secretary General. According to terms of the UNFCCC, having received over 50 countries' instruments of ratification, it entered into force March 21, 1994. Since the UNFCCC entered into force, the parties have been meeting annually in Conferences of the Parties (COP) to assess progress in dealing with climate change, and beginning in the mid-1990s, to negotiate the Kyoto Protocol to establish legally binding obligations for developed countries to reduce their greenhouse gas emissions.

COP-1, The Berlin Mandate

The UNFCCC Conference of Parties met for the first time in Berlin, Germany in the spring of 1995, and voiced concerns about the adequacy of countries' abilities to meet commitments under the Convention.
These concerns were expressed in a U.N. ministerial declaration known as the "Berlin Mandate", which established a 2-year Analytical and Assessment Phase (AAP) to negotiate a "comprehensive menu of actions" from which countries could pick and choose future options to address climate change that, for them individually, made the best economic and environmental sense. The Berlin Mandate exempted non-Annex I countries from additional binding obligations, in keeping with the principle of "common but differentiated responsibilities" established in the UNFCCC, even though, collectively, the larger, newly industrializing countries were expected to be the world's largest emitters of greenhouse gases 15 years hence.

COP-2, Geneva, Switzerland

The Second Conference of the Parties to the UNFCCC (COP-2) met in July 1996 in Geneva, Switzerland. Its Ministerial Declaration was adopted on July 18, 1996, and reflected a U.S. position statement presented at that meeting by Timothy Wirth, then Under Secretary for Global Affairs at the U.S. State Department, which:
- Accepted the scientific findings on climate change proffered by the Intergovernmental Panel on Climate Change (IPCC) in its second assessment (1995);
- Rejected uniform "harmonized policies" in favor of flexibility;
- Called for "legally binding mid-term targets."

COP-3, The Kyoto Protocol on Climate Change

The Kyoto Protocol to the United Nations Framework Convention on Climate Change was adopted at COP-3 in December 1997 in Kyoto, Japan, after intensive and tense negotiations. Most industrialized nations and some central European economies in transition (all defined as Annex B countries) agreed to legally binding reductions in greenhouse gas emissions of an average of 6%-8% below 1990 levels between the years 2008-2012, defined as the first emissions budget period. The United States would be required to reduce its total emissions by an average of 7% below 1990 levels. The Clinton Administration sought funding to address climate change; its FY2001 budget request included funding for the Climate Change Technology Initiative (CCTI), first introduced in the FY1999 budget, although somewhat reduced funding for the climate technology initiatives had been provided in previous years.

COP-4, Buenos Aires

COP-4 took place in Buenos Aires in November 1998. It had been expected that the remaining issues unresolved in Kyoto would be finalized at this meeting. However, the complexity and difficulty of finding agreement on these issues proved insurmountable, and instead the parties adopted a 2-year "Plan of Action" to advance efforts and to devise mechanisms for implementing the Kyoto Protocol, to be completed by 2000.

COP-5, Bonn, Germany

The Fifth Conference of the Parties to the U.N. Framework Convention on Climate Change met in Bonn, Germany, between October 25 and November 4, 1999. It was primarily a technical meeting and did not reach major conclusions.

COP-6, The Hague, Netherlands

When COP-6 convened November 13-25, 2000, in The Hague, Netherlands, discussions evolved rapidly into a high-level negotiation over the major political issues. These included major controversy over the United States' proposal to allow credit for carbon "sinks" in forests and agricultural lands, satisfying a major proportion of the U.S.
emissions reductions in this way; disagreements over consequences for non-compliance by countries that did not meet their emission reduction targets; and difficulties in resolving how developing countries could obtain financial assistance to deal with adverse effects of climate change and to meet their obligations to plan for measuring and possibly reducing greenhouse gas emissions. In the final hours of COP-6, despite some compromises agreed between the United States and some EU countries, notably the United Kingdom, the EU countries as a whole, led by Denmark and Germany, rejected the compromise positions, and the talks in The Hague collapsed. Jan Pronk, the President of COP-6, suspended COP-6 without agreement, with the expectation that negotiations would later resume. It was later announced that the COP-6 meetings (termed "COP-6 bis") would be resumed in Bonn, Germany, in the second half of July. The next regularly scheduled meeting of the parties to the UNFCCC - COP-7 - had been set for Marrakech, Morocco, in October-November 2001.

COP-6 "bis," Bonn, Germany

When the COP-6 negotiations resumed July 16-27, 2001, in Bonn, Germany, little progress had been made on resolving the differences that had produced an impasse in The Hague. However, this meeting took place after George W. Bush had become U.S. President and had rejected the Kyoto Protocol in March; as a result, the United States delegation declined to participate in the negotiations related to the Protocol and chose instead to act as observers at the meeting. As the other parties negotiated the key issues, agreement was reached on most of the major political issues, to the surprise of most observers, given the low level of expectations that preceded the meeting. The agreements included:
- Mechanisms: The "flexibility" mechanisms which the United States had strongly favored as the Protocol was initially put together, including emissions trading, joint implementation, and the Clean Development Mechanism (CDM), which provides funding from developed countries for emissions reduction activities in developing countries, with credit for the donor countries. One of the key elements of this agreement was that there would be no quantitative limit on the credit a country could claim from use of these mechanisms, but that domestic action must constitute a significant element of the efforts of each Annex B country to meet its targets.
- Carbon sinks: Credit was agreed for broad activities that absorb carbon from the atmosphere or store it, including forest and cropland management and revegetation, with no overall cap on the amount of credit that a country could claim for sinks activities. In the case of forest management, an Appendix Z establishes country-specific caps for each Annex I country; for example, a cap of 13 million tons could be credited to Japan (which represents about 4% of its base-year emissions). For cropland management, countries could receive credit only for carbon sequestration increases above 1990 levels.
- Compliance: Final action on compliance procedures and mechanisms that would address non-compliance with Protocol provisions was deferred to COP-7, but the agreement included broad outlines of consequences for failing to meet emissions targets: a requirement to "make up" shortfalls at a rate of 1.3 tons for every 1 ton of shortfall, suspension of the right to sell credits for surplus emissions reductions, and a required compliance action plan for those not meeting their targets.
- Financing: Three new funds were agreed upon: a special climate change fund to provide assistance for needs associated with climate change; a least-developed-country fund to support National Adaptation Programs of Action; and a Kyoto Protocol adaptation fund supported by a CDM levy and voluntary contributions.
A number of operational details attendant upon these decisions remained to be negotiated and agreed upon, and these became the major issues of the COP-7 meeting that followed.

COP-7, Marrakech, Morocco

At the COP-7 meeting in Marrakech, Morocco, October 29-November 10, 2001, negotiators in effect completed the work of the Buenos Aires Plan of Action, finalizing most of the operational details and setting the stage for nations to ratify the Protocol. The United States delegation continued to act as observers, declining to participate in active negotiations. Other parties continued to express their hope that the United States would re-engage in the process at some point, but indicated their intention to seek ratification by the requisite number of countries to bring the Protocol into force (55 countries representing 55% of developed-country emissions of carbon dioxide in 1990). A target date for bringing the Protocol into force was put forward: the August-September 2002 World Summit on Sustainable Development (WSSD) to be held in Johannesburg, South Africa. The main decisions at COP-7 included:
- Operational rules for international emissions trading among parties to the Protocol and for the CDM and joint implementation;
- A compliance regime that outlines consequences for failure to meet emissions targets but defers to the parties to the Protocol, once it is in force, the decision on whether these consequences are legally binding;
- Accounting procedures for the flexibility mechanisms;
- A decision to consider at COP-8 how to achieve a review of the adequacy of commitments that might move toward discussions of future developing-country commitments.

COP-8, New Delhi, India

COP-9, Milan, Italy

COP-10, Buenos Aires, Argentina
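The quantitative commitments described in the COP-3 and COP-6 "bis" sections above come down to simple arithmetic: a party's Kyoto target is its 1990 baseline reduced by the agreed percentage, and under the compliance outline a shortfall would have to be made up in the following period at 1.3 tons for each ton of excess. A minimal Python sketch of that arithmetic follows; the baseline figure is invented for illustration, and only the 7% and 1.3:1 figures come from the text above.

# Worked illustration of the Kyoto-style arithmetic described above.
# The baseline figure is invented; only the 7% and 1.3:1 ratios come from the text.

def kyoto_target(baseline_1990: float, reduction_pct: float) -> float:
    """Allowed average annual emissions for the first budget period (2008-2012)."""
    return baseline_1990 * (1.0 - reduction_pct / 100.0)

def make_up_obligation(actual: float, target: float, ratio: float = 1.3) -> float:
    """Tons to be 'made up' in the following period, at 1.3 tons per ton of shortfall."""
    excess = max(actual - target, 0.0)
    return excess * ratio

baseline = 1000.0                       # hypothetical 1990 emissions, Mt CO2-equivalent
target = kyoto_target(baseline, 7.0)    # 930.0 Mt for a 7% reduction commitment
print(target)
print(make_up_obligation(actual=960.0, target=target))   # (960 - 930) * 1.3 = 39.0 Mt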
Agner Fog: Cultural selection © 1999

2. The history of cultural selection theory

Lamarck and Darwin

The idea of cultural selection first arose in Victorian England - a culture that had more success in the process of cultural selection than any other society. But before we turn to this theory we must take a look at the theory of biological evolution, founded by Lamarck and Darwin. The French biologist Jean-Baptiste de Lamarck was the first to talk about the evolution of species. He believed that an animal which has acquired a beneficial trait or ability by learning is able to transmit this acquired trait to its offspring (Lamarck 1809). The idea that acquired traits can be inherited is called lamarckism after him. Half a century later the English biologist Charles Darwin published the famous book "On the Origin of Species", in which he rejected Lamarck's hypothesis and put forward the theory that the evolution of species happens by a combination of variation, selection, and reproduction. It was a big problem for the evolutionary thinkers of that time that they did not know the laws of inheritance. The Austrian monk Gregor Mendel was indeed carrying out, at around the same time, the series of experiments that led him to the laws of inheritance that today carry his name and constitute the foundation of modern genetics, but Mendel's important work did not become generally known until the beginning of the twentieth century and was thus unknown to nineteenth-century British philosophers. They knew nothing about genes or mutations, and consequently Darwin was unable to explain where the random variations came from. As a consequence of the criticism against his theory, Darwin had to revise his Origin of Species and assume that acquired traits can be inherited, and that this was the basis of the variation that was necessary for natural selection to be possible (Darwin 1869, 1871). In 1875 the German biologist August Weismann published a series of experiments that disproved the theory that acquired traits can be inherited. His book, which was translated into English in 1880-82, caused lamarckism to lose many of its adherents. Although Darwin had evaded the question of the descent of man in his first book, it was fairly obvious that the principle of natural selection could apply to human evolution. At that time no distinction was drawn between race and culture, and hence the evolution from the savage condition to modern civilized society came to be described in darwinian terms. The earliest example of such a description is an essay by the British economist Walter Bagehot in The Fortnightly in 1867. Bagehot imagined that the earliest humans were without any kind of organization, and he described how social organization might have originated: "But when once polities were begun, there is no difficulty in explaining why they lasted. Whatever may be said against the principle of 'natural selection' in other departments, there is no doubt of its predominance in early human history. The strongest killed out the weakest, as they could. And I need not pause to prove that any form of polity is more efficient than none; that an aggregate of families owning even a slippery allegiance to a single head, would be sure to have the better of a set of families acknowledging no obedience to anyone, but scattering loose about the world and fighting where they stood. [...] What is there requisite is a single government - call it Church or State, as you like - regulating the whole of human life. [...]
The object of such organizations is to create what may be called a cake of custom." When we look at this quotation with contemporary eyes, it seems like a clear example of cultural selection: the best-organized groups vanquished the poorly organized groups. But in Bagehot's frame of reference the concept of cultural selection hardly had any meaning. As a consequence of lamarckism no distinction was drawn between social and organic inheritance. Nineteenth-century thinkers believed that customs, habits, and beliefs would precipitate in the nervous tissue within a few generations and become part of our innate dispositions. As no distinction was drawn between race and culture, social evolution was regarded as racial evolution. Initially Bagehot regarded his model for human evolution as analogous with, but not identical to, Darwin's theory - not because of the difference between social and organic inheritance, but because of the difference between humans and animals. Bagehot did not appreciate that humans and animals have a common descent. He even discussed whether the different human races each have their own Adam and Eve (Bagehot 1869). He did, of course, revise his opinions in 1871 when Darwin published The Descent of Man. Despite these complications, I do consider Bagehot important for the theory of cultural selection because he focuses on customs, habits, beliefs, political systems and other features which today are regarded as essential parts of culture, rather than physical traits which today we mainly attribute to organic inheritance. It is important for his theory that customs etc. can be transmitted not only from parent to child, but also from one family to another. When one people defeats another people in war and conquers their land, the victors' art of war will also be transferred to or imitated by the defeated people, so that an ever stronger art of war will spread. Interestingly, unlike later philosophers, Bagehot does not regard this natural evolution as necessarily beneficial: it favors strength in war, but not necessarily other skills (Bagehot 1868). The anthropologist Edward B. Tylor has had a significant influence on evolutionary thought and on the very concept of culture. The idea that modern civilized society has arisen by a gradual evolution from more primitive societies is primarily attributed to Tylor. The predominant view at that time was that savages and barbarian peoples had come into being by a degeneration of civilized societies. Tylor's books contain a comprehensive description of customs, techniques and beliefs in different cultures, and how these have changed. He discusses how similarities between cultures can be due to either diffusion or parallel independent evolution. Darwin's theory of natural selection is not explicitly mentioned, but he is no doubt inspired by Darwin, as is obvious from the following quotation: "History within its proper field, and ethnography over a wider range, combine to show that the institutions which can best hold their own in the world gradually supersede the less fit ones, and that this incessant conflict determines the general resultant course of culture." (Tylor 1871, vol. 1:68-69). Tylor was close to describing the principle of cultural selection as early as 1865, i.e.
before the abovementioned publications by Bagehot: "On the other hand, though arts which flourish in times of great refinement or luxury, and complex processes which require a combination of skill or labour hard to get together and liable to be easily disarranged, may often degenerate, yet the more homely and useful the art, and the less difficult the conditions for its exercise, the less likely it is to disappear from the world, unless when superseded by some better device." (Tylor 1865:373). While Darwin was dealing with the survival of the fittest, Tylor was more concerned with the survival of the unfit. The existence of outdated institutions and customs, which no longer had any usefulness, was Tylor's best proof that modern society had evolved from a more primitive condition. Tylor's attitude towards darwinism seems to have been rather ambivalent, since his only reference to Darwin is the following enigmatic statement in the preface to the second edition of his principal work Primitive Culture: "It may have struck some readers as an omission, that in a work on civilization insisting so strenuously on a theory of development or evolution, mention should scarcely have been made of Mr. Darwin and Mr. Herbert Spencer, whose influence on the whole course of modern thought on such subjects should not be left without formal recognition. This absence of particular reference is accounted for by the present work, arranged on its own lines, coming scarcely into contact of detail with the previous works of these eminent philosophers." (Tylor 1873). This ambiguity has led to disagreement among historians of ideas about Tylor's relationship to darwinism. Greta Jones (1980:20), for example, writes that Tylor dissociated himself from darwinism, whereas Opler (1965) goes to great lengths to demonstrate darwinian tendencies in Tylor's Primitive Culture, and even categorizes Tylor as a cultural darwinist. This categorization is a considerable exaggeration since Tylor did not have any coherent theory of causation (Harris 1969, p. 212). A central issue has been whether nineteenth-century evolutionary thinkers were racist or not, i.e. whether they attributed the supremacy of civilized peoples to organic inheritance or to culture. This controversy is meaningless, however, because no clear distinction was drawn at that time between organic and social inheritance. Tylor used the word race synonymously with culture or tribe, as did most of his contemporaries. As early as 1852, before the publication of Darwin's Origin of Species, the prominent English philosopher Herbert Spencer described the principle that the most fit individuals survive while the less fit die in the struggle for existence. This principle initially had only minor importance in Spencer's evolutionary philosophy, which was based on the idea that all kinds of evolution follow the same fundamental principles. The Universe, the Earth, the species, the individuals, and society all evolve by the same pattern and in the same direction, according to Spencer, namely towards ever more differentiation and equilibrium. It was all part of one and the same process: "... there are not several kinds of Evolution having certain traits in common, but one Evolution going on everywhere after the same manner." (Spencer, H. 1862). In 1857, only two years before Darwin's book about the origin of species, Spencer described the cause of this evolution as "that ultimate mystery which must ever transcend human intelligence". (Spencer, H. 1857).
The evolution of societies goes through four stages, according to Spencer: out of the unorganized savage condition came the first barbarian societies of nomads and herders. These were later united into towns and nation states, called militant societies. The last stage in the evolution is called the industrial society, which will continue to evolve towards equilibrium, zero growth, peace and harmony. Social evolution is primarily determined by external factors, such as climate, fertility of the soil, vegetation, fauna, and the basic characteristics of the humans themselves. Secondary factors include modifications imposed by the humans on their environment, themselves, and their society, as well as interaction with other societies. The main driving force in this evolution is population growth. The continued increase in population necessitates ever more effective food production methods, and hence an increasing degree of organization, division of labor, and technological progress. War plays a significant role in the transition from the barbarian to the militant society. Any war or threat of war necessitates the formation of alliances and the establishment of a strong central government. The militant society is therefore characterized by a strong monopoly of power to which the population must submit. The end result of a war is often the fusion of two societies into one bigger society, whereby the two cultures are mixed and the best aspects of each culture are preserved. This creation of bigger and bigger states makes possible the last step in Spencer's evolutionary scheme: industrialization. The rigid and totalitarian central government is still an impediment to industrialization because it obstructs private economic initiatives and scientific progress. The militant society will therefore in times of peace move towards more individual freedom and democracy, and hence become what Spencer calls the industrial society (Spencer, H. 1873, 1876). Charles Darwin's book about the origin of species exerted an important influence on Spencer's philosophy, although he never totally rejected lamarckism. The principle of the survival of the fittest is only applicable to the evolution of species and societies, not to the evolution of the Earth or the Universe, nor to the ontogenetic development of the individual. The principle of natural selection could therefore not acquire the same central position in Spencer's evolutionary thought that it had in Darwin's. Spencer applied the principle of the survival of the fittest to the formation of the first primitive societies in the same way as Bagehot did: "... this formation of larger societies by the union of smaller ones in war, and this destruction or absorption of the smaller un-united societies by the united larger ones, is an inevitable process through which the varieties of men most adapted for social life, supplant the less adapted varieties." (Spencer, H. 1893) Just like Bagehot and Tylor, Spencer hardly distinguished between social and organic inheritance. It is therefore difficult to decide whether the above quotation refers to genetic or cultural selection. Spencer does, however, apply the principle of natural selection to phenomena which from a contemporary point of view can only be regarded as social heritage.
Spencer describes the origin of religion in this way: "If we consider that habitually the chief or ruler, propitiation of whose ghost originates a local cult, acquired his position through successes of one or other kind, we must infer that obedience to the commands emanating from him, and maintenance of the usages he initiated, is, on the average of cases, conducive to social prosperity so long as conditions remain the same; and that therefore this intense conservatism of ecclesiastical institutions is not without a justification. Even irrespective of the relative fitness of the inherited cult to the inherited social circumstances, there is an advantage in, if not indeed a necessity for, acceptance of traditional beliefs, and consequent conformity to the resulting customs and rules." (Spencer, H. 1896). The principle of the survival of the fittest can obviously lead to a philosophy of the right of the superior forces, i.e. a laissez-faire policy. To Spencer this principle applied primarily to the individual. He was against any kind of social policy for the benefit of the poor and weak individuals. Spencer was a leading advocate of "competitive individualism" in economic and social matters (Jones, G. 1980). He does not see egoism and altruism as opposites, but as two sides of the same coin. Whoever wants the best for himself also wants the best for society because he is part of society, and egoism thereby becomes an important driving force in the evolution of society (Spencer, H. 1876). Spencer did not, however, support a laissez-faire policy when it came to international wars (Schallberger 1980). He was very critical of Britain's increasing militarization and imperialism, which he saw as an evolutionary retrogression. He also warned that in modern society it is mostly the strongest men who go to war and die, whereas the weakest remain behind and reproduce. Persistent optimist that he was, Spencer still believed that wars were a transitory stage in human evolutionary history: "But as there arise higher societies, implying individual characters fitted for closer co-operation, the destructive activities exercised by such higher societies have injurious re-active effects on the moral natures of their members - injurious effects which outweigh the benefits resulting from extirpation of inferior races. After this stage has been reached, the purifying process, continuing still an important one, remains to be carried on by industrial war - by a competition of societies during which the best, physically, emotionally, and intellectually, spread most, and leave the least capable to disappear gradually, from failing to leave a sufficiently-numerous posterity." (Spencer, H. 1873). Spencer's theories have first and foremost been criticized for the paradox that the free rein of the superior forces should lead to harmony. According to his opponents, he denied the disadvantages of capitalist society in order to maintain his a priori belief that evolution is the same as progress. It is said that Spencer in his later years became more disillusioned and began to recognize this problem (Schallberger 1980).
The French historian of literature Ferdinand Brunetière was inspired by Darwin's evolutionary theory and thought that literature and other arts evolved according to a set of rules which were analogous to, but not identical to, the rules that govern biological evolution: "And already, if the appearance of certain species at a given point in space and time has the effect of causing the disappearance of certain other species; or again, if it is true that the struggle for life is never fiercer than between closely related species, do not examples present themselves in abundance to remind us that it is no different in the history of literature and art?" (Brunetière 1890, translated from the French). Although the concept of cultural inheritance is not explicitly mentioned by Brunetière, he does undeniably distinguish between race and culture. He says that the evolution of literature and art depends on race as well as on environment, social and historical conditions, and individual factors. Furthermore, he does distinguish between evolution and progress. The first to give a precise formulation of cultural selection theory was Leslie Stephen. In his book The Science of Ethics (1882) he draws a clear distinction between social and organic evolution, and explains the difference between these two processes by examples such as the following: "Improved artillery, like improved teeth, will enable the group to which it belongs to extirpate or subdue its competitors. But in another respect there is an obvious difference. For the improved teeth belong only to the individuals in whom they appear and to the descendants to whom they are transmitted by inheritance; but the improved artillery may be adopted by a group of individuals who form a continuous society with the original inventor. The invention by one is thus in certain respects an invention by all, though the laws according to which it spreads will of course be highly complex." The distinction between cultural and organic evolution is important to Stephen because organic evolution is so slow that it has no relevance in social science. Stephen also discusses what the unit of selection is. In primitive tribal wars it may be an entire tribe that is extinguished and replaced by another tribe with a more effective art of war; but in modern wars between civilized states it is rather one political system winning over another, while the greater part of the defeated people survive. Ideas, too, can be selected in a process which does not depend on the birth and death of people. Stephen is thus aware that different phenomena spread by different mechanisms, as we can see from the following quotation: "Beliefs which give greater power to their holders have so far a greater chance of spreading as pernicious beliefs would disappear by facilitating the disappearance of their holders. This, however, expresses what we may call a governing or regulative condition, and does not give the immediate law of diffusion. A theory spreads from one brain to another in so far as one man is able to convince another, which is a direct process, whatever its ultimate nature, and has its own laws underlying the general condition which determines the ultimate survival of different systems of opinion." (Stephen 1882). Leslie Stephen's brilliant theories of cultural evolution have largely been ignored and seem to have had no influence on later philosophers. Benjamin Kidd's work "Social Evolution" from 1894, for instance, does not mention cultural selection.
Benjamin Kidd was inspired by both Marx and Spencer (mostly Spencer) but criticized both. It may seem as if he tried to strike the golden mean. He granted to the Marxists that the members of the ruling class were not superior. He believed that the ruling families were degenerating, so that new rulers had to be recruited from below. He was therefore against privileges. He denied the innate intellectual superiority of the white race, ascribing its apparent superiority to social heritage, by which he meant accumulated knowledge. On the other hand he agreed with the racists that the English race was superior when it came to "social efficiency", by which he meant the ability to organize and to suppress egoistic instincts to the benefit of the community and the future. Kidd attributed this altruism to the religious instinct. Curious as it may seem, he explained the evolution of religion by natural selection of the strongest race on the basis of organic inheritance. Although Kidd refers to Leslie Stephen in other contexts, he never mentions selection based on social heritage. As a consequence of Weismann's rejection of lamarckism, Kidd saw an eternal competition as necessary for the continued evolution of the race. He therefore rejected socialism, which he believed would lead to degeneration.

2.2 Social darwinism

The difficulty in distinguishing between social and organic inheritance continued until well after World War I. The mass psychologist William McDougall, for example, described the selection of populations on the basis of religion, military strength, or economic competence, without talking about social inheritance. These characteristics were, in McDougall's understanding, based on inborn dispositions in the different races (McDougall 1908, 1912). This focus on natural selection and the survival of the fittest as the driving force in the evolution of society paved the way for a multitude of philosophies that glorified war and competition. The Aryan race was regarded as superior to all other races, and the proofs were seen everywhere: Australians, Maoris, American Indians, and Africans - all succumbed in the competition with the white man. The term social darwinism was introduced in 1885 by Spencer's opponents and has since then been applied to any social philosophy based on darwinism (Bannister 1979). The definition of this term has been lax and varying, depending on what one wanted to include under this invective. It was Spencer, not Darwin, who coined the expression "the survival of the fittest". Implicit in this formulation lies the assumption that fittest = best, i.e. the one who survives in the competition is the best. Only many years later was it realized that this expression is a tautology, because fitness is indeed defined as the ability to survive - hence: the survival of the survivor (Peters 1976). An implicit determinism was also buried in Darwin's expression "natural selection". What was natural was also beneficial and desirable. Humans and human society were, in the worldview of the social darwinists, part of nature, and the concept of naturalness had then, as it has today, an almost magical appeal. Regarding man as part of nature must, as its logical consequence, mean that everything human is natural - nothing is unnatural. The concept of naturalness is therefore meaningless, but nobody seems to have realized that this was no objective category, but an arbitrary value-laden concept. By calling evolution natural, one precludes oneself from choosing.
Everything is left to the free rein of the superior forces. Nobody dared to break the order of nature, or to question the desirability of natural selection. Evolution and progress were synonyms. Social darwinism was used to justify all kinds of liberalism, imperialism, racism, nazism, fascism, eugenics, etc. I shall refrain from listing the numerous ideologies that social darwinism has fostered - many books have already been written on that subject - but merely remark that social darwinism was not rejected until the Second World War had demonstrated the horrors to which this line of thought may lead. The American sociologist Albert G. Keller criticized the previous social darwinists for basing their evolutionary theory on organic inheritance (1916). Referring to Weismann, he rejected the idea that acquired characteristics such as traditions and morals could be inherited. Keller was inspired by Darwin's general formula for biological evolution: that the conjoined effect of variation, selection and reproduction leads to adaptation. By simple analogy he defined social variation, social selection, and social reproduction. Keller regarded this idea as his own. He did of course refer to several British social thinkers, including Spencer and Bagehot, but he interpreted their theories as based on organic inheritance. He had no knowledge of Leslie Stephen. Keller's book is a systematic examination of the three factors: variation, selection, and reproduction, and hence the first thorough representation of cultural selection theory. Many years would pass before another equally exhaustive discussion of cultural selection was published. Keller described many different selection mechanisms. He used the term automatic selection to designate the outcome of conflicts. This could happen with or without bloodshed. The opposite of automatic selection was labeled rational selection, i.e. the result of rational decisions based on knowledge. Keller drew a clear distinction between biological and cultural selection and between biological and cultural fitness. He maintained that the two processes were in conflict with each other and would lead in different directions (Keller 1916). Social reproduction was carried by tradition, education, belief, and worship of ancestors. Religion was described as a very strong preserving and guiding force: "Discipline was precisely what men needed in the childhood of the race and have continued to require ever since. Men must learn to control themselves. Though the regulative organization exercised considerable discipline, its agents were merely human; the chief had to sleep occasionally, could not be everywhere at once, and might be deceived and evaded. Not so the ghosts and spirits. The all-seeing daimonic eye was sleepless; no time or place was immune from its surveillance. Detection was sure. Further, the penalty inflicted was awesome. Granted that the chief might beat or maim or fine or kill, there were yet limits to what he could do. The spirits, on the other hand, could inflict strange agonies and frightful malformations and transformations. Their powers extended even beyond the grave and their resources for harm outran the liveliest imaginings [...] there is no doubt that its disciplinary value has superseded all other compulsions to which mankind has ever been subject." (Sumner & Keller 1927).
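Keller's three factors - variation, selection, and reproduction - are, in modern terms, the ingredients of any selection algorithm. Purely as an illustration of that general scheme (and not of anything in Keller's own work), a minimal simulation of two competing cultural variants might look like the following Python sketch; the fitness values and parameters are arbitrary.

# Minimal toy model of cultural selection: variation, selection, reproduction.
# Fitness values and parameters are arbitrary; this illustrates the general
# scheme only, not any particular theory discussed in the text.
import random

def step(population: list[str], fitness: dict[str, float], mutation_rate: float = 0.01) -> list[str]:
    """One generation: reproduce variants in proportion to fitness, with rare variation."""
    weights = [fitness[v] for v in population]
    offspring = random.choices(population, weights=weights, k=len(population))  # selection + reproduction
    variants = list(fitness)
    return [random.choice(variants) if random.random() < mutation_rate else v   # variation
            for v in offspring]

fitness = {"custom_A": 1.0, "custom_B": 1.2}      # custom_B is slightly easier to transmit
population = ["custom_A"] * 95 + ["custom_B"] * 5
for _ in range(100):
    population = step(population, fitness)
print(population.count("custom_B"), "of", len(population), "carry custom_B")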
Keller's criticism of social darwinism (1916) was purely scientific, not political, and he was an adherent of eugenics, which until the Second World War was widely regarded as a progressive idea. Spencer imagined society as an organism, where the different institutions are comparable with those organs in an organism that have similar functions. The government, for example, was regarded as analogous with a brain, and roads were paralleled with veins. This metaphor has been popular among later social scientists and led to a line of thought called functionalism. This theoretical school is concerned with analyzing what function different institutions have in society. Functionalism is therefore primarily a static theory, which seldom concerns itself with studying change. Even though evolutionism was strongly criticized in this period, there was no fundamental contradiction between evolutionism and functionalism, and some outstanding functionalists have expressed regret that evolutionism was unpopular: "Evolutionism is at present rather unfashionable. Nevertheless, its main assumptions are not only valid, but also they are indispensable to the field-worker as well as to the student of theory." (Malinowski 1944). Functionalists defended their lack of interest in evolutionary theory by claiming that a structural and functional analysis of society must precede any evolutionary analysis (Bock 1963). One of the most famous anthropologists, Alfred R. Radcliffe-Brown, had the same view on evolutionism as his equally famous colleague Bronislaw Malinowski (Radcliffe-Brown 1952). He drew a distinction between different kinds of changes in a society: firstly, the fundamental changes in society as an adaptation to altered external conditions; secondly, the adaptation of different social institutions to each other; and thirdly, the adaptation of individuals to these institutions. Radcliffe-Brown described these changes only in general terms as "adjustment" and "adaptation". Malinowski, on the other hand, goes into more detail with evolutionary theory. A cultural phenomenon can, according to Malinowski, be introduced into a society either by innovation or by diffusion from another society. The maintenance of the phenomenon then depends on its influence on the fitness of the culture, or its "survival value". Malinowski attributes great importance to diffusion in this context. Since cultural phenomena, as opposed to genes, can be transmitted from one individual to another or from one society to another, wars should not be necessary for the process of cultural evolution, according to Malinowski. A degenerating society can either be incorporated into a more effective society or adopt the institutions of the higher culture. This selection process will result in greater effectiveness and improved life conditions (Malinowski 1944). A synthesis between evolutionism and functionalism should certainly be possible, since selection theory gives a possible connection between the function of a cultural institution and its origin. A functional institution will win over a less effective institution in the process of cultural selection (Dore 1961). Considering the domination of functionalist thought, it is no surprise that evolutionism experienced a renaissance from about 1950. The name "neo-evolutionism" implies that this is something new, which is somewhat misleading. Some neo-evolutionists rejected this term and called their science "plain old evolutionism" - and so it was! (Sahlins & Service 1960, p. 4).
The tradition from Spencer and Tylor was continued without much novel thinking. The neo-evolutionists focused on describing the evolution of societies through a number of stages, finding similarities between parallel evolutionary processes, and finding a common formula for the direction of evolution. One important difference from nineteenth-century evolutionism was that the laws of biological inheritance were now known to everyone. No one could go on confusing genetic and social inheritance, and a clear distinction was drawn between racial and social evolution. Theories were no longer racist, and the old social darwinism was rejected. Whereas genetic inheritance can only go from parent to child, the cultural heritage can be transmitted in all directions, even between unrelated peoples. The neo-evolutionists therefore found diffusion important. They realized that a culture can die without the people carrying that culture being extinguished. In other words, cultural evolution does not, unlike genetic evolution, depend on the birth and death of individuals (Childe 1951). An important consequence of diffusion is convergence. In prehistoric primitive societies social evolution was divergent. Each tribe adapted specifically to its environment. But in modern society communication is so effective that diffusion plays a major role. All cultures move in the same direction because advantageous innovations spread from one society to another, hence convergence (Harding 1960, Mead 1964). The neo-evolutionists considered it important to find a universal law describing the direction of evolution: "To be an evolutionist, one must define a trend in evolution..." (Parsons 1966, p. 109). And there were many suggestions as to what this trend was. Childe (1951) maintained that cultural evolution proceeded in the same direction as biological evolution, and in fact had replaced the latter. As an example, he mentioned that we put on a fur coat when it is cold instead of developing a fur, as the animals do. Spencer had already characterized the direction of evolution by ever increasing complexity and integration, and this idea still had many adherents among the neo-evolutionists (Campbell 1965, Eder 1976). To Leslie White (1949) integration meant strong political control and ever greater political units. This integration was not a goal in itself but a means towards the true goal of evolution: the greatest possible and most effective utilization of energy. White argued in thermodynamic terminology for the view that the exploitation of energy was the universal measure of cultural evolution. He expressed this with the formula: Energy × Technology → Culture. Talcott Parsons (1966), among others, characterized the direction of evolution as an ever growing accumulation of knowledge and an improvement of the adaptability of humans (Sahlins 1960; Kaplan, D. 1960; Parsons 1966). Yehudi Cohen (1974) has listed several criteria which he summarizes as man's attempts to free himself from the limitations of his habitat. Zoologist Alfred Emerson defined cultural evolution as increasing homeostasis (self-regulation). He was criticized for an all-embracing, imprecise, and value-laden use of this concept (Emerson 1956).
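White's formula above, restored here with its arrow, is often summarized in later accounts as "White's law"; a compact rendering of that usual reading (a paraphrase, not White's own notation) is:

C = E \times T

where C stands for the degree of cultural development, E for the energy harnessed per capita per year, and T for the efficiency of the technology that puts the energy to work.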
The most all-encompassing definition of the direction of evolution is found in the writings of Margaret Mead (1964:161): "Directionality, at any given period, is provided by the competitive status of cultural inventions of different types and the competitive status of the societies carrying them; the outcome of each such competition, as it involves irreversible change (for example, in the destruction of natural resources or an invention that makes obsolete an older invention), defines the directional path." Such a tautology is so meaningless that one must wonder how the neo-evolutionists could maintain the claim that evolution follows a certain definable direction. Characteristically, most neo-evolutionists spent more energy studying the course and direction of evolution than its fundamental mechanisms. Most were content with repeating the three elements in Darwin's general formula - variation, selection, and reproduction - without going into detail. In particular, there was surprisingly little attention to the process of selection. Hardly anyone cared to define the criteria that determined which features were promoted by cultural selection and which were weeded out. They were satisfied with the general criterion: survival value. Still the tautology is haunting! Without the selection criterion they also lacked any argument for why evolution should go in the claimed direction. There was also a certain confusion over what the unit of selection was. Was it customs that were selected, or was it the people bearing them? Or was it entire societies that were the objects of the selection process? Some thinkers failed to define any unit of selection at all. Many used the word invention (Childe 1936, 1951). Emerson (1956, 1965) had the idea that symbols in cultural evolution were equivalent to genes in biological evolution. Parsons (1966) mentioned several possible units of selection, and Mead presented the most complete list of possible units of selection: "a single trait, a trait cluster, a functional complex, a total structure; a stage of complexity in energy use; a type of social organization" (Mead 1964). A few scientists have given a reasonably detailed description of possible selection processes (Murdock 1956, Kaplan, D. 1960, Parsons 1966). The most comprehensive list of selection mechanisms is found in an often-cited article by the social psychologist Donald Campbell (1965): "Selective survival of complete social organizations, selective diffusion or borrowing between social groups, selective propagation of temporal variations, selective imitation of inter-individual variations, selective promotion to leadership and educational roles, rational selection." Several philosophers found that human scientific knowledge evolves by the selection of hypotheses (Kuhn 1962, Popper 1972, Toulmin 1972, Hull 1988). The German sociologist Klaus Eder has developed a model where the selection of cognitive structures, rather than mere knowledge, controls cultural evolution. Man's moral structuring of interactive behavior, systems of religious interpretations, and symbolic structuring of the social world are important elements in the worldview on which the social structure is based. According to Eder, mutations in this cognitive structure and the selective rewarding of those moral innovations that improve society's problem-solving capability, and hence its ability to maintain itself, are what control social evolution.
Adaptation to ecological conditions and other internal conditions is the most important factor in Eder's theory, whereas he attributes little significance to external factors, such as contact with other societies (Eder 1976). The main criticism against nineteenth-century evolutionism was that it did not distinguish between evolution and progress, and the theories were often called teleological. Another word often used when criticizing evolutionism was unilinearity. This referred to the idea that all societies were going through the same linear series of evolutionary stages - in other words, a universal determinism and a conception of parallel evolutionary courses. Twentieth-century neo-evolutionists were busy countering this criticism by claiming that their theories were multilinear. They emphasized local differences between societies due to different environments and life conditions. The claim of multilinearity was, however, somewhat misleading, since they still imagined a linear scale for measuring evolutionary level (see Steward 1955 for a discussion of these concepts). In 1960 a new dichotomy was introduced in evolutionary theory: specific versus general evolution. Specific evolution denotes the specific adaptation of a species or a society to local life conditions or to a particular niche. General evolution, on the other hand, means an improved general ability to adapt. A species or a society with high adaptability may outcompete a specifically adapted species or society, especially in a changing environment. In other cases, a specifically adapted species or society may survive in a certain niche (Sahlins & Service 1960). This dichotomy seemed to solve the confusion: general evolution was unilinear, while specific evolution was multilinear (White 1960). Neo-evolutionism was mainly used for explaining the differences between industrialized countries and developing countries, and between past and present. The talk was mainly about fundamental principles and rarely went into detail about the evolutionary history of specific cultures or specific historic occurrences. The explanatory power of the theories was usually limited to the obvious: that certain innovations spread because they are advantageous, whereas unfavorable innovations are forgotten. Contemporary social scientists are often eager to distance themselves from social evolutionism. Nevertheless, evolutionary thought is still prevalent in many areas of the social sciences, and evolutionist theories are still being published (e.g. Graber, R.B. 1995). Another research tradition, which for many years has been seen as an alternative to evolutionism, is diffusionism. This research tradition focuses on diffusion, rather than innovation, as an explanation for social change. Strictly speaking, the diffusionist representation involves the same three elements that evolutionism is based on - innovation, selection, and reproduction - but viewed from another standpoint. The difference between the two paradigms is that diffusionism focuses on the spatial dimension of reproduction, i.e. the geographical spread of a phenomenon, whereas evolutionism focuses on the time dimension of reproduction, i.e. the continued existence and maintenance of a phenomenon. Diffusionists regard innovation as a rare and unique occurrence, whereas evolutionists acknowledge the possibility that the same innovation can occur several times at different places independently.
The concept of selection is rarely discussed by that name by the diffusionists, although they often work with concepts such as barriers to diffusion or differences in receptivity to new ideas (Ormrod 1992). Many diffusionists regard themselves as being in opposition to evolutionism, without realizing that the difference between the two models is quantitative rather than qualitative. The first great scientist within diffusionism was the French sociologist Gabriel Tarde. He did not deny the theory of natural selection, but thought that this theory was a gross generalization which had been ascribed more importance than its explanatory power could justify, and that random occurrences play a more important role than the evolutionists would admit (Tarde 1890, 1902). Although Tarde accepted the importance of progress, he was no determinist. Progress was not inevitable. The keyword in Tarde's theory was imitation. Innovations spread from one people to another by imitation. He distinguished between two kinds of innovations: accumulative and alternative. By alternative inventions he meant ideas or customs which could not spread without displacing some other idea or custom. With this concept, selection sneaked into Tarde's theory under the name of opposition. Opposition between alternative innovations could take the form of war, competition, or discussion (Tarde 1890, 1898). Another early proponent of diffusionism was the American anthropologist Franz Boas. It was Boas who started the discussion about whether similarities between distant cultures were due to diffusion or independent innovation. He criticized the evolutionists for attributing too much importance to parallel evolution, i.e. the assumption that the same phenomenon has arisen independently at different places. Boas is usually considered one of the greatest opponents of evolutionism, but it is worth mentioning that he did not reject the theoretical foundation of evolutionism. Boas was opposed to great generalizations, and he emphasized that similarities between two cultures could be explained either by diffusion or by parallel evolution, and that it was impossible to distinguish between these two possibilities without closer investigation (Harris 1969:259,291). In his discussions he gave examples of both diffusion and parallel invention. As is evident from the following quotation, he did indeed recognize that the two processes are both controlled by the same selection process: "When the human mind evolves an idea, or when it borrows the same idea, we may assume that it has been evolved or accepted because it conforms with the organization of the human mind; else it would not be evolved or accepted. The wider the distribution of an idea, original or borrowed, the closer must be its conformity with the laws governing the activities of the human mind. Historical analysis will furnish the data referring to the growth of ideas among different people; and comparisons of the processes of their growth will give us knowledge of the laws which govern the evolution and selection of ideas." (Boas 1898, cited after Stocking 1974). Later diffusionists have actually described the attributes of an invention that determine whether it will spread or not. Everett Rogers lists the following attributes of an invention as important: advantage relative to alternatives, compatibility with existing structures, complexity, trialability, and observability.
Rogers repeatedly emphasizes, however, that it is the perceived, rather than the objective, attributes of the invention that matter (Rogers, E.M. 1983). By this emphasis he places the locus of control in the potential adopter of a new invention rather than in the inanimate invention itself. And herein lies the hidden agenda of the conflict between diffusionists and evolutionists: the diffusionists want to maintain an anthropocentric worldview, where the world is governed by the conscious decisions of persons with a free will, whereas the non-anthropocentric model of evolutionism attributes an important amount of control to haphazard and often unanticipated effects and automatic mechanisms. The most obvious difference between diffusionism and evolutionism is that diffusionism is first and foremost an idiographic tradition. It focuses on specific studies of delimited phenomena, trying to map the geographical distribution of a certain custom or technology and finding out where it first arose and how it spread. Diffusionists reject the great generalizations and believe more in chance occurrences than in universal laws. Evolutionism, on the contrary, is a nomothetic science, which has seldom been applied to the study of specific details (Harris 1969). The difference between the two research traditions can also be illustrated as a difference between a physical-chemical metaphor and a biological metaphor. Diffusion is a process whereby different molecules get mixed because of their random movements. By using the random motion of molecules as a metaphor for customs spreading in society, the diffusionists have stressed the importance of randomness. This metaphor naturally draws the attention of the scientists toward the spatial dimension, the velocity with which customs spread geographically, and the barriers impeding this expansion. The metaphor encompasses only the movement aspect, but not innovation, selection, or reproduction. The latter three aspects belong to the biological metaphor on which social evolutionism is built. Evolutionism focuses on the time dimension, and it is important to notice that the time dimension is irreversible. Due to this irreversibility, the attention of the evolutionists becomes focused on the direction of evolution. Evolutionism has thus become a deterministic philosophy of progress. The most extreme form of diffusionism is built on the concept of a few culture centers, where innovations miraculously arise and then spread in concentric circles from those centers. This line of thought came primarily from religious circles as a reaction against atheistic evolutionism, and as an attempt to bring science into harmony with the Christian story of creation (Harris 1969). Early diffusionism can hardly be said to be a theoretical school, since it was first and foremost a reaction against the excessive theorizing of the evolutionists. Diffusionism has even been called a non-principle (Harris 1969). Many diffusion studies have been made independently within many different areas of research throughout the twentieth century. These are mainly idiographic studies, too numerous to mention here (see Katz et al. 1963; Rogers, E.M. 1983). Most diffusionists study only inventions that are assumed to be advantageous, so that they can ignore selection criteria (Rogers, E.M. 1983). Occasionally, diffusion studies have been combined with darwinian thinking, notably in linguistics (Greenberg 1959).
It may seem illogical to apply selection theory to linguistics, since it must be difficult for linguists to explain why one synonym or one pronunciation should spread at the expense of another, when, in principle, they are equally applicable. Gerard et al. (1956) propose that the selection criteria are that the word must be easy to pronounce and easy to understand. Geographer Richard Ormrod has argued for incorporating the concepts of adaptation and selection in diffusion studies. A diffusing innovation is selected by potential adopters who decide whether to adopt the innovation or not. Ormrod understands that the fitness of an innovation depends on local conditions. What is fit in one place may not be fit at some other location. Consequently, innovations are often modified in order to adapt them to local conditions (Ormrod 1992). Newer diffusion theories have departed somewhat from the purely idiographic tradition and developed a detailed mathematical formalism enabling a description of the velocity with which innovations spread in society (Hamblin et al. 1973, Valente 1993). Incidentally, sociobiologists have produced very similar mathematical models for cultural diffusion (Aoki et al. 1996), but the two schools are still developing in parallel without reference to each other.

In the early 1970s a new paradigm emerged within biology, dealing with the explanation of the social behavior of animals and humans by reference to evolutionary, genetic, and ecological factors. The principal work within this new paradigm was E.O. Wilson's famous and controversial book Sociobiology (1975), which named and defined the discipline. Wilson's book provoked fierce criticism from sociologists (see e.g. Sahlins 1976). The conflict between the biological and the humanistic view of human nature seems impossible to resolve, and the heated debate is still going on today, twenty years later. Apparently, it has been quite natural for the early ethologists and sociobiologists to reflect on the relationship between genetic and cultural inheritance. Several thinkers have independently introduced this discussion into the sociobiological and evolutionary paradigm, in most cases without knowledge of the previous literature on the subject. The possibility of selection based on cultural inheritance is briefly mentioned by one of the founders of ethology, Konrad Lorenz (1963), and likewise in Wilson's Sociobiology (1975). In a later book (1978) Wilson mentions an important difference between genetic and cultural evolution, namely that the latter is lamarckian and therefore much faster. In 1970, archaeologist Frederick Dunn defined cultural innovation, transmission, and adaptation with explicit reference to the analogy with darwinian evolutionary theory, but avoided any talk about cultural selection - apparently in order to avoid being connected with social darwinism and evolutionism, from which he found it necessary to dissociate himself: "Although several analogies have been drawn between biological evolutionary concepts and cultural evolution, the reader will appreciate that they are of a different order than those analogies that once gave "cultural evolution" an unsavory reputation [...] In particular, I avoid any suggestion of inevitable and necessary tendencies toward increasing complexity and "improvement" of cultural traits and assemblages with the passage of time." (Dunn 1970). In 1968 anthropologist and ethologist F.T.
Cloak published a rudimentary sketch of a cultural evolutionary theory closely related to the genetic theory, imagining that culture was transmitted in the form of small independent information units, subject to selection. In a later article (1975) he explained the distinction between cultural instructions and the material culture that these instructions give rise to, analogously with the distinction between genotype and phenotype in biology. He also pointed out the possibility of conflict between cultural instructions and their bearers, comparing the phenomenon to a parasite or virus.

In 1972 psychologist Raymond Cattell published a book attempting to construct an ethic on a scientific, evolutionary basis. He emphasized cultural group selection as a mechanism by which man evolves cooperation, altruism, and moral behavior. He held the opinion that this mechanism ought to be promoted, and imagined giant sociocultural experiments with this purpose. He thereby transferred eugenic philosophy to cultural evolution.

At a symposium on human evolution in 1971, biologist C.J. Bajema proposed a simple model for the interaction between genetic and cultural inheritance. He imagined this process as a synergistic interaction, where the cultural part of the process was defined as follows: "Cultural adaptation to the environment takes place via the differential transmission of ideas which influence how human beings perceive and interact with the environment which affect survival and reproductive patterns in and between human populations." (Bajema 1972). A somewhat more detailed description of cultural selection mechanisms was presented by anthropologist Eugene Ruyle at another meeting in 1971. Ruyle emphasized psychological selection in the individual's "struggle for satisfaction". His description of selection mechanisms seems to have been strongly inspired by Donald Campbell's article from 1965 (see page 28), although he denied the possibility of cultural group selection (Ruyle 1973).

Among the first biologists to take up the idea of cultural selection was Luigi Cavalli-Sforza, who at a conference in 1970 presented a theory of cultural selection based on the fact that some ideas are more readily accepted than others (Cavalli-Sforza 1971). It is apparent from this publication that Cavalli-Sforza was unaware of the previous literature on this subject, despite some knowledge of anthropology. His only reference to cultural selection is to his colleague Kenneth Mather, who mentions group selection based on social inheritance in a book on human genetics. Mather (1964) does not say where he got this idea. Since neither Cavalli-Sforza nor Mather at this time revealed any knowledge of cultural evolution theory in the social sciences, we must assume that they invented most of this theory themselves. Curiously enough, the abovementioned article by Cavalli-Sforza contains a discussion of the difficulty in deciding whether an idea that occurs in multiple different places has spread by diffusion or has been invented independently more than once. Together with his colleague Marcus Feldman, Cavalli-Sforza has later published several influential articles on cultural selection. Their literature search has been rather casual. In 1973 they referred to an application of selection theory in linguistics (Gerard et al. 1956) and to a brief mention of the theory in a discussion of eugenics (Motulsky 1968).
Not until 1981 did they refer to more important publications such as White (1959) and Campbell (1965). The publications of Cavalli-Sforza and Feldman were strongly influenced by their background in genetics, which is an exact science. Their advancement of selection theory consisted mainly of setting up mathematical models (Cavalli-Sforza & Feldman 1981). The concise description of the models by mathematical formulae has certain advantages, but apparently also serious drawbacks. Many social phenomena are more complex and irregular than mathematical formulae can express, and the representations reveal that the examples given were chosen to fit the mathematical models, rather than vice versa. The majority of their models thus describe vertical transmission, i.e. from parents to children, rather than other kinds of transmission. There was also a certain focus on models in which the selection depends on from whom an idea comes, rather than on the quality of the idea itself. Such models may admittedly have some relevance in the description of social stratification and social mobility.

2.7 Interaction between genetic and cultural selection

In 1976 William Durham asserted that genetic and cultural evolution are mutually interacting, and hence in principle cannot be analyzed separately as independent processes. The interaction between these two processes was aptly named genetic/cultural coevolution. Unlike several other thinkers, Durham did not at this time see any conflict between these two kinds of evolution. In his understanding the two selection processes were both directed towards the same goal: the maximum possible reproduction of the individual and its nearest relatives. This criterion is what biologists call inclusive fitness. Despite criticism from both anthropologists and biologists (Ruyle et al. 1977), Durham stuck to his position for a couple of years (Durham 1979), but later accepted that genetic and cultural fitness are in principle different, although he maintained that the two kinds of selection in most cases reinforce each other and only rarely are in opposition to each other (Durham 1982, 1991). The most important selection mechanism in Durham's theory is conscious choice based on criteria which may themselves be subject to cultural selection. He emphasized the distinction between cultural information units, called memes, and the behaviors they control. Genes and memes form two parallel information channels, and their reciprocal interaction is symmetrical in Durham's model. Unfortunately, he did not distinguish clearly between selective transmission of memes and selective use of them (Durham 1991; this problem is discussed on page 72).

While Durham regarded genetic and cultural selection as synergistic, two other scientists, Robert Boyd and Peter Richerson (1976, 1978), asserted that genetic and cultural fitness are two fundamentally different concepts, and that if they point in the same direction it is only a coincidence. Boyd and Richerson developed a theoretical model for the conflict between these two selection processes and the consequences of such a conflict (1978). In a later article (1982) Boyd and Richerson claimed that humans have a genetic predisposition for cultural conformism and ethnocentrism, and that this trait promotes cultural group selection. This mechanism can then lead to cooperation, altruism, and loyalty to a group.
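The conformist bias that Boyd and Richerson describe lends itself to a simple quantitative illustration. The following is a minimal sketch in the spirit of such frequency-dependent transmission models; the recursion and the parameter names are illustrative assumptions, not Boyd and Richerson's own formulation.

# Minimal sketch of conformist (frequency-dependent) cultural transmission,
# in the spirit of Boyd and Richerson's models. The recursion and the
# parameters p and D are illustrative assumptions, not their exact model.

def conformist_step(p: float, D: float) -> float:
    """One generation of conformist transmission.

    p : frequency of a cultural variant in the group (between 0 and 1)
    D : strength of the conformist bias (0 means unbiased copying)

    With D > 0, whichever variant is already in the majority is adopted
    disproportionately often, so p is pushed towards 0 or 1.
    """
    return p + D * p * (1.0 - p) * (2.0 * p - 1.0)

if __name__ == "__main__":
    for p0 in (0.45, 0.55):            # start just below / just above 50 %
        p = p0
        for _ in range(30):            # iterate thirty generations
            p = conformist_step(p, D=0.3)
        print(f"initial frequency {p0:.2f} -> after 30 generations {p:.3f}")

Because any variant that is already in the majority within a group is driven towards fixation while minority variants are driven out, conformist transmission of this kind preserves differences between groups, and it is this preservation of between-group variation that makes cultural group selection plausible.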
Cooperation, altruism, and loyalty are traits that have usually been difficult for sociobiologists to explain, because Darwin's principle of natural selection presumably would lead to egoism. Several other researchers have since proposed similar theories explaining altruism by cultural selection mechanisms (Feldman, Cavalli-Sforza & Peck 1985; Simon, H. 1990; Campbell 1991; Allison 1992). In 1985, Boyd and Richerson at last provided a more thorough and well-founded collection of models for cultural selection. Their book also describes how those genes that make cultural transmission and selection possible may have originated, as well as an analysis of the conditions that determine whether cultural selection will increase or decrease genetic fitness (Boyd & Richerson 1985, see also Richerson & Boyd 1989).

While Boyd and Richerson maintain that cultural evolution is able to override genetic evolution, sociobiologist Edward Wilson and physicist Charles Lumsden had the opposite view of gene/culture coevolution. They believed that genetic evolution controls cultural evolution. Their basic argument was that cultural selection is controlled by people's genetically determined preferences, the so-called epigenetic rules. They imagined that the genes control the culture like a dog on a leash (Lumsden & Wilson 1981). Let me illustrate this so-called leash principle by the following example: assume that a certain food item can be prepared in two different ways, A and B. A is the more common because it tastes better, but B is the healthier. In this situation genetic evolution will change people's taste so that they prefer B, and consequently cultural selection will quickly make B the most widespread recipe. Lumsden and Wilson's book expressed an extreme biological reductionism, since they imagined that genes are able to control almost everything by adjusting human preferences. In this model, culture becomes almost superfluous. Their book has been highly disputed. One important point of criticism was that their theory lacked empirical support. Although Lumsden and Wilson have documented that humans do have certain inborn preferences, they have never demonstrated any differences between humans in different cultures with respect to such preferences (Cloninger & Yokoyama 1981; Lewin 1981; Smith & Warren 1982; Lumsden, Wilson, et al. 1982; Almeida et al. 1984; Rogers, A.R. 1988). A problem for the leash principle is explaining cultural traits that reduce genetic fitness. This criticism has been met by the construction of a model of cultural transmission analogous to sexual selection - a genetic selection mechanism which is famous for its potential for reducing fitness (see chapt. 4.2) (Takahasi 1998). In later publications, Lumsden and Wilson no longer insisted that cultural differences have a genetic explanation, but they did not retract this claim either. They still maintained that even small changes in the genetic blend of a population can lead to considerable changes in the culture (Lumsden & Wilson 1985; Lumsden 1988, 1989).

At a workshop in 1986 entitled "Evolved Constraints on Cultural Evolution"4 there was general agreement that a human is not born as a tabula rasa, but does indeed have genetically determined predispositions to learn certain behavior patterns more easily than others. But there was no acceptance of the claim that genetic evolution can be so fast that it is able to govern cultural evolution.
On the contrary, certain models were published showing that cultural evolution in some cases may produce behaviors that are genetically maladaptive, and that the leash principle in fact can be turned upside down, so that it is culture that controls the genes (Richerson & Boyd 1989, Barkow 1989). An important contribution to the debate came from the psychologists John Tooby and Leda Cosmides, who proposed a new kind of human ethology which they call evolutionary psychology5. According to this theory, man's psyche is composed of a considerable number of specialized mechanisms, each of which has been evolved for a specific adaptive function, and do not necessarily work as universal learning mechanisms or fitness maximizing mechanisms. These psychological mechanisms are so complex and the genetic evolution so slow, that we must assume that the human psyche is adapted to the life-style of our ancestors in the pleistocene period: "The hominid penetration into the "cognitive niche" involved the evolution of some psychological mechanisms that turned out to be relatively general solutions to problems posed by "local" conditions [...] The evolution of the psychological mechanisms that underlie culture turned out to be so powerful that they created a historical process, cultural change, which (beginning at least as early as the Neolithic) changed conditions far faster than organic evolution could track, given its inherent limitations on rates of successive substitution. Thus, there is no a priori reason to suppose that any specific modern cultural or behavioral practice is "adaptive" [...] or that modern cultural dynamics will necessarily return cultures to adaptive trajectories if perturbed away. Adaptive tracking must, of course, have characterized the psychological mechanisms governing culture during the Pleistocene, or such mechanisms could never have evolved; however, once human cultures were propelled beyond those Pleistocene conditions to which they were adapted at high enough rates, the formerly necessary connection between adaptive tracking and cultural dynamics was broken." (Tooby & Cosmides 1989). The theory that genetically determined preferences control the direction of cultural evolution, has been put forward many times, and also without Lumsden and Wilson's exaggeration of the power of the genes. Psychologist Colin Martindale calls this principle hedonic selection: "It is certainly possible that some of the genes freed by the capacity for culture may serve to "fine-tune" human hedonic responses so as to increase the probability that what brings pleasure will direct behavior in a way likely to increase [genetic] fitness. [...] it is generally assumed that hedonic selection will proceed in a certain direction until it is checked by the production of traits that render their possessors unfit [...]" (Martindale 1986). While some scientists stress the importance of psychological mechanisms (e.g. Mundinger 1980), others regard the survival of the individual or group as the ultimate criterion for cultural selection: "In the short run, various criteria - including efficiency of energy capture, and the satisfaction of perceived needs and wants - may determine the selection and retention of certain behavior. In the longer term, however, only if that behavior contributes to the persistence of the group or population in terms of reproductive continuity will it be truly retained." (Kirch 1980). This model does not leave much room for psychological selection of cultural phenomena. 
According to Kirch (1980), such selection cannot run further than the higher-level selection, which has the individual or the group as its unit, allows. In recent years, the theory of gene/culture coevolution has been refined by a group of Canadian biologists led by C.S. Findlay. Findlay has continued the strictly mathematical tradition of Cavalli-Sforza and constructed a series of mathematical models for cultural evolution and gene/culture coevolution. The mathematical analysis reveals that even relatively simple cultural systems can give rise to a great variety of complex phenomena which are not possible in genetic systems of similar composition. These peculiar phenomena include the existence of multiple equilibrium states, oscillating systems, and stable polymorphism (Findlay, Lumsden & Hansell 1989a,b; Findlay 1990, 1992). Real-world examples of such complex mechanisms were not given, but a few studies applying gene/culture coevolution theory to actual observations have been published (Laland, Kumm & Feldman 1995).

Richard Dawkins' famous and controversial book The Selfish Gene (1976) described genes as selfish beings striving only to make as many copies of themselves as possible. The body of an animal can thus be viewed as nothing more than the genes' tool for making more genes. Many people feel that Dawkins is turning things upside down, but his way of seeing things has nevertheless turned out to be very fruitful. In a short chapter in the same book he applied a similar point of view to culturally transmitted traits. Dawkins introduced the new name meme (rhymes with beam) for cultural replicators. A meme is a culturally transmitted unit of information analogous to the gene (Dawkins 1976, 1993). The idea that a meme can be viewed as a selfish replicator that manipulates people to make copies of itself has inspired many scholars in recent years. An obvious example is a religious cult which spends most of its energy on recruiting new members. The cult supports a set of beliefs that makes its members do exactly that: work hard to recruit new members. A meme is not a form of life. Strictly speaking, the meme cannot reproduce itself; it can only influence people to replicate it. This is analogous to a virus: a virus does not contain the apparatus necessary for its own reproduction. Instead it parasitizes its host and uses the reproductive apparatus of the host cell to make new viruses. The same applies to a computer virus: it takes over control of the infected computer for a while and uses it to make copies of itself (Dawkins 1993). Viruses and computer viruses are the favorite metaphors used in meme theory, and the vocabulary is borrowed from virology: host, infection, immune reaction, etc.

The idea of selfish memes has developed into a new theoretical tradition which is usually called meme theory or memetics. While meme theorists agree that most memes are beneficial to their hosts, they often concentrate on adverse or parasitic memes because this is an area where meme theory has greater explanatory power than alternative paradigms. Unlike the more mathematically oriented sociobiologists, the meme theorists have no problem finding convincing real-life examples that support their theories. In fact, in the beginning this tradition relied more on cases and examples than on theoretical principles. Several meme theorists have studied the evolution of religions or cults. A religion or sect is a set of memes which are transmitted together and reinforce each other.
Certain memes in such a meme complex are hooks which make the entire set of beliefs propagate by providing an incentive for the believer to proselytize. Other memes in the complex make the host resistant to infection by rival beliefs. The belief that blind faith is a virtue has exactly this function. Other very powerful parts of the meme complex may be promises of rewards or punishments in the after-life (Paradise or Hell-fire) which make the host obey the commands of all the memes in the complex (Lynch 1996, Brodie 1996). Examples of the unintended effects of cultural selection abound in memetic theory texts. One example is charity organizations spending most of their money on promotion: "It is their effectiveness in attracting funding and volunteers that determines whether they can stay in existence and perform their functions [...] Given limited resources in the world and new organizations being introduced all the time, the surviving organizations must become better and better at surviving. Any use of their money or energy for anything other than surviving - even using it for the charitable purpose for which they were created! - provides an opening for a competing group to beat them out for resources." (Brodie 1996:158) Another example of parasitic memes is chain letters which contain a promise of reward for sending out copies of the letter or punishment for breaking the chain (Goodenough & Dawkins 1994). One reason why arbitrary memes can spread is people's gullibility. Ball (1984) argues that gullibility can actually be a (genetic) fitness advantage: believing the same as others do has the advantage of improved cooperation and belonging to a group. People's tendency to follow any new fad is what Ball (1984) calls the bandwagon effect. The stability of a meme complex depends on its ability to make its host resistant to rival beliefs. Beliefs in supernatural and invisible phenomena are difficult to refute, and hence quite stable. Secular belief-complexes will be stable only if they have a similar defense against disproof. Such a defense can be the belief that a grand conspiracy has covered up all evidence by infiltrating the most powerful social institutions (Dennett 1995).

While most meme theorists paint a fairly pessimistic picture of memes as parasitic epidemics, Douglas Rushkoff has presented a quite optimistic view of the memes that infest public media. He has studied how memes containing controversial or counter-cultural messages can penetrate mainstream media packaged as Trojan horses. This gives grass-roots activists and other people without money or political positions the power to influence public opinion and provoke social change (Rushkoff 1994). Rushkoff does not seem to worry that the public agenda is thus determined by who has the luck to launch the most effective media viruses rather than by who has the most important messages to tell.

The paradigm of meme theory is only gradually crystallizing into a rigorous science. Most of the publications are in the popular science genre with no exact definitions or strict formalism. Dennett does not even consider it a science because it lacks reliable formalizations, quantifiable results, and testable hypotheses, but he appreciates the insight it gives (1995). There is no common agreement about the definition of a meme.
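To give a sense of what a more formal treatment of the epidemic metaphor might look like, here is a minimal, hypothetical sketch: a discrete-time model, loosely in the style of the SIR models used in epidemiology, in which a meme spreads through a population of hosts by proselytizing contacts and is abandoned at a fixed rate. The model and all parameter values are illustrative assumptions and are not taken from any of the authors cited here.

# A minimal, hypothetical sketch of the virus metaphor in quantitative form:
# susceptible hosts can be "infected" with a meme by contact with current
# hosts, and hosts lose interest at a fixed rate. The parameters beta and
# gamma are placeholders, not values from the memetics literature.

def meme_epidemic(beta=0.3, gamma=0.1, s=0.99, i=0.01, steps=100):
    """Iterate the fractions of susceptible (s) and infected (i) hosts."""
    history = []
    for _ in range(steps):
        new_infections = beta * s * i   # meme passed on by proselytizing
        recoveries = gamma * i          # hosts abandoning the meme
        s -= new_infections
        i += new_infections - recoveries
        history.append(i)
    return history

if __name__ == "__main__":
    curve = meme_epidemic()
    print(f"peak fraction of infected hosts: {max(curve):.2f}")

Whether the meme spreads at all depends on whether an average host passes it on to more than one new host before losing interest (beta/gamma greater than one); this ratio is the quantitative counterpart of the hooks and the resistance to rival beliefs described above.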
Nor is there agreement on what corresponds to genotype and phenotype: while most meme theorists consider the meme analogous to the biological genotype, with its phenotype expressed in social behavior or social structure, William Benzon has it exactly the other way around (Benzon 1996, Speel & Benzon 1997). The analogy with biology is often taken very far (e.g. Dennett 1990, 1995), which makes the theory vulnerable to criticism. Critics have argued that humans are intelligent and goal-seeking beings who are more influenced by logical, true, informative, problem-solving, economic, and well-organized ideas than by illogical, false, useless or harmful beliefs (Percival 1994). Memetics will probably continue to be a soft science. Heyes and Plotkin have used cognitive psychology and brain neurology to argue that information is transformed while stored in human memory and may be altered under the influence of later events. This leads them to argue that memes cannot be distinct, faithful copies of particulate information-bits, but are blending and ever-changing clusters of information (Heyes & Plotkin 1989). The products of cultural evolution or conceptual evolution cannot be systematized into distinct classes, and it is impossible to make a strict evolutionary taxonomy of cultures (Hull 1982, Benzon 1996).

Richard Brodie, a computer engineer, has divided memes into three fundamental classes: distinction memes that define names and categories, strategy memes that define strategies of behavior and theories about cause and effect, and association memes that make the presence of one thing trigger a thought or feeling about something else (Brodie 1996). Brodie has paid particular attention to the selection criteria that make some memes spread more than others. Based on evolutionary psychology6, his theory says that memes have higher fitness when they appeal to fundamental instincts: "Memes involving danger, food, and sex spread faster than other memes because we are wired to pay more attention to them - we have buttons around those subjects." (Brodie 1996:88) In other words, the memes that push the right buttons in our psyche are the most likely to spread. The most fundamental buttons have already been mentioned: danger, food, and sex. Other buttons identified by Brodie include: belonging to a group, distinguishing yourself, obeying authority, power, cheap insurance, opportunity, investment with low risk and high reward, and protecting children. For example, the danger button is the reason why horror movies are popular. The cheap-insurance button is what makes people knock on wood even when they claim not to be superstitious. And the low-risk, high-reward button is what makes people invest in lotteries even when the chance of winning is abysmally small (Brodie 1996).

Meme theorists have a peculiar penchant for self-referential theories. Scientific theories are memes, and the theory of memes itself is often called the meme meme or metameme. When meme theorists are discussing scientific memes, they usually pick examples from those sciences with which they are most familiar. This extraordinary scientific self-awareness has led many meme theorists to present their theories in the most popularized way with the deliberate, and often proclaimed, aim of spreading the meme meme most effectively (e.g. Lynch 1996, Brodie 1996).

2.9 Sociology and anthropology

The selection theory is quite unpopular among modern sociologists and anthropologists (Berghe 1990), and only a few express a positive view (e.g. Blute 1987).
Opponents of the theory claim that there is no cultural analogy to genes and that the selection theory attributes too much importance to competition, whereas cooperation and conscious planning are ignored (Hallpike 1985, Adams 1991). To make the theory look absurd, the critics attribute to its adherents a more literal analogy with darwinism than they have ever stated. Biologists Pulliam and Dunford have characterized the gap between biology and the social sciences in this way: "It seems to us that decades of development in intellectual isolation from each other have allowed biological and social scientists to diverge in interests, ideas and especially language to the point where the two types of scientists now find it painfully difficult to communicate." (Pulliam & Dunford 1980) This is no exaggeration. Many social scientists have rejected sociobiology, and for good reasons. The following is an excerpt from a radio debate in connection with Lumsden and Wilson's book Genes, Mind and Culture (1981): John Maddox: "Should it be possible, or should it not be possible, on the basis of your theory, to be able to predict which people go to the back door and which to the front door when they go to visit John Turner in Leeds?" Edward O. Wilson: "If there can be demonstrated substantial genetic variation in some of the epigenetic rules that produce strong bias, yes. But that is difficult to pin down at this very early, very primitive level of our understanding of human behavioral genetics." (Maddox et al., 1984). When Wilson, who is regarded as the founder and foremost representative of sociobiology, can come up with such an absurd biological reductionism, it is no wonder that most sociologists and anthropologists take no interest in sociobiology, but instead develop their own theories.

Many social scientists depict society as an autonomous system in order to avoid biological and psychological reductionism (Yengoyan 1991). There are, nevertheless, significant similarities between biological and sociological theories of culture. The French sociologist Pierre Bourdieu has studied the reproduction of social structures in the educational system (Bourdieu & Passeron 1970), and the British cultural sociologist Raymond Williams has elaborated further on this theory and demonstrated that cultural reproduction is subject to conscious selection: "For tradition ('our cultural heritage') is self-evidently a process of deliberate continuity, yet any tradition can be shown, by analysis, to be a selection and reselection of those significant received and recovered elements of the past which represent not a necessary but a desired continuity. In this it resembles education, which is a comparable selection of desired knowledge and modes of learning and authority." (Williams, R. 1981:187) Williams has brilliantly explained how different cultural forms are connected with different degrees of autonomy and degrees of freedom, and hence unequal possibilities for selection. Williams analyzes cultural innovation, reproduction, and selection, but oddly enough, he never combines these three concepts into a coherent evolutionary theory, and he omits any reference to evolutionary scientists (Williams, R. 1981). This omission is probably due to resistance to overstated generalizations and, quite likely, a fear of being associated with social darwinism. The philosopher Rom Harré has theorized about social change from a mainly sociological paradigm.
He discussed whether innovations are random or not, and hence whether social evolution can be characterized as darwinian or lamarckian. Harré has made a distinction between units of cultural information and the social practice they produce, but he has not gone into detail about the selection process and its mechanisms (Harré 1979, 1981). Sociologist Michael Schmid has proposed a reconstruction of the theory of collective action based on selectionist thought, but with few references to biology. He argues that collective actions regulated by social rules have consequences which tend to stabilize or destabilize these rules. This is an evolutionary mechanism which Schmid calls internal selection, because all factors are contained within the social system. The selective impact of external resources on the stability of social regulations is considered external selection (Schmid 1981, 1987; Kopp & Schmid 1981). His theory has had some influence on social systems theory, which in turn has influenced sociocybernetics (Luhmann 1984, Zouwen 1997).

2.10 Attempts to make a synthesis of sociobiology and anthropology

It seems obvious to try to fit sociobiological theory into anthropology, and there have of course been several attempts along this line. Unfortunately, those attempting to do so have seldom been able to escape the limitations of their old paradigms, and the results have rarely been very convincing. In 1980, the biologists Ronald Pulliam and Christopher Dunford published a book in the popular science genre with this purpose. Despite intentions to make their book interdisciplinary, they disclose a rather limited knowledge of the humanistic sciences. David Rindos, who is a botanist as well as an anthropologist, has written several articles about cultural selection (1985, 1986). His articles contain some errors and misconceptions which, for the sake of brevity, I will not mention here, but instead refer to Robert Carneiro's criticism (1985). In an article by anthropologist Mark Flinn and zoologist Richard Alexander (1982) the theory of coevolution is dismissed by rejecting the culture/biology dichotomy and the difference between cultural and genetic fitness. Their argument has been rebutted by Durham (1991) and others. Ethologist Robert Hinde has likewise attempted to bridge the gap between biology and sociology, but his discussions largely remain within the ethological paradigm. Cultural selection theory is cursorily mentioned, but cultural fitness is not discussed (Hinde 1987). Sociologist Jack Douglas has combined a special branch of social science, namely the sociology of deviance, with the theory of cultural selection. By combining sociology, sociobiology, and psychology, Douglas has created a model for social change where social rules are seen as analogous to genes, and deviations from the rules play the same role in social evolution as mutations do in genetic evolution. Douglas' theory addresses the question of how social deviations arise, and how people overcome the shame that deviations from the rules entail (Douglas, J. 1977). Archaeologist Patrick Kirch has presented a fairly detailed theory of cultural selection, and unlike most other researchers in selection theory, he has supported his theory thoroughly with specific examples. As mentioned on page 40, Kirch does not ascribe much importance to conscious or psychological selection, but regards the survival of the individual or the group as the ultimate selection criterion.
Cultural phenomena which have no obvious importance for survival, such as art or play, are regarded as random and neutral with respect to selection (Kirch 1980). Like Patrick Kirch, anthropologist Michael Rosenberg emphasizes that cultural innovations are not necessarily random, but often the result of purposeful reactions to a stressful situation such as overpopulation. In particular, he contends that agriculture initially arose as a reaction to overpopulation: "... an allocation model proposes that in certain types of habitats, hunter-gatherers will resolve the symptoms of population pressure-induced stress through the voluntary or involuntary allocation of standing wild resources. It further proposes that, in a still more limited number of cases (given the institution of territorial systems), the consequences of growing population pressure-induced stress will be perceived as being most readily mitigated by food production, rather than by warfare or some other behavior intended to address these proximate consequences. Finally, it also proposes that it is under precisely such circumstances that sedentism, food storage, and other behaviors thought integral to the process develop to be selected for." (Rosenberg, M. 1990).

The proficiency of the abovementioned scientists notwithstanding, I will maintain that their attempts at forming a synthesis of the different sciences have so far been insufficient. Not until recently has a fairly sound combination of sociology and sociobiology been presented. In 1992, the two sociologists Tom Burns and Thomas Dietz published a theory of cultural evolution based on the relationship between individual agency and social structure. Culture is defined as a set of rules which is established, transmitted, and used selectively. Burns and Dietz explain how an existing social structure sets limits on what kinds of thoughts and actions are possible. An implicit selection lies in the requirement that actions and ideas must be compatible with the social structure, and that different sub-structures must be mutually compatible. According to Burns and Dietz, cultural selection proceeds in two steps: a greater or lesser part of the available resources is allocated to different actors or groups according to certain rules; these resources can subsequently be utilized to maintain and reinforce the group or institution concerned and its rules. Of course Burns and Dietz also mention the obvious selection that takes place through the exercise of power, as well as the limitations imposed by the material environment and the ecology (Burns & Dietz 1992). Despite the fact that these two sociologists have been better than most other scientists at integrating different paradigms, their theory has been criticized for being reductionist and for not paying enough attention to certain important parts of social life (Strauss 1993).

Political scientist Ann Florini has recently applied selection theory to the development of international norms. According to her model, three conditions must be met for an international norm to spread: firstly, the norm has to gain prominence, usually by being promoted by a norm entrepreneur; secondly, it must be compatible with preexisting norms; and thirdly, it must fit the environmental conditions. She argues that new norms are mainly adopted through emulation of influential actors, rather than through a rational evaluation of all available alternatives (Florini 1996).
2.11 Social psychology

Studies of cultural selection from the point of view of social psychology and cognitive psychology have been too few to form a separate research tradition. This is clearly a neglected area of research. The distortion of memes through imperfect communication between humans has been explained by Heyes & Plotkin (1989) and Sperber (1990, 1996). This is seen as an important difference between genetic and cultural evolution: items of cultural information are generally transformed or modified each time they are copied, and perfect copying is the exception rather than the rule. This is very unlike the case of genetic evolution, where the copying of genes as a rule is perfect, and mutation is the exception. In Sperber's model, cultural representations are generally transformed each time they are copied, and this transformation is mostly in the direction of the representation that is most psychologically attractive, most compatible with the rest of the culture, or easiest to remember. Such an 'optimal' representation is called an attractor, and the repeated process of distortion through copying is seen as a trajectory with random fluctuations tending towards the nearest attractor (Sperber 1996). While other scientists present a simple model of memes being either present or not present in a human brain, Dan Sperber emphasizes that there are different ways of holding a belief. He makes a distinction between intuitive beliefs, which are the product of spontaneous and unconscious perceptual and inferential processes, and reflective beliefs, which are believed by virtue of second-order beliefs about them. A claim that is not understood but nevertheless believed because it comes from some authority is an example of a reflective belief. The commitment to a belief can vary widely, from loosely held opinions to fundamental creeds, from mere hunches to carefully thought-out convictions (Sperber 1990). Psychological and cognitive factors may have an important influence on the selection of cultural information. The following factors are mentioned by Sperber: the ease with which a particular representation can be memorized; the existence of background knowledge in relationship to which the representation is relevant; and a motivation to communicate the information (Sperber 1990).

2.12 Economic competition

A well-known analogy to darwinian evolution is economic competition between enterprises. This analogy has been explored most notably by the two economists Richard Nelson and Sidney Winter, who have developed a useful model for economic change. Their theory, which they call evolutionary, is contrasted with traditional economic theory, called orthodox, by its better ability to cope with technological change. Nelson and Winter argue that technological innovation and progress play an important role in modern economic growth, but are inadequately dealt with in orthodox economic theory. Different firms have different research strategies and different amounts of resources to invest in research and development, and hence unequal chances of making technological innovations that improve their competitiveness. Nelson and Winter regard knowledge as cumulative, and the process of innovation is therefore described as irreversible. The so-called orthodox economic theory is criticized for its heavy reliance on the assumption that firms behave in the way that optimizes their profit. Finding the optimal strategy requires perfect knowledge and computing skills.
It is argued that knowledge is never perfect and research is costly, and therefore the theoretical optimum may never be found. In contrast to orthodox economic theory, Nelson and Winter argue that economic equilibrium may exist in a market where nothing is optimal, and that many firms may stick to their old routines unless external factors provoke them to search for new strategies: "A historical process of evolutionary change cannot be expected to "test" all possible behavioral implications of a given set of routines, much less test them all repeatedly [...] There is no reason to expect, therefore, that the surviving patterns of behavior of a historical selection process are well adapted for novel conditions not repeatedly encountered in that process [...] In a context of progressive change, therefore, one should not expect to observe ideal adaptation to current conditions by the products of evolutionary processes." (Nelson & Winter 1982:154) Nelson and Winter (1982) have developed their evolutionary theory of economics to a high level of mathematical refinement, and it explains important aspects of economic growth fueled by technological advance better than orthodox economic theory can.

A more general theory of the evolution of business and other organizations has been published by sociologist Howard Aldrich (1979), based on the general formula of variation, selection, and retention. Unlike Nelson and Winter, who emphasize goal-directed problem solving as an important source of variation, Aldrich underplays planned innovations and attaches more importance to random variations. Mechanisms of selection include selective survival of whole organizations, selective diffusion of successful innovations between organizations, and selective retention of successful activities within an organization. The effect of the environment is an important element in Aldrich's theory. He classifies environments according to several dimensions, such as capacity, homogeneity, stability, predictability, concentration versus dispersion of resources, etc. Different combinations of these parameters can provide different niches to which an organization may adapt (Aldrich 1979). In a long-term perspective, economic growth may not be steady but rather characterized by periods of relative structural stability and inertia, separated by rapid transitions from one structural regime to another. This is explained by Geoffrey Hodgson (1996) as analogous to the punctuated equilibria model of biological evolution (see chapt. 3.9). A similar theory has been applied to the development of organizations in economic competition. A firm's ability to adapt to changes in the market situation may be impeded by memetic constraints within the organization, just as the adaptability of a biological species may be impeded by genetic constraints (see chapt. 3.9). Overcoming such constraints produces a leap in the development of the firm resembling the process of punctuated equilibria in biological evolution (Price 1995).

2.13 Universal selection theory

Selection theory has been found useful for explaining many different phenomena in the world. Several philosophers have therefore been interested in studying similarities between different classes of phenomena which all depend on the same neo-darwinian formula: blind variation and selective retention (Cziko 1995). Biological and cultural evolution are obvious examples, but ontogenetic growth and individual learning have also been shown to involve such processes.
A particularly convincing example is immunology: an organism's development of antibodies involves a process which is remarkably similar to biological evolution (Cziko 1995). Examples from the inorganic world are more subtle: in the growth of a crystal, each new molecule wanders randomly about until by chance it hits a fitting place in the crystal lattice. A molecule in a fit position is more likely to be retained than a molecule at an unfit position. This explains how the highly ordered structure of a crystal or a snowflake is generated. You may notice that the formula for biological evolution has been modified here: blind has been substituted for random, and reproduction has been changed to retention. These modifications have been made for a reason. In cultural evolution, for example, the variation is seldom completely random. Cultural innovations are often goal-directed although still tentative. The philosophers meet the criticism that variation may be non-random by saying that a new innovation is not guaranteed to be successful, and hence it can be said to be blind to the outcome of the experimental variation (Campbell 1974). This modification has not stopped the criticism, since innovations may be both goal-directed and intelligent to such a degree that the outcome can be predicted with a reasonably high degree of reliability (Hull 1982). The use of the word retention, rather than reproduction, implies that the selected character is preserved, but not necessarily multiplied. In the crystal-growth example, each new molecule has to go through the same process of blind-variation-and-selective-retention rather than copying the knowledge from its predecessors. This mechanism is far less effective than biological evolution, where each new generation inherits the accumulated effect of all prior selections. The new generations do not have to wait for the successful mutations to be repeated. This is a fundamental difference which many philosophers fail to recognize.

Campbell has introduced a new branch of universal selection theory called evolutionary epistemology. He argues that any adaptation of an organism to its environment represents a form of knowledge of the environment. For example, the shapes of fish and whales represent a functional knowledge of hydrodynamics. The process of blind-variation-and-selective-retention produces such knowledge in a process resembling logical induction. Campbell claims that any increase in the fitness of a system to its environment can only be achieved by this process. His theory entails three doctrines, of which this is the first. Campbell's argument is symmetric: not only does he say that adaptation is knowledge, he also says that knowledge is adaptation. This means that all human knowledge ultimately stems from processes of blind-variation-and-selective-retention. Hence the term evolutionary epistemology. There are many processes which bypass the fundamental selection processes. These include selection at higher levels, feedback, vicarious selection, etc. Intelligent problem solving is an obvious example of such a vicarious selection mechanism: it is much more effective and less costly than the primitive processes based on random mutation and selective survival. But all such mechanisms, which bypass the lower-level selection processes, are themselves representations of knowledge, ultimately achieved by blind-variation-and-selective-retention. This is Campbell's second doctrine.
The third doctrine is that all such bypass mechanisms also contain a process of blind-variation-and-selective-retention at some level of their own operation. Even non-tentative ways of acquiring knowledge, such as visual observation, or receiving verbal instruction from somebody who knows, are thus processes involving blind-variation-and-selective-retention according to Campbell's third doctrine (Campbell 1974, 1990). Allow me to discuss this controversial claim in some detail. The most deterministic and error-free knowledge-gaining process we can think of is using a computer to calculate the result of a mathematical equation. Where does a modern computer get its error-free quality from? From digitization. A fundamental digital circuit has only two possible stable states, designated 0 and 1. Any slight noise or deviation from one of these states will immediately be corrected with a return to the nearest stable state. This automatic error correction is indeed a process of selective retention. Going down to an even more fundamental level, we find that the computer circuits are made of transistors, and that the electronic processes in a transistor involve blind-variation-and-selective-retention of electrons in a semiconductor crystal. This argument is seemingly a defense of Campbell's third doctrine. But only seemingly so! My project here has not been to defend this doctrine but to reduce it to absurdity. Campbell tells us that the translation of DNA into proteins involves blind-variation-and-selective-retention. What he does not tell us is that this applies to all chemical reactions. In fact, everything that molecules, atoms, and sub-atomic particles do can be interpreted as blind-variation-and-selective-retention. And since everything in the Universe is made of such particles, everything can be said to rely on blind-variation-and-selective-retention. The problem with the claim that advanced methods of acquiring knowledge involve blind-variation-and-selective-retention is that it is extremely reductionistic. The third doctrine involves the common reductionist fallacy of ignoring that a complex system can have qualities which its constituent elements do not have. At the most fundamental level, everything involves blind-variation-and-selective-retention, but this may be irrelevant for an analysis of the higher-level functioning. I recognize that Campbell's first and second doctrines provide a promising solution to the fundamental philosophical problem of where knowledge comes from and what knowledge is, but I find the third doctrine so reductionistic that it is irrelevant.

Undeniably, however, the general darwinian formula represents an excellent mechanism for acquiring new knowledge. This mechanism is utilized in computerized methods for solving difficult optimization problems with many parameters. The principle, which is called evolutionary computation, involves computer simulation of a population of possible solutions to a given problem. New solutions are generated by mutation and sexual recombination of previous solutions, and each new generation of solutions is subjected to selection based on their fitness (Bäck et al. 1997).
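As a minimal sketch of this principle (the fitness function, population size, and mutation rate below are arbitrary placeholders, not taken from Bäck et al.), a toy genetic algorithm might look as follows:

# Minimal sketch of evolutionary computation: a toy genetic algorithm that
# maximizes an arbitrary placeholder fitness function over bit strings.
# Population size, mutation rate, and the fitness function are illustrative
# assumptions only.
import random

GENOME_LENGTH = 20

def fitness(genome):
    """Placeholder fitness: the number of 1-bits in the genome."""
    return sum(genome)

def mutate(genome, rate=0.02):
    """Flip each bit with a small probability (blind variation)."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Sexual recombination: splice two parent genomes at a random point."""
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

def evolve(pop_size=50, generations=40):
    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population survives ...
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # ... and reproduces with recombination and mutation.
        offspring = [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best fitness found:", fitness(best))

Even this toy version exhibits the essential mechanism: blind variation supplies new candidate solutions, and selective retention of the fitter candidates accumulates improvements from one generation to the next.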
This chapter has not been an account of the history of evolutionary ideas, but a study of how the principle of selection has been used for explaining cultural change. Although the principle of selection is often found in evolutionary thinking, it has sometimes played only a minor role, since traditional evolutionism has been more concerned with the direction of evolution than with its mechanism (Rambo 1991). This is one of the reasons why evolutionism has often been criticized for being teleological. The vast criticism of evolutionism has only been briefly reported here.

Nineteenth-century evolutionists lacked a clear distinction between organic and social inheritance because they did not know Mendel's laws of inheritance. 'Race' and 'culture' were synonymous to them. The principle of the survival of the fittest meant that evolution was dependent on the strongest individuals winning over the weaker ones. Since this process was regarded as natural and no distinction was made between evolution and progress, the logical consequence of this philosophy was a laissez-faire policy where the right of superior forces was the rule. In an extreme ethnocentrism, the so-called social darwinists believed that their own race and culture were superior to everyone else's and that it was therefore their right and duty to conquer the entire world. There was a strong opposition between social darwinism and socialism, because the former philosophy assumes that weakness is inborn and must naturally lead to an unkind fate, whereas the socialists believe that poverty and weakness are caused by social factors and ought to be remedied.

In Herbert Spencer's philosophy, all kinds of evolution were analogous: the Universe, the Earth, the species, the individuals, and the society - all were evolving due to one and the same process. This theory has since been rejected, and it is unfortunate that such diverse kinds of change are still designated by the same word: 'evolution'. Spencer compared society with an organism, and the different institutions in society were paralleled with organs. While this metaphor, which has been quite popular in social science, may be appropriate in connection with a static model of society such as functionalism, it may lead to serious fallacies when social change is being studied. A consequence of the organism analogy is that a theory of social change is modeled after individual development rather than after the evolution of species. In the embryonic development of a body, everything is predetermined and the cause of change is inherent in the body which is changing. When transferred to the evolution of society, this line of thought leads to a deterministic, unilinear, and teleological philosophy7. The idea of an analogy between different kinds of evolution has recently been revived in universal selection theory.

The words social darwinism, determinism, unilinearity, and teleology were invectives used mainly by the opponents of evolutionism. These concepts were so vaguely defined that the critics could include any theory under these headings, while the proponents of evolutionism with the same ease were able to demonstrate that their theories were indeed not deterministic, teleological, etc. The debate was - and still is - highly dominated by conflicts between incompatible worldviews and views of human nature. The controversies over nature versus nurture, biology versus culture, determinism versus free will, etc. have made it impossible to reach agreement, and the conflict between different paradigms has so far lasted more than a century.
Both sides have exaggerated their positions into extreme reductionism, which has made them vulnerable to criticism. Adherents of the philosophy of free will wanted an idiographic description, whereas the biologically oriented scientists demanded a nomothetic representation. Most social evolutionists were more interested in describing the direction or goal of evolution than its causes. Many failed to specify the unit of selection, the mechanism of selection, or the mode of reproduction, and only a few distinguished between genetic and cultural fitness. Their theories therefore had little explanatory power, and in particular lacked any explanation of why evolution should go in the claimed direction.

The polarization of opinions did not decrease when sociobiologists took the lead in the 1970s. With an excessive use of mathematical formulae, the theoreticians distanced themselves more and more from the real-world phenomena they were supposed to describe, and many simplifications and dubious assumptions became necessary in order to make the models mathematically tractable. The mathematical models include so many parameters that it has become impossible to determine the numerical value of them all, and it is therefore only possible to draw qualitative and conditional conclusions despite the intense focus on quantitative models. Of course the mathematical language has also widened the communication gap between sociobiologists and anthropologists.

Cultural selection theory has so far never been a separate discipline, but has been investigated by scientists from several different branches of science, such as philosophy, economics, sociology, anthropology, social psychology, linguistics, sociobiology, etc. The consequence of severe communication gaps between the different sciences and negligent literature searches has been that the same ideas have been forgotten and reinvented several times without much progress. This is the reason why primitive and antiquated theories still pop up. Many scientists fail to acknowledge the fundamental differences between genetic and cultural selection (e.g. Ruse 1974; Hill 1978; Harpending 1980; van Parijs 1981; Mealey 1985; Russell & Russell 1982-1992, 1990), and some of these theories are even more insufficient than Leslie Stephen's neglected theory from 1882. The latest development is the school of memetics, which is a much less exact discipline than sociobiology. The lack of rigor and sophistication in memetics has often been deplored, but the softness of this paradigm may help bridge the gap between the biological and humanistic sciences in the future.

In connection with the theory of cultural selection, it has often been stated that knowledge is accumulated. It is an incredible paradox that this very theory has itself deviated so much from this principle when viewed as a case in the history of ideas. The theories of social change have followed a dramatic zigzag course, where every new theoretical fad has rejected the previous one totally rather than modifying and improving it, and where the same ideas and principles have been forgotten and reinvented again and again through more than a century.

2. Brunetière's book L'Évolution des Genres dans l'Histoire de la Littérature (1890) was planned as a work in four volumes, of which volume two was to describe the general principles for the evolution of literature. Although the first volume was reprinted in several editions, the planned subsequent volumes were never published.
4. The contributions in this workshop have been published in Ethology and Sociobiology, vol. 10, no. 1-3, 1989, edited by Jerome H. Barkow.
5. An introduction to evolutionary psychology can be found in Barkow et al. 1992.
6. See above.
7. Spencer's somewhat inconsistent attitude to this question has often been debated. See Haines (1988).
http://agner.org/cultsel/chapt2/
Accidents involving spilled oil are frequent occurrences in coastal regions. Many factors, including winds, surface currents, tides, air and water temperatures, and salinity, control the movement of spilled oil. The type and amount of spilled oil, and local shoreline and bottom features, also influence the movement of an oil slick. An effective response to an oil spill requires the input of scientists representing many different specialties, along with information on the chemical composition of the spilled oil, ocean currents, and weather. These data are needed for mathematical models that predict movements (known as trajectory analysis) of the oil. When combined with biological resource information, trajectory analyses can be used to identify areas that are most vulnerable to the oil so that equipment to contain the oil spill can be dispatched to where it will be most effective. Sensitive areas requiring protection include marine sanctuaries and unique habitats, especially those that are home to endangered species. Protection is key because oil not only causes immediate contamination but also has long-term effects on coastal ecosystems.

The National Oceanic and Atmospheric Administration (NOAA) Coastal Change Analysis Program uses remote sensing data, primarily aerial photography, to identify and classify sensitive habitats of bottom-dwelling organisms, such as sea grasses. Oil spills can affect sea grasses in many ways. Toxic materials in the spilled oil are introduced into sediments, the water, and ultimately living organisms. Oil can also block sunlight from reaching the plants. Heavy oils sink, mix with sediment, and can coat sea grasses. When these grasses are damaged or killed, organisms that depend upon them as a food source or for habitat are also adversely affected. This has a ripple effect on the local ecosystem and ultimately damages economically valuable fish and shellfish, thus impacting local economies for many years. Major sectors of Maine's economy, especially tourism and fishing, require clean coastal waters, aesthetically appealing coastlines, and functioning coastal ecosystems. The state requires timely and appropriate responses to oil spills.

To demonstrate the important role of accurate oil-spill trajectory analysis and prompt response in mitigating the effects of oil spills, we examine two cases separated by nearly 25 years. In 1972, the tanker Tamano spilled 380,000 liters (100,000 gal) of oil into Casco Bay, near Portland in southern Maine, then the largest oil spill in the state's history. Plans for dealing with spills were antiquated and slow to be implemented. During the several days required to mobilize response teams, the oil spread along about 75 km (47 mi) of coastline, including beaches on 18 small islands, damaging commercially valuable fish and shellfish stocks. The cleanup that followed took 11 years and cost nearly $4 billion. On September 27, 1996, the tanker Julie N. struck a bridge entering Casco Bay, spilling 680,000 liters (180,000 gal) of light fuel oil and much heavier bunker-C, a mix of oils. The light fuel oil evaporated quickly but left behind toxic components. The less toxic but heavier bunker-C sank to the bottom, covering vegetation and wildlife with a thick, sticky coating. The response to this incident was rapid and effective. The damaged tanker quickly docked and was immediately surrounded by floating barriers, called booms, to contain floating oil.
Other booms were deployed quickly to prevent oil from reaching vulnerable biological resources or economically valuable locations. Special clean-up vessels skimmed and collected oil from the water surface. Timely response combined with successful predictions of oil movements meant that much of the damage that had occurred 25 years earlier in the same area was not repeated.
For More Information: NOAA is responsible for providing both scientific and technical support to the US Coast Guard during an oil spill, and for acting as a natural resource trustee to protect and restore resources impacted by the spill. http://www.nos.noaa.gov/programs/orr/welcome.html
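To make the trajectory-analysis idea concrete, the following minimal sketch advects a slick position with the surface current plus a small fraction of the wind velocity, a commonly cited rule of thumb for surface oil drift; the drift factor, time step, and velocity values are illustrative assumptions and not data from either Casco Bay incident.

# Minimal oil-slick advection sketch (illustrative only).
# Assumes surface drift = current velocity + a few percent of the wind
# velocity; operational trajectory models also treat turbulent diffusion,
# oil weathering, and shoreline interaction on top of this basic step.

def advect(position, current, wind, dt_hours, wind_factor=0.03):
    """Advance a slick centroid (x, y in km) by one time step.

    current and wind are (east, north) velocities in km/h; wind_factor is the
    assumed fraction of the wind speed imparted to the surface oil.
    """
    x, y = position
    u = current[0] + wind_factor * wind[0]
    v = current[1] + wind_factor * wind[1]
    return (x + u * dt_hours, y + v * dt_hours)

# Example: twelve one-hour steps with a steady (hypothetical) current and wind.
pos = (0.0, 0.0)
for _ in range(12):
    pos = advect(pos, current=(0.5, 0.2), wind=(20.0, -5.0), dt_hours=1.0)
print(f"Predicted centroid after 12 h: {pos[0]:.1f} km east, {pos[1]:.1f} km north")

Combining such predicted positions with maps of sensitive habitats is what lets responders decide where booms and skimmers will do the most good.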
http://oceanmotion.org/html/impact/marine-resources.htm
Radiation detectors provide a signal that is converted to an electric current. The device is designed so that the current is proportional to the characteristics of the incident radiation. There are also detectors that provide a change in a substance as the signal; these may be automated to produce an electric current, or the change may be quantified in proportion to the amount of new substance. Notation: let the symbol Def. indicate that a definition is following. Notation: symbols between [ and ] are replacements for that portion of a quoted text. To help with definitions, their meanings and intents, there is the learning resource theory of definition. Def. evidence that demonstrates that a concept is possible is called proof of concept. The proof-of-concept structure consists of findings that demonstrate a statistically systematic change from the status quo or the control group. Proof of concept for a radiation detector is that the radiation characteristics are accurately reported. This requires standards of performance and standard radiation sources. A detector in radiation astronomy may need to be able to separate a collection of incoming radiation to obtain a clear set of signals for the radiation of interest. For example, a detector designed for red astronomy may need to be on the rocky-object surface of the Earth to separate X-rays and gamma rays from red rays. Def. an action or process of throwing or sending out a traveling ray in a line, beam, or stream of small cross section is called radiation.
"Radiation may affect materials and devices in deleterious ways:"
- By causing the materials to become radioactive (mainly by neutron activation, or in the presence of high-energy gamma radiation by photodisintegration).
- By nuclear transmutation of the elements within the material including, for example, the production of hydrogen and helium, which can in turn alter the mechanical properties of the materials and cause swelling and embrittlement.
- By radiolysis (breaking chemical bonds) within the material, which can weaken it, cause it to swell, polymerize, promote corrosion, cause embrittlement, promote cracking or otherwise change its desirable mechanical, optical, or electronic properties.
- By formation of reactive compounds, affecting other materials (e.g. ozone cracking by ozone formed by ionization of air).
- By ionization, causing electrical breakdown, particularly in semiconductors employed in electronic equipment, with subsequent currents introducing operation errors or even permanently damaging the devices.
"Devices intended for high radiation environments such as the nuclear industry and extra atmospheric (space) applications may be made radiation hard to resist such effects through design, material selection, and fabrication methods." "Environments with high levels of ionizing radiation create special design challenges. A single charged particle can knock thousands of electrons loose, causing electronic noise and signal spikes. In the case of digital circuits, this can cause results which are inaccurate or unintelligible. This is a particularly serious problem in the design of artificial satellites, spacecraft, military aircraft, nuclear power stations, and nuclear weapons. In order to ensure the proper operation of such systems, manufacturers of integrated circuits and sensors intended for the military or aerospace markets employ various methods of radiation hardening. The resulting systems are said to be rad(iation)-hardened, rad-hard, or (within context) hardened."
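As a toy illustration of the proof-of-concept criterion above (a "statistically systematic change from the status quo or the control group"), the sketch below compares counts taken with a standard radiation source in place against a background-only control run using a simple Poisson significance estimate; the count values and the significance threshold are assumptions made for illustration.

# Toy proof-of-concept check for a counting detector (all numbers assumed).
# Compares a run with a standard check source in place against a
# background-only control run and reports a simple Poisson significance.
from math import sqrt

def excess_significance(source_counts, control_counts):
    """Approximate significance, in standard deviations, of the excess counts."""
    excess = source_counts - control_counts
    return excess / sqrt(source_counts + control_counts)

control = 1200       # hypothetical counts in 60 s with no source present
with_source = 4800   # hypothetical counts in 60 s with the standard source in place

sigma = excess_significance(with_source, control)
print(f"Excess of {with_source - control} counts, about {sigma:.0f} sigma")
if sigma > 5:        # an assumed threshold for a 'statistically systematic change'
    print("Change from the control run is statistically systematic.")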
Oblique images such as the one at right are taken by astronauts looking out from the International Space Station (ISS) at an angle, rather than looking straight downward toward the Earth (a perspective called a nadir view), as is common with most remotely sensed data from satellites. An oblique view gives the scene a more three-dimensional quality, and provides a look at the vertical structure of the volcanic plume. While much of the island is covered in green vegetation, grey deposits that include pyroclastic flows and volcanic mud-flows (lahars) are visible extending from the volcano toward the coastline. When compared to its extent in earlier views, the volcanic debris has filled in more of the eastern coastline. Urban areas are visible in the northern and western portions of the island; they are recognizable by linear street patterns and the presence of bright building rooftops. The silver-grey appearance of the Caribbean Sea surface is due to sun-glint, which is the mirror-like reflection of sunlight off the water surface back towards the hand-held camera on-board the ISS. The sun-glint highlights surface wave patterns around the island.
Theoretical radiation detectors
"Gadolinium oxysulfide is a promising luminescent host material, because of its high density (7.32 g/cm3) and high effective atomic number of Gd. These characteristics lead to a high stopping power for X-ray radiation." Cadmium telluride (CdTe) "doped with chlorine is used as a radiation detector for [X-rays], gamma rays, beta particles and alpha particles. CdTe can operate at room temperature allowing the construction of compact detectors for a wide variety of applications in nuclear spectroscopy. The properties that make CdTe superior for the realization of high performance gamma- and x-ray detectors are high atomic number, large bandgap and high electron mobility ~1100 cm2/V·s, which result in high intrinsic μτ (mobility-lifetime) product and therefore high degree of charge collection and excellent spectral resolution." Def. "the fraction of photoelectric events which end up in the photopeak of the measured energy spectrum" is called the photopeak efficiency (ε). Ending up in the photopeak means within ± 1 full-width at half maximum (FWHM) of the peak of the distribution. "The peak to valley ratio is commonly used as a token for ε." "Another common practice is to fit an exponential function to the “valley” and to extrapolate the fit to lower pulse heights to estimate the fraction of counts hidden in the Compton continuum." "We have used a calibrated Cs137 source to determine the absolute photopeak efficiency at 662 keV. The source was placed at a sufficiently large distance from the detector so that the event rate was low and the dead time was less than 20%. Based on a log-histogram of the time intervals between events, the dead-time has been estimated to a fractional accuracy of better than 5%. We determine the photopeak efficiency by comparing the dead-time corrected event rate in the photopeak with the theoretical expectation assuming a perfect detector." Def. "the average energy loss of the particle per unit path length" is called the stopping power. Def. "the slowing down of a projectile ion due to the inelastic collisions between bound electrons in the medium and the ion moving through it" is called the electronic stopping power. Def. "the elastic collisions between the projectile ion and atoms in the sample ...
[involving] the interaction of the ion with the nuclei in the target" is called the nuclear stopping power. "[T]he hardware setup also defines key experimental parameters, such as source-detector distance, solid angle and detector shielding." "Since the energy of a thermal neutron is relatively low, charged particle reaction is discrete (i.e., essentially monoenergetic) while other reactions such as gamma reactions will span a broad energy range, it is possible to discriminate among the sources." "The other sources of noise, such as alpha and beta particles, can be eliminated by various shielding materials, such as lead, plastic, thermo-coal, etc. Thus, photons cause major interference in neutron detection, since it is uncertain if neutrons or photons are being detected by the neutron detector." "A number of sources of X-ray photons have been used; these include X-ray generators, betatrons, and linear accelerators (linacs). For gamma rays, radioactive sources such as 192Ir, 60Co or 137Cs are used." "While in the past radium and radon have both been used for radiography, they have fallen out of use as they are radiotoxic alpha radiation emitters which are expensive; iridium-192 and cobalt-60 are far better photon sources." "Feature-based object recognizers generally work by pre-capturing a number of fixed views of the object to be recognized, extracting features from these views, and then in the recognition process, matching these features to the scene and enforcing geometric constraints." "Object recognition – in computer vision, this is the task of finding a given object in an image or video sequence. Humans recognize a multitude of objects in images with little effort, despite the fact that the image of the objects may vary somewhat in different view points, in many different sizes / scale or even when they are translated or rotated. Objects can even be recognized when they are partially obstructed from view. This task is still a challenge for computer vision systems in general." Def. "a device that recovers information of interest contained in a modulated wave" is called a detector. Def. “a device used to detect, track, and/or identify high-energy particles, such as those produced by nuclear decay, cosmic radiation, or reactions in a particle accelerator” is called a radiation detector. "Humans have a multitude of senses. Sight (ophthalmoception), hearing (audioception), taste (gustaoception), smell (olfacoception or olfacception), and touch (tactioception) are the five traditionally recognized. While the ability to detect other stimuli beyond those governed by the traditional senses exists, including temperature (thermoception), kinesthetic sense (proprioception), pain (nociception), balance (equilibrioception), acceleration (kinesthesioception) ..., and various internal stimuli (e.g. the different chemoreceptors for detecting salt and carbon dioxide concentrations in the blood), ... some species [are] able to sense electrical and magnetic fields, and detect water pressure and currents." "Many sensors generate outputs that reflect the rate of change in attitude. These require a known initial attitude, or external information to use them to determine attitude. Many of this class of sensor have some noise, leading to inaccuracies if not corrected by absolute attitude sensors." "Gyroscopes are devices that sense rotation in three-dimensional space without reliance on the observation of external objects. 
Classically, a gyroscope consists of a spinning mass, but there are also "Laser Gyros" utilizing coherent light reflected around a closed path. Another type of "gyro" is a hemispherical resonator gyro where a crystal cup shaped like a wine glass can be driven into oscillation just as a wine glass "sings" as a finger is rubbed around its rim. The orientation of the oscillation is fixed in inertial space, so measuring the orientation of the oscillation relative to the spacecraft can be used to sense the motion of the spacecraft with respect to inertial space." "Motion Reference Units are single- or multi-axis motion sensors. They utilize Micro-Electro-Mechanical-Structure (MEMS) sensor technology. These sensors are revolutionizing inertial sensor technology by bringing together micro-electronics with micro-machining technology, to make complete systems-on-a-chip with high accuracy." "A horizon sensor is an optical instrument that detects light from the 'limb' of the Earth's atmosphere, i.e., at the horizon. Thermal Infrared sensing is often used, which senses the comparative warmth of the atmosphere, compared to the much colder cosmic background. This sensor provides orientation with respect to the earth about two orthogonal axes. It tends to be less precise than sensors based on stellar observation. Sometimes referred to as an Earth Sensor." "Similar to the way that a terrestrial gyrocompass uses a pendulum to sense local gravity and force its gyro into alignment with earth's spin vector, and therefore point north, an orbital gyrocompass uses a horizon sensor to sense the direction to earth's center, and a gyro to sense rotation about an axis normal to the orbit plane. Thus, the horizon sensor provides pitch and roll measurements, and the gyro provides yaw. See Tait-Bryan angles." "A xenon arc lamp is a specialized type of gas discharge lamp, an electric light that produces light by passing electricity through ionized xenon gas at high pressure. It produces a bright white light that closely mimics natural sunlight." "[C]ontinuous spectra (as in bremsstrahlung and thermal radiation) are usually associated with free particles, such as atoms in a gas, electrons in an electron beam, or conduction band electrons in a metal. In particular, the position and momentum of a free particle have a continuous spectrum, but when the particle is confined to a limited space their spectra become discrete." "Inverse Compton scattering is important in astrophysics. In X-ray astronomy, the accretion disc surrounding a black hole is presumed to produce a thermal spectrum. The lower energy photons produced from this spectrum are scattered to higher energies by relativistic electrons in the surrounding corona." "Condensed noble gases, most notably liquid xenon and liquid argon, are excellent radiation detection media. They can produce two signatures for each particle interaction: a fast flash of light (scintillation) and the local release of charge (ionisation). In two-phase xenon – so called since it involves liquid and gas phases in equilibrium – the scintillation light produced by an interaction in the liquid is detected directly with photomultiplier tubes; the ionisation electrons released at the interaction site are drifted up to the liquid surface under an external electric field, and subsequently emitted into a thin layer of xenon vapour. 
Once in the gas, they generate a second, larger pulse of light (electroluminescence or proportional scintillation), which is detected by the same array of photomultipliers. These systems are also known as xenon 'emission detectors'." "Absorption spectroscopy refers to spectroscopic techniques that measure the absorption of radiation, as a function of frequency or wavelength, due to its interaction with a sample. The sample absorbs energy, i.e., photons, from the radiating field. The intensity of the absorption varies as a function of frequency, and this variation is the absorption spectrum. Absorption spectroscopy is performed across the electromagnetic spectrum." "[A] band gap, also called an energy gap or bandgap, is an energy range in a solid where no electron states can exist. In graphs of the electronic band structure of solids, the band gap generally refers to the energy difference (in electron volts) between the top of the valence band and the bottom of the conduction band in insulators and semiconductors. This is equivalent to the energy required to free an outer shell electron from its orbit about the nucleus to become a mobile charge carrier, able to move freely within the solid material. So the band gap is a major factor determining the electrical conductivity of a solid. Substances with large band gaps are generally insulators, those with smaller band gaps are semiconductors, while conductors either have very small band gaps or none, because the valence and conduction bands overlap." "Every solid has its own characteristic energy band structure. This variation in band structure is responsible for the wide range of electrical characteristics observed in various materials. In semiconductors and insulators, electrons are confined to a number of bands of energy, and forbidden from other regions. The term "band gap" refers to the energy difference between the top of the valence band and the bottom of the conduction band. Electrons are able to jump from one band to another. However, in order for an electron to jump from a valence band to a conduction band, it requires a specific minimum amount of energy for the transition. The required energy differs with different materials. Electrons can gain enough energy to jump to the conduction band by absorbing either a phonon (heat) or a photon (light)." "[T]he electronic band structure (or simply band structure) of a solid describes those ranges of energy, called energy bands, that an electron within the solid may have ("allowed bands"), and ranges of energy called band gaps ("forbidden bands"), which it may not have." "The main components of background noise in neutron detection are high-energy photons, which aren't easily eliminated by physical barriers." "[N]oise is a random fluctuation in an electrical signal, a characteristic of all electronic circuits. Noise generated by electronic devices varies greatly, as it can be produced by several different effects. Thermal noise is unavoidable at non-zero temperature (see fluctuation-dissipation theorem), while other types depend mostly on device type (such as shot noise, which needs steep potential barrier) or manufacturing quality and semiconductor defects, such as conductance fluctuations, including 1/f noise." "[N]oise is an error or undesired random disturbance of a useful information signal, introduced before or after the detector and decoder. The noise is a summation of unwanted or disturbing energy from natural and sometimes man-made sources. 
Noise is, however, typically distinguished from interference (e.g. cross-talk, deliberate jamming or other unwanted electromagnetic interference from specific transmitters), for example in the signal-to-noise ratio (SNR), signal-to-interference ratio (SIR) and signal-to-noise plus interference ratio (SNIR) measures. Noise is also typically distinguished from distortion, which is an unwanted alteration of the signal waveform, for example in the signal-to-noise and distortion ratio (SINAD). In a carrier-modulated passband analog communication system, a certain carrier-to-noise ratio (CNR) at the radio receiver input would result in a certain signal-to-noise ratio in the detected message signal. In a digital communications system, a certain Eb/N0 (normalized signal-to-noise ratio) would result in a certain bit error rate (BER)."
"[S]pikes are fast, short duration electrical transients in voltage (voltage spikes), current (current spikes), or transferred energy (energy spikes) in an electrical circuit." "Fast, short duration electrical transients (overvoltages) in the electric potential of a circuit are typically caused by"
- Lightning strikes,
- Power outages,
- Tripped circuit breakers,
- Short circuits,
- Power transitions in other large equipment on the same power line,
- Malfunctions caused by the power company,
- Electromagnetic pulses (EMP) with electromagnetic energy distributed typically up to the 100 kHz and 1 MHz frequency range, or
- Inductive spikes.
"For sensitive electronics, excessive current can flow if this voltage spike exceeds a material's breakdown voltage, or if it causes avalanche breakdown. In semiconductor junctions, excessive electric current may destroy or severely weaken that device. An avalanche diode, transient voltage suppression diode, transil, varistor, overvoltage crowbar, or a range of other overvoltage protective devices can divert (shunt) this transient current thereby minimizing voltage."
Usually, a meteor detector is designed for another form of radiation that the meteor may radiate. In the image at right, a 0.3 m meteor has impacted a meteor detector, in this case the Moon, and created a scintillation event that in turn is detected by a photoelectronic detector system. In the image at left, a meteor has impacted another detector, here Jupiter, but instead of a scintillation event has created a lowering of albedo as detected by the photoelectronic system, the Hubble Space Telescope.
“The basic set-up consists of 1600 water tanks (water Cherenkov Detectors, similar to the Haverah Park experiment) distributed over 3,000 square kilometres (1,200 sq mi), along with four atmospheric fluorescence detectors (similar to the High Resolution Fly's Eye) overseeing the surface array.” “The Pierre Auger Observatory is unique in that it is the first experiment that combines both ground and fluorescence detectors at the same site thus allowing cross-calibration and reduction of systematic effects that may be peculiar to each technique. The Cherenkov detectors use three large photomultiplier tubes to detect the Cherenkov radiation produced by high-energy particles passing through water in the tank. The time of arrival of high-energy particles from the same shower at several tanks is used to calculate the direction of travel of the original particle.
The fluorescence detectors are used to track the particle shower's glow on cloudless moonless nights, as it descends through the atmosphere.”
"The cloud chamber, also known as the Wilson chamber, is a particle detector used for detecting ionizing radiation. In its most basic form, a cloud chamber is a sealed environment containing a supersaturated vapor of water or alcohol. When a charged particle (for example, an alpha or beta particle) interacts with the mixture, it ionizes it. The resulting ions act as condensation nuclei, around which a mist will form (because the mixture is on the point of condensation). The high energies of alpha and beta particles mean that a trail is left, due to many ions being produced along the path of the charged particle. These tracks have distinctive shapes (for example, an alpha particle's track is broad and shows more evidence of deflection by collisions, while an electron's is thinner and straight). When any uniform magnetic field is applied across the cloud chamber, positively and negatively charged particles will curve in opposite directions, according to the Lorentz force law."
"The diffusion cloud chamber ... differs from the expansion cloud chamber in that it is continuously sensitized to radiation, and in that the bottom must be cooled to a rather low temperature, generally as cold as -15 degrees Fahrenheit. Alcohol vapor is also often used due to its different phase transition temperatures. Dry-ice-cooled cloud chambers are a common demonstration and hobbyist device; the most common fluid used in them is isopropyl alcohol, though methyl alcohol can be encountered as well. There are also water-cooled diffusion cloud chambers, using ethylene glycol."
"The bubble chamber ... reveals the tracks of subatomic particles ... as trails of bubbles in a superheated liquid, usually liquid hydrogen. Bubble chambers can be made physically larger than cloud chambers, and since they are filled with much-denser liquid material, they reveal the tracks of much more energetic particles."
"The ... spark chamber is an electrical device that uses a grid of uninsulated electric wires in a chamber, with voltages applied between the wires. Microscopic charged particles cause some ionization of the air along the path of the particle, and this ionization causes sparks to fly between the associated wires. The presence and location of these sparks is then registered electrically, and the information is stored for later analysis, such as by a digital computer."
"Cosmic radiation is a significant obstacle to a manned space flight to Mars. Accurate measurements of the cosmic ray environment are needed to plan appropriate countermeasures. Most cosmic ray studies are done by balloon-borne satellites with flight times that are measured in days; these studies have shown significant variations. AMS-02 [image in the superluminal detector section] will be operative on the ISS for a nominal mission of 3 years, gathering an immense amount of accurate data and allowing measurements of the long term variation of the cosmic ray flux over a wide energy range, for nuclei from protons to iron. After the nominal mission, AMS-02 can continue to provide cosmic ray measurements. In addition to understanding the radiation protection required for manned interplanetary flight, this data will allow the interstellar propagation and origins of cosmic rays to be pinned down."
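Returning to the surface-array timing described above (the arrival times of shower particles at several tanks being used to calculate the direction of travel of the original particle), the sketch below fits a planar shower front to hypothetical tank positions and trigger times with linear least squares; the flat-front assumption, the station layout, and the noise-free timing are my simplifications, not the observatory's actual reconstruction.

# Toy plane-front direction fit from tank positions and trigger times.
# Assumes a flat shower front moving at the speed of light; real
# reconstructions also model front curvature and timing uncertainties.
import numpy as np

C_KM_PER_US = 0.299792458  # speed of light, km per microsecond

def fit_direction(xy_km, t_us):
    """Least-squares fit of t_i = t0 + (ax*x_i + ay*y_i)/c for ground stations.

    Returns the shower arrival direction as (zenith, azimuth) in degrees.
    """
    A = np.column_stack([xy_km[:, 0] / C_KM_PER_US,
                         xy_km[:, 1] / C_KM_PER_US,
                         np.ones(len(t_us))])
    (ax, ay, _t0), *_ = np.linalg.lstsq(A, t_us, rcond=None)
    sin_theta = min(float(np.hypot(ax, ay)), 1.0)   # horizontal part of unit vector
    zenith = np.degrees(np.arcsin(sin_theta))
    azimuth = np.degrees(np.arctan2(-ay, -ax))      # direction the shower comes from
    return zenith, azimuth

# Hypothetical 5-tank layout (km) and synthetic, noise-free times (microseconds)
# for a shower arriving from zenith angle 30 deg, azimuth 60 deg.
tanks = np.array([[0.0, 0.0], [1.5, 0.0], [0.0, 1.5], [1.5, 1.5], [0.75, 2.5]])
theta, phi = np.radians(30.0), np.radians(60.0)
prop = np.array([-np.sin(theta) * np.cos(phi), -np.sin(theta) * np.sin(phi)])
times = tanks @ prop / C_KM_PER_US + 12.0

zen, az = fit_direction(tanks, times)
print(f"Recovered zenith ~ {zen:.1f} deg, azimuth ~ {az:.1f} deg")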
The second figure on the right shows the electronic and nuclear stopping power of an aluminum single crystal for aluminum ions. These stopping powers are plotted versus particle energy per nucleon. The maximum of the nuclear stopping curve typically occurs at energies of the order of 1 keV per nucleon. The third figure at right illustrates the slowing down of a single ion in a solid material. "In the beginning of the slowing-down process at high energies, the ion is slowed down mainly by electronic stopping, and it moves almost in a straight path. When the ion has slowed down sufficiently, the collisions with nuclei (the nuclear stopping) become more and more probable, finally dominating the slowing down. When atoms of the solid receive significant recoil energies when struck by the ion, they will be removed from their lattice positions, and produce a cascade of further collisions in the material. These collision cascades are the main cause of damage production during ion implantation in metals and semiconductors." "When the energies of all atoms in the system have fallen below the threshold displacement energy, the production of new damage ceases, and the concept of nuclear stopping is no longer meaningful. The total amount of energy deposited by the nuclear collisions to atoms in the materials is called the nuclear deposited energy." "The inset in the figure shows a typical range distribution of ions deposited in the solid. The case shown here might for instance be the slowing down of a 1 MeV silicon ion in silicon. The mean range for a 1 MeV ion is typically in the micrometer range."
The fourth figure at right is an illustration of a Bragg curve. The stopping power, and hence the density of ionization, usually increases toward the end of the range and reaches a maximum, the Bragg peak, shortly before the energy drops to zero.
“Detection hardware refers to the kind of neutron detector used [such as] the scintillation detector and to the electronics used in the detection setup. Further, the hardware setup also defines key experimental parameters, such as source-detector distance, solid angle and detector shielding. Detection software consists of analysis tools that perform tasks such as graphical analysis to measure the number and energies of neutrons striking the detector.” “Neutrons react with a number of materials through elastic scattering producing a recoiling nucleus, inelastic scattering producing an excited nucleus, or absorption with transmutation of the resulting nucleus. Most detection approaches rely on detecting the various reaction products.” “[D]etection approaches for neutrons fall into several major categories:
- Absorptive reactions with prompt reactions - Low energy neutrons are typically detected indirectly through absorption reactions. Typical absorber materials used have high cross sections for absorption of neutrons and include Helium-3, Lithium-6, Boron-10, and Uranium-235. Each of these reacts by emission of high energy ionized particles, the ionization track of which can be detected by a number of means. Commonly used reactions include 3He(n,p)3H, 6Li(n,α)3H, 10B(n,α)7Li and the fission of uranium.
- Activation processes - Neutrons may be detected by reacting with absorbers in a radiative capture, spallation or similar reaction, producing reaction products which then decay at some later time, releasing beta particles or gammas.
Selected materials (e.g., indium, gold, rhodium, iron (56Fe(n,p)56Mn), aluminum (27Al(n,α)24Na), niobium (93Nb(n,2n)92mNb), & silicon (28Si(n,p)28Al)) have extremely large cross sections for the capture of neutrons within a very narrow band of energy. Use of multiple absorber samples allows characterization of the neutron energy spectrum. Activation also allows recreation of an historic neutron exposure (e.g., forensic recreation of neutron exposures during an accidental criticality).
- Elastic scattering reactions (also referred to as proton-recoil) - High energy neutrons are typically detected indirectly through elastic scattering reactions. Neutrons collide with the nuclei of atoms in the detector, transferring energy to a nucleus and creating an ion, which is detected. Since the maximum transfer of energy occurs when the mass of the atom with which the neutron collides is comparable to the neutron mass, hydrogenous [materials with a high hydrogen content such as water or plastic] materials are often the preferred medium for such detectors.”
"Some of the alpha particles are absorbed by the atomic nuclei. The [alpha,proton] process produces protons of a defined energy which are detected. Sodium, magnesium, silicon, aluminium and sulfur can be detected by this method. This method was only used in the Mars Pathfinder APXS." At right, the second figure shows the stopping power of an aluminum metal single crystal for protons. "Choosing materials with the largest stopping powers enables thinner detectors to be produced with resulting benefits in radiation tolerance (which is a bulk effect) and lower leakage currents. Alternatively, choosing smaller stopping powers will increase scattering efficiency, which is a requirement for polarimetry, or say, the upper detection plane of a double Compton telescope."
The Energetic Particles Detector (EPD) aboard the Galileo Orbiter is "designed to measure the numbers and energies of ... electrons whose energies exceed about 20 keV ... The EPD [can] also measure the direction of travel of [electrons] ... The EPD [uses] silicon solid state detectors and a time-of-flight detector system to measure changes in the energetic [electron] population at Jupiter as a function of position and time." "[The] two bi-directional, solid-state detector telescopes [are] mounted on a platform which [is] rotated by a stepper motor into one of eight positions. This rotation of the platform, combined with the spinning of the orbiter in a plane perpendicular to the platform rotation, [permits] a 4-pi [or 4π] steradian coverage of incoming [electrons]. The forward (0 degree) ends of the two telescopes [have] an unobstructed view over the [4π] sphere or [can] be positioned behind a shield which not only [prevents] the entrance of incoming radiation, but [contains] a source, thus allowing background corrections and in-flight calibrations to be made. ... The 0 degree end of the [Low-Energy Magnetospheric Measurements System] LEMMS [uses] magnetic deflection to separate incoming electrons and ions. The 180 degree end [uses] absorbers in combination with the detectors to provide measurements of higher-energy electrons ... The LEMMS [provides] measurements of electrons from 15 keV to greater than 11 MeV ... in 32 rate channels."
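The stopping-power definitions and range distributions discussed above connect through a simple integral: the path length travelled before stopping is roughly the integral of dE/S(E) over the particle's initial energy. The sketch below evaluates that integral numerically for a made-up power-law stopping power; the functional form and constants are purely illustrative assumptions, not measured values for aluminum or any other material.

# Continuous-slowing-down range estimate: R(E0) is approximately the
# integral of dE / S(E) from near zero up to E0.  S(E) here is a made-up
# power-law "stopping power" in MeV per micrometer, purely for illustration;
# real work would use tabulated stopping powers.

def stopping_power(energy_mev):
    """Hypothetical stopping power (MeV/um) that falls off with energy."""
    return 0.05 / (energy_mev ** 0.8 + 0.01)

def csda_range_um(e0_mev, steps=10000, e_min=1e-3):
    """Trapezoid-rule integration of dE / S(E) from e_min up to e0_mev."""
    de = (e0_mev - e_min) / steps
    total, e = 0.0, e_min
    for _ in range(steps):
        total += 0.5 * de * (1.0 / stopping_power(e) + 1.0 / stopping_power(e + de))
        e += de
    return total

# With these made-up numbers a 1 MeV ion stops after roughly ten micrometers,
# consistent in order of magnitude with the "micrometer range" quoted above.
print(f"Toy range of a 1 MeV ion: {csda_range_um(1.0):.1f} um")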
"In the first 18 months of operations, AMS-02 [image under Cherenkov detectors] recorded 6.8 million positron (an antimatter particle with the mass of an electron but a positive charge) and electron events produced from cosmic ray collisions with the interstellar medium in the energy range between 0.5 giga-electron volt (GeV) and 350 GeV. These events were used to determine the positron fraction, the ratio of positrons to the total number of electrons and positrons. Below 10 GeV, the positron fraction decreased with increasing energy, as expected. However, the positron fraction increased steadily from 10 GeV to 250 GeV. This increase, seen previously though less precisely by instruments such as the Payload for Matter/antimatter Exploration and Light-nuclei Astrophysics (PAMELA) and the Fermi Gamma-ray Space Telescope, conflicts with the predicted decrease of the positron fraction and indicates the existence of a currently unidentified source of positrons, such as pulsars or the annihilation of dark matter particles. Furthermore, researchers observed an unexpected decrease in slope from 20 GeV to 250 GeV. The measured positron to electron ratio is isotropic, the same in all directions." "A neutrino detector is ... designed to study neutrinos. Because neutrinos are only weakly interacting with other particles of matter, neutrino detectors must be very large in order to detect a significant number of neutrinos. Neutrino detectors are often built underground to isolate the detector from cosmic rays and other background radiation. The field of neutrino astronomy is still very much in its infancy – the only confirmed extraterrestrial sources so far are the Sun and supernova SN1987A. Various detection methods have been used. Super Kamiokande is a large volume of water surrounded by phototubes that watch for the Cherenkov radiation emitted when an incoming neutrino creates an electron or muon in the water. The Sudbury Neutrino Observatory is similar, but uses heavy water as the detecting medium. Other detectors have consisted of large volumes of chlorine or gallium which are periodically checked for excesses of argon or germanium, respectively, which are created by neutrinos interacting with the original substance. MINOS uses a solid plastic scintillator watched by phototubes, Borexino uses a liquid pseudocumene scintillator also watched by phototubes while the proposed NOνA detector will use liquid scintillator watched by avalanche photodiodes." "With γ ray energy 50 times higher than the muon energy and a probability of muon production by the γ's of about 1%, muon detectors can match the detection efficiency of a GeV satellite detector if their effective area is larger by 104." "Germanium detectors are mostly used for spectroscopy in nuclear physics. ... germanium can have a depleted, sensitive thickness of centimeters, and therefore can be used as a total absorption detector for gamma rays up to few MeV. These detectors are also called high-purity germanium detectors (HPGe) or hyperpure germanium detectors. ... germanium crystals were doped with lithium ions (Ge(Li)), in order to produce an intrinsic region in which the electrons and holes would be able to reach the contacts and produce a signal. ... HPGe detectors commonly use lithium diffusion to make an n+ ohmic contact, and boron implantation to make a p+ contact. Coaxial detectors with a central n+ contact are referred to as n-type detectors, while p-type detectors have a p+ central contact. 
The thickness of these contacts represents a dead layer around the surface of the crystal within which energy depositions do not result in detector signals."
Detectors such as the X-ray detector at right collect individual X-rays (photons of X-ray light), count them, discern their energy or wavelength, or measure how quickly they arrive. "X-ray detectors are devices used to measure the flux, spatial distribution, spectrum or other properties of X-rays. They vary in shape and function depending on their purpose. Some common principles used to detect X-rays include the ionization of gas, the conversion to visible light in a scintillator and the production of electron-hole pairs in a semiconductor detector." "X-ray spectra can be measured either by energy dispersive or wavelength dispersive spectrometers." "X-ray detectors can be either photon counting or integrating. Photon-counting detectors measure each individual x-ray photon separately, while integrating detectors measure the total amount of energy deposited in the active region of the detector. Photon-counting detectors are normally more sensitive since they do not suffer from thermal and readout noise in the same way. Another advantage is that they can be set to count only photons in a certain energy range, or even measure the energy of each absorbed photon. Integrating detectors are normally simpler and can handle much higher photon fluxes."
"Aluminum nitride has the widest band-gap of any compound semiconductor and offers the potential of making "solar-blind" X-ray detectors, i.e., detectors insensitive to the solar visible and ultraviolet (UV) radiation."
"The dispersed ultraviolet light [from the FUSE telescope] is detected by two microchannel plate intensified double delay-line detectors, whose surfaces are curved to match the curvature of the focal plane." "Two mirror segments are coated with silicon carbide for reflectivity at the shortest ultraviolet wavelengths, and two mirror segments are coated with lithium fluoride over aluminum that reflects better at longer wavelengths." Each segment, such as those coated with silicon carbide, has a dedicated microchannel plate. The other microchannel plates are for the lithium fluoride mirror system.
"LYRA will monitor the solar irradiance in four UV passbands. They have been chosen for their relevance to solar physics, aeronomy and Space Weather:"
- the 115-125 nm Lyman-α channel,
- the 200-220 nm Herzberg continuum channel,
- the Aluminium filter channel (17-50 nm) including He II at 30.4 nm, and
- the Zirconium filter channel (1-20 nm).
"Diamond sensors make the instruments radiation-hard and solar-blind: their high bandgap energy makes them quasi-insensitive to visible light". "Solar-blind Cs-Te and Cs-I photocathode materials are sensitive to vacuum-UV and ultraviolet [and] [i]nsensitive to visible light and infrared (CsTe has cutoff at 320 nm, CsI at 200 nm)." "Magnesium fluoride transmits ultraviolet down to 115 nm. [But, it is] [h]ygroscopic, though less than other alkali halides usable for UV windows."
"Transition radiation (TR) is a form of electromagnetic radiation emitted when a charged particle passes through inhomogeneous media, such as a boundary between two different media. This is in contrast to Cherenkov radiation, which occurs when a charged particle passes through a homogeneous dielectric medium at a speed greater than the phase velocity of electromagnetic waves in that medium."
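The photon-counting versus integrating distinction quoted above amounts to two different reductions of the same list of absorbed photons: counting events inside a chosen energy window versus summing all deposited energy. A minimal sketch, with made-up photon energies and an arbitrary window, is shown below.

# Two readout modes applied to the same hypothetical list of absorbed
# X-ray photon energies (keV): photon counting inside an energy window
# versus integrating the total deposited energy.

photon_energies_kev = [1.2, 6.4, 6.4, 8.0, 22.1, 6.4, 59.5, 6.4]  # made-up events

def photon_counting(energies, e_low, e_high):
    """Count only the photons whose energy falls inside the chosen window."""
    return sum(1 for e in energies if e_low <= e <= e_high)

def integrating(energies):
    """Total energy deposited in the active region, regardless of photon energy."""
    return sum(energies)

# Window around ~6.4 keV (an Fe K-alpha-like line), ignoring everything else.
print("Counts in the 6-7 keV window:", photon_counting(photon_energies_kev, 6.0, 7.0))
print("Integrated energy (keV):", integrating(photon_energies_kev))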
"Optical Transition radiation is produced by relativistic charged particles when they cross the interface of two media of different dielectric constants. The emitted radiation is the homogeneous difference between the two inhomogeneous solutions of Maxwell's equations of the electric and magnetic fields of the moving particle in each medium separately. In other words, since the electric field of the particle is different in each medium, the particle has to "shake off" the difference when it crosses the boundary. The total energy loss of a charged particle on the transition depends on its Lorentz factor and is mostly directed forward, peaking at an angle of the order of relative to the particle's path. The intensity of the emitted radiation is roughly proportional to the particle's energy ." "A transition radiation detector (TRD) is a particle detector using the -dependent threshold of transition radiation in a stratified material. It contains many layers of materials with different indices of refraction. At each interface between materials, the probability of transition radiation increases with the relativistic gamma factor. Thus particles with large give off many photons, and small give off few. For a given energy, this allows a discrimination between a lighter particle (which has a high and therefore radiates) and a heavier particle (which has a low and radiates much less)." "Multialkali (Na-K-Sb-Cs) [photocathode materials have a] wide spectral response from ultraviolet to near-infrared [where] special cathode processing can extend range to 930 nm. [These are] [u]sed in broadband spectrophotometers." "Borosilicate glass [window material] is commonly used for near-infrared to about 300 nm." "[T]he wide-gap II-VI semiconductor ZnO doped with Co2+ (Zn1-xCoxO) ... responds to visible light ... Excitation into the intense 4T1(P) d-d band at ∼2.0 eV (620 nm) leads to Co2+/3+ ionization [with an] experimental maximum in the external photon-to-current conversion efficiencies at values well below the solid solubility of Co2+ in ZnO." Most spacecraft designed for optical astronomy or visual astronomy carry aboard a violet or blue filter covering the wavelength range from 350-430 nm. The Solid State Imaging camera of the Galileo spacecraft uses a broad-band filter centered at 404 nm for violet astronomy. The Hubble Space Telescope has throughout its long life used a variety of violet broad and narrow band filters for violet astronomy. The Wide Field Planetary Camera (PC-1) in use from about 1990 through 1993 carried the violet band filters: F330W, F336W, F344N, F368M, F375N, F413M, F435W, F437N, F439W, and F469N. The Wide Field Planetary Camera (PC-2) replaced PC-1 and carried the following violet filters on the same filter wheels: F300W, F336W, F343N, F375N, F380W, F390N, F410M, F437N, F439W, F450W, F467M and F469N. The violet filter on each of the Viking Orbiters is centered at 440 nm with a range of 350-470 nm. At right is an image of the spectral range of the Violet filter (50 to 400 nm) on the Imaging Science System aboard the Voyager 1 and Voyager 2 Spacecraft, as defined by the instrument descriptions of the Narrow Angle Camera and Wide Angle Camera. In about 1981 "an efficient blue- and violet-sensitive RCA CCD did appear on the market." "To allow for the maximum number of high-efficiency coatings it was decided to separate the light into two separate spectrographs using a dichroic filter located immediately behind the entrance aperture of slit. 
The dichroic, mounted at 45°, reflects blue and violet light and transmits red and near-infrared light." The blue- and violet-sensitive CCD successfully detected the helium lines from 501.5 to 318.8 nm. "The MIC (Microchannel plate Intensified CCD (Charge Coupled Device)) detector ... [has a] measured resolution of the detector system [of] 18 micrometers FWHM at 490 nm. [It is] for the ESA X-Ray Multi Mirror Mission (XMM), where the MIC has been accepted as the blue detector for the incorporated Optical Monitor (OM)." "A0620-00 [is observed] with the [Faint Object Spectrograph] FOS blue detector" while aboard the Hubble Space Telescope.
The Wide Field/Planetary Camera (PC-1) had F469N, F487N, and F492M cyan filters in its filter wheels. The Wide Field Planetary Camera (PC-2) replaced PC-1 on the Hubble Space Telescope and carried the following cyan filters on the same filter wheels: F467M, F469N, F487N. The Advanced Camera for Surveys carried an F475W broadband cyan filter. The Faint Object Camera (FOC) carries F470M, F480LP, and F486N cyan filters.
The Wide Field Planetary Camera (PC-1) of the Hubble Space Telescope was in use from about 1990 through 1993. It carried 48 filters on 12 filter wheels of four each. For the green band, these were the F492M, F502N, F517N, F547M, and the F555W. Those ending in 'N' are narrow-band filters. One of these filters is F492M, which allows imaging with the [O III]λλ4959,5007 lines and their adjacent green continuum. The filter band pass is centered at 490.6 nm with a full-width at half maximum (FWHM) of 36.4 nm. The F492M filter also includes Hβ. The F502N is centered at 501.85 nm with a band pass of 2.97 nm. The F547M is centered at 546.1 nm with a band pass of 43.8 nm. The Wide Field Planetary Camera (PC-2) replaced PC-1 and carried the following filters on the same filter wheels: F467M, F502N, F547M, F555W, and the F569W. In December 1993 PC-1 was replaced with PC-2 and the HST was declared operational on January 13, 1994. Onboard the HST is the Faint Object Camera (FOC), which carries filters for green astronomy: F470M, F480LP, F501N, F502N, and the F550M. "The Wide Field Camera 3 (WFC3) is the Hubble Space Telescope's last and most technologically advanced instrument to take images in the visible spectrum. It was installed as a replacement for the Wide Field and Planetary Camera 2 during the first spacewalk of Space Shuttle mission STS-125 on May 14, 2009."
Initially the Hubble Space Telescope had the Wide Field/Planetary Camera (WF/PC-1) aboard, where the F555W, F569W, F588N, and F606W filters cover the entire yellow portion of the electromagnetic spectrum. The Hubble's Faint Object Camera (FOC) uses the F550M and F600M filters, which cover the yellow band from either side. The Wide Field and Planetary Camera (WFPC2) replaced PC-1 and used F555W, F569W, F588N and F606W filters. The WF/PC-1 filters available for orange astronomy are the F588N, F606W, and F622W. The FOC uses the F600M and F630M. The WFPC2 uses the F588N, F606W, and F622W. The following WF/PC-1 filters are available for red astronomy: F606W, F622W, F631N, F648M, F656N, F658N, F664N, F673N, F675W, F702W, F718M, and F725LP. The FOC uses the F630M for the shorter wavelength red rays. The Hubble WFPC2 uses F606W, F622W, F631N, F656N, F658N, F673N, F675W, F702W, and F775W.
"An infrared detector is [usually one of] two main types ... thermal and photonic (photodetectors). The thermal effects of the incident IR radiation can be followed through many temperature dependent phenomena.
Bolometers and microbolometers are based on changes in resistance. Thermocouples and thermopiles use the thermoelectric effect. Golay cells follow thermal expansion. In IR spectrometers the pyroelectric detectors are the most widespread." "The response time and sensitivity of photonic detectors can be much higher, but usually these have to be cooled to cut thermal noise. The materials in these are semiconductors with narrow band gaps. Incident IR photons can cause electronic excitations. In photoconductive detectors, the resistivity of the detector element is monitored. Photovoltaic detectors contain a p-n junction on which photoelectric current appears upon illumination."
The Hubble PC-1 used the F785LP, F791W, F814W, F850LP, F875M, F889N, F1042M, and F1083N filters for infrared astronomy. The PC-2 used F785LP, F791W, F814W, F850LP, F953N, and F1042M.
"Metal-mesh filters have many applications for use in the far infrared (FIR) and submillimeter regions of the electromagnetic spectrum. These filters have been used in FIR and submillimeter astronomical instruments for over four decades, in which they serve two main purposes: bandpass or low-pass filters are cooled and used to lower the noise equivalent power of cryogenic bolometers (detectors) by blocking excess thermal radiation outside of the frequency band of observation, and bandpass filters can be used to define the observation band of the detectors. Metal-mesh filters can also be designed for use at 45° to split an incoming optical signal into several observation paths, or for use as a polarizing half wave plate."
"The coherer ... consists of a tube or capsule containing two electrodes spaced a small distance apart, with metal filings in the space between them. When a radio frequency signal is applied to the device, the initial high resistance of the filings reduces, allowing an electric current to flow through it."
"Most Cherenkov detectors aim at recording the Cherenkov light produced by a primary charged particle. Some sensor technologies explicitly aim at Cherenkov light produced (also) by secondary particles, be it incoherent emission, as occurs in an electromagnetic particle shower, or coherent emission, for example the Askaryan effect." "Cherenkov radiation is not only present in the range of visible light or UV light but also in any frequency range where the emission condition can be met, i.e. in the radiofrequency range." "Different levels of information can be used. Binary information can be based on the absence or presence of detected Cherenkov radiation. The amount or the direction of Cherenkov light can be used. In contrast to a scintillation counter, the light production is instantaneous." "Cherenkov threshold detectors have been used for fast timing and Time of flight measurements in particle physics experiments. More elaborate designs use the amount of light produced. Recording light from both primary and secondary particles, for a Cherenkov calorimeter the total light yield is proportional to the incident particle energy." "Using the light direction are differential Cherenkov detectors. Recording individual Cherenkov photon locations on a position-sensitive sensor area, RICH detectors then reconstruct Cherenkov angles from the recorded patterns. As RICH detectors hence provide information on the particle velocity, if the momentum of the particle is also known (from magnetic bending), combining these two pieces of information enables the particle mass to be deduced so that the particle type can be identified."
"A Ring-imaging Cherenkov (RICH) detector is a device that allows the identification of electrically charged subatomic particle types through the detection of the Cherenkov radiation emitted (as photons) by the particle in traversing a medium with refractive index n > 1. The identification is achieved by measurement of the angle of emission, θc, of the Cherenkov radiation, which is related to the charged particle's velocity v by cos θc = c/(n v), where c is the speed of light and n is the refractive index of the medium."
"The LHCb experiment on the Large Hadron Collider uses two RICH detectors for differentiating between pions and kaons. The first (RICH-1) is located immediately after the Vertex Locator (VELO) around the interaction point and is optimised for low-momentum particles and the second (RICH-2) is located after the magnet and particle-tracker layers and optimised for higher-momentum particles."
"A Faraday cup is a metal (conductive) cup designed to catch charged particles in vacuum. The resulting current can be measured and used to determine the number of ions or electrons hitting the cup." "When a beam or packet of ions hits the metal it gains a small net charge while the ions are neutralized. The metal can then be discharged to measure a small current equivalent to the number of impinging ions. Essentially the faraday cup is part of a circuit where ions are the charge carriers in vacuum and the faraday cup is the interface to the solid metal where electrons act as the charge carriers (as in most circuits). By measuring the electrical current (the number of electrons flowing through the circuit per second) in the metal part of the circuit the number of charges being carried by the ions in the vacuum part of the circuit can be determined. For a continuous beam of ions (each with a single charge), N = I t / e, where N is the number of ions observed in a time t (in seconds), I is the measured current (in amperes) and e is the elementary charge (about 1.60 × 10^-19 C). Thus, a measured current of one nanoamp (10^-9 A) corresponds to about 6 billion ions striking the faraday cup each second."
"A gas detector is a device which detects the presence of various gases within an area" or volume. "The combination of nanotechnology and microelectromechanical systems (MEMS) technology allows the production of a hydrogen microsensor that functions properly at room temperature. One type of MEMS-based hydrogen sensor is coated with a film consisting of nanostructured indium oxide (In2O3) and tin oxide (SnO2). A typical configuration for mechanical Pd-based hydrogen sensors is the usage of a free-standing cantilever that is coated with Pd. In the presence of H2, the Pd layer expands and thereby induces a stress that causes the cantilever to bend. Pd-coated nano-mechanical resonators have also been reported in literature, relying on the stress-induced mechanical resonance frequency shift caused by the presence of H2 gas. In this case, the response speed was enhanced through the use of a very thin layer of Pd (20 nm). Moderate heating was presented as a solution to the response impairment observed in humid conditions."
"A liquid is made up of tiny vibrating particles of matter, such as atoms and molecules, held together by intramolecular bonds. ... Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist."
The first image at right shows liquid water using an infrared detector, although the presence of liquid water cannot be confirmed from the infrared image alone; it is inferred. The image at left uses a visual radiation detector to record a meteor collision with liquid water.
"Reconstructions of seismic waves in the deep interior of the Earth show that there are no S-waves in the outer core. This indicates that the outer core is liquid, because liquids cannot support shear. The outer core is liquid, and the motion of this highly conductive fluid generates the Earth's magnetic field (see geodynamo)."
"A number of bright spots with a bluish tinge are visible ... These are relatively recent impact craters. Some of the bright craters have bright streaks ... emanating from them. Bright features such as these are caused by the presence of freshly crushed rock material that was excavated and deposited during the highly energetic collision of a meteoroid with Mercury to form an impact crater."
The object at left is detected to be a rocky object using radar astronomy. "The advantages of radar in planetary astronomy result from (1) the observer's control of all the attributes of the coherent signal used to illuminate the target, especially the wave form's time/frequency modulation and polarization; (2) the ability of radar to resolve objects spatially via measurements of the distribution of echo power in time delay and Doppler frequency; (3) the pronounced degree to which delay-Doppler measurements constrain orbits and spin vectors; and (4) centimeter-to-meter wavelengths, which easily penetrate optically opaque planetary clouds and cometary comae, permit investigation of near-surface macrostructure and bulk density, and are sensitive to high concentrations of metal or, in certain situations, ice."
"Each element has electronic orbitals of characteristic energy. Following removal of an inner electron by an energetic photon provided by a primary radiation source, an electron from an outer shell drops into its place. There are a limited number of ways in which this can happen ... The main transitions are given names: an L→K transition is traditionally called Kα, an M→K transition is called Kβ, an M→L transition is called Lα, and so on. Each of these transitions yields a fluorescent photon with a characteristic energy equal to the difference in energy of the initial and final orbital."
"[T]he detection of absorption by interstellar hydrogen fluoride (HF) [in the submillimeter band occurs] along the sight line to the submillimeter continuum sources W49N and W51." "[A]bsorption features in the submillimeter spectrum of Mars ... are due to the H2O (110-101) and 13CO (5-4) rotational transitions."
"A metal detector is a device which responds to metal that may not be readily apparent. The simplest form of a metal detector consists of an oscillator producing an alternating current that passes through a coil producing an alternating magnetic field. If a piece of electrically conductive metal is close to the coil, eddy currents will be induced in the metal, and this produces a magnetic field of its own. If another coil is used to measure the magnetic field (acting as a magnetometer), the change in the magnetic field due to the metallic object can be detected."
"Modern top models are fully computerized, using integrated circuit technology to allow the user to set sensitivity, discrimination, track speed, threshold volume, notch filters, etc., and hold these parameters in memory for future use.
Compared to just a decade ago, detectors are lighter, deeper-seeking, use less battery power, and discriminate better." "Coil designers also tried out innovative designs. The original induction balance coil system consisted of two identical coils placed on top of one another. Compass Electronics produced a new design: two coils in a D shape, mounted back-to-back to form a circle. This system was widely used in the 1970s, and both concentric and D type (or widescan as they became known) had their fans. Another development was the invention of detectors which could cancel out the effect of mineralization in the ground. This gave greater depth, but was a non-discriminate mode. It worked best at lower frequencies than those used before, and frequencies of 3 to 20 kHz were found to produce the best results. Many detectors in the 1970s had a switch which enabled the user to switch between the discriminate mode and the non-discriminate mode. Later developments switched electronically between both modes. The development of the induction balance detector would ultimately result in the motion detector, which constantly checked and balanced the background mineralization." "At the same time, developers were looking at using a different technique in metal detection called pulse induction. Unlike the beat frequency oscillator or the induction balance machines which both used a uniform alternating current at a low frequency, the pulse induction machine simply fired a high-voltage pulse of signal into the ground. In the absence of metal, the pulse decayed at a uniform rate, and the time it took to fall to zero volts could be accurately measured. However, if metal was present when the machine fired, a small current would flow in the metal, and the time for the voltage to drop to zero would be increased. These time differences were minute, but the improvement in electronics made it possible to measure them accurately and identify the presence of metal at a reasonable distance. These new machines had one major advantage: they were completely impervious to the effects of mineralization, and rings and other jewelry could now be located even under highly-mineralized black sand." Occasionally, a detector needs a specific geographic property for optimal function. Large surface area: "The Pierre Auger Observatory is an international cosmic ray observatory designed to detect ultra-high-energy cosmic rays: single sub-atomic particles (protons or atomic nuclei) with energies beyond 10²⁰ eV (about the energy of a tennis ball traveling at 80 km/h). These high energy particles have an estimated arrival rate of just 1 per km² per century, therefore the Auger Observatory has created a detection area the size of Rhode Island — over 3,000 km² (1,200 sq mi) — in order to record a large number of these events. It is located in western Argentina's Mendoza Province, in one of the South American Pampas." Next to water: "The Big Bear Solar Observatory (BBSO) is a solar observatory located on the north side of Big Bear Lake in the San Bernardino Mountains of southwestern San Bernardino County, California (USA), approximately 120 kilometers (75 mi) east of downtown Los Angeles." "The location at Big Bear Lake is optimal due to the clarity of the sky and the presence of a body of water. The lake surface is about 2,055 meters (6,742 ft) above sea level, putting it above a significant portion of the atmosphere.
The main observatory building is in the open waters of the lake, and was originally reached by boat, though a causeway was added later. The water provides a cooling effect on the atmosphere surrounding the building and eliminates ground heat radiation waves that normally would cause optical aberrations." "Neutrino detectors [such as the Sudbury Neutrino Observatory] are often built underground to isolate the detector from cosmic rays and other background radiation." "IceCube [under the ice at the Amundsen-Scott South Pole Station in Antarctica] contains thousands of spherical optical sensors called Digital Optical Modules (DOMs), each with a photomultiplier tube (PMT) and a single board data acquisition computer which sends digital data to the counting house on the surface above the array." "ANTARES is the name of a neutrino detector residing 2.5 km under the Mediterranean Sea off the coast of Toulon, France. It is designed to be used as a directional Neutrino Telescope to locate and observe neutrino flux from cosmic origins in the direction of the Southern Hemisphere of the Earth, a complement to the southern hemisphere neutrino detector IceCube that detects neutrinos from the North." "The wheel of the [OSO 5] satellite carried, amongst other experiments, a CsI crystal scintillator. The central crystal was 0.635 cm thick, had a sensitive area of 70 sq-cm, and was viewed from behind by a pair of photomultiplier tubes. The shield crystal had a wall thickness of 4.4 cm and was viewed by 4 photomultipliers. The field of view was ~ 40 degrees. The energy range covered was 14-254 keV. There were 9 energy channels: the first covering 14-28 keV and the others equally spaced from 28-254 keV. In-flight calibration was done with an ²⁴¹Am source. The instrument was designed primarily for observation of solar X-ray bursts. A secondary interest was the measurement of the intensity, spectrum, and spatial distribution of the diffuse cosmic background. The data produced a spectrum of the diffuse background over the energy range 14-200 keV." "The energy differences between levels in the Bohr model, and hence the wavelengths of emitted/absorbed photons, are given by the Rydberg formula: $\frac{1}{\lambda} = R\left(\frac{1}{n'^2} - \frac{1}{n^2}\right)$, where n is the initial energy level, n′ is the final energy level, and R is the Rydberg constant. Meaningful values are returned only when n is greater than n′ and the limit of one over infinity is taken to be zero." "A relativistic jet coming out of the center of an active galactic nucleus is moving along AB with a velocity $v$ at an angle $\theta$ to the line of sight. We are observing the jet from the point O. At time $t_1$ a light ray leaves the jet from point A and another ray leaves at time $t_2 = t_1 + \delta t$ from point B. The observer at O receives the rays at times $t'_1$ and $t'_2$ respectively, so that the observed interval is $\delta t' = t'_2 - t'_1 = \delta t\,(1 - \beta\cos\theta)$, where $\beta = v/c$. The apparent transverse velocity along CB is then $\beta_T = \dfrac{\beta\sin\theta}{1 - \beta\cos\theta}$. If $\beta \to 1$ (i.e. when the velocity of the jet is close to the velocity of light), then $\beta_T$ can exceed 1 despite the fact that $\beta < 1$. And of course $\beta_T > 1$ means the apparent transverse velocity along CB, the only velocity on the sky that we can measure, is larger than the velocity of light in vacuum, i.e. the motion is apparently superluminal." "An earth sensor is a device that senses the direction to the Earth. It is usually an infrared camera; now the main method to detect attitude is the star tracker, but earth sensors are still integrated in satellites for their low cost and reliability."
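Both relations reconstructed above can be illustrated with a short Python sketch; the quantum numbers, jet speed, and viewing angle below are arbitrary assumptions chosen for the example, not values from any cited observation.

```python
import math

RYDBERG = 1.0973731568e7  # m^-1, Rydberg constant (R_infinity)

def rydberg_wavelength_nm(n_initial, n_final):
    """Wavelength (nm) of the photon emitted in the n_initial -> n_final transition."""
    if n_initial <= n_final:
        raise ValueError("n_initial must exceed n_final for emission")
    inv_lambda = RYDBERG * (1.0 / n_final**2 - 1.0 / n_initial**2)
    return 1e9 / inv_lambda

def apparent_transverse_beta(beta, theta_deg):
    """beta_T = beta*sin(theta) / (1 - beta*cos(theta)) for a jet at angle theta to the line of sight."""
    theta = math.radians(theta_deg)
    return beta * math.sin(theta) / (1.0 - beta * math.cos(theta))

if __name__ == "__main__":
    # Hydrogen Lyman-alpha (n = 2 -> n' = 1) comes out near 121.5 nm.
    print("Lyman-alpha wavelength (nm):", round(rydberg_wavelength_nm(2, 1), 1))
    # A jet with beta = 0.99 viewed 8 degrees from the line of sight appears superluminal.
    print("apparent transverse beta:", round(apparent_transverse_beta(0.99, 8.0), 2))
```

With these assumed values the apparent transverse velocity comes out at roughly seven times the speed of light, even though the jet itself is subluminal.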
"Star trackers, which require high sensitivity, may become confused by sunlight reflected from the spacecraft, or by exhaust gas plumes from the spacecraft thrusters (either sunlight reflection or contamination of the star tracker window). Star trackers are also susceptible to a variety of errors (low spatial frequency, high spatial frequency, temporal, ...) in addition to a variety of optical sources of error (spherical aberration, chromatic aberration, ...). There are also many potential sources of confusion for the star identification algorithm (planets, comets, supernovae, the bimodal character of the point spread function for adjacent stars, other nearby satellites, point-source light pollution from large cities on Earth, ...). There are roughly 57 bright navigational stars in common use. However, for more complex missions, entire star field databases are used to determine spacecraft orientation. A typical star catalog for high-fidelity attitude determination is originated from a standard base catalog (for example from the United States Naval Observatory) and then filtered to remove problematic stars, for example due to apparent magnitude variability, color index uncertainty, or a location within the Hertzsprung-Russell diagram implying unreliability. These types of star catalogs can have thousands of stars stored in memory on board the spacecraft, or else processed using tools at the ground station and then uploaded." "X-rays going through a gas will ionize it, producing positive ions and free electrons. An incoming photon will create a number of such ion pairs proportional to its energy. If there is an electric field in the gas chamber ions and electrons will move in different directions and thereby cause a detectable current. The behaviour of the gas will depend on the applied voltage and the geometry of the chamber." "Ionization chambers use a relatively low electric field of about 100 V/cm to extract all ions and electrons before they recombine. This gives a steady current proportional to the dose rate the gas is exposed to." "Proportional counters use a geometry with a thin positively charged anode wire in the center of a cylindrical chamber. Most of the gas volume will act as an ionization chamber, but in the region closest to the wire the electric field is high enough to make the electrons ionize gas molecules. This will create an avalanche effect greatly increasing the output signal. Since every electron cause an avalanche of approximately the same size the collected charge is proportional to the number of ion pairs created by the absorbed x-ray. This makes it possible to measure the energy of each incoming photon." "Gas detectors are usually single pixel detectors measuring only the average dose rate over the gas volume or the number of interacting photons ..., but they can be made spatially resolving by having many crossed wires in a wire chamber." Lithium-drifted silicon detectors "Since the 1970s, new semiconductor detectors have been developed (silicon or germanium doped with lithium: Si(Li) or Ge(Li)). X-ray photons are converted to electron-hole pairs in the semiconductor and are collected to detect the X-rays. When the temperature is low enough (the detector is cooled by Peltier effect or even cooler liquid nitrogen), it is possible to directly determine the X-ray energy spectrum; this method is called energy dispersive X-ray spectroscopy (EDX or EDS); it is often used in small X-ray fluorescence spectrometers. 
These detectors are sometimes called "solid state detectors". Detectors based on cadmium telluride (CdTe) and its alloy with zinc, cadmium zinc telluride, have an increased sensitivity, which allows lower doses of X-rays to be used." "Some materials such as sodium iodide (NaI) can "convert" an X-ray photon to a visible photon; an electronic detector can be built by adding a photomultiplier. These detectors are called "scintillators", filmscreens or "scintillation counters"." "A scintillator is a material, which exhibits scintillation—the property of luminescence when excited by ionizing radiation. Luminescent materials, when struck by an incoming particle, absorb its energy and scintillate, i.e., reemit the absorbed energy in the form of light." Here, "particle" refers to "ionizing radiation" and can refer either to charged particulate radiation, such as electrons and heavy charged particles, or to uncharged radiation, such as photons and neutrons, provided that they have enough energy to induce ionization. "A scintillation detector or scintillation counter is obtained when a scintillator is coupled to an electronic light sensor such as a photomultiplier tube (PMT) or a photodiode. PMTs absorb the light emitted by the scintillator and reemit it in the form of electrons via the photoelectric effect. The subsequent multiplication of those electrons (sometimes called photo-electrons) results in an electrical pulse which can then be analyzed and yield meaningful information about the particle that originally struck the scintillator." "When a charged particle strikes the scintillator, its atoms are excited and photons are emitted. These are directed at the photomultiplier tube's photocathode, which emits electrons by the photoelectric effect. These electrons are electrostatically accelerated and focused by an electrical potential so that they strike the first dynode of the tube. The impact of a single electron on the dynode releases a number of secondary electrons which are in turn accelerated to strike the second dynode. Each subsequent dynode impact releases further electrons, and so there is a current amplifying effect at each dynode stage. Each stage is at a higher potential than the previous to provide the accelerating field. The resultant output signal at the anode is in the form of a measurable pulse for each photon detected at the photocathode, and is passed to the processing electronics. The pulse carries information about the energy of the original incident radiation on the scintillator. Thus both intensity and energy of the radiation can be measured." "The time evolution of the number of emitted scintillation photons N in a single scintillation event can often be described by the linear superposition of one or two exponential decays. For two decays, we have the form: $N(t) = A\,e^{-t/\tau_f} + B\,e^{-t/\tau_s}$, where $\tau_f$ and $\tau_s$ are the fast (or prompt) and the slow (or delayed) decay constants. Many scintillators are characterized by 2 time components: one fast (or prompt), the other slow (or delayed). While the fast component usually dominates, the relative amplitudes A and B of the two components depend on the scintillating material. Both of these components can also be a function of the energy loss dE/dx. In cases where this energy loss dependence is strong, the overall decay time constant varies with the type of incident particle. Such scintillators enable pulse shape discrimination, i.e., particle identification based on the decay characteristics of the PMT electric pulse.
For instance, when BaF2 is used, γ rays typically excite the fast component, while α particles excite the slow component: it is thus possible to identify them based on the decay time of the PMT signal." "A semiconductor detector is a device that uses a semiconductor (usually silicon or germanium) to detect traversing charged particles or the absorption of photons. In the field of particle physics, these detectors are usually known as silicon detectors. When their sensitive structures are based on a single diode, they are called semiconductor diode detectors. When they contain many diodes with different functions, the more general term semiconductor detector is used. Semiconductor detectors have found broad application during recent decades, in particular for gamma and X-ray spectrometry and as particle detectors." "[R]adiation is measured by means of the number of charge carriers set free in the detector, which is arranged between two electrodes. Ionizing radiation produces free electrons and holes. The number of electron-hole pairs is proportional to the energy transmitted by the radiation to the semiconductor. As a result, a number of electrons are transferred from the valence band to the conduction band, and an equal number of holes are created in the valence band. Under the influence of an electric field, electrons and holes travel to the electrodes, where they result in a pulse that can be measured in an outer circuit, as described by the Shockley-Ramo Theorem. The holes travel in the opposite direction and can also be measured. As the amount of energy required to create an electron-hole pair is known, and is independent of the energy of the incident radiation, measuring the number of electron-hole pairs allows the energy of the incident radiation to be found." Silicon drift detector "Silicon drift detectors (SDDs), produced by conventional semiconductor fabrication, now provide a cost-effective and high resolving power radiation measurement. Unlike conventional X-ray detectors, such as Si(Li)s, they do not need to be cooled with liquid nitrogen." Scintillator plus semiconductor detectors "With the advent of large semiconductor array detectors it has become possible to design detector systems using a scintillator screen to convert from X-rays to visible light which is then converted to electrical signals in an array detector. ... The array consists of a sheet of glass covered with a thin layer of silicon that is in an amorphous or disordered state. At a microscopic scale, the silicon has been imprinted with millions of transistors arranged in a highly ordered array, like the grid on a sheet of graph paper. Each of these thin film transistors (TFTs) is attached to a light-absorbing photodiode making up an individual pixel (picture element). Photons striking the photodiode are converted into two carriers of electrical charge, called electron-hole pairs. Since the number of charge carriers produced will vary with the intensity of incoming light photons, an electrical pattern is created that can be swiftly converted to a voltage and then a digital signal, which is interpreted by a computer to produce a digital image. Although silicon has outstanding electronic properties, it is not a particularly good absorber of X-ray photons. For this reason, X-rays first impinge upon scintillators made from such materials as gadolinium oxysulfide or caesium iodide. The scintillator absorbs the X-rays and converts them into visible light photons that then pass onto the photodiode array." 
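As a concrete illustration of the two-component scintillation decay reconstructed above and of the electron-hole-pair counting used by semiconductor detectors, here is a small Python sketch. The decay constants, the component amplitudes, and the roughly 3.6 eV per electron-hole pair assumed for silicon are typical textbook-style values used only as assumptions, not parameters of any specific detector quoted in this section.

```python
import math

def scintillation_intensity(t_ns, a, tau_fast_ns, b, tau_slow_ns):
    """Two-component decay: N(t) = A*exp(-t/tau_f) + B*exp(-t/tau_s)."""
    return a * math.exp(-t_ns / tau_fast_ns) + b * math.exp(-t_ns / tau_slow_ns)

def slow_fraction(a, tau_fast_ns, b, tau_slow_ns):
    """Fraction of the total light in the slow component, a simple pulse-shape discriminant.
    Integrating each exponential from 0 to infinity gives A*tau_f and B*tau_s."""
    fast_light = a * tau_fast_ns
    slow_light = b * tau_slow_ns
    return slow_light / (fast_light + slow_light)

def electron_hole_pairs(photon_energy_ev, energy_per_pair_ev=3.6):
    """Mean number of electron-hole pairs for a photon fully absorbed in silicon
    (about 3.6 eV per pair is assumed here)."""
    return photon_energy_ev / energy_per_pair_ev

if __name__ == "__main__":
    # Illustrative amplitudes and decay times only; real values depend on the scintillator.
    print("light remaining at t = 50 ns:", scintillation_intensity(50, a=1.0, tau_fast_ns=6, b=0.2, tau_slow_ns=300))
    print("slow-component fraction:", round(slow_fraction(1.0, 6, 0.2, 300), 2))
    print("e-h pairs for a 60 keV photon:", int(electron_hole_pairs(60e3)))
```

A particle type that puts more of its light into the slow component (as the α particles in the BaF2 example above do) shows up as a larger slow-component fraction, which is the basis of pulse shape discrimination.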
- Ginger Lehrman and Ian B Hogue, Sarah Palmer, Cheryl Jennings, Celsa A Spina, Ann Wiegand, Alan L Landay, Robert W Coombs, Douglas D Richman, John W Mellors, John M Coffin, Ronald J Bosch, David M Margolis (August 13, 2005). "Depletion of latent HIV-1 infection in vivo: a proof-of-concept study". Lancet 366 (9485): 549-55. doi:10.1016/S0140-6736(05)67098-5. Retrieved on 2012-05-09. - (March 16, 2013) "Radiation damage". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-24. - (May 8, 2013) "Radiation hardening". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-24. - (March 16, 2013) "Gadolinium oxysulfide". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-20. - P. Capper (1994). Properties of Narrow-Gap Cadmium-Based Compounds. London, UK: INSPEC, IEE. ISBN 0-85296-880-9. - (May 5, 2013) "Cadmium telluride". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-20. - Henric S. Krawczynski ; Ira Jung ; Jeremy S. Perkins ; Arnold Burger ; Michael Groza (October 21, 20042004). Thick CZT Detectors for Space-Borne X-ray Astronomy, In: Hard X-Ray and Gamma-Ray Detector Physics VI, 1. 5540. Denver, Colorado USA: The International Society for Optical Engineering. pp. 13. doi:10.1117/12.558912. http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=849814. Retrieved 2013-05-20. - (May 8, 2013) "Stopping power (particle radiation)". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-24. - (March 17, 2013) "Neutron detection". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-22. - (May 18, 2013) "Radiography". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-22. - (August 22, 2012) "3D single-object recognition". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-22. - (April 5, 2013) "Outline of object recognition". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-22. - (May 15, 2013) "Detector (radio)". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-25. - (May 30, 2012) "detector". Wiktionary. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-19. - (April 22. 2012) "Particle detector". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-19. - (June 12, 2012) "sensor". Wiktionary. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-19. - (May 4, 2013) "Sense". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-17. - (May 17, 2013) "Attitude control". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-17. - Hemispherical Resonator Gyros, Northrop Grumman Corp. - (May 6, 2013) "Xenon arc lamp". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-22. - (March 31, 2013) "Continuous spectrum". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-22. - (May 16, 2013) "Compton scattering". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-22. - B. A. Dolgoshein, V. N. Lebedenko & B. I. Rodionov, "New method of registration of ionizing-particle tracks in condensed matter", JETP Lett. 11(11): 351 (1970) - (May 10, 2013) "ZEPLIN-III". Wikipedia. 
San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-22. - (March 13, 2013) "Absorption apectroscopy". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-22. - (May 11, 2013) "Band gap". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-23. - (May 20, 2013) "Electronic band structure". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-23. - C. D. Motchenbacher, J. A. Connelly (1993). Low-noise electronic system design. Wiley Interscience. - L. B. Kish, C. G. Granqvist (November 2000). "Noise in nanotechnology". Microelectronics Reliability 40 (11): 1833–37. Elsevier. doi:10.1016/S0026-2714(00)00063-9. - (March 12, 2013) "Noise (electronics)". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-24. - (May 15, 2013) "Voltage spike". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-24. - Transient Protection, LearnEMC Online Tutorial. http://www.learnemc.com/tutorials/Transient_Protection/t-protect.html - Dennis Overbye (2009-07-24). "Hubble Takes Snapshot of Jupiter’s ‘Black Eye’". New York Times. Retrieved 2009-07-25. - (June 10, 2012) "Pierre Auger Observatory". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-19. - (April 15, 2013) "Cloud chamber". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-17. - Samuel Ting, Manuel Aguilar-Benitez, Silvie Rosier, Roberto Battiston, Shih-Chang Lee, Stefan Schael, and Martin Pohl (April 13, 2013). "Alpha Magnetic Spectrometer - 02 (AMS-02)". Washington, DC USA: NASA. Retrieved 2013-05-17. - (June 6, 2012) "Neutron detector". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-19. - Tsoulfanidis, Nicholas (1995 (2nd Edition)). Measurement and Detection of Radiation. Washington, D.C.: Taylor & Francis. pp. 467–501. ISBN 1-56032-317-5. - (April 6. 2013) "Alpha particle X-ray spectrometer". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-17. - Alan Owens, A. Peacock (September 2004). "Compound semiconductor radiation detectors". Nuclear Instruments and Methods in Physical Research A 531 (1-2): 18-37. doi:10.1016/j.nima.2004.05.071. Retrieved on 2013-05-24. - (August 7, 2012) "Galileo (spacecraft)". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-08-11. - Donald J. Williams (May 14, 2012). "Energetic Particles Detector (EPD)". Greenbelt, Maryland USA: NASA Goddard Space Flight Center. Retrieved 2012-08-11. - Ian Sample (23 January 2011). "The hunt for neutrinos in the Antarctic". The Guardian. Retrieved 2011-06-16. - (May 23, 2012) "Neutrino detector". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-19. - Francis Halzen, Todor Stanev, Gaurang B. Yodh (April 1, 1997). "γ ray astronomy with muons". Physical Review D Particles, Fields, Gravitation, and Cosmology 55 (7): 4475-9. doi:10.1103/PhysRevD.55.4475. Bibcode: 1997PhRvD..55.4475H. Retrieved on 2013-01-18. - (February 26, 2013) "Semiconductor detector". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-17. - (April 9, 2013) "X-ray detector". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-17. - D.J. Sahnow, et al. (1995-07-03). "The Far Ultraviolet Spectroscopic Explorer Mission". JHU.edu. Retrieved 2007-09-07. 
- (March 9, 2012) "Far Ultraviolet Spectroscopic Explorer". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-26. - (April 25, 2013) "LYRA". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-24. - (May 14, 2013) "Photomultiplier". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-24. - (February 28, 2013) "Transition radiation". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-20. - (March 7, 2013) "Transition radiation detector". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-20. - Claire A. Johnson, Alicia Cohn, Tiffany Kaspar, Scott A. Chambers, G. Mackay Salley, and Daniel R. Gamelin (September 6, 2011). "Visible-light photoconductivity of Zn1-xCoxO and its dependence on Co2+ concentration". Physical Review B 84 (12): 8. doi:10.1103/PhysRevB.84.125203. Retrieved on 2013-05-24. - M. Benesh and F. Jepsen (August 6, 1984). "SP-474 Voyager 1 and 2 Atlas of Six Saturnian Satellites Appendix A The Voyager Mission". Washington, DC USA: NASA. Retrieved 2013-04-01. - J. B. Oke and J. E. Gunn (June 1982). "An Efficient Low Resolution and Moderate Resolution Spectrograph for the Hale Telescope". Publications of the Astronomical Society of the Pacific 94 (06): 586-94. doi:10.1086/131027. Bibcode: 1982PASP...94..586O. Retrieved on 2013-05-24. - J. L. A. Fordham, D. A. Bone, M. K. Oldfield, J. G. Bellis, and T. J. Norton (December 1992). The MIC photon counting detector, In: Proceedings of an ESA Symposium on Photon Detectors for Space Instrumentation. European Space Agency. pp. 103-6. Bibcode: 1992ESASP.356..103F. - Jeffrey McClintock (December 1997). Black Hole A0620-00 and Advection-Dominated Accretion, In: HST Proposal ID #7393. Baltimore, Maryland USA: STSci. Bibcode: 1997hst..prop.7393M. - John Krist and Richard Hook (June 2004). "The Tiny Tim User’s Guide, Version 6.3". Space Telescope Science Institute. Retrieved 2013-01-22. Unknown parameter - A. S. Wilson, J. A. Braatz, T. M. Heckman, J. H. Krolik, and G. K. Miley (December 20, 1993). "The Ionization Cones in the Seyfert Galaxy NGC 5728". The Astrophysical Journal Letters 419 (12): L61-4. doi:10.1086/187137. Bibcode: 1993ApJ...419L..61W. Retrieved on 2013-01-21. - (January 21, 2013) "Hubble Space Telescope". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-01-22. - (January 15, 2013) "Wide Field Camera 3". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-01-22. - "filters - popular and hot telescope filters". Lumicon international. Retrieved 2010-11-22. - "Astronomical filter". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-01-25. - (April 8, 2012) "Infrared detector". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-19. - Arline M. Melo, Mariano A. Kornberg, Pierre Kaufmann, Maria H. Piazzetta, Emílio C. Bortolucci, Maria B. Zakia, Otto H. Bauer, Albrecht Poglitsch, and Alexandre M. P. Alves da Silva (Nov 2008). "Metal mesh resonant filters for terahertz frequencies". Applied Optics 47 (32): 6064. doi:10.1364/AO.47.006064. PMID 19002231. Bibcode: 2008ApOpt..47.6064M. - Ade, Peter A. R.; Pisano, Giampaolo; Tucker, Carole; Weaver, Samuel (Jul 2006). "A Review of Metal Mesh Filters". Millimeter and Submillimeter Detectors and Instrumentation for Astronomy III. Proceedings of the SPIE. 6275: 62750U. - D. W. Porterfield, J. 
L. Hesler, R. Densing, E. R. Mueller, T. W. Crowe, and R. M. Weikle II (Sep 1994). "Resonant metal-mesh bandpass filters for the far infrared". Applied Optics 33 (25): 6046. doi:10.1364/AO.33.006046. PMID 20936018. Bibcode: 1994ApOpt..33.6046P. - Giampaolo Pisano, Giorgio Savini, Peter A. R. Ade, and Vic Haynes (2008). "Metal-mesh achromatic half-wave plate for use at submillimeter wavelengths". Applied Optics 47 (33): 6251–6256. doi:10.1364/AO.47.006251. PMID 19023391. Bibcode: 2008ApOpt..47.6251P. - (February 20, 2013) "Metal-mesh optical filters". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-25. - (April 22, 2013) "Coherer". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-25. - (May 14, 2013) "Cherenkov detector". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-17. - (April 15, 2013) "Ring-imaging Cherekov detector". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-17. - A.Augusto Alves Jr. et al (2008). "The LHCb Detector at the LHC". JINST 3 S08005. - M.Adinolfi et al (2012). "Performance of the LHCb RICH detector at the LHC". http://arxiv.org/abs/arXiv:1211.6759. - K. L. Brown, G. W. Tautfest (September 1956). "Faraday-Cup Monitors for High-Energy Electron Beams" (PDF). Review of Scientific Instruments 27 (9): 696–702. doi:10.1063/1.1715674. Bibcode: 1956RScI...27..696B. Retrieved on 2007-09-13. - (May 16, 2013) "Faraday cup". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-25. - (May 17, 2013) "Gas detector". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-26. - Gustavo Alverio. "A Nanoparticle-based Hydrogen Microsensor". University of Central Florida. Retrieved 2008-10-21. - D.R. Baselt. "Design and performance of a microcantilever-based hydrogen sensor". Sensors and Actuators B. Retrieved on 2013-02-26. - Sumio Okuyama. "Hydrogen Gas Sensing Using a Pd-Coated Cantilever". Japanese Journal of Applied Physics. Retrieved on 2013-02-26. - Jonas Henriksson. "Ultra-low power hydrogen sensing based on a palladium-coated nanomechanical beam resonator". Nanoscale Journal. Retrieved on 2013-02-26. - (March 16, 2013) "Hydrogen sensor". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-26. - (May 25, 2013) "Liquid". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-26. - (October 18, 2012) "Geophysics". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-11-16. - JHU/APL (January 30, 2008). "Mercury Shows Its True Colors". Baltimore, Maryland USA: JHU/APL. Retrieved 2013-04-01. - Steven J. Ostro (October-December 1993). "Planetary radar astronomy". Reviews of Modern Physics 65 (4): 1235-79. doi:10.1103/RevModPhys.65.1235. Retrieved on 2012-02-09. - (March 20, 2013) "X-ray fluorescence". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-12. - P. Sonnentrucker, D. A. Neufeld, T. G. Phillips, M. Gerin, D. C. Lis, M. De Luca, J. R. Goicoechea, J. H. Black, T. A. Bell, F. Boulanger, J. Cernicharo, A. Coutens, E. Dartois, M . Kaźmierczak, P. Encrenaz, E. Falgarone, T. R. Geballe, T. Giesen, B. Godard, P. F. Goldsmith, C. Gry, H. Gupta, P. Hennebelle, E. Herbst, P. Hily-Blant, C. Joblin, R. Kołos, J. Krełowski, J. Martín-Pintado, K. M. Menten, R. Monje, B. Mookerjea, J. Pearson, M. Perault, C. M. Persson, R. Plume, M. Salez, S. 
Schlemmer, M. Schmidt, J. Stutzki, D.Teyssier, C. Vastel, S. Yu, E. Caux, R. Güsten, W. A. Hatch, T. Klein, I. Mehdi, P. Morris and J. S. Ward (October 1, 2010). "Detection of hydrogen fluoride absorption in diffuse molecular clouds with Herschel/HIFI: a ubiquitous tracer of molecular gas". Astronomy & Astrophysics 521: 5. doi:10.1051/0004-6361/201015082. Retrieved on 2013-01-17. - M. A. Gurwell, E. A. Bergin, G. J. Melnick, M. L. N. Ashby, G. Chin, N. R. Erickson, P. F. Goldsmith, M. Harwit, J. E. Howe, S. C. Kleiner, D. G. Koch, D. A. Neufeld, B. M. Patten, R. Plume, R. Schieder, R. L. Snell, J. R. Stauffer, V. Tolls, Z. Wang, G. Winnewisser, and Y. F. Zhang (August 20, 2000). "Submillimeter Wave Astronomy Satellite Observations of the Martian Atmosphere: Temperature and Vertical Distribution of Water Vapor". The Astrophysical Journal 539 (2): L143-6. Retrieved on 2012-08-04. - (May 13, 2013) "Metal detector". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-17. - (October 31, 2012) "Big Bear Solar Observatory". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-11-06. - "Big Bear Solar Observatory - Causeway". Big Bear Solar Observatory. Retrieved 2012-01-15. - R. Abbasi et al. (IceCube Collaboration) (2010). "Calibration and Characterization of the IceCube Photomultiplier Tube". Nuclear Instruments and Methods A 618: 139–152. doi:10.1016/j.nima.2010.03.102. Bibcode: 2010NIMPA.618..139A. - R. Abbasi et al. (IceCube Collaboration) (2009). "The IceCube Data Acquisition System: Signal Capture, Digitization, and Timestamping". Nuclear Instruments and Methods A 601: 294–316. doi:10.1016/j.nima.2009.01.001. Bibcode: 2009NIMPA.601..294T. - (August 10, 2012) "IceCube Neutrino Observatory". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-08-23. - (July 4, 2012) "ANTARES (telescope)". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-08-23. - Heasarc (June 26, 2003). "OSO-5". Greenbelt, Maryland USA: NASA GSFC. Retrieved 2013-05-18. - Niels Bohr (1985), "Rydberg's discovery of the spectral laws", in J. Kalckar, N. Bohr: Collected Works, 10, North-Holland Publ. - (May 2, 2012) "Hydrogen spectral series". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-05-14. - (April 22, 2013) "Superluminal motion". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-26. - "Star Camera". NASA. 05/04. Archived from the original on July 21, 2011. Retrieved 25 May 2012. - Albert C. Thompson. X-Ray Data Booklet, Section 4-5 X-ray detectors. http://xdb.lbl.gov/Section4/Sec_4-5.pdf. - Leo, W. R. (1994). “Techniques for Nuclear and particle Physics Experiments”, 2nd edition, Springer, ISBN 354057280 - (June 7, 2012) "Scintillator". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2012-06-19. - (April 7, 2013) "Scintillation counter". Wikipedia. San Francisco, California: Wikimedia Foundation, Inc. Retrieved on 2013-05-17. - Yousuke, I.; Daiki, S.; Hirohiko, K.; Nobuhiro, S.; Kenji, I. (2000). "Deterioration of pulse-shape discrimination in liquid organic scintillator at high energies". Nuclear Science Symposium Conference Record, Volume: 1 1: 6/219–6/221 vol.1. IEEE. doi:10.1109/NSSMIC.2000.949173. - Kawaguchi, N.; Yanagida, T.; Yokota, Y.; Watanabe, K.; Kamada, K.; Fukuda, K.; Suyama, T.; Yoshikawa, A. (2009). 
"Study of crystal growth and scintillation properties as a neutron detector of 2-inch diameter eu doped LiCaAlF6 single crystal". Nuclear Science Symposium Conference Record (NSS/MIC): 1493–1495. IEEE. doi:10.1109/NSSMIC.2009.5402299. - Example crystal scintillator based neutron monitor. - Bollinger, L.M.; Thomas, G.E.; Ginther, R.J. (1962). "Neutron Detection With Glass Scintillators". Nuclear Instruments and Methods 17: 97–116. - Miyanaga, N.; Ohba, N.; Fujimoto, K. (1997). "Fiber scintillator/streak camera detector for burn history measurement in inertial confinement fusion experiment". Review of Scientific Instruments 68 (1): 621–623. doi:10.1063/1.1147667. Bibcode: 1997RScI...68..621M. - G. F. Knoll (1999). Radiation Detection and Measurement, 3rd edition. Wiley. ISBN 978-0471073383. - Mireshghi, A.; Cho, G.; Drewery, J.S.; Hong, W.S.; Jing, T.; Lee, H.; Kaplan, S.N.; Perez-Mendez, V. (1994). "High efficiency neutron sensitive amorphous silicon pixel detectors". Nuclear Science 41 (4 , Part: 1–2): 915–921. IEEE. doi:10.1109/23.322831. Bibcode: 1994ITNS...41..915M. - C. Grupen (June 28-July 10 1999). Physics of Particle Detection. 536. Istanbul: Dordrecht, D. Reidel Publishing Co.. pp. 3–34. doi:10.1063/1.1361756. - "Radiation detectors". H. M. Stone Productions, Schloat. Tarrytown, N.Y., Prentice-Hall Media, 1972. - African Journals Online - Bing Advanced search - Google Books - Google scholar Advanced Scholar Search - How to Build a Cloud Chamber - International Astronomical Union - Lycos search - NASA/IPAC Extragalactic Database - NED - NASA's National Space Science Data Center - NCBI All Databases Search - Office of Scientific & Technical Information - PubChem Public Chemical Database - Questia - The Online Library of Books and Journals - SAGE journals online - The SAO/NASA Astrophysics Data System - Scirus for scientific information only advanced search - SDSS Quick Look tool: SkyServer - SIMBAD Astronomical Database - Spacecraft Query at NASA. - Taylor & Francis Online - WikiDoc The Living Textbook of Medicine - Wiley Online Library Advanced Search - Yahoo Advanced Web Search Learn more about Radiation detectors
http://en.wikiversity.org/wiki/Radiation_detectors
Art, baseball cards, coins, comic books, dolls, jewelry and stamps are just a few examples of the many things people collect. While some people collect for fun, others hope to profit. In this lesson, students explore how supply and demand influence the price of collectibles. They also evaluate speculation in collectibles as an investment option. They learn that collectibles are one of the riskiest ways people can invest their money. This lesson explores different types of inflation and terms associated with this economic concept. You may have heard relatives talk about the good old days when a dollar would buy something. What happened to that dollar? Why won't it buy as much as it did last month or last year? What happened is inflation. In this lesson you will examine the various causes and theories of inflation as well as how it affects different groups in the economy, such as savers, lenders, and people living on fixed incomes. Students learn about price-gouging. Using a hypothetical post-disaster example, they will learn more about supply and demand, as well as the complexities associated with price increases in a supply-constrained market. The following lessons come from the Council for Economic Education's library of publications. Clicking the publication title or image will take you to the Council for Economic Education Store for more detailed information. Created specifically for high school mathematics teachers, this publication shows how mathematics concepts and knowledge can be used to develop economic and personal financial understandings. 7 out of 15 lessons from this publication relate to this EconEdLink lesson. This publication contains complete instructions for teaching the lessons in Capstone. When combined with a textbook, Capstone provides activities for a complete high school economics course. 45 exemplary lessons help students learn to apply economic reasoning to a wide range of real-world subjects. 7 out of 45 lessons from this publication relate to this EconEdLink lesson. This publication contains 23 lessons that introduce high school students to the world of investing: its benefits and risks and the critical role it plays in fostering capital formation and job creation in our free market system. 4 out of 23 lessons from this publication relate to this EconEdLink lesson.
http://www.econedlink.org/economic-standards/EconEdLink-related-publications.php?lid=132
Unit Two: Studying Africa through the Social Studies Module 7B: African History, the Era of Global Encroachment Activity Three: The Practice and Legacy of Colonialism There is a general consensus among African historians that colonialism is morally wrong. It is not difficult to understand this conclusion! Colonialism, after all, is a political system in which an external nation takes complete control of a territory in another area of the world. Moreover, the colonized people do not invite the colonial power, nor do they have any say in how they are governed. Colonialism is by definition and practice un-democratic! In spite of the universal recognition that colonialism is morally reprehensible, there are differing opinions on the social, economic, and political consequences of colonialism. Since colonialism was practiced differently throughout Africa, the consequences of colonial rule will differ from colony to colony. In this section, we will look briefly at some of the general outcomes of colonialism in Africa. These outcomes are addressed in more detail in Module Six: The Geography of Africa, Module Eight: Culture and Society in Africa, Module Nine: African Economies, and Module Ten: African Politics and Government. Political Practice and Legacy You learned in the last activity that there were four different forms of colonial rule practiced in Africa: Company, Direct, Indirect, and Settler. The practice of governing was somewhat different depending on the form of colonialism. In spite of these differences, all colonial governments shared certain attributes. 1. Colonial political systems were un-democratic. No matter what form colonial rule took, all colonial systems were un-democratic. Colonial governments did not allow popular participation. Decisions and policies were made with little or no input from the African peoples. Even in the case where decisions or policies may have benefited some people, they were still un-democratic since there were no mechanisms for the people to officially express their opinions. 2. Law and Order ("Peace") was a primary objective of colonial governments. As you learned above, colonial rule was most often imposed without consent from the African people. Understandably, people were not happy with being governed without any representation, and colonial governments faced the potential of civil disobedience or outright resistance to their rule. Consequently, the maintenance of "peace" and law and order was a top priority of colonial governments. As a result, in most African colonies, more money was spent on developing and maintaining a police force and an army than was spent on education, housing, and health care combined! 3. Colonial governments lacked capacity. Most colonial governments were not rich. The European colonial powers were not willing to fund the governing of their colonies in Africa fully. Each colony was responsible for raising most of the revenue (money) needed to fund the operations of colonial rule. Module Nine: African Economies details how different colonies attempted to raise revenues. But no matter how rich in resources a colony was, the government lacked the income and revenue necessary to develop a government system able to go beyond maintaining law and order. This meant that colonial governments were not able to provide basic infrastructure, such as roads and communication networks, nor were they able to provide basic social services such as education, health care, and housing. 4.
Colonial governments practiced "divide and rule." Given the lack of capacity and the strong emphasis on law and order, all forms of colonial rule engaged in "divide and rule," by implementing policies that intentionally weakened indigenous power networks and institutions. Module Ten: African Politics and Governments explains how post-colonial ethnic conflicts in many parts of Africa have their roots in the colonial policy of separating language, religious, and ethnic groups, and how these policies often created or exacerbated group differences. Economic Practice and Legacy Two primary factors or agendas influenced colonial economic practice. First, from early in the 19th century, Europeans believed that Africa was rich in natural resources, and one of the reasons for colonialism was the desire to gain control of Africa's rich natural resources. Secondly, as indicated above, European colonial powers did not want to spend their own money to establish and maintain their colonies in Africa. Rather, they insisted that each colony (if at all possible) supply the revenues necessary to govern the colony. As you will learn in Module Nine: African Economies, each individual colonial government in Africa developed economic policies and practices that fit these two agendas. Meeting these two goals of generating wealth for the colonial power in Europe while simultaneously generating revenues for the local colonial rule had a lasting impact on economic practice in Africa. Just as there were a variety of types of colonial rule, there were also different types of colonial economies (detailed in Module Nine: African Economies) in Africa. However, in spite of differences, there were some similarities between all types of colonial economic practice. 1. Emphasis on exploitation of raw materials for export. Colonial regimes concentrated on finding and exploiting the most profitable natural resources in each colony. In mineral-rich colonies, the emphasis was placed on mining. In other territories, the colonial power identified agricultural products suitable for export to Europe. In either case, the emphasis was on developing the resources for export, not for local use or consumption. Profits from the export of mineral and agricultural goods were also sent to Europe. Profits that could have been used to promote social and economic development in the colonies were not available. The small taxes levied on exports went to support colonial rule. 2. High demand for labor. Mining of minerals and the production of crops for export necessitated a ready supply of inexpensive labor. Consequently, colonial governments exerted considerable effort "recruiting" labor for these endeavors. As is detailed in Module Six: Geography of Africa and Module Nine: African Economies, at times colonial governments resorted to policies of forced labor in order to provide adequate labor for mines and plantations. At other times, their tactics were not as harsh, but in almost all situations, Africans labored in poor working conditions, for long hours, with inadequate pay. To improve the pay and working conditions of the laborers would have lessened profits. The demand for labor also resulted in large-scale movements of people from areas that were not involved in colonial production to areas, including new urban areas, where colonial production occurred.
Social Practice and Legacy In most African colonies, given the lack of revenue, very little was done officially to promote social change or social development. However, the colonial experience had a dramatic impact on African societies. Once again, it is important to remember that the colonial impact on Africa was not uniform across the continent. However, some social consequences were experienced in most African colonies. 1. Movement of People. Colonial economic and political practices resulted in the massive movements of people in most African colonies. In some locales, migrations were primarily from one rural area to another. In other places, the migration was from rural areas to urban areas. In either case, these movements resulted in dislocation of peoples that impacted society and culture. Social and cultural beliefs and practices were challenged by these migrations. Long-held practices had to be adapted (and at times were completely abandoned) to fit the new circumstances. In U.S. history, rural to urban migration in the early 20th century had a similar impact on American society and culture. 2. Dislocation of Families. Families were often split up by migration. For example, men recruited to work in mines and on plantations often had to leave their families behind. As a result, women and adolescents were forced to take on new roles and to cope in the absence of their husbands and fathers. Even when families remained unaffected by migration, they underwent considerable stress and change as the result of the colonial experience. Prior to colonialism, the extended family structure was the norm in most African societies. But by the end of the colonial era, the nuclear family was becoming the norm in many African countries. (See discussion in Module Eight: African Societies and Cultures) 3. Urbanization. A number of pre-colonial African societies had towns and small cities. However, even in these societies, most people were engaged in agriculture in rural villages or homesteads. During colonialism, urbanization occurred fairly rapidly in many African colonies. Urban living resulted in changes in economic activities and occupation, and in changes in the way people lived. These changes often challenged existing values, beliefs, and social practices. 4. Religious changes. As you will learn in Module Fourteen: Religion in Africa, there was a significant change in religious belief and practice as a result of colonialism. At the beginning of the colonial era, less than five per cent of the people in Africa identified themselves as Christian. Today, nearly fifty per cent of the people in Africa identify themselves as Christians. Colonial rule provided an environment in which Christianity, in many forms, spread in many parts of Africa. While Islam was widespread in Africa prior to the coming of colonialism, it also benefited from colonialism. British and French colonial officials actively discouraged Christian mission work in Moslem areas. Peace and order established by colonial rule provided an environment in which Islam could consolidate its hold in certain African colonies. However, in spite of these significant changes, many Africans continued to hold to and practice traditional religions. Throughout human history, all societies have practiced a form of "public" education.
Education is the method by which families and societies transfer beliefs, values, and skills between generations. Throughout human history, education has mainly been informal. That is, values and knowledge were learned in informal settings in the home, church, and through work and play. It has only been in the past 200 years that public education has become more formalized, taking place in schools with an added emphasis on literacy and numeracy: reading, writing, and mathematics. Koranic Schools were widespread in the Islamic areas of Africa prior to the coming of colonial rule. Koranic schools focused on learning to read the Koran, the holy book of Islam. The Koran was written in Arabic. Consequently, students learned to read Arabic, and not their local language, at the Koranic schools. However, schools that emphasized literacy and numeracy in African languages were not common. Proponents of colonialism claimed that it was necessary to enlighten and civilize African peoples and societies. Given this concern, you would think that colonial governments would have made a major effort to introduce schools throughout Africa. The truth is that most colonial governments did little to support schools. Most formal schooling in African colonies was a result of the work of missionaries. Missionaries felt that education and schools were essential to their mission. Their primary concern was the conversion of people to Christianity. Missionaries believed that the ability of African peoples to read the Bible in their own language was important to the conversion process. However, most mission societies were not wealthy, and they could not support the number of schools that they really wanted. Consequently, with limited government support, most African children did not go to school during the colonial era. In fact, at the end of colonial rule, no colony could boast that more than half of its children finished elementary school, and far fewer attended secondary school. However, in spite of the lack of support for public education, schooling had a dramatic impact on children who were fortunate enough to attend school. Indeed, most of the leaders of Africa's independence movements (see next section), and the leaders of post-independence African governments and economies, were products of one of the few mission schools or of the even fewer government schools. Your teacher will provide you with a printed table titled Characteristics of Colonialism. The Practice and Legacy of Colonialism: Using information provided in the last two learning activities, fill in appropriate answers in each box of the table. Please note that you may have similar answers for different types of colonial rule. Once you have completed this exercise, please put the table in your Exploring Africa Web Journal.
Company Rule
- Political characteristics: minimal government, since the primary interest is profit; little government support for education, health care, and other services; primary emphasis on "law and order" (keeping the peace).
- Economic characteristics: exploitation of natural resources; profits for the company the most important economic goal; alienation (taking away) of land from African peoples; forced labor policies, necessary for profits.
- Social characteristics: no money spent on social services such as education and health care; social/cultural dislocation brought about by the forced movement of people for labor.
Direct Rule
- Political characteristics: practiced primarily by French, Belgian, and Portuguese colonialists; minimal government (lack of revenue); laws created and enforced by European colonial officials, even at the local/rural levels; emphasis on law and order; traditional political authorities such as chiefs removed from power; used "divide and rule" tactics.
- Economic characteristics: exploitation of natural resources for export; minimal taxes on exports so as to maximize profits for European companies; revenues used to support law and order; harsh labor policy to ensure a ready supply of inexpensive labor; limited development of economic infrastructure.
- Social characteristics: little revenue spent on developing social services (schooling, health care, social security); social and cultural dislocation due to economic and labor policies; urbanization; spread of Christianity in non-Islamic areas.
Indirect Rule
- Political characteristics: practiced primarily by the British in West Africa (Ghana, Nigeria, Sierra Leone) and parts of East Africa (Uganda, Tanganyika); minimal government (lack of revenue); laws made by European colonialists, but traditional African leaders (chiefs, headmen) used as intermediaries in local government; emphasis on law and order; used "divide and rule" tactics.
- Economic characteristics: exploitation of natural resources for export; minimal taxes on exports so as to maximize profits for European companies; revenues used to support law and order; harsh labor policy to ensure a ready supply of inexpensive labor; limited development of economic infrastructure.
- Social characteristics: little revenue spent on developing social services (schooling, health care, social security); social and cultural dislocation due to economic and labor policies; urbanization; spread of Christianity in non-Islamic areas.
Settler Rule
- Political characteristics: stronger government system to protect the political rights of settlers; government policy oriented to protect and support the settler population; African populations denied political participation or rights; harsh repression of African political movements; African populations ruled directly by European (often settler) officials; strong emphasis on law and order.
- Economic characteristics: infrastructural support for settler-owned businesses; heavier taxes to support the development of the settler population; harsh labor policies used to guarantee an inexpensive labor force.
- Social characteristics: little revenue spent on developing social services (schooling, health care, social security); social and cultural dislocation due to economic and labor policies; urbanization; spread of Christianity in non-Islamic areas.
Or go to
http://exploringafrica.matrix.msu.edu/students/curriculum/m7b/activity3.php
Understanding how proteins work is a key to unlocking the secrets of life and health. Nothing happens in our bodies without them. As enzymes, proteins catalyze the living cell's chemistry. As hormones, these molecules regulate the body's development, direct our organs' activities, and organize our thoughts. As antibodies, they defend us against infection, but in their mutant forms or as coats on viruses, they help cause diseases such as sickle-cell anemia, cancer, or AIDS. What makes proteins so specific in observed functions are their unique shapes, which can range from ellipsoids to saucers to dumbbells. Each type of protein has a highly specific, three-dimensional (3D) structure that determines its biological activity—that is, its function in each body cell. Each protein is a product of a specific gene, so to understand the function of each of the 80,000 to 100,000 genes in the human genome, it helps to know the shapes and activities of the proteins encoded by each gene. To understand how a cell works, it is crucial to know the 3D structures of its proteins. A protein starts out as a string of amino acids (a combination of any of 20 different ones). The sequence of the amino acids is dictated by the order of the DNA bases in the gene that directs the protein's synthesis. The amino-acid string folds reproducibly to produce the protein's functional 3D shape. It's like bending a flexible wire connecting Ping-Pong balls of different colors to form a 3D complex shape that puts Ping-Pong balls of certain colors close together. The protein's function depends largely on how it is folded to give it a specific 3D form. For example, folding brings together widely separated amino acids to form an active site—the catalytic region of an enzyme that binds with a biochemical substance to cause a specific activity in the body, such as digestion. ORNL uses several technologies to determine the sequence of bases in genes and the structures of proteins, especially in the mouse (which is related genetically to humans). The ORNL-developed lab on a chip, mass spectrometry, and high-speed sequencing robots are being used to determine the order of bases in DNA sequences thought to contain genes. X-ray crystallography and mass spectrometry are used to decipher the structure of mouse proteins, including those involved in inflammation, a characteristic of diseases found in both mice and humans. Another approach at ORNL is to predict protein structure using computer modeling. Predicting Protein Shapes Although the amino-acid sequences of tens of thousands of proteins have been determined, the 3D structures of only about 1500 different proteins are known today. Amino-acid sequencing is a fairly rapid process, whereas determining the 3D structure of a protein is very time consuming and expensive. It can take a year for a crystallographer to determine the structure of a protein. Considerable time and money would be saved if the 3D structure of every protein could be predicted from its amino-acid sequence. Some researchers believe that, by 2005, computer modeling will accurately predict the structures of 75 to 100 unknown protein sequences a day. Then therapeutic drugs to block disease-causing proteins by matching their shapes might be developed more quickly. The Computational Protein Structure Group in the Computational Biosciences Section of ORNL's Life Sciences Division has developed a suite of computational tools for predicting protein structure. 
The group, led by Ying Xu, includes Oakley Crawford, Ralph Einstein, Michael Unseren, Dong Xu, and Ge Zhang. Their computer package, called the Protein Structure Prediction and Evaluation Computer Toolkit (PROSPECT), allows a user to predict the detailed 3D structure of an unknown protein, including its shape and the location of each of its amino acids. Using PROSPECT, the ORNL group has made predictions for all 43 target proteins in an international contest for protein structure predictions, called CASP-3. ORNL placed in the top 5% of about 100 groups worldwide. One approach the group uses is "protein threading," a term suggested by embroidery in which a thread is pulled through a predetermined design. In this case, the thread is a string of amino acids. ORNL scientists computationally superimpose the same amino-acid sequence in 1000 different representative protein structures to determine the structure that is the best fit. They do calculations to determine which structure aligns the amino-acid atoms at their lowest energy level (where the atoms want to be) and in positions where they are compatible with their neighbors. The representative protein structure that best fits a target amino-acid sequence is predicted to be the target's approximate structure. |Using a computer program such as PROSPECT, ORNL researchers can predict the likely three-dimensional structure of a protein from the order of the amino acids in the "target sequence."| "We also use an approach called homology modeling to fine-tune the predicted structure," says Ying Xu. "We computationally ‘tweak' the structure of the new protein by calculating the detailed forces between atoms and making adjustments in the final predicted structure to minimize the atoms' energies." Research groups from the National Institutes of Health, the Department of Energy's Lawrence Berkeley Laboratory, Amgen, and Boston University have expressed interest in using PROSPECT in their research and in collaborations with ORNL to further develop the computer toolkit. By folding their ideas together, the collaborators may soon solve a classic problem. Computing the Genome A team of researchers in Europe spent two years searching for the gene responsible for adrenoleukodystrophy, a disease described in the movie Lorenzo's Oil. The team tried the standard experimental techniques of mapping and sequencing. The researchers fragmented the chromosome believed to harbor the gene, producing ordered pieces of a manageable size. They placed these fragments into high-throughput sequencing machines. They obtained the order of the chemical bases in the entire chromosome. But they still couldn't find the gene. So in 1995 they e-mailed information on the sequence to the Oak Ridge computer containing the ORNL-developed computer program called Gene Recognition and Analysis Internet Link (GRAIL™). Within a couple of minutes, using statistical and pattern-recognition tools, GRAIL™ returned the location of the gene within the sequence. The ability of computing to find patterns in a flood of data gathered through mapping and sequencing is being increasingly appreciated by biologists. In the next four years, a new sequence of approximately 2 million DNA bases will be produced every day. Each day's sequence will represent about 75 to 100 new genes and their respective proteins. This information will be made available immediately on the Internet and in central genome databases. 
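The protein-threading calculation described above — scoring one amino-acid sequence against a library of candidate template structures and keeping the lowest-energy fit — can be sketched in a few lines. The two-state "buried/exposed" environment labels, the pseudo-energy table, and the tiny template library below are invented for illustration; they are not PROSPECT's actual scoring function or fold library.

```python
# Toy illustration of protein threading: score a target sequence against a
# small library of template "structures" and keep the lowest-energy fit.
# The energy model and templates are invented; this is not PROSPECT.

# Residues crudely classed as hydrophobic (H) or polar (P).
HYDROPHOBIC = set("AVLIMFWC")

# Pseudo-energies: hydrophobic residues prefer buried positions,
# polar residues prefer exposed positions (lower is better).
ENERGY = {
    ("H", "buried"): -1.0, ("H", "exposed"): +0.5,
    ("P", "buried"): +0.5, ("P", "exposed"): -1.0,
}

def thread_score(sequence, environments):
    """Sum per-position compatibility energies (no gaps, equal lengths)."""
    total = 0.0
    for residue, env in zip(sequence, environments):
        res_class = "H" if residue in HYDROPHOBIC else "P"
        total += ENERGY[(res_class, env)]
    return total

def best_template(sequence, templates):
    """Return the template whose environment string fits the sequence best."""
    return min(templates, key=lambda name: thread_score(sequence, templates[name]))

# Hypothetical 8-residue target and two made-up templates.
target = "AVKDLEWQ"
templates = {
    "fold_1": ["buried", "buried", "exposed", "exposed",
               "buried", "exposed", "buried", "exposed"],
    "fold_2": ["exposed"] * 8,
}
print(best_template(target, templates))   # picks the lower-energy fold
```

A real threading program also has to handle gaps, insertions, and interactions between neighboring residues, which is where most of the computational cost lies.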
The DNA building blocks of several living organisms—a methane-producing microorganism from deep-sea volcanic vents, an influenza virus, yeast, and the roundworm (C. elegans)—have been completely sequenced. The sequencing of other organisms (e.g., the fruit fly) will be completed soon. The 3 billion links in the human genome chain are expected to be completely sequenced by 2003, and 10,000 of our 80,000 to 100,000 genes will be identified then. Plans call for the order of DNA bases in the mouse genome to be determined by 2005. But a complete set of sequence data for any organism may not be very useful to medical researchers, molecular biologists, and environmental scientists without organized and comprehensive computational analysis. Such comprehensive genome analysis is needed to help researchers understand the basic biology of humans, microbes, plants, and other living organisms. To provide a comprehensive genome-wide analysis of genome sequence data from different organisms and help integrate biological data around a genome-sequence framework, ORNL and a team of researchers at the DOE Joint Genome Institute, Lawrence Berkeley National Laboratory, Baylor College of Medicine, Hospital for Sick Children (Toronto), Johns Hopkins University, Washington University, University of California at Santa Cruz, University of Pennsylvania, and the National Center for Genome Resources have constructed a computational resource that uses GRAIL-EXP, GENSCAN, and a suite of other tools to annotate genome sequences. Annotation is the process of organizing biological information and predictions in a sequenced genome framework (e.g., linking what a gene does to its structure). (See http://compbio.ornl.gov/gac/index.shtml). The team has developed a plan and has built a first prototype of the needed genome analysis framework and toolset. The prototype can do the following:
- Retrieve biological data and assemble genomes;
- Compute genes, proteins, and genome features from sequences and experimental data (e.g., the group that developed PROSPECT is predicting protein structure from amino-acid sequences available on ORNL computers);
- Compute homology and function among genomes, genes, and gene products (e.g., proteins);
- Model the three-dimensional structure of gene products; and
- Link genes and gene products to biological pathways and systems.
"We have made considerable progress in addressing some data management, data storage, and data access issues," says Ed Uberbacher, head of the Computational Biosciences Section in ORNL's Life Sciences Division. "For example, we developed a unique information resource and Web browser called the Genome Channel, which is available on the Internet. It gathers the results from sequencing centers around the world. It provides a fully assembled view of what is known about the human genome and its chromosomes, sequences, and experimentally cloned genes. It also provides information on computationally predicted genes. The Genome Channel is currently being used by the worldwide genome community to identify and predict gene and protein sequences of interest." Producing and Screening for Mouse Mutations Because mice and humans are genetically so similar, biologists can study genetic diseases in mice to better understand similar disorders in humans. DOE considers the mouse to be the most important mammalian model organism, and DOE's Human Genome Project has proposed to devote 10% of its efforts in DNA sequencing to the mouse genome.
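Gene-finding tools such as GRAIL™ and GENSCAN rely on far richer statistics and pattern recognition than can be shown here, but before turning to the mouse work, the basic idea of scanning raw sequence for gene-like signals can be sketched with a simple open-reading-frame search. The minimum-length cutoff and the test fragment below are arbitrary choices for illustration, not anything from the ORNL tools.

```python
# Minimal sketch of one gene-finding signal: locate open reading frames
# (ATG ... stop) in a DNA string. Real annotation tools such as GRAIL(TM)
# combine many statistical features; this is only an illustration.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_codons=3):
    """Return (start, end, frame) for ATG-to-stop spans of at least min_codons."""
    dna = dna.upper()
    orfs = []
    for frame in range(3):                          # three forward reading frames
        i = frame
        while i + 3 <= len(dna):
            if dna[i:i + 3] == "ATG":               # possible start codon
                j = i + 3
                while j + 3 <= len(dna) and dna[j:j + 3] not in STOP_CODONS:
                    j += 3
                if j + 3 <= len(dna) and (j - i) // 3 >= min_codons:
                    orfs.append((i, j + 3, frame))  # include the stop codon
                    i = j                           # resume after this ORF
            i += 3
    return orfs

# Hypothetical fragment with one short ORF in frame 0.
print(find_orfs("ATGGCTAAAGGGTAACCATG", min_codons=3))
```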
ORNL is playing a major role in determining the functions of mouse genes as a part of the Human Genome Project. |Ed Michaud watches the activity of normal and mutant mice in large beakers at the Mammalian Genetics Section research laboratory. | ORNL's Mammalian Genetics Section of the Life Sciences Division, with a large capacity for mouse production and a long history in mouse genetics and mutagenesis, has taken the lead in mouse functional genomics for DOE. The long-term goal of functional genomics at ORNL is to develop and employ the fastest, smartest, cheapest, and most efficient high-throughput methods for generating and analyzing mouse mutations to help discover the functions of all 70,000 to 100,000 mouse genes. Starting in the late 1940s, ORNL researchers led by Bill and Liane Russell developed mutant strains of mice as they studied the genetic effects of radiation and chemical exposures on the animals. Using these experimentally induced mutations as a starting point, current studies are designed to find not only obvious changes in characteristics (phenotypes), such as altered coat color, but also more subtle disease phenotypes caused by a change in or deletion of a gene (genotype). The mutant stocks generated in ORNL's historic program to assess the genetic risks of exposing mammals to radiation make ideal targets for current mutagenesis efforts. For example, some of these mutant stocks contain deletions of a known section of a chromosome; when such a deletion is combined with a chemically induced single-gene mutation in the same section of the paired chromosome, the result is a mouse lacking any normal copies of that single gene. With this approach, the mouse lacking the normal gene reveals the function of the gene through the resulting disease phenotype, and the chromosome deletion serves to identify the approximate physical location of the gene. Eugene Rinchik, one of the staff scientists leading research in this program, focuses on making mutations in mice and then using various techniques to discover the resulting phenotypes. Among the techniques employed, in addition to simple observation of the animals that could carry mutations, are tests for motor ability and behavior, as well as analysis of body fluids and tissues to detect subtle differences. The mice are also scanned in ORNL's newly developed MicroCAT device (see next section) to see internal changes such as fat deposits and enlarged organs. In these ways, mutations that cause, for example, diabetes, obesity, depression, anemia, kidney disease, nervous disorders, or stomach problems may be detected. Once a mutation is confirmed, various molecular mapping techniques are used to isolate the chromosome region and then the actual gene causing the disorder in the mouse. By linking the disorder to the mutated gene, the normal function of the gene can then be deduced. For example, by locating a mutated gene that causes cleft palate in mice, ORNL's Cymbeline Culiat was able to analyze the normal gene that assists the closing of the palate in the developing mouse. One way to make mutations in single genes is to inject male mice with ethylnitrosourea (ENU), a powerful chemical mutagen discovered by ORNL's Bill Russell in 1979. ENU causes the substitution of one chemical base for another in the DNA of male spermatogonial stem cells, which continuously produce mature sperm. When the ENU-treated male mouse is mated with an untreated female mouse, some offspring may have new mutations.
Over the past 10 years ORNL's Eugene Rinchik and Don Carpenter isolated 31 new mutations in more than 4500 pedigrees from one large ENU experiment. In a second ENU experiment focusing on a different section of the mouse genome, they have so far isolated 19 new mutations from 1250 pedigrees tested, have mapped their positions on the target mouse chromosome, and have begun cloning the genes responsible for four of the new mutations. Mouse mutations, then, have historically been made by treating live mice with mutagens and breeding offspring to look for mutations. Now, mutations can also be made very efficiently in a culture dish using special cells from early mouse embryos; these embryonic stem cells have not yet differentiated into specific cell types but retain the potential to become any kind of cell in the mouse. After using molecular techniques to replace a particular normal gene with a mutant one, or to produce a deletion or rearrangement of a whole section of chromosome in the embryonic cell, ORNL's Ed Michaud and his colleagues can use the specifically altered cell to produce a live mouse carrying the desired genetic change. If the new mouse exhibits a mutation, such as epileptic seizures, then the engineered genetic change is assumed to have caused the seizures. The ORNL researchers also have the capability to make different types of mutations in the same gene to see the whole spectrum of functions in which a gene might be involved. Different gene mutations may completely turn the gene off so it produces no protein, lower the quantity of protein the gene produces, or alter the normal structure of the protein, causing a disease or disorder. According to Rinchik, a slightly injured gene resulting in a slightly altered mutant protein may help us understand the origin of a disease, because most human genetic diseases can be tied to a subtle alteration in a gene rather than a complete loss of gene function. ORNL researchers Dabney Johnson, Karen Goss, Jack Schryver, and Gary Sega have developed high-throughput biochemical and behavioral screening tests for the detection of subtle mutations in mice. These tests are routinely performed on 100 mice per week. For example, one test used in screening measures how long mice can maintain balance on a rotating dowel rod in a test for neuromuscular coordination, while another instrument quantifies the startle response to a sudden sound. |Normal mice maintain their balance on the rapidly turning Rotor-Rod. Because mice having certain mutations lack the coordination and balance of normal mice, they can be identified in the Rotor-Rod test because they fall off the rotating rod more quickly. | To increase the breadth and accuracy of screening for mutant mouse phenotypes at what Johnson calls the Screenotype Center, ORNL has organized the Tennessee Mouse Genome Consortium (TMGC). The TMGC taps into the expertise of academic and clinical researchers across the state; membership consists of the University of Tennessee at Knoxville, UT-Memphis, St. Jude Children's Research Hospital, Vanderbilt University, and Meharry Medical College. The TMGC participates both in screening mice for new mutations and in more detailed analysis of confirmed mutations. If, for example, a mutant strain has epileptic seizures, ORNL sends mice or samples from mice to consortium members qualified to determine if the cause is neurochemical or neurophysical and if this mouse is a good model for some form of human epilepsy. 
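Before continuing with the consortium's role in screening, here is a toy illustration of how a quantitative screen like the Rotor-Rod test described above might be analyzed: animals whose balance time falls far below the control mean are flagged for follow-up. The latency values and the two-standard-deviation cutoff are invented; they are not ORNL's actual screening criteria.

```python
# Toy screen: flag mice whose Rotor-Rod latency (seconds before falling)
# is far below the control group's mean. Data and cutoff are invented.
from statistics import mean, stdev

def flag_low_performers(controls, tested, z_cutoff=-2.0):
    """Return IDs whose latency z-score (vs. controls) is below the cutoff."""
    mu, sigma = mean(controls), stdev(controls)
    return [mouse_id for mouse_id, latency in tested.items()
            if (latency - mu) / sigma < z_cutoff]

# Hypothetical latencies, in seconds.
control_latencies = [118, 125, 130, 122, 127, 121, 129, 124]
test_mice = {"m001": 123, "m002": 61, "m003": 119, "m004": 58}

print(flag_low_performers(control_latencies, test_mice))  # ['m002', 'm004']
```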
Currently, consortium members are helping ORNL screen mice for vision and hearing problems, brain and other organ malfunctions, neurotransmitter content in the brains, and the normal production of sperm cells. MicroCAT "Sees" Hidden Disorders in Research Mice A mouse may be able to hide from a cat, but some types of genetic disorders hidden in mice can now be seen by the MicroCAT miniature X-ray computerized tomography (CT) system devised by Mike Paulus, Hamed Sari-Sarraf, and Shaun Gleason, all of the Instrumentation and Controls (I&C) Division. This high-resolution X-ray imaging system, a kind of CT scanner for mice, allows biologists to see a detailed, three-dimensional image of the internal structure of a mouse in just a few minutes. Traditionally, determining if mice carry subtle anatomical disorders has been a slow, labor-intensive, manual process. Now, this new tool greatly cuts the time needed to determine accurately if a mouse has internal malformations not visible upon external inspection. Thus, it may speed the process of finding cures for some human diseases. For example, imaging of specific fat deposits in an anesthetized mouse allows ORNL researchers to track both the accumulation of fat in a mouse that carries mutant genes involved in obesity and the result of dietary or other obesity treatments. The I&C group is writing software to allow the computer to inspect and analyze the images to alert researchers to possible abnormalities of interest. The MicroCAT tool has already attracted the attention of researchers around the country who would like to image their own research animals using the Oak Ridge prototype. Mouse Gene for Stomach Cancer Identified at ORNL In a search for a gene thought to cause some mice to be born deaf, an ORNL researcher determined that the same gene can cause stomach cancer in mice. The discovery could speed up understanding of how both mice and humans get stomach cancer. The research was performed by Cymbeline (Bem) Culiat, a staff molecular biologist with the Mammalian Genetics Section in ORNL's Life Sciences Division, in collaboration with former ORNL researcher Lisa Stubbs, now with DOE's Lawrence Livermore National Laboratory (LLNL). Former ORNL biologist Walderico Generoso had induced the deafness mutation, designated 14Gso, in mice by irradiating male mice with X rays and then mating them with untreated female mice. Unlike normal mice, 14Gso mouse pups were not startled by loud noises, their heads persistently bobbed, and they frequently ran in circles in their cage. These behaviors suggested defects in the inner ear, where hearing and balance are controlled. Studies of the inner ear structures of these mutant mice showed they were too defective to allow sounds to be heard. In an attempt to locate the gene believed responsible for deafness in these mutant mice, Culiat focused on the tips of two of their chromosomes (7 and 10). Through microscope studies of stained chromosomes, ORNL's Nestor Cacheiro found evidence that genes on both tips had been disrupted and their parts exchanged. Culiat began hunting for the deafness gene in the tip of chromosome 7, which is mapped more extensively than chromosome 10 in the mouse. Using various genetic and molecular mapping techniques, Culiat localized the mutated region in chromosome 7 to a DNA segment containing muc2 (intestinal mucin 2), a gene coding for a major protein in the mucus lining of the intestine.
A literature search indicated that one end of the protein produced by the human MUC2 gene is very similar to another protein associated with deafness in humans, thereby making muc2 a candidate gene for the inner ear defects observed in 14Gso mice. |Bem Culiat washes DNA samples of cloned mouse genes isolated and purified from bacterial cultures where multiple copies of the genes are made. | "I checked the expression of this muc2 gene in the deaf mice by measuring their levels of RNA, which carry the gene's instructions for synthesizing protein," she says. "The gene is normally expressed in the intestine and kidney, but I found it was overexpressed in the stomach and lungs and showed a loss of expression in kidneys of the mutant mice. In humans, the overexpression of muc2 in the stomach is associated with chronic gastritis leading to gastric lymphomas and adenocarcinomas. Therefore, we predicted the same defects will occur in the mutant mice." Stomach pathology studies and examination of the gastrointestinal systems of 14Gso mice by Xiaochen Lu, a researcher in Stubbs' LLNL laboratory, showed inflamed stomachs (gastritis), ulcers, and gastric cancer (lymphomas and adenocarcinomas), the same defects found in humans. "This mutant mouse," Culiat says, "is a good mouse model for studying how gastritis progresses to stomach cancer in both mice and humans." So far, examination of the mutant mice has revealed no abnormal expression of muc2 in inner ears. More detailed analysis of this large gene and analysis of the mutated region of mouse chromosome 10 are both needed to confirm or rule out the involvement of muc2 in the inner ear defect of 14Gso mice. "If muc2 turns out to be the deafness gene in our mutant mice," Culiat says, "then we may be able to determine if there are mutations in this gene in certain groups of deaf people." Culiat performed most of this research at ORNL as a postdoctoral scientist working with Stubbs. She was supported by the Alexander Hollaender Postdoctoral Fellowship Program of the Oak Ridge Institute for Science and Education. Certain segments of the gene muc2 have been cloned and sequenced at ORNL. The sequencing and cloning of this very large gene will be completed at LLNL under the direction of Stubbs. The cloning and characterization of the chromosome regions containing the 14Gso mutation are goals of a continuing collaboration between Stubbs and Culiat. By identifying and characterizing genes and proteins using various technologies and mouse experiments, Oak Ridge researchers are finding clues that could lead to cures for human diseases.
http://www.ornl.gov/info/ornlreview/v32_2_99/from.htm
13
222
The Debt Ceiling - The Fiscal Cliff Under Article I Section 8 of the United States Constitution, Congress has the sole power to borrow money on the credit of the United States. From the founding of the United States until 1917 Congress directly authorized each individual debt issuance separately. In order to provide more flexibility to finance the United States' involvement in World War I, Congress modified the method by which it authorizes debt in the Second Liberty Bond Act of 1917. Under this act Congress established an aggregate limit, or "ceiling," on the total amount of bonds that could be issued. The current debt ceiling, in which an aggregate limit is applied to nearly all federal debt, was substantially established by Public Debt Acts passed in 1939 and 1941. The Treasury is authorized to issue debt needed to fund government operations (as authorized by each federal budget) up to a stated debt ceiling, with some small exceptions. The process of setting the debt ceiling is separate and distinct from the regular process of financing government operations, and raising the debt ceiling does not have any direct impact on the budget deficit. The U.S. government proposes a federal budget every year, which must be approved by Congress. This budget details projected tax collections and outlays and, if there is a budget deficit, the amount of borrowing the government would have to do in that fiscal year. A vote to increase the debt ceiling is, therefore, usually treated as a formality, needed to continue spending that has already been approved by the Congress and the President. The Government Accountability Office (GAO) explains: "The debt limit does not control or limit the ability of the federal government to run deficits or incur obligations. Rather, it is a limit on the ability to pay obligations already incurred." The apparent redundancy of the debt ceiling has led to suggestions that it should be abolished altogether. In 1979, the House of Representatives adopted a rule to automatically raise the debt ceiling when passing a budget, without the need for a separate vote on the debt ceiling, except when the House votes to waive or repeal this rule. The exception to the rule was invoked in 1995, which resulted in two government shutdowns. When the debt ceiling is reached, Treasury can declare a debt issuance suspension period and utilize "extraordinary measures" to acquire funds to meet federal obligations that do not require the issue of new debt. Treasury first used these measures on December 16, 2009, to remain within the debt ceiling and avoid a government shutdown, and also used them during the debt-ceiling crisis of 2011. However, there are limits to how much can be raised by these measures. The debt ceiling was increased on February 12, 2010, to $14.294 trillion. On April 15, 2011, Congress finally passed the 2011 United States federal budget, authorizing federal government spending for the remainder of the 2011 fiscal year, which ends on September 30, 2011, with a deficit of $1.48 trillion, without voting to increase the debt ceiling. The two Houses of Congress were unable to agree on a revision of the debt ceiling in mid-2011, resulting in the United States debt-ceiling crisis. The impasse was resolved with the passage, on August 2, 2011, the deadline for a default by the U.S.
government on its debt, of the Budget Control Act of 2011, which immediately increased the debt ceiling to $14.694 trillion, required a vote on a Balanced Budget Amendment, and established several complex mechanisms to further increase the debt ceiling and reduce federal spending. On September 8, 2011, one of the complex mechanisms to further increase the debt ceiling took place as the Senate defeated a resolution to block a $500 billion automatic increase. The Senate's action allowed the debt ceiling to increase to $15.194 trillion, as agreed upon in the Budget Control Act. This was the third increase in the debt ceiling in 19 months, the fifth increase since President Obama took office, and the twelfth increase in 10 years. The August 2 Act also created the United States Congress Joint Select Committee on Deficit Reduction for the purpose of developing a set of proposals by November 23, 2011, to reduce federal spending by $1.2 trillion. The Act requires both houses of Congress to convene an "up-or-down" vote on the proposals as a whole by December 23, 2011. The Joint Select Committee met for the first time on September 8, 2011. The debt ceiling was raised once more on January 30, 2012, to a new high of $16.394 trillion. At midnight on Dec. 31, 2012, a major provision of the Budget Control Act of 2011 (BCA) is scheduled to go into effect. The crucial part of the Act provided for a Joint Select Committee of Congressional Democrats and Republicans — the so-called 'Supercommittee' — to produce bipartisan legislation by late November 2011 that would decrease the U.S. deficit by $1.2 trillion over the next 10 years. To force action, the law provided that if no other deal was reached before Dec. 31, massive government spending cuts would take effect, along with tax increases resulting from a return to tax levels of previous years. These are the elements that make up the 'fiscal cliff.' Source for the above: Wikipedia - The Budget and Economic Outlook: Fiscal Years 2010 to 2020, 01/2010 [181 Pages, 1.9MB] - The Congressional Budget Office (CBO) projects that if current laws and policies remained unchanged, the federal budget would show a deficit of about $1.3 trillion for fiscal year 2010 (see Summary Table 1). At 9.2 percent of gross domestic product (GDP), that deficit would be slightly smaller than the shortfall of 9.9 percent of GDP ($1.4 trillion) posted in 2009. Last year's deficit was the largest as a share of GDP since the end of World War II, and the deficit expected for 2010 would be the second largest. Moreover, if legislation is enacted in the next several months that either boosts spending or reduces revenues, the 2010 deficit could equal or exceed last year's shortfall. The large 2009 and 2010 deficits reflect a combination of factors: an imbalance between revenues and spending that predates the recession and turmoil in financial markets, sharply lower revenues and elevated spending associated with those economic conditions, and the costs of various federal policies implemented in response to those conditions. - The Budget and Economic Outlook: Fiscal Years 2008 to 2017, 01/2007 [194 Pages, 2.2MB] - If current laws and policies remained the same, the budget deficit would equal roughly 1 percent of gross domestic product (GDP) each fiscal year from 2007 to 2010, the Congressional Budget Office (CBO) projects. Those deficits would be smaller than last year's budgetary shortfall, which equaled 1.9 percent of GDP (see Summary Table 1).
Under the assumptions that govern CBO's baseline projections, the budget would essentially be balanced in 2011 and then would show surpluses of about 1 percent of GDP each year through 2017 (the end of the current 10-year projection period). The favorable outlook suggested by those 10-year projections, however, does not indicate a substantial change in the nation's long-term budgetary challenges. The aging of the population and continuing increases in health care costs are expected to put considerable pressure on the budget in coming decades. Economic growth alone is unlikely to be sufficient to alleviate that pressure as Medicare, Medicaid, and (to a lesser extent) Social Security require ever greater resources under current law. Either a substantial reduction in the growth of spending, a significant increase in tax revenues relative to the size of the economy, or some combination of spending and revenue changes will be necessary to promote the nation's long-term fiscal stability. CBO's baseline budget projections for the next 10 years, moreover, are not a forecast of future outcomes; rather, they are a benchmark that lawmakers and others can use to assess the potential impact of future policy decisions. The deficits and surpluses in the current baseline are predicated on two key projections. - The Budget and Economic Outlook: Fiscal Years 2006 to 2015, 01/2005 [179 Pages, 1.9MB] - This volume is one of a series of reports on the state of the budget and the economy that the Congressional Budget Office (CBO) issues each year. It satisfies the requirement of section 202(e) of the Congressional Budget Act of 1974 for CBO to submit to the Committees on the Budget periodic reports about fiscal policy and to provide baseline projections of the federal budget. In accordance with CBO's mandate to provide impartial analysis, the report makes no recommendations. Chapter 1, The Budget Outlook, provides a review of 2004 followed by discussions on The Concept Behind CBO's Baseline Projections, Uncertainty and Budget Projections, The Long-Term Outlook, Changes to the Budget Outlook Since September 2004, The Outlook for Federal Debt, and Trust Funds and the Budget. Chapter 2, The Economic Outlook, presents an Overview of CBO's Two-Year Forecast followed by discussions of The Importance of Productivity Growth for Economic and Budget Projections, The Outlook for 2005 and 2006, The Economic Outlook through 2015, Taxable Income, Changes in CBO's Outlook Since September 2004, and A Comparison of Forecasts. Chapter 3, The Spending Outlook, focuses on Mandatory Spending, Discretionary Spending, and Net Interest. Chapter 4, The Revenue Outlook, examines Revenues by Source, Revenue Projections in Detail, Uncertainty in the Revenue Baseline, Revisions to CBO's September 2004 Revenue Projections, and The Effects of Expiring Tax Provisions. Appendixes A through F focus on the following: How Changes in Economic Assumptions Can Affect Budget Projections, The Treatment of Federal Receipts and Expenditures in the National Income and Product Accounts, Budget Resolution Targets and Actual Outcomes, Forecasting Employers' Contributions to Defined-Benefit Pensions and Health Insurance, CBO's Economic Projections for 2005 to 2015, Historical Budget Data, and Contributors to the Revenue and Spending Projections. A glossary completes the report. - The Debt Limit: History and Recent Increases, 09/08/2010 [26 Pages, 400kb] - Total debt of the federal government can increase in two ways.
First, debt increases when the government sells debt to the public to finance budget deficits and acquire the financial resources needed to meet its obligations. This increases debt held by the public. Second, debt increases when the federal government issues debt to certain government accounts, such as the Social Security, Medicare, and Transportation trust funds, in exchange for their reported surpluses. This increases debt held by government accounts. The sum of debt held by the public and debt held by government accounts is the total federal debt. Surpluses generally reduce debt held by the public, while deficits raise it. On September 3, 2010, total federal debt outstanding was $13.435 trillion. - Economic Renewal: A Grand Strategy for the United States, 03/24/2010 [34 Pages, 389kb] - The nation's continuing deficits and increasing debt will lead to its declining economic strength if not checked. Economic power is the foundation for the other elements of national power so economic problems degrade military power, erode America's international image, and potentially may lead to declining faith in democratization efforts abroad as developing nations find free market capitalism less attractive. The U.S. should adopt a grand strategy of "economic renewal" to maintain its economic power. By taking steps to reduce its debt and leading an international effort to replace the dollar as the global currency, the United States can focus on rebuilding its economic power and maintaining its role as a global leader. Supporting military, diplomatic, and informational strategies will ensure the world sees these changes as the actions of a global power leading visionary change instead of a declining power trying to hold onto a fading empire. Changes led by the U.S. are essential for this country to maintain its power as well as to shine as a beacon of free market and democratic principles around the world. - The Federal Budget: Current and Upcoming Issues, 12/10/2008 [22 Pages, 550kb] - The federal budget implements Congress's "power of the purse" by expressing funding priorities through outlay allocations and revenue collections. Over the past decade, federal spending has accounted for approximately a fifth of the economy (as measured by GDP) and federal revenues have ranged between just over a fifth and just under a sixth of GDP. In FY2008, the U.S. Government collected $2.5 trillion in revenue and spent almost $3.0 trillion. Outlays as a proportion of GDP rose from 18.4% in FY2000 to 20.9% of GDP in FY2008. Federal revenues as a proportion of GDP reached a post-WWII peak of 20.9% in FY2000 and then fell to 16.3% of GDP in FY2004 before rising slightly to 17.7% of GDP in FY2008. The budget also affects, and is affected by, the national economy as a whole. Given recent turmoil in the economy and financial markets, the current economic climate poses a major challenge to policy makers shaping the FY2009 and FY2010 federal budgets. Federal spending tied to means-tested social programs has been increasing due to rising unemployment, while federal revenues will likely fall as individuals' incomes drop and corporate profits sink. As a result, federal deficits over the next few years will likely be high relative to historic norms. In addition to funding existing programs in a challenging economic climate, the government has undertaken significant financial interventions in an attempt to alleviate economic recession. 
The ultimate costs of federal responses to this turmoil will depend on how quickly the economy recovers, how well firms with federal credit guarantees weather future financial shocks, and whether or not the government receives positive returns on its asset purchases. Estimating how much these responses will cost is difficult, for both conceptual and operational reasons. Despite these budgetary challenges, many economists believe that fiscal policy would be the most effective macroeconomic tool under current conditions. - Federal Debt and Interest Costs, 05/2003 [106 Pages, 7.4MB] - The federal debt has grown rapidly in the past decade, and this trend is projected to continue. Interest costs have grown commensurately and now account for about one of every seven dollars spent by the government. In response to a request from the House Committee on Ways and Means, this study provides background material on federal debt and interest costs--their components, their sensitivity to assumptions about future deficits and interest rates, and the choices that the Treasury faces in deciding the mix of securities it will offer. - The FY2011 Federal Budget, 03/09/2010 [24 Pages, 300kb] - While considering the FY2011 budget, Congress faces very large budget deficits, rising costs of entitlement programs, and significant spending on overseas military operations. In FY2008 and FY2009, the enactment of financial intervention and fiscal stimulus legislation helped to bolster the economy, though it increased the deficit. While GDP growth has returned in recent quarters, unemployment remains elevated and government spending on “automatic stabilizer” programs, such as unemployment insurance and income support, remains higher than historical averages. - Reaching the Debt Limit: Background and Potential Effects on Government Operations, 02/11/2011 [23 Pages, 307kb] - The gross federal debt, which represents the federal government's total outstanding debt, consists of two types of debt: (1) debt held by the public and (2) debt held in government accounts, also known as intragovernmental debt. Federal government borrowing increases for two primary reasons: (1) budget deficits and (2) investments of any federal government account surpluses in Treasury securities, as required by law. Nearly all of this debt is subject to the statutory limit. The federal debt limit currently stands at $14,294 billion. Following current policy, Treasury has estimated that the debt limit will be reached in spring 2011. Treasury has yet to face a situation in which it was unable to pay its obligations as a result of reaching the debt limit. In the past, the debt limit has always been raised before the debt reached the limit. However, on several occasions Treasury took extraordinary actions to avoid reaching the limit and, as a result, affected the operations of certain programs. If the Secretary of the Treasury determines that the issuance of obligations of the United States may not be made without exceeding the public debt limit, a debt issuance suspension period can be authorized. This gives Treasury the authority to utilize nontraditional methods to finance obligations. - Wall Street and the Pentagon: Defense Industry Access to Capital Markets, 1990 - 2010, 11/2011 [23 Pages, 932kb] - Defense firms rely in part on cash raised from capital markets to finance ongoing operations as well as new investments in long-term assets, independent research and development, and retirement of maturing debt.
The ability to access capital markets shapes the depth and breadth of the U.S. defense industry, the capabilities it can offer, and the cost of these capabilities to the Department of Defense. Given the monolithic nature of the defense market, it is paramount that decisionmakers understand the relationship between defense spending and the financial metrics that drive access to - and cost of - capital for defense firms. This paper presents the data and findings of research conducted by the Defense-Industrial Initiatives Group at the Center for Strategic and International Studies (CSIS) on defense companies' access to capital markets during the period 1990-2010. The analysis shows that for the universe of defense equities analyzed, there exists a positive relationship between defense spending, companies' financial health, and the industry's relative market valuation. However, no evidence was found to suggest that these firms encountered difficulties accessing capital markets either during a period of market contraction (1990-2001) or during the recent budget buildup (2002-2010).
http://www.theblackvault.com/m/articles/view/The-Debt-Ceiling-The-Fiscal-Cliff
13
21
The Taisho period (大正 lit. Great Righteousness, 1912 - 1926) is a period in the History of Japan. It is considered the time of the liberal movement known as the "Taisho democracy" in Japan; it is usually distinguished from the preceding chaotic Meiji Era and the following militarism-driven Showa Era. Key Historical Events On July 30, 1912, the Meiji emperor died and Crown Prince Yoshihito succeeded to the throne, beginning the Taisho period. The end of the Meiji era was marked by huge government domestic and overseas investments and defense programs, nearly exhausted credit, and a lack of foreign exchange to pay debts. The beginning of the Taisho period was marked by a political crisis that interrupted the earlier politics of compromise. The health of the new emperor was weak, which prompted the shift in political power from the old oligarchic clique of "elder statesmen" (genro) to the parliament and the democratic parties. The shift and related movements are called the "Taisho democracy". On February 12, 1913 Yamamoto Gonbee (1852-1933) succeeded Katsura as prime minister. In April, 1914 Okuma Shigenobu replaced Yamamoto. On August 23, 1914 Japan declared war on Germany, joining the Allies in World War I. Within three months, Japan secured control of German possessions on the Shandong Peninsula and in the Pacific. On November 7, Jiaozhou surrendered to Japan. On October 9, 1916, Terauchi Masatake (1852-1919) took over as prime minister from Okuma Shigenobu (1838-1922). On November 2, 1917, the Lansing-Ishii Agreement noted the recognition of Japan's interests in China and pledged to keep an "Open Door" policy. In July 1918, the Siberian Expedition was launched with the deployment of 75,000 Japanese troops. In August 1918, rice riots erupted in towns and cities throughout Japan. When Saionji tried to cut the military budget, the army minister resigned, bringing down the Seiyokai cabinet. Both Yamagata and Saionji refused to resume office, and the genro were unable to find a solution. Public outrage over the military manipulation of the cabinet and the recall of Katsura for a third term led to still more demands for an end to genro politics. Despite old guard opposition, the conservative forces formed a party of their own in 1913, the Rikken Doshikai (Constitutional Association of Friends), a party that won a majority in the House over the Seiyokai in late 1914. The influence of western culture in the Meiji era continued. Kobayashi Kiyochika (1847 - 1915) adopted western painting while continuing to work in ukiyo-e. Okakura Tenshin (1862 - 1913) kept an interest in traditional Japanese painting. Mori Ogai (1862 - 1922) and Natsume Soseki (1867 - 1916) studied in the West and introduced a more modern view of human life. World War I permitted Japan, which fought on the side of the victorious Allies, to expand its influence in Asia and its territorial holdings in the Pacific. Acting virtually independently of the civil government, the Japanese navy seized Germany's Micronesian colonies. The postwar era brought Japan unprecedented prosperity. Japan went to the peace conference at Versailles in 1919 as one of the great military and industrial powers of the world and received official recognition as one of the "Big Five" of the new international order. It joined the League of Nations and received a mandate over Pacific islands north of the Equator formerly held by Germany.
Japan was also involved in the post-war Allied intervention in Russia, and was the last Allied power to withdraw (doing so in 1925). During the 1920s, Japan progressed toward a democratic system of government. However, parliamentary government was not rooted deeply enough to withstand the economic and political pressures of the 1930s, during which military leaders became increasingly influential. These shifts in power were made possible by the ambiguity and imprecision of the Meiji constitution, particularly as regarded the position of the Emperor in relation to the constitution. Seizing the opportunity of Berlin's distraction with the European War and wanting to expand its sphere of influence in China, Japan declared war on Germany in August, 1914 and quickly occupied German-leased territories in China's Shandong Province and the Mariana, Caroline, and Marshall islands in the Pacific. With its Western allies heavily involved in the war in Europe, Japan sought further to consolidate its position in China by presenting the Twenty-One Demands to China in January, 1915. Besides expanding its control over the German holdings, Manchuria, and Inner Mongolia, Japan also sought joint ownership of a major mining and metallurgical complex in central China, prohibitions on China's ceding or leasing any coastal areas to a third power, and miscellaneous other political, economic, and military controls, which, if achieved, would have reduced China to a Japanese protectorate. In the face of slow negotiations with the Chinese government, widespread anti-Japanese sentiments in China, and international condemnation, Japan withdrew the final group of demands, and treaties were signed in May, 1915. Japan's hegemony in northern China and other parts of Asia was facilitated through other international agreements. One with Russia in 1916 helped further secure Japan's influence in Manchuria and Inner Mongolia, and agreements with France, Britain, and the United States in 1917 recognized Japan's territorial gains in China and the Pacific. The Nishihara Loans (named after Nishihara Kamezo, Tokyo's representative in Beijing) of 1917 and 1918, while aiding the Chinese government, put China still deeper into Japan's debt. Toward the end of the war, Japan increasingly filled orders for its European allies' needed war matériel, thus helping to diversify the country's industry, increase its exports, and transform Japan from a debtor to a creditor nation for the first time. Japan's power in Asia grew with the demise of the tsarist regime in Russia and the disorder the 1917 Bolshevik Revolution left in Siberia. Wanting to seize the opportunity, the Japanese army planned to occupy Siberia as far west as Lake Baykal. To do so, Japan had to negotiate an agreement with China allowing the transit of Japanese troops through Chinese territory. Although the force was scaled back to avoid antagonizing the United States, more than 70,000 Japanese troops joined the much smaller units of the Allied Expeditionary Force sent to Siberia in 1918. The year 1919 saw Japan sitting among the "Big Five" powers at the Versailles Peace Conference. Tokyo was granted a permanent seat on the Council of the League of Nations, and the peace treaty confirmed the transfer to Japan of Germany's rights in Shandong, a provision that led to anti-Japanese riots and a mass political movement throughout China. Similarly, Germany's former Pacific islands were put under a Japanese mandate. 
Despite its small role in World War I (and the Western powers' rejection of its bid for a racial equality clause in the peace treaty), Japan emerged as a major actor in international politics at the close of the war. The two-party political system that had been developing in Japan since the turn of the century finally came of age after World War I. This period has sometimes been called that of "Taisho Democracy," after the reign title of the emperor. In 1918 Hara Takashi (1856-1921), a protégé of Saionji and a major influence in the prewar Seiyokai cabinets, became the first commoner to serve as prime minister. He took advantage of long-standing relationships he had throughout the government, won the support of the surviving genro and the House of Peers, and brought into his cabinet as army minister Tanaka Giichi (1864-1929), who had a greater appreciation of favorable civil-military relations than his predecessors. Nevertheless, major problems confronted Hara: inflation, the need to adjust the Japanese economy to postwar circumstances, the influx of foreign ideas, and an emerging labor movement. Prewar solutions were applied by the cabinet to these postwar problems, and little was done to reform the government. Hara worked to ensure a Seiyokai majority through time-tested methods, such as new election laws and electoral redistricting, and embarked on major government-funded public works programs. The public grew disillusioned with the growing national debt and the new election laws, which retained the old minimum tax qualifications for voters. Calls were raised for universal suffrage and the dismantling of the old political party network. Students, university professors, and journalists, bolstered by labor unions and inspired by a variety of democratic, socialist, communist, anarchist, and other Western schools of thought, mounted large but orderly public demonstrations in favor of universal male suffrage in 1919 and 1920. New elections brought still another Seiyokai majority, but barely so. In the political milieu of the day, there was a proliferation of new parties, including socialist and communist parties. In the midst of this political ferment, Hara was assassinated by a disenchanted railroad worker in 1921 (see Diplomacy, this ch.). Hara was followed by a succession of nonparty prime ministers and coalition cabinets. Fear of a broader electorate, left-wing power, and the growing social change engendered by the influx of Western popular culture together led to the passage of the Peace Preservation Law (1925), which forbade any change in the political structure or the abolition of private property. Unstable coalitions and divisiveness in the Diet led the Kenseikai (Constitutional Government Association) and the Seiyu Honto (True Seiyokai) to merge as the Rikken Minseito (Constitutional Democratic Party) in 1927. The Rikken Minseito platform was committed to the parliamentary system, democratic politics, and world peace. Thereafter, until 1932, the Seiyokai and the Rikken Minseito alternated in power. Despite the political realignments and hope for more orderly government, domestic economic crises plagued whichever party held power. Fiscal austerity programs and appeals for public support of such conservative government policies as the Peace Preservation Law--including reminders of the moral obligation to make sacrifices for the emperor and the state--were attempted as solutions.
Although the world depression of the late 1920s and early 1930s had minimal effects on Japan--indeed, Japanese exports grew substantially during this period--there was a sense of rising discontent that was heightened with the assassination of Rikken Minseito prime minister Hamaguchi Osachi (1870-1931) in 1931. The events flowing from the Meiji Restoration in 1868 had seen not only the fulfillment of many domestic and foreign economic and political objectives--without Japan's first suffering the colonial fate of other Asian nations--but also a new intellectual ferment, in a time when there was interest worldwide in socialism and an urban proletariat was developing. Universal male suffrage, social welfare, workers' rights, and nonviolent protest were ideals of the early leftist movement. Government suppression of leftist activities, however, led to more radical leftist action and even more suppression, resulting in the dissolution of the Japan Socialist Party (Nihon Shakaito), only a year after its 1906 founding, and in the general failure of the socialist movement. The victory of the Bolsheviks in Russia in 1917 and their hopes for a world revolution led to the establishment of the Comintern (a contraction of Communist International, the organization founded in Moscow in 1919 to coordinate the world communist movement). The Comintern realized the importance of Japan in achieving successful revolution in East Asia and actively worked to form the Japan Communist Party (Nihon Kyosanto), which was founded in July, 1922. The announced goals of the Japan Communist Party in 1923 were an end to feudalism, abolition of the monarchy, recognition of the Soviet Union, and withdrawal of Japanese troops from Siberia, Sakhalin, China, Korea, and Taiwan. A brutal suppression of the party followed. Radicals responded with an assassination attempt on Prince Regent Hirohito. The 1925 Peace Preservation Law was a direct response to the "dangerous thoughts" perpetrated by communist elements in Japan. The liberalization of election laws, also in 1925, benefited communist candidates even though the Japan Communist Party itself was banned. A new Peace Preservation Law in 1928, however, further impeded communist efforts by banning the parties they had infiltrated. The police apparatus of the day was ubiquitous and quite thorough in attempting to control the socialist movement (see The Police System , ch. 8). By 1926 the Japan Communist Party had been forced underground, by the summer of 1929 the party leadership had been virtually destroyed, and by 1933 the party had largely disintegrated. Emerging Chinese nationalism, the victory of the communists in Russia, and the growing presence of the United States in East Asia all worked against Japan's postwar foreign policy interests. The four-year Siberian expedition and activities in China, combined with big domestic spending programs, had depleted Japan's wartime earnings. Only through more competitive business practices, supported by further economic development and industrial modernization, all accommodated by the growth of the zaibatsu (wealth groups--see Glossary), could Japan hope to become predominant in Asia. The United States, long a source of many imported goods and loans needed for development, was seen as becoming a major impediment to this goal because of its policies of containing Japanese imperialism. 
An international turning point in military diplomacy was the Washington Conference of 1921-1922, which produced a series of agreements that effected a new order in the Pacific region. Japan's economic problems made a naval buildup nearly impossible and, realizing the need to compete with the United States on an economic rather than a military basis, rapprochement became inevitable. Japan adopted a more neutral attitude toward the civil war in China, dropped efforts to expand its hegemony into China proper, and joined the United States, Britain, and France in encouraging Chinese self-development. In the Four Power Treaty on Insular Possessions (December 13, 1921), Japan, the United States, Britain, and France agreed to recognize the status quo in the Pacific, and Japan and Britain agreed to terminate formally their Treaty of Alliance. The Five Power Naval Disarmament Treaty (February 6, 1922) established an international capital ship ratio (5, 5, 3, 1.75, and 1.75, respectively, for the United States, Britain, Japan, France, and Italy) and limited the size and armaments of capital ships already built or under construction. In a move that gave the Japanese Imperial Navy greater freedom in the Pacific, Washington and London agreed not to build any new military bases between Singapore and Hawaii. The goal of the Nine Power Treaty (February 6, 1922), signed by Belgium, China, the Netherlands, and Portugal, along with the original five powers, was the prevention of war in the Pacific. The signatories agreed to respect China's independence and integrity, not to interfere in Chinese attempts to establish a stable government, to refrain from seeking special privileges in China or threatening the positions of other nations there, to support a policy of equal opportunity for commerce and industry of all nations in China, and to reexamine extraterritoriality and tariff autonomy policies. Japan also agreed to withdraw its troops from Shandong, relinquishing all but purely economic rights there, and to evacuate its troops from Siberia. Ultranationalism was characteristic of right-wing politicians and conservative military men since the inception of the Meiji Restoration, contributing greatly to the prowar politics of the 1870s. Disenchanted former samurai had established patriotic societies and intelligence-gathering organizations, such as the Gen'yosha (Black Ocean Society, founded in 1881) and its later offshoot, the Kokuryukai (Black Dragon Society, or Amur River Society, founded in 1901). These groups became active in domestic and foreign politics, helped foment prowar sentiments, and supported ultranationalist causes through the end of World War II. After Japan's victories over China and Russia, the ultranationalists concentrated on domestic issues and perceived domestic threats, such as socialism and communism.
http://www.fact-index.com/t/ta/taisho_period.html
13
31
For forty-three years, although no war between the superpowers of the United States and the Soviet Union was ever officially declared, the leaders of the democratic West and the Communist East faced off against each other in what is known as the Cold War. The war was not considered “hot” because neither superpower directly attacked the other. Nevertheless, despite attempts to negotiate during periods of peaceful coexistence and détente, these two nations fought overt and covert battles to expand their influence across the globe. Cold War scholars have devised two conflicting theories to explain what motivated the superpowers to act as they did during the Cold War. One group of scholars argues that the United States and the Soviet Union, along with China, were primarily interested in protecting and advancing their political systems—that is, democracy and communism, respectively. In other words, these scholars postulate that the Cold War was a battle over ideology. Another camp of scholars contends that the superpowers were mainly acting to protect their homelands from aggressors and to defend their interests abroad. These theorists maintain that the Cold War was fought over national self-interest. These opposing theorists have in large measure determined how people understand the Cold War, a conflict that had been a long time in the making. A History of Conflict The conflict between East and West had deep roots. Well before the Cold War, the relationship between the United States and the Soviet Union had been hostile. Although in the early 1920s, shortly after the Communist revolution in Russia, the United States had provided famine relief to the Soviets and American businesses had established commercial ties in the Soviet Union, by the 1930s the relationship had soured. By the time the United States established an official relationship with the new Communist nation in 1933, the oppressive, totalitarian nature of Joseph Stalin’s regime presented an obstacle to friendly relations with the West. Americans saw themselves as champions of the free world, and tyrants such as Stalin represented everything the United States opposed. At the same time, the Soviets, who believed that capitalism exploited the masses, saw the United States as the oppressor. Despite deep-seated mistrust and hostility between the Soviet Union and Western democracies such as the United States, an alliance was forged among them in the 1940s to fight a common enemy, Nazi Germany, which had invaded Russia in June 1941. Although the Allies—as that alliance is called—eventually defeated Germany, the Soviet Union had not been completely satisfied with how its Western Allies had conducted themselves. For example, the Soviets complained that the Allies had taken too long to establish an offensive front on Germany’s west flank, leaving the Soviets to handle alone the offensive front on Germany’s east flank. Tension between the Soviet Union and the Western Allies continued after the war. During postwar settlements, the Allies agreed to give control of Eastern Europe—which had been occupied by Germany—to the Soviet Union for its part in helping to defeat Germany. At settlement conferences among the Allies in Tehran (1943), Yalta (February 1945), and Potsdam (July/August 1945), the Soviets agreed to allow the nations of Eastern Europe to choose their own governments in free elections. 
Stalin agreed to the condition only because he believed that these newly liberated nations would see the Soviet Union as their savior and create their own Communist governments. When they failed to do so, Stalin violated the agreement by wiping out all opposition to communism in these nations and setting up his own governments in Eastern Europe. The Cold War had begun. During the first years of the Cold War, Soviet and American leaders divided the world into opposing camps, and both sides accused the other of having designs to take over the world. Stalin described a world split into imperialist and capitalist regimes on the one hand and Communist governments on the other. The Soviet Union and the Communist People’s Republic of China saw the United States as an imperialist nation, using the resources of emerging nations to increase its own profits. The Soviet Union and China envisioned themselves as crusaders for the working class and the peasants, saving the world from oppression by wealthy capitalists. U.S. president Harry Truman also spoke of two diametrically opposed systems: one free and the other bent on subjugating struggling nations. The United States and other democratic nations accused the Soviet Union and China of imposing their ideology on emerging nations to increase their power and sphere of influence. Western nations envisioned themselves as the champions of freedom and justice, saving the world for democracy. Whereas many scholars see Cold War conflicts in these same ideological terms, others view these kinds of ideological pronouncements as ultimately deceptive. They argue that despite the superpowers’ claims that they were working for the good of the world, what they were really doing was working for their own security and economic advancement. Two Schools of Thought Ideological theorists claim that the Soviets and the Americans so believed in the superiority of their respective values and beliefs that they were willing to fight a cold war to protect and advance them. Each nation perceived itself to be in a “do-or-die” struggle between alternative ways of life. According to foreign policy scholar Glenn Chafetz, a leading proponent of the ideology theory: Ideology served as the lens through which both sides viewed the world, defined their identities and interests, and justified their actions. U.S. leaders perceived the Soviet Union as threatening not simply because the USSR was powerful but because the entire Soviet enterprise was predicated on implacable hostility to capitalism and dedicated to its ultimate destruction. From the earliest days of the Russian Revolution until the end of the cold war, Moscow viewed the United States as unalterably hostile. Even when both nations were fighting a common enemy, Nazi Germany, the Soviets were certain that the Americans were determined to destroy the Soviet Union. Other scholars argue that the United States and the Soviet Union chose actions that would promote national self-interest, not ideology. That is, the nations were not primarily motivated by a desire to defend capitalism or communism but by the wish to strengthen their position in the world. These scholars reason that the highest priority of every nation is not to promote its ideology but to protect and promote its own self-interest. Thus, these theorists claim, the superpowers advanced their sphere of influence throughout the world in order to gain advantages, such as a valuable trading partner or a strategic military ally.
Moreover, these scholars argue, the superpowers aligned themselves with allies who could protect their interests against those who threatened them. Historian Mary Hampton, a champion of the national interest theory, explains: Had ideology been the sustaining force of the cold war, the stability and predictability of the relationship between the two states would not have emerged. Their mutual respect for spheres of influence, the prudent management of their nuclear relationships, and their consistent policy of checking global expansions without resort to direct confrontation are best explained by an analysis based on interest-motivated behavior. . . . From 1946 to 1990, the relationship between the United States and Soviet Union included both diverging and shared interests, and it was a combination of these interests that governed their conduct during the cold war. Although the differences between these two interpretations of Cold War motivations are fairly clear, applying the theories to explain actual events during the period is more complicated. For example, even though a nation might claim that it deposed a leader in a Latin American nation because the ruler was despotic, the real reason might be that the Latin American country had some resource such as oil that the invading nation coveted. Conversely, invading nations are always vulnerable to charges that they are acting in self-interest when in reality nations often do become involved in other countries’ affairs out of a genuine concern about human rights or other humanitarian issues. Both theories have been used to explain many U.S. and Soviet actions during the Cold War, leading to radically different interpretations of events. The Battle over Europe Both theories have been used to explain Soviet and U.S. behavior in Europe. Those who believe the Cold War was primarily an ideological battle claim that aggressive Soviet action to quell democratic movements in the nations of Eastern Europe was motivated by the Soviet belief that capitalism harms the masses whereas communism protects them. Capitalism, the Soviets believed, exploits workers, who take home only a small percentage of companies’ profits in the form of wages whereas the owners reap huge financial benefits at the workers’ expense. Under socialism, in contrast, workers own the methods of production and therefore take their fair share of the profits. Thus, ideologically, the Soviet Union believed it was protecting the oppressed workers in the nations of Eastern Europe by opposing democratic movements. Indeed, the Soviet Union’s belief in socialism as the superior economic system informed all of its foreign policy decisions. According to Chafetz, the Soviets believed that “international relations are a reflection of the class struggle in which socialist countries represent the working class and capitalist countries represent the exploiting class. Socialist internationalism referred to the common class interest of all socialist states; these concerns trumped other interests, at least in the minds of Soviet leaders.” According to those who believe ideology motivated actions taken during the Cold War, the United States reacted negatively to Soviet actions in Eastern Europe because it disapproved of the Soviet Union’s undemocratic treatment of Eastern Europeans, who had the right to choose their own systems of governance.
“Moscow’s repression of democratic movements in Eastern Europe,” Chafetz claims, “conflicted with the promises to permit elections that Stalin made at Yalta and Potsdam.” In response to Soviet aggression in Eastern Europe, U.S. leaders publicly denounced Soviet actions and increased U.S. military forces in Western Europe. In June 1961, for example, President John F. Kennedy took a stand against Soviet premier Nikita Khrushchev’s attempt to occupy the city of Berlin. Although Berlin was located within the borders of East Germany, a Soviet satellite, after World War II the Allies had agreed that both East and West would occupy the city (dividing it into East and West Berlin) because Berlin had strong ties with the West. Capitalism and democracy, however, appealed to many East Germans, who fled to West Berlin by the thousands. This embarrassed the Soviets and threatened their hold on Eastern Europe. In June 1961 Khrushchev threatened to forcibly take West Berlin under Communist rule. Kennedy responded to this challenge by increasing America’s combat forces in West Berlin and using billions of dollars approved by Congress to increase U.S. nuclear and conventional weapons throughout Western Europe. Khrushchev’s counterresponse was to divide the city of Berlin with a cement wall and barbed wire, backed by a column of army tanks; the wall remained until November 1989. Theorists who subscribe to the position that the superpowers were motivated more by national self-interest disagree with the ideological argument used to interpret such events. Hampton maintains: Arguments that seek to explain the cold-war competition in terms of ideology . . . should anticipate that the United States would have supported democratic reform movements and uprisings throughout Eastern Europe in this period, such as those that occurred in East Germany in 1953 and in Poland and Hungary in 1956. In fact, the Soviet Union resolved these crises [repressed the movements] without intervention from the United States or its Western allies. Indeed, the United States did not intervene with overt military action in Eastern Europe, taking a more cautious approach to maintain the balance of power between the two superpowers. National interest theorists claim that this stance suggests that the United States was more interested in maintaining its interests than promoting its ideology. Whereas ideological motivation causes nations to break rules and take risks in the name of some higher principles, these theorists say, nations protecting their self-interest do not want to “rock the boat”; thus, countries motivated by self-interest play by the rules and take fewer risks. In consequence, while the Soviet Union marched into the nations of Eastern Europe to crush democratic movements, the United States, fearing international disapproval and hoping to avoid war with the Soviets, declined to intervene. The Third World According to theorists who believe ideology drove Cold War strategy, the United States and the Soviet Union both became involved in the third world to expand their spheres of influence, but for different reasons. The Soviets, unable to control Europe, sought to spread their ideology and expand their sphere of influence elsewhere. According to Chafetz: Stalin and his successors were convinced that the legitimacy of their rule depended on validating Marxist-Leninist predictions of world revolution.
The beginning of the nuclear standoff in Europe [between the United States and the Soviet Union] made it apparent that fomenting revolution in the industrialized, democratic states of the West was either impossible or too dangerous. As a result the Soviets turned their efforts to exporting revolution to less developed countries. They tended to view all anti-Western movements throughout Latin America, Asia, Africa, and the Middle East through the single lens of [Communist leader Vladimir] Lenin’s theory of imperialism. Thus, despite the diverse motives behind revolutions, coups, and civil wars in China, Laos, Cuba, Vietnam, Congo, Ethiopia, Somalia, Afghanistan, Libya, and elsewhere, [Soviet leaders] Stalin, Nikita S. Khrushchev, and Leonid I. Brezhnev characterized them all in anti-imperialist terms. U.S. involvement in the third world was more complex. Chafetz writes, “Soviet exploitation of decolonization created a painful dilemma for the United States.” Although the United States, which regarded itself as a freed colony, was empathetic toward third world nations seeking self-determination and independence from colonial powers, it also viewed many of the regimes as anti-American. Indeed, the leaders of these third world coups and revolutions were often rebelling against increasing U.S. dominance in world affairs. Moreover, revolutionary leaders, inspired by Communist philosophy and weary of years of oppression at the hands of capitalist, democratic powers, were often attracted to the Soviet economic model. In consequence, the United States found itself in the uncomfortable position of opposing nationalist revolutions in order to contain the spread of communism. National self-interest theorists disagree with this analysis. The fact that the United States did not support these revolutions, they say, proves that the nation was motivated more by self-interest than ideology. If the ideology theory were true, they contend, the United States would have supported revolutions against colonial oppression. The United States had once been a colony and after independence had become a champion of the principle that nations have the right to choose their own systems of governance. Despite its past, the United States did not support these revolutions. Instead, the United States opposed them in order to gain or maintain political and economic allies. Thus, in the eyes of many, U.S. behavior toward the third world was immoral and hypocritical. These theorists believe that the use of less-than-honorable strategies, such as assassinations and secret agreements with repressive regimes, to prevent the success of these national revolutions stained America’s reputation across the globe. Of particular embarrassment were some of the actions taken by the Central Intelligence Agency (CIA). The Central Intelligence Agency National self-interest theorists find support for their views when examining CIA actions during the Cold War. Since its creation in 1947, the CIA was used as an instrument to carry out U.S. Cold War strategy, particularly during the 1950s and 1960s. The CIA was initially mandated to gather, evaluate, and disseminate intelligence. However, the vaguely mandated “other functions and duties” beyond its core mission led to the expansion of the CIA’s function to include counterespionage and covert action. Some of these activities were invaluable to America’s security. Foreign policy scholar Loch K.
Johnson explains: “Intelligence-collection activities provided warnings about Soviet missiles in Cuba in 1962. Counterespionage uncovered Soviet agents inside U.S. secret agencies.” Johnson adds, however, that the CIA sometimes used tactics that conflicted with traditional American values. The CIA resorted to assassination plots against foreign leaders and spied on its own citizens. The agency engaged in paramilitary operations in Southeast Asia and abandoned the native people who had helped them to imprisonment, torture, and death when the United States pulled out of the region. Even covert acts that were deemed CIA successes, in historian Benjamin Frankel’s view, were moral failures: “Its role in toppling the ostensibly democratic, though Marxist, government of Guatemala in 1954 seemed to fly in the face of America’s commitment to democracy.” The fact that the administrations of several Cold War presidents approved these tactics suggests that national self-interest, not ideology, motivated CIA action during the Cold War. The Development of Alliances National self-interest theorists also find support for their point of view in the formation of alliances among the Communist nations of the East and the democratic nations of the West over the course of the Cold War. These alliances were designed to protect common interests. “Each state began mobilizing other states,” Hampton explains, “trying to form alliances and balance against the other.” To maintain a balance of power, these theorists claim, Western nations created the North Atlantic Treaty Organization (NATO) in 1949. The alliance was created largely to discourage an attack by the Soviet Union on the non-Communist nations of Western Europe. In 1955 the Soviet Union and the Communist nations of Eastern Europe formed their own military alliance to oppose NATO, the Warsaw Pact. Whether or not these alliances were responsible for keeping the peace, the balance of power was in fact maintained. National interest theorists maintain that an unlikely alliance between the United States and China further supports their position. A rift between the Soviet Union and China, the world’s most powerful Communist powers, would make this alliance possible. A Rift in the East Most of the Western world viewed China and the Soviet Union as two versions of the same Communist evil, but in reality, Sino-Soviet relations, not unlike those between the Soviet Union and the United States, had been historically uneasy. The two nations shared the longest land border in the world, the source of border disputes since the seventeenth century. Moreover, during the Communist revolution in China, the Soviet Union had initially supported Chiang Kai-shek rather than Mao Tse-tung, who ultimately defeated Chiang Kai-shek and became the leader of Communist China. However, to offer the newly Communist China some security against the United States, in 1950 the Soviet Union signed the Treaty of Friendship, Alliance, and Mutual Assistance with Mao. Despite this alliance, the Soviet Union and China had different ideas about the purpose of communism and the direction it should take. The Soviet Union began to rethink its Cold War strategy, choosing less overtly aggressive means of expanding its sphere of influence to avoid directly antagonizing the United States. China, on the other hand, vigorously opposed this stance, favoring continued aggression toward “imperialist” nations. China even accused the Soviet Union of going soft on capitalism.
China’s vigorous opposition to Western imperialism drove a wedge between the Soviet Union and China. The conflicts between China and the Soviet Union escalated as both vied for control of satellite states. During the late 1960s the Soviet invasion of Czechoslovakia and the buildup of forces in the Soviet Far East led China to suspect that the Soviet Union would one day try to invade it. Border clashes along the Ussuri River that separates Manchuria from the Soviet Union peaked in 1969, and for several months China and the Soviet Union teetered on the brink of a nuclear conflict. Fortunately, negotiations between Soviet premier Aleksey Kosygin and Chinese premier Zhou En-lai defused the crisis. Nevertheless, Zhou and Mao began to rethink China’s geopolitical strategy. The goal had always been to drive imperialist nations from Asia, but such a strategy had led to a hostile relationship with America, the Soviet Union’s enemy. In fact, this strategy had brought China into conflict with the United States in two of the bloodiest clashes of the Cold War, the Korean and Vietnam Wars. However, when President Richard Nixon showed signs of reducing if not eliminating the American presence in Vietnam, China began to see normalization of relations with the United States as a way of safeguarding its security against the Soviet Union. Since this relationship was forged to enhance China’s national security and was created despite ideological differences between the two nations, the alliance between China and the United States supports the claims of self-interest theorists. The Fall of the Soviet Union Whereas national self-interest theorists find support for their theory in the development of alliances during the Cold War, ideological theorists find support for their position in the circumstances surrounding the fall of the Soviet Union. When Communist ideology eventually gave way to more democratic ideals in the Soviet Union, the union dissolved and the Cold War came to an end. This change, many argue, can be traced to the efforts of one man, Mikhail Gorbachev. When Gorbachev became leader of the Soviet Union in 1985, he began a political, economic, and social program that radically altered the Soviet government, creating a limited democracy. The nation’s political restructuring began with a newly created Congress of People’s Deputies, which elected Gorbachev executive president. The new government was not without opposition, and remaining hard-line Communists tried to unseat the new government. The coup failed, however, and shortly thereafter Gorbachev dissolved the Communist Party. Gorbachev tried to create a new Union—the Commonwealth of Independent States—but, explains Chafetz, “this experiment with limited democracy . . . developed a momentum of its own and became too strong for Gorbachev, or his more hardline opponents within the Communist party, to control.” When the commonwealth itself collapsed, the new union dissolved into independent nations. Ideological theorists point to this chain of events as proof that Cold War events were largely driven by ideology. Once the Soviet political system changed, there was no longer an ideological rift between the two nations, and the Cold War ended. For over four decades the United States and the Soviet Union had tried to expand their influence worldwide and in the process came into countless conflicts with one another. 
Whereas the Soviet Union pressured the nations of Eastern Europe to become Communist satellites and supported Communist revolutions in Southeast Asia, the United States forged alliances with democratic nations around the world and defended many emerging nations against communism. While trying to interpret these events, Cold War scholars have become divided into two camps: those who think the Cold War powers were acting to further their own belief systems and those who believe the major powers were simply aiming to protect their interests at home and abroad. Which of these theories best explains each superpower’s behavior during the Cold War remains controversial. In Opposing Viewpoints in World History: The Cold War, scholars debate other controversies surrounding the Cold War in the following chapters: From Allies to Enemies: The Origins of the Cold War, Coexistence and Conflict, From Détente to the Cold War’s End, and Reflections: The Impact of the Cold War. The authors express diverse views about the nature of the Cold War and the efficacy and justness of U.S. and Soviet policies. As ideology and national-interest theorists make clear, evaluating the Cold War is an exceedingly complex enterprise.
http://www.enotes.com/cold-war-article
13
24
A tropical rainforest is an ecosystem type that occurs roughly within the latitudes 28 degrees north or south of the equator (in the equatorial zone between the Tropic of Cancer and Tropic of Capricorn). This ecosystem experiences high average temperatures and a significant amount of rainfall. Rainforests can be found in Asia, Australia, Africa, South America, Central America, Mexico and on many of the Pacific, Caribbean, and Indian Ocean islands. Within the World Wildlife Fund's biome classification, tropical rainforests are thought to be a type of tropical wet forest (or tropical moist broadleaf forest) and may also be referred to as lowland equatorial evergreen rainforest. Tropical rainforests can be characterized in two words: warm and wet. Mean monthly temperatures exceed 18 °C (64 °F) during all months of the year. Average annual rainfall is no less than 168 cm (66 in) and can exceed 1,000 cm (390 in) although it typically lies between 175 cm (69 in) and 200 cm (79 in). This high level of precipitation often results in poor soils due to leaching of soluble nutrients. Tropical rainforests are unique in the high levels of biodiversity they exhibit. Around 40% to 75% of all biotic species are indigenous to the rainforests. Rainforests are home to half of all the living animal and plant species on the planet. Two-thirds of all flowering plants can be found in rainforests. A single hectare of rainforest may contain 42,000 different species of insect, up to 807 trees of 313 species and 1,500 species of higher plants. Tropical rainforests have been called the "jewels of the Earth" and the "world's largest pharmacy", because over one quarter of natural medicines have been discovered within them. It is likely that there may be many millions of species of plants, insects and microorganisms still undiscovered in tropical rainforests. Tropical rainforests are among the most threatened ecosystems globally due to large-scale fragmentation caused by human activity. Habitat fragmentation caused by geological processes such as volcanism and climate change occurred in the past, and such processes have been identified as important drivers of speciation. However, rapid human-driven habitat destruction is suspected to be one of the major causes of species extinction. Tropical rain forests have been subjected to heavy logging and agricultural clearance throughout the 20th century, and the area covered by rainforests around the world is rapidly shrinking. Tropical rainforests have existed on Earth for hundreds of millions of years. Most tropical rainforests today are on fragments of the Mesozoic era supercontinent of Gondwana. The separation of the landmass resulted in a great loss of amphibian diversity while at the same time the drier climate spurred the diversification of reptiles. The division left tropical rainforests located in five major regions of the world: tropical America, Africa, Southeast Asia, Madagascar, and New Guinea, with smaller outliers in Australia. However, the specifics of the origin of rainforests remain uncertain due to an incomplete fossil record. Types of tropical rainforest Several types of forest comprise the general tropical rainforest biome: - Lowland equatorial evergreen rain forests are forests which receive high rainfall (more than 2000 mm, or 80 inches, annually) throughout the year. These forests occur in a belt around the equator, with the largest areas in the Amazon Basin of South America, the Congo Basin of Central Africa, Indonesia, and New Guinea.
- Moist deciduous and semi-evergreen seasonal forests receive high overall rainfall with a warm summer wet season and a cooler winter dry season. Some trees in these forests drop some or all of their leaves during the winter dry season. These forests are found in parts of South America, in Central America and around the Caribbean, in coastal West Africa, parts of the Indian subcontinent, and across much of Indochina. - Montane rain forests, some of which are known as cloud forests, are found in cooler-climate mountain areas. Depending on latitude, the lower limit of montane rainforests on large mountains is generally between 1500 and 2500 m while the upper limit is usually from 2400 to 3300 m. - Flooded forests: seven types of flooded forest are recognized for Tambopata Reserve in Amazonian Peru: - Permanently waterlogged swamp forest—Former oxbow lakes still flooded but covered in forest. - Seasonally waterlogged swamp forest—Oxbow lakes in the process of filling in. - Lower floodplain forest—Lowest floodplain locations with a recognizable forest. - Middle floodplain forest—Tall forest, flooded occasionally. - Upper floodplain forest—Tall forest, rarely flooded. - Old floodplain forest—Subjected to flooding within the last two hundred years. - Previous floodplain—Now terra firme, but historically ancient floodplain of Tambopata River. Rainforests are divided into different strata, or layers, with vegetation organized into a vertical pattern from the top of the soil to the canopy. Each layer is a unique biotic community containing different plants and animals adapted for life in that particular stratum. Only the emergent layer is unique to tropical rainforests, while the others are also found in temperate rainforests. The forest floor, the bottom-most layer, receives only 2% of the sunlight. Only plants adapted to low light can grow in this region. Away from riverbanks, swamps and clearings, where dense undergrowth is found, the forest floor is relatively clear of vegetation because of the low sunlight penetration. This more open quality permits the easy movement of larger animals such as ungulates like the okapi (Okapia johnstoni), tapir (Tapirus sp.), Sumatran rhinoceros (Dicerorhinus sumatrensis), and apes like the western lowland gorilla (Gorilla gorilla), as well as many species of reptiles, amphibians, and insects. The forest floor also contains decaying plant and animal matter, which disappears quickly, because the warm, humid conditions promote rapid decay. Many forms of fungi growing here help decay the animal and plant waste. The understory layer lies between the canopy and the forest floor. The understory is home to a number of birds, small mammals, insects, reptiles, and predators. Examples include leopard (Panthera pardus), poison dart frogs (Dendrobates sp.), ring-tailed coati (Nasua nasua), boa constrictor (Boa constrictor), and many species of Coleoptera. The vegetation at this layer generally consists of shade-tolerant shrubs, herbs, small trees, and large woody vines which climb into the trees to capture sunlight. Only about 5% of sunlight breaches the canopy to reach the understory, so true understory plants seldom grow taller than 3 m (10 feet). As an adaptation to these low light levels, understory plants have often evolved much larger leaves. Many seedlings that will grow to the canopy level are in the understory. The canopy is the primary layer of the forest forming a roof over the two remaining layers.
It contains the majority of the largest trees, typically 30–45 m in height. Tall, broad-leaved evergreen trees are the dominant plants. The densest areas of biodiversity are found in the forest canopy, as it often supports a rich flora of epiphytes, including orchids, bromeliads, mosses and lichens. These epiphytic plants attach to trunks and branches and obtain water and minerals from rain and debris that collects on the supporting plants. The fauna is similar to that found in the emergent layer, but more diverse. It is suggested that the total arthropod species richness of the tropical canopy might be as high as 20 million. Other species inhabiting this layer include many avian species such as the yellow-casqued wattled hornbill (Ceratogymna elata), collared sunbird (Anthreptes collaris), African gray parrot (Psittacus erithacus), keel-billed toucan (Ramphastos sulfuratus), scarlet macaw (Ara macao) as well as other animals like the spider monkey (Ateles sp.), African giant swallowtail (Papilio antimachus), three-toed sloth (Bradypus tridactylus), kinkajou (Potos flavus), and tamandua (Tamandua tetradactyla). The emergent layer contains a small number of very large trees, called emergents, which grow above the general canopy, reaching heights of 45–55 m, although on occasion a few species will grow to 70–80 m tall. Some examples of emergents include: Balizia elegans, Dipteryx panamensis, Hieronyma alchorneoides, Hymenolobium mesoamericanum, Lecythis ampla and Terminalia oblonga. These trees need to be able to withstand the hot temperatures and strong winds that occur above the canopy in some areas. Several unique faunal species inhabit this layer such as the crowned eagle (Stephanoaetus coronatus), the king colobus (Colobus polykomos), and the large flying fox (Pteropus vampyrus). However, stratification is not always clear. Rainforests are dynamic and many changes affect the structure of the forest. Emergent or canopy trees collapse, for example, causing gaps to form. Openings in the forest canopy are widely recognized as important for the establishment and growth of rainforest trees. It’s estimated that perhaps 75% of the tree species at La Selva Biological Station, Costa Rica, are dependent on canopy opening for seed germination or for growth beyond sapling size, for example. Most tropical rainforests are located around and near the equator, therefore having what is called an equatorial climate characterized by three major climatic parameters: temperature, rainfall, and dry season intensity. Other parameters that affect tropical rainforests are carbon dioxide concentrations, solar radiation, and nitrogen availability. In general, climatic patterns consist of warm temperatures and high annual rainfall. However, the abundance of rainfall changes throughout the year creating distinct wet and dry seasons. Rainforests are classified by the amount of rainfall received each year, which has allowed ecologists to define differences in these forests that look so similar in structure. According to Holdridge’s classification of tropical ecosystems, true tropical rainforests have an annual rainfall greater than 800 cm and annual temperature greater than 24 degrees Celsius. However, most lowland tropical rainforests can be classified as tropical moist or wet forests, which differ in regards to rainfall. Tropical rainforest ecology (dynamics, composition, and function) is sensitive to changes in climate, especially changes in rainfall.
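As a rough illustration of how the thresholds quoted above can be applied, the short Python sketch below encodes them as a simple classification rule. This is only a minimal sketch of the stated criteria (every month warmer than 18 °C, at least about 168 cm of annual rainfall, and Holdridge's stricter 800 cm and 24 °C cutoff for "true" tropical rain forest); the function name, variable names, and example values are invented for this illustration and are not part of any standard classification software.

# Minimal illustrative sketch: encodes the climate thresholds quoted above.
# All names here are hypothetical and chosen only for this example.

def classify_tropical_forest(monthly_mean_temps_c, annual_rainfall_cm):
    """Return a rough label for a lowland tropical site.

    monthly_mean_temps_c: twelve mean monthly temperatures in degrees Celsius
    annual_rainfall_cm: total annual rainfall in centimetres
    """
    if len(monthly_mean_temps_c) != 12:
        raise ValueError("expected twelve monthly mean temperatures")

    annual_mean_temp = sum(monthly_mean_temps_c) / 12.0

    # Basic tropical-rainforest conditions from the text: every month above
    # 18 degrees C and no less than roughly 168 cm of rain per year.
    if min(monthly_mean_temps_c) <= 18 or annual_rainfall_cm < 168:
        return "not a tropical rainforest climate"

    # Holdridge's stricter criterion for "true" tropical rain forest:
    # annual rainfall above 800 cm and annual mean temperature above 24 degrees C.
    if annual_rainfall_cm > 800 and annual_mean_temp > 24:
        return "true tropical rain forest (Holdridge)"

    # Most lowland sites fall here and are classed as tropical moist or wet forest.
    return "tropical moist or wet forest"

# Example: a warm lowland site with about 250 cm of rain per year.
print(classify_tropical_forest([26, 26, 27, 27, 27, 26, 26, 26, 26, 26, 26, 26], 250))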
The climate of these forests is controlled by a band of clouds called the Intertropical Convergence Zone located near the equator and created by the convergence of the trade winds from the northern and southern hemispheres. The position of the band varies seasonally, moving north in the northern summer and south in the northern winter, and ultimately controlling the wet and dry seasons in the tropics. These regions have experienced strong warming at a mean rate of 0.26 degrees Celsius per decade which coincides with a global rise in temperature resulting from the anthropogenic inputs of greenhouse gases into the atmosphere. Studies have also found that precipitation has declined and tropical Asia has experienced an increase in dry season intensity whereas Amazonia has no significant pattern change in precipitation or dry season. Additionally, El Niño-Southern Oscillation events drive the interannual climatic variability in temperature and precipitation and result in drought and increased intensity of the dry season. As anthropogenic warming increases, the intensity and frequency of ENSO events will increase, rendering tropical rainforest regions susceptible to stress and increased mortality of trees. Soil types are highly variable in the tropics and are the result of a combination of several variables such as climate, vegetation, topographic position, parent material, and soil age. Most tropical soils are characterized by significant leaching and poor nutrients; however there are some areas that contain fertile soils. Soils throughout the tropical rainforests fall into two classifications: ultisols and oxisols. Ultisols are known as well weathered, acidic red clay soils, deficient in major nutrients such as calcium and potassium. Similarly, oxisols are acidic, old, typically reddish, highly weathered and leached, but are better drained than ultisols. The clay content of ultisols is high, making it difficult for water to penetrate and flow through. The reddish color of both soils is the result of heavy heat and moisture forming oxides of iron and aluminum, which are insoluble in water and not taken up readily by plants. Soil chemical and physical characteristics are strongly related to above ground productivity and forest structure and dynamics. The physical properties of soil control the tree turnover rates whereas chemical properties such as available nitrogen and phosphorus control forest growth rates. The soils of the eastern and central Amazon as well as the Southeast Asian Rainforest are old and mineral poor whereas the soils of the western Amazon (Ecuador and Peru) and volcanic areas of Costa Rica are young and mineral rich. Primary productivity or wood production is highest in western Amazon and lowest in eastern Amazon which contains heavily weathered soils classified as oxisols. Additionally, Amazonian soils are greatly weathered, making them devoid of minerals like phosphorus, potassium, calcium, and magnesium, which come from rock sources. However, not all tropical rainforests occur on nutrient poor soils; many occur on nutrient rich floodplains and volcanic soils located in the Andean foothills, and volcanic areas of Southeast Asia, Africa, and Central America. Oxisols, infertile, deeply weathered and severely leached, have developed on the ancient Gondwanan shields. Rapid bacterial decay prevents the accumulation of humus.
The concentration of iron and aluminum oxides by the laterization process gives the oxisols a bright red color and sometimes produces minable deposits (e.g., bauxite). On younger substrates, especially of volcanic origin, tropical soils may be quite fertile. This high rate of decomposition is the result of phosphorus levels in the soils, precipitation, high temperatures and the extensive microorganism communities. In addition to the bacteria and other microorganisms, there are an abundance of other decomposers such as fungi and termites that aid in the process as well. Nutrient recycling is important because below ground resource availability controls the above ground biomass and community structure of tropical rainforests. These soils are typically phosphorus limited, which inhibits net primary productivity or the uptake of carbon. The soil contains tiny microbial organisms such as bacteria, which break down leaf litter and other organic matter into inorganic forms of carbon usable by plants through a process called decomposition. During the decomposition process the microbial community is respiring, taking up oxygen and releasing carbon dioxide. The decomposition rate can be evaluated by measuring the uptake of oxygen. High temperatures and precipitation increase decomposition rate, which allows plant litter to rapidly decay in tropical regions, releasing nutrients that are immediately taken up by plants through surface or ground waters. The seasonal patterns in respiration are controlled by leaf litter fall and precipitation, the driving force moving the decomposable carbon from the litter to the soil. Respiration rates are highest early in the wet season because the recent dry season results in a large percentage of leaf litter and thus a higher percentage of organic matter being leached into the soil. A common feature of many tropical rainforests is the distinct buttress roots of trees. Instead of penetrating to deeper soil layers, buttress roots create a wide spread root network at the surface for more efficient uptake of nutrients in a very nutrient poor and competitive environment. Most of the nutrients within the soil of a tropical rainforest occur near the surface because of the rapid turnover time and decomposition of organisms and leaves. Because of this, the buttress roots occur at the surface so the trees can maximize uptake and actively compete with the rapid uptake of other trees. These roots also aid in water uptake and storage, increase surface area for gas exchange, and collect leaf litter for added nutrition. Additionally, these roots reduce soil erosion and maximize nutrient acquisition during heavy rains by diverting nutrient rich water flowing down the trunk into several smaller flows while also acting as a barrier to ground flow. Also, the large surface areas these roots create provide support and stability to rainforests trees, which commonly grow to significant heights. This added stability allows these trees to withstand the impacts of severe storms, thus reducing the occurrence of fallen trees. Succession is an ecological process that changes the biotic community structure over time towards a more stable, diverse community structure after an initial disturbance to the community. The initial disturbance is often a natural phenomenon or human caused event. Natural disturbances include hurricanes, volcanic eruptions, river movements or an event as small as a fallen tree that creates gaps in the forest. 
In tropical rainforests, these same natural disturbances have been well documented in the fossil record, and are credited with encouraging speciation and endemism. Biodiversity and speciation Tropical rainforests exhibit a vast diversity in plant and animal species. The cause of this remarkable speciation has been a question for scientists and ecologists for years. A number of theories have been developed for why and how the tropics can be so diverse. Interspecific competition hypothesis The Interspecific Competition Hypothesis suggests that because of the high density of species with similar niches in the tropics and the limited resources available, species must do one of two things: become extinct or find a new niche. Direct competition will often lead to one species dominating another by some advantage, ultimately driving it to extinction. Niche partitioning is the other option for a species. This is the separation and rationing of necessary resources by utilizing different habitats, food sources, cover or general behavioral differences. A species with similar food items but different feeding times is an example of niche partitioning. The Theory of Pleistocene Refugia was developed by Jürgen Haffer in 1969 with his article Speciation in Amazonian Forest Birds. Haffer proposed that speciation was the product of rainforest patches being separated by stretches of non forest vegetation during the last glacial period. He called these patches of rainforest refuges, and within these patches allopatric speciation occurred. With the end of the glacial period and increase in atmospheric humidity, rainforest began to expand and the refuges reconnected. This theory has been the subject of debate, and scientists remain skeptical of whether it is legitimate. Genetic evidence suggests speciation had occurred in certain taxa 1–2 million years ago, preceding the Pleistocene. Tropical rainforests are unable to support human life. Food resources within the forest are extremely dispersed due to the high biological diversity and what food does exist is largely restricted to the canopy and requires considerable energy to obtain. Some groups of hunter-gatherers have exploited rainforest on a seasonal basis but dwelt primarily in adjacent savanna and open forest environments where food is much more abundant. Other peoples described as rainforest dwellers are hunter-gatherers who subsist in large part by trading high value forest products such as hides, feathers, and honey with agricultural people living outside the forest. A variety of indigenous people live within deforested patches of rainforest, or subsist as part-time farmers supplemented in large part by trading high-value forest products such as hides, feathers, and honey with agricultural people living outside the forest. People have inhabited the rainforests for thousands of years and have remained so elusive that only recently have some tribes been discovered. On 18 January 2007, FUNAI also reported that it had confirmed the presence of 67 different uncontacted tribes in Brazil, up from 40 in 2005. With this addition, Brazil has now overtaken the island of New Guinea as the country having the largest number of uncontacted tribes. The province of Irian Jaya or West Papua in the island of New Guinea is home to an estimated 44 uncontacted tribal groups.
The pygmy peoples are hunter-gatherer groups living in equatorial rainforests characterized by their short height (below one and a half meters, or 59 inches, on average). Amongst this group are the Efe, Aka, Twa, Baka, and Mbuti people of Central Africa. However, the term pygmy is considered pejorative, so many tribes prefer not to be labeled as such. Some notable indigenous peoples of the Americas, or Amerindians, include the Huaorani, Ya̧nomamö, and Kayapo people of the Amazon. The traditional agricultural system practiced by tribes in the Amazon is based on swidden cultivation (also known as slash-and-burn or shifting cultivation) and is considered a relatively benign disturbance. In fact, when looking at the level of individual swidden plots a number of traditional farming practices are considered beneficial. For example, the use of shade trees and fallowing all help preserve soil organic matter, which is a critical factor in the maintenance of soil fertility in the deeply weathered and leached soils common in the Amazon. There is a diversity of forest people in Asia, including the Lumad peoples of the Philippines and the Penan and Dayak people of Borneo. The Dayaks are a particularly interesting group as they are noted for their traditional headhunting culture. Fresh human heads were required to perform certain rituals such as the Iban “kenyalang” and the Kenyah “mamat”. Pygmies who live in Southeast Asia are, amongst others, referred to as “Negrito”. Cultivated foods and spices Yams, coffee, chocolate, bananas, mangoes, papayas, macadamia nuts, avocados, and sugarcane all originally came from tropical rainforests and are still mostly grown on plantations in regions that were formerly primary forest. In the mid-1980s and 90s, 40 million tons of bananas were consumed worldwide each year, along with 13 million tons of mangoes. Central American coffee exports were worth US$3 billion in 1970. Much of the genetic variation used in evading the damage caused by new pests is still derived from resistant wild stock. Tropical forests have supplied 250 cultivated kinds of fruit, compared to only 20 for temperate forests. Forests in New Guinea alone contain 251 tree species with edible fruits, of which only 43 had been established as cultivated crops by 1985. In addition to extractive human uses, rain forests also have non-extractive uses that are frequently summarized as ecosystem services. Rain forests play an important role in maintaining biological diversity, sequestering and storing carbon, global climate regulation, disease control, and pollination. Despite the negative effects of tourism in the tropical rainforests, there are also several important positive effects. - In recent years ecotourism in the tropics has taken off. As rainforests become rarer, people are flocking to nations that still have this diverse habitat. Locals benefit from the additional income brought in by visitors, and areas deemed interesting to visitors are often conserved. Ecotourism can be an incentive for conservation, especially when it triggers positive economic change. Ecotourism can include a variety of activities including animal viewing, scenic jungle tours and even viewing cultural sights and native villages. If these practices are performed appropriately, this can be beneficial for both locals and the local flora and fauna. - An increase in tourism has increased economic support, allowing more revenue to go into the protection of the habitat.
Tourism can contribute directly to the conservation of sensitive areas and habitat. Revenue from park-entrance fees and similar sources can be utilised specifically to pay for the protection and management of environmentally sensitive areas. Revenue from taxation and tourism provides an additional incentive for governments to contribute revenue to the protection of the forest. - Tourism also has the potential to increase public appreciation of the environment and to spread awareness of environmental problems when it brings people into closer contact with the environment. Such increased awareness can induce more environmentally conscious behavior. Tourism has had a positive effect on wildlife preservation and protection efforts, notably in Africa but also in South America, Asia, Australia, and the South Pacific. Mining and drilling Deposits of precious metals (gold, silver, coltan) and fossil fuels (oil and natural gas) occur underneath rainforests globally. These resources are important to developing nations and their extraction is often given priority to encourage economic growth. Mining and drilling can require large amounts of land development, directly causing deforestation. In Ghana, a West African nation, deforestation from decades of mining activity left about 12% of the country's original rainforest intact. Conversion to agricultural land With the invention of agriculture, humans were able to clear sections of rainforest to produce crops, converting it to open farmland. Such people, however, obtain their food primarily from farm plots cleared from the forest and hunt and forage within the forest to supplement this. The resulting conflict is between the independent farmer providing for his family and the needs and wants of the globe as a whole. Little progress has been made on this issue because no plan has been established that aids all parties. Agriculture on formerly forested land is not without difficulties. Rainforest soils are often thin and leached of many minerals, and the heavy rainfall can quickly leach nutrients from areas cleared for cultivation. People such as the Yanomamo of the Amazon utilize slash-and-burn agriculture to overcome these limitations and enable them to push deep into what were previously rainforest environments. However, these are not rainforest dwellers; rather, they are dwellers in cleared farmland that make forays into the rainforest. Up to 90% of the typical Yanomamo diet comes from farmed plants. Some action has been taken, such as suggesting fallow periods that allow secondary forest to grow and replenish the soil. Practices like soil restoration and conservation can benefit the small farmer and allow better production on smaller parcels of land. The tropics play a major role in reducing atmospheric carbon dioxide. The tropics (most notably the Amazon rainforest) are called carbon sinks. As these carbon sinks are destroyed, atmospheric temperatures rise. The destruction of rainforest has thus contributed to drastic shifts in climate. A simulation was performed in which all rainforest in Africa was removed. The simulation showed an increase in atmospheric temperature of 2.5 to 5 degrees Celsius. Efforts to protect and conserve tropical rainforest habitats are diverse and widespread. Tropical rainforest conservation ranges from strict preservation of habitat to finding sustainable management techniques for people living in tropical rainforests.
International policy has also introduced a market incentive program called Reducing Emissions from Deforestation and Forest Degradation (REDD) for companies and governments to offset their carbon emissions through financial investments into rainforest conservation. - List of tropical and subtropical moist broadleaf forests ecoregions - Temperate rain forest - Tropical and subtropical moist broadleaf forests - Tropical rainforest climate - Tropical Africa - Tropical vegetation - Why the Amazon Rainforest is So Rich in Species. Earthobservatory.nasa.gov (5 December 2005). Retrieved on 28 March 2013. - Why The Amazon Rainforest Is So Rich In Species. ScienceDaily.com (5 December 2005). Retrieved on 28 March 2013. - Olson, David M.; Dinerstein, Eric; Wikramanayake, Eric D.; Burgess, Neil D.; Powell, George V. N.; Underwood, Emma C.; d'Amico, Jennifer A.; Itoua, Illanga et al. (2001). "Terrestrial Ecoregions of the World: A New Map of Life on Earth". BioScience 51 (11): 933–938. doi:10.1641/0006-3568(2001)051[0933:TEOTWA]2.0.CO;2. - Woodward, Susan. Tropical broadleaf Evergreen Forest: The rainforest. Retrieved on 14 March 2009. - Newman, Arnold (2002). Tropical Rainforest: Our Most Valuable and Endangered Habitat With a Blueprint for Its Survival Into the Third Millennium (2 ed.). Checkmark. ISBN 0816039739. - "Rainforests.net – Variables and Math". Retrieved 4 January 2009. - The Regents of the University of Michigan. The Tropical Rain Forest. Retrieved on 14 March 2008. - Rainforests. Animalcorner.co.uk (1 January 2004). Retrieved on 28 March 2013. - Sahney, S., Benton, M.J. & Falcon-Lang, H.J. (2010). "Rainforest collapse triggered Pennsylvanian tetrapod diversification in Euramerica". Geology 38 (12): 1079–1082. doi:10.1130/G31182.1. - Brazil: Deforestation rises sharply as farmers push into Amazon, The Guardian, 1 September 2008 - China is black hole of Asia's deforestation, Asia News, 24 March 2008 - Corlett, R. and Primack, R. (2006). "Tropical Rainforests and the Need for Cross-continental Comparisons". Trends in Ecology & Evolution 21 (2): 104–110. doi:10.1016/j.tree.2005.12.002. - Bruijnzeel, L. A. and Veneklaas, E. J. (1998). "Climatic Conditions and Tropical Montane Forest Productivity: The Fog Has Not Lifted Yet". Ecology 79 (1): 3. doi:10.1890/0012-9658(1998)079[0003:CCATMF]2.0.CO;2. - Phillips, O.; Gentry, A.H.; Reynel, C.; Wilkin, P.; Galvez-Durand b, C. (1994). "Quantitative Ethnobotany and Amazonian Conservation". Conservation Biology 8 (1): 225–48. doi:10.1046/j.1523-1739.1994.08010225.x. - Bourgeron, Patrick S. (1983). "Spatial Aspects of Vegetation Structure". In Frank B. Golley. Tropical Rain Forest Ecosystems. Structure and Function. Ecosystems of the World (14A ed.). Elsevier Scientific. pp. 29–47. ISBN 0-444-41986-1. - Erwin, T.L. (1982). "Tropical forests: Their richness in Coleoptera and other arthropod species". The Coleopterists Bulletin 36: 74–75. JSTOR 4007977. - "Sabah". Eastern Native Tree Society. Retrieved 14 November 2007. - King, David A. and Clark, Deborah A. (2011). "Allometry of Emergent Tree Species from Saplings to Above-canopy Adults in a Costa Rican Rain Forest". Journal of Tropical Ecology 27 (6): 573–79. doi:10.1017/S0266467411000319. - Denslow, J S (1987). "Tropical Rainforest Gaps and Tree Species Diversity". Annual Review of Ecology and Systematics 18: 431. doi:10.1146/annurev.es.18.110187.002243. - Malhi, Yadvinder and Wright, James (2004). "Spatial patterns and recent trends in the climate of tropical rainforest regions".
http://en.wikipedia.org/wiki/Tropical_rainforest
Land reform (also agrarian reform, though that term can have a broader meaning) is an often-controversial alteration in the societal arrangements whereby government administers possession and use of land. Land reform may consist of a government-initiated or government-backed redistribution of real property, generally of agricultural land, or be part of an even more revolutionary program that may include forcible removal of an existing government seen to oppose such reforms. Throughout history, popular discontent with land-related institutions has been one of the most common factors in provoking revolutionary movements and other social upheavals. To those who labor upon the land, the landowner's privilege of appropriating a substantial portion of production (in some cases half or even more) without making a commensurate contribution to production may seem a rank injustice. Consequently, land reform most often refers to the transfer of land from ownership by a relatively small number of wealthy (or noble) owners with extensive land holdings (e.g., plantations, large ranches, or agribusiness plots) to individual ownership by those who work the land. Such transfer of ownership may be with or without consent or compensation; compensation may vary from token amounts to the full value of the land. The land value tax advocated by Georgists is a moderate, market-based version of land reform (a brief numeric sketch of the idea appears below). This definition is somewhat complicated by the issue of state-owned collective farms. In various times and places, land reform has encompassed the transfer of land from private ownership, even peasant ownership in smallholdings, to government-owned collective farms; it has also, in other times and places, referred to the exact opposite: division of government-owned collective farms into smallholdings. The common characteristic of all land reforms is modification or replacement of existing institutional arrangements governing possession and use of land. Land ownership and tenure The variety of land reform derives from the variety of land ownership and tenure, which takes many forms. In addition to outright individual ownership, there is paid agricultural labor, under which someone works the land in exchange for money, payment in kind, or some combination of the two, and there are various forms of collective ownership. The latter typically takes the form of membership in a cooperative, or shares in a corporation, which owns the land (typically in fee simple or its equivalent, but possibly under other arrangements). There are also various hybrids: in many communist states, government ownership of most agricultural land has been combined in various ways with tenure for farming collectives. Additionally there are, and have been, well-defined systems in which neither the land nor the houses people live in are their personal property (Statare, as defined in Scandinavia). The peasants or rural agricultural workers who are usually the intended primary beneficiaries of a land reform may be, prior to the reform, members of failing collectives, owners of inadequate small plots of land, paid laborers, sharecroppers, serfs, or even slaves or people effectively enslaved by debt bondage. Arguments for and against land reform Land reform policies are generally advocated as an effort to eradicate food insecurity and rural poverty, often with Utilitarian arguments (i.e., "the greatest good for the greatest number"), philosophical or religious arguments (see Jubilee), an appeal to a right to dignity, or a simple belief that justice requires a policy of "land to the tiller".
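As a concrete illustration of the land value tax mentioned above, here is a minimal sketch; the parcel values and the 2% rate are purely hypothetical, and the only point it makes is that the Georgist levy is assessed on unimproved land value, whereas a conventional property tax also falls on buildings and other improvements.

```python
def land_value_tax(unimproved_land_value: float, rate: float) -> float:
    """Georgist-style levy: assessed on the unimproved value of the land only."""
    return unimproved_land_value * rate

def conventional_property_tax(unimproved_land_value: float,
                              improvement_value: float, rate: float) -> float:
    """Typical property tax: assessed on land plus buildings/improvements."""
    return (unimproved_land_value + improvement_value) * rate

# Hypothetical parcel: land worth 100,000, improvements worth 250,000, 2% rate.
print(land_value_tax(100_000, 0.02))                      # 2000.0 -- building more does not raise the bill
print(conventional_property_tax(100_000, 250_000, 0.02))  # 7000.0 -- improvements are taxed as well
```

Because the levy leaves improvements untaxed, its advocates present it as redistributing the rental value of land without expropriating the land itself.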
However, many of these arguments conflict with prevailing notions of property rights in most societies and states. Implementations of land reform generally raise questions about how the members of the society view the individual's rights and the role of government. These questions include: - Is private property of any sort legitimate? - If so, is land ownership legitimate? - If so, are historic property rights in this particular state and society legitimate? - Even if property rights are legitimate, do they protect absolutely against expropriation, or do they merely entitle the property owner to partial or complete compensation? - How should property rights be weighed against rights to life and liberty? - Who should adjudicate land ownership disputes? - At what level of government is common land owned? - What constitutes fair land reform? - What are the internal and external political effects of the land reform? Concern over the value of land reform is based upon the following: - Lack of consistent track record to support land reform outcome; for example, in Zimbabwe, an aggressive land reform plan has led to a collapse of the economy and 45 percent malnutrition, while land reforms in Taiwan after WW II preceded a multi-decade economic boom that turned a poor country into a rich one. - Question of experience and competence of those receiving land to use it productively - Equity issues of displacing persons who have sometimes worked hard in previous farming of the land - Question of competence of governmental entities to make decisions regarding agricultural productivity - Question of miring a country in vast legal disputes from arbitrary property distribution - Demotivation of any property owners to invest in land that ultimately can be seized Opposing "royal libertarian" (but not geolibertarian) ethical arguments to government-directed "land reform" maintain it is just a euphemism for theft, and argue that stealing is still stealing regardless of whether property was originally justly obtained, or what any group of non-owners (of the property in question) may succeed in obtaining via government intermediary, and that such policies consequently cannot ever be just. They state that alleged "willing seller, willing buyer" programs also invariably involve governments buying land with tax-money (which may or may not be disproportionately collected from those whose land is the subject of the planned reform), and sometimes laws granting government first right to buy land for sale (diminishing the market value of the land by eliminating competing buyers), and so an element of coercion exists despite the "willing" label. The opposition for a land reform may also be based on other ideologies than modern-day liberalism. In countries where there has traditionally been no private land ownership (e.g. Russia in 19th century) the opposition for reforms enabling the creation of private farms may use nationalistic arguments, proposing that the private farms are inconsistent with the national culture. In countries where the established church was an important land owner, theological arguments have been used in the debate on privatization or nationalization of that land (e.g. 16th century Sweden). The right to ownership of the land, and sometimes, the persons residing on that land, has also been argued on the theory of right of conquest, implying that the original ownership was transferred to the land-owning class's ancestors in a just war. 
The ownership can also be argued on the ground of god-given right, implying that a supernatural power has given the land to its owners. For the proponents of the reform, the rights of the individuals for whose good the reform is supposed to work trump the property rights of the land owners. Usually their philosophical background differs significantly from the viewpoints outlined above, spanning from Marxism to religious ideologies. What is common for them, is that they see the rights or duties advocated as more important than a right to own real estate. Land reform efforts Agrarian land reform has been a recurring theme of enormous consequence in world history — see, for example, the history of the Semproninan Law or Lex Sempronia agraria proposed by Tiberius Sempronius Gracchus and passed by the Roman Senate (133 BC), which led to the social and political wars that ended the Roman Republic A historically important source of pressure for land reform has been the accumulation of significant properties by tax-exempt individuals or entities. In ancient Egypt, the tax exemption for temple lands eventually drove almost all the good land into the hands of the priestly class, making them immensely rich (and leaving the world a stunning legacy of monumental temple architecture that still impresses several millennia later), but starving the government of revenue. In Rome, the land tax exemption for the noble senatorial families had a similar effect, leading to Pliny's famous observation that the latifundia (vast landed estates) had ruined Rome, and would likewise ruin the provinces. In the Christian world, this has frequently been true of churches and monasteries, a major reason that many of the French revolutionaries saw the Catholic church as an accomplice of the landed aristos. In the Moslem world, land reforms such as that organized in Spain by al-Hurr in 718 have transferred property from Muslims to Christians, who were taxable by much higher rates. In the modern world and in the aftermath of colonialism and the Industrial Revolution, land reform has occurred around the world, from the Mexican Revolution (1917; the revolution began in 1910) to Communist China to Bolivia (1952, 2006) to Zimbabwe and Namibia. Land reform has been especially popular as part of decolonization struggles in Africa and the Arab world, where it was part of the program for African socialism and Arab socialism. Cuba has seen one of the most complete agrarian reforms in Latin America. Land reform was an important step in achieving economic development in many Third World countries since the post-World War II period, especially in the East Asian Tigers and "Tiger Cubs" nations such as Taiwan, South Korea, and Malaysia. Since mainland China's economic reforms led by Deng Xiaoping land reforms have also played a key role in the development of the People's Republic of China, with the re-emergence of rich property developers in urban areas (though as in Hong Kong, land in China is not privately owned but leased from the state, typically on very long terms that allow substantial opportunity for private speculative gain). - Brazil: In the 1930s, Getúlio Vargas reneged on a promised land reform. A first attempt to make a national scale reform was set up in the government of José Sarney, as a result of the strong popular movement that had contributed to the fall of the military government. However, the so-called First Land Reform National Plan never was put into force. 
Strong campaign including direct action by the Landless Workers' Movement throughout the 1990s has managed to get some advances for the past 10 years, during the Fernando Cardoso and Lula da Silva administrations. - Bolivia: The revolution of 1952 was followed by a land reform law, but in 1970 only 45% of peasant families had received title to land, although more land reform projects continued in the 1970s and 1980s. Bolivian president Evo Morales restarted land reform when he took office in 2006. On 29 November 2006, the Bolivian Senate passed a bill authorizing the government redistribution of land among the nation's mostly indigenous poor. The bill was signed into law hours later, though significant opposition is expected - Chile: Attempts at land reform began under the government of Jorge Alessandri in 1960, were accelerated during the government of Eduardo Frei Montalva (1964-1970), and reached its climax during the 1970-1973 presidency of Salvador Allende. Farms of more than 198 acres (80 hectares) were expropriated. After the 1973 coup the process was halted, and up to a point reversed by the market forces. - Colombia: Alfonso López Pumarejo (1934-1938) passed the Law 200 of 1936, which allowed for the expropriation of private properties, in order to promote "social interest". Later attempts declined, until the National Front presidencies of Alberto Lleras Camargo (1958-1962) and Carlos Lleras Restrepo (1966-1970), which respectively created the Colombian Institute for Agrarian Reform (INCORA) and further developed land entitlement. In 1968 and 1969 alone, the INCORA issued more than 60,000 land titles to farmers and workers. Despite this, the process was then halted and the situation began to reverse itself, as the subsequent violent actions of drug lords, paramilitaries, guerrillas and opportunistic large landowners severely contributed to a renewed concentration of land and to the displacement of small landowners. In the early 21st century, tentative government plans to use the land legally expropriated from drug lords and/or the properties given back by demobilized paramilitary groups have not caused much practical improvement yet. - Cuba: (See also main article Agrarian Reform Laws of Cuba) Land reform was among the chief planks of the revolutionary platform of 1959. Almost all large holdings were seized by the National Institute for Agrarian Reform (INRA), which dealt with all areas of agricultural policy. A ceiling of 166 acres (67 hectares) was established, and tenants were given ownership rights, though these rights are constrained by government production quotas and a prohibition of real estate transactions. - El Salvador: One among several land reform efforts was made during the revolution/civil-war during the 1980s. Salvadoran President Jose Napoleon Duarte promoted land-reform as counter-strategy in the war, while the FMLN carried out their own land-reform in the territory under their control. - Guatemala: land reform occurred during the "Ten Years of Spring", 1944–1954 under the governments of Juan José Arévalo and Jacobo Arbenz. It has been remarked that it was one of the most successful land reforms in history, given that it was relatively thorough and had minimal detrimental effects on the economy and on the incomes of wealthy classes (who were mostly spared because only uncultivated land was expropriated). The reforms were reversed entirely after a US-backed coup deposed the Arbenz government. 
- Mexico: The first land reform was driven by the Ley Lerdo (the Lerdo Law of 1856), enacted by the liberals during the Reform War of the 1850s. One of the aims of the reform government was to develop the economy by returning to productive cultivation the underutilized lands of the Church and the municipal communities (Indian commons), which required the distribution of these lands to small owners. This was to be accomplished through the provisions of the Ley Lerdo that prohibited ownership of land by the Church and the municipalities. The reform government also financed its war effort by seizing and selling church property and other large estates. After the war the principles of the Ley Lerdo were perverted by President Porfirio Diaz, which contributed to causing the Mexican Revolution in 1910. A certain degree of land reform was introduced, albeit unevenly, as part of the Mexican Revolution. Francisco Madero and Emiliano Zapata were strongly identified with land reform, as is the present-day (as of 2006) Zapatista Army of National Liberation. See Mexican Agrarian Land Reform.
- Nicaragua: Land reform was one of the programs of the Sandinista government. The last months of Sandinista rule were criticized for the Piñata Plan, which distributed large tracts of land to prominent Sandinistas.
- Peru: Land reform in the 1950s largely eliminated a centuries-old system of debt peonage. Further land reform occurred after the 1968 coup by left-wing colonel Juan Velasco Alvarado, and again as part of a counterterrorism effort against the Shining Path during the internal conflict in Peru of roughly 1988–1995, led by Hernando de Soto and the Institute for Liberty and Democracy during the early years of the government of Alberto Fujimori, before the latter's auto-coup.
- Venezuela: Hugo Chávez's government enacted Plan Zamora to redistribute government and unused private land to campesinos in need. Middle East and North Africa Land reform is discussed in the article on Arab Socialism.
- Egypt: Initially, Egyptian land reform essentially abolished the political influence of major land owners. However, land reform only resulted in the redistribution of about 15% of Egypt's land under cultivation, and by the early 1980s the effects of land reform in Egypt had drawn to a halt as the population of Egypt moved away from agriculture. The Egyptian land reform laws were greatly curtailed under Anwar Sadat and eventually abolished.
- Syria: Land reforms were first implemented in Syria in 1958. The Agricultural Relations Law laid down a redistribution of rights in landownership, tenancy and management. A combination of factors led to the halt of the reforms in 1961; these included opposition from large landowners and severe crop failure during a drought between 1958 and 1961, while Syria was a member of the doomed United Arab Republic (UAR). After the Ba'th Party gained power in 1963 the reforms were resumed. The reforms were portrayed by the governing Ba'th Party as politically motivated to benefit the rural property-less communities. According to Arsuzi, a co-founder of the Ba'th Party, the reforms would "liberate 75 percent of the Syrian population and prepare them to be citizens qualified to participate in the building of the state". It has been argued that the land reform represented the work of a 'socialist government'; however, by 1984 the private sector controlled 74 percent of Syria's arable land.
This calls into question both the Ba'th Party's claimed commitment to redistributing land to the majority of peasants and the characterization of the state government as socialist: if it allowed the majority of land to be owned by the private sector, how could it truly be socialist? Hinnebusch argued that the reforms were a way of galvanising support from the large rural population: "they [Ba'th Party members] used the implementation of agrarian reform to win over and organise peasants and curb traditional power in the countryside". To this extent the reforms succeeded, with an increase in Ba'th Party membership; they also prevented a political threat from emerging from rural areas by bringing the rural population into the system as supporters.
- Iran: Significant land reform in Iran took place under the Shah as part of the socio-economic reforms of the White Revolution, begun in 1962 and agreed upon through a public referendum. At this time the Iranian economy was not performing well and there was political unrest. Essentially, the land reforms amounted to a huge redistribution of land to rural peasants who previously had no possibility of owning land, as they were poorly paid labourers. The land reforms continued from 1962 until 1971, with three distinct phases of land distribution: private, government-owned and endowed land. These reforms resulted in the newly created peasant landowners owning six to seven million hectares, around 52–63% of Iran's agricultural land. According to Country-Data, even though there had been a considerable redistribution of land, the amount received by individual peasants was not enough to meet most families' basic needs: "About 75 percent of the peasant owners [however] had less than 7 hectares, an amount generally insufficient for anything but subsistence agriculture." By 1979 a quarter of prime land was in disputed ownership and half of the productive land was in the hands of 200,000 absentee landlords. The large landowners were able to retain the best land, with the best access to fresh water and irrigation facilities. In contrast, not only were the new peasant landholdings too small to produce an income, but the peasants also lacked both a quality irrigation system and the sustained government support needed to develop their land and make a reasonable living. Set against the economic boom from oil revenue, it became apparent that the land reforms did not make life better for the rural population: according to Amid, "only a small group of rural people experienced increasing improvements in their welfare and poverty remained the lot of the majority". Moghadam argues that the structural changes to Iran initiated by the White Revolution, including the land reforms, contributed to the revolution in 1979 which overthrew the Shah and turned Iran into an Islamic republic.
- Albania has gone through three waves of land reform since the end of World War II: in 1946 the land in estates and large farms was expropriated by the communist government and redistributed among small peasants; in the 1950s the land was reorganized into large-scale collective farms; and after 1991 the land was again redistributed among private smallholders. At the end of World War II, the farm structure in Albania was characterized by a high concentration of land in large farms.
In 1945, farms larger than 10 hectares, representing numerically a mere 3% of all farms in the country, managed 27% of agricultural land, while just seven large estates (out of 155,000 farms) controlled 4% of agricultural land, averaging more than 2,000 hectares each (compared to the average farm size of 2.5 hectares at that time). The first post-war constitution of independent Albania (March 1946) declared that land belonged to the tiller and that large estates under no circumstances could be owned by private individuals (article 10). The post-war land reform of 1946 redistributed 155,000 hectares (40% of the land stock) from 19,355 relatively large farms (typically larger than 5 hectares) to 70,211 small farms and landless households. As a result, the share of large farms with more than 10 hectares declined from 27% of agricultural land in 1945 to 3% in 1954. By 1954, more than 90% of land was held in small and mid-sized farms of between 1 hectare and 10 hectares. The distributive effects of the post-war land reform were eliminated by the collectivization drive of the late 1950s and early 1960s, and by 1962 less than 18% of agricultural land remained in family farms and household plots (the rest had shifted to Soviet-style collective and state farms). By 1971 independent family farms had virtually disappeared and individual farming survived only in household plots cultivated part time by cooperative members (approximately 6% of agricultural land). The post-communist land reform begun in 1991 as part of the transition to the market was in effect a replay of the 1946 land reform, and the arable land held in cooperatives and state farms was equally distributed among all rural households without regard to pre-communist ownership rights. Unlike other transition countries in Central and Eastern Europe, Albania adopted a distributive land reform (like the CIS) and did not restitute land to former owners. The post-communist land reform of the 1990s was accompanied by special land privatization legislation, as Albania was the only country outside the former Soviet Union that had nationalized all agricultural land (in stages between 1946 and 1976).
- Bulgaria: Upon independence in 1878 the estates of the overwhelmingly Turkish nobility were redistributed among peasant smallholders. Additional reforms were implemented in 1920–23, and a maximum ownership of 30 hectares was fixed.
- Czechoslovakia: A major land reform was passed in 1919, redistributing mainly German nobles' estates to peasant smallholders. By 1937, 60% of noble land had been expropriated, with the remaining land mainly in unarable areas or in German and Hungarian lands. Almost all remaining lands were redistributed in the reforms of 1945 and 1948.
- Finland: In the general reparcelling out of land, begun in 1757, the medieval model of all fields consisting of numerous strips, each belonging to a farm, was replaced by a model of fields and forest areas each belonging to a single farm. In the further reparcellings, which started to take place in 1848, the idea of concentrating all the land of a farm into a single piece of real estate was reinforced. In these reparcelling processes, the land is redistributed in direct proportion to earlier prescription. Both the general reparcelling and the further reparcelling processes are still active in some parts of the country. In 1918, Finland fought a civil war that resulted in a series of land reforms.
These included the compensated transfer of lease-holdings (torppa) to the leaseholders and a prohibition on forestry companies acquiring land. After the Second World War, Karelians evacuated from areas ceded to Russia were given land in the remaining Finnish areas, taken from public and private holdings. War veterans also benefited from these allotments.
- France: A major and lasting land reform took place under the Directory during the latter phases of the French Revolution.
- Greece: At independence in 1835 the predominantly Turkish nobles' estates were redistributed as peasant smallholdings.
- Estonia and Latvia: At their founding as states in 1918–1919, they expropriated the large estates of Baltic German landowners, most of which were distributed among the peasants and became smallholdings.
- Hungary: In 1945 every estate bigger than 142 acres was expropriated without compensation and distributed among the peasants. In the 1950s collective ownership was introduced according to the Soviet model, but after 1990 the co-ops were dissolved and the land was redistributed among private smallholders.
- Ireland: After the Irish Famine, land reform became the dominant issue in Ireland, where almost all of the land was owned by the Protestant Ascendancy. The Irish Parliamentary Party pressed for reform in a largely indifferent British House of Commons. Reform began tentatively in 1870 and continued for fifty years, during which a number of Irish Land Acts were passed (see also Land War).
- Lithuania: The major land reform was initiated in 1919 and fully launched in 1922. Excess land was taken from the major landowners, mostly aristocracy, and redistributed among new landowners, primarily soldiers, and small landowners, 65,000 in total.
- Montenegro and Serbia: At independence in 1830 the predominantly Turkish nobles' estates were divided up among peasant smallholders.
- Poland: There have been several land reforms in Poland. The most important include the land reforms in the Second Polish Republic (1919, 1921, 1923, 1925 and 1928) and the land reform of 1944 in the People's Republic of Poland.
- Romania: After failed attempts at land reform by Mihail Kogălniceanu in the years immediately after Romanian unification in 1863, a major land reform finally occurred in 1921, with a few additional reforms carried out in 1945.
- Slovenia and Croatia: With their absorption into the Kingdom of Yugoslavia, a land reform was passed in 1919, with subsidiary laws thereafter redistributing nobles' estates among peasant smallholders. Additional reform was implemented in 1945 under the communists.
- Soviet Union
- Scotland: The Land Reform (Scotland) Act 2003 ends the historic legacy of feudal law and creates a framework for rural or crofting communities' right to buy land in their area.
- Sweden: In 1757 the general reparcelling out of land began. In this process, the medieval principle of dividing all the fields in a village into strips, each belonging to a farm, was changed into a principle of each farm consisting of a few relatively large areas of land. The land was redistributed in proportion to earlier possession of land, while uninhabited forests far from villages were socialized. In the 20th century, Sweden, almost non-violently, arrived at regulating the minimum length of tenant farming contracts at 25 years.
- Ethiopia: The Derg carried out one of the most extensive land reforms in Africa in 1975.
- Kenya: Kenyatta launched a "willing buyer-willing seller" based land reform program in the 1960s, funded by Britain, the former colonial power. In 2006 president Mwai Kibaki said it will repossess all land owned by "absentee landlords" in the coastal strip and redistribute it to squatters. - Namibia: A limited land reform has been a hallmark of the regime of Sam Nujoma; legislation passed in September 1994, with a compulsory, compensated approach. - South Africa: "Land restitution" was one of the promises made by the African National Congress when it came to power in South Africa in 1994. Initially, land was bought from its owners (willing seller) by the government (willing buyer) and redistributed. However, as of early 2006, the ANC government announced that it will start expropriating the land, although according to the country's chief land-claims commissioner, Tozi Gwanya, unlike Zimbabwe there will be compensation to those whose land is expropriated, "but it must be a just amount, not inflated sums. - Zimbabwe: Efforts at land reform in Zimbabwe under Robert Mugabe moved, after 15 years, in the 1990s, from a "willing seller, willing buyer" approach to the "fast track" land reform program. This was accelerated by "popular seizure" led by machete gangs of "war veterans" associated with the ruling party. Many parcels of land came under the control of people close to the government, as is the case throughout Africa. The several forms of forcible change in management caused a severe drop in production and other economic disruptions. In addition, the human rights violations and bad press led Britain, the European Union, the United States, and other Western allies to impose sanctions on the Zimbabwean government. All this has caused the collapse of the economy. The results have been disastrous and have resulted in widespread food shortages and large scale refugee flight. - China has been through a series of land reforms: - In the 1940s, the Sino-American Joint Commission on Rural Reconstruction, funded with American money, with the support of the national government, carried out land reform and community action programs in several provinces. - The thorough land reform launched by the Communist Party of China in 1946, three years before the foundation of the People's Republic of China (PRC), won the party millions of supporters among the poor and middle peasantry. The land and other property of landlords were expropriated and redistributed so that each household in a rural village would have a comparable holding. This agrarian revolution was made famous in the West by William Hinton's book Fanshen. - In the mid-1950s, a second land reform during the Great Leap Forward compelled individual farmers to join collectives, which, in turn, were grouped into People's Communes with centrally controlled property rights and an egalitarian principle of distribution. This policy was generally a failure in terms of production. The PRC reversed this policy in 1962 through the proclamation of the Sixty Articles. As a result, the ownership of the basic means of production was divided over three levels with collective land ownership vested in the production team (see also Ho ). - A third land reform beginning in the late 1970s re-introduced family-based contract system called the Household Responsibility System, which had enormous initial success, followed by a period of relative stagnation. 
Chen, Wang, and Davis suggest that the later stagnation was due, in part, to a system of periodic redistribution that encouraged over-exploitation rather than capital investment in future productivity. However, although land use rights were returned to individual farmers, collective land ownership was left undefined after the disbandment of the People's Communes.
- Since 1998 China has been in the midst of drafting a new Property Law, the first piece of national legislation that will define the land ownership structure in China for years to come. The Property Law forms the basis for China's future land policy of establishing a system of freehold, rather than of private ownership (see also Ho).
- India: Owing to the taxation and regulation imposed under the British Raj, India at the time of independence inherited a semi-feudal agrarian system, with ownership of land concentrated in the hands of a few individual landlords (Zamindars, under the Zamindari system). Since independence, there have been voluntary and state-initiated or state-mediated land reforms in several states. The most notable and successful examples of land reform are in the states of West Bengal and Kerala. After promising land reforms and being elected to power in West Bengal, the Communist Party of India (Marxist, CPI-M) kept its word and initiated gradual land reforms. The result was a more equitable distribution of land among landless farmers. This has ensured an almost lifelong loyalty from the farmers, and the communists have been in power ever since. In Kerala, the only other large state where the CPI(M) came to power, state administrations have actually carried out the most extensive land, tenancy and agrarian labor wage reforms in the non-socialist late-industrializing world. Another successful land reform program was launched in Jammu and Kashmir after 1947. However, this success was not replicated in other areas such as the states of Andhra and Madhya Pradesh, where the more radical Communist Party of India (Maoist), or Naxalites, resorted to violence after failing to secure power. Even in West Bengal, the economy suffered for a long time as a result of communist economic policies that did little to encourage heavy industry. In the state of Bihar, tensions between landowners' militias, villagers and Maoists have resulted in numerous massacres. All in all, land reforms have been successful only in pockets of the country, as people have often found loopholes in the laws setting limits on the maximum area of land held by any one person.
- Japan: The first land reform, the Land Tax Reform, was passed in 1873 as a part of the Meiji Restoration. Another land reform was carried out in 1947, during the occupation era after World War II, on the instructions of GHQ and based on a proposal from the Japanese government; it had been prepared before the defeat of the Greater Japanese Empire. It is also called Nōchi-kaihō (農地解放, "emancipation of farming land").
- Taiwan: In the 1950s, after the Nationalist government came to Taiwan, land reform and community development were carried out by the Sino-American Joint Commission on Rural Reconstruction. This course of action was made attractive, in part, by the fact that many of the large landowners were Japanese who had fled, and the other large landowners were compensated with Japanese commercial and industrial properties seized after Taiwan reverted from Japanese rule in 1945. The land program succeeded also because the Kuomintang were mostly from the mainland and had few ties to the remaining indigenous landowners.
- Vietnam: In the years after World War II, even before the formal division of Vietnam, land reform was initiated in North Vietnam. This land reform (1953–1956) redistributed land to more than 2 million poor peasants, but at a cost of tens to hundreds of thousands of lives, and it was one of the main reasons for the mass exodus of 1 million people from the North to the South in 1954. The probable democide for this four-year period totals 283,000 North Vietnamese. South Vietnam made several further attempts in the post-Diem years, the most ambitious being the Land to the Tiller program instituted in 1970 by President Nguyen Van Thieu. This limited individuals to 15 hectares, compensated the owners of expropriated tracts, and extended legal title to peasants in areas under South Vietnamese government control to whom land had previously been distributed by the Viet Cong. Mark Moyar asserts that while it was effectively implemented only in some parts of the country, "In the Mekong Delta and the provinces around Saigon, the program worked extremely well... It reduced the percentage of total cropland cultivated by tenants from sixty percent to ten percent in three years."
- South Korea: In 1945–1950, United States and South Korean authorities carried out a land reform that retained the institution of private property. They confiscated and redistributed all land held by the Japanese colonial government, Japanese companies, and individual Japanese colonists. The Korean government carried out a reform whereby Koreans with large landholdings were obliged to divest most of their land. A new class of independent family proprietors was created.
- Fiji: In a reversal that proves the rule that land reform benefits the native and indigenous people, the land in Fiji has always been owned by native Fijians, but much of it has been leased long-term to immigrant Indians. As these leases have reached the end of their terms, native Fijians have increasingly refused to renew them and have expelled the Indians.
- P.P.S. Ho, "Who Owns China's Land? Policies, Property Rights and Deliberate Institutional Ambiguity", The China Quarterly, Vol. 166, June 2001, pp. 387–414.
- R. H. Tawney, Land and Labour in China, New York: Harcourt Brace & Company, 1932.
- Fu Chen, Liming Wang and John Davis, "Land Reform in Rural China since the Mid-1980s", Land Reform 1998/2, a publication of the Sustainable Development Department of the United Nations Food and Agriculture Organization (FAO).
- William H. Hinton, Fanshen: A Documentary of Revolution in a Chinese Village, New York: Monthly Review Press, 1966. ISBN 0-520-21040-9.
- P.P.S. Ho, Institutions in Transition: Land Ownership, Property Rights and Social Conflict in China, Oxford and New York: Oxford University Press, 2005.
- Mark Moyar, "Villager Attitudes during the Final Decade of the Vietnam War", presented at the 1996 Vietnam Symposium "After the Cold War: Reassessing Vietnam".
- Summary of "Efficiency and Wellbeing for 80 Years" by Tarmo Luoma, on the site of TEHO magazine.
http://www.reference.com/browse/land-owning
See Also Aggregate Demand Curve Classical theory relies on market adjustments to changes in individual supplies and demands to keep an economy close to full employment. Thus, it predicts a vertical long-run Aggregate Supply curve. Keynesian theory deals with a depressed economy---so many resources are idle that Aggregate Supply is horizontal. An intermediate position is that Aggregate Supply is positively sloped. In the short run, the Aggregate Supply curve reflects a positive relationship between the price level and the real quantity of National Output. This short-run positive relationship occurs primarily because production costs (e.g., wages) are "sticky" relative to output prices when demand changes. Increases to Aggregate Demand cause movements up along the Aggregate Supply curve in which prices rise more quickly than wages, so higher profit per unit induces more output. Declines in Aggregate Demand reverse these movements along the Aggregate Supply curve---prices fall more quickly than costs, so profits decline and firms reduce production. Along the Aggregate Supply curve shown in Figure 3, if output is below Q0 and much capacity is idle, then output can increase in the short run without significant hikes in the price level. But when the classical prediction of full employment is approached, even small increases in output above Qf necessitate large increases in the price level. Between output levels Q0 and Qf, moderate growth in output results in relatively smaller price hikes. Figure 3 The Aggregate Supply Curve As is true for typical market supply curves, the Aggregate Supply curve is positively sloped. Thus, increases in Aggregate Demand along a stable Aggregate Supply curve normally entail increases in the price level and national output. National Output and the Work Force Increases in the total demand for labor generate pressure for more employment and output and for hikes in wages and prices as well. Conversely, declines in the economy-wide demand for labor create pressure for lower employment, output, wages, and prices. There are, however, differences between short-run and long-run adjustments. Understanding Aggregate Supply requires an appreciation of these differences. Labor Markets: The Short Run Keynesian models emphasize short-run adjustments. Suppose Aggregate Demand grew slightly during a severe depression like that of the 1930s. Keynes believed that high unemployment permits firms to fill all vacant positions at the going wage, so the relevant part of the aggregate labor-supply curve is assumed to be flat, while extensive idle capital limits diminishing returns as a problem. Thus, if the demand for labor grows during a depression, employment and output rise, but wages and prices may not. Consequently, Keynesian analysis assumes a horizontal Aggregate Supply curve (see the Keynesian depression range in Figure 3). Remember that from Keynes' point of view, Say's Law was backwards and should have read "Demand creates its own supply." Keynes' assumptions about depressed labor markets are drawn from the 1930s experience of a prolonged depression. Labor's supply curve normally has a positive slope because drawing more workers into the labor force requires higher wages; rising wages also enable unemployed workers to more rapidly find jobs they perceive as suitable. As firms pay higher wages to attract additional employees, wages rise for all workers. This results in a moderately positive slope in the Aggregate Supply curve in its intermediate range in Figure 3. 
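As a rough illustration of these three ranges, here is a minimal sketch of a piecewise short-run Aggregate Supply relationship. The output thresholds (Q0, Qf), the price floor, and the slope used below are hypothetical placeholders chosen only to reproduce the flat/moderately rising/vertical shape described in the text, not figures from it.

```python
def as_price_level(output, q0=500.0, qf=1000.0, p_floor=100.0, slope=0.08):
    """Minimum price level consistent with a given real output on a stylized
    short-run AS curve: flat in the Keynesian depression range (output <= q0),
    moderately rising in the intermediate range (q0 < output <= qf), and
    effectively vertical at full employment (output cannot exceed Qf)."""
    if output <= q0:
        return p_floor                           # idle capacity: output grows, prices do not
    if output <= qf:
        return p_floor + slope * (output - q0)   # intermediate range: moderate price hikes
    raise ValueError("output beyond Qf is infeasible; the AS curve is vertical at Qf")

for q in (300, 700, 1000):
    print(q, as_price_level(q))   # 100.0, 116.0, 140.0 with the placeholder values
```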
Workers know that higher prices lower their real wages, but there is a lag between a given inflationary decline in real wages and labor's recognition of this loss. Workers may temporarily be "fooled" by hikes in money wages that are less than inflation, so more labor services may be offered even if real wages decline. This is shown in Figure 4. Initially, Q0 workers are hired at a wage of $10 per hour (at point a where DL(P=100) and SL(P=100) cross). Since the price level is 100, real wages (w/P) is also $10 ($10.00/1.00 = $10.00). Figure 4 Labor Markets and Changes in Inflation Rates Workers are aware that higher prices lower their real wages, but there is often a lag between inflation and labor's recognition of this loss. Rising prices (P=120) increase the demand for labor and the employment moves from the original equilibrium at point a to point b. Initially, more labor is supplied at $11.00 per hour. Eventually, however, workers realize that this higher nominal wage is less than the original real wage of $10.00 and labor supply shifts to SL(P=120) and the new equilibrium is point c where employment declines to Q0 and real wages return to $10.00 ($12.00/1.20 = $10.00). Firms hire more labor to produce more output if they perceive increased profit opportunities---output prices that rise faster than nominal wages. Rising prices (P=120) generate additional demands for labor equal to DL(P=120) and employment grows to Q1 and wages rise to $11.00 per hour. In the short run, workers fail to revise their expectations about changes in the price level and are fooled because they believe the original price level will prevail. Notice that real wages (w/P) have actually fallen to $9.17 per hour ($11.00/1.20 = $9.17). Workers may suffer from inflation illusion in the very short run, but their misconceptions are unlikely to persist. Labor Markets: The Longer Run The long-run orientation of classical reasoning represents the polar extreme from Keynesian analysis. New classical economics assumes that workers react to changes in real wages almost instantly, keeping the economy close to full employment. At the very least, there can be no involuntary unemployment. Workers try to base decisions about work on real wages, not on nominal money wages--- what their earnings will buy, not the money itself. In the longer run, workers recognize that price hikes reduce their real wages (w/P) and react by reducing the real supply of labor. This is shown as a reduction in labor supply to SL(P=120) in Figure 4. This restores the labor market to long-run equilibrium at point c where Q0 workers are employed at a real wage of $10.00 ($12.00/1.20 = $10.00). To summarize, suppose Aggregate Demand grows in an economy that is close to full employment. Higher money wages may temporarily lure more workers into the labor force if they expect the price level to remain constant. Once workers recognize that prices have risen, they demand commensurate raises. This yields the vertical long-run Aggregate Supply curve shown as the classical range in Figure 3. In reality, workers react slowly to changes in real earnings. A couple of reasons help explain why individual workers may be more easily "fooled" by inflation than are the firms that employ them. First, a major decision by a big firm may put millions of dollars on the line, while individual workers have only their salary at risk. Thus, firms devote more resources to forecasts of inflation. 
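Before turning to the second reason, a small worked sketch may help fix the real-wage arithmetic from the Figure 4 discussion above. It simply re-computes the document's own numbers; the only added assumption is the usual deflator convention with a base price level of 100.

```python
def real_wage(nominal_wage: float, price_index: float, base: float = 100.0) -> float:
    """Real wage w/P: the nominal wage deflated by the price level."""
    return nominal_wage / (price_index / base)

# Points a, b, and c from the Figure 4 discussion:
print(round(real_wage(10.00, 100), 2))  # 10.0  -- point a: original real wage
print(round(real_wage(11.00, 120), 2))  # 9.17  -- point b: workers temporarily "fooled"
print(round(real_wage(12.00, 120), 2))  # 10.0  -- point c: real wage restored in the long run
```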
Second, a firm only needs to estimate how much extra revenue will be generated if extra workers are hired to know how much of a monetary wage (w) it can profitably afford to pay. This calculation requires only estimates of the worker's physical productivity and a forecast of the price (pi) at which the firm will be able to sell its own product. Thus, the real wage paid to a worker, from the vantage point of the firm, is w/pi. Workers, on the other hand, must have forecasts of the prices of all goods they expect to buy (e.g., the CPI) before they can estimate the future purchasing power of their monetary wages. The real wage from the point of view of a typical worker equals w/CPI, where the CPI is the price level. Thus, firms may need less information about future prices (only pi) to make profitable decisions than workers need (forecasts of most prices in the CPI) as a guide for personally beneficial decisions. Finally, in reality, when real wages drop, the options open to most workers are limited. Quitting and looking for a new position involves significant transaction costs, including search, interviews, lost wages, and possibly uprooting the family and moving to another region of the country. Labor immobility and high information and transaction costs may mean that labor isn't "fooled" but simply can only adequately adjust in the longer term. As a result, firms have better access to information about inflation while actually needing less information than workers do to react to changes in the price level. Workers also respond relatively slowly because (a) most long-term union contracts set nominal wages for the lives of these agreements, (b) both employers and employees often implicitly agree to contracts where wages are adjusted only at scheduled intervals, and (c) changing jobs often entails considerable "search time" and lost income. Shifts in Aggregate Supply The Aggregate Supply curve shifts when technology changes or when resource availability or costs change. Technological advances boost Aggregate Supply, while disruptions in resource markets, higher tax rates, or inefficient new government regulations are among the negative shocks to Aggregate Supply. Shocks Operating Through the Labor Market The notion that enhanced incentives to supply resources would boost Aggregate Supply was at the heart of the 25 percent cut in tax rates during 1981–1983 and was also partially responsible for attempts to cut growth in government transfer payments. The Reagan administration felt that this strategy would boost Aggregate Supply more than the tax cuts increased Aggregate Demand, so that substantial growth would more than offset any emerging inflationary pressure. Naturally, higher marginal tax rates would reduce the effective supply of labor and, consequently, Aggregate Supply. A second type of labor market disturbance would occur if the power of unions grew and they commanded higher wages. Wage hikes raise production costs and push up prices, shrinking Aggregate Supply. This potential problem has diminished in importance over the last two decades as union membership as a percentage of the workforce has declined, not only in the U.S. but worldwide. More recently, the restructuring of American industry has led to serious changes in most labor markets. Global competition, rapidly changing technology, expanded labor legislation, and a broader legal liability of firms for various labor issues have resulted in corporate downsizing and the added use of a contingency labor force.
The wholesale trimming of employees in many companies has led some to argue that many corporate employees are becoming "overworked". Another problem area would be any rise in the inflation rate workers expect. Inflationary expectations continuously shift labor supply curves leftward as workers try to protect the purchasing power of their earnings. Naturally, decreases in inflationary expectations or in union power will shift the labor supply and Aggregate Supply curves toward the right. Finally, people's preferences between work and leisure obviously affect Aggregate Supply. Some analysts think that "incomes policies" may moderate inflationary expectations. The term incomes policy refers to measures intended to curb inflation without altering monetary or fiscal policies. These methods include jawboning, wage-and-price guidelines or controls, and wage-price freezes of the type imposed in 1971. President Nixon hoped the freeze would reduce expected inflation and halt continuous shrinkage of Aggregate Supply. Ideally, it might have increased the supplies of labor and output, shown as the shift of the Aggregate Supply curve from AS0 to AS2 in Figure 5. Figure 5 Events that Shift the Aggregate Supply Curve Unfortunately, incomes policies may perversely affect inflationary expectations and Aggregate Supply. If workers and firms share a belief that prices will soar soon after controls are lifted, they may withhold production from the market now in hopes of realizing higher wages or prices later. For example, suppose you face the following choices: You can (a) work during a period when wages are frozen and save money to cover your college expenses, or (b) borrow to go to college during a freeze and then repay the loan from funds you earn after the lid is removed from wage hikes. You (and many other people) might delay working until after the freeze. Incomes policies also hinder necessary relative price adjustments. Other Shocks Affecting Productive Capacity New regulations that hamper production shift the Aggregate Supply curve to the left (a movement from AS0 to AS1 in Figure 5), but elimination of inefficient regulation shifts the curve rightward. From the mid-1970s onward, deregulation and privatization of parts of our economy have been aimed at removing inefficiency and boosting Aggregate Supply. Technological advances expand Aggregate Supply, while external shocks that raise costs for imports or resources will shrink Aggregate Supply. Shocks to the U.S. economy occurred when OPEC coalesced in 1973 and world oil prices quadrupled shortly thereafter. Most industrialized countries endured painful leftward shifts in their Aggregate Supply curves. Rightward shifts occur when new resources are found. For example, Great Britain discovered oil in the North Sea, and exploiting of huge pools of Mexican oil during the late 1970s helped Mexico. Gluts of oil on world markets and the relative instability of OPEC drove prices down; energy costs fell from the mid-1980s onward, boosting our Aggregate Supply to the right. Influences that shift Aggregate Supply are listed in Figure 5. Understanding macroeconomic movements requires a good grasp of these concepts. We will now survey some recent theories developed by "new Keynesian" economists to explain why wages are sticky, causing involuntary unemployment to persist during some periods.
http://www.unc.edu/depts/econ/byrns_web/Economicae/asupplyc.html
Few other events in history so profoundly changed the American social, political, and cultural landscape as did the California Gold Rush. Through the letters, diaries, and photographs of the period, much can be known about the people who were there. The film The Gold Rush and this companion website offer insights into the discovery of gold and its impact on a rapidly expanding nation moving from agrarian to industrial output. Topics include: the discovery of gold and how the news spread; the impact of the discovery on the diverse populations already living in California specifically Native Americans and Californios; the rapid influx of population and its effects on San Francisco; who made the journey to find gold and what were the various routes they took to California; what were the successes and failures of the mostly young men and the few women; the living conditions of miners; the methods of mining gold and how this changed over time; lawlessness and freedom at the mining camps; how did the concepts of gender, class, and race change; what was the impact of gold fever on African Americans, Native Americans, Hispanics, and Chinese; and the Gold Rush's impact on the geographic expansion of the United States and the idea of Manifest Destiny. Use part or all of the film, or delve into the rich resources available on this Web site to learn more, either in a classroom or on your own. The following activities are grouped into 4 categories: history, economics, geography, and civics. You can also read a few helpful hints for completing the activities. At the beginning of the film, the historian J.S. Holliday says, "Next to the Civil War in the 19th century, no other event had a greater impact, more long-lasting reverberations, than the Gold Rush. It transformed obviously California, but more importantly, it transformed America." 1. Consult the timeline and research how the Gold Rush transformed the city of San Francisco, the Territory of California and then describe its impact on the rest of the country. Divide the class into groups. Group One describes the city of San Francisco before, during and after the Gold Rush. Group Two describes the Territory of California before, during, and after the Gold Rush. Group Three will describe the United States before, during and after the Gold Rush. Students will use a poster size piece of paper to draw, use photographs, maps, and pictures from appropriate websites, and write descriptions of key events. Before each group presents their finding to the rest of the class, have the class predict what kinds of problems pioneers and argonauts will encounter as they migrate to the region. Imagine that you are a young person living in San Francisco and write a diary entry before, during, and after the Gold Rush. In your diary, describe your journey, the city and its people, whether you decided to stay in California and why. Just as the nation was shifting away from independent workers like blacksmiths and becoming a nation of clerks and factory workers, the Gold Rush created a new model of the American Dream that was more about taking risks, gambling, and luck than about any particular skill or moral virtue. Previous success had nothing to do with whether they would make it or not and many people worried that this might corrupt the values that built America. 2. Read the profiles of Alfred Doten and Hiram Pierce. 
Research and discuss the differences in lifestyle a forty-niner encountered that led to a rebellion against the standards of respectability they had left in the East. How did the draw of distant and exotic travel, hard outdoor work, and the possibility of independent wealth affect family relationships? How did concepts of race, gender, and class change? Give examples of how their new freedoms affected the forty-niners. How do you think the Gold Rush changed the moral landscape of the United States then and in the years to come? Look at several portraits of forty-niners. Do the pictures suggest they were from a middle-class culture? What were the origins, status, and values of these men? On November 13, 1849, California held its first general election. Demands for some sort of civil authority had been mounting for months. Pressure grew for better communications and political connections to the rest of the United States. Unwilling to delay any longer, 48 Californians had convened in the town hall at Monterey in September and had hammered out their own constitution. Although only about 12,000 people cast ballots, the constitution passed -- and without waiting for approval from Washington, California promptly declared itself the nation's 31st state. On New Years Day 1850, one of California's newly-elected Senators set sail for the nation's capital to press for his state's immediate admission to the Union. 3. What do you think was the primary catalyst for California statehood -- the issue of slavery in the United States, the idea of manifest destiny, the gold rush, or a combination of all three? Divide the class into groups with each defending one of these ideas. Examine the actions and motives of President James K. Polk in regard to the Mexican-American War. Then read his remarks to Congress when it is proven that gold is found. Examine the sectional crisis between the North and the South and the balance that existed before California was admitted as a free state. Read African Americans in the Gold Rush and write about the experiences of Stephen Spenser Hill. Examine the Compromise of 1850, which brought California into the Union, along with other provisions that would keep the Union together for a while but soon would lead to Civil War. In 1847, the United States defeated Mexico in a two-year conflict known as the Mexican War. When the peace treaty was signed in early February 1848, Mexico was forced to cede an enormous swath of territory, including California, to the United States. Neither country was yet aware that gold had been discovered just days before. 4. Using a map, find the boundaries of Mexico before and after the war. Do you think there would have been a different outcome had both sides known of the gold deposits? Read Mexicans in the Gold Rush and the entries for Antonio Coronel. How did the outcome of the Mexican-American War affect the attitudes of the miners working side by side with diverse ethnic groups? Write a letter from Coronel to another family member. In the letter describe his early successes, his eyewitness accounts of violent discrimination as more miners arrived from elsewhere, and his decision to leave and why. President James K. Polk used the philosophy of Manifest Destiny to expand the territories of the United States. 5. Define "Manifest Destiny". Using the timeline give specific historical milestones of Manifest Destiny in the U.S. How did the idea of Manifest Destiny create racial and ethnic tension? 
Do you see any examples of the idea of "manifest destiny" today in American or international politics?

In the film, historian Brian Roberts said the California Gold Rush was America's first large-scale media event.

6. Explain what he means by this. Specifically, what was the role of the media in the expansion of the United States? How was the Gold Rush characterized by the media? Do you see parallels today between the media and political events? What were newspapers like before, during, and after the Gold Rush? Read about Samuel Brannan. Find examples of newspapers from the Gold Rush era and design your own 1848 newspaper front page. Include interviews and stories with forty-niners and others. What will your headlines be, and how do you make this decision?

In the film, historian James Rawls says that the real chance for success in the Gold Rush was not in mining the gold but in mining the miners. There were people who had the foresight to see the economic possibilities the Gold Rush would create (examples: Samuel Brannan, John Studebaker, Levi Strauss, Charles Crocker). Today San Francisco's streets are named after many of these people.

1. Research a historical figure who went on to amass great wealth by "mining the miners" rather than mining gold. Write a one- to two-page paper about this person.

2. One of history's great ironies is the fact that neither James Marshall nor John Sutter became rich as a result of their discovery of gold in 1848. Research the lives of these two men and what became of them. Then compare and contrast these men to the historical figures above who amassed great wealth. Was this fair?

3. Read the profiles of Brannan and Wilson. What profitable businesses arose as a result of the Gold Rush? What was the impact of supply and demand? Give examples of the price of commodities and entertainment.

4. Read about Wilson again and examine the document Gaming and Entertainment in the Gold Rush Towns. How did the roles of women change as a result of the Gold Rush? What were the primary occupations of the few women who were actually there?

5. By their own accounts, Alfred Doten, Hiram Pierce and Vicente Rosales were failures at finding gold. Role-play their responses to this failure and how they did or did not cope with it. Look at photographs of these men and develop costumes for the role-play.

6. Explore the Strike It Rich! game and consider how it was made. What are some of the assumptions about the characters that affect game play? What are the factors that the game developers decided were important enough to include? What didn't they include? How would you improve the game? To make it simpler? To make it more complex? Develop a board game called Gold Rush set in an imaginary place very much like San Francisco (use the computer game SimCity as an example). Players build the city. What infrastructure would be necessary to handle this sudden influx of people? What goods? What law enforcement would be necessary? What about government, commerce, and transportation? Establish a point system that will determine whether you find success or fail and return to your original home.

The Gold Rush occurred during a time when the U.S. was rapidly changing from an agrarian republic to a modern industrial nation. Take a look at The Gold Rush's Impact on California's Landscape.
7. What were the positive and negative aspects of unbridled capitalism -- the social, economic and environmental consequences of the forces unleashed by the Gold Rush that ultimately led to a manufacturing output greater than that of Britain, France and Germany combined by the end of the nineteenth century?

8. What was the effect of the Gold Rush on California's environment? Research the three types of mining (placer, hard rock and hydraulic) and trace the evolution in mining technology during the Gold Rush and its impact on the environment. Research how the individual's wash pan gave way to the team-operated Long Tom and then to elaborate systems of dams and chutes that facilitated hydraulic mining, through which whole hillsides were washed away in a matter of hours. How much gold did the mines yield in total? Divide the class into two groups. Discuss and vote on approving a hypothetical nearby mining project. One side is pro and the other con. What are the benefits of moving ahead? What are the costs to the community? Compare the costs and benefits today to those at the time of the Gold Rush. How do the entrepreneurial forces of the Gold Rush continue today in the exploitation of ever-diminishing natural resources?

9. What were the "Gold Rushes" of the 20th century? What do you think they will be in the 21st century?

Imagine San Francisco on the day that gold was found at Sutter's Mill. No one knew it yet, but in just a few short days the city would begin to change forever in untold ways, going on to dominate the Far West for the three decades following 1849.

1. Describe a before-and-after scenario. Use photographs, drawings, and documents to enhance your descriptions. Why did the population explode? What were the geographic considerations that favored San Francisco as a starting point for those seeking gold?

2. What were the primary routes and who took them? What were the hardships they experienced on the way to California? You may want to read about the Donner Party.

3. Visit the online poll. Which route would YOU take to the Gold Country? Answer the poll questions and check the poll's results to date. Write a short essay in which you advise a friend which route to take and why.

4. Look at a map of the Great Migration. What was the role of the Gold Rush in the Great Migration, particularly in the development of California?

1. The rich cultural and racial diversity of California today has its origins in the Gold Rush. Who lived in California before the Gold Rush? What new ethnic groups arrived after the Gold Rush? What happened to these ethnic groups at the peak of the gold rush and then later, once the gold began to run out? Using the timeline, carefully trace the ethnic and class warfare that determined who would control the riches and the definition of "society." The mining district was daily becoming more crowded and more contested; examine the differences in social behavior when people are prosperous versus bankrupt. The Gold Rush created much wealth but excluded many people in the process. Divide the class into four groups and research and discuss the exclusion of minorities, including Native American Indians, Spanish-speaking immigrants, Chinese, and free blacks and slaves accompanying southern migrants.

2. Greed and competition among the miners in the spring of 1850 caused Anglo gold seekers to persuade the newly elected legislature to pass the Foreign Miners Tax, a steep levy that was meant to be imposed on all non-Americans.
Spanish-speaking miners were most often forced to pay, and within one year of that law's passage an estimated 10,000 Mexican miners left California. What exactly was the Foreign Miners Tax and how did the Mexican miners respond to it? Read about Antonio Franco Coronel. What kind of law and order existed at the gold rush camps? Compare Mexican and Chinese responses to the tax. Who was Joaquin Murieta? Discuss the myth versus the facts. Are there other examples in modern history of one person becoming a legendary scapegoat?

3. In the film, historian Richard White called what happened to the Native Americans during the Gold Rush "close to genocide." Other historians say it was legalized and subsidized mass murder. How did this happen? Research the Indians who lived along the overland routes to the West. Map the different tribes and their relationships with immigrants and settlers. Graph what happened to the population demographics of Native Americans during the Gold Rush. What was the relationship of Native Americans to the rich resources of the land of California? What happened when their land became overrun with gold-seekers? Explain the impact on the fish when the streams were mined. Explain the actions of Native Americans when miners depleted the game they depended on. What did California lawmakers do to bring this situation under control?

4. How did the Chinese fare during the Gold Rush? Compare and contrast their treatment to that of the Hispanic and Indian populations at the time. Pay close attention to the differences in their responses to discrimination.
http://www.pbs.org/wgbh/amex/goldrush/tguide/index.html
Forests are often overlooked as a major sink for the removal of carbon from the atmosphere, a process known as carbon sequestration. Conversely, deforestation, whether removal of forests by logging or burning, has been a major contributor to atmospheric carbon addition since the mid-Holocene, when the ascent of man to planetary dominance began. Carbon storage in the world's forests as of 2005 was estimated at 1036 Gigatons (as carbon dioxide), and global net loss of forests averages 0.0002% per year (Nabuurs et al. 2005). Massive deforestation at present is most prevalent in places such as the Amazon, Indonesia, Brazil and Borneo.

There are two main reasons for land use conversion by deforestation: ranching and agriculture. These conversions are done in an attempt to produce short-term economic benefit by forcing immediate environmental change. Cattle ranching is one of the leading causes of deforestation in numerous places, including the Brazilian Amazon, whose forested area is large enough for its loss to have effects of global proportion. Some economic reasons for the removal of the valuable forest resource include currency devaluation as a result of the decrease in a country's monetary position, control over diseases (since some forests harbour pathogens), and land regulation laws (or lack of laws) which dictate where development may occur. Conversion from forested land to agricultural land is also a major issue in both subsistence farming and commercial farming. In developing countries, government land policies encourage subsistence farming, while commercial farming is increasing worldwide as a result of the demand for soybeans, corn, and other crops demanded by the expanding human population. Wood is also harvested for multiple purposes, including timber, energy, and pulp products such as paper. Although these uses are not the primary drivers of forest destruction, they play a significant role.

Forests serve as a valuable carbon sink by removing carbon dioxide (CO2) from the atmosphere and storing the carbon in a long-term reservoir. Carbon sinks are natural or manmade storage sites for carbon that regularly absorb more carbon than they release. In the decade from 1993 to 2003, 3.3 Gigatons of carbon dioxide per annum were placed into storage in terrestrial sinks. Forests, along with their associated soils, contain two to three times the amount of carbon in the atmosphere. Deforestation not only halts the positive carbon storage effects of a functioning forest ecosystem but also may impose detrimental costs on the climate. Deforestation converts carbon sinks into sources which, diametrically opposite to sinks, release more carbon than they sequester. The destruction of forests also decreases the level of evapotranspiration, the process by which water re-enters the atmosphere through transpiration in plants. A decrease in evapotranspiration results in regionally decreased precipitation, increased surface temperature, and fewer clouds. This decline in average cloud cover lowers the Earth's albedo, consequently lessening the planet's ability to reflect solar radiation. Like deforestation's effects on evapotranspiration and sink conversion, this decrease in the planet's ability to reflect incoming solar radiation contributes to increasing global temperatures.

Forests' role in the carbon cycle

[Image: Ceiba pentandra tree, Tikal, Guatemala; individual rainforest trees may sequester up to 100 tons of carbon. Source: C. Michael Hogan]
Carbon cycling is the process that transfers carbon among the earth's systems: the biosphere, lithosphere, hydrosphere, and atmosphere. As part of the biosphere, trees and other plants are a primary mechanism by which carbon is transferred among the different systems. Forests pull CO2 out of the atmosphere as part of the process of photosynthesis. Through a series of biochemical reactions, carbon in the low-energy state found in CO2 is converted into higher-energy carbon in a molecule of glucose. That glucose is used to power plant cell function and biomass production. The CO2 in vegetation is eventually released back to the atmosphere through respiration, burning, and biomass decay. In the absence of man's interference, the carbon cycle would function as a closed cycle, fluctuating within boundaries that are ideal to sustain life. Interrupting this cycle, deforestation releases unnaturally large amounts of CO2 into the carbon cycle. This overwhelming of the carbon cycle contributes to anthropogenic climate change. Deforestation has multiple impacts on climate change: in addition to releasing CO2, it also reduces our natural safeguard against adverse climate change impacts by decreasing the amount of CO2 that forests are able to store.

As increasing amounts of CO2 are released into the atmosphere, humans continue to exceed the carbon threshold that was stable in the early Holocene. This compounding surplus of CO2 causes feedbacks to occur within the carbon cycle. Feedback effects are changes in climate that may seem small but can significantly amplify climate change through their ability to force further change. If the resulting additional change is in the same direction as the initial change, the effect is considered a positive feedback; the reverse is a negative feedback. An example of a positive feedback loop is the decreasing potential of oceans to retain CO2 when heated. Increasing amounts of CO2 in the atmosphere raise global ocean temperatures. This increase in ocean temperature decreases the amount of CO2 that the oceans can hold, releasing additional CO2 into the atmosphere and consequently raising global temperatures, perpetuating the cycle. Other examples include the albedo effect and the melting tundra releasing methane, a greenhouse gas about 23 times more potent than carbon dioxide in forcing atmospheric warming. Some feedbacks act in the opposite direction: for example, warming increases evaporation, adding water vapor and cloud cover that reflect more sunlight and so exert a cooling influence. However, global averages suggest that the effects of the positive feedback loops are greater than those of the negative feedback loops. For deforestation specifically, feedback effects include impacts on El Niño, forest fires, and the release of soil carbon. The increases in temperature caused by deforestation can have an effect on El Niño and its precipitation patterns, which can in turn cause droughts in some locations and floods in others. Flammability of forests is also anticipated to increase due to higher temperatures, causing even more carbon to be released into the atmosphere. There is also the possibility of a "runaway greenhouse effect", caused by a large release of carbon from the soil, because higher temperatures promote respiration of soil microbes that convert soil organic carbon to CO2.
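The amplification produced by such positive feedbacks can be made concrete with the standard linear feedback relation, in which an initial warming is magnified to the initial warming divided by (1 - f) for a net feedback factor f between 0 and 1. The short sketch below is illustrative only; the 1 degree initial warming and the feedback factors are assumptions chosen for the example, not values taken from this article.

```python
# Illustrative sketch (not from the article): the standard feedback-factor
# relation total_warming = initial_warming / (1 - f). The numbers below are
# made up purely to show how positive feedbacks amplify an initial change.

def equilibrium_warming(initial_warming: float, feedback_factor: float) -> float:
    """Amplified warming for a given initial warming and net feedback factor f < 1."""
    if feedback_factor >= 1.0:
        # f >= 1 implies a runaway response; the linear relation no longer applies.
        raise ValueError("feedback factor must be below 1")
    return initial_warming / (1.0 - feedback_factor)

if __name__ == "__main__":
    initial = 1.0  # hypothetical warming (degrees C) from added CO2 alone
    for f in (0.0, 0.2, 0.4, 0.6):  # hypothetical net feedback factors
        print(f"f = {f:.1f} -> equilibrium warming = {equilibrium_warming(initial, f):.2f} C")
```

Even a moderate net positive feedback roughly doubles the initial warming in this toy calculation, which is the qualitative point the ocean, albedo and methane examples above are making.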
Another form of enhanced positive feedback is induced when forests are clearcut and then replaced by livestock grazing, which produces very large methane fluxes per hectare.

Major carbon fluxes

[Image: Massive clearfelling within Durris Forest, Aberdeenshire, Scotland. Source: C. Michael Hogan]

It is important to quantify the major carbon fluxes, both sources and sinks, interacting with the atmosphere. There is an ongoing atmospheric contribution of carbon dioxide from fossil fuel combustion of approximately 8.7 gigatons per annum. Terrestrial vegetation including forests acts as a sink for about 1.0 gigatons of carbon per annum, while the oceans provide a sink for another 2.0 gigatons per annum. These summaries understate the importance of the role of forests, since they ignore the fact that prehistoric and historic deforestation of the middle and late Holocene has decimated roughly one half to three quarters of the Earth's forests.

An alternative to carbon sequestration in living forests is biochar production, the controlled pyrolysis of forest-product and agricultural wastes. This technique produces a rather stable product that can be returned to the Earth's soil system and is capable of supplying plants with nutrients and/or sequestering carbon for centuries. Estimates of the carbon sink value of biochar range as high as 2.0 gigatons of carbon per annum by the year 2050. While the combined carbon uptake of forests, oceans and biochar may not seem to be a complete match for industrial carbon production, it is a significant sink complex as well as a continuing (renewable) sink. More artificial sinks such as pipeline and well systems are not only quite expensive, but are arguably not truly renewable; in fact, the International Energy Agency estimates a maximum of 144 gigatons of total man-made well carbon storage could be brought on-line by the year 2050, given a prodigious capital investment. Furthermore, in a broader ecological context, the above analysis ignores other important natural sinks, including peat deposits and wetlands. Additionally, as Archer and Pierrehumbert (2011) point out, organic carbon in the oceans may play a major role in atmospheric carbon content, but there are presently insufficient data upon which to model the atmospheric/upper ocean/marine organic carbon fluxes.

Relationship to soil carbon

Sequestration of carbon in forests is mostly inseparable from carbon storage in soil, because the above-ground biomass in forests typically reaches an upper limit within decades. However, carbon storage in soils is worth a separate note for perspective. Many have commented on the enormous volume of carbon tied up in the Earth's soil environment, and on the fragile nature of this storage, which can be disturbed by deforestation, agricultural conversion, urbanisation or a multiplicity of other drivers. As early as 1954, Hutchinson pointed out that a principal cause of global warming is the loss of forests and the concomitant addition of carbon dioxide to the atmosphere. These conclusions were reinforced by the seminal work of Houghton et al. (1983), who considered all terrestrial biota as well as soil carbon as the carbon source of deforestation. Most recently, Archer and Pierrehumbert (2011) performed more detailed calculations to demonstrate that Hutchinson's hypothesis is essentially correct and must be a major part of the explanation for the rise in global carbon dioxide through the 20th century and early 21st century.
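For perspective, the round figures quoted in the "Major carbon fluxes" discussion above can be tallied directly. The following back-of-the-envelope sketch simply adds up the article's stated numbers, treated in the units as given; it is not an independent carbon budget, and the biochar figure is the speculative upper estimate for 2050.

```python
# Rough bookkeeping of the fluxes quoted above (a sketch using the article's
# round numbers; values in gigatons per annum, as stated in the text).
fossil_source = 8.7        # fossil fuel combustion (source)
forest_sink = 1.0          # terrestrial vegetation including forests
ocean_sink = 2.0           # oceans
biochar_sink_2050 = 2.0    # speculative upper estimate for biochar by 2050

natural_sinks = forest_sink + ocean_sink
remaining = fossil_source - natural_sinks - biochar_sink_2050

print(f"Natural sinks offset {natural_sinks / fossil_source:.0%} of the fossil source")
print(f"With the high-end biochar estimate, {remaining:.1f} Gt/yr would still accumulate")
```

The arithmetic shows why the article calls these sinks significant but not a complete match for industrial emissions: roughly a third of the fossil source is taken up by forests and oceans, and even the optimistic biochar figure leaves a sizeable annual surplus.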
Lack of regulation

A large percentage of forested land is privately owned, some of which is not protected by law. These forests may lack sustainable management; therefore, landowners can carry out unsustainable cuts and personally release large amounts of carbon into the atmosphere. This is especially a problem within the United States. According to the American Forest Foundation, more than half of the forest land in the USA is privately owned. This situation may make the USA vulnerable to the effects of deforestation, since these private landowners can undertake certain discretionary actions which are not subject to national, state or local regulations. In contrast, approximately 90 percent of all forests in highly industrialized nations are under official or unofficial forest preservation; as a result, the UN estimates that between now and 2050 the industrialized nations will add between 60 and 230 million hectares to the world forest stock (Millennium Ecosystem Assessment 2005). Lax enforcement of regulations in some other countries fails to prevent unsustainable practices such as clear-cutting and diameter-limit cutting. In many developing countries, farmers are encouraged to move into forest lands by government land policies and then burn the forests in hopes of making a profit. Illegal logging is also a prevalent issue in many countries. This illegal logging is closely linked to road-building, which gives access to the rainforest and to the exploitation of its resources.

Deforestation in developing countries

In continents containing primarily developing nations, the amount of carbon stock in living biomass is steadily decreasing. In South America 4.3 million ha/yr were lost from 2000-2005, and in Africa 4.0 million ha/yr were lost in the same time-frame, in comparison to developed countries, which are losing biomass at a slower rate or gaining total biomass. Developing countries often oppose restrictions on their carbon emissions because some environmental preservation strategies may not be initially cost-effective given the developing countries' current standing. Some developing countries, such as China and India, feel they have a right to their emissions and therefore suggest a carbon emission standard related to population. This is met with opposition from developed countries due to the difference in practices of developed and developing countries. Presently deforestation is more likely to occur in developing countries than in developed ones, since (a) much of the damage has already been done in Europe and North America, and (b) a fairly advanced degree of forest protection is in place in the developed countries. Developing countries are more inclined to deforest considering the potential short-run economic income that the practice presents. The Millennium Ecosystem Assessment (2005) estimates that forest area in the developing regions will decrease by about 200 to 490 million hectares between now and 2050. Agricultural conversion is a more significant path to short-term income for developing nations, because it supplies essential short-term human needs while producing economic benefits.

Reducing deforestation as a mitigation tool

Reducing deforestation can be a beneficial mitigation tool to enhance carbon storage. Halting deforestation altogether could reduce carbon emissions by one billion tons of carbon per year by 2054, which is equivalent to halving the projected fuel required by automobiles from now until that date.
A reduction in deforestation practices results in fewer carbon emissions, which would otherwise be released during the act of deforestation, and allows the positive effects of the forest ecosystem to remain in function. As a mitigation tool, reduced deforestation is beneficial because the trees continue to sequester carbon as they grow, and the sequestered carbon remains in the soil. Reducing deforestation would lessen the effects of climate change and would reduce the associated effects such as increases in temperature, rise in sea level, biodiversity and nutrient loss, and adverse effects on human health. If implemented as a mitigation tool, reduced deforestation could also help decision makers weigh ecological losses properly against economic benefits. Compared with other mitigation strategies, reducing deforestation is cheaper, less intrusive to industrial development and easier to implement, as the strategy utilizes naturally occurring processes. There is no need for new infrastructure or any new technology, only for the implementation of policies and enforcement of existing laws. It is also more effective than many other strategies in keeping carbon out of the atmosphere. Other strategies such as afforestation or reforestation sequester less carbon because newer forests, as they grow, store far less carbon per hectare than the mature stands potentially being deforested.

There are solutions that could be implemented to reduce deforestation, and some are already in use or development. One such policy deals with the REDD ("reducing emissions from deforestation and degradation") concept, in which global banks, financial experts, policy makers, and environmentalists pay nations and their inhabitants not to cut down large portions of the rainforests. Billions of dollars a year would be donated through REDD or other avoided-deforestation projects. This strategy allows poorer countries to capitalize on their natural assets without destroying them. This strategy also promotes ecotourism by maintaining more vibrant and robust forest resources to attract visitors; moreover, such countries as Belize, Romania, Panama and Botswana are capitalizing on the rewards of enhanced ecotourism resulting from forest preservation.

Another proposal to mitigate climate change is the use of carbon credits for avoiding deforestation; however, there is still much controversy over this strategy. Some critics argue that countries should not be given credit for the temporary carbon storage that forests provide, because the carbon could potentially be released into the atmosphere again. Others say that forests are valuable for being able to store carbon even if only temporarily, as any storage of carbon can aid in slowing down the rate of climate change. There are multiple other policies being considered that require a reduction in carbon dioxide emissions, and some suggest that reducing deforestation is a logical way to meet these standards, including those implemented by the Kyoto Protocol. Opposition to these standards, led by Brazil and some other powerful nations, is formidable. Brazil is a significant international player, since many of the mitigation options focus on Brazilian Amazonia. Brazil fears a potential loss of control over the region, and has expressed concern over international pressures.
- David Archer and Ray Pierrehumbert. 2011. The Warming Papers. John Wiley and Sons. 432 pages.
- Erik Eriksson and Pierre Welander. 1956. On a Mathematical Model of the Carbon Cycle in Nature. Tellus 8: 155-75.
- R.A. Houghton et al. 1983. Changes in the Carbon Content of Terrestrial Biota and Soils between 1860 and 1980: A Net Release of Carbon Dioxide to the Atmosphere. Ecological Monographs 53: 235-62.
- G.E. Hutchinson. 1954. In: The Earth as a Planet, G. Kuiper (ed.). University of Chicago Press. Chapter 8.
- International Energy Agency. 2009. IEA Technology Roadmap: Carbon Capture and Storage. OECD/IEA.
- Millennium Ecosystem Assessment (MEA). 2005. Ecosystems and Human Well-being: Synthesis. Island Press, Washington, D.C. 137 pp.
- Nabuurs, G.J., O. Masera, K. Andrasko, P. Benitez-Ponce, R. Boer, M. Dutschke, E. Elsiddig, J. Ford-Robertson, P. Frumhoff, T. Karjalainen, O. Krankina, W.A. Kurz, M. Matsumoto, W. Oyhantcabal, N.H. Ravindranath, M.J. Sanz Sanchez, and X. Zhang. 2007. Forestry. In: Metz, B., O.R. Davidson, P.R. Bosch, R. Dave, and L.A. Meyer (eds.), Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. New York: Cambridge University Press, 541-584.
- Ruddiman, W.F. 2008. Earth's Climate: Past and Future. Second Edition. W.H. Freeman and Company, New York.
- Schmidt, G. 2004. Methane: A Scientific Journey from Obscurity to Climate Super-Stardom. NASA Goddard Space Center.
- Shakhova, N., I. Semiletov, A. Salyuk, D. Kosmach, and N. Bel'cheva. 2007. Methane release on the Arctic East Siberian shelf. Geophysical Research Abstracts 9: 01071.
- Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.). 2007. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, United Kingdom.
- Takahashi, J. and B.A. Young (eds.). 2002. Greenhouse Gases and Animal Agriculture. Proceedings of the 1st International Conference on Greenhouse Gases and Animal Agriculture, Obihiro, Japan, 7-11 November 2001. Elsevier Sciences, Amsterdam, The Netherlands.
http://www.eoearth.org/article/Forests_as_a_Carbon_Sink?topic=54487
History of Burma

The history of Burma (Myanmar) covers the period from the time of the first known human settlements 13,000 years ago to the present day. The earliest inhabitants of recorded history were the Pyu, who entered the Irrawaddy valley from Yunnan c. 2nd century BCE. By the 4th century CE, the Pyu had founded several city states as far south as Prome (Pyay), and adopted Buddhism. Farther south, the Mon, who had entered from the Haribhunjaya and Dvaravati kingdoms in the east, had established city states of their own along the Lower Burmese coastline by the early 9th century. Another group, the Mranma (Burmans or Bamar) of the Nanzhao Kingdom, entered the upper Irrawaddy valley in the early 9th century. They went on to establish the Pagan Empire (1044–1287), the first ever unification of the Irrawaddy valley and its periphery. The Burmese language and culture slowly came to replace Pyu and Mon norms during this period.

After Pagan's fall in 1287, several small kingdoms, of which Ava, Hanthawaddy, Arakan and the Shan states were the principal powers, came to dominate the landscape, replete with ever-shifting alliances and constant wars. In the second half of the 16th century, the Toungoo Dynasty (1510–1752) reunified the country, and for a brief period founded the largest empire in the history of Southeast Asia. Later Toungoo kings instituted several key administrative and economic reforms that gave rise to a smaller, peaceful and prosperous kingdom in the 17th and early 18th centuries. In the second half of the 18th century, the Konbaung Dynasty (1752–1885) restored the kingdom, and continued the Toungoo reforms that increased central rule in peripheral regions and produced one of the most literate states in Asia. The dynasty also went to war with all its neighbors. The kingdom fell to the British over a six-decade span (1824–1885).

British rule brought several enduring social, economic, cultural and administrative changes that completely transformed the once-agrarian society. Most importantly, British rule highlighted out-group differences among the country's myriad ethnic groups. Since independence in 1948, the country has been in one of the longest-running civil wars, which remains unresolved. The country was under military rule under various guises from 1962 to 2010, and in the process has become one of the least developed nations in the world.

Early history (to 9th century CE)

The earliest archaeological evidence suggests that cultures existed in Burma as early as 11,000 BCE. Most indications of early settlement have been found in the central dry zone, where scattered sites appear in close proximity to the Irrawaddy River. The Anyathian, Burma's Stone Age, existed at a time thought to parallel the lower and middle Paleolithic in Europe. The Neolithic or New Stone Age, when plants and animals were first domesticated and polished stone tools appeared, is evidenced in Burma by three caves located near Taunggyi at the edge of the Shan plateau that are dated to 10000 to 6000 BCE.

About 1500 BCE, people in the region were turning copper into bronze, growing rice, and domesticating chickens and pigs; they were among the first people in the world to do so. By 500 BCE, iron-working settlements emerged in an area south of present-day Mandalay. Bronze-decorated coffins and burial sites filled with earthenware remains have been excavated.
Archaeological evidence at the Samon Valley south of Mandalay suggests rice-growing settlements that traded with China between 500 BCE and 200 CE.

Pyu city-states

The Tibeto-Burman-speaking Pyu entered the Irrawaddy valley from present-day Yunnan, c. 2nd century BCE, and went on to found city states throughout the Irrawaddy valley. The original home of the Pyu is reconstructed to be Kokonor Lake in present-day Qinghai and Gansu provinces. The Pyu were the earliest inhabitants of Burma of whom records are extant. During this period, Burma was part of an overland trade route from China to India. Trade with India brought Buddhism from southern India. By the 4th century, many in the Irrawaddy valley had converted to Buddhism. Of the many city-states, the largest and most important was Sri Ksetra, southeast of modern Prome (Pyay). In March 638, the Pyu of Sri Ksetra launched a new calendar that later became the Burmese calendar.

Eighth-century Chinese records identify 18 Pyu states throughout the Irrawaddy valley, and describe the Pyu as a humane and peaceful people to whom war was virtually unknown and who wore silk cotton instead of actual silk so that they would not have to kill silkworms. The Chinese records also report that the Pyu knew how to make astronomical calculations, and that many Pyu boys entered the monastic life at age seven and remained until age 20. It was a long-lasting civilization that endured nearly a millennium, to the early 9th century, until a new group of "swift horsemen" from the north, the Mranma (Burmans), entered the upper Irrawaddy valley. In the early 9th century, the Pyu city states of Upper Burma came under constant attacks by the Nanzhao Kingdom in present-day Yunnan. In 832, the Nanzhao sacked Halingyi, which had overtaken Prome as the chief Pyu city state. A subsequent Nanzhao invasion in 835 further devastated the Pyu city states in Upper Burma. While Pyu settlements remained in Upper Burma until the advent of the Pagan Empire in the mid 11th century, the Pyu gradually were absorbed into the expanding Burman kingdom of Pagan over the next four centuries. The Pyu language still existed until the late 12th century. By the 13th century, the Pyu had assumed the Burman ethnicity. The histories and legends of the Pyu were also incorporated into those of the Burmans.

Mon kingdoms

As early as the 6th century, another people called the Mon began to enter present-day Lower Burma from the Mon kingdoms of Haribhunjaya and Dvaravati in modern-day Thailand. By the mid 9th century, the Mon had founded at least two small kingdoms (or large city-states) centered around Pegu and Thaton. The earliest external reference to a Mon kingdom in Lower Burma was in 844–848 by Arab geographers. The Mon practiced Theravada Buddhism. The kingdoms were prosperous from trade. The Kingdom of Thaton is widely considered to be the fabled kingdom of Suvarnabhumi (or Golden Land), referred to by the tradesmen of the Indian Ocean.

Pagan Dynasty (849–1297)

Early Pagan

The Burmans who had come down with the early 9th century Nanzhao raids of the Pyu states remained in Upper Burma. (Trickles of Burman migrations into the upper Irrawaddy valley might have begun as early as the 7th century.) In the mid-to-late 9th century, Pagan was founded as a fortified settlement at a strategic location on the Irrawaddy near the confluence of the Irrawaddy and its main tributary, the Chindwin. It may have been designed to help the Nanzhao pacify the surrounding countryside.
Over the next two hundred years, the small principality gradually grew to include its immediate surrounding areas, to about 200 miles north to south and 80 miles from east to west by Anawrahta's ascension in 1044.

Pagan Empire (1044–1287)

Over the next 30 years, Anawrahta founded the Pagan Empire, unifying for the first time the regions that would later constitute modern-day Burma. By the late 12th century, Anawrahta's successors had extended their influence farther south into the upper Malay peninsula, at least to the Salween river in the east, below the current China border in the farther north, and, to the west, to northern Arakan and the Chin Hills. (The Burmese Chronicles claim Pagan's suzerainty over the entire Chao Phraya river valley, and the Siamese chronicles include the lower Malay peninsula down to the Straits of Malacca in Pagan's realm.) By the early 12th century, Pagan had emerged as a major power alongside the Khmer Empire in Southeast Asia, recognized by the Chinese Song Dynasty and the Indian Chola dynasty. Well into the mid-13th century, most of mainland Southeast Asia was under some degree of control of either the Pagan Empire or the Khmer Empire.

Anawrahta also implemented a series of key social, religious and economic reforms that would have a lasting impact in Burmese history. His social and religious reforms later developed into the modern-day Burmese culture. The most important development was the introduction of Theravada Buddhism to Upper Burma after Pagan's conquest of the Thaton Kingdom in 1057. Supported by royal patronage, the Buddhist school gradually spread to the village level over the next three centuries, although Tantric, Mahayana, Brahmanic, and animist practices remained heavily entrenched at all social strata.

Pagan's economy was primarily based on the Kyaukse agricultural basin northeast of the capital, and the Minbu district south of Pagan, where the Burmans had built a large number of new weirs and diversionary canals. It also benefited from external trade through its coastal ports. The wealth of the kingdom was devoted to building over 10,000 Buddhist temples in the Pagan capital zone between the 11th and 13th centuries (of which 3000 remain to the present day). The wealthy donated tax-free land to religious authorities. The Burmese language and culture gradually became dominant in the upper Irrawaddy valley, eclipsing the Pyu, Mon and Pali norms by the late 12th century. By then, the Burman leadership of the kingdom was unquestioned. The Pyu had largely assumed the Burman ethnicity in Upper Burma. The Burmese language, once an alien tongue, was now the lingua franca of the kingdom.

The kingdom went into decline in the 13th century as the continuous growth of tax-free religious wealth (by the 1280s, two-thirds of Upper Burma's cultivable land had been alienated to the religion) affected the crown's ability to retain the loyalty of courtiers and military servicemen. This ushered in a vicious circle of internal disorders and external challenges by the Mons, Mongols and Shans. Beginning in the early 13th century, the Shans began to encircle the Pagan Empire from the north and the east. The Mongols, who had conquered Yunnan, the former homeland of the Burmans, in 1253, began their invasion of Burma in 1277, and in 1287 sacked Pagan, ending the Pagan kingdom's 250-year rule of the Irrawaddy valley and its periphery. Pagan's rule of central Burma came to an end ten years later, in 1297, when it was toppled by Myinsaing.
Small kingdoms

After the fall of Pagan, the Mongols left the searing Irrawaddy valley, but the Pagan Kingdom was irreparably broken up into several small kingdoms. By the mid-14th century, the country had become organized along four major power centers: Upper Burma, Lower Burma, the Shan States and Arakan. Many of the power centers were themselves made up of (often loosely held) minor kingdoms or princely states. This era was marked by a series of wars and shifting alliances. Smaller kingdoms played a precarious game of paying allegiance to more powerful states, sometimes simultaneously.

Ava (1364–1555)

Founded in 1364, Ava (Inwa) was the successor state to earlier, even smaller kingdoms based in central Burma: Toungoo (1287–1322), Myinsaing–Pinya (1297–1364), and Sagaing (1315–1364). In its first years of existence, Ava, which viewed itself as the rightful successor to the Pagan Empire, tried to reassemble the former empire. While it was able to pull Toungoo and the peripheral Shan states (Kale, Mohnyin, Mogaung, Thibaw (Hsipaw)) into its fold at the peak of its power, it failed to reconquer the rest. The Forty Years' War (1385–1424) with Hanthawaddy left Ava exhausted, and its power plateaued. Its kings regularly faced rebellions in its vassal regions but were able to put them down until the 1480s. In the late 15th century, Prome and its Shan states successfully broke away, and in the early 16th century, Ava itself came under attack from its former vassals. In 1510, Toungoo also broke away. In 1527, the Confederation of Shan States led by Mohnyin captured Ava. The Confederation's rule of Upper Burma, though it lasted until 1555, was marred by internal fighting between the Mohnyin and Thibaw houses. The kingdom was toppled by Toungoo forces in 1555. The Burmese language and culture came into their own during the Ava period.

Hanthawaddy Pegu (1287–1539, 1550–1552)

The Mon-speaking kingdom was founded as Ramannadesa right after Pagan's collapse in 1287. In the beginning, the Lower-Burma-based kingdom was a loose federation of regional power centers in Martaban (Mottama), Pegu (Bago) and the Irrawaddy delta. The energetic reign of Razadarit (1384–1422) cemented the kingdom's existence. Razadarit firmly unified the three Mon-speaking regions, and successfully held off Ava in the Forty Years' War (1385–1424). After the war, Hanthawaddy entered its golden age, whereas its rival Ava gradually went into decline. From the 1420s to the 1530s, Hanthawaddy was the most powerful and prosperous of all the post-Pagan kingdoms. Under a string of especially gifted monarchs, the kingdom enjoyed a long golden age, profiting from foreign commerce. The kingdom, with a flourishing Mon language and culture, became a center of commerce and Theravada Buddhism. Nonetheless, due to the inexperience of its last ruler, the powerful kingdom was conquered by the upstart kingdom of Toungoo in 1539. The kingdom was briefly revived between 1550 and 1552, but it controlled only Pegu and was crushed by Bayinnaung in 1552.

Shan States (1287–1563)

The Shans, who came down with the Mongols, stayed and quickly came to dominate much of the northern-to-eastern arc of Burma, from northwestern Sagaing Division to the Kachin Hills to the present-day Shan Hills. The most powerful Shan states were Mohnyin and Mogaung in present-day Kachin State, followed by Theinni, Thibaw and Momeik in present-day northern Shan State. Minor states included Kale, Bhamo, Nyaungshwe and Kengtung.
Mohnyin, in particular, constantly raided Ava's territory in the early 16th century. The Mohnyin-led Confederation of Shan States, in alliance with the Prome Kingdom, captured Ava itself in 1527. The Confederation defeated its erstwhile ally Prome in 1533, and ruled all of Upper Burma except Toungoo. But the Confederation was marred by internal bickering, and could not stop Toungoo, which conquered Ava in 1555 and all of the Shan States by 1563.

Arakan (1287–1785)

Although Arakan had been de facto independent since the late Pagan period, the Laungkyet dynasty of Arakan was ineffectual. Until the founding of the Mrauk-U Kingdom in 1429, Arakan was often caught between bigger neighbors, and found itself a battlefield during the Forty Years' War between Ava and Pegu. Mrauk-U went on to be a powerful kingdom in its own right between the 15th and 17th centuries, controlling East Bengal between 1459 and 1666. Arakan was the only post-Pagan kingdom not to be annexed by the Toungoo dynasty.

Toungoo Dynasty (1510–1752)

First Toungoo Empire (1510–1599)

Beginning in the 1480s, Ava faced constant internal rebellions and external attacks from the Shan States, and began to disintegrate. In 1510, Toungoo, located in the remote southeastern corner of the Ava kingdom, also declared independence. When the Confederation of Shan States conquered Ava in 1527, many Burmans fled southeast to Toungoo, the only kingdom remaining under Burman rule, and one surrounded by larger hostile kingdoms. Toungoo, led by its ambitious king Tabinshwehti and his deputy Gen. Bayinnaung, would go on to reunify the petty kingdoms that had existed since the fall of the Pagan Empire, and found the largest empire in the history of Southeast Asia. First, the upstart kingdom defeated the more powerful Hanthawaddy in the Toungoo–Hanthawaddy War (1535–1541). Tabinshwehti moved the capital to newly captured Pegu in 1539. Toungoo expanded its authority up to Pagan in 1544 but failed to conquer Arakan in 1546–1547 and Siam in 1548. Tabinshwehti's successor Bayinnaung continued the policy of expansion, conquering Ava in 1555, the nearer Shan states (1557), Lan Na (1558), Manipur (1560), the Farther/Trans-Salween Shan states (1562–1563), Siam (1564, 1569), and Lan Xang (1574), bringing much of western and central mainland Southeast Asia under his rule.

Bayinnaung put in place a lasting administrative system that reduced the power of hereditary Shan chiefs, and brought Shan customs in line with lowland norms. But he could not replicate an effective administrative system everywhere in his far-flung empire. His empire was a loose collection of former sovereign kingdoms, whose kings were loyal to him as the Cakkavatti (စကြဝတေးမင်း, [sɛʔtɕà wədé mɪ́ɴ]; Universal Ruler), not to the kingdom of Toungoo. The overextended empire unraveled soon after Bayinnaung's death in 1581. Siam broke away in 1584 and went to war with Burma until 1605. By 1593, the kingdom had lost its possessions in Siam, Lan Xang and Manipur. By 1597, all internal regions, including the city of Toungoo, the erstwhile home of the dynasty, had revolted. In 1599, the Arakanese forces, aided by Portuguese mercenaries and in alliance with the rebellious Toungoo forces, sacked Pegu. The country fell into chaos, with each region claiming a king. The Portuguese mercenary Filipe de Brito e Nicote promptly rebelled against his Arakanese masters, and established Goa-backed Portuguese rule at Thanlyin in 1603.
Restored Toungoo Kingdom (Nyaungyan Restoration) (1599–1752)

While the interregnum that followed the fall of the Pagan Empire lasted over 250 years (1287–1555), that following the fall of the First Toungoo Empire was relatively short-lived. One of Bayinnaung's sons, Nyaungyan, immediately began the reunification effort, successfully restoring central authority over Upper Burma and the nearer Shan states by 1606. His successor Anaukpetlun defeated the Portuguese at Thanlyin in 1613; recovered the upper Tenasserim coast to Tavoy and Lan Na from the Siamese by 1614; and took the trans-Salween Shan states (Kengtung and Sipsongpanna) in 1622–1626. His brother Thalun rebuilt the war-torn country. He ordered the first-ever census in Burmese history in 1635, which showed that the kingdom had about two million people. By 1650, the three able kings, Nyaungyan, Anaukpetlun and Thalun, had successfully rebuilt a smaller but far more manageable kingdom.

More importantly, the new dynasty proceeded to create a legal and political system whose basic features would continue under the Konbaung dynasty well into the 19th century. The crown completely replaced the hereditary chieftainships with appointed governorships in the entire Irrawaddy valley, and greatly reduced the hereditary rights of Shan chiefs. It also reined in the continuous growth of monastic wealth and autonomy, giving the crown a greater tax base. Its trade and secular administrative reforms built a prosperous economy for more than 80 years. Except for a few occasional rebellions and an external war (Burma defeated Siam's attempt to take Lan Na and Martaban in 1662–64), the kingdom was largely at peace for the rest of the 17th century.

The kingdom entered a gradual decline, and the authority of the "palace kings" deteriorated rapidly in the 1720s. From 1724 onwards, the Manipuris began raiding the Upper Chindwin valley. In 1727, southern Lan Na (Chiang Mai) successfully revolted, leaving just northern Lan Na (Chiang Saen) under an increasingly nominal Burmese rule. The Manipuri raids intensified in the 1730s, reaching increasingly deeper parts of central Burma. In 1740, the Mon in Lower Burma began a rebellion and founded the Restored Hanthawaddy Kingdom, and by 1745 controlled much of Lower Burma. The Siamese also moved their authority up the Tenasserim coast by 1752. Hanthawaddy invaded Upper Burma in November 1751, and captured Ava on 23 March 1752, ending the 266-year-old Toungoo dynasty.

Konbaung Dynasty (1752–1885)

Soon after the fall of Ava, a new dynasty rose in Shwebo to challenge the authority of Hanthawaddy. Over the next 70 years, the highly militaristic Konbaung dynasty went on to create the largest Burmese empire, second only to the empire of Bayinnaung. By 1759, King Alaungpaya's Konbaung forces had reunited all of Burma (and Manipur), extinguished the Mon-led Hanthawaddy dynasty once and for all, and driven out the European powers who had provided arms to Hanthawaddy: the French from Thanlyin and the English from Negrais.

Wars with Siam and China

The kingdom then went to war with Siam, which had occupied the Tenasserim coast up to Martaban during the Burmese civil war (1740–1757) and had provided shelter to Mon refugees. By 1767, the Konbaung armies had subdued much of Laos and defeated Siam. But they could not finish off the remaining Siamese resistance, as they were forced to defend against four invasions by Qing China (1765–1769).
While the Burmese defenses held in "the most disastrous frontier war the Qing dynasty had ever waged", the Burmese were preoccupied for years with another impending invasion by the world's largest empire. The Qing kept a heavy military lineup in the border areas for about a decade in an attempt to wage another war, while imposing a ban on inter-border trade for two decades. The Siamese used the Burmese preoccupation with China to recover their lost territories by 1770 and, in addition, went on to capture much of Lan Na by 1776, ending over two centuries of Burmese suzerainty over the region. Burma and Siam went to war again in 1785–1786, 1787, 1792, 1803–1808, 1809–1812 and 1849–1855, but all the wars resulted in a stalemate. After decades of war, the two countries essentially exchanged Tenasserim (to Burma) and Lan Na (to Siam).

Westward expansion and wars with the British Empire

Faced with a powerful China in the northeast and a resurgent Siam in the southeast, King Bodawpaya turned westward for expansion. He conquered Arakan in 1785, annexed Manipur in 1814, and captured Assam in 1817–1819, leading to a long, ill-defined border with British India. Bodawpaya's successor King Bagyidaw was left to put down British-instigated rebellions in Manipur in 1819 and Assam in 1821–1822. Cross-border raids by rebels from the British-protected territories and counter-cross-border raids by the Burmese led to the First Anglo-Burmese War (1824–1826). Lasting two years and costing 13 million pounds, the First Anglo-Burmese War was the longest and most expensive war in British Indian history, but it ended in a decisive British victory. Burma ceded all of Bodawpaya's western acquisitions (Arakan, Manipur and Assam) plus Tenasserim. Burma was crippled for years by the repayment of a large indemnity of one million pounds (then US$5 million). In 1852, the British unilaterally and easily seized the Pegu province in the Second Anglo-Burmese War. After the war, King Mindon tried to modernize the Burmese state and economy, and made trade and territorial concessions to stave off further British encroachments, including ceding the Karenni States to the British in 1875. Nonetheless, the British, alarmed by the consolidation of French Indochina, annexed the remainder of the country in the Third Anglo-Burmese War in 1885, and sent the last Burmese king, Thibaw, and his family into exile in India.

Administrative and economic reforms

Konbaung kings extended the administrative reforms first begun in the Restored Toungoo Dynasty period (1599–1752), and achieved unprecedented levels of internal control and external expansion. Konbaung kings tightened control in the lowlands and reduced the hereditary privileges of Shan saophas (chiefs). Konbaung officials, particularly after 1780, began commercial reforms that increased government income and rendered it more predictable. The money economy continued to gain ground. In 1857, the crown inaugurated a full-fledged system of cash taxes and salaries, assisted by the country's first standardized silver coinage.

Cultural integration continued. For the first time in history, the Burmese language and culture came to predominate in the entire Irrawaddy valley, with the Mon language and ethnicity completely eclipsed by 1830. The nearer Shan principalities adopted more lowland norms. The evolution and growth of Burmese literature and theater continued, aided by an extremely high adult male literacy rate for the era (half of all males and 5% of females).
Monastic and lay elites around the Konbaung kings, particularly from Bodawpaya's reign, also launched a major reformation of Burmese intellectual life and monastic organization and practice known as the Sudhamma Reformation. It led to, amongst other things, Burma's first proper state histories.

British rule

Britain made Burma a province of India in 1886, with the capital at Rangoon. Traditional Burmese society was drastically altered by the demise of the monarchy and the separation of religion and state. Though the war officially ended after only a couple of weeks, resistance continued in northern Burma until 1890, with the British finally resorting to a systematic destruction of villages and appointment of new officials to halt all guerrilla activity. The economic nature of society also changed dramatically. After the opening of the Suez Canal, the demand for Burmese rice grew and vast tracts of land were opened up for cultivation. However, in order to prepare the new land for cultivation, farmers were forced to borrow money from Indian moneylenders called chettiars at high interest rates, and were often foreclosed on and evicted, losing land and livestock. Most of the jobs also went to indentured Indian labourers, and whole villages became outlawed as they resorted to 'dacoity' (armed robbery). While the Burmese economy grew, all the power and wealth remained in the hands of several British firms, the Anglo-Burmese, and migrants from India. The civil service was largely staffed by the Anglo-Burmese community and Indians, and the Burmese were excluded almost entirely from military service. Though the country prospered, the Burmese people failed to reap the rewards. (See George Orwell's novel Burmese Days for a fictional account of the British in Burma.) From the colonial era through the mid-1960s, the Anglo-Burmese were to dominate the country, causing discontent among the local populace.

By around the start of the 20th century, a nationalist movement began to take shape in the form of the Young Men's Buddhist Associations (YMBA), modelled on the YMCA, as religious associations were allowed by the colonial authorities. They were later superseded by the General Council of Burmese Associations (GCBA), which was linked with Wunthanu athin or National Associations that sprang up in villages throughout Burma Proper. Between 1900 and 1911, the "Irish Buddhist" U Dhammaloka challenged Christianity and British rule on religious grounds.

A new generation of Burmese leaders arose in the early 20th century from amongst the educated classes that were permitted to go to London to study law. They came away from this experience with the belief that the Burmese situation could be improved through reform. Progressive constitutional reform in the early 1920s led to a legislature with limited powers, a university and more autonomy for Burma within the administration of India. Efforts were also undertaken to increase the representation of Burmese in the civil service. Some people began to feel that the rate of change was not fast enough and the reforms not expansive enough. In 1920 the first university students' strike in history broke out in protest against the new University Act, which the students believed would only benefit the elite and perpetuate colonial rule. 'National Schools' sprang up across the country in protest against the colonial education system, and the strike came to be commemorated as 'National Day'.
There were further strikes and anti-tax protests in the later 1920s led by the Wunthanu athins. Prominent among the political activists were Buddhist monks (pongyi), such as U Ottama and U Seinda in the Arakan, who subsequently led an armed rebellion against the British and later against the nationalist government after independence, and U Wisara, the first martyr of the movement, who died after a protracted hunger strike in prison. (One of the main thoroughfares in Yangon is named after U Wisara.)

In December 1930, a local tax protest by Saya San in Tharrawaddy quickly grew into first a regional and then a national insurrection against the government. Lasting for two years, the Galon rebellion, named after the mythical bird Garuda (enemy of the Nagas, i.e. the British) emblazoned on the pennants the rebels carried, required thousands of British troops to suppress, along with promises of further political reform. The eventual trial of Saya San, who was executed, allowed several future national leaders, including Dr Ba Maw and U Saw, who participated in his defence, to rise to prominence.

May 1930 saw the founding of the Dobama Asiayone (We Burmans Association), whose members called themselves Thakin (an ironic name, as thakin means "master" in the Burmese language, rather like the Indian 'sahib', proclaiming that they were the true masters of the country entitled to the term usurped by the colonial masters). The second university students' strike, in 1936, was triggered by the expulsion of Aung San and Ko Nu, leaders of the Rangoon University Students Union (RUSU), for refusing to reveal the name of the author who had written an article in their university magazine making a scathing attack on one of the senior university officials. It spread to Mandalay, leading to the formation of the All Burma Students Union (ABSU). Aung San and Nu subsequently joined the Thakin movement, progressing from student to national politics.

The British separated Burma from India in 1937 and granted the colony a new constitution calling for a fully elected assembly, but this proved to be a divisive issue, as some Burmese felt that it was a ploy to exclude them from any further Indian reforms, whereas other Burmese saw any action that removed Burma from the control of India as a positive step. Ba Maw served as the first prime minister of Burma, but he was succeeded by U Saw in 1939, who served as prime minister from 1940 until he was arrested on 19 January 1942 by the British for communicating with the Japanese.

A wave of strikes and protests that started from the oilfields of central Burma in 1938 became a general strike with far-reaching consequences. In Rangoon, student protesters, after successfully picketing the Secretariat, the seat of the colonial government, were charged by British mounted police wielding batons, killing a Rangoon University student called Aung Kyaw. In Mandalay, the police shot into a crowd of protesters led by Buddhist monks, killing 17 people. The movement became known as the Htaung thoun ya byei ayeidawbon (the '1300 Revolution', named after the Burmese calendar year), and 20 December, the day the first martyr Aung Kyaw fell, is commemorated by students as 'Bo Aung Kyaw Day'.

World War II and Japan

Some Burmese nationalists saw the outbreak of World War II as an opportunity to extort concessions from the British in exchange for support in the war effort. Other Burmese, such as the Thakin movement, opposed Burma's participation in the war under any circumstances.
Aung San co-founded the Communist Party of Burma (CPB) with other Thakins in August 1939. Marxist literature as well as tracts from the Sinn Féin movement in Ireland had been widely circulated and read among political activists. Aung San also co-founded the People's Revolutionary Party (PRP), renamed the Socialist Party after the World War II. He was also instrumental in founding the Bama htwet yat gaing (Freedom Bloc) by forging an alliance of the Dobama, ABSU, politically active monks and Ba Maw's Sinyètha (Poor Man's) Party. After the Dobama organization called for a national uprising, an arrest warrant was issued for many of the organization's leaders including Aung San, who escaped to China. Aung San's intention was to make contact with the Chinese Communists but he was detected by the Japanese authorities who offered him support by forming a secret intelligence unit called the Minami Kikan headed by Colonel Suzuki with the objective of closing the Burma Road and supporting a national uprising. Aung San briefly returned to Burma to enlist twenty-nine young men who went to Japan with him in order to receive military training on Hainan Island, China, and they came to be known as the "Thirty Comrades". When the Japanese occupied Bangkok in December 1941, Aung San announced the formation of the Burma Independence Army (BIA) in anticipation of the Japanese invasion of Burma in 1942. The BIA formed a provisional government in some areas of the country in the spring of 1942, but there were differences within the Japanese leadership over the future of Burma. While Colonel Suzuki encouraged the Thirty Comrades to form a provisional government, the Japanese Military leadership had never formally accepted such a plan. Eventually the Japanese Army turned to Ba Maw to form a government. During the war in 1942, the BIA had grown in an uncontrolled manner, and in many districts officials and even criminals appointed themselves to the BIA. It was reorganised as the Burma Defence Army (BDA) under the Japanese but still headed by Aung San. While the BIA had been an irregular force, the BDA was recruited by selection and trained as a conventional army by Japanese instructors. Ba Maw was afterwards declared head of state, and his cabinet included both Aung San as War Minister and the Communist leader Thakin Than Tun as Minister of Land and Agriculture as well as the Socialist leaders Thakins Nu and Mya. When the Japanese declared Burma, in theory, independent in 1943, the Burma Defence Army (BDA) was renamed the Burma National Army (BNA). It soon became apparent that Japanese promises of independence were merely a sham and that Ba Maw was deceived. As the war turned against the Japanese, they declared Burma a fully sovereign state on 1 August 1943, but this was just another facade. Disillusioned, Aung San began negotiations with Communist leaders Thakin Than Tun and Thakin Soe, and Socialist leaders Ba Swe and Kyaw Nyein which led to the formation of the Anti-Fascist Organisation (AFO) in August 1944 at a secret meeting of the CPB,the PRP and the BNA in Pegu. The AFO was later renamed the Anti-Fascist People's Freedom League(AFPFL). Thakin Than Tun and Soe, while in Insein prison in July 1941, had co-authored the Insein Manifesto which, against the prevailing opinion in the Dobama movement, identified world fascism as the main enemy in the coming war and called for temporary cooperation with the British in a broad allied coalition which should include the Soviet Union. 
Soe had already gone underground to organise resistance against the Japanese occupation, and Than Tun was able to pass on Japanese intelligence to Soe, while other Communist leaders Thakin Thein Pe and Tin Shwe made contact with the exiled colonial government in Simla, India. There were informal contacts between the AFO and the Allies in 1944 and 1945 through the British organisation Force 136. On 27 March 1945 the Burma National Army rose up in a countrywide rebellion against the Japanese. 27 March had been celebrated as 'Resistance Day' until the military renamed it 'Tatmadaw (Armed Forces) Day'. Aung San and others subsequently began negotiations with Lord Mountbatten and officially joined the Allies as the Patriotic Burmese Forces (PBF). At the first meeting, the AFO represented itself to the British as the provisional government of Burma with Thakin Soe as chairman and Aung San as a member of its ruling committee. The Japanese were routed from most of Burma by May 1945. Negotiations then began with the British over the disarming of the AFO and the participation of its troops in a post-war Burma Army. Some veterans had been formed into a paramilitary force under Aung San, called the Pyithu yèbaw tat or People's Volunteer Organisation (PVO), and were openly drilling in uniform. The absorption of the PBF was concluded successfully at the Kandy conference in Ceylon in September 1945. From the Japanese surrender to Aung San's assassination The surrender of the Japanese brought a military administration to Burma and demands to try Aung San for his involvement in a murder during military operations in 1942. Lord Mountbatten realized that this was an impossibility considering Aung San's popular appeal. After the war ended, the British Governor, Sir Reginald Dorman-Smith returned. The restored government established a political program that focused on physical reconstruction of the country and delayed discussion of independence. The AFPFL opposed the government, leading to political instability in the country. A rift had also developed in the AFPFL between the Communists and Aung San together with the Socialists over strategy, which led to Than Tun being forced to resign as general secretary in July 1946 and the expulsion of the CPB from the AFPFL the following October. Dorman-Smith was replaced by Sir Hubert Rance as the new governor, and almost immediately after his appointment the Rangoon Police went on strike. The strike, starting in September 1946, then spread from the police to government employees and came close to becoming a general strike. Rance calmed the situation by meeting with Aung San and convincing him to join the Governor's Executive Council along with other members of the AFPFL. The new executive council, which now had increased credibility in the country, began negotiations for Burmese independence, which were concluded successfully in London as the Aung San-Attlee Agreement on 27 January 1947. The agreement left parts of the communist and conservative branches of the AFPFL dissatisfied, however, sending the Red Flag Communists led by Thakin Soe underground and the conservatives into opposition. Aung San also succeeded in concluding an agreement with ethnic minorities for a unified Burma at the Panglong Conference on 12 February, celebrated since as 'Union Day'. U Aung Zan Wai, U Pe Khin, Major Aung, Sir Maung Gyi and Dr. Sein Mya Maung. 
These men were among the most important negotiators and leaders of the historic Panglong Conference, held with the national leader General Aung San and other top leaders in 1947. All of these leaders decided to join together to form the Union of Burma, and Union Day remains one of the great celebrations in Burmese history. Shortly after the conference, rebellion broke out in the Arakan led by the veteran monk U Seinda, and it began to spread to other districts. The popularity of the AFPFL, now dominated by Aung San and the Socialists, was eventually confirmed when it won an overwhelming victory in the April 1947 constituent assembly elections. On 19 July 1947 U Saw, a conservative pre-war Prime Minister of Burma, engineered the assassination of Aung San and several members of his cabinet, including Aung San's eldest brother Ba Win, while they were meeting in the Secretariat. 19 July has been commemorated since as Martyrs' Day. Thakin Nu, the Socialist leader, was then asked to form a new cabinet, and he presided over Burmese independence on 4 January 1948. The popular sentiment to part with the British was so strong at the time that Burma opted not to join the British Commonwealth, unlike India or Pakistan. Independent Burma The first years of Burmese independence were marked by successive insurgencies by the Red Flag Communists led by Thakin Soe, the White Flag Communists led by Thakin Than Tun, the Yèbaw Hpyu (White-band PVO) led by Bo La Yaung, a member of the Thirty Comrades, army rebels calling themselves the Revolutionary Burma Army (RBA) led by Communist officers Bo Zeya, Bo Yan Aung and Bo Yè Htut (all three of them members of the Thirty Comrades), Arakanese Muslims or the Mujahid, and the Karen National Union (KNU). Burma accepted foreign assistance in rebuilding the country in these early years, but continued American support for the Chinese Nationalist military presence in Burma finally resulted in the country rejecting most foreign aid, refusing to join the South-East Asia Treaty Organization (SEATO) and supporting the Bandung Conference of 1955. Burma generally strove to be impartial in world affairs and was one of the first countries in the world to recognize Israel and the People's Republic of China. By 1958 the country was beginning to recover economically, but it was falling apart politically owing to a split of the AFPFL into two factions, one led by Thakins Nu and Tin, the other by Ba Swe and Kyaw Nyein. This happened despite the unexpected success of U Nu's 'Arms for Democracy' offer, taken up by U Seinda in the Arakan, the Pa-O, and some Mon and Shan groups, and, more significantly, by the PVO, which surrendered its arms. The situation in parliament, however, became very unstable, with U Nu surviving a no-confidence vote only with the support of the opposition National United Front (NUF), believed to have 'crypto-communists' amongst its ranks. Army hardliners now saw the 'threat' of the CPB coming to an agreement with U Nu through the NUF, and in the end U Nu 'invited' Army Chief of Staff General Ne Win to take over the country. Over 400 'communist sympathisers' were arrested, of whom 153 were deported to the Coco Islands in the Andaman Sea. Among them was the NUF leader Aung Than, older brother of Aung San. The Botataung, Kyemon and Rangoon Daily newspapers were also closed down.
Ne Win's caretaker government successfully stabilised the situation and paved the way for new general elections in 1960, which returned U Nu's Union Party with a large majority. The situation did not remain stable for long: the Shan Federal Movement, started by the Nyaung Shwe Sawbwa Sao Shwe Thaik (the first president of independent Burma, 1948–52) and aspiring to a 'loose' federation, was seen as a separatist movement insisting that the government honour the right to secession after 10 years provided for by the 1947 Constitution. In 1959 Ne Win had already succeeded in stripping the Shan Sawbwas of their feudal powers in exchange for comfortable pensions for life. On 2 March 1962, Ne Win, with sixteen other senior military officers, staged a coup d'état, arrested U Nu, Sao Shwe Thaik and several others, and declared a socialist state to be run by their Union Revolutionary Council. Sao Shwe Thaik's son, Sao Mye Thaik, was shot dead in what was generally described as a 'bloodless' coup. Thibaw Sawbwa Sao Kya Seng also disappeared mysteriously after being stopped at a checkpoint near Taunggyi. A number of protests followed the coup, and initially the military's response was mild. However, on 7 July 1962, a peaceful student protest on the Rangoon University campus was suppressed by the military, which killed over 100 students. The next day, the army blew up the Students Union building. Peace talks were convened between the Revolutionary Council and various armed insurgent groups in 1963, but without any breakthrough; during the talks, and in the aftermath of their failure, hundreds were arrested in Rangoon and elsewhere, from both the right and the left of the political spectrum. All opposition parties were banned on 28 March 1964. The Kachin insurgency by the Kachin Independence Organisation (KIO) had begun earlier, in 1961, triggered by U Nu's declaration of Buddhism as the state religion, and the Shan State Army (SSA), led by Sao Shwe Thaik's wife, the Mahadevi, and his son Chao Tzang Yaunghwe, launched a rebellion in 1964 as a direct consequence of the 1962 military coup. Ne Win quickly took steps to transform Burma into his vision of a 'socialist state' and to isolate the country from contact with the rest of the world. A one-party system was established with his newly formed Burma Socialist Programme Party (BSPP) in complete control. Commerce and industry were nationalized across the board, but the economy grew slowly at first, if at all, as the government put too much emphasis on industrial development at the expense of agriculture. In April 1972, General Ne Win and the rest of the Union Revolutionary Council retired from the military, but now, as U Ne Win, he continued to run the country through the BSPP. A new constitution promulgated in January 1974 created a People's Assembly (Pyithu Hluttaw), which held supreme legislative, executive, and judicial authority, as well as local People's Councils. Ne Win became the president of the new government. Beginning in May 1974, a wave of strikes hit Rangoon and elsewhere in the country against a backdrop of corruption, inflation and food shortages, especially of rice. In Rangoon, workers were arrested at the Insein railway yard, and troops opened fire on workers at the Thamaing textile mill and the Simmalaik dockyard. In December 1974, the biggest anti-government demonstrations to date broke out over the funeral of former UN Secretary-General U Thant.
U Thant had been former prime minister U Nu's closest advisor in the 1950s and was seen as a symbol of opposition to the military regime. The Burmese people felt that, because of his association with U Nu, U Thant was denied the state funeral that he deserved as a statesman of international stature. On 23 March 1976, over 100 students were arrested for holding a peaceful ceremony (Hmaing yabyei) to mark the centenary of the birth of Thakin Kodaw Hmaing, regarded as the greatest Burmese poet, writer and nationalist leader of 20th-century Burma. He had inspired a whole generation of Burmese nationalists and writers with his work, mainly written in verse, fostering immense pride in their history, language and culture and urging them to take direct action such as strikes by students and workers. It was Hmaing, as leader of the mainstream Dobama, who sent the Thirty Comrades abroad for military training; after independence he devoted his life to internal peace and national reconciliation until his death at the age of 88 in 1964. Hmaing lies buried in a mausoleum at the foot of the Shwedagon Pagoda. U Nu, after his release from prison in October 1966, had left Burma in April 1969 and formed the Parliamentary Democracy Party (PDP) the following August in Bangkok, Thailand, with former members of the Thirty Comrades: Bo Let Ya, co-founder of the CPB and former Minister of Defence and deputy prime minister, and Bo Yan Naing, together with U Thwin, ex-BIA and former Minister of Trade. Another member of the Thirty Comrades, Bohmu Aung, former Minister of Defence, joined later. A fourth, Bo Setkya, who had gone underground after the 1962 coup, died in Bangkok shortly before U Nu arrived. The PDP launched an armed rebellion across the Thai border from 1972 until 1978, when Bo Let Ya was killed in an attack by the Karen National Union (KNU). U Nu, Bohmu Aung and Bo Yan Naing returned to Rangoon after the 1980 amnesty. Ne Win also secretly held peace talks later in 1980 with the KIO and the CPB, again ending in a deadlock as before. Crisis and 1988 Uprising Ne Win retired as president in 1981, but remained in power as Chairman of the BSPP until his sudden and unexpected announcement that he would step down on 23 July 1988. In the 1980s the economy began to grow as the government relaxed restrictions on foreign aid, but by the late 1980s falling commodity prices and rising debt led to an economic crisis. This led to economic reforms in 1987–88 that relaxed socialist controls and encouraged foreign investment. It was not enough, however, to stop growing turmoil in the country, compounded by the periodic 'demonetization' of certain bank notes. The last of these demonetizations, decreed by Burma's de facto ruler U Ne Win in September 1987, wiped out the savings of the vast majority of people and caused a severe downturn in the economy. The main reason for the cancellation of these notes was superstition on Ne Win's part: he considered the number nine his lucky number and allowed only 45 and 90 kyat notes, because these are divisible by nine (BBC News Website, http://news.bbc.co.uk/2/hi/asia-pacific/7012158.stm, Bilal Arif). Burma's admission to Least Developed Country status by the UN the following December highlighted its economic bankruptcy. Triggered by the brutal police repression of student-led protests, which caused the deaths of over a hundred students and civilians in March and June 1988, widespread protests and demonstrations broke out throughout the country on 8 August.
The military responded by firing into the crowds, alleging Communist infiltration. Violence, chaos and anarchy reigned. Civil administration had ceased to exist, and by September of that year the country was on the verge of a revolution. The armed forces, under the nominal command of General Saw Maung, staged a coup on 18 September to restore order. During the 8888 Uprising, as it became known, the military killed thousands. The military swept aside the Constitution of 1974 in favor of martial law under the State Law and Order Restoration Council (SLORC), with Saw Maung as chairman and prime minister. At a special six-hour press conference on 5 August 1989, Brig. Gen. Khin Nyunt, the SLORC's Secretary-1 and chief of the Military Intelligence Service (MIS), claimed that the uprising had been orchestrated by the Communist Party of Burma through its underground organisation. Although there had inevitably been some underground CPB presence, as well as that of ethnic insurgent groups, there was no evidence of their being in charge to any extent. In fact, in March 1989 the CPB leadership was overthrown by a rebellion of the Kokang and Wa troops that it had come to depend on after losing its former strongholds in central Burma and re-establishing bases in the northeast in the late 1960s; the Communist leaders were soon forced into exile across the Chinese border. The military government announced a change of name for the country in English from Burma to Myanmar in 1989. It also continued the economic reforms started by the old regime and called for a Constituent Assembly to revise the 1974 Constitution. This led to multiparty elections in May 1990 in which the National League for Democracy (NLD) won a landslide victory over the National Unity Party (NUP, the successor to the BSPP) and about a dozen smaller parties. The military, however, would not let the assembly convene, and continued to hold the two leaders of the NLD, U Tin U and Aung San Suu Kyi, daughter of Aung San, under the house arrest imposed on them the previous year. Burma came under increasing international pressure to convene the elected assembly, particularly after Aung San Suu Kyi was awarded the Nobel Peace Prize in 1991, and also faced economic sanctions. In April 1992 the military replaced Saw Maung with General Than Shwe. Than Shwe released U Nu from prison and relaxed some of the restrictions on Aung San Suu Kyi's house arrest, finally releasing her in 1995, although she was forbidden to leave Rangoon. Than Shwe also finally allowed a National Convention to meet in January 1993, but insisted that the assembly preserve a major role for the military in any future government, and suspended the convention from time to time. The NLD, fed up with the interference, walked out in late 1995, and the assembly was finally dismissed in March 1996 without producing a constitution. During the 1990s the military regime also had to deal with several insurgencies by tribal minorities along its borders. General Khin Nyunt was able to negotiate cease-fire agreements that ended the fighting with the Kokang, hill tribes such as the Wa, and the Kachin, but the Karen would not negotiate. The military finally captured the main Karen base at Manerplaw in spring 1995, but there has still been no final peace settlement. Khun Sa, a major opium warlord who nominally controlled parts of Shan State, made a deal with the government in December 1995 after U.S. pressure.
After the failure of the National Convention to create a new constitution, tensions between the government and the NLD mounted, resulting in two major crackdowns on the NLD in 1996 and 1997. The SLORC was abolished in November 1997 and replaced by the State Peace and Development Council (SPDC), but this was merely a cosmetic change. Continuing reports of human rights violations in Burma led the United States to intensify sanctions in 1997, and the European Union followed suit in 2000. The military again placed Aung San Suu Kyi under house arrest from September 2000 until May 2002, when the restrictions on her travel outside Rangoon were also lifted. Reconciliation talks were held with the government, but these came to a stalemate, and Suu Kyi was once again taken into custody in May 2003 after an ambush on her motorcade, reportedly by a pro-military mob. The government also carried out another large-scale crackdown on the NLD, arresting many of its leaders and closing down most of its offices. The situation in Burma remains tense to this day. In August 2003, Khin Nyunt announced a seven-step "roadmap to democracy", which the government claims it is in the process of implementing. There is no timetable associated with the government's plan, nor any conditionality or independent mechanism for verifying that it is moving forward. For these reasons, most Western governments and Burma's neighbors have been skeptical and critical of the roadmap. On 17 February 2005, the government reconvened the National Convention, for the first time since 1993, in an attempt to rewrite the Constitution. However, major pro-democracy organisations and parties, including the National League for Democracy, were barred from participating; the military allowed only selected smaller parties. The convention was adjourned once again in January 2006. In November 2005, the military junta started moving the government from Yangon to a newly designated capital near Kyatpyay, just outside Pyinmana. This public action followed a long-term unofficial policy of moving critical military and government infrastructure away from Yangon to avoid a repetition of the events of 1988. On Armed Forces Day (27 March 2006), the capital was officially named Naypyidaw Myodaw (lit. Royal City of the Seat of Kings). In November 2006, the International Labour Organization (ILO) announced that it would seek, at the International Court of Justice, "to prosecute members of the ruling Myanmar junta for crimes against humanity" over the continuous forced labour of its citizens by the military. According to the ILO, an estimated 800,000 people are subject to forced labour in Myanmar. 2007 anti-government protests The 2007 Burmese anti-government protests were a series of demonstrations that started in Burma on 15 August 2007. The immediate cause of the protests was mainly the unannounced decision of the ruling junta, the State Peace and Development Council, to remove fuel subsidies, which caused the price of diesel and petrol to rise by as much as 100% and the price of compressed natural gas for buses to increase fivefold in less than a week. The protest demonstrations were at first dealt with quickly and harshly by the junta, with dozens of protesters arrested and detained. Starting on 18 September, the protests were led by thousands of Buddhist monks and were allowed to proceed until a renewed government crackdown on 26 September.
During the crackdown, there were rumors of disagreement within the Burmese military, but none were confirmed. At the time, independent sources reported, through pictures and accounts, that 30 to 40 monks and 50 to 70 civilians had been killed and around 200 people beaten. However, other sources give higher figures. In a White House statement, President Bush said: "Monks have been beaten and killed.... Thousands of pro-democracy protesters have been arrested". Some news reports referred to the protests as the Saffron Revolution. On 7 February 2008, the SPDC announced that a referendum on the Constitution would be held, with elections to follow by 2010. The 2008 Burmese constitutional referendum was held on 10 May and promised a "discipline-flourishing democracy" for the country in the future. Cyclone Nargis On 3 May 2008, Cyclone Nargis devastated the country when winds of up to 215 km/h (135 mph) touched land in the densely populated, rice-farming delta of the Irrawaddy Division. It is estimated that more than 130,000 people died or went missing and that damage totalled US$10 billion; it was the worst natural disaster in Burmese history. The World Food Programme reported that "some villages have been almost totally eradicated and vast rice-growing areas are wiped out." The United Nations estimated that as many as 1 million people were left homeless, and the World Health Organization "has received reports of malaria outbreaks in the worst-affected area." Yet in the critical days following this disaster, Burma's isolationist regime complicated recovery efforts by delaying the entry of United Nations planes delivering medicine, food, and other supplies. The government's failure to permit entry for large-scale international relief efforts was described by the United Nations as "unprecedented." The 2011–2012 Burmese democratic reforms are an ongoing series of political, economic and administrative reforms in Burma undertaken by the military-backed government. These reforms include the release of pro-democracy leader Aung San Suu Kyi from house arrest and subsequent dialogues with her, establishment of the National Human Rights Commission, general amnesties of more than 200 political prisoners, institution of new labour laws that allow labour unions and strikes, relaxation of press censorship, and regulation of currency practices. As a consequence of the reforms, ASEAN approved Burma's bid for the chairmanship in 2014. United States Secretary of State Hillary Clinton visited Burma on 1 December 2011 to encourage further progress; it was the first visit by a Secretary of State in more than fifty years. United States President Barack Obama visited one year later, becoming the first US president to visit the country. Suu Kyi's party, the National League for Democracy, participated in by-elections held on 1 April 2012 after the government abolished the laws that had led to the NLD's boycott of the 2010 general election. She led the NLD to a landslide victory in the by-elections, winning 41 out of the 44 contested seats, with Suu Kyi herself winning a seat representing Kawhmu Constituency in the lower house of the Burmese Parliament. However, uncertainties remain, as some other political prisoners have not been released and clashes between Burmese troops and local insurgent groups continue.
http://en.wikipedia.org/wiki/History_of_Burma
13
14
The strengths of intermolecular forces of different substances vary over a wide range. However, they are generally much weaker than ionic or covalent bonds (Figure 11.2). For example, only 16 kJ/mol is required to overcome the intermolecular attractions between HCl molecules in liquid HCl in order to vaporize it. In contrast, the energy required to break the covalent bond to dissociate HCl into H and Cl atoms is 431 kJ/mol. Less energy is required to vaporize a liquid or to melt a solid than to break covalent bonds in molecules. Thus, when a molecular substance like HCl changes from solid to liquid to gas, the molecules themselves remain intact. Figure 11.2 Comparison of a covalent bond (an intramolecular force) and an intermolecular attraction. Many properties of liquids, including their boiling points, reflect the strengths of the intermolecular forces. A liquid boils when bubbles of its vapor form within the liquid. The molecules of a liquid must overcome their attractive forces in order to separate and form a vapor. The stronger the attractive forces, the higher is the temperature at which the liquid boils. Similarly, the melting points of solids increase with an increase in the strengths of the intermolecular forces. Three types of intermolecular attractive forces are known to exist between neutral molecules: dipole-dipole forces, London dispersion forces, and hydrogen-bonding forces. These forces are also called van der Waals forces after Johannes van der Waals, who developed the equation for predicting the deviation of gases from ideal behavior. Another kind of attractive force, the ion-dipole force, is important in solutions. As a group, intermolecular forces tend to be less than 15 percent as strong as covalent or ionic bonds. As we consider these forces, notice that each is electrostatic in nature, involving attractions between positive and negative species. An ion-dipole force exists between an ion and the partial charge on the end of a polar molecule. Polar molecules are dipoles; they have a positive end and a negative end. Recall, for example, that HCl is a polar molecule because of the difference in the electronegativities of the H and Cl atoms. Positive ions are attracted to the negative end of a dipole, whereas negative ions are attracted to the positive end, as shown in Figure 11.3. The magnitude of the attraction increases as either the charge of the ion or the magnitude of the dipole moment increases. Ion-dipole forces are especially important for solutions of ionic substances in polar liquids, for example, a solution of NaCl in water. We will say more about such solutions in Section 13.1. Figure 11.3 Illustration of the preferred orientation of polar molecules toward ions. The negative end of the polar molecule is oriented toward a cation (a), the positive end toward an anion (b). A dipole-dipole force exists between neutral polar molecules. Polar molecules attract each other when the positive end of one molecule is near the negative end of another, as in Figure 11.4(a). Dipole-dipole forces are effective only when polar molecules are very close together, and they are generally weaker than ion-dipole forces. Figure 11.4 (a) The electrostatic interaction of two polar molecules. (b) The interaction of many dipoles in a condensed state. In liquids polar molecules are free to move with respect to one another. As shown in Figure 11.4(b), they will sometimes be in an orientation that is attractive, and sometimes in an orientation that is repulsive. 
Two molecules that are attracting each other spend more time near each other than do two that are repelling each other. Thus, the overall effect is a net attraction. When we examine various liquids, we find that for molecules of approximately equal mass and size, the strengths of intermolecular attractions increase with increasing polarity. We can see this trend in Table 11.2, which lists several substances with similar molecular weights but different dipole moments. Notice that the higher the dipole moment, the higher is the boiling point. What kind of interparticle forces can exist between nonpolar atoms or molecules? Clearly, there can be no dipole-dipole forces when the particles are nonpolar. Yet the fact that nonpolar gases can be liquefied tells us that there must be some kind of attractive interactions between the particles. The origin of this attraction was first proposed in 1930 by Fritz London, a German-American physicist. London recognized that the motion of electrons in an atom or molecule can create an instantaneous dipole moment. Let's consider helium atoms as an example. In a collection of helium atoms the average distribution of the electrons about each nucleus is spherically symmetrical. The atoms are nonpolar and possess no permanent dipole moment. The instantaneous distribution of the electrons, however, can be different from the average distribution. For example, if we could freeze the motion of the electrons in a helium atom at any given instant, both electrons could be on one side of the nucleus. At just that instant, then, the atom would have an instantaneous dipole moment. Because electrons repel one another, the motions of electrons on one atom influence the motions of electrons on its near neighbors. Thus, the temporary dipole on one atom can induce a similar dipole on an adjacent atom, causing the atoms to be attracted to each other as shown in Figure 11.5. This attractive interaction is called the London dispersion force (or merely the dispersion force). This force, like dipole-dipole forces, is significant only when molecules are very close together. (In 1993 researchers at the University of Minnesota conducted experiments in which He2 was detected at temperatures below 0.001 K. This "molecule" does not contain an electron-pair bond but instead is held together by the London-dispersion force of attraction. The HeHe "bond" is over 50 Å long and the bond enthalpy is less than 0.1 J/mol!) Figure 11.5 Two schematic representations of the instantaneous dipoles on two adjacent helium atoms, showing the electrostatic attraction between them. The ease with which the charge distribution in a molecule can be distorted by an external electric field is called its polarizability. We can think of the polarizability of a molecule as a measure of the "squashiness" of its electron cloud; the greater the polarizability of a molecule, the more easily its electron cloud can be distorted to give a momentary dipole. Therefore, more polarizable molecules have stronger London dispersion forces. In general, larger molecules tend to have greater polarizabilities because they have a greater number of electrons and their electrons are farther from the nuclei. Therefore, the strength of the London dispersion forces tends to increase with increasing molecular size. Because molecular size and mass generally parallel each other, dispersion forces tend to increase in strength with increasing molecular weight. 
Thus, the boiling points of the halogens and the noble gases increase with increasing molecular weight (Table 11.3). The shapes of molecules can also play a role in the magnitudes of dispersion forces. For example, n-pentane and neopentane, illustrated in Figure 11.6, have the same molecular formula, C5H12, yet the boiling point of n-pentane is 27 K higher than that of neopentane. (The n in n-pentane is an abbreviation for the word normal. A normal hydrocarbon is one in which carbon atoms are arranged in a straight chain.) The difference can be traced to the different shapes of the two molecules. The overall attraction between molecules is greater in the case of n-pentane because the molecules can come in contact over the entire length of the long, somewhat cylindrically shaped molecule. Less contact is possible between the more compact and nearly spherical molecules of neopentane. Figure 11.6 Molecular shape affects intermolecular attraction. The n-pentane molecules make more contact with each other than do the neopentane molecules. Thus, n-pentane has the greater intermolecular attractive forces and therefore has the higher boiling point (bp). Dispersion forces operate between all molecules, whether they are polar or nonpolar. In fact, dispersion forces between polar molecules commonly contribute more to intermolecular attractions than do dipole-dipole forces. In the case of HCl, for example, it is estimated that dispersion forces account for more than 80 percent of the total attraction between molecules; dipole-dipole attractions account for the rest. When comparing the relative strengths of intermolecular attractions, the following generalizations are useful: There is, however, a type of intermolecular attraction that is typically stronger than dispersion forces—the hydrogen bond—which we consider after the next Sample Exercise. The dipole moments of methyl chloride, CH3Cl, and methyl iodide, CH3I, are 1.87 D and 1.62 D, respectively. (a) Which of these substances will have the greater dipole-dipole attractions among its molecules? (b) Which of these substances will have the greater London dispersion attractions? (c) The boiling points of CH3Cl and CH3I are 249.0 K and 315.6 K, respectively. Which substance has the greatest overall attractive forces? SOLUTION (a) Dipole-dipole attractions increase in magnitude as the dipole moment of the molecule increases. Thus, CH3Cl molecules attract each other by stronger dipole-dipole forces than CH3I molecules do. (b) When molecules differ in their molecular weights, the more massive molecule generally has the stronger dispersion attractions. In this case CH3I (142.0 amu) is much more massive than CH3Cl (50.5 amu). Thus, the dispersion forces will be stronger for CH3I. (c) Because CH3I has the higher boiling point, we can conclude that more energy is required to overcome attractive forces between CH3I molecules. Thus, the total intermolecular attractions are stronger for CH3I, suggesting that the dispersion forces are the decisive ones in comparing these two substances. Of Br2, Ne, HCl, HBr, and N2, which is likely to have (a) the largest intermolecular dispersion forces; (b) the largest dipole-dipole attractive forces? Answers: (a) Br2; (b) HCl Figure 11.7 shows the boiling points of the simple hydrogen compounds of group 4A and 6A elements. In general, the boiling point increases with increasing molecular weight, owing to increased dispersion forces. 
The notable exception to this trend is H2O, whose boiling point is much higher than we would expect on the basis of its molecular weight. The compounds NH3 and HF also have abnormally high boiling points. These compounds also have many other characteristics that distinguish them from other substances of similar molecular weight and polarity. For example, water has a high melting point, a high specific heat, and a high heat of vaporization. Each of these properties indicates that the intermolecular forces between H2O molecules are abnormally strong. Figure 11.7 Boiling points of the group 4A (bottom) and 6A (top) hydrides as a function of molecular weight. These strong intermolecular attractions in H2O result from hydrogen bonding. Hydrogen bonding is a special type of intermolecular attraction that exists between the hydrogen atom in a polar bond (particularly an H-F, H-O, or H-N bond) and an unshared electron pair on a nearby small electronegative ion or atom (usually an F, O, or N atom on another molecule). For example, a hydrogen bond exists between the H atom in an HF molecule and the F atom of an adjacent HF molecule, F-H···F-H (where the dots represent the hydrogen bond between the molecules). Several additional examples are shown in Figure 11.8. Figure 11.8 Examples of hydrogen bonding. The solid lines represent covalent bonds; the red dotted lines represent hydrogen bonds. Hydrogen bonds can be considered unique dipole-dipole attractions. Because F, N, and O are so electronegative, a bond between hydrogen and any of these three elements is quite polar, with hydrogen at the positive end. The hydrogen atom has no inner core of electrons. Thus, the positive side of the bond dipole has the concentrated charge of the partially exposed, nearly bare proton of the hydrogen nucleus. This positive charge is attracted to the negative charge of an electronegative atom in a nearby molecule. Because the electron-poor hydrogen is so small, it can approach an electronegative atom very closely and thus interact strongly with it. The energies of hydrogen bonds vary from about 4 kJ/mol to 25 kJ/mol or so. Thus, they are much weaker than ordinary chemical bonds (see Table 8.3). Nevertheless, because hydrogen bonds are generally stronger than dipole-dipole or dispersion forces, they play important roles in many chemical systems, including those of biological significance. For example, hydrogen bonds are important in stabilizing the structures of proteins, which are key parts of skin, muscles, and other structural components of animal tissues (see Section 25.7). They are also responsible for the way that DNA is able to carry genetic information (Section 25.9). One of the remarkable consequences of hydrogen bonding is found in comparing the density of ice to that of liquid water. In most substances the molecules in the solid are more densely packed than in the liquid. Thus, the solid phase is denser than the liquid phase (Figure 11.9). By contrast, the density of ice at 0°C (0.917 g/mL) is less than that of liquid water at 0°C (1.00 g/mL). The low density of ice compared to that of water can be understood in terms of hydrogen-bonding interactions between water molecules. The interactions in the liquid are random. However, when water freezes, the molecules assume the ordered, open arrangement shown in Figure 11.10, which leads to a less dense structure for ice compared to that of water: A given mass of ice occupies a greater volume than does the same mass of liquid water.
The structure of ice permits the maximum number of hydrogen-bonding interactions between the H2O molecules. The density of ice compared to water profoundly affects life on Earth. Because ice is less dense than water, ice floats (Figure 11.9). When ice forms in cold weather, it covers the top of the water, thereby insulating the water below. If ice were more dense than water, ice forming at the top of a lake would sink to the bottom, and the lake could freeze solid. Most aquatic life could not survive under these conditions. The expansion of water upon freezing (Figure 11.11) is also what causes water pipes to break in freezing weather. In which of the following substances is hydrogen bonding possible: methane (CH4), hydrazine (H2NNH2), methyl fluoride (CH3F), or hydrogen sulfide (H2S)? SOLUTION All of these compounds contain hydrogen, but hydrogen bonding normally requires that the hydrogen be directly bonded to an N, O, or F atom. There also needs to be an unshared pair of electrons on an electronegative atom (usually N, O, or F) in a nearby molecule. These criteria clearly eliminate CH4 and H2S, which do not contain H bonded to N, O, or F. They also eliminate CH3F, whose Lewis structure shows a central C atom surrounded by three H atoms and an F atom. (Carbon always forms four bonds, whereas hydrogen and fluorine form one each.) Because the molecule contains a C-F bond and not an H-F one, it does not form hydrogen bonds. In the case of H2NNH2, however, we find N-H bonds, and therefore hydrogen bonds exist between the molecules. In which of the following substances is significant hydrogen bonding possible: methylene chloride (CH2Cl2), phosphine (PH3), hydrogen peroxide (HOOH), or acetone (CH3COCH3)? Answer: HOOH Let's put the intermolecular forces in perspective. To summarize, we can identify the intermolecular forces that are operative in a substance by considering its composition and structure. Dispersion forces are found in all substances. The strengths of these forces increase with increased molecular weight and also depend on molecular shapes. Dipole-dipole forces add to the effect of dispersion forces and are found in polar molecules. Hydrogen bonds, which are recognized by the presence of H atoms bonded to F, O, or N, also add to the effect of dispersion forces. Hydrogen bonds tend to be the strongest type of intermolecular force. None of these intermolecular forces, however, is as strong as ordinary ionic or covalent bonds. Figure 11.12 presents a systematic way of identifying the kinds of intermolecular forces in a particular system, including ion-dipole and ion-ion forces. Figure 11.12 Flowchart for recognizing the major types of intermolecular forces. London dispersion forces occur in all instances. The strengths of the other forces generally increase proceeding from left to right. List the substances BaCl2, H2, CO, HF, and Ne in order of increasing boiling points. SOLUTION The boiling point depends in part on the attractive forces in the liquid. These are stronger for ionic substances than for molecular ones, so BaCl2 has the highest boiling point. The intermolecular forces of the remaining substances depend on molecular weight, polarity, and hydrogen bonding. The other molecular weights are H2 (2), CO (28), HF (20), and Ne (20). The boiling point of H2 should be the lowest because it is nonpolar and has the lowest molecular weight. The molecular weights of CO, HF, and Ne are roughly the same. Because HF can hydrogen bond, it has the highest boiling point of the three.
Next is CO, which is slightly polar and has the highest molecular weight. Finally comes Ne, which is nonpolar and should have the lowest boiling point of these three. The predicted order of boiling points is therefore H2 < Ne < CO < HF < BaCl2. The actual normal boiling points are H2 (20 K), Ne (27 K), CO (83 K), HF (293 K), and BaCl2 (1813 K), in agreement with our predictions. (a) Identify the intermolecular forces present in the following substances, and (b) select the substance with the highest boiling point: CH3CH3, CH3OH, CH3CH2OH. Answers: (a) CH3CH3 has only dispersion forces, whereas the other two substances have both dispersion forces and hydrogen bonds; (b) CH3CH2OH
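The qualitative decision process described above (dispersion forces in every substance, dipole-dipole forces added for polar molecules, hydrogen bonding when H is attached to N, O, or F, and ionic attractions strongest of all) can be sketched in a few lines of code. The Python snippet below is only an illustrative sketch and is not part of the original text: the substance names come from the sample exercise, but the molar masses and dipole moments are rounded values supplied here for illustration, and the ranking key is a crude heuristic for the qualitative trend rather than a physical model.

```python
# Minimal sketch of the "which intermolecular forces are present?" reasoning.
# Descriptor values below are rounded, illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Substance:
    name: str
    molar_mass: float           # g/mol, a rough proxy for dispersion strength
    is_ionic: bool = False
    dipole_moment: float = 0.0  # debye; > 0 means the molecule is polar
    h_on_nof: bool = False      # True if an H atom is bonded to N, O, or F

def forces(s: Substance) -> list[str]:
    """List the attractive forces expected for a substance."""
    if s.is_ionic:
        return ["ionic bonding"]
    found = ["dispersion"]              # present in all molecules
    if s.dipole_moment > 0:
        found.append("dipole-dipole")
    if s.h_on_nof:
        found.append("hydrogen bonding")
    return found

def rough_rank_key(s: Substance) -> tuple:
    """Crude ordering: ionic > hydrogen-bonded > polar > nonpolar,
    with molar mass breaking ties (heavier means stronger dispersion)."""
    tier = 3 if s.is_ionic else 2 if s.h_on_nof else 1 if s.dipole_moment > 0 else 0
    return (tier, s.molar_mass)

samples = [
    Substance("BaCl2", 208.2, is_ionic=True),
    Substance("H2", 2.0),
    Substance("CO", 28.0, dipole_moment=0.11),
    Substance("HF", 20.0, dipole_moment=1.82, h_on_nof=True),
    Substance("Ne", 20.2),
]

for s in sorted(samples, key=rough_rank_key):
    print(f"{s.name:6s} {', '.join(forces(s))}")
```

Run in order of increasing rank, the printout lists H2, Ne, CO, HF, and then BaCl2, the same qualitative ordering of boiling points reached in the sample exercise; real boiling points, of course, depend on more than this simple tiering.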
http://wps.prenhall.com/wps/media/objects/3311/3391416/blb1102.html
13
16
Formulas in Word Problems: Introduction to Cost and Profit For some word problems, nothing more will be required of you than to substitute a given value into a formula, which is either given to you or is readily available. The most difficult part of these problems will be to decide which variable the given quantity will be. For example, the formula might look like R = 8q and the value given to you is 440. Is R = 440 or is q = 440? The answer lies in the way the variables are described. In R = 8q, it might be that R represents revenue (in dollars) and q represents the quantity (in units) sold of some item. "If 440 units were sold, what is the revenue?" Here 440 is q. You would then solve R = 8(440). "If the revenue is $440, how many units were sold?" Here 440 is R, and you would solve 440 = 8q. Cost Formula Word Problem The cost formula for a manufacturer's product is C = 5000 + 2x, where C is the cost (in dollars) and x is the number of units manufactured. (a) If no units are produced, what is the cost? (b) If the manufacturer produces 3000 units, what is the cost? (c) If the manufacturer has spent $16,000 on production, how many units were manufactured? Answer these questions by substituting the numbers into the formula. (a) If no units are produced, then x = 0, and C = 5000 + 2x becomes C = 5000 + 2(0) = 5000. The cost is $5,000. (b) If the manufacturer produces 3000 units, then x = 3000, and C = 5000 + 2x becomes C = 5000 + 2(3000) = 5000 + 6000 = 11,000. The manufacturer's cost would be $11,000. (c) The manufacturer's cost is $16,000, so C = 16,000. Substitute C = 16,000 into C = 5000 + 2x to get 16,000 = 5000 + 2x, so 2x = 11,000 and x = 5500. There were 5500 units produced. Profit Formula Word Problem The profit formula for a manufacturer's product is P = 2x – 4000, where x is the number of units sold and P is the profit (in dollars). (a) What is the profit when 12,000 units were sold? (b) What is the loss when 1500 units were sold? (c) How many units must be sold for the manufacturer to have a profit of $3000? (d) How many units must be sold for the manufacturer to break even? (This question could have been phrased, "How many units must be sold in order for the manufacturer to cover its costs?") (a) If 12,000 units are sold, then x = 12,000. The profit equation then becomes P = 2(12,000) – 4000 = 24,000 – 4000 = 20,000. The profit is $20,000. (b) Think of a loss as a negative profit. When 1500 units are sold, P = 2x – 4000 becomes P = 2(1500) – 4000 = 3000 – 4000 = –1000. The manufacturer loses $1000 when 1500 units are sold. (c) If the profit is $3000, then P = 3000; P = 2x – 4000 becomes 3000 = 2x – 4000, so 2x = 7000 and x = 3500. A total of 3500 units must be sold. (d) The break-even point occurs when the profit is zero, that is, when P = 0. Then P = 2x – 4000 becomes 0 = 2x – 4000, so 2x = 4000 and x = 2000. The manufacturer must sell 2000 units in order to break even. Volume Formula Word Problem A box has a square bottom. The height has not yet been determined, but the bottom is 10 inches by 10 inches. The volume formula is V = lwh; because the length and the width are each 10, lw becomes 10 × 10 = 100. The formula for the box's volume is therefore V = 100h. (a) If the height of the box is to be 6 inches, what is its volume? (b) If the volume is to be 450 cubic inches, what should its height be? (c) If the volume is to be 825 cubic inches, what should its height be? (a) The height is 6 inches, so h = 6. Then V = 100h becomes V = 100(6) = 600. The box's volume is 600 cubic inches.
Volume Formula Word Problem
A box has a square bottom. The height has not yet been determined, but the bottom is 10 inches by 10 inches. The volume formula is V = lwh; because the length and the width are each 10, lw becomes 10 × 10 = 100. The formula for this box's volume is V = 100h. (a) If the height of the box is to be 6 inches, what is its volume? (b) If the volume is to be 450 cubic inches, what should its height be? (c) If the volume is to be 825 cubic inches, what should its height be?
(a) The height is 6 inches, so h = 6. Then V = 100h becomes V = 100(6) = 600. The box's volume is 600 cubic inches. (b) The volume is 450 cubic inches, so V = 450, and V = 100h becomes 450 = 100h. Dividing both sides by 100 gives h = 4.5, so the box's height would need to be 4.5 inches. (c) The volume is 825, so V = 100h becomes 825 = 100h and h = 8.25. The height should be 8.25 inches.

Suppose a square has a perimeter of 18 cm. What is the length of each of its sides? (Recall the formula for the perimeter of a square: P = 4l, where l is the length of each of its sides.) P = 18, so P = 4l becomes 18 = 4l, and l = 18/4 = 4.5. The length of each of its sides is 4.5 cm.
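For the volume and perimeter questions, solving for the unknown dimension comes down to a single division, which the short Python sketch below makes explicit. Again, this is an illustration rather than part of the original lesson; the function names height_for_volume and side_for_perimeter are chosen here for clarity.

# Solving V = 100h for h and P = 4l for l

def height_for_volume(volume, base_side=10):
    # V = (base_side squared) * h, so h = V / base_side**2
    return volume / (base_side ** 2)

def side_for_perimeter(perimeter):
    # P = 4l, so l = P / 4
    return perimeter / 4

print(height_for_volume(600))  # 6.0 inches
print(height_for_volume(450))  # 4.5 inches
print(height_for_volume(825))  # 8.25 inches
print(side_for_perimeter(18))  # 4.5 cm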
http://www.education.com/study-help/article/algebra-help-word-problems-formulas/
Vocabulary Development During Read-Alouds: Primary Practices
Reading aloud is a common practice in primary classrooms and is viewed as an important vehicle for vocabulary development. Read-alouds are complex instructional interactions in which teachers choose texts, identify words for instruction, and select the appropriate strategies to facilitate word learning. This study explored these complexities by examining the read-aloud practices of four primary teachers through observations and interviews. Reading storybooks aloud to children is recommended by professional organizations as a vehicle for building oral language and early literacy skills (International Reading Association & National Association for the Education of Young Children, 1998). Reading aloud is widely accepted as a means of developing vocabulary (Newton, Padak, & Rasinski, 2008), particularly in young children (Biemiller & Boote, 2006). Wide reading is a powerful vehicle for vocabulary acquisition for older and more proficient readers (Stanovich, 1986), but since beginning readers are limited in their independent reading to simple decodable or familiar texts, exposure to novel vocabulary is unlikely to come from this source (Beck & McKeown, 2007). Read-alouds fill the gap by exposing children to book language, which is rich in unusual words and descriptive language. Much is known about how children acquire new vocabulary and the conditions that facilitate vocabulary growth. Less is known about how teachers go about the business of teaching new words as they read aloud. The effortless manner in which skilled teachers conduct read-alouds masks the complexity of the pedagogical decisions that occur. Teachers must select appropriate texts, identify words for instruction, and choose strategies that facilitate word learning. This study sheds light on the process by examining the strategies that teachers use to develop vocabulary as they read aloud to their primary classes. What we know about vocabulary and read-alouds Reading aloud to children provides a powerful context for word learning (Biemiller & Boote, 2006; Bravo, Hiebert, & Pearson, 2007). Books chosen for read-alouds are typically engaging, thus increasing both children's motivation and attention (Fisher, Flood, Lapp, & Frey, 2004) and the likelihood that novel words will be learned (Bloom, 2000). As teachers read, they draw students' attention to Tier 2 words, the "high frequency words of mature language users" (Beck, McKeown, & Kucan, 2002, p. 8). These words, which "can have a powerful effect on verbal functioning" (Beck et al., 2002, p. 8), are less common in everyday conversation, but appear with high frequency in written language, making them ideal for instruction during read-alouds. Tier 1 words, such as car and house, are acquired in everyday language experiences, seldom requiring instruction. Tier 3's academic language is typically taught within content area instruction. During read-aloud interactions, word learning occurs both incidentally (Carey, 1978) and as the teacher stops and elaborates on particular words to provide an explanation, demonstration, or example (Bravo et al., 2007). Even brief explanations of one or two sentences, when presented in the context of a supportive text, can be sufficient for children to make initial connections between novel words and their meanings (Biemiller & Boote, 2006). Word learning is enhanced through repeated readings of text, which provide opportunities to revise and refine word meanings (Carey, 1978).
These repetitions help students move to deeper levels of word knowledge from never heard it, to sounds familiar, to it has something to do with, to well known (Dale, 1965). Incidental word learning through read-alouds Carey (1978) proposed a two-stage model for word learning that involves fast and extended mapping. Fast mapping is a mechanism for incidental word learning, consisting of the connection made between a novel word and a tentative meaning. Initial understandings typically represent only a general sense of the word (Justice, Meier, & Walpole, 2005) and are dependent on students' ability to infer meaning from context (Sternberg, 1987). Extended mapping is required to achieve complete word knowledge, because "initial learning of word meanings tends to be useful but incomplete" (Baumann, Kame'enui, & Ash, 2003, p. 755). Through additional exposures, the definition is revised and refined to reflect new information (Carey, 1978; Justice et al., 2005). Adult mediation in read-alouds The style of read-aloud interaction is significant to vocabulary growth (Dickinson & Smith, 1994; Green Brabham & Lynch-Brown, 2002) with reading styles that encourage child participation outperforming verbatim readings. Simply put, "the way books are shared with children matters" (McGee & Schickedanz, 2007, p. 742). High-quality read-alouds are characterized by adult mediation. Effective teachers weave in questions and comments as they read, creating a conversation between the children, the text, and the teacher. To facilitate word learning, teachers employ a variety of strategies such as elaboration of student responses, naming, questioning, and labeling (Roberts, 2008). Analysis of the literature on vocabulary learning through read-alouds leads to two conclusions. First, adult mediation facilitates word learning (i.e., Justice, 2002; Walsh & Blewitt, 2006). Biemiller and Boote (2006) concluded that "there are repeated findings that encouraging vocabulary acquisition in the primary grades using repeated reading combined with word meaning explanations works" (p. 46). Second, the relative effectiveness of different types of mediation remains less clear. Adult explanations are clearly linked to greater word learning, but it is not evident which aspects of the explanations are the critical components: the context, a paraphrased sentence, or even the child's interest in the story (Brett, Rothlein, & Hurley, 1996; Justice et al., 2005). It is also possible that active involvement in discussions is more salient than the type of questions posed (Walsh & Blewitt, 2006). Setting for the study This study was conducted at a small private school in the south central United States. Westpark School (pseudonym) is located in an ethnically diverse, middle class neighborhood in a suburb of a large metropolitan area. Four of the six primary teachers at Westpark agreed to participate in the study: one kindergarten, one first-grade, and two second-grade teachers. Cindy, Debby, Patricia, and Barbara (all pseudonyms) varied in their years of experience. Debby, who had previously retired from public school teaching, was the most experienced with more than 20 years in the classroom. Barbara was also a veteran with 10 years of experience. At the other end of the spectrum, Patricia was in her third year of teaching, and Cindy was in her internship year of an alternative licensure program. 
Observations and interviews To determine the teachers' practices for developing vocabulary within read-alouds, the teachers' "own written and spoken words and observable behavior" (Bliss, Monk, & Ogborn, 1983, p. 4) provided the best sources of data. By constructing detailed, extensive descriptions of teacher practice within a single site, patterns of interaction and recurring themes can be identified (Merriam, 2001). Carspecken's (1996) critical ethnography methodology was adapted and used to collect and analyze data. Observations were conducted to identify patterns of teacher-student interactions within read-alouds. Following preliminary coding, individual interviews were conducted. The combined data provide a rich description of the pedagogical context of vocabulary development during read-alouds. Each teacher was observed four times over a six-week period. The teachers were asked to include a read-aloud during each observation and were informed that vocabulary development was the focus of this study. They were encouraged to "just do what they normally would do" when reading to their classes. The hour-long observations, scheduled at the teachers' convenience, were audiotaped and transcribed. Additional data, such as gestures, actions, and descriptions of student work, were recorded in field notes. Transcriptions and field notes were compiled in a thick record for analysis. Following the observations and preliminary data coding, semistructured individual interviews were conducted. An interview protocol was developed and peer-reviewed. Topics for discussion included teaching experience, understanding of vocabulary development, use of read-alouds, and instructional strategies. Lead-off questions and possible follow-up questions were generated to ensure that key areas were adequately addressed in the interview. Transcripts of the interviews were coded and the observation data were re-analyzed and peer-reviewed. Vocabulary instruction during read-alouds The determination that a particular word in a read-aloud is unfamiliar to students triggers a series of decisions. The teacher must decide both the extent and intent of instruction. How much time should be spent? What do students need to know about this word? Also, the teacher must select an appropriate instructional strategy from a wide range of possibilities. Which strategy will be most effective? What is the most efficient way to build word knowledge without detracting from the story? The teachers at Westpark used a variety of instructional strategies and levels of instructional foci in their read-alouds. Categories of instructional focus emerged during data coding. Interactions centered on vocabulary differed in both extent and intent. The extent, or length, of interactions varied greatly. Typically, more instructional time was spent on words that were deemed critical to story comprehension or that students would be using in a subsequent activity. Pragmatic issues of time seemed to impact the extent of the interactions as well. The frequency and length of interactions tended to decrease through the course of the read-aloud as the time allotted came to an end or children's attention began to wane. The three levels of instruction that emerged are summarized in the table below.
Levels of Instruction
|Level of instruction|Example|Explanation|
|Incidental exposure|I don't know what I would have done. Curiosity might have gotten the better of me.|Teacher infuses a Tier 2 word into a discussion during the read-aloud.|
|Embedded instruction|And he's using a stick - an oar - to help move the raft [pointing to illustration].|Teacher provides a synonym before the target term oar, pointing to the illustration.|
|Focused instruction|Let's get set means let's get ready [elicit examples of things students get ready for].|Teacher leads a discussion on what it means to get set, including getting set for school and Christmas.|
As seen in the table, three different levels of instruction were identified in the data: incidental exposure, embedded instruction, and focused instruction. Incidental exposure occurred during the course of discussions before, during, and after reading and resulted from teachers' efforts to infuse rich vocabulary into class discourse. For example, during one discussion, Cindy commented that the character was humble; in another, that she came bearing gifts. Even though no direct instruction was provided for these terms, the intent is instructional since Cindy deliberately infused less common words to build vocabulary knowledge through context clues. Embedded instruction is defined as attention to word meaning, consisting of fewer than four teacher-student exchanges. The teachers used embedded instruction when the target word represented a familiar concept for the students or when it was peripheral to the story. Information was provided about word meaning with minimal disruption to the flow of the reading. Typically, teachers gave a synonym or a brief definition and quickly returned to the text. Focused instruction occurred when target words were considered important to story comprehension or when difficulties arose communicating word meaning. These interactions varied greatly in length from 4 to 25 teacher-student exchanges. Focused instruction often took place before or after reading. In most cases, the teachers had identified keywords that they felt were important for students to learn, warranting additional time and attention. Other times, focused instruction appeared to be spontaneous, triggered by students' questions or "puzzled looks" during the reading. Instruction also varied in its intent. Teachers sought to develop definitional, contextual, or conceptual word knowledge (Herman & Dole, 1988) based on the specific situation. The learning goal shaped the nature of the interactions. The definitional approach was used when the underlying concept was familiar to the students or when the goal of instruction was to simply provide exposure to a word. Teachers either provided or elicited a synonym or phrase that approximated the meaning of the target word. This approach can be quite efficient, requiring little investment of time (Herman & Dole, 1988), thus allowing attention to be given to many words during the course of the read-aloud. Teachers developed contextual knowledge when they referred students back to the text to determine word meaning. In such cases, the teacher might refer students back to the text or reread the sentence in which the target term occurred, helping students to confirm or disconfirm their thinking, as in this example from Sarah, Plain and Tall (MacLachlan, 1985): Cindy: Wooly ragwort. Where is that? [looks through text] What was wooly ragwort? Do you remember? It was part of Caleb's song. Cindy: It said-or Sarah said [reads from the text], "We don't have these by the sea. We have seaside goldenrod and wild asters and wooly ragwort."
Cindy's intent was for students to gain contextual knowledge using the information in the text to draw a tentative conclusion about word meaning. This example highlights one of the problems inherent with contextual strategies. Students, perhaps misled by the word sea in the text, suggested that wooly ragwort might be a seal, a bird, or a stone. Since they were unfamiliar with goldenrod and asters, they were unable to use these clues effectively to conclude that wooly ragwort was a plant. In this case, reminding students that the characters were picking wildflowers might have helped. Learning a definition is seldom enough for children to develop deep word knowledge. Students need conceptual knowledge to make connections between new words, their prior experiences, and previously learned words and concepts (Newton et al., 2008). Cindy relayed an incident that taught her the importance of building conceptual knowledge when working with unfamiliar words. She had instructed her students to look up the word pollinate in the dictionary, write two or three sentences using the word, and then draw a picture illustrating its meaning. Unfortunately, the definition contained many words that the children did not know, such as pistil and stamen. It was obvious when she reviewed their work that her students "didn't get it." Cindy realized that the definition was not sufficient for them to understand the concept of pollination. The teachers drew on a range of strategies to develop word meaning during their read-alouds:
- Questioning
- Providing a definition
- Providing a synonym
- Providing examples
- Clarifying or correcting students' responses
- Extending a student-generated definition
- Labeling
- Imagery
- Morphemic analysis
Each of these strategies is described along with examples from the observation data. Questioning. The most commonly used strategy was questioning. As the teachers read and encountered a word that they thought might be unfamiliar, they would simply stop and ask about it. This strategy usually occurred at the beginning of an instructional exchange. For example, after reading a section of Sarah, Plain and Tall (MacLachlan, 1985), Debby paused to ask her students about the word bonnet. Debby: What's a bonnet? Do you all know what a bonnet is? What's a bonnet? It is interesting to note that most of the teachers repeated the question several times in their initial utterance. This practice gives students time to formulate a response and also helps to establish a phonological representation of the new word, which is linked to word learning (Beck & McKeown, 2001). Questioning was also used to assess the students' existing word knowledge and to determine if students had effectively used context clues. Once a correct response was given, the exchange ended and the teacher resumed reading, as seen in the following sequence. Debby: [reads from The BFG, Dahl, 1982] "So I keep staring at her and in the end her head drops on to her desk and she goes fast to sleep and snorkels loudly." What is that? Debby: [resumes reading] "Then in marches the head teacher." Alternatively, the teacher might provide the definition and ask students to supply the term. For example, in an after-reading discussion, Patricia asked students to recall the meaning of research to review or assess word learning. Patricia: And what was it called when they look in the encyclopedia for information? What was that word, John? This strategy can prove difficult. John and several of his classmates made incorrect responses before the correct answer was given. Providing the Definition. At times, teachers chose to provide a definition of a word.
Word learning is enhanced when the explanation is made in simple, child-friendly language and the typical use of the word is discussed (Beck et al., 2002). This strategy was more commonly used in embedded instruction, as seen in the following example. Barbara: [reading Duck for President (Cronin, 2008)] "On election day, each of the animals filled out a ballot and placed it in a box." Filled out a piece of paper. Wrote down who they wanted to vote-or who they wanted to win the election. Barbara thought it unlikely that her students would be familiar with the word ballot, so she simply provided the definition in terms that kindergartners could understand.Providing Synonyms. An expedient means of providing word meaning is to state a synonym for the word. This method was used often in conjunction with recasting. That is, the teacher repeated a sentence, replacing the target word with a synonym, as seen in this example. Barbara: Let's get ready. Let's get set. This strategy was used extensively by Barbara to reinforce word meanings. For example, in a postreading discussion, she went back and reviewed key events in the story, simultaneously reinforcing the meaning of the phrase a bit. Although her focus was comprehension, the students heard the target word alongside a recasting with a synonym many times. Barbara: So remember, a bit of blue means-how much is she going to add? Student: Um-a little bit? Barbara: A little bit, right. Just a small amount. Barbara: So what happened here? They mixed red, they mixed blue-but it's still red. But why? Why is that Sarah? Student: Because Sal adds a bit of blue. Barbara: Right, just a little bit of blue. Just a tiny small amount. But that wasn't enough to change the color, was it? Barbara: Just a little bit, right. Providing Examples. Word knowledge can be extended and clarified through examples that may be provided by the teacher or elicited from the students. Students learn how the target word is related to other known words and concepts and are given opportunities to use the target words, further strengthening word learning (Beck et al., 2002). Teachers help students make their own connections when they ask for examples of how or where students have heard the word used, or remind them of situations in which they might have encountered a specific word. As Patricia introduced a folk tale, she wanted her students to be prepared for the regional language they would hear. Although she did not use the word dialect, she explained that the language in the story would sound different to them and asked them for examples from their own experiences. Patricia: This is a story from Appalachia and they use a different kind of language. Uh, they speak in English, but they kind of talk — what do you call it — country. Have you ever heard people talk like that? Student 1: Yeah. Student 2: My grandma. Patricia: They use different little sayings and maybe have a different accent to their voice. Student 3: But they're still speaking English. Student 4: Like New York? Student 5: England, England! Student 6: Kind of like cowboys Two students demonstrated their understanding of the concept as they generated their examples of New York and English accents. Another student made the connection between dialect and the cowboy lingo the class had learned during a recent unit of study. Clarification and Correction. Teacher guidance is an important part of the instructional process (Beck et al., 2002). 
At times, students suggest definitions for target words that reflect misconceptions or partial understandings. The teacher must then either correct or clarify students' responses. When Patricia asked her students for the meaning of the word glared, a student gave a response that was partially correct, but missed the essence of the meaning. Patricia's additional question helped the students to refine their understandings. Patricia: What does it mean to glare at somebody? Student: Stare at them? Patricia: Yeah. Is it a friendly stare? Student: No-like [makes an angry face]. Extension. Due to the gradual nature of word learning, students may provide definitions that are correct but simplistic. The teacher may elect to extend the definition, providing additional information that builds on the student's response. For example, when a student stated that a bonnet was something you wear on your head, Debby extended the definition by providing some historical information and describing its function or use. Debby: They wore it a lot on in the prairie days because they traveled a lot and they got a lot of you-those wagon trains and the stagecoaches and all were kind of windy. And so they would keep their bonnets on-to keep their head-their hair from blowing all over the place. Very, very common to use-to wear bonnets back then. Labeling. Labeling was most often used with picture book read-alouds. As the teacher named the unfamiliar item, she pointed to the illustration, connecting the word with the picture. Debby used this strategy while reading Leonardo and the Flying Boy (Anholt, 2007) to her second graders, pointing to the depictions of various inventions mentioned in the text. Thus, without interrupting the flow of the reading, word meaning was enhanced as children related novel terms with the visual images. Barbara used the strategy extensively with her kindergartners. While reading Duck for President(Cronin, 2008), she pointed to the picture of the lawnmower as she described how a push mower is different from the more familiar power mowers. In another text, she reversed the process, providing the unfamiliar word raft for the boat pictured in the illustration. Imagery. At times, teachers used facial expressions, sounds, or physical movements to demonstrate word meaning during the course of read-alouds. Gestures of this type occurred more frequently when the teachers were reading aloud from chapter books, perhaps due to the lack of illustrations to provide such visual support. In some cases, imagery appeared to be intrinsic to expressive reading, rather than a deliberate effort to enhance word meaning. For example, Debby lowered her head and looked sad as she read about a character hanging his head in shame. Although her intent was to create a dramatic reading, the addition of the simple actions would also serve to facilitate word learning if that particular expression was unknown to students. In the following example, Debby provided two imagery clues as she read the text. Debby: [reads text] "There was a hiss of wind." [extends /s/ to create a hissing sound] "A sudden pungent smell." [holds her hand up to her nose]. The use of imagery was more common with embedded instruction than with the longer focused instructional exchanges. Typically, imagery was used to enhance students' understanding of the text without impeding the flow of the story, although in some instances, imagery was used after discussion as a means of reinforcing the stated definition. 
At times, however, the use of imagery was a more integral part of instruction and was even used by the children when they could demonstrate a word meaning more easily than they could put it into words. When Patricia asked her students about the meaning of the word pout, several responded nonverbally, sticking out their lower lips and looking sad. Cindy used the strategy to help her students understand the meaning of the word rustle. Although a student provided a synonym, Cindy used imagery to extend word learning. Cindy: What does rustle mean? Cindy: Movement. OK. What's a rustle sound like? Somebody rustle for me. [students begin moving their feet under their desks] Maybe like [shuffles her feet], like really soft sounds. Like a movement. They're not meaning to make a noise, but they are just kind of moving around in the grass and stuff. Morphemic Analysis. Even young children need to become aware of how word parts are combined to make longer, more complex words. Children can be taught to "look for roots and/or familiar words when trying to figure out the meaning of an unfamiliar word" (Newton et al., 2008, p. 26). Instructional strategies that draw children's attention to structural analysis are an appropriate choice when the meaning of the root word is familiar. In the exchange that follows, Barbara drew attention to the prefix re-, affixed to the familiar word count. Barbara: [reads text] "Farmer Brown demanded a recount." A recount is-do you know what a recount is, Jeremy? Jeremy: Uh, no. Barbara: A recount is-he said he wanted the votes to be counted again. Multiple Strategies. Teachers often employed more than one strategy during focused instruction. Although questioning was commonly used to initiate instruction, the target word must be either partially known or appear in a very supportive context for this strategy to be effective. Questioning can lead to guessing, so "it is important to provide guidance if students do not quickly know the word's meaning" (Beck et al., 2002, p. 43). In cases where questioning yielded either an incorrect response or no response at all, teachers added additional strategies, such as providing the definition, examples, or imagery. The practices of the teachers at Westpark are both unremarkable and remarkable. They are unremarkable in that their practices are consistent with the descriptions of read-alouds in the literature. The teachers selected appropriate texts, words for instruction, and strategies to teach unknown words. They engaged in discussions before, during, and after reading the texts. Practitioners and researchers alike will find familiarity in the descriptions of the read-alouds. At the same time, their practices were remarkable. The intricate series of interactions between teacher, students, and text in a read-aloud reflects countless instructional decisions, underlying pedagogical beliefs, and the unique quality of the relationship that has been built between teacher and students. The data obtained from the observations and interviews provide a window into the processes of the read-aloud, offering brief but significant glimpses that have important implications. There were many similarities noted in the read-aloud practices of the teachers in this study. With the exception of one performance-style reading, read-alouds were interactive, with the children actively engaged. Attention to word meaning occurred in every read-aloud, providing evidence of the importance placed on vocabulary by the teachers.
At the same time, individual differences were noted in the way the teachers went about developing word meaning. They varied in their use of incidental exposure, embedded instruction, and focused instruction. Cindy felt it was important for her students to be able to independently figure out word meaning from context. Consistent with that conviction, she most frequently used focused instruction with questioning and incidental exposures, with relatively few incidences of embedded instruction. In contrast, Barbara's pattern of interaction seems to reflect a preference for adult mediation over incidental learning, perhaps stemming from a belief that kindergarten children require more support to learn words during read-alouds than their older schoolmates. In addition to variance in the level of instruction used by the teachers, they also exhibited differences in their use of instructional strategies. Some differences were directly related to the type of book being read. For example, labeling was common when reading picture books, but was seldom used with chapter books. Differences in strategy use may also reflect the teachers' perceptions of appropriate practice for a specific grade. Both second-grade teachers stressed the importance of context clues in teaching vocabulary. This conviction was evident in their frequent use of questioning and context strategies. Other strategies were only used when an adequate response was not obtained, or when a more extensive definition was required for comprehension. The increased use of multiple strategies seen in kindergarten and first grade may reflect the teachers' beliefs that vocabulary development was an important goal apart from story comprehension. There may be a more pragmatic explanation as well. When reading chapter books, the teachers seemed to have a set stopping point in mind each day. Completing a chapter on time appeared to take precedence over vocabulary instruction. Shorter picture books seemed to afford teachers more time to develop words and employ more strategies within instructional sequences. This would suggest that text selection impacts strategy use in addition to word selection. Individual differences in read-aloud practice are significant because they impact word learning. Even when scripts were used for read-alouds, Biemiller and Boote (2006) found that "some teachers were more effective than others in teaching vocabulary to children" (p. 51). They concluded that intangible qualities such as the teachers' attitudes about and enthusiasm for word learning could be a factor in the number of words children learn. Given the degree of variance in word learning, evident when teachers were constrained by a script, it would certainly be expected that differences would only increase when teachers are free to conduct read-alouds in their own manner. Recommendations for practice Read-alouds are instructional events and require the same advance planning as any other lesson. Although the teachers in this study used many strategies identified in the literature as effective, additional time and thought in advance of the reading would have decreased confusions, used time more efficiently, and ultimately increased learning. Books should be selected with vocabulary in mind, previewed, and practiced. Attention to student questions about word meaning that arise during reading is important but may result in extended discourse on words that are not critical to comprehension and can detract significantly from the read-aloud experience. 
Teachers should select target words in advance and plan instructional support based on those particular words. To increase word learning potential, the following five steps are recommended.
- Identify words for instruction. To maximize learning, words targeted for instruction should be identified in advance. Examine the text for words that are essential for comprehension and Tier 2 words (Beck et al., 2002) that will build reading vocabulary. Look for words that are interesting or fun to say. Narrow the list down to four or five words to target for more in-depth instruction, giving priority to those needed for comprehension.
- Consider the type of word learning required. Does the target word represent a new label for something familiar or an unfamiliar concept, or is it a familiar word used in a new way? Is the word critical for comprehension? These questions determine the appropriate level of instruction (incidental, embedded, or focused); whether instruction should occur before, during, or after reading; and strategy selection.
- Identify appropriate strategies. Select strategies that are consistent with your instructional goals. When the novel word represents a new label for a familiar term, a synonym or gesture may be adequate. Providing examples and questioning might be used to develop a new concept prior to reading, with a simple definition included during the reading to reinforce learning.
- Have a Plan B. If a strategy proves ineffective, be prepared to intervene quickly and provide correction or clarification. Have an easy-to-understand definition at the ready. Be able to provide a synonym or an example.
- Infuse the words into the classroom. Find opportunities for the new words to be used in other contexts to encourage authentic use and deepen word learning.
Read-alouds can be viewed as microcosms of balanced instruction. This balance does not result from adherence to a prescribed formula, but rather from countless decisions made by teachers. These instructional decisions affect the balance between direct and incidental instruction and between planning in advance and seizing the teachable moment, as well as the quantity and quality of vocabulary instruction within the read-alouds, and ultimately student learning. Teachers' perceptions of an appropriate balance are evident in their uses of read-alouds, styles of reading, text selection, and in the way that vocabulary is developed. The read-aloud context has proven to be an effective vehicle for vocabulary instruction, but further research is needed to clarify the conditions that optimize word learning and to determine the most effective manner of adding elaborations and explanations during story reading without detracting from the pleasure of the reading itself. Identifying the practices that are commonly used by primary classroom teachers provides researchers with valuable information that can lead to the development of effective instructional strategies, inservice staff development for teachers, and preservice teacher training.
References
Baumann, J.F., Kame'enui, E.J., & Ash, G.E. (2003). Research on vocabulary instruction: Voltaire redux. In J. Flood, D. Lapp, J.R. Squire, & J.M.
Jensen (Eds.), Handbook of research on teaching the English language arts (pp. 752-785). Mahwah, NJ: Erlbaum. Beck, I.L., & McKeown, M.G. (2001). Text talk: Capturing the benefits of read-aloud experiences for young children. The Reading Teacher, 55(1), 10-20. Beck, I.L., & McKeown, M.G. (2007). Different ways for different goals, but keep your eye on the higher verbal goals. In R.K. Wagner, A.E. Muse, & K.R. Tannenbaum (Eds.), Vocabulary acquisition: Implications for reading comprehension (pp. 182-204). New York: Guilford. Beck, I.L., McKeown, M.G., & Kucan, L. (2002). Bringing words to life: Robust vocabulary instruction. New York: Guilford. Biemiller, A., & Boote, C. (2006). An effective method for building meaning vocabulary in primary grades. Journal of Educational Psychology, 98(1), 44-62. Bliss, J., Monk, M., & Ogborn, J. (1983). Qualitative data analysis for educational research: A guide to uses of systematic networks. London: Croom Helm. Bloom, L. (2000). The intentionality model of word learning: How to learn a word, any word. In R.M. Golinkoff, K. Hirsh-Pasek, L. Bloom, L.B. Smith, A.L. Woodward, N. Akhtar, et al. (Eds.), Becoming a word learner: A debate on lexical acquisition (pp. 19-50). New York: Oxford University Press. Bravo, M.A., Hiebert, E.H., & Pearson, P.D. (2007). Tapping the linguistic resources of Spanish/English bilinguals: The role of cognates in science. In R.K. Wagner, A.E. Muse, & K.R. Tannenbaum (Eds.), Vocabulary acquisition: Implications for reading comprehension (pp. 140-156). New York: Guilford. Brett, A., Rothlein, L., & Hurley, M. (1996). Vocabulary acquisition from listening to stories and explanations of target words. The Elementary School Journal, 96(4), 415-422. doi:10.1086/461836 Carey, S. (1978). The child as word learner. In M. Halle, J. Bresnan, & G.A. Miller (Eds.), Linguistic theory and psychological reality (pp. 359-373). Cambridge, MA: MIT Press. Carspecken, P.F. (1996). Critical ethnography in educational research: A theoretical and practical guide. New York: Routledge. Dale, E. (1965). Vocabulary measurement: Techniques and major findings. Elementary English, 42, 82-88. Dickinson, D., & Smith, M.W. (1994). Long-term effects of preschool teachers' book readings on low-income children's vocabulary and story comprehension. Reading Research Quarterly, 29(2), 104-122. doi:10.2307/747807 Fisher, D., Flood, J., Lapp, D., & Frey, N. (2004). Interactive read-alouds: Is there a common set of implementation practices? The Reading Teacher, 58(1), 8-17. doi:10.1598/RT.58.1.1 Green Brabham, E., & Lynch-Brown, C. (2002). Effects of teachers' reading-aloud styles on vocabulary comprehension in the early elementary grades. Journal of Educational Psychology, 94(3), 465-473. Herman, P.A., & Dole, J. (1988). Theory and practice in vocabulary learning and instruction. The Elementary School Journal, 89(1), 42-54. doi:10.1086/461561 International Reading Association & National Association for the Education of Young Children. (1998). Learning to read and write: Developmentally appropriate practices for young children. Newark, DE: International Reading Association. Justice, L.M. (2002). Word exposure conditions and preschoolers' novel word learning during shared storybook reading. Reading Psychology, 23(2), 87-106. doi:10.1080/027027102760351016 Justice, L.M., Meier, J., & Walpole, S. (2005). Learning words from storybooks: An efficacy study with at-risk kindergartners.
Language, Speech, and Hearing Services in Schools, 36(1), 17-32. doi:10.1044/0161-1461(2005/003) McGee, L.M., & Schickedanz, J.A. (2007). Repeated interactive read-alouds in preschool and kindergarten. The Reading Teacher, 60(8), 742-751. doi:10.1598/RT.60.8.4 Merriam, S.B. (2001). Qualitative research and case study applications in education (2nd ed.). San Francisco: Jossey-Bass. Newton, E., Padak, N.D., & Rasinski, T.V. (2008). Evidence-based instruction in reading: A professional development guide to vocabulary. Boston, MA: Pearson Education. Roberts, T.A. (2008). Home storybook reading in primary or second language with preschool children: Evidence of equal effectiveness for second-language vocabulary acquisition. Reading Research Quarterly, 43(2), 103-130. doi:10.1598/RRQ.43.2.1 Stanovich, K.E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21(4), 360-406. doi:10.1598/RRQ.21.4.1 Sternberg, R.J. (1987). Most vocabulary is learned from context. In M.C. McKeown & M.E. Curtis (Eds.), The nature of vocabulary acquisition (pp. 89-105). Hillsdale, NJ: Erlbaum. Walsh, B.A., & Blewitt, P. (2006). The effect of questioning style during storybook reading on novel vocabulary acquisition of preschoolers. Early Childhood Education Journal, 33(4), 273-278. doi:10.1007/s10643-005-0052-0. Anholt, L. (2007). Leonardo and the flying boy: A story about Leonardo da Vinci. Hauppauge, NY: Barron's Educational Books. Cronin, D. (2008). Duck for president. New York: Atheneum. Dahl, R. (1982). The BFG. New York: Puffin. MacLachlan, P. (1985). Sarah, plain and tall. New York: HarperTrophy.
http://www.readingrockets.org/article/39979/
An ironclad was a steam-propelled warship of the second half of the 19th century, protected by iron or steel armour plates. The ironclad was developed as a result of the vulnerability of wooden warships to explosive or incendiary shells. The first ironclad battleship, La Gloire, was launched by the French Navy in November 1859. The British Admiralty had been considering armored warships since 1856 and prepared a draft design for an armored corvette in 1857; however, in early 1859 the Royal Navy started building two iron-hulled armored frigates, and by 1861 had made the decision to move to an all-armored battle fleet. After the first clashes of ironclads (both with wooden ships and with one another) took place in 1862 during the American Civil War, it became clear that the ironclad had replaced the unarmored ship of the line as the most powerful warship afloat, and ironclads went on to serve with considerable success throughout that war. Ironclads were designed for several roles, including as high seas battleships, coastal defense ships, and long-range cruisers. The rapid evolution of warship design in the late 19th century transformed the ironclad from a wooden-hulled vessel that carried sails to supplement its steam engines into the steel-built, turreted battleships and cruisers familiar in the 20th century. This change was pushed forward by the development of heavier naval guns (the ironclads of the 1880s carried some of the heaviest guns ever mounted at sea), more sophisticated steam engines, and advances in metallurgy which made steel shipbuilding possible. The rapid pace of change in the ironclad period meant that many ships were obsolete as soon as they were complete, and that naval tactics were in a state of flux. Many ironclads were built to make use of the ram or the torpedo, which a number of naval designers considered the crucial weapons of naval combat. There is no clear end to the ironclad period, but towards the end of the 1890s the term ironclad dropped out of use. New ships were increasingly constructed to a standard pattern and designated battleships or armored cruisers. Before the ironclad The ironclad became technically feasible and tactically necessary because of developments in shipbuilding in the first half of the 19th century. According to naval historian J. Richard Hill: "The (ironclad) had three chief characteristics: a metal-skinned hull, steam propulsion and a main armament of guns capable of firing explosive shells. It is only when all three characteristics are present that a fighting ship can properly be called an ironclad." Each of these developments was introduced separately in the decade before the first ironclads. Steam propulsion In the 18th and early 19th centuries fleets had relied on two types of major warship, the ship of the line and the frigate. The first major change to these types was the introduction of steam power for propulsion. While paddle steamer warships had been used from the 1830s onwards, steam propulsion only became suitable for major warships after the adoption of the screw propeller in the 1840s. Steam-powered screw frigates were built in the mid-1840s, and at the end of the decade the French Navy introduced steam power to its line of battle. The desire for change came from the ambition of Napoleon III to gain greater influence in Europe, which required a challenge to the British at sea. The first purpose-built steam battleship was the 90-gun Le Napoléon in 1850.
Le Napoléon was armed as a conventional ship-of-the-line, but her steam engines could give her a speed of 12 knots (22 km/h), regardless of the wind conditions: a potentially decisive advantage in a naval engagement. The introduction of the steam ship-of-the-line led to a building competition between France and Britain. Eight sister-ships to Le Napoléon were built in France over a period of ten years, but the United Kingdom soon managed to take the lead in production. Altogether, France built ten new wooden steam battleships and converted 28 from older ships of the line, while the United Kingdom built 18 and converted 41. Explosive shells The era of the wooden steam ship-of-the-line was brief, because of new, more powerful naval guns. In the 1820s and 1830s, warships began to mount increasingly heavy guns, replacing 18- and 24-pounder guns with 32-pounders on sailing ships-of-the-line and introducing 68-pounders on steamers. Then, the first shell guns firing explosive shells were introduced following their development by the French Général Henri-Joseph Paixhans, and by the 1840s were part of the standard armament for naval powers including the French Navy, Royal Navy, Imperial Russian Navy and United States Navy. It is often held that the power of explosive shells to smash wooden hulls, as demonstrated by the Russian destruction of a Turkish squadron at the Battle of Sinop, spelled the end of the wooden-hulled warship. The more practical threat to wooden ships was from conventional cannon firing red-hot shot, which could lodge in the hull of a wooden ship and cause a fire or ammunition explosion. Some navies even experimented with hollow shot filled with molten metal for extra incendiary power. Iron armor Following the demonstration of the power of explosive shells against wooden ships at the Battle of Sinop, and fearing that his own ships would be vulnerable to the Paixhans guns of Russian fortifications in the Crimean War, Emperor Napoleon III ordered the development of light-draft floating batteries, equipped with heavy guns and protected by heavy armor. Experiments made during the first half of 1854 proved highly satisfactory, and on 17 July 1854, the French communicated to the British Government that a solution had been found to make gun-proof vessels and that plans would be communicated. After tests in September 1854, the British Admiralty agreed to build five armoured floating batteries to the French plans, work which helped establish the important Thames and Millwall iron works. The French floating batteries were deployed in 1855 as a supplement to the wooden steam battle fleet in the Crimean War. The role of the batteries was to assist unarmored mortar vessels and gunboats bombarding shore fortifications. The French used three of their ironclad batteries (Lave, Tonnante and Dévastation) in 1855 against the defences at the Battle of Kinburn on the Black Sea, where they were effective against Russian shore defences. They would later be used again during the Italian war in the Adriatic in 1859. The British floating batteries Glatton and Meteor arrived too late to participate in the action at Kinburn. The British planned to use theirs in the Baltic Sea against Kronstadt, and the threat they posed may have been influential in causing the Russians to sue for peace. However, Kronstadt was widely regarded as the most heavily fortified naval arsenal in the world throughout most of the 19th century, continually upgrading its combined defences to meet new changes in technology.
Even as the British armoured batteries were readied against Kronstadt in early 1856, the Russians had already constructed newer networks of outlying forts, mortar batteries of their own, and submarine mines, which the British had no system for removing under fire. The batteries have a claim to the title of the first ironclad warships, but they were capable of only 4 knots (7 km/h) under their own power: they operated under their own power at the Battle of Kinburn, but had to be towed for long-range transit. They were also arguably marginal to the work of the navy. The brief success of the floating ironclad batteries convinced France to begin work on armored warships for their battlefleet. Early ironclad ships and battles By the end of the 1850s it was clear that France was unable to match British building of steam warships, and to regain the strategic initiative a dramatic change was required. The result was the first ocean-going ironclad, La Gloire, begun in 1857 and launched in 1859. La Gloire's wooden hull was modelled on that of a steam ship of the line, reduced to one deck, sheathed in iron plates 4.5 inches (114 mm) thick. She was propelled by a steam engine, driving a single screw propeller for a speed of 13 knots (24 km/h). She was armed with thirty-six 6.4-inch (160 mm) rifled guns. France proceeded to construct 16 ironclad warships, including two more sister ships to La Gloire, and the only two-decked broadside ironclads ever built, Magenta and Solferino. The Royal Navy had not been keen to sacrifice its advantage in steam ships of the line, but was determined that the first British ironclad would outmatch the French ships in every respect, particularly speed. A fast ship would have the advantage of being able to choose a range of engagement which could make her invulnerable to enemy fire. The British specification was more for a large, powerful frigate than a ship-of-the-line. The requirement for speed meant a very long vessel, which had to be built from iron. The result was the construction of two Warrior-class ironclads: HMS Warrior and HMS Black Prince. The ships had a successful design, though there were necessarily compromises between 'sea-keeping', strategic range and armour protection; their weapons were more effective than those of La Gloire, and with the largest set of steam engines yet fitted to a ship they could steam at 14.3 knots (26.5 km/h). Yet La Gloire and her sisters had full iron-armour protection along the waterline and the battery itself. Warrior and Black Prince (but also the smaller Defence and Resistance) were obliged to concentrate their armour in a central 'citadel' or 'armoured box', leaving many main deck guns and the fore and aft sections of the vessel unprotected. Iron hulls also required more intensive repair time in dockyards worldwide, which the Royal Navy was not prepared for by the 1860s. Easily fouled iron hulls could not be coppered like the French wooden hulls because of a corrosive reaction. Nevertheless, as a symbol of Britain's industrial, financial and maritime capabilities and potential at least, the Warrior-class ironclads were in many respects the most powerful warships in the world, but they were soon rendered obsolete by rapid advances in naval technology which did not necessarily favour the richest or most 'maritime' powers. By 1862, navies across Europe had adopted ironclads. Britain and France each had sixteen either completed or under construction, though the British vessels were larger.
Austria, Italy, Russia, and Spain were also building ironclads. However, the first battles using the new ironclad ships involved neither Britain nor France, and were fought with ships markedly different from the broadside-firing, masted designs of La Gloire and Warrior. The use of ironclads by both sides in the American Civil War, and the clash of the Italian and Austrian fleets at the Battle of Lissa, had an important influence on the development of ironclad design. First battles between ironclads: the U.S. Civil War The first use of ironclads in action came in the U.S. Civil War. The U.S. Navy at the time the war broke out had no ironclads, its most powerful ships being six steam-powered unarmoured frigates. Since the bulk of the Navy remained loyal to the Union, the Confederacy sought to gain advantage in the naval conflict by acquiring modern armored ships. In May 1861, the Confederate Congress voted that $2 million be appropriated for the purchase of ironclads from overseas, and in July and August 1861 the Confederacy started work on constructing ironclads and converting wooden ships. On 12 October 1861, the CSS Manassas became the first ironclad to enter combat, when she fought Union warships on the Mississippi during the Battle of the Head of Passes. She had been converted from a commercial vessel in New Orleans for river and coastal fighting. In February 1862, the larger CSS Virginia (Merrimack) joined the Confederate Navy, having been rebuilt at Norfolk. As USS Merrimack, Virginia had originally been a conventional wooden warship, but she was reconstructed with an iron-covered casemate when she entered the Confederate navy. By this time, the Union had completed seven ironclad gunboats of the City class, and was about to complete the USS Monitor, an innovative design proposed by the Swedish inventor John Ericsson. The Union was also building a large armored frigate, the USS New Ironsides, and the smaller USS Galena. The first battle between ironclads happened on 9 March 1862, as the armored Monitor was deployed to protect the Union's wooden fleet from the ironclad ram Virginia and other Confederate warships. In this engagement, the second day of the Battle of Hampton Roads, the two ironclads repeatedly tried to ram one another while shells bounced off their armor. The battle attracted attention worldwide, making it clear that the wooden warship was now out of date, since ironclads could destroy wooden ships with ease. The Civil War saw more ironclads built by both sides, and they played an increasing role in the naval war alongside the unarmored warships, commerce raiders and blockade runners. The Union built a large fleet of fifty monitors modeled on their namesake. The Confederacy built ships designed as smaller versions of the Virginia, many of which saw action, but their attempts to buy ironclads overseas were frustrated as European nations confiscated the ships being built for the Confederacy; Russia, for its part, was the only European power to openly support the Union through the war. Only CSS Stonewall was completed, and she arrived in American waters just in time for the end of the war. Through the remainder of the war, ironclads saw action in the Union's attacks on Confederate ports. Seven Union monitors, including USS Montauk, as well as two other ironclads, the ironclad frigate New Ironsides and the light-draft Keokuk, participated in the failed attack on Charleston; one, Keokuk, was sunk. Two small ironclads, CSS Palmetto State and CSS Chicora, participated in the defence of the harbor.
For the later attack at Mobile Bay, the Union assembled four monitors as well as 11 wooden ships, facing CSS Tennessee, the Confederacy's most powerful ironclad, and the gunboats CSS Morgan, CSS Gaines and CSS Selma. On the western front, the Union built a formidable force of river ironclads, beginning with several converted riverboats and then contracting with engineer James Eads of St. Louis, Missouri, to build the "City" class ironclads. These excellent ships were built with twin engines and a central paddle wheel, all protected by an armored casemate. They had a shallow draft, allowing them to journey up smaller tributaries, and were very well suited for river operations. Eads also produced monitors for use on the rivers, the first two of which differed from the ocean-going monitors in that they contained a paddle wheel (the USS Neosho (1863) and USS Osage (1863)). Arguably Eads's vessels were some of the better ironclads of the Western Flotilla, but there were a number of other vessels that served valiantly with the fleet. All were of varying design, some more successful than others, and some were similar to standard riverboats but with armored side-mounted paddle wheels. All were armed with various smoothbore and some rifled guns. If nothing else, the experience of the American Civil War and its wild variety of competing ironclad designs, some more successful (or disastrous) than others, confirmed the trade-offs and compromises required in applying the latest technological advances in iron armour manufacture, ship construction and gun design, advances that were also under way in Europe. There was no such thing as a 'perfect' ironclad which could be invincible in every possible encounter: ship duels, standing up to forts, brown- and blue-water operations. The Union ironclads played an important role on the Mississippi and its tributaries by providing tremendous fire upon Confederate forts, installations and vessels with relative impunity from enemy fire. They were not as heavily armored as the ocean-going monitors of the Union, but they were adequate for their intended use. More Western Flotilla Union ironclads were sunk by torpedoes (mines) than by enemy fire, and the most damaging fire for the Union ironclads came from shore installations, not Confederate vessels. Lissa: First fleet battle The first fleet battle, and the first ocean battle, involving ironclad warships was the Battle of Lissa in 1866. Waged between the Austrian and Italian navies, the battle pitted combined fleets of wooden frigates and corvettes and ironclad warships on both sides in the largest naval battle between the battles of Navarino and Tsushima. The Italian fleet consisted of 12 ironclads and a similar number of wooden warships, escorting transports which carried troops intending to land on the Adriatic island of Lissa. Among the Italian ironclads were seven broadside ironclad frigates, four smaller ironclads, and the newly built Affondatore, a double-turreted ram. Opposing them, the Austrian navy had seven ironclad frigates. The Austrians believed their ships to have less effective guns than their enemy's, so they decided to engage the Italians at close range and ram the enemy. The Austrian fleet formed into an arrowhead formation with the ironclads in the first line, charging at the Italian ironclad squadron.
In the melee which followed, both sides were frustrated by the lack of damage inflicted by guns and by the difficulty of ramming—nonetheless, the successful ramming attack made by the Austrian flagship against the Italian Re d'Italia attracted great attention in the following years. The nominally superior Italian fleet lost two of its ironclads, Re d'Italia and Palestro, while the Austrian unarmoured screw two-decker Kaiser remarkably survived close actions with four Italian ironclads. The battle ensured the popularity of the ram as a weapon in European ironclads for many years, and the victory won by Austria established it as the predominant naval power in the Adriatic.

The battles of the American Civil War and at Lissa were very influential on the designs and tactics of the ironclad fleets that followed. In particular, they taught a generation of naval officers the misleading lesson that ramming was the best way to sink enemy ironclads.

Armament and tactics

The adoption of iron armor meant that the traditional naval armament of dozens of light cannon became useless, since their shot would bounce off an armored hull. To penetrate armor, increasingly heavy guns were mounted on ships; nevertheless, the view that ramming was the only way to sink an ironclad became widespread. The increasing size and weight of guns also meant a movement away from ships mounting many guns broadside, in the manner of a ship-of-the-line, towards a handful of guns in turrets for all-round fire.

Ram craze

From the 1860s to the 1880s many naval designers believed that the development of the ironclad meant that the ram was again the most important weapon in naval warfare. With steam power freeing ships from the wind, and armor making them invulnerable to shellfire, the ram seemed to offer the opportunity to strike a decisive blow. The scant damage inflicted by the guns of Monitor and Virginia at the Battle of Hampton Roads and the spectacular but lucky success of the Austrian flagship Ferdinand Max in sinking the Italian Re d'Italia at Lissa gave strength to the ramming craze. From the early 1870s to the early 1880s most British naval officers thought that guns were about to be replaced as the main naval armament by the ram. Those who noted the tiny number of ships that had actually been sunk by ramming struggled to be heard.

The revival of ramming had a significant effect on naval tactics. Since the 17th century the predominant tactic of naval warfare had been the line of battle, where a fleet formed a long line to give it the best fire from its broadside guns. This tactic was totally unsuited to ramming, and the ram threw fleet tactics into disarray. The question of how an ironclad fleet should deploy in battle to make best use of the ram was never tested in battle, and if it had been, combat might have shown that rams could only be used against ships which were already stopped dead in the water. The ram finally fell out of favour in the 1880s, as the same effect could be achieved with a torpedo, with less vulnerability to quick-firing guns.

The armament of ironclads tended to become concentrated in a small number of powerful guns capable of penetrating the armor of enemy ships at range; calibre and weight of guns increased markedly to achieve greater penetration. Throughout the ironclad era navies also grappled with the complexities of rifled versus smoothbore guns and of breech-loading versus muzzle-loading. HMS Warrior carried a mixture of 110-pounder 7-inch (178 mm) breech-loading rifles and more traditional 68-pounder smoothbore guns.
Warrior highlighted the challenges of picking the right armament; the breech-loaders she carried, designed by Sir William Armstrong, were intended to be the next generation of heavy armament for the Royal Navy, but were soon withdrawn from service. Breech-loading guns seemed to offer important advantages. A breech-loader could be reloaded without moving the gun, sparing the crew a lengthy process, particularly if the gun then needed to be re-aimed. The Warrior's Armstrong guns also had the virtue of being lighter than an equivalent smoothbore and, because of their rifling, more accurate. Nonetheless, the design was rejected because of problems which plagued breech-loaders for decades.

The weakness of the breech-loader was the obvious problem of sealing the breech. All guns are powered by the explosive conversion of gunpowder into gas. This explosion propels the shot or shell out of the front of the gun, but also imposes great stresses on the gun-barrel. If the breech — which experiences some of the greatest forces in the gun — is not entirely secure, then there is a risk that either gas will discharge through the breech or that the breech will break. This in turn reduces the muzzle velocity of the weapon and can also endanger the gun crew. The Warrior's Armstrong guns suffered from both problems; the shells were unable to penetrate the 4.5 in (114 mm) armor of La Gloire, while sometimes the screw which closed the breech flew backwards out of the gun on firing. Similar problems were experienced with the breech-loading guns which became standard in the French and German navies.

These problems influenced the British to equip ships with muzzle-loading weapons of increasing power until the 1880s. After the brief introduction of the 100-pounder, or 9.5-inch (240 mm), smoothbore Somerset Gun, which weighed 6.5 tons (6.6 t), the Admiralty introduced 7-inch (178 mm) rifled guns weighing 7 tons. These were followed by a series of increasingly mammoth weapons—guns weighing 12, 25, 25, 38 and finally 81 tons, with calibre increasing from 8-inch (203 mm) to 16-inch (406 mm). The decision to retain muzzle-loaders until the 1880s has been criticised by historians. However, at least until the late 1870s, the British muzzle-loaders had superior range and rate of fire compared to the French and Prussian breech-loaders, which suffered from the same problems as the first Armstrong guns.

From 1875 onwards, the balance between breech- and muzzle-loading changed. Captain de Bange invented a method of reliably sealing a breech, adopted by the French in 1873. Just as compellingly, the growing size of naval guns made muzzle-loading much more complicated. With guns of such size there was no prospect of hauling in the gun for re-loading, or even re-loading by hand, and complicated hydraulic systems were required for re-loading the gun outside the turret without exposing the crew to enemy fire. In 1882, the 81-ton, 16-inch (406 mm) guns of HMS Inflexible fired only once every 11 minutes while bombarding Alexandria during the Urabi Revolt. The 100-ton, 450 mm (17.72 inch) guns of Duilio could each fire a round every 15 minutes. In the Royal Navy, the switch to breech-loaders was finally made in 1879; as well as the significant advantages in terms of performance, opinion was swayed by an explosion on board HMS Thunderer caused by a gun being double-loaded, a problem which could only happen with a muzzle-loading gun. The calibre and weight of guns could only increase so far.
The larger the gun, the slower it would be to load, the greater the stresses on the ship's hull, and the less the stability of the ship. The size of the gun peaked in the 1880s, with some of the heaviest calibres of gun ever used at sea. HMS Benbow carried two 16.25-inch (413 mm) breech-loading guns, each weighing 110 tons—no British battleship would ever carry guns as large. The Italian 450 mm (17.72 inch) guns would be larger than any gun fitted to a battleship until the 18.1-inch (460 mm) armament of the Japanese Yamato class of World War II. One consideration which became more acute was that even from the original Armstrong models, following the Crimean War, range and hitting power far exceeded simple accuracy, especially at sea where the slightest roll or pitch of the vessel as 'floating weapons-platform' could negate the advantage of rifling. American ordnance experts accordingly preferred smoothbore monsters whose round shot could at least 'skip' along the surface of the water. Actual effective combat ranges, they had learned during the Civil War, were comparable to those in the Age of Sail—though a vessel could now be smashed to pieces in only a few rounds. Smoke and the general chaos of battle only added to the problem. As a result, many naval engagements in the 'Age of the Ironclad' were still fought at ranges within easy eyesight of their targets, and well below the maximum reach of their ships' guns. Another method of increasing firepower was to vary the projectile fired or the nature of the propellant. Early ironclads used black powder, which expanded rapidly after combustion; this meant cannons had relatively short barrels, to prevent the barrel itself slowing the shell. The sharpness of the black powder explosion also meant that guns were subjected to extreme stress. One important step was to press the powder into pellets, allowing a slower, more controlled explosion and a longer barrel. A further step forward was the introduction of chemically different "brown powder" which combusted more slowly again. It also put less stress on the insides of the barrel, allowing guns to last longer and to be manufactured to tighter tolerances. The development of smokeless powder, based on nitroglycerine or nitrocellulose, by the French inventor Paul Vielle in 1884 was a further step allowing smaller charges of propellant with longer barrels. The guns of the pre-Dreadnought battleships of the 1890s tended to be smaller in calibre compared to the ships of the 1880s, most often 12 in (305 mm), but progressively grew in length of barrel, making use of improved propellants to gain greater muzzle velocity. The nature of the projectiles also changed during the ironclad period. Initially, the best armor-piercing projectile was a solid cast-iron shot. Later, shot of chilled iron, a harder iron alloy, gave better armor-piercing qualities. Eventually the armor-piercing shell was developed. Positioning of armament Broadside ironclads The first British, French and Russian ironclads, in a logical development of warship design from the long preceding era of wooden ships of the line, carried their weapons in a single line along their sides and so were called "broadside ironclads." Both La Gloire and HMS Warrior were examples of this type. Because their armor was so heavy, they could only carry a single row of guns along the main deck on each side rather than a row on each deck. 
A significant number of broadside ironclads were built in the 1860s, principally in Britain and France, but in smaller numbers by other powers including Italy, Austria, Russia and the United States. The advantage of mounting guns on both broadsides was that the ship could engage more than one adversary at a time, and the rigging did not impede the field of fire. Broadside armament also had disadvantages, which became more serious as ironclad technology developed. Heavier guns to penetrate ever-thicker armor meant that fewer guns could be carried. Furthermore, the adoption of ramming as an important tactic meant the need for ahead and all-round fire. These problems led to broadside designs being superseded by designs that gave greater all-round fire, which included central-battery, turret, and barbette designs.

Turrets, batteries and barbettes

There were two main design alternatives to the broadside. In one design, the guns were placed in an armoured casemate amidships: this arrangement was called the 'box-battery' or 'centre-battery'. In the other, the guns could be placed on a rotating platform to give them a broad field of fire; when fully armored, this arrangement was called a turret, and when partially armored or unarmored, a barbette.

The centre-battery was the simpler and, during the 1860s and 1870s, the more popular method. Concentrating guns amidships meant the ship could be shorter and handier than a broadside type. The first full-scale centre-battery ship was HMS Bellerophon of 1865; the French laid down centre-battery ironclads in 1865 which were not completed until 1870. Centre-battery ships often, but not always, had a recessed freeboard enabling some of their guns to fire directly ahead.

The turret made its debut with USS Monitor in 1862, with a type of turret designed by the Swedish engineer John Ericsson. A competing turret design was proposed by the British inventor Cowper Coles. Ericsson's turret turned on a central spindle, and Coles's turned on a ring of bearings. Turrets offered the maximum arc of fire from the guns, but there were significant problems with their use in the 1860s. The fire arc of a turret would be considerably limited by masts and rigging, so turrets were unsuited to use on the earlier ocean-going ironclads. The second problem was that turrets were extremely heavy. Ericsson was able to offer the heaviest possible turret (guns and armour protection) by deliberately designing a ship with very low freeboard. The weight thus saved from having a high broadside above the waterline was diverted to actual guns and armour. Low freeboard, however, also meant a smaller hull and therefore a smaller capacity for coal storage—and therefore range of the vessel. In many respects, the turreted, low-freeboard Monitor and the broadside sailer HMS Warrior represented two opposite extremes in what an 'ironclad' was all about. The most dramatic attempt to reconcile these two extremes, to 'square this circle', was HMS Captain, designed by Captain Cowper Phipps Coles: a turret ship of dangerously low freeboard which nevertheless carried a full rig of sail, and which capsized not long after entering service in 1870. Her half-sister Monarch was restricted to firing from her turrets only on the port and starboard beams. The third Royal Navy ship to combine turrets and masts was HMS Inflexible of 1876, which carried two turrets mounted en echelon, one on either side of the centre-line, allowing both to fire fore, aft and broadside.
A lighter alternative to the turret, particularly popular with the French navy, was the barbette. These were fixed armored towers which held a gun on a turntable. The crew was sheltered from direct fire, but vulnerable to plunging fire, for instance from shore emplacements. The barbette was lighter than the turret, needing less machinery and no roof armor—though nevertheless some barbettes were stripped of their armor plate to reduce the top-weight of their ships. The barbette became widely adopted in the 1880s, and with the addition of an armored 'gun-house', transformed into the turrets of the pre-Dreadnought battleships. The ironclad age saw the development of explosive torpedoes as naval weapons, which helped complicate the design and tactics of ironclad fleets. The first torpedoes were static mines, used extensively in the American Civil War. That conflict also saw the development of the spar torpedo, an explosive charge pushed against the hull of a warship by a small boat. For the first time, a large warship faced a serious threat from a smaller one—and given the relative inefficiency of shellfire against ironclads, the threat from the spar torpedo was taken seriously. The U.S. Navy converted four of its monitors to become turretless armored spar-torpedo vessels while under construction in 1864–5, but these vessels never saw action. Another proposal, the towed or 'Harvey' torpedo, involved an explosive on a line or outrigger; either to deter a ship from ramming or to make a torpedo attack by a boat less suicidal. A more practical and influential weapon was the self-propelled or 'Whitehead' torpedo. Invented in 1868 and deployed in the 1870s, the Whitehead torpedo formed part of the armament of ironclads of the 1880s like HMS Inflexible and the Italian Duilio and Dandolo. The ironclad's vulnerability to the torpedo was a key part of the critique of armored warships made by the Jeune Ecole school of naval thought; it appeared that any ship armored enough to prevent destruction by gunfire would be slow enough to be easily caught by torpedo. In practice, however, the Jeune Ecole was only briefly influential and the torpedo formed part of the confusing mixture of weapons possessed by ironclads. Armor and construction The first ironclads were built on wooden or iron hulls, and protected by wrought iron armor backed by thick wooden planking. Ironclads were still being built with wooden hulls into the 1870s. Hulls: iron, wood and steel Using iron construction for warships offered advantages for the engineering of the hull. However, unarmored iron had many military disadvantages, and offered technical problems which kept wooden hulls in use for many years, particularly for long-range cruising warships. Iron ships had first been proposed for military use in the 1820s. In the 1830s and 1840s France, Britain and the United States had all experimented with iron-hulled but unarmored gunboats and frigates. However, the iron-hulled frigate was abandoned by the end of the 1840s, because iron hulls were more vulnerable to solid shot; iron was more brittle than wood, and iron frames more likely to fall out of shape than wood. The unsuitability of unarmored iron for warship hulls meant that iron was only adopted as a building material for battleships when protected by armor. However, iron gave the naval architect many advantages. Iron allowed larger ships and more flexible design, for instance the use of watertight bulkheads on the lower decks. 
Warrior, built of iron, was longer and faster than the wooden-hulled La Gloire. Iron could be produced to order and used immediately, in contrast to the need to give wood a long period of seasoning. And, given the large quantities of wood required to build a steam warship and the falling cost of iron, iron hulls were increasingly cost-effective. The main reason for the French use of wooden hulls for the ironclad fleet built in the 1860s was that the French iron industry could not supply enough, and the main reason why Britain built its handful of wooden-hulled ironclads was to make best use of hulls already started and wood already bought. Wooden hulls continued to be used for long-range and smaller ironclads, because iron nevertheless had a significant disadvantage. Iron hulls suffered quick fouling by marine life, slowing the ships down—manageable for a European battlefleet close to dry docks, but a difficulty for long-range ships. The only solution was to sheath the iron hull first in wood and then in copper, a laborious and expensive process which made wooden construction remain attractive. Iron and wood were to some extent interchangeable: the Japanese Kongo and Hiei ordered in 1875 were sister-ships, but one was built of iron and the other of composite construction. After 1872, steel started to be introduced as a material for construction. Compared to iron, steel allows for greater structural strength for a lower weight. The French Navy led the way with the use of steel in its fleet, starting with the Redoutable, laid down in 1873 and launched in 1876. Redoutable nonetheless had wrought iron armor plate, and part of her exterior hull was iron rather than steel. Even though Britain led the world in steel production, the Royal Navy was slow to adopt steel warships. The Bessemer process for steel manufacture produced too many imperfections for large-scale use on ships. French manufacturers used the Siemens-Martin process to produce adequate steel, but British technology lagged behind. The first all-steel warships built by the Royal Navy were the dispatch vessels Iris and Mercury, laid down in 1875 and 1876. Armor and protection schemes Iron-built ships used wood as part of their protection scheme. HMS Warrior was protected by 4.5 in (114 mm) of wrought iron backed by 15 in (381 mm) of teak, the strongest shipbuilding wood. The wood played two roles, preventing spalling and also preventing the shock of a hit damaging the structure of the ship. Later, wood and iron were combined in 'sandwich' armor, for instance in HMS Inflexible. Steel was also an obvious material for armor. It was tested in the 1860s, but the steel of the time was too brittle and disintegrated when struck by shells. Steel became practical to use when a way was found to fuse steel onto wrought iron plates, giving a form of compound armor. This compound armor was used by the British in ships built from the late 1870s, first for turret armor (starting with HMS Inflexible) and then for all armor (starting with Colossus of 1882). The French and German navies adopted the innovation almost immediately, with licenses being given for the use of the 'Wilson System' of producing fused armor. The first ironclads to have all-steel armor were the Italian Duilio and Dandolo. Though the ships were laid down in 1873 their armor was not purchased from France until 1877. The French navy decided in 1880 to adopt compound armor for its fleet, but found it limited in supply, so from 1884 the French navy was using steel armor. 
Britain stuck to compound armor until 1889. The ultimate ironclad armor was case-hardened nickel steel. In 1890, the U.S. Navy tested steel armor hardened by the Harvey process and found it superior to compound armor. For several years 'Harvey steel' was the state of the art, produced in the U.S., France, Germany, Britain, Austria and Italy. In 1894, the German firm Krupp developed gas cementing, which further hardened steel armor. The German Kaiser Friedrich III, laid down in 1895, was the first ship to benefit from the new 'Krupp armor', and the new armor was quickly adopted; the Royal Navy used it from HMS Canopus, laid down in 1896. By 1901 almost all new battleships used Krupp armor, though the U.S. continued to use Harvey armor alongside it until the end of the decade. The equivalent strengths of the different armor plates were as follows: 15 in (381 mm) of wrought iron was equivalent to 12 in (305 mm) of either plain steel or compound iron and steel armor, and to 7.75 in (197 mm) of Harvey armor or 5.75 in (146 mm) of Krupp armor (a short worked conversion based on these figures appears below).

Ironclad construction also prefigured the later debate in battleship design between tapering and 'all-or-nothing' armour design. Warrior was only semi-armoured, and could have been disabled by hits on the bow and stern. As the thickness of armor grew to protect ships from the increasingly heavy guns, the area of the ship which could be fully protected diminished. Inflexible's armor protection was largely limited to the central citadel amidships, protecting boilers and engines, turrets and magazines, and little else. An ingenious arrangement of cork-filled compartments and watertight bulkheads was intended to keep her stable and afloat in the event of heavy damage to her un-armored sections.

Propulsion: steam and sail

The first ocean-going ironclads carried masts and sails like their wooden predecessors, and these features were only gradually abandoned. Early steam engines were inefficient; the wooden steam fleet of the Royal Navy could only carry "5 to 9 days' coal", and the situation was similar with the early ironclads. Warrior also illustrates two design features which aided hybrid propulsion; she had retractable screws to reduce drag while under sail (though in practice the steam engine was run at a low throttle), and a telescopic funnel which could be folded down to deck level. Ships designed for coastal warfare, like the floating batteries of the Crimea, or USS Monitor and her sisters, dispensed with masts from the beginning. The British HMS Devastation, started in 1869, was the first large, ocean-going ironclad to dispense with masts. Her principal role was for combat in the English Channel and other European waters; and while her coal supplies gave her enough range to cross the Atlantic, she would have had little endurance on the other side of the ocean. The Devastation and the similar ships commissioned by the British and Russian navies in the 1870s were the exception rather than the rule. Most ironclads of the 1870s retained masts, and only the Italian navy, which during that decade was focused on short-range operations in the Adriatic, built consistently mastless ironclads. During the 1860s, steam engines improved with the adoption of double-expansion steam engines, which used 30–40% less coal than earlier models. The Royal Navy decided to switch to the double-expansion engine in 1871, and by 1875 they were widespread. However, this development alone was not enough to herald the end of the mast.
Whether this was due to a conservative desire to retain sails, or was a rational response to the operational and strategic situation, is a matter of debate. A steam-only fleet would require a network of coaling stations worldwide, which would need to be fortified at great expense to stop them falling into enemy hands. Just as significantly, because of unsolved problems with the technology of the boilers which provided steam for the engines, the performance of double-expansion engines was rarely as good in practice as it was in theory.

During the 1870s the distinction grew between 'first-class ironclads' or 'battleships' on the one hand, and 'cruising ironclads' designed for long-range work on the other. The demands on first-class ironclads for very heavy armor and armament meant increasing displacement, which reduced speed under sail; and the fashion for turrets and barbettes made a sailing rig increasingly inconvenient. HMS Inflexible, launched in 1876 but not commissioned until 1881, was the last British battleship to carry masts, and these were widely seen as a mistake. The start of the 1880s saw the end of sailing rig on ironclad battleships. Sails persisted on 'cruising ironclads' for much longer. During the 1860s, the French navy had produced the Alma and La Galissoniere classes as small, long-range ironclads for use as overseas cruisers, and the British had responded with ships like Swiftsure of 1870. The Russian ship General Admiral, laid down in 1870 and completed in 1875, was a model of a fast, long-range ironclad which was likely to be able to outrun and outfight ships like Swiftsure. Even the later HMS Shannon, often described as the first British armored cruiser, would have been too slow to outrun General Admiral. While Shannon was the last British ship with a retractable propeller, later armored cruisers of the 1870s retained sailing rig, sacrificing speed under steam in consequence. It took until 1881 for the Royal Navy to lay down a long-range armored warship capable of catching enemy commerce raiders, Warspite, which was completed in 1888. While sailing rigs were obsolescent for all purposes by the end of the 1880s, rigged ships remained in service until the early years of the 20th century. The final evolution of ironclad propulsion was the adoption of the triple-expansion steam engine, a further refinement first adopted in HMS Sans Pareil, laid down in 1885 and commissioned in 1891. Many ships also used a forced draught to get additional power from their engines, and this system was widely used until the introduction of the steam turbine in the middle of the first decade of the 20th century.

While ironclads spread rapidly in navies worldwide, there were few pitched naval battles involving ironclads. Most European nations settled differences on land, and the Royal Navy struggled to maintain a deterrent parity with at least France, while providing suitable protection to Britain's commerce and colonial outposts worldwide. Ironclads remained, for the British Royal Navy, a matter of defending the British Isles first and projecting power abroad second. Those naval engagements of the latter half of the 19th century which involved ironclads normally involved colonial actions or clashes between second-rate naval powers. But these encounters were often enough to convince British policy-makers of the increasing hazards of strictly naval foreign intervention, from Hampton Roads in the American Civil War to the hardening combined defences of naval arsenals such as Kronstadt and Cherbourg.
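Before turning to the principal types of ironclad and their users, the armor-equivalence figures quoted in the armor section above can be made a little more concrete. The following is a minimal illustrative sketch, not period engineering practice: the table of factors simply restates the figures from the text, while the function name and the 6-inch Krupp plate used as an example are hypothetical additions for demonstration.

```python
# Rough sketch based on the equivalence figures quoted above:
# 15 in of wrought iron ~ 12 in of plain steel or compound armor
# ~ 7.75 in of Harvey armor ~ 5.75 in of Krupp armor.
# These are approximate period rules of thumb, not engineering data.

# Thickness of each armor type judged equivalent to 15 in of wrought iron.
EQUIVALENT_TO_15IN_WROUGHT_IRON = {
    "wrought iron": 15.0,
    "steel": 12.0,
    "compound": 12.0,
    "harvey": 7.75,
    "krupp": 5.75,
}

def wrought_iron_equivalent(thickness_in: float, armor_type: str) -> float:
    """Convert a plate thickness (inches) of the given armor type into the
    thickness of wrought iron offering roughly the same protection."""
    factor = 15.0 / EQUIVALENT_TO_15IN_WROUGHT_IRON[armor_type]
    return thickness_in * factor

if __name__ == "__main__":
    # Example: a hypothetical 6 in Krupp plate, compared with Warrior's
    # 4.5 in wrought iron belt.
    print(round(wrought_iron_equivalent(6.0, "krupp"), 1))         # ~15.7 in of wrought iron
    print(round(wrought_iron_equivalent(4.5, "wrought iron"), 1))  # 4.5 in (unchanged)
```

The point of the comparison is simply that, by the end of the period, a Krupp plate little more than a third as thick as a wrought iron plate offered roughly the same nominal protection.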
There were many types of ironclads:
- Seagoing ships intended to "stand in the line of battle", the precursors of the battleship.
- Coastal-service and riverine vessels, including 'floating batteries' and 'monitors'.
- Vessels intended for commerce raiding or the protection of commerce, called 'armoured cruisers'.

The United Kingdom possessed the largest navy in the world for the whole of the ironclad period. The Royal Navy was the second to adopt ironclad warships, and it applied them worldwide in the whole range of roles. In the age of sail, the British strategy for war depended on the Royal Navy mounting a blockade of the enemy's ports. Because of the limited endurance of steamships, this was no longer possible, so the British at times considered the risk-laden plan of engaging an enemy fleet in harbor as soon as war broke out. To this end, the Royal Navy developed a series of 'coast-defence battleships', starting with the Devastation class. These 'breastwork monitors' were markedly different from the other high-seas ironclads of the period and were an important precursor of the modern battleship. As long-range monitors they could reach Bermuda unescorted, for example. However, they were still armed with only four heavy guns and were as vulnerable to mines and obstructions (and enemy monitors) as the original monitors of the Union Navy had proved to be during the Civil War. The British prepared for an overwhelming mortar bombardment of Kronstadt by the close of the Crimean War, but never considered running the smoke-ridden, shallow-water gauntlet straight to St. Petersburg with ironclads. Likewise, monitors proved acutely unable to 'overwhelm' enemy fortifications single-handed during the American conflict, though their low profile and heavy armour protection made them ideal for running gauntlets. Mines and obstructions, however, negated these advantages; a problem the British Admiralty frequently acknowledged but never countered throughout the period. The British never laid down enough Devastation-class 'battleships' to instantly overwhelm Cherbourg, Kronstadt or even New York City with gunfire.

Although throughout the 1860s and 1870s the Royal Navy was still in many respects superior to its potential rivals, by the late 1880s widespread concern about the threat from France and Russia culminated in the Naval Defence Act of 1889, which promulgated the idea of a 'two-power standard': that Britain should possess as many ships as the next two navies combined. This standard provoked aggressive shipbuilding in the 1890s.

British ships did not participate in any major wars in the ironclad period. The Royal Navy's ironclads only saw action in colonial battles or one-sided engagements like the bombardment of Alexandria in 1882. Defending British interests against Ahmed 'Urabi's Egyptian revolt, a British fleet opened fire on the fortifications around the port of Alexandria. A mixture of centre-battery and turret ships bombarded Egyptian positions for most of a day, forcing the Egyptians to retreat; return fire from Egyptian guns was heavy at first but inflicted little damage, killing only five British sailors. Few Egyptian guns were actually dismounted, on the other hand, and the fortifications themselves were typically left intact.
Had the Egyptians actually used the heavy mortars at their disposal they might have quickly turned the tide, for the attacking British ironclads found it easy, for accuracy's sake, to simply anchor whilst firing, making them perfect targets for high-angle fire upon their thinly armoured top decks.

The French navy had built the first ironclad in an attempt to gain a strategic advantage over the British, but it was consistently out-built by the Royal Navy. Despite taking the lead with a number of innovations like breech-loading weapons and steel construction, the French navy could never match the size of the Royal Navy. In the 1870s, the construction of ironclads ceased for a while in France as the Jeune Ecole school of naval thought took prominence, suggesting that torpedo boats and unarmored cruisers would be the future of warships. Like the British, the French navy saw little action with its ironclads; the French blockade of Germany in the Franco-Prussian War was ineffective, as the war was settled entirely on land.

Russia built a number of ironclads, generally copies of British or French designs. Nonetheless, there were real innovations from Russia: the first true ironclad armored cruiser, the General Admiral of the 1870s, and a set of remarkably badly designed circular battleships referred to as 'popovkas'. The Russian Navy pioneered the wide-scale use of torpedo boats during the Russo-Turkish War of 1877–1878, mainly out of necessity because of the superior numbers and quality of the ironclads used by the Turkish navy. Russia expanded her navy in the 1880s and 1890s with modern armored cruisers and battleships, but the ships were manned by inexperienced crews and politically appointed leadership, which contributed to their defeat at the Battle of Tsushima on 27 May 1905.

The U.S. Navy ended the Civil War with about fifty monitor-type coastal ironclads; by the 1870s most of these were laid up in reserve, leaving the USA virtually without an ironclad fleet. Another five large monitors were ordered in the 1870s. The limitations of the monitor type effectively prevented the USA from projecting power overseas, and until the 1890s the USA would have come off badly in a conflict with even Spain or the Latin American powers. The 1890s saw the beginning of what became the Great White Fleet, and it was the modern pre-Dreadnoughts and armored cruisers built in the 1890s which defeated the Spanish fleet in the Spanish-American War of 1898. This started a new era of naval warfare.

Ironclads were widely used in South America. Both sides used ironclads in the Chincha Islands War between Spain and the combined forces of Peru and Chile in the mid-1860s. The powerful Spanish Numancia participated in the Battle of Callao but was unable to inflict significant damage on the Callao defences. In addition, Peru was able to deploy two locally built ironclads based on American Civil War designs, the Loa (a wooden ship converted into a casemate ironclad) and the Victoria (a small monitor armed with a single 68-pounder gun), as well as two British-built ironclads: Independencia, a centre-battery ship, and the turret ship Huáscar. Numancia was the first ironclad to circumnavigate the world, arriving in Cádiz on 20 September 1867 and earning the motto "Enloricata navis que primo terram circuivit". In the War of the Pacific in 1879, both Peru and Chile had ironclad warships, including some of those used a few years previously against Spain.
While the Independencia ran aground early on, the Peruvian ironclad Huáscar made a great impact against Chilean shipping, delaying the Chilean ground invasion by six months. She was eventually caught by two more modern Chilean centre-battery ironclads, the Blanco Encalada and the Almirante Cochrane, at the Battle of Angamos Point.

Ironclads were also used from the inception of the Imperial Japanese Navy. The Kōtetsu (Japanese: 甲鉄, literally "Ironclad", later renamed Azuma 東, "East") had a decisive role in the Naval Battle of Hakodate Bay in May 1869, which marked the end of the Boshin War and the complete establishment of the Meiji Restoration. The IJN continued to develop its strength and commissioned a number of warships from British and European shipyards, first ironclads and later armored cruisers. These ships engaged the Chinese Beiyang Fleet, which was superior on paper at least, at the Battle of the Yalu River. Thanks to superior short-range firepower, the Japanese fleet came off better, sinking or severely damaging eight ships while suffering serious damage to only four of its own. The naval war was concluded the next year at the Battle of Weihaiwei, where the strongest remaining Chinese ships were surrendered to the Japanese.

End of the ironclad

There is no clearly defined end to the ironclad era, beyond the gradual transition from wooden hulls to all-metal construction. Ironclads continued to be used in World War I. Towards the end of the 19th century, the descriptions 'battleship' and 'armored cruiser' came to replace the term 'ironclad'. The proliferation of ironclad battleship designs came to an end in the 1890s as navies reached a consensus on the design of battleships, producing the type known as the pre-Dreadnought. These ships are sometimes covered in treatments of the ironclad warship. The next evolution of battleship design, the dreadnought, is never referred to as an 'ironclad'. Most of the ironclads of the 1870s and 1880s served into the first decade of the 20th century. A handful, for instance US Navy monitors laid down in the 1870s, saw active service in World War I. Pre-Dreadnought battleships and cruisers of the 1890s saw widespread action in World War I and in some cases through to World War II.

The example of the ironclads had some bearing on the history of the tank, as ironclad warships became an inspiration for ideas of landships and other armored vehicles. H. G. Wells, in his short story The Land Ironclads, published in The Strand Magazine in December 1903, described the use of large, armoured cross-country vehicles, armed with cannon and machine guns, and equipped with pedrail wheels.

A number of ironclads have been preserved or reconstructed as museum ships.
- Parts of USS Monitor have been recovered and are being conserved and displayed at the Mariners' Museum in Newport News, Virginia.
- HMS Warrior is today a fully restored museum ship in Portsmouth, England.
- Huáscar is berthed at the port of Talcahuano, Chile, on display for visitors.
- The City class ironclad USS Cairo is currently on display in Vicksburg, Mississippi.
- Northrop Grumman in Newport News constructed a full-scale replica of USS Monitor. The replica was laid down in February 2005 and completed just two months later.
- The Dutch Ramtorenschip (coastal ram) Zr. Ms. Buffel is currently on display at the Maritime Museum Rotterdam.
- The Dutch Ramtorenschip (coastal ram) Zr. Ms. Schorpioen is a museum ship at Den Helder.
- The complete, recovered wooden hull of the CSS Neuse, a casemate ram ironclad, is on view in Kinston, North Carolina, and, in another part of town on the Neuse River, the recreated ship, named CSS Neuse II, is nearly built and can be visited.
- The hull of the casemate ironclad CSS Jackson can be seen in the National Civil War Naval Museum at Port Columbus, Georgia.
- The new United States Navy Zumwalt class guided missile destroyer has been described as bearing resemblance to ironclads.
http://en.wikipedia.org/wiki/Ironclad
Credit: Image Courtesy of American Memory at the Library of Congress. On March 31, 1776, future First Lady Abigail Adams wrote to her husband, John Adams, who was soon to be appointed a member of the committee drafting the Declaration of Independence: ... In the new Code of Laws which I suppose it will be necessary for you to make I would desire you would Remember the Ladies.... Do not put such unlimited power into the hands of the Husbands.... If particular care and attention is not paid to the Ladies, we are determined to foment a Rebellion, and will not hold ourselves bound by any Laws in which we have no voice, or Representation." Mrs. Adams's remarks were well ahead of their time. The representation she wrote about did not formally materialize until 1917, when Jeannette Rankin was elected the first female member of the House of Representatives. In 1920, the 19th Amendment finally gave women the right to vote. In the absence of official power, women had to find other ways to shape the world in which they lived. The First Ladies of the United States were among the women who were able to play "a significant role in shaping the political and social history of our country, impacting virtually every topic that has been debated" (Mary Regula, Founding Chair and President, National Board of Directors for The First Ladies' Library). Through the lessons in this unit, you will explore with your students the ways in which First Ladies were able to shape the world while dealing with the expectations placed on them as women and as partners of powerful men. How have First Ladies traditionally been viewed? How much has that view changed in two centuries? To answer these questions, begin by comparing visual images of First Ladies and their husbands from the nation's early years and more recent times. (Note: The photograph analysis worksheet available through the EDSITEment-reviewed The Digital Classroom may be useful in comparing the portraits in this lesson.) 1. Share with your class portraits of Abigail Adams and President John Adams from the original paintings by Gilbert Stuart, available through the EDSITEment-reviewed American Memory. Despite similar poses, these portraits are quite different. What details do the students notice in the two portraits? (For example, the President is shown with a book.) How do these portraits differ from each other? What reasons can the students offer for these differences? What is their significance? 2. Now share with your class official White House portraits from the EDSITEment resource American Memory of Jimmy Carter (color version or black and white) and Rosalynn Carter (color version or black and white), created in January and February 1977, respectively. What differences do students recognize between the two modern portraits? Are the differences similar or dissimilar to those present in the Adams portraits created nearly 200 years earlier? Does this indicate a change in the public image of a First Lady? 3. Ask students to think about whether the role of First Lady might have changed between 1977 and today. In what ways? Why? If desired, the students can look at home for news or magazine articles or web news about the First Lady. Traditionally, the First Lady has been regarded primarily as a political helpmate for her husband, a social leader in Washington, and an unofficial representative of the female population throughout the United States. Pass out to small groups an appropriate number of the following images. Appoint a spokesperson for each group. 
Give the groups time to analyze their images. (The document analysis worksheets available through the EDSITEment resource The Digital Classroom may be useful in completing this analysis.) Then, the group spokesperson should share the image with the class, describe it and hypothesize about the First Lady role represented. Make a list of the traditional First Lady roles that come up through this discussion and save it for future reference.

Traditional roles include:
- Setting Fashion Standards
- Uplifting National Spirit During a Crisis
- Serving as the White House Representative in Areas of Special Interest to Women
- Campaigning for Her Husband (both with and without him)
- Promoting Charities and Causes
- Accompanying the President at Important Functions
- Making Good Will Travel Missions
- Serving as White House Hostess
- Maintaining the Role of Wife and Mother
- Taking an Interest in White House Restoration, Renovation and Preservation

Circumstances and individual personalities have sometimes resulted in a First Lady taking on responsibilities not generally (or at least not publicly) associated with the role. Share with your class the story of Dolley Madison and the British attack on the White House, a brief version of which is available on The White House for Kids, a link from the EDSITEment-reviewed National First Ladies Library. Discuss the story with the class. What do students think of Mrs. Madison's actions? Do they think she did more than would have been expected of a First Lady?

Pass out to small groups an appropriate number of images from the list below. Appoint a spokesperson for each group. Give the groups time to analyze the image. (The document analysis worksheets available through the EDSITEment resource The Digital Classroom may be useful in completing this analysis.) Then, the spokesperson should share the image with the class, describe it and hypothesize about the First Lady role represented. Make a list of these non-traditional First Lady roles as they are discussed and save it for future reference.

Non-traditional roles include:
- Advising the President
- Lobbying for Causes Behind the Scenes
- Taking a High-Profile Moral Stand
- Assuming Important Roles after Being First Lady
- Taking a Stand for the Rights of Women
- Having a Career

Give your students the opportunity to get to know some of the nation's First Ladies with whom they are likely to be less familiar. An expanded knowledge base about First Ladies will help students clarify what they learned in Lessons 2 and 3. Surveying many First Ladies will also prepare students for the culminating activity found in Lesson 5. Biographies of all of the First Ladies are available online through the White House website and through the National First Ladies Library, both accessible through links from the EDSITEment resource American Memory. Provide students, working individually or in small groups, with biographies of one or more First Ladies. Cover as many of the First Ladies as appropriate to the class. Review with students the traditional and non-traditional roles of the First Lady that were discussed in Lessons 2 and 3 of this unit, and ask students to find examples of those roles in the biographies of First Ladies. Does the information in the biographies support the documentary evidence of the traditional and non-traditional roles the students have already studied? Have the non-traditional roles received more or less public attention and recognition than the traditional roles? Why do students think this is the case? Do the lists of roles need to be redefined?
Is the distinction between traditional and non-traditional roles a valid one? Does the current First Lady fall more into the category of traditional roles or non-traditional roles? Has the role of the First Lady changed? In what ways? Why are some First Ladies more memorable than others? At home, your students will conduct a poll of adults to find out which First Ladies come to mind for them.

1. Let students decide the "ground rules" for the poll, such as:
2. Share and analyze poll results. As a class, choose a certain number of First Ladies appearing in the poll (about five, or any number appropriate to the size of your class) for further research in small groups. In addition, let students choose an equal number of First Ladies who did not appear in the poll to research as well. Groups of two to three students then do an in-depth study of the First Ladies they selected. Group research should attempt to answer the following:
3. Have each group present its findings to the class, alternating between presentations on "memorable" and "unmemorable" First Ladies. Presentations, which should include a biography, could be oral reports or a display such as a "mini-museum" of her life, much like the rooms devoted to the First Lady in Presidential Libraries. Student museums could be constructed in large boxes, or tech-savvy students could create a PowerPoint presentation for the class.
4. Take a class poll. Establish a list of the First Ladies most worth remembering, based on student responses. How does the list differ from the poll of adults? Make a list of First Ladies who deserve more recognition. The class could create a bulletin board for public display promoting the lesser-known First Ladies.

If desired, use a rubric to evaluate students' presentations on First Ladies from Lesson 5. To be completely effective, a rubric should be designed for your class with student skill level, your curriculum, and the specific assignment in mind. The following sample rubric may be downloaded to copy or to use when designing your own. It rates each presentation or display on a scale from "Meets All or Most" to "Does Not Meet" in the following areas:
- Structure: are the required elements present in the presentation or display?
- Content: the quality of the information presented.
- Delivery (for a speech): the speaker's delivery.
- Mechanics (for a display): the care taken with the piece.
- Overall Rating (circle one).

Many presidential libraries, including the following referenced in these lessons, are accessible through links from Digital Classroom: History Matters: A Brief Timeline of American Literature and Events Pre-1620 to 1920.
http://edsitement.neh.gov/lesson-plan/remember-ladies-first-ladies
13
21
Thermal power station A thermal power station is a power plant in which the prime mover is steam driven. Water is heated, turns into steam and spins a steam turbine which drives an electrical generator. After it passes through the turbine, the steam is condensed in a condenser and recycled to where it was heated; this is known as a Rankine cycle. The greatest variation in the design of thermal power stations is due to the different fossil fuel resources generally used to heat the water. Some prefer to use the term energy center because such facilities convert forms of heat energy into electrical energy. Certain thermal power plants also are designed to produce heat energy for industrial purposes of district heating, or desalination of water, in addition to generating electrical power. Globally, fossil fueled thermal power plants produce a large part of man-made CO2 emissions to the atmosphere, and efforts to reduce these are many, varied and widespread. Introductory overview Almost all coal, nuclear, geothermal, solar thermal electric, and waste incineration plants, as well as many natural gas power plants are thermal. Natural gas is frequently combusted in gas turbines as well as boilers. The waste heat from a gas turbine can be used to raise steam, in a combined cycle plant that improves overall efficiency. Power plants burning coal, fuel oil, or natural gas are often called fossil-fuel power plants. Some biomass-fueled thermal power plants have appeared also. Non-nuclear thermal power plants, particularly fossil-fueled plants, which do not use co-generation are sometimes referred to as conventional power plants. Commercial electric utility power stations are usually constructed on a large scale and designed for continuous operation. Electric power plants typically use three-phase electrical generators to produce alternating current (AC) electric power at a frequency of 50 Hz or 60 Hz. Large companies or institutions may have their own power plants to supply heating or electricity to their facilities, especially if steam is created anyway for other purposes. Steam-driven power plants have been used in various large ships, but are now usually used in large naval ships. Shipboard power plants usually directly couple the turbine to the ship's propellers through gearboxes. Power plants in such ships also provide steam to smaller turbines driving electric generators to supply electricity. Shipboard steam power plants can be either fossil fuel or nuclear. Nuclear marine propulsion is, with few exceptions, used only in naval vessels. There have been perhaps about a dozen turbo-electric ships in which a steam-driven turbine drives an electric generator which powers an electric motor for propulsion. Combined heat and power plants (CH&P plants), often called co-generation plants, produce both electric power and heat for process heat or space heating. Steam and hot water lose energy when piped over substantial distance, so carrying heat energy by steam or hot water is often only worthwhile within a local area, such as a ship, industrial plant, or district heating of nearby buildings. The initially developed reciprocating steam engine has been used to produce mechanical power since the 18th Century, with notable improvements being made by James Watt. When the first commercially developed central electrical power stations were established in 1882 at Pearl Street Station in New York and Holborn Viaduct power station in London, reciprocating steam engines were used. 
The development of the steam turbine in 1884 provided larger and more efficient machine designs for central generating stations. By 1892 the turbine was considered a better alternative to reciprocating engines; turbines offered higher speeds, more compact machinery, and stable speed regulation allowing for parallel synchronous operation of generators on a common bus. After about 1905, turbines entirely replaced reciprocating engines in large central power stations. The largest reciprocating engine-generator sets ever built were completed in 1901 for the Manhattan Elevated Railway. Each of seventeen units weighed about 500 tons and was rated 6000 kilowatts; a contemporary turbine set of similar rating would have weighed about 20% as much. The energy efficiency of a conventional thermal power station, considered as salable energy produced as a percent of the heating value of the fuel consumed, is typically 33% to 48%. As with all heat engines, their efficiency is limited and governed by the laws of thermodynamics. By comparison, most hydropower stations in the United States are about 90 percent efficient in converting the energy of falling water into electricity. The energy of a thermal power station not utilized in power production must leave the plant in the form of heat to the environment. This waste heat can go through a condenser and be disposed of with cooling water or in cooling towers. If the waste heat is instead utilized for district heating, it is called co-generation. An important class of thermal power stations is associated with desalination facilities; these are typically found in desert countries with large supplies of natural gas, and in these plants freshwater production and electricity are equally important co-products. The Carnot efficiency dictates that higher efficiencies can be attained by increasing the temperature of the steam. Subcritical fossil fuel power plants can achieve 36–40% efficiency. Supercritical designs have efficiencies in the low to mid 40% range, with new "ultra-supercritical" designs using pressures of 4400 psi (30.3 MPa) and multiple stage reheat reaching about 48% efficiency. Above the critical point for water of 705 °F (374 °C) and 3212 psi (22.06 MPa), there is no phase transition from water to steam, but only a gradual decrease in density. Current nuclear power plants must operate below the temperatures and pressures that coal-fired plants do, since the pressurized vessel is very large and contains the entire bundle of nuclear fuel rods. The size of the reactor limits the pressure that can be reached. This, in turn, limits their thermodynamic efficiency to 30–32%. Some advanced reactor designs being studied, such as the Very high temperature reactor, Advanced gas-cooled reactor and Supercritical water reactor, would operate at temperatures and pressures similar to current coal plants, producing comparable thermodynamic efficiency. Electricity cost The direct cost of electric energy produced by a thermal power station is the result of cost of fuel, capital cost for the plant, operator labour, maintenance, and such factors as ash handling and disposal. Indirect, social or environmental costs such as the economic value of environmental impacts, or environmental and health effects of the complete fuel cycle and plant decommissioning, are not usually assigned to generation costs for thermal stations in utility practice, but may form part of an environmental impact assessment.
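As a concrete illustration of the Carnot relationship mentioned above, the short sketch below computes the theoretical upper limit η = 1 − T_cold/T_hot for two representative steam temperatures against a ~35 °C condenser. The steam temperatures are illustrative assumptions, not figures taken from this article; real plants reach only the 36–48% quoted above because of practical losses.

```python
# Carnot limit: eta = 1 - T_cold / T_hot, with temperatures in kelvin.
# The steam temperatures below are illustrative; the condenser is taken
# at roughly 35 degrees C, as discussed later in the article.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Upper bound on heat-engine efficiency between two temperatures (in deg C)."""
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

for label, t_steam_c in [("subcritical steam, ~540 C", 540.0),
                         ("ultra-supercritical steam, ~600 C", 600.0)]:
    print(f"{label}: Carnot limit = {carnot_efficiency(t_steam_c, 35.0):.0%}")
```

The limits come out around 62% and 65%, well above the real-world 36–48% range, which is the point of the comparison: raising steam temperature raises the ceiling, but boiler, turbine, and auxiliary losses keep actual plants well below it.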
Typical coal thermal power station For units over about 200 MW capacity, redundancy of key components is provided by installing duplicates of the forced and induced draft fans, air preheaters, and fly ash collectors. On some units of about 60 MW, two boilers per unit may instead be provided. Boiler and steam cycle In fossil-fueled power plants, steam generator refers to a furnace that burns the fossil fuel to boil water to generate steam. In the nuclear plant field, steam generator refers to a specific type of large heat exchanger used in a pressurized water reactor (PWR) to thermally connect the primary (reactor plant) and secondary (steam plant) systems, which generates steam. In a nuclear reactor called a boiling water reactor (BWR), water is boiled to generate steam directly in the reactor itself and there are no units called steam generators. In some industrial settings, there can also be steam-producing heat exchangers called heat recovery steam generators (HRSG) which utilize heat from some industrial process. The steam generating boiler has to produce steam at the high purity, pressure and temperature required for the steam turbine that drives the electrical generator. Geothermal plants need no boiler since they use naturally occurring steam sources. Heat exchangers may be used where the geothermal steam is very corrosive or contains excessive suspended solids. A fossil fuel steam generator includes an economizer, a steam drum, and the furnace with its steam generating tubes and superheater coils. Necessary safety valves are located at suitable points to avoid excessive boiler pressure. The air and flue gas path equipment include: forced draft (FD) fan, air preheater (AP), boiler furnace, induced draft (ID) fan, fly ash collectors (electrostatic precipitator or baghouse) and the flue gas stack. Feed water heating and deaeration The boiler feedwater used in the steam boiler is a means of transferring heat energy from the burning fuel to the mechanical energy of the spinning steam turbine. The total feed water consists of recirculated condensate water and purified makeup water. Because the metallic materials it contacts are subject to corrosion at high temperatures and pressures, the makeup water is highly purified before use. A system of water softeners and ion exchange demineralizers produces water so pure that it coincidentally becomes an electrical insulator, with conductivity in the range of 0.3–1.0 microsiemens per centimeter. The makeup water in a 500 MWe plant amounts to perhaps 120 US gallons per minute (7.6 L/s) to replace water drawn off from the boiler drums for water purity management, and to also offset the small losses from steam leaks in the system. The feed water cycle begins with condensate water being pumped out of the condenser after traveling through the steam turbines. The condensate flow rate at full load in a 500 MW plant is about 6,000 US gallons per minute (400 L/s). The water is pressurized in two stages, and flows through a series of six or seven intermediate feed water heaters, heated up at each point with steam extracted from an appropriate duct on the turbines and gaining temperature at each stage. Typically, in the middle of this series of feedwater heaters, and before the second stage of pressurization, the condensate plus the makeup water flows through a deaerator that removes dissolved air from the water, further purifying and reducing its corrosiveness. 
The water may be dosed following this point with hydrazine, a chemical that removes the remaining oxygen in the water to below 5 parts per billion (ppb). It is also dosed with pH control agents such as ammonia or morpholine to keep the residual acidity low and thus non-corrosive. Boiler operation The boiler is a rectangular furnace about 50 feet (15 m) on a side and 130 feet (40 m) tall. Its walls are made of a web of high pressure steel tubes about 2.3 inches (58 mm) in diameter. Pulverized coal is air-blown into the furnace through burners located at the four corners, or along one wall, or two opposite walls, and it is ignited to rapidly burn, forming a large fireball at the center. The thermal radiation of the fireball heats the water that circulates through the boiler tubes near the boiler perimeter. The water circulation rate in the boiler is three to four times the throughput. As the water in the boiler circulates it absorbs heat and changes into steam. It is separated from the water inside a drum at the top of the furnace. The saturated steam is introduced into superheat pendant tubes that hang in the hottest part of the combustion gases as they exit the furnace. Here the steam is superheated to 1,000 °F (540 °C) to prepare it for the turbine. Plants designed for lignite (brown coal) are increasingly used in locations as varied as Germany; Victoria, Australia; and North Dakota. Lignite is a much younger form of coal than black coal. It has a lower energy density than black coal and requires a much larger furnace for equivalent heat output. Such coals may contain up to 70% water and ash, yielding lower furnace temperatures and requiring larger induced-draft fans. The firing systems also differ from black coal and typically draw hot gas from the furnace-exit level and mix it with the incoming coal in fan-type mills that inject the pulverized coal and hot gas mixture into the boiler. Plants that use gas turbines to heat the water for conversion into steam use boilers known as heat recovery steam generators (HRSG). The exhaust heat from the gas turbines is used to make superheated steam that is then used in a conventional water-steam generation cycle, as described in the gas turbine combined-cycle plants section below. Boiler furnace and steam drum The water enters the boiler through a section in the convection pass called the economizer. From the economizer it passes to the steam drum and from there it goes through downcomers to inlet headers at the bottom of the water walls. From these headers the water rises through the water walls of the furnace where some of it is turned into steam and the mixture of water and steam then re-enters the steam drum. This process may be driven purely by natural circulation (because the water in the downcomers is denser than the water/steam mixture in the water walls) or assisted by pumps. In the steam drum, the water is returned to the downcomers and the steam is passed through a series of steam separators and dryers that remove water droplets from the steam. The dry steam then flows into the superheater coils. The boiler furnace auxiliary equipment includes coal feed nozzles and igniter guns, soot blowers, water lancing and observation ports (in the furnace walls) for observation of the furnace interior. Furnace explosions due to any accumulation of combustible gases after a trip-out are avoided by flushing out such gases from the combustion zone before igniting the coal.
The steam drum (as well as the super heater coils and headers) have air vents and drains needed for initial start up. Fossil fuel power plants often have a superheater section in the steam generating furnace. The steam passes through drying equipment inside the steam drum on to the superheater, a set of tubes in the furnace. Here the steam picks up more energy from hot flue gases outside the tubing and its temperature is now superheated above the saturation temperature. The superheated steam is then piped through the main steam lines to the valves before the high pressure turbine. Nuclear-powered steam plants do not have such sections but produce steam at essentially saturated conditions. Experimental nuclear plants were equipped with fossil-fired super heaters in an attempt to improve overall plant operating cost. Steam condensing The condenser condenses the steam from the exhaust of the turbine into liquid to allow it to be pumped. If the condenser can be made cooler, the pressure of the exhaust steam is reduced and efficiency of the cycle increases. The surface condenser is a shell and tube heat exchanger in which cooling water is circulated through the tubes. The exhaust steam from the low pressure turbine enters the shell where it is cooled and converted to condensate (water) by flowing over the tubes as shown in the adjacent diagram. Such condensers use steam ejectors or rotary motor-driven exhausters for continuous removal of air and gases from the steam side to maintain vacuum. For best efficiency, the temperature in the condenser must be kept as low as practical in order to achieve the lowest possible pressure in the condensing steam. Since the condenser temperature can almost always be kept significantly below 100 °C where the vapor pressure of water is much less than atmospheric pressure, the condenser generally works under vacuum. Thus leaks of non-condensible air into the closed loop must be prevented. Typically the cooling water causes the steam to condense at a temperature of about 35 °C (95 °F) and that creates an absolute pressure in the condenser of about 2–7 kPa (0.59–2.1 inHg), i.e. a vacuum of about −95 kPa (−28 inHg) relative to atmospheric pressure. The large decrease in volume that occurs when water vapor condenses to liquid creates the low vacuum that helps pull steam through and increase the efficiency of the turbines. The limiting factor is the temperature of the cooling water and that, in turn, is limited by the prevailing average climatic conditions at the power plant's location (it may be possible to lower the temperature beyond the turbine limits during winter, causing excessive condensation in the turbine). Plants operating in hot climates may have to reduce output if their source of condenser cooling water becomes warmer; unfortunately this usually coincides with periods of high electrical demand for air conditioning. The condenser generally uses either circulating cooling water from a cooling tower to reject waste heat to the atmosphere, or once-through water from a river, lake or ocean. The heat absorbed by the circulating cooling water in the condenser tubes must also be removed to maintain the ability of the water to cool as it circulates. This is done by pumping the warm water from the condenser through either natural draft, forced draft or induced draft cooling towers (as seen in the image to the right) that reduce the temperature of the water by evaporation, by about 11 to 17 °C (20 to 30 °F)—expelling waste heat to the atmosphere. 
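The near-vacuum quoted above follows directly from the vapor pressure of water at the condensing temperature. The sketch below estimates it with the Antoine equation; the coefficients are standard textbook values for water between roughly 1 and 100 °C, not figures from this article.

```python
# Saturation (vapor) pressure of water from the Antoine equation:
#   log10(P[mmHg]) = A - B / (C + T[degC])
# A, B, C are common textbook coefficients for water, valid roughly 1-100 degC.

A, B, C = 8.07131, 1730.63, 233.426

def water_vapor_pressure_kpa(t_celsius: float) -> float:
    """Approximate saturation pressure of water in kPa (absolute)."""
    p_mmhg = 10 ** (A - B / (C + t_celsius))
    return p_mmhg * 0.133322  # convert mmHg to kPa

t_condense = 35.0  # degC, the condensing temperature quoted in the article
print(f"Saturation pressure at {t_condense} C = "
      f"{water_vapor_pressure_kpa(t_condense):.1f} kPa absolute")  # roughly 5.6 kPa
```

The result, about 5.6 kPa absolute, sits inside the 2–7 kPa range quoted above and shows why keeping the cooling water (and hence the condensing temperature) low directly deepens the vacuum and improves cycle efficiency.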
The circulation flow rate of the cooling water in a 500 MW unit is about 14.2 m³/s (500 ft³/s or 225,000 US gal/min) at full load. The condenser tubes are made of brass or stainless steel to resist corrosion from either side. Nevertheless they may become internally fouled during operation by bacteria or algae in the cooling water or by mineral scaling, all of which inhibit heat transfer and reduce thermodynamic efficiency. Many plants include an automatic cleaning system that circulates sponge rubber balls through the tubes to scrub them clean without the need to take the system off-line. The cooling water used to condense the steam in the condenser returns to its source without having been changed other than having been warmed. If the water returns to a local water body (rather than a circulating cooling tower), it is tempered with cool 'raw' water to prevent thermal shock when discharged into that body of water. Another form of condensing system is the air-cooled condenser. The process is similar to that of a radiator and fan. Exhaust heat from the low pressure section of a steam turbine runs through the condensing tubes; the tubes are usually finned, and ambient air is pushed through the fins with the help of a large fan. The steam condenses to water to be reused in the water-steam cycle. Air-cooled condensers typically operate at a higher temperature than water-cooled versions. While this saves water, the efficiency of the cycle is reduced (resulting in more carbon dioxide per megawatt of electricity). From the bottom of the condenser, powerful condensate pumps recycle the condensed steam (water) back to the water/steam cycle. Power plant furnaces may have a reheater section containing tubes heated by hot flue gases outside the tubes. Exhaust steam from the high pressure turbine is passed through these heated tubes to collect more energy before driving the intermediate and then low pressure turbines. Air path External fans are provided to give sufficient air for combustion. The forced draft (FD) fan takes air from the atmosphere and, after first warming it in the air preheater for better combustion, injects it via the air nozzles on the furnace wall. The induced draft fan assists the FD fan by drawing out combustible gases from the furnace, maintaining a slightly negative pressure in the furnace to avoid backfiring through any opening. Steam turbine generator The turbine generator consists of a series of steam turbines interconnected to each other and a generator on a common shaft. There is a high pressure turbine at one end, followed by an intermediate pressure turbine, two low pressure turbines, and the generator. As steam moves through the system and loses pressure and thermal energy it expands in volume, requiring increasing diameter and longer blades at each succeeding stage to extract the remaining energy. The entire rotating mass may be over 200 metric tons and 100 feet (30 m) long. It is so heavy that it must be kept turning slowly even when shut down (at 3 rpm) so that the shaft will not bow even slightly and become unbalanced. This is so important that it is one of only five functions of blackout emergency power batteries on site. Other functions are emergency lighting, communication, station alarms and turbogenerator lube oil. Superheated steam from the boiler is delivered through 14–16-inch (360–410 mm) diameter piping to the high pressure turbine where it falls in pressure to 600 psi (4.1 MPa) and to 600 °F (320 °C) in temperature through the stage.
It exits via 24–26-inch (610–660 mm) diameter cold reheat lines and passes back into the boiler where the steam is reheated in special reheat pendant tubes back to 1,000 °F (540 °C). The hot reheat steam is conducted to the intermediate pressure turbine where it falls in both temperature and pressure and passes directly to the long-bladed low pressure turbines, finally exiting to the condenser. The generator, 30 feet (9 m) long and 12 feet (3.7 m) in diameter, contains a stationary stator and a spinning rotor, each containing miles of heavy copper conductor—no permanent magnets here. In operation it generates up to 21,000 amperes at 24,000 volts AC (504 MWe) as it spins at either 3,000 or 3,600 rpm, synchronized to the power grid. The rotor spins in a sealed chamber cooled with hydrogen gas, selected because it has the highest known heat transfer coefficient of any gas and for its low viscosity which reduces windage losses. This system requires special handling during startup, with air in the chamber first displaced by carbon dioxide before filling with hydrogen. This ensures that the highly explosive hydrogen–oxygen environment is not created. The power grid frequency is 60 Hz across North America and 50 Hz in Europe, Oceania, Asia (Korea and parts of Japan are notable exceptions) and parts of Africa. The desired frequency affects the design of large turbines, since they are highly optimized for one particular speed. The electricity flows to a distribution yard where transformers increase the voltage for transmission to its destination. The steam turbine-driven generators have auxiliary systems enabling them to work satisfactorily and safely. The steam turbine generator, being rotating equipment, generally has a heavy, large-diameter shaft. The shaft therefore requires not only supports but also has to be kept in position while running. To minimize the frictional resistance to the rotation, the shaft has a number of bearings. The bearing shells, in which the shaft rotates, are lined with a low friction material like Babbitt metal. Oil lubrication is provided to further reduce the friction between shaft and bearing surface and to limit the heat generated. Stack gas path and cleanup As the combustion flue gas exits the boiler it is routed through a rotating flat basket of metal mesh which picks up heat and returns it to incoming fresh air as the basket rotates; this is called the air preheater. The gas exiting the boiler is laden with fly ash, tiny spherical ash particles. The flue gas contains nitrogen along with combustion products carbon dioxide, sulfur dioxide, and nitrogen oxides. The fly ash is removed by fabric bag filters or electrostatic precipitators. Once removed, the fly ash byproduct can sometimes be used in the manufacturing of concrete. This cleaning up of flue gases, however, only occurs in plants that are fitted with the appropriate technology. Still, the majority of coal-fired power plants in the world do not have these facilities. Legislation in Europe has been effective in reducing flue gas pollution. Japan has been using flue gas cleaning technology for over 30 years and the US has been doing the same for over 25 years. China is now beginning to grapple with the pollution caused by coal-fired power plants. Where required by law, the sulfur and nitrogen oxide pollutants are removed by stack gas scrubbers which use a pulverized limestone or other alkaline wet slurry to remove those pollutants from the exit stack gas.
Other devices use catalysts to remove nitrogen oxide (NOx) compounds from the flue gas stream. The gas travelling up the flue gas stack may by this time have dropped to about 50 °C (120 °F). A typical flue gas stack may be 150–180 metres (490–590 ft) tall to disperse the remaining flue gas components in the atmosphere. The tallest flue gas stack in the world is 419.7 metres (1,377 ft) tall at the GRES-2 power plant in Ekibastuz, Kazakhstan. In the United States and a number of other countries, atmospheric dispersion modeling studies are required to determine the flue gas stack height needed to comply with the local air pollution regulations. The United States also requires the height of a flue gas stack to comply with what is known as the "Good Engineering Practice (GEP)" stack height. In the case of existing flue gas stacks that exceed the GEP stack height, any air pollution dispersion modeling studies for such stacks must use the GEP stack height rather than the actual stack height. Fly ash collection Fly ash is captured and removed from the flue gas by electrostatic precipitators or fabric bag filters (or sometimes both) located at the outlet of the furnace and before the induced draft fan. The fly ash is periodically removed from the collection hoppers below the precipitators or bag filters. Generally, the fly ash is pneumatically transported to storage silos for subsequent transport by trucks or railroad cars. Bottom ash collection and disposal At the bottom of the furnace, there is a hopper for collection of bottom ash. This hopper is always filled with water to quench the ash and clinkers falling down from the furnace. Some arrangement is included to crush the clinkers and convey the crushed clinkers and bottom ash to a storage site. An ash extractor is used to discharge ash from municipal solid waste-fired boilers. Auxiliary systems Boiler make-up water treatment plant and storage Since there is continuous withdrawal of steam and continuous return of condensate to the boiler, losses due to blowdown and leakages have to be made up to maintain a desired water level in the boiler steam drum. For this, continuous make-up water is added to the boiler water system. Impurities in the raw water input to the plant generally consist of calcium and magnesium salts which impart hardness to the water. Hardness in the make-up water to the boiler will form deposits on the tube water surfaces which will lead to overheating and failure of the tubes. Thus, the salts have to be removed from the water, and that is done by a water demineralising treatment plant (DM). A DM plant generally consists of cation, anion, and mixed bed exchangers. Any ions in the final water from this process consist essentially of hydrogen ions and hydroxide ions, which recombine to form pure water. Very pure DM water becomes highly corrosive once it absorbs oxygen from the atmosphere because of its very high affinity for oxygen. The capacity of the DM plant is dictated by the type and quantity of salts in the raw water input. However, some storage is essential as the DM plant may be down for maintenance. For this purpose, a storage tank is installed from which DM water is continuously withdrawn for boiler make-up. The storage tank for DM water is made from materials not affected by corrosive water, such as PVC. The piping and valves are generally of stainless steel. Sometimes, a steam blanketing arrangement or stainless steel doughnut float is provided on top of the water in the tank to avoid contact with air.
DM water make-up is generally added at the steam space of the surface condenser (i.e., the vacuum side). This arrangement not only sprays in the water but also deaerates the DM water, with the dissolved gases being removed by an ejector attached to the condenser. Fuel preparation system In coal-fired power stations, the raw feed coal from the coal storage area is first crushed into small pieces and then conveyed to the coal feed hoppers at the boilers. The coal is next pulverized into a very fine powder. The pulverizers may be ball mills, rotating drum grinders, or other types of grinders. Some power stations burn fuel oil rather than coal. The oil must be kept warm (above its pour point) in the fuel oil storage tanks to prevent the oil from congealing and becoming unpumpable. The oil is usually heated to about 100 °C before being pumped through the furnace fuel oil spray nozzles. Boilers in some power stations use processed natural gas as their main fuel. Other power stations may use processed natural gas as auxiliary fuel in the event that their main fuel supply (coal or oil) is interrupted. In such cases, separate gas burners are provided on the boiler furnaces. Barring gear Barring gear (or "turning gear") is the mechanism provided to rotate the turbine generator shaft at a very low speed after unit stoppages. Once the unit is "tripped" (i.e., the steam inlet valve is closed), the turbine coasts down towards standstill. When it stops completely, there is a tendency for the turbine shaft to deflect or bend if allowed to remain in one position too long. This is because the heat inside the turbine casing tends to concentrate in the top half of the casing, making the top half portion of the shaft hotter than the bottom half. The shaft therefore could warp or bend by millionths of inches. This small shaft deflection, only detectable by eccentricity meters, would be enough to cause damaging vibrations to the entire steam turbine generator unit when it is restarted. The shaft is therefore automatically turned at low speed (about one percent rated speed) by the barring gear until it has cooled sufficiently to permit a complete stop. Oil system An auxiliary oil system pump is used to supply oil at the start-up of the steam turbine generator. It supplies the hydraulic oil system required for the steam turbine's main inlet steam stop valve, the governing control valves, the bearing and seal oil systems, the relevant hydraulic relays and other mechanisms. At a preset speed of the turbine during start-ups, a pump driven by the turbine main shaft takes over the functions of the auxiliary system. Generator cooling While small generators may be cooled by air drawn through filters at the inlet, larger units generally require special cooling arrangements. Hydrogen gas cooling, in an oil-sealed casing, is used because it has the highest known heat transfer coefficient of any gas and for its low viscosity which reduces windage losses. This system requires special handling during start-up, with air in the generator enclosure first displaced by carbon dioxide before filling with hydrogen. This ensures that the highly flammable hydrogen does not mix with oxygen in the air. The hydrogen pressure inside the casing is maintained slightly higher than atmospheric pressure to avoid outside air ingress. The hydrogen must be sealed against outward leakage where the shaft emerges from the casing.
Mechanical seals around the shaft are installed with a very small annular gap to avoid rubbing between the shaft and the seals. Seal oil is used to prevent hydrogen gas leakage to the atmosphere. The generator also uses water cooling. Since the generator coils are at a potential of about 22 kV, an insulating barrier such as Teflon is used to interconnect the water line and the generator high voltage windings. Demineralized water of low conductivity is used. Generator high voltage system The generator voltage for modern utility-connected generators ranges from 11 kV in smaller units to 22 kV in larger units. The generator high voltage leads are normally large aluminium channels because of their high current as compared to the cables used in smaller machines. They are enclosed in well-grounded aluminium bus ducts and are supported on suitable insulators. The generator high voltage leads are connected to step-up transformers for connecting to a high voltage electrical substation (usually in the range of 115 kV to 765 kV) for further transmission by the local power grid. The necessary protection and metering devices are included for the high voltage leads. Thus, the steam turbine generator and the transformer form one unit. Smaller units may share a common generator step-up transformer with individual circuit breakers to connect the generators to a common bus. Monitoring and alarm system Most of the power plant operational controls are automatic. However, at times, manual intervention may be required. Thus, the plant is provided with monitors and alarm systems that alert the plant operators when certain operating parameters are seriously deviating from their normal range. Battery supplied emergency lighting and communication A central battery system consisting of lead acid cell units is provided to supply emergency electric power, when needed, to essential items such as the power plant's control systems, communication systems, turbine lube oil pumps, and emergency lighting. This is essential for a safe, damage-free shutdown of the units in an emergency situation. Transport of coal fuel to site and to storage Most thermal stations use coal as the main fuel. Raw coal is transported from coal mines to a power station site by trucks, barges, bulk cargo ships or railway cars. Generally, when shipped by railways, the coal cars are sent as a full train of cars. The coal received at site may be of different sizes. The railway cars are unloaded at the site by rotary dumpers or side tilt dumpers that tip the coal onto conveyor belts below. The coal is generally conveyed to crushers which crush the coal to about 3⁄4 inch (19 mm) size. The crushed coal is then sent by belt conveyors to a storage pile. Normally, the crushed coal is compacted by bulldozers, as compacting of highly volatile coal avoids spontaneous ignition. The crushed coal is conveyed from the storage pile to silos or hoppers at the boilers by another belt conveyor system.
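To get a rough sense of the tonnage this coal-handling chain must move, the sketch below estimates the burn rate of a 500 MWe unit. The overall efficiency and coal heating value are assumed typical figures, not data from this article, and actual values vary widely with the coal and the plant.

```python
# Rough coal consumption estimate for a 500 MWe unit (illustrative values only).
net_output_mw = 500.0                 # MWe, matching the 500 MW examples above
efficiency = 0.38                     # assumed overall thermal efficiency
coal_heating_value_mj_per_kg = 25.0   # assumed; varies widely with coal type

heat_input_mw = net_output_mw / efficiency                     # thermal MW of fuel heat
coal_kg_per_s = heat_input_mw / coal_heating_value_mj_per_kg   # (MJ/s) / (MJ/kg) = kg/s
coal_tonnes_per_hour = coal_kg_per_s * 3600.0 / 1000.0

print(f"Heat input  = {heat_input_mw:.0f} MW thermal")
print(f"Coal burn   = {coal_kg_per_s:.0f} kg/s, about {coal_tonnes_per_hour:.0f} t/h")
```

Under these assumptions the unit burns on the order of 190 tonnes of coal per hour, several thousand tonnes per day, which is why the unloading, crushing, and conveying plant described above is built on such a large scale.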
http://en.wikipedia.org/wiki/Thermal_power_station
13
16
The Nootka Crisis was an international incident and political dispute between the Kingdom of Great Britain and the Kingdom of Spain, triggered by a series of events that took place during the summer of 1789 at Nootka Sound. Nootka Sound is a network of inlets on the west coast of Vancouver Island, in the Pacific Northwest region of North America, now part of the Canadian province of British Columbia and the territory of the Mowachaht group of the Nuu-chah-nulth indigenous people. The crisis revolved around larger issues about sovereignty claims and rights of navigation and trade. Between 1774 and 1789 Spain sent several expeditions to the Pacific Northwest to reassert its long-held navigation and territorial claims to the area. By 1775 these expeditions had reached Bucareli Bay, the mouth of the Columbia River, and Sitka Sound. Territorial rights were asserted according to acts of sovereignty customary at the time. However, some years later several British fur trading vessels entered the area to which Spain had laid claim. A complex series of events led to these British vessels being seized by the Spanish Navy at Nootka Sound. When the news reached Europe, Britain requested compensation and the Spanish government refused. Both sides prepared for war and sought assistance from allies. The crisis was resolved peacefully but with difficulty through a set of three agreements, known collectively as the Nootka Conventions. Spain agreed to share some rights to settle along the Pacific coast but kept its main Pacific claims. The outcome was considered a victory for the mercantile interests of Britain and opened the way to further British expansion in the Pacific. However, Spain continued to colonize and settle the Pacific coast, especially present-day California, until 1821. The events at Nootka Sound, apart from the larger international crisis, are sometimes called the Nootka Incident, the Nootka Sound Incident, and similar terms. The larger Nootka Crisis is known variously by names such as the Nootka Sound Crisis, the Nootka Sound Controversy, the Great Spanish Armament, and other variations. Northwestern North America (the Pacific Northwest) was little explored by European ships before the mid-18th century. But by the end of the century several nations were vying for control of the region, including Britain, Spain, Russia, and the United States. For centuries Spain had claimed the entire Pacific coast of North and South America. This claim was based on a number of events. In 1493 Pope Alexander VI had issued the Inter caetera papal bull, dividing the western hemisphere into Spanish and Portuguese zones, in theory granting nearly the entire New World to Spain. This was further defined in the 1494 Treaty of Tordesillas. In 1513 Balboa crossed the Isthmus of Panama and formally laid claim to all the shores washed by the Pacific Ocean. As the years went by new criteria for determining sovereignty evolved in European international law, including "prior discovery" and "effective occupation". Spain made claims of prior discovery for the northwest coast of North America by citing the voyages of Cabrillo in 1542, Ferrelo in 1543, and Vizcaino in 1602–03. None of these voyages had reached north of the 44th parallel, and Spain had no "effective settlement" north of Mexico.
Thus when, in the mid-18th century, the Russians began to explore Alaska and establish fur trading posts, Spain responded by building a new naval base at San Blas, Mexico, and using it as a base for sending a series of exploration and reconnaissance voyages to the far northwest. These voyages, intended to ascertain the Russian threat and to establish "prior discovery" claims, were supplemented by the "effective settlement" of Alta California. Starting in 1774, Spanish expeditions were sent north to the Pacific Northwest coast and Alaska to reassert Spain's claims and navigation rights in the area. By 1775 Spanish exploration had reached Bucareli Bay, the mouth of the Columbia River between present-day Oregon and Washington, and Sitka Sound. James Cook of the British Royal Navy explored the Pacific Northwest coast, including Nootka Sound, in 1778. His journals were published in 1784 and aroused great interest in the fur trading potential of the region. Even before 1784 unauthorized accounts had already familiarized British merchants with the possible profits to be made. The first British trader to arrive on the northwest coast after Cook was James Hanna, in 1785. News of the large profit Hanna made selling northwest furs in China inspired many other British ventures. Cook's visit to Nootka Sound would later be used by the British in their claim to the region, even though Cook made no effort to formally claim possession. Spain countered by citing Juan Pérez, who anchored in Nootka Sound in 1774. By the late 1780s Nootka Sound was the most important anchorage on the northwestern coast. Russia, Britain, and Spain all made moves to occupy it for good. John Meares was one of the movers behind the early British fur trading effort in the Pacific Northwest. After an ill-fated voyage to Alaska in 1786–87, Meares returned to the northwest in 1788. He arrived at Nootka Sound in command of the Felice Adventurero, along with the Iphigenia Nubiana under William Douglas. The ships were registered in Macau, a Portuguese colony in China, and used Portuguese flags in order to evade the British East India Company monopoly on trading in the Pacific. Non-British ships were not required to have licences from the East India Company. Meares later claimed that Maquinna, a chief of the Nuu-chah-nulth (Nootka) people, sold him some land on the shore of Friendly Cove in Nootka Sound, in exchange for some pistols and trade goods, and that on this land some kind of building was erected. These claims would become a key point in Britain's position during the Nootka Crisis. Spain strongly disputed both claims, and the true facts of the matter have never been fully established. The land and building aside, there is no doubt that Meares's men, and a group of Chinese workers they brought, built the sloop North West America. It was launched in September 1788, the first non-indigenous vessel built in the Pacific Northwest. The North West America would also play a role in the Nootka Crisis, being one of the vessels seized by Spain. At the end of the summer Meares and the three ships left. During the winter of 1788–89 Meares was in Guangzhou (Canton), China, where he and others including John Henry Cox and Daniel Beale formed a partnership called the Associated Merchants Trading to the Northwest Coast of America. Plans were made for more ships to sail to the Pacific Northwest in 1789, including the Princess Royal, under Thomas Hudson, and the Argonaut under James Colnett.
The consolidation of the fur trading companies of Meares and the Etches (King George's Sound Company) resulted in James Colnett being given the overall command. Colnett's orders in 1789 were to establish a permanent fur trading post at Nootka Sound based on the foothold accomplished by Meares. While the British fur traders were getting organized, the Spanish were continuing their effort to secure the Pacific Northwest. At first the Spanish were responding mainly to Russian activity in Alaska. On a 1788 voyage to Alaska, Esteban José Martínez had learned that the Russians were intending to establish a fortified outpost at Nootka Sound. This, in addition to the increasing use of Nootka Sound by British fur traders, resulted in the Spanish decision to assert sovereignty on the northwest coast once and for all. Plans were laid for Nootka Sound to be colonized. Spain hoped to establish and maintain sovereignty on the entire coast as far north as the Russian posts in Prince William Sound. In early 1789 the Spanish expedition under Martínez arrived at Nootka Sound. The force consisted of the warship Princesa, commanded by Martínez, and the supply ship San Carlos, under Gonzalo López de Haro. The expedition built British Columbia's first European settlement, Santa Cruz de Nuca, on Nootka Sound, including houses, a hospital, and Fort San Miguel. Martínez arrived at Nootka Sound on May 5, 1789. He found three ships already there. Two were American, the Columbia Rediviva and the Lady Washington, which had wintered at Nootka Sound. The British ship was the Iphigenia. It was seized and its captain, William Douglas, arrested. After a few days Martínez released Douglas and his ship and ordered him to leave and not return. Douglas heeded the warning. On June 8, the North West America, under Robert Funter, arrived at Nootka Sound and was seized by Martínez. The sloop was renamed Santa Gertrudis la Magna and used for exploring the region. José María Narváez was given command and sailed far into the Strait of Juan de Fuca. Martínez later claimed that Funter had abandoned the vessel. Martínez had given supplies to the Iphigenia and claimed his seizure of the North West America was for the purpose of holding the vessel as a security for the money owed by Meares's company for the supplies. On June 24, in front of the British and Americans present at Nootka Sound, Martínez performed a formal act of sovereignty, taking possession of the entire northwest coast for Spain. On July 2, the British ships Princess Royal and Argonaut arrived. The Princess Royal was first, and Martínez ordered its captain, Thomas Hudson, to abandon the area and return to China, based on Spain's territorial and navigation rights. Later in the day the Argonaut arrived. Colnett said that he was intending to build a settlement at Nootka Sound, which was considered a violation of Spanish sovereignty, and after a hot-tempered argument Martínez seized the ship and arrested Colnett, his crew, and the Chinese workers Colnett had brought. In addition to the Chinese workers, the Argonaut carried a considerable amount of equipment. Later, Martínez used the Chinese workforce to build Fort San Miguel and otherwise improve the Spanish post. The Argonaut also carried materials for the construction of a new ship. After Narváez returned in the Santa Gertrudis la Magna (the seized and renamed North West America), the materials from the Argonaut were used to improve the vessel.
By the end of 1789 the Santa Gertrudis la Magna was in San Blas, where it was dismantled. The pieces were taken back to Nootka Sound in 1790 by Francisco de Eliza and used to build a schooner, christened Santa Saturnina. This vessel, the third incarnation of the North West America, was used by Narváez during his 1791 exploration of the Strait of Georgia. On July 12, Hudson returned to Nootka Sound with the Princess Royal. He did not intend to enter but was becalmed. This was seen as a provocation and he was seized by the Spanish. The Nuu-chah-nulth, indigenous to Nootka Sound, observed but did not understand the disputes between the Spanish and British. On July 13, one of the Nuu-chah-nulth leaders, Callicum, the son of Maquinna, went to meet with Martínez, who was on board the newly captured Princess Royal. Callicum's attitude and angry calls alarmed the Spanish, and Callicum was shot dead. Sources differ over exactly how this happened. Some say that Martínez fired a warning shot and a nearby Spanish sailor, thinking Martínez meant to kill and missed, fired as well and killed Callicum. Another source says that Martínez aimed to hit Callicum but his musket misfired and another sailor fired his musket and killed Callicum. Sources also differ over what Callicum was angry about, whether it was the seizing of ships, or something else. In any case the event caused a rift between the Spanish and the Nuu-chah-nulth. Maquinna, in fear of his life, fled to Clayoquot Sound and moved with his people from Yuquot to Aoxsha. On July 14 the Argonaut set sail for San Blas, with a Spanish crew and Colnett and his crew as prisoners. Two weeks later the Princess Royal followed, with the San Carlos as an escort. The American ships Columbia Rediviva and Lady Washington, also fur trading, were in the area all summer, sometimes anchored in Friendly Cove. Martínez left them alone even though his instructions were to prevent ships of any nation from trading at Nootka Sound. The captured crew of the North West America was sent to the Columbia before the Americans set sail for China. Despite the ongoing conflict and the warnings, two other American ships arrived at Nootka Sound late in the season. The first of these ships, the Fair American, under Thomas Humphrey Metcalfe, was captured by the forces of Martínez upon arrival. Its sister ship, the Eleanora, under Humphrey's father, Simon Metcalfe, was nearly captured but escaped. On July 29, 1789, the Spanish supply ship Aranzazu arrived from San Blas with orders from Viceroy Flores to evacuate Nootka Sound by the end of the year. By the end of October the Spanish had completely abandoned Nootka Sound. They returned to San Blas with the Princess Royal and the Argonaut, with their captains and crews as prisoners, as well as the Fair American. The captured North West America, renamed Santa Gertrudis la Magna, returned to San Blas separately. The Fair American was released in early 1790 without much notice. The Nootka Incident did not spark a crisis in the relationship between the United States and Spain. By late 1789 Viceroy Flores had already been replaced with a new viceroy, Juan Vicente de Güemes Padilla Horcasitas y Aguayo, 2nd Count of Revillagigedo, who was determined to continue defending the Spanish rights to the area, including settling Nootka Sound and the Pacific Northwest coast in general. Martínez, who had enjoyed the favor of Flores, became a scapegoat under the new regime.
The senior commander of the Spanish naval base at San Blas, Juan Francisco de la Bodega y Quadra, replaced Martínez as the primary Spaniard in charge of Nootka Sound and the northwest coast. A new expedition was organized and in early 1790 Nootka Sound was reoccupied by the Spanish, under the command of Francisco de Eliza. The fleet sent to Nootka Sound in 1790 was the largest Spanish force yet sent to the northwest. News about the events at Nootka Sound reached London in January 1790. The main statesmen involved in the impending crisis were William Pitt the Younger, the British Prime Minister, and José Moñino y Redondo, conde de Floridablanca, the chief minister of Spain. Pitt made the claim that the British had the right to trade in any Spanish territory desired, despite Spanish laws to the contrary. He knew this claim was indefensible and would likely lead to war, but felt driven to make it by "the public outcry" in Britain. The ultimate outcome of the Nootka Crisis, publicized as a diplomatic victory in Britain, increased the prestige and popularity of Pitt. In April 1790 John Meares arrived in England, confirmed various rumors, claimed to have bought land and built a settlement at Nootka before Martínez, and generally fanned the flames of anti-Spanish feelings. In May the issue was taken up in the House of Commons as the Royal Navy began to make preparations for hostilities. An ultimatum was delivered to Spain. Meares published an account of his Voyages in 1790, which gained widespread attention, especially in light of the developing Nootka Crisis. Meares not only described his voyages to the northwest coast, but put forward a grand vision of a new economic network based in the Pacific, joining in trade widely separated regions such as the Pacific Northwest, China, Japan, Hawaii, and England. This idea tried to imitate Spain's centuries-old Pacific and Atlantic trade networks of the Manila Galleons and Atlantic treasure fleets which linked Asia and the Philippines with North America and Spain since the 16th century. Meares' vision required a loosening of the monopolistic power of the East India Company and the South Sea Company, which between them controlled all British trade in the Pacific. Meares argued strongly for loosening their power. His vision eventually came to pass, in its general form, but not before the long struggle of the Napoleonic Wars was over. Both Britain and Spain sent powerful fleets of warships towards each other in a show of force. There was a chance of open warfare had the fleets encountered one another, but they did not. The role of France in the conflict was key. France and Spain were allies under the Family Compact between the ruling Bourbon houses. The combined French and Spanish fleets would be a serious threat to the Royal Navy of Britain. The French Revolution had broken out in July 1789 but had not reached truly serious levels by the summer of 1790. King Louis XVI was still the monarch and the French military was relatively intact. In response to the Nootka Crisis France mobilized its navy. But by the end of August the French government had decided it could not become involved. The National Assembly, growing in power, declared that France would not go to war. Spain's position was threatened and negotiations to avoid war began. The Dutch Republic provided naval support to the British during the Nootka Crisis, a result of a shift in Dutch alliance from France to Britain. 
This was the first test of the Triple Alliance of Britain, Prussia, and the Dutch Republic. Without French help, Spain decided to negotiate in order to avoid war, and the first Nootka Convention was signed on October 28, 1790. The first Nootka Convention, called the Nootka Sound Convention, resolved the crisis in general. The convention held that the northwest coast would be open to traders of both Britain and Spain, and that the captured British ships would be returned and an indemnity paid. It also held that the land owned by the British at Nootka Sound would be restored, which proved difficult to carry out. The Spanish claimed that the only such land was the small parcel where Meares had built the North West America. The British held that Meares had in fact purchased the whole of Nootka Sound from Maquinna, as well as some land to the south. Until the details were worked out, which took several years, Spain retained control of Nootka Sound and continued to garrison the fort at Friendly Cove. Complicating the issue was the changing role of the Nuu-chah-nulth in relation to Britain and Spain. The Nuu-chah-nulth had become highly suspicious and hostile toward Spain following the 1789 killing of Callicum. But the Spanish worked hard to improve the relationship, and by the time the Nootka Conventions were to be carried out the Nuu-chah-nulth were essentially allied with the Spanish. This development came about in large degree due to the efforts of Alessandro Malaspina and his officers during his month-long stay at Nootka Sound in 1791. Malaspina was able to regain the trust of Maquinna and to secure his assurance that the Spanish held rightful title to the land at Nootka Sound. Negotiations between Britain and Spain over the details of the Nootka Convention were to take place at Nootka Sound in the summer of 1792, for which purpose Juan Francisco de la Bodega y Quadra traveled there. The British negotiator was George Vancouver, who arrived on August 28, 1792. Although Vancouver and Bodega y Quadra were friendly with one another, their negotiations did not go smoothly. Spain desired to set the Spanish-British boundary at the Strait of Juan de Fuca, but Vancouver insisted on British rights to the Columbia River. Vancouver also objected to the new Spanish post at Neah Bay. Bodega y Quadra insisted on Spain retaining Nootka Sound, which Vancouver could not accept. In the end the two agreed to refer the matter to their respective governments. By 1793 Britain and Spain had become allies in a war against France. The issues of the Nootka Crisis had become less important. An agreement was signed on January 11, 1794, under which both nations agreed to abandon Nootka Sound, with a ceremonial transfer of the post at Friendly Cove to the British. The official transfer occurred on March 28, 1795. General Álava represented Spain, and Lieutenant Thomas Pearce represented Britain. The British flag was ceremoniously raised and lowered. Afterwards, Pearce presented the flag to Maquinna and asked him to raise it whenever a ship appeared. Under the Nootka Convention, Britain and Spain agreed not to establish any permanent base at Nootka Sound, but ships from either nation could visit. The two nations also agreed to prevent any other nation from establishing sovereignty. The Nootka Conventions are sometimes described as a commitment by Spain to withdraw from the northwest coast, but there was no such requirement.
In the larger scheme of things the Nootka Conventions weakened the notion that a country could claim exclusive sovereignty without establishing settlements. It was not enough to claim territory by a grant of the Pope, or by "right of first discovery". Claims had to be backed up with some kind of actual occupation. The British did not win all of the points they had sought. British merchants were still restricted from trading directly with Spanish America and no northern boundary of Spanish America was set. Nevertheless, in the aftermath of the Nootka Crisis Britain became the dominant power in the Pacific. Spanish rights in the Pacific Northwest were later acquired by the United States via the Adams-Onís Treaty, signed in 1819. The United States argued that it acquired exclusive sovereignty from Spain, which became a key part of the American position during the Oregon boundary dispute. In countering the US claim of exclusive sovereignty the British cited the Nootka Conventions. This dispute was not resolved until the signing of the Oregon Treaty in 1846, dividing the disputed territory and establishing what later became the current international boundary between Canada and the United States.
http://www.mashpedia.com/Nootka_Sound_Controversy
Olive oil is made by pressing or extracting the rich oil from the olive fruit. It seems like a simple matter to press the olives and collect the oil, but many oil extraction processes exist for the many different types of olives grown around the world. To complicate things further, there are also various grades of olive oil, and carefully selected groups of officials meet to define and redefine the grading of olive oil. To help make our experiment a more scientific and less political exercise, we will winnow our investigation of olive oil down to a manageable few variables.

After processing, olive oil comes in three common grades: extra virgin, regular, and light. Extra virgin olive oil is considered the highest quality. It is the first pressing from freshly prepared olives. It has a greenish-yellow tint and a distinctively fruity aroma because of the high levels of volatile materials extracted from the fruit. Regular olive oil is collected with the help of a warm water slurry to increase yield, squeezing every last drop of oil out of the olives. It is pale yellow in color, with a slight aroma, because it contains fewer volatile compounds. Light olive oil is very light in color and has virtually no aroma because it has been processed under pressure. This removes most of the chlorophyll and volatile compounds. Light olive oil is commonly used for frying because it does not affect the taste of fried foods, and it is relatively inexpensive.

The visible light absorbance spectrum of chlorophyll gives interesting results. The chemistry of chlorophyll (some references cite four types: a, b, c, and d) creates absorbance peaks in the 400–500 nm range and in the 600–700 nm range. The combination of visible light that is not absorbed appears green to the human eye, but different sources of chlorophyll will have different ratios of these peaks, which create various shades of green. The ability of chlorophyll to soak up light energy across a wide swath of the visible range helps power photosynthesis at optimum efficiency in plants.

In this experiment, you will
- Measure and analyze the visible light absorbance spectra of three standard olive oils: extra virgin, regular, and light.
- Measure the absorbance spectrum of an "unknown" olive oil sample.
- Identify the unknown olive oil as one of the three standard types.
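The identification step amounts to comparing the unknown spectrum against the three reference spectra and picking the closest match. Below is a minimal sketch of that comparison in Python, assuming the spectra have been exported from the data-collection software as two-column CSV files (the file names are hypothetical) sampled at the same wavelengths; the actual export format of your software may differ.

```python
import numpy as np

def load_spectrum(path):
    """Load a two-column CSV of wavelength (nm) and absorbance."""
    data = np.loadtxt(path, delimiter=",", skiprows=1)
    return data[:, 0], data[:, 1]

# Hypothetical file names for the three standards and the unknown sample.
standards = {
    "extra virgin": "extra_virgin.csv",
    "regular": "regular.csv",
    "light": "light.csv",
}

wavelengths, unknown = load_spectrum("unknown.csv")

scores = {}
for name, path in standards.items():
    _, reference = load_spectrum(path)
    # Sum of squared differences between the two absorbance curves;
    # smaller means the unknown looks more like this standard.
    scores[name] = np.sum((unknown - reference) ** 2)

best = min(scores, key=scores.get)
print(f"Unknown sample most closely matches: {best} olive oil")

# Chlorophyll absorbs strongly near 400-500 nm and 600-700 nm, so the peak
# heights in those bands carry most of the distinguishing signal.
band = (wavelengths >= 600) & (wavelengths <= 700)
print(f"Mean absorbance of unknown in 600-700 nm band: {unknown[band].mean():.3f}")
```

If the oils were measured at different dilutions, a normalized comparison (for example, correlation between the curves) would be a more robust choice than raw squared differences.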
http://www.vernier.com/experiments/bio-a/14/determination_of_chlorophyll_in_olive_oil/
In the early days of the Roman Republic, public taxes consisted of modest assessments on owned wealth and property. The tax rate under normal circumstances was 1% and sometimes would climb as high as 3% in situations such as war. These modest taxes were levied against land, homes and other real estate, slaves, animals, personal items and monetary wealth. Taxes were collected from individuals and, at times, payments could be refunded by the treasury for excess collections. With limited census accuracy, tax collection on individuals was a difficult task at best.

By 167 B.C. the Republic had enriched itself greatly through a series of conquests. Gains such as the silver and gold mines in Spain created an excellent source of revenue for the state, and a much larger tax base through its provincial residents. By this time, Rome no longer needed to levy a tax against its citizens in Italy and looked only to the provinces for collections. With expansion, Roman censors found that accurate census taking in the provinces was a difficult task at best. To ease the strain, taxes were assessed as a tithe on entire communities rather than on individuals. Tax assessments in these communities fell under the jurisdiction of provincial governors and various local magistrates, using rules similar to the old system.

Tax farmers (Publicani) were used to collect these taxes from the provincials. Rome, in eliminating its own burden for this process, would put the collection of taxes up for auction every few years. The Publicani would bid for the right to collect in particular regions, and pay the state in advance of this collection. These payments were, in effect, loans to the state, and Rome was required to pay interest back to the Publicani. As an offset, the Publicani had the individual responsibility of converting properties and goods collected into coinage, alleviating this hardship from the treasury. In the end, the collectors would keep anything in excess of what they bid plus the interest due from the treasury, the risk being that they might not collect as much as they originally bid.

Tax farming proved to be an incredibly profitable enterprise and served to increase the treasury, as well as line the pockets of the Publicani. However, the process was rife with corruption and scheming. For example, with the profits collected, tax farmers could collude with local magistrates or farmers to buy large quantities of grain at low rates and hold it in reserve until times of shortage. These Publicani were also money lenders, or the bankers of the ancient world, and would lend cash to hard-pressed provincials at the exorbitant rates of 4% per month or more.

In the late 1st century BC, and after considerably more Roman expansion, Augustus essentially put an end to tax farming. Complaints from provincials over excessive assessments and large, un-payable debts ushered in the final days of this lucrative business. The Publicani continued to exist as money lenders and entrepreneurs, but easy access to wealth through taxes was gone. Tax farming was replaced by direct taxation early in the Empire, and each province was required to pay a wealth tax of about 1% and a flat poll tax on each adult. This new procedure, of course, required regular census taking to evaluate the taxable number of people and their income/wealth status. Taxation in this environment switched mainly from one of owned property and wealth to that of an income tax.
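To put the 4% per month lending rate quoted above in perspective, here is a short illustrative calculation (mine, not from the source); the exact compounding terms of ancient loan contracts are not given in the text, so both simple and monthly-compounded annual figures are shown.

```python
# Illustrative only: assumes a nominal 4% per month, as quoted in the text.
monthly_rate = 0.04

simple_annual = 12 * monthly_rate                # 0.48  -> 48% per year
compound_annual = (1 + monthly_rate) ** 12 - 1   # ~0.601 -> ~60% per year

print(f"Simple annual rate:     {simple_annual:.0%}")
print(f"Compounded annual rate: {compound_annual:.1%}")
```

Either way, a provincial borrowing to cover a tax assessment could owe roughly half again the principal within a single year, which is what made the Publicani's lending business so lucrative.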
As a result, the taxable yield varied greatly based on economic conditions, but theoretically the process was fairer and less open to corruption. In contrast, the Publicani had focused their efforts on collecting revenue where it was most easily available, due to limited time and capacity. Their efforts were mainly directed at the cash wealthy, because converting properties into cash could be a difficult process. Additionally, growth of a provincial tax base went straight to the coffers of the Publicani: they had the luxury of bidding against previous tax collections, and since the Treasury's knowledge of increased wealth lagged, it would take several collection cycles before auction prices were raised. In this way, the Publicani increased their own wealth, but eventually the state would reap the benefit of increased collections down the line. The imperial system of flat levies instituted by Augustus shifted the system into being far less progressive, however. Growth in the provincial taxable basis under the Publicani had led to higher collections in time, while under Augustus, fixed payments reduced this potential. Tax-paying citizens were aware of the exact amounts they needed to pay, and any excess income remained with the communities. While there could obviously be reassessments that would adjust the taxable base, it was a slow process that left a lot of room for the earning of untaxed income. While seemingly less effective to the state than the Publicani system, the new practice allowed for considerable economic growth and expansion.

As time passed, each successive emperor was challenged with meeting the soaring costs of administration and financing the legions, both for national defense and to maintain loyalty. New schemes to revise the tax structure came and went throughout the Empire's history. By the reign of Diocletian, high inflation and debased coinage values led to one of the more drastic changes in the system. In the late 3rd century AD, he imposed a universal price freeze, capping maximum prices, while at the same time he reinstated the land tax on Italian landowners. Special tolls on money traders and companies were also imposed to help increase the tax collections. Diocletian's program, in theory, should have helped ease the burden on various classes of taxpayers, but it didn't work that way in practice. As an example, additional levies were imposed on landowners even after the land tax had been paid, because each was now treated as a separate tax rather than credited against what had already been collected. The burden of paying the expected amounts was shifted from communities and the individuals within them to the local senatorial class, whose members would then be subject to complete ruin in the case of an economic shortfall in a particular region. Following Diocletian, Constantine compounded these burdens by making the senatorial class hereditary. By so doing, all debts and economic ramifications were passed from one senatorial generation to the next, ruining entire families and never allowing for a recovery that could benefit an entire community.

Taxes in the Roman Empire, in comparison with modern times, were certainly no more excessive. In many cases they were far less per capita than anything we can compare to today. However, the strain of tax revenues was heavily placed on those who could most influence the economy, and it would have dire consequences.
The economic struggles that plagued the late Imperial system, coupled with the tax laws, certainly played a part in the demise of the world's greatest empire. "...in this world nothing is certain but death and taxes"--Benjamin Franklin
http://www.unrv.com/economy/roman-taxes.php
The proximate cause of the famine was a potato disease commonly known as late blight. Although blight ravaged potato crops throughout Europe during the 1840s, the impact and human cost in Ireland — where a third of the population was entirely dependent on the potato for food — was exacerbated by a host of political, social and economic factors which remain the subject of historical debate. The famine was a watershed in the history of Ireland. Its effects permanently changed the island's demographic, political and cultural landscape. For both the native Irish and those in the resulting diaspora, the famine entered folk memory and became a rallying point for various nationalist movements. Modern historians regard it as a dividing line in the Irish historical narrative, referring to the preceding period of Irish history as "pre-Famine." The fallout of the famine continued for decades afterwards, and Ireland's population still has not recovered to pre-famine levels.

From 1801 Ireland had been directly governed, under the Act of Union, as part of the United Kingdom. Executive power lay in the hands of the Lord Lieutenant of Ireland and Chief Secretary for Ireland, both of whom were appointed by the British government. Ireland sent 105 members of parliament to the British House of Commons, and Irish representative peers elected twenty-eight of their own number to sit for life in the House of Lords. Between 1832 and 1859 seventy percent of Irish representatives were landowners or the sons of landowners. In the forty years that followed the union, successive British governments grappled with the problems of governing a country which had, as Benjamin Disraeli put it in 1844, "a starving population, an absentee aristocracy, and an alien Church, and in addition the weakest executive in the world." One historian calculated that between 1801 and 1845 there had been 114 commissions and 61 special committees inquiring into the state of Ireland, and that "without exception their findings prophesied disaster; Ireland was on the verge of starvation, her population rapidly increasing, three-quarters of her labourers unemployed, housing conditions appalling and the standard of living unbelievably low."

Although central to everyday life, the Irish potato crop was an uncertain quantity. The famine of 1845 was notable only for its vastness: according to the 1851 Census of Ireland Commissioners there were twenty-four failures of the potato crop going back to 1728, of varying severity. In 1739 the crop was "entirely destroyed", and again in 1740; in 1770 the crop largely failed again. In 1800 there was another "general" failure, and in 1807 half the crop was lost. In 1821 and 1822 the potato crop failed completely in Munster and Connaught, and 1830 and 1831 were years of failure in Mayo, Donegal and Galway. In 1832, 1833, 1834 and 1836 a large number of districts suffered serious loss, and in 1835 the potato failed in Ulster. 1836 and 1837 brought "extensive" failures throughout Ireland, and again in 1839 failure was universal throughout the country; in both 1841 and 1844 potato crop failure was widespread.

Catholic emancipation had been achieved in 1829, and Catholics made up 80 percent of the population, the bulk of which lived in conditions of poverty and insecurity. At the top of the "social pyramid" was the "ascendancy class," the English and Anglo-Irish families who owned most of the land, and who had more or less limitless power over their tenants.
Some of their estates were vast; the Earl of Lucan, for example, owned an immense acreage. Many of these landlords lived in England and were called "absentee landlords". They used agents to administer their property for them, with the revenue generated being sent to England. A number of the absentee landlords living in England never set foot in Ireland. They took their rents from their "impoverished tenants" or paid them minimal wages to raise crops and livestock for export. The 1841 census showed a population of just over eight million. Two-thirds of those depended on agriculture for their survival, but they rarely received a working wage. They had to work for their landlords in return for the patch of land they needed in order to grow enough food for their own families. This was the system which forced Ireland and its peasantry into monoculture, as only the potato could be grown in sufficient quantity. The rights to a plot of land in Ireland could mean the difference between life and death in the early 19th century.

The period of the potato blight in Ireland from 1845–51 was full of political confrontation. The mass movement for Repeal of the Act of Union had failed in its objectives by the time its founder Daniel O'Connell died in 1847. A more radical Young Ireland group seceded from the Repeal movement and attempted an armed rebellion in the Young Irelander Rebellion of 1848. It was unsuccessful. Ireland at this time was, according to the Act of Union of 1801, an integral part of the British imperial homeland, "the richest empire on the globe," and was "the most fertile portion of that empire"; in addition, Ireland was sheltered by both "... Habeas Corpus and trial by jury ...". And yet Ireland's elected representatives seemed powerless to act on the country's behalf as Members of the British Parliament. Commenting on this at the time, John Mitchel wrote: "That an island which is said to be an integral part of the richest empire on the globe ... should in five years lose two and a half millions of its people (more than one fourth) by hunger, and fever the consequence of hunger, and flight beyond sea to escape from hunger ..." Ireland remained a net exporter of food even during the blight.

The immediate effect on Ireland was devastating, and its long-term effects proved immense, changing Irish culture and tradition for generations. The population of Ireland continued to fall for 70 years, stabilizing at half the level prior to the famine. This long-term decline ended in the west of the country only in 2006, over 160 years after the famine struck. Symptoms of the potato blight were recorded in Belgium in 1845. According to W.C. Paddock, Phytophthora infestans (which is an oomycete, not a fungus) was transported on potatoes being carried to feed passengers on clipper ships sailing from America to Ireland.

It is estimated that as much as a third of the entire population of Ireland perished during the civil wars and subsequent Cromwellian conquest. William Petty, who conducted the first scientific land and demographic survey of Ireland in the 1650s (the Down Survey), concluded that at least 400,000 people and maybe as many as 620,000 had died in Ireland between 1641 and 1653, many as a result of famine and plague. And this in a country of only around 1.5 million inhabitants.
Penal laws were introduced during the reign of King William III and further reinforced during the subsequent 18th-century Hanoverian period. Under them, Roman Catholics were barred from education and stripped of many of their civil liberties, including the right to own a horse worth more than five pounds. Laws were also introduced to encourage Irish linen production, but wool exports were restricted. Roman Catholic clergy were banished as the British Parliament took over legislating for Ireland. British law then barred Roman Catholics from succession, securing the ascendancy in Protestant control and therefore removing any possible claims to the throne by the Roman Catholic descendants of King James II. Land ownership in Ireland fell mainly to English and Scottish Protestants who were loyal to the crown and the Established Church, and who rented out large tracts to tenant farmers. Many prominent Roman Catholics who had owned land prior to the Williamite Wars and the Treaty of Limerick were forced into exile, in what was called the Flight of the Wild Geese; some continued to live off rental income collected by their appointed land agents. Sometimes rents grew difficult to collect, forcing the landlords into debt and causing them to sell their estates.

This period also saw the rise of economic and other colonialism, often influencing countries to produce a single crop for export. Ireland, too, became mostly a single-crop nation, although the southern and eastern regions sustained a fair-sized commercial agriculture of grain and cattle. The potato grew well in Ireland and seemed the only crop that could support a peasant family limited — through subdivision of larger Catholic-owned estates — to a very small tenant plot of land.

According to James S. Donnelly Jr, it is impossible to be sure how many people were evicted during the years of the famine and its immediate aftermath. It was only in 1849 that the police began to keep a count, and they recorded a total of almost 250,000 persons as officially evicted between 1849 and 1854. Donnelly considered this to be an underestimate, and if the figures were to include the number pressured into involuntary surrenders during the whole period (1846-54) the figure would almost certainly exceed half a million persons. While Helen Litton says there were also thousands of "voluntary" surrenders, she notes also that there was "precious little voluntary about them." In some cases tenants were persuaded to accept a small sum of money to leave their homes, "cheated into believing the workhouse would take them in."

The notorious Gregory clause, described by Donnelly as a "vicious amendment to the Irish poor law," named after William H. Gregory, M.P., and commonly known as the quarter-acre clause, provided that no tenant holding more than a quarter-acre of land would be eligible for public assistance either in or outside the workhouse. This clause had been a successful Tory amendment to the Whig poor-relief bill, which became law in early June 1847; its potential as an estate-clearing device was widely recognised in parliament, though not in advance. At first the poor law commissioners and inspectors viewed the clause as a valuable instrument for a more cost-effective administration of public relief, but the drawbacks soon became apparent, even from an administrative perspective; they soon came to view it as little more than murderous from a humanitarian perspective.
According to Donnelly, it became obvious that the quarter-acre clause was "indirectly a death-dealing instrument." Cecil Woodham-Smith, an authority on the Irish Famine, wrote in The Great Hunger: Ireland 1845-1849 that "...no issue has provoked so much anger or so embittered relations between the two countries (England and Ireland) as the indisputable fact that huge quantities of food were exported from Ireland to England throughout the period when the people of Ireland were dying of starvation." Ireland remained a net exporter of food throughout most of the five-year famine. Christine Kinealy, a University of Liverpool fellow and author of two texts on the famine, Irish Famine: This Great Calamity and A Death-Dealing Famine, writes that Irish exports of calves, livestock (except pigs), bacon and ham actually increased during the famine. The food was shipped under guard from the most famine-stricken parts of Ireland. However, the poor had no money to buy food, and the government then did not ban exports. Irish meteorologist Austin Bourke, in The Use of the Potato Crop in Pre-Famine Ireland, disputes some of Woodham-Smith's calculations and notes that during December 1846 imports almost doubled. He opines that it is beyond question that the deficiency arising from the loss of the potato crop in 1846 could not have been met by the simple expedient of prohibiting the export of grain from Ireland.

"The Celtic grazing lands of...Ireland had been used to pasture cows for centuries. The British colonized...the Irish, transforming much of their countryside into an extended grazing land to raise cattle for a hungry consumer market at home.... The British taste for beef had a devastating impact on the impoverished and disenfranchised people of...Ireland.... Pushed off the best pasture land and forced to farm smaller plots of marginal land, the Irish turned to the potato, a crop that could be grown abundantly in less favorable soil. Eventually, cows took over much of Ireland, leaving the native population virtually dependent on the potato for survival."

It is not known how many people died during the period of the Famine, although it is believed more died from diseases than from starvation. State registration of births, marriages or deaths had not yet begun, and records kept by the Roman Catholic Church are incomplete. Eyewitness accounts have helped medical historians identify both the ailments and effects of famine, and have been used to evaluate and explain in greater detail features of the famine. In Mayo, English Quaker William Bennett wrote of three children huddled together, "lying there because they were too weak to rise, pale and ghastly, their little limbs ... perfectly emaciated, eyes sunk, voice gone, and evidently in the last stages of actual starvation." Revd Dr Traill Hall, a Church of Ireland rector in Schull, described "the aged, who, with the young — are almost without exception swollen and ripening for the grave." Marasmic children also left a permanent image on Quaker Joseph Crosfield, who in 1846 witnessed "a heart-rending scene [of] poor wretches in the last stages of famine imploring to be received into the [work]house... Some of the children were worn to skeletons, their features sharpened with hunger, and their limbs wasted almost to the bone…" William Forster wrote in Carrick-on-Shannon that "the children exhibit the effects of famine in a remarkable degree, their faces looking wan and haggard with hunger, and seeming like old men and women."
One possible estimate has been reached by comparing the expected population with the eventual numbers in the 1850s (see Irish Population Analysis). Earlier predictions expected that by 1851 Ireland would have a population of eight to nine million. A census taken in 1841 revealed a population of slightly over 8 million. A census immediately after the famine in 1851 counted 6,552,385, a drop of almost 1,500,000 in ten years. Modern historian R. F. Foster estimates that 'at least 775,000 died, mostly through disease, including cholera in the latter stages of the holocaust'. He further notes that 'a recent sophisticated computation estimates excess deaths from 1846 to 1851 as between 1,000,000 and 1,500,000...; after a careful critique of this, other statisticians arrive at a figure of 1,000,000.' In addition, in excess of one million Irish emigrated to Great Britain, the United States, Canada, Australia, and elsewhere, while millions more emigrated over the following decades.

[Table: Joe Lee, The Modernisation of Irish Society (Gill History of Ireland Series No. 10), p. 2]

Detailed statistics of the population of Ireland since 1841 are available at Irish Population Analysis. Perhaps the best-known estimates of deaths at a county level are those by Joel Mokyr. The range of Mokyr's mortality figures goes from 1.1 million to 1.5 million Famine deaths in Ireland between 1846 and 1851. Mokyr produced two sets of data, containing an upper-bound and a lower-bound estimate, which showed little difference in regional patterns. Because of such anomalies, Cormac Ó Gráda revisited the work of S. H. Cousens. Cousens's estimate of mortality relied heavily on retrospective information contained in the 1851 census. The death tables contained in the 1851 census have been rightly criticised as under-estimating the true extent of mortality, and Cousens's figure of 800,000 deaths is now regarded as much too low. There were a number of reasons for this: because the information was gathered from surviving householders and others looking back over the previous ten years, it underestimates the true extent of disease and mortality. Death and emigration had also cleared away entire families, leaving few or no survivors to answer the questions on the census. Another area of uncertainty lies in the descriptions of disease given by tenants as to the cause of their relatives' deaths. Though Wilde's work has been rightly criticised as under-estimating the true extent of mortality, it does provide a framework for the medical history of the Great Famine.

The diseases that badly affected the population fell into two categories: famine-induced diseases and diseases of nutritional deficiency. Of the nutritional deficiency diseases, the most commonly experienced were starvation and marasmus, as well as a condition called at the time dropsy. Dropsy was a popular name given for the symptoms of several diseases, one of which, kwashiorkor, is associated with starvation. The greatest mortality, however, was not from nutritional deficiency diseases, but from famine-induced ailments. The malnourished are very vulnerable to infections; therefore, infections were more severe when they occurred. Measles, diarrheal diseases, tuberculosis, most respiratory infections, whooping cough, many intestinal parasites, and cholera were all strongly conditioned by nutritional status. Potentially lethal diseases, such as smallpox and influenza, were so virulent that their spread was independent of nutrition.
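Returning to the population estimates discussed above, a crude back-of-the-envelope calculation (mine, not the historians'; real estimates such as Mokyr's rest on far more sophisticated demographic models) shows how the census figures and the emigration estimate bound the excess-death figure cited in the text:

```python
# Crude illustration only, using round figures quoted in the text.
pop_1841 = 8_000_000          # "just over eight million" at the 1841 census
pop_1851 = 6_552_385          # 1851 census count
expected_1851 = 8_500_000     # midpoint of the "eight to nine million" projection
emigration = 1_000_000        # "in excess of one million" famine emigrants

recorded_drop = pop_1841 - pop_1851
shortfall = expected_1851 - pop_1851
implied_excess_deaths = shortfall - emigration

print(f"Recorded population drop 1841-1851:        {recorded_drop:,}")
print(f"Shortfall against the projected population: {shortfall:,}")
print(f"Implied deaths after subtracting emigration: {implied_excess_deaths:,}")
```

The result, on the order of one million, is broadly consistent with the figure the statisticians cited above arrive at, though the crude subtraction ignores births and normal mortality over the decade.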
A significant cause of the spread of disease during the Famine was "social dislocation." The best example of this phenomenon was fever, which exacted the greatest toll of death. In the popular mind, as well as among much medical opinion, fever and famine are closely related. This view was not wholly mistaken, but the most critical connection was the congregating of the hungry at soup kitchens, food depots and overcrowded workhouses, where conditions were ideal for spreading infectious diseases such as typhus, typhoid and relapsing fever. As for the diarrheal diseases, their presence was the result of poor hygiene, bad sanitation and dietary changes. The concluding attack on a population incapacitated by famine was delivered by Asiatic cholera. Cholera had visited Ireland briefly in the 1830s, but in the following decade it spread uncontrollably across Asia, through Europe, and into Britain, finally reaching Ireland in 1849.

Both Cormac Ó Gráda and Joel Mokyr have described the 1851 census as a famous but flawed source, contending that the combination of institutional and individual figures gives "an incomplete and biased count" of fatalities during the famine. Ó Gráda, referencing the work of W. A. MacArthur, writes that specialists have long known the Irish death tables left a lot to be desired in terms of accuracy. As a result, Ó Gráda says, to take the Tables of Death at face value would be a grave mistake, as they seriously undercount the number of deaths both before and during the famine. In 1851, the census commissioners collected information on the number who had died in each family since 1841, and the cause, season and year of death. Their disputed findings were as follows: 21,770 total deaths from starvation in the previous decade, and 400,720 deaths from disease. Listed diseases were fever, dysentery, cholera, smallpox and influenza, the first two being the main killers (222,021 and 93,232). The commissioners acknowledged that their figures were incomplete and that the true number of deaths was probably higher: "The greater the amount of destitution of mortality...the less will be the amount of recorded deaths derived through any household form; - for not only were whole families swept away by disease...but whole villages were effaced from off the land." A later historian has this to say: "In 1851, the Census Commissioners attempted to produce a table of mortality for each year since 1841… The statistics provided were flawed and probably under-estimated the level of mortality…"

Daniel O'Connell, in his speech on relief, had declared: "If they ask me what are my propositions for relief of the distress, I answer, first, Tenant-Right. I would propose a law giving to every man his own. I would give the landlord his land, and a fair rent for it; but I would give the tenant compensation for every shilling he might have laid out on the land in permanent improvements. And what next do I propose? Repeal of the Union." John Mitchel wrote that in the latter part of O'Connell's speech, after pointing out the means used by the Belgian legislature during the same season—"shutting the ports against export of provisions, but opening them to import, and the like"—O'Connell continued: "If we had a domestic Parliament would not the ports be thrown open—would not the abundant crops with which heaven has blessed her be kept for the people of Ireland—and would not the Irish Parliament be more active even than the Belgian Parliament to provide for the people food and employment (hear, hear)? The blessings that would result from Repeal—the necessity for Repeal.
The impossibility of the country enduring the want of Repeal,—and the utter hopelessness of any other remedy—all those things powerfully urge you to join with me, and hurrah for the Repeal."

As early as 1844, Mitchel, one of the leading political writers of Young Ireland, raised the issue of the "Potato Disease" in The Nation; he noted how powerful an agent hunger had been in certain revolutions. Writing again in The Nation on 14 February 1846, Mitchel put forward his views on "the wretched way in which the famine was being trifled with", and asked had not the Government even yet any conception that there might soon be "millions of human beings in Ireland having nothing to eat." On 28 February, writing on the Coercion Bill which was then going through the House of Lords, he observed: "This is the only kind of legislation for Ireland that is sure to meet with no obstruction in that House. However they may differ about feeding the Irish people, they agree most cordially in the policy of taxing, prosecuting and ruining them." In an article on "English Rule" on 7 March, Mitchel wrote: "The Irish People are expecting famine day by day... and they ascribe it unanimously, not so much to the rule of heaven as to the greedy and cruel policy of England. Be that right or wrong, that is their feeling. They believe that the seasons as they roll are but ministers of England's rapacity; that their starving children cannot sit down to their scanty meal but they see the harpy claw of England in their dish. They behold their own wretched food melting in rottenness off the face of the earth, and they see heavy-laden ships, freighted with the yellow corn their own hands have sown and reaped, spreading all sail for England; they see it and with every grain of that corn goes a heavy curse. Again the people believe—no matter whether truly or falsely—that if they should escape the hunger and the fever their lives are not safe from judges and juries. They do not look upon the law of the land as a terror to evil-doers, and a praise to those who do well; they scowl on it as an engine of foreign rule, ill-omened harbinger of doom." Because of his writings, Mitchel was charged with sedition; that charge was dropped, but he was convicted under a newly and purposefully enacted law, the Treason Felony Act, and sentenced to 14 years' transportation. In 1847 William Smith O'Brien, the leader of the Young Ireland party, became one of the founding members of the Irish Confederation to campaign for a Repeal of the Act of Union, and called for the export of grain to be stopped and the ports closed. The following year he organised the resistance of landless farmers in County Tipperary against the landowners and their agents.

The measures undertaken by Peel's successor, Lord John Russell, proved comparatively "inadequate" as the crisis deepened. Russell's ministry introduced public works projects, which by December 1846 employed some half million Irish and proved impossible to administer. The Public Works were "strictly ordered" to be unproductive—that is, they would create no fund to repay their own expenses. Many hundreds of thousands of "feeble and starving men", according to John Mitchel, were kept digging holes and breaking up roads, which was doing no service. In January the government abandoned these projects and turned to a mixture of "indoor" and "outdoor" direct relief; the former administered in workhouses through the Poor Law, the latter through soup kitchens.
The costs of the Poor Law fell primarily on the local landlords, who in turn attempted to reduce their liability by evicting their tenants. This was then facilitated through the "Cheap Ejectment Acts." The Poor Law Amendment Act was passed in June 1847. According to James Donnelly in Fearful Realities: New Perspectives on the Famine, it embodied the principle, popular in Britain, that Irish property must support Irish poverty. The landed proprietors in Ireland were held in Britain to have created the conditions that led to the famine. It was asserted, however, that the British parliament since the Act of Union of 1800 was partly to blame. This point was raised in the Illustrated London News on 13 February 1847: "There was no laws it would not pass at their request, and no abuse it would not defend for them." On 24 March The Times reported that Britain had permitted in Ireland "a mass of poverty, disaffection, and degradation without a parallel in the world. It allowed proprietors to suck the very life-blood of that wretched race."

The "Gregory clause" of the Poor Law prohibited anyone who held at least a quarter of an acre from receiving relief. In practice this meant that if a farmer, having sold all his produce to pay the rent, duties, rates and taxes, was reduced, as many thousands of them were, to applying for public outdoor relief, he would not get it until he had first delivered up all his land to the landlord. Of this law Mitchel was to write: "it is the able-bodied idler only who is to be fed — if he attempted to till but one rood of ground, he dies." This simple method of ejectment was called "passing paupers through the workhouse" — a man went in, a pauper came out. These factors combined to drive thousands of people off the land: 90,000 in 1849, and 104,000 in 1850.

"I congratulate you, that the universal sentiment hitherto exhibited upon this subject has been that we will accept no English charity (loud cheers). The resources of this country are still abundantly adequate to maintain our population: and until those resources shall have been utterly exhausted, I hope there is no man in Ireland who will so degrade himself as to ask the aid of a subscription from England." Mitchel wrote in his The Last Conquest of Ireland (Perhaps), on the same subject, that no one from Ireland ever asked for charity during this period, and that it was England who sought charity on Ireland's behalf, and, having received it, was also responsible for administering it. He stated: "It has been carefully inculcated upon the world by the British Press, that the moment Ireland fell into distress, she became an abject beggar at England's gate, and that she even craved alms from all mankind. Some readers may be surprised when I affirm that neither Ireland nor anybody in Ireland ever asked alms or favours of any kind, either from England or any other nation or people;—but, on the contrary, that it was England herself that begged for us, that sent round the hat over all the globe, asking a penny for the love of God to relieve the poor Irish;—and further, that, constituting herself the almoner and agent of all that charity, she, England, took all the profit of it." The Nation, according to Charles Gavan Duffy, insisted that the one remedy was that which the rest of Europe had adopted, and which even the parliaments of the Pale had adopted in periods of distress: to retain in the country the food raised by her people till the people were fed.
The following poem, written by Miss Jane Francesca Elgee, one of The Nation's best known and most popular authors, was carried in that paper:

Weary men, what reap ye? Golden corn for the stranger.
What sow ye? Human corpses that wait for the avenger.
Fainting forms, hunger-stricken, what see you in the offing?
Stately ships to bear our food away, amid the stranger's scoffing.
There's a proud array of soldiers—what do they round your door?
They guard our master's granaries from the thin hands of the poor.
Pale mothers, wherefore weeping? Would to God that we were dead—
Our children swoon before us, and we cannot give them bread.

The response from Ireland was that the Corporation of Dublin sent a memorial to the Queen, "praying her" to call Parliament together early (Parliament was at this time prorogued), and to recommend the requisition of some public money for public works, especially railways in Ireland. The Town Council of Belfast met and made similar suggestions to those of Dublin, but neither body asked charity, according to Mitchel. "They demanded that, if Ireland was indeed an Integral part of the realm, the common exchequer of both islands should be used—not to give alms, but to provide employment on public works of general utility." It was Mitchel's opinion that "if Yorkshire and Lancashire had sustained a like calamity in England, there is no doubt such measures as these would have been taken, promptly and liberally." A deputation from the citizens of Dublin, which included the Duke of Leinster, the Lord Mayor, Lord Cloncurry, and Daniel O'Connell, went to the Lord Lieutenant (Lord Heytesbury) and offered suggestions, such as opening the ports to foreign corn for a time, stopping distillation from grain, and providing public works, stressing that the matter was extremely urgent, as millions of people would shortly be without food. Lord Heytesbury told them they "were premature" and told them not to be alarmed: learned men (Playfair and Lindley) had been sent from England to enquire into all those matters, the Inspectors of Constabulary and Stipendiary Magistrates were charged with making constant reports from their districts, and there was no "immediate pressure on the market". Of these reports from Lord Heytesbury, Peel, in a letter to Sir James Graham, was to say that he found the accounts "very alarming", though he reminded him that there was, according to Woodham-Smith, "always a tendency to exaggeration in Irish news".

Large sums of money were donated by charities; Calcutta is credited with making the first donation of £14,000. The money was raised by Irish soldiers serving there and Irish people employed by the East India Company. Pope Pius IX sent funds and Queen Victoria donated £2,000. Quaker Alfred Webb, one of the many volunteers in Ireland at the time, wrote: "Upon the famine arose the wide spread system of proselytism ... and a network of well-intentioned Protestant associations spread over the poorer parts of the country, which in return for soup and other help endeavoured to gather the people into their churches and schools... The movement left seeds of bitterness that have not yet died out, and Protestants, and not altogether excluding Friends, sacrificed much of the influence for good they might have had..." In addition to the religious, non-religious organizations came to the assistance of famine victims. The British Relief Association was one such group.
Founded in 1847, the Association raised money throughout England, America and Australia; their funding drive benefited from a "Queen's Letter", a letter from Queen Victoria appealing for money to relieve the distress in Ireland. With this initial letter the Association raised £171,533. A second, somewhat less successful "Queen's Letter" was issued in late 1847. In total, the British Relief Association raised approximately £200,000 (c. $1,000,000 at the time).

In 1845, the onset of the Great Irish Famine resulted in over 1,000,000 deaths. Ottoman Sultan Abdülmecid declared his intention to send £10,000 sterling to Irish farmers, but Queen Victoria requested that the Sultan send only £1,000, because she had sent only £2,000 herself. The Sultan sent the £1,000 but also secretly sent three ships full of food. The English courts tried to block the ships, but the food arrived at Drogheda harbour and was left there by Ottoman sailors. In 1847, midway through the Great Irish Famine (1845–1849), a group of American Indian Choctaws collected $710 (although many articles say the original amount was $170 after a misprint in Angie Debo's The Rise and Fall of the Choctaw Republic) and sent it to help starving Irish men, women and children. "It had been just 16 years since the Choctaw people had experienced the Trail of Tears, and they had faced starvation… It was an amazing gesture. By today's standards, it might be a million dollars," according to Judy Allen, editor of the Choctaw Nation of Oklahoma's newspaper, Bishinik, based at the Oklahoma Choctaw tribal headquarters in Durant, Oklahoma. To mark the 150th anniversary, eight Irish people retraced the Trail of Tears, and the donation was publicly commemorated by President Mary Robinson.

Later mini-famines had only minimal effect and are generally forgotten, except by historians. By the 1911 census, the island of Ireland's population had fallen to 4.4 million, about the same as the population in 1800 and 2000, and only half of its peak population. While the famine was responsible for a significant increase in emigration from Ireland, of anywhere from 45% to nearly 85% depending on the year and the county, it was not the sole cause. Nor was it even the era when mass emigration from Ireland commenced. That can be traced to the middle of the 18th century, when some quarter of a million people left Ireland to settle in the New World alone, over a period of some fifty years. From the defeat of Napoleon to the beginning of the famine, a period of thirty years, "at least 1,000,000 and possibly 1,500,000 emigrated." However, during the worst of the famine, emigration reached somewhere around 250,000 in one year alone, with far more emigrants coming from western Ireland than any other part. As a rule, families did not emigrate en masse; younger members did, so much so that emigration almost became a rite of passage, as evidenced by data showing that, unlike similar emigration throughout world history, women emigrated just as often, just as early, and in the same numbers as men. The emigrant started a new life in a new land and sent remittances (which "reached £1,404,000 by 1851") back to his or her family in Ireland, which, in turn, allowed another member of the family to emigrate. Generally speaking, emigration during the famine years of 1845 to 1850 was to England, Scotland, the United States, Canada, and Australia. Many of those fleeing to the Americas used the well-established McCorkell Line.
Of the 100,000 Irish who sailed to Canada in 1847, an estimated one out of five died from disease and malnutrition, including over five thousand at Grosse Isle. Mortality rates of 30% aboard the coffin ships were common. By 1854, between 1½ and 2 million Irish had left their country due to evictions, starvation, and harsh living conditions. In America, most Irish became city-dwellers: with little money, many had to settle in the cities where their ships landed. By 1850, the Irish made up a quarter of the population in Boston, Massachusetts; New York City; Philadelphia, Pennsylvania; and Baltimore, Maryland. In addition, Irish populations became prevalent in some American mining communities.

The 1851 census reported that more than half the inhabitants of Toronto, Ontario were Irish, and in 1847 alone, 38,000 famine Irish flooded a city with fewer than 20,000 citizens. Other Canadian cities such as Saint John, New Brunswick; Quebec City and Montreal, Quebec; and Ottawa, Kingston and Hamilton, Ontario also received large numbers of Famine Irish, since Canada, as part of the British Empire, could not close its ports to Irish ships (unlike the United States), and emigrants could get passage cheaply (or free, in the case of tenant evictions) in returning empty lumber holds. However, fearing nationalist insurgencies, the British government placed harsh restrictions on Irish immigration to Canada after 1847, resulting in larger influxes to the United States. The largest Famine grave site outside of Ireland is at Grosse-Île, Quebec, an island in the St. Lawrence River used to quarantine ships near Quebec City. In 1851, about a quarter of Liverpool's population was Irish-born.

The famine marked the beginning of the steep depopulation of Ireland in the 19th century. Population had increased by 13–14% in the first three decades of the 19th century. Between 1831 and 1841 population grew by 5%. Application of Thomas Malthus's idea of population expanding 'geometrically' (exponentially) while resources increase arithmetically was popular during the famines of 1817 and 1822. However, by the 1830s, a decade before the famine, such ideas were seen as overly simplistic, and Ireland's problems were seen "less as an excess of population than as a lack of capital investment." The population of Ireland was increasing no faster than that of England, which suffered no equivalent catastrophe.

This criticism was not confined to outside critics. The Lord Lieutenant of Ireland, Lord Clarendon, wrote a letter to Russell on 26 April 1849, urging that the government propose additional relief measures: "I do not think there is another legislature in Europe that would disregard such suffering as now exists in the west of Ireland, or coldly persist in a policy of extermination." Also in 1849 the Chief Poor Law Commissioner, Edward Twisleton, resigned in protest over the Rate-in-Aid Act, which provided additional funds for the Poor Law through a 6p in the pound levy on all rateable properties in Ireland. Twisleton testified that "comparatively trifling sums were required for Britain to spare itself the deep disgrace of permitting its miserable fellow subjects to die of starvation." According to Peter Gray, in his book The Irish Famine, the government spent seven million pounds for relief in Ireland between 1845 and 1850, "representing less than half of one percent of the British gross national product over five years.
Contemporaries noted the sharp contrast with the £20 million compensation given to West Indian slave-owners in the 1830s." Other critics maintained that even after the government recognised the scope of the crisis, it failed to take sufficient steps to address it. John Mitchel, one of the leaders of the Young Ireland Movement, wrote the following in 1860: "I have called it an artificial famine: that is to say, it was a famine which desolated a rich and fertile island that produced every year abundance and superabundance to sustain all her people and many more. The English, indeed, call the famine a 'dispensation of Providence;' and ascribe it entirely to the blight on potatoes. But potatoes failed in like manner all over Europe; yet there was no famine save in Ireland. The British account of the matter, then, is first, a fraud - second, a blasphemy. The Almighty, indeed, sent the potato blight, but the English created the famine."

Still other critics saw in the government's response its attitude to the so-called "Irish Question." Nassau Senior, an economics professor at Oxford University, wrote that the Famine "would not kill more than one million people, and that would scarcely be enough to do any good." In 1848, Denis Shine Lawlor suggested that Russell was a student of the Elizabethan poet Edmund Spenser, who had calculated "how far English colonization and English policy might be most effectively carried out by Irish starvation." Charles Trevelyan, the civil servant with most direct responsibility for the government's handling of the famine, described it in 1848 as "a direct stroke of an all-wise and all-merciful Providence", which laid bare "the deep and inveterate root of social evil"; the Famine, he affirmed, was "the sharp but effectual remedy by which the cure is likely to be effected. God grant that the generation to which this opportunity has been offered may rightly perform its part..." Several writers single out the decision of the government to permit the continued export of food from Ireland as suggestive of the policy-makers' attitude. Leon Uris suggested that "there was ample food within Ireland", while all the Irish-bred cattle were being shipped off to England. The following exchange appeared in Act IV of George Bernard Shaw's play Man and Superman:

Critics of British imperialism point to the structure of empire as a contributing factor. J. A. Froude wrote that "England governed Ireland for what she deemed her own interest, making her calculations on the gross balance of her trade ledgers, and leaving moral obligations aside, as if right and wrong had been blotted out of the statute book of the Universe." Dennis Clark, an Irish-American historian, claimed that the famine was "the culmination of generations of neglect, misrule and repression. It was an epic of English colonial cruelty and inadequacy. For the landless cabin dwellers it meant emigration or extinction..." As mentioned, the famine is still a controversial event in Irish history. Debate and discussion on the British government's response to the failure of the potato crop in Ireland and the subsequent large-scale starvation, and whether or not this constituted what would now be called genocide, remains a historically and politically charged issue.
In 1996 Francis A. Boyle, a law professor at the University of Illinois at Urbana-Champaign, wrote a report commissioned by the New York-based Irish Famine/Genocide Committee which concluded that "Clearly, during the years 1845 to 1850, the British government pursued a policy of mass starvation in Ireland with intent to destroy in substantial part the national, ethnic and racial group commonly known as the Irish People.... Therefore, during the years 1845 to 1850 the British government knowingly pursued a policy of mass starvation in Ireland that constituted acts of genocide against the Irish people within the meaning of Article II (c) of the 1948 [Hague] Genocide Convention." On the strength of Boyle's report, the U.S. state of New Jersey included the famine in the "Holocaust and Genocide Curriculum" at the secondary tier.

Several commentators have argued that the searing effect of the famine in Irish cultural memory has effects similar to that of genocide, while maintaining that one did not occur. Robert Kee suggests that the Famine is seen as "comparable" in its force on "popular national consciousness to that of the 'final solution' on the Jews," and that it is not "infrequently" thought that the Famine was something very like "a form of genocide engineered by the English against the Irish people." This point was echoed by James Donnelly, a historian at the University of Wisconsin, who wrote in his work Landlord and Tenant in Nineteenth-Century Ireland: "I would draw the following broad conclusion: at a fairly early stage of the Great Famine the government's abject failure to stop or even slow down the clearances (evictions) contributed in a major way to enshrining the idea of English state-sponsored genocide in Irish popular mind. Or perhaps one should say in the Irish mind, for this was a notion that appealed to many educated and discriminating men and women, and not only to the revolutionary minority... And it is also my contention that while genocide was not in fact committed, what happened during and as a result of the clearances had the look of genocide to a great many Irish..."

Historian Cormac Ó Gráda disagreed that the famine was genocide: first, that "genocide includes murderous intent and it must be said that not even the most bigoted and racist commentators of the day sought the extermination of the Irish"; second, that most people in Whitehall "hoped for better times in Ireland"; and third, that the claim of genocide overlooks "the enormous challenges facing relief efforts, both central, local, public and private". Ó Gráda thinks that a case of neglect is easier to sustain than that of genocide. Well-known Irish columnist and songwriter John Waters has described the famine as the most violent event in a history which was characterised by violence of every imaginable kind, and stated that the famine "was an act of genocide, driven by racism and justified by ideology", arguing that the destruction of Ireland's cultural, political and economic diversity and the reduction of the Irish economy to an essentially mono-cultural dependence was a holocaust waiting to happen. Waters contends that arguments about the source of the blight or the practicability of aid efforts once the Famine had taken hold were irrelevant to the meaning of the experience.
http://www.reference.com/browse/practicability
13
52
Supply and demand (2008/9 Schools Wikipedia Selection)

In economics, supply and demand describes market relations between prospective sellers and buyers of a good. The supply and demand model determines price and quantity sold in a market. This model is fundamental in microeconomic analysis, and is used as a foundation for other economic models and theories. It predicts that in a competitive market, price will function to equalize the quantity demanded by consumers and the quantity supplied by producers, resulting in an economic equilibrium of price and quantity. The model incorporates other factors changing equilibrium as a shift of demand and/or supply. Strictly speaking, the model of supply and demand applies to a type of market called perfect competition, in which no single buyer or seller has much effect on prices, and prices are known.

The quantity of a product supplied by the producer and the quantity demanded by the consumer are dependent on the market price of the product. The law of supply states that quantity supplied is positively related to price: the higher the price of the product, the more the producer will supply, ceteris paribus ("all other things being equal"). The law of demand is normally depicted as an inverse relation between quantity demanded and price: the higher the price of the product, the less the consumer will demand, ceteris paribus. The respective relations are called the supply curve and demand curve, or supply and demand for short.

The laws of supply and demand state that the equilibrium market price and quantity of a commodity lie at the intersection of consumer demand and producer supply. At this point, quantity supplied equals quantity demanded. If the price for a good is below equilibrium, consumers demand more of the good than producers are prepared to supply. This defines a shortage of the good. A shortage results in producers increasing the price until equilibrium is reached. If the price of a good is above equilibrium, there is a surplus of the good. Producers are motivated to eliminate the surplus by lowering the price, until equilibrium is reached. (A small numerical sketch of this equilibrium, and of a demand shift, appears later in this entry.)

The supply schedule, graphically represented by the supply curve, is the relationship between market price and the amount of goods produced. In short-run analysis, where some input variables are fixed, a positive slope can reflect the law of diminishing marginal returns, which states that beyond some level of output, additional units of output require larger amounts of input. In the long run, where no input variables are fixed, a positively sloped supply curve can reflect diseconomies of scale. For a given firm in a perfectly competitive industry, if it is more profitable to produce than not to produce, profit is maximized by producing just enough so that the producer's marginal cost is equal to the market price of the good.

Occasionally, supply curves bend backwards. A well-known example is the backward-bending supply curve of labour. Generally, as a worker's wage increases, he is willing to work longer hours, since the higher wages increase the marginal utility of working and the opportunity cost of not working. But when the wage reaches an extremely high amount, the employee may experience the law of diminishing marginal utility. The large amount of money he is making will make further money of little value to him.
Thus, he will work less and less as the wage increases, choosing instead to spend his time in leisure. The backward-bending supply curve has also been observed in non-labour markets, including the market for oil: after the skyrocketing price of oil caused by the 1973 oil crisis, many oil-exporting countries decreased their production of oil.

The supply curve for public utility production companies is unusual. A large portion of their total costs are fixed costs, so the supply curve for these firms is often constant (shown as a horizontal line).

Another postulated variant of a supply curve is that for child labour. Supply will increase as wages increase, but at a certain point a child's parents will pull the child from the child labour force due to cultural pressures and a desire to concentrate on education. The supply will not increase as the wage increases, up to a point where the wage is high enough to offset these concerns. For a normal demand curve, this can result in two stable equilibrium points: a high-wage and a low-wage equilibrium point.

The demand schedule, depicted graphically as the demand curve, represents the amount of goods that buyers are willing and able to purchase at various prices, assuming all other non-price factors remain the same. The demand curve is almost always represented as downward-sloping, meaning that as price decreases, consumers will buy more of the good. Just as the supply curves reflect marginal cost curves, demand curves can be described as marginal utility curves. The main determinants of individual demand are: the price of the good, the level of income, personal tastes, the population (number of people), government policies, the price of substitute goods, and the price of complementary goods. The shape of the aggregate demand curve can be convex or concave, possibly depending on income distribution.

As described above, the demand curve is generally downward-sloping. There may be rare examples of goods that have upward-sloping demand curves. Two hypothetical types of goods with upward-sloping demand curves are a Giffen good (an inferior, but staple, good) and a Veblen good (a good made more fashionable by a higher price).

Changes in market equilibrium

Practical uses of supply and demand analysis often centre on the different variables that change equilibrium price and quantity, represented as shifts in the respective curves. Comparative statics of such a shift traces the effects from the initial equilibrium to the new equilibrium.

Demand curve shifts

When consumers increase the quantity demanded at a given price, it is referred to as an increase in demand. Increased demand can be represented on the graph as the curve being shifted outward. At each price point, a greater quantity is demanded, as from the initial curve D1 to the new curve D2. More people wanting coffee is an example. In the diagram, this raises the equilibrium price from P1 to the higher P2 and raises the equilibrium quantity from Q1 to the higher Q2. A movement along the curve is described as a "change in the quantity demanded" to distinguish it from a "change in demand," that is, a shift of the curve. In the example above, there has been an increase in demand which has caused an increase in (equilibrium) quantity. The increase in demand could also come from changing tastes, incomes, product information, fashions, and so forth. If the demand decreases, then the opposite happens: an inward shift of the curve.
If the demand starts at D2, and decreases to D1, the price will decrease, and the quantity will decrease. This is an effect of demand changing. The quantity supplied at each price is the same as before the demand shift (at both Q1 and Q2). The equilibrium quantity, price and demand are different. At each point, a greater amount is demanded (when there is a shift from D1 to D2). Supply curve shifts When the suppliers' costs change for a given output, the supply curve shifts in the same direction. For example, assume that someone invents a better way of growing wheat so that the cost of wheat that can be grown for a given quantity will decrease. Otherwise stated, producers will be willing to supply more wheat at every price and this shifts the supply curve S1 outward, to S2—an increase in supply. This increase in supply causes the equilibrium price to decrease from P1 to P2. The equilibrium quantity increases from Q1 to Q2 as the quantity demanded increases at the new lower prices. In a supply curve shift, the price and the quantity move in opposite directions. If the quantity supplied decreases at a given price, the opposite happens. If the supply curve starts at S2, and shifts inward to S1, the equilibrium price will increase, and the quantity will decrease. This is an effect of supply changing. The quantity demanded at each price is the same as before the supply shift (at both Q1 and Q2). The equilibrium quantity, price and supply changed. When there is a change in supply or demand, there are four possible movements. The demand curve can move inward or outward. The supply curve can also move inward or outward. See also: Induced demand A very important concept in understanding supply and demand theory is elasticity. In this context, it refers to how supply and demand respond to various factors. One way to define elasticity is the percentage change in one variable divided by the percentage change in another variable (known as arc elasticity, which calculates the elasticity over a range of values, in contrast with point elasticity, which uses differential calculus to determine the elasticity at a specific point). It is a measure of relative changes. Often, it is useful to know how the quantity demanded or supplied will change when the price changes. This is known as the price elasticity of demand and the price elasticity of supply. If a monopolist decides to increase the price of their product, how will this affect their sales revenue? Will the increased unit price offset the likely decrease in sales volume? If a government imposes a tax on a good, thereby increasing the effective price, how will this affect the quantity demanded? Another distinguishing feature of elasticity is that it is more than just the slope of the function. For example, a line with a constant slope will have different elasticity at various points. Therefore, the measure of elasticity is independent of arbitrary units (such as gallons vs. quarts, say for the response of quantity demanded of milk to a change in price), whereas the measure of slope only is not. One way of calculating elasticity is the percentage change in quantity over the associated percentage change in price. For example, if the price moves from $1.00 to $1.05, and the quantity supplied goes from 100 pens to 102 pens, the slope is 2/0.05 or 40 pens per dollar. Since the elasticity depends on the percentages, the quantity of pens increased by 2%, and the price increased by 5%, so the price elasticity of supply is 2/5 or 0.4. 
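The pen example above can be reproduced in a few lines of code. This is an illustrative sketch only: the arc_elasticity helper and the variable names are assumptions, and the calculation follows the percentage-change method used in the example rather than the midpoint formula.

```python
# Illustrative sketch (not from the article): elasticity of supply for the
# pen example above, computed as percentage change in quantity divided by
# percentage change in price.

def arc_elasticity(q_old, q_new, p_old, p_new):
    pct_quantity = (q_new - q_old) / q_old   # 2 / 100  = 0.02  -> 2%
    pct_price = (p_new - p_old) / p_old      # 0.05 / 1 = 0.05  -> 5%
    return pct_quantity / pct_price

slope = (102 - 100) / (1.05 - 1.00)          # 40 pens per dollar; depends on units
elasticity = arc_elasticity(100, 102, 1.00, 1.05)

print(f"slope: {slope:.0f} pens per dollar")
print(f"price elasticity of supply: {elasticity:.1f}")   # 0.4; unit-free
```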
Since the changes are in percentages, changing the unit of measurement or the currency will not affect the elasticity. If the quantity demanded or supplied changes a lot when the price changes a little, it is said to be elastic. If the quantity changes little when the prices changes a lot, it is said to be inelastic. An example of perfectly inelastic supply, or zero elasticity, is represented as a vertical supply curve. (See that section below) Elasticity in relation to variables other than price can also be considered. One of the most common to consider is income. How would the demand for a good change if income increased or decreased? This is known as the income elasticity of demand. For example, how much would the demand for a luxury car increase if average income increased by 10%? If it is positive, this increase in demand would be represented on a graph by a positive shift in the demand curve. At all price levels, more luxury cars would be demanded. Another elasticity sometimes considered is the cross elasticity of demand, which measures the responsiveness of the quantity demanded of a good to a change in the price of another good. This is often considered when looking at the relative changes in demand when studying complement and substitute goods. Complement goods are goods that are typically utilized together, where if one is consumed, usually the other is also. Substitute goods are those where one can be substituted for the other, and if the price of one good rises, one may purchase less of it and instead purchase its substitute. Cross elasticity of demand is measured as the percentage change in demand for the first good that occurs in response to a percentage change in price of the second good. For an example with a complement good, if, in response to a 10% increase in the price of fuel, the quantity of new cars demanded decreased by 20%, the cross elasticity of demand would be -2.0. Vertical supply curve (Perfectly Inelastic Supply) It is sometimes the case that a supply curve is vertical: that is the quantity supplied is fixed, no matter what the market price. For example, the surface area or land of the world is fixed. No matter how much someone would be willing to pay for an additional piece, the extra cannot be created. Also, even if no one wanted all the land, it still would exist. Land therefore has a vertical supply curve, giving it zero elasticity (i.e., no matter how large the change in price, the quantity supplied will not change). Supply-side economics argues that the aggregate supply function – the total supply function of the entire economy of a country – is relatively vertical. Thus, supply-siders argue against government stimulation of demand, which would only lead to inflation with a vertical supply curve. The model of supply and demand also applies to various specialty markets. The model applies to wages, which are determined by the market for labor. The typical roles of supplier and consumer are reversed. The suppliers are individuals, who try to sell their labor for the highest price. The consumers of labors are businesses, which try to buy the type of labor they need at the lowest price. The equilibrium price for a certain type of labor is the wage. The model applies to interest rates, which are determined by the money market. In the short term, the money supply is a vertical supply curve, which the central bank of a country can control through monetary policy. The demand for money intersects with the money supply to determine the interest rate. 
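Before turning to other market forms, here is the numerical sketch of equilibrium and of a demand shift promised earlier in this entry. The linear functional forms and all coefficients below are illustrative assumptions, not values taken from the article.

```python
# Minimal sketch, assuming simple linear curves: Qd = a - b*P and Qs = c + d*P.
# Setting Qd = Qs gives the equilibrium price P* = (a - c) / (b + d).

def equilibrium(a, b, c, d):
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

# Initial demand D1 and supply S1.
p1, q1 = equilibrium(a=100, b=2, c=10, d=1)      # P1 = 30, Q1 = 40

# An outward demand shift (D1 -> D2): more is demanded at every price,
# modelled here by raising the demand intercept.
p2, q2 = equilibrium(a=130, b=2, c=10, d=1)      # P2 = 40, Q2 = 50

print(f"D1: P1={p1:.0f}, Q1={q1:.0f}")
print(f"D2: P2={p2:.0f}, Q2={q2:.0f}")           # both price and quantity rise
```

With the outward demand shift, both the equilibrium price and quantity rise, matching the D1-to-D2 coffee example above.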
Other market forms

The supply and demand model is used to explain the behaviour of perfectly competitive markets, but its usefulness as a standard of performance extends to other types of markets. In such markets there may be no supply curve, such as above, except by analogy. Rather, the supplier or suppliers are modeled as interacting with demand to determine price and quantity. In particular, the decisions of the buyers and sellers are interdependent in a way different from a perfectly competitive market.

A monopoly is the case of a single supplier that can adjust the supply or price of a good at will. The profit-maximizing monopolist is modeled as adjusting the price so that its profit is maximized given the amount that is demanded at that price. This price will be higher than in a competitive market. (A short numerical sketch of this pricing problem appears at the end of this entry.) A similar analysis can be applied when a good has a single buyer, a monopsony, but many sellers. Oligopoly is a market with so few suppliers that each must take account of the effect of its actions on the market price and on the other suppliers. Game theory may be used to analyze such a market.

The supply curve does not have to be linear. However, if the supply is from a profit-maximizing firm, it can be shown that downward-sloping supply curves (i.e., curves on which a price decrease increases the quantity supplied) are inconsistent with perfect competition in equilibrium. Supply curves from profit-maximizing firms can therefore be vertical, horizontal or upward-sloping.

Positively-sloped demand curves?

Standard microeconomic assumptions cannot be used to disprove the existence of upward-sloping demand curves. However, despite years of searching, no generally agreed-upon example of a good that has an upward-sloping demand curve (also known as a Giffen good) has been found. Some suggest that luxury cosmetics can be classified as a Giffen good: as the price of a high-end luxury cosmetic drops, consumers see it as a low-quality good compared to its peers. The price drop may indicate lower-quality ingredients, so consumers would not want to apply such an inferior product to their face.

Lay economists sometimes believe that certain common goods have an upward-sloping curve. For example, people will sometimes buy a prestige good (e.g., a luxury car) because it is expensive, so a drop in price may actually reduce demand. However, in this case the good purchased is actually prestige, and not the car itself; when the price of the luxury car decreases, it is actually decreasing the amount of prestige associated with the good (see also Veblen good). However, even with downward-sloping demand curves, it is possible that an increase in income may lead to a decrease in demand for a particular good, probably due to the existence of more attractive alternatives which become affordable: a good with this property is known as an inferior good.

Negatively-sloped supply curve

There are cases where the price of goods gets cheaper, but more of those goods are produced. This is usually related to economies of scale and mass production. One special case is computer software, where creating the first instance of a given computer program has a high cost, but the marginal cost of copying the program and distributing it to many consumers is low (almost zero).

Demand and supply relations in a market can be statistically estimated from price, quantity, and other data with sufficient information in the model. This can be done with simultaneous-equation methods of estimation in econometrics.
Such methods allow solving for the model-relevant "structural coefficients," the estimated algebraic counterparts of the theory. The Parameter identification problem is a common issue in "structural estimation." Typically, data on exogenous variables (that is, variables other than price and quantity, both of which are endogenous variables) are needed to perform such an estimation. An alternative to "structural estimation" is reduced-form estimation, which regresses each of the endogenous variables on the respective exogenous variables. Macroeconomic uses of demand and supply Demand and supply have also been generalized to explain macroeconomic variables in a market economy, including the quantity of total output and the general price level. The Aggregate Demand-Aggregate Supply model may be the most direct application of supply and demand to macroeconomics, but other macroeconomic models also use supply and demand. Compared to microeconomic uses of demand and supply, different (and more controversial) theoretical considerations apply to such macroeconomic counterparts as aggregate demand and aggregate supply. Demand and supply may also be used in macroeconomic theory to relate money supply to demand and interest rates. A demand shortfall results from the actual demand for a given product being lower than the projected, or estimated, demand for that product. Demand shortfalls are caused by demand overestimation in the planning of new products. Demand overestimation is caused by optimism bias and/or strategic misrepresentation. The phrase "supply and demand" was first used by James Denham-Steuart in his Inquiry into the Principles of Political Economy, published in 1767. Adam Smith used the phrase in his 1776 book The Wealth of Nations, and David Ricardo titled one chapter of his 1817 work Principles of Political Economy and Taxation "On the Influence of Demand and Supply on Price". In The Wealth of Nations, Smith generally assumed that the supply price was fixed but that its "merit" (value) would decrease as its "scarcity" increased, in effect what was later called the law of demand. Ricardo, in Principles of Political Economy and Taxation, more rigorously laid down the idea of the assumptions that were used to build his ideas of supply and demand. Antoine Augustin Cournot first developed a mathematical model of supply and demand in his 1838 Researches on the Mathematical Principles of the Theory of Wealth. During the late 19th century the marginalist school of thought emerged. This field mainly was started by Stanley Jevons, Carl Menger, and Léon Walras. The key idea was that the price was set by the most expensive price, that is, the price at the margin. This was a substantial change from Adam Smith's thoughts on determining the supply price. In his 1870 essay "On the Graphical Representation of Supply and Demand", Fleeming Jenkin drew for the first time the popular graphic of supply and demand which, through Marshall, eventually would turn into the most famous graphic in economics. The model was further developed and popularized by Alfred Marshall in the 1890 textbook Principles of Economics. Along with Léon Walras, Marshall looked at the equilibrium point where the two curves crossed. They also began looking at the effect of markets on each other.
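Finally, the numerical sketch of the monopolist's pricing problem referred to above. The demand curve, the constant unit cost and the grid search are illustrative assumptions only; the point is simply that the resulting monopoly price sits above the competitive price-equals-marginal-cost benchmark described earlier.

```python
# Illustrative sketch of the monopoly case discussed above (all numbers are
# assumptions): the monopolist faces demand Q = 100 - P and a constant unit
# cost of 20, and picks the price that maximizes profit = (P - cost) * Q.

def profit(price, cost=20.0):
    quantity = max(0.0, 100.0 - price)        # quantity demanded at this price
    return (price - cost) * quantity

prices = [p / 10 for p in range(0, 1001)]     # candidate prices 0.0 ... 100.0
best_price = max(prices, key=profit)

print(f"profit-maximizing monopoly price: {best_price:.1f}")   # 60.0
# Under perfect competition, the article's condition "price = marginal cost"
# would instead give a price of 20.0, so the monopoly price is higher.
```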
http://schools-wikipedia.org/wp/s/Supply_and_demand.htm
The system established two major international financial institutions: the International Monetary Fund (IMF) and the World Bank (also known as the International Bank for Reconstruction and Development). The Bretton Woods system was largely the product of Anglo-American negotiations. The British representative was the prominent economist John Maynard Keynes. The United States representative was Harry Dexter White who, given the economic dominance of the US, was able to exercise a powerful influence on Bretton Woods policies.

In 1944 representatives of 44 allied nations met at Bretton Woods, New Hampshire. They wished to avoid the turmoil in international monetary and commercial relations which had characterised the interwar years and which was seen as a cause of the Second World War. Several points of agreement emerged, but there were also significant differences between the British and American approaches, particularly with regard to the liquidity fund. Keynes wanted unlimited access to the fund; the US representative wanted the rights to draw on the fund to be linked to contributions.

The Bretton Woods agreement of 1944 attempted to resolve these central problems. The disadvantages of floating and rigidly fixed exchange rates were avoided by 'pegging' each currency against gold. Member states agreed to maintain their currencies within one per cent of this value, although they were allowed to revalue their currencies should circumstances produce 'fundamental disequilibrium'. Dollars were fixed in value against gold and were the only currency directly convertible into gold. Before long the dollar became the dominant world currency. The agreement set up the International Monetary Fund (IMF) to ensure that member states had access to funds to help guarantee the 'pegged' value of their currencies. Members paid a subscription, based on the size of their economies, into the IMF, which could be drawn upon, according to quotas, when they lacked sufficient reserves to back their currency. The outcome represented the American view of how the liquidity problem should be solved. The Bretton Woods arrangements were incorporated in a Bretton Woods Agreement Bill and a subsequent Exchange Control Bill.
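The pegging rule described above, keeping each currency within one per cent of its agreed par value, can be expressed as a tiny check. This is a hedged sketch: the function, the par value and the sample market rates are all invented for illustration.

```python
# Minimal sketch of the Bretton Woods 'pegging' rule described above:
# member currencies were to stay within +/- 1% of an agreed par value.
# The par value and the sample market rates below are invented.

def within_band(market_rate, par_value, band=0.01):
    """True if the market rate is within +/- band (1% by default) of par."""
    return abs(market_rate - par_value) <= band * par_value

par = 4.03  # hypothetical par value for some currency against the dollar

print(within_band(4.05, par))  # True: roughly 0.5% above par, inside the band
print(within_band(4.10, par))  # False: roughly 1.7% above par, outside the band
```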
http://nationalarchives.gov.uk/cabinetpapers/themes/bretton-woods-conference.htm
Balance of Payments
The balance of payments is a simple measure of the payments in financial capital that flow from one nation to another. If more money flows in than out, one has a positive balance of payments; if more flows out than in, one has a negative balance. The money flowing over the border is, like other money, paying for goods, commodities, real estate, services and securities. It is usually separated into: goods and services; and the financial account, i.e. financial assets (stocks, bonds, foreign direct investments).

Balance of Trade
Balance of trade figures are the sum of the money gained by a given economy by selling exports, minus the cost of buying imports. They form part of the balance of payments, which also includes other transactions such as international investment. The figures are usually split into visible and invisible balance figures. The visible balance represents physical goods, and the invisible balance represents other forms of trade, e.g. the service economy. A positive balance of trade is known as a trade surplus and consists of exporting more (in financial capital terms) than one imports. A negative balance of trade is known as a trade deficit and consists of importing more than one exports. Neither is necessarily dangerous in modern economies, although large trade surpluses or trade deficits may sometimes be a sign of other economic problems. If the balance of trade is positive, then the economy has received more money than it has spent. This may appear to be a good thing but may not always be so.

Beige Book Fed Survey
Officially known as the Survey on Current Economic Conditions, the Beige Book is published eight times per year by a Federal Reserve Bank, containing anecdotal information on current economic and business conditions in its District through reports from Bank and Branch directors and interviews with key business contacts, economists, market experts and other sources. The Beige Book highlights the activity information by District and sector. The survey normally covers a period of about four weeks and is released two weeks prior to each FOMC meeting, which is also held eight times per year. While deemed by some a lagging report, the Beige Book has usually served as a helpful indicator of FOMC decisions on monetary policy.

Business Inventories and Sales
Business inventories consist of items produced and held for future sale.

Capacity Utilization
Capacity utilization consists of total industrial output divided by total production capability. The term refers to the maximum level of output a plant can generate under normal business conditions. A normal figure for a steady economy is 81.5 percent. If the figure reads 85 percent or more, the data suggest that industrial production is overheating and that the economy is close to full capacity. High capacity utilization rates precede inflation, and the expectation in the foreign exchange market is that the central bank will raise interest rates in order to avoid or fight inflation.

Capital Account
The balance of international transactions in financial capital. The capital account is associated with the relationship between import and export capital, direct investment and loans, and also deals with securities investments such as repayment of principal on foreign debts, overseas investments, and investments made by foreign enterprises.

Construction Spending
Measures the total amount of spending in the U.S. on all types of construction. The residential construction component is useful for predicting future national new home sales and mortgage origination volume.
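Two of the definitions above, capacity utilization and the balance of trade, reduce to one-line calculations. The sketch below uses invented figures purely for illustration.

```python
# Small sketch of two definitions above (all figures are made up):
# capacity utilization = total industrial output / total production capability,
# balance of trade     = export receipts - import costs.

def capacity_utilization(output, capability):
    return 100.0 * output / capability

def trade_balance(exports, imports):
    return exports - imports

util = capacity_utilization(output=870.0, capability=1000.0)   # 87.0%
print(f"capacity utilization: {util:.1f}%")
if util >= 85.0:
    print("85% or more - the text reads this as a sign the economy is close to full capacity")

print(f"trade balance: {trade_balance(exports=520.0, imports=480.0):+.1f} (surplus if positive)")
```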
Consumer Confidence Index
The Consumer Confidence Index attempts to gauge consumers' feelings about the current condition of the economy and their expectations about the economy's future direction.

Consumer Price Index
The consumer price index (CPI) gauges the average change in retail prices for a fixed market basket of goods and services. The CPI data are compiled from a sample of prices for food, shelter, clothing, fuel, transportation and medical services that people purchase on a daily basis.

Current Account
Balance of trade plus net investment income and transfers; the difference between what the country earns and spends overseas. The current account more specifically deals with the daily recurring transactions in the ordinary course of business. It involves international receipts and payments, including trading receipts and payments, service receipts and payments, and unilateral transfers such as payment of royalties, repatriation of after-tax profits and dividends, remittance of after-tax wages and other income by foreign employees, and any payment of interest on foreign debts.

Durable Goods Orders
Durable goods orders consist of products with a life span of more than three years. Examples of durable goods are autos, appliances, furniture, jewelry and toys. This data is fairly important to foreign exchange markets because it gives a good indication of consumer confidence. Because durable goods cost more than non-durables, a high number in this indicator shows consumers' propensity to spend. Therefore, a good figure is generally bullish for the domestic currency.

Employment Cost Index
The Employment Cost Index measures wages and inflation and provides the most comprehensive analysis of worker compensation, including wages, salaries and fringe benefits. The ECI is one of the Fed's favorite quarterly economic statistics.

Employment Report (Labor Report)
In the US, the employment report, also known as the labor report, is regarded as the most important of all economic indicators. The report provides the first comprehensive look at the economy, covering nine economic categories. Here are its three main components:

Payroll Employment: Measures the change in the number of workers in a given month and the number of jobs in more than 500 industries (excluding farming) in all states and 255 metropolitan areas. The employment estimates are based on a survey of larger businesses and count the number of paid employees working part-time or full-time in the nation's business and government establishments. This release is the most closely watched indicator because of its timeliness, accuracy and comprehensiveness. It is important to compare this figure to a monthly moving average (six or nine months) to capture a true perspective of the trend in labor market strength. Equally important are the frequent revisions for prior months, which are often significant.

Unemployment Rate: The percentage of the civilian labor force actively looking for employment but unable to find jobs. Although it is a highly proclaimed figure (due to the simplicity of the number and its political implications), the unemployment rate gets relatively less importance in the markets because it is known to be a lagging indicator; it usually falls behind economic turns.

Average Hourly Earnings Growth: The growth rate between one month's average hourly rate and another's sheds light on wage growth and, hence, assesses the potential for wage-push inflation. The year-on-year rate is also important in capturing the longer-term trend.
The employment data give the most comprehensive report on how many people are looking for jobs, how many have them, what they are getting paid and how many hours they are working. These numbers are the best way to gauge the current state and future direction of the economy. They also provide insight on wage trends and wage inflation. Fed chairman Alan Greenspan frequently talks about this data. By tracking the jobs data, investors can sense the degree of tightness in the job market. If wage inflation threatens, usually interest rates will rise, and bond and stock prices will fall. One weakness of this indicator is that it is subject to significant revisions and large seasonal distortions.

Existing Home Sales
Existing Home Sales is a measure of the selling rate of pre-owned single-family homes, collected by the National Association of Realtors from 650 realtor associations.

Factory orders refer to the total of durable and non-durable goods orders.

The Federal Open Market Committee is a twelve-member committee made up of the seven members of the Board of Governors and five Federal Reserve Bank presidents. It meets eight times per year to determine the near-term direction of monetary policy, such as setting guidelines for the purchase and sale of government securities and setting policy relating to System operations in the foreign exchange markets. These changes in monetary policy are now announced immediately after FOMC meetings. Most importantly, the Fed determines interest rate policy at FOMC meetings. Market participants speculate about the possibility of an interest rate change at these meetings, and if the outcome is different from expectations, the impact on the markets can be dramatic and far-reaching. The interest rate set by the Fed, the federal funds rate, serves as a benchmark for all other rates. A change in the fed funds rate, the lending rate banks charge each other for the use of overnight funds, translates directly through to all other interest rates, from Treasury bonds to mortgage loans. It also changes the dynamics of competition for investor dollars: when bonds yield 10 percent, they will attract more money away from stocks than when they yield only 5 percent. The level of interest rates affects the economy: higher rates tend to slow activity, while lower rates stimulate activity, a ripple effect that expands into all sectors of the economy.

Gross Domestic Product
Gross Domestic Product (GDP) is the total value of final goods and services produced within a country's borders in a year. It is one of the measures of national income and output. It may be used as one indicator of the standard of living in a country, but there may be limitations with this view.

Housing Starts/Building Permits
An estimate of the number of housing units on which construction was started. Starting construction is defined as excavation for the footings or foundation, or the first shovel of dirt to break ground. (In response to natural disasters such as Hurricane Andrew in August 1992, that definition has been expanded to include a housing unit built on an existing foundation after the previous structure had been completely destroyed.) Housing starts are divided into single-family and multifamily (2+) units. Beginning construction on a 100-unit apartment building, for example, is counted as 100 starts.

The Ifo Business Climate Index is based on around 7,000 monthly survey responses of firms in manufacturing, construction, wholesaling and retailing.
The firms are asked to give their assessments of the current business situation and their expectations for the next six months. They can characterise their situation as good, satisfactory or poor, and their business expectations for the next six months as more favourable, unchanged or more unfavourable. The balance value of the current business situation is the difference between the percentages of "good" and "poor" responses; the balance value of the expectations is the difference between the percentages of "more favourable" and "more unfavourable" responses. The business climate is a transformed mean of the balances of the business situation and the expectations. For calculating the index values, the transformed balances are all normalized to the average of the year 2000.

GDP Implicit Deflator
The implicit deflator is calculated by dividing the current-dollar GDP figure by the constant-dollar GDP figure. The GDP implicit deflator is released quarterly with the respective GDP figure.

Index of Leading Economic Indicators
The Index of Leading Indicators consists of the following economic indicators: average workweek of production workers in manufacturing; average weekly claims for state unemployment insurance; new orders for consumer goods and materials; vendor performance (companies receiving slower deliveries from suppliers); contracts and orders for plant and equipment; new building permits issued; change in manufacturers' unfilled orders for durable goods; change in sensitive materials prices; index of stock prices; money supply; and index of consumer expectations. This index is designed to offer a six- to nine-month future outlook of economic performance.

Industrial production consists of the total output of a nation's plants, utilities, and mines. From a fundamental point of view, it is an important economic indicator that reflects the strength of the economy and, by extrapolation, the strength of a specific currency.

Institute for Supply Management (ISM) Index
The Institute for Supply Management (ISM) Index is a composite diffusion index of national manufacturing conditions. Readings above 50 percent indicate an expanding factory sector. The index is calculated from five of the eight sub-components of a monthly survey of purchasing managers at roughly 400 manufacturing firms representing 20 industries and all 50 states. The survey queries purchasing managers about the general direction of production, orders, inventories, employment, vendor deliveries and prices.

ISM Index: Manufacturing
A national manufacturing index based on a survey of purchasing executives at roughly 300 industrial companies. It signals expansion when the ISM is above 50 and contraction when below.

ISM Services Index
Also known as the Non-Manufacturing ISM. This index is based on a survey of about 370 purchasing executives in industries of finance, insurance, real estate, communications and utilities. It reports business activity in the service sector.

New unemployment claims are compiled weekly to show the number of individuals who filed for unemployment insurance for the first time. An increasing (decreasing) trend suggests a deteriorating (improving) labor market. The four-week moving average of new claims smooths out weekly volatility.

Machine Orders (Japan)
Machine Orders Data (also known as Machine Tool Order Data) is a figure issued by the Japan Machine Tool Builders Association (JMTBA) every month. It serves as one indicator of the Japanese economy. In the forex market, the release of such data is often followed by a sharp change in the currency exchange rate.
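Two of the calculations above, the implicit GDP deflator and the four-week moving average of new unemployment claims, are simple enough to sketch directly. All figures below are invented for illustration.

```python
# Illustrative sketch (numbers invented) of two calculations mentioned above:
# the implicit GDP deflator and the four-week moving average of initial
# unemployment claims.

def implicit_deflator(nominal_gdp, real_gdp):
    """Current-dollar GDP divided by constant-dollar GDP, scaled by 100."""
    return 100.0 * nominal_gdp / real_gdp

def four_week_average(weekly_claims):
    """Average of the most recent four weekly claims figures."""
    recent = weekly_claims[-4:]
    return sum(recent) / len(recent)

print(f"implicit deflator: {implicit_deflator(21_500.0, 19_100.0):.1f}")                       # ~112.6
print(f"4-week average of claims: {four_week_average([210_000, 225_000, 215_000, 230_000, 220_000]):,.0f}")  # 222,500
```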
Monetary Base (Japan)
The monetary base is the "currency supplied by the Bank of Japan" and is defined as follows: monetary base = bank notes in circulation + coins in circulation + current account balances (current account deposits at the Bank of Japan).

Monetary Policy
An attempt to influence the economy by operating on such monetary variables as the quantity of money and the rate of interest. The nation's central bank is usually involved with monetary policy.

Money Supply
The money supply is basically defined as the quantity of money (money stock) held by money holders (general corporations, individuals and local governments).
M1 – A category of the money supply that includes all coins, currency and demand deposits (that is, checking accounts and NOW accounts).
M2 – A category of the money supply that includes M1 in addition to all time deposits, savings deposits and non-institutional money-market funds.
M3 – A category of the money supply that includes M2 in addition to all large time deposits, institutional money-market funds, short-term repurchase agreements and certain other large liquid assets.

National Association of Purchasing Managers (NAPM Chicago)
The Chicago PMI (officially known as the Business Barometer) is a monthly composite index based on opinion surveys of more than 200 Chicago purchasing managers regarding the manufacturing industry. The survey responses are limited to three options: slower, faster and same. As such, the index will not capture whether a component is growing but at a much slower rate, or vice versa. The index is a composite of seven similarly constructed indexes, including: new orders, production, supplier delivery times, backlogs, inventories, prices paid and employment. The new orders and order backlog indices indicate future production activity. It signals factory-sector expansion when it is above 50 and contraction when below it. The index is seasonally adjusted for the effects of variations within the year, differences due to holidays and institutional changes. Because it is an opinion survey, it is often influenced by respondents' perception of current events, as opposed to actual hard data. Also, it does not capture technological and production changes, which make it possible for production to expand while employment contracts. Because the Chicago PMI is released the day before the ISM, it is watched in order to predict the more important ISM report, which is in itself a good leading indicator of overall economic activity. It frequently moves markets.

New Home Sales
The New Home Sales report shows the number of newly constructed homes with a committed sale during the month. The level of new home sales indicates housing market trends and economic momentum, signaling consumer purchases of furniture and appliances. Simply, the volume of sales indicates housing demand. Also, the monthly supply of homes serves as an input into the level of housing pressure. However, when analyzing sales trends, one must remember to take into account unusual weather and seasonal effects.

NY Empire State Index
The New York Fed conducts this monthly survey of manufacturers in New York State. Participants from across the state represent a variety of industries. On the first of each month, the same pool of roughly 175 manufacturing executives (usually the CEO or the president) is sent a questionnaire to report the change in an assortment of indicators from the previous month. Respondents also give their views about the likely direction of these same indicators six months ahead.
This index is seasonally adjusted using the Philadelphia Fed's seasonal factors because its own history is not long enough, with data going back only a couple of years.

Personal Income
Personal income is simply the income received by individuals, non-profit institutions and private trust funds. This indicator is vital for the sales sector. Without adequate personal income and a propensity to purchase, consumer purchases of durable and non-durable goods are limited.

Philadelphia Fed Survey
A composite diffusion index of manufacturing conditions within the Philadelphia Federal Reserve district. This survey is widely followed as an indicator of manufacturing-sector trends, since it is correlated with the NAPM survey and the index of industrial production.

Producer Price Index
The PPI gauges the average changes in prices received by domestic producers for their output at all stages of processing. The PPI data are compiled from most sectors of the economy, such as manufacturing, mining and agriculture.

Productivity
The economic measure of efficiency summarizing the value of outputs relative to the value of inputs.

Purchasing Managers Index (PMI)
The index is widely used by industrialized economies to assess business confidence. Germany, Japan and the UK use PMI surveys for both manufacturing and services industries. The numbers are arrived at through a series of questions regarding business activity, new business, employment, input prices, prices charged and business expectations. In addition to the headline figures, the prices-paid component is highly scrutinized by the markets for evaluating pricing power and inflationary risks.

Retail Sales
The retail sales report is a measure of the total receipts of retail stores from samples representing all sizes and kinds of business in retail trade throughout the nation. It is the most timely indicator of broad consumer spending patterns and is adjusted for normal seasonal variation, holidays and trading-day differences. Retail sales include durable and non-durable merchandise sold, and services and excise taxes incidental to the sale of merchandise. Excluded are sales taxes collected directly from the customer. It also excludes spending for services, a large component of consumer expenditures. Retail sales give the first picture of consumer spending for a given month. Retail sales are often viewed ex-autos, as auto sales can move sharply from month to month. Also, changes in the gas and food components are often a result of price changes rather than shifting consumer demand. Retail sales can be quite volatile, and the advance reports are subject to large revisions.

Tankan Survey (Japan)
An economic survey of Japanese business issued by the Bank of Japan, the country's central bank. The survey is conducted to provide an accurate picture of business trends of enterprises in Japan, thereby contributing to the appropriate implementation of monetary policy. The report is released four times a year: in April, July, October and mid-December.

Tertiary Industry Index (Japan)
The tertiary index measures activity in six industries: utilities, transport and telecommunications, wholesale and retail, finance and insurance, real estate, and services.

Treasury International Capital (TIC)
These Treasury data track the flows of financial instruments into and out of the United States. Instruments tracked include Treasury securities, agency securities, corporate bonds and corporate equities.

Unemployment Rate
The percentage of people classified as unemployed as compared to the total labor force.
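The unemployment-rate definition just above is a single ratio; here is a tiny sketch with invented counts.

```python
# Tiny sketch of the unemployment rate defined above: unemployed people
# as a percentage of the total labor force. The counts are invented.

def unemployment_rate(unemployed, labor_force):
    return 100.0 * unemployed / labor_force

print(f"{unemployment_rate(unemployed=6_500_000, labor_force=162_000_000):.1f}%")  # about 4.0%
```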
Wholesale Trade
Wholesale Trade is the dollar value of sales made and of inventories held by merchant wholesalers. It is one of the components of business inventories. Statistics include sales, inventories and stock/sale ratios, collected via a mail-out/mail-back survey of about 7,100 selected wholesale firms.

ZEW
The ZEW works in the field of user-related empirical economic research. In this context it has particularly distinguished itself nationally and internationally by analysing internationally comparative issues in the European context and by compiling scientifically important databases. The ZEW's duty is to carry out economic research, economic counseling and knowledge transfer. The institute focuses on decision-makers in politics, economics and administration, scientists in the national and international arena, as well as the interested public. Regular interviews on the situation on the financial markets and business-related service providers, as well as large-scale annual studies on the technological competitiveness of and innovation activities in the economy, are representative of the different types of topical information provided by the ZEW. The ZEW is subdivided into the following five research fields: International Finance and Financial Management; Labour Economics, Human Resources, and Social Policy; Industrial Economics and International Management; Corporate Taxation and Public Business Finance; Environmental and Resource Economics, Eco-management.
http://www.trading-point.com/economic-indicators
State standardized assessments often include reading assessment as an essential component. Educators may also impose reading assessments as a means of testing student's reading comprehension and skills. To help young students succeed at reading assessments, it is essential to teach them effective study methods. Teach kids diverse study methods for reading assessment to improve their abilities in note taking, comprehension, reflection and collaborative studying. A strong reader utilizes both comprehension skills and interpretive skills when analyzing a new written work. While reading comprehension skills allow a reader to understand a written work, interpretive assessments question a work further by discerning the author’s intentions. Understanding the differences between the two assessment techniques helps you understand the expectations for each. Kindergarten is not all play. It’s the time that children are taught to recognize letters, read small words and count. They start by learning the letters of the alphabet. The first word that a child usually learns how to write is his name. First grade teachers expect either the parents or the kindergarten teacher to have taught the student basic phonetics. Kindergarten teachers teach in the transitional stage from pre-k to the first grade classroom in which they will be learning for the majority of the day instead of playing. Intuition is a type of knowledge that people sense. Rather than being based on hard facts, intuition is founded in a gut feeling or a strong belief that can sway people to make decisions or even influence the relationships they form. Having high intuition takes that gut feeling a step further by categorizing it as a sense of something mystical that is beyond cognition. Active readers engage the text with questions as they read, seeking specific information or challenging and testing the writer's assertions. There are a variety of methods you can use to help your students develop into active readers by teaching them reading strategies that will keep them grappling with the text, not just getting through it. There are many ways to use novels in a 4th grade classroom. Like any worthwhile academic experience, time and preparation are essential. The novel should be age- and level-appropriate with regard to lexile levels, themes and content. After you find the right book, let your "salesman" skills kick in --- demonstrate your excitement and promote the novel to your students. Building enthusiasm and interest lures students into reading a novel, thereby making the novel experience rewarding and successful. Informal Reading Inventories, known as IRIs, are criterion-referenced reading assessments designed to help teachers, tutors or mentors understand a student's strengths and weaknesses in reading skills. IRIs typically begin with a graded word list that requires the student to read common words in isolation. The second portion of the test requires the student to read passages orally, silently or both and then respond to comprehension questions. Informal Reading Inventories are available commercially or can be designed by teachers. They are inexpensive and easily-administered instruments useful in a variety of settings. Learning to read is a process of movement through each of the five stages of reading development that you cannot do hastily. The stages are important for understanding written English and learners must master them. 
Reading is a process of language development, communication, acquiring and sharing ideas or information, and the mastery of plain cognitive processes. First grade is the time when the novice reader fine-tunes and masters reading or gets left behind. The argument over how to teach reading has been constant, with oscillation between two overarching theories: the bottom-up (phonics) approach and the top-down (whole language) approach. Within each theory are a variety of reading theories that are employed in a first-grade classroom. Parents often choose a program based on whether or not the teacher emphasizes whole language or phonics. Yet most educators today agree that one is not necessarily better than the other. Both theories have their place in reading instruction, so teachers attempt to…

When children begin reading, assessing their performance is key in determining their abilities. Through assessments, educators can determine how children's reading skills are developing, allowing them to offer supplemental lessons if needed. When assessing children's reading skills, fluency, decoding and comprehension should all be monitored, as these components are critical for effective reading. Using a combination of techniques is necessary to effectively assess these skills.

The MetaMetrics "Lexile Framework for Reading" is one of the best-known methods of measuring text complexity and monitoring a student's reading ability. The new "Common Core [Educational] Standards" being adopted across the country call for a sweeping increase in text complexity, so understanding how to calculate the Lexile level of a text or the Lexile reading range appropriate for a child is becoming more important than ever. Parents, teachers and students alike can benefit from the use of Lexile calculator tools.

Easy Grade Pro is a suite of software programs that allows educators to keep track of student grades, attendance and other student record information. Easy Grade Pro also provides educators with the means to adjust this information in a number of ways. For example, a teacher may wish to drop the lowest assignment or test grade from a student's final grade calculation. Easy Grade Pro provides the means to do this in a few short steps.

Assessments allow teachers to see how well their students are comprehending information presented in class. To serve this purpose, assessments must be properly graded and the scores taken into consideration when planning future lessons or projects. The way in which you do this depends, at least in part, upon the types of questions found on the assessment. To ensure that these scores are useful to you, make scoring your assessment properly of paramount importance.

Young children get ready to learn about reading well before they start formal schooling. Preschool and kindergarten teachers, as well as parents, can assess early literacy skills to pinpoint a child's strengths and weaknesses for instructional focus. It is important that children have a solid foundation so they can benefit from reading instruction in more formal settings.

Your Lexile score, or Lexile reader measure, provides information about your reading ability. These scores consist of a number followed by "L" and range from 0L (beginning readers) to over 1700L (advanced readers). Written texts also have Lexile measures that indicate how difficult they are to comprehend. Unfortunately, there is not a single test that measures Lexile scores. Instead, the scores are derived from other types of standardized tests and programs.
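The grade adjustment mentioned above, dropping a student's lowest assignment or test score before computing the final grade, is easy to illustrate. This is a generic sketch of the idea, not Easy Grade Pro's actual procedure; the function name and the sample scores are assumptions.

```python
# Generic sketch of dropping the single lowest score before averaging.
# This illustrates the idea only; it is not Easy Grade Pro's implementation.

def average_dropping_lowest(scores):
    if len(scores) < 2:
        return sum(scores) / len(scores) if scores else 0.0
    kept = sorted(scores)[1:]            # discard the single lowest score
    return sum(kept) / len(kept)

print(average_dropping_lowest([72.0, 88.0, 95.0, 81.0]))   # (81 + 88 + 95) / 3 = 88.0
```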
Transition assessment, as defined by the Division on Career Development and Transition (DCDT), is the process of collecting data of a person's interests, needs and strengths and aligning the data with future career planning. Transition assessment occurs primarily in kindergarten through 12th grade. Transition assessment involves many individuals in a student's life like counselors, job coaches, co-workers and the students themselves. Many methods and tools exist for transition assessment. In-basket activities help develop problem solving, constructive reasoning, analytical thinking and communication skills for college-level students. This type of project-based instruction presents opportunities for students to solve problems based upon what might be encountered in an "in-basket" upon arrival at work. While the activity can be presented while instructing within any professional field, it is particularly helpful for developing educators or administrators charged with student assessment and achievement. Kindergarten students are not yet sufficiently literate for traditional assessments, so teachers must use interactive, auditory test methods rather than simply administering written tests. Literacy standards vary from state to state, and each state has its own kindergarten assessment packet, which walks teachers through the process. Teachers should follow the instructions in their packets, and tally the students' correct and incorrect answers on the forms provided; they can analyze the students' successes and errors later to determine where to focus classroom time and attention. Lexile is an organization that rates the reading ability of students. Ratings range from 0 to 1700 based on the reading ability of the child. A 0L level is a beginning reader, or pre-reader while a 1700L is an advanced reader. Students in 20 states are tested for Lexile level through standardized testing. Other students' Lexile levels must be determined by examining the Lexile levels of the books they read, which can be found on the Lexile website. Lexile levels are a method of rating the reading difficulty of a book. It does not rate the content or interest level--a factor of which teachers and parents need to be aware when selecting books for younger students with a high reading level. The book level that a student is able to read, comprehend and retain is an indicator of the student's reading ability. The lexile rating system is flexible and is not tied to grade levels. The ability to read independently is one of the most important skills for a student to learn. But if you ask a student to read a book beyond her reading level, you set her up for frustration and failure -- and too much failure will turn a learner off from reading, crippling her lifelong learning potential. Set up your student for success through using Lexile scores, a framework for determining a text's difficulty for any reader. In the United States, the achievement test is a pivotal challenge students must often master to move on to bigger and better things in life. Students of all grade levels and ages face this obstacle, with the importance of achieving a high score increasing exponentially as they get older. 
Whether one is faced with a standardized achievement test such as the ACT, which originally stood for American College Testing, or the SAT, which originally stood for Scholastic Aptitude Test, or a locally created achievement test such as one developed by an individual teacher or institution, these achievement tests are forever…

Informal classroom assessment identifies a student's progress, knowledge, performance and achievements through non-standardized procedures. These procedures can include essays, presentations, homework and experiments. Teachers can also simply observe or interview the students. This assessment is used to link students' learning with teaching style. Checklists are used to measure students' progress.

Students who are taking a college curriculum are tested after taking study courses. Posttesting determines what knowledge they acquired and how much of an effect the teaching has had on them compared with when they first started the course. The data gained from posttesting allow statistical reports to be created.

The Developmental Reading Assessment (DRA) is primarily used by classroom teachers to determine reading levels and progress in students from kindergarten through eighth grade. Teachers observe, document and plan instruction according to students' reading performance during a DRA. Teachers can provide appropriate reading material for each student by determining at what level a child can read with success independently.

Reading assessment tests are given to children from the ages of 5 to 12 years to help school faculty with grade placement, to identify students who may require extra help, and to help teachers and parents improve literacy in their students and children. These tests contain various subjects that test literacy skills, and they vary in difficulty based on the age of the child taking the examination. The results of the examination are then given in graph form. A narrative analysis of the reading assessment interprets the data provided in the graphs and provides suggestions for improving…

Teachers have the difficult task of relaying material to many students at the same time while monitoring each individual student's progress. Not every person learns the same way or at the same pace. Unfortunately, a teacher can't stop the entire class to wait for just a few people to catch up. Without proper comprehension assessment tools, some students will get lost in the crowd.

At the fourth-grade level, students should be able to read passages and answer comprehension questions, as well as use prior reading skills to understand the text. Fourth-graders should be able to write using topic sentences and cohesion, use grammar and parts of speech effectively and notice errors. Specific standards vary by state, but similar assessment measures can be used to see how students are progressing.

Formative assessment covers the range of informal diagnostic tests a teacher can use to assist the process of learning by his students. Prescriptive but ungraded feedback enables students to reflect on what they are learning and why. The goal is to improve performance and achieve successful outcomes. Robert Stake, Director of the Center for Instructional Research and Curriculum Evaluation, likens formative assessment to a cook tasting a soup before serving it to a guest. But despite its advantages, formative assessment can be time-consuming, and incentives in the school system tend to favor more objective assessments.
The use of multiple assessments in middle school affects instructional programs and overall school progress positively. From district administration to classroom teachers, the use of data drives decision-making. Data informs decisions involving new programs, staff training and students' special placements. Using multiple assessment tools also increases the reliability of results; students demonstrate what they know in a variety of ways over time. Many districts require schools to follow an assessment program schedule in addition to required state testing, which ensures the use of multiple assessment data for evaluating students' progress and, at the district level, evaluating programs and whole school progress. Reading is an essential life skill. It is taught from a very young age, and quickly becomes beneficial both for educational and general life purposes. As a parent, it is natural to worry about how your child is developing these skills, and whether the school is doing all it can to support your child. If you begin to question your child's ability to read, it is worthwhile to approach the teacher in a respectful, productive way, so your concerns are addressed. Whether you're currently in school or you're trying to teach yourself a new skill, personal learning tools are invaluable to the process. Long ago, personal learning tools may have simply consisted of a textbook and a pen. While these tools are still with us, there are now many different types that you can choose from. Though each tool has positive and negative aspects, choosing the ones that best fit your learning style can make learning more effective and fun for you. Whether measuring the speed of a car or the length of a certain area, you need particular measuring tools to get a precise measurement. The most commonly used measuring tool is a weight scale, which displays a person's individual weight. However, you can find measuring tools practically everywhere you go, from watches and automobiles to schools and homes. Assessments in education are intended to provide information regarding students' progress in the learning process. Identification of strengths and weaknesses assists educators in focusing on students' needs. Assessments are also used to identify specific areas where students may need additional support or servicing. These assessments might evaluate students for a developmental disability or special classes. The more information gathered regarding students' progress in learning, the more tailored lessons and special services can be to meet students' academic needs. Reading may be the most important skill children learn in elementary school. In first grade, they begin to read more complex sentences and books. There are certain benchmarks, such as enhanced comprehension, the ability to ask questions and recognizing literary elements, that first-grade students should meet as they further develop their reading skills. Each state has various formal reading tests a student will have to take, usually in the elementary and middle school grades. While students groan at the thought of having to take a test, these tests serve the purpose of evaluating the students and, sometimes, guiding the educator on the types of material to focus on. There are also disadvantages associated with the administration of these formal tests, however. A Developmental Reading Assessment (DRA) is used by educators to track the reading abilities of primary students over a period of time.
A DRA helps teachers assess a student's reading level, particularly in the areas of accuracy, fluency and comprehension. Results from a DRA identify the individual strengths and weaknesses of students and help the teacher provide appropriate reading material for students. To help monitor reading growth over time, classroom teachers are responsible for administering and scoring the DRA. Preschool reading assessments have to do with vocabulary understanding as well as recognition of sounds, since many preschoolers cannot yet read. Visual assessments are also used to identify letters and words attached to particular pictures. These assessments allow teachers to know where a preschooler is in the learning process before starting to work with the student on reading skills. Response to Intervention, or RTI, is an approach to skills remediation that targets students at multiple levels. Tier one involves direct reading instruction in the general education class. Tier two instruction requires systematic small-group reading instruction for students who score low on screenings. Tier three intervention consists of daily, intensive reading instruction for students who do not respond to Tier two intervention. The Florida Center for Reading Research has recommended several programs for Tier three intervention. Informal reading assessments generally take place in the natural learning environment of the classroom. Types of informal reading assessments include checklists, running records, observations, work samples, portfolios, rating scales and parent interviews. Informal assessments are useful because they provide information about how children apply reading on a daily basis. However, there are some drawbacks to using informal tests. The assessment process attempts to find out what people learn and how they learn. The research process helps educators determine the information that students find helpful in putting together a paper or project, as well as the process they go through to gather that information. Because assessment tools vary, most educators use a variety of tools to get the best results. Multiple methods exist to find an individual's learning needs. Teachers determine student weaknesses through classroom exams and quarterly grades. However, a more research-based approach to determining academic needs can be accomplished through a psycho-educational evaluation. This most often includes intellectual and achievement assessments. The intellectual test provides information on an individual's overall thinking and reasoning abilities, whereas the achievement battery assesses academic areas. To establish learning disabilities (needs), discrepancies between ability (IQ) and achievement are measured. When assessing a young learner's reading skills, it's particularly important for educators to develop thorough reading-improvement plans. This means linking assessment with learning and ensuring those plans are clearly communicated to both the student and the parents. It is recommended that educators present this information both in a written report and in a personal meeting with the student's parents. The assessment process attempts to discover what and how students learn throughout a course or over a period of time in education. Teachers also use assessments to learn how to become better teachers and discover what concepts and procedures students struggle with through the learning process. Many educators prefer to use several assessment tools to obtain varied results.
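The ability-achievement discrepancy mentioned above is, in practice, a comparison between two standard scores. The sketch below is purely illustrative: the 15-point cutoff and the sample scores are assumptions made for this example, not clinical criteria, and real psycho-educational evaluations rely on normed instruments and professional judgment.

```python
# Illustrative ability-achievement discrepancy check.
# The threshold and scores are assumptions for the example, not diagnostic rules.

DISCREPANCY_THRESHOLD = 15  # standard-score points (assumed for illustration)

def discrepancy_flag(ability_score: int, achievement_score: int,
                     threshold: int = DISCREPANCY_THRESHOLD) -> bool:
    """Return True if achievement falls notably below measured ability."""
    return (ability_score - achievement_score) >= threshold

# Hypothetical standard scores (mean 100, standard deviation 15) for one student.
iq = 108
reading_achievement = 86

if discrepancy_flag(iq, reading_achievement):
    print("Gap exceeds the illustrative threshold; further evaluation may be warranted.")
else:
    print("No notable ability-achievement gap under this illustrative rule.")
```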
Assessment takes two forms: formative and summative. Formative assessment occurs in the middle of a lesson or unit and it allows teachers to gauge what information they have successfully imparted thus far in the lesson. Summative assessment occurs at the end of a lesson or unit and allows teachers to gauge how much of a lesson or unit has been successfully imparted over the course of the unit. Teachers use formative assessment to tweak upcoming unit plans, while they use summative assessment to evaluate the overall success of the students over the course of a unit. Pearson is a publishing company that specializes in textbooks for elementary through college subjects. A prolific textbook publisher based in the United States, Pearson sells to schools, home-school teachers and college students. Addison-Wesley, Allen & Bacon, Benjamin Cummings, Longman and Prentice Hall are some of its better-known publishing brands. Individuals can order higher education textbooks from Pearson through their website, but many elementary through high school textbooks require the buyer to be a registered school or home-school parent. Learning to read is arguably the most important skill taught in elementary school. Some characteristics of effective reading are fluency, comprehension and retelling. When teaching a child to read, it is important to frequently assess his progress. Assessing reading is a combination of subjective and objective measures. Although reading assessments can sometimes be difficult to administer, especially to younger children, it is an important part of understanding a child's language acquisition needs and development. Increase your chances of bringing your child to the head of the class before she ever enters school by teaching her reading basics. She may have an easier time adjusting to school if she recognizes some of the material that is presented. Before you attempt to get your child to read, teach her the basics, such as the alphabet and pronunciation. Work with your child for a few minutes each day to create a routine. The pre-reading stage of reading development is also called reading readiness or emergent literacy. Children begin to acquire literacy skills long before formal schooling begins. From birth, infants begin to develop language skills. These language skills are the foundation for later literacy development. Having a number that corresponds a child's reading level to a book helps a child enjoy reading more. After testing at school to determine a child's Lexile or Advanced Reading level, this number helps a child, parent or teacher choose books that encourage reading independently. A caregiver knows by the Lexile or AR level if a child needs help reading, if a book is just right or if the child can easily read it by himself. If you are unsure of a book's reading level you can find the Lexile or AR number online. In the third grade, many students receive their first introduction to major topics in science, including the orbiting of the planets, the movement of celestial bodies through the sky, the needs of living things, the interactions between living and nonliving things and the concepts of gravity, force and heat. You should regularly test your students' grasp of these topics, since you are laying a foundation for future science courses, through tests, quizzes and projects. 
As an everyday assessment tool, though, interact with your students and play games or ask them review questions, to assess their learning in a non-intimidating and… Kindergarten readiness is more than just knowing the alphabet and counting to 10. While most parents focus on academics, social and behavioral skills often are a better indicator of whether your child is ready for school. Working with your child on skills such as listening and sharing will help prepare her for kindergarten and ease her transition. It is challenging to teach children to read and requires knowledge of effective practices. There are five components of reading: phonemic awareness, the knowledge and manipulation of sounds in spoken words; phonics, the relationship between written and spoken letters and sounds; fluency, the ability to read with accuracy, appropriate rate, expression and phrasing; vocabulary, knowledge of word definitions and their context; reading comprehension, understanding the meaning in text. Teachers who have a thorough understanding of these components are well prepared to teach children to read. Teachers at all levels need to measure how well their students are learning. Traditionally, formal tests and written examinations were the only forms of assessment used to determine students' grades and reflect their knowledge and understanding of the material that was taught. Informal methods of assessment, such as checklists and observation of daily work, have become increasingly popular but have not replaced formal assessment. Lexile levels indicate a book's readability. They are calculated by a program called MetaMetrics which uses a formula based on sentence length and word frequency. Students who take standardized reading tests are assigned a Lexile score based on their reading ability. Students are advised to read books rated between 100 Lexiles below and 50 Lexiles above the score. Students within each grade level often share a similar Lexile range. Therefore, knowing the Lexile level of textbooks helps teachers choose the most appropriate reading material for their classes. There are a few approaches for determining a textbook's Lexile score. The Developmental Reading Assessment, or DRA, is a tool used to assess literacy skills. Recently updated, the latest version is the DRA 2 and the K-3 kit includes text levels A-40. Levels 28 and higher require students to complete a written response. The oral reading is timed for levels 14 and higher. Texts provided in the kit are narrative stories, except for levels 16, 28, 38 and 40, which have an informational text in addition to the narrative. The primary skills that are the focus of first grade reading instruction give students the foundation needed to increase reading skills in upper grade levels. First graders learn to use strategies to decode words, develop understanding of story lines, learn the parts of a story and increase reading fluency. In Virginia, the Standards of Learning (SOL) assessment tests have been used to measure children's proficiency in English, mathematics, reading, science and history/social science since 2000. The 35 to 50 items on each test (600 total) measure content knowledge, scientific and mathematical processes, and reasoning and critical thinking skills. A two-part assessment that includes multiple-choice questions and a short essay is utilized to assess writing skills. All tests are administered in English, with other provisions available for students with disabilities or limited English-language proficiency. 
Parents receive scores and guidelines for interpreting them. But how exactly are the SOL scored? Reading assessments help identify struggling students in need of additional instruction. They are used to determine if the instructional methods are appropriate for the entire group of students or only a portion of them and to monitor student growth. The Developmental Reading Assessment, or DRA, is a tool used by educational systems to evaluate a child's reading and comprehension level. Reading material is matched with the child's ability to read and is especially targeted to challenge the student to develop and progress the more they read. There are many intervention and assessment programs and strategies available. Intervention specialists and teachers must choose the right tools for each individual child and then evaluate how well that tool is working. Throughout their scholastic careers, students undergo assessments in order for schools to evaluate and understand their academic progress. Many forms of assessments exist. They range from complex psycho-educational analyses to simple, formative assessments given in the classroom. Students can be evaluated as a group, such as with the SATs, or individually, such as with intellectual scales. There are multiple aspects to consider when deciding which educational test is appropriate to use. Teachers and administrators need to examine criteria in order to select the most suitable evaluation tool. Reading assessment, as the name implies, is the evaluation of a student to determine his progress in all areas of reading. Assessment occurs in many forms and is most effective when implemented as part of the instructional process. A thorough assessment will answer the following questions: (1) At what level is the child reading? (2) What reading interests does the child possess? (3) What is the child's attitude toward reading? (4) What reading strengths and abilities does the child possess? Once each of these questions is answered, the teacher can tailor her instruction to focus on the student's weak areas. Visual and auditory learning styles help classify those who learn predominantly through seeing and those who learn through hearing. According to LDPride, "Knowing your learning style will help you develop coping strategies to compensate for your weaknesses and capitalize on your strengths." As a teacher, running assessments allows you to determine the type of learning style in which a student fits, and you can tailor your classes accordingly. The Developmental Reading Assessment, or DRA, is a set of individually administered reading assessments for children in kindergarten through grade eight. According to Natalie Rathvon, Ph.D., the purpose of the DRA is to identify students' independent reading level, fluency and comprehension. Educators use these assessments to identify students' reading strengths and weaknesses and to monitor reading growth. The classroom teacher administers, scores and interprets the DRA. Building critical thinking skills in writing is important for middle and high school students because this will prepare them for more advanced college English courses, and it will teach them how to use their creativity and research to produce well-organized documents in their future careers. When assessing students' writing skills, you want to look at how well they organize their thoughts, what weaknesses they have and the originality of their writing. 
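The Lexile guidance quoted earlier in this section, that readers are often steered toward books from roughly 100L below to 50L above their own measure, amounts to a simple range check. A minimal sketch, assuming only that rule of thumb (the function names and example numbers are illustrative, not from MetaMetrics):

```python
def recommended_lexile_band(reader_measure: int, below: int = 100, above: int = 50) -> tuple:
    """Return the (low, high) Lexile band for a reader, using the common
    '100L below to 50L above' rule of thumb described in the text."""
    low = max(0, reader_measure - below)   # Lexile measures bottom out near 0L
    high = reader_measure + above
    return low, high

def book_in_band(book_measure: int, reader_measure: int) -> bool:
    """True if a book's Lexile measure falls inside the reader's band."""
    low, high = recommended_lexile_band(reader_measure)
    return low <= book_measure <= high

# Example: a reader measured at 650L considering a 720L book.
print(recommended_lexile_band(650))   # (550, 700)
print(book_in_band(720, 650))         # False: probably too hard for now
```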
Astrology is a belief system based on the idea that the positions of sun, stars and planets have an effect on a person's destiny, personality and other earthly matters. Believers say that the alignment of the stars and planets on the day you were born, for example, help determine characteristics such as aggressiveness, neatness, thoughtfulness and imagination. The signs of the zodiac, as well as astrology associated with other cultures, such as China, can be great stepping-off points for classroom activities and lessons for students at all grade levels. Assessment tools help score student understanding. Determining the ideal assessment tool to use largely depends on what needs to be tested. Using a variety of assessment tools in the classroom can create a more rounded view of students. Though assessment is necessary for learning, it doesn't have to be boring or stressful for students. As students move through school, they should continually progress and understand grade-level material. Many middle school teachers put time and effort into ensuring that their students progress, and to monitor this progression, they use monitoring tools. Through the use of these tools, teachers can effectively see how well their students are advancing in their understanding, and ensure that everything they are doing is proving beneficial to their pupils. It's encouraging if you're a parent or teacher and your children or students are reading. However, just reading isn't enough. A person must understand what they are reading, make predictions about what might come next and answer questions about the reading. Reading comprehension is the foundation of all subjects and can be built with the use of various practices. Arguably one of the most important skills a person can and should develop in her lifetime, writing is an immensely difficult ability to assess and develop. In early childhood, writing problems tend to center on motor skills. As children move into fourth, fifth and sixth grades, writing problems may show up during the act of composition. By its very nature, writing resists quantification, so assessing a child's writing ability is a challenging task for a teacher. Teachers may want to conduct assessments that take the form of critical and reflective discussions with individual students about their writing. The Degrees of Reading Power test, or DRP, indicates a student's ability to understand text. It is administered between grades three and eight, though tests are available for students of all ages. The results are then used to determine appropriate books, teacher effectiveness, and points of concern for developing readers. Test scores are derived from a multiple-choice standardized test, and possible scores range from 0 to 99, though the highest possible score varies slightly by grade level. Students who struggle with reading and comprehending grade-level materials have difficulty, not just in English and Language Arts classes, but in all content areas. Reading is an integral part of learning. Teaching students strategies for both reading assigned materials, and better understanding what they read can lead to increased academic success overall. According to the Highland Schools website, diagnostic assessments are used to determine how well students are learning in addition to the effectiveness of the curriculum. The assessments are meant to show both strengths and weaknesses so that the necessary adjustments can be made and learning can be improved. 
There are a series of diagnostic literary assessments that can be used at any grade level. A variety of assessment tools are available to test children's early reading skills. Each test varies slightly and focuses on different results. Before you select an assessment tool, it is important to consider what your goal is for your pupils. Select a test that will assess the pupils in the areas you wish to focus upon. The assessments listed below are common resources used in many districts. Teachers' assessment tools are an important part of judging the capabilities, progress and development of students. Assessment tools help teachers judge how much a student knows at the beginning of a school year, semester or subject. Assessment tools also help track progress and inform the teacher when the subject matter has been adequately learned by the students. Teachers' assessment tools come in various forms, including homework, tests, interviews, oral reports, papers and instructor observation. Teachers' assessment tools can be formative, summative, objective and subjective. A medication aide -- better known as a pharmacy technician and also referred to as a medication technician -- primarily helps pharmacists to prepare prescription medications. They also handle duties such as receiving prescription requests, labeling pill bottles, answering phone calls and stocking medication. To this end, states like Ohio have technical schools that provide certificate and associate degree programs. Upon graduation, medication aides can expect to earn between $20,000 and $40,000 a year, according to a 2009 report by the U.S. Bureau of Labor Statistics. Pearson is a leading publishing company offering a wide selection of school products, curriculums and assessments for students of all ages and teachers. Pearson has many brands of educational products to choose from in a variety of subjects. The company also supports several private non-profit organizations such as Jumpstart, a national early education group serving preschool children in low-income communities; the National Teachers Hall of Fame; and its own Pearson Foundation, which delivers books and offers literacy support to communities. The faster you read, the more information you are able to comprehend in a given period of time. Accordingly, the speed of reading is an important factor in determining the ability of a person to process vast amounts of information quickly. In the information age, this ability can be an important variable in determining whether a person will make a successful career or will be a mediocre professional. There are two methods to measure how fast you read. In today's society, education is always a popular topic of conversation. There is a constant push for higher standards and a need to remain competitive with other nations. Assessment is becoming more and more important as citizens demand accountability from teachers and proof that students are learning. Children of the same age generally develop physically and intellectually along certain benchmarks or milestones, but they can progress at different rates in regards to school readiness, particularly in reading and writing. With enough of a foundation in place to tackle these new challenges, children will generally exhibit certain characteristics in common. There are a variety of professional readability assessments that determine the reading level of a given text and match it to the reader's reading level. 
These assessments measure word frequency, concept load, syllable counts, lengths of sentences and other factors to determine reading level. They can be difficult for teachers, parents and even trained reading specialists to use. More direct approaches apply word recognition to a specific text to determine if it is appropriate for the individual learner's level. "Lexile" is a trademarked product created by MetaMetrics, a private, educational measurement company. Lexile levels are specific reading levels matched with appropriate reading materials of the same approximate levels for students. These levels range from 0L to 1700L. Lexile reading level determination is specific and individualized for each student. Books are also assigned Lexile levels, helping parents and teachers better match reading materials with the needs of the learner. Also, because student reading progress is so variable, Lexile levels more accurately grade a student's reading progress since the measure is broken into smaller increments than general grade level reading… Understanding what one reads is essential for success in school and in work. Learning how to fully understand what one reads takes time and effort, but it can be accomplished through literary assessment tools. Literary assessment tools measure reading comprehension in qualitative ways that offer accurate reflections of what one really gleans when one reads. Literary assessment tools abound, but some tools are more valuable than others. If a child is tired of reading books that are too easy or perhaps too difficult, then finding a book using her Lexile score can help eliminate that problem. State departments and test publishers create assessments that schools administer to obtain a child's Lexile level. This helps pair her with a book that is at an appropriate reading level. By taking some time to plug in information, the perfect books for a child's reading level appear. Read through the titles and see what books grab the attention of your child. Teachers in the classroom today are constantly struggling to meet state and federal education standards, so having assessment tools to monitor student progress is essential. There are a variety of tools available, many of them web- or computer-based. The tools often require the input of data, and have the capability to chart and track a student's progress. Although there are a variety of ways to test a child's reading age, one of the most commonly used methods is to ask your child to read from a list of increasingly difficult words until she is unable to correctly pronounce several words in a row. While this does provide a reliable indication of a child's ability to sound out and read words, it has the disadvantage of not accounting for comprehension. By familiarizing yourself with several ways of assessing reading level, you will be able to gauge your child's comprehensive reading ability. Implementation of the Texas Reading First Program focuses on kindergarten through fifth grade. It is a specific process that must be followed closely in order to keep receiving funds. Funds are administered over a three-year period that allows for gradual development. The first year builds a foundation of readers, the second year uses a core reading program, and the third year uses data-driven instruction to continue with high-level and ongoing implementation.
It must include explicit and systematic instruction in particular pieces of reading instruction, or The Big Five: phonological awareness, phonics, fluency, vocabulary, and comprehension. Reading makes learning possible in school. The Ontario Ministry of Education states that reading "paves the way to success in school, which can build self-confidence and motivate your child to set high expectations for life." For preschoolers, mastering reading readiness skills can help them become good readers in kindergarten and elementary school. Preschoolers who have letter, word and book awareness will possess the right readiness skills for reading. Assess 2 Learn, or A2L, was a formative assessment program made by Riverside Publishing Company, a division of Houghton Mifflin Harcourt. Assess 2 Learn has been updated and renamed Assess 2 Know, or A2K. Reading comprehension is a skill that all students need. According to Janice Light and David McNaughton of Pennsylvania State University, reading comprehension requires students to recognize written words; understand and relate the meaning of words and sentences; relate their prior knowledge to what they are reading and understand long texts. The importance of reading comprehension is clear, and there are many tools available to facilitate this comprehension. Typically there are two ways of assessing early literacy. One can be called "traditional testing," which involves removing a child from the classroom setting to perform a test. An example of a traditional test might be a multiple choice test on a story they have just read. Another type of testing is called "authentic," which involves assessment by observing the child in everyday classroom activities. An example of authentic testing might involve noting the number of words a child uses when asking for a toy from another student. Assessment tools and methods help teachers gauge the development and progress of their students. Assessment methods encompass the means by which a teacher wishes to assess students. Tools are the instruments for measurement for each method. Formal methods and tools include standardized tests and age-related developmental milestones. Informal methods and tools include use of flash cards and anecdotal records. Assessment tools help teachers make important decisions about student instruction. Test results help identify student needs so learning strengths and weaknesses can be identified. Students are then ranked according to grade-level proficiency. Once a grade and learning needs are established, focused instructional strategy can be used to improve student performance. Along with good instruction, the right assessment tools can go a long way in contributing to increased student performance. Assessment tools allow the individual administering the assessment to gather specific information about the student or patient. Assessments can be user generated, and thousands of assessment tools exist that people can use for certain purposes. Some assessment tools have a specific intent, such as the Scholastic Aptitude Test (SAT), which many higher education institutions use to determine acceptance. Persons who will collect the information create other assessments, such as a teacher who creates a test to assess understanding on a particular subject. Concepts of print, or how print functions within books, has much to do with a child's degree of early exposure to reading. 
Tools that evaluate concepts of print assume that the more exposure a child has to print early on, the easier his reading experiences will be as a beginning reader. This concept is as much about understanding as it is about the handling of books from front to back, flipping pages from left to right, previewing the cover, using pictures to draw in information, recognizing order and having an overall understanding of punctuation flow. Some popular assessment tools are used in… The Developmental Reading Assessment, or DRA, is a standardized tool used by classroom teachers in grades K-8 to evaluate the independent reading level of their students. It helps teachers keep records of the progress students are making in the areas of reading comprehension, fluency and accuracy. The test is administered individually to each student by the classroom teacher. Some states require this assessment to be administered two to three times per year. When a teacher shares results of this assessment with parents, it is important to understand the results, what they mean for their child at school and what parents… A reading problem assessment is a way to determine issues that may be affecting a student's reading performance. Reading assessments are often cognitively based, which means they focus on issues beyond the mere ability to understand words. Since there are many possible factors that can affect a student's reading ability, a reading assessment should be determined on an individual basis and be delivered and analyzed by a qualified professional. School districts can provide reading assessment and assistance to any students who require it. Assessing what a student knows is an important task for teachers so that they know what they need to teach and what skills students have already mastered. There are many forms of assessment, and in order to accurately assess what a student knows, teachers must use different assessments for different skills. Teachers must make sure that when they assess a student, they are using a tool that measures the skill they are trying to assess. Obtaining detailed information about each student's reading level and ability is crucial for reading teachers planning instruction. According to the Reading Rockets website, assessment is an important element of teaching and should be regularly implemented. There are a variety of reading assessments to choose from, depending on the information needed and the age of the students. Early childhood teachers in grades K-2 are responsible for reading instruction and their students' ability to become proficient readers. Assessment is an integral part of any reading program. Teachers use test data along with their observations to determine if students are progressing in specific skills. Teachers can also identify areas that need remediation and become aware of learning disabilities that can be addressed before they become detrimental. The Florida Comprehensive Assessment Test (FCAT) is a standardized test given to students from third grade through 11th grade in Florida. Students must pass the test in specific years in order to progress to the next grade level or, in some cases, in order to graduate from high school. The FCAT provides several different scores in the reported results, so it can be difficult to understand FCAT results unless you know what to look for. Performance-based assessments have become a popular way for teachers to authentically determine their students' mastery of educational objectives.
In order for performance-based assessments to be implemented correctly, teachers need to take time to develop a comprehensive method of evaluation. An effective way to both easily and successfully accomplish this task is to develop a rubric. A rubric is a set of scoring criteria that describes the indicators of different levels of success of the assessment. A rubric can be developed for any task in any subject at any grade level. Most of the time, teachers are required to show how their lessons, activities and assessments align with current state standards. This alignment ensures that all children are receiving an appropriate and consistent education. In addition, assessing students based on content standards can help teachers and administrators identify a possible literacy problem. For public, private and home school teachers, the educational publisher Pearson Education Inc. provides the DRA2 Reading Assessment, the second edition of the Developmental Reading Assessment. Available in print and online, the assessment measures reading skills for students in kindergarten through eighth grade. In the assessment, teachers find benchmark assessment books, blackline masters, student assessment folders, an organizer, training DVD, procedures overview card, clipboard and a word analysis kit for grades K -- 3. Determining how well a child reads requires taking a comprehensive look at the child's reading level. A running record is one way this can be done with a great degree of accuracy. The Qualitative Reading Inventory-3 (QRI-3) provides reading materials and detailed instruction on how to assess a student's reading skills accurately. If this resource is not available, a simple running record can provide much of the information necessary to determine how well a child is reading. Reading assessment is a tool that schools use to measure the reading and comprehension skills of students at various grade levels, according to the U.S. Department of Education. Generally, reading-assessment tests require students to read certain passages of text and then answer questions based on what they have just read. There is no single way to judge a student's reading ability. States, school districts and individual teachers use many assessment tools to get a picture of student abilities and to plan for instruction. "Tool" is a better word than "test," because not all assessments are formal exams. Reading assessment tools include casual mini-conferences in which a student reads aloud to a teacher and, at the other extreme, standardized exams that critics refer to as "high stakes" tests. Many school districts require teachers to integrate technology into the their lessons and classroom activities. Students are regularly required to complete assignments on the Internet. However, the Internet does not provide a form of evaluation for the teacher to use. The teacher is required to create an assessment to correctly gauge the students' understanding of concepts. Rubrics, research requirement worksheets, and checklists are effective assessment tools teachers can utilize because they can be easily adapted to many learning styles. As children move through primary grades, they make impressive growth in reading development. Teachers assess their students' progress regularly to ensure that each pupil is making adequate progress toward the mastery of reading. 
This assessment is multi-faceted and requires teachers to use their knowledge of grade appropriate reading development to determine whether the child in question is moving toward reading success. By assessing regularly, teachers can identify and respond to potential reading problems before the issue becomes detrimental to the student. With the national goal to have all students reading at grade level by the third grade, many teachers need small, informal assessments to see how students are faring in their reading skills instead of just using standardized testing. Many of these informal assessments only take a few minutes to complete, and teachers usually work with students one-on-one. Elementary schools depend on assessments to gauge students' progress and drive classroom instruction. Reliable reading tests indicate areas of weakness teachers can address through small-group interventions. Most schools administer tests that target specific skills such as letter-naming, phoneme segmentation and oral reading. There are assessment programs that focus on these and other reading skills using tests that are administered individually so that valid results will be indicated for each child. Authentic assessment--evaluating students' ability in real-world contexts--is a key term in the current methods and materials that ESOL teachers develop when looking at student progress. The wealth of authentic assessment materials includes newspapers, magazines, Internet web pages and blogs, and graphic novels or comic books. The materials adapt to the methodology, and teachers can use their expertise and imagination to create authentic assessment opportunities. Dyslexia is a specific learning difficulty usually associated with spelling and reading, and sometimes with numbers. Dyslexia may go unrecognized if it is not addressed in school. As a result, adults may grow up having trouble reading, writing and or figuring out math problems. It can also make it difficult for adults to attend college or train for a job. Adults should get tested if they believe they have dyslexia so that they can take steps to get help. Preparing students for the state reading assessments can be challenging and exhausting for them. Not only do students have to be skilled readers to do well on the assessments, but they also have to know how to pace themselves, how to remain calm and how to block out distractions. Fortunately, you can help your students prepare for state reading assessments. The Developmental Reading Assessment (DRA) is an individual reading assessment used by teachers and reading specialists to identify a student's independent reading level. It can be used with students in kindergarten through eighth grade. The assessment provides information about the student's reading accuracy, fluency (speed or rate of reading) and comprehension. This information is used by teachers and parents when selecting books to ensure that students are reading material at an appropriate level. This information is helpful for all students but becomes particularly useful for students who are struggling with reading. The challenge for parents, students and teachers alike is… Assessments are used to determine if students are performing up to grade level. For reading, states set standards and school districts implement curriculum designed to meet these standards. Students must be assessed to ensure that they meet milestones determined by state standards and progress at an appropriate pace. 
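Several of the measures described in these summaries, oral-reading accuracy on a running record and fluency expressed as a rate, come down to simple ratios. The sketch below assumes the common conventions of "percent of words read correctly" and "words correct per minute"; the passage length, error count, time and level bands are hypothetical illustrations, not the scoring rules of the DRA or any other specific instrument.

```python
def accuracy(words_in_passage: int, errors: int) -> float:
    """Percent of words read correctly in an oral reading sample."""
    return 100.0 * (words_in_passage - errors) / words_in_passage

def words_correct_per_minute(words_in_passage: int, errors: int, seconds: float) -> float:
    """A common fluency rate: words read correctly, scaled to one minute."""
    return (words_in_passage - errors) * 60.0 / seconds

# Hypothetical running record: a 182-word passage, 9 errors, read in 95 seconds.
acc = accuracy(182, 9)
rate = words_correct_per_minute(182, 9, 95)
print(f"Accuracy: {acc:.1f}%   Rate: {rate:.0f} words correct per minute")

# Teachers often treat roughly 95-100% accuracy as "independent" and about
# 90-94% as "instructional"; the bands below are illustrative, not official cutoffs.
level = "independent" if acc >= 95 else "instructional" if acc >= 90 else "frustration"
print(f"Illustrative level: {level}")
```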
Assessment in education is the process of observing and measuring learning. Teachers evaluate a student's level of achievement and skill for the purpose of supporting and improving student learning. For reading, which is part of a language arts program consisting of listening, speaking, media and technology literacy, teachers can use authentic assessments, performance assessments and portfolio assessments in addition to tests to evaluate student progress. The Developmental Reading Assessment (D.R.A.) identifies the independent reading levels of students in kindergarten through eighth grade. The independent reading level is the level at which the student is able to read and comprehend text without assistance. Educators use the D.R.A. to determine each student's appropriate text difficulty level that challenges him enough to increase reading proficiency without being overwhelmingly difficult. Teachers administer the assessment in one-on-one conferences. The child reads a selected text out loud while the teacher tracks time and scores the child on accuracy of reading, comprehension, and fluency. The Developmental Reading Assessment carries a few challenges… Some children have trouble with their reading skills no matter how old they are. Sometimes, after-school classes and tutoring are not enough. Children sometimes need a proper reading assessment and instruction that works on a different level than most schools do. The teacher or tutor should assess the child's current reading skills and decide how to further instruct them. After all, everyone learns differently. Parents can check out a few free reading assessment resources. Assisting struggling readers with reading strategies is important work. Those responsible for teaching them need to have a firm foundation of working, practical knowledge about administering assessments and interpreting results. Knowing how curriculum works, how best practices are achieved, and how to effectively evaluate student strengths and needs can all be reflected in a practicum reading assessment portfolio. Reading assessments provide valuable insight into the skills and needs of students. The use of assessments at an early age can identify potential reading problems to increase the chances of reading success down the road. Reading assessments also allow teachers to chart growth in students' reading skills and tailor reading instruction to the weaknesses of the students. Both informal and formal reading assessments give a well-rounded look at the reading abilities of the students. Children learn the basic building blocks of reading before they enter kindergarten. Though a Pre-K student may not be able to read an entire story, he is capable of using reading skills. In order to encourage young children in their development of reading skills, teachers must first assess the reading skills of Pre-K students and then work to build upon those skills. Visual and auditory processing challenges are often the root causes for many reading difficulties among struggling learners, both young and adult. Once detected and properly diagnosed, important strategies can be developed for overcoming them. Assessment tools developed specifically for detecting such reading challenges are thoughtfully administered using prepared checklists, inventories and response charts. The Developmental Reading Assessment, or DRA, is a test that teachers administer individually to students to determine each student's reading level and fluency.
The scores can be useful to teachers in determining how much students understand about reading and also for planning instruction. To prepare for administering this test, a teacher must familiarize herself with the test format and set aside time to work with her students. Assessment and instruction go hand in hand. You cannot teach your students without assessing what they know, and continuing to assess what they have learned in your class. Assessment functions in a variety of ways when teaching any concept; when teaching beginning reading, it can be particularly useful for tracking student progress. Reading assessment provides vital information to teachers and parents about a child's ability to perform many skills that are essential for judging proficiency and comprehension. Teachers should use several assessment techniques, including informal observation and formal, standardized tests. If given frequently, these tests keep teachers updated about their students' current progress and help them diagnose potential reading difficulties that can be addressed immediately. Assessing reading skills can get very repetitive. It can be boring, and the students learn to dread having to be tested, especially if the students have reading difficulties. Finding new ways to assess student reading will help you in the long run. Paper and pencil tests are no longer the only way to assess student performance. This type of assessment is fine sometimes, but mixing up your assessment activities makes learning fun again. It can also relieve the test anxiety many students experience. The Dominie Reading Assessment is a portfolio of tests and tools that teachers can use to measure the reading and writing ability of children in kindergarten through third grade. It was developed by Dr. Diane DeFord at the University of South Carolina. Reading assessments play a key role in identifying potential reading difficulties. They guide reading instruction and ensure students make sufficient progress in their reading skills. Choosing the best reading assessment for your needs takes some research and evaluation. A reading skills test that works well for one school might not give another school the information it needs about its students. The reading assessment needs of homeschooling families might vary from those of a school. Take stock of your needs for the best assessment option. Nursing homes and hospitals require medication aides to assist in distributing and administering medications to patients. They ensure patients take the medication as prescribed at the correct time and monitor them for reactions to the drugs. With only short-term training required for the profession, almost anyone can become a medication aide. Paraprofessionals may modify resources by helping a student with language needs. For example, a paraprofessional who speaks a second language may translate test questions or homework questions into the student's primary language to help him or her understand the material more clearly. Or a paraprofessional may take his or her students to another room to read assessment questions aloud. Some students require read-alouds based on their IEPs, while others benefit from the security of having an adult read the questions on a test. As students interact with reading curricula and activities, it's important to evaluate their progress from time to time. Teachers, parents and schools need to be sure that the programs and techniques being used to improve reading are effective for each student.
Reading is such a fundamental part of academic training that achievement needs to be closely monitored. Anything less will short-change our students. Reading assessments are an integral part of any college career. Whether you are taking the SAT or GRE, your reading comprehension skills can make or break the outcome of these standardized tests. Knowing how to read does not ensure comprehension, but there are strategies to improve reading comprehension. Before beginning a shared reading session, the teacher can preview the text the class will read together. It may be a large story book that the teacher reads and displays as she goes, or it may be several copies of a short story, poem or other text that each student holds as they read. The teacher can ask students to make predictions about the text based on the title, to relate previous knowledge about the topic or to look at the cover art and pictures to determine the basis of the story. All of these forms of assessment can be… Reading teachers often have to assess their students to gauge their progress, for direct instruction and to address reading problems. This can be done formally or informally, using pre-made assessments and a combination of teacher observations documented on checklists and surveys. When used together, these measures equip the teacher with the information needed to give the most beneficial assistance to the student. Children are tested from kindergarten through grade twelve on their reading comprehension skills. Tests are mandated by both states and school districts. Upon completion of testing, parents receive reading comprehension scores that evaluate student performance. Reading comprehension exams can be scored in various ways. Reading assessments measure phonological and comprehension skills. Most of these tests are timed, requiring students to think quickly as well as critically. Students can be better prepared for these tests if they have frequent opportunities to practice and reinforce the necessary skills in the classroom or at home. Formal reading assessments allow a teacher to track a student's progress over a period of time in acquiring specific skills. These tests can be administered quickly, and the results are easily interpreted. The teacher can then use the test data to make informed decisions about teaching strategies and how to drive instruction. Teachers, parents, students and publishers use the Lexile Framework for Reading to determine a reader's ability to comprehend written text. Most standardized assessments administered in school calculate the student's current Lexile measurement. Scores range from about 200 Lexile (200L) for beginning readers to 1,700L or greater for advanced readers who can comprehend complicated and difficult reading material. A student's Lexile level depends on reading ability, not age or grade level. The Lexile Framework for Reading uses the same scale to determine a student's reading level and the difficulty level of books, magazines, websites and other reading materials. Teachers, parents and… Assessment testing has become a standard in all educational institutions. Federal and state governments use the scores of such tests to evaluate the effectiveness of schools. Assessment test scores are also used to assess how much a student has learned and retained throughout a course. These scores are reported on test assessment forms that can be difficult to interpret. 
Learning how to read assessments is essential to understanding and monitoring your educational progress or your child's learning progress. When students are learning to read, it can be useful to measure their progress with varying methods of assessment. Some teachers use reading assessment strategies to supplement district assessments, while others prefer them as more authentic measures of students' individual learning styles. Assessment testing has become standard in public schools across the United States as federal and state governments use the scores to evaluate the effectiveness of a school. It's also often used to determine federal and state funding. Parents get copies of the scores from their child's test, but they can be hard to understand if you don't know how to read assessment grades. Syllabication is the process of dividing words into syllables. Emergent readers begin grasping this concept almost immediately. Teaching students how to divide words will improve their reading fluency, making it easier for the eyes to break a word apart into sections rather than letters. The Diagnostic Assessment for Reading (DAR) is a standardized test used by teachers and schools to assess a child's reading level and plan for reading instruction. Often, schools administer these tests on a yearly basis to every student in a class, grade level or school. Tests like the DAR may also be used at other times to individually test a child or adult who struggles with reading. A diagnostic reading assessment is used to measure students' skills in each of the five components of reading: vocabulary, phonemic awareness, phonics, fluency and comprehension. It is given several times throughout the school year and helps teachers drive instruction toward the specific needs of their students. There are many types of reading assessments used in schools; some of them are meant to measure knowledge of the structure of words and language, while others gauge a child's overall comprehension and ability to evaluate and comment on what they've read. Below is a general overview of both types. The National Institutes of Health estimates that up to 15 percent of Americans are dyslexic. Because early intervention is so important in overcoming this disability, parents need to know what factors can increase the risk that their child will be affected. You may know your astrological sun sign, or even your rising sign, but your karma is different from those. It has to do with the debt you have--both the benefits and lessons unlearned--from previous lives. Some astrologers believe that we choose the time and place of our birth so that we can keep working on the lessons we need to master in order to evolve to a higher state of consciousness. Others believe that the map of the planets at the moment we are born imprints itself onto our bodies and in our minds. Read on to learn more and… Zero is one of the rock stars of mathematics. Although it was only invented as a concept in the fourth or fifth century B.C., it confounded even philosophers. They asked, "How can nothing be something?" Even so, zero has endured. And when zero encounters fractions, the bane of many beginning math students, confusion ensues.
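Syllabication, mentioned above, is also one of the quantities that readability formulas count. English syllable division has many exceptions, so any automatic count is only a rough heuristic; the vowel-group counter below is a naive sketch added for illustration, not the method used by any named assessment.

```python
VOWELS = set("aeiouy")

def estimate_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowel letters,
    dropping a trailing silent 'e' unless the word ends in '-le'.
    English has many exceptions, so treat the result as an approximation."""
    word = word.lower().strip(".,;:!?\"'")
    if not word:
        return 0
    groups = 0
    previous_was_vowel = False
    for ch in word:
        is_vowel = ch in VOWELS
        if is_vowel and not previous_was_vowel:
            groups += 1
        previous_was_vowel = is_vowel
    if word.endswith("e") and not word.endswith("le") and groups > 1:
        groups -= 1   # 'make' -> 1, while 'table' keeps its final '-le' syllable -> 2
    return max(groups, 1)

for w in ["reading", "assessment", "syllable", "make", "table"]:
    print(w, estimate_syllables(w))   # 2, 3, 3, 1, 2
```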
http://www.ehow.com/reading-assessment/
13
18
Cladistics, or phylogenetic systematics, is a system of classifying living and extinct organisms based on evolutionary ancestry as determined by grouping taxa according to "derived characters," that is, characteristics or features shared uniquely by the taxa and their common ancestor. Cladistics places heavy emphasis on objective, quantitative analysis and emphasizes evolution and genealogy in contrast to more traditional biological taxonomy with its focus on physical similarities between species. Emphasizing no particular mechanism of evolution, cladistics as a classification schema lies largely separate from much of the debate between those who favor natural selection and those who favor intelligent design. Cladistics generates diagrams, called "cladograms," that represent the evolutionary tree of life. DNA (deoxyribonucleic acid) and RNA (ribonucleic acid) sequencing data are used in many important cladistic efforts. Cladistics originated in the field of biology with the work of a German entomologist, but in recent years cladistic methods have found application in other disciplines. The word cladistics, created in 1950, is derived from the ancient Greek κλάδος, klados, or "branch." Although the emphasis of cladistics on biological lineage through millions of years is metaphorically similar to the human convention of tracing genealogical lineage through multiple generations, the two are quite different in substance, as one traces the lineage of species while the other traces the lineage of specific members of a species. The trend of cladistics toward mapping a connectedness between all species of organisms, based on the theory of descent with modification, shows metaphorical similarity with the views of some religions that humans are all connected because of a common origin. The history of the various schools or research groups that developed around the concept of biological classification was often filled with disputes, competitions, and even bitter opposition (Hull 1988). This is frequently the history of new ideas that challenge the existing paradigm, as cladism has done in presenting a strong alternative to Linnaean taxonomy. Systematics is the branch of biology that strives to discover the genealogical relationships underlying organic diversity and also constructs classifications of living things (Sober 1988, 7). There is a diversity of opinion on how genealogy and taxonomy are related. Two prominent research groups taking very different approaches from each other emerged in the mid-twentieth century (Hull 1988). One, the Sokol-Sneath school, proposed to improve on the methods of traditional Linnaean taxonomy by introducing "numerical taxonomy," which aimed to ascertain the overall similarity among organisms using objective, quantitative, and numerous characters (Hull 1988). A second group, led by the German biologist Willi Hennig (1913-1976), proposed a fundamentally new approach that emphasized classifications representing phylogeny focused on the sister-group relationship: Two taxa are sister groups if they are more closely related to each other than to a third taxon, and the evidence for this is the presence of characters that the sister groups exhibit but the third group does not exhibit (Hull 1988). That is, the sister groups share a more recent common ancestor with each other than with the third group (Hull 1988). The method emphasizes common ancestry and descent more than chronology. Hennig's 1950 work, Grundzüge einer Theorie der Phylogenetischen Systematik, published in German, began this area of cladistics.
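Hennig's sister-group criterion (two taxa are sister groups when they share a more recent common ancestor with each other than with a third taxon) is easy to make concrete on a small rooted tree. The toy tree and helper functions below are an illustrative sketch added here, not output from any real phylogenetic analysis; the topology is simply the familiar great-ape example.

```python
# A toy rooted tree written as a child -> parent map.
# Topology: ((Human, Chimp), Gorilla), Orangutan
PARENT = {
    "Human": "HC", "Chimp": "HC",
    "HC": "HCG", "Gorilla": "HCG",
    "HCG": "Root", "Orangutan": "Root",
}

def ancestors(node):
    """List of ancestors of a node, from its parent up to the root."""
    chain = []
    while node in PARENT:
        node = PARENT[node]
        chain.append(node)
    return chain

def most_recent_common_ancestor(a, b):
    """First node on a's rootward path that is also an ancestor of b."""
    b_ancestors = set(ancestors(b))
    for node in ancestors(a):
        if node in b_ancestors:
            return node
    return None

def depth(node):
    """Number of steps from the node up to the root (the root has depth 0)."""
    return len(ancestors(node))

def more_closely_related(a, b, c):
    """Hennig's criterion: True if a and b share a more recent (deeper)
    common ancestor with each other than a shares with c."""
    return depth(most_recent_common_ancestor(a, b)) > depth(most_recent_common_ancestor(a, c))

print(most_recent_common_ancestor("Human", "Chimp"))      # HC
print(more_closely_related("Human", "Chimp", "Gorilla"))  # True: Human and Chimp are sisters
print(more_closely_related("Human", "Gorilla", "Chimp"))  # False
```

On such a tree, a clade (monophyletic group) is simply an internal node together with everything below it; the sister-group test above is the relationship cladograms are built to express.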
In a 1965 paper, the German-American biologist Ernst Mayr termed the Sokal-Sneath school "phenetic," because its aim in classification was to represent the overall similarities exhibited by organisms regardless of descent (Hull 1988). He also coined the term "cladistics" ("branch") for Hennig's system, because Hennig wished to represent branching sequences (Hull 1988). Mayr considered his own view to be "evolutionary taxonomy," because it reflected both order of branching (cladistics) and degrees of divergence (phenetics) (Hull 1988). In Mayr's terms, then, there are three notable schools of biological taxonomy: cladists, who insist that only genealogy should influence classification; pheneticists, who hold that overall similarity, rather than descent, should determine classification; and evolutionary taxonomists (the heirs of traditional Linnaean taxonomists), who hold that both evolutionary descent and adaptive similarity should be used in classification (Sober 1988).

Hennig referred to his approach as phylogenetic systematics, which is the title of his 1966 book. Hennig's major book, even the 1979 version, does not contain the term "cladistics" in the index. A review paper by Dupuis observes that the term clade was introduced in 1958 by Julian Huxley, cladistic by Cain and Harrison in 1960, and cladist (for an adherent of Hennig's school) by Mayr in 1965 (Dupuis 1984). The term "phylogenetics" is often used synonymously with "cladistics."

Computer programs are widely used in cladistics, due to the highly complex nature of cladogram-generation procedures. Cladists construct cladograms, branching diagrams, to graphically depict the groups of organisms that share derived characters. Key to cladistic analysis is identifying monophyletic groups, that is, groups comprising a given species, all of that species' descendants, and nothing else (Sober 1988). In phylogenetics, a group of species is said to be paraphyletic (Greek para meaning near and phyle meaning race) if the group contains its most recent common ancestor but does not contain all the descendants of that ancestor. For instance, the traditional class Reptilia excludes birds, even though they are widely considered to have evolved from an ancestral reptile. Similarly, the traditional invertebrates are paraphyletic because the vertebrates are excluded, although the latter evolved from an invertebrate. A group comprising members drawn from separate evolutionary lines is called polyphyletic. For instance, the once-recognized order Pachydermata was found to be polyphyletic because elephants and rhinoceroses arose separately from non-pachyderms. Evolutionary taxonomists consider polyphyletic groups to be errors in classification, often occurring because convergence or other homoplasy was misinterpreted as homology.

Cladistic taxonomy requires taxa to be clades (monophyletic groups). Cladists argue, therefore, that the prevailing classification system, Linnaean taxonomy, should be reformed to eliminate all non-clades. Others, such as those in the school of evolutionary taxonomy, often use cladistic techniques and require that groups reflect phylogenies, but they also allow both monophyletic and paraphyletic groups as taxa. Following Hennig, cladists argue that paraphyly is as harmful as polyphyly. The idea is that monophyletic groups can be defined objectively by identifying synapomorphies, that is, features shared uniquely by a group of species and their most immediate common ancestor.
This cladistic approach is claimed to be more objective than the alternative approach of defining paraphyletic and polyphyletic groups based on a set of key characteristics determined by researchers. Making such determinations, cladists argue, is an inherently subjective process that is highly likely to lead to "gradistic" thinking, the notion that groups advance from "lowly" grades to "advanced" grades, which can in turn lead to teleological thinking.

A cladistic analysis organizes a certain set of information by making a distinction between characters and character states. Consider feathers, whose color may be blue in one species but red in another. In this case, "feather-color" is a character, and "red feathers" and "blue feathers" are two character states. In the "old days," before the introduction of computer analysis into cladistics, the researcher would assign the selected character states as being either plesiomorphies, character states present before the last common ancestor of the species group, or synapomorphies, character states that first appeared in the last common ancestor. Usually the researcher would make this assignment by considering one or more outgroups (organisms considered not to be part of the group in question, but nonetheless related to the group). Then, as now, only synapomorphies would be used in characterizing cladistic divisions. Next, different possible cladograms were drawn up and evaluated by looking for those having the greatest number of synapomorphies. The hope then, as now, was that the number of true synapomorphies in the cladogram would be large enough to overwhelm any unintended homoplasies caused by convergent evolution, that is, characters that resemble each other because of environmental conditions or function, but not because of common ancestry.

A well-known example of homoplasy due to convergent evolution is wings. Though the wings of birds and insects may superficially resemble one another and serve the same function, each evolved independently. If a dataset contained data on a bird and an insect that both scored "POSITIVE" for the character "presence of wings," a homoplasy would be introduced into the dataset, which could cause erroneous results.

When two alternative possible cladograms were evaluated to be equally probable, one was usually chosen based on the principle of parsimony: The most compact arrangement was likely the best hypothesis of relationship (a variation of Occam's razor, which states that the simplest explanation is most often the correct one). Another approach, particularly useful in molecular evolution, involved applying the statistical analysis of maximum likelihood to select the most likely cladogram based on a specific probability model of changes. The analysis is no longer done by hand in this way, in part because researcher selection of characters and trees introduces potential bias. These days much of the analysis is done by software: Besides the software to calculate the trees themselves, there is sophisticated statistical software to provide a more objective basis. As DNA sequencing has become easier, phylogenies are increasingly constructed with the aid of molecular data. Computational systematics allows the use of these large data sets to construct objective phylogenies. These can more accurately distinguish some true synapomorphies from homoplasies that are due to parallel evolution. Ideally, morphological, molecular, and possibly other (behavioral, etc.) phylogenies should be combined.
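To make the parsimony criterion concrete, here is a minimal sketch in Python (not from the source article) of Fitch's small-parsimony count for a single character on one candidate cladogram. The tree, the taxa, and the character states are hypothetical examples. In practice every candidate tree would be scored this way for every character, and the tree with the smallest total count of state changes would be preferred.

```python
def fitch_cost(tree, states):
    """Return (possible_states, min_changes) for the subtree `tree`.
    `tree` is a leaf name (str) or a (left, right) tuple of subtrees;
    `states` maps each leaf name to its observed character state."""
    if isinstance(tree, str):                    # leaf: its state set is just the observed state
        return {states[tree]}, 0
    left_set, left_cost = fitch_cost(tree[0], states)
    right_set, right_cost = fitch_cost(tree[1], states)
    shared = left_set & right_set
    if shared:                                   # children can agree: no extra change needed here
        return shared, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1   # disagreement: count one change

# Hypothetical candidate cladogram ((bird, bat), crocodile) scored for one character, "wings"
tree = (("bird", "bat"), "crocodile")
states = {"bird": "present", "bat": "present", "crocodile": "absent"}
print(fitch_cost(tree, states))   # cost 1: a single state change explains the data on this tree
```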
Cladistics does not assume any particular theory of evolution, but it does assume the pattern of descent with modification. Thus, cladistic methods can be, and recently have been, usefully applied to mapping descent with modification in non-biological systems, such as language families in historical linguistics and the filiation of manuscripts in textual criticism.

The starting point of a cladistic analysis is a group of species and the molecular, morphological, or other data that characterizes those species. The end result is a tree-like relationship diagram called a cladogram. The cladogram graphically represents a hypothetical evolutionary process. Cladograms are subject to revision as additional data become available.

In a cladogram, all organisms lie at the leaves, and each inner node is ideally binary (two-way). The two taxa on either side of a split are called "sister taxa" or "sister groups." Each subtree is called a "clade" and by definition is a natural group, all of whose species share a common ancestor. Each clade is set off by a series of characteristics that appear in its members, but not in the other forms from which it diverged. These identifying characteristics of a clade are its synapomorphies (shared, derived characters). For instance, hardened front wings (elytra) are a synapomorphy of beetles, while circinate vernation, or the unrolling of new fronds, is a synapomorphy of ferns.

Synonyms—The term "evolutionary tree" is often used synonymously with cladogram. The term phylogenetic tree is sometimes used synonymously with cladogram (Singh 2004), but others treat phylogenetic tree as a broader term that includes trees generated with a non-evolutionary emphasis.

Subtrees are clades—In a cladogram, all species lie at the leaves (Albert 2006). The two taxa on either side of a split are called sister taxa or sister groups. Each subtree, whether it contains one item or a hundred thousand items, is called a clade.

Two-way versus three-way forks—Many cladists require that all forks in a cladogram be two-way forks. Some cladograms include three-way or four-way forks when the data are insufficient to resolve the forking to a higher level of detail, but nodes with more than two branches are discouraged by many cladists.

Depth of a Cladogram—If a cladogram represents N species, the number of levels (the "depth") in the cladogram is on the order of log2(N) (Aldous 1996). For example, if there are 32 species of deer, a cladogram representing deer will be around 5 levels deep (because 2^5 = 32). A cladogram representing the complete tree of life, with about 10 million species, would be about 23 levels deep. This formula gives a lower limit: In most cases the actual depth will be larger, because the various branches of the cladogram will not be uniformly deep. Conversely, the depth may be shallower if forks larger than two-way forks are permitted.

Number of Distinct Cladograms—For a given set of species, the number of distinct rooted cladograms that can in theory be drawn (ignoring which cladogram best matches the species characteristics) is (Lowe 2004):

| Number of species | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | N |
| Number of cladograms | 1 | 3 | 15 | 105 | 945 | 10,395 | 135,135 | 2,027,025 | 34,459,425 | 1*3*5*7*...*(2N-3) |

This explosive growth of the number of possible cladograms explains why manual creation of cladograms becomes very difficult when the number of species is large.
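Here is a minimal sketch in Python (not from the source) of the two quantities just described: the (2N-3)!! count of distinct rooted, fully bifurcating cladograms and the log2(N) lower bound on depth.

```python
import math

def rooted_cladogram_count(n):
    """(2n - 3)!! = 1 * 3 * 5 * ... * (2n - 3): the number of distinct rooted,
    fully bifurcating cladograms on n labeled species (n >= 2)."""
    count = 1
    for k in range(3, 2 * n - 2, 2):
        count *= k
    return count

def min_depth(n):
    """Lower bound on cladogram depth when every fork is two-way."""
    return math.ceil(math.log2(n))

for n in (4, 10, 32):
    print(n, rooted_cladogram_count(n), min_depth(n))
# 4 -> 15 cladograms, depth >= 2; 10 -> 34,459,425 cladograms, depth >= 4; 32 -> depth >= 5
```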
Extinct Species in Cladograms—Cladistics makes no distinction between extinct and non-extinct species (Scott-Ram 1990), and it is appropriate to include extinct species in the group of organisms being analyzed. Cladograms based on DNA/RNA generally do not include extinct species because DNA/RNA samples from extinct species are rare. Cladograms based on morphology, especially morphological characteristics preserved in fossils, are more likely to include extinct species. Time Scale of a Cladogram—A cladogram tree has an implicit time axis (Freeman 1998), with time running forward from the base of the tree to the leaves of the tree. If the approximate date (for example, expressed as millions of years ago) of all the evolutionary forks were known, those dates could be captured in the cladogram. Thus, the time axis of the cladogram could be assigned a time scale (for example 1 cm = 1 million years), and the forks of the tree could be graphically located along the time axis. Such cladograms are called scaled cladograms. Many cladograms are not scaled along the time axis, for a variety of reasons: - Many cladograms are built from species characteristics that cannot be readily dated (for example, morphological data in the absence of fossils or other dating information) - When the characteristic data is DNA/RNA sequences, it is feasible to use sequence differences to establish the relative ages of the forks, but converting those ages into actual years requires a significant approximation of the rate of change (Carrol 1997). - Even when the dating information is available, positioning the cladogram's forks along the time axis in proportion to their dates may cause the cladogram to become difficult to understand or hard to fit within a human-readable format Summary of terminology - A clade is an ancestor species and all of its descendants - A monophyletic group is a clade - A paraphyletic group is an ancestor species and most of its descendants, usually with a specific group of descendants excluded (for example, reptiles are all the sauropsids (members of the class Sauropsida) except for birds). Most cladists discourage the use of paraphyletic groups. - A polyphyletic group is a group consisting of members from two non-overlapping monophyletic groups (for example, flying animals). Most cladists discourage the use of polyphyletic groups. - An outgroup is an organism considered not to be part of the group in question, although it is closely related to the group. - A characteristic present in both the outgroups and the ancestors is called a plesiomorphy (meaning "close form," as in close to the root ancestor; also called an ancestral state). - A characteristic that occurs only in later descendants is called an apomorphy (meaning "separate form" or "far from form," as in far from the root ancestor; also called a "derived" state) for that group. Note: The adjectives plesiomorphic and apomorphic are often used instead of "primitive" and "advanced" to avoid placing value judgments on the evolution of the character states, since both may be advantageous in different circumstances. It is not uncommon to refer informally to a collective set of plesiomorphies as a ground plan for the clade or clades they refer to. - A species or clade is basal to another clade if it holds more plesiomorphic characters than that other clade. Usually a basal group is very species-poor as compared to a more derived group. It is not a requirement that a basal group be extant. For example, palaeodicots are basal to flowering plants. 
- A clade or species located within another clade is said to be nested within that clade.

Cladistics compared with Linnaean taxonomy

Prior to the advent of cladistics, most taxonomists limited themselves to using Linnaean taxonomy for organizing lifeforms. That traditional approach used several fixed levels of a hierarchy, such as Kingdom, Phylum, Class, Order, and Family. Cladistics does not use those terms because one of its fundamental premises is that the evolutionary tree is very deep and very complex, and it is not meaningful to use a fixed number of levels. Linnaean taxonomy insists that groups reflect phylogenies, but in contrast to cladistics allows both monophyletic and paraphyletic groups as taxa. Since the early twentieth century, Linnaean taxonomists have generally attempted to make genus and lower-level taxa monophyletic.

Cladistics originated in the work of Willi Hennig, and since that time there has been a spirited debate (Wheeler 2000) about the relative merits of cladistics versus Linnaean classification and other Linnaean-associated classification systems, such as the evolutionary taxonomy advocated by Mayr (Benton 2000). Some of the debates that the cladists engaged in had been running since the nineteenth century, but they entered these debates with a new fervor (Hull 1988), as can be learned from the Foreword to Hennig (1979) in which Rosen, Nelson, and Patterson wrote the following—not about Linnaean taxonomy but about the newer evolutionary taxonomy:

Encumbered with vague and slippery ideas about adaptation, fitness, biological species and natural selection, neo-Darwinism (summed up in the "evolutionary" systematics of Mayr and Simpson) not only lacked a definable investigatory method, but came to depend, both for evolutionary interpretation and classification, on consensus or authority (Foreword, page ix).

Proponents of cladistics enumerate key distinctions between cladistics and Linnaean taxonomy as follows (Hennig 1975):

| Cladistics | Linnaean taxonomy |
| Treats all levels of the tree as equivalent. | Treats each tree level uniquely. Uses special names (such as Family, Class, Order) for each level. |
| Handles arbitrarily-deep trees. | Often must invent new level-names (such as superorder, suborder, infraorder, parvorder, magnorder) to accommodate new discoveries. Biased towards trees about 4 to 12 levels deep. |
| Discourages naming or use of groups that are not monophyletic | Accepts naming and use of paraphyletic groups |
| Primary goal is to reflect actual process of evolution | Primary goal is to group species based on morphological similarities |
| Assumes that the shape of the tree will change frequently, with new discoveries | Often responds to new discoveries by re-naming or re-levelling of Classes, Orders, and Kingdoms |
| Definitions of taxa are objective, hence free from personal interpretation | Definitions of taxa require individuals to make subjective decisions. For example, various taxonomists suggest that the number of Kingdoms is two, three, four, five, or six (see Kingdom). |
| Taxa, once defined, are permanent (e.g. "taxon X comprises the most recent common ancestor of species A and B along with its descendants") | Taxa can be renamed and eliminated (e.g. Insectivora is one of many taxa in the Linnaean system that have been eliminated). |
Proponents of Linnaean taxonomy contend that it has some advantages over cladistics, such as:

| Cladistics | Linnaean taxonomy |
| Limited to entities related by evolution or ancestry | Supports groupings without reference to evolution or ancestry |
| Does not include a process for naming species | Includes a process for giving unique names to species |
| Difficult to understand the essence of a clade, because clade definitions emphasize ancestry at the expense of meaningful characteristics | Taxa definitions based on tangible characteristics |
| Ignores sensible, clearly-defined paraphyletic groups such as reptiles | Permits clearly-defined groups such as reptiles |
| Difficult to determine if a given species is in a clade or not (for example, if clade X is defined as "most recent common ancestor of A and B along with its descendants," then the only way to determine if species Y is in the clade is to perform a complex evolutionary analysis) | Straightforward process to determine if a given species is in a taxon or not |
| Limited to organisms that evolved by inherited traits; not applicable to organisms that evolved via complex gene-sharing or lateral transfer | Applicable to all organisms, regardless of evolutionary mechanism |

How complex is the Tree of Life?

One of the arguments in favor of cladistics is that it supports arbitrarily complex, arbitrarily deep trees. Especially when extinct species are considered (both known and unknown), the complexity and depth of the tree can be very large. Every single speciation event, including all the species that are now extinct, represents an additional fork on the hypothetical, complete cladogram representing the full tree of life. Fractals can be used to represent this notion of increasing detail: As a viewpoint zooms into the tree of life, the complexity remains virtually constant (Gordon 1999). This great complexity of the tree and its associated uncertainty is one of the reasons that cladists cite for the attractiveness of cladistics over traditional taxonomy.

Proponents of non-cladistic approaches to taxonomy point to punctuated equilibrium to bolster the case that the tree of life has a finite depth and finite complexity. According to punctuated equilibrium, a species generally enters the fossil record looking much as it does when it departs the fossil record, in contrast with phyletic gradualism, whereby a species gradually changes over time into another species. If the number of species currently alive is finite, and the number of extinct species that we will ever know about is finite, then the depth and complexity of the tree of life is bounded, and there is no need to handle arbitrarily deep trees.

Applying Cladistics to other disciplines

The processes used to generate cladograms are not limited to the field of biology (Mace 2005). The generic nature of cladistics means that cladistics can be used to organize groups of items in many different realms. The only requirement is that the items have characteristics that can be identified and measured. For example, one could take a group of 200 spoken languages, measure various characteristics of each language (vocabulary, phonemes, rhythms, accents, dynamics, etc.), and then apply a cladogram algorithm to the data (a toy sketch of this idea follows below). The result will be a tree that may shed light on how, and in what order, the languages came into existence.
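As a toy illustration of the language example just mentioned, here is a minimal Python sketch (not from the source) that builds a tree from hypothetical binary language characters. Distance-based hierarchical clustering of this sort is closer in spirit to phenetics (overall similarity) than to a character-based cladistic analysis, but it shows the general pipeline: encode characteristics, compute a tree, read off the groupings. The language names and feature vectors are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

languages = ["LangA", "LangB", "LangC", "LangD"]

# Hypothetical 0/1 characters per language (e.g., presence of a sound or a word form)
characters = np.array([
    [1, 0, 1, 1, 0],   # LangA
    [1, 0, 1, 0, 0],   # LangB
    [0, 1, 0, 0, 1],   # LangC
    [0, 1, 1, 0, 1],   # LangD
])

# Average-linkage clustering on Hamming distances yields a rooted binary tree
tree = linkage(characters, method="average", metric="hamming")
print(tree)  # each row is a merge step: [cluster_i, cluster_j, distance, new_cluster_size]
print(dendrogram(tree, labels=languages, no_plot=True)["ivl"])  # leaf order of the resulting tree
```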
Thus, cladistic methods have recently been usefully applied to non-biological systems, including determining language families in historical linguistics, culture, history (Lipo 2005), and filiation of manuscripts in textual criticism. - ↑ Ernst Mayr, Evolution and the Diversity of Life (Selected Essays) (Cambridge, MA: Harvard Univ. Press, 1976). ISBN 0-674-27105-X - Albert, V. Parsimony, Phylogeny, and Genomics. Oxford University Press. ISBN 0199297304 - Aldous, D. 1996. Probability distributions on cladograms. In D. J. Aldous, and R. Pemantle, Random Discrete Structures. New York: Springer. ISBN 0387946233 - Ashlock, P. D. 1974. The uses of cladistics. Annual Review of Ecology and Systematics 5: 81-99. - Benton, M. 2000. Stems, nodes, crown clades, and rank-free lists: Is Linnaeus dead? Biological Reviews 75(4): 633-648. - Carrol, R. 1997. Patterns and Processes of Vertebrate Evolution. Cambridge University Press. ISBN 052147809X - Cavalli-Sforza, L. L. and A. W. F. Edwards. 1967. Phylogenetic analysis: Models and estimation procedures. Evol. 21(3): 550-570. - Cuénot, L. 1940. Remarques sur un essai d'arbre généalogique du règne animal. Comptes Rendus de l'Académie des Sciences de Paris 210: 23-27. - de Queiroz, K. and J. A. Gauthier. 1992. Phylogenetic taxonomy. Annual Review of Ecology and Systematics 23: 449–480. - de Queiroz, K. and J. A. Gauthier. 1994. Toward a phylogenetic system of biological nomenclature. Trends in Research in Ecology and Evolution 9(1): 27-31. - Dupuis, C. 1984. Willi Hennig's impact on taxonomic thought. Annual Review of Ecology and Systematics 15: 1-24. - Felsenstein, J. 2004. Inferring Phylogenies. Sunderland, MA: Sinauer Associates. ISBN 0878931775 - Freeman, S. 1998. Evolutionary Analysis. Prentice Hall. ISBN 0135680239 - Gordon, R. 1999. The Hierarchical Genome and Differentiation Waves. World Scientific. ISBN 9810222688 - Hamdi, H., H. Nishio, R. Zielinski, and A. Dugaiczyk. 1999. Origin and phylogenetic distribution of Alu DNA repeats: Irreversible events in the evolution of primates. Journal of Molecular Biology 289: 861–871. - Hennig, W. 1950. Grundzüge einer Theorie der Phylogenetischen Systematik. Berlin: Deutscher Zentralverlag. - Hennig, W. 1966. Phylogenetic Systematics. Urbana: University of Illinois Press. - Hennig, W. and W. Hennig. 1982. Phylogenetische Systematik. Berlin: Parey. ISBN 3489609344 - Hennig, W. 1975. Cladistic analysis or cladistic classification: A reply to Ernst Mayr. Systematic Zoology 24: 244-256. - Hennig, W. 1979. Phylogenetic Systematics. Urbana: University of Illinois Press. ISBN 0252068149 - Hull, D. L. 1979. The limits of cladism. Systematic Zoology 28: 416-440. - Hull, D. L. 1988. Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science. Chicago: The University of Chicago Press. - Kitching, I. J., P. L. Forey, C. J. Humphries, and D. M. Williams. 1998. Cladistics: The Theory and Practice of Parsimony Analysis. Systematics Association Special Volume. 11 (REV): ALL. ISBN 0198501382 - Lipo, C. 2005. Mapping Our Ancestors: Phylogenetic Approaches in Anthropology and Prehistory. Aldine Transaction. ISBN 0202307514 - Lowe, A. 2004. Ecological Genetics: Design, Analysis, and Application. Blackwell Publishing. ISBN 1405100338 - Luria, S., S. J. Gould, and S. Singer. 1981. A View of Life. Menlo Park, CA: Benjamin/Cummings. ISBN 0805366482 - Letunic, I. 2007. Interactive Tree Of Life (iTOL): an online tool for phylogenetic tree display and annotation. Bioinformatics 23(1): 127-128. 
- Mace, R. 2005. The Evolution of Cultural Diversity: A Phylogenetic Approach. London: Routledge Cavendish. ISBN 1844720993 - Mayr, E. 1982. The Growth of Biological Thought: Diversity, Evolution and Inheritance. Cambridge, MA: Harvard Univ. Press. ISBN 0674364465 - Mayr, E. 1976. Evolution and the Diversity of Life (Selected essays). Cambridge, MA: Harvard Univ. Press. ISBN 067427105X - Mayr, E. 1965. Numerical phenetics and taxonomic theory. Systematic Zoology 14:73-97. - Patterson, C. 1982. Morphological characters and homology. In K. A. Joysey and A. E. Friday, eds., Problems in Phylogenetic Reconstruction. London: Academic Press. ISBN 0123912504 - Rosen, D., G. Nelson, and C. Patterson. 1979. Phylogenetic Systematics. Urbana, IL: University of Illinois Press. ISBN 0252068149 - Scott-Ram, N. R. 1990. Transformed Cladistics, Taxonomy and Evolution. Cambridge University Press. ISBN 0521340861 - Shedlock, A. M., and N. Okada. 2000. SINE insertions: Powerful tools for molecular systematics. Bioessays 22: 148–160. - Singh, G. 2004. Plant Systematics: An Integrated Approach. Enfield, N.H.: Science. ISBN 1578083516 - Sober, E. 1988. Reconstructing the Past: Parsimony, Evolution, and Inference. Cambridge, MA: The MIT Press. ISBN 026219273X - Sokal, R. R. 1975. Mayr on cladism—and his critics. Systematic Zoology 24: 257-262. - Swofford, D. L., G. J. Olsen, P. J. Waddell, and D. M. Hillis. 1996. Phylogenetic inference. In D. M. Hillis, C. Moritz, and B. K. Mable, eds., Molecular Systematics. Sunderland, MA: Sinauer Associates. ISBN 0878932828 - Wheeler, Q. 2000. Species Concepts and Phylogenetic Theory: A Debate. Columbia University Press. ISBN 0231101430 - Wiley, E. O. 1981. Phylogenetics: The Theory and Practice of Phylogenetic Systematics. New York: Wiley Interscience. ISBN 0471059757 - Zwickl, D. J., and D. M. Hillis. 2002. Increased taxon sampling greatly reduces phylogenetic error. Systematic Biology 51: 588-598. All links retrieved May 23, 2013.
http://www.newworldencyclopedia.org/entry/Cladistics
13
15
The causes and degrees of hearing loss vary across the Deaf and hard-of-hearing community, as do methods of communication and attitudes toward deafness. In general, there are three types of hearing loss:
- Conductive loss affects the sound-conducting paths of the outer and middle ear. The degree of loss can be decreased through the use of a hearing aid or by surgery. People with conductive loss might speak softly, hear better in noisy surroundings than people with normal hearing, and experience ringing in their ears.
- Sensorineural loss affects the inner ear and the auditory nerve and can range from mild to profound. People with sensorineural loss might speak loudly, experience greater high-frequency loss, have difficulty distinguishing consonant sounds, and not hear well in noisy environments.
- Mixed loss results from both a conductive and a sensorineural loss.

Given the close relationship between oral language and hearing, Deaf and hard-of-hearing students might also have speech impairments. One's age at the time of the loss determines whether one is prelingually deaf (hearing loss before oral language acquisition) or adventitiously deaf (normal hearing during language acquisition). Those born deaf or who become deaf as very young children might have more limited speech development.
- The inability to hear does not affect an individual's native intelligence or the physical ability to produce sounds.
- Some Deaf and hard-of-hearing students are skilled lipreaders, but many are not. Many speech sounds have identical mouth movements, which can make lipreading particularly difficult. For example, "p," "b," and "m" look exactly alike on the lips, and many sounds (vowels, for example) are produced without using clearly differentiated lip movements.
- Make sure you have a Deaf or hard-of-hearing student's attention before speaking. A light touch on the shoulder, a wave, or other visual signal will help.
- Look directly at a Deaf or hard-of-hearing person during a conversation, even when an interpreter is present. Speak clearly, without shouting. If you have problems being understood, rephrase your thoughts. Writing is also a good way to clarify. E-mailing and using the Relay Service (see Other SFSU Disability Resources on the DPRC website) are very good communication alternatives outside of class.
- Make sure that your face is clearly visible. Keep your hands away from your face and mouth while speaking. Gum chewing, cigarette smoking, pencil biting, and similar obstructions of the lips can interfere with the effectiveness of communication, as can sitting with your back to a window, which leaves your face in shadow.
- Common accommodations for Deaf or hard-of-hearing students include sign language interpreters, stenocaptioners, assistive listening devices, TTY/relay services, volume control telephones, signaling devices (e.g., a flashing light to alert individuals to a door knock or ringing telephone), priority registration, early syllabus, notetakers, and captions for films and videos.

Modes of Communication

Not all Deaf or hard-of-hearing students are fluent users of all of the communication modes used in the Deaf community, just as users of spoken language are not fluent in all oral languages. For example, not all Deaf or hard-of-hearing students lipread; many Deaf individuals use sign language, but there are several types of sign language systems. American Sign Language (ASL) is a natural, visual language having its own syntax and grammatical structure.
Fingerspelling is the use of the manual alphabet to form words. Pidgin Sign English (PSE) combines aspects of ASL and English and is used in educational situations often combined with speech. Nearly every spoken language has an accompanying sign language. It is important to assign interpreters who will match the communication needs and preferences of each student. Interpreters convey all information in a given situation, including instructor's comments, class discussion, and environmental sounds. The following strategies are suggested in order to enhance the accessibility of course instruction, materials, and activities. They are general strategies designed to support individualized reasonable accommodations. - Include a Disability Access Statement on the syllabus, inviting students with disabilities to request accommodations. - Circular seating arrangements offer Deaf or hard-of-hearing students the advantage of seeing all class participants, especially in a seminar setting. - For the lecture setting, keep front seats open for students who are Deaf or hard-of-hearing and their interpreters. - Repeat the comments and questions of other students, especially those from the back rows; acknowledge who has made the comment so the Deaf or hard-of-hearing student can focus on the speaker. - When appropriate, ask for a hearing volunteer to team up with a Deaf or hard-of-hearing student for in-class assignments. - Assist the student with finding an effective notetaker or lab assistant from the class, if the student is eligible for these services. - If possible, provide transcripts of audio information. - Face the class while speaking. If an interpreter is present, make sure the student can see both you and the interpreter. Request the handout "Guidelines for Working with an Interpreter" from the DPRC. - If there is a break in the class, get the Deaf or hard-of-hearing student's attention before resuming class. - Because visual information is a Deaf student's primary means of receiving information, films, overheads, diagrams, and other visual aids are useful instructional tools. Spoken dialogue and commentary in films, videotapes, DVDs, and online course websites, should either be presented in captions or other alternate means, such as a transcript. - Be flexible: allow a Deaf or hard-of-hearing student to work with audio-visual material independently and for a longer period of time. - When in doubt about how to assist the student, ask him or her. - Allow the student the same anonymity as other students (i.e., avoid pointing out the student or the alternative arrangements to the rest of the class)
http://www.sfsu.edu/~dprc/dhohsrvc/dhh.html
13
37
In the previous section we saw how we could use the first derivative of a function to get some information about the graph of a function. In this section we are going to look at the information that the second derivative of a function can give us about the graph of a function. Before we do this we will need to get a couple of definitions out of the way. The main concept that we'll be discussing in this section is concavity. Concavity is easiest to see with a graph (we'll give the mathematical definition in a bit).

So a function is concave up if it "opens" up and the function is concave down if it "opens" down. Notice as well that concavity has nothing to do with increasing or decreasing. A function can be concave up and either increasing or decreasing. Similarly, a function can be concave down and either increasing or decreasing.

It's probably not the best way to define concavity by saying which way it "opens" since this is a somewhat nebulous definition. Here is the mathematical definition of concavity. To show that the graphs above do in fact have the concavity claimed above, here is the graph again (blown up a little to make things easier to see). So, as you can see, in the two upper graphs all of the tangent lines sketched in are below the graph of the function and these are concave up. In the lower two graphs all the tangent lines are above the graph of the function and these are concave down.

Again, notice that concavity and the increasing/decreasing aspect of the function are completely separate and do not have anything to do with each other. This is important to note because students often mix these two up and use information about one to get information about the other.

There's one more definition that we need to get out of the way. A point is called an inflection point if the function is continuous at the point and the concavity of the graph changes at that point.

Now that we have all the concavity definitions out of the way we need to bring the second derivative into the mix. We did after all start off this section saying we were going to be using the second derivative to get information about the graph. The following fact relates the second derivative of a function to its concavity. The proof of this fact is in the Proofs From Derivative Applications section of the Extras chapter.

Notice that this fact tells us that a list of possible inflection points will be those points where the second derivative is zero or doesn't exist. Be careful, however, not to make the assumption that just because the second derivative is zero or doesn't exist the point will be an inflection point. We will only know that it is an inflection point once we determine the concavity on both sides of it. It will only be an inflection point if the concavity is different on both sides of the point.

Now that we know about concavity we can use this information as well as the increasing/decreasing information from the previous section to get a pretty good idea of what a graph should look like. Let's take a look at an example of that.

Example 1: For the following function identify the intervals where the function is increasing and decreasing and the intervals where the function is concave up and concave down. Use this information to sketch the graph.

Okay, we are going to need the first two derivatives so let's get those first. Let's start with the increasing/decreasing information since we should be fairly comfortable with that after the last section. There are three critical points for this function.
Below is the number line for the first derivative. So, it looks like we've got the following intervals of increasing and decreasing. Note that from the first derivative test we can also say that one of these critical points is a relative maximum and another is a relative minimum, while the remaining critical point is neither a relative minimum nor a relative maximum.

Now let's get the intervals where the function is concave up and concave down. If you think about it, this process is almost identical to the process we use to identify the intervals of increasing and decreasing. The only difference is that we will be using the second derivative instead of the first derivative.

The first thing that we need to do is identify the possible inflection points. These will be where the second derivative is zero or doesn't exist. The second derivative in this case is a polynomial and so will exist everywhere. It will be zero at the following points. As with the increasing and decreasing part we can draw a number line and use these points to divide the number line into regions. In these regions we know that the second derivative will always have the same sign since these three points are the only places where the second derivative may change sign. Therefore, all that we need to do is pick a point from each region and plug it into the second derivative. The second derivative will then have that sign in the whole region from which the point came.

Here is the number line for the second derivative. So, it looks like we've got the following intervals of concavity. This also means that these points are all inflection points.

All this information can be a little overwhelming when going to sketch the graph. The first thing that we should do is get some starting points. The critical points and inflection points are good starting points. So, first graph these points. Now, start to the left and start graphing the increasing/decreasing information as we did in the previous section when all we had was the increasing/decreasing information. As we graph this we will make sure that the concavity information matches up with what we're graphing. Using all this information we can sketch the graph of the function.

We can use the previous example to illustrate another way to classify some of the critical points of a function as relative maximums or relative minimums. Notice that at the relative maximum the function is concave down, which means that the second derivative must be negative there. Likewise, at the relative minimum the function is concave up, so the second derivative must be positive there. As we'll see in a bit, we will need to be very careful with the remaining critical point. In this case the second derivative is zero, and that by itself does not tell us whether the point is a relative minimum, a relative maximum, or neither. We'll see some examples of this in a bit, but we need to get some other information taken care of first.

It is also important to note here that all of the critical points in this example were critical points at which the first derivative was zero, and this is required for this to work. We will not be able to use this test on critical points where the derivative doesn't exist.

Here is the test that can be used to classify some of the critical points of a function. The proof of this test is in the Proofs From Derivative Applications section of the Extras chapter.

The third part of the Second Derivative Test is important to notice. If the second derivative is zero then the critical point can be anything. Below are the graphs of three functions, all of which have a critical point at the same value of x. The second derivative of each function is zero at that point, and yet all three possibilities are exhibited.
The first graph has a relative minimum at the critical point. The next graph has a relative maximum at the critical point. Finally, the third graph has neither a relative minimum nor a relative maximum at the critical point. So, we can see that we have to be careful if we fall into the third case. For those times when we do fall into this case we will have to resort to other methods of classifying the critical point. This is usually done with the first derivative test. Let's go back and take another look at the critical points from the first example and use the Second Derivative Test on them, if possible.

Let's work one more example.

Example 3: For the following function find the inflection points and use the Second Derivative Test, if possible, to classify the critical points. Also, determine the intervals of increase/decrease and the intervals of concave up/concave down and sketch the graph of the function.

We'll need the first and second derivatives to get us started. From the first derivative we get two critical points: one where the derivative is zero and one where the derivative doesn't exist. Notice that we won't be able to use the Second Derivative Test on the critical point where the derivative doesn't exist. To classify that one we'll need the increasing/decreasing information that we'll get when we sketch the graph. We can, however, use the Second Derivative Test to classify the other critical point, so let's do that before we proceed with the sketching work. Evaluating the second derivative at that critical point, the Second Derivative Test tells us that it is a relative maximum.

Now let's proceed with the work to get the sketch of the graph, and notice that once we have the increasing/decreasing information we'll be able to classify the remaining critical point. Here is the number line for the first derivative. So, according to the first derivative test we can verify that the critical point classified above is in fact a relative maximum. We can also see that the remaining critical point is a relative minimum.

Be careful not to assume that a critical point that can't be used in the Second Derivative Test won't be a relative extremum. We've now clearly seen, both with this example and in the discussion after the test, that just because we can't use the Second Derivative Test, or the test doesn't tell us anything about a critical point, it doesn't mean that the critical point will not be a relative extremum. This is a common mistake that many students make, so be careful when using the Second Derivative Test.

Okay, let's finish the problem out. We will need the list of possible inflection points, which are the points where the second derivative is zero or doesn't exist. Here is the number line for the second derivative. Note that we will need this to see if the two points above are in fact inflection points. So, the concavity only changes at one of these two points, and so that is the only inflection point for this function.

Here is the sketch of the graph. The change of concavity at the inflection point is hard to see, but it is there; it's just a very subtle change in concavity.
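As a quick companion to the procedure in this section, here is a minimal Python/SymPy sketch. The function used is a hypothetical example, not the one worked above; it simply automates the Second Derivative Test and the check that concavity actually changes at each candidate inflection point (and it assumes the derivatives exist everywhere, as they do for a polynomial).

```python
import sympy as sp

x = sp.symbols('x')
f = x**4 - 4*x**3            # hypothetical example function

f1 = sp.diff(f, x)           # first derivative
f2 = sp.diff(f, x, 2)        # second derivative

critical = sp.solve(f1, x)       # points where f'(x) = 0
candidates = sp.solve(f2, x)     # possible inflection points: f''(x) = 0

# Second Derivative Test at each critical point
for c in critical:
    val = f2.subs(x, c)
    if val > 0:
        print(c, "relative minimum")
    elif val < 0:
        print(c, "relative maximum")
    else:
        print(c, "test inconclusive; fall back to the first derivative test")

# A candidate is an inflection point only if the concavity changes across it.
# The probe offset of 1/2 is small enough to stay between candidates for this example.
for p in sorted(candidates):
    left = f2.subs(x, p - sp.Rational(1, 2))
    right = f2.subs(x, p + sp.Rational(1, 2))
    print(p, "inflection point" if left * right < 0 else "not an inflection point")
```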
http://tutorial.math.lamar.edu/classes/calcI/ShapeofGraphPtII.aspx
13
46
Heart failure is a condition in which the heart does not pump enough blood to meet the needs of the body’s tissues. Heart failure can develop slowly over time as the result of other conditions (such as high blood pressure and coronary artery disease) that weaken the heart. It can also occur suddenly as the result of damage to the heart muscle. Common signs and symptoms of heart failure include:

Treatment for heart failure depends on its severity. Patients with very weakened hearts may need surgery or implanted devices, such as pacemakers or implantable cardioverter defibrillators. All patients need to make lifestyle changes, including restricting sodium (salt) in their diets. Doctors usually treat heart failure, and the underlying conditions that cause it, with a combination of medications. These medications include:

Other medications that may be helpful include:

Heart failure is a condition in which the heart does not pump enough blood to meet the needs of the body’s tissues. To understand what occurs in heart failure, it helps to be familiar with the anatomy of the heart and how it works. The heart is composed of two independent pumping systems, one on the right side, and the other on the left. Each has two chambers, an atrium and a ventricle. The ventricles are the major pumps in the heart.

The Right Side of the Heart. The right system receives blood from the veins of the whole body. This is "used" blood, which is poor in oxygen and rich in carbon dioxide.

The Left Side of the Heart. The left system receives blood from the lungs. This blood is now rich in oxygen.

The Valves. Valves are muscular flaps that open and close so blood will flow in the right direction. There are four valves in the heart:

The Heart's Electrical System. The heartbeats are triggered and regulated by the conducting system, a network of specialized muscle cells that form an independent electrical system in the heart muscles. These cells are connected by channels that pass chemically-triggered electrical impulses.

Heart failure is a process, not a disease. The heart doesn't "fail" in the sense of ceasing to beat (as occurs during cardiac arrest). Rather, it weakens, usually over the course of months or years, so that it is unable to pump out all the blood that enters its chambers. As a result, fluids tend to build up in the lungs and tissues, causing congestion. This is why heart failure is also sometimes referred to as "congestive heart failure."

Ways the Heart Can Fail. Heart failure can occur in several ways: The specific effects of heart failure on the body depend on whether it occurs on the left or right side of the heart. Over time, however, in either form of heart failure, the organs in the body do not receive enough oxygen and nutrients, and the body's wastes are removed slowly. Eventually, vital systems break down.

Failure on the Left Side (Left-Ventricular Heart Failure). Failure on the left side of the heart is more common than failure on the right side. The failure can be a result of abnormal systolic (contraction) or diastolic (relaxation) action:

Failure on the Right Side (Right-Ventricular Heart Failure). Failure on the right side of the heart is most often a result of failure on the left. Because the right ventricle receives blood from the veins, failure here causes the blood to back up. As a result, the veins in the body and the tissues surrounding the heart swell. This causes swelling in the feet, ankles, legs, and abdomen.
Pulmonary hypertension (increase in pressure in the lung's pulmonary artery) and lung disease may also cause right-sided heart failure. Ejection Fraction. To help determine the severity of left-sided heart failure, doctors use an ejection fraction (EF) calculation, also called a left-ventricular ejection fraction (LVEF). This is the percentage of the blood pumped out from the left ventricle during each heartbeat. An ejection fraction of 50 - 75% is considered normal. Patients with left-ventricular heart failure are classified as either having a preserved ejection fraction (greater than 50%) or a reduced ejection fraction (less than 50%). Patients with preserved LVEF heart failure are more likely to be female and older, and have a history of high blood pressure and atrial fibrillation (a disturbance in heart rhythm). Heart failure has many causes and can evolve in different ways. In all cases, the weaker pumping action of the heart means that less blood is sent to the kidneys. The kidneys respond by retaining salt and water. This in turn increases edema (fluid buildup) in the body, which causes widespread damage. Uncontrolled high blood pressure (hypertension) is a major cause of heart failure even in the absence of a heart attack. In fact, about 75% of cases of heart failure start with hypertension. It generally develops as follows: [For more information, see In-Depth Report #14:High blood pressure.] Coronary artery disease is the end result of a process called atherosclerosis (commonly called "hardening of the arteries"). It is the most common cause of heart attack and involves the build-up of unhealthy cholesterol in the arteries, with inflammation and injury in the cells of the blood vessels. The arteries narrow and become brittle. Heart failure in such cases most often results from a pumping defect in the left side of the heart. [For more information, see In-Depth Report #3: Coronary artery disease and angina; and In-Depth Report #23: Cholesterol.] People often survive heart attacks, but many eventually develop heart failure from the damage the attack does to the heart muscles. [For more information, see In-Depth Report #12: Heart attack.] The valves of the heart control the flow of blood leaving and entering the heart. Abnormalities can cause blood to back up or leak back into the heart. In the past, rheumatic fever, which scars the heart valves and prevents them from functioning properly, was a major cause of death from heart failure. Fortunately, antibiotics and other advances in industrialized countries have now made this disease a minor cause of heart failure. Birth defects may also cause abnormal valvular development. Although more children born with heart defects are now living to adulthood, they still face a higher than average risk for heart failure as they age. Cardiomyopathy is a disorder that damages the heart muscles and leads to heart failure. There are several different types. Injury to the heart muscles may cause the heart muscles to thin out (dilate) or become too thick (become hypertrophic). In either case, the heart doesn't pump correctly. Viral myocarditis is a rare viral infection that involves the heart muscle and can produce either temporary or permanent heart muscle damage. Dilated Cardiomyopathy. Dilated cardiomyopathy involves an enlarged heart ventricle. The muscles thin out, reducing the pumping action, usually on the left side. Although this condition is associated with genetic factors, the direct cause is often not known. 
(This is called idiopathic dilated cardiomyopathy.) In other cases, viral infections, alcoholism, and high blood pressure may increase the risk for this condition. Hypertrophic Cardiomyopathy. In hypertrophic cardiomyopathy, the heart muscles become thick and contract with difficulty. Some research indicates that this occurs because of a genetic defect that causes a loss of power in heart muscle cells and, subsequently, lower pumping strength. To compensate for this power loss, the heart muscle cells grow. This condition, rare in the general population, is often the cause of sudden death in young athletes. Restrictive Cardiomyopathy. Restrictive cardiomyopathy refers to a group of disorders in which the heart chambers are unable to properly fill with blood because of stiffness in the heart. The heart is of normal size or only slightly enlarged. However, it cannot relax normally during the time between heartbeats when the blood returns from the body to the heart (diastole). The most common causes of restrictive cardiomyopathy are amyloidosis and scarring of the heart from an unknown cause (idiopathic myocardial fibrosis). It frequently occurs after a heart transplant. Chronic obstructive pulmonary disease (severe emphysema) and other major lung diseases are risk factors for right-side heart failure. Pulmonary hypertension is increased pressure in the pulmonary arteries that carry blood from the heart to the lungs. The increased pressure makes the heart work harder to pump blood, which can cause heart failure. The development of right-sided heart failure in patients with pulmonary hypertension is a strong predictor of death within 6 - 12 months. An overactive thyroid (hyperthyroidism) or underactive thyroid (hypothyroidism) can have severe effects on the heart and increase the risk for heart failure. Nearly 6 million Americans are living with heart failure. About 670,000 new cases of heart failure are diagnosed each year. Although there has been a dramatic increase over the last several decades in the number of people who suffer from heart failure, survival rates have greatly improved. Coronary artery disease, heart attack, and high blood pressure are the main causes and risk factors of heart failure. Other diseases that damage or weaken the heart muscle or heart valves can also cause heart failure. Heart failure is most common in people over age 65, African-Americans, and women. Heart failure risk increases with advancing age. Heart failure is the most common reason for hospitalization in people age 65 years and older. Men are at higher risk for heart failure than women. However, women are more likely than men to develop diastolic heart failure (a failure of the heart muscle to relax normally), which is often a precursor to systolic heart failure (impaired ability to pump blood). African-Americans are more likely than Caucasians to develop heart failure before age 50 and to die from the condition. People with a family history of cardiomyopathies (diseases that damage the heart muscle) are at increased risk of developing heart failure. Researchers are investigating specific genetic variants that increase heart failure risk. People with diabetes are at high risk for heart failure, particularly if they also have coronary artery disease and high blood pressure. Some types of diabetes medications, such as rosiglitazone (Avandia) and pioglitazone (Actos), have been associated with heart failure. Chronic kidney disease caused by diabetes also increases heart failure risk. 
Obesity is associated with both high blood pressure and type 2 diabetes, conditions that place people at risk for heart failure. Evidence strongly suggests that obesity itself is a major risk factor for heart failure, particularly in women. Smoking, sedentary lifestyle, and alcohol and drug abuse can increase the risk for developing heart failure. Long-term use of high-dose anabolic steroids (male hormones used to build muscle mass) increases the risk for heart failure. The drug itraconazole (Sporanox), used to treat skin, nail, or other fungal infections, has occasionally been linked to heart failure. The cancer drug imatinib (Gleevec) has been associated with heart failure. Other chemotherapy drugs, such as doxorubicin, can increase the risk for developing heart failure years after cancer treatment. (Cancer radiation therapy to the chest can also damage the heart muscle.) Nearly 290,000 people die from heart failure each year. Nevertheless, although heart failure produces very high mortality rates, treatment advances are improving survival rates. Cardiac Cachexia. If patients with heart failure are overweight to begin with, their condition tends to be more severe. Once heart failure develops, however, an important indicator of a worsening condition is the occurrence of cardiac cachexia, which is unintentional rapid weight loss (a loss of at least 7.5% of normal weight within 6 months). Impaired Kidney Function. Heart failure weakens the heart’s ability to pump blood. This can affect other parts of the body including the kidneys (which in turn can lead to fluid build-up). Decreased kidney function is common in patients with heart failure, both as a complication of heart failure and other diseases associated with heart failure (such as diabetes). Studies suggest that, in patients with heart failure, impaired kidney function increases the risks for heart complications, including hospitalization and death. Congestion (Fluid Buildup). In left-sided heart failure, fluid builds up first in the lungs, a condition called pulmonary edema. Later, as right-sided heart failure develops, fluid builds up in the legs, feet, and abdomen. Fluid buildup is treated with lifestyle measures, such as reducing salt in the diet, as well as drugs, such as diuretics. Arrhythmias (Irregular Beatings of the Heart) Angina and Heart Attacks. While coronary artery disease is a major cause of heart failure, patients with heart failure are at continued risk for angina and heart attacks. Special care should be taken with sudden and strenuous exertion, particularly snow shoveling, during colder months. Many symptoms of heart failure result from the congestion that develops as fluid backs up into the lungs and leaks into the tissues. Other symptoms result from inadequate delivery of oxygen-rich blood to the body's tissues. Since heart failure can progress rapidly, it is essential to consult a doctor immediately if any of the following symptoms are detected: Fatigue. Patients may feel unusually tired. Shortness of Breath (Dyspnea). Fluid Retention (Edema) and Weight Gain. Patients may complain of foot, ankle, leg or abdominal swelling. In rare cases, swelling can occur in the veins of the neck. Fluid retention can cause sudden weight gain and frequent urination. Wheezing or Cough. Patients may have asthma-like wheezing, or a dry hacking cough that occurs a few hours after lying down but then stops after sitting up. Loss of Muscle Mass. 
Over time, patients may lose muscle weight due to low cardiac output and a significant reduction in physical activity. Gastrointestinal Symptoms. Patients experience loss of appetite or a sense of feeling full after eating small amounts. They may also have abdominal pain. Pulmonary Edema. When fluid in the lungs builds up, it is called pulmonary edema. When this happens, symptoms become more severe. These episodes may happen suddenly, or gradually build up over a matter of days: Abnormal Heart Rhythms. Patients may have episodes of abnormally fast or slow heart rate. Central Sleep Apnea. This sleep disorder results when the brain fails to signal the muscles to breathe during sleep. It occurs in up to half of people with heart failure. Sleep apnea causes disordered breathing at night. If heart failure progresses, the apnea may be so acute that a person, unable to breathe, may awaken from sleep in panic. Doctors can often make a preliminary diagnosis of heart failure by medical history and careful physical examination. A thorough medical history may identify risks for heart failure that include: The following physical signs, along with medical history, strongly suggest heart failure: Both blood and urine tests are used to check for problems with the liver and kidneys and to detect signs of diabetes. Lab tests can measure: An electrocardiogram (ECG) is a test that measures and records the electrical activity of the heart. It is also called an EKG. An electrocardiogram cannot diagnose heart failure, but it may indicate underlying heart problems. The test is simple and painless to perform. It may be used to diagnose: A completely normal ECG means that heart failure is unlikely. The best diagnostic test for heart failure is echocardiography. Echocardiography is a noninvasive test that uses ultrasound to image the heart as it is beating. Cardiac ultrasounds provide the following information: Doctors use information from the echocardiogram for calculating the ejection fraction (how much blood is pumped out during each heartbeat), which is important for determining the severity of heart failure. Stress echocardiography may be needed if coronary artery disease is suspected. Doctors may recommend angiography if they suspect that blockage of the coronary arteries is contributing to heart failure. This procedure is invasive. Radionuclide Ventriculography. Radionuclide ventriculography is an imaging technique that uses a tiny amount of radioactive material (called a trace element). It is very sensitive in revealing heart enlargement or evidence of fluid accumulation around the heart and lungs. It may be done at the same time as coronary artery angiography. It can help diagnose or exclude the presence of coronary artery disease and helps demonstrate how the heart works during exercise. Chest x-rays can show whether the heart is enlarged. Computed tomography (CT) and magnetic resonance imaging (MRI) may also be used to evaluate the heart valves and arteries. The exercise stress test measures heart rate, blood pressure, electrocardiographic changes, and oxygen consumption while a patient is performing physically, usually walking on a treadmill. It can help determine heart failure symptoms. Doctors also use exercise tests to evaluate long-term outlook and the effects of particular treatments. A stress test may be done using echocardiography or may be done as a nuclear stress test (myocardial perfusion imaging). 
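For readers who want to see how the ejection fraction mentioned in the echocardiography discussion above is actually computed, the standard definition (a general formula, not something specific to this report) compares the volume of blood in the left ventricle before and after each beat:

EF (%) = (EDV - ESV) / EDV x 100

where EDV is the end-diastolic volume (the ventricle at its fullest) and ESV is the end-systolic volume (what remains after contraction). For example, an EDV of 120 mL and an ESV of 60 mL give an EF of 50%; a normal ejection fraction is commonly quoted as roughly 55% or higher, and substantially lower values point toward systolic heart failure.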
Heart failure is classified into four stages (Stage A through Stage D) that reflect the development and progression of the condition. Treatment depends on the stage of heart failure. The first two stages (Stage A and Stage B) are not technically heart failure, but indicate that a patient is at high risk for developing it. Stage A. In Stage A, patients are at high risk for heart failure but do not show any symptoms or have structural damage of the heart. The first step in managing or preventing heart failure is to treat the primary conditions that cause or complicate heart failure. Risk factors include high blood pressure, heart diseases, diabetes, obesity, metabolic syndrome, and previous use of medications that damage the heart (such as some chemotherapy). Important risk factors to manage include: Stage B. Patients have a structural heart abnormality seen on echocardiogram or other imaging tests but no symptoms of heart failure. Abnormalities include left ventricular hypertrophy and low ejection fraction, asymptomatic valvular heart disease, and a previous heart attack. In addition to the treatment guidelines for Stage A, the following types of drugs and devices may be recommended for some patients: Stage C. Patients have a structural abnormality and current or previous symptoms of heart failure, including shortness of breath, fatigue, and difficulty exercising. Treatment includes those for Stage A and B plus: Stage D. Patients have end-stage symptoms that do not respond to standard treatments. Treatment includes appropriate measures used for Stages A, B, and C plus: Whenever heart failure worsens, whether quickly or chronically over time, various factors must be considered as the cause: Many different medications are used in the treatment of heart failure. They include: Angiotensin-converting enzyme (ACE) inhibitors are among the most important drugs for treating patients with heart failure. ACE inhibitors open blood vessels and decrease the workload of the heart. They are used to treat high blood pressure but can also help improve heart and lung muscle function. ACE inhibitors are particularly important for patients with diabetes, because they also help slow progression of kidney disease. Brands and Indications. ACE inhibitors are used to treat Stage A high-risk conditions such as high blood pressure, heart disease, and diabetic nerve disorders (neuropathy). They are also used to treat Stage B patients who have had a heart attack or who have left ventricular systolic disorder, and Stage C patients with heart failure. Specific brands of ACE inhibitors include: Side Effects of ACE Inhibitors: ARBs, also known as angiotensin II receptor antagonists, are similar to ACE inhibitors in their ability to open blood vessels and lower blood pressure. They may have fewer or less-severe side effects than ACE inhibitors, especially coughing, and are sometimes prescribed as an alternative to ACE inhibitors. Some patients with heart failure take an ACE inhibitor along with an ARB. Brands and Indications. ARBs are used to treat Stage A high-risk conditions such as high blood pressure and diabetic nerve disorders (neuropathy). They are also used to treat Stage B patients who have had a heart attack or who have left ventricular systolic disorder, and Stage C patients with heart failure. Specific brands include: Common Side Effects Beta blockers are almost always used in combination with other drugs, such as ACE inhibitors and diuretics. They help slow heart rate and lower blood pressure. 
When used properly, beta blockers can reduce the risk of death or rehospitalization. Brands and Indications. Beta blockers treat Stage A high blood pressure. They also treat Stage B patients (both those who have had a heart attack and those who have not had a heart attack but who have heart damage). Patients with heart failure who take beta blockers should be monitored by a specialist. The three beta blockers that are best for treating Stage C patients with heart failure are: Beta Blocker Concerns Common Side Effects Check with your doctor about any side effects. Do not stop taking these drugs on your own. Diuretics cause the kidneys to rid the body of excess salt and water. Fluid retention is a major symptom of heart failure. Aggressive use of diuretics can help eliminate excess body fluids, while reducing hospitalizations and improving exercise capacity. These drugs are also important to help prevent heart failure in patients with high blood pressure. In addition, certain diuretics, notably spironolactone (Aldactone, generic), block aldosterone, a hormone involved in heart failure. This drug class is beneficial for patients with more severe heart failure (Stages C and D). Patients taking diuretics usually take a daily dose. Under the directions and care of a doctor or nurse, some patients may be taught to adjust the amount and timing of the diuretic when they notice swelling or weight gain. Diuretics come in many brands and are generally inexpensive. Some need to be taken once a day, some twice a day. Treatment is usually started at a low dose and gradually increased. Diuretics are virtually always used in combination with other drugs, especially ACE inhibitors and beta blockers. There are three main types of diuretics: Thiazide diuretics. These include chlorothiazide (Diuril, generic), chlorthalidone (Clorpres, generic), indapamide (Lozol, generic), hydrochlorothiazide (Esidrix, generic), and metolazone (Zaroxolyn, generic). Loop diuretics. These are considered the preferred diuretic type for most patients with heart failure. Common Side Effects Aldosterone is a hormone that is critical in controlling the body's balance of salt and water. Excessive levels may play important roles in hypertension and heart failure. Drugs that block aldosterone are prescribed for some patients with symptomatic heart failure. They have been found to reduce death rates for patients with heart failure and coronary artery disease, especially after a heart attack. These blockers pose some risk for high potassium levels. Brands include: Elevated levels of potassium in the blood are also a concern with these drugs. Patients should not take potassium supplements at the same time as this drug without their doctor's knowledge and may need to avoid foods with high potassium content. Digitalis is derived from the foxglove plant. It has been used to treat heart disease since the 1700s. Digoxin (Lanoxin, generic) is the most commonly prescribed digitalis preparation. Digoxin decreases heart size and reduces certain heart rhythm disturbances (arrhythmias). Unfortunately, digitalis does not reduce mortality rates, although it does reduce hospitalizations and worsening of heart failure. Controversy has been ongoing for more than 100 years over whether the benefits of digitalis outweigh its risks and adverse effects. Digitalis may be useful for select patients with left-ventricular systolic dysfunction who do not respond to other drugs (diuretics, ACE inhibitors). 
It may also be used for patients who have atrial fibrillation. Side Effects and Problems. While digitalis is generally a safe drug, it can have toxic side effects due to overdose or other accompanying conditions. The most serious side effects are arrhythmias (abnormal heart rhythms that can be life threatening). Early signs of toxicity may be irregular heartbeat, nausea and vomiting, stomach pain, fatigue, visual disturbances (such as yellow vision, seeing halos around lights, flickering or flashing of lights), and emotional and mental disturbances. Many factors increase the chance for side effects. Digitalis also interacts with many other drugs, including quinidine, amiodarone, verapamil, flecainide, amiloride, and propafenone. For most patients with mild-to-moderate heart failure, low-dose digoxin may be as effective as higher doses. If side effects are mild, patients should still consider continuing with digitalis if they experience other benefits. Hydralazine and nitrates are two older drugs that help relax arteries and veins, thereby reducing the heart's workload and allowing more blood to reach the tissues. They are used primarily for patients who are unable to tolerate ACE inhibitors and angiotensin receptor blockers. In 2005, the FDA approved BiDil, a drug that combines isosorbide dinitrate and hydralazine. BiDil is approved to specifically treat heart failure in African-American patients. Statins are important drugs used to lower cholesterol and to prevent heart disease leading to heart failure. These drugs include lovastatin (Mevacor, generic), pravastatin (Pravachol, generic), simvastatin (Zocor, generic), fluvastatin (Lescol), atorvastatin (Lipitor), rosuvastatin (Crestor), and pitavastatin (Livalo). Atorvastatin is specifically approved to reduce the risks for hospitalization for heart failure in patients with heart disease. Aspirin. Aspirin is a type of non-steroid anti-inflammatory (NSAID). Aspirin is recommended for protecting patients with heart disease, and can safely be used with ACE inhibitors, particularly when it is taken in lower dosages (75 - 81 mg). Warfarin (Coumadin, generic). Warfarin is recommended only for patients with heart failure who also have: Nesiritide (Natrecor). Nesiritide is an intravenous drug that has been used for hospitalized patients with decompensated heart failure. Decompensated heart failure is a life-threatening condition in which heart failure progresses over the course of minutes or a few days, often as the result of a heart attack or sudden and severe heart valve problems. Because nesiritide may cause serious kidney damage and has been linked to an increased risk of death from heart failure, the drug is of limited value. Erythropoietin. Many patients with chronic heart failure are also anemic. Treatment of these patients with erythropoietin has been shown to provide some benefit for heart failure control and hospitalization risk. However, erythropoietin therapy can also increase the risk of blood clots. The exact role of this drug for the treatment of anemia in patients with heart failure is not yet decided. [For more information, see In-Depth Report #57: Anemia.] Tolvaptan. Tolvaptan (Samsca) is a drug approved for treating hyponatremia (low sodium levels) associated with heart failure and other conditions. Levosimendan. Levosimendan is an experimental drug that is being investigated as a treatment for severely ill patients with heart failure. 
It belongs to a new class of drugs called calcium sensitizers that may help improve heart contractions and blood flow. The drug appears to reduce levels of BNP (brain natriuretic peptide), a chemical marker for heart failure severity. Revascularization surgery helps to restore blood flow to the heart. It can treat blocked arteries in patients with coronary artery disease and may help select patients with heart failure. Surgery types include coronary artery bypass graft (CABG) and angioplasty (also called percutaneous coronary intervention [PCI]). CABG is a traditional type of open heart surgery. Angioplasty uses a catheter to inflate a balloon inside the artery. A metal stent may also be inserted during an angioplasty procedure. [For more information, see In-Depth Report #03: Coronary artery disease.] Pacemakers, also called pacers, help regulate the heart's beating action, especially when the heart beats too slowly. Biventricular pacers (BVPs) are a special type of pacemaker used for patients with heart failure. Because BVPs help the heart's left and right chambers beat together, this treatment is called cardiac resynchronization therapy (CRT). BVPs are recommended for patients with moderate-to-severe heart failure that is not controlled with medication therapy and who have evidence of left bundle branch block on their EKG. Left bundle branch block is a condition in which the electrical impulses in the heart do not follow their normal pattern, causing the heart to pump inefficiently. Patients with enlarged hearts are at risk for serious cardiac arrhythmias (abnormal heartbeats) that are associated with sudden death. Implantable cardioverter defibrillators (ICDs) can quickly detect life-threatening arrhythmias. The ICD is designed to convert any abnormal heart rhythm back to normal by sending an electrical shock to the heart. This action is called defibrillation. This device can also work as a pacemaker. In recent years, certain ICD models and biventricular pacemaker-defibrillators have been recalled by the manufacturers because of circuitry flaws. However, doctors stress that the chance of an ICD or pacemaker saving a person's life far outweighs the possible risks of these devices failing. Ventricular assist devices are mechanical devices that help improve the heart's pumping action. They are used as a bridge to transplant for patients who are on medications but still have severe symptoms and are awaiting a donor heart. In some cases, they may delay the need for a transplant. They may be used as short-term (less than 1 week) or longer-term support. Ventricular assist devices include: The risks and complications involved with many of these devices include bleeding, blood clots, and right-side heart failure. Infections are a particular hazard. Patients who suffer from severe heart failure and whose symptoms do not improve with drug therapy or mechanical assistance may be candidates for heart transplantation. About 2,000 heart transplant operations are performed in the United States each year, but thousands more patients wait on a list for a donor heart. The most important factor for heart transplant eligibility is overall health; chronological age is less important. Most heart transplant candidates are between the ages of 50 - 64 years. While the risks of this procedure are high, the 1-year survival rate is about 88% for men and 77% for women. Five years after a heart transplant, about 73% of men and 67% of women remain alive.
In general, the highest risk factors for death three or more years after a transplant operation are coronary artery disease and the adverse effects (infection and certain cancers) of immunosuppressive drugs used in the procedure. The rejection rates in older people appear to be similar to those of younger patients. Abiocor is a permanent implantable artificial heart. It is available only for patients who are not eligible for a heart transplant and who are not expected to live more than a month without medical treatment. The device requires a large chest cavity, which means that most women are not eligible for it. Up to half of patients hospitalized for heart failure are back in the hospital within 6 months. Many people return because of lifestyle factors, such as poor diet, failure to comply with medications, and social isolation. Programs that offer intensive follow-up to ensure that the patient complies with lifestyle changes and medication regimens at home can reduce rehospitalization and improve survival. Patients without available rehabilitation programs should seek support from local and national heart associations and groups. A strong emotional support network is also important. Patients should weigh themselves each morning and keep a record. Any changes are important: Sodium (Salt) Restriction. All patients with heart failure should limit their sodium (salt) intake to less than 1,500 mg a day, and in severe cases, very stringent salt restriction may be necessary. Patients should not add salt to their cooking and their meals. They should also avoid foods high in sodium. These salty foods include ham, bacon, hot dogs, lunch meats, prepared snack foods, dry cereal, cheese, canned soups, soy sauce, and condiments. Some patients may need to reduce the amount of water they consume. People with high cholesterol levels or diabetes require additional dietary precautions. [For more information on diet and heart health, see In-Depth Report #43: Heart-healthy diet. ] Here are some tips to lower your salt and sodium intake: People with heart failure used to be discouraged from exercising. Now, doctors think that exercise, when performed under medical supervision, is extremely important for stable patients with stable conditions. Studies have reported that patients with stable conditions who engage in regular moderate exercise (three times a week) have a better quality of life and lower mortality rates than those who do not exercise. However: Studies report benefits from specific exercises: Some people with severe heart failure may need bed rest. To reduce congestion in the lungs, the patient's upper body should be elevated. For most patients, resting in an armchair is better than lying in bed. Relaxing and contracting leg muscles is important to prevent clots. As the patient improves, a doctor will progressively recommend more activity. Stress reduction techniques, such as meditation and relaxation response methods, may have direct physical benefits. Anxiety can cause the heart to work harder and beat faster. Patients with heart failure may resort to alternative remedies. Such remedies are often ineffective and may have severe or toxic effects. Of particular note for patients with heart failure is an interaction between St. John's wort (an herbal medicine used for depression) and digoxin (a heart drug). St. John's wort can significantly interfere with this drug. Fish Oil Supplements. Some research shows that a daily capsule of fish oil may help improve survival in patients with heart failure. 
Fish oil contains omega-3 polyunsaturated fatty acids, a healthy kind of fat. However, while evidence is not conclusive, some studies have suggested that fish oil supplements may not be safe for patients with implanted cardiac defibrillators. Coenzyme Q10 and Vitamin E. Small studies suggested that coenzyme Q10 (CoQ10) may help patients with heart failure, particularly when combined with vitamin E. CoQ10 is a vitamin-like substance found in organ meats and soybean oil. More recent studies, however, have found that CoQ10 and vitamin E do not help the heart or prevent heart disease. In fact, vitamin E supplements may actually increase the risk of heart failure, especially for patients with diabetes or vascular diseases. Other Vitamins and Supplements. A wide variety of other vitamins (thiamin, B6, and C), minerals (calcium, magnesium, zinc, manganese, copper, selenium), nutritional supplements (carnitine, creatine), and herbal remedies (hawthorn) have been proposed as treatments for heart failure. None have been adequately tested. There is no evidence that a particular vitamin or supplement can cure heart failure. In any case, vitamins are best consumed through the food sources contained in a healthy diet. Generally, manufacturers of herbal remedies and dietary supplements do not need FDA approval to sell their products. Just like a drug, herbs and supplements can affect the body's chemistry, and therefore have the potential to produce side effects that may be harmful. There have been several reported cases of serious and even lethal side effects from herbal products. Always check with your doctor before using any herbal remedies or dietary supplements.
http://www.centurahealthinfo.org/In-Depth%20Reports/10/000013.htm
The Stamp Act was a tax imposed by the British government on the American colonies. British taxpayers already paid a stamp tax and Massachusetts had briefly experimented with a similar law, but the Stamp Act imposed on colonial residents went further than the existing ones. The primary goal was to raise money needed for the military defense of the colonies. This legislative act was initiated by the British prime minister George Grenville and adopted by the British Parliament. The decision was taken in March 1765 but did not take effect until November 1st of the same year. The Act imposed a tax that required colonial residents to purchase a stamp to be affixed to a number of documents. In addition to taxing legal documents such as bills of sale, wills, contracts and paper printed for official documents, it required the American population to purchase stamps for newspapers, pamphlets, posters and even playing cards. The tax was payable in scarce silver and gold coins and not in paper money, which was the most common method of payment in the colonies. According to Oliver M. Dickerson, more than one hundred thousand pounds' worth of stamps were shipped to America. (Image: stamps showing that the tax had been paid, with the value given in British currency.)

The American colonies, under their own chapters of the Sons of Liberty, had more than half a year to voice their opposition to the motherland, a period of riots and protests known as the Stamp Act Crisis. The reason colonists protested is that, for the first time, the British government had imposed an "internal" American tax, which differed from previous taxes such as the Sugar Act, viewed as a trade tax. The people most affected by this tax were lawyers, printers, merchants and ministers – some of the most influential people in society. The British were not able to enforce the act, as resistance by colonists was fierce. The Stamp Act Congress, held in New York in October 1765, was the first attempt to organize the opposition. Nine of the thirteen colonies sent a total of 27 representatives. Congress approved thirteen resolutions in the Declaration of Rights and Grievances, including "no taxation without representation", among others. The repeal of the Stamp Act took effect on March 18th, 1766, in part because of economic concerns expressed by British merchants. In order to reassert its right to tax the colonies, the British Parliament issued the Declaratory Act as a reaction to the failure of the Stamp Act, since it did not want to give up the principle of imperial taxation.

The Stamp Act was a political and economic failure for the British. Politically, they were facing the beginning of an organized effort to get rid of British rule. Economically, the revenue collected was a mere £3,292, of which £45 came from Georgia and the rest from the West Indies, Canada and Florida. See the table below for revenue by colony.

Value in £ of stamp consignments, cash revenue, stamp returns and stamps unaccounted for, by colony (see notes for explanation):

|Colony||Consignments||Cash||Returns||Balances|
|Florida, Canada and West Indies||172,587||3,293||105,139||64,155|

Notes: Consignments are the nominal value of stamped parchment supplied to the colonial distributors. Cash is the amount of money returned to Britain. Returns are the nominal value of the unsold stamped paper returned to Britain. Balances are the unsold stamped paper burned, destroyed or lost.
|Source: Adolph Koeppel, "New Discovery from British Archives on the 1765 Tax Stamps for America", Boyertown, PA: Publication of the American Revenue Association, 1962.|

List of documents affected by the Stamp Act

Stamps had to be affixed to documents or products in order for them to have legal value. Among them were: legal documents, ship's papers, wills, licenses, newspapers, pamphlets, advertisements, bills of sale, almanacs, calendars, any kind of declarations, pleas to courts, donations, inventories, testimonials, diplomas and certificates of university, college, seminary or academy of learning; affidavits, bails, business licenses, writs of covenant for levying of fines, writs of entry for suffering a common recovery, court orders, and dice and playing cards, among others. See original text for more.

Origin and purpose of the Stamp Act

British Prime Minister George Grenville originated the law and Parliament passed it virtually without any debate. The decision was justified as an extension of the stamp tax which already existed in Great Britain. The purpose of the Stamp Act was to raise revenue to pay for the military expenses incurred during the French and Indian War and for the military troops stationed in the newly conquered territories defined by the Royal Proclamation of 1763. The war lasted from 1754 to 1763; Americans and Englishmen fought together against the French and were victorious. Britain annexed the French Canadian territories and Acadia, colonies which together had approximately 80,000 French Roman Catholic residents. In order to gain their support, Parliament passed the Quebec Act in 1774, which included reforms favorable to French Catholics. France also ceded the territories along the west of the thirteen colonies. These lands were inhabited by Native American Indians who had supported the French during the war. This war changed the geopolitical and economic relations between America, France and Britain. (Map: the territories before and after the French and Indian War.)

Since the western territories were now under British authority, many colonists started moving west looking for opportunities. Indian tribes were not receptive to this new invasion and started a short-lived war led by a chief named Pontiac. All they wanted was to protect their land. In October 1763 King George III issued an order known as the Royal Proclamation of 1763. This law prohibited settlement on lands west of the Allegheny Mountains without Royal permission. The proclamation was intended not just to protect Native Indians but also to maintain control of the American colonies. Many in the British government feared that residents migrating west would start trading with the Spanish and French, which would mean a decrease in trade between England and its American colonies. Residents were therefore to be kept close to the coastline, near the existing 13 colonies. Britain had to send more troops to make sure that the proclamation was obeyed, which cost a great deal of money. As the enemy had been defeated and England had new territories to defend, the motherland started looking at its possessions from a different perspective. Since all the money and sacrifice had come from England, it was the colonies' turn to pay back, as they were the ones benefiting from their attachment to the motherland. The British view for imposing a revenue tax on the colonies was that it was time residents of the colonies paid for part of the cost of defending and protecting their own territory.

Interesting known and unknown facts about the Stamp Act
The boycott of English goods by the colonies forced the British Parliament to repeal the original Stamp Act on March 18, 1766. The Stamp Act crisis allowed the revolutionary movement to gain tactical experience and set a pattern of resistance that led to American independence. See the text of the original document of the act as enacted by the British Parliament.
http://www.stamp-act-history.com/stamp-act-1765-description/
Partition of Bengal, 1905, effected on 16 October during the viceroyalty of Lord Curzon (1899-1905), proved to be a momentous event in the history of modern Bengal. The idea of partitioning Bengal did not originate with Curzon. Bengal, which had included Bihar and Orissa since 1765, was admittedly much too large for a single province of British India. This premier province grew too vast for efficient administration and required reorganization and intelligent division. The lieutenant governor of Bengal had to administer an area of 189,000 sq miles, and by 1903 the population of the province had risen to 78.50 million. Consequently, many districts in eastern Bengal had been practically neglected because of isolation and poor communication, which made good governance almost impossible. Calcutta and its nearby districts attracted all the energy and attention of the government. The condition of peasants was miserable under the exaction of absentee landlords, and trade, commerce and education were being impaired. The administrative machinery of the province was under-staffed. Especially in east Bengal, in countryside so cut off by rivers and creeks, no special attention had been paid to the peculiar difficulties of police work till the last decade of the 19th century. Organized piracy in the waterways had existed for at least a century. Along with administrative difficulties, the problems of famine, of defence, or of language had at one time or other prompted the government to consider the redrawing of administrative boundaries. Occasional efforts were made to rearrange the administrative units of Bengal. In 1836, the upper provinces were sliced off from Bengal and placed under a lieutenant governor. In 1854, the Governor-General-in-Council was relieved of the direct administration of Bengal, which was placed under a lieutenant governor. In 1874 Assam (along with Sylhet) was severed from Bengal to form a Chief-Commissionership, and in 1898 the Lushai Hills were added to it. Proposals for partitioning Bengal were first considered in 1903. Curzon's original scheme was based on grounds of administrative efficiency. It was probably during the vociferous protests and adverse reaction against the original plan that the officials first envisaged the possible advantages of a divided Bengal. Originally, the division was made on geographical rather than on an avowedly communal basis. 'Political considerations' in this respect seemed to have been 'an afterthought'. The government's contention was that the Partition of Bengal was purely an administrative measure with three main objectives. Firstly, it wanted to relieve the government of Bengal of a part of the administrative burden and to ensure more efficient administration in the outlying districts. Secondly, the government desired to promote the development of backward Assam (ruled by a Chief Commissioner) by enlarging its jurisdiction so as to provide it with an outlet to the sea. Thirdly, the government felt the urgent necessity to unite the scattered sections of the Oriya-speaking population under a single administration.
There were further proposals to separate Chittagong and the districts of Dhaka (then Dacca) and Mymensingh from Bengal and attach them to Assam. Similarly, Chhota Nagpur was to be taken away from Bengal and incorporated with the Central Provinces. The government's proposals were officially published in January 1904. In February 1904, Curzon made an official tour of the districts of eastern Bengal with a view to assessing public opinion on the government proposals. He consulted the leading personalities of the different districts and delivered speeches at Dhaka, Chittagong and Mymensingh explaining the government's stand on partition. It was during this visit that the decision to push through an expanded scheme took hold of his mind. This would involve the creation of a self-contained new province under a Lieutenant Governor with a Legislative Council, an independent revenue authority and the transfer of so much territory as would justify a fully equipped administration. The enlarged scheme received the assent of the governments of Assam and Bengal. The new province would consist of the state of Hill Tripura, the Divisions of Chittagong, Dhaka and Rajshahi (excluding Darjeeling) and the district of Malda, amalgamated with Assam. Bengal was to surrender not only these large territories on the east but also to cede to the Central Provinces the five Hindi-speaking states. On the west it would gain Sambalpur and a minor tract of five Oriya-speaking states from the Central Provinces. Bengal would be left with an area of 141,580 sq miles and a population of 54 million, of which 42 million would be Hindus and 9 million Muslims. The new province was to be called 'Eastern Bengal and Assam', with its capital at Dhaka and subsidiary headquarters at Chittagong. It would cover an area of 106,540 sq miles with a population of 31 million, comprising 18 million Muslims and 12 million Hindus. Its administration would consist of a Legislative Council and a Board of Revenue of two members, and the jurisdiction of the Calcutta High Court would be left undisturbed. The government pointed out that the new province would have a clearly demarcated western boundary and well defined geographical, ethnological, linguistic and social characteristics. The most striking feature of the new province was that it would concentrate within its own bounds the hitherto ignored and neglected, typically homogeneous Muslim population of Bengal. Besides, the whole of the tea industry (except Darjeeling) and the greater portion of the jute-growing area would be brought under a single administration. The government of India promulgated their final decision in a Resolution dated 19 July 1905, and the Partition of Bengal was effected on 16 October of the same year. The publication of the original proposals towards the end of 1903 had aroused unprecedented opposition, especially among the influential educated middle-class Hindus. The proposed territorial adjustment seemed to touch the existing interest groups and consequently led to staunch opposition. The Calcutta lawyers apprehended that the creation of a new province would mean the establishment of a Court of Appeal at Dacca and diminish the importance of their own High Court. Journalists feared the appearance of local newspapers, which would restrict the circulation of the Calcutta press. The business community of Calcutta visualized the shift of trade from Calcutta to Chittagong, which would be the nearer, and logically the cheaper, port.
The Zamindars who owned vast landed estates both in west and east Bengal foresaw the necessity of maintaining separate establishments at Dhaka that would involve extra expenditure. The educated Bengali Hindus felt that it was a deliberate blow inflicted by Curzon at the national consciousness and growing solidarity of the Bengali-speaking population. The Hindus of Bengal, who controlled most of Bengal's commerce and the different professions and led the rural society, opined that the Bengali nation would be divided, making them a minority in a province including the whole of Bihar and Orissa. They complained that it was a veiled attempt by Curzon to strangle the spirit of nationalism in Bengal. They strongly believed that it was the prime object of the government to encourage the growth of a Muslim power in eastern Bengal as a counterpoise to thwart the rapidly growing strength of the educated Hindu community. Economic, political and communal interests combined together to intensify the opposition against the partition measure. The Indian and specially the Bengali press opposed the partition move from the very beginning. The British press, the Anglo-Indian press and even some administrators also opposed the intended measure. The partition evoked fierce protest in west Bengal, especially in Calcutta and gave a new fillip to Indian nationalism. Henceforth, the indian national congress was destined to become the main platform of the Indian nationalist movement. It exhibited unusual strength and vigour and shifted from a middle-class pressure group to a nation-wide mass organization. The leadership of the Indian National Congress viewed the partition as an attempt to 'divide and rule' and as a proof of the government's vindictive antipathy towards the outspoken Bhadralok intellectuals. Mother-goddess worshipping Bengali Hindus believed that the partition was tantamount to the vivisection of their 'Mother province'. 'Bande-Mataram' (Hail Motherland) almost became the national anthem of the Indian National Congress. Defeat of the partition became the immediate target of Bengali nationalism. Agitation against the partition manifested itself in the form of mass meetings; rural unrest and a swadeshi movement to boycott the import of British manufactured goods. Swadeshi and Boycott were the twin weapons of this nationalism and Swaraj (self-government) its main objective. Swaraj was first mentioned in the presidential address of Dadabhai Naoroji as the Congress goal at its Calcutta session in 1906. Leaders like surendranath banerjee along with journalists like Krishna Kumar Mitra, editor of the Sanjivani (13 July 1905) urged the people to boycott British goods, observe mourning and sever all contact with official bodies. In a meeting held at Calcutta on 7 August 1905 (hailed as the birthday of Indian nationalism) a resolution to abstain from purchases of British products so long as 'Partition resolution is not withdrawn' was accepted with acclaim. This national spirit was popularised by the patriotic songs of Dwijendralal Roy, Rajanikanta Sen and Rabindranath Tagore. As with other political movements of the day this also took on religious overtones. Pujas were offered to emphasize the solemn nature of the occasion. The Hindu religious fervour reached its peak on 28 September 1905, the day of the Mahalaya, the new-moon day before the puja, and thousands of Hindus gathered at the Kali temple in Calcutta. In Bengal the worship of Kali, wife of Shiva, had always been very popular. 
She possessed a two-dimensional character with mingled attributes, both generative and destructive. Simultaneously, she took great pleasure in bloody sacrifices, but she was also venerated as the great Mother associated with the conception of Bengal as the Motherland. This conception offered a solid basis for the support of political objectives stimulated by religious excitement. Kali was accepted as a symbol of the Motherland, and the priest administered the Swadeshi vow. Such a religious flavour could and did give the movement a widespread appeal among the Hindu masses, but by the same token that flavour aroused hostility in average Muslim minds. Huge protest rallies before and after Bengal's division on 16 October 1905 attracted millions of people heretofore not involved in politics. The Swadeshi Movement as an economic movement would have been quite acceptable to the Muslims, but as the movement was used as a weapon against the partition (which the greater body of the Muslims supported) and as it often had a religious colouring added to it, it antagonized Muslim minds. The new tide of national sentiment against the Partition of Bengal, originating in Bengal, spilled over into different regions of India: Punjab, the Central Provinces, Poona, Madras, Bombay and other cities. Instead of wearing foreign-made outfits, the Indians vowed to use only swadeshi (indigenous) cottons and other clothing materials made in India. Foreign garments were viewed as hateful imports. The Swadeshi Movement soon stimulated local enterprise in many areas, from Indian cotton mills to match factories, glassblowing shops, and iron and steel foundries. The agitation also generated increased demands for national education. Bengali teachers and students extended their boycott of British goods to English schools and college classrooms. The movement for national education spread throughout Bengal and reached even as far as Benaras, where Pandit Madan Mohan Malaviya founded his private Benaras Hindu University in 1910. The student community of Bengal responded with great enthusiasm to the call of nationalism. Students, including schoolboys, participated en masse in the campaigns of Swadeshi and Boycott. The government retaliated with the notorious Carlyle Circular that aimed to crush the students' participation in the Swadeshi and Boycott movements. Both the students and the teachers reacted strongly against this repressive measure, and the protest was almost universal. In fact, through this protest movement the first organised student movement was born in Bengal. Along with this the 'Anti-Circular Society', a militant student organization, also came into being. The anti-partition agitation was peaceful and constitutional at the initial stage, but when it appeared that it was not yielding the desired results, the protest movement inevitably passed into the hands of more militant leaders. The two techniques of boycott and terrorism were to be applied to make their mission successful. Consequently the younger generation, who were unwittingly drawn into politics, adopted terrorist methods, using firearms, pistols and bombs indiscriminately. The agitation soon took a turn towards anarchy and disorder. Several assassinations were committed, and attempts were made on the lives of officials including Sir Andrew Fraser. The terrorist movement soon became an integral part of the Swadeshi agitation. Bengal terrorism reached its peak from 1908 through 1910, as did the severity of official repression and the number of 'preventive detention' arrests.
The new militant spirit was reflected in the columns of the nationalist newspapers, notably the Bande Mataram, Sandhya and Jugantar. The press assisted a great deal in disseminating revolutionary ideas. In 1907, the Indian National Congress at its annual session in Surat split into two groups - one being moderate, liberal, and evolutionary; and the other extremist, militant and revolutionary. The young militants of Bal Gangadhar Tilak's extremist party supported the 'cult of the bomb and the gun', while moderate leaders like Gopal Krishna Gokhale and Surendranath Banerjee cautioned against such extremist actions, fearing they might lead to anarchy and uncontrollable violence. Surendranath Banerjee, though one of the front-rank leaders of the anti-Partition agitation, was not in favour of terrorist activities. When the proposal for partition was first published in 1903 there was expression of Muslim opposition to the scheme. The Moslem Chronicle, the Central National Muhamedan Association, Chowdhury Kazemuddin Ahmad Siddiky and Delwar Hossain Ahmed condemned the proposed measure. Even Nawab Salimullah termed the suggestion 'beastly' at the initial stage. In the beginning the main criticism from the Muslim side was against any part of an enlightened and advanced province of Bengal passing under the rule of a chief commissioner. They felt that their educational, social and other interests would thereby suffer, and there is no doubt that the Muslims also felt that the proposed measure would threaten Bengali solidarity. The Muslim intelligentsia, however, criticized the ideas of extremist militant nationalism as being against the spirit of Islam. The Muslim press urged its educated co-religionists to remain faithful to the government. On the whole the Swadeshi preachers were not able to influence and arouse the predominantly Muslim masses in east Bengal. The anti-partition trend in the thought process of the Muslims did not continue for long. When the educated section of the Muslims came to know of the wider scheme of a self-contained separate province, they soon changed their views. They realised that the partition would be a boon to them and that their special difficulties would receive greater attention from the new administration. The Muslims accorded a warm welcome to the new Lieutenant-Governor, Bampfylde Fuller. Even the Moslem Chronicle soon changed its attitude in favour of partition. Some Muslims in Calcutta also welcomed the creation of the new province. The Muhamedan Literary Society brought out a manifesto in 1905 signed by seven leading Muslim personalities. The manifesto was circulated to the different Muslim societies of both west and east Bengal and urged the Muslims to give their unqualified support to the partition measure. The creation of the new province provided an incentive to the Muslims to unite into a compact body and form an association to voice their own views and aspirations relating to social and political matters. On 16 October 1905 the Mohammedan Provincial Union was founded. All the existing organisations and societies were invited to affiliate themselves with it, and Salimullah was unanimously chosen as its patron. Even then there was a group of educated liberal Muslims who came forward and tendered support to the anti-partition agitation and the Swadeshi Movement. Though their number was insignificant, their role added a new dimension to the thought process of the Muslims. This broad-minded group supported the Indian National Congress and opposed the partition.
The most prominent among this section of the Muslims was Khwaza Atiqullah. At the Calcutta session of the Congress (1906), he moved a resolution denouncing the partition of Bengal. Abdur Rasul, Khan Bahadur Muhammad Yusuf (a pleader and a member of the Management Committee of the Central National Muhamedan Association), Mujibur Rahman, Abdul Halim Ghaznavi, Ismail Hossain Shiraji, Muhammad Gholam Hossain (a writer and a promoter of Hindu-Muslim unity), Maulvi Liaqat Hussain (a liberal Muslim who vehemently opposed the 'Divide and Rule' policy of the British), Syed Hafizur Rahman Chowdhury of Bogra and Abul Kasem of Burdwan inspired Muslims to join the anti-Partition agitation. There were even a few Muslim preachers of Swadeshi ideas, like Din Muhammad of Mymensingh and Abdul Gaffar of Chittagong. It needs to be mentioned that some of the liberal nationalist Muslims like AH Ghaznavi and Khan Bahadur Muhammad Yusuf supported the Swadeshi Movement but not the Boycott agitation. A section of the Muslim press tried to promote harmonious relations between the Hindus and the Muslims. AK Fazlul Huq and Nibaran Chandra Das preached non-communal ideas through their weekly Balaka (1901, Barisal) and monthly Bharat Suhrd (1901, Barisal). Only a small section of Muslim intellectuals could rise above their sectarian outlook and join with the Congress in the anti-partition agitation and constitutional politics. The general trend of thought in Muslim minds was in favour of partition. The All India Muslim League, founded in 1906, supported the partition. In the meeting of the Imperial Council in 1910, Shamsul Huda of Bengal and Mazhar-ul-Huq from Bihar spoke in favour of the partition. The traditional and reformist Muslim groups - the Faraizi, Wahabi and Taiyuni - supported the partition. Consequently an orthodox trend was visible in the political attitude of the Muslims. The Bengali Muslim press in general lent support to the partition. The Islam Pracharak described Swadeshi as a Hindu movement and expressed grave concern, saying that it would bring hardship to the common people. The Muslim intelligentsia in general felt concerned about the suffering of their co-religionists caused by it. They particularly disliked the movement as it was tied to the anti-partition agitation. Reputed litterateurs like Mir Mosharraf Hossain were virulent critics of the Swadeshi Movement. The greater body of Muslims at all levels remained opposed to the Swadeshi Movement, since it was used as a weapon against the partition and a religious tone was added to it. The economic aspect of the movement was partly responsible for encouraging separatist forces within Muslim society. The superiority of the Hindus in the sphere of trade and industry alarmed the Muslims. Fear of socio-economic domination by the Hindus made them alert to safeguard their own interests. These apprehensions brought about a rift in Hindu-Muslim relations. In order to avoid economic exploitation by the Hindus, some wealthy Muslim entrepreneurs came forward to launch new commercial ventures. One good attempt was the founding of steamer companies operating between Chittagong and Rangoon in 1906. In the context of the partition, the pattern of the land system in Bengal played a major role in influencing the Muslim mind. The absentee Hindu zamindars made no attempt to improve the lot of the raiyats, who were mostly Muslims. The agrarian disputes (between landlords and tenants) already in existence in the province also appeared to take a communal colour.
It was alleged that the Hindu landlords had been attempting to enforce Swadeshi ideas on the tenants and induce them to join the anti-partition movement. In 1906, the Muslims organized an Islamic conference at Keraniganj in Dhaka as a move to emphasise their separate identity as a community. The Swadeshi Movement, with its Hindu religious flavour, fomented an aggressive reaction from the other community. A red pamphlet of a highly inflammatory nature was circulated among the Muslim masses of Eastern Bengal and Assam, urging them to dissociate completely from the Hindus. It was published under the auspices of the Anjuman-i-Mufidul Islam under the editorship of a certain Ibrahim Khan. Moreover, such irritating moves as the adoption of Bande Mataram as the song of inspiration or the introduction of the cult of Shivaji as a national hero, and reports of communal violence, alienated the Muslims. One inevitable result of such preaching was the riot that broke out at Comilla in March 1907, followed by similar riots in Jamalpur in April of that year. These communal disturbances became a familiar feature in Eastern Bengal and Assam and followed a pattern that was repeated elsewhere. The 1907 riots represent a watershed in the history of modern Bengal. While Hindu-Muslim relations deteriorated, political changes of great magnitude were taking place in the Government of India's policies, and simultaneously in the relations of Bengali Muslim leaders with their non-Bengali counterparts. Both developments had major repercussions on communal relations in eastern Bengal. The decision to introduce constitutional reforms, culminating in the Morley-Minto Reforms of 1909 introducing separate representation for the Muslims, marked a turning point in Hindu-Muslim relations. The early administrators of the new province, from the lieutenant governor down to the junior-most officials, were in general enthusiastic in carrying out development works. The anti-Partition movement leaders accused Bampfylde Fuller of being extremely partial to Muslims. Because of a difference with the Government of India, he resigned in August 1906. His resignation and its prompt acceptance were considered by the Muslims to be a solid political victory for the Hindus. The general Muslim feeling was that in yielding to the pressure of the anti-Partition agitators the government had revealed its weakness and had overlooked the loyal adherence of the Muslims to the government. Consequently, the antagonism between the Hindus and Muslims became very acute in the new province. The Muslim leaders, now more conscious of their separate communal identity, directed their attention to uniting the different sections of their community in a counter-movement against that of the Hindus. They keenly felt the need for unity and believed that the Hindu agitation against the Partition was in fact a communal movement and as such a threat to the Muslims as a separate community. They decided to faithfully follow the directions of leaders like Salimullah and Nawab Ali Chowdhury and formed organisations like the Mohammedan Provincial Union. Though communalism had reached its peak in the new province by 1907, there is evidence of a sensible and sincere desire among some of the educated and upper-class Muslims and Hindus to put an end to these religious antagonisms.
A group of prominent members of both communities met the Viceroy Lord Minto on 15 March 1907 with suggestions to put an end to communal violence and promote religious harmony between the two communities. The landlord-tenant relationship in the new province had deteriorated and taken a communal turn. The Hindu landlords felt alarmed at the acts of terrorism committed by the anti-partition agitators. To prove their unswerving loyalty to the government and give evidence of their negative attitude towards the agitation, they offered their hands of friendship and co-operation to their Muslim counterparts, to the effect that they would take a non-communal stand and work unitedly against the anti-government revolutionary movements.

In the meantime the All-India Muslim League had come into being at Dacca on 30 December 1906. Though several factors were responsible for the formation of such an organization, the Partition of Bengal and the threat to it were, perhaps, the most important factors that hastened its birth. At its very first sitting at Dacca the Muslim League, in one of its resolutions, said: 'That this meeting in view of the clear interest of the Muhammadans of Eastern Bengal consider that Partition is sure to prove beneficial to the Muhammadan community which constitute the vast majority of the populations of the new province and that all such methods of agitation such as boycotting should be strongly condemned and discouraged'.

To assuage the resentment of the assertive Bengali Hindus, the British government decided to annul the Partition of Bengal. As regards the Muslims of Eastern Bengal, the government stated that although the Muslims were in an overwhelming majority in the new province in point of population, under the new arrangement they would still be in a position of approximate numerical equality with, or possibly a small superiority over, the Hindus. The interests of the Muslims would be safeguarded by special representation in the Legislative Councils and the local bodies.

Lord Hardinge succeeded Minto, and on 25 August 1911, in a secret despatch, the Government of India recommended certain changes in the administration of India. According to the suggestion of the Governor-General-in-Council, King George V at his Coronation Darbar in Delhi in December 1911 announced the revocation of the Partition of Bengal and certain changes in the administration of India. Firstly, the Government of India should have its seat at Delhi instead of Calcutta. By shifting the capital to the site of past Muslim glory, the British hoped to placate Bengal's Muslim community, now aggrieved at the loss of provincial power and privilege in eastern Bengal. Secondly, the five Bengali-speaking Divisions, viz. the Presidency, Burdwan, Dacca, Rajshahi and Chittagong, were to be united and formed into a Presidency to be administered by a Governor-in-Council. The area of this province would be approximately 70,000 sq miles with a population of 42 million. Thirdly, a Lieutenant-Governor-in-Council with a Legislative Council was to govern the province comprising Bihar, Chhota Nagpur and Orissa. Fourthly, Assam was to revert to the rule of a Chief Commissioner. The date chosen for the formal ending of the partition and the reunification of Bengal was 1 April 1912.

Reunification of Bengal indeed served somewhat to soothe the feelings of the Bengali Hindus, but the downgrading of Calcutta from imperial to mere provincial status was simultaneously a blow to 'Bhadralok' egos and to Calcutta real estate values.
Depriving Calcutta of its prime position as the nerve centre of political activity necessarily weakened the influence of the Bengali Hindus. The government felt that the main advantage to be derived from the move was that it would remove the seat of the government of India from the agitated atmosphere of Bengal. Lord Carmichael, a man of liberal sympathies, was chosen as the first Governor of reunified Bengal.

The Partition of Bengal and the agitation against it had far-reaching effects on Indian history and national life. The twin weapons of Swadeshi and Boycott adopted by the Bengalis became a creed with the Indian National Congress and were used more effectively in future conflicts. They formed the basis of Gandhi's Non-Cooperation, Satyagraha and Khadi movements. The nationalists also learned that organized political agitation and critical public opinion could force the government to accede to public demands. The annulment of the partition as a result of the agitation against it had a negative effect on the Muslims. The majority of the Muslims did not like the Congress's support for the anti-partition agitation. The politically conscious Muslims felt that the Congress had supported a Hindu agitation against the creation of a Muslim majority province. It reinforced their belief that their interests were not safe in the hands of the Congress. Thus they became more anxious to emphasize their separate communal identity and leaned towards the Muslim League to safeguard their interests against the dominance of the Hindu majority in undivided India. To placate Bengali Muslim feelings, Lord Hardinge promised a new university at Dacca to a Muslim deputation led by Salimullah on 31 January 1912.

The Partition of Bengal of 1905 left a profound impact on the political history of India. From a political angle, the measure accentuated Hindu-Muslim differences in the region. One point of view is that by giving the Muslims a separate territorial identity in 1905 and a communal electorate through the Morley-Minto Reforms of 1909, the British Government in a subtle manner tried to neutralize the possibility of major Muslim participation in the Indian National Congress. The Partition of Bengal indeed marks a turning point in the history of nationalism in India. It may be said that it was out of the travails of Bengal that Indian nationalism was born. By the same token, the agitation against the partition and the terrorism it generated were among the main factors which gave birth to Muslim nationalism and encouraged Muslims to engage in separatist politics. The birth of the Muslim League in 1906 at Dacca (Dhaka) bears testimony to this. The annulment of the partition sorely disappointed not only the Bengali Muslims but also the Muslims of the whole of India. They felt that loyalty did not pay but agitation did. Thereafter, the dejected Muslims gradually took an anti-British stance.
http://www.indohistory.com/partition_of_bengal.html
Asthma (from the Greek άσθμα, ásthma, “panting”) is a common chronic inflammatory disease of the airways characterized by variable and recurring symptoms, reversible airflow obstruction, and bronchospasm. Symptoms include wheezing, coughing, chest tightness, and shortness of breath. Asthma is clinically classified according to the frequency of symptoms, forced expiratory volume in 1 second (FEV1), and peak expiratory flow rate. Asthma may also be classified as atopic (extrinsic) or non-atopic (intrinsic). It is thought to be caused by a combination of genetic and environmental factors. Treatment of acute symptoms is usually with an inhaled short-acting beta-2 agonist (such as salbutamol). Symptoms can be prevented by avoiding triggers, such as allergens and irritants, and by inhaling corticosteroids. Leukotriene antagonists are less effective than corticosteroids and thus less preferred. Diagnosis is usually made based on the pattern of symptoms and/or response to therapy over time. The prevalence of asthma has increased significantly since the 1970s. As of 2010, 300 million people were affected worldwide. In 2009 asthma caused 250,000 deaths globally. Despite this, with proper control of asthma and step-down therapy, the prognosis is generally good.

|Severity in patients ≥ 12 years of age|Symptom frequency|Nighttime symptoms|%FEV1 of predicted|FEV1 variability|Use of short-acting beta2 agonist for symptom control (not for prevention of EIB)|
|---|---|---|---|---|---|
|Intermittent|≤2 per week|≤2 per month|≥80%|<20%|≤2 days per week|
|Mild persistent|>2 per week but not daily|3-4 per month|≥80%|20–30%|>2 days/week but not daily|
|Moderate persistent|Daily|>1 per week but not nightly|60–80%|>30%|Daily|
|Severe persistent|Throughout the day|Frequent (often 7x/week)|<60%|>30%|Several times per day|

Asthma is clinically classified according to the frequency of symptoms, forced expiratory volume in 1 second (FEV1), and peak expiratory flow rate. Asthma may also be classified as atopic (extrinsic) or non-atopic (intrinsic), based on whether symptoms are precipitated by allergens (atopic) or not (non-atopic). While asthma is classified based on severity, at the moment there is no clear method for classifying different subgroups of asthma beyond this system. Even within the classifications described above, cases respond differently to the same treatment, so it is clear that the cases within a classification have significant differences. Finding ways to identify subgroups that respond well to different types of treatments is a current critical goal of asthma research.

Although asthma is a chronic obstructive condition, it is not considered a part of chronic obstructive pulmonary disease, as that term refers specifically to combinations of disease that are irreversible, such as bronchiectasis, chronic bronchitis, and emphysema. Unlike these diseases, the airway obstruction in asthma is usually reversible; however, if left untreated, the chronic inflammation of the lungs in asthma can become irreversible obstruction due to airway remodeling. In contrast to emphysema, asthma affects the bronchi, not the alveoli.

Brittle asthma is a term used to describe two types of asthma, distinguishable by recurrent, severe attacks. Type 1 brittle asthma refers to disease with wide peak flow variability, despite intense medication. Type 2 brittle asthma describes background well-controlled asthma with sudden severe exacerbations.
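The severity table above is effectively a small decision table, and its logic can be sketched in code. The following Python fragment is only an illustrative sketch of how the published cut-offs might be combined: the function name, the parameters, and the numeric stand-ins for qualitative entries such as “daily” or “frequent” are assumptions of this example, not part of any guideline tool, and the worst-scoring criterion determines the category.

```python
# Illustrative sketch of the chronic-severity table above (patients >= 12 years).
# Thresholds follow the table; function and parameter names are hypothetical.

def classify_chronic_severity(symptom_days_per_week, continuous_daytime_symptoms,
                              nights_per_month, fev1_percent_predicted,
                              fev1_variability_percent):
    """Return the most severe category triggered by any single criterion."""
    if (continuous_daytime_symptoms                 # "throughout the day"
            or nights_per_month >= 28               # "frequent (often 7x/week)"
            or fev1_percent_predicted < 60):
        return "Severe persistent"
    if (symptom_days_per_week >= 7                  # "daily"
            or nights_per_month > 4                 # ">1 per week but not nightly"
            or fev1_percent_predicted < 80          # 60-80% of predicted
            or fev1_variability_percent > 30):
        return "Moderate persistent"
    if (symptom_days_per_week > 2                   # ">2 per week but not daily"
            or nights_per_month >= 3                # "3-4 per month"
            or fev1_variability_percent >= 20):     # 20-30% variability
        return "Mild persistent"
    return "Intermittent"

# Example: symptoms on 3 days/week, 3 nights/month, FEV1 85% predicted, 25% variability
print(classify_chronic_severity(3, False, 3, 85, 25))  # -> "Mild persistent"
```

The sketch is not clinical guidance; it simply makes explicit that the table assigns a patient to the most severe row for which any criterion is met.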
An acute asthma exacerbation is commonly referred to as an asthma attack. The classic symptoms are shortness of breath, wheezing, and chest tightness. While these are the primary symptoms of asthma, some people present primarily with coughing, and in severe cases, air motion may be significantly impaired such that no wheezing is heard. Signs which occur during an asthma attack include the use of the accessory muscles of respiration (the sternocleidomastoid and scalene muscles of the neck), a paradoxical pulse (a pulse that is weaker during inhalation and stronger during exhalation), and over-inflation of the chest. A blue color of the skin and nails may occur from lack of oxygen. In a mild exacerbation the peak expiratory flow rate (PEFR) is ≥200 L/min or ≥50% of the predicted best. Moderate is defined as between 80 and 200 L/min or 25% and 50% of the predicted best, while severe is defined as ≤80 L/min or ≤25% of the predicted best. Insufficient levels of vitamin D are linked with severe asthma attacks.

Status asthmaticus is an acute exacerbation of asthma that does not respond to standard treatments of bronchodilators and steroids. Nonselective beta blockers (such as timolol) have caused fatal status asthmaticus.

A diagnosis of asthma is common among top athletes. One survey of participants in the 1996 Summer Olympic Games, in Atlanta, Georgia, U.S., showed that 15% had been diagnosed with asthma, and that 10% were on asthma medication. There appears to be a relatively high incidence of asthma in sports such as cycling, mountain biking, and long-distance running, and a relatively lower incidence in weightlifting and diving. It is unclear how much of these disparities are from the effects of training in the sport. Exercise-induced asthma can be treated with the use of a short-acting beta2 agonist.

Asthma as a result of (or worsened by) workplace exposures is a commonly reported occupational respiratory disease. Still, most cases of occupational asthma are not reported or are not recognized as such. Estimates by the American Thoracic Society (2004) suggest that 15–23% of new-onset asthma cases in adults are work related. In one study monitoring workplace asthma by occupation, the highest percentage of cases occurred among operators, fabricators, and laborers (32.9%), followed by managerial and professional specialists (20.2%), and technical, sales, and administrative support jobs (19.2%). Most cases were associated with the manufacturing (41.4%) and services (34.2%) industries. Animal proteins, enzymes, flour, natural rubber latex, and certain reactive chemicals are commonly associated with work-related asthma. When recognized, these hazards can be mitigated, dropping the risk of disease.

Signs and symptoms

Common symptoms of asthma include wheezing, shortness of breath, chest tightness and coughing. Symptoms are often worse at night or in the early morning, or in response to exercise or cold air. Some people with asthma only rarely experience symptoms, usually in response to triggers, whereas others may have marked persistent airflow obstruction.

Gastro-esophageal reflux disease

Gastro-esophageal reflux disease coexists with asthma in 80% of people with asthma, with similar symptoms. This is due to increased lung pressures, promoting bronchoconstriction, and to chronic aspiration.
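The peak-flow thresholds for grading an exacerbation quoted earlier in this section (mild ≥200 L/min or ≥50% of best, moderate 80–200 L/min or 25–50%, severe ≤80 L/min or ≤25%) lend themselves to a tiny worked example. The Python snippet below is only an illustrative sketch of those cut-offs; the function name and the handling of absolute versus percent-of-best readings are assumptions of this sketch rather than part of any published tool.

```python
# Sketch of exacerbation grading by peak expiratory flow rate (PEFR), per the
# thresholds quoted above. Function name and structure are hypothetical.

def grade_exacerbation(pefr_l_min=None, percent_of_best=None):
    """Return 'mild', 'moderate' or 'severe' from whichever measure is supplied."""
    if pefr_l_min is not None:
        if pefr_l_min >= 200:
            return "mild"        # >= 200 L/min
        if pefr_l_min > 80:
            return "moderate"    # between 80 and 200 L/min
        return "severe"          # <= 80 L/min
    if percent_of_best is not None:
        if percent_of_best >= 50:
            return "mild"        # >= 50% of predicted best
        if percent_of_best > 25:
            return "moderate"    # between 25% and 50% of predicted best
        return "severe"          # <= 25% of predicted best
    raise ValueError("Provide pefr_l_min or percent_of_best")

# Example: a reading of 150 L/min, or 40% of personal best, both grade as moderate.
print(grade_exacerbation(pefr_l_min=150), grade_exacerbation(percent_of_best=40))
```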
Obstructive sleep apnea can also coexist with asthma, due to altered anatomy of the respiratory tract: increased upper airway adipose deposition, altered pharynx skeletal morphology, and extension of the pharyngeal airway, leading to upper airway collapse.

Asthma is caused by environmental and genetic factors. These factors influence how severe asthma is and how well it responds to medication. The interaction is complex and not fully understood. Studying the prevalence of asthma and related diseases such as eczema and hay fever has yielded important clues about some key risk factors. The strongest risk factor for developing asthma is a history of atopic disease; this increases one’s risk of hay fever by up to 5x and the risk of asthma by 3-4x. In children between the ages of 3 and 14, a positive skin test for allergies and an increase in immunoglobulin E increase the chance of having asthma. In adults, the more allergens one reacts positively to in a skin test, the higher the odds of having asthma.

Because much allergic asthma is associated with sensitivity to indoor allergens and because Western styles of housing favor greater exposure to indoor allergens, much attention has focused on increased exposure to these allergens in infancy and early childhood as a primary cause of the rise in asthma. Primary prevention studies aimed at the aggressive reduction of airborne allergens in a home with infants have shown mixed findings. Strict reduction of dust mite allergens, for example, reduces the risk of allergic sensitization to dust mites, and modestly reduces the risk of developing asthma up until the age of 8 years. However, studies also showed that the effects of exposure to cat and dog allergens worked in the converse fashion; exposure during the first year of life was found to reduce the risk of allergic sensitization and of developing asthma later in life.

The inconsistency of this data has inspired research into other facets of Western society and their impact upon the prevalence of asthma. One subject that appears to show a strong correlation is the development of asthma and obesity. In the United Kingdom and United States, the rise in asthma prevalence has echoed an almost epidemic rise in the prevalence of obesity. In Taiwan, symptoms of allergies and airway hyper-reactivity increased in correlation with each 20% increase in body-mass index. Several factors associated with obesity may play a role in the pathogenesis of asthma, including decreased respiratory function due to a buildup of adipose tissue (fat) and the fact that adipose tissue leads to a pro-inflammatory state, which has been associated with non-eosinophilic asthma.

Asthma has been associated with Churg–Strauss syndrome, and individuals with immunologically mediated urticaria may also experience systemic symptoms with generalized urticaria, rhino-conjunctivitis, orolaryngeal and gastrointestinal symptoms, asthma, and, at worst, anaphylaxis. Additionally, adult-onset asthma has been associated with periocular xanthogranulomas.

Many environmental risk factors have been associated with asthma development and morbidity in children. Maternal tobacco smoking during pregnancy and after delivery is associated with a greater risk of asthma-like symptoms, wheezing, and respiratory infections during childhood. Low air quality, from traffic pollution or high ozone levels, has been repeatedly associated with increased asthma morbidity and has a suggested association with asthma development that needs further research.
Recent studies show a relationship between exposure to air pollutants (e.g. from traffic) and childhood asthma. This research finds that both the occurrence of the disease and exacerbation of childhood asthma are affected by outdoor air pollutants. High levels of endotoxin exposure may contribute to asthma risk. Viral respiratory infections are not only one of the leading triggers of an exacerbation but may increase one’s risk of developing asthma especially in young children. Psychological stress has long been suspected of being an asthma trigger, but only in recent decades has convincing scientific evidence substantiated this hypothesis. Rather than stress directly causing the asthma symptoms, it is thought that stress modulates the immune system to increase the magnitude of the airway inflammatory response to allergens and irritants. Antibiotic use early in life has been linked to development of asthma in several examples; it is thought that antibiotics make children who are predisposed to atopic immune responses susceptible to development of asthma because they modify gut flora, and thus the immune system (as described by the hygiene hypothesis). The hygiene hypothesis (see below) is a hypothesis about the cause of asthma and other allergic disease, and is supported by epidemiologic data for asthma. All of these things may negatively affect exposure to beneficial bacteria and other immune system modulators that are important during development, and thus may cause an increased risk for asthma and allergy. Caesarean sections have been associated with asthma, possibly because of modifications to the immune system (as described by the hygiene hypothesis). Respiratory infections such as rhinovirus, Chlamydia pneumoniae and Bordetella pertussis are correlated with asthma exacerbations. Beta blocker medications such as metoprolol may trigger asthma in those who are susceptible. Observational studies have found that indoor exposure to volatile organic compounds (VOCs) may be one of the triggers of asthma, however experimental studies have not confirmed these observations. Even VOC exposure at low levels has been associated with an increase in the risk of pediatric asthma. Because there are so many VOCs in the air, measuring total VOC concentrations in the indoor environment may not represent the exposure of individual compounds. Exposure to VOCs is associated with an increase in the IL-4 producing Th2 cells and a reduction in IFN-γ producing Th1 cells. Thus the mechanism of action of VOC exposure may be allergic sensitization mediated by a Th2 cell phenotype. Different individual variations in discomfort, from no response to excessive response, were seen in one of the studies. These variations may be due to the development of tolerance during exposure. Another study has concluded that formaldehyde may cause asthma-like symptoms. Low VOC emitting materials should be used while doing repairs or renovations which decreases the symptoms related to asthma caused by VOCs and formaldehyde. In another study “the indoor concentration of aliphatic compounds (C8-C11), butanols, and 2,2,4-trimethyl 1,3-pentanediol diisobutyrate (TXIB) was significantly elevated in newly painted dwellings. The total indoor VOC was about 100 micrograms/m3 higher in dwellings painted in the last year”. The author concluded that some VOCs may cause inflammatory reactions in the airways and may be the reason for asthmatic symptoms. 
There is a significant association between asthma-like symptoms (wheezing) among preschool children and the concentration of DEHP (a phthalate) in the indoor environment. DEHP (di-ethylhexyl phthalate) is a plasticizer that is commonly used in building materials. The hydrolysis product of DEHP is MEHP (mono-ethylhexyl phthalate), which mimics the prostaglandins and thromboxanes in the airway, leading to symptoms related to asthma. Another mechanism that has been studied regarding phthalate causation of asthma is that high phthalate levels can “modulate the murine immune response to a coallergen”. Asthma can also develop in adults who come into contact with heated PVC fumes. Dust concentrations of two main types of phthalates, namely n-butyl benzyl phthalate (BBzP) and di(2-ethylhexyl) phthalate (DEHP), have been associated with the amount of polyvinyl chloride (PVC) used as flooring. Water leakage was associated more with BBzP, and building construction with high concentrations of DEHP. Asthma has also been shown to have a relationship with plaster wall materials and wall-to-wall carpeting, and the onset of asthma was related to floor-leveling plaster at home. Therefore, it is important to understand the health aspects of these materials in indoor surfaces.

Over 100 genes have been associated with asthma in at least one genetic association study. However, such studies must be repeated to ensure the findings are not due to chance. Through the end of 2005, 25 genes had been associated with asthma in six or more separate populations. Many of these genes are related to the immune system or to modulating inflammation. However, even among this list of highly replicated genes associated with asthma, the results have not been consistent among all of the populations that have been tested. This indicates that these genes are not associated with asthma under every condition, and that researchers need to do further investigation to figure out the complex interactions that cause asthma. One theory is that asthma is a collection of several diseases, and that genes might have a role in only subsets of asthma. For example, one group of genetic differences (single nucleotide polymorphisms in 17q21) was associated with asthma that develops in childhood.

|Endotoxin levels|CC genotype|TT genotype|
|---|---|---|
|High exposure|Low risk|High risk|
|Low exposure|High risk|Low risk|

Research suggests that some genetic variants may only cause asthma when they are combined with specific environmental exposures, and otherwise may not be risk factors for asthma. The genetic trait CD14 single nucleotide polymorphism (SNP) C-159T and exposure to endotoxin (a bacterial product) are a well-replicated example of a gene-environment interaction that is associated with asthma. Endotoxin exposure varies from person to person and can come from several environmental sources, including environmental tobacco smoke, dogs, and farms. Researchers have found that risk for asthma changes based on a person’s genotype at CD14 C-159T and level of endotoxin exposure.

Some individuals will have stable asthma for weeks or months and then suddenly develop an episode of acute asthma. Different asthmatic individuals react differently to various factors. However, most individuals can develop severe exacerbation of asthma from several triggering agents.
Home factors that can lead to exacerbation include dust, house mites, animal dander (especially cat and dog hair), cockroach allergens and molds. Perfumes are a common cause of acute attacks in females and children. Both viral and bacterial infections of the upper respiratory tract can worsen asthma.

One theory for the cause of the increase in asthma prevalence worldwide is the “hygiene hypothesis”—that the rise in the prevalence of allergies and asthma is a direct and unintended result of reduced exposure to a wide variety of different bacteria and virus types in modern societies, or of modern hygienic practices preventing childhood infections. Children living in less hygienic environments (East Germany vs. West Germany, families with many children, day care environments) tend to have lower incidences of asthma and allergic diseases. This seems to run counter to the logic that viruses are often causative agents in exacerbation of asthma. Additionally, other studies have shown that viral infections of the lower airway may in some cases induce asthma, as a history of bronchiolitis or croup in early childhood is a predictor of asthma risk in later life. Studies which show that upper respiratory tract infections are protective against asthma risk also tend to show that lower respiratory tract infections conversely tend to increase the risk of asthma.

The incidence of asthma is highest among low-income populations worldwide. Asthma deaths are most common in low and middle income countries, and in the Western world it is found in those low-income neighborhoods whose populations consist of large percentages of ethnic minorities. Additionally, asthma has been strongly associated with the presence of cockroaches in living quarters; these insects are more likely to be found in those same neighborhoods. Most likely due to income and geography, the incidence of and treatment quality for asthma varies among different racial groups. The prevalence of “severe persistent” asthma is also greater in low-income communities than in those with better access to treatment.

|Near-fatal asthma|High PaCO2 and/or requiring mechanical ventilation|
|Life threatening asthma|Any one of the following in a person with severe asthma: altered level of consciousness, exhaustion, arrhythmia, low blood pressure, poor respiratory effort, peak flow < 33%, oxygen saturation < 92%, PaO2 < 8 kPa, “normal” PaCO2|
|Acute severe asthma|Any one of: peak flow 33–50%, respiratory rate ≥ 25 breaths per minute, heart rate ≥ 110 beats per minute, unable to complete sentences in one breath|
|Moderate asthma exacerbation|Worsening symptoms, peak flow 50–80% of best or predicted, no features of acute severe asthma|

(Image: Obstruction of the lumen of a bronchiole by mucoid exudate, goblet cell metaplasia, epithelial basement membrane thickening and severe inflammation of the bronchiole in a patient with asthma.)

There is currently no precise physiologic, immunologic, or histologic test for diagnosing asthma. The diagnosis is usually made based on the pattern of symptoms (airways obstruction and hyperresponsiveness) and/or response to therapy (partial or complete reversibility) over time. The British Thoracic Society determines a diagnosis of asthma using a ‘response to therapy’ approach. If the patient responds to treatment, then this is considered to be a confirmation of the diagnosis of asthma. The response measured is the reversibility of airway obstruction after treatment.
Airflow in the airways is measured with a peak flow meter or spirometer, and the following diagnostic criteria are used by the British Thoracic Society:
- ≥20% difference in peak flow on at least three days in a week for at least two weeks;
- ≥20% improvement of peak flow following treatment, for example:
  - 10 minutes of inhaled β-agonist (e.g., salbutamol);
  - six weeks of inhaled corticosteroid (e.g., beclometasone);
  - 14 days of 30 mg prednisolone.
- ≥20% decrease in peak flow following exposure to a trigger (e.g., exercise).

In contrast, the US National Asthma Education and Prevention Program (NAEPP) uses a ‘symptom patterns’ approach. Their guidelines for the diagnosis and management of asthma state that a diagnosis of asthma begins by assessing if any of the following list of indicators is present. While the indicators are not sufficient to support a diagnosis of asthma, the presence of multiple key indicators increases the probability of a diagnosis of asthma. Spirometry is needed to establish a diagnosis of asthma.
- Wheezing—high-pitched whistling sounds when breathing out—especially in children. (Lack of wheezing and a normal chest examination do not exclude asthma.)
- A history of any of the following:
  - Cough, worse particularly at night
  - Recurrent wheeze
  - Recurrent difficulty in breathing
  - Recurrent chest tightness
- Symptoms occur or worsen in the presence of:
  - Viral infection
  - Animals with fur or hair
  - House-dust mites (in mattresses, pillows, upholstered furniture, carpets)
  - Smoke (tobacco, wood)
  - Changes in weather
  - Strong emotional expression (laughing or crying hard)
  - Airborne chemicals or dusts
  - Menstrual cycles
- Symptoms occur or worsen at night, awakening the patient

The latest guidelines from the U.S. National Asthma Education and Prevention Program (NAEPP) recommend spirometry at the time of initial diagnosis, after treatment is initiated and symptoms are stabilized, whenever control of symptoms deteriorates, and every 1 or 2 years on a regular basis. The NAEPP guidelines do not recommend testing peak expiratory flow as a regular screening method, because it is more variable than spirometry. However, testing peak flow at rest (or baseline) and after exercise can be helpful, especially in young patients who may experience only exercise-induced asthma. It may also be useful for daily self-monitoring and for checking the effects of new medications. Peak flow readings can be charted together with a record of symptoms, or entered into peak flow charting software. This allows patients to track their peak flow readings and pass information back to their doctor or nurse.
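To make the first BTS criterion concrete: diurnal peak flow variability is commonly expressed as the spread of a day's readings relative to that day's highest reading, and the ≥20% rule is then checked across a peak-flow diary. The short Python sketch below illustrates one plausible way to do this over a two-week diary; the exact variability formula, the week-by-week interpretation, and the function names are assumptions of this example rather than the wording of the BTS guideline.

```python
# Illustrative sketch: checking >=20% peak flow variability on at least three
# days per week across a two-week diary. The variability formula used here
# (daily amplitude as a percentage of the day's highest reading) and the
# function names are assumptions of this example.

def daily_variability(readings):
    """Variability of one day's peak flow readings, as a percentage."""
    highest, lowest = max(readings), min(readings)
    return 100.0 * (highest - lowest) / highest

def meets_variability_criterion(diary, threshold=20.0, days_per_week=3):
    """diary: 14 daily lists of readings (two weeks of morning/evening values)."""
    weeks = [diary[:7], diary[7:14]]
    return all(
        sum(daily_variability(day) >= threshold for day in week) >= days_per_week
        for week in weeks
    )

# Example diary: morning and evening readings (L/min) for 14 days.
diary = [[350, 430], [400, 420], [320, 410], [390, 400], [330, 420], [410, 415], [300, 400],
         [340, 430], [395, 405], [310, 400], [400, 410], [325, 415], [405, 410], [315, 400]]
print(meets_variability_criterion(diary))  # True: three or more >=20% days in each week
```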
Differential diagnoses include:
- Infants and Children
  - Upper airway diseases: allergic rhinitis and allergic sinusitis
  - Obstructions involving large airways:
    - Foreign body in trachea or bronchus
    - Vocal cord dysfunction
    - Vascular rings or laryngeal webs
    - Laryngotracheomalacia, tracheal stenosis, or bronchostenosis
    - Enlarged lymph nodes or tumor
  - Obstructions involving small airways:
    - Viral bronchiolitis or obliterative bronchiolitis
    - Cystic fibrosis
    - Bronchopulmonary dysplasia
    - Heart disease
  - Other causes:
    - Recurrent cough not due to asthma
    - Aspiration from swallowing mechanism dysfunction or gastroesophageal reflux
    - Medication induced
- Upper airway diseases
- COPD (e.g., chronic bronchitis or emphysema)
- Congestive heart failure
- Pulmonary embolism
- Mechanical obstruction of the airways (benign and malignant tumors)
- Pulmonary infiltration with eosinophilia
- Cough secondary to drugs (e.g., angiotensin-converting enzyme (ACE) inhibitors)
- Vocal cord dysfunction

Before diagnosing asthma, alternative possibilities should be considered, such as the use of known bronchoconstrictors (substances that cause narrowing of the airways, e.g. certain anti-inflammatory agents or beta-blockers). Among elderly people, the presenting symptom may be fatigue, cough, or difficulty breathing, all of which may be erroneously attributed to chronic obstructive pulmonary disease (COPD), congestive heart failure, or simple aging.

Chronic obstructive pulmonary disease

Chronic obstructive pulmonary disease can coexist with asthma and can occur as a complication of chronic asthma. After the age of 65, most people with obstructive airway disease will have asthma and COPD. In this setting, COPD can be differentiated by increased airway neutrophils, abnormally increased wall thickness, and increased smooth muscle in the bronchi. However, this level of investigation is not performed, because COPD and asthma share similar principles of management: corticosteroids, long acting beta agonists, and smoking cessation. COPD closely resembles asthma in symptoms, but is correlated with more exposure to cigarette smoke, an older age, less symptom reversibility after bronchodilator administration (as measured by spirometry), and a decreased likelihood of a family history of atopy. The term “atopy” was coined to describe the triad of atopic eczema, allergic rhinitis and asthma.

Pulmonary aspiration, whether direct due to dysphagia (swallowing disorder) or indirect (due to acid reflux), can show similar symptoms to asthma. However, with aspiration, fevers might also indicate aspiration pneumonia. Direct aspiration (dysphagia) can be diagnosed by performing a modified barium swallow test. If the aspiration is indirect (from acid reflux), then treatment directed at the reflux is indicated.

The evidence for the effectiveness of measures to prevent the development of asthma is weak. Measures which show some promise include: limiting smoke exposure both in utero and after delivery, breastfeeding, and increased exposure to respiratory infection per the hygiene hypothesis (such as in those who attend daycare or are from large families). A specific, customized plan for proactively monitoring and managing symptoms should be created. Someone who has asthma should understand the importance of reducing exposure to allergens, of testing to assess the severity of symptoms, and of the usage of medications. The treatment plan should be written down and adjusted according to changes in symptoms.
The most effective treatment for asthma is identifying triggers, such as cigarette smoke, pets, or aspirin, and eliminating exposure to them. If trigger avoidance is insufficient, medical treatment is recommended. The medical treatments used depend on the severity of illness and the frequency of symptoms. Specific medications for asthma are broadly classified into fast-acting and long-acting. Bronchodilators are recommended for short-term relief of symptoms. In those with occasional attacks, no other medication is needed. If mild persistent disease is present (more than two attacks a week), low-dose inhaled glucocorticoids or, alternatively, an oral leukotriene antagonist or a mast cell stabilizer is recommended. For those who suffer daily attacks, a higher dose of inhaled glucocorticoid is used. In a severe asthma exacerbation, oral glucocorticoids are added to these treatments.

Avoidance of triggers is a key component of improving control and preventing attacks. The most common triggers include: allergens, smoke (tobacco and other), air pollution, non-selective beta-blockers, and sulfite-containing foods. Medications used to treat asthma are divided into two general classes: quick-relief medications used to treat acute symptoms, and long-term control medications used to prevent further exacerbation.

- Fast acting
(Image: Salbutamol metered-dose inhaler, commonly used to treat asthma attacks.)
- Short-acting beta2-adrenoceptor agonists (SABA), such as salbutamol (albuterol USAN), are the first-line treatment for asthma symptoms.
- Anticholinergic medications, such as ipratropium bromide, provide additional benefit when used in combination with a SABA in those with moderate or severe symptoms. Anticholinergic bronchodilators can also be used if a person cannot tolerate a SABA.
- Older, less selective adrenergic agonists, such as inhaled epinephrine, have similar efficacy to SABAs. They are however not recommended due to concerns regarding excessive cardiac stimulation.

- Long term control
(Image: Fluticasone propionate metered-dose inhaler, commonly used for long-term control.)
- Glucocorticoids are the most effective treatment available for long term control. Inhaled forms are usually used, except in the case of severe persistent disease, in which oral steroids may be needed. Inhaled formulations may be used once or twice daily, depending on the severity of symptoms.
- Long-acting beta-adrenoceptor agonists (LABA) have at least a 12-hour effect. They are however not to be used without a steroid, due to an increased risk of severe symptoms. In December 2008, members of the FDA’s drug-safety office recommended withdrawing approval for these medications in children. Discussion is ongoing about their use in adults.
- Leukotriene antagonists (such as zafirlukast) are an alternative to inhaled glucocorticoids, but are not preferred. They may also be used in addition to inhaled glucocorticoids, but in this role are second line to LABA.
- Mast cell stabilizers (such as cromolyn sodium) are another non-preferred alternative to glucocorticoids.

- Delivery methods
Medications are typically provided as metered-dose inhalers (MDIs) in combination with an asthma spacer or as a dry powder inhaler. The spacer is a plastic cylinder that mixes the medication with air, making it easier to receive a full dose of the drug. A nebulizer may also be used.
Nebulizers and spacers are equally effective in those with mild to moderate symptoms; however, insufficient evidence is available to determine whether a difference exists in those with severe symptoms.

- Safety and adverse effects
Long-term use of glucocorticoids carries a significant potential for adverse effects. The incidence of cataracts is increased in people undergoing treatment for asthma with corticosteroids, due to altered regulation of lens epithelial cells. The incidence of osteoporosis is also increased, due to changes in bone remodeling.

When an asthma attack is unresponsive to usual medications, other options are available for emergency management.
- Oxygen is used to alleviate hypoxia if saturations fall below 92%.
- Magnesium sulfate intravenous treatment has been shown to provide a bronchodilating effect when used in addition to other treatment in severe acute asthma attacks.
- Heliox, a mixture of helium and oxygen, may also be considered in severe unresponsive cases.
- Intravenous salbutamol is not supported by available evidence and is thus used only in extreme cases.
- Methylxanthines (such as theophylline) were once widely used, but do not add significantly to the effects of inhaled beta-agonists.
- The dissociative anesthetic ketamine is theoretically useful if intubation and mechanical ventilation are needed in people who are approaching respiratory arrest; however, there is no evidence from clinical trials to support this.

Many asthma patients, like those who suffer from other chronic disorders, use alternative treatments; surveys show that roughly 50% of asthma patients use some form of unconventional therapy. There is little data to support the effectiveness of most of these therapies. Evidence is insufficient to support the usage of vitamin C. Acupuncture is not recommended for treatment, as there is insufficient evidence to support its use. There is no evidence that air ionisers improve asthma symptoms or benefit lung function; this applies equally to positive and negative ion generators. Dust mite control measures, including air filtration, chemicals to kill mites, vacuuming, mattress covers and other methods, had no effect on asthma symptoms. However, a review of 30 studies found that “bedding encasement might be an effective asthma treatment under some conditions” (when the patient is highly allergic to dust mites and the intervention reduces the dust mite exposure level from high levels to low levels). Washing laundry and rugs in hot water was also found to improve control of allergens. A study of “manual therapies” for asthma, including osteopathic, chiropractic, physiotherapeutic and respiratory therapeutic manoeuvres, found there is insufficient evidence to support or refute their use in treating asthma. The Buteyko breathing technique for controlling hyperventilation may result in a reduction in medication use; however, it does not have any effect on lung function. Thus an expert panel felt that evidence was insufficient to support its use.

The prognosis for asthma is good, especially for children with mild disease. Of asthma diagnosed during childhood, 54% of cases will no longer carry the diagnosis after a decade. The extent of permanent lung damage in people with asthma is unclear. Airway remodeling is observed, but it is unknown whether the changes it brings are harmful or beneficial.
Although conclusions from studies are mixed, most studies show that early treatment with glucocorticoids prevents or ameliorates decline in lung function as measured by several parameters. For those who continue to suffer from mild symptoms, corticosteroids can help most to live their lives with few disabilities. Early initiation of inhaled corticosteroids, as soon as asthma attacks begin to occur, is therefore generally favoured. According to studies conducted, patients with relatively mild asthma who received inhaled corticosteroids within 12 months of their first asthma symptoms achieved good functional control of asthma after 10 years of individualized therapy, as compared to patients who received this medication 2 years (or more) after their first attacks. Though the delayed-treatment patients also had good functional control of asthma, they were observed to exhibit slightly less optimal disease control and more signs of airway inflammation. Asthma mortality has decreased over the last few decades due to better recognition and improvement in care.

(Map: Disability-adjusted life years for asthma per 100,000 inhabitants in 2004.)

The prevalence of childhood asthma in the United States has increased since 1980, especially in younger children. As of 2009, 300 million people worldwide were affected by asthma, leading to approximately 250,000 deaths per year. It is estimated that asthma has a 7-10% prevalence worldwide. As of 1998, there was a great disparity in prevalence across the world (as high as a 20 to 60-fold difference), with a trend toward more developed and westernized countries having higher rates of asthma. Westernization, however, does not explain the entire difference in asthma prevalence between countries, and the disparities may also be affected by differences in genetic, social and environmental risk factors. Mortality, however, is most common in low to middle income countries, while symptoms were most prevalent (as much as 20%) in the United Kingdom, Australia, New Zealand, and the Republic of Ireland; they were lowest (as low as 2–3%) in Eastern Europe, Indonesia, Greece, Uzbekistan, India, and Ethiopia. While asthma is more common in affluent countries, it is by no means a problem restricted to them; the WHO estimates that there are between 15 and 20 million people with asthma in India. In the U.S., urban residents, Hispanics, and African Americans are affected more than the population as a whole. Striking increases in asthma prevalence have been observed in populations migrating from a rural environment to an urban one, or from a third-world country to a Westernized one.

Asthma affects approximately 7% of the population of the United States and 5% of people in the United Kingdom. Asthma causes 4,210 deaths per year in the United States. In 2005 in the United States asthma affected more than 22 million people, including 6 million children, and it accounted for nearly half a million hospitalizations. More boys have asthma than girls, but more women have it than men. Of all children, African Americans and Latinos who live in cities are more at risk for developing asthma. African American children in the U.S. are four times more likely to die of asthma and three times more likely to be hospitalized, compared to their white counterparts. In some Latino neighborhoods, as many as one in three children has been found to have asthma.
In England, an estimated 261,400 people were newly diagnosed with asthma in 2005; 5.7 million people had an asthma diagnosis and were prescribed 32.6 million asthma-related prescriptions. The frequency of atopic dermatitis, asthma, urticaria and allergic contact dermatitis has been found to be lower in psoriatic patients. Rates of asthma increased significantly between the 1960s and 2008. Some 9% of US children had asthma in 2001, compared with just 3.6% in 1980. The World Health Organization (WHO) reports that some 10% of the Swiss population suffers from asthma today, compared with just 2% some 25–30 years ago. Asthma prevalence in the US is higher than in most other countries in the world, but varies drastically between diverse US populations. In the US, asthma prevalence is highest in Puerto Ricans, African Americans, Filipinos, Irish Americans, and Native Hawaiians, and lowest in Mexicans and Koreans. Mortality rates follow similar trends, and response to salbutamol is lower in Puerto Ricans than in African Americans or Mexicans. As with worldwide asthma disparities, differences in asthma prevalence, mortality, and drug response in the US may be explained by differences in genetic, social and environmental risk factors. Asthma prevalence also differs between populations of the same ethnicity who are born and live in different places. US-born Mexican populations, for example, have higher asthma rates than non-US born Mexican populations that are living in the US. There is no correlation between asthma and gender in children. More adult women are diagnosed with asthma than adult men, but this does not necessarily mean that more adult women have asthma.

Asthma was first recognized and named by Hippocrates circa 450 BC. During the 1930s–50s, asthma was considered one of the ‘holy seven’ psychosomatic illnesses. Its aetiology was considered to be psychological, with treatment often based on psychoanalysis and other ‘talking cures’. As these psychoanalysts interpreted the asthmatic wheeze as the suppressed cry of the child for its mother, they considered that the treatment of depression was especially important for individuals with asthma. One of the first papers on asthma in modern medicine, published in 1873, tried to explain the pathophysiology of the disease, and one of the first papers discussing treatment, published in 1872, concluded that asthma could be cured by rubbing the chest with chloroform liniment. One of the earliest references to drug treatment dates to 1880, when Dr. J. B. Berkart used the intravenous route to administer doses of a drug called pilocarpine. In 1886, F.H. Bosworth suspected a connection between asthma and hay fever. Epinephrine was first referred to in the treatment of asthma in 1905, and for acute asthma in 1910.

- The University of Maryland School of Medicine announced in 2010 that bitter taste receptors had been discovered on smooth muscle in human lung bronchi. These smooth muscles control airway contraction and dilation – contrary to expectation, bitter substances such as quinine or chloroquine opened contracted airways, offering new insight into asthma.
http://www.allergychat.org/allergy-info/asthma/
The Deep South is a descriptive category of cultural and geographic subregions in the American South. Historically, it is differentiated from the "Upper South" as being the states that were most dependent on plantation-type agriculture during the period before the American Civil War. The region is also commonly referred to as the Lower South or the "Cotton States." The Deep South is a belt stretching from the Atlantic Ocean to west of the Mississippi River, primarily consisting of five states: South Carolina, Georgia, Alabama, Mississippi, and Louisiana. Some consider Florida and Texas as part of the area, due to their shared borders with the other five states. They are usually identified as being those states and areas where things most often thought of as "Southern" exist in their most concentrated form. The states are distinguished from the Old South in that the "Old South" states are those that were among the original thirteen American colonies. Another frequently used term is "Black Belt," which Booker T. Washington described as "the part of the South … where the black people outnumber the white."

Usage of the term

The term "Deep South" is defined in a variety of ways:
- Most definitions include the states of Alabama, Georgia, Louisiana, Mississippi, and South Carolina.
- The seven states that seceded from the United States before the firing on Fort Sumter and the start of the American Civil War, and originally formed the Confederate States of America. In order of secession they are: South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas.

Due to the migration patterns of the last half-century, large areas of Florida and Texas are often no longer included. However, there are certain parts of these states, such as East Texas and the Florida Panhandle, that retain cultural characteristics of the Deep South.

For most of the nineteenth and twentieth centuries, the Deep South overwhelmingly supported the Democratic Party, viewing the rival Republican Party as a Northern organization responsible for the Civil War, which devastated the economy of the Old South. This pattern became known as the "Solid South." Since the 1964 presidential election, however, along with the Civil Rights Movement, the Deep South has tended to vote for the Republican candidate in presidential elections, except in the 1976 election when Georgia native Jimmy Carter received the Democratic nomination. Since the 1990s there has been a continued shift toward Republican candidates in most political venues; another Georgian, Republican Newt Gingrich, was elected U.S. Speaker of the House in 1995. Presidential elections in which the region diverged noticeably from the Upper South occurred in 1928, 1948, 1964 and 1968, and, to a lesser extent, in 1952 and 1956.

Within the Deep South is a region known as the Black Belt. Although the term originally described the prairies and dark soil of central Alabama and northeast Mississippi, it has long been used for a broad region in the South characterized by a high percentage of black people, acute poverty, rural decline, inadequate education programs, low educational attainment, poor health care, substandard housing, and high levels of crime and unemployment. While black residents are disproportionately affected, these problems apply to the region's general population. There are various definitions of the region, but it is generally a belt-like band through the center of the Deep South, stretching as far west as eastern Texas.
The term Black Belt is still used to describe a crescent-shaped region about 300 miles (480 km) long and up to 25 miles (40 km) wide, extending from southwest Tennessee to east-central Mississippi and then east through Alabama to the border with Georgia. Before the nineteenth century, this region was a mosaic of prairies and oak-hickory forests. In the 1820s and 1830s, this region was identified as prime land for cotton plantations, resulting in a rush of immigrant planters and their slaves called Alabama Fever. The region became one of the cores of an expanding cotton plantation system that spread through much of the American South. Eventually, Black Belt came to describe the larger area of the South with historic ties to slave plantation agriculture and the cash crops cotton, rice, sugar, and tobacco. Although this had been a richly productive region, the early twentieth century brought a general economic collapse, among the many causes of which were soil erosion and depletion, the boll weevil invasion and subsequent collapse of the cotton economy, and the socially repressive Jim Crow laws. What had been one of the nation's wealthiest and most politically powerful regions became one of the poorest. The African American push to be afforded civil rights equal to those of white Americans had roots in the center of the Deep South. Despite the successes of the civil rights movement, the region remains one of the nation's poorest. Most of it remains rural, with a diverse range of crops, including most of the nation's peanut and soybean production. In his 1901 autobiography Up from Slavery, Booker T. Washington wrote, describing the Black Belt, The term was first used to designate a part of the country which was distinguished by the colour of the soil. The part of the country possessing this thick, dark, and naturally rich soil was, of course, the part of the South where the slaves were most profitable, and consequently they were taken there in the largest numbers. Later and especially since the civil war, the term seems to be used wholly in a political sense—that is, to designate the counties where the black people outnumber the white. According to the 2000 Census, there were 96 counties in the U.S. where the black percentage of the population was over 50 percent, of which 95 were distributed across the Coastal and Lowland South in a loose arc. In 2000, a United States Department of Agriculture report proposed creating a federal regional commission, similar to the Appalachian Regional Commission, to address the social and economic problems of the Black Belt. This politically defined region, called the Southern Black Belt, is a patchwork of 623 counties scattered throughout the South. Geographically, Old South is a subregion of the American South, differentiated from the "Deep South" as being the Southern states represented in the original thirteen American colonies, as well as a way of describing the former lifestyle in the Southern United States. Culturally, the term can be used to describe the antebellum period. The Southern colonies were Virginia, Maryland, North Carolina, Delaware, South Carolina, and Georgia. Despite Maryland's early association as a Southern colony and later as a state, based on customs, economy, and slave ownership, its failure to secede during the American Civil War has resulted in a modern disassociation with the area known as the "Old South," a disassociation even more pronounced in the similar case of Delaware. 
The "Old South" is usually defined in opposition to the Deep South including Alabama, Louisiana, Georgia and Mississippi, and it is also further differentiated from the inland border states such as Kentucky and West Virginia and the peripheral southern states of Florida and Texas. After the Civil War, many southern whites used the term "Old South" with nostalgia to represent the memories of a time of prosperity, social order, and gracious living. A majority of blacks saw it as being a reference to the past times of slavery and the plantation. Once those with personal memories of the antebellum South were largely deceased, the term continued to be used. It was used even as a marketing term, where products were advertised as having "genuine Old South goodness" and the like. Certain groups now wish to rescue the term from racist connotations by stating that they desire to celebrate only the things about the Old South which might be considered good, such as Southern chivalry. The former agricultural economy of the region gradually is being replaced. Louisiana's industries include chemical products, petroleum and coal products, food processing, transportation equipment, and paper products. The Port of South Louisiana, located on the Mississippi River between New Orleans and Baton Rouge, is the largest volume shipping port in the Western Hemisphere and fourth largest in the world. Tourism and culture are also major factors in Louisiana's economy. In the twentieth century Alabama transitioned from agriculture to diversified interests in heavy manufacturing, mining, education, and technology. Alabama is on track to surpass Michigan as the largest automobile manufacturing state in North America. Georgia has emerged as a regional leader, due in large part to Atlanta's steady economic and population growth. Before Hurricane Katrina struck the Gulf Coast in 2005, Mississippi was the second largest gambling state in the United States, after Nevada and ahead of New Jersey, seeking to capitalize on its climate to offset prevailing rural poverty. A 2007 United States Government report found that even though Mississippi ranked as the poorest state in the nation, Mississippians consistently rank as one of the highest per capita in charitable contributions. While cotton farmers have large, mechanized plantations, some of which receive extensive federal subsidies, many Mississippians live in poverty as rural landless laborers. Farms across the Deep South have become fewer but larger in recent years. South Carolina ranks third in peach production and fourth overall in tobacco production. Other top agricultural commodities include nursery and greenhouse products, watermelons, peanuts, chickens and turkeys. As many as 25 percent of the manufacturing companies in South Carolina are foreign-owned. In 2003, foreign trade pumped $23 billion into the state's economy and generated $2.5 billion in state and local taxes. While South Carolina remains a major agricultural producer, its industrial outputs include textiles, chemical products, paper products, and machinery. Looking to the future Some of the urban areas in the region, such as Atlanta, Georgia and Miami, Florida, are progressive in terms of economy, technology, social services, and are cultural and tourist centers. However, much of the rural Deep South suffers from poverty, inadequate medical and education services, and few opportunities for personal enrichment. 
For these disparities to be resolved, it is incumbent upon the states' leaders to find solutions.
http://www.newworldencyclopedia.org/entry/Deep_South
Gene cloning is a common practice in molecular biology labs that is used by researchers to create copies of a particular gene for downstream applications, such as sequencing, mutagenesis, genotyping or heterologous expression of a protein. The traditional technique for gene cloning involves the transfer of a DNA fragment of interest from one organism to a self-replicating genetic element, such as a bacterial plasmid. This technique is still commonly used today for isolating long or unstudied genes and for protein expression. A more recent technique is the use of the polymerase chain reaction (PCR) for amplifying a gene of interest. The advantage of using PCR over traditional gene cloning, as described above, is the decreased time needed for generating a pure sample of the gene of interest. However, gene isolation by PCR can only amplify genes with predetermined sequences. For this reason, many unstudied genes require initial gene cloning and sequencing before PCR can be performed for further analysis. Related Topics: Gene Expression Analysis, Mutational Analysis, and Epigenetics and Chromatin Structure. DNA sequencing is typically the first step in understanding the genetic makeup of an organism, which helps to:
- Locate regulatory and gene sequences
- Compare homologous genes across species
- Identify mutations
Sequencing uses biochemical methods to determine the order of nucleotide bases (adenine, guanine, cytosine, and thymine) in a DNA oligonucleotide. Knowing the sequence of a particular gene will assist in further analysis to understand the function of the gene. PCR is used to amplify the gene of interest before sequencing can be performed. Many biotechnology companies offer sequencing instruments; however, these instruments can be expensive. As a result, many researchers perform PCR in-house and then send their samples out to sequencing labs. Site-directed mutagenesis is a widely used procedure for studying the structure and function of proteins by modifying the encoding DNA. By using this method, mutations can be created at any specific site in a gene whose wild-type sequence is already known. Many techniques are available for performing site-directed mutagenesis. A classic method for introducing mutations, either single base pairs or larger insertions, deletions, or substitutions into a DNA sequence, is the Kunkel method. The first step in any site-directed mutagenesis method is to clone the gene of interest. For the Kunkel method, the cloned plasmid is then transformed into a dut ung mutant of Escherichia coli. This E. coli strain lacks dUTPase and uracil-DNA glycosylase, which ensures that the plasmid containing the gene of interest is replicated as DNA in which some thymine bases are replaced by uracil. The next step is to design a primer that spans the region of the gene you wish to mutate and carries the mutation you want to introduce. Primer extension (or, in some protocols, PCR) with the mutagenic primer is then used to create hybrid plasmids; each plasmid will now contain one strand without the mutation that contains uracil, and another strand with the mutation that lacks uracil. The final step is to isolate this hybrid plasmid and transform it into a different strain that does contain the uracil-DNA glycosylase (ung) gene. The uracil-DNA glycosylase will destroy the strands that contain uracil, leaving only the strands carrying your mutation. When the bacteria replicate, the resulting plasmids will contain the mutation on both strands.
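As a rough illustration of the primer-design step described above, the short Python sketch below builds a mutagenic oligo for a single-base substitution: it takes a stretch of the wild-type sequence centred on the target position, swaps in the desired base, and reports the complementary strand. The sequence, target position, and flank length are made-up placeholders, and real primer design would also check melting temperature, GC content, and secondary structure.

```python
# Sketch: build a mutagenic primer for a single-base substitution.
# The sequence, target position, and flank length below are hypothetical.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (A/C/G/T only)."""
    return seq.translate(COMPLEMENT)[::-1]

def mutagenic_primer(template: str, position: int, new_base: str, flank: int = 15) -> str:
    """Primer covering `position` (0-based) with `new_base` substituted,
    flanked by `flank` matching bases on each side."""
    if template[position] == new_base:
        raise ValueError("new_base matches the wild-type base; no mutation introduced")
    start, end = position - flank, position + flank + 1
    if start < 0 or end > len(template):
        raise ValueError("not enough flanking sequence around the target position")
    return template[start:position] + new_base + template[position + 1:end]

if __name__ == "__main__":
    wild_type = "ATGGCTAGCAAGGAGGAATTCACCATGGTGAGCAAGGGCGAGGAGCTGTTC"  # made-up fragment
    fwd = mutagenic_primer(wild_type, position=25, new_base="A")
    print("forward mutagenic primer :", fwd)
    print("reverse complement primer:", reverse_complement(fwd))
```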
Genotyping is the process of determining the DNA sequence specific to an individual's genotype. This process can be accomplished by several techniques, such as high resolution melt (HRM) analysis, or any other mutation detection technique. All of these techniques will provide an insight into the individual's genotype, which can help determine specific sequences that can be manipulated and cloned for further analysis. Heterologous protein expression uses gene cloning to express a protein of interest in a self-replicating genetic element, such as a bacterial plasmid. Heterologous expression is used to produce large amounts of a protein of interest for functional and biochemical analyses.
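Before committing a cloned insert to heterologous expression, it is common to sanity-check in silico that the insert is in frame and encodes the expected protein. The sketch below shows one minimal way to do this; it assumes Biopython is available, and the insert and expected product are made-up placeholders. A real workflow would check the full expression construct (promoter, tags, stop codon) rather than the insert alone.

```python
# Sketch: in-silico check that a cloned insert is in frame and encodes the
# expected protein before attempting heterologous expression.
# Assumes Biopython is installed; the sequences below are made-up placeholders.
from Bio.Seq import Seq

insert = Seq("ATGGTGAGCAAGGGCGAGGAGCTGTTCACCGGGTAA")   # hypothetical insert (ends in a stop codon)
expected_protein = "MVSKGEELFTG"                        # hypothetical expected product

if len(insert) % 3 != 0:
    raise ValueError("insert length is not a multiple of 3; check the reading frame")

translated = str(insert.translate(to_stop=True))        # translate up to the first stop codon
print("translated:", translated)
print("matches expected product:", translated == expected_protein)
```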
http://www.bio-rad.com/evportal/en/US/evolutionPortal.portal?_nfpb=true&_pageLabel=SolutionsLandingPage&catID=LUSNKO4EH
Marketplace: The Argentina Barter Fair
What happens when a country's currency loses its value? If money in our society were to lose its value, how would you obtain the goods and services you have come to expect? How much value do you place on your belongings? In this lesson, you will explore the idea that goods and services have non-monetary values which can be exchanged and traded through barter. In this lesson, you will listen to an audio file about Argentina's barter economy. This barter economy became necessary in 2002 when Argentina's currency lost much of its value. While you listen, you will use an interactive note-taker to record supporting details about three main ideas presented in the audio. Then, using your notes, you will answer questions related to the audio. Listen to the Marketplace audio file about 'Argentina Barter Fair.' Go to http://marketplace.publicradio.org/shows/2002/04/rafiles/29_mpp.ram . The starting timestamp is 19:10 and the ending timestamp is 23:43 for the part about Argentina. While you are listening, you will use the note-taker to find supporting details about three main ideas. The three main ideas you will focus on today are these:
- Goods Argentines Are Bartering
- Services Argentines Are Bartering
- Problems with creditos (creditos are paper currency used in Argentina)
Also, as you listen to the segment, record any words that you don't know, especially words that may be important economic terms. Then, listen to the audio file again to gather additional supporting details and possible definitions of the vocabulary words using context clues, and record them in your note-taker. Finally, you will be asked a series of questions related to the story. Listen to the Marketplace audio file again and consider the choices the Argentine people must make in order to benefit from the new barter economy. Answer the following questions and discuss your answers with your class:
- Have you ever participated in a barter exchange with friends?
- Are there services you think you could provide in exchange for goods?
- How much value do you place on the goods you own and on the services you could provide?
Next, create a mock barter fair in your classroom. Imagine that your classroom represents a country whose currency has depreciated. You need to determine the value of goods and services and exchange them. When you are finished, reflect upon the following questions and record your answers. What problems did you encounter during the barter fair? Was it hard to determine the value of goods and services? Were all the exchanges fair? What choices would you have to make in order to survive in a barter economy? Are there any benefits from a barter economy? Why is the use of creditos not the same as bartering? How is the credito the same as currency? By the conclusion of this lesson you should be able to explain the reason for Argentina's barter economy, how a barter economy works, and the possible consequences of having a barter economy. 1. Turn in your note-taker on the Argentina barter fair and discuss the news story with the class. Be sure to focus on the concept of barter and how the use of bartering might replace the use of currency. 2. You participated in a mock barter fair and you will hand in your reflections. As you reflect on the activity, consider the reasons for Argentina's failing economy and the ways in which the barter system may or may not have helped people to deal with the problems involved.
1. Complete 'A case study: United States International Trade in Goods and Services-December 23, 2003' to extend your understanding of the concepts of trade and inflation. 2. Complete 'One is Silver and the Other is Gold' to extend your understanding of the concept of money and its value.
http://www.econedlink.org/lessons/EconEdLink-print-lesson.php?lid=776&type=student
Numerical models and data assimilation systems have improved enormously over recent years so that today's 3-day forecast is as good as a 1-day forecast 20 years ago. Despite this, an NWP forecast looking a few days ahead can frequently be quite wrong, and even 1-day forecasts can occasionally have large errors. The reason for this lies in the chaotic nature of the atmosphere, which means that very small errors in the initial conditions can lead to large errors in the forecast, the so-called butterfly effect. This means that we can never create a perfect forecast system because we can never observe every detail of the initial state of the atmosphere. Tiny errors in the initial state will be amplified such that after a period of time the forecast becomes useless. This sensitivity varies from day to day, but typically we can forecast the main weather patterns reasonably well up to about three days ahead. Beyond that uncertainties in the forecasts can become large. To cope with this uncertainty, we use Ensemble Forecasts. Instead of running just a single forecast, the model is run a number of times from slightly different starting conditions. The complete set of forecasts is referred to as the ensemble and individual forecasts within it as ensemble members. The initial differences between ensemble members are very small so that if we compared members with observations it would be impossible to say which members fitted the observations better. All members are therefore equally likely to be correct, but when we look several days ahead the forecasts can be quite different. Some days the forecasts from different ensemble members are all quite similar, which gives us confidence that we can issue a reliable forecast. On other days the members can differ radically and then we have to be more cautious. As an illustration of the sensitivity, the following charts show an example of two equally valid 4-day forecasts of the surface pressure (isobars) from an ensemble forecast. Differences at the start of the forecast, in the top row, are so small that we cannot tell which is more accurate, but the forecasts below are very different! (In reality, of course, the weather systems over the British Isles at Day 4 would probably have originated further west at the start of the forecast, but it illustrates how very similar ensemble members grow apart during the forecast.) Forecast A on the left predicts a deep area of low pressure over Ireland bringing strong winds and rain to much of the British Isles; forecast B on the right predicts that the high pressure over the Atlantic will be much stronger and does not develop the low at all, and thus suggests fine weather although with a cool northerly wind and the risk of showers in the S and E. Clearly in this situation a forecaster who has access to only a single model forecast is in danger of issuing a forecast which could go seriously wrong. By using this sort of information from an ensemble with many members, Met Office forecasters are able to assess the range of possible scenarios and issue advice on the probabilities and risks associated with them. Ensemble Prediction is thus all about Risk Management in weather forecasting. For operational medium-range ensemble forecasting, the Met Office makes use of the Ensemble Prediction System (EPS) run by the European Centre for Medium-Range Weather Forecasts (ECMWF). ECMWF is an international organisation supported by many European states, including the UK, and specialises in NWP for medium-range prediction.
ECMWF does not issue weather forecasts itself, but distributes its products to the National Meteorological Services of its member states, including the Met Office, for use in the production of weather forecasts. As part of the THORPEX programme, the Met Office is carrying out research into the possible use of multi-model ensemble techniques for medium-range weather forecasting, with the emphasis on improving the forecasting of high-impact weather.
The ECMWF Ensemble
The ECMWF EPS consists of 51 forecasts run twice daily using the ECMWF global forecast model with a horizontal resolution of around 80 km. One member, called the control forecast, is run directly from the ECMWF analysis, our best guess at the initial state of the atmosphere. Initial conditions for the other 50 members are created by adding small "perturbations" to this analysis. These perturbations are designed to identify those regions of the atmosphere which are most likely to lead to errors in the forecast on each particular occasion. Small random variations in the model itself are also introduced to allow for some of the approximations which have to be made in how the model represents the atmosphere. The charts below show an example of surface pressures (isobars) for all 51 members of the ECMWF ensemble for a sample 4-day forecast from November 2003. On many occasions the atmosphere is much more predictable than this, but this illustrates the level of uncertainty that regularly occurs in forecasts only a few days ahead. Clearly an ensemble forecast contains a huge amount of information which we need to condense for both forecasters and end-users! Below we describe some of the ways we can do this, including the use of probability forecasts. More about the EPS is available from the ECMWF user guide.
Uncertainty in forecasts
Forecast uncertainty results: In June 2006 we asked for feedback on how uncertainty in the forecast could be presented. More than a thousand respondents completed a short questionnaire which showed a variety of options for presenting uncertainty in the five-day temperature forecast; a summary of the findings is available.
Ensemble prediction allows the uncertainty in forecasts to be assessed quantitatively. This uncertainty can be passed on to users of the forecast in several ways. For example, we can provide a range of possible values for a forecast parameter (such as temperature or windspeed) such that we know how confident we are that the actual value will fall within that range. For the scientist this is very similar to putting an error bar on the forecast. The example below shows how maximum and minimum temperatures for each day can be given a range of uncertainty. The full length of each vertical line represents the 95% confidence range, while the central bar represents a 50% confidence range. The horizontal line across this bar is the mid-point of the distribution, and may be used to estimate the most likely temperature. Thus for the first night we can be 95% certain the minimum temperature will be between 8 and 13 Celsius, and 50% certain it will be between about 11 and 12 Celsius. Alternatively we can estimate the probability of certain events happening, for example of the temperature falling below freezing or the wind speed reaching gale force. Probability forecasts can help users to assess the risks associated with particular weather events which are important to them.
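As a concrete, simplified illustration of how the confidence ranges and event probabilities described above can be derived from raw ensemble output, the Python sketch below computes a 50% and 95% range and a probability of frost from a set of member forecasts. The member values are random placeholders standing in for one site and lead time; operational products would additionally apply the statistical calibration described in the next section, and this is not Met Office or ECMWF code.

```python
# Sketch: turning ensemble member forecasts into a confidence range and
# event probabilities for one site and lead time. Member values are made up.
import numpy as np

rng = np.random.default_rng(42)
members_temp_c = rng.normal(loc=1.5, scale=2.5, size=51)  # 51 hypothetical 2 m temperatures (Celsius)

# Percentile bands analogous to the 50% and 95% confidence ranges described above
p2_5, p25, p50, p75, p97_5 = np.percentile(members_temp_c, [2.5, 25, 50, 75, 97.5])

# Simple event probability: fraction of members below freezing
prob_below_freezing = np.mean(members_temp_c < 0.0)

print(f"95% range : {p2_5:.1f} to {p97_5:.1f} C")
print(f"50% range : {p25:.1f} to {p75:.1f} C")
print(f"median    : {p50:.1f} C")
print(f"P(T < 0 C): {prob_below_freezing:.0%}")
```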
The Met Office Ensemble Post-Processing System (Previn)
Ensemble forecasts from ECMWF are post-processed to produce a wide variety of chart displays to aid forecasters, and also high-quality probability forecasts which can be supplied to customers. Two examples illustrate how these can be used. A daily task for our weather forecasters is to assess the most likely developments of major weather systems for several days ahead - most of our forecast products depend on them getting this right! Among the most important systems for NW Europe are Atlantic low pressure centres (cyclones). To aid the forecasters in assessing the most likely positions and movements of lows, cyclone tracks predicted by all the ensemble members are plotted on a single chart. The example below for a 3-day forecast in February 2004 shows that it is most likely that a low pressure will move northeastwards to the west of the British Isles. However, for anyone interested in risks, it is also worth noting that there is a small chance that the low will pass further south over Scotland, bringing rain and wind much further south across the country. This is just one example of the types of charts used to summarise ensemble forecasts. The ensemble provides useful estimates of probabilities for many weather events, but these can often be further improved by statistical post-processing. Calibrated probability forecast data are generated daily for over 300 sites worldwide. The graph below shows an example of a calibrated 5-day forecast of the relative probabilities of different temperatures at Heathrow Airport for midday on 28th February 2004. Clearly the most likely temperature is around zero Celsius, with a 27% probability of temperature below freezing (right). However, there is also a real possibility of quite mild temperatures above 6 Celsius. On a balance of probabilities, information like this was used by forecasters on this occasion to issue early warnings of a cold spell of wintry weather several days ahead. In the event Heathrow experienced temperatures below freezing overnight with light snow, rising to a maximum around 5 Celsius and falling rapidly to 1 Celsius in heavy snow at 1720. On this occasion the balance of probabilities provided good guidance, and this should normally be the case. However, the lower-probability events should also be expected to occur on some, fewer, occasions. Had the temperature actually been 8-12 Celsius the warning would have seemed excessive, but it would still have been fully justified on the basis of the evidence available at the time it was issued. These site-specific probability forecasts are verified routinely to monitor performance and demonstrate the capability of statistical post-processing.
Short-range ensembles
Current operational use of ensembles is restricted to application of the ECMWF EPS for medium-range prediction. Important uncertainty can also occur in short-range forecasts. Usually at short range, up to 3 days ahead, the general weather pattern is well forecast by a single model run, but there can still be uncertainty in the resulting fine details of the weather, for example in the amount, location or timing of rainfall. On rare occasions there can also be significant uncertainty in the large-scale weather patterns, and these occasions can be particularly important as they may be associated with severe weather developments.
Research is currently being undertaken to investigate whether ensembles designed specifically for short-range use can help in quantifying the uncertainty in these areas.
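Finally, the sensitivity to initial conditions that motivates ensemble forecasting can be illustrated with a toy chaotic system rather than a full NWP model. The sketch below, which is only an analogy and not Met Office code, integrates the classic Lorenz-63 equations from a control state and from ten states perturbed by about one part in a million, mimicking how ensemble members that are initially indistinguishable can end up far apart by the end of the run.

```python
# Toy illustration of initial-condition sensitivity using the Lorenz-63 system.
# This is NOT an NWP model; it only mimics how tiny perturbations grow.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 state (x, y, z) by one Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def run_member(initial_state, n_steps=2000):
    state = np.array(initial_state, dtype=float)
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

control_ic = np.array([1.0, 1.0, 1.0])
rng = np.random.default_rng(0)

control = run_member(control_ic)
# Ten "ensemble members": the control start state plus perturbations of order 1e-6
members = [run_member(control_ic + rng.normal(scale=1e-6, size=3)) for _ in range(10)]

print("control x at end of run:", round(control[0], 2))
print("member x values        :", [round(m[0], 2) for m in members])
print("ensemble spread in x   :", round(float(np.std([m[0] for m in members])), 2))
```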
http://research.metoffice.gov.uk/research/nwp/ensemble/index.html
Thousands of record-breaking weather events worldwide bolster long-term trends of increasing heat waves, heavy precipitation, droughts and wildfires. A combination of observed trends, theoretical understanding of the climate system, and numerical modeling demonstrates that global warming is increasing the risk of these types of events today. Debates about whether single events are "caused" by climate change are illogical, but individual events offer important lessons about society's vulnerabilities to climate change. Reducing the future risk of extreme weather requires reducing greenhouse gas emissions and adapting to changes that are already unavoidable. Typically, climate change is described in terms of average changes in temperature or precipitation, but most of the social and economic costs associated with climate change will result from shifts in the frequency and severity of extreme events.1 This fact is illustrated by a large number of costly weather disasters in 2010, which tied 2005 as the warmest year globally since 1880.2 Incidentally, both years were noted for exceptionally damaging weather events, such as Hurricane Katrina in 2005 and the deadly Russian heat wave in 2010. Other remarkable events of 2010 include Pakistan's biggest flood, Canada's warmest year, and Southwest Australia's driest year. 2011 continued in similar form, with "biblical" flooding in Australia, the second hottest summer in U.S. history, devastating drought and wildfires in Texas, New Mexico and Arizona, as well as historic flooding in North Dakota, the Lower Mississippi and in the Northeast.3 Munich Re, the world's largest reinsurance company, has compiled global disaster data for 1980-2010. In its analysis, 2010 had the second-largest (after 2007) number of recorded natural disasters and the fifth-greatest economic losses.4 Although there were far more deaths from geological disasters—almost entirely from the Haiti earthquake—more than 90 percent of all disasters and 65 percent of associated economic damages were weather and climate related (i.e. high winds, flooding, heavy snowfall, heat waves, droughts, wildfires). In all, 874 weather and climate-related disasters resulted in 68,000 deaths and $99 billion in damages worldwide in 2010. The fact that 2010 was one of the warmest years on record, as well as one of the most disastrous, raises the question: Is global warming causing more extreme weather? The short and simple answer is yes, at least for heat waves and heavy precipitation.5 But much of the public discussion of this relationship obscures the link behind a misplaced focus on causation of individual weather events. The questions we ask of science are critical: When we ask whether climate change "caused" a particular event, we pose a fundamentally unanswerable question (see Box 1). This fallacy assures that we will often fail to draw connections between individual weather events and climate change, leading us to disregard the real risks of more extreme weather due to global warming. Climate change is defined by changes in mean climate conditions—that is, the average of hundreds or thousands of events over the span of decades. Over the past 30 years, for example, any single weather event could be omitted or added to the record without altering the long-term trend in weather extremes and the statistical relationship between that trend and the rise in global temperatures.
Hence, it is illogical to debate the direct climatological link between a single event and the long-term rise in the global average surface temperature. Nonetheless, individual weather events offer important lessons about social and economic vulnerabilities to climate change. Dismissing an individual event as happenstance because scientists did not link it individually to climate change fosters a dangerously passive attitude toward rising climate risk. The uncertainty about future weather conditions and the illogic of attributing single events to global warming need not stand in the way of action to manage the rising risks associated with extreme weather. Indeed, such uncertainty is why risk managers exist – insurance companies, for example – and risk management is the correct framework for examining the link between global climate change and extreme weather. An effective risk management framework accommodates uncertainty, takes advantage of learning opportunities to update understanding of risk, and probes today’s rare extreme events for useful information about how we should respond to rising risk. Risk management eschews futile attempts to forecast individual chaotic events and focuses on establishing long-term risk certainty; that is, an understanding of what types of risks are increasing and what can be done to minimize future damages. An understanding of the meaning of risk and how it relates to changes in the climate system is crucial to assessing vulnerability and planning for a future characterized by rising risk. Climate is the average of many weather events over of a span of years. By definition, therefore, an isolated event lacks useful information about climate trends. Consider a hypothetical example: Prior to any change in the climate, there was one category 5 hurricane per year, but after the climate warmed for some decades, there were two category 5 hurricanes per year. In a given year, which of the two hurricanes was caused by climate change? Since the two events are indistinguishable, this question is nonsense. It is not the occurrence of either of the two events that matters. The two events together – or more accurately, the average of two events per year – define the change in the climate. Since 2010 tied with 2005 as the warmest year on record globally, it should come as no surprise that 19 countries set new national high-temperature records; this is the largest number of national high temperature records in a single year, besting 2007 by two.6 One of the countries was Pakistan, which registered “the hottest reliably measured temperature ever recorded on the continent of Asia” (128.3 °F on May 26 in Mohenjo-daro).7 Strikingly, no new national record low-temperatures occurred in 2010.8 Several historic heat waves occurred across the globe, as well. 
Unprecedented summer heat in western Russia caused wildfires and destroyed one-third of Russia’s wheat crop; the combination of extreme heat, smog, and smoke killed 56,000 people.9 In China, extreme heat and the worst drought in 100 years struck Yunan province, causing crop failures and setting the stage for further devastation by locust swarms.10 In the United States, the summer of 2010 featured record breaking heat on the east coast with temperatures reaching 106 degrees as far north as Maryland.11 Records also were set for energy demand and the size of the area affected by extreme warmth.12 Even in California where the average temperatures were below normal, Los Angeles set its all-time high temperature record of 113 degrees on September 27. Global precipitation was also far above normal, with 2010 ranking as the wettest year since 1900.13 Many areas received record heavy rainfall and flooding. Westward shifts of the monsoon dropped 12 inches of rain across wide areas of Pakistan, flooding the Indus River valley, displacing millions of people and destabilizing an already precariously balanced nation.14 Rio de Janeiro received the heaviest rainfall in 30 years—almost 12 inches in 24 hours, causing nearly 300 mudslides and killing at least 900 people.15 Developed countries also suffered debilitating downpours. On the heels of Queensland, Australia’s wettest spring since 1900, December rainfall broke records in 107 locations.16 Widespread flooding shaved an estimated $30 billion off Australia’s GDP.17 The United States experienced several record breaking torrential downpours. In Tennessee, an estimated 1,000-year flooding event18 brought more than a foot of rain in two days, resulting in record flooding and over two billion dollars in damages in Nashville alone, equivalent to a full year of economic output for that city. In Arkansas, an unprecedented 7 inches of rain fell in a few hours, causing flash flooding as rivers swelled up to 20 feet.19 Wisconsin had its wettest summer on record, which is remarkable given the series of historic floods that have impacted the upper Midwest over the last two decades. In 2011, there have already been three separate historic floods in the United States, the driest 12 months ever recorded in Texas, and a record breaking tornado outbreak (see Box 2).20 Damages from Hurricane Irene, much of which is flood related, are estimated to be between $7 and $10 billion, making it one of the top ten most damaging hurricanes ever to hit the US.21 Scientists are unsure if tornadoes will become stronger or more frequent, but with increased temperatures changing the weather in unexpected ways, the risk is real that tornado outbreaks will become more damaging in the future. The lack of certainty in the state of the science does not equate with a lack of risk, since risk is based on possibility. The lack of scientific consensus is a risk factor itself, and we must prepare for a future that could possibly include increased tornado damage. The historic weather extremes of 2010 and 2011 fit into a larger narrative of damaging extreme weather events in recent decades. Recent heat waves in Russia and the United States have evoked memories of the 1995 heat wave that killed hundreds of Chicagoans, and the 2003 European heat wave that killed at least 35,000 people.22 In the United States, the number of storms costing more than $100 million has increased dramatically since 1990. 
Although the 2010 flooding in the American Midwest was highly damaging, it was not on the scale of the 1993 and 2008 events, each costing billions of dollars and of such ferocity that they should be expected to occur only once in 300 years.23 Other unprecedented disasters include the 2008 California wildfires that burned over a million acres,24 and the decade-long Southwest drought, which continues in spite of an uncharacteristically wet winter.25 Mumbai, India, recorded its highest ever daily rainfall with a deluge of 39 inches that flooded the city in July of 2005.26 This neared the Indian daily record set the year before when 46 inches fell in Aminidivi, which more than doubled 30-year-old record of 22.6 inches.27 Torrential downpours continued for the next week, killing hundreds of people and displacing as many as 1 million.28 Taken in aggregate, this narrative of extreme events over recent decades provides a few snapshots of a larger statistical trend toward more frequent and intense extreme weather events. Rising frequency of heavy downpours is an expected consequence of a warming climate, and this trend has been observed. Some areas will see more droughts as overall rainfall decreases and other areas will experience heavy precipitation more frequently. Still other regions may not experience a change in total rainfall amounts but might see rain come in rarer, more intense bursts, potentially leading to flash floods punctuating periods of chronic drought. Therefore, observed trends in heat, heavy precipitation, and drought in different places are consistent with global warming.29 Over the past 50 years, total rainfall has increased by 7 percent globally, much of which is due to increased frequency of heavy downpours. In the United States, the amount of precipitation falling in the heaviest 1 percent of rain events has increased by nearly 20 percent overall, while the frequency of light and moderate events has been steady or decreasing (Fig. 1).30 Meanwhile, heat waves have become more humid, thereby increasing biological heat stress, and are increasingly characterized by extremely high nighttime temperatures, which are responsible for most heat-related deaths.31 In the western United States, drought is more frequent and more persistent, while the Midwest experiences less frequent drought but more frequent heavy precipitation.32 Record daytime and nighttime high temperatures have been increasing on a global scale.33 In the United States today, a record high temperature is twice as likely to be broken as a record low, and nighttime temperature records show a strong upward trend (Fig. 2). By contrast, record highs and lows were about equally likely in the 1950s (Fig. 3).34 This trend shows that the risk of heat waves is increasing over time, consistent with the results of global climate models that are forced by rising atmospheric greenhouse gas concentrations.35 Indeed, the observed heat wave intensities in the early 21st century already exceed the worst-case projections of climate models.36 Moreover, the distribution of observed temperatures is wider than the temperature range produced by climate models, suggesting that models may underestimate the rising risk extreme heat as warming proceeds. Percentage increase in heavy downpours in the regions of the United States since the late 1950s. The map shows the percentage increases in the average number of days with very heavy precipitation (defined as the heaviest 1 percent of all events) from 1958 to 2007 for each region. 
There are clear trends toward more days with very heavy precipitation for the nation as a whole, and particularly in the Northeast and Midwest. Source: USGCRP (2009) (Ref. 32). When averaged together, changing climate extremes can be traced to rising global temperatures, increases in the amount of water vapor in the atmosphere, and changes in atmospheric circulation. Warmer temperatures directly influence heat waves and increase the moisture available in the atmosphere to supply extreme precipitation events. Expanding sub-tropical deserts swelling out from the equator are creating larger areas of sinking, dry air, thus expanding the area of land that is subject to drought.37 The expansion of this sub-tropical circulation pattern also is increasing heat transport from the tropics to the Arctic and pushing mid-latitude storm tracks, along with their rainfall, to higher latitudes. As discussed above, no particular short-term event can be conclusively attributed to climate change. The historical record provides plenty of examples of extreme events occurring in the distant past and such events obviously occur without requiring a change in the climate. What matters is that there is a statistical record of these events occurring with increasing frequency and/or intensity over time, that this trend is consistent with expectations from global warming, and that our understanding of climate physics indicates that this trend should continue into the future as the world continues to warm. Hence, a probability-based risk management framework is the correct way to consider the link between climate change and extreme weather. It is also important to disentangle natural cycles from climate change, both of which are risk factors for extreme weather. Consider an analogy: An unhealthy diet, smoking, and lack of exercise are all risk factors for heart disease, and not one of these factors can or should be singled out as the cause of a particular heart attack. Similarly, a particular weather event is not directly caused by a single risk factor but has a higher probability of occurrence depending on the presence of various risk factors. The influence on risk from different sources of climate variability is additive, so global warming presents a new risk factor added on top of the natural ones that have always been with us. Over time, natural cycles will come and go, but global warming will continue in one direction such that its contribution to risk will reliably increase over time. Global warming has simply added an additional and ever rising risk factor into an already risky system (see Box 3). Over the past year, Texas has experienced its most intense single-year drought in recorded history. Texas State Climatologist John Nielsen-Gammon estimated the three sources of climate variability – two natural cycles plus global warming – that contributed to the drought's unprecedented intensity: Although information about uncertainty is lacking in this analysis, it clearly identifies global warming as one of the risk factors. Extreme events are often described by their expected frequency of recurrence. A “25-year event” has a statistical expectation of occurring once in 25 years, on average. It may occur more than once in any 25 year span or not at all for a full century, but over many centuries it is expected to occur on average once every 25 years. Events with a longer recurrence time tend to be more severe, so that a 100-year flood is a more dreaded event than a 25-year flood. 
A 500-year flood would be even more damaging, but it is considered to be so rare that people generally do not worry about events of such a magnitude. The problem with climate change, however, is that what used to be a 500-year event may become a 100-year or 10-year event, so that most people will experience such events within their lifetimes. Risk cannot be thought of in a discontinuous way, with singular events having predictive power about specific future events. Risk is the accumulation of all future possibilities weighted by their probabilities of occurrence. Therefore, an increase in either disaster frequency or severity increases the risk. Events can be ordered on a future timeline and ranked by expectations about their frequency, but this only describes what we expect to happen on average over a long period of time; it does not predict individual events. Consequently, impacts are uncertain in the short term, but the risk of impacts will rise in a predictable fashion. Risk therefore tells us what future climate conditions we should plan for in order to minimize the expected costs of weather-related disasters over the lifetime of long-lived investments, such as houses, levees, pipelines, and emergency management infrastructure. Risk management is used extensively almost anywhere decision-makers are faced with incomplete information or unpredictable outcomes that may have negative impacts. Classic examples include the military, financial services, the insurance industry, and countless actions taken by ordinary people every day. Homeowners insurance, bicycle helmets, and car seatbelts are risk-management devices that billions of people employ daily, even though most people will never need them. Changes in land area (as percent of total) in the contiguous 48 U.S. states experiencing extreme nightly low temperatures during summer. Extreme is defined as temperatures falling in the upper (red bars) or lower (blue bars) 10th percentile of the local period of record. Green lines represent decade-long averages. The area of land experiencing unusually cold temperatures has decreased over the past century, while the area of land experiencing unusually hot temperatures (red bars) reached record levels during the past decade. During the Dust Bowl period of the 1930s, far less land area experienced unusually hot temperatures. Source: NOAA NCDC Climate Extremes Index (2011) (Ref. 38). A non-changing climate would have approximately equal numbers of record highs and lows, as observed in the 1950s-1980s. The last decade (2000s) had twice as many record highs as it did record lows. Source: Meehl et al., 2009 (Ref 33); figure ©UCAR, graphic by Mike Shibao The extreme events cataloged above and the trends they reflect provide a proxy for the types of events society will face with greater risk in the future. With a clear record of trends and reasonable projections for the future, the level of risk can be assessed and prepared for. Risk can be thought of as a continuous range of possibilities, each with a different likelihood of occurring; extreme outcomes reside on the low-probability tails of the range or distribution. For example, climate change is widening the probability distribution for temperature extremes and shifting the mean and the low-probability tails toward more frequent and intense heat events (Fig. 4). Conceptual representation of the shift in the probability distribution for average and extreme temperatures as a result of global warming. 
The frequency of extreme high temperatures increases non-linearly, while extreme lows show a more muted response. Source: Adapted from IPCC (2001) (Ref. 39). The rising risk of extreme events has much in common with playing with loaded dice, where the dice are weighted to roll high numbers more frequently. Moreover, one of the dice has numbers from two to seven instead of one to six. It is therefore possible to roll a 13 (i.e. the maximum possible temperature is higher than before) and would be more likely (because the dice are loaded) than rolling a 12 with two normal dice. The probability distribution of the loaded dice compared to normal dice is translated into changing climate risk in Figure 4. With normal dice, one can expect to roll snake eyes (cold extremes) about equally as often as double sixes (hot extremes). But with climate change, the dice are loaded so that cold extremes (as defined in the previous climate) are a bit less likely than they used to be and hot extremes are hotter and more likely than before. The new risk profile presents a nonlinear increase in the number of extremes on one tail (i.e. heat waves). In light of recent cold winters in the United States and Europe, it is important to recognize that this new curve does not dispense with cold extremes, as the widening of the distribution (i.e. increase in variability) partially offsets the shift toward warmer events. Cold extremes become less frequent but do not disappear (Fig. 4). Moreover, like heavy downpours, heavy snowfall is also consistent with global warming (see Box 4). Under this new risk profile, the probability of record heat increases dramatically. The deadly 2003 European heat wave offers an example of a real world event that conforms to this new expectation. An event of that magnitude has a very small probability under the unchanged climate regime but has a much higher probability under a new climate profile that is both hotter and more variable (Fig. 5). Since this event actually happened, we know that an event of that intensity is possible, and model projections tell us that the risk of such an event should be expected to rise dramatically in the coming decades due to global warming. Indeed, a 50 percent increase in variance alone, without even shifting the average temperature, could make the 2003 heat wave a 60-year event rather than a 500-year event under the old regime.40 Other research has indicated that the risk of a 2003-type heat wave in Europe is already twice as large because of warming over recent decades. With continued warming, the frequency of such an event could rise to multiple occurrences per decade by the middle of this century.41 Historically 2003 is exceptionally warm but in the future scenario it has become relatively common. Source: Schar et al., 2004 (Ref. 38) as redrawn by Barton et al., 2010 (Ref. 42). Hot extremes are not the only sort of weather event to have increased beyond expectations. Observed increases in extreme hourly precipitation are beyond projections, even while daily precipitation changes remain within expectations. This indicates that the scaling of precipitation with increases in atmospheric moisture is not consistent between short bursts and total amounts over longer periods. 
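The loaded-dice analogy and the shifted, widened distribution in Figure 4 can be made concrete with a short calculation. The sketch below, a minimal illustration rather than an analysis from this report, compares the chance of exceeding a fixed heat threshold under a baseline normal distribution, under the same distribution shifted warmer, and under a shifted distribution with 25 percent more spread; all of the numbers (threshold, means, standard deviations) are made-up placeholders.

```python
# Sketch: how a modest warm shift plus extra variance changes the probability
# of exceeding a fixed heat threshold. All numbers are illustrative only.
import math

def exceedance_prob(threshold, mean, std):
    """P(X > threshold) for a normal distribution with the given mean and std."""
    z = (threshold - mean) / std
    return 0.5 * math.erfc(z / math.sqrt(2.0))

threshold = 35.0  # hypothetical "extreme heat" threshold, Celsius
baseline = exceedance_prob(threshold, mean=28.0, std=2.5)
shifted = exceedance_prob(threshold, mean=29.5, std=2.5)               # warmer mean only
shifted_wider = exceedance_prob(threshold, mean=29.5, std=2.5 * 1.25)  # warmer and more variable

for label, p in [("baseline", baseline), ("shifted mean", shifted), ("shifted + wider", shifted_wider)]:
    # Recurrence interval expressed as "summers per event", assuming one draw per summer
    print(f"{label:16s} P(exceed) = {p:.4f}  ~1-in-{1.0 / p:,.0f} summers")
```

On these illustrative numbers, an event that was roughly a 1-in-400-summer occurrence becomes closer to a 1-in-25-summer occurrence once the distribution both shifts and widens, which is the same qualitative behaviour described above for the 2003 European heat wave and, in the following paragraph, for hourly precipitation extremes.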
In the Netherlands, a study shows that one-hour precipitation extremes have increased at twice the rate with rising temperatures as expected when temperatures exceed 12°C.43 This is another example of the type of rapid increase in extreme events that is possible when the risk distribution is not only shifted but also exhibits increased variance. In December 2009 and February 2010, several American East Coast cities experienced back-to-back record-breaking snowfalls. These events were popularly dubbed "Snowmageddon" and "Snowpocalypse." Such events are consistent with the effects of global warming, which is expected to cause more heavy precipitation because of a greater amount of water vapor in the atmosphere. Freezing temperatures are normal during the winter for cities like Washington, D.C., Philadelphia, and New York. Storms called Nor'easters are also normal occurrences. As global warming evaporates more water from the Gulf of Mexico and the Atlantic Ocean, the amount of atmospheric moisture available to fuel these storms has been increasing, thus elevating the risk of "apocalyptic" snowstorms. It should be clear that while one cannot attribute a particular weather event to climate change, it is possible to attribute and project changes in risk of some categories of extreme weather. In order to have confidence in any climate-related risk assessment, the connection between climate change and a particular type of weather event needs to be established by multiple lines of evidence. This connection relies on three supporting avenues of evidence: theory, modeling and observation, which can be viewed as the legs of a stool (Fig. 6). First, scientists must understand the physical basis of why a type of weather event ought to respond to climate change. To assess whether such a response has already begun, observational data should show an increase in frequency, duration, or intensity that is commensurate with the physical understanding. Finally, computational models forced by elevated greenhouse gas concentrations should show an increase in risk that is consistent with theory and observation. Physical understanding (theory) should provide a reason to expect a change in risk. Observations are needed to confirm that a change is taking place and computational modeling can be used to determine whether the observations reconcile with the theory and, if so, to project future changes in risk. There is supporting evidence in all three areas (theory, modeling, and observation) pointing to a global-warming induced increase in risk for four important categories of weather-related extreme events: extreme heat, heavy downpours, drought and drought-associated wildfires. For some other types of weather events, there is not sufficient evidence to conclude that global warming has increased risk. For example, evidence relating hurricane risk to climate change is “two-legged”: There is a physical basis for expecting hurricanes to have stronger winds and produce more rainfall due to global warming, and models with enhanced greenhouse gas levels show an increase in the number of such storms. With two legs of the stool, hurricanes are a type of event that we should consider a potential future threat for increased risk, but more research is needed to confirm. However, observational evidence is insufficient to confirm that such a response has already begun. 
For tornadoes, the evidence is “zero-legged,” meaning that neither theory, modeling, nor observation offer any indication of how tornado risk has changed or might change in the future due to global warming, although that does not mean there is no risk (see Box 2). In addition to aggregate trend analysis, planners and policymakers can and do use individual extreme weather events as laboratories for assessing social and economic vulnerabilities and crafting appropriate actions to minimize the suffering and costs expected from similar events in the future. For example, in 1995 a prolonged heat wave killed hundreds in Chicago, after which the city took effective steps to prepare for future heat waves.44 Prior to the 2003 European heat wave, the possibility that such a deadly heat wave could strike Europe had not been considered. Now that European society is aware of this possibility, preparations have been made to decrease future suffering and economic damage. Similarly, Hurricane Katrina demonstrated that a major American city can be paralyzed for weeks, without effective emergency response, communications, security, sanitation, or health care. Other recent examples of flooding and extreme rainfall should provide lessons on where flood control and emergency response systems are most needed and how much the investments in preparation are worth. Additionally, extreme events represent data points that can improve trends and estimates of future risk, as it is critically important to update trends for estimating existing risk as well as future risk. Both adapting to unavoidable climate change and mitigating future greenhouse gas emissions are required to manage the risks of extreme weather in a warmer climate. Since limiting the amount of CO2 in the atmosphere limits the magnitude of climate change in general, reducing CO2 emissions is effective at preventing both linear increases in risks and the more difficult to predict, nonlinear changes in extremes. Due to this property, mitigation action can be thought of as a benefit multiplier, as linear decreases in emissions can result in nonlinear decreases in extreme risk. Conversely, since climate change is already underway, some impacts are unavoidable and society must adapt to them. In order to be effective, adaptation actions must be commensurate with the magnitude of the risk. Nonlinear increases in risk associated with weather extremes require adaptation actions beyond what would be expected by looking at changes in average climate conditions. Moreover, many adaptation options are likely to be infeasible if the climate changes too much; adequate mitigation is therefore required to facilitate successful adaptation. Science is not a crystal ball, but it offers powerful tools for evaluating the risks of climate change. Scientists can investigate whether the risk of certain types of events is rising by examining recent trends, and also whether the risks are likely to rise in the future using projections from climate models. When these two indicators converge, we should look to reduce vulnerability to such events. Indeed, a growing body of research is using climate models as a mechanism for investigating future increases in risk. Models cannot predict specific events but for some types of extremes they can indicate how risk profiles are likely to change in the future. This approach is particularly powerful when benchmarked against actual events that society agrees should be guarded against. 
In 2000, the United Kingdom experienced devastating autumn floods associated with meteorological conditions that are realistically mimicked in climate models. In a climate model, the risk of severe autumn flooding increased by 20 to 90 percent under present-day greenhouse gas concentrations compared to preindustrial concentrations.45 Conversely, modeling simulations of the deadly 2010 Russian heat wave found no evidence that climate change has so far increased the risk of such an event but did find that continued warming is very likely to produce frequent heat waves of a similar magnitude later this century.46 Hence, regardless of the cause of that particular heat wave, the risk of similar events in the future can be expected to rise with continued warming of the global climate. Because the event was so deadly and economically harmful, the rising risk of similar events should prompt serious consideration of appropriate actions to limit and adapt to this risk. Given the uncertainties and risks, it does not make sense to focus on whether current events are supercharged by climate change. It does make sense, however, to take lessons from them about our current vulnerabilities and the risks inherent in unabated greenhouse gas emissions that drive extreme weather risks ever higher as time passes. Climate science can provide risk-based information that decision makers can use to understand how the risk is changing so that they can prioritize and value investments in prevention and adaptation. 1 Karl, T. R., Meehl, G. A., Miller, C. D., Hassol, S. J., Waple, A. M., & Murray, W. L. (2008). Weather and Cliamte Extremes in a Changing Climate; Regions of Focus: North America, Hawaii, Caribbean, and U.S. Pacific Islands. A Report by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research. Washington, D.C., USA: Department of Commerce, NOAA's National Climatic Data Center. 2 National Climatic Data Center. (2010, December). State of the Climate Global Analysis: Annual 2010. Retrieved May 19, 2011, from http://1.usa.gov/fxdFai. 3 BBC News. (2011, January 1). Australia's Queensland faces 'biblical' flood. Retrieved May 19, 2011, from http://bbc.in/fNzGgK; Associated Press. (2011, May 1). Federal fire crews bring expertist to huge TX fire. Retrieved May 19, 2011, from http://bit.ly/iz6JRs; Associated Press. (2011, June 16). Concern over human-caused blazes grows as wind-driven wildfires promp more evacuations. Retrieved June 22, 2011, from Washington Post: http://wapo.st/iWxirz; Sulzberger, A.G. (2011, June 26). In Minot, N.D., Flood Waters Stop Rising. Retrieved November 22, 2011, from New York Times: http://nyti.ms/ufT9jY; Doyle, R. (2011, September 8). U.S. sweltered through hottest summer in 75 years. Retrieved November 22, 2011, from USA Today: http://usat.ly/o73h4o; Robertson, C. (2011, May 15). Record Water for a Mississippi River City. Retrieved November 22, 2011, from New York Times: http://nyti.ms/lp0cTA; Freedman, A. (2011, September 12). Historic Flooding Recedes in Pennsylvania, New York; at least 15 dead. Retrieved November 22, 2011, from Washington Post: http://wapo.st/qvywOo. 4 Munich Re. (2011, February). Topics Geo Natural catastrophes 2010: Analyses, assessments, positions. Retrieved May 19, 2011, from http://bit.ly/i5zbut. 5 Karl et al., Weather and Climate Extremes in a Changing Climate, Op. cit. 6 Masters, J. (2010, August 7). Dr. Jeff Masters' WunderBlog. Retrieved May 20, 2011, from Weather Underground: http://bit.ly/dxKthO. 7 Masters, J. 
(2010, June 2). Dr. Jeff Masters' WunderBlog. Retrieved May 20, 2011, from Weather Underground: http://bit.ly/bDAvx2. 8 Herrera, M. (n.d.). Extreme temperatures around the world. Retrieved May 20, 2011, from http://bit.ly/crTJ2a. 9 Munich Re, Topics Geo Natural Catastrophes 2010, Op. cit. 10 National Climatic Data Center. State of the Climate Global Analysis: Annual 2010, Op.cit. 11 National Climatic Data Center. Top 10 US Weather/Climate Events of 2011. Retrieved May 19, 2011, from http://1.usa.gov/lGpdnE. 13 National Climatic Data Center (2011, January 12). 2010 Global Climate Highlights. Retrieved May 20, 2011, from http://1.usa.gov/eCwQmd. 14 National Climatic Data Center, State of the Climate Global Analysis: Annual 2010, Op.cit. 15 Biles, P. (2010, April 7). Flooding in Rio de Janeiro state kills scores. Retrieved May 19, 2011, from BBC News: http://bit.ly/kKe20D. O Globo. (2011, February 16). Número de mortos na Região Serrana já passa de 900 após chuvas de janeiro. Retrieved May 19, 2011, from O Globo: http://glo.bo/lMkp7G. 16 Australian Government Bureau of Meteorology. (2010, December 1). Queensland in spring 2010: The wettest spring. Retrieved May 19, 2011, from http://bit.ly/l0FVKs; Australian Government Bureau of Meteorology Queensland Climate Services Centre. (2010). Monthly Weather Review: Queensland December 2010. Brisbane: Commonwealth of Australia. Available at http://bit.ly/jcdZLt. 17 ABC News AU. (2011, January 18). Flood costs tipped to top $30b. Retrieved May 19, 2011, from http://bit.ly/gD7FyR. 18 US Army Corps of Engineers. (n.d.). Fact Sheet: Nashville Flood After Action Report (AAR). Retrieved May 19, 2011, from http://bit.ly/lUtgrR. 19 National Climatic Data Center, Top 10 US Weather/Climate Events of 2010, Op. cit. 20 Associated Press. (2011, November 16). Texas wildfire season roars on, with no end in sight. Retrieved November 22, 2011, from USA Today: http://usat.ly/rKqiWq. 21 Cooper, M. (2011, August 30). Hurricane Cost Seen as Ranking Among Top Ten. Retrieved November 22, 2011, from New York Times: http://nyti.ms/q0KDYG. 22 Schär, C., & Jendritzky, G. (2004). Climate change: Hot news from summer 2003. Nature , 432, 559-560. 23 Larson, L. W. (1996, June). The Great USA Flood of 1993. Retrieved May 19, 2011, from Destructive Water: Water-Caused Natural Disasters- Their Abatement and Control: http://1.usa.gov/4qyQbo; National Climatic Data Center. (2008, July 9). 2008 Midwestern U.S. Floods. Retrieved May 19, 2011, from http://1.usa.gov/iUW1MM. 24 Higgs, K. (2008, August 11). California Wildfires~FEMA EM-3287-CA Total Incidents from 6/22/08-8/11/08. Retrieved May 19, 2011, from http://1.usa.gov/knDLpr. 25 Carlton, J. (2011, March 31). Wet Winter Can't Slake West's Thirst. Retrieved May 19, 2011, from Wall Street Journal: http://on.wsj.com/gmPD3t. 26 Government of Maharashtra Department of Relief and Rehabilitation. (n.d.). Maharashtra Floods 2005. Retrieved May 19, 2011, from http://mdmu.maharashtra.gov.in/pdf/Flood/statusreport.pdf. 29 Karl et al. (2008), Op. cit. 31 Ibid; Ebi, K.L. & Meehl, G.A. (2007). The Heat is On: Climate Change & Heatwaves in the Midwest. In [Gulledge, J. & Smith, J., Eds.] Regional Impacts of Climate Change: Four Case Studies in the United States. Pew Center on Global Climate Change, Arlington, Virginia USA. 32 Karl, T. R., Melillo, J. M., & Peterson, T. C. (2009). Global Climate Change Impacts in the United States. Cambridge University Press. Available at http://1.usa.gov/7Mcd7Q. 33 Meehl, G. 
A., Tebaldi, C., Walton, G., Easterling, D., & McDaniel, L. (2009). The relative increase of record high maximum temperatuers compared to record low minimum temperatures in the U.S. Geophysical Research Letters , 36 (23), L23701. 35 Ebi & Meehl (2007),Op. cit. 36 Ganguly, A. R., Steinhaeuser, K., Erickson III, D. J., Branstetter, M., Parish, E. S., Singh, N., et al. (2009). Higher trends but larger uncertianty and geographic variability in 21st century temperature and heat waves. PNAS , 106 (37), 15555-15559. 37 Seidel, D. J., Fu, Q., Randel, W. J., & Reichler, T. J. (2007). Widening of the tropical belt in a changing climate. Nature Geoscience , 1, 21-24. 38 National Climatic Data Center, (2011). U.S. Climate Extremes Index, National Oceanic and Atmospheric Administration. Retrieved November 22, 2011, from: http://1.usa.gov/vAH4Qx. 39 IPCC, (2001). Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, & C.A. Johnson (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA. 40 Schar, C., Vidale, P.L., Luthi, D., Frei, C., Haberli, C., Liniger, M.A. & Appenzeller, C. (2004). The Role of Increasing Temperature Variability in European Summer Heatwaves. Nature 427, 332-336. 41 Stott, P. A., Stone, D., & Allen, M. (2004). Human contribution to the European heatwave of 2003. Nature , 432, 610-614. 42 Barton, N.H., Briggs, D.E.G., Eisen, J.A., Goldstein, D.B. & Patel, N.H. (2010). Evolution. Cold Spring Harbor Laboratory Press, Cold Spring Harbor, NY. 43 Lenderink, G., & van Meijgaard, E. (2008). Increase in hourly precipitation extremes beyond expectations from temperature changes. Nature Geoscience, 1, 511-514. 44 Ebi & Meehl (2007), Op. cit. 45 Pall, P., Aina, T., Stone, D. A., Stott, P. A., Nozawa, T., Hilberts, A. G., et al. (2011). Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000. Nature , 470, 382-385. 46 Dole, R., Hoerling, M., Perlwitz, J., Eischeid, J., Pegion, P., Zhang, T., et al. (2011). Was there a basis for anticipating the 2010 Russian heat wave? Geophysical Research Letters , 38, L06702.
http://www.c2es.org/publications/extreme-weather-and-climate-change
History of Honduras

Honduras was already occupied by many indigenous peoples when the Spanish arrived in the 16th century. The western-central part of Honduras was inhabited by the Lencas, the central north coast by the Tol, the area east and west of Trujillo by the Pech (or Paya), and other regions by the Maya and Sumo. These autonomous groups maintained commercial relationships with each other and with other populations as distant as Panama and Mexico.

Pre-Columbian era

Archaeologists have demonstrated that Honduras has a multi-ethnic prehistory. An important part of that prehistory was the Mayan presence around the city of Copán, in western Honduras near the Guatemalan border. Copán was a major Maya city that began flourishing around 150 A.D. but reached its height in the Late Classic (700-850 A.D.). It has many carved inscriptions and stelae. The ancient kingdom, named Xukpi, existed from the 5th century to the early 9th century, with antecedents going back to at least the 2nd century. The Mayan civilization began a marked decline in population during the 9th century, but there is evidence of people still living in and around the city until at least 1200. By the time the Spanish came to Honduras, the once great city-state of Copán was overrun by the jungle, and the surviving Ch'orti' were isolated from their Choltian linguistic peers to the west. The non-Maya Lencas were then dominant in western Honduras.

Conquest period

Honduras was first sighted by Europeans when Christopher Columbus arrived at the Bay Islands on 30 July 1502 on his fourth voyage. On August 14, 1502, Columbus landed on the mainland near modern Trujillo. Columbus named the country Honduras ("Depths") for the deep waters off its coast.

In January 1524, Hernán Cortés directed captain Cristóbal de Olid to establish a colony for him in Honduras. Olid sailed with a force of several ships and over 400 soldiers and colonists. He sailed first to Cuba to pick up supplies Cortés had arranged for him; there, Governor Velázquez convinced him to claim the colony he was to found as his own. Olid sailed from Cuba to the coast of Honduras, coming ashore east of Puerto Caballos at Triunfo de la Cruz, where he initially settled and declared himself governor. In 1524, however, Cortés got word of Olid's insurrection and sent his cousin, Francisco de las Casas, along with several ships to Honduras to remove Olid and claim the area for Cortés. Las Casas lost most of his fleet in a series of storms along the coast of Belize and Honduras, and his remaining ships limped into the bay at Triunfo, where Olid had established his headquarters.

When Las Casas arrived at Olid's headquarters, a large part of Olid's army was inland, dealing with another threat from a party of Spaniards under Gil González Dávila. Nevertheless, Olid decided to launch an attack with two caravels. Las Casas returned fire and sent boarding parties which captured Olid's ships. Under the circumstances, Olid proposed a truce, to which Las Casas agreed, and he did not land his forces. During the night, a fierce storm destroyed his fleet and about a third of his men were lost. The remainder were taken prisoner after two days of exposure and without food. After being forced to swear loyalty to Olid, they were released. Las Casas, however, was kept a prisoner, soon to be joined by González, who had been captured by Olid's inland force.
The Spanish record two different stories about what happened next. Antonio de Herrera y Tordesillas, writing in the 17th century, records that Olid's soldiers rose up and murdered him. Bernal Díaz del Castillo, in his Verdadera Historia de la Conquista de Nueva España, recalls that Las Casas captured Olid and beheaded him at Naco.

In the meantime, Cortés had marched overland from Mexico to Honduras, arriving in 1525. Cortés ordered the founding of two cities, Nuestra Señora de la Navidad, near modern Puerto Cortés, and Trujillo, and named Francisco de las Casas governor. However, both Las Casas and Cortés sailed back to Mexico before the end of 1525, where Francisco was arrested and sent back to Spain as a prisoner by Estrada and Albornoz. Francisco returned to Mexico in 1527, and returned again to Spain with Cortés in 1528. On April 25, 1526, before returning to Mexico, Cortés appointed Hernando de Saavedra as governor of Honduras and left instructions to treat the indigenous people well.

On October 26, 1526, Diego López de Salcedo was appointed by the emperor as governor of Honduras, replacing Saavedra. The next decade was marked by clashes between the personal ambitions of the rulers and conquerors, which hindered the installation of good government. The Spanish colonists rebelled against their leaders, and the indigenous people rebelled against their masters and against the abuses their new masters imposed on them. Salcedo, seeking to enrich himself, had serious clashes with Pedrarias, the governor of Castilla del Oro, who for his part wanted to have Honduras as part of his domains. In 1528, Salcedo arrested Pedrarias and forced him to cede part of his Honduran domain, but Charles V, Holy Roman Emperor, rejected the agreement. After the death of Salcedo in 1530, the settlers became the arbiters of power; governors were put in place and removed in rapid succession. In this situation, the settlers asked Pedro de Alvarado to end the anarchy. With the arrival of Alvarado in 1536, chaos decreased and the region was brought under authority. In 1537, Francisco de Montejo was appointed governor. He set aside the division of territory made by Alvarado upon arrival in Honduras. One of his principal captains, Alonso de Cáceres, was responsible for quelling the indigenous revolt led by the cacique Lempira in 1537 and 1538. In 1539, Alvarado and Montejo had serious disagreements over who was governor, which caught the attention of the Council of the Indies. Montejo went to Chiapas, and Alvarado became governor of Honduras.

During the period leading up to the conquest of Honduras by Pedro de Alvarado, many indigenous people along the north coast of Honduras were captured and taken as slaves to work on Spain's Caribbean plantations. It was not until Pedro de Alvarado defeated the indigenous resistance headed by Çocamba near Ticamaya in 1536 that the Spanish began to conquer the interior of the country. Alvarado divided the native towns and gave their labor to the Spanish conquistadors in repartimiento. Further indigenous uprisings near Gracias a Dios, Comayagua, and Olancho occurred in 1537-38. The uprising near Gracias a Dios was led by Lempira, who is honored today in the name of the Honduran currency.

Colonial Honduras

The defeat of Lempira's revolt and the decline in fighting among rival Spanish factions both contributed to expanded settlement and increased economic activity in Honduras.
In the late 1540s, Honduras seemed to be heading toward development and prosperity, thanks to the establishment of Gracias as the regional capital of the Audiencia of Guatemala (1544). However, this decision created resentment in the more populated areas of Guatemala and El Salvador. In 1549, the capital was moved to Antigua, Guatemala, and Honduras remained a province within the Captaincy General of Guatemala until 1821.

Colonial mining operations

The initial mining centers were located near the Guatemalan border, around Gracias. In 1538, these mines produced significant quantities of gold. In the early 1540s, the center of mining shifted eastward to the Río Guayape Valley, and silver joined gold as a major product. This change contributed to the rapid decline of Gracias and the rise of Comayagua as the center of colonial Honduras. The demand for labor also led to further revolts and accelerated the decimation of the native population. As a result, African slavery was introduced into Honduras, and by 1545 the province may have had as many as 2,000 slaves. Other gold deposits were found near San Pedro Sula and the port of Trujillo.

Mining production began to decline in 1560, and with it the importance of Honduras. In early 1569, new silver discoveries briefly revived the economy and led to the founding of Tegucigalpa, which soon began to rival Comayagua as the most important city of the province. The silver boom peaked in 1584, and economic depression returned shortly thereafter. Honduran mining efforts were hampered by a lack of capital and labor and by the difficult terrain. Mercury, vital for the production of silver, was scarce, and official neglect compounded the problem.

The partially conquered northern coast

While the Spanish made significant conquests in the southern half of the area, they had less success in the Caribbean section to the north. They founded a number of towns on the coast, notably Trujillo in the east and Puerto Caballos in the west, and sent minerals and other exports across the country from the Pacific side to be shipped to Spain from the Atlantic ports. They also founded a number of inland towns on the northwestern side of the province, notably Naco and San Pedro Sula. On the northeastern side, the "province" of Taguzgalpa resisted all attempts to conquer it, physically in the sixteenth century and spiritually, by missionaries, in the seventeenth and eighteenth centuries. Among the groups found along the northern coast and in neighboring Nicaragua were the Miskito, who, although organized in a democratic and egalitarian way, had the institution of a king and hence became known as the Mosquito Kingdom.

One of the major problems for the Spanish rulers of Honduras was the activity of the British in northern Honduras, a region over which the Spanish had only tenuous control. These activities began in the sixteenth century and continued until the nineteenth century. In the early years, European pirates frequently attacked the villages on the Honduran Caribbean coast. The Providence Island Company, which occupied Providence Island not far from the coast, raided the shore occasionally and probably also had some settlements there, possibly around Cape Gracias a Dios. Around 1638, the king of the Miskito visited England and made an alliance with the English crown. In 1643, an English expedition destroyed the city of Trujillo, Honduras's main port.
The British and the Miskito Kingdom

The Spanish sent a fleet from Cartagena which destroyed the English colony at Providence Island in 1641, and for a time the presence of an English base so close to the shore was eliminated. At about the same time, however, a group of slaves revolted and captured the ship on which they were traveling, wrecking it at Cape Gracias a Dios. Managing to get ashore, they were received by the Miskito, and within a generation they had given rise to the Miskito Zambo, a mixed-race group that by 1715 had become the leaders of the kingdom. Meanwhile, the English captured Jamaica in 1655 and soon were seeking allies on the coast. They hit upon the Miskito, whose king Jeremy visited Jamaica in 1687.

A variety of other Europeans made settlements in the area during this time. An account of 1699 reveals a patchwork of private individuals, large Miskito family groups, Spanish settlements, and pirate hideouts along the coast. Britain declared much of the area a protectorate in 1740, though it exercised little authority as a result of this decision. British colonization was particularly strong in the Bay Islands, and alliances between the British and the Miskito, as well as more local supporters, made this an area the Spanish could not easily control and a haven for pirates.

Bourbon reforms

In the early eighteenth century, the Bourbon dynasty, linked to the rulers of France, replaced the Habsburgs on the throne of Spain. The new dynasty began a series of reforms throughout the empire (the Bourbon Reforms), designed to make administration more efficient and profitable and to facilitate the defense of the colonies. Among these reforms was a reduction in the tax on precious metals and in the cost of mercury, which was a royal monopoly. In Honduras, these reforms contributed to the resurgence of the mining industry in the 1730s.

Under the Bourbons, the Spanish government made several efforts to regain control of the Caribbean coast. In 1752, the Spaniards built the fort of San Fernando de Omoa. In 1780, the Spanish returned to Trujillo, which served as a base of operations against British settlements to the east. During the 1780s, the Spanish regained control of the Bay Islands and captured most of the British and their allies in the Black River area. They were not, however, able to expand their control beyond Puerto Caballos and Trujillo, thanks to determined Miskito resistance. The Anglo-Spanish Convention of 1786 issued the final recognition of Spanish sovereignty over the Caribbean coast.

Honduras in the nineteenth century

Independence from Spain (1821)

On the news that Guatemala had declared its separation from Spain, the Provincial Government of Comayagua declared Honduras independent of the Spanish monarchy on September 15, 1821.

Federal independence period (1821-1838)

Among the most important figures of the federal era were Dionisio de Herrera, a lawyer and the first democratically elected president of Honduras, whose government, begun in 1824, established the first constitution; General Francisco Morazán, federal president in 1830-1834 and 1835-1839, whose figure embodies the Central American unionist ideal; and José Cecilio del Valle, editor of the Declaration of Independence signed in Guatemala on September 15, 1821 and foreign minister of Mexico in 1823.
Soon, social and economic differences between Honduras and its regional neighbors exacerbated harsh partisan strife among Central American leaders and brought about the collapse of the Federation in 1838-39. General Francisco Morazán, a Honduran national hero, led unsuccessful efforts to maintain the federation. Restoring Central American unity remained the officially stated chief aim of Honduran foreign policy until after World War I. Honduras broke away from the Central American Federation in October 1838 and became an independent and sovereign state.

Democratic period (1838-1899)

Comayagua was the capital of Honduras until 1880, when the capital was transferred to Tegucigalpa. In the 1840s and 1850s, Honduras participated in several failed attempts to restore Central American unity, such as the Confederation of Central America (1842-1845), the covenant of Guatemala (1842), the Diet of Sonsonate (1846), the Diet of Nacaome (1847) and the National Representation in Central America (1849-1852). Although Honduras eventually adopted the name Republic of Honduras, the unionist ideal never waned, and Honduras was one of the Central American countries that pushed hardest for the policy of regional unity.

In 1850, Honduras attempted to build, with foreign assistance, an interoceanic railroad from Trujillo to Tegucigalpa and then on to the Pacific coast. The project stalled because of construction difficulties, corruption and other problems, and in 1888 it ran out of money when the line reached San Pedro Sula, a circumstance that helped that city grow into the nation's main industrial center and second largest city. Since independence, nearly 300 small internal rebellions and civil wars have occurred in the country, including some changes of government.

Honduras in the twentieth century

The internationalization of the north, 1899-1932

Periods of political stability and instability alternately aided and hindered the economic revolution that transformed Honduras through the development of a plantation economy on the north coast, a transformation that ultimately led to military interventions by the United States.

The rise of United States influence in Honduras (1899-1919)

In 1899, the banana industry in Honduras was growing rapidly, and the peaceful transfer of power from Policarpo Bonilla to General Terencio Sierra marked the first time in decades that such a constitutional transition had taken place. By 1902, railroads had been constructed along the country's Caribbean coast to accommodate the growing banana industry. Sierra, however, attempted to perpetuate himself in office, refusing to step down after a new president was elected in 1902, and was overthrown by Manuel Bonilla in 1903.

After toppling Sierra, Bonilla, a conservative, imprisoned ex-president Policarpo Bonilla, a liberal rival, for two years and made other attempts to suppress liberals throughout the country, as they were the only group in the country with an organized political party. The conservatives were divided into a host of personalist factions and lacked coherent leadership, but Bonilla made some efforts to reorganize the conservatives into a "national party." The present-day National Party of Honduras (Partido Nacional de Honduras, PNH) traces its origins to his administration. Bonilla proved to be an even greater friend of the banana companies than Sierra had been.
Under Bonilla's rule, the companies gained exemptions from taxes and permission to construct wharves and roads, as well as permission to improve interior waterways and to obtain charters for new railroad construction. Bonilla also successfully established the border with Nicaragua and resisted an invasion from Guatemala in 1906. After fending off the Guatemalan forces, Bonilla sought peace and signed a friendship pact with both Guatemala and El Salvador. Nicaragua's powerful president, José Santos Zelaya, saw this friendship pact as an alliance to counter Nicaragua and began to undermine Bonilla. Zelaya supported liberal Honduran exiles in Nicaragua in their efforts to topple Bonilla, who had established himself as a dictator. Supported by elements of the Nicaraguan army, the exiles invaded Honduras in February 1907. With the assistance of Salvadoran troops, Manuel Bonilla tried to resist, but in March his forces were decisively beaten in a battle notable for the introduction of machine guns into Central American civil strife.

After toppling Bonilla, the exiles established a provisional junta, but this junta did not last. The United States took notice: it was in US interests to contain Zelaya, protect the region around the new Panama Canal, and defend the increasingly important banana trade. The Nicaragua-assisted invasion by Honduran exiles strongly displeased the United States government, which concluded that Zelaya wanted to dominate the entire Central American region. The United States dispatched marines to Puerto Cortés to protect the banana trade, and US naval units sent to Honduras were able to successfully defend Bonilla's last defensive position at Amapala in the Golfo de Fonseca. Through a peace settlement arranged by the US chargé d'affaires in Tegucigalpa, Bonilla stepped down and the war with Nicaragua came to an end. The settlement also provided for the installation of a compromise regime headed by General Miguel R. Dávila in Tegucigalpa.

Zelaya, however, was not pleased by the settlement, as he strongly distrusted Dávila, and he afterwards made a secret arrangement with El Salvador to oust Dávila from office. The plan failed to reach fruition but alarmed the United States. Mexico and the United States then called the five Central American countries into diplomatic talks at the Central American Peace Conference to increase stability in the area. At the conference, the five countries signed the General Treaty of Peace and Amity of 1907, which established the Central American Court of Justice to resolve future disputes among the five nations. Honduras also agreed to become permanently neutral in any future conflicts among the other nations.

In 1908, opponents of Dávila made an unsuccessful attempt to overthrow him. Despite the failure of this coup, the United States became concerned over Honduran instability. The Taft administration saw the huge Honduran debt, over $120 million, as a contributing factor to this instability and began efforts to refinance the largely British debt, with provisions for a United States customs receivership or some similar arrangement. Negotiations were arranged between Honduran representatives and New York bankers headed by J.P. Morgan. By the end of 1909, an agreement had been reached providing for a reduction in the debt and the issuance of new 5 percent bonds: the bankers would control the Honduran railroad, and the United States government would guarantee continued Honduran independence and would take control of customs revenue.
The terms proposed by the bankers met with considerable opposition in Honduras, further weakening the Dávila government. A treaty incorporating the key provisions of this agreement with J.P. Morgan was finally signed in January 1911 and submitted to the Honduran legislature by Dávila. However, that body, in a rare display of independence, rejected it by a vote of thirty-three to five. An uprising in 1911 against Dávila interrupted efforts to deal with the debt problem. The United States stepped in to mediate the conflict, bringing both sides to a conference on one of its warships. The revolutionaries, headed by former president Manuel Bonilla, and the government agreed to a cease-fire and the installation of a provisional president who would be selected by the United States mediator, Thomas Dawson. Dawson selected Francisco Bertrand, who promised to hold early, free elections, and Dávila resigned. The 1912 elections were won by Manuel Bonilla, but he died after just over a year in office. Bertrand, who had been his vice president, returned to the presidency and in 1916 won election for a term that lasted until 1920.

Between 1911 and 1920, Honduras saw relative stability. During this time, railroads expanded throughout Honduras and the banana trade grew rapidly. This stability, however, proved difficult to maintain in the years after 1920. Revolutionary intrigues continued throughout the period, accompanied by constant rumors that one faction or another was being supported by one of the banana companies. The development of the banana industry also contributed to the beginnings of organized labor movements in Honduras and to the first major strikes in the nation's history. The first of these occurred in 1917 against the Cuyamel Fruit Company. The strike was suppressed by the Honduran military, but the following year additional labor disturbances occurred at the Standard Fruit Company's holdings in La Ceiba. In 1920, a general strike hit the Caribbean coast. In response, a United States warship was dispatched to the area, and the Honduran government began arresting leaders. When Standard Fruit offered a new wage equivalent to US$1.75 per day, the strike ultimately collapsed. Labor troubles in the banana zone, however, were far from over.

The fruit companies' activity

The Liberal government opted to expand production in mining and agriculture, and in 1876 it began granting substantial concessions of land and tax exemptions to foreign concerns as well as local businesses. Mining was particularly important, and the new policies coincided with the growth of banana exporting, which began in the Bay Islands in the 1870s and was pursued on the mainland by small and middling farmers in the 1880s. Liberal concessions allowed U.S.-based concerns to enter the Honduran market, first as shipping companies, and then as railroad and banana-producing enterprises. The U.S. companies created very large plantations worked by labor that flooded into the region from the densely settled Pacific side, from other Central American countries, and, thanks to the companies' policies favoring English speakers, from the English-speaking Caribbean. The result was the creation of an enclave economy centered on the settlements and activities of the three major companies: the Cuyamel Fruit Company, Standard Fruit and, particularly, United Fruit after it absorbed Cuyamel in 1930.
Vaccaro Brothers and Company (later known as Standard Fruit), a New Orleans-based fruit corporation, came to Honduras in 1899 to purchase coconuts, oranges and bananas on Roatán Island. After successfully selling these fruits in New Orleans, the company decided to move to the mainland of Honduras. In 1901, Vaccaro Brothers and Company established offices in La Ceiba and Salado and eventually controlled the banana industry between Boca Cerrada and Balfate (an area of about 80 kilometers of coastline).

In 1900, the American businessman Samuel Zemurray came to Honduras with United Fruit to purchase banana plantations. By 1905, Zemurray had started buying his own plantations, and in 1910, after purchasing 5,000 acres (20 km2) of plantation land in Honduras, he formed his own company, the Cuyamel Fruit Company. The two companies' wealth and powerful connections allowed them to gain extraordinary influence in government. Rivalries between the companies escalated in 1910, when United Fruit came to Honduras to set up its own operations; the company had already been a local producer of bananas in Honduras. By 1912, United Fruit had two concessions which it had purchased with government approval: one to build a railroad from Tela to Progreso, in the Sula Valley, and the other to build a railroad from Trujillo to the city of Juticalpa in Olancho. In 1913, United Fruit established the Tela Railroad Company and shortly thereafter a similar subsidiary, the Trujillo Railroad Company; these two railroads managed the concessions which the Honduran government had granted. Through these two railroad companies, United Fruit dominated the banana trade in Honduras.

A census of 1899 revealed that northern Honduras had been exporting bananas for several years and that over 1,000 people in the region between Puerto Cortés and La Ceiba (and inland as far as San Pedro Sula) were tending bananas, most of them smallholders. The fruit companies received very large concessions of land on which to grow bananas, often forcing smallholders who had been growing and exporting bananas off their land or out of business. In addition, the companies brought in many workers from the British West Indies, especially Jamaica and Belize, to work on the plantations but also to serve as lower managers and skilled workers. The companies often favored the West Indian workers because they spoke English and were sometimes better educated than their Honduran counterparts. This perception of foreign occupation, coupled with growing race prejudice against the African-descended West Indians, led to considerable tension as the arrival of the West Indians drove demographic change in the region. The connection between the wealth of the banana trade and the influence of outsiders, particularly North Americans, led O. Henry, the American writer who took temporary refuge in Honduras in 1896-97, to coin the term "banana republic" to describe Honduras.

By 1912, three companies dominated the banana trade in Honduras: Samuel Zemurray's Cuyamel Fruit Company, Vaccaro Brothers and Company, and the United Fruit Company. All three tended to be vertically integrated, owning their own lands, railroad companies and shipping lines, such as United's "Great White Fleet". Through land subsidies granted to the railroads, they soon came to control vast tracts of the best land along the Caribbean coast.
Coastal cities such as La Ceiba, Tela, and Trujillo and towns further inland such as El Progreso and La Lima became virtual company towns. For the next twenty years, the U.S. government was involved in quelling Central American disputes, insurrections, and revolutions, whether supported by neighboring governments or by United States companies. As part of the so-called Banana Wars all around the Caribbean, Honduras saw the insertion of American troops in 1903, 1907, 1911, 1912, 1919, 1924 and 1925. For instance, in 1917 the Cuyamel Fruit Company extended its rail lines into disputed Guatemalan territory. Renewed threat of instability (1919-1924) In 1919, it became obvious that Bertrand would refuse to allow an open election to choose his successor. Such a course of action was opposed by the United States and had little popular support in Honduras. The local military commander and governor of Tegucigalpa, General Rafael López Gutiérrez, took the lead in organizing PLH opposition to Bertrand. López Gutiérrez also solicited support from the liberal government of Guatemala and even from the conservative regime in Nicaragua. Bertrand, in turn, sought support from El Salvador. Determined to avoid an international conflict, the United States, after some hesitation, offered to meditate the dispute, hinting to the Honduran president that if he refused the offer, open intervention might follow. Bertrand promptly resigned and left the country. The United States ambassador helped arrange the installation of an interim government headed by Francisco Bográn, who promised to hold free elections. However, General López Gutiérrez, who now effectively controlled the military situation, made it clear that he was determined to be the next president. After considerable negotiation and some confusion, a formula was worked out under which elections were held. López Gutiérrez won easily in a manipulated election, and in October 1920 he assumed the presidency. During Bográn's brief time in office, he had agreed to a United States proposal to invite a United States financial adviser to Honduras. Arthur N. Young of the Department of State was selected for this task and began work in Honduras in August 1920, continuing to August 1921. While there, Young compiled extensive data and made numerous recommendations, even persuading the Hondurans to hire a New York police lieutenant to reorganize their police forces. Young's investigations clearly demonstrated the desperate need for major financial reforms in Honduras, whose always precarious budgetary situation was considerably worsened by the renewal of revolutionary activities. In 1919, for example, the military had spent more than double the amount budgeted for them, accounting for over 57 percent of all federal expenditures. Young's recommendations for reducing the military budget, however, found little favor with the new López Gutiérrez administration, and the government's financial condition remained a major problem. If anything, continued uprisings against the government and the threat of a renewed Central America conflict made the situation even worse. From 1919 to 1924, the Honduran government expended US$7.2 million beyond the amount covered by the regular budgets for military operations. From 1920 through 1923, seventeen uprisings or attempted coups in Honduras contributed to growing United States concern over political instability in Central America. In August 1922, the presidents of Honduras, Nicaragua, and El Salvador met on the U.S.S. 
Tacoma in the Golfo de Fonseca. Under the watchful eye of the United States ambassadors to their nations, the presidents pledged to prevent their territories from being used to promote revolutions against their neighbors and issued a call for a general meeting of Central American states in Washington at the end of the year. The Washington conference concluded in February with the adoption of the General Treaty of Peace and Amity of 1923, which had eleven supplemental conventions. The treaty in many ways followed the provisions of the 1907 treaty. The Central American court was reorganized, reducing the influence of the various governments over its membership. The clause providing for withholding recognition of revolutionary governments was expanded to preclude recognition of any revolutionary leader, his relatives, or anyone who had been in power six months before or after such an uprising unless the individual's claim to power had been ratified by free elections. The governments renewed their pledges to refrain from aiding revolutionary movements against their neighbors and to seek peaceful resolutions for all outstanding disputes. The supplemental conventions covered everything from the promotion of agriculture to the limitation of armaments. One, which remained unratified, provided for free trade among all of the states except Costa Rica. The arms limitation agreement set a ceiling on the size of each nation's military forces (2,500 men for Honduras) and included a United States-sponsored pledge to seek foreign assistance in establishing more professional armed forces. The October 1923 Honduran presidential elections and the subsequent political and military conflicts provided the first real tests of these new treaty arrangements. Under heavy pressure from Washington, López Gutiérrez allowed an unusually open campaign and election. The long-fragmented conservatives had reunited in the form of the National Party of Honduras (Partido Nacional de Honduras—PNH), which ran as its candidate General Tiburcio Carías Andino, the governor of the department of Cortés. However, the liberal PLH was unable to unite around a single candidate and split into two dissident groups, one supporting former president Policarpo Bonilla, the other advancing the candidacy of Juan Angel Arias. As a result, each candidate failed to secure a majority. Carías received the greatest number of votes, with Bonilla second, and Arias a distant third. By the terms of the Honduran constitution, this stalemate left the final choice of president up to the legislature, but that body was unable to obtain a quorum and reach a decision. In January 1924, López Gutiérrez announced his intention to remain in office until new elections could be held, but he repeatedly refused to specify a date for the elections. Carías, reportedly with the support of United Fruit, declared himself president, and an armed conflict broke out. In February the United States, warning that recognition would be withheld from anyone coming to power by revolutionary means, suspended relations with the López Gutiérrez government for its failure to hold elections. Conditions rapidly deteriorated in the early months of 1924. On February 28, a pitched battle took place in La Ceiba between government troops and rebels. Even the presence of the U.S.S. Denver and the landing of a force of United States Marines were unable to prevent widespread looting and arson resulting in over US$2 million in property damage. 
Fifty people, including a United States citizen, were killed in the fighting. In the weeks that followed, additional vessels from the United States Navy Special Service Squadron were concentrated in Honduran waters, and landing parties were put ashore at various points to protect United States interests. One force of marines and sailors was even dispatched inland to Tegucigalpa to provide additional protection for the United States legation. Shortly before the arrival of the force, López Gutiérrez died, and what authority remained with the central government was being exercised by his cabinet. General Carías and a variety of other rebel leaders controlled most of the countryside but failed to coordinate their activities effectively enough to seize the capital. In an effort to end the fighting, the United States government dispatched Sumner Welles to the port of Amapala; he had instructions to try to produce a settlement that would bring to power a government eligible for recognition under the terms of the 1923 treaty. Negotiations, which were once again held on board a United States cruiser, lasted from April 23 to April 28. An agreement was worked out that provided for an interim presidency headed by General Vicente Tosta, who agreed to appoint a cabinet representing all political factions and to convene a Constituent Assembly within ninety days to restore constitutional order. Presidential elections were to be held as soon as possible, and Tosta promised to refrain from being a candidate. Once in office, the new president showed signs of reneging on some of his pledges, especially those related to the appointment of a bipartisan cabinet. Under heavy pressure from the United States delegation, however, he ultimately complied with the provisions of the peace agreement. Keeping the 1924 elections on track proved to be a difficult task. To put pressure on Tosta to conduct a fair election, the United States continued an embargo on arms to Honduras and barred the government from access to loans—including a requested US$75,000 from the Banco Atlántida. Furthermore, the United States persuaded El Salvador, Guatemala, and Nicaragua to join in declaring that, under the 1923 treaty provision, no leader of the recent revolution would be recognized as president for the coming term. These pressures ultimately helped persuade Carías to withdraw his candidacy and also helped ensure the defeat of an uprising led by General Gregorio Ferrera of the PNH. The PNH nominated Miguel Paz Barahona (1925–29), a civilian, for president. The PLH, after some debate, refused to nominate a candidate, and on December 28 Paz Barahona won virtual unanimous election. Restoration of order (1925-1931) Despite another minor uprising led by General Ferrera in 1925, Paz Barahona's administration was, by Honduran standards, rather tranquil. The banana companies continued to expand, the government's budgetary situation improved, and there was even an increase in labor organizing. On the international front, the Honduran government, after years of negotiations, finally concluded an agreement with the British bondholders to liquidate most of the immense national debt. The bonds were to be redeemed at 20 percent of face value over a thirty-year period. Back interest was forgiven, and new interest accrued only over the last fifteen years of this arrangement. Under the terms of this agreement, Honduras, at last, seemed on the road to fiscal solvency. 
Fears of disturbances increased again in 1928 as the scheduled presidential elections approached. The ruling PNH nominated General Carías while the PLH, united again following the death of Policarpo Bonilla in 1926, nominated Vicente Mejía Colindres. To the surprise of most observers, both the campaign and the election were conducted with a minimum of violence and intimidation. Mejía Colindres won a decisive victory—obtaining 62,000 votes to 47,000 for Carías. Even more surprising was Carías's public acceptance of defeat and his urging of his supporters to accept the new government. Mejía Colindres took office in 1929 with high hopes for his administration and his nation. Honduras seemed on the road to political and economic progress. Banana exports, then accounting for 80 percent of all exports, continued to expand. By 1930 Honduras had become the world's leading producer of the fruit, accounting for one-third of the world's supply of bananas. United Fruit had come increasingly to dominate the trade, and in 1929 it bought out the Cuyamel Fruit Company, one of its two principal remaining rivals. Because conflicts between these companies had frequently led to support for rival groups in Honduran politics, had produced a border controversy with Guatemala, and may have even contributed to revolutionary disturbances, this merger seemed to promise greater domestic tranquility. The prospect for tranquility was further advanced in 1931 when Ferrera and his insurgents were killed, while leading one last unsuccessful effort to overthrow the government, after government troops discovered their hiding place in Chamelecon. Many of Mejía Colindres's hopes, however, were dashed with the onset of the Great Depression. Banana exports peaked in 1930, then declined rapidly. Thousands of workers were laid off, and the wages of those remaining on the job were reduced, as were the prices paid to independent banana producers by the giant fruit companies. Strikes and other labor disturbances began to break out in response to these conditions, but most were quickly suppressed with the aid of government troops. As the depression deepened, the government's financial situation deteriorated; in 1931 Mejía Colindres was forced to borrow US$250,000 from the fruit companies to ensure that the army would continue to be paid. The Era of Tiburcio Carías Andino (1932-1949) Despite growing unrest and severe economic strains, the 1932 presidential elections in Honduras were relatively peaceful and fair. The peaceful transition of power was surprising because the onset of the depression had led to the overthrow of governments elsewhere throughout Latin America, in nations with much stronger democratic traditions than those of Honduras. After United Fruit bought out Cuyamal, Sam Zemurray, a strong supporter of the Liberal Party, left the country and the Liberals were short on cash by the 1932 general election. Mejía Colindres, however, resisted pressure from his own party to manipulate the results to favor the PLH candidate, Angel Zúñiga Huete. As a result, the PNH candidate, Carías, won the election by a margin of some 20,000 votes. On November 16, 1932, Carías assumed office, beginning what was to be the longest period of continuous rule by an individual in Honduran history. Lacking, however, was any immediate indication that the Carías administration was destined to survive any longer than most of its predecessors. 
Shortly before Carías's inauguration, dissident liberals, despite the opposition of Mejía Colindres, had risen in revolt. Carías had taken command of the government forces, obtained arms from El Salvador, and crushed the uprising in short order. Most of Carías's first term in office was devoted to efforts to avoid financial collapse, improve the military, engage in a limited program of road building, and lay the foundations for prolonging his own hold on power. The economic situation remained extremely bad throughout the 1930s. In addition to the dramatic drop in banana exports caused by the depression, the fruit industry was further threatened by the outbreak in 1935 of epidemics of Panama disease (a debilitating fungus) and sigatoka (leaf blight) in the banana-producing areas. Within a year, most of the country's production was threatened. Large areas, including most of those around Trujillo, were abandoned, and thousands of Hondurans were thrown out of work. By 1937 a means of controlling the disease had been found, but many of the affected areas remained out of production because a significant share of the market formerly held by Honduras had shifted to other nations. Carías had made efforts to improve the military even before he became president. Once in office, both his capacity and his motivation to continue and to expand such improvements increased. He gave special attention to the fledgling air force, founding the Military Aviation School in 1934 and arranging for a United States colonel to serve as its commandant. As months passed, Carías moved slowly but steadily to strengthen his hold on power. He gained the support of the banana companies through opposition to strikes and other labor disturbances. He strengthened his position with domestic and foreign financial circles through conservative economic policies. Even in the height of the depression, he continued to make regular payments on the Honduran debt, adhering strictly to the terms of the arrangement with the British bondholders and also satisfying other creditors. Two small loans were paid off completely in 1935. Political controls were instituted slowly under Carías. The Communist Party of Honduras (Partido Comunista de Honduras—PCH) was outlawed, but the PLH continued to function, and even the leaders of a small uprising in 1935 were later offered free air transportation should they wish to return to Honduras from their exile abroad. At the end of 1935, however, stressing the need for peace and internal order, Carías began to crack down on the opposition press and political activities. Meanwhile, the PNH, at the president's direction, began a propaganda campaign stressing that only the continuance of Carías in office could give the nation continued peace and order. The constitution, however, prohibited immediate reelection of presidents. The method chosen by Carías to extend his term of office was to call a constituent assembly that would write a new constitution and select the individual to serve for the first presidential term under that document. Except for the president's desire to perpetuate himself in office, there seemed little reason to alter the nation's basic charter. Earlier constituent assemblies had written thirteen constitutions (only ten of which had entered into force), and the latest had been adopted in 1924. The handpicked Constituent Assembly of 1936 incorporated thirty of the articles of the 1924 document into the 1936 constitution. 
The major changes were the elimination of the prohibition on immediate reelection of a president and vice president and the extension of the presidential term from four to six years. Other changes included restoration of the death penalty, reductions in the powers of the legislature, and denial of citizenship and therefore the right to vote to women. Finally, the new constitution included an article specifying that the incumbent president and vice president would remain in office until 1943. But Carías, by then a virtual dictator, wanted even more, so in 1939 the legislature, now completely controlled by the PNH, obediently extended his term in office by another six years (to 1949). The PLH and other opponents of the government reacted to these changes by attempting to overthrow Carías. Numerous efforts were made in 1936 and 1937, but all were successful only in further weakening the PNH's opponents. By the end of the 1930s, the PNH was the only organized functioning political party in the nation. Numerous opposition leaders had been imprisoned, and some had reportedly been chained and put to work in the streets of Tegucigalpa. Others, including the leader of the PLH, Zúñiga Huete, had fled into exile. During his presidency, Carías cultivated close relations with his fellow Central American dictators, generals Jorge Ubico in Guatemala, Maximiliano Hernández Martínez in El Salvador, and Anastasio Somoza García in Nicaragua. Relations were particularly close with Ubico, who helped Carías reorganize his secret police and also captured and shot the leader of a Honduran uprising who had made the mistake of crossing into Guatemalan territory. Relations with Nicaragua were somewhat more strained as a result of the continuing border dispute, but Carías and Somoza managed to keep this dispute under control throughout the 1930s and 1940s. The value of these ties became somewhat questionable in 1944 when popular revolts in Guatemala and El Salvador deposed Ubico and Hernández Martínez. For a time, it seemed as if revolutionary contagion might spread to Honduras as well. A plot, involving some military officers as well as opposition civilians, had already been discovered and crushed in late 1943. In May 1944, a group of women began demonstrating outside of the Presidential Palace in Tegucigalpa, demanding the release of political prisoners. Despite strong government measures, tension continued to grow, and Carías was ultimately forced to release some prisoners. This gesture failed to satisfy the opposition, and antigovernment demonstrations continued to spread. In July several demonstrators were killed by troops in San Pedro Sula. In October a group of exiles invaded Honduras from El Salvador but were unsuccessful in their efforts to topple the government. The military remained loyal, and Carías continued in office. Anxious to curb further disorders in the region, the United States began to urge Carías to step aside and allow free elections when his current term in office expired. Carías, who by then was in his early seventies, ultimately yielded to these pressures and announced October 1948 elections, in which he would refrain from being a candidate. He continued, however, to find ways to use his power. The PNH nominated Carías's choice for president—Juan Manuel Gálvez, who had been minister of war since 1933. 
Exiled opposition figures were allowed to return to Honduras, and the PLH, trying to overcome years of inactivity and division, nominated Zúñiga Huete, the same individual whom Carías had defeated in 1932. The PLH rapidly became convinced that it had no chance to win and, charging the government with manipulation of the electoral process, boycotted the elections. This act gave Gálvez a virtually unopposed victory, and in January 1949 he assumed the presidency.

Evaluating the Carías presidency is a difficult task. His tenure in office provided the nation with a badly needed period of relative peace and order. The country's fiscal situation improved steadily, education improved slightly, the road network expanded, and the armed forces were modernized. At the same time, nascent democratic institutions withered, opposition and labor activities were suppressed, and national interests at times were sacrificed to benefit supporters and relatives of Carías or major foreign interests.

New Reform (1949-1954)

Once in office, Gálvez demonstrated more independence than had generally been anticipated. Some policies of the Carías administration, such as road building and the development of coffee exports, were continued and expanded. By 1953, nearly one-quarter of the government's budget was devoted to road construction. Gálvez also continued most of the prior administration's fiscal policies, reducing the external debt and ultimately paying off the last of the British bonds. The fruit companies continued to receive favorable treatment at the hands of the Gálvez administration; for example, United Fruit received a highly favorable twenty-five-year contract in 1949.

Gálvez, however, instituted some notable departures from the preceding fifteen years. Education received increased attention and began to receive a larger share of the national budget. Congress actually passed an income tax law, although enforcement was sporadic at best. The most obvious change was in the political arena. A considerable degree of press freedom was restored, the PLH and other groups were allowed to organize, and even some labor organization was permitted. Labor also benefited from legislation during this period. Congress passed, and the president signed, legislation establishing the eight-hour workday, paid holidays for workers, limited employer responsibility for work-related injuries, and regulations for the employment of women and children.

In October 1955, after the general strike of 1954, young military reformists staged a coup that installed a provisional junta. Capital punishment was abolished in 1956, though the last execution had taken place in 1940 (the current president, Porfirio "Pepe" Lobo, has tried to bring it back). Constituent assembly elections were held in 1957; the assembly appointed Ramón Villeda as president and converted itself into a national congress with a six-year term. The PLH ruled during 1957-63. The military began to become a professional institution independent of politics, with the newly created military academy graduating its first class in 1960.

In October 1963, conservative military officers preempted constitutional elections and deposed Villeda in a bloody coup. These officers exiled PLH members and governed under General Oswaldo López until 1970. A civilian president for the PNH, Ramón Ernesto Cruz, took power briefly in 1970 until, in December 1972, Oswaldo López staged another coup. This time around, he adopted more progressive policies, including land reform.
López's successors continued armed forces modernization programs, building army and security forces, and concentrating on Honduran air force superiority over its neighbors. During the governments of General Juan Alberto Melgar (1975–78) and General Policarpo Paz (1978–82), Honduras built most of its physical infrastructure and electricity and terrestrial telecommunications systems, both of which are state monopolies. The country experienced economic growth during this period, with greater international demand for its products and the increased availability of foreign commercial capital. Constituent assembly (1980) In 1979, the country returned to civilian rule. A constituent assembly was popularly elected in April 1980 and general elections were held in November 1981. A new constitution was approved in 1982 and the PLH government of Roberto Suazo assumed power. Roberto Suazo Córdova won the elections with a promise to carry out an ambitious program of economic and social development in Honduras in order to tackle the country's recession. During this time, Honduras also assisted the Contra guerillas. President Suazo launched ambitious social and economic development projects sponsored by American development aid. Honduras became host to the largest Peace Corps mission in the world, and nongovernmental and international voluntary agencies proliferated. Between 1979 and 1985, under John Negroponte's appointment as U.S. diplomat from 1981 to 1985, U.S. military and economic aid to Honduras jumped from $31 million to $282 million. Between 1979 and 1985, U.S. development aid fell from 80% of the total to 6%. The United States established a continuing military presence in Honduras with the purpose of supporting the Contra guerillas fighting the Nicaraguan government and also developed an air strip and a modern port in Honduras. Though spared the bloody civil wars wracking its neighbors, the Honduran army quietly waged a campaign against Marxist-Leninist militias such as Cinchoneros Popular Liberation Movement, notorious for kidnappings and bombings, and many non-militants. The operation included a CIA-backed campaign of extrajudicial killings by government-backed units, most notably Battalion 316. President Suazo, relying on U.S. support, created ambitious social and economic development projects to help with a severe economic recession and with the perceived threats of regional instability. As the November 1985 election approached, the PLH could not settle on a presidential candidate and interpreted election law as permitting multiple candidates from any one party. The PLH claimed victory when its presidential candidates collectively outpolled the PNH candidate, Rafael Leonardo Callejas, who received 42% of the total vote. José Azcona, the candidate receiving the most votes (27%) among the PLH, assumed the presidency in January 1986. With strong endorsement and support from the Honduran military, the Suazo Administration ushered in the first peaceful transfer of power between civilian presidents in more than 30 years. In 1989 he oversaw the dismantling of Contras which were based in Honduras. In January 1990, Rafael Leonardo Callejas, having won the presidential election, took office, concentrating on economic reform, reducing the deficit. He began a movement to place the military under civilian control and laid the groundwork for the creation of the public prosecution service. In 1993, PLH candidate Carlos Roberto Reina was elected with 56% of the vote against PNH contender Oswaldo Ramos. 
He won on a platform calling for a "Moral Revolution," and his administration made active efforts to prosecute corruption and to pursue those responsible for alleged human rights abuses in the 1980s. The Reina administration successfully increased civilian control over the armed forces, transferring the national police from military to civilian authority. In 1996, Reina named his own defense minister, breaking the precedent of accepting the nominee of the armed forces leadership. His administration substantially increased Central Bank net international reserves, reduced inflation to 12.8% a year, restored a better pace of economic growth (about 5% in 1997), and held down spending to achieve a 1.1% non-financial public sector deficit in 1997.

The PLH's Carlos Roberto Flores took office on 27 January 1998 as Honduras's fifth democratically elected president since free elections were restored in 1981, with a 10% margin over his main opponent, PNH nominee Nora Gúnera de Melgar (the widow of former leader Juan Alberto Melgar). Flores inaugurated International Monetary Fund (IMF) programs of reform and modernization of the Honduran government and economy, with emphasis on maintaining the country's fiscal health and improving international competitiveness. In October 1998, Hurricane Mitch devastated Honduras, leaving more than 5,000 people dead and 1.5 million displaced. Damages totaled nearly $3 billion. International donors came forward to assist in rebuilding infrastructure, donating US$1.4 billion in 2000.

Honduras in the twenty-first century

In November 2001, the National Party won the presidential and parliamentary elections. The PNH gained 61 seats in Congress and the PLH won 55. The PLH candidate Rafael Pineda was defeated by the PNH candidate Ricardo Maduro, who took office in January 2002. The Maduro administration emphasized stopping the growth of the maras (street gangs), especially Mara 18 and Mara Salvatrucha. José Manuel Zelaya Rosales of the Liberal Party of Honduras won the 27 November 2005 presidential elections with less than a 4% margin of victory, the smallest margin ever in Honduran electoral history. Zelaya's campaign theme was "citizen power," and he vowed to increase transparency and combat narcotrafficking while maintaining macroeconomic stability. The Liberal Party won 62 of the 128 congressional seats, just short of an absolute majority.

In 2009 Zelaya caused controversy with his call for a referendum in June on whether to convene a constituent national assembly to draft a new constitution. The constitution explicitly bars changes to some of its clauses, including the presidential term limit, and the move precipitated a constitutional crisis. The Honduran Supreme Court issued an injunction against holding the referendum. Zelaya rejected the ruling and sacked Romeo Vásquez Velásquez, the head of Honduras's armed forces, who had refused to help with the referendum because he did not want to violate the law. The sacking was deemed unlawful by the Supreme Court as well as by the Congress, and Vásquez was reinstated. The president then further defied the Supreme Court by pressing ahead with the vote, which the Court had deemed "illegal". The military confiscated the ballots and ballot boxes and held them at a military base in Tegucigalpa. On June 27, a day before the planned vote, Zelaya, followed by a large group of supporters, entered the base and, as commander-in-chief of the armed forces, ordered that the ballots and ballot boxes be returned to him. The Congress saw this as an abuse of power and ordered his capture.
On June 28, 2009, the military removed Zelaya from office and deported him to Costa Rica, a neutral country. Elvin Santos, the vice-president at the start of Zelaya's term, had resigned in order to run for president in the coming elections, so by the presidential line of succession the head of Congress, Roberto Micheletti, was appointed president. However, because of the stance taken by the United Nations and the Organization of American States against the use of military force to depose a president, most countries in the region and in the world continued to recognize Zelaya as the president of Honduras and denounced the actions as an assault on democracy. Honduras continued to be ruled by Micheletti's administration under strong foreign pressure. On November 29, democratic general elections were held, with former congressional president and 2005 nominee Porfirio "Pepe" Lobo as the victor. Inaugurated on January 27, 2010, Lobo and his administration focused throughout their first year on securing foreign recognition of the presidency's legitimacy and on Honduras's reinstitution in the OAS.

See also
- History of the Americas
- History of Central America
- History of Latin America
- History of North America
- List of Presidents of Honduras
- Politics of Honduras
- Spanish colonization of the Americas

References
- "Background Note: Honduras". United States Department of State.
- Paine, Richard R. and Freter, AnnCorinne (1996). "Environmental Degradation and the Classic Maya Collapse at Copan, Honduras". Ancient Mesoamerica 7:37–47. Cambridge University Press.
- Newson, Linda. The Cost of Conquest: Indian Decline in Honduras Under Spanish Rule. Dellplain Latin American Studies, No. 20. Boulder: Westview Press.
- Pascal Girot (1994). The Americas. pp. 284–285. ISBN 0-415-08836-4. Retrieved 27 January 2011.
- "Honduras: A Country Study". Tim Merrill for the Library of Congress. 1995. Retrieved 2011-03-20.
- Karen Ordahl Kupperman, Providence Island: The Other Puritan Colony, 1631-41 (Cambridge, 19xx), pp.
- Vera, Robustiano (1899). Apuntes para la Historia de Honduras (in Spanish). Tegucigalpa: Imprenta El Correo.
- M. W., "The Mosqueto Indian and his Golden River", in Awnsham Churchill, A Collection of Voyages and Travels (London, 1732), vol. 6.
- Troy Floyd, The Anglo-Spanish Struggle for Mosquitia (Albuquerque, 1967).
- Glen Chambers, Race, Nation and West Indian Immigration to Honduras, 1890-1940 (Baton Rouge: Louisiana State University Press, 2010).
- Soluri, John. Banana Cultures: Agriculture, Consumption, and Environmental Change in Honduras and the United States (Austin: University of Texas Press, 2005).
- "CARIBBEAN: Alarums". Time. 4 May 1931.
- "Cables Show Central Negroponte Role in 80's Covert War Against Nicaragua". The New York Times.
- "Cinchoneros Popular Liberation Movement".
- "A survivor tells her story". baltimoresun.com, 15 June 1995. Retrieved 8 January 2007.
- "With Contras' Fate at Stake, Honduran Is Man in Middle at Talks". The New York Times, 7 August 1990. Retrieved 5 November 2009.
- Sigue rechazo a la cuarta urna.
- Cuevas, Freddy; Jorge Rueda, Carlos Rodriguez, Edith M. Lederer (26 June 2009). "Honduras heads toward crisis over referendum". Associated Press. Retrieved 26 June 2009.
- "Honduran leader pushes ahead with divisive vote".
- "Honduran leader defies top court". BBC. 26 June 2009. Retrieved 5 January 2010.
- Voice of America, "Honduran President Ousted by Military" (28 June 2009).
- Elecciones Generales, "Estadísticas 2009" (29 Nov. 2010).
http://en.wikipedia.org/wiki/History_of_Honduras
In cryptography, block ciphers are one of the two main types of symmetric cipher; they operate on fixed-size blocks of plaintext, giving a block of ciphertext for each. The other main type is the stream cipher, which generates a continuous stream of keying material to be mixed with messages. The basic function of block ciphers is to keep messages or stored data secret; the intent is that an unauthorised person be completely unable to read the enciphered material. Block ciphers therefore use a key and are designed to be hard to read without that key. Of course an attacker's intent is exactly the opposite; he wants to read the material without authorisation, and often without the key. See cryptanalysis for his methods.

Among the best-known and most widely used block ciphers are two US government standards. The Data Encryption Standard (DES) from the 1970s is now considered obsolete; the Advanced Encryption Standard (AES) replaced it in 2002. To choose the new standard, the National Institute of Standards and Technology ran an AES competition. Fifteen ciphers were entered, five finalists selected, and eventually AES chosen. The text below gives an overview; for details of the process and the criteria, and descriptions of all fifteen candidates, see the AES competition article. These standards greatly influenced the design of other block ciphers, and the latter part of this article is divided into sections based on that. DES and alternatives describes 20th century block ciphers, all with the 64-bit block size of DES. The AES generation describes the next generation, the first 21st century ciphers, all with the 128-bit block size of AES. Large-block ciphers covers a few special cases that do not fit in the other sections.

Block ciphers are essential components in many security systems. However, just having a good block cipher does not give you security, much as just having good tires does not give you transportation. It may not even help; good tires are of little use if you need a boat. Even in systems where block ciphers are needed, they are never the whole story. This section gives an overview of the rest of the story; it aims to provide a context for the rest of the article by mentioning some issues that, while outside the study of the ciphers themselves, are crucially important in understanding and using these ciphers.

Any cipher is worthless without a good key. Keys must be kept secure, they should be large enough and sufficiently random that searching for the key (a brute force attack) is effectively impossible, and in any application which encrypts large volumes of data, the key must be changed from time to time. See the cryptography article for discussion. It is hard to design any system that must withstand adversaries; see cryptography is difficult. In particular, block ciphers must withstand cryptanalysis; it is impossible to design a good block cipher, or to evaluate the security of one, without a thorough understanding of the available attack methods. Also, Kerckhoffs' Principle applies to block ciphers; no cipher can be considered secure unless it can resist an attacker who knows all its details except the key in use.
Analysis of security claims cannot even begin until all internal details of a cipher are published, so anyone making security claims without publishing those details will be either ignored or mocked by most experts. A block cipher defines how a single block is encrypted; a mode of operation defines how multiple block encryptions are combined to achieve some larger goal. Using a mode that is inappropriate for the application at hand may lead to insecurity, even if the cipher itself is secure. A block cipher can be used to build another cryptographic function such as a random number generator, a stream cipher, or a cryptographic hash. These are primarily a matter of choosing the correct mode, but there are more general design issues as well; see the linked articles for details. Block ciphers are often used as components in hybrid cryptosystems; these combine public key (asymmetric) cryptography with secret key (symmetric) techniques such as block ciphers or stream ciphers. Typically, the symmetric cipher is the workhorse that encrypts large amounts of data; the public key mechanism manages keys for the symmetric cipher and provides authentication. Generally other components such as cryptographic hashes and a cryptographically strong random number generator are required as well. Such a system can only be as strong as its weakest link, and it may not even be that strong. Using secure components including good block ciphers is certainly necessary, but just having good components does not guarantee that the system will be secure. See hybrid cryptosystem for how the components fit together, and information security for broader issues. That said, we turn to the block ciphers themselves.

One could say there are only three things to worry about in designing a block cipher:
- make the block size large enough that an enemy cannot create a code book, collecting so many known plaintext/ciphertext pairs that the cipher is broken.
- make the key size large enough that he cannot use a brute force attack, trying all possible keys.
- then design the cipher well enough that no other attack is effective.

Getting adequate block size and key size is the easy part; just choose large enough numbers. This section describes how those choices are made. Making ciphers that resist attacks that are cleverer than brute force (see cryptanalysis) is far more difficult. The following section, Principles and techniques, covers ideas and methods for that. Later on, we describe two generations of actual ciphers. The 20th century ciphers use 64-bit blocks and key sizes from 56 bits up. The 21st century ciphers use 128-bit blocks and 128-bit or larger keys. If two or more ciphers use the same block and key sizes, they are effectively interchangeable. One can replace another in almost any application without requiring any other change to the application. This might be done to comply with a particular government's standards, to replace a cipher against which some new attack had been discovered, to provide efficiency in a particular environment, or simply to suit a preference. Nearly all cryptographic libraries give a developer a choice of components, and some protocols such as IPsec allow a network administrator to select ciphers. This may be a good idea if all the available ciphers are strong, but if some are weak it just gives the developer or administrator, neither of whom is likely to be an expert on ciphers, an opportunity to get it wrong. There is an argument that supporting multiple ciphers is an unnecessary complication.
On the other hand, being able to change ciphers easily if one is broken provides a valuable safety mechanism. Striking some sort of balance with a few strong ciphers is probably the best policy. The block size of a cipher is chosen partly for implementation convenience; using a multiple of 32 bits makes software implementations simpler. However, it must also be large enough to guard against code book attacks. DES and the generation of ciphers that followed it all used a 64-bit block size. To weaken such a cipher significantly the attacker must build up a code book with 2^32 blocks, 32 gigabytes of data, all encrypted with the same key. As long as the cipher user changes keys reasonably often, a code book attack is not a threat. Procedures and protocols for block cipher usage therefore always include a re-keying policy. However, with Moore's Law making larger code books more practical, NIST chose to play it safe in their AES specifications; they used a 128-bit block size. This was a somewhat controversial innovation at the time (1998), since it meant changes to a number of applications and it was not absolutely clear that the larger size was necessary. However, it has since become common practice; later ciphers such as Camellia, SEED and ARIA also use 128 bits. There are also a few ciphers which either support variable block size or have a large fixed block size. See the section on large-block ciphers for details.

In theory, any cipher except a one-time pad can be broken by a brute force attack; the enemy just has to try keys until he finds the right one. However, the attack is practical only if the cipher's key size is inadequate. If the key uses n bits, there are 2^n possible keys and on average the attacker must test half of them, so the average cost of the attack is 2^(n-1) encryptions. Current block ciphers all use at least 128-bit keys, which makes brute force attacks utterly impractical. Suppose an attacker has a billion processors in a monster parallel machine (several orders of magnitude more than any current machine) and each processor can test a billion keys a second (also a generous estimate; if the clock is k GHz, the processor must do an encryption in k cycles to achieve this). This amazingly powerful attacker can test about 2^60 keys a second, so he needs 2^67 seconds against a 128-bit key. There are about 2^25 seconds in a year, so that is about 2^42 years. This is over 4,000,000,000,000 (four trillion) years, so the cipher is clearly secure against brute force. Many ciphers support larger keys as well; the reasons are discussed in the brute force attack article.

Principles and techniques

This section introduces the main principles of block cipher design, defines standard terms, and describes common techniques.

Iterated block ciphers

Nearly all block ciphers are iterated block ciphers; they have multiple rounds, each applying the same transformation to the output of the previous round. At setup time, a number of round keys or subkeys are computed from the primary key; the method used is called the cipher's key schedule. In the actual encryption or decryption, each round uses its own round key. This allows the designer to define some relatively simple transformation, called a round function, and apply it repeatedly to create a cipher with enough overall complexity to thwart attacks.
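As a minimal Python sketch of this iterated structure, with a purely illustrative round function and key schedule (invented for this example, not any real cipher):

```python
# Toy illustration of an iterated block cipher: a key schedule derives one
# subkey per round, and the same round function is applied repeatedly.
# The round function here is a placeholder, NOT a secure design.

BLOCK_BITS = 64
MASK = (1 << BLOCK_BITS) - 1

def key_schedule(key: int, rounds: int) -> list[int]:
    """Derive one 64-bit round key per round from the primary key (toy)."""
    subkeys = []
    k = key & MASK
    for i in range(rounds):
        k = ((k << 7) | (k >> (BLOCK_BITS - 7))) & MASK  # rotate the key state
        subkeys.append(k ^ i)
    return subkeys

def round_function(block: int, subkey: int) -> int:
    """One round: mix in the subkey, then stir the bits a little (toy)."""
    block = (block + subkey) & MASK
    block ^= ((block << 13) | (block >> (BLOCK_BITS - 13))) & MASK
    return block

def encrypt(block: int, key: int, rounds: int = 16) -> int:
    for subkey in key_schedule(key, rounds):
        block = round_function(block, subkey)
    return block
```

A real cipher's rounds must of course be invertible (or be used inside a structure, such as the Feistel construction below, that does not require an invertible round function); this sketch only shows the key-schedule-plus-rounds shape.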
Three common ways to design iterated block ciphers — SP networks, Feistel structures and the Lai-Massey construction — and two important ways to look at the complexity requirements — avalanche and nonlinearity — are covered in following sections. Any iterated cipher can be made more secure by increasing the number of rounds or made faster by reducing the number. In choosing the number of rounds, the cipher designer tries to strike a balance that achieves both security and efficiency simultaneously. Often a safety margin is applied; if the cipher appears to be secure after a certain number of rounds, the designer specifies a somewhat larger number for actual use. There is a trade-off that can be made in the design. With a simple fast round function, many rounds may be required to achieve adequate security; for example, GOST and TEA both use 32 rounds. A more complex round function might allow fewer rounds; for example, IDEA uses only 8 rounds. Since the ciphers with fast round functions generally need more rounds and the ones with few rounds generally need slower round functions, neither strategy is clearly better. Secure and reasonably efficient ciphers can be designed either way, and compromises are common. In cryptanalysis it is common to attack reduced round versions of a cipher. For example, in attacking a 16-round cipher, the analyst might start by trying to break a two-round or four-round version. Such attacks are much easier. Success against the reduced round version may lead to insights that are useful in work against the full cipher, or even to an attack that can be extended to break the full cipher.

Whitening and tweaking

Nearly all block ciphers use the same basic design, an iterated block cipher with multiple rounds. However, some have additional things outside that basic structure. Whitening involves mixing additional material derived from the key into the plaintext before the first round, or into the ciphertext after the last round, or both. The technique was introduced by Ron Rivest in DES-X and has since been used in other ciphers such as RC6, Blowfish and Twofish. If the whitening material uses additional key bits, as in DES-X, then this greatly increases resistance to brute force attacks because of the larger key. If the whitening material is derived from the primary key during key scheduling, then resistance to brute force is not increased since the primary key remains the same size. However, using whitening is generally much cheaper than adding a round, and it does increase resistance to other attacks; see papers cited for DES-X. A recent development is the tweakable block cipher. Where a normal block cipher has only two inputs, plaintext and key, a tweakable block cipher has a third input called the tweak. The tweak, along with the key, controls the operation of the cipher. Whitening can be seen as one form of tweaking, but many others are possible. If changing tweaks is sufficiently lightweight, compared to the key scheduling operation which is often fairly expensive, then some new modes of operation become possible. Unlike the key, the tweak need not always be secret, though it should be somewhat random and in some applications it should change from block to block. Tweakable ciphers and the associated modes are an active area of current research. The Hasty Pudding Cipher was one of the first tweakable ciphers, pre-dating the Tweakable Block Ciphers paper and referring to what would now be called the tweak as "spice".
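A hedged sketch of the whitening idea, in the DES-X style (XOR key material into the block before and after the core cipher); the function names and the caller-supplied core cipher are invented for this illustration:

```python
from typing import Callable

def encrypt_with_whitening(block: int,
                           k_pre: int,
                           k_post: int,
                           core_encrypt: Callable[[int], int]) -> int:
    """DES-X style whitening: C = k_post XOR E(P XOR k_pre)."""
    return core_encrypt(block ^ k_pre) ^ k_post

def decrypt_with_whitening(block: int,
                           k_pre: int,
                           k_post: int,
                           core_decrypt: Callable[[int], int]) -> int:
    """Inverse of the above: P = k_pre XOR D(C XOR k_post)."""
    return core_decrypt(block ^ k_post) ^ k_pre
```

If k_pre and k_post are extra key bits (as in DES-X), the effective key is larger at almost no runtime cost; if they are derived from the primary key, the brute-force cost is unchanged but other attacks become harder, as described above.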
The designer wants changes to quickly propagate through the cipher. This was named the avalanche effect in a paper by Horst Feistel. The idea is that changes should build up like an avalanche, so that a tiny initial change (consider a snowball tossed onto a mountain) quickly creates large effects. The term and its exact application were new, but the basic concept was not; avalanche is a variant of Claude Shannon's diffusion, and that in turn is a formalisation of ideas that were already in use. If a single bit of input or of the round key is changed at round n, that should affect all bits of the ciphertext by round n + k for some reasonably small k. Ideally, k would be 1, but this is not generally achieved in practice. Certainly k must be much less than the total number of rounds; if k is large, then the cipher will need more rounds to be secure. The strict avalanche criterion is a strong version of the requirement for good avalanche properties. Complementing any single bit of the input or the key should give exactly a 50% chance of a change in any given bit of output.

In Claude Shannon's terms, a cipher needs both confusion and diffusion, and a general design principle is that of the product cipher which combines several operations to achieve both goals. This goes back to the combination of substitution and transposition in various classical ciphers from before the advent of computers. All modern block ciphers are product ciphers. Two structures are very commonly used in building block ciphers — SP networks and the Feistel structure. The Lai-Massey construction is a third alternative, less common than the other two. In Shannon's terms, all of these are product ciphers. Any of these structures is a known quantity for a cipher designer, part of the toolkit. He or she gets big chunks of a design — an overall cipher structure with a well-defined hole for the round function to fit into — from the structure. This leaves him or her free to concentrate on the hard part, designing the actual round function. None of these structures gives ideal avalanche in a single round but, with any reasonable round function, all give excellent avalanche after a few rounds. Not all block ciphers use one of these structures, but most do. This section describes these common structures.

A substitution-permutation network or SP network or SPN is Shannon's own design for a product cipher. It uses two layers in each round: a substitution layer provides confusion, then a permutation layer provides diffusion. The S-layer typically uses look-up tables called substitution boxes or S-boxes, though other mechanisms are also possible. The input is XOR-ed with a round key, split into parts and each part used as an index into an S-box. The S-box output then replaces that part so the combined S-box outputs become the S-layer output. S-boxes are discussed in more detail in their own section below. The P-layer permutes the resulting bits, providing diffusion or in Feistel's terms helping to ensure avalanche. A single round of an SP network does not provide ideal avalanche; output bits are affected only by inputs to their S-box, not by all input bits. However, the P-layer ensures that the output of one S-box in one round will affect several S-boxes in the next round so, after a few rounds, overall avalanche properties can be very good. Another way to build an iterated block cipher is to use the Feistel structure. This technique was devised by Horst Feistel of IBM and used in DES. Such ciphers are known as Feistel ciphers or Feistel networks.
In Shannon's terms, they are another class of product cipher. Feistel ciphers are sometimes referred to as Luby-Rackoff ciphers after the authors of a theoretical paper analyzing some of their properties. Later work based on that shows that a Feistel cipher with seven rounds can be secure. In a Feistel cipher, each round uses an operation called the F-function whose input is half a block and a round key; the output is a half-block of scrambled data which is XOR-ed into the other half-block of text. The rounds alternate direction — in one, data from the left half-block is input and the right half-block is changed, and in the next round that is reversed. Showing the half-blocks as L and R, bitwise XOR as ⊕ (each bit of the output word is the XOR of the corresponding bits of the two input words) and the round key for round n as k_n, even-numbered rounds are then:

R_{n+1} = R_n ⊕ F(L_n, k_n), L_{n+1} = L_n

and odd-numbered rounds are:

L_{n+1} = L_n ⊕ F(R_n, k_n), R_{n+1} = R_n

Since XOR is its own inverse (a ⊕ b ⊕ b = a for any a, b) and the half-block that is used as input to the F-function is unchanged in each round, reversing a Feistel round is straightforward. Just calculate the F-function again with the same inputs and XOR the result into the ciphertext to cancel out the previous XOR. For example, the decryption step matching the first example above is:

R_n = R_{n+1} ⊕ F(L_{n+1}, k_n), L_n = L_{n+1}

In some ciphers, including those based on SP networks, all operations must be reversible so that decryption can work. The main advantage of a Feistel cipher over an SP network is that the F-function itself need not be reversible, only repeatable. This gives the designer extra flexibility; almost any operation he can think up can be used in the F-function (a toy implementation is sketched below). On the other hand, in the Feistel construction, only half the output changes in each round while an SP network changes all of it in a single round. A single round in a Feistel cipher has less than ideal avalanche properties; only half the output is changed. However, the other half is changed in the next round so, with a good F-function, a Feistel cipher can have excellent overall avalanche properties within a few rounds. It is possible to design a Feistel cipher so that the F-function itself has ideal avalanche properties — every output bit depends nonlinearly on every input bit and every key bit — details are in a later section. There is a variant called an unbalanced Feistel cipher in which the block is split into two unequal-sized pieces rather than two equal halves. Skipjack was a well-known example. There are also variations which treat the text as four blocks rather than just two; MARS and CAST-256 are examples. The hard part of Feistel cipher design is of course the F-function. Design goals include efficiency, easy implementation, and good avalanche properties. Also, it is critically important that the F-function be highly nonlinear. All other operations in a Feistel cipher are linear and a cipher without enough nonlinearity is weak; see below.

The Lai-Massey construction was introduced in a thesis by Xuejia Lai, supervised by James Massey, in a cipher which later became the International Data Encryption Algorithm, IDEA. It has since been used in other ciphers such as FOX, later renamed IDEA NXT. Perhaps the best-known analysis is by Serge Vaudenay, one of the designers of FOX. One paper proposes a general class of "quasi-Feistel networks", with the Lai-Massey scheme as one instance, and shows that several of the well-known results on Feistel networks (such as the Luby-Rackoff and Patarin papers referenced above) can be generalised to the whole class.
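Returning to the basic Feistel construction, here is a small, self-contained Python toy (written in the equivalent "swap the halves each round" form rather than the alternating form above). The F-function is deliberately simple and insecure; the point is only that decryption reuses the same F-function even though F itself is not invertible.

```python
# Toy 64-bit Feistel cipher: 32-bit half-blocks, illustrative F-function.
# Decryption runs the rounds in reverse order and XORs the same F outputs out.

MASK32 = 0xFFFFFFFF

def f(half: int, subkey: int) -> int:
    """Toy F-function: add the subkey, then a rotate-and-XOR stir."""
    x = (half + subkey) & MASK32
    return x ^ (((x << 11) | (x >> 21)) & MASK32)

def feistel_encrypt(left: int, right: int, subkeys: list[int]):
    for k in subkeys:
        left, right = right, left ^ f(right, k)   # swap halves each round
    return left, right

def feistel_decrypt(left: int, right: int, subkeys: list[int]):
    for k in reversed(subkeys):
        left, right = right ^ f(left, k), left    # undo one round
    return left, right

# Round-trip check with arbitrary example values:
keys = [0x1234, 0xBEEF, 0xCAFE, 0x0042]
ct = feistel_encrypt(0xDEADBEEF, 0x01234567, keys)
assert feistel_decrypt(*ct, keys) == (0xDEADBEEF, 0x01234567)
```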
A further paper gives some specific results for the Lai-Massey scheme itself. To be secure, every cipher must contain nonlinear operations. If all operations in a cipher were linear then the cipher could be reduced to a system of linear equations and be broken by an algebraic attack. The attacker can choose which algebraic system to use; for example, against one cipher he might treat the text as a vector of bits and use Boolean algebra, while for another he might choose to treat it as a vector of bytes and use arithmetic modulo 2^8. The attacker can also try linear cryptanalysis. If he can find a good enough linear approximation for the round function and has enough known plaintext/ciphertext pairs, then this will break the cipher. Defining "enough" in the two places where it occurs in the previous sentence is tricky; see linear cryptanalysis. What makes these attacks impractical is a combination of the sheer size of the system of equations used (large block size, whitening, and more rounds all increase this) and nonlinearity in the relations involved. In any algebra, solving a system of linear equations is more-or-less straightforward provided there are more equations than variables. However, solving nonlinear systems of equations is far harder, so the cipher designer strives to introduce nonlinearity to the system, preferably to have at least some components that are not even close to linear. Combined with good avalanche properties and enough rounds, this makes both direct algebraic analysis and linear cryptanalysis prohibitively difficult.

There are several ways to add nonlinearity; some ciphers rely on only one while others use several. One method is mixing operations from different algebras. If the cipher relies only on Boolean operations, the cryptanalyst can try to attack using Boolean algebra; if it uses only arithmetic operations, he can try normal algebra. If it uses both, he has a problem. Of course arithmetic operations can be expressed in Boolean algebra or vice versa, but the expressions are inconveniently (for the cryptanalyst!) complex and nonlinear whichever way he tries it. For example, in the Blowfish F-function, it is necessary to combine four 32-bit words into one. This is not done with just addition, x = a + b + c + d, or just Boolean operations, x = a ⊕ b ⊕ c ⊕ d, but instead with a mixture, x = ((a + b) ⊕ c) + d. On most computers this costs no more, but it makes the analyst's job harder. Rotations, also called circular shifts, on words or registers are nonlinear in normal algebra, though they are easily described in Boolean algebra. GOST uses rotations by a constant amount, CAST-128 and CAST-256 use a key-dependent rotation in the F-function, and RC5, RC6 and MARS all use data-dependent rotations. A general operation for introducing nonlinearity is the substitution box or S-box; see the following section. Nonlinearity is also an important consideration in the design of stream ciphers and cryptographic hash algorithms. For hashes, much of the mathematics and many of the techniques used are similar to those for block ciphers. For stream ciphers, rather different mathematics and methods apply (see Berlekamp-Massey algorithm for example), but the basic principle is the same.

S-boxes or substitution boxes are look-up tables. The basic operation involved is a = sbox[b] which, at least for reasonable sizes of a and b, is easily done on any computer. S-boxes are described as m by n, with m representing the number of input bits and n the number of output bits. For example, DES uses 6 by 4 S-boxes.
The storage requirement for an m by n S-box is n × 2^m bits, so large values of m (many input bits) are problematic. Values up to eight are common and MARS has a 9 by 32 S-box; going much beyond that would be expensive. Large values of n (many output bits) are not a problem; 32 is common and at least one system, the Tiger hash algorithm, uses 64. S-boxes are often used in the S-layer of an SP network. In this application, the S-box must have an inverse to be used in decryption. It must therefore have the same number of bits for input and output; only n by n S-boxes can be used. For example, AES is an SP network with a single 8 by 8 S-box and Serpent is one with eight 4 by 4 S-boxes. Another common application is in the F-function of a Feistel cipher. Since the F-function need not be reversible, there is no need to construct an inverse S-box for decryption and S-boxes of any size may be used. With either an SP network or a Feistel construction, nonlinear S-boxes and enough rounds give a highly nonlinear cipher.

The first generation of Feistel ciphers used relatively small S-boxes, 6 by 4 for DES and 4 by 4 for GOST. In these ciphers the F-function is essentially one round of an SP network. The eight S-boxes give 32 bits of S-box output. Those bits, reordered by a simple transformation, become the 32-bit output of the F-function. Avalanche properties are less than ideal since each output bit depends only on the inputs to one S-box. The output transformation (a bit permutation in DES, a rotation in GOST) compensates for this, ensuring that the output from one S-box in one round affects several S-boxes in the next round so that good avalanche is achieved after a few rounds. Later Feistel ciphers use larger S-boxes; CAST-128, CAST-256 and Blowfish all use four 8 by 32 S-boxes. They do not use S-box bits directly as F-function output. Instead, they take a 32-bit word from each S-box, then combine them to form a 32-bit output. This gives an F-function with ideal avalanche properties — every output bit depends on all S-box output words, and therefore on all input bits and all key bits. With the Feistel structure and such an F-function, complete avalanche — all 64 output bits depend on all 64 input bits — is achieved in three rounds. No output transformation is required in such an F-function, and Blowfish has none. However, one may be used anyway; the CAST ciphers add a key-dependent rotation. These ciphers are primarily designed for software implementation, rather than the 1970s hardware DES was designed for, so looking up a full computer word at a time makes sense. An 8 by 32 S-box takes one kilobyte of storage; several can be used on a modern machine without difficulty. They need only four S-box lookups, rather than the eight in DES or GOST, so the F-function and therefore the whole cipher can be reasonably efficient. There is an extensive literature on the design of good S-boxes, much of it emphasizing achieving high nonlinearity though other criteria are also used. See external links. The CAST S-boxes use bent functions (the most highly nonlinear Boolean functions) as their columns. That is, the mapping from all the input bits to any single output bit is a bent function. Such S-boxes meet the strict avalanche criterion; not only does every bit of round input and every bit of round key affect every bit of round output, but complementing any input bit has exactly a 50% chance of changing any given output bit. A paper on generating the S-boxes is Mister & Adams, "Practical S-box Design".
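To make this large-S-box construction concrete, here is a hedged Python sketch of a Blowfish-style F-function: four 8-by-32 S-boxes whose outputs are combined with a mix of modular addition and XOR. The S-box contents below are random stand-ins for illustration only; real ciphers use carefully constructed (CAST) or key-derived (Blowfish) tables.

```python
import random

MASK32 = 0xFFFFFFFF

# Four 8-by-32 S-boxes: 256 entries of 32 bits each (1 KB per box).
rng = random.Random(0)  # fixed seed, stand-in values only
SBOXES = [[rng.getrandbits(32) for _ in range(256)] for _ in range(4)]

def f_function(half_block: int) -> int:
    """Blowfish-style F: split a 32-bit half-block into four bytes,
    look each byte up in its own S-box, then combine with +, ^, +."""
    a = (half_block >> 24) & 0xFF
    b = (half_block >> 16) & 0xFF
    c = (half_block >> 8) & 0xFF
    d = half_block & 0xFF
    out = (SBOXES[0][a] + SBOXES[1][b]) & MASK32  # modular addition
    out ^= SBOXES[2][c]                           # XOR
    return (out + SBOXES[3][d]) & MASK32          # modular addition
```

Because each S-box word covers all 32 output bits, every output bit depends on all four input bytes, which is the "ideal avalanche within the F-function" property described above.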
In the CAST design, bent functions are combined to get additional desirable traits — a balanced S-box (equal probability of 0 and 1 output), minimum correlation among output bits, and high overall S-box nonlinearity. Blowfish uses a different approach, generating random S-boxes as part of the key scheduling operation at cipher setup time. Such S-boxes are not as nonlinear as the carefully constructed CAST ones, but they are nonlinear enough and, unlike the CAST S-boxes, they are unknown to an attacker. In perfectly nonlinear S-boxes, not only are all columns bent functions (the most nonlinear possible Boolean functions), but all linear combinations of columns are bent functions as well. This is possible only if m ≥ 2n, that is, only if there are at least twice as many input bits as output bits. Such S-boxes are therefore not much used.

S-boxes in analysis

S-boxes are sometimes used as an analytic tool even for operations that are not actually implemented as S-boxes. Any operation whose output is fully determined by its inputs can be described by an S-box; concatenate all inputs into an index, look that index up, get the output. For example, the IDEA cipher uses a multiplication operation with two 16-bit inputs and one 16-bit output; it can be modeled as a 32 by 16 S-box. In an academic paper, one might use such a model in order to apply standard tools for measuring S-box nonlinearity. A well-funded cryptanalyst might actually build the S-box (8 gigabytes of memory) either to use in his analysis or to speed up an attack.

Resisting linear & differential attacks

Two very powerful cryptanalytic methods of attacking block ciphers are linear cryptanalysis and differential cryptanalysis. The former works by finding linear approximations for the nonlinear components of a cipher, then combining them using the piling-up lemma to attack the whole cipher. The latter looks at how small changes in the input affect the output, and how such changes propagate through multiple rounds. These are the only known attacks that break DES with less effort than brute force, and they are completely general attacks that apply to any block cipher. Both these attacks, however, require large numbers of known or chosen plaintexts, so a simple defense against them is to re-key often enough that the enemy cannot collect sufficient texts. Techniques introduced for CAST go further, building a cipher that is provably immune to linear or differential analysis with any number of texts. The method, taking linear cryptanalysis as our example and abbreviating it LC, is as follows:
- start from properties of the round function (for CAST, from bent functions in the S-boxes)
- derive a limit p, the maximum possible quality of any linear approximation to a single round
- consider the number of rounds, r, as a variable
- derive an expression for E, the effort required to break the cipher by LC, in terms of p and r
- find the minimum r such that E exceeds the effort required for brute force, making LC impractical
- derive an expression for N, the number of chosen plaintexts required for LC, also in terms of p and r (LC with only known plaintext requires more texts, so it can be ignored)
- find the minimum r such that N exceeds the number of possible plaintexts, 2^blocksize, making LC impossible

A similar approach applied to differentials gives values for r that make differential cryptanalysis impractical or impossible. Choose the actual number of rounds so that, at a minimum, both attacks are impractical. Ideally, make both impossible, then add a safety factor.
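To illustrate the style of calculation (not the exact CAST derivation), here is a rough Python sketch based on the standard piling-up lemma: assume a bound eps on the bias of any one-round linear approximation, combine r rounds, and find the smallest r for which the data requirement exceeds the number of possible plaintext blocks. The numbers used are assumptions for illustration only.

```python
# Rough sketch: how many rounds make linear cryptanalysis (LC) need more
# texts than can possibly exist? Uses the piling-up lemma:
#   combined bias over r rounds ~= 2**(r-1) * eps**r
#   texts needed for LC        ~= 1 / bias**2
# eps is an assumed upper bound on any single-round approximation's bias.

def rounds_to_make_lc_impossible(eps: float, block_bits: int) -> int:
    r = 1
    while True:
        bias = 2 ** (r - 1) * eps ** r
        texts_needed = 1.0 / (bias * bias)
        if texts_needed > 2 ** block_bits:   # more texts than can exist
            return r
        r += 1

# Example with assumed numbers: one-round bias at most 2**-4, 64-bit blocks.
print(rounds_to_make_lc_impossible(eps=2 ** -4, block_bits=64))  # prints 11
```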
This type of analysis is now a standard part of the cryptographer's toolkit. Many of the AES candidates, for example, included proofs along these lines in their design documentation, and AES itself uses such a calculation to determine the number of rounds required for various key sizes.

DES and alternatives

The Data Encryption Standard, DES, is among the best known and most thoroughly analysed block ciphers. It was invented by IBM and was made a US government standard, for non-classified government data and for regulated industries such as banking, in the late 70s. From then until about the turn of the century, it was very widely used. It is now considered obsolete because its 56-bit key is too short to resist brute force attacks if the opponents have recent technology. The DES standard marked the beginning of an era in cryptography. Of course, much work continued to be done in secret by military and intelligence organisations of various nations, but from the time of DES cryptography also developed as an open academic discipline complete with journals, conferences, courses and textbooks. In particular, there was a lot of work related to block ciphers. For an entire generation, every student of cryptanalysis tried to find a way to break DES and every student of cryptography tried to devise a cipher that was demonstrably better than DES. Very few succeeded. Every new cryptanalytic technique invented since DES became a standard has been tested against DES. None of them have broken it completely, but two — differential cryptanalysis and linear cryptanalysis — give attacks theoretically significantly better than brute force. This does not appear to have much practical importance since both require enormous numbers of known or chosen plaintexts, all encrypted with the same key, so reasonably frequent key changes provide an effective defense. All the older publicly known cryptanalytic techniques have also been tried, or at least considered, for use against DES; none of them work.

DES served as a sort of baseline for cipher design through the 80s and 90s; the design goal for almost any 20th century block cipher was to replace DES in some of its many applications with something faster, more secure, or both. All these ciphers used 64-bit blocks, like DES, but most used 128-bit or longer keys for better resistance to brute force attacks. Ciphers of this generation include:
- The Data Encryption Standard itself, the first well-known Feistel cipher, using 16 rounds and eight 6 by 4 S-boxes.
- The GOST cipher, a Soviet standard similar in design to DES, a 32-round Feistel cipher using eight 4 by 4 S-boxes.
- IDEA, the International Data Encryption Algorithm, a European standard, not a Feistel cipher, with only 8 rounds and no S-boxes.
- RC2, a Feistel cipher from RSA Security which was approved for easy export from the US (provided it was used with only a 40-bit key), so widely deployed.
- RC5, a Feistel cipher from RSA Security. This was fairly widely deployed, often replacing RC2 in applications.
- CAST-128, a widely used 16-round Feistel cipher, with 8 by 32 S-boxes.
- Blowfish, another widely used 16-round Feistel cipher with 8 by 32 S-boxes.
- The Tiny Encryption Algorithm, or TEA, designed to be very small and fast but still secure, a 32-round Feistel cipher without S-boxes.
- Skipjack, an algorithm designed by the NSA for use in the Clipper chip, a 32-round unbalanced Feistel cipher.
- SAFER and LOKI, two families of ciphers which each included an original version against which Lars Knudsen found an attack and a revised version to block that attack. Each had a descendant which was an AES candidate.
- Triple DES, applying DES three times with different keys.

Many of the techniques used in these ciphers came from DES and many of the design principles came from analysis of DES. However, there were also new design ideas. The CAST ciphers were the first to use large S-boxes which allow the F-function of a Feistel cipher to have ideal avalanche properties, and to use bent functions in the S-box columns. Blowfish introduced key-dependent S-boxes. Several introduced new ways to achieve nonlinearity: data-dependent rotations in RC5, key-dependent rotations in CAST-128, a clever variant on multiplication in IDEA, and the pseudo-Hadamard transform in SAFER. The era effectively ended when the US government began working on a new cipher standard to replace their Data Encryption Standard, the Advanced Encryption Standard or AES. A whole new generation of ciphers arose, the first 21st century block ciphers. Of course these designs still drew on the experience gained in the post-DES generation, but overall these ciphers are quite different. In particular, they all use 128-bit blocks and most support key sizes up to 256 bits.

The AES generation

By the 90s, the Data Encryption Standard was clearly obsolete; its small key size made it more and more vulnerable to brute force attacks as computers became faster. The US National Institute of Standards and Technology (NIST) therefore began work on an Advanced Encryption Standard, AES, a block cipher to replace DES in government applications and in regulated industries. To do this, they ran a very open international AES competition, starting in 1998. Their requirements specified a block cipher with 128-bit block size and support for 128, 192 or 256-bit key sizes. Evaluation criteria included security, performance on a range of platforms from 8-bit CPUs (e.g. in smart cards) up, and ease of implementation in both software and hardware. Fifteen submissions meeting basic criteria were received. All were iterated block ciphers; in Shannon's terms all were product ciphers. Most used an SP network or Feistel structure, or variations of those. Several had proofs of resistance to various attacks. The AES competition article covers all candidates and many have their own articles as well. Here we give only a summary. After much analysis and testing, and two conferences, the field was narrowed to five finalists:
- Twofish, a cipher with key-dependent S-boxes, from a team at Bruce Schneier's company Counterpane
- MARS, a variant of Feistel cipher using data-dependent rotations, from IBM
- Serpent, an SP network, from an international group of well-known players
- RC6, a cipher using data-dependent rotations, from a team led by Ron Rivest
- Rijndael, an SP network, from two Belgian designers

An entire generation of block ciphers used the 64-bit block size of DES, but since AES many new designs use a 128-bit block size. As discussed under size parameters, if two or more ciphers have the same block and key sizes, then they are effectively interchangeable; replacing one cipher with another requires no other changes in an application. When asked to implement AES, the implementer might include the other finalists — Twofish, Serpent, RC6 and MARS — as well.
This provides useful insurance against the (presumably unlikely) risk of someone finding a good attack on AES. Little extra effort is required since open source implementations of all these ciphers are readily available; see external links. All except RC6 have completely open licenses. There are also many other ciphers that might be used. There were ten AES candidates that did not make it into the finals:
- CAST-256, based on CAST-128 and with the same theoretical advantages
- DFC, based on another theoretical analysis proving resistance to various attacks
- Hasty Pudding, a variable block size tweakable cipher
- DEAL, a Feistel cipher using DES as the round function
- FROG, an innovative cipher; interesting but weak
- E2, from Japan
- CRYPTON, a Korean cipher with some design similarities to AES
- MAGENTA, Deutsche Telekom's candidate, quickly broken
- LOKI97, one of the LOKI family of ciphers, from Australia
- SAFER+, one of the SAFER family of ciphers, from Cylink Corporation

Some should not be considered. Magenta and FROG have been broken, DEAL is slow, and E2 has been replaced by its descendant Camellia. There are also some newer 128-bit ciphers that are widely used in certain countries:
- Camellia, an 18-round Feistel cipher widely used in Japan and one of the standard ciphers for the NESSIE (New European Schemes for Signatures, Integrity and Encryption) project.
- SEED, developed by the Korean Information Security Agency (KISA) and widely used in Korea.

For most applications a 64-bit or 128-bit block size is a fine choice; nearly all common block ciphers use one or the other. Such ciphers can be used to encrypt objects larger than their block size; just choose an appropriate mode of operation. For nearly all ciphers, the block size is a power of two. Joan Daemen's PhD thesis, though, had two exceptions: 3-Way uses 96 bits (three 32-bit words) and BaseKing 192 (three 64-bit words). Neither cipher was widely used, but they did influence later designs. Daemen was one of the designers of Square and of Rijndael, which became the Advanced Encryption Standard. A few ciphers supporting larger block sizes do exist; this section discusses them. A block cipher with larger blocks may be more efficient; it takes fewer block operations to encrypt a given amount of data. It may also be more secure in some ways; diffusion takes place across a larger block size, so data is more thoroughly mixed, and large blocks make a code book attack more difficult. On the other hand, great care must be taken to ensure adequate diffusion within a block so a large-block cipher may need more rounds, larger blocks require more padding, and there is not a great deal of literature on designing and attacking such ciphers so it is hard to know if one is secure. Large-block ciphers are inconvenient for some applications and simply do not fit in some protocols. Some block ciphers, such as Block TEA and Hasty Pudding, support variable block sizes. They may therefore be both efficient and convenient in applications that need to encrypt many items of a fixed size, for example disk blocks or database records. However, just using the cipher in ECB mode to encrypt each block under the same key is unwise, especially if encrypting many objects. With ECB mode, identical blocks will encrypt to the same ciphertext and give the enemy some information. One solution is to use a tweakable cipher such as Hasty Pudding with the block number or other object identifier as the tweak.
Another is to use CBC mode with an initialisation vector derived from an object identifier. Cryptographic hash algorithms can be built using a block cipher as a component. There are general-purpose methods for this that can use existing block ciphers; Applied Cryptography gives a long list and describes weaknesses in many of them. However, some hashes include a specific-purpose block cipher as part of the hash design. One example is Whirlpool, a 512-bit hash using a block cipher similar in design to AES but with 512-bit blocks and a 512-bit key. Another is the Advanced Hash Standard candidate Skein, which uses a tweakable block cipher called Threefish. Threefish has 256-bit, 512-bit and 1024-bit versions; in each version block size and key size are both that number of bits. It is possible to go the other way and use any cryptographic hash to build a block cipher; again Applied Cryptography has a list of techniques and describes weaknesses. The simplest method is to make a Feistel cipher with double the hash's block size; the F-function is then just to hash text and round key together. This technique is rarely used, partly because a hash makes a rather expensive round function and partly because the block cipher block size would have to be inconveniently large; for example, using a 160-bit hash such as SHA-1 would give a 320-bit block cipher. The hash-to-cipher technique was, however, important in one legal proceeding, the Bernstein case. At the time, US law strictly controlled export of cryptography because of its possible military uses, but hash functions were allowed because they are designed to provide authentication rather than secrecy. Bernstein's code built a block cipher from a hash, effectively circumventing those regulations. Moreover, he sued the government over his right to publish his work, claiming the export regulations were an unconstitutional restriction on freedom of speech. The courts agreed, effectively striking down the export controls. It is also possible to use a public key operation as a block cipher. For example, one might use the RSA algorithm with 1024-bit keys as a block cipher with 1024-bit blocks. Since the round function is itself cryptographically secure, only one round is needed. However, this is rarely done; public key techniques are expensive so this would give a very slow block cipher. A much more common practice is to use public key methods, block ciphers, and cryptographic hashes together in a hybrid cryptosystem.

- M. Liskov, R. Rivest, and D. Wagner (2002), "Tweakable Block Ciphers", LNCS, Crypto 2002
- Horst Feistel (1973), "Cryptography and Computer Privacy", Scientific American
- A. F. Webster and Stafford E. Tavares (1985), "On the design of S-boxes", Advances in Cryptology - Crypto '85 (Lecture Notes in Computer Science)
- C. E. Shannon (1949), "Communication Theory of Secrecy Systems", Bell System Technical Journal 28: pp. 656-715
- M. Luby and C. Rackoff, "How to Construct Pseudorandom Permutations and Pseudorandom Functions", SIAM J. Comput.
- Jacques Patarin (Oct 2003), "Luby-Rackoff: 7 Rounds Are Enough for Security", Lecture Notes in Computer Science 2729: 513-529
- X. Lai (1992), "On the Design and Security of Block Ciphers", ETH Series in Information Processing, v. 1
- S. Vaudenay (1999), On the Lai-Massey Scheme, Springer-Verlag, LNCS
- Aaram Yun, Je Hong Park and Jooyoung Lee (2007), Lai-Massey Scheme and Quasi-Feistel Networks
- Yiyuan Luo, Xuejia Lai, Zheng Gong and Zhongming Wu (2009), Pseudorandomness Analysis of the Lai-Massey Scheme
- Ross Anderson & Eli Biham (1996), "Tiger: a fast new hash function", Fast Software Encryption, Third International Workshop Proceedings
- S. Mister, C. Adams (August 1996), "Practical S-Box Design", Selected Areas in Cryptography (SAC '96): 61-76
- Kaisa Nyberg (1991), "Perfect nonlinear S-boxes", Eurocrypt '91, LNCS 547
- Serge Vaudenay (2003), "Decorrelation: A Theory for Block Cipher Security", Journal of Cryptology
- Kaisa Nyberg and Lars Knudsen (1995), "Provable security against a differential attack", Journal of Cryptology
- Schneier, Bruce (2nd edition, 1996), Applied Cryptography, John Wiley & Sons, ISBN 0-471-11709-9
http://en.citizendium.org/wiki/Block_cipher
Purchasing power parity (PPP) is a theory of long-term equilibrium exchange rates based on relative price levels of two countries. The idea originated with the School of Salamanca in the 16th century and was developed in its modern form by Gustav Cassel in 1918. The concept is founded on the law of one price: the idea that, in the absence of transaction costs, identical goods will have the same price in different markets. In its "absolute" version, the purchasing power of different currencies is equalized for a given basket of goods. In the "relative" version, the difference in the rate of change in prices at home and abroad — the difference in the inflation rates — is equal to the percentage depreciation or appreciation of the exchange rate. The best-known and most-used purchasing power parity exchange rate is the Geary-Khamis dollar (the "international dollar"). Fluctuations in the PPP exchange rate (the "real exchange rate") are mostly due to different rates of inflation in the two economies. Aside from this volatility, consistent deviations between market and PPP exchange rates are observed; for example, prices of non-traded goods and services, measured at market exchange rates, are usually lower where incomes are lower (a U.S. dollar exchanged and spent in India will buy more haircuts than a dollar spent in the United States). Basically, PPP deduces exchange rates between currencies by finding goods available for purchase in both currencies and comparing the total cost for those goods in each currency.

There can be marked differences between PPP and market exchange rates. For example, the World Bank's World Development Indicators 2005 estimated that in 2003, one Geary-Khamis dollar was equivalent to about 1.8 Chinese yuan by purchasing power parity — considerably different from the nominal exchange rate. This discrepancy has large implications; for instance, GDP per capita in India is about US$1,100 while on a PPP basis it is about US$3,000. This is frequently used to assert that India is the world's fourth-largest economy, but such a calculation is only valid under the PPP theory; at nominal exchange rates its economy is only the eleventh largest. At the other extreme, Denmark's nominal GDP per capita is around US$62,100, but its PPP figure is only US$37,304.

The PPP exchange-rate calculation is controversial because of the difficulties of finding comparable baskets of goods to compare purchasing power across countries. Estimation of purchasing power parity is complicated by the fact that countries do not simply differ in a uniform price level; rather, the difference in food prices may be greater than the difference in housing prices, while also being less than the difference in entertainment prices. People in different countries typically consume different baskets of goods. It is necessary to compare the cost of baskets of goods and services using a price index. This is a difficult task because purchasing patterns and even the goods available to purchase differ across countries. Thus, it is necessary to make adjustments for differences in the quality of goods and services. Additional statistical difficulties arise with multilateral comparisons when (as is usually the case) more than two countries are to be compared. When PPP comparisons are to be made over some interval of time, proper account needs to be made of inflationary effects.
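As a small illustrative sketch of the absolute and relative versions described above (the prices and inflation rates below are made-up numbers, not real data):

```python
# Purchasing power parity, toy numbers only.

def absolute_ppp_rate(basket_price_home: float, basket_price_foreign: float) -> float:
    """Absolute PPP: the implied exchange rate (foreign currency per home unit)
    that equalises the cost of the same basket in both countries."""
    return basket_price_foreign / basket_price_home

def relative_ppp_rate(current_rate: float, inflation_home: float, inflation_foreign: float) -> float:
    """Relative PPP: the exchange rate should move with the inflation
    differential between the two countries."""
    return current_rate * (1 + inflation_foreign) / (1 + inflation_home)

# A basket costs $100 at home and 450 units of foreign currency abroad:
print(absolute_ppp_rate(100.0, 450.0))        # 4.5 foreign units per $1

# With 2% inflation at home and 8% abroad, a rate of 4.5 should drift to about:
print(relative_ppp_rate(4.5, 0.02, 0.08))     # ~4.76
```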
An example of one measure of PPP is the Big Mac Index popularized by The Economist, which looks at the prices of a Big Mac burger in McDonald's restaurants in different countries. If a Big Mac costs US$4 in the United States and GBP£3 in the United Kingdom, the implied PPP exchange rate would be £3 for $4. The Big Mac Index is presumably useful because it is based on a well-known good whose final price, easily tracked in many countries, includes input costs from a wide range of sectors in the local economy, such as agricultural commodities (beef, bread, lettuce, cheese), labor (blue and white collar), advertising, rent and real estate costs, transportation, and so on. However, in some emerging economies, western fast food represents an expensive niche product priced well above the price of traditional staples; the Big Mac is not a mainstream 'cheap' meal as it is in the west but a luxury import for the middle classes and foreigners. Although it is not perfect, the index still offers a simple and accessible illustration of PPP.

The exchange rate reflects transaction values for traded goods between countries, in contrast to non-traded goods, that is, goods produced for home-country use. Also, currencies are traded for purposes other than trade in goods and services, e.g., to buy capital assets whose prices vary more than those of physical goods. Different interest rates, speculation, hedging, and interventions by central banks can also influence the foreign-exchange market. The PPP method is used as an alternative to correct for this possible statistical bias. The Penn World Table is a widely cited source of PPP adjustments, and the so-called Penn effect reflects the systematic bias in using market exchange rates to compare outputs among countries. For example, if the value of the Mexican peso falls by half compared to the U.S. dollar, the Mexican Gross Domestic Product measured in dollars will also halve. However, this exchange rate results from international trade and financial markets. It does not necessarily mean that Mexicans are poorer by half; if incomes and prices measured in pesos stay the same, they will be no worse off, assuming that imported goods are not essential to the quality of life of individuals. Measuring income in different countries using PPP exchange rates helps to avoid this problem.

PPP exchange rates are especially useful when official exchange rates are artificially manipulated by governments. Countries with strong government control of the economy sometimes enforce official exchange rates that make their own currency artificially strong; by contrast, the currency's black market exchange rate is artificially weak. In such cases a PPP exchange rate is likely the most realistic basis for economic comparison. The main reasons why different measures do not perfectly reflect standards of living are discussed in the sections that follow. PPP calculations are often used to measure poverty rates.

The goods that the currency has the "power" to purchase are a basket of goods of different types: broadly, non-traded local goods and services (category 1) and goods that can be traded internationally (category 2). The more a product falls into category 1, the further its price will be from the currency exchange rate, moving towards the PPP exchange rate. Conversely, category 2 products tend to trade close to the currency exchange rate. (For more details of why, see: Penn effect). More processed and expensive products are likely to be tradable, falling into the second category, and drifting from the PPP exchange rate to the currency exchange rate.
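As a rough illustration of how the Big Mac comparison is usually read, here is a short sketch. It reuses the hypothetical $4/£3 prices above; the market exchange rate is an assumed value for illustration only.

```python
# Sketch of the Big Mac comparison using the hypothetical prices above.
# The market exchange rate is an assumed illustrative value.

price_us_usd = 4.00          # Big Mac price in the United States (USD)
price_uk_gbp = 3.00          # Big Mac price in the United Kingdom (GBP)

implied_ppp = price_uk_gbp / price_us_usd   # GBP per USD implied by burger prices
market_rate = 0.65                          # assumed market rate, GBP per USD

# Convert the UK burger into dollars at the market rate. If it costs more than
# the US burger, the pound looks overvalued on this one-good measure.
dollar_price_uk = price_uk_gbp / market_rate
misvaluation = (dollar_price_uk / price_us_usd - 1) * 100  # >0 means pound overvalued

print(f"Implied PPP: {implied_ppp:.2f} GBP/USD, market rate: {market_rate:.2f} GBP/USD")
print(f"UK Big Mac costs ${dollar_price_uk:.2f} at the market rate; the pound looks "
      f"{'overvalued' if misvaluation > 0 else 'undervalued'} by {abs(misvaluation):.1f}% "
      f"on this measure")
```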
Even if the PPP "value" of the Ethiopian currency is three times stronger than the currency exchange rate, it won't buy three times as much of internationally traded goods like steel, cars and microchips; what it buys more of at that rate is non-traded goods like housing, services ("haircuts"), and domestically produced crops. The relative price differential between tradables and non-tradables from high-income to low-income countries is a consequence of the Balassa-Samuelson effect, and gives a big cost advantage to labour-intensive production of tradable goods in low-income countries (like Ethiopia), as against high-income countries (like Switzerland). The corporate cost advantage is nothing more sophisticated than access to cheaper workers, but because the pay of those workers goes further in low-income countries than in high-income ones, the relative pay differentials (inter-country) can be sustained for longer than would otherwise be the case. (This is another way of saying that the wage rate is based on average local productivity, and that this is below the per capita productivity that factories selling tradable goods to international markets can achieve.) An equivalent cost benefit comes from non-traded goods that can be sourced locally (nearer the PPP exchange rate than the nominal exchange rate in which receipts are paid). These act as a cheaper factor of production than is available to factories in richer countries. PPP calculations tend to overemphasise the primary sectoral contribution, and underemphasise the industrial and service sectoral contributions, to the economy of a nation.

In addition to the methodological issues presented by the selection of a basket of goods, PPP estimates can also vary based on the statistical capacity of participating countries. The International Comparison Program, on which PPP estimates are based, requires the disaggregation of national accounts into production, expenditure or (in some cases) income, and not all participating countries routinely disaggregate their data into such categories. Some aspects of PPP comparison are theoretically impossible or unclear. For example, there is no basis for comparison between the Ethiopian laborer who lives on teff and the Thai laborer who lives on rice, because teff is impossible to find in Thailand and vice versa, so the price of rice in Ethiopia or of teff in Thailand cannot be determined. As a general rule, the more similar the price structure between countries, the more valid the PPP comparison.

PPP levels will also vary based on the formula used to calculate price matrices. Different possible formulas include GEKS-Fisher, Geary-Khamis, IDB, and the superlative method. Each has advantages and disadvantages. Linking regions presents another methodological difficulty. In the 2005 ICP round, regions were compared by using a list of some 1,000 identical items for which a price could be found for 18 countries, selected so that at least two countries would be in each region. While this was superior to earlier "bridging" methods, which did not fully take into account differing quality between goods, it may serve to overstate the PPP basis of poorer countries, because the price indexing on which PPP is based will assign to poorer countries the greater weight of goods consumed in greater shares in richer countries. The 2005 ICP round resulted in large downward adjustments of PPP (or upward adjustments of price level) for several Asian countries, including China (-40%), India (-36%), Bangladesh (-42%) and the Philippines (-43%).
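The index-number choice matters because Laspeyres and Paasche weightings give different answers when consumption baskets differ. Below is a rough sketch of the bilateral Fisher index that underlies the GEKS-Fisher approach; the prices and quantities are invented for illustration.

```python
import math

# Bilateral Fisher price index between countries A and B (illustrative data only).
# Prices are per unit in each country's own currency; quantities are per-capita
# consumption in each country.

goods = ["food", "housing", "entertainment"]
p_a = {"food": 2.0, "housing": 10.0, "entertainment": 5.0}
p_b = {"food": 30.0, "housing": 80.0, "entertainment": 90.0}
q_a = {"food": 100, "housing": 12, "entertainment": 20}
q_b = {"food": 120, "housing": 10, "entertainment": 8}

def price_ratio(prices_num, prices_den, weights):
    """Cost of the weighted basket at numerator prices over denominator prices."""
    num = sum(prices_num[g] * weights[g] for g in goods)
    den = sum(prices_den[g] * weights[g] for g in goods)
    return num / den

laspeyres = price_ratio(p_b, p_a, q_a)       # B's prices relative to A's, at A's basket
paasche = price_ratio(p_b, p_a, q_b)         # the same comparison, at B's basket
fisher = math.sqrt(laspeyres * paasche)      # geometric mean of the two

print(f"Laspeyres: {laspeyres:.2f}, Paasche: {paasche:.2f}, "
      f"Fisher PPP (units of B per unit of A): {fisher:.2f}")
```

In a multilateral GEKS-style calculation, bilateral Fisher indexes like this one are combined across all country pairs so that the results are transitive.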
Surjit Bhalla has argued that these adjustments are unrealistic. For example, in the case of China, backward extrapolation of the 2005 ICP PPP figure using Chinese annual growth rates would yield a 1952 PPP per capita of $153 in 1985 international dollars, but Pritchett has persuasively argued that $250 in 1985 dollars is the minimum required to sustain a population, or that has ever been observed for more than a short period. Therefore, the 2005 ICP PPP for China and China's growth rates cannot both be correct. Angus Maddison has calculated somewhat slower growth rates for China than the official figures, but even under his calculations the 1952 PPP per capita comes to only $229. Deaton and Heston have suggested that the discrepancy can be explained by the fact that the 2005 ICP examined only urban prices, which overstate the national price level for Asian countries, and also by the fact that Asian countries adjusted for productivity across noncomparable goods such as government services, whereas non-Asian countries did not make such an adjustment. Each of these two factors, according to them, would lead to an underestimation of GDP by PPP of about 12 percent.

The GDP figure for any reporting area is a single number expressed in that area's local currency. In the local currency, both the PPP and the market (or government) exchange rate to the area's own currency are by definition 1.0, so the PPP-based and market-rate GDP figures are identical for any period when stated in that currency. The PPP exchange rate and the market exchange rate can differ only when the GDP figure is converted into another currency. Only because of differing base figures (for example "current" versus "constant" prices, or annualized versus averaged numbers) is the USD-to-USD PPP exchange rate in the IMF data not exactly 1.0; see http://www.imf.org/external/pubs/ft/weo/2004/02/data/dbcoutm.cfm?SD=1980&ED=2005&R1=1&R2=1&CS=3&SS=2&OS=C&DD=0&OUT=1&C=111&S=NGDP_R-NGDP-NGDPD-NGDPRPC-NGDPPC-NGDPDPC-PPPWGT-PPPEX-PPPPC&CMP=0&x=59&y=15. There the PPP exchange rate is 1.023 from 1980 to 2002, and the "constant" and "current" price figures coincide in 2000 because that is the base year for the "constant" (inflation-adjusted) series.
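The backward-extrapolation argument is just compound growth run in reverse. A minimal sketch follows; the 2005 PPP per-capita level and the growth rate are placeholder values, not the actual ICP or official Chinese figures.

```python
# Backward extrapolation of PPP GDP per capita, in the spirit of the argument above.
# Both inputs are placeholders for illustration only.

ppp_per_capita_2005 = 4000.0   # assumed 2005 PPP GDP per capita (international dollars)
avg_annual_growth = 0.055      # assumed average annual real growth rate, 1952-2005

years = 2005 - 1952
implied_1952 = ppp_per_capita_2005 / (1 + avg_annual_growth) ** years

print(f"Implied 1952 PPP per capita: {implied_1952:.0f}")
# If the implied 1952 level falls below any plausible subsistence floor
# (roughly $250 in 1985 dollars, per Pritchett), then the assumed 2005 PPP
# level and the assumed growth series cannot both be right.
```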
http://www.uniblogger.com/en/Purchasing_power_parity
The early settlers who came to America found a land of dense wilderness, interlaced with creeks, rivers, and streams. Within this wilderness was an extensive network of trails, many of which were created by the migration of the buffalo and used by the Native American Indians as hunting and trading routes. These primitive trails were at first crooked and narrow. Over time, the trails were widened, straightened, and improved by settlers for use by horse and wagon. These became some of the first roads in the new land.

After the American Revolution, the National Government began to realize the importance of westward expansion and trade in the development of the new Nation. As a result, an era of road building began. This period was marked by the development of turnpike companies, our earliest toll roads in the United States. In 1792, the first turnpike was chartered and became known as the Philadelphia and Lancaster Turnpike in Pennsylvania. It was the first road in America covered with a layer of crushed stone. The boom in turnpike construction began, resulting in the incorporation of more than 50 turnpike companies in Connecticut, 67 in New York, and others in Massachusetts and around the country. A notable turnpike, the Boston-Newburyport Turnpike, was 32 miles long and cost approximately $12,500 per mile to construct.

As the Nation grew, so did the need for improved roads. In 1806, the Federal Government passed legislation to fund the National Road, known as the Cumberland Road. This road would stretch from Maryland through Pennsylvania, over the Cumberland Mountains, to the Ohio River. For a period of time, these roads served the new Nation well. However, with the use of heavier wagons and the large movements of entire families across the country, a strain on the infrastructure was evident. The roads in this country were still dirt and gravel, rutted and impassable in bad weather.

Toward the 1880s, America began to see the increased use of bicycles as a form of transportation, which led to the "Good Roads Movement," driven mainly by bicyclist clubs across the country. In addition, with the advent of the automobile, new and better roads were required. The Federal Government responded by creating the Office of Road Inquiry in 1893. This agency was responsible for collecting data, answering questions, and assisting in road improvements. Later, this infant agency grew to help finance road construction (Post Office Appropriation Act of 1912), the beginning of Federal-aid roads. Soon, connecting highways emerged from contributions of State and local governments as well as Federal financing. People were traveling farther and more frequently. World War I brought greater dependence on these vital roadways, especially those serving manufacturing centers. Following the war, the Federal Highway Act of 1921 provided financial assistance to the States to build roads and bridges. The need for a nationwide interconnecting system of highways became clearer.

By the end of the 1920s, more than half of all American families owned automobiles. Engineers were kept busy building highways, bridges, and tunnels, especially in the larger cities such as New York, Boston, Los Angeles, and San Francisco. Tolls were used on many roads, bridges, and tunnels to help pay for this building boom. The Holland Tunnel in New York was completed in the mid-1920s and opened up routes into the heart of New York City. It was referred to as the "Eighth Wonder of the World."
The Golden Gate Bridge in San Francisco, built in the 1930s, provided access into San Francisco from across the bay. World War II created even greater reliance on our vital highway systems. The roads, bridges, and tunnels served as defense routes for the war effort. After the war, the growth of the suburbs increased the use of the automobile. The use of the automobile grew to include not only trips to work but to social activities and recreational outlets as well. In the immediate post-World War II era, several States recognized that modern, high quality highway systems were needed to meet this demand. The Pennsylvania Turnpike was the first of these roads, and was an immediate success. From around 1945 to 1955, many States, mainly located in the North and East, began to build State turnpikes on their primary long-distance travel corridors. Beginning around the time of World War I, the Federal Government, for primarily military reasons, began to study the possibility of building high-quality roads across the Nation. One option for the financing of these roads was to collect tolls. However, the Federal-Aid Highway Act, enacted in 1956—which provided for a coast-to-coast highway system, connecting important cities and industrial centers to one another— was legislated as a tax-supported system, not a toll system. With the implementation of Federal-aid to States to build the Interstate System, proposals for additional toll roads languished. By 1963, the last of the toll roads planned before the Federal-aid system was legislated opened, and few additional proposals were seriously considered. By 1980, the Nation's highway transportation infrastructure began to show signs of age through heavy use. There was general public concern that the U.S. was falling behind in its commitment to building and maintaining highway infrastructure. Several trends contributed to this perception. There had been phenomenal growth in the purchase and use of highway vehicles. There was an acknowledgment that governments at all levels were short of funds, and that in some cases, rather than continuing to raise taxes, it would be easier to defer maintenance and reconstruction of infrastructure of all kinds. Furthermore, there was a timing problem in that roads built in the peak years of new Interstate construction (roughly 1960-1980) were approaching the end of their design life and were wearing out. These concerns were one reason the toll road concept began to re-emerge. Another reason toll facilities are being reconsidered is the increasing ability of electronic equipment to identify vehicles and record and store large amounts of data: a technology that is transforming our way of thinking about toll collection. Electronic toll collection (ETC) leads to significant declines in the operating costs of toll facilities. Furthermore, ETC, by not requiring the vehicle to stop, reduces lines at tollbooths, reduces vehicle operating costs, and therefore directly benefits the traveling public. Public acceptance and familiarity with the ease, accuracy, privacy, and fairness of ETC are likely to make these toll-charging methods much more pervasive on toll roads in the near future. Technology does come at a cost. For example, more work must be done to increase compatibility among competing electronic toll collection technologies, but the shortcomings can and will be overcome. But toll financing concepts are changing in other ways. 
In some circles, the proposition is put forward that goods and services currently provided by the public sector could also be provided by the private sector, perhaps with gains in efficiency. Highway facilities are identified as one of the areas where the private sector might be willing to invest if there were a high probability of recouping the investment through the collection of tolls. With the possibility of privately financed toll roads, some large engineering and construction management firms believe that a highway market might exist that had not been explored by their firms.

Under typical public provision of U.S. highways, the State does (or contracts out) the design work and then awards distinct contracts to carry out parts of the completed plans. If the project meets certain criteria, it is eligible for Federal-aid reimbursement (Federal-aid pays the State back a portion of its costs of construction). Some private firms, however, have proposed to do the whole process themselves and take advantage of efficiencies such as simultaneous design and construction. Furthermore, these firms felt that the time might be right to put some of their own equity into these projects and to finance, build, and operate the entire facility themselves.

These forces suggest that both public and private toll roads may be additional means of financing and constructing U.S. highway facilities in the near future. Public-private partnerships, defined as agreements between the public (government) and the private sector to develop, finance, construct, operate, own, and maintain highway facilities, will be one of the alternatives. To what extent they could become a major force in highway finance will depend on the abilities of the individual public-private ventures to overcome existing institutional barriers. It should not be surprising to find that States that pass toll road legislation do not follow a fixed pattern, as each State confronts unique circumstances, although certain provisions are common across State toll road legislation.

A successful toll road project can be built with virtually any mix of public and private financial sponsorship. Several prototypical models have developed, incorporating increasing amounts of private involvement along with nongovernmental funds. As the private sector contributes more equity financing and assumes more risks, the partnership takes on more characteristics of full privatization. These structures fit along a continuum from traditional public to mostly private. Although the traditional public toll authority does not incorporate private-sector participation in the way the more privatized models do, it nonetheless provides an alternative structure for tollways, and a number of variations of the traditional public toll authority exist.

The Federal-aid Highway Program, Title 23 of the United States Code (23 U.S.C.), offers States and/or other public entities a variety of opportunities to toll motor vehicles to finance Interstate construction and reconstruction, promote efficiency in the use of highways, reduce traffic congestion, and/or improve air quality. In addition to providing States and/or other public entities the authority to toll motor vehicles, the Value Pricing Pilot program is unique in providing grants for pre-implementation and non-construction-related implementation costs of tolling, and for non-highway-related pricing activities.
The tolling and pricing programs include the following. One demonstration program permits tolling on selected facilities to manage high levels of congestion, reduce emissions in a nonattainment or maintenance area under the Clean Air Act Amendments, or finance added Interstate lanes for the purpose of reducing congestion. The Secretary is authorized to carry out 15 demonstration projects through 2009 to allow States, public authorities, or public or private entities designated by States to collect a toll from motor vehicles at an eligible toll facility for any highway, bridge, or tunnel, including on the Interstate. An "eligible toll facility" is one that accomplishes any of several specified purposes. The Federal share of the project cost of a facility tolled under this program, including installation of the toll collection facility, is not to exceed 80 percent.

Section 1121 of SAFETEA-LU replaces Section 102(a) of Title 23 of the United States Code (23 U.S.C.) with a new Section 166 that clarifies some aspects of the operation of HOV facilities and provides more exceptions to the vehicle occupancy requirements for HOV facilities. It also authorizes States to create High Occupancy Toll (HOT) lanes. Specifically, this section allows States to charge tolls to vehicles that do not meet the established occupancy requirements to use an HOV lane, provided the State establishes a program that addresses the selection of certified vehicles and procedures for enforcing the restrictions. Tolls under this section may be charged on both Interstate and non-Interstate facilities. There is no limit on the number of projects or the number of States that can participate. If a State desires to allow toll-paying vehicles to use an HOV lane, by creating a HOT lane or converting an existing HOV lane to a HOT lane, an Expression of Interest should be sent to the Tolling and Pricing Team and the local Division Office to initiate a Federal review process. For more information about the Federal review, refer to the Federal-Aid Highway Program Guidance on HOV Lanes. A revised version, with additional information related to HOT lanes and the new requirements stated in 23 U.S.C. 166, will be published in the Federal Register in early 2006.

SAFETEA-LU continued the authority initially provided in Section 1216(b) of TEA-21 by allowing up to three existing Interstate facilities (highway, bridge, or tunnel) to be tolled to fund needed reconstruction or rehabilitation on Interstate highway corridors that could not otherwise be adequately maintained or functionally improved without the collection of tolls. Each of the three facilities must be in a different State. There is no special funding authorized for this program. By law, Interstate maintenance funds may not be used on a facility for which tolls are being collected under this program.

Another program authorizes up to three facilities on the Interstate System to be tolled for the purpose of financing the construction of new Interstate highways. A State or an interstate "compact of States" may submit a single candidate project under this program. Each applicant must demonstrate that financing the construction of the facility with the collection of tolls is the most efficient and economical way to advance the project. The State must agree not to enter into a non-compete agreement with a private party under which the State is prevented from improving or expanding the capacity of public roads in the vicinity of the toll facility to address conditions resulting from traffic diverted to nearby roads from the toll facility.
There is no special funding authorized for this program. By law, Interstate maintenance funds may not be used on a facility for which tolls are being collected under this program.

Under 23 U.S.C. 129, Federal participation is allowed in five types of toll activities. If Federal-aid funds are used for construction of or improvements to a toll facility or the approach to a toll facility, or if a State plans to reconstruct and convert a free highway, bridge, or tunnel previously constructed with Federal-aid funds to a toll facility, a toll agreement under Section 129(a)(3) must be executed. There is no limit to the number of agreements that may be executed.

The Value Pricing Pilot (VPP) program, initially authorized in the Intermodal Surface Transportation Efficiency Act (ISTEA) as the Congestion Pricing Pilot Program, and most recently renewed with the passage of the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), encourages implementation and evaluation of value pricing pilot projects to manage congestion on highways through tolling and other pricing mechanisms. This is the only program that provides funding to support studies and implementation aspects of a tolling or pricing project. The program is limited to 15 slots (which FHWA has reserved for "states"), of which only one vacancy remains. Each State can have multiple projects. SAFETEA-LU provided a total of $59 million for fiscal years (FY) 2005-2009 for the VPP program: $11 million was authorized for FY 2005, and $12 million was authorized for each of FYs 2006 through 2009. Of the amounts made available to carry out the program, $3 million will be set aside in each of fiscal years 2006 through 2009 for value pricing projects that do not involve highway tolls. Funds available for the VPP program can be used to support pre-implementation study activities and to pay for implementation costs of value pricing projects. For information on all of the above, see: http://www.ops.fhwa.dot.gov/tolling_pricing/index.htm.
http://www.fhwa.dot.gov/policyinformation/tollpage/history.cfm
By David Osher, Anju Sidana, and Patrick Kelly Learning is not just a cognitive process. Research shows that powerful social and emotional factors affect learning.1 Some of these factors involve social relationships. These social factors include the teacher's relationship with the student, the student's relationship with other students, the teacher's and student's relationships with the student's family and with staff, the overall climate of the student's learning environment, and the support provided to teachers and other staff to provide a caring and supportive environment. Other factors are individual and often involve emotional matters. These emotional factors include the student's motivation; sense of self and of ability to succeed, both in life and academically; mental and physical wellness; and ability to manage his or her own emotions and relationships with teachers and other students. These social and emotional factors influence students' abilities to attend to learning, their ability to direct their learning, and their engagement in learning activities. These factors also influence teachers' abilities to connect with, challenge, and support their students. For example, it is hard for students to attend to learning if they are angry at a teacher's sarcasm or worried about aggression by fellow students. Similarly, if students cannot handle the frustration of not succeeding the first time they try something or think that they will be teased by the people around them if they do not succeed, they may not persevere with academic tasks or take the risks necessary to learn. By providing students with support that addresses their social and emotional needs and building strong social and emotional conditions for learning, staff in neglected or delinquent (N or D) settings—as well as other schools—can help improve learning outcomes that cannot be addressed through academic remediation alone. Social and emotional factors are important for all students. However, they are particularly important for students served in N or D settings—students who often come from economically disadvantaged backgrounds, students of color, and students with emotional, behavioral, and learning disabilities. Research suggests that it is hard to improve academic outcomes for these students, both individually and collectively, without addressing the social and emotional barriers to learning that they face. For example, research in low-performing schools in Chicago that "turned around" showed that the highest level of turn-around took place in schools that combined a strong academic focus with equally strong doses of student support.2 Similar research in Alaska and New York, as well as research on national data sets, suggests that learning and academic performance improve when the conditions for learning improve.3 Figure 1 demonstrates the dual impact of social support and academic press. Source: Lee, V., Smith, J., Perry, T., & Smylie, M. A. (1999). Social support, academic press, and student achievement: A view from the middle grades in Chicago. Chicago: Consortium on Chicago School Research, Chicago Annenberg. 
Youth involved in the juvenile justice system typically lag 2 or more years behind their peers in basic academic skills, experience learning disabilities and mental and behavioral problems in much higher proportions than their peers,4 and often lack the social-emotional skills necessary for dealing with the challenges that they face.5 To help these youth advance academically, the social-emotional factors that create effective conditions for learning must be addressed. The four social and emotional conditions for learning are:
- Safety: Learners must be, and feel, safe. Safety involves emotional as well as physical safety, for example, being safe from sarcasm and ridicule.
- Support: Learners must feel connected to teachers and the learning setting, must have access to appropriate support, and must be aware of and know how to access the support.
- Social and Emotional Learning (SEL): Learners need to learn to manage their emotions and relationships positively and be surrounded by peers who also behave in socially responsible ways.
- Engagement and Challenge: Learners need to be actively engaged in learning endeavors that are relevant to them and that enable them to develop the skills and capacities to reach positive life goals.
These four conditions are interdependent and reinforce each other.6 For example, teachers who have positive relationships with students will find it easier to engage students7 and to develop their students' social and emotional skills. Similarly, social and emotional learning contributes to safe and challenging learning environments.

In this technical issue brief, we explore how each of the four conditions for learning applies to children and youth in or at risk of being placed in juvenile justice facilities or programs for neglected youth. We also introduce approaches that may help facilities increase the presence of these conditions and provide a number of "additional resources" within each section for further exploration of research and practical applications. Finally, we discuss how to assess the social and emotional strengths of students and the conditions for learning in N or D settings.

Safety encompasses freedom from physical harm (such as peer violence and substance abuse) and threats of physical harm as well as freedom from emotional harm (such as teasing and relational bullying). This includes both actual and perceived levels of risk, with an emphasis on the condition as it is experienced by the student. Individuals in a safe school environment are able to share a sense of mutual trust and respect. Additionally, a safe school environment fulfills students' core psychological needs, including the needs to belong, to be autonomous, and to be physically secure. Research shows that when basic psychological needs such as these are fulfilled, students are more apt to align with and commit to the school community's norms and rules.8 Similarly, evidence indicates that unsafe school environments are associated with higher levels of negative risk-taking behavior and disengagement from school.9 Many factors undermine safety and the perception of safety in N or D settings.
These can include poor relationships among racial and ethnic groups within the facility, gang rivalry between and among students, reactive and punitive approaches on the part of institutional staff, the lack of positive behavioral supports, and untreated, undertreated, or poorly treated mental health disorders such as depression and posttraumatic stress disorder. Mental health needs are particularly important for schools that work with children and youth who are neglected or delinquent.10 For example, a study of a random, stratified sample of 1,172 males and 657 females ages 10-18 who were arrested and detained in Cook County, Illinois, found that nearly two-thirds of males and three-quarters of females met diagnostic criteria for a mental disorder.11 Similarly, a randomized study of youth from all 15 of Maryland's juvenile facilities found that 53 percent met diagnostic criteria for a psychiatric disorder and that two-thirds of those with any mental health diagnosis had more than one mental or substance use diagnosis.12 The needs of neglected children and youth are no different. A national study of the prevalence of mental health disorders among children and youth in the child welfare system found that 50 percent of these children and youth had mental health problems.13

Multiple strategies can be used by administrators as first steps to increase levels of safety within a setting that serves youth who are N or D, including mental health screening; the transfer of accurate personal and academic records (at entry and exit) that provide a comprehensive view of a student's history and needs; appropriate student placement/separation that takes into consideration physical size, gender, and gang affiliation; the application of positive behavioral approaches;14 and reducing—and ideally preventing—the use of punitive measures such as restraints. These strategies are further addressed below.

Mental Health Screening
Many youth may arrive at N or D settings with unidentified mental health needs. This is particularly the case for youth of color, as research suggests that they have had less access to mental health screening, assessment, and intervention. For example, Black youth are less likely than their White counterparts to be referred to treatment centers and more likely to be referred to juvenile justice settings.15

Accurate Records Transfer and Intake Screening
Administrators can prioritize efforts and create policies to ensure that records transferred to and from their facility at student entry and exit are as accurate and complete as possible. To address basic student needs with the proper services (thus contributing to a student's sense of personal security) and minimize disruption of service delivery during transition, facilities should receive records that include at least the following information:
- Mental health history, including suicide risk
- Substance abuse history
- Math and reading levels
- Records of full academic data, including assessment results, transcripts, and report cards
Information from student records should be used in conjunction with intake screening and ongoing assessment to create comprehensive and up-to-date profiles for students entering facilities.
Because students typically leave facilities with little or no notice, it is important that their records be updated frequently so that the documentation they bring to their next placement is accurate. Assessments should be culturally competent and identify both strengths and needs. (See the "Assessing the Social and Emotional Strengths and Needs of Students and the Social and Emotional Conditions of Learning in Your School" section of this document for more information on strengths-based assessments.) In addition to using information from student records and intake screenings to inform decisions about the proper provision of services (such as mental health services, mentoring, or suicide prevention programs), it is also important to share with a student's teachers and classroom aides, as appropriate, any non-academic information that may facilitate teaching and learning. Such non-academic information might focus on specific student strengths or needs. For example, information on needs might include whether the student has any cognitive disabilities that affect his or her ability to process information, whether the student recently experienced a traumatic event, or whether the student is affiliated with a specific gang that may have a prominent presence—or rivalry—in the facility.

Appropriate Youth Placement/Separation
Although it is common practice among facilities to separate youth of differing physical size to help avoid safety issues, decisions about youth placement should also take into consideration gang affiliation and gender issues. It is important that staff and faculty learn about and be able to identify local gang signs; if present in large numbers, members of competing gangs may need to be separated. Separation on the basis of gender is also a vital first step in creating a secure environment for girls, many of whom have experienced rape, molestation, and other sexual and physical abuse. In fact, approximately 3 out of 4 girls in the juvenile justice system report a history of abuse, and at least 50 percent of these young women meet the diagnostic criteria for posttraumatic stress disorder.16 Attempts at suicide are also higher among girls (who are more likely to internalize negative feelings and to resort to self-harm or substance abuse) than among boys.17

Use of Restraints
Because youth who are neglected or delinquent may be survivors of past trauma (particularly female adolescents in the system), the use of punitive control measures such as restraints significantly takes away from students' experiences of safety, and may even retraumatize victims. Using these types of measures should be minimized as much as possible. The National Mental Health Association's position statement on the treatment of confined youth who have mental health needs states that, "When restraint must be used to prevent injury to self or others, there should be stringent procedural safeguards, limitations on time, periodic reviews and documentation. Generally, these techniques should be used only in response to extreme threats to life or safety and after other less restrictive control techniques have been tried and failed."18

Additional Resources for Safety
- Blueprint for Change: A Comprehensive Model for the Identification and Treatment of Youth With Mental Health Needs in Contact With the Juvenile Justice System (PDF)—This model guide is a conceptual and practical framework for juvenile justice and mental health systems to use when developing strategies, policies, and services aimed at improving mental health services for youth involved in the juvenile justice system.
- In Harm's Way: A Primer in Detention Suicide Prevention (PDF)—This document describes a local model for suicide prevention in juvenile detention and residential facilities, and is meant to provide those who work in such settings with information on lessons learned.
- NDTAC Presentation on Mental Health Screening and Assessment (MS Word)—This presentation describes why mental health screenings are needed, how they work, and which tests are appropriate for youth in N or D settings.
- National Mental Health Association's (NMHA) Principles for Treatment of Confined Students With Mental Health Needs—This paper highlights nine principles that should be taken into consideration to meet the mental health needs of youth in the juvenile justice system.
- Screening and Assessing Mental Health and Substance Use Disorders Among Youth in the Juvenile Justice System: A Resource Guide for Practitioners—This resource presents information on instruments that can be used to screen and assess youth for mental health- and substance use-related disorders at various stages of the juvenile justice process. The guide includes profiles of more than 50 instruments, guidelines for selecting instruments, and best practice recommendations for diverse settings and situations.
- Suicide Prevention and Juvenile Justice Resources (PDF)—An annotated list of resources for suicide prevention in the juvenile justice setting. The list is organized in the context of the public health approach to suicide prevention. Links are included if the resource is available electronically.
- Suicide Prevention in Juvenile Facilities—This National Criminal Justice Reference Service (NCJRS) publication on suicide prevention includes a list of the critical components of a suicide prevention policy.

Accurate Records Transfer and Intake Screening
- Legislation and Interagency Relationships Aid in the Successful Transfer of Student Records—This brief discusses how legislation and interagency relationships work together to aid in student transitions.
- NDTAC's Self Study Toolkit: Records Transfer Module—This module helps administrators determine how successful their facility is at transferring student records.

Use of Restraints and Seclusion
- The Alliance to Prevent Restraint, Aversive Interventions, and Seclusion (APRAIS)—APRAIS was founded by members of the Nation's leading education, research, and advocacy organizations to protect children from abuse in their schools, treatment programs, and residential facilities. APRAIS works to identify the laws, regulations, and loopholes that permit the use of aversive interventions, nonemergency restraints, and seclusion. This Web site provides a parent guide as well as other information and materials on aversive procedures, seclusion, and nonemergency restraint.
- National Coordinating Center to Reduce and Eliminate the Use of Seclusion and Restraint—This Web site provides an overview of the National Coordinating Center to Reduce and Eliminate the Use of Seclusion and Restraint, whose purpose is to promote the implementation and evaluation of best practice approaches to preventing and reducing the use of seclusion and restraint in mental health settings.
- A Roadmap to Seclusion and Restraint Free Mental Health Services for Persons of All Ages—This training curriculum emphasizes the importance of creating cultural change within organizations to impact seclusion and restraint reduction. It outlines best practices in the use of trauma-informed care and other aspects to support resiliency and recovery of people with mental illnesses while avoiding seclusion and restraint practices that can harm rather than help.
- Plan for a Continuum of Community Based Services for Female Status Offenders and Delinquents (PDF)—This 2005 report by the Connecticut Department of Children and Families makes recommendations for gender-responsive policies that address the needs of girls involved in the juvenile justice system. The report also touches upon the negative impact of using restraint and seclusion with female juvenile offenders.

General Resources for Safety in Schools
- Early Warning, Timely Response (PDF)—Prepared for the Departments of Education and Justice at the request of the White House, this guide presents a brief summary of the research on violence prevention and intervention and crisis response in schools. It describes the early and imminent warning signs that relate to violence and other troubling behaviors and details action steps for preventing violence, intervening and getting help for troubled children, and responding to school violence.
- Safeguarding Our Children: An Action Guide (PDF)—Prepared for the Departments of Education and Justice, this guide helps schools develop and implement a comprehensive violence prevention and response plan, which can be customized to fit each school's particular strengths.
- Addressing Student Problem Behavior—This online tool is a set of three user-friendly guides to functional behavioral assessment and related interventions.

"When teachers have strong content knowledge and are willing to adapt their pedagogies to meet student needs, adding good teacher-student relationships and strong encouragement to the mix may be key. It may help Black and Hispanic students seek help more readily, engage their studies deeply, and ultimately overcome skill gaps that are due in substantial measure to past and present disparities in family-background advantages and associated social inequities."
(What Doesn't Meet the Eye: Understanding and Addressing Racial Disparities in High-Achieving Suburban Schools, p. 4)26

Support includes the availability of help to meet the student's social, emotional, behavioral, and academic needs. Support also refers to the student's sense of connection and attachment to the adults in the facility and of being cared about and treated well and respectfully by them.
Optimizing the experience of support requires creating caring connections with adults who offer encouragement and nurturing and who are significantly involved in the life of the student; even one high-quality supportive relationship with an adult early in high school, particularly for students of color, has been shown to have dramatic effects.19 When high expectations are emphasized in conjunction with support, the result, typically, is an increase in academic achievement; this seems to be especially true for students of color and students who are at risk of academic failure.20 For example, adolescent perceptions of connections with teachers have been shown to predict academic growth in mathematics,21 and teacher nurturance has been found to be the most consistent negative predictor of poor academic performance and problematic social behavior.22 Similarly, in another study, teachers who had high-quality relationships with their students had 31 percent fewer discipline problems, rule violations, and related problems over a year's time than teachers who did not.23

Many youth in N or D facilities view adults as being uncaring, manipulative, and punitive. Although this perception may not be accurate, it nevertheless affects their ability to learn from the adults whom they encounter. Caring support for these youth may be even more crucial than for other adolescents, yet it is less accessible. In addition, N or D settings must struggle with the risk of peer contagion, the negative peer influences that may arise from grouping deviant youth together. In fact, research shows that "a third of the positive effects of juvenile justice interventions are offset by adverse effects of interventions administered in a deviant peer-group context."24 The development of caring relationships between staff and students can reduce this risk.

The issue is not just hiring caring staff, but also creating the capacity to care.25 Capacity building involves training and support.26 Staff training should involve the entire staff—not just the education staff—and be reinforced by coaching, supervision, and programs such as positive behavioral interventions and supports. Focused training and support can enhance staff:
- Cultural and linguistic competency
- Understanding of their ability to use positive behavioral approaches
- Understanding of how learning disabilities and emotional and behavioral disorders affect student behavior
- Expectations for student success
- Ability to identify student strengths
At the same time, training and support can reduce:
- Deficit-oriented approaches to students
- Reactive and punitive approaches to students that create or escalate problem behavior
Finally, teacher training should also have a combined emphasis on content, pedagogy, and relationships.27

One of the ways that supportive relationships between students and staff members in facilities may be enhanced is through a structured mentoring process. Research shows that successful mentoring programs match mentors and mentees, support the mentoring process, and are at least 1 year long.28 When implementing a mentoring program at a facility, program planners should also consider including mentoring in the aftercare plan developed for youth being released from the facility.
For students who have mental health or other special education needs, support also involves appropriately addressing their needs in an effective and caring manner. Effective approaches tend to be strengths based, individualized, youth driven, and culturally and linguistically appropriate and competent. Frequently, such approaches combine cognitive and behavioral components with the development of positive relationships between the student and the individual(s) providing the intervention. In the case of girls and young women, effective support must address the impact of trauma and abuse. Online resources for addressing mental health needs include the Technical Assistance Partnership for Child and Family Mental Health (TAP) Web site, and the Center for Effective Collaboration and Practice (CECP) Web site, which includes useful information on youth-guided approaches, cultural competence, positive behavioral interventions and supports, and effective mental health treatments. Additional Resources for Support - NDTAC's Mentoring Toolkit: Resources for Developing Programs for Incarcerated Youth—This toolkit provides a complete list of considerations for developing a mentoring program. - Office of Juvenile Justice and Delinquency Prevention (OJJDP) Resources on Mentoring, including model programs. - NDTAC Article: The Post Traumatic Stress Disorder (PTSD) Project in Pennsylvania—This article details the PTSD curriculum, which is intended to nurture experiences of safety, bonding, and other positive conditions for the 70 to 90 percent of girls who enter the juvenile justice system with a history of abuse. These teenage girls typically also suffer from any of a host of mental health disorders, including PTSD. - Wingspread Declaration: A National Strategy for Improving School Connectedness (PDF)—This one-page declaration, based on empirical evidence, aims to form the basis for creating school and classroom environments where all students are engaged and feel part of the educational experience. - School Connectedness: Improving Students' Lives (PDF)—This report, prepared by the Military Child Initiative at the Johns Hopkins Bloomberg School of Public Health, distills research and practical advice on how to promote connectedness. - Safe, Drug-Free, and Effective Schools for ALL Students: What Works!—Prepared for the Department of Education's Office of Special Education and Rehabilitative Services, this study reports on schools in three communities where parents, teachers, administrators, and students work together to make the schools safe and effective learning environments for all students by providing a caring and supportive environment that includes positive behavioral supports and social emotional learning. - Deviant Peer Influences in Intervention and Public Policy for Youth (PDF)—A report published by the Society for Research in Child Development (SRCD) that examines the impact delinquent peers have on one another and discusses related policy options for programs serving youth in placement. Social and Emotional Learning (SEL) Social and emotional learning creates a foundation for academic achievement, maintenance of good physical and mental health, parenting, citizenship, and productive employment. Social and emotional learning (SEL) is a process through which children and adults learn to understand and manage their emotions and relationships. 
This includes developing (or enhancing) the ability to demonstrate caring and concern for others, establish positive relationships, make responsible decisions, value and address diversity, and handle challenging situations effectively. SEL creates a foundation for academic achievement, maintenance of good physical and mental health, parenting, citizenship, and productive employment. SEL helps create a positive school environment. If there are positive conditions for learning, and the capacity for SEL is built, the result is greater capacity and engagement on the part of the children, less problematic behavior, and better academic results. The development of SEL competencies is important for child and adolescent development, and these competencies form the basis of a student's ability to respond to "academic frustrations, inappropriate adult behavior, and antisocial peer behavior."29 SEL contributes to successful academic outcomes, safe environments, and the ability of children and youth to make successful transitions.

Research syntheses suggest the importance of SEL to academic achievement. For example, a recent meta-analysis of 207 SEL interventions in schools that applied the What Works Clearinghouse Improvement Index showed that the improvement index for students who received the intervention was 11 percentage points higher than for comparison-group students. Table 1 provides the effect sizes for different outcomes in this meta-analysis.30

| Outcome Area | Post N | Effect Size |
| Academic achievement tests | 27 | .32* |
| Positive social behavior | 96 | .25* |
Effect sizes denoted with * are statistically significant, p < .05.

Analyses of the impact of effective SEL interventions on academics show that they affect school-related attitudes, behavior, and performance.31
Improved attitudes include:
- A stronger sense of community (bonding),
- More academic motivation and higher aspirations, and
- More positive attitudes toward school.
Resulting behavioral improvements include:
- Understanding the consequences of behavior,
- Coping effectively with school stressors,
- More classroom participation,
- Greater effort to achieve,
- Fewer hostile negotiations at school,
- Fewer suspensions, and
- Increased engagement.
Resulting performance improvements include:
- Increased grades and achievement,
- Increases in being on track to graduate, and
- Fewer dropouts.

SEL is equally important in reducing problem behavior. For example, a meta-analysis of 165 studies of school-based prevention found that self-control or social competency programming that employed cognitive-behavioral and behavioral instructional methods was consistently effective in reducing dropout rates, nonattendance, conduct problems, and substance use.32 SEL is of particular relevance in improving outcomes for children and youth in N or D programs and settings. These young people often have poor social communication skills and lack proper anger management and conflict resolution capacities.33 Studies by Kenneth Dodge and John Lochman have shown that aggressive youth often have a distorted perception of aggression, in that they over-perceive aggression in peers and under-perceive their own aggressive behaviors.34 Other experts have suggested that many youth view violence as a functional and commonplace solution for solving problems.35 Not surprisingly, a relationship exists between SEL and safety.
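Before turning to the safety correlation discussed next, it may help to see how an improvement index of the kind cited above relates to an effect size. The index is commonly computed as the percentile rank corresponding to the effect size under a standard normal distribution, minus 50. A minimal sketch, applied to the Table 1 effect sizes (the 11-point figure in the text is the meta-analysis's own summary, so these conversions will only be in the same ballpark):

```python
from statistics import NormalDist

# Convert a standardized effect size into an improvement index: the expected
# percentile-rank gain of an average comparison-group student had that student
# received the intervention, assuming normally distributed outcomes.
def improvement_index(effect_size: float) -> float:
    return (NormalDist().cdf(effect_size) - 0.5) * 100

# Effect sizes taken from Table 1 above.
for outcome, es in [("Academic achievement tests", 0.32),
                    ("Positive social behavior", 0.25)]:
    print(f"{outcome}: effect size {es:.2f} -> about "
          f"{improvement_index(es):.0f} percentile points")
```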
For example, an examination of SEL and safety scores for all Chicago high schools found a strong correlation (.72) between safety and SEL scales.36 Surveys of employers also suggest the importance of SEL in the workplace. Eight of the 16 competencies that the Secretary of Labor's Commission on Achieving Necessary Skills deemed as skills necessary for high school graduates or individuals entering the workforce relate to SEL: self-esteem, sociability, integrity/honesty, problemsolving, self-management, responsibility, listening, and decisionmaking.37 If we want to help youth who are neglected or delinquent deal with the psychosocial challenges that they face, we must pay special attention to the development of social skills and conflict resolution techniques. By teaching students about problemsolving, decisionmaking, and resisting negative social pressures, educators can help students combat psychosocial obstacles to learning. One strategy used to teach students to resist negative social pressures is social resistance training. Social resistance training has proved valuable in helping youth to steer clear of alcohol, tobacco, and more dangerous drugs. This type of training places emphasis on the development of social persuasion techniques that enable youth to avoid risky behaviors by turning the social situation around in their favor.38 Teaching social persuasion techniques involves the modeling and rehearsing of proper refusal techniques. Examples of program interventions are LifeSkills Training and Say It Straight. In the area of conflict resolution, it is important to start changing youths' paradigm of violence. Effective conflict resolution and problemsolving strategies help youth realize that violence begets more violence. Successful conflict resolution programs have lowered the level of violence in schools, juvenile facilities, and communities at large.39 View the OJJDP Model Programs Guide for more information on Conflict Resolution/Interpersonal Skills. Although teaching specific social skills is important, it is just as important to teach general SEL skills. These skills are: - Self Awareness-Recognizing one's emotions and values as well as one's strengths and limitations - Self Management-Managing emotions and behaviors to achieve one's goals - Social Awareness-Showing understanding and empathy for others - Relationship Skills-Forming positive relationships, working in teams, and dealing effectively with conflict - Responsible Decision making-Making ethical, constructive choices about personal and social behavior The most effective educational strategies address these psychosocial issues with a multipronged approach, which teaches and models skills, provides opportunities to practice the skills and coaching on how the skills are being implemented, and creates opportunities to practice, adapt, and generalize these skills in natural settings. Service Learning, for example, is an effective method of experiential learning for youth who are neglected or delinquent that places the student in an active role in his or her community. Service learning projects may involve such activities as creating books for children or cards for the elderly or participating in community-service projects. 
However, to be valuable, a service learning program must meet four criteria:
- It addresses actual community needs.
- It is coordinated in collaboration with school and community, which in the case of secure neglected or delinquent settings could include the institution.
- It is integrated into each student's academic curriculum.
- It allocates a specific time for the student to reflect on the experience.

When youth who have encountered challenges in their lives engage in service learning, they can reap a number of benefits, including a more positive self-image, reduced stress and feelings of helplessness, self-respect and responsibility, and a sense of reclamation.40

Children and youth who face more intense SEL challenges may require more intensive approaches if they are to change negative thoughts, beliefs, and behaviors. An effective approach in this case may be cognitive-behavioral therapy/treatment (CBT). CBT is a problem-focused approach that helps youth identify and change the beliefs, thoughts, and patterns of behavior that contribute to their problems. CBT combines two very effective kinds of psychotherapy—cognitive therapy and behavioral therapy. Cognitive therapy concentrates on thoughts, assumptions, and beliefs. It helps people recognize and change faulty or maladaptive thinking patterns. The OJJDP Model Programs Guide provides the example of a young person who is having trouble completing a math problem: the student may repetitively think that he or she is stupid, not a good student, and can't do math. Students can learn to replace these thoughts with more realistic thoughts such as "This problem is difficult; I'll ask for help."41 Behavioral therapy concentrates on changing behaviors and environments that maintain problematic behaviors. Successful programs that have implemented CBT specifically root out negative thoughts and "reinforce positive behavior by using CBT strategies delivered by teachers, mentors, tutors, peers, and school staff."42 CBT techniques can be applied at various levels: at an individualized level (e.g., personalized interaction with a teacher); at a classroom level (see The Incredible Years); at a schoolwide level (see School Transitional Environmental Program); and at the community level (see Movimiento Ascendencia [Upward Movement]).43

Additional Resources for SEL
- The Starr Commonwealth program is a model program that combines SEL approaches with an emphasis on restorative justice.
- The OJJDP Model Programs Guide offers a number of strategies and approaches to SEL programming.
- Aggression Replacement Training® (ART®)—ART is a multimodal psychoeducational intervention designed to alter the behavior of chronically aggressive adolescents and young children. The goal of ART is to improve social skill competence, anger control, and moral reasoning.
- Safe and Sound: An Educational Leader's Guide to Evidence-Based Social and Emotional Learning (SEL) Programs (PDF)—In an easy-to-read "consumer report" fashion, this guide distills what is known about effective SEL instruction and provides information on effective programs for the classroom that promote SEL. It discusses the associated costs, the grades covered, the level of rigorous evidence, which schools are most effective at teaching SEL skills, and which promote professional development for teachers.
- What Works in Character Education (PDF) describes effective character education practices.
- Character and Academics: What Good Schools Do—This Phi Delta Kappan article describes effective approaches to character education.

Engagement and Challenge
Engagement involves energizing a student's interest in the educational process. Engagement is multidimensional: it has academic, behavioral, cognitive, and psychological dimensions, which are enhanced when the other conditions for learning are present.44 Engagement is enhanced when learning builds upon student strengths, addresses the student's interests, and is perceived by the student as being relevant to his or her future. Culturally competent approaches that address individual learning needs and provide an appropriate balance between challenge and support can enhance engagement.45 Challenge involves setting and promoting high expectations for all students, connecting the curriculum to the larger picture and the outside world, fighting boredom, and encouraging the intellectual curiosity of all students.46 For students to be engaged and feel challenged in their academic setting, they must "experience a climate of high expectations for achievement (and related school behavior) that is shared and reinforced by other students, their friends, their teachers, and their family."47 Students must be challenged with high expectations, must be personally motivated, must feel that school is connected to larger life goals, and must be given tangible academic opportunities.

Challenge should be of special interest in neglected and delinquent settings because best practices as well as expectations in N or D education now stress the importance of providing youth who are neglected, delinquent, or at risk of educational failure with a rigorous and challenging learning environment, in which:
- The curriculum in all academic areas focuses on "comprehension and complex, meaningful problemsolving tasks," so that youth enhance their cognitive skills.
- The curriculum consists of skills that can be easily applied to real-life situations.
- The curriculum emphasizes team-based approaches, e.g., cooperative learning, tutoring among peers, and "team problemsolving activities."
- The curriculum highlights "metacognition"—the ability of a student to perceive his or her strengths and weaknesses.
- The curriculum employs materials in all subject areas that are based on "life and social skills competencies."48

Many authors have pointed to the necessity of fostering a "creative, exciting" learning environment that is tailored to students' interests.49 Student learning can be enhanced by personalization, culturally responsive instruction, active learning, experiential learning, identifying and building upon student knowledge and interests, service learning, access to rigorous learning opportunities, and the creative use of technology to scaffold learning and to promote higher-order thinking.50

To create an enriching learning environment, many organizational features should be present:
- Strong academic leadership
- A safe school environment
- Adequate space and equipment
- A variety of print and nonprint instructional materials
- Library services
- Measurable performance outcomes
- Instructional support services51

It is also crucial that school-to-work linkages exist and that a strong emphasis be placed on vocational training.
Studies have shown an inverse correlation between vocational and employability skills and recidivism among youth.52 As an example of one such program that links school to work, the Smyrna Beach Employability Skill Training program teaches students applicable job skills by having them participate in a school-based business, which is set up to mirror an actual place of employment. Students engage in the production, promotion, and sales of their products, and in return they receive a paycheck.53

Additional Resources for Engagement and Challenge
- Making the Juvenile Justice–Workforce System Connection (PDF) stresses the importance of developing employability skills to reduce recidivism rates for youth who are neglected or delinquent.
- Engaging Schools, Fostering High School Students' Motivation to Learn—This book, which can be searched and read online, was prepared by the National Research Council and Institute of Medicine of the National Academy of Sciences Board on Children, Youth and Families. It summarizes the best research on engagement and motivation.
- Creating Culturally Responsive Schools describes research-supported approaches that educators can employ to promote culturally responsive education.
- Center for Implementing Technology in Education (CITEd) supports leadership at State and local education agencies to integrate instructional technology for all students to achieve high educational standards.
- Techmatrix—A powerful tool for finding assistive and learning technology products for students who have special needs.
- The Access Center—The mission of the Access Center is to provide technical assistance that strengthens State and local capacity to help students who have disabilities effectively learn in the general education curriculum.
- National Center on Student Progress Monitoring—The center provides technical assistance to States and districts and disseminates information about progress monitoring practices proven to work in different academic content areas (grades K-5).
- National Center on Response to Intervention—Response to Intervention (RTI) can help teachers maximize student achievement through early identification of learning or behavioral difficulties, the provision of appropriate evidence-based interventions, and the monitoring of student progress based on achievement and other performance data.

Assessing the Social and Emotional Strengths and Needs of Students and the Social and Emotional Conditions of Learning in Your School
Needs should be assessed at both the individual and institutional levels. Individual assessments will help you develop interventions for individual students and monitor their progress. Schoolwide assessment will help you identify the array of schoolwide, group, and individual strategies you may need to develop and will help you monitor schoolwide progress.

Individual assessments have traditionally employed deficit-oriented instruments. In recent decades, there has been a movement toward strengths-based assessments. One of the most researched measures is the Behavioral and Emotional Rating Scale (BERS).54 The BERS is a 52-item scale normed on a racially and ethnically representative national sample of 2,176 children without disabilities and 861 children with emotional and behavioral disorders, ages 5 to 18.
It is completed by adults familiar with the youth and measures emotional and behavioral strengths for five empirically derived factors: interpersonal strengths (e.g., accepts "no" for an answer), family involvement (e.g., participates in family activities), intrapersonal strengths (e.g., demonstrates self-confidence), school functioning (e.g., completes school tasks on time), and affective strengths (e.g., accepts a hug). Schoolwide assessments collect information on how students experience the school climate. A number of reputable school climate assessments exist. Effective surveys should have valid and reliable items and scales. They can be administered on a schoolwide basis or to a sample of students, and their data can be disaggregated to see how subgroups of students are experiencing the school environment. To maximize the honesty of student responses, it is important that student confidentiality be ensured. One well researched instrument for measuring the conditions for learning, initially developed for use in Chicago, is now being used in a number of U.S. districts. Both a middle grades version and a high school version of this survey are available (see the American Institutes for Research Conditions for Learning survey information on the SSSTA [Safe and Supportive Schools Technical Assistance Center] School Climate Measurement Web page). The surveys are administered annually in elementary and high schools in Cleveland and Syracuse. This type of data could be used when developing State N or D report cards. The data from the survey are also reported back to schools, in both an aggregated and disaggregated manner for continuous improvement by school improvement teams, who review the scores to identify needs and successes. Other school climate measures also may be found on SSSTA's School Climate Measurement Web page. To create a positive learning environment for all students, it is important to assess and enhance the four social and emotional conditions for learning. Students must feel both physically and emotionally safe from harm. They must feel that the adults in their lives care about them and are there to support them. Students also have to be equipped with the social and emotional skills to deal with their behaviors and actions in nonviolent, mature, and reasoned ways. Finally, it is important that all students feel engaged and challenged in their learning environment, with high expectations set for all. Only when the four conditions for learning are addressed can a comprehensive plan for student learning be truly effective. Moreover, creating such a plan requires buy-in from key stakeholders—students, teachers, administrators, counselors, parents, and members of the community. A successful plan also requires a long-term commitment by teachers, facility staff, administrators, and policymakers. With such a plan, every child can be given the opportunity to learn in the best learning environment possible—an environment that is supported strongly by effective conditions for learning. 1Becker, B., & Luthar, S. (2002), Social-emotional factors affecting achievement outcomes among disadvantaged students: Closing the achievement gap. Educational Psychologist, 37(4), 197–214; Cambourne, B. (2002). The conditions of learning: Is learning natural? The Reading Teacher,55(8), 758–762. 2American Institutes for Research. (2007). Student connection score report. Unpublished report. Washington, DC: Author. 3Spier, E., Cai, C., Kendziora, K., & Osher, D. (2007). 
School climate and connectedness and student achievement. Juneau, AK: Association of Alaska School Boards; Kendziora, K., Osher, D., Van Buren, E., Sochet, M., & King, K. (2006). Safe Schools/Successful Students Initiative annual report. New York: United Way of New York City; Greenberg, E., Skidmore, D., & Rhodes, D. (2004, April). Climates for learning: Mathematics achievement and its relationship to schoolwide student behavior, schoolwide parental involvement, and school morale. Paper presented at the annual meeting of the American Educational Researchers Association, San Diego, CA. 4Quinn, M. M., Rutherford, R. B., Leone, P. E., Osher, D. M., & Poirier, J. M. (2005). Students with disabilities in detention and correctional settings. Exceptional Children, 71(3), 339–345. 5Coffey, O. D., & Gemignani, M. G. (1994). Effective practices in juvenile correctional education: A study of the literature and research 1980–1992. Washington, DC: U.S. Department of Justice, Office of Juvenile Justice and Delinquency Prevention. 6Osher, D., Sprague, S., Axelrod, J., Keenan, S., Weissberg, R., Kendziora, K., & Zins, J. (2007). A comprehensive approach to addressing behavioral and academic challenges in contemporary schools. In J. Grimes & A. Thomas (Eds.), Best practices in school psychology (5th ed.; pp. 1263–1278). Bethesda, MD: National Association of School Psychologists. 7National Research Council, and Institute of Medicine of the National Academy of Sciences, Board on Children, Youth and Families. (2004). Engaging schools, fostering high school students' motivation to learn. Washington, DC: National Academic Press. 8Spier, E., Cai, C., Kendziora, K., & Osher, D. (2007). School climate and connectedness and student achievement. Juneau, AK: Association of Alaska School Boards. 9Osher, D., Dwyer, K., & Jimerson, S. (2006). Foundations of school violence and safety. In S. Jimerson & M. Furlong (Eds.), Handbook of school violence and school safety: From research to practice (pp. 51–71). Mahwah, NJ: Lawrence Erlbaum Associates. 10Kendziora, K. T., & Osher, D. (2004). Fostering resilience among youth in the juvenile justice system. In C. S. Clauss-Ehlers & M. D. Weist (Eds.), Community planning to foster resilience in children (pp. 177–195). New York: Kluwer Academic Publishers; Meredith, L. (2004, February). Mental health screening and assessment (MS Word), Presented at the Second Regional Transition Conference of the Neglected and Delinquent Technical Assistance Center, New Orleans, LA. 11Teplin, L. A., Abram, K. M., McClelland, G. M., Dulcan, M. K., & Mericle, A. A. (2002). Psychiatric disorders in youth in juvenile detention. Archives of General Psychiatry, 59, 1133–1143. 12Shelton, D. (2001). Emotional disorders in young offenders. Journal of Nursing Scholarship, 33, 259–263. 13Burns, B., Phillips, S., Wagner, H., Barth, R., Kolko, D., Campbell, Y., & Yandsverk, J. (2004). Mental health need and access to mental health services by youths involved with child welfare: A national survey. Journal of the American Academy of Child and Adolescent Psychiatry, 43(8), 960–970. 14Brock, L., & Quinn, M. (2006). NDTAC Issue Brief: The positive behavioral interventions and supports (PBIS) model. 15Marsteller, F., Brogan, D., Smith, I., Ash, P., Daniels, M. S., Rolka, M. S., et al. (1997). The prevalence of substance use disorders among juveniles admitted to regional youth detention centers operated by the Georgia Department of Children and Youth Services. Final report. 
Rockville, MD: Center for Substance Abuse Treatment; Sheppard, V. B., & Benjamin-Coleman, R. (2001). Determinants of service placements for youth with serious emotional and behavioral disturbances. Community Mental Health Journal, 37, 53–65; Woodruff, D., Osher, D., Hoffman, C., Gruner, A., King, M., Snow, S., & McIntire, J. (1999). The role of education in a system of care: Effectively serving children with emotional or behavioral disorders (Vol. III in Systems of Care: Promising Practices in Children's Mental Health series). Washington, DC: Center for Effective Collaboration and Practice & American Institutes for Research. 16Evans, W., Albers, E., Macari, D., & Mason, A. (1996). Suicide ideation, attempts, and abuse among incarcerated gang and nongang delinquents. Child and Adolescent Social Work Journal, 13, 115–126; Cauffman, E., Feldman, S. S., Waterman, J., & Steiner, H. (1998). Posttraumatic stress disorder among female juvenile offenders. Journal of the American Academy of Child and Adolescent Psychiatry, 37, 1209–1216. 17National Mental Health Association, as cited in Kendziora, K. T., & Osher, D. (2004). Fostering resilience among youth in the juvenile justice system. In C. S. Clauss-Ehlers & M. D. Weist (Eds.), Community planning to foster resilience in children (pp. 177–195). New York: Kluwer Academic Publishers. 18National Mental Health Association. (n.d.). Children with emotional disorders in the juvenile justice system. 19Osterman, K. F. (2000). Students' need for belonging in the school community. Review of Educational Research, 70(3), 323; Croninger, R. G., & Lee, V. E. (2001). Social capital and dropping out of high schools: Benefits to at-risk students of teachers' support and guidance. Teachers College Record, 103(4), 548–581; Connell, J. P., Halpern-Felsher, B., Clifford, E., Crichlow, W., & Usinger, P. (1995). Hanging in there: Behavioral, psychological, and contextual factors affecting whether African-American adolescents stay in school. Journal of Adolescent Research, 10(1), 41–63; Gambone, M. A., Klem, A. M., & Connell, J. P. (2002). Finding out what matters for youth: Testing key links in a community action framework for youth development (PDF). Philadelphia: Youth Development Strategies, Inc., & Institute for Research and Reform in Education. 20Osher, D., Cartledge, G., Oswald, D., Artiles, A. J., & Coutinho, M. (2004). Issues of cultural and linguistic competency and disproportionate representation. In R. Rutherford, M. Quinn, & S. Mather (Eds.), Handbook of research in emotional and behavioral disorders (pp. 54–77). New York: Guilford Publications; Stipek, D. (2006). Relationships matter. Education Leadership, 64(1), 46–49; Ferguson, R. (2002). What doesn't meet the eye: Understanding and addressing racial disparities in high-achieving suburban schools (PDF). 21Gregory, A., & Weinstein, R. S. (2004). Connection and regulation at home and in school: Predicting growth in achievement for adolescents. Journal of Adolescent Research, 19, 405–427. 22Wentzel, K. R. (2002). Are good teachers like good parents? Teaching styles and student adjustment in early adolescence. Child Development, 73, 287–301. 23Waters, T., Marzano, B., & McNulty, B. (2003). Balanced leadership: What 30 years of research tells us about the effect of leadership on student achievement. Aurora, CO: Mid-continent Research for Education and Learning. 24Eddy. J. M., & Chamberlain, P. (2000). 
Family management and deviant peer association as mediators of the impact of treatment condition on youth antisocial behavior. Journal of Consulting and Clinical Psychology, 68, 857–863. 25Quinn, M. M., Osher, D., Hoffman, C. C., & Hanley, T. V. (1998). Safe, drug-free, and effective schools for ALL students: What works! Washington, DC: Center for Effective Collaboration and Practice & American Institutes for Research. 26Ferguson, R. (2002). What doesn't meet the eye: Understanding and addressing racial disparities in high-achieving suburban schools (PDF). 27Ferguson, R. (2002). What doesn't meet the eye: Understanding and addressing racial disparities in high-achieving suburban schools (PDF). 28National Evaluation and Technical Assistance Center for the Education of Children and Youth Who Are Neglected, Delinquent, or At Risk. (n.d.). The mentoring toolkit: Resources for developing programs for incarcerated youth (abridged version) (MS Word). 29Osher, D., Sprague, S., Axelrod, J., Keenan, S., Weissberg, R., Kendziora, K., & Zins, J. (2007). A comprehensive approach to addressing behavioral and academic challenges in contemporary schools. In J. Grimes & A. Thomas (Eds.), Best practices in school psychology (5th ed.; pp. 1263–1278). Bethesda, MD: National Association of School Psychologists. 30Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. (2008). The effects of social and emotional learning on the behavior and academic performance of school children. Chicago, IL: Collaborative for Academic, Social, and Emotional Learning. 31Zins, J. E., Weissberg, R. P., Wang, M. C., & Walberg, H. J. (2004). Building academic success on social and emotional learning: What does the research say? New York: Teachers College Press. 32Wilson, D. B., Gottfredson, D. C., & Najaka, S. S. (2001). School-based prevention of problem behaviors: A meta-analysis. Journal of Quantitative Criminology, 17, 247–272. 33Coffey, O. D., Gemignani, M. G. (1994). Effective practices in juvenile correctional education: A study of the literature and research 1980–1992. Washington, DC: U.S. Department of Justice, Office of Juvenile Justice and Delinquency Prevention. 34Dodge, K., & Lochman, J. (1998). Distorted perception in dyadic interactions of aggressive and non-aggressive boys: Effects of prior expectations, context, and boys' age. Development and Psychopathology, 10, 495–512. 35Dishion, T. J., Patterson, G. R., Stoolmiller, M., & Skinner, M. (1991). Family, school, and behavioral antecedents to early adolescent involvement with antisocial peers. Developmental Psychology, 27, 172–180. 36American Institutes for Research. (2007). Student connection score report. Unpublished report. Washington DC: Author. 37U.S. Department of Labor, Secretary of Labor's Commission on Achieving Necessary Skills. (1992). Skills and tasks for jobs: A SCANS report for America 2000 (PDF). Washington, DC: Author. 38Dusenbury, L., & Falco, M. (1995). Eleven components of effective drug abuse prevention curricula. Journal of School Health, 65, 420–425. 39LeBoeuf, D., & Delany-Shabazz, R. M. (1997). Conflict resolution: Fact sheet #55. Washington, DC: U.S. Department of Justice, Office of Juvenile Justice and Delinquency Prevention. 40Keenan, S., & Muscott, H. S. (2005, September). Creating positive and responsive environments for students with emotional/behavioral disorders in inclusive schools. Presented at the Council for Children with Behavioral Disorders International Conference, Dallas, TX. 41U.S. 
Department of Justice, Office of Juvenile Justice and Delinquency Prevention. (n.d.). Cognitive behavioral treatment. 42U.S. Department of Justice, Office of Juvenile Justice and Delinquency Prevention. (n.d.). Cognitive behavioral treatment. 43Lipsey, M. W., Chapman, G. L., & Landenberger, N. A. (2001). Research findings from prevention and intervention studies: Cognitive-behavioral programs for offenders. Philadelphia: American Academy of Political and Social Science; McLaughlin, T. F., & Vacha, E. (1992). School programs for at-risk children and youth: A review. Education and Treatment of Children 15, 255–267. 44Christenson, S. L., & Thurlow, M. L. (2004). School dropouts: Prevention considerations, interventions, and challenges. Current Directions in Psychological Science, 13(1), 36–39. 45Moll, L. C., & Greenberg, J. B. (1990). Creating zones of possibilities: Creating social contexts for instruction. In L. C. Moll (Ed.), Vygotsky and education: Instruction implications and applications of sociohistorical psychology. (pp. 319–348). New York: Cambridge University Press; Solano-Flores, G., & Nelson-Barber, S. (2001). On the cultural validity of science assessments. Journal of Research in Science Teaching, 38, 553–573. 46Osher, D., Sprague, S., Axelrod, J., Keenan, S., Weissberg, R., Kendziora, K., & Zins, J. (2007). A comprehensive approach to addressing behavioral and academic challenges in contemporary schools. In J. Grimes & A. Thomas (Eds.), Best practices in school psychology (5th ed.; pp. 1263–1278). Bethesda, MD: National Association of School Psychologists. 47Osher, D., Sprague, S., Axelrod, J., Keenan, S., Weissberg, R., Kendziora, K., & Zins, J. (2007). A comprehensive approach to addressing behavioral and academic challenges in contemporary schools. In J. Grimes & A. Thomas (Eds.), Best practices in school psychology (5th ed.; pp. 1263–1278). Bethesda, MD: National Association of School Psychologists. 48Coffey, O. D., & Gemignani, M. G. (1994). Effective practices in juvenile correctional education: A study of the literature and research 1980–1992. Washington, DC: U.S. Department of Justice, Office of Juvenile Justice and Delinquency Prevention. 49Miller, J. (1996). Race, gender and juvenile justice: An examination of disposition decision-making for delinquent girls. In M. D. Schwartz & D. Milovanovic (Eds.), Race, gender and class in criminology: The intersection (pp. 219–246). New York: Garland. 50Osher, D., Sprague, S., Axelrod, J., Keenan, S., Weissberg, R., Kendziora, K., & Zins, J. (2007). A comprehensive approach to addressing behavioral and academic challenges in contemporary schools. In J. Grimes & A. Thomas (Eds.), Best practices in school psychology (5th ed.; pp. 1263–1278). Bethesda, MD: National Association of School Psychologists. 51Juvenile Justice Educational Enhancement Program, Florida State University, Center for Criminology and Public Policy Research. (1999). Toward best practices in juvenile justice education (PDF). In 1999 Annual Report to Florida Department of Education (pp. 81–132). Tallahassee, FL: Author. 52Juvenile Justice Educational Enhancement Program, Florida State University, Center for Criminology and Public Policy Research. (1999). Toward best practices in juvenile justice education (PDF). In 1999 Annual Report to Florida Department of Education (pp. 81–132). Tallahassee, FL: Author. 53Casey, R. E. (1996). Delinquency prevention through vocational entrepreneurship. Preventing School Failure, 40(2), 60–62. 54Epstein, M. H. (1999). 
The development and validation of a scale to assess the emotional and behavioral strengths of children and adolescents. Remedial and Special Education, 20, 258–262.
http://www.neglected-delinquent.org/nd/resources/spotlight/cflbrief200803.asp
Two of Germany's most famous writers, Goethe and Schiller, identified the central aspect of most of Germany's history with their poetic lament, "Germany? But where is it? I cannot find that country." Until 1871, there was no "Germany." Instead, Europe's German-speaking territories were divided into several hundred kingdoms, principalities, duchies, bishoprics, fiefdoms and independent cities and towns. Finding the answer to "the German question"--what form of statehood for the German-speaking lands would arise, and which form could provide central Europe with peace and stability--has defined most of German history. This history of many independent polities has found continuity in the F.R.G.'s federal structure. It is also the basis for the decentralized nature of German political, economic, and cultural life that lasts to this day.

The Holy Roman Empire
Between 962 and the beginning of the 19th century, the German territories were loosely organized into the Holy Roman Empire of the German Nation. The initially non-hereditary Emperor, elected by the many princes, dukes, and bishops of the constituent lands and confirmed by the Pope, nominally governed over a vast territory, but had very limited ability to intervene in the affairs of the hundreds of entities that made up the Empire, many of which would often wage war against each other. The Empire was never able to develop into a centralized state.

Beginning in 1517 with Martin Luther's posting of his 95 Theses on the door of the Wittenberg Castle church, the German-speaking territories bore the brunt of the pan-European struggles unleashed by the Reformation. The leaders of the German kingdoms and principalities chose sides, leading to a split of the Empire into Protestant and Catholic regions, with the Protestant strongholds mostly in the North and East, the Catholic in the South and West. The split along confessional lines also laid the groundwork for the later development of the most powerful German states--Prussia and Austria--as the Prussian Hohenzollern line adopted Protestantism and the Habsburgs remained Catholic. The tension culminated in the Thirty Years' War (1618-1648), a combination of wars within the Empire and between outside European states that were fought on German land. These wars, which ended in a rough stalemate, devastated the German people and economy, definitively strengthened the rule of the various German rulers at the cost of the (Habsburg) Emperor (though Habsburg Austria remained the dominant single German entity within the Empire), and established the continued presence of both Catholics and Protestants in German territories.

The Rise of Prussia
The 18th and 19th centuries were marked by the rise of Prussia as a second powerful, dominant state in the German-speaking territories alongside Austria, and Austrian-Prussian rivalry became the dominant political factor in German affairs. Successive Prussian kings succeeded in modernizing, centralizing, and expanding the Prussian state, creating a modern bureaucracy and the Continent's strongest military. Despite Prussia's emphasis on militarism and authority, Prussia also became a center of the German Enlightenment and was known for its religious tolerance, with its western regions being predominantly Catholic and Jews being granted complete legal equality by 1812.
After humiliating losses to Napoleon's armies, Prussia embarked on a series of administrative, military, economic, and education reforms that eventually succeeded in turning Prussia into the Continent's strongest state. Following Napoleon's defeat, the 1814-1815 Congress of Vienna replaced the Holy Roman Empire with the German Confederation, made up of 38 independent states. A loose confederation, this construct had no common citizenship, legal system, or administrative or executive organs. It did, however, provide for a Federal Diet that met in Frankfurt--a Congress of deputies of the constituent states who would meet to discuss issues affecting the Confederation as a whole.

The Path to Unification: The Customs Union and the 1848 Revolutions
Prussia led a group of 18 German states that formed the German Customs Union in 1834, and the Prussian Thaler eventually became the common currency used in this region. The Customs Union greatly enhanced economic efficiency, and paved the way for Germany to become a single economic unit during the 19th century's period of rapid industrialization. Austria chose to remain outside the German Customs Union, preferring instead to form its own customs union with the Habsburg territories--a further step down the path toward a unified Germany that did not include Austria.

France's February Revolution of 1848, which overthrew King Louis Philippe, also sparked a series of popular uprisings throughout the German states. Panicked local leaders granted several political, social, and economic concessions to the demonstrators, including agreeing to a national assembly that would discuss the constitutional form of a united Germany, individual rights, and economic order. The assembly rapidly devolved into competing factions; meanwhile, the conservative leaders of the German states reconstituted their power. When the assembly finally determined that there should be a united, federal Germany (excluding Austria) with universal male suffrage, organized as a constitutional monarchy under an Emperor--and offered the title of Emperor to the King of Prussia--there was no longer any interest or political reason (least of all in absolutist, powerful Prussia) for the leaders to assent. The Prussian monarch rejected the assembly's offer, and the assembly was forcefully disbanded without achieving any of the stated goals of the 1848 revolutionaries. Nevertheless, the 1848 Revolutions did leave a lasting legacy. The factions of the ill-fated national assembly went on to develop into political parties. Certain economic and social reforms, such as the final abolition of feudal property structures, remained. The idea of German unity was firmly established. And the revolutionaries' colors--black, red, and gold--became firmly ensconced as the colors of German democratic and liberal aspirations.

Unification and Imperial Germany
German nationalism developed into an important unifying and sometimes liberalizing force during this time, though it became increasingly marked by an exclusionary, racially based definition of nationhood that included anti-Semitic tendencies. However, the eventual unification of Germany was essentially the result of Prussian expansionism rather than the victory of nationalist sentiment. Prussia's economic growth outstripped Austria's during the latter half of the 19th century, and Prussia-controlled Germany became one of Europe's industrial powerhouses.
Under Chancellor Otto von Bismarck, Prussia defeated Austria (1866) and France (1870) in wars that paved the way for the formation of the German Empire under Emperor Wilhelm I in 1871. Germany became a federal state, with foreign and military policy determined at the national level, but many other policies remained the purview of the states. Internally, Bismarck waged a struggle against Catholicism, which he viewed as an agent of Austria (ironically, these anti-Catholic efforts--which eventually failed--actually ended up consolidating a lasting political role for Germany's Catholics), and tried to both co-opt and repress the emerging socialist movement by passing the age's most progressive social insurance and worker protection legislation while clamping down on Socialist activities. Externally, Bismarck then moved to consolidate the stability of the new Empire, launching a string of diplomatic initiatives to form a complex web of alliances with other European powers, both to ensure that Germany did not become surrounded by hostile powers and to avoid Germany's involvement in further wars. However, Emperor Wilhelm II disagreed vehemently with Bismarck and dismissed him in 1890. Wilhelm II had global aspirations for Germany, including the acquisition of overseas colonies. His dynamic expansion of military power and confrontational foreign policies contributed to tensions on the continent. The fragile European balance of power, which Bismarck had helped to create, broke down in 1914. World War I and its aftermath, including the Treaty of Versailles, ended the German Empire.

The Weimar Republic and Fascism's Rise and Defeat
The postwar Weimar Republic (1919-33) was established as a broadly democratic state, but the government was severely handicapped and eventually doomed by economic problems and the rise of the political extremes. The dozens of political parties represented in the federal parliament never allowed stable government formation, creating political chaos. (This lesson led to the decision by the creators of the F.R.G. to limit parliamentary representation to parties that garner at least 5% of the vote, and to install other safeguards designed to enhance the stability of German governments.) The hyperinflation of 1923, the world depression that began in 1929, and the social unrest stemming from resentment toward the conditions of the Versailles Treaty worked to destroy the Weimar government.

The National Socialist (Nazi) Party, led by Adolf Hitler, stressed nationalist and racist themes while promising to put the unemployed back to work. The party blamed many of Germany's ills on the alleged influence of Jewish and non-German ethnic groups. The party also gained support in response to fears of growing communist strength. In the 1932 elections, the Nazis won a third of the vote. In a fragmented party structure, this gave the Nazis a powerful parliamentary caucus, which they used to undermine the Republic. Continued instability resulted in President Paul von Hindenburg offering the chancellorship to Hitler in January 1933. After President von Hindenburg died in 1934, Hitler assumed that office as well. Once in power, Hitler and his party first undermined and then abolished democratic institutions and opposition parties. The Nazi leadership immediately jailed many Jewish citizens and opposition figures and withdrew their political rights.
Hitler's Nuremberg Laws subsequently deprived all of Germany's Jews of their political rights and also of their economic assets and professional licenses, foreshadowing the systematic plundering of Jewish assets throughout Nazi-occupied territory. The Nazis implemented a program of genocide, at first through incarceration and forced labor and then by establishing death camps. In a catastrophe generally known as the Holocaust or Shoah, roughly six million European Jews from Germany and Nazi-occupied countries were murdered in these death camps and in the killing fields set up behind military lines on the Eastern Front. Nazi forces also carried out a campaign of ethnic extermination against Europe's Roma/Sinti and murdered thousands of Eastern Europeans, homosexuals, mentally disabled people, Freemasons, Jehovah's Witnesses, and opposition figures, among others.

Nazi revanchism and expansionism led to World War II, which resulted in the destruction of Germany's political and economic infrastructures and led to its division. After Germany's unconditional surrender on May 8, 1945, the United States, the United Kingdom, the U.S.S.R. and, later, France occupied the country and assumed responsibility for its administration. The commanders in chief exercised supreme authority in their respective zones and acted in concert on questions affecting the whole country. The United States, the United Kingdom, and the Soviet Union agreed at Potsdam in August 1945 to treat Germany as a single economic unit with some central administrative departments in a decentralized framework. However, Soviet policy turned increasingly toward dominating the part of Europe where Soviet armies were present, including eastern Germany. In 1948, in an attempt to abrogate agreements for Four-Power control of Berlin, the Soviets blockaded the city. Until May 1949, the Allied-occupied part of Berlin was kept supplied only by an Allied airlift. The "Berlin airlift" succeeded in forcing the Soviets to accept, for the time being, the Allied role and the continuation of freedom in a portion of the city, West Berlin.

Political Developments in West Germany
The United States and the United Kingdom moved to establish a nucleus for a future German government by creating a central Economic Council for their two zones. The program later provided for a constituent assembly, an occupation statute governing relations between the Allies and the German authorities, and the political and economic merger of the French with the British and American zones. The western portion of the country became the Federal Republic of Germany. On May 23, 1949, the Basic Law, which came to be known as the constitution of the Federal Republic of Germany, was promulgated. Konrad Adenauer became the first federal Chancellor on September 20, 1949. The next day, the occupation statute came into force, granting powers of self-government with certain exceptions.

As part of an ongoing commitment to deal with its historic responsibility, the Federal Republic of Germany took upon itself a leading role in the field of Holocaust education and support for research into this dark period of history. It has also paid out nearly 63 billion euros in compensation to Jewish survivors and heirs of the Holocaust and other victims of Nazism, such as forced laborers from many European countries. The F.R.G. quickly progressed toward fuller sovereignty and association with its European neighbors and the Atlantic community.
The London and Paris agreements of 1954 restored full sovereignty (with some exceptions) to the F.R.G. in May 1955 and opened the way for German membership in the North Atlantic Treaty Organization (NATO) and the Western European Union (WEU). The three Western Allies retained occupation powers in Berlin and certain responsibilities for Germany as a whole, including responsibility for the determination of Germany's eastern borders. Under the new arrangements, the Allies stationed troops within the F.R.G. for NATO defense, pursuant to stationing and status-of-forces agreements. With the exception of 45,000 French troops, Allied forces were under NATO's joint defense command. (France withdrew from NATO's military command structure in 1966.)

Political life in the F.R.G. was remarkably stable and orderly. After Adenauer's chancellorship (1949-63), Ludwig Erhard (1963-66) and Kurt Georg Kiesinger (1966-69) served as Chancellor. Between 1949 and 1966 the united caucus of the Christian Democratic Union (CDU) and Christian Social Union (CSU), either alone or with the smaller Free Democratic Party (FDP), formed the government. Kiesinger's 1966-69 "Grand Coalition" included the F.R.G.'s two largest parties, CDU/CSU and the Social Democratic Party (SPD). After the 1969 election, the SPD, headed by Willy Brandt, formed a coalition government with the FDP. Brandt resigned in May 1974, after a senior member of his staff was uncovered as an East German spy. Helmut Schmidt (SPD) succeeded Brandt, serving as Chancellor from 1974 to 1982. Hans-Dietrich Genscher, a leading FDP official, became Vice Chancellor and Foreign Minister, a position he would hold until 1992. In October 1982, the FDP joined forces with the CDU/CSU to make CDU Chairman Helmut Kohl the Chancellor. Following national elections in March 1983, Kohl emerged in firm control of both the government and the CDU. He served until the CDU's election defeat in 1998. In 1983, a new political party, the Greens, entered the Bundestag for the first time.

Political Developments in East Germany
In the Soviet zone, the Communist Party forced the Social Democratic Party to merge in 1946 to form the Socialist Unity Party (SED). Under Soviet direction, a constitution was drafted on May 30, 1949, and adopted on October 7 when the German Democratic Republic was proclaimed. On October 11, 1949, an SED government under Wilhelm Pieck was established. The Soviet Union and its East European allies immediately recognized the G.D.R. The United States and most other countries did not recognize the G.D.R. until a series of agreements in 1972-73. The G.D.R. established the structures of a single-party, centralized, communist state. On July 23, 1952, the G.D.R. abolished the traditional Laender and established 14 Bezirke (districts). Formally, there existed a "National Front"--an umbrella organization nominally consisting of the SED, four other political parties controlled and directed by the SED, and the four principal mass organizations (youth, trade unions, women, and culture). However, control was clearly and solely in the hands of the SED. Balloting in G.D.R. elections was not secret. On July 17, 1953, East Germans revolted against totalitarian rule. The F.R.G. marked the bloody revolt by making the date the West German National Day, which it remained until reunification. During the 1950s, East Germans fled to the West by the millions. The Soviets made the inner German border increasingly tight, but Berlin's Four-Power status countered such restrictions.
Berlin thus became an escape point for even greater numbers of East Germans. On August 13, 1961, the G.D.R. began building a wall through the center of Berlin, slowing down the flood of refugees and dividing the city. The Berlin Wall became the symbol of the East's political debility and the division of Europe. In 1969, Chancellor Brandt announced that the F.R.G. would remain firmly rooted in the Atlantic Alliance but would intensify efforts to improve relations with Eastern Europe and the G.D.R. The F.R.G. commenced this "Ostpolitik" by negotiating nonaggression treaties with the Soviet Union, Poland, Czechoslovakia, Bulgaria, and Hungary. Based upon Brandt's policies, in 1971 the Four Powers concluded a Quadripartite Agreement on Berlin to address practical questions the division posed, without prejudice to each party's view of the city's Four Power status. The F.R.G.'s relations with the G.D.R. posed particularly difficult questions. Though anxious to relieve serious hardships for divided families and to reduce friction, the F.R.G. under Brandt was intent on holding to its concept of "two German states in one German nation." Relations improved, however, and in September 1973, the F.R.G. and the G.D.R. were admitted to the United Nations. The two Germanys exchanged permanent representatives in 1974, and, in 1987, G.D.R. head of state Erich Honecker paid an official visit to the F.R.G. Shortly after World War II, Berlin became the seat of the Allied Control Council, which was to have governed Germany as a whole until the conclusion of a peace settlement. In 1948, however, the Soviets refused to participate any longer in the quadripartite administration of Germany. They also refused to continue the joint administration of Berlin and drove the government elected by the people of Berlin out of its seat in the Soviet sector and installed a communist regime in its place. From then until unification, the Western Allies continued to exercise supreme authority--effective only in their sectors--through the Allied Kommandatura. To the degree compatible with the city's special status, however, they turned over control and management of city affairs to the Berlin Senat (executive) and House of Representatives, governing bodies established by constitutional process and chosen by free elections. The Allies and German authorities in the F.R.G. and West Berlin never recognized the communist city regime in East Berlin or G.D.R. authority there. During the years of Berlin's isolation--176 kilometers (110 mi.) inside the former G.D.R.--the Western Allies encouraged a close relationship between the Government of West Berlin and that of the F.R.G. Representatives of the city participated as non-voting members in the F.R.G. parliament; appropriate West German agencies, such as the supreme administrative court, had their permanent seats in the city; and the governing mayor of Berlin took his turn as President of the Bundesrat. In addition, the Allies carefully consulted with the F.R.G. and Berlin Governments on foreign policy questions involving unification and the status of Berlin. Between 1948 and 1990, major events such as fairs and festivals took place in West Berlin, and the F.R.G. encouraged investment in commerce by special concessionary tax legislation. The results of such efforts, combined with effective city administration and the Berliners' energy and spirit, were encouraging. Berlin's morale remained high, and its industrial production considerably surpassed its prewar level. 
During the summer of 1989, rapid changes took place in the G.D.R. Pressures for political opening throughout Eastern Europe had not seemed to affect the G.D.R. regime. However, Hungary ended its border restrictions with Austria, and a growing flood of East Germans began to take advantage of this route to West Germany. Thousands of East Germans also tried to reach the West by staging sit-ins at F.R.G. diplomatic facilities in other East European capitals. The exodus generated demands within the G.D.R. for political change, and mass demonstrations in several cities--particularly in Leipzig--continued to grow. On October 7, Soviet leader Mikhail Gorbachev visited Berlin to celebrate the 40th anniversary of the establishment of the G.D.R. and urged the East German leadership to pursue reform. On October 18, Erich Honecker resigned and was replaced by Egon Krenz. The exodus continued unabated, and pressure for political reform mounted. Finally, on November 9, the G.D.R. allowed East Germans to travel freely. Thousands poured through the Berlin Wall into the western sectors of Berlin. The Wall was opened.

On November 28, F.R.G. Chancellor Kohl outlined a 10-point plan for the peaceful unification of the two Germanys. In December, the G.D.R. Volkskammer eliminated the SED's monopoly on power. The SED changed its name to the Party of Democratic Socialism (PDS), and numerous political groups and parties formed. The communist system had been eliminated. A new Prime Minister, Hans Modrow, headed a caretaker government that shared power with the new, democratically oriented parties. In early February 1990, Chancellor Kohl rejected the Modrow government's proposal for a unified, neutral Germany. Kohl affirmed that a unified Germany must be a member of NATO. Finally, on March 18, the first free elections were held in the G.D.R., and Lothar de Maiziere (CDU) formed a government under a policy of expeditious unification with the F.R.G. The freely elected representatives of the Volkskammer held their first session on April 5, and the G.D.R. peacefully evolved from a communist to a democratically elected government.

Four Power Control Ends
In 1990, as a necessary step for German unification and in parallel with internal German developments, the two German states and the Four Powers--the United States, U.K., France, and the Soviet Union--negotiated to end Four Power reserved rights for Berlin and Germany as a whole. These "Two-plus-Four" negotiations were mandated at the Ottawa Open Skies conference on February 13, 1990. The six foreign ministers met four times in the ensuing months in Bonn (May 5), Berlin (June 22), Paris (July 17), and Moscow (September 12). The Polish Foreign Minister participated in the part of the Paris meeting that dealt with the Polish-German borders. Of key importance was overcoming Soviet objections to a united Germany's membership in NATO. The Alliance was already responding to the changing circumstances and issued the London Declaration on a transformed NATO. On July 16, after a bilateral meeting, Gorbachev and Kohl announced an agreement in principle to permit a united Germany in NATO. This cleared the way for the signing of the "Treaty on the Final Settlement With Respect to Germany" in Moscow on September 12. In addition to terminating Four Power rights, the treaty mandated the withdrawal of all Soviet forces from Germany by the end of 1994.
This made it clear that the current borders were final and definitive, and specified the right of a united Germany to belong to NATO. It also provided for the continued presence of British, French, and American troops in Berlin during the interim period of the Soviet withdrawal. In the treaty, the Germans renounced nuclear, biological, and chemical weapons and stated their intention to reduce German armed forces to 370,000 within 3 to 4 years after the Conventional Armed Forces in Europe (CFE) Treaty, signed in Paris on November 19, 1990, entered into force. German unification could then proceed. In accordance with Article 23 of the F.R.G.'s Basic Law, the five Laender (which had been reestablished in the G.D.R.) acceded to the F.R.G. on October 3, 1990. The F.R.G. proclaimed October 3 as its new national day. On December 2, 1990, all-German elections were held for the first time since 1933. The Final Settlement Treaty ended Berlin's special status as a separate area under Four Power control. Under the terms of the treaty between the F.R.G. and the G.D.R., Berlin became the capital of a unified Germany. The Bundestag voted in June 1991 to make Berlin the seat of government. The Government of Germany asked the Allies to maintain a military presence in Berlin until the complete withdrawal of the Western Group of Forces (ex-Soviet) from the territory of the former G.D.R. The Russian withdrawal was completed August 31, 1994. On September 8, 1994, ceremonies marked the final departure of Western Allied troops from Berlin. In 1999, the formal seat of the federal government moved from Bonn to Berlin. Berlin also is one of the Federal Republic's 16 Laender. Sources:CIA World Factbook (March 2012) U.S. Dept. of State Country Background Notes ( March 2012)
http://globaledge.msu.edu/countries/germany/history
The Jeanes Supervisors In the years after the Civil War, African-Americans saw education as their ticket out of poverty and into the American dream. In 1865 delegates to a black church-sponsored convention in South Carolina urged that the state establish public schools throughout the state. The 1868 Constitution, written at the outset of Reconstruction, called for free public schools open to both blacks and whites. But the dream was not realized for many years. Although the state created the first public schools in the early 1700s, most of the few that existed through much of that century were in the Lowcountry and provided only a rudimentary education to poor children and a limited number of paying pupils. Churches tried to fill in the gap with schools that provided limited education for the poor. The wealthy attended private schools or were tutored at home. Most significant, education was available only to whites through most of the antebellum period, with the exception of a limited number of schools provided by and for the small population of free blacks. Under an 1834 law, teaching enslaved people to read and write was illegal. As a result, at the end of the Civil War the newly freed African-Americans lacked even the minimal education of white South Carolinians. Similar conditions existed throughout the South. The dream of an educational system for all South Carolinians died with the end of Reconstruction and the shift in power back to southern whites who did not value education for the masses. Within a few years, they drastically cut state spending for education and nearly eliminated state supervision of local schools. While white schools received little funding, black schools received even less. Although philanthropic organizations provided some help to the black schools, it was not enough to equalize black and white education spending in the state. In 1880 the average state and local spending for white schools was $2.75 per pupil and the average for black schools was $2.51 per pupil. Over the years, as funding of white schools slowly increased, the money provided for black schools decreased. By 1895, the year when a newly adopted constitution formally instituted separate schools for the races, white schools were receiving an average of $3.11 per pupil. Funding for black schools had dropped to $1.05 per pupil. In the early 1900s the state began to gradually improve the quality of education for whites, but made little effort to improve the black schools. The idea that "to educate a negro is to spoil a laborer" (Norton 1, 1984) prevailed among the white leadership. Under the gun as the courts began to look more closely at the criterion of "separate but equal," the state began to spend money on the black schools in the early 1950s. They even passed a state sales tax to fund these expenditures, hoping to maintain segregated schools. Not until the 1960s did the two separate school systems become one. Although states throughout the South made few efforts to help African-Americans until the middle of the twentieth century, some private efforts were made. The Peabody Fund, set up in 1867, was the first. It provided a model for other philanthropic efforts, including the Slater, Jeanes, Randolph and Rosenwald Funds. The Jeanes fund was certainly one of the most significant. The Jeanes Supervisors provided educational assistance to black schools and black students all over the South. 
They were also active in other parts of the nation and even beyond national borders, because of the success of the Jeanes model. An early experiment with Jeanes supervision in Liberia in the late 1920s had to be terminated due to a yellow fever epidemic, but the Jeanes program was later used as a model overseas, with teachers who supervised schools in Asia, Africa, the Virgin Islands, and Latin America. Anna T. Jeanes, a Quaker from Philadelphia, was one of ten children in a wealthy family. She was a well-to-do single woman in the 1800s who was interested in the causes of her day. None of her brothers and sisters left heirs. So in time, she inherited a great deal of money. Around the turn of the century, she began to donate her fortune to charity, and in 1907, shortly before she died, she gave one million dollars to a fund of income-bearing securities, to provide education to black students in rural areas of the South. Historian Donald Stone, interviewed for a documentary on the Jeanes Supervisors, recounted a story told by William J. Edwards, the author of Twenty-Five Years in the Black Belt, that may explain how a sheltered Quaker lady who never traveled very far from home decided to set up a foundation to benefit rural black children. Edwards stated that he was on a fund-raising trip in the North at the behest of Booker T. Washington of Tuskegee Institute. Washington, of course, was the African-American leader who believed that the way for blacks to succeed was through vocational education and a willingness to focus their efforts on improving their economic rather than their political status. At a meeting in Philadelphia, a trustee of Tuskegee arranged the introduction of Edwards to Anna Jeanes. Miss Jeanes had become interested in the problems of small schools struggling to survive without the help of philanthropic organizations or state monies. As a result of that meeting, Miss Jeanes told Washington and a colleague from Hampton Institute to put together a board of trustees and to spend her money in the rural areas where most African-Americans lived. She wanted to provide supervisors for rural schools. They would serve as consultants and assistants to the teachers, most of whom had little training. Many of the Jeanes Supervisors themselves were sent to the traditionally black colleges such as Tuskegee and Hampton Institutes for in-service training for their jobs. The foundation set up at the behest of Anna Jeanes became known as the Negro Rural School Fund and lasted until 1936. Many prominent white men served on its board, including, at various times, six men who had served as U.S. presidents. Although several other educational foundations existed during this period, the Jeanes fund was the only one with African-Americans sitting on the board and wielding some power over how to spend the money. Miss Jeanes had insisted that Booker T. Washington sit on the board, and that he have the authority to pick other members. He selected other African-Americans to serve, selecting men who favored industrial education for blacks, rather than educating blacks for the professions, which might challenge the social and economic status quo. The monies were invested in secure government bonds, and when interest rates fell in the 1930s, rural schools had to depend on other foundations and federal aid to supplement. With these additional funds, the Fund was able to increase the low salaries of the Jeanes supervisors and employ them for a longer period of months each year. 
The fund merged with several other educational funds in the late 1930s, including the Virginia Randolph Fund, which had been created with monies raised by the Jeanes Supervisors. The result of this merger was the Southern Education Foundation. The early 1900s was an era where many whites saw little need for black schooling, and what schooling was available took place under woeful conditions. Black children who attended school went to classes in buildings that one former teacher described as "shacks," class size in black schools was double that in the white schools, and black teachers, who were paid far less than the white teachers, usually had little education. Most children received no more than a few years of schooling, at best, and the school year was much shorter for blacks than for whites. Nevertheless, the Jeanes Foundation moved cautiously, aware that creation of their program could be perceived as a threat to the status quo. They hired white men to approach southern public officials to ask for black teachers for the black schools and to gain official approval to even enroll the black children in school. Jeanes Supervisors were primarily black women. Why were so few men Jeanes teachers? One retired Jeanes Supervisor told educator Juanie Noland that it all came down to power. There was no way, she claimed, that the white communities would have allowed black men to have so much power. Jeanes Supervisors were seen as leaders in their communities, as people to whom one could turn for help. The women were supervised, in turn, by the state agents for the black schools, who were white men. These men seem to have been sympathetic to the needs of the people whose schools they oversaw, and to have earned the respect of both blacks and whites. State agents were employed through the state departments of education to administer funds provided by philanthropic organizations. They recruited and placed Jeanes teachers, and worked hard to get state and local funds for the program. They acted as liaisons to the Southern Education Foundation after it was created in 1937, and attended their conferences. On the other hand, the path traveled was not so smooth for many of the Jeanes teachers in their day to day relations with their local superintendents. A teacher interviewed for a documentary about the Jeanes Supervisors paints a picture of white superintendents who did not always respect the black women they supervised. Jeanes Supervisors faced many of the same problems that working women have always faced and that black women often faced in their relationships with white men. The program was modeled on the work of Virginia Randolph, an African-American educator in Virginia. Teaching in a small rural school, she began by beautifying and cleaning the building and grounds, and getting the families involved in fund-raising for the schools as well as beautifying their own homes. She tried to demonstrate the virtues of cleanliness and good sanitation by example. Visiting the home of one pupil, she found a family member, sick with tuberculosis, lying in squalor. Later that week, she brought flour bags to school, and the children were thrilled that they were going to make beds and clothes for their dolls. If they kept their work clean, they were given a piece of cotton flannel to make a blanket. Then they made straw mattresses, and beds from boxes, and finally, rag dolls. Later they showed all of this to parents. 
With interest aroused, Randolph was able to suggest that the families surprise the sick woman with a gift of sheets, made from sugar bags, and fruit. It was a beginning. As time went on, Randolph focused on providing a vocational education as well as academics, believing that a well-rounded child would be best prepared for the world of work. Money was short, so she put the children to work to plan entertainments as fund-raisers. She faced organized opposition from parents over these activities. Many wanted her to focus solely on academics, and saw any labor as a step backwards for their children. One Sunday evening, she listened in silence as a minister insulted her from his pulpit. She was finally forced to respond and defend herself. Randolph won that battle. Over the years, she focused on vocational as well as academic education, teaching the girls to sew and the boys to make baskets. She spent her own money for supplies, and knocked on the doors of the white schools to get scrap material for her projects. Virginia Randolph was "discovered" by Jackson Davis, the superintendent of schools in Henrico County, Virginia, who wanted to improve schools for all children, white and black. Davis had visited Hampton Institute and became interested in the work of their home economics and vocational agriculture extension teachers, whose salaries were paid for by the philanthropist Anna Jeanes. He came across Miss Randolph while making his rounds in Henrico County, and was greatly impressed with her abilities. After the Jeanes Fund was established, he wrote to Dr. James Dillard, first president of the Jeanes Board of Trustees, requesting funds for an experiment he had in mind. In his 1908 letter, Davis said: "I am anxious to make industrial training an essential part of the work in the Negro schools in Henrico County...Many of the schools have organized Improvement Leagues in their communities and have made the school buildings and grounds more attractive...They have also made a beginning with various kinds of hand-work, such as sewing, making baskets of white-oak, mats of corn shucks, fishing nets...using materials already at hand..." (Jones 41). Davis proposed having a teacher who would work with all the schools within a county. The Jeanes Foundation agreed to pay the salary of one industrial teacher. Davis then persuaded Randolph to leave her one-room schoolhouse, despite her reluctance to leave her children. She became the first Jeanes Supervisor, eventually working in Virginia, North Carolina and Georgia. The Virginia Randolph Museum, a historic landmark in Henrico County, Virginia, was established to honor her memory. (For more information about Virginia Randolph, contact the museum at (804) 262-3363, 270-1435, or 737-3593.) Other Jeanes supervisors branched out all over the South working under what became known as the Henrico Plan. By 1914, there were 118 Jeanes teachers in 119 counties in the South. The Jeanes Supervisors had their work cut out for them. They often had to teach in one-room schoolhouses or schools held in churches. The school year lasted only about seven months, one former Supervisor remembers, because children had to help harvest the crops in the fall. Black schools had few books and those they did have were usually second-hand. Jeanes teachers did more than just teach, or supervise vocational education programs. 
They were heavily involved in the communities in which they lived and worked, working with churches and other community groups, helping to improve sanitation and make the communities better places to live. One retired Jeanes Supervisor recalled having children write and produce health-related skits, and encouraging the community to give blood through the American Red Cross. Another remembers distributing surplus food so often that the children called her "The Apple Lady." In a day with no school breakfast or lunch programs, Jeanes teachers even cooked meals for children in schools that had kitchen facilities. A former Jeanes Supervisor said: "...they did whatever needed to be done, to the extent that they were able to do it." Initially, much of the focus was on vocational education and on improving school facilities. Later they began to supervise academic classes and to move into curriculum development, especially after the federal government began to play a role in providing industrial education to schools under a 1917 law. They helped to raise money for school programs, including field days and commencements, and encouraged parents, teachers and students to work together. In a sense, one historian points out, the Jeanes Supervisors were ahead of their time, encouraging parents to take an interest in and become involved in their children's schools. In an interview for a documentary on the Jeanes Supervisors, Educator Juanie Noland commented that they did far more than improve the curriculum of the schools. "The major thing they did was to raise money...thousands and thousands of dollars... They were masters at this and they were very successful." The Jeanes Supervisors had to wear many hats. Ms. Noland points out that although their roles varied depending on the school system where they worked, they often became de-facto superintendents of schools. Their responsibilities might include getting books for black schools, setting up meetings for black teachers, and perhaps most important of all, serving as diplomats. They had to tread an often delicate path between black teachers, white superintendents, parents, and the predominantly white community, acting as a "liaison" among all these groups. Jeanes Supervisors sometimes exhibited extraordinary courage. Eldridge McMillan, president of the Southern Education Foundation, recounts how a Jeanes Supervisor even helped to persuade the African-American people of her community to register to vote, no small thing in 1944. In the early 1950s, the NAACP was gathering data through the Ashmore Project to demonstrate to the courts that separate was not equal. The Southern Education Foundation, although somewhat cautious about supporting what it perceived to be a radical organization, lent them Jeanes teachers, the experts on education. The Jeanes program grew and changed over the years. By 1952, there were 510 Jeanes teachers in sixteen southern states. Professional opportunities had increased as well. Many of the Jeanes teachers now had more opportunities to attend national meetings and to earn graduate degrees through such programs as the Tuskegee-Grambling joint graduate program. What was the impact of the Jeanes program in South Carolina? At the time when the Jeanes Fund was established, educational opportunities for African-Americans in the state were extremely limited. Most black teachers had little formal education. 
In the early years of the twentieth century, most of the black children who attended school anywhere in the South were enrolled in the first through fourth grade and only attended school for a few months a year. No one program could overcome such odds in the short term. But the Jeanes program could help. South Carolina's first Jeanes Industrial Teachers faced many challenges. Julia A. Berry began working in Sumter County in 1909, but little information about her is available. Ten other individuals also began working as Jeanes Teachers in South Carolina that year. They helped to improve life in their communities and focused on industrial education in the schools, as did the supervisors throughout the South. The program expanded slowly in the state, in part because South Carolina did not have a state official with specific responsibility for black education at this time to act as an advocate. State Directors of Negro Education were not appointed until 1918. A second factor cited is the way the program was funded. In the beginning, the Jeanes Foundation had to bear the entire cost, with the hope that states and their counties would begin to help out. For the program to spread, states and counties would have to take on some responsibility, but the idea of black supervisors for black teachers was not especially popular in South Carolina. However, school officials and boards of education generally responded positively once they had seen what the Jeanes teachers could do. In a study of Jeanes Teachers in South Carolina, a scholar points out that in all those counties where Jeanes teachers worked, the county paid part of their salary (Woodfaulk 80-81). Jeanes Teachers in South Carolina also became involved in the development of Homemakers' Clubs starting in 1918. With state funding, South Carolina's fifteen Jeanes Teachers helped to organize 292 clubs with 4,644 members. Working in their counties that summer, they taught African-Americans how to can and preserve fruits and vegetables and taught women how to do the kind of baking necessitated by the limits imposed on a nation at war. The Supervisors gave nearly 300 canning demonstrations and made over 2,000 home visits, helping people learn to become more self-sufficient. Most white and many black educators in the early and mid-twentieth century favored industrial education for African-Americans, but they faced many obstacles in implementing such programs. South Carolina, like other southern states, was unwilling to provide the equipment and supplies needed, buildings were inadequate, and the school year was too short. Despite these handicaps, the Jeanes Teachers worked hard to see that children were able to learn as many skills as possible. They visited homes, urged children to attend school, and even started libraries in the schools. They worked to improve health and sanitation in the community, fought for better school facilities and for more supplies, and developed programs to train teachers. Reports of the State Superintendent of Education note that the Jeanes Teachers made a difference (Woodfaulk 86-87). About 180 individuals served as Jeanes Supervisors in South Carolina during the sixty years of the program. Reports from the National Jeanes Journal provide a glimpse of the accomplishments of South Carolina's Jeanes Supervisors. Jeanes Supervisors from each county reported news that was published in this journal each year. A sampling of the activities included in the 1950 and 1951 editions of this journal is indicative. 
A Jeanes supervisor from Marlboro County reported on her twenty-six years of work. Teachers were now better educated and quality of instruction had improved, new buildings had replaced the old, race relations had improved, and the relationship between the home and school had improved. She had helped to organize PTAs and clubs, and worked with health officials to improve health in her community. A part-time Jeanes Supervisor from Pender County related that she had organized a countywide planning committee of principals and teachers which met twice a month to plan school visitations, opportunities for teachers to attend professional meetings, organize local PTAs, etc. A Jeanes Supervisor from Chesterfield County reported that the number of teachers had more than tripled during her tenure, and where previously the teachers had no college education or preparation, all now had at least one year of college training and received some in-service training. The school term had increased from three and one-half months to eight or nine months, and there were now two accredited high schools. A supervisor in Marion County reported that when she had arrived fourteen years earlier, her first task was to help build two schools. At the time, there were twenty-six schools, but most were not well-maintained and only four had electric lights. Now only one school lacked electric lights, and a second school became an accredited high school. Schools continued to be painted and repaired, with rooms added when needed. Jeanes Supervisors in South Carolina, as in other states, began with the basics at a time when the state provided little, if anything, to educate African-American children. Interviews conducted with several retired Jeanes Teachers in the 1980s are also enlightening. Lola Carter Myers, a native of Camden, and 1932 graduate of Benedict College, recalled that she had begun her teaching career in Manning. After teaching for two years, she was selected to attend a six week workshop to train Jeanes teachers how to work with people in rural communities. The Jeanes teacher of the 1930s had to be unmarried, have a bachelor's degree and own a car. Because Mrs. Myers did not have a car, she was not able to enter the program at that time. However, she became a Jeanes teacher in 1944. The course taught the prospective supervisors that they must be patient and that they must be able to get along with the trustees who did the hiring. Mrs. Myers remembered working with principals in Barnwell County who had not even completed high school. One of her responsibilities was to help with the school registers, which contained the information about students' attendance and their progress. One principal she worked with did a good job but his writing was almost illegible. She had to prepare the register and do his monthly reports. Many of the teachers were also unable to prepare adequate reports, and she would invite them individually to her house and help them correct their reports. She also worked with the teachers to help them prepare lessons in academic subjects as well as handicrafts. Like other Jeanes teachers, she was active in the community and played the piano for local churches. This also gave her the chance to interact with many of the families and talk about the schools. Bessie Picket Haile, another Camden native, graduated from Shaw University in 1932 and began her teaching career that year at the junior college in Seneca. 
Next she taught seventh grade for two years in Winnsboro and then taught at a local high school. Invited to become a Jeanes Teacher, she attended a summer training session and was assigned to Cherokee County. When she went to Gaffney in late August, she found that the schools would not be opening until October. Her first task was to persuade the school trustees to open school earlier. By February schools would close again so students could prepare to work on the next season's crops. She encountered much opposition, especially from the white trustees. In October she began to meet with teachers. She found that many had little academic preparation for their jobs. She had to teach them how to do lesson plans, an especially difficult task in one-teacher schools with many grades. Mrs. Haile made many friends, but left the program after just one year and went back to teaching. Lela Sessions, a Berkeley County native, attended high school at Avery Institute in Charleston because her home county had no accredited high school in the 1930s. She had no money for college, but won a church-sponsored oratorical contest and with it a scholarship to Allen University. After she graduated in 1944, she held a series of teaching positions at elementary and high schools in South Carolina. She returned home to teach at a training school in Berkeley County. When a Jeanes teacher position was created in Berkeley County, she was asked to apply because they wanted to hire a local person. She worked with thirty-seven schools in the county, all one- and two-teacher schools. She often had to drive on unpaved roads to reach the rural schools. The buildings were in poor shape, but in those areas with well-prepared teachers, the students had good academic skills. Mrs. Sessions had to help the teachers obtain all kinds of materials to use in their classes, including newspapers and magazines. Although very little money was available to buy supplies, active PTAs in the county helped a little. To help her students, Sessions would sometimes drive to Charleston and fly to Atlanta for the day to visit the offices of the Southern Education Foundation. She worked hard to obtain funds for the schools. Mrs. Sessions also became active in the state Jeanes Association and the National Association of Supervisors (Woodfaulk 101-127). The Jeanes programs remained in place until the 1960s, when school desegregation became a reality. As black schools closed and black teachers and administrators were absorbed into the integrated system, what to do with the Jeanes teachers became an issue. Having black Jeanes teachers supervise white teachers would have been awkward at the time and would not have been acceptable, one scholar pointed out. In addition, federal monies were becoming available to fund new programs, and some saw the Jeanes Supervisors as redundant. This spelled the end of an era. The Southern Education Foundation, however, has continued to work with schools to improve education. How can one sum up the contributions of the Jeanes Supervisors, these educational pioneers? One scholar likes to refer to them as "precursors of the Peace Corps," women who didn't make much money, but did anything that they could to help. Another sees them as early resource people, similar to today's resource teachers who try to make sure that children have what they need to learn. Another comments that these women provided African-American children with a sense of pride by teaching them black history at a time when it was not found in any textbooks. 
"We took straw and we made bricks and we built houses," says one retired supervisor. Perhaps, though, as stated by Eldridge McMillan, their slogan sums it up best: the Jeanes Supervisors always did the "next needed thing." Carol Sears Botsch, Political Science, USC Aiken, [email protected] Note: Records of the Jeanes Supervisors can be found in the archives of the Southern Education Foundation in the Atlanta University Center Library on the Clark-Atlanta University campus and in the Anna T. Jeanes Foundation Papers at the Rockefeller Archive Center in Pocantico Hills, N.Y. Clarke, Vernon F. and James C. Isaac. The Jeanes Supervisors: Striving to Educate (Breaking New Ground Productions, 1994) video funded by Georgia Humanities Council. Jones, Lance G. E. The Jeanes Teacher in the United States 1908-1933. Chapel Hill: University of North Carolina Press, 1937. NASC Interim History Writing Committee. The Jeanes Story: A Chapter in the History of American Education 1908-1968. Atlanta: Southern Education Foundation, 1979. Norton, John. "A History Worth Retelling" in "A Special Report: Our Schools" the State (January 15, 1984), 3. Norton, John. "Free Schools: an impossible dream?" in "A Special Report: Our Schools" the State (January 15, 1984), 5. Page, Levona. "Blacks abided unrequited romance with education" in "A Special Report: Our Schools" the State (January 15, 1984), 13. Page, Levona. "Blacks 'were supposed to have inferior schools' " in "A Special Report: Our Schools" the State (January 15, 1984), 14. Smith, Alice Brown. Forgotten Foundations: The Role of Jeanes Teachers in Black Education. New York: Vantage Press, 1997. Witty, Elaine P. "Virginia Randolph" in Jessie Carney Smith (editor), Notable Black American Women. Detroit: Gale Research Inc., 1992, pp. 918-921. Woodfaulk, Courtney Sanabria. The Jeanes Teachers of South Carolina: The Emergence, Existence, and Significance of Their Work. Columbia, S.C.: University of South Carolina, 1992, unpublished doctoral dissertation. last updated 8/9/98 BACK TO SUBJECT INDEX The University of South
http://www.usca.edu/aasc/jeanes.htm
Causes of the decline: the gold standard. Some economists believe that the Federal Reserve allowed or caused the huge declines in the American money supply partly to preserve the gold standard. Under the gold standard, each country set the value of its currency in terms of gold and took monetary actions to defend the fixed price. It is possible that, had the Federal Reserve expanded the money supply greatly in response to the banking panics, foreigners could have lost confidence in the United States' commitment to the gold standard. This could have led to large gold outflows, and the United States could have been forced to devalue. Likewise, had the Federal Reserve not tightened in the fall of 1931, it is possible that there would have been a speculative attack on the dollar and the United States would have been forced to abandon the gold standard along with Great Britain. While there is debate about the role the gold standard played in limiting U.S. monetary policy, there is no question that it was a key factor in the transmission of America's economic decline to the rest of the world. Under the gold standard, imbalances in trade or asset flows gave rise to international gold flows. For example, in the mid-1920s intense international demand for American assets such as stocks and bonds brought large inflows of gold to the United States. Likewise, a decision by France after World War I to return to the gold standard with an undervalued franc led to trade surpluses and substantial gold inflows. (See also balance of trade.) Britain chose to return to the gold standard after World War I at the prewar parity. Wartime inflation, however, implied that the pound was overvalued, and this overvaluation led to trade deficits and substantial gold outflows after 1925. To stem the gold outflow, the Bank of England raised interest rates substantially. High interest rates depressed British spending and led to high unemployment in Great Britain throughout the second half of the 1920s. Once the U.S. economy began to contract severely, the tendency for gold to flow out of other countries and toward the United States intensified. This took place because deflation in the United States made American goods particularly desirable to foreigners, while low income reduced American demand for foreign products. To counteract the resulting tendency toward an American trade surplus and foreign gold outflows, central banks throughout the world raised interest rates. Maintaining the international gold standard, in essence, required a massive monetary contraction throughout the world to match the one occurring in the United States. The result was a decline in output and prices in countries throughout the world that also nearly matched the downturn in the United States. Financial crises and banking panics occurred in a number of countries besides the United States. In May 1931 payment difficulties at the Creditanstalt, Austria's largest bank, set off a string of financial crises that enveloped much of Europe and were a key factor forcing Britain to abandon the gold standard. Among the countries hardest hit by bank failures and volatile financial markets were Austria, Germany, and Hungary. These widespread banking crises could have been the result of poor regulation and other local factors, or simple contagion from one country to another. In addition, the gold standard, by forcing countries to deflate along with the United States, reduced the value of banks' collateral and made them more vulnerable to runs. 
As in the United States, banking panics and other financial market disruptions further depressed output and prices in a number of countries.
http://britannica.com/blackhistory/article-234444
A diode is an electrical device allowing current to move through it in one direction with far greater ease than in the other. The most common kind of diode in modern circuit design is the semiconductor diode, although other diode technologies exist. Semiconductor diodes are symbolized in schematic diagrams such as Figure below. The term “diode” is customarily reserved for small signal devices, I ≤ 1 A. The term rectifier is used for power devices, I > 1 A. Semiconductor diode schematic symbol: Arrows indicate the direction of electron current flow. When placed in a simple battery-lamp circuit, the diode will either allow or prevent current through the lamp, depending on the polarity of the applied voltage. (Figure below) Diode operation: (a) Current flow is permitted; the diode is forward-biased. (b) Current flow is prohibited; the diode is reverse-biased. When the polarity of the battery is such that electrons are allowed to flow through the diode, the diode is said to be forward-biased. Conversely, when the battery is “backward” and the diode blocks current, the diode is said to be reverse-biased. A diode may be thought of as a kind of switch: “closed” when forward-biased and “open” when reverse-biased. Oddly enough, the direction of the diode symbol's “arrowhead” points against the direction of electron flow. This is because the diode symbol was invented by engineers, who predominantly use conventional flow notation in their schematics, showing current as a flow of charge from the positive (+) side of the voltage source to the negative (-). This convention holds true for all semiconductor symbols possessing “arrowheads:” the arrow points in the permitted direction of conventional flow, and against the permitted direction of electron flow. Diode behavior is analogous to the behavior of a hydraulic device called a check valve. A check valve allows fluid flow through it in only one direction as in Figure below. Hydraulic check valve analogy: (a) Electron current flow permitted. (b) Current flow prohibited. Check valves are essentially pressure-operated devices: they open and allow flow if the pressure across them is of the correct “polarity” to open the gate (in the analogy shown, greater fluid pressure on the right than on the left). If the pressure is of the opposite “polarity,” the pressure difference across the check valve will close and hold the gate so that no flow occurs. Like check valves, diodes are essentially “pressure-operated” (voltage-operated) devices. The essential difference between forward-bias and reverse-bias is the polarity of the voltage dropped across the diode. Let's take a closer look at the simple battery-diode-lamp circuit shown earlier, this time investigating voltage drops across the various components in Figure below. Diode circuit voltage measurements: (a) Forward biased. (b) Reverse biased. A forward-biased diode conducts current and drops a small voltage across it, leaving most of the battery voltage dropped across the lamp. If the battery's polarity is reversed, the diode becomes reverse-biased, and drops all of the battery's voltage, leaving none for the lamp. If we consider the diode to be a self-actuating switch (closed in the forward-bias mode and open in the reverse-bias mode), this behavior makes sense. The most substantial difference is that the diode drops a lot more voltage when conducting than the average mechanical switch (0.7 volts versus tens of millivolts). 
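To make these voltage figures concrete, the following is a minimal SPICE sketch of the battery-diode-lamp loop, with the lamp modeled simply as a resistor; the 6 V battery, 100 Ω lamp resistance, and node numbers are assumed for illustration only. Computing the DC operating point shows roughly 0.7 volts across the diode and the remaining battery voltage across the lamp.
* Forward-biased diode in series with a lamp (modeled as a resistor).
* All values and node numbers are assumed for illustration.
V1 1 0 DC 6
D1 1 2 diode
Rlamp 2 0 100
.model diode d
.op
.end
Reversing the battery (for example, changing V1 to DC -6) reverse-biases the diode; the operating point then shows essentially the full battery voltage across the diode and almost none across the lamp.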
This forward-bias voltage drop exhibited by the diode is due to the action of the depletion region formed by the P-N junction under the influence of an applied voltage. If no voltage is applied across a semiconductor diode, a thin depletion region exists around the region of the P-N junction, preventing current flow. (Figure below (a)) The depletion region is almost devoid of available charge carriers, and acts as an insulator: Diode representations: PN-junction model, schematic symbol, physical part. The schematic symbol of the diode is shown in Figure above (b) such that the anode (pointing end) corresponds to the P-type semiconductor at (a). The cathode bar, non-pointing end, at (b) corresponds to the N-type material at (a). Also note that the cathode stripe on the physical part (c) corresponds to the cathode on the symbol. If a reverse-biasing voltage is applied across the P-N junction, this depletion region expands, further resisting any current through it. (Figure below) Depletion region expands with reverse bias. Conversely, if a forward-biasing voltage is applied across the P-N junction, the depletion region collapses, becoming thinner. The diode becomes less resistive to current through it. In order for a sustained current to go through the diode, though, the depletion region must be fully collapsed by the applied voltage. This takes a certain minimum voltage to accomplish, called the forward voltage, as illustrated in Figure below. Increasing forward bias from (a) to (b) decreases depletion region thickness. For silicon diodes, the typical forward voltage is 0.7 volts, nominal. For germanium diodes, the forward voltage is only 0.3 volts. The chemical constituency of the P-N junction comprising the diode accounts for its nominal forward voltage figure, which is why silicon and germanium diodes have such different forward voltages. Forward voltage drop remains approximately constant for a wide range of diode currents, meaning that diode voltage drop is not like that of a resistor or even a normal (closed) switch. For most simplified circuit analysis, the voltage drop across a conducting diode may be considered constant at the nominal figure and not related to the amount of current. Actually, forward voltage drop is more complex. An equation describes the exact current through a diode, given the voltage dropped across the junction, the temperature of the junction, and several physical constants. It is commonly known as the diode equation (reproduced below, together with its simplified form). The term kT/q describes the voltage produced within the P-N junction due to the action of temperature, and is called the thermal voltage, or Vt of the junction. At room temperature, this is about 26 millivolts. Knowing this, and assuming a “nonideality” coefficient of 1, we may simplify the diode equation and re-write it as shown below. You need not be familiar with the “diode equation” to analyze simple diode circuits. Just understand that the voltage dropped across a current-conducting diode does change with the amount of current going through it, but that this change is fairly small over a wide range of currents. This is why many textbooks simply say the voltage drop across a conducting semiconductor diode remains constant at 0.7 volts for silicon and 0.3 volts for germanium. However, some circuits intentionally make use of the P-N junction's inherent exponential current/voltage relationship and thus can only be understood in the context of this equation. 
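The two equations referred to above appear as figures in the original presentation and are not reproduced here. In standard form, using the quantities already defined (saturation current I_S, nonideality coefficient N, and thermal voltage kT/q), the relationship being described is the Shockley diode equation:
I_D = I_S \left( e^{q V_D / (N k T)} - 1 \right)
With N = 1 and kT/q ≈ 26 mV at room temperature, it simplifies to approximately:
I_D \approx I_S \left( e^{V_D / 0.026} - 1 \right)
Here I_D is the current through the diode, V_D is the voltage dropped across the junction (in volts), and I_S is the very small saturation current.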
Also, since temperature is a factor in the diode equation, a forward-biased P-N junction may also be used as a temperature-sensing device, and thus can only be understood if one has a conceptual grasp on this mathematical relationship. A reverse-biased diode prevents current from going through it, due to the expanded depletion region. In actuality, a very small amount of current can and does go through a reverse-biased diode, called the leakage current, but it can be ignored for most purposes. The ability of a diode to withstand reverse-bias voltages is limited, as it is for any insulator. If the applied reverse-bias voltage becomes too great, the diode will experience a condition known as breakdown (Figure below), which is usually destructive. A diode's maximum reverse-bias voltage rating is known as the Peak Inverse Voltage, or PIV, and may be obtained from the manufacturer. Like forward voltage, the PIV rating of a diode varies with temperature, except that PIV increases with increased temperature and decreases as the diode becomes cooler -- exactly opposite that of forward voltage. Diode curve: showing knee at 0.7 V forward bias for Si, and reverse breakdown. Typically, the PIV rating of a generic “rectifier” diode is at least 50 volts at room temperature. Diodes with PIV ratings in the many thousands of volts are available for modest prices. Being able to determine the polarity (cathode versus anode) and basic functionality of a diode is a very important skill for the electronics hobbyist or technician to have. Since we know that a diode is essentially nothing more than a one-way valve for electricity, it makes sense we should be able to verify its one-way nature using a DC (battery-powered) ohmmeter as in Figure below. Connected one way across the diode, the meter should show a very low resistance at (a). Connected the other way across the diode, it should show a very high resistance at (b) (“OL” on some digital meter models). Determination of diode polarity: (a) Low resistance indicates forward bias, black lead is cathode and red lead anode (for most meters) (b) Reversing leads shows high resistance indicating reverse bias. Of course, to determine which end of the diode is the cathode and which is the anode, you must know with certainty which test lead of the meter is positive (+) and which is negative (-) when set to the “resistance” or “Ω” function. With most digital multimeters I've seen, the red lead becomes positive and the black lead negative when set to measure resistance, in accordance with standard electronics color-code convention. However, this is not guaranteed for all meters. Many analog multimeters, for example, actually make their black leads positive (+) and their red leads negative (-) when switched to the “resistance” function, because it is easier to manufacture it that way! One problem with using an ohmmeter to check a diode is that the readings obtained only have qualitative value, not quantitative. In other words, an ohmmeter only tells you which way the diode conducts; the low-value resistance indication obtained while conducting is useless. If an ohmmeter shows a value of “1.73 ohms” while forward-biasing a diode, that figure of 1.73 Ω doesn't represent any real-world quantity useful to us as technicians or circuit designers. 
It neither represents the forward voltage drop nor any “bulk” resistance in the semiconductor material of the diode itself, but rather is a figure dependent upon both quantities and will vary substantially with the particular ohmmeter used to take the reading. For this reason, some digital multimeter manufacturers equip their meters with a special “diode check” function which displays the actual forward voltage drop of the diode in volts, rather than a “resistance” figure in ohms. These meters work by forcing a small current through the diode and measuring the voltage dropped between the two test leads. (Figure below) Meter with a “Diode check” function displays the forward voltage drop of 0.548 volts instead of a low resistance. The forward voltage reading obtained with such a meter will typically be less than the “normal” drop of 0.7 volts for silicon and 0.3 volts for germanium, because the current provided by the meter is of trivial proportions. If a multimeter with a diode-check function isn't available, or you would like to measure a diode's forward voltage drop at some non-trivial current, the circuit of Figure below may be constructed using a battery, resistor, and voltmeter. Measuring forward voltage of a diode without “diode check” meter function: (a) Schematic diagram. (b) Pictorial diagram. Connecting the diode backwards to this testing circuit will simply result in the voltmeter indicating the full voltage of the battery. If this circuit were designed to provide a constant or nearly constant current through the diode despite changes in forward voltage drop, it could be used as the basis of a temperature-measurement instrument, the voltage measured across the diode being inversely proportional to diode junction temperature. Of course, diode current should be kept to a minimum to avoid self-heating (the diode dissipating substantial amounts of heat energy), which would interfere with temperature measurement. Beware that some digital multimeters equipped with a “diode check” function may output a very low test voltage (less than 0.3 volts) when set to the regular “resistance” (Ω) function: too low to fully collapse the depletion region of a PN junction. The philosophy here is that the “diode check” function is to be used for testing semiconductor devices, and the “resistance” function for anything else. By using a very low test voltage to measure resistance, it is easier for a technician to measure the resistance of non-semiconductor components connected to semiconductor components, since the semiconductor component junctions will not become forward-biased with such low voltages. Consider the example of a resistor and diode connected in parallel, soldered in place on a printed circuit board (PCB). Normally, one would have to unsolder the resistor from the circuit (disconnect it from all other components) before measuring its resistance, otherwise any parallel-connected components would affect the reading obtained. When using a multimeter which outputs a very low test voltage to the probes in the “resistance” function mode, the diode's PN junction will not have enough voltage impressed across it to become forward-biased, and will only pass negligible current. Consequently, the meter “sees” the diode as an open (no continuity), and only registers the resistor's resistance. (Figure below) Ohmmeter equipped with a low test voltage (<0.7 V) does not see diodes, allowing it to measure parallel resistors. 
If such an ohmmeter were used to test a diode, it would indicate a very high resistance (many mega-ohms) even if connected to the diode in the “correct” (forward-biased) direction. (Figure below) Ohmmeter equipped with a low test voltage, too low to forward bias diodes, does not see diodes. Reverse voltage strength of a diode is not as easily tested, because exceeding a normal diode's PIV usually results in destruction of the diode. Special types of diodes, though, which are designed to “break down” in reverse-bias mode without damage (called zener diodes), may be tested with the same voltage source / resistor / voltmeter circuit, provided that the voltage source is of high enough value to force the diode into its breakdown region. More on this subject in a later section of this chapter. In addition to forward voltage drop (Vf) and peak inverse voltage (PIV), there are many other ratings of diodes important to circuit design and component selection. Semiconductor manufacturers provide detailed specifications on their products -- diodes included -- in publications known as datasheets. Datasheets for a wide variety of semiconductor components may be found in reference books and on the internet. I prefer the internet as a source of component specifications because all the data obtained from manufacturer websites are up-to-date. A typical diode datasheet will contain figures for the following parameters: Maximum repetitive reverse voltage = VRRM, the maximum amount of voltage the diode can withstand in reverse-bias mode, in repeated pulses. Ideally, this figure would be infinite. Maximum DC reverse voltage = VR or VDC, the maximum amount of voltage the diode can withstand in reverse-bias mode on a continual basis. Ideally, this figure would be infinite. Maximum forward voltage = VF, usually specified at the diode's rated forward current. Ideally, this figure would be zero: the diode providing no opposition whatsoever to forward current. In reality, the forward voltage is described by the “diode equation.” Maximum (average) forward current = IF(AV), the maximum average amount of current the diode is able to conduct in forward bias mode. This is fundamentally a thermal limitation: how much heat can the PN junction handle, given that dissipation power is equal to current (I) multiplied by voltage (V or E) and forward voltage is dependent upon both current and junction temperature. Ideally, this figure would be infinite. Maximum (peak or surge) forward current = IFSM or if(surge), the maximum peak amount of current the diode is able to conduct in forward bias mode. Again, this rating is limited by the diode junction's thermal capacity, and is usually much higher than the average current rating due to thermal inertia (the fact that it takes a finite amount of time for the diode to reach maximum temperature for a given current). Ideally, this figure would be infinite. Maximum total dissipation = PD, the amount of power (in watts) allowable for the diode to dissipate, given the dissipation (P=IE) of diode current multiplied by diode voltage drop, and also the dissipation (P=I²R) of diode current squared multiplied by bulk resistance. Fundamentally limited by the diode's thermal capacity (ability to tolerate high temperatures). Operating junction temperature = TJ, the maximum allowable temperature for the diode's PN junction, usually given in degrees Celsius (°C). Heat is the “Achilles' heel” of semiconductor devices: they must be kept cool to function properly and give long service life. 
Storage temperature range = TSTG, the range of allowable temperatures for storing a diode (unpowered). Sometimes given in conjunction with operating junction temperature (TJ), because the maximum storage temperature and the maximum operating temperature ratings are often identical. If anything, though, maximum storage temperature rating will be greater than the maximum operating temperature rating. Thermal resistance = R(Θ), the temperature difference between junction and outside air (R(Θ)JA) or between junction and leads (R(Θ)JL) for a given power dissipation. Expressed in units of degrees Celsius per watt (°C/W). Ideally, this figure would be zero, meaning that the diode package was a perfect thermal conductor and radiator, able to transfer all heat energy from the junction to the outside air (or to the leads) with no difference in temperature across the thickness of the diode package. A high thermal resistance means that the diode will build up excessive temperature at the junction (where it's critical) despite best efforts at cooling the outside of the diode, and thus will limit its maximum power dissipation. Maximum reverse current = IR, the amount of current through the diode in reverse-bias operation, with the maximum rated inverse voltage applied (VDC). Sometimes referred to as leakage current. Ideally, this figure would be zero, as a perfect diode would block all current when reverse-biased. In reality, it is very small compared to the maximum forward current. Typical junction capacitance = CJ, the typical amount of capacitance intrinsic to the junction, due to the depletion region acting as a dielectric separating the anode and cathode connections. This is usually a very small figure, measured in the range of picofarads (pF). Reverse recovery time = trr, the amount of time it takes for a diode to “turn off” when the voltage across it alternates from forward-bias to reverse-bias polarity. Ideally, this figure would be zero: the diode halting conduction immediately upon polarity reversal. For a typical rectifier diode, reverse recovery time is in the range of tens of microseconds; for a “fast switching” diode, it may only be a few nanoseconds. Most of these parameters vary with temperature or other operating conditions, and so a single figure fails to fully describe any given rating. Therefore, manufacturers provide graphs of component ratings plotted against other variables (such as temperature), so that the circuit designer has a better idea of what the device is capable of. Now we come to the most popular application of the diode: rectification. Simply defined, rectification is the conversion of alternating current (AC) to direct current (DC). This involves a device that only allows one-way flow of electrons. As we have seen, this is exactly what a semiconductor diode does. The simplest kind of rectifier circuit is the half-wave rectifier. It only allows one half of an AC waveform to pass through to the load. (Figure below) Half-wave rectifier circuit. For most power applications, half-wave rectification is insufficient for the task. The harmonic content of the rectifier's output waveform is very large and consequently difficult to filter. Furthermore, the AC power source only supplies power to the load for one half of every full cycle, meaning that half of its capacity is unused. Half-wave rectification is, however, a very simple way to reduce power to a resistive load. 
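A minimal SPICE sketch of the half-wave rectifier just described can be useful for experimentation; the 10 V peak, 60 Hz source and 1 kΩ load are assumed values. A transient analysis shows the load receiving only the positive half-cycles, reduced by the diode's forward drop.
* Half-wave rectifier sketch; source amplitude, frequency, and load value are assumed.
V1 1 0 SIN(0 10 60)
D1 1 2 diode
Rload 2 0 1k
.model diode d
.tran 1m 50m
.end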
Some two-position lamp dimmer switches apply full AC power to the lamp filament for “full” brightness and then half-wave rectify it for a lesser light output. (Figure below) Half-wave rectifier application: Two level lamp dimmer. In the “Dim” switch position, the incandescent lamp receives approximately one-half the power it would normally receive operating on full-wave AC. Because the half-wave rectified power pulses far more rapidly than the filament has time to heat up and cool down, the lamp does not blink. Instead, its filament merely operates at a lesser temperature than normal, providing less light output. This principle of “pulsing” power rapidly to a slow-responding load device to control the electrical power sent to it is common in the world of industrial electronics. Since the controlling device (the diode, in this case) is either fully conducting or fully nonconducting at any given time, it dissipates little heat energy while controlling load power, making this method of power control very energy-efficient. This circuit is perhaps the crudest possible method of pulsing power to a load, but it suffices as a proof-of-concept application. If we need to rectify AC power to obtain the full use of both half-cycles of the sine wave, a different rectifier circuit configuration must be used. Such a circuit is called a full-wave rectifier. One kind of full-wave rectifier, called the center-tap design, uses a transformer with a center-tapped secondary winding and two diodes, as in Figure below. Full-wave rectifier, center-tapped design. This circuit's operation is easily understood one half-cycle at a time. Consider the first half-cycle, when the source voltage polarity is positive (+) on top and negative (-) on bottom. At this time, only the top diode is conducting; the bottom diode is blocking current, and the load “sees” the first half of the sine wave, positive on top and negative on bottom. Only the top half of the transformer's secondary winding carries current during this half-cycle as in Figure below. Full-wave center-tap rectifier: Top half of secondary winding conducts during positive half-cycle of input, delivering a positive half-cycle to the load. During the next half-cycle, the AC polarity reverses. Now, the other diode and the other half of the transformer's secondary winding carry current while the portions of the circuit formerly carrying current during the last half-cycle sit idle. The load still “sees” half of a sine wave, of the same polarity as before: positive on top and negative on bottom. (Figure below) Full-wave center-tap rectifier: During negative input half-cycle, bottom half of secondary winding conducts, delivering a positive half-cycle to the load. One disadvantage of this full-wave rectifier design is the necessity of a transformer with a center-tapped secondary winding. If the circuit in question is one of high power, the size and expense of a suitable transformer is significant. Consequently, the center-tap rectifier design is only seen in low-power applications. The full-wave center-tapped rectifier polarity at the load may be reversed by changing the direction of the diodes. Furthermore, the reversed diodes can be paralleled with an existing positive-output rectifier. The result is a dual-polarity full-wave center-tapped rectifier in Figure below. Note that the connectivity of the diodes themselves is the same configuration as a bridge. 
Dual polarity full-wave center tap rectifier. Another, more popular full-wave rectifier design exists, and it is built around a four-diode bridge configuration. For obvious reasons, this design is called a full-wave bridge. (Figure below) Full-wave bridge rectifier. Current directions for the full-wave bridge rectifier circuit are as shown in Figure below for positive half-cycle and Figure below for negative half-cycles of the AC source waveform. Note that regardless of the polarity of the input, the current flows in the same direction through the load. That is, the negative half-cycle of the source is a positive half-cycle at the load. The current flow is through two diodes in series for both polarities. Thus, two diode drops of the source voltage are lost (0.7·2=1.4 V for Si) in the diodes. This is a disadvantage compared with a full-wave center-tap design. This disadvantage is only a problem in very low voltage power supplies. Full-wave bridge rectifier: Electron flow for positive half-cycles. Full-wave bridge rectifier: Electron flow for negative half-cycles. Remembering the proper layout of diodes in a full-wave bridge rectifier circuit can often be frustrating to the new student of electronics. I've found that an alternative representation of this circuit is easier both to remember and to comprehend. It's the exact same circuit, except all diodes are drawn in a horizontal attitude, all “pointing” the same direction. (Figure below) Alternative layout style for Full-wave bridge rectifier. One advantage of remembering this layout for a bridge rectifier circuit is that it expands easily into a polyphase version in Figure below. Three-phase full-wave bridge rectifier circuit. Each three-phase line connects between a pair of diodes: one to route power to the positive (+) side of the load, and the other to route power to the negative (-) side of the load. Polyphase systems with more than three phases are easily accommodated into a bridge rectifier scheme. Take for instance the six-phase bridge rectifier circuit in Figure below. Six-phase full-wave bridge rectifier circuit. When polyphase AC is rectified, the phase-shifted pulses overlap each other to produce a DC output that is much “smoother” (has less AC content) than that produced by the rectification of single-phase AC. This is a decided advantage in high-power rectifier circuits, where the sheer physical size of filtering components would be prohibitive but low-noise DC power must be obtained. The diagram in Figure below shows the full-wave rectification of three-phase AC. Three-phase AC and 3-phase full-wave rectifier output. In any case of rectification -- single-phase or polyphase -- the amount of AC voltage mixed with the rectifier's DC output is called ripple voltage. In most cases, since “pure” DC is the desired goal, ripple voltage is undesirable. If the power levels are not too great, filtering networks may be employed to reduce the amount of ripple in the output voltage. Sometimes, the method of rectification is referred to by counting the number of DC “pulses” output for every 360° of electrical “rotation.” A single-phase, half-wave rectifier circuit, then, would be called a 1-pulse rectifier, because it produces a single pulse during the time of one complete cycle (360°) of the AC waveform. A single-phase, full-wave rectifier (regardless of design, center-tap or bridge) would be called a 2-pulse rectifier, because it outputs two pulses of DC during one AC cycle's worth of time. 
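For readers who want to simulate the single-phase bridge just described, here is a minimal SPICE sketch; the node numbering, source amplitude and frequency, and load value are all assumed for illustration. The load, connected between nodes 2 and 3, sees both half-cycles with the same polarity, reduced by the two diode drops mentioned above.
* Full-wave bridge rectifier sketch; all values and node numbers are assumed.
V1 1 0 SIN(0 10 60)
D1 1 2 diode
D2 0 2 diode
D3 3 1 diode
D4 3 0 diode
Rload 2 3 1k
.model diode d
.tran 1m 50m
.end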
A three-phase full-wave rectifier would be called a 6-pulse unit. Modern electrical engineering convention further describes the function of a rectifier circuit by using a three-field notation of phases, ways, and number of pulses. A single-phase, half-wave rectifier circuit is given the somewhat cryptic designation of 1Ph1W1P (1 phase, 1 way, 1 pulse), meaning that the AC supply voltage is single-phase, that current on each phase of the AC supply lines moves in only one direction (way), and that there is a single pulse of DC produced for every 360o of electrical rotation. A single-phase, full-wave, center-tap rectifier circuit would be designated as 1Ph1W2P in this notational system: 1 phase, 1 way or direction of current in each winding half, and 2 pulses or output voltage per cycle. A single-phase, full-wave, bridge rectifier would be designated as 1Ph2W2P: the same as for the center-tap design, except current can go both ways through the AC lines instead of just one way. The three-phase bridge rectifier circuit shown earlier would be called a 3Ph2W6P rectifier. Is it possible to obtain more pulses than twice the number of phases in a rectifier circuit? The answer to this question is yes: especially in polyphase circuits. Through the creative use of transformers, sets of full-wave rectifiers may be paralleled in such a way that more than six pulses of DC are produced for three phases of AC. A 30o phase shift is introduced from primary to secondary of a three-phase transformer when the winding configurations are not of the same type. In other words, a transformer connected either Y-Δ or Δ-Y will exhibit this 30o phase shift, while a transformer connected Y-Y or Δ-Δ will not. This phenomenon may be exploited by having one transformer connected Y-Y feed a bridge rectifier, and have another transformer connected Y-Δ feed a second bridge rectifier, then parallel the DC outputs of both rectifiers. (Figure below) Since the ripple voltage waveforms of the two rectifiers' outputs are phase-shifted 30o from one another, their superposition results in less ripple than either rectifier output considered separately: 12 pulses per 360o instead of just six: Polyphase rectifier circuit: 3-phase 2-way 12-pulse (3Ph2W12P) A peak detector is a series connection of a diode and a capacitor outputting a DC voltage equal to the peak value of the applied AC signal. The circuit is shown in Figure below with the corresponding SPICE net list. An AC voltage source applied to the peak detector, charges the capacitor to the peak of the input. The diode conducts positive “half cycles,” charging the capacitor to the waveform peak. When the input waveform falls below the DC “peak” stored on the capacitor, the diode is reverse biased, blocking current flow from capacitor back to the source. Thus, the capacitor retains the peak value even as the waveform drops to zero. Another view of the peak detector is that it is the same as a half-wave rectifier with a filter capacitor added to the output. *SPICE 03441.eps C1 2 0 0.1u R1 1 3 1.0k V1 1 0 SIN(0 5 1k) D1 3 2 diode .model diode d .tran 0.01m 50mm .end Peak detector: Diode conducts on positive half cycles charging capacitor to the peak voltage (less diode forward drop). It takes a few cycles for the capacitor to charge to the peak as in Figure below due to the series resistance (RC “time constant”). Why does the capacitor not charge all the way to 5 V? It would charge to 5 V if an “ideal diode” were obtainable. 
However, the silicon diode has a forward voltage drop of 0.7 V, which subtracts from the 5 V peak of the input. Peak detector: Capacitor charges to peak within a few cycles. The circuit in Figure above could represent a DC power supply based on a half-wave rectifier. The resistance would be a few ohms instead of 1 kΩ, due to a transformer secondary winding replacing the voltage source and resistor. A larger “filter” capacitor would be used. A power supply based on a 60 Hz source with a filter of a few hundred µF could supply up to 100 mA. Half-wave supplies seldom supply more due to the difficulty of filtering a half-wave. The peak detector may be combined with other components to build a crystal radio. A circuit which removes the peak of a waveform is known as a clipper. A negative clipper is shown in Figure below. This schematic diagram was produced with the Xcircuit schematic capture program. Xcircuit produced the SPICE net list in Figure below, except for the second and next-to-last pair of lines, which were inserted with a text editor. *SPICE 03437.eps * A K ModelName D1 0 2 diode R1 2 1 1.0k V1 1 0 SIN(0 5 1k) .model diode d .tran .05m 3m .end Clipper: clips negative peak at -0.7 V. During the positive half cycle of the 5 V peak input, the diode is reverse biased. The diode does not conduct. It is as if the diode were not there. The positive half cycle is unchanged at the output V(2) in Figure below. Since the output positive peaks actually overlay the input sinewave V(1), the input has been shifted upward in the plot for clarity. In Nutmeg, the SPICE display module, the command “plot v(1)+1” accomplishes this. V(1)+1 is actually V(1), a 10 Vptp sinewave, offset by 1 V for display clarity. The V(2) output is clipped at -0.7 V by diode D1. During the negative half cycle of the sinewave input of Figure above, the diode is forward biased, that is, conducting. The negative half cycle of the sinewave is shorted out. The negative half cycle of V(2) would be clipped at 0 V for an ideal diode. The waveform is clipped at -0.7 V due to the forward voltage drop of the silicon diode. The SPICE model defaults to 0.7 V unless parameters in the model statement specify otherwise. Germanium or Schottky diodes clip at lower voltages. Closer examination of the negative clipped peak (Figure above) reveals that it follows the input for a short time while the sinewave is moving toward -0.7 V. The clipping action is only effective after the input sinewave exceeds -0.7 V. The diode therefore does not conduct for the complete half cycle, though it does conduct for most of it. The addition of an anti-parallel diode to the existing diode in Figure above yields the symmetrical clipper in Figure below. *SPICE 03438.eps D1 0 2 diode D2 2 0 diode R1 2 1 1.0k V1 1 0 SIN(0 5 1k) .model diode d .tran 0.05m 3m .end Symmetrical clipper: Anti-parallel diodes clip both positive and negative peaks, leaving a ±0.7 V output. Diode D1 clips the negative peak at -0.7 V as before. The additional diode D2 conducts for positive half cycles of the sine wave as it exceeds 0.7 V, the forward diode drop. The remainder of the voltage drops across the series resistor. Thus, both peaks of the input sinewave are clipped in Figure below. The net list is in Figure above. Diode D1 clips at -0.7 V as it conducts during negative peaks. D2 conducts for positive peaks, clipping at 0.7 V. The most general form of the diode clipper is shown in Figure below. For an ideal diode, the clipping occurs at the level of the clipping voltages, V1 and V2.
However, the voltage sources have been adjusted to account for the 0.7 V forward drop of the real silicon diodes. D1 clips at 1.3V +0.7V=2.0V when the diode begins to conduct. D2 clips at -2.3V -0.7V=-3.0V when D2 conducts. *SPICE 03439.eps V1 3 0 1.3 V2 4 0 -2.3 D1 2 3 diode D2 4 2 diode R1 2 1 1.0k V3 1 0 SIN(0 5 1k) .model diode d .tran 0.05m 3m .end D1 clips the input sinewave at 2V. D2 clips at -3V. The clipper in Figure above does not have to clip both levels. To clip at one level with one diode and one voltage source, remove the other diode and source. The net list is in Figure above. The waveforms in Figure below show the clipping of v(1) at output v(2). D1 clips the sinewave at 2V. D2 clips at -3V. There is also a zener diode clipper circuit in the “Zener diode” section. A zener diode replaces both the diode and the DC voltage source. A practical application of a clipper is to prevent an amplified speech signal from overdriving a radio transmitter in Figure below. Over driving the transmitter generates spurious radio signals which causes interference with other stations. The clipper is a protective measure. Clipper prevents over driving radio transmitter by voice peaks. A sinewave may be squared up by overdriving a clipper. Another clipper application is the protection of exposed inputs of integrated circuits. The input of the IC is connected to a pair of diodes as at node “2” of Figure above . The voltage sources are replaced by the power supply rails of the IC. For example, CMOS IC's use 0V and +5 V. Analog amplifiers might use ±12V for the V1 and V2 sources. The circuits in Figure below are known as clampers or DC restorers. The corresponding netlist is in Figure below. These circuits clamp a peak of a waveform to a specific DC level compared with a capacitively coupled signal which swings about its average DC level (usually 0V). If the diode is removed from the clamper, it defaults to a simple coupling capacitor– no clamping. What is the clamp voltage? And, which peak gets clamped? In Figure below (a) the clamp voltage is 0 V ignoring diode drop, (more exactly 0.7 V with Si diode drop). In Figure below, the positive peak of V(1) is clamped to the 0 V (0.7 V) clamp level. Why is this? On the first positive half cycle, the diode conducts charging the capacitor left end to +5 V (4.3 V). This is -5 V (-4.3 V) on the right end at V(1,4). Note the polarity marked on the capacitor in Figure below (a). The right end of the capacitor is -5 V DC (-4.3 V) with respect to ground. It also has an AC 5 V peak sinewave coupled across it from source V(4) to node 1. The sum of the two is a 5 V peak sine riding on a - 5 V DC (-4.3 V) level. The diode only conducts on successive positive excursions of source V(4) if the peak V(4) exceeds the charge on the capacitor. This only happens if the charge on the capacitor drained off due to a load, not shown. The charge on the capacitor is equal to the positive peak of V(4) (less 0.7 diode drop). The AC riding on the negative end, right end, is shifted down. The positive peak of the waveform is clamped to 0 V (0.7 V) because the diode conducts on the positive peak. Clampers: (a) Positive peak clamped to 0 V. (b) Negative peak clamped to 0 V. (c) Negative peak clamped to 5 V. *SPICE 03443.eps V1 6 0 5 D1 6 3 diode C1 4 3 1000p D2 0 2 diode C2 4 2 1000p C3 4 1 1000p D3 1 0 diode V2 4 0 SIN(0 5 1k) .model diode d .tran 0.01m 5m .end V(4) source voltage 5 V peak used in all clampers. V(1) clamper output from Figure above (a). 
V(1,4) DC voltage on capacitor in Figure (a). V(2) clamper output from Figure (b). V(3) clamper output from Figure (c). Suppose the polarity of the diode is reversed as in Figure above (b)? The diode conducts on the negative peak of source V(4). The negative peak is clamped to 0 V (-0.7 V). See V(2) in Figure above. The most general realization of the clamper is shown in Figure above (c) with the diode connected to a DC reference. The capacitor still charges during the negative peak of the source. Note that the polarities of the AC source and the DC reference are series aiding. Thus, the capacitor charges to the sum to the two, 10 V DC (9.3 V). Coupling the 5 V peak sinewave across the capacitor yields Figure above V(3), the sum of the charge on the capacitor and the sinewave. The negative peak appears to be clamped to 5 V DC (4.3V), the value of the DC clamp reference (less diode drop). Describe the waveform if the DC clamp reference is changed from 5 V to 10 V. The clamped waveform will shift up. The negative peak will be clamped to 10 V (9.3). Suppose that the amplitude of the sine wave source is increased from 5 V to 7 V? The negative peak clamp level will remain unchanged. Though, the amplitude of the sinewave output will increase. An application of the clamper circuit is as a “DC restorer” in “composite video” circuitry in both television transmitters and receivers. An NTSC (US video standard) video signal “white level” corresponds to minimum (12.5%) transmitted power. The video “black level” corresponds to a high level (75% of transmitter power. There is a “blacker than black level” corresponding to 100% transmitted power assigned to synchronization signals. The NTSC signal contains both video and synchronization pulses. The problem with the composite video is that its average DC level varies with the scene, dark vs light. The video itself is supposed to vary. However, the sync must always peak at 100%. To prevent the sync signals from drifting with changing scenes, a “DC restorer” clamps the top of the sync pulses to a voltage corresponding to 100% transmitter modulation. [ATCO] A voltage multiplier is a specialized rectifier circuit producing an output which is theoretically an integer times the AC peak input, for example, 2, 3, or 4 times the AC peak input. Thus, it is possible to get 200 VDC from a 100 Vpeak AC source using a doubler, 400 VDC from a quadrupler. Any load in a practical circuit will lower these voltages. A voltage doubler application is a DC power supply capable of using either a 240 VAC or 120 VAC source. The supply uses a switch selected full-wave bridge to produce about 300 VDC from a 240 VAC source. The 120 V position of the switch rewires the bridge as a doubler producing about 300 VDC from the 120 VAC. In both cases, 300 VDC is produced. This is the input to a switching regulator producing lower voltages for powering, say, a personal computer. The half-wave voltage doubler in Figure below (a) is composed of two circuits: a clamper at (b) and peak detector (half-wave rectifier) in Figure prior, which is shown in modified form in Figure below (c). C2 has been added to a peak detector (half-wave rectifier). Half-wave voltage doubler (a) is composed of (b) a clamper and (c) a half-wave rectifier. Referring to Figure above (b), C2 charges to 5 V (4.3 V considering the diode drop) on the negative half cycle of AC input. The right end is grounded by the conducting D2. The left end is charged at the negative peak of the AC input. 
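As an aside on the line-powered application mentioned above, the same half-wave doubler topology can be sketched at line voltage; the 170 V peak (120 V RMS) source, 1 Ω source resistance, 100 µF capacitors, and 10 kΩ load are illustrative assumptions, and a direct line-driven circuit of this kind carries the safety cautions given a little further on.
*SPICE sketch: half-wave voltage doubler from a 120 V RMS source
V1 5 0 SIN(0 170 60)
Rs 5 4 1
* clamper section
C2 4 1 100u
D2 0 1 diode
* half-wave rectifier (peak detector) section
D1 1 2 diode
C1 2 0 100u
Rload 2 0 10k
.model diode d
.tran 0.1m 200m
.end
After a few cycles v(2) settles near twice the 170 V peak less diode drops, sagging somewhat under load -- in the neighborhood of the 300 V DC figure quoted above.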
This charging of C2 on the negative half-cycle is the operation of the clamper. During the positive half cycle, the half-wave rectifier comes into play at Figure above (c). Diode D2 is out of the circuit since it is reverse biased. C2 is now in series with the voltage source. Note the polarities of the generator and C2, series aiding. Thus, rectifier D1 sees a total of 10 V at the peak of the sinewave, 5 V from the generator and 5 V from C2. D1 conducts waveform v(1) (Figure below), charging C1 to the peak of the sine wave riding on 5 V DC (Figure below v(2)). Waveform v(2) is the output of the doubler, which stabilizes at 10 V (8.6 V with diode drops) after a few cycles of sinewave input. *SPICE 03255.eps C1 2 0 1000p D1 1 2 diode C2 4 1 1000p D2 0 1 diode V1 4 0 SIN(0 5 1k) .model diode d .tran 0.01m 5m .end Voltage doubler: v(4) input. v(1) clamper stage. v(2) half-wave rectifier stage, which is the doubler output. The full-wave voltage doubler is composed of a pair of series-stacked half-wave rectifiers. (Figure below) The corresponding netlist is in Figure below. The bottom rectifier charges C1 on the negative half cycle of input. The top rectifier charges C2 on the positive half cycle. Each capacitor takes on a charge of 5 V (4.3 V considering the diode drop). The output at node 5 is the series total of C1 + C2, or 10 V (8.6 V with diode drops). *SPICE 03273.eps *R1 3 0 100k *R2 5 3 100k D1 0 2 diode D2 2 5 diode C1 3 0 1000p C2 5 3 1000p V1 2 3 SIN(0 5 1k) .model diode d .tran 0.01m 5m .end Full-wave voltage doubler consists of two half-wave rectifiers operating on alternating polarities. Note that the output v(5) in Figure below reaches full value within one cycle of the input v(2) excursion. Full-wave voltage doubler: v(2) input, v(3) voltage at midpoint, v(5) voltage at output. Figure below illustrates the derivation of the full-wave doubler from a pair of opposite-polarity half-wave rectifiers (a). The negative rectifier of the pair is redrawn for clarity (b). Both are combined at (c), sharing the same ground. At (d) the negative rectifier is re-wired to share one voltage source with the positive rectifier. This yields a ±5 V (4.3 V with diode drop) power supply, though 10 V is measurable between the two outputs. At (e) the ground reference point is moved so that +10 V is available with respect to ground. Full-wave doubler: (a) Pair of doublers, (b) redrawn, (c) sharing the ground, (d) share the same voltage source, (e) move the ground point. A voltage tripler (Figure below) is built from a combination of a doubler and a half-wave rectifier (C3, D3). The half-wave rectifier produces 5 V (4.3 V) at node 3. The doubler provides another 10 V (8.6 V) between nodes 2 and 3, for a total of 15 V (12.9 V) at the output node 2 with respect to ground. The netlist is in Figure below. Voltage tripler composed of a doubler stacked atop a single-stage rectifier. Note that V(3) in Figure below rises to 5 V (4.3 V) on the first negative half cycle. Input v(4) is shifted upward by 5 V (4.3 V) due to the 5 V from the half-wave rectifier, and by 5 V more at v(1) due to the clamper (C2, D2). D1 charges C1 (waveform v(2)) to the peak value of v(1). *SPICE 03283.eps C3 3 0 1000p D3 0 4 diode C1 2 3 1000p D1 1 2 diode C2 4 1 1000p D2 3 1 diode V1 4 3 SIN(0 5 1k) .model diode d .tran 0.01m 5m .end Voltage tripler: v(3) half-wave rectifier, v(4) input + 5 V, v(1) clamper, v(2) final output. A voltage quadrupler is a stacked combination of two doublers shown in Figure below.
Each doubler provides 10 V (8.6 V) for a series total at node 2, with respect to ground, of 20 V (17.2 V). The netlist is in Figure below. Voltage quadrupler, composed of two doublers stacked in series, with output at node 2. The waveforms of the quadrupler are shown in Figure below. Two DC outputs are available: v(3), the doubler output, and v(2), the quadrupler output. Some of the intermediate voltages at the clampers illustrate that the input sinewave (not shown), which swings by 5 V, is clamped at successively higher DC levels from stage to stage. *SPICE 03286.eps C22 4 5 1000p C11 3 0 1000p D11 0 5 diode D22 5 3 diode C1 2 3 1000p D1 1 2 diode C2 4 1 1000p D2 3 1 diode V1 4 3 SIN(0 5 1k) .model diode d .tran 0.01m 5m .end Voltage quadrupler: DC voltage available at v(3) and v(2). Intermediate waveforms: Clampers: v(5), v(4), v(1). Some notes on voltage multipliers are in order at this point. The circuit parameters used in the examples (V = 5 V, 1 kHz, C = 1000 pF) do not provide much current, only microamps. Furthermore, load resistors have been omitted. Loading reduces the voltages from those shown. If the circuits are to be driven by a kHz source at low voltage, as in the examples, the capacitors are usually 0.1 to 1.0 µF so that milliamps of current are available at the output. If the multipliers are driven from 50/60 Hz, the capacitors are a few hundred to a few thousand microfarads to provide hundreds of milliamps of output current. If driven from line voltage, pay attention to the polarity and voltage ratings of the capacitors. Finally, any direct line-driven power supply (no transformer) is dangerous to the experimenter and to line-operated test equipment. Commercial direct-driven supplies are safe because the hazardous circuitry is in an enclosure to protect the user. When breadboarding these circuits with electrolytic capacitors of any voltage, the capacitors will explode if the polarity is reversed. Such circuits should be powered up behind a safety shield. A voltage multiplier of cascaded half-wave doublers of arbitrary length is known as a Cockcroft-Walton multiplier, as shown in Figure below. This multiplier is used when a high voltage at low current is required. The advantage over a conventional supply is that an expensive high voltage transformer is not required -- at least not one rated as high as the output. Cockcroft-Walton x8 voltage multiplier; output at v(8). The pair of diodes and capacitors to the left of nodes 1 and 2 in Figure above constitute a half-wave doubler. Rotating the diodes by 45o counterclockwise, and the bottom capacitor by 90o, makes it look like Figure prior (a). Four of the doubler sections are cascaded to the right for a theoretical x8 multiplication factor. Node 1 has a clamper waveform (not shown), a sinewave shifted up by 1x (5 V). The other odd-numbered nodes are sinewaves clamped to successively higher voltages. Node 2, the output of the first doubler, is a 2x DC voltage, v(2) in Figure below. Successive even-numbered nodes charge to successively higher voltages: v(4), v(6), v(8). D1 7 8 diode C1 8 6 1000p D2 6 7 diode C2 5 7 1000p D3 5 6 diode C3 4 6 1000p D4 4 5 diode C4 3 5 1000p D5 3 4 diode C5 2 4 1000p D6 2 3 diode D7 1 2 diode C6 1 3 1000p C7 2 0 1000p C8 99 1 1000p D8 0 1 diode V1 99 0 SIN(0 5 1k) .model diode d .tran 0.01m 50m .end Cockcroft-Walton (x8) waveforms. Output is v(8). Without diode drops, each doubler yields 2Vin, or 10 V; considering two diode drops, (10-1.4) = 8.6 V is realistic. For a total of 4 doublers, one expects 4·8.6 = 34.4 V out of an ideal 40 V.
Consulting Figure above, v(2) is about right;however, v(8) is <30 V instead of the anticipated 34.4 V. The bane of the Cockcroft-Walton multiplier is that each additional stage adds less than the previous stage. Thus, a practical limit to the number of stages exist. It is possible to overcome this limitation with a modification to the basic circuit. [ABR] Also note the time scale of 40 msec compared with 5 ms for previous circuits. It required 40 msec for the voltages to rise to a terminal value for this circuit. The netlist in Figure above has a “.tran 0.010m 50m” command to extend the simulation time to 50 msec; though, only 40 msec is plotted. The Cockcroft-Walton multiplier serves as a more efficient high voltage source for photomultiplier tubes requiring up to 2000 V. [ABR] Moreover, the tube has numerous dynodes, terminals requiring connection to the lower voltage “even numbered” nodes. The series string of multiplier taps replaces a heat generating resistive voltage divider of previous designs. An AC line operated Cockcroft-Walton multiplier provides high voltage to “ion generators” for neutralizing electrostatic charge and for air purifiers. A popular use of diodes is for the mitigation of inductive “kickback:” the pulses of high voltage produced when direct current through an inductor is interrupted. Take, for example, this simple circuit in Figure below with no protection against inductive kickback. Inductive kickback: (a) Switch open. (b) Switch closed, electron current flows from battery through coil which has polarity matching battery. Magnetic field stores energy. (c) Switch open, Current still flows in coil due to collapsing magnetic field. Note polarity change on coil. (d) Coil voltage vs time. When the pushbutton switch is actuated, current goes through the inductor, producing a magnetic field around it. When the switch is de-actuated, its contacts open, interrupting current through the inductor, and causing the magnetic field to rapidly collapse. Because the voltage induced in a coil of wire is directly proportional to the rate of change over time of magnetic flux (Faraday's Law: e = NdΦ/dt), this rapid collapse of magnetism around the coil produces a high voltage “spike”. If the inductor in question is an electromagnet coil, such as in a solenoid or relay (constructed for the purpose of creating a physical force via its magnetic field when energized), the effect of inductive “kickback” serves no useful purpose at all. In fact, it is quite detrimental to the switch, as it causes excessive arcing at the contacts, greatly reducing their service life. Of the practical methods for mitigating the high voltage transient created when the switch is opened, none so simple as the so-called commutating diode in Figure below. Inductive kickback with protection: (a) Switch open. (b)Switch closed, storing energy in magnetic field. (c) Switch open, inductive kickback is shorted by diode. In this circuit, the diode is placed in parallel with the coil, such that it will be reverse-biased when DC voltage is applied to the coil through the switch. Thus, when the coil is energized, the diode conducts no current in Figure above (b). However, when the switch is opened, the coil's inductance responds to the decrease in current by inducing a voltage of reverse polarity, in an effort to maintain current at the same magnitude and in the same direction. 
This sudden reversal of voltage polarity across the coil forward-biases the diode, and the diode provides a current path for the inductor's current, so that its stored energy is dissipated slowly rather than suddenly in Figure above (c). As a result, the voltage induced in the coil by its collapsing magnetic field is quite low: merely the forward voltage drop of the diode, rather than hundreds of volts as before. Thus, the switch contacts experience a voltage drop equal to the battery voltage plus about 0.7 volts (if the diode is silicon) during this discharge time. In electronics parlance, commutation refers to the reversal of voltage polarity or current direction. Thus, the purpose of a commutating diode is to act whenever voltage reverses polarity, for example, on an inductor coil when current through it is interrupted. A less formal term for a commutating diode is snubber, because it “snubs” or “squelches” the inductive kickback. A noteworthy disadvantage of this method is the extra time it imparts to the coil's discharge. Because the induced voltage is clamped to a very low value, its rate of magnetic flux change over time is comparatively slow. Remember that Faraday's Law describes the magnetic flux rate-of-change (dΦ/dt) as being proportional to the induced, instantaneous voltage (e or v). If the instantaneous voltage is limited to some low figure, then the rate of change of magnetic flux over time will likewise be limited to a low (slow) figure. If an electromagnet coil is “snubbed” with a commutating diode, the magnetic field will dissipate at a relatively slow rate compared to the original scenario (no diode) where the field disappeared almost instantly upon switch release. The amount of time in question will most likely be less than one second, but it will be measurably slower than without a commutating diode in place. This may be an intolerable consequence if the coil is used to actuate an electromechanical relay, because the relay will possess a natural “time delay” upon coil de-energization, and an unwanted delay of even a fraction of a second may wreak havoc in some circuits. Unfortunately, one cannot eliminate the high-voltage transient of inductive kickback and maintain fast de-magnetization of the coil: Faraday's Law will not be violated. However, if slow de-magnetization is unacceptable, a compromise may be struck between transient voltage and time by allowing the coil's voltage to rise to some higher level (but not so high as without a commutating diode in place). The schematic in Figure below shows how this can be done. (a) Commutating diode with series resistor. (b) Voltage waveform. (c) Level with no diode. (d) Level with diode, no resistor. (e) Compromise level with diode and resistor. A resistor placed in series with the commutating diode allows the coil's induced voltage to rise to a level greater than the diode's forward voltage drop, thus hastening the process of de-magnetization. This, of course, will place the switch contacts under greater stress, and so the resistor must be sized to limit that transient voltage at an acceptable maximum level. Diodes can perform switching and digital logic operations. Forward and reverse bias switch a diode between the low and high impedance states, respectively. Thus, it serves as a switch. Diodes can perform digital logic functions: AND, and OR. Diode logic was used in early digital computers. It only finds limited application today. Sometimes it is convenient to fashion a single logic gate from a few diodes. 
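Before moving on to diode logic, the commutating-diode action just described can be simulated with a voltage-controlled switch standing in for the pushbutton; the 12 V source, 100 mH / 50 Ω coil, and switch model values below are assumptions for illustration only.
*SPICE sketch: relay coil with commutating diode
V1 1 0 DC 12
* switch held closed until t = 10 ms, then opened by the control source
S1 1 2 10 0 swmod
Vctrl 10 0 PULSE(5 0 10m 1u 1u 1 2)
L1 2 3 100m
Rcoil 3 0 50
* commutating diode across the coil: reverse biased while the coil is energized
D1 0 2 dmod
.model swmod sw vt=2.5 ron=0.1 roff=1meg
.model dmod d
.tran 10u 30m
.end
Plotting v(2) shows the coil side of the switch dipping only about one diode drop below ground at turn-off, with the coil current decaying on the L/R time constant; deleting D1 from the netlist lets v(2) spike to a very large negative value for an instant (the exact figure depends on the idealized switch model), which is the destructive kickback described above.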
Diode AND gate An AND gate is shown in Figure above. Logic gates have inputs and an output (Y) which is a function of the inputs. The inputs to the gate are high (logic 1), say 10 V, or low, 0 V (logic 0). In the figure, the logic levels are generated by switches. If a switch is up, the input is effectively high (1). If the switch is down, it connects a diode cathode to ground, which is low (0). The output depends on the combination of inputs at A and B. The inputs and output are customarily recorded in a “truth table” at (c) to describe the logic of a gate. At (a) all inputs are high (1). This is recorded in the last line of the truth table at (c). The output, Y, is high (1) due to the V+ on the top of the resistor. It is unaffected by open switches. At (b) switch A pulls the cathode of the connected diode low, pulling output Y low (0.7 V). This is recorded in the third line of the truth table. The second line of the truth table describes the output with the switches reversed from (b). Switch B pulls the diode and output low. The first line of the truth table recordes the Output=0 for both input low (0). The truth table describes a logical AND function. Summary: both inputs A and B high yields a high (1) out. A two input OR gate composed of a pair of diodes is shown in Figure below. If both inputs are logic low at (a) as simulated by both switches “downward,” the output Y is pulled low by the resistor. This logic zero is recorded in the first line of the truth table at (c). If one of the inputs is high as at (b), or the other input is high, or both inputs high, the diode(s) conduct(s), pulling the output Y high. These results are reordered in the second through fourth lines of the truth table. Summary: any input “high” is a high out at Y. OR gate: (a) First line, truth table (TT). (b) Third line TT. (d) Logical OR of power line supply and back-up battery. A backup battery may be OR-wired with a line operated DC power supply in Figure above (d) to power a load, even during a power failure. With AC power present, the line supply powers the load, assuming that it is a higher voltage than the battery. In the event of a power failure, the line supply voltage drops to 0 V; the battery powers the load. The diodes must be in series with the power sources to prevent a failed line supply from draining the battery, and to prevent it from over charging the battery when line power is available. Does your PC computer retain its BIOS setting when powered off? Does your VCR (video cassette recorder) retain the clock setting after a power failure? (PC Yes, old VCR no, new VCR yes.) Diodes can switch analog signals. A reverse biased diode appears to be an open circuit. A forward biased diode is a low resistance conductor. The only problem is isolating the AC signal being switched from the DC control signal. The circuit in Figure below is a parallel resonant network: resonant tuning inductor paralleled by one (or more) of the switched resonator capacitors. This parallel LC resonant circuit could be a preselector filter for a radio receiver. It could be the frequency determining network of an oscillator (not shown). The digital control lines may be driven by a microprocessor interface. Diode switch: A digital control signal (low) selects a resonator capacitor by forward biasing the switching diode. The large value DC blocking capacitor grounds the resonant tuning inductor for AC while blocking DC. It would have a low reactance compared to the parallel LC reactances. 
This prevents the anode DC voltage from being shorted to ground by the resonant tuning inductor. A switched resonator capacitor is selected by pulling the corresponding digital control low. This forward biases the switching diode. The DC current path is from +5 V through an RF choke (RFC), a switching diode, and an RFC to ground via the digital control. The purpose of the RFC at the +5 V is to keep AC out of the +5 V supply. The RFC in series with the digital control is to keep AC out of the external control line. The decoupling capacitor shorts the little AC leaking through the RFC to ground, bypassing the external digital control line. With all three digital control lines high (≥+5 V), no switched resonator capacitors are selected due to diode reverse bias. Pulling one or more lines low, selects one or more switched resonator capacitors, respectively. As more capacitors are switched in parallel with the resonant tuning inductor, the resonant frequency decreases. The reverse biased diode capacitance may be substantial compared with very high frequency or ultra high frequency circuits. PIN diodes may be used as switches for lower capacitance. If we connect a diode and resistor in series with a DC voltage source so that the diode is forward-biased, the voltage drop across the diode will remain fairly constant over a wide range of power supply voltages as in Figure below (a). According to the “diode equation” here, the current through a forward-biased PN junction is proportional to e raised to the power of the forward voltage drop. Because this is an exponential function, current rises quite rapidly for modest increases in voltage drop. Another way of considering this is to say that voltage dropped across a forward-biased diode changes little for large variations in diode current. In the circuit shown in Figure below (a), diode current is limited by the voltage of the power supply, the series resistor, and the diode's voltage drop, which as we know doesn't vary much from 0.7 volts. If the power supply voltage were to be increased, the resistor's voltage drop would increase almost the same amount, and the diode's voltage drop just a little. Conversely, a decrease in power supply voltage would result in an almost equal decrease in resistor voltage drop, with just a little decrease in diode voltage drop. In a word, we could summarize this behavior by saying that the diode is regulating the voltage drop at approximately 0.7 volts. Voltage regulation is a useful diode property to exploit. Suppose we were building some kind of circuit which could not tolerate variations in power supply voltage, but needed to be powered by a chemical battery, whose voltage changes over its lifetime. We could form a circuit as shown and connect the circuit requiring steady voltage across the diode, where it would receive an unchanging 0.7 volts. This would certainly work, but most practical circuits of any kind require a power supply voltage in excess of 0.7 volts to properly function. One way we could increase our voltage regulation point would be to connect multiple diodes in series, so that their individual forward voltage drops of 0.7 volts each would add to create a larger total. For instance, if we had ten diodes in series, the regulated voltage would be ten times 0.7, or 7 volts in Figure below (b). Forward biased Si reference: (a) single diode, 0.7V, (b) 10-diodes in series 7.0V. 
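The frequency-shifting effect of selecting another resonator capacitor can be seen with an AC sweep; in the sketch below the forward-biased switching diode is idealized as a direct connection, and the 10 µH inductor, 100 pF capacitors, and 100 kΩ loss resistor are illustrative assumptions.
*SPICE sketch: parallel resonant tank with one switched capacitor connected
I1 0 1 AC 1m
L1 1 0 10u
Rp 1 0 100k
C0 1 0 100p
* switched-in resonator capacitor (switching diode assumed forward biased)
C1 1 0 100p
.ac dec 100 1meg 100meg
.end
With only C0 in place, v(1) peaks near 5 MHz; adding C1 moves the peak down to roughly 3.6 MHz, illustrating how each selected capacitor lowers the resonant frequency f = 1/(2π√(LC)).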
So long as the battery voltage never sagged below 7 volts, there would always be about 7 volts dropped across the ten-diode “stack.” If larger regulated voltages are required, we could either use more diodes in series (an inelegant option, in my opinion), or try a fundamentally different approach. We know that diode forward voltage is a fairly constant figure under a wide range of conditions, but so is reverse breakdown voltage, and breakdown voltage is typically much, much greater than forward voltage. If we reversed the polarity of the diode in our single-diode regulator circuit and increased the power supply voltage to the point where the diode “broke down” (could no longer withstand the reverse-bias voltage impressed across it), the diode would similarly regulate the voltage at that breakdown point, not allowing it to increase further as in Figure below (a). (a) Reverse biased Si small-signal diode breaks down at about 100V. (b) Symbol for Zener diode. Unfortunately, when normal rectifying diodes “break down,” they usually do so destructively. However, it is possible to build a special type of diode that can handle breakdown without failing completely. This type of diode is called a zener diode, and its symbol looks like Figure above (b). When forward-biased, zener diodes behave much the same as standard rectifying diodes: they have a forward voltage drop which follows the “diode equation” and is about 0.7 volts. In reverse-bias mode, they do not conduct until the applied voltage reaches or exceeds the so-called zener voltage, at which point the diode is able to conduct substantial current, and in doing so will try to limit the voltage dropped across it to that zener voltage point. So long as the power dissipated by this reverse current does not exceed the diode's thermal limits, the diode will not be harmed. Zener diodes are manufactured with zener voltages ranging anywhere from a few volts to hundreds of volts. This zener voltage changes slightly with temperature, and like common carbon-composition resistor values, may be anywhere from 5 percent to 10 percent in error from the manufacturer's specifications. However, this stability and accuracy is generally good enough for the zener diode to be used as a voltage regulator device in common power supply circuit in Figure below. Zener diode regulator circuit, Zener voltage = 12.6V). Please take note of the zener diode's orientation in the above circuit: the diode is reverse-biased, and intentionally so. If we had oriented the diode in the “normal” way, so as to be forward-biased, it would only drop 0.7 volts, just like a regular rectifying diode. If we want to exploit this diode's reverse breakdown properties, we must operate it in its reverse-bias mode. So long as the power supply voltage remains above the zener voltage (12.6 volts, in this example), the voltage dropped across the zener diode will remain at approximately 12.6 volts. Like any semiconductor device, the zener diode is sensitive to temperature. Excessive temperature will destroy a zener diode, and because it both drops voltage and conducts current, it produces its own heat in accordance with Joule's Law (P=IE). Therefore, one must be careful to design the regulator circuit in such a way that the diode's power dissipation rating is not exceeded. Interestingly enough, when zener diodes fail due to excessive power dissipation, they usually fail shorted rather than open. 
A diode failed in this manner is readily detected: it drops almost zero voltage when biased either way, like a piece of wire. Let's examine a zener diode regulating circuit mathematically, determining all voltages, currents, and power dissipations. Taking the same form of circuit shown earlier, we'll perform calculations assuming a zener voltage of 12.6 volts, a power supply voltage of 45 volts, and a series resistor value of 1000 Ω (we'll regard the zener voltage to be exactly 12.6 volts so as to avoid having to qualify all figures as “approximate” in Figure below (a) If the zener diode's voltage is 12.6 volts and the power supply's voltage is 45 volts, there will be 32.4 volts dropped across the resistor (45 volts - 12.6 volts = 32.4 volts). 32.4 volts dropped across 1000 Ω gives 32.4 mA of current in the circuit. (Figure below (b)) (a) Zener Voltage regulator with 1000 Ω resistor. (b) Calculation of voltage drops and current. Power is calculated by multiplying current by voltage (P=IE), so we can calculate power dissipations for both the resistor and the zener diode quite easily: A zener diode with a power rating of 0.5 watt would be adequate, as would a resistor rated for 1.5 or 2 watts of dissipation. If excessive power dissipation is detrimental, then why not design the circuit for the least amount of dissipation possible? Why not just size the resistor for a very high value of resistance, thus severely limiting current and keeping power dissipation figures very low? Take this circuit, for example, with a 100 kΩ resistor instead of a 1 kΩ resistor. Note that both the power supply voltage and the diode's zener voltage in Figure below are identical to the last example: Zener regulator with 100 kΩ resistor. With only 1/100 of the current we had before (324 µA instead of 32.4 mA), both power dissipation figures should be 100 times smaller: Seems ideal, doesn't it? Less power dissipation means lower operating temperatures for both the diode and the resistor, and also less wasted energy in the system, right? A higher resistance value does reduce power dissipation levels in the circuit, but it unfortunately introduces another problem. Remember that the purpose of a regulator circuit is to provide a stable voltage for another circuit. In other words, we're eventually going to power something with 12.6 volts, and this something will have a current draw of its own. Consider our first regulator circuit, this time with a 500 Ω load connected in parallel with the zener diode in Figure below. Zener regulator with 1000 Ω series resistor and 500 Ω load. If 12.6 volts is maintained across a 500 Ω load, the load will draw 25.2 mA of current. In order for the 1 kΩ series “dropping” resistor to drop 32.4 volts (reducing the power supply's voltage of 45 volts down to 12.6 across the zener), it still must conduct 32.4 mA of current. This leaves 7.2 mA of current through the zener diode. Now consider our “power-saving” regulator circuit with the 100 kΩ dropping resistor, delivering power to the same 500 Ω load. What it is supposed to do is maintain 12.6 volts across the load, just like the last circuit. However, as we will see, it cannot accomplish this task. (Figure below) Zener non-regulator with 100 KΩ series resistor with 500 Ω load.> With the larger value of dropping resistor in place, there will only be about 224 mV of voltage across the 500 Ω load, far less than the expected value of 12.6 volts! Why is this? If we actually had 12.6 volts across the load, it would draw 25.2 mA of current, as before. 
This load current would have to go through the series dropping resistor as it did before, but with a new (much larger!) dropping resistor in place, the voltage dropped across that resistor with 25.2 mA of current going through it would be 2,520 volts! Since we obviously don't have that much voltage supplied by the battery, this cannot happen. The situation is easier to comprehend if we temporarily remove the zener diode from the circuit and analyze the behavior of the two resistors alone in Figure below. Non-regulator with Zener removed. Both the 100 kΩ dropping resistor and the 500 Ω load resistance are in series with each other, giving a total circuit resistance of 100.5 kΩ. With a total voltage of 45 volts and a total resistance of 100.5 kΩ, Ohm's Law (I=E/R) tells us that the current will be 447.76 µA. Figuring voltage drops across both resistors (E=IR), we arrive at 44.776 volts and 224 mV, respectively. If we were to re-install the zener diode at this point, it would “see” 224 mV across it as well, being in parallel with the load resistance. This is far below the zener breakdown voltage of the diode and so it will not “break down” and conduct current. For that matter, at this low voltage the diode wouldn't conduct even if it were forward-biased! Thus, the diode ceases to regulate voltage. At least 12.6 volts must be dropped across to “activate” it. The analytical technique of removing a zener diode from a circuit and seeing whether or not enough voltage is present to make it conduct is a sound one. Just because a zener diode happens to be connected in a circuit doesn't guarantee that the full zener voltage will always be dropped across it! Remember that zener diodes work by limiting voltage to some maximum level; they cannot make up for a lack of voltage. In summary, any zener diode regulating circuit will function so long as the load's resistance is equal to or greater than some minimum value. If the load resistance is too low, it will draw too much current, dropping too much voltage across the series dropping resistor, leaving insufficient voltage across the zener diode to make it conduct. When the zener diode stops conducting current, it can no longer regulate voltage, and the load voltage will fall below the regulation point. Our regulator circuit with the 100 kΩ dropping resistor must be good for some value of load resistance, though. To find this acceptable load resistance value, we can use a table to calculate resistance in the two-resistor series circuit (no diode), inserting the known values of total voltage and dropping resistor resistance, and calculating for an expected load voltage of 12.6 volts: With 45 volts of total voltage and 12.6 volts across the load, we should have 32.4 volts across Rdropping: With 32.4 volts across the dropping resistor, and 100 kΩ worth of resistance in it, the current through it will be 324 µA: Being a series circuit, the current is equal through all components at any given time: Calculating load resistance is now a simple matter of Ohm's Law (R = E/I), giving us 38.889 kΩ: Thus, if the load resistance is exactly 38.889 kΩ, there will be 12.6 volts across it, diode or no diode. Any load resistance smaller than 38.889 kΩ will result in a load voltage less than 12.6 volts, diode or no diode. With the diode in place, the load voltage will be regulated to a maximum of 12.6 volts for any load resistance greater than 38.889 kΩ. 
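The worked figures above can be cross-checked with a DC operating-point simulation; the zener is represented by the ordinary SPICE diode model with its breakdown parameter set to 12.6 V (the same modeling shortcut used for the zener clipper below), and the element values match the example.
*SPICE sketch: zener regulator, 45 V supply, 1 kOhm dropping resistor, 500 ohm load
V1 1 0 DC 45
R1 1 2 1k
* zener: cathode at node 2, reverse biased, breakdown at 12.6 V
D1 0 2 zener
Rload 2 0 500
.model zener d bv=12.6
.op
.end
The operating point puts node 2 near 12.6 V, consistent with about 32.4 mA through R1, 25.2 mA in the load, and roughly 7 mA in the zener, as calculated above; changing R1 to 100k reproduces the non-regulating case, with the zener out of breakdown and node 2 sagging to roughly 224 mV.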
With the original value of 1 kΩ for the dropping resistor, our regulator circuit was able to adequately regulate voltage even for a load resistance as low as 500 Ω. What we see is a tradeoff between power dissipation and acceptable load resistance. The higher-value dropping resistor gave us less power dissipation, at the expense of raising the acceptable minimum load resistance value. If we wish to regulate voltage for low-value load resistances, the circuit must be prepared to handle higher power dissipation. Zener diodes regulate voltage by acting as complementary loads, drawing more or less current as necessary to ensure a constant voltage drop across the load. This is analogous to regulating the speed of an automobile by braking rather than by varying the throttle position: not only is it wasteful, but the brakes must be built to handle all the engine's power when the driving conditions don't demand it. Despite this fundamental inefficiency of design, zener diode regulator circuits are widely employed due to their sheer simplicity. In high-power applications where the inefficiencies would be unacceptable, other voltage-regulating techniques are applied. But even then, small zener-based circuits are often used to provide a “reference” voltage to drive a more efficient amplifier circuit controlling the main power. Zener diodes are manufactured in standard voltage ratings listed in Table below. The table “Common zener diode voltages” lists common voltages for 0.3W and 1.3W parts. The wattage corresponds to die and package size, and is the power that the diode may dissipate without damage. Common zener diode voltages Zener diode clipper: A clipping circuit which clips the peaks of waveform at approximately the zener voltage of the diodes. The circuit of Figure below has two zeners connected series opposing to symmetrically clip a waveform at nearly the Zener voltage. The resistor limits current drawn by the zeners to a safe value. *SPICE 03445.eps D1 4 0 diode D2 4 2 diode R1 2 1 1.0k V1 1 0 SIN(0 20 1k) .model diode d bv=10 .tran 0.001m 2m .end Zener diode clipper: The zener breakdown voltage for the diodes is set at 10 V by the diode model parameter “bv=10” in the spice net list in Figure above. This causes the zeners to clip at about 10 V. The back-to-back diodes clip both peaks. For a positive half-cycle, the top zener is reverse biased, breaking down at the zener voltage of 10 V. The lower zener drops approximately 0.7 V since it is forward biased. Thus, a more accurate clipping level is 10+0.7=10.7V. Similar negative half-cycle clipping occurs a -10.7 V. (Figure below) shows the clipping level at a little over ±10 V. Zener diode clipper: v(1) input is clipped at waveform v(2). Schottky diodes are constructed of a metal-to-N junction rather than a P-N semiconductor junction. Also known as hot-carrier diodes, Schottky diodes are characterized by fast switching times (low reverse-recovery time), low forward voltage drop (typically 0.25 to 0.4 volts for a metal-silicon junction), and low junction capacitance. The schematic symbol for a schottky diode is shown in Figure below. Schottky diode schematic symbol. The forward voltage drop (VF), reverse-recovery time (trr), and junction capacitance (CJ) of Schottky diodes are closer to ideal than the average “rectifying” diode. This makes them well suited for high-frequency applications. 
Unfortunately, though, Schottky diodes typically have lower forward current (IF) and reverse voltage (VRRM and VDC) ratings than rectifying diodes and are thus unsuitable for applications involving substantial amounts of power. Though they are used in low voltage switching regulator power supplies. Schottky diode technology finds broad application in high-speed computer circuits, where the fast switching time equates to high speed capability, and the low forward voltage drop equates to less power dissipation when conducting. Switching regulator power supplies operating at 100's of kHz cannot use conventional silicon diodes as rectifiers because of their slow switching speed . When the signal applied to a diode changes from forward to reverse bias, conduction continues for a short time, while carriers are being swept out of the depletion region. Conduction only ceases after this tr reverse recovery time has expired. Schottky diodes have a shorter reverse recovery time. Regardless of switching speed, the 0.7 V forward voltage drop of silicon diodes causes poor efficiency in low voltage supplies. This is not a problem in, say, a 10 V supply. In a 1 V supply the 0.7 V drop is a substantial portion of the output. One solution is to use a schottky power diode which has a lower forward drop. Tunnel diodes exploit a strange quantum phenomenon called resonant tunneling to provide a negative resistance forward-bias characteristics. When a small forward-bias voltage is applied across a tunnel diode, it begins to conduct current. (Figure below(b)) As the voltage is increased, the current increases and reaches a peak value called the peak current (IP). If the voltage is increased a little more, the current actually begins to decrease until it reaches a low point called the valley current (IV). If the voltage is increased further yet, the current begins to increase again, this time without decreasing into another “valley.” The schematic symbol for the tunnel diode shown in Figure below(a). Tunnel diode (a) Schematic symbol. (b) Current vs voltage plot (c) Oscillator. The forward voltages necessary to drive a tunnel diode to its peak and valley currents are known as peak voltage (VP) and valley voltage (VV), respectively. The region on the graph where current is decreasing while applied voltage is increasing (between VP and VV on the horizontal scale) is known as the region of negative resistance. Tunnel diodes, also known as Esaki diodes in honor of their Japanese inventor Leo Esaki, are able to transition between peak and valley current levels very quickly, “switching” between high and low states of conduction much faster than even Schottky diodes. Tunnel diode characteristics are also relatively unaffected by changes in temperature. Reverse breakdown voltage versus doping level. After Sze [SGG] Tunnel diodes are heavily doped in both the P and N regions, 1000 times the level in a rectifier. This can be seen in Figure above. Standard diodes are to the far left, zener diodes near to the left, and tunnel diodes to the right of the dashed line. The heavy doping produces an unusually thin depletion region. This produces an unusually low reverse breakdown voltage with high leakage. The thin depletion region causes high capacitance. To overcome this, the tunnel diode junction area must be tiny. The forward diode characteristic consists of two regions: a normal forward diode characteristic with current rising exponentially beyond VF, 0.3 V for Ge, 0.7 V for Si. 
Between 0 V and VF is an additional “negative resistance” characteristic peak. This is due to quantum mechanical tunneling involving the dual particle-wave nature of electrons. The depletion region is thin enough compared with the equivalent wavelength of the electron that they can tunnel through. They do not have to overcome the normal forward diode voltage VF. The energy level of the conduction band of the N-type material overlaps the level of the valence band in the P-type region. With increasing voltage, tunneling begins; the levels overlap; current increases, up to a point. As current increases further, the energy levels overlap less; current decreases with increasing voltage. This is the “negative resistance” portion of the curve. Tunnel diodes are not good rectifiers, as they have relatively high “leakage” current when reverse-biased. Consequently, they find application only in special circuits where their unique tunnel effect has value. To exploit the tunnel effect, these diodes are maintained at a bias voltage somewhere between the peak and valley voltage levels, always in a forward-biased polarity (anode positive, and cathode negative). Perhaps the most common application of a tunnel diode is in simple high-frequency oscillator circuits as in Figure above(c), where it allows a DC voltage source to contribute power to an LC “tank” circuit, the diode conducting when the voltage across it reaches the peak (tunnel) level and effectively insulating at all other voltages. The resistors bias the tunnel diode at a few tenths of a volt centered on the negative resistance portion of the characteristic curve. The L-C resonant circuit may be a section of waveguide for microwave operation. Oscillation to 5 GHz is possible. At one time the tunnel diode was the only solid-state microwave amplifier available. Tunnel diodes were popular starting in the 1960's. They were longer lived than traveling wave tube amplifiers, an important consideration in satellite transmitters. Tunnel diodes are also resistant to radiation because of the heavy doping. Today various transistors operate at microwave frequencies. Even small signal tunnel diodes are expensive and difficult to find today. There is one remaining manufacturer of germanium tunnel diodes, and none for silicon devices. They are sometimes used in military equipment because they are insensitive to radiation and large temperature changes. There has been some research involving possible integration of silicon tunnel diodes into CMOS integrated circuits. They are thought to be capable of switching at 100 GHz in digital circuits. The sole manufacturer of germanium devices produces them one at a time. A batch process for silicon tunnel diodes must be developed, then integrated with conventional CMOS processes. [SZL] The Esaki tunnel diode should not be confused with the resonant tunneling diode CH 2, of more complex construction from compound semiconductors. The RTD is a more recent development capable of higher speed. Diodes, like all semiconductor devices, are governed by the principles described in quantum physics. One of these principles is the emission of specific-frequency radiant energy whenever electrons fall from a higher energy level to a lower energy level. This is the same principle at work in a neon lamp, the characteristic pink-orange glow of ionized neon due to the specific energy transitions of its electrons in the midst of an electric current. 
The unique color of a neon lamp's glow is due to the fact that its neon gas inside the tube, and not due to the particular amount of current through the tube or voltage between the two electrodes. Neon gas glows pinkish-orange over a wide range of ionizing voltages and currents. Each chemical element has its own “signature” emission of radiant energy when its electrons “jump” between different, quantized energy levels. Hydrogen gas, for example, glows red when ionized; mercury vapor glows blue. This is what makes spectrographic identification of elements possible. Electrons flowing through a PN junction experience similar transitions in energy level, and emit radiant energy as they do so. The frequency of this radiant energy is determined by the crystal structure of the semiconductor material, and the elements comprising it. Some semiconductor junctions, composed of special chemical combinations, emit radiant energy within the spectrum of visible light as the electrons change energy levels. Simply put, these junctions glow when forward biased. A diode intentionally designed to glow like a lamp is called a light-emitting diode, or LED. Forward biased silicon diodes give off heat as electron and holes from the N-type and P-type regions, respectively, recombine at the junction. In a forward biased LED, the recombination of electrons and holes in the active region in Figure below (c) yields photons. This process is known as electroluminescence. To give off photons, the potential barrier through which the electrons fall must be higher than for a silicon diode. The forward diode drop can range to a few volts for some color LEDs. Diodes made from a combination of the elements gallium, arsenic, and phosphorus (called gallium-arsenide-phosphide) glow bright red, and are some of the most common LEDs manufactured. By altering the chemical constituency of the PN junction, different colors may be obtained. Early generations of LEDs were red, green, yellow, orange, and infra-red, later generations included blue and ultraviolet, with violet being the latest color added to the selection. Other colors may be obtained by combining two or more primary-color (red, green, and blue) LEDs together in the same package, sharing the same optical lens. This allowed for multicolor LEDs, such as tricolor LEDs (commercially available in the 1980's) using red and green (which can create yellow) and later RGB LEDs (red, green, and blue), which cover the entire color spectrum. The schematic symbol for an LED is a regular diode shape inside of a circle, with two small arrows pointing away (indicating emitted light), shown in Figure below. LED, Light Emitting Diode: (a) schematic symbol. (b) Flat side and short lead of device correspond to cathode, as well as the internal arrangement of the cathode. (c) Cross section of Led die. This notation of having two small arrows pointing away from the device is common to the schematic symbols of all light-emitting semiconductor devices. Conversely, if a device is light-activated (meaning that incoming light stimulates it), then the symbol will have two small arrows pointing toward it. LEDs can sense light. They generate a small voltage when exposed to light, much like a solar cell on a small scale. This property can be gainfully applied in a variety of light-sensing circuits. Because LEDs are made of different chemical substances than silicon diodes, their forward voltage drops will be different. 
Typically, LEDs have much larger forward voltage drops than rectifying diodes, anywhere from about 1.6 volts to over 3 volts, depending on the color. Typical operating current for a standard-sized LED is around 20 mA. When operating an LED from a DC voltage source greater than the LED's forward voltage, a series-connected “dropping” resistor must be included to prevent full source voltage from damaging the LED. Consider the example circuit in Figure below (a) using a 6 V source. Setting LED current at 20 ma. (a) for a 6 V source, (b) for a 24 V source. With the LED dropping 1.6 volts, there will be 4.4 volts dropped across the resistor. Sizing the resistor for an LED current of 20 mA is as simple as taking its voltage drop (4.4 volts) and dividing by circuit current (20 mA), in accordance with Ohm's Law (R=E/I). This gives us a figure of 220 Ω. Calculating power dissipation for this resistor, we take its voltage drop and multiply by its current (P=IE), and end up with 88 mW, well within the rating of a 1/8 watt resistor. Higher battery voltages will require larger-value dropping resistors, and possibly higher-power rating resistors as well. Consider the example in Figure above (b) for a supply voltage of 24 volts: Here, the dropping resistor must be increased to a size of 1.12 kΩ to drop 22.4 volts at 20 mA so that the LED still receives only 1.6 volts. This also makes for a higher resistor power dissipation: 448 mW, nearly one-half a watt of power! Obviously, a resistor rated for 1/8 watt power dissipation or even 1/4 watt dissipation will overheat if used here. Dropping resistor values need not be precise for LED circuits. Suppose we were to use a 1 kΩ resistor instead of a 1.12 kΩ resistor in the circuit shown above. The result would be a slightly greater circuit current and LED voltage drop, resulting in a brighter light from the LED and slightly reduced service life. A dropping resistor with too much resistance (say, 1.5 kΩ instead of 1.12 kΩ) will result in less circuit current, less LED voltage, and a dimmer light. LEDs are quite tolerant of variation in applied power, so you need not strive for perfection in sizing the dropping resistor. Multiple LEDs are sometimes required, say in lighting. If LEDs are operated in parallel, each must have its own current limiting resistor as in Figure below (a) to ensure currents dividing more equally. However, it is more efficient to operate LEDs in series (Figure below (b)) with a single dropping resistor. As the number of series LEDs increases the series resistor value must decrease to maintain current, to a point. The number of LEDs in series (Vf) cannot exceed the capability of the power supply. Multiple series strings may be employed as in Figure below (c). In spite of equalizing the currents in multiple LEDs, the brightness of the devices may not match due to variations in the individual parts. Parts can be selected for brightness matching for critical applications. Multiple LEDs: (a) In parallel, (b) in series, (c) series-parallel Also because of their unique chemical makeup, LEDs have much, much lower peak-inverse voltage (PIV) ratings than ordinary rectifying diodes. A typical LED might only be rated at 5 volts in reverse-bias mode. Therefore, when using alternating current to power an LED, connect a protective rectifying diode anti-parallel with the LED to prevent reverse breakdown every other half-cycle as in Figure below (a). Driving an LED with AC The anti-parallel diode in Figure above can be replaced with an anti-parallel LED. 
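Before continuing with the anti-parallel AC arrangement, the dropping-resistor arithmetic worked above can be collected into a short sketch. This is a minimal illustration; the 1.6 V drop and 20 mA target are the example figures from the text, and a real design would round to the nearest standard resistor value and power rating.

def led_dropping_resistor(v_supply, v_led=1.6, i_led=0.020):
    """Series resistor value and its dissipation for a simple LED circuit.

    Mirrors the arithmetic above: R = (Vsupply - Vled) / I, P = (Vsupply - Vled) * I.
    """
    v_drop = v_supply - v_led
    return v_drop / i_led, v_drop * i_led

# The two supply voltages worked in the text.
for v_supply in (6.0, 24.0):
    r, p = led_dropping_resistor(v_supply)
    print(f"{v_supply:4.0f} V supply: R = {r:6.0f} ohms, P = {p*1000:3.0f} mW")
# 6 V  -> 220 ohms,  88 mW
# 24 V -> 1120 ohms, 448 mW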
The resulting pair of anti-parallel LED's illuminate on alternating half-cycles of the AC sinewave. This configuration draws 20 mA, splitting it equally between the LED's on alternating AC half cycles. Each LED only receives 10 mA due to this sharing. The same is true of the LED anti-parallel combination with a rectifier: the LED only receives 10 mA. If 20 mA were required for the LED(s), the resistor value could be halved.

The forward voltage drop of LED's is inversely related to the wavelength (λ). As wavelength decreases going from infrared to visible colors to ultraviolet, Vf increases. While this trend is most obvious among the various devices from a single manufacturer, the voltage for a particular color LED also varies from one manufacturer to another. This range of voltages is shown in Table below.

Optical and electrical properties of LED's
|LED|λ, nm (= 10^-9 m)|Vf (from)|Vf (to)|
|white, blue, violet|-|3|4|

As lamps, LEDs are superior to incandescent bulbs in many ways. First and foremost is efficiency: LEDs output far more light power per watt of electrical input than an incandescent lamp. This is a significant advantage if the circuit in question is battery-powered, efficiency translating to longer battery life. Second is the fact that LEDs are far more reliable, having a much greater service life than incandescent lamps. This is because LEDs are “cold” devices: they operate at much cooler temperatures than an incandescent lamp with a white-hot metal filament, susceptible to breakage from mechanical and thermal shock. Third is the high speed at which LEDs may be turned on and off. This advantage is also due to the “cold” operation of LEDs: they don't have to overcome thermal inertia in transitioning from off to on or vice versa. For this reason, LEDs are used to transmit digital (on/off) information as pulses of light, conducted in empty space or through fiber-optic cable, at very high rates of speed (millions of pulses per second).

LEDs excel in monochromatic lighting applications like traffic signals and automotive tail lights. Incandescents are abysmal in this application since they require filtering, decreasing efficiency. LEDs do not require filtering.

One major disadvantage of using LEDs as sources of illumination is their monochromatic (single-color) emission. No one wants to read a book under the light of a red, green, or blue LED. However, if used in combination, LED colors may be mixed for a more broad-spectrum glow. A new broad spectrum light source is the white LED. While small white panel indicators have been available for many years, illumination grade devices are still in development.

Efficiency of lighting
|Lamp type|Efficiency, lumen/watt|Life, hrs|Notes|
|White LED, future|100|100,000|R&D target|
|Halogen|15-17|2000|high quality light|
|Compact fluorescent|50-100|10,000|cost effective|
|Sodium vapor, lp|70-200|20,000|outdoor|

A white LED is a blue LED exciting a phosphor which emits yellow light. The blue plus yellow approximates white light. The nature of the phosphor determines the characteristics of the light. A red phosphor may be added to improve the quality of the yellow plus blue mixture at the expense of efficiency. Table above compares white illumination LEDs to expected future devices and other conventional lamps. Efficiency is measured in lumens of light output per watt of input power. If the 50 lumens/watt device can be improved to 100 lumens/watt, white LEDs will be comparable to compact fluorescent lamps in efficiency.
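To put the lumen-per-watt figures in the table above into perspective, the sketch below compares the electrical power needed for a fixed light output. The 800 lumen target and the single-number efficacies are assumed, rounded values chosen only for illustration.

# Coarse, illustrative efficacies drawn from the table above (lm/W).
efficacy_lm_per_w = {
    "halogen": 16,                  # mid-range of 15-17
    "compact fluorescent": 60,      # within the 50-100 range
    "white LED, R&D target": 100,
}

target_lumens = 800  # assumed target, roughly a small reading lamp

for lamp, eff in efficacy_lm_per_w.items():
    watts = target_lumens / eff
    print(f"{lamp:22s} ~{watts:5.1f} W for {target_lumens} lm")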
LEDs in general have been a major subject of R&D since the 1960's. Because of this it is impractical to cover all geometries, chemistries, and characteristics that have been created over the decades. The early devices were relatively dim and took moderate currents. The efficiencies have been improved in later generations to the point it is hazardous to look closely and directly into an illuminated LED. This can result in eye damage, and the LEDs only required a minor increase in dropping voltage (Vf) and current. Modern high intensity devices have reached 180 lumens using 0.7 Amps (82 lumens/watt, Luxeon Rebel series cool white), and even higher intensity models can use even higher currents with a corresponding increase in brightness. Other developments, such as quantum dots, are the subject of current research, so expect to see new things for these devices in the future The laser diode is a further development upon the regular light-emitting diode, or LED. The term “laser” itself is actually an acronym, despite the fact its often written in lower-case letters. “Laser” stands for Light Amplification by Stimulated Emission of Radiation, and refers to another strange quantum process whereby characteristic light emitted by electrons falling from high-level to low-level energy states in a material stimulate other electrons in a substance to make similar “jumps,” the result being a synchronized output of light from the material. This synchronization extends to the actual phase of the emitted light, so that all light waves emitted from a “lasing” material are not just the same frequency (color), but also the same phase as each other, so that they reinforce one another and are able to travel in a very tightly-confined, nondispersing beam. This is why laser light stays so remarkably focused over long distances: each and every light wave coming from the laser is in step with each other. (a) White light of many wavelengths. (b) Mono-chromatic LED light, a single wavelength. (c) Phase coherent laser light. Incandescent lamps produce “white” (mixed-frequency, or mixed-color) light as in Figure above (a). Regular LEDs produce monochromatic light: same frequency (color), but different phases, resulting in similar beam dispersion in Figure above (b). Laser LEDs produce coherent light: light that is both monochromatic (single-color) and monophasic (single-phase), resulting in precise beam confinement as in Figure above (c). Laser light finds wide application in the modern world: everything from surveying, where a straight and nondispersing light beam is very useful for precise sighting of measurement markers, to the reading and writing of optical disks, where only the narrowness of a focused laser beam is able to resolve the microscopic “pits” in the disk's surface comprising the binary 1's and 0's of digital information. Some laser diodes require special high-power “pulsing” circuits to deliver large quantities of voltage and current in short bursts. Other laser diodes may be operated continuously at lower power. In the continuous laser, laser action occurs only within a certain range of diode current, necessitating some form of current-regulator circuit. As laser diodes age, their power requirements may change (more current required for less output power), but it should be remembered that low-power laser diodes, like LEDs, are fairly long-lived devices, with typical service lives in the tens of thousands of hours. 
A photodiode is a diode optimized to produce an electron current flow in response to irradiation by ultraviolet, visible, or infrared light. Silicon is most often used to fabricate photodiodes; though, germanium and gallium arsenide can be used. The junction through which light enters the semiconductor must be thin enough to pass most of the light on to the active region (depletion region) where light is converted to electron hole pairs. In Figure below a shallow P-type diffusion into an N-type wafer produces a PN junction near the surface of the wafer. The P-type layer needs to be thin to pass as much light as possible. A heavy N+ diffusion on the back of the wafer makes contact with metalization. The top metalization may be a fine grid of metallic fingers on the top of the wafer for large cells. In small photodiodes, the top contact might be a sole bond wire contacting the bare P-type silicon top. Photodiode: Schematic symbol and cross section. Light entering the top of the photodiode stack fall off exponentially in with depth of the silicon. A thin top P-type layer allows most photons to pass into the depletion region where electron-hole pairs are formed. The electric field across the depletion region due to the built in diode potential causes electrons to be swept into the N-layer, holes into the P-layer. Actually electron-hole pairs may be formed in any of the semiconductor regions. However, those formed in the depletion region are most likely to be separated into the respective N and P-regions. Many of the electron-hole pairs formed in the P and N-regions recombine. Only a few do so in the depletion region. Thus, a few electron-hole pairs in the N and P-regions, and most in the depletion region contribute to photocurrent, that current resulting from light falling on the photodiode. The voltage out of a photodiode may be observed. Operation in this photovoltaic (PV) mode is not linear over a large dynamic range, though it is sensitive and has low noise at frequencies less than 100 kHz. The preferred mode of operation is often photocurrent (PC) mode because the current is linearly proportional to light flux over several decades of intensity, and higher frequency response can be achieved. PC mode is achieved with reverse bias or zero bias on the photodiode. A current amplifier (transimpedance amplifier) should be used with a photodiode in PC mode. Linearity and PC mode are achieved as long as the diode does not become forward biased. High speed operation is often required of photodiodes, as opposed to solar cells. Speed is a function of diode capacitance, which can be minimized by decreasing cell area. Thus, a sensor for a high speed fiber optic link will use an area no larger than necessary, say 1 mm2. Capacitance may also be decreased by increasing the thickness of the depletion region, in the manufacturing process or by increasing the reverse bias on the diode. PIN diode The p-i-n diode or PIN diode is a photodiode with an intrinsic layer between the P and N-regions as in Figure below. The P-Intrinsic-N structure increases the distance between the P and N conductive layers, decreasing capacitance, increasing speed. The volume of the photo sensitive region also increases, enhancing conversion efficiency. The bandwidth can extend to 10's of gHz. PIN photodiodes are the preferred for high sensitivity, and high speed at moderate cost. PIN photodiode: The intrinsic region increases the thickness of the depletion region. 
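As a rough illustration of the capacitance-versus-speed trade-off described above, the sketch below estimates the RC-limited bandwidth of a photodiode working into a resistive load. Both resistance/capacitance pairs are assumed values, not data sheet figures, and carrier transit time (which also limits speed) is ignored.

import math

def bandwidth_3db(r_load, c_diode):
    """Approximate -3 dB bandwidth when the diode capacitance C drives a load R."""
    return 1.0 / (2.0 * math.pi * r_load * c_diode)

# Assumed illustrative values only.
print(f"{bandwidth_3db(1e3, 10e-12)/1e6:6.1f} MHz  (1 kΩ load, 10 pF diode)")
print(f"{bandwidth_3db(50.0, 2e-12)/1e9:6.2f} GHz  (50 Ω load,  2 pF diode)")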
Avalanche photo diode:An avalanche photodiode (APD)designed to operate at high reverse bias exhibits an electron multiplier effect analogous to a photomultiplier tube. The reverse bias can run from 10's of volts to nearly 2000 V. The high level of reverse bias accelerates photon created electron-hole pairs in the intrinsic region to a high enough velocity to free additional carriers from collisions with the crystal lattice. Thus, many electrons per photon result. The motivation for the APD is to achieve amplification within the photodiode to overcome noise in external amplifiers. This works to some extent. However, the APD creates noise of its own. At high speed the APD is superior to a PIN diode amplifier combination, though not for low speed applications. APD's are expensive, roughly the price of a photomultiplier tube. So, they are only competitive with PIN photodiodes for niche applications. One such application is single photon counting as applied to nuclear physics. A photodiode optimized for efficiently delivering power to a load is the solar cell. It operates in photovoltaic mode (PV) because it is forward biased by the voltage developed across the load resistance. Monocrystalline solar cells are manufactured in a process similar to semiconductor processing. This involves growing a single crystal boule from molten high purity silicon (P-type), though, not as high purity as for semiconductors. The boule is diamond sawed or wire sawed into wafers. The ends of the boule must be discarded or recycled, and silicon is lost in the saw kerf. Since modern cells are nearly square, silicon is lost in squaring the boule. Cells may be etched to texture (roughen) the surface to help trap light within the cell. Considerable silicon is lost in producing the 10 or 15 cm square wafers. These days (2007) it is common for solar cell manufacturer to purchase the wafers at this stage from a supplier to the semiconductor industry. P-type Wafers are loaded back-to-back into fused silica boats exposing only the outer surface to the N-type dopant in the diffusion furnace. The diffusion process forms a thin n-type layer on the top of the cell. The diffusion also shorts the edges of the cell front to back. The periphery must be removed by plasma etching to unshort the cell. Silver and or aluminum paste is screened on the back of the cell, and a silver grid on the front. These are sintered in a furnace for good electrical contact. (Figure below) The cells are wired in series with metal ribbons. For charging 12 V batteries, 36 cells at approximately 0.5 V are vacuum laminated between glass, and a polymer metal back. The glass may have a textured surface to help trap light. Silicon Solar cell The ultimate commercial high efficiency (21.5%) single crystal silicon solar cells have all contacts on the back of the cell. The active area of the cell is increased by moving the top (-) contact conductors to the back of the cell. The top (-) contacts are normally made to the N-type silicon on top of the cell. In Figure below the (-) contacts are made to N+ diffusions on the bottom interleaved with (+) contacts. The top surface is textured to aid in trapping light within the cell.. [VSW] High efficiency solar cell with all contacts on the back. Adapted from Figure 1 [VSW] Multicyrstalline silicon cells start out as molten silicon cast into a rectangular mold. As the silicon cools, it crystallizes into a few large (mm to cm sized) randomly oriented crystals instead of a single one. 
The remainder of the process is the same as for single crystal cells. The finished cells show lines dividing the individual crystals, as if the cells were cracked. The high efficiency is not quite as high as single crystal cells due to losses at crystal grain boundaries. The cell surface cannot be roughened by etching due to the random orientation of the crystals. However, an antireflectrive coating improves efficiency. These cells are competitive for all but space applications. Three layer cell: The highest efficiency solar cell is a stack of three cells tuned to absorb different portions of the solar spectrum. Though three cells can be stacked atop one another, a monolithic single crystal structure of 20 semiconductor layers is more compact. At 32 % efficiency, it is now (2007) favored over silicon for space application. The high cost prevents it from finding many earth bound applications other than concentrators based on lenses or mirrors. Intensive research has recently produced a version enhanced for terrestrial concentrators at 400 - 1000 suns and 40.7% efficiency. This requires either a big inexpensive Fresnel lens or reflector and a small area of the expensive semiconductor. This combination is thought to be competitive with inexpensive silicon cells for solar power plants. [RRK] [LZy] Metal organic chemical vapor deposition (MOCVD) deposits the layers atop a P-type germanium substrate. The top layers of N and P-type gallium indium phosphide (GaInP) having a band gap of 1.85 eV, absorbs ultraviolet and visible light. These wavelengths have enough energy to exceed the band gap. Longer wavelengths (lower energy) do not have enough energy to create electron-hole pairs, and pass on through to the next layer. A gallium arsenide layers having a band gap of 1.42 eV, absorbs near infrared light. Finally the germanium layer and substrate absorb far infrared. The series of three cells produce a voltage which is the sum of the voltages of the three cells. The voltage developed by each material is 0.4 V less than the band gap energy listed in Table below. For example, for GaInP: 1.8 eV/e - 0.4 V = 1.4 V. For all three the voltage is 1.4 V + 1.0 V + 0.3 V = 2.7 V. [BRB] High efficiency triple layer solar cell. |Layer||Band gap||Light absorbed| |Gallium indium phosphide||1.8 eV||UV, visible| |Gallium arsenide||1.4 eV||near infrared| |Germanium||0.7 eV||far infrared| Crystalline solar cell arrays have a long usable life. Many arrays are guaranteed for 25 years, and believed to be good for 40 years. They do not suffer initial degradation compared with amorphous silicon. Both single and multicrystalline solar cells are based on silicon wafers. The silicon is both the substrate and the active device layers. Much silicon is consumed. This kind of cell has been around for decades, and takes approximately 86% of the solar electric market. For further information about crystalline solar cells see Honsberg. [CHS] Amorphous silicon thin film solar cells use tiny amounts of the active raw material, silicon. Approximately half the cost of conventional crystalline solar cells is the solar cell grade silicon. The thin film deposition process reduces this cost. The downside is that efficiency is about half that of conventional crystalline cells. Moreover, efficiency degrades by 15-35% upon exposure to sunlight. A 7% efficient cell soon ages to 5% efficiency. Thin film amorphous silicon cells work better than crystalline cells in dim light. They are put to good use in solar powered calculators. 
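Going back to the three-layer cell: the band gaps tabulated above fix both the longest wavelength each layer can absorb (λ ≈ hc/Eg) and, by the 0.4 V rule of thumb quoted in the text, the approximate voltage each layer contributes. A minimal sketch of that arithmetic:

H_C_EV_NM = 1239.84  # h*c in eV·nm

# Band gaps from the three-layer cell table above (eV).
layers = {"GaInP": 1.8, "GaAs": 1.4, "Ge": 0.7}

total_v = 0.0
for name, eg in layers.items():
    cutoff_nm = H_C_EV_NM / eg   # longest wavelength with enough energy to be absorbed
    cell_v = eg - 0.4            # approximate contribution per the 0.4 V rule of thumb
    total_v += cell_v
    print(f"{name:6s} Eg = {eg} eV  absorbs λ shorter than ~{cutoff_nm:4.0f} nm  ~{cell_v:.1f} V")
print(f"series stack total: ~{total_v:.1f} V")   # ~2.7 V, matching the text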
Non-silicon based solar cells make up about 7% of the market. These are thin-film polycrystalline products. Various compound semiconductors are the subject of research and development. Some non-silicon products are in production. Generally, the efficiency is better than amorphous silicon, but not nearly as good as crystalline silicon.

Cadmium telluride as a polycrystalline thin film on metal or glass can have a higher efficiency than amorphous silicon thin films. If deposited on metal, that layer is the negative contact to the cadmium telluride thin film. The transparent P-type cadmium sulfide atop the cadmium telluride serves as a buffer layer. The positive top contact is transparent, electrically conductive fluorine doped tin oxide. These layers may be laid down on a sacrificial foil in place of the glass in the process in the following paragraph. The sacrificial foil is removed after the cell is mounted to a permanent substrate.

Cadmium telluride solar cell on glass or metal.

A process for depositing cadmium telluride on glass begins with the deposition of N-type transparent, electrically conductive tin oxide on a glass substrate. The next layer is P-type cadmium telluride, though N-type or intrinsic may be used. These two layers constitute the NP junction. A P+ (heavy P-type) layer of lead telluride aids in establishing a low resistance contact. A metal layer makes the final contact to the lead telluride. These layers may be laid down by vacuum deposition, chemical vapor deposition (CVD), screen printing, electrodeposition, or atmospheric pressure chemical vapor deposition (APCVD) in helium. [KWM]

A variation of cadmium telluride is mercury cadmium telluride. Its lower bulk resistance and lower contact resistance improve efficiency over cadmium telluride.

Copper Indium Gallium diSelenide solar cell (CIGS)

Copper Indium Gallium diSelenide: A most promising thin film solar cell at this time (2007) is manufactured on a ten inch wide roll of flexible polyimide: Copper Indium Gallium diSelenide (CIGS). It has a spectacular efficiency of 10%. Though commercial grade crystalline silicon cells surpassed this efficiency decades ago, CIGS should be cost competitive. The deposition processes are at a low enough temperature to use a polyimide polymer as a substrate instead of metal or glass. (Figure above) The CIGS is manufactured in a roll to roll process, which should drive down costs. CIGS cells may also be produced by an inherently low cost electrochemical process. [EET]

Solar cell properties
|Solar cell type|Maximum efficiency|Practical efficiency|Notes|
|Selenium, polycrystalline|0.7%|-|1883, Charles Fritts|
|Silicon, single crystal|-|4%|1950's, first silicon solar cell|
|Silicon, single crystal PERL, terrestrial, space|25%|-|solar cars, cost=100x commercial|
|Silicon, single crystal, commercial terrestrial|24%|14-17%|$5-$10/peak watt|
|Cypress Semiconductor, Sunpower, silicon single crystal|21.5%|19%|all contacts on cell back|
|Gallium Indium Phosphide/ Gallium Arsenide/ Germanium, single crystal, multilayer|-|32%|Preferred for space.|
|Advanced terrestrial version of above.|-|40.7%|Uses optical concentrator.|
|Silicon, amorphous|13%|5-7%|Degrades in sun light. Good indoors for calculators or cloudy outdoors.|
|Cadmium telluride, polycrystalline|16%|-|glass or metal substrate|
|Copper indium gallium diselenide, polycrystalline|18%|10%|10 inch flexible polymer web. [NTH]|
|Organic polymer, 100% plastic|4.5%|-|R&D project|

A variable capacitance diode is known as a varicap diode or as a varactor. If a diode is reverse biased, an insulating depletion region forms between the two semiconductive layers. In many diodes the width of the depletion region may be changed by varying the reverse bias. This varies the capacitance. This effect is accentuated in varicap diodes. The schematic symbols are shown in Figure below; one version is packaged as a common cathode dual diode.

Varicap diode: Capacitance varies with reverse bias. This varies the frequency of a resonant network.

If a varicap diode is part of a resonant circuit as in Figure above, the frequency may be varied with a control voltage, Vcontrol. A large capacitance, low Xc, in series with the varicap prevents Vcontrol from being shorted out by inductor L. As long as the series capacitor is large, it has minimal effect on the frequency of the resonant circuit. Coptional may be used to set the center resonant frequency. Vcontrol can then vary the frequency about this point. Note that the required active circuitry to make the resonant network oscillate is not shown. For an example of a varicap diode tuned AM radio receiver see “electronic varicap diode tuning,” Ch 9.

Some varicap diodes may be referred to as abrupt, hyperabrupt, or super hyperabrupt. These refer to the change in junction capacitance with changing reverse bias as being abrupt, hyperabrupt, or super hyperabrupt. These diodes offer a relatively large change in capacitance. This is useful when oscillators or filters are swept over a large frequency range. Varying the bias of abrupt varicaps over the rated limits changes capacitance by a 4:1 ratio, hyperabrupt by 10:1, and super hyperabrupt by 20:1.

Varactor diodes may be used in frequency multiplier circuits. See “Practical analog semiconductor circuits,” Varactor multiplier.

The snap diode, also known as the step recovery diode, is designed for use in high ratio frequency multipliers up to 20 GHz. When the diode is forward biased, charge is stored in the PN junction. This charge is drawn out as the diode is reverse biased. The diode looks like a low impedance current source during forward bias. When reverse bias is applied it still looks like a low impedance source until all the charge is withdrawn. It then “snaps” to a high impedance state causing a voltage impulse, rich in harmonics. An application is a comb generator, a generator of many harmonics. Moderate power 2x and 4x multipliers are another application.

A PIN diode is a fast low capacitance switching diode. Do not confuse a PIN switching diode with the PIN photodiode discussed earlier. A PIN diode is manufactured like a silicon switching diode with an intrinsic region added between the PN junction layers. This yields a thicker depletion region, the insulating layer at the junction of a reverse biased diode. This results in lower capacitance than a reverse biased switching diode.

PIN diode: Cross section aligned with schematic symbol.

PIN diodes are used in place of switching diodes in radio frequency (RF) applications, for example, a T/R switch. The 1N4007 1000 V, 1 A general purpose power diode is reported to be usable as a PIN switching diode. The high voltage rating of this diode is achieved by the inclusion of an intrinsic layer dividing the PN junction. This intrinsic layer makes the 1N4007 a PIN diode. Another PIN diode application is as the antenna switch for a direction finder receiver.
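As a quick check on the varicap tuning ratios quoted above: the resonant frequency of an LC tank varies as 1/√C, so the usable frequency swing is at most the square root of the capacitance swing. The sketch ignores the large series blocking capacitor and any fixed Coptional, both of which reduce the swing in a real circuit.

import math

def tuning_ratio(c_ratio):
    """Frequency swing of an LC tank when only C changes by c_ratio:1 (f ∝ 1/sqrt(C))."""
    return math.sqrt(c_ratio)

for kind, c_ratio in [("abrupt", 4), ("hyperabrupt", 10), ("super hyperabrupt", 20)]:
    print(f"{kind:18s} C swing {c_ratio:2d}:1 -> f swing {tuning_ratio(c_ratio):.2f}:1")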
PIN diodes serve as variable resistors when the forward bias is varied. One such application is the voltage variable attenuator here. The low capacitance characteristic of PIN diodes, extends the frequency flat response of the attenuator to microwave frequencies. An IMPATT diode is reverse biased above the breakdown voltage. The high doping levels produce a thin depletion region. The resulting high electric field rapidly accelerates carriers which free other carriers in collisions with the crystal lattice. Holes are swept into the P+ region. Electrons drift toward the N regions. The cascading effect creates an avalanche current which increases even as voltage across the junction decreases. The pulses of current lag the voltage peak across the junction. A “negative resistance” effect in conjunction with a resonant circuit produces oscillations at high power levels (high for semiconductors). IMPATT diode: Oscillator circuit and heavily doped P and N layers. The resonant circuit in the schematic diagram of Figure above is the lumped circuit equivalent of a waveguide section, where the IMPATT diode is mounted. DC reverse bias is applied through a choke which keeps RF from being lost in the bias supply. This may be a section of waveguide known as a bias Tee. Low power RADAR transmitters may use an IMPATT diode as a power source. They are too noisy for use in the receiver. [YMCW] A gunn diode is solely composed of N-type semiconductor. As such, it is not a true diode. Figure below shows a lightly doped N- layer surrounded by heavily doped N+ layers. A voltage applied across the N-type gallium arsenide gunn diode creates a strong electric field across the lightly doped N- layer. Gunn diode: Oscillator circuit and cross section of only N-type semiconductor diode. As voltage is increased, conduction increases due to electrons in a low energy conduction band. As voltage is increased beyond the threshold of approximately 1 V, electrons move from the lower conduction band to the higher energy conduction band where they no longer contribute to conduction. In other words, as voltage increases, current decreases, a negative resistance condition. The oscillation frequency is determined by the transit time of the conduction electrons, which is inversely related to the thickness of the N- layer. The frequency may be controlled to some extent by embedding the gunn diode into a resonant circuit. The lumped circuit equivalent shown in Figure above is actually a coaxial transmission line or waveguide. Gallium arsenide gunn diodes are available for operation from 10 to 200 gHz at 5 to 65 mw power. Gunn diodes may also serve as amplifiers. [CHW] [IAP] The Shockley diodeis a 4-layer thyristor used to trigger larger thyristors. It only conducts in one direction when triggered by a voltage exceeding the breakover voltage, about 20 V. See “Thyristors,” The Shockley Diode. The bidirectional version is called a diac. See “Thyristors,” The DIAC. A constant-current diode, also known as a current-limiting diode, or current-regulating diode, does exactly what its name implies: it regulates current through it to some maximum level. The constant current diode is a two terminal version of a JFET. If we try to force more current through a constant-current diode than its current-regulation point, it simply “fights back” by dropping more voltage. 
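Before looking at the constant-current diode's characteristic curve, the Gunn diode transit-time relation mentioned above is easy to sketch: f ≈ v/d, where d is the N- layer thickness. The drift (domain) velocity used below is an assumed, order-of-magnitude figure for gallium arsenide, not a measured value, so the results are estimates only.

def gunn_frequency(thickness_m, v_domain=1e5):
    """Transit-time frequency estimate for a Gunn diode: f ≈ v / d.

    v_domain is an assumed GaAs domain drift velocity on the order of 1e5 m/s.
    """
    return v_domain / thickness_m

for d_um in (10, 2, 1):
    f = gunn_frequency(d_um * 1e-6)
    print(f"N- layer {d_um:2d} µm thick -> roughly {f/1e9:4.0f} GHz")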
If we were to build the circuit in Figure below(a) and plot diode current against diode voltage, we'd get a graph that rises at first and then levels off at the current regulation point as in Figure below(b). Constant current diode: (a) Test circuit, (b) current vs voltage characteristic. One application for a constant-current diode is to automatically limit current through an LED or laser diode over a wide range of power supply voltages as in Figure below. Of course, the constant-current diode's regulation point should be chosen to match the LED or laser diode's optimum forward current. This is especially important for the laser diode, not so much for the LED, as regular LEDs tend to be more tolerant of forward current variations. Another application is in the charging of small secondary-cell batteries, where a constant charging current leads to predictable charging times. Of course, large secondary-cell battery banks might also benefit from constant-current charging, but constant-current diodes tend to be very small devices, limited to regulating currents in the milliamp range. Diodes manufactured from silicon carbide are capable of high temperature operation to 400oC. This could be in a high temperature environment: down hole oil well logging, gas turbine engines, auto engines. Or, operation in a moderate environment at high power dissipation. Nuclear and space applications are promising as SiC is 100 times more resistant to radiation compared with silicon. SiC is a better conductor of heat than any metal. Thus, SiC is better than silicon at conducting away heat. Breakdown voltage is several kV. SiC power devices are expected to reduce electrical energy losses in the power industry by a factor of 100. Diodes based on organic chemicals have been produced using low temperature processes. Hole rich and electron rich conductive polymers may be ink jet printed in layers. Most of the research and development is of the organic LED (OLED). However, development of inexpensive printable organic RFID (radio frequency identification) tags is on going. In this effort, a pentacene organic rectifier has been operated at 50 MHz. Rectification to 800 MHz is a development goal. An inexpensive metal insulator metal (MIM) diode acting like a back-to-back zener diode clipper has been delveloped. Also, a tunnel diode like device has been fabricated. The SPICE circuit simulation program provides for modeling diodes in circuit simulations. The diode model is based on characterization of individual devices as described in a product data sheet and manufacturing process characteristics not listed. Some information has been extracted from a 1N4004 data sheet in Figure below. Data sheet 1N4004 excerpt, after [DI4]. The diode statement begins with a diode element name which must begin with “d” plus optional characters. Example diode element names include: d1, d2, dtest, da, db, d101. Two node numbers specify the connection of the anode and cathode, respectively, to other components. The node numbers are followed by a model name, referring to a subsequent “.model” statement. The model statement line begins with “.model,” followed by the model name matching one or more diode statements. Next, a “d” indicates a diode is being modeled. The remainder of the model statement is a list of optional diode parameters of the form ParameterName=ParameterValue. None are used in Example below. Example2 has some parameters defined. For a list of diode parameters, see Table below. 
General form:
d[name] [anode] [cathode] [modelname]
.model ([modelname] d [parmtr1=x] [parmtr2=y] . . .)

Example:
d1 1 2 mod1
.model mod1 d

Example2:
D2 1 2 Da1N4004
.model Da1N4004 D (IS=18.8n RS=0 BV=400 IBV=5.00u CJO=30p M=0.333 N=2)

The easiest approach to take for a SPICE model is the same as for a data sheet: consult the manufacturer's web site. Table below lists the model parameters for some selected diodes. A fallback strategy is to build a SPICE model from those parameters listed on the data sheet. A third strategy, not considered here, is to take measurements of an actual device. Then, calculate, compare and adjust the SPICE parameters to the measurements.

Diode SPICE parameters
|Symbol|SPICE name|Parameter|Units|Default|
|IS|IS|Saturation current (diode equation)|A|1E-14|
|RS|RS|Parasitic resistance (series resistance)|Ω|0|
|n|N|Emission coefficient, 1 to 2|-|1|
|CD(0)|CJO|Zero-bias junction capacitance|F|0|
|m|M|Junction grading coefficient|-|0.5|
|-|-|0.33 for linearly graded junction|-|-|
|-|-|0.5 for abrupt junction|-|-|
|pi|XTI|IS temperature exponent|-|3.0|
|-|-|pn junction: 3.0|-|-|
|kf|KF|Flicker noise coefficient|-|0|
|af|AF|Flicker noise exponent|-|1|
|FC|FC|Forward bias depletion capacitance coefficient|-|0.5|
|BV|BV|Reverse breakdown voltage|V|∞|
|IBV|IBV|Reverse breakdown current|A|1E-3|

If diode parameters are not specified as in “Example” model above, the parameters take on the default values listed in Table above and Table below. These defaults model integrated circuit diodes. These are certainly adequate for preliminary work with discrete devices. For more critical work, use SPICE models supplied by the manufacturer [DIn], SPICE vendors, and other sources. [smi]

SPICE parameters for selected diodes; sk=schottky Ge=germanium; else silicon.
|Part|IS|RS|N|TT|CJO|M|VJ|EG|XTI|BV|IBV|
|1N4004 data sheet|18.8n|-|2|-|30p|0.333|-|-|-|400|5u|

Otherwise, derive some of the parameters from the data sheet. First select a value for SPICE parameter N between 1 and 2. It is required for the diode equation (n). Massobrio [PAGM], p. 9, recommends “... n, the emission coefficient is usually about 2.” In Table above, we see that power rectifiers 1N3891 (12 A) and 10A04 (10 A) both use about 2. The first four in the table are not relevant because they are schottky, schottky, germanium, and silicon small signal, respectively.

The saturation current, IS, is derived from the diode equation, a value of (VD, ID) on the graph in Figure above, and N=2 (n in the diode equation).

ID = IS(e^(VD/nVT) - 1)
VT = 26 mV at 25°C
n = 2.0
VD = 0.925 V at 1 A, from graph
1 A = IS(e^((0.925 V)/((2)(26 mV))) - 1)
IS = 18.8E-9

The numerical values of IS=18.8n and N=2 are entered in the last line of Table above for comparison to the manufacturer's model for the 1N4004, which is considerably different. RS defaults to 0 for now. It will be estimated later. The important DC static parameters are N, IS, and RS.

Rashid [MHR] suggests that TT, τD, the transit time, be approximated from the reverse recovery stored charge QRR, a data sheet parameter (not available on our data sheet), and IF, the forward current.

τD = QRR/IF

We take the TT=0 default for lack of QRR, though it would be reasonable to take TT for a similar rectifier like the 10A04 at 4.32u. The 1N3891 TT is not a valid choice because it is a fast recovery rectifier. CJO, the zero bias junction capacitance, is estimated from the VR vs CJ graph in Figure above. The capacitance at the nearest to zero voltage on the graph is 30 pF at 1 V.
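The IS derivation above is easy to check numerically. The sketch below simply solves the diode equation for IS using the same graph values (1 A at 0.925 V) and N = 2; it is a check on the arithmetic, not a substitute for the data sheet.

import math

def saturation_current(i_d, v_d, n=2.0, v_t=0.026):
    """Solve ID = IS*(exp(VD/(n*VT)) - 1) for IS."""
    return i_d / math.expm1(v_d / (n * v_t))

# Values read from the 1N4004 data sheet graph, as in the text.
i_s = saturation_current(i_d=1.0, v_d=0.925)
print(f"IS ≈ {i_s:.3g} A")   # ≈ 1.88e-08 A, i.e. the IS=18.8n used in the model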
If simulating high speed transient response, as in switching regulator power supplies, TT and CJO parameters must be provided. The junction grading coefficient M is related to the doping profile of the junction. This is not a data sheet item. The default is 0.5 for an abrupt junction. We opt for M=0.333 corresponding to a linearly graded junction. The power rectifiers in Table above use lower values for M than 0.5.

We take the default values for VJ and EG. Many more diodes use VJ=0.6 than shown in Table above. However, the 10A04 rectifier uses the default, which we use for our 1N4004 model (Da1N4004 in Table above). Use the default EG=1.11 for silicon diodes and rectifiers. Table above lists values for schottky and germanium diodes. Take XTI=3, the default IS temperature coefficient for silicon devices. See Table above for XTI for schottky diodes.

The abbreviated data sheet, Figure above, lists IR = 5 µA @ VR = 400 V, corresponding to IBV=5u and BV=400 respectively. The 1N4004 SPICE parameters derived from the data sheet are listed in the last line of Table above for comparison to the manufacturer's model listed above it. BV is only necessary if the simulation exceeds the reverse breakdown voltage of the diode, as is the case for zener diodes. IBV, reverse breakdown current, is frequently omitted, but may be entered if provided with BV.

Figure below shows a circuit to compare the manufacturer's model, the model derived from the datasheet, and the default model using default parameters. The three dummy 0 V sources are necessary for diode current measurement. The 1 V source is swept from 0 to 1.4 V in 0.2 mV steps. See the .DC statement in the netlist in Table below. DI1N4004 is the manufacturer's diode model, Da1N4004 is our derived diode model.

SPICE circuit for comparison of manufacturer model (D1), calculated datasheet model (D2), and default model (D3). SPICE netlist parameters: (D1) DI1N4004 manufacturer's model, (D2) Da1N4004 datasheet derived, (D3) default diode model.

*SPICE circuit <03468.eps> from XCircuit v3.20
D1 1 5 DI1N4004
V1 5 0 0
D2 1 3 Da1N4004
V2 3 0 0
D3 1 4 Default
V3 4 0 0
V4 1 0 1
.DC V4 0 1400mV 0.2m
.model Da1N4004 D (IS=18.8n RS=0 BV=400 IBV=5.00u CJO=30p
+M=0.333 N=2.0 TT=0)
.MODEL DI1N4004 D (IS=76.9n RS=42.0m BV=400 IBV=5.00u CJO=39.8p
+M=0.333 N=1.45 TT=4.32u)
.MODEL Default D
.end

We compare the three models in Figure below and to the datasheet graph data in Table below. VD is the diode voltage versus the diode currents for the manufacturer's model, our calculated datasheet model, and the default diode model. The last column “1N4004 graph” is from the datasheet voltage versus current curve in Figure above which we attempt to match. Comparison of the currents for the three models to the last column shows that the default model is good at low currents, the manufacturer's model is good at high currents, and our calculated datasheet model is best of all up to 1 A. Agreement is almost perfect at 1 A because the IS calculation is based on diode voltage at 1 A. Our model grossly overstates current above 1 A.

First trial of manufacturer model, calculated datasheet model, and default model. Comparison of manufacturer model, calculated datasheet model, and default model to 1N4004 datasheet graph of V vs I.
index  VD            manufacturer model  datasheet model  default model  1N4004 graph
3500   7.000000e-01  1.612924e+00        1.416211e-02     5.674683e-03   0.01
4001   8.002000e-01  3.346832e+00        9.825960e-02     2.731709e-01   0.13
4500   9.000000e-01  5.310740e+00        6.764928e-01     1.294824e+01   0.7
4625   9.250000e-01  5.823654e+00        1.096870e+00     3.404037e+01   1.0
5000   1.000000e-00  7.395953e+00        4.675526e+00     6.185078e+02   2.0
5500   1.100000e+00  9.548779e+00        3.231452e+01     2.954471e+04   3.3
6000   1.200000e+00  1.174489e+01        2.233392e+02     1.411283e+06   5.3
6500   1.300000e+00  1.397087e+01        1.543591e+03     6.741379e+07   8.0
7000   1.400000e+00  1.621861e+01        1.066840e+04     3.220203e+09   12.

The solution is to increase RS from the default RS=0. Changing RS from 0 to 8m in the datasheet model causes the curve to intersect 10 A (not shown) at the same voltage as the manufacturer's model. Increasing RS to 28.6m shifts the curve further to the right as shown in Figure below. This has the effect of more closely matching our datasheet model to the datasheet graph (Figure above). Table below shows that the current 1.224470e+01 A at 1.4 V matches the graph at 12 A. However, the current at 0.925 V has degraded from 1.096870e+00 above to 7.318536e-01.

Second trial to improve calculated datasheet model compared with manufacturer model and default model. Changing Da1N4004 model statement RS=0 to RS=28.6m decreases the current at VD=1.4 V to 12.2 A.

.model Da1N4004 D (IS=18.8n RS=28.6m BV=400 IBV=5.00u CJO=30p
+M=0.333 N=2.0 TT=0)

index  VD            manufacturer model  datasheet model  1N4004 graph
3505   7.010000e-01  1.628276e+00        1.432463e-02     0.01
4000   8.000000e-01  3.343072e+00        9.297594e-02     0.13
4500   9.000000e-01  5.310740e+00        5.102139e-01     0.7
4625   9.250000e-01  5.823654e+00        7.318536e-01     1.0
5000   1.000000e-00  7.395953e+00        1.763520e+00     2.0
5500   1.100000e+00  9.548779e+00        3.848553e+00     3.3
6000   1.200000e+00  1.174489e+01        6.419621e+00     5.3
6500   1.300000e+00  1.397087e+01        9.254581e+00     8.0
7000   1.400000e+00  1.621861e+01        1.224470e+01     12.

Suggested reader exercise: decrease N so that the current at VD=0.925 V is restored to 1 A. This may increase the current (12.2 A) at VD=1.4 V, requiring an increase of RS to decrease current to 12 A.

Zener diode: There are two approaches to modeling a zener diode: set the BV parameter to the zener voltage in the model statement, or model the zener with a subcircuit containing a diode clamper set to the zener voltage. An example of the first approach sets the breakdown voltage BV to 15 for the 1N4469 15 V zener diode model (IBV optional):

.model D1N4469 D ( BV=15 IBV=17m )

The second approach models the zener with a subcircuit. Clamper D1 and VZ in Figure below model the 15 V reverse breakdown voltage of a 1N4744A zener diode. Diode DR accounts for the forward conduction of the zener in the subcircuit.

.SUBCKT DI-1N4744A 1 2
* Terminals A K
D1 1 2 DF
DZ 3 1 DR
VZ 2 3 13.7
.MODEL DF D ( IS=27.5p RS=0.620 N=1.10
+ CJO=78.3p VJ=1.00 M=0.330 TT=50.1n )
.MODEL DR D ( IS=5.49f RS=0.804 N=1.77 )
.ENDS

Zener diode subcircuit uses clamper (D1 and VZ) to model zener.

Tunnel diode: A tunnel diode may be modeled by a pair of field effect transistors (JFET) in a SPICE subcircuit. [KHM] An oscillator circuit is also shown in this reference.

Gunn diode: A Gunn diode may also be modeled by a pair of JFET's. [ISG] This reference shows a microwave relaxation oscillator.

Contributors to this chapter are listed in chronological order of their contributions, from most recent to first. See Appendix 2 (Contributor List) for dates and contact information.
Jered Wierzbicki (December 2002): Pointed out error in diode equation -- Boltzmann's constant shown incorrectly. Lessons In Electric Circuits copyright (C) 2000-2013 Tony R. Kuphaldt, under the terms and conditions of the Design Science License.
The optimism bias (also known as unrealistic or comparative optimism) is a bias that causes a person to believe that they are less at risk of experiencing a negative event compared to others. There are four factors that cause a person to be optimistically biased: their desired end state, their cognitive mechanisms, the information they have about themselves versus others, and overall mood. The optimistic bias is seen in a number of situations. For example: people believing that they are less at risk of being a crime victim, smokers believing that they are less likely to contract lung cancer or disease than other smokers, first-time bungee jumpers believing that they are less at risk of an injury than other jumpers, or traders who think they are less exposed to losses in the markets. Although the optimism bias occurs for both positive events, such as believing oneself to be more financially successful than others, and negative events, such as being less likely to have a drinking problem, there is more research and evidence suggesting that the bias is stronger for negative events. However, different consequences result from these two types of events: positive events often lead to feelings of well-being and self-esteem, while negative events lead to consequences involving more risk, such as engaging in risky behaviors and not taking precautionary measures for safety. The optimistic bias is typically measured through two determinants of risk: absolute risk, where individuals are asked to estimate their likelihood of experiencing a negative event compared to their actual chance of experiencing a negative event (comparison against self), and comparative risk, where individuals are asked to estimate the likelihood of experiencing a negative event (their personal risk estimate) compared to others of the same age and sex (a target risk estimate). Problems can occur when trying to measure absolute risk because it is extremely difficult to determine the actual risk statistic for a person. Therefore, the optimistic bias is primarily measured in comparative risk forms, where people compare themselves against others, through direct and indirect comparisons. Direct comparisons ask whether an individual's own risk of experiencing an event is lower than, higher than, or equal to someone else's risk, while indirect comparisons ask individuals to provide separate estimates of their own risk of experiencing an event and others' risk of experiencing the same event. After obtaining scores, researchers are able to use the information to determine if there is a difference in the average risk estimate of the individual compared to the average risk estimate of their peers. Generally, for negative events, the mean risk estimate of an individual appears lower than the mean risk estimate for others. This is then used to demonstrate the bias' effect. The optimistic bias can only be defined at a group level, because at an individual level the positive assessment could be true. Likewise, difficulties can arise in measurement procedures, as it is difficult to determine when someone is being optimistic, realistic, or pessimistic. Research suggests that the bias comes from an overestimate of group risks rather than underestimating one's own risk. The factors leading to the optimistic bias can be categorized into four different groups: desired end states of comparative judgment, cognitive mechanisms, information about the self versus a target, and underlying affect. These are explained in more detail below.
Desired end states of comparative judgment Many explanations for the optimistic bias come from the goals that people want and outcomes they wish to see. People tend to view their risks as less than others because they believe that this is what other people want to see. These explanations include self-enhancement, self-presentation, and perceived control. Self-enhancement suggests that optimistic predictions are satisfying and that it feels good to think that positive events will happen. People can control their anxiety and other negative emotions if they believe they are better off than others. People tend to focus on finding information that supports what they want to see happen, rather than what will happen to them. With regards to the optimistic bias, individuals will perceive events more favorably, because that is what they would like the outcome to be. This also suggests that people might lower their risks compared to others to make themselves look better than average: they are less at risk than others and therefore better. Studies suggest that people attempt to establish and maintain a desired personal image in social situations. People are motivated to present themselves towards others in a good light, and some researchers suggest that the optimistic bias is a representative of self-presentational processes:people want to appear more well off than others. However, this is not through conscious effort. In a study where participants believed their driving skills would be either tested in either real-life or driving simulations, people who believed they were to be tested had less optimistic bias and were more modest about their skills than individuals who would not be tested. Studies also suggest that individuals who present themselves in a pessimistic and more negative light are generally less accepted by the rest of society. This might contribute to overly optimistic attitudes. Personal control/perceived control People tend to be more optimistically biased when they believe they have more control over events than others. For example, people are more likely to think that they will not be harmed in a car accident if they are driving the vehicle. Another example is that if someone believes that they have a lot of control over becoming infected with HIV, they are more likely to view their risk of contracting the disease to be low. Studies have suggested that the greater perceived control someone has, the greater their optimistic bias. Stemming from this, control is a stronger factor when it comes to personal risk assessments, but not when assessing others. A meta-analysis reviewing the relationship between the optimistic bias and perceived control found that a number of moderators contribute to this relationship. In previous research, participants from the United States generally had higher levels of optimistic bias relating to perceived control than those of other nationalities. Students also showed larger levels of the optimistic bias than non-students. The format of the study also demonstrated differences in the relationship between perceived control and the optimistic bias: direct methods of measurement suggested greater perceived control and greater optimistic bias as compared to indirect measures of the bias. The optimistic bias is strongest in situations where an individual needs to rely heavily on direct action and responsibility of situations. An opposite factor of perceived control is that of prior experience. 
Prior experience is typically associated with less optimistic bias, which some studies suggest is due either to a decrease in the perception of personal control, or to prior experience making it easier for individuals to imagine themselves at risk. Prior experience suggests that events may be less controllable than previously believed. The optimistic bias is possibly also influenced by three cognitive mechanisms that guide judgments and decision-making processes: the representativeness heuristic, singular target focus, and interpersonal distance. The estimates of likelihood associated with the optimistic bias are based on how closely an event matches a person's overall idea of the specific event. Some researchers suggest that the representativeness heuristic is a reason for the optimistic bias: individuals tend to think in stereotypical categories rather than about their actual targets when making comparisons. For example, when drivers are asked to think about a car accident, they are more likely to picture a bad driver, rather than just the average driver. Individuals compare themselves with the negative elements that come to mind, rather than making an overall accurate comparison between themselves and another driver. Additionally, when individuals were asked to compare themselves with friends, they chose more vulnerable friends based on the events they were looking at. Individuals generally chose a specific friend based on whether they resembled a given example, rather than just an average friend. People find examples that relate directly to what they are asked, resulting in representativeness heuristics.

Singular target focus

One of the difficulties of the optimistic bias is that people know more about themselves than they do about others. While individuals know how to think about themselves as a single person, they still think of others as a generalized group, which leads to biased estimates and an inability to sufficiently understand their target or comparison group. Likewise, when making judgments and comparisons about their risk compared to others, people generally ignore the average person, but primarily focus on their own feelings and experiences. Perceived risk differences occur depending on how far or close a compared target is to an individual making a risk estimate. The greater the perceived distance between the self and the comparison target, the greater the perceived difference in risk. When one brings the comparison target closer to the individual, risk estimates appear closer together than if the comparison target were someone more distant to the participant. There is support for the role of perceived social distance in determining the optimistic bias. Comparisons of personal and target risk at the in-group level contribute to more perceived similarity than comparisons with out-groups, which lead to greater perceived differences. In one study, researchers manipulated the social context of the comparison group, where participants made judgements for two different comparison targets: the typical student at their university and a typical student at another university. Their findings showed that not only did people work with the closer comparison first, but their ratings for the closer target were also closer to their own than were their ratings for the "more different" group. Studies have also noticed that people demonstrate more optimistic bias when making comparisons when the other is a vague individual, but biases are reduced when the other is a familiar person, such as a friend or family member.
This also is determined due to the information they have about the individuals closest to them, but not having the same information about other people. Information about self versus target Individuals know a lot more about themselves than they do about others. Because information about others is less available, information about the self versus others leads people to make specific conclusions about their own risk, but results in them having a harder time making conclusions about the risks of others. This leads to differences in judgments and conclusions about self-risks compared to the risks of others, leading to larger gaps in the optimistic bias. Person-positivity bias is the tendency to evaluate an object more favorably the more the object resembles an individual human being. Generally, the more a comparison target resembles a specific person, the more familiar it will be. However, groups of people are considered to be more abstract concepts, which leads to less favorable judgments. With regards to the optimistic bias, when people compare themselves to an average person, whether someone of the same sex or age, the target continues to be viewed as less human and less personified, which will result in less favorable comparisons between the self and others. Egocentric thinking refer to how individuals know more of their own personal information and risk that they can use to form judgments and make decisions. One difficulty, though, is that people have a large amount of knowledge about themselves, but no knowledge about others. Therefore, when making decisions, people have to use other information available to them, such as population data, in order to learn more about their comparison group. This can relate to an optimism bias because while people are using the available information they have about themselves, they have more difficulty understanding correct information about others. This self-centered thinking is seen most commonly in adolescents and college students, who generally think more about themselves than others. It is also possible that someone can escape egocentric thinking. In one study, researchers had one group of participants list all factors that influenced their chances of experiencing a variety of events, and then a second group read the list. Those who read the list showed less optimistic bias in their own reports. It's possible that greater knowledge about others and their perceptions of their chances of risk bring the comparison group closer to the participant. Underestimating average person's control Also regarding egocentric thinking, it is possible that individuals underestimate the amount of control the average person has. This is explained in two different ways: - People underestimate the control that others have in their lives. - People completely overlook that others have control over their own outcomes. For example, many smokers believe that they are taking all necessary precautionary measures so that they won't get lung cancer, such as smoking only once a day, or using filtered cigarettes, and believe that others are not taking the same precautionary measures. However, it is likely that many other smokers are doing the same things. The last factor of optimistic bias is that of underlying affect and affect experience. Research has found that people show less optimistic bias when experiencing a negative mood, and more optimistic bias when in a positive mood. 
Sad moods are associated with greater recall of negative events, which leads to more negative judgments, while positive moods promote happy memories and more positive feelings. This suggests that overall negative moods, including depression, result in increased personal risk estimates but less optimistic bias overall. Anxiety also leads to less optimistic bias, further suggesting that overall positive experiences and positive attitudes lead to more optimistic bias about events.
In health, the optimistic bias tends to prevent individuals from taking preventative measures for good health. Therefore, researchers need to be aware of the optimistic bias and the ways it can prevent people from taking precautionary measures in life choices. For example, people who underestimate their comparative risk of heart disease know less about heart disease, and even after reading an article with more information, are still less concerned about their risk of heart disease. Because the optimistic bias can be a strong force in decision-making, it is important to look at how risk perception is determined and how this results in preventative behaviors. Risk perceptions are particularly important for individual behaviors, such as exercise, diet, and even sunscreen use. A large portion of risk prevention focuses on adolescents. Especially with health risk perception, adolescence is associated with an increased frequency of risky health-related behaviors such as smoking, drugs, and unsafe sex. While adolescents are aware of the risk, this awareness does not change behavior habits. Adolescents with a strong positive optimistic bias toward risky behaviors showed an overall increase in the optimistic bias with age. However, these tests often have methodological problems. Unconditional risk questions in cross-sectional studies are used consistently, which leads to problems, as they ask about the likelihood of an action occurring but do not determine whether there was an outcome, nor do they compare events that have not happened to events that have. Concerning vaccines, perceptions of those who have not been vaccinated are compared to the perceptions of people who have been. Other problems that arise include the failure to know a person's perception of a risk. Knowing this information will be helpful for continued research on optimistic bias and preventative behaviors.

Attempts to alter and eliminate
Studies have shown that it is very difficult to eliminate the optimistic bias; however, some believe that trying to reduce it will encourage people to adopt health-protective behaviors. Other researchers suggest that the optimistic bias cannot be reduced, and that attempts to reduce it generally leave people even more optimistically biased. In a study of four different tests intended to reduce the optimistic bias – through lists of risk factors, having participants perceive themselves as inferior to others, asking participants to think of high-risk individuals, and having them give attributes of why they were at risk – all of the attempts increased the bias rather than decreased it. Although studies have tried to reduce the optimistic bias by reducing distance, overall the optimistic bias still remains. Although research has suggested that it is very difficult to eliminate the bias, some factors may help close the gap of the optimistic bias between an individual and their target risk group.
First, by placing the comparison group closer to the individual, the optimistic bias can be reduced: studies found that when individuals were asked to make comparisons between themselves and close friends, there was almost no difference in the likelihood of an event occurring. Additionally, actually experiencing an event leads to a decrease in the optimistic bias. While this applies only to events that have been experienced before, knowing what was previously unknown results in less optimism that the event will not occur.

Policy, planning, and management
Optimism bias influences decisions and forecasts in policy, planning, and management; for example, the costs and completion times of planned decisions tend to be underestimated and the benefits overestimated due to optimism bias. The term planning fallacy for this effect was first proposed by Daniel Kahneman and Amos Tversky. The opposite of the optimism bias is the pessimism bias: the principles of the optimistic bias continue to be in effect in situations where individuals regard themselves as worse off than others. Optimism may occur from either a distortion of personal estimates, representing personal optimism, or a distortion for others, representing personal pessimism.

- Shepperd, James A.; Patrick Carroll, Jodi Grace, Meredith Terry (2002). "Exploring the Causes of Comparative Optimism". Psychologica Belgica 42: 65–98.
- Chapin, John; Grace Coleman (2009). "Optimistic Bias: What you Think, What you Know, or Whom you Know?". North American Journal of Psychology 11 (1): 121–132.
- Weinstein, Neil D.; William M. Klein (1996). "Unrealistic Optimism: Present and Future". Journal of Social and Clinical Psychology 15 (1): 1–8. doi:10.1521/jscp.19188.8.131.52.
- Elder, Alexander (1993). Trading for a Living: Psychology, Trading Tactics, Money Management. John Wiley & Sons. Intro – sections "Psychology is the Key" & "The Odds are against You", and Part I "Individual Psychology", Section 5 "Fantasy versus Reality". ISBN 0-471-59224-2.
- Gouveia, Susana O.; Valerie Clarke (2001). "Optimistic bias for negative and positive events". Health Education 101 (5): 228–234. doi:10.1108/09654280110402080.
- Helweg-Larsen, Marie; James A. Shepperd (2001). "Do Moderators of the Optimistic Bias Affect Personal or Target Risk Estimates? A Review of the Literature". Personality and Social Psychology Review 5 (1): 74–95. doi:10.1207/S15327957PSPR0501_5.
- Klein, Cynthia T. F.; Marie Helweg-Larsen (2002). "Perceived Control and the Optimistic Bias: A Meta-analytic Review". Psychology and Health 17 (4): 437–446. doi:10.1080/0887044022000004920.
- Radcliffe, Nathan M.; William M. P. Klein (2002). "Dispositional, Unrealistic, and Comparative Optimism: Differential Relations with the Knowledge and Processing of Risk Information and Beliefs about Personal Risk". Personality and Social Psychology Bulletin 28: 836–846. doi:10.1177/0146167202289012.
- McKenna, F. P.; R. A. Stanier, C. Lewis (1991). "Factors underlying illusionary self-assessment of driving skill in males and females". Accident Analysis and Prevention 23: 45–52. doi:10.1016/0001-4575(91)90034-3. PMID 2021403.
- Helweg-Larsen, Marie; Pedram Sadeghian, Mary S. Webb (2002). "The stigma of being pessimistically biased". Journal of Social and Clinical Psychology 21 (1): 92–107.
- Harris, Peter (1996). "Sufficient grounds for optimism?: The relationship between perceived controllability and optimistic bias".
Journal of Social and Clinical Psychology 15 (1): 9–52.
- Weinstein, Neil D. (1980). "Unrealistic optimism about future life events". Journal of Personality and Social Psychology 39: 806–820. doi:10.1037/0022-35184.108.40.2066.
- Perloff, Linda S.; Barbara K. Fetzer (1986). "Self-other judgments and perceived vulnerability to victimization". Journal of Personality and Social Psychology 50: 502–510. doi:10.1037/0022-35220.127.116.112.
- Harris, P.; Wendy Middleton, Richard Joiner (2000). "The typical student as an in-group member: eliminating optimistic bias by reducing social distance". European Journal of Social Psychology 30: 235–253. doi:10.1002/(SICI)1099-0992(200003/04)30:2<235::AID-EJSP990>3.0.CO;2-G.
- Weinstein, Neil D. (1987). "Unrealistic Optimism About Susceptibility to Health Problems: Conclusions from a Community-Wide Sample". Journal of Behavioral Medicine 10 (5): 481–500. doi:10.1007/BF00846146. PMID 3430590.
- Bränström, Richard; Yvonne Brandberg (2010). "Health Risk Perception, Optimistic Bias, and Personal Satisfaction". American Journal of Health Behavior 34 (2): 197–205. PMID 19814599.
- Brewer, Noel T.; Gretchen B. Chapman, Fredrick X. Gibbons, Meg Gerrard, Kevin D. McCaul, Neil D. Weinstein (2007). "Meta-analysis of the Relationship Between Risk Perception and Health Behavior: The Example of Vaccination". Health Psychology 26 (2): 136–145. doi:10.1037/0278-618.104.22.168.
- Gerrard, Meg; Frederick X. Gibbons, Alida C. Benthin, Robert M. Hessling (1996). "A Longitudinal Study of the Reciprocal Nature of Risk Behaviors and Cognitions in Adolescents: What You Do Shapes What You Think, and Vice Versa". Health Psychology 15 (5): 344–354. PMID 8891713.
- Weinstein, Neil D.; William M. Klein (1995). "Resistance of Personal Risk Perceptions to Debiasing Interventions". Health Psychology 14 (2): 132–140. doi:10.1037/0278-622.214.171.124. PMID 7789348.
- Pezzo, Mark V.; Litman, Jordan A.; Pezzo, Stephanie P. (2006). "On the distinction between yuppies and hippies: Individual differences in prediction biases for planning future tasks". Personality and Individual Differences 41 (7): 1359–1371. doi:10.1016/j.paid.2006.03.029. ISSN 0191-8869.
- Kahneman, Daniel; Tversky, Amos (1979). "Intuitive prediction: biases and corrective procedures". TIMS Studies in Management Science 12: 313–327.
http://en.wikipedia.org/wiki/Optimism_bias
Percents - GMAT Math Study Guide

The Concept of Percentages
Percent, when broken apart, literally means per 100. A percent represents a part of 100. For example, 20% means 20 per 100. Since a percent is an amount per 100, percents can be represented as fractions with a denominator of 100.
55% = 55/100
100% = 100/100
125% = 125/100
250% = 250/100
0.5% = 5/1000
When a percentage is represented as a fraction, it can be added, subtracted, multiplied, and divided just like any other fraction. A percent can also be represented as a decimal. The following relationship characterizes how percents and decimals interact: decimal form = percent * (.01). Stated in sentence form: to move from percent form to decimal form, move the decimal point two slots to the left. Consider the following examples:
What is 5% represented as a decimal? 5 * (.01) = .05 Note: .05 is the result of taking 5 and moving the decimal point two slots to the left.
What is 130% represented as a decimal? 130 * (.01) = 1.3 Note: 1.3 is the result of taking 130 and moving the decimal point two slots to the left.
What is 0.5% represented as a decimal? 0.5 * (.01) = 0.005 Note: 0.005 is the result of taking 0.5 and moving the decimal point two slots to the left.
When a percentage is represented as a decimal, it can be added, subtracted, multiplied, and divided just like any other number. The following chart lays out the relationship between percents, fractions, and decimals.

Percent Change vs. Percent Of
While most students find percentages to be an easier topic than one such as combinatorics, some individuals initially trip on the difference between a percent change and a percent of a number. Practically, this is the difference between saying "the price jumped 50%" and "the current price is 150% of the old price." Both of these phrases refer to the same amount, but are stated differently. Percents are commonly used to measure or report the change in an amount. For example, a news reporter might say, "stocks rose 1.5% today" or a demographer might write, "minority representation in the population fell 3.5% during the past decade." The formula for calculating percent changes is:
Percent Change [as a percent] = ((End Value - Start Value)/Start Value) * 100
This formula can also be expressed in decimal form. In other words, the following formula calculates the percent change between two numbers and represents this change in decimal form:
Percent Change [as a decimal] = (End Value - Start Value)/Start Value
The following examples illustrate the use of this formula.
End Value = 9, Start Value = 10
Percent Change [as a percent] = ((9 - 10)/10) * 100 = -.1 * 100 = -10%
End Value = 60, Start Value = 50
Percent Change [as a percent] = ((60 - 50)/50) * 100 = .2 * 100 = 20%
It is possible to calculate the percent change of a percent. Consider the following example:
End Value = 85% = .85, Start Value = 75% = .75
Using Percents: Percent Change [as a percent] = ((85% - 75%)/75%) * 100 = .133 * 100 = 13.3%
Using Decimals: Percent Change [as a decimal] = (.85 - .75)/.75 = .133

A Common Mistake in Working With Percent Decreases
Some students confuse a percent decrease of a certain percentage with finding the percent of a certain amount. The following example elucidates this confusion. Suppose an index stood at 5000 last year and its value fell 45% this year; what is the index's value today?
Common Mistake: IndexToday = 5000(.45)
This calculation yields 45% of last year's index value. However, the question pertains to a 45% fall. Since the index's value fell 45%, its current value is 100% - 45% = 55% of last year's index value.
Correct Calculation: IndexToday = 5000(1 - .45) = 5000(.55) = 2750
Another common use of percents is as a measure of another number.
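The percent-change formula and the percent-decrease pitfall above can be checked numerically. Below is a minimal Python sketch (not part of the original guide; the function name percent_change is my own) that reproduces the worked numbers.

def percent_change(start, end):
    """Return the percent change from start to end, expressed as a percent."""
    return (end - start) / start * 100

# Worked examples from the text
print(percent_change(10, 9))        # -10.0 -> a 10% fall
print(percent_change(50, 60))       # 20.0  -> a 20% rise
print(percent_change(0.75, 0.85))   # 13.33... -> percent change of a percent

# Common mistake with percent decreases (index at 5000 falls 45%)
last_year = 5000
wrong = last_year * 0.45            # 2250: this is 45% OF last year's value
correct = last_year * (1 - 0.45)    # 2750: the value AFTER a 45% fall
print(wrong, correct)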
For example, a stock analyst might say, "MicroMake's stock is trading at 130% of MacroMake's stock price." Similarly, a political historian might say, "President George W. Bush's approval rating in late November 2004 was about 50%, which is about 55% of his approval rating in late September 2001." In these instances, percents are being used not to describe change, but to compare amounts or quantities. When working with percents that are used to compare different quantities, it is often best to translate each percent into decimals and set up equations or ratios. Consider the following examples:
What is 50% of 40?
Translate 50% into decimal format: 50% = .5
Translate the question into an equation: .5(40) = ?
.5(40) = 20
The following is a slightly more difficult example: 20 is what percent of 80?
Let X = the percent as a decimal
Translate the question into an equation: X(80) = 20
X = 20/80 = 1/4 = .25
Translate X into a percent: .25(100) = 25%
Percents can also be used to compare the size of percents. Consider the example with President George W. Bush's approval rating mentioned above.
Let A = President Bush's approval rating in late September 2001
Condense the question down to simplify: 50% is 55% of A
Translate into an equation: .50 = .55A
A [as a decimal] = .9
A [as a percent] = .9(100) = 90%
If a number rises by 30% and then falls by 35%, by what percent did it change from beginning to end? The topic of recursive (or successive) percents addresses this question. Consider an example: suppose the Dow rose 30% from the beginning of 2004 through the end of 2007 and then fell 35% during 2008.
Let DowBeginning of 2004 = X
DowEnd of 2007 = X(1 + 30%) = X(1.3)
DowEnd of 2008 = [X(1.3)](1 - .35) = X(.845)
Percent Change = ((End - Start)/Start)*100
Percent Change = ((X(.845) - X)/X)*100 = -15.5%

Strategy: Picking Numbers (Especially 100)
Many students find it easier to solve problems involving percents by picking numbers instead of using theoretical variables. The previous question can be solved this way:
Let DowBeginning of 2004 = 100 [pick the number 100 instead of using a variable]
DowEnd of 2007 = 100(1.3)
DowEnd of 2008 = 100(1.3)(1 - .35) = 84.5
The choice of 100 as a value for the Dow at the beginning of 2004 makes calculating the percent change from 2004 through 2008 much easier, as the next step should indicate.
Percent Change = ((End - Start)/Start)*100
Percent Change = ((84.5 - 100)/100)*100 = -15.5%

Interest Rate Problems
One rather common and important application of percents is the topic of interest rates and money. An important formula that relates interest, principal, and time follows:
I = PRT
I = Interest Payment
P = Principal
R = Interest Rate
T = Time Period
For example, suppose $100,000 is borrowed for 10 years at a 5% simple annual interest rate; how much interest is owed in the first year?
T = 1 since the question asks for the interest, I, in the first year (i.e., a one year time period -- not the entire 10 year time period)
P = $100,000
R = 5% = 0.05
I = $100,000(.05)(1) = $5,000
While the above formula helps solve many problems, there are other problems that require another formula. The following formula is fundamental to the relationship between interest, time, present value, and future value:
FV = PV(1 + r)^t
FV = Future Value = The amount of money to be received or owed at a future date t time periods from now
PV = Present Value = The amount of money to be received or owed at present (i.e., now)
r = Interest Rate = The interest rate on the money, expressed as a decimal
t = Time = The amount of time to pass between PV and FV
Note: The time period, t, and interest rate, r, must be expressed in the same terms. For example, one cannot use an annual interest rate and express time in terms of months. If you are using a value of t that expresses time in months, you must use a monthly interest rate.
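As a numeric check of the successive-percent and interest formulas above, here is a short Python sketch (illustrative only; apply_changes is an assumed helper name, not from the guide).

def apply_changes(value, *changes_in_percent):
    """Apply successive percent changes, e.g. +30 then -35."""
    for p in changes_in_percent:
        value *= 1 + p / 100
    return value

start = 100                          # picking 100 keeps the arithmetic simple
end = apply_changes(start, 30, -35)
print(end)                           # 84.5
print((end - start) / start * 100)   # -15.5 -> net percent change

# Simple interest: I = P * R * T
print(100_000 * 0.05 * 1)            # 5000.0 -> interest owed in the first year

# Future value: FV = PV * (1 + r) ** t
print(100_000 * (1 + 0.05) ** 2)     # 110250.0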
For more on this topic, see the compound interest section. The following is an example of a common introductory interest rate problem: if $100,000 is invested today at 5% interest compounded annually, what will it be worth in two years?
PV = $100,000
r = 5% = 0.05
t = 2
FV = $100,000(1 + .05)^2 = $110,250

Types of GMAT Problems
- Expressing Percentages
If X is Y percent of Z, then the following arithmetic statement is true: X/Z = Y/100. What percent of 60 is 25? Correct Answer: C
- The phrasing of the question is difficult for some students. "What percent of 60 is 25?" is the same question as "25 is what percent of 60?" Many students find this latter way of phrasing the question easier to work with.
- If X is Y percent of Z, then X/Z = Y/100. For example, 10 is 50% of 20; X = 10, Y = 50, and Z = 20.
- Similarly, if you wanted to know what percent of 100 is 50, you would intuitively know that it is 50% since it is 50/100 or 50 per cent (literally, per 100).
- Applying this logic to the problem at hand: 25/60 ≈ .42 = 42%
- Thus, 25 is 42% of 60.
- If this way of solving the problem is difficult to conceptualize, consider another approach. It should be clear that 30 is 50% (or 1/2) of 60. Since 25 is less than 30, 25 must be less than 50% of 60. This means that any answer that is not less than 50% is wrong.
- Since 25% is 1/4 and 1/4 of 60 is 15 (since 15*4 = 60), 25% is too small. By process of elimination, the answer is 42%.
- Determining the Percent Change
The formula for percent change is: % Change = ((F - I)/I) x 100, where F is the final value and I is the initial value. A house sold for $500,000 in 1990 and sold ten years later for $400,000. By what percent did the value of the house change? Correct Answer: D
- % Change = ((F - I)/I) x 100, where F is the final value and I is the initial value.
- ($400,000 - $500,000)/$500,000 = -0.2. A negative value means that the house fell in value. Thus, the value of the house dropped by 20%.
- Comparing Percentages
More advanced problems will require an understanding of percentage (or fraction) comparisons. To solve these, a simple relationship between two fractions must be discovered and solved. In a large forest, 300 deer were caught, tagged, and returned during 2001. During 2002, 500 deer were caught at random, of which only 20 had tags from the previous year. If the percent of deer in the forest that had tags during the second year and were caught in the 500 deer sample is representative of the percent of the total deer population in the forest with tags, what is the total deer population in the forest (assuming no change in population between 2001 and 2002)? Correct Answer: E
- Let N = the total number of deer in the forest.
- During the first year, the percent of deer in the entire population with tags was 300/N.
- 20/500 is the percent of deer caught during the second year that had tags. Since this sample percent matches the percent for the entire population (i.e., the total number of tagged deer divided by the total number of deer), the two ratios are equal.
- Equating these two percents: 20/500 = 300/N. Solving for N: N = (300)(500/20) = 7,500.
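The three problem types above can also be verified with a few lines of Python (a sketch with my own variable names, not part of the study guide).

# Expressing percentages: what percent of 60 is 25?
print(25 / 60 * 100)                           # 41.66... ~ 42%

# Percent change: house sold for $500,000, later for $400,000
print((400_000 - 500_000) / 500_000 * 100)     # -20.0 -> a 20% drop

# Comparing percentages: tagged-deer proportion
tagged_2001, tagged_in_sample, sample_size = 300, 20, 500
N = tagged_2001 * sample_size / tagged_in_sample
print(N)                                       # 7500.0 deer in the forest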
http://www.platinumgmat.com/gmat_study_guide/percents
Coordination Compounds are the backbone of modern inorganic and bio-inorganic chemistry and the chemical industry. In the previous Unit we learnt that the transition metals form a large number of complex compounds in which the metal atoms are bound to a number of anions or neutral molecules. In modern terminology such compounds are called coordination compounds. The chemistry of coordination compounds is an important and challenging area of modern inorganic chemistry. New concepts of chemical bonding and molecular structure have provided insights into the functioning of vital components of biological systems. Chlorophyll, haemoglobin and vitamin B12 are coordination compounds of magnesium, iron and cobalt respectively. A variety of metallurgical processes, industrial catalysts and analytical reagents involve the use of coordination compounds. Coordination compounds also find many applications in electroplating, textile dyeing and medicinal chemistry.

9.1 Werner's Theory of Coordination Compounds
Alfred Werner (1866-1919), a Swiss chemist, was the first to formulate his ideas about the structures of coordination compounds. He prepared and characterised a large number of coordination compounds and studied their physical and chemical behaviour by simple experimental techniques. Werner proposed the concept of a primary valence and a secondary valence for a metal ion. Binary compounds such as CrCl3, CoCl2 or PdCl2 have primary valences of 3, 2 and 2 respectively. In a series of compounds of cobalt(III) chloride with ammonia, it was found that some of the chloride ions could be precipitated as AgCl on adding excess silver nitrate solution in the cold, but some remained in solution.
1 mol CoCl3.6NH3 (Yellow) gave 3 mol AgCl
1 mol CoCl3.5NH3 (Purple) gave 2 mol AgCl
1 mol CoCl3.4NH3 (Green) gave 1 mol AgCl
1 mol CoCl3.4NH3 (Violet) gave 1 mol AgCl
These observations, together with the results of conductivity measurements in solution, can be explained if (i) six groups in all, either chloride ions or ammonia molecules or both, remain bonded to the cobalt ion during the reaction and (ii) the compounds are formulated as shown in Table 9.1, where the atoms within the square brackets form a single entity which does not dissociate under the reaction conditions. Werner proposed the term secondary valence for the number of groups bound directly to the metal ion; in each of these examples the secondary valences are six.
|Colour||Formula||Solution conductivity corresponds to|
|Yellow||[Co(NH3)6]3+ 3Cl−||1:3 electrolyte|
|Purple||[CoCl(NH3)5]2+ 2Cl−||1:2 electrolyte|
|Green||[CoCl2(NH3)4]+ Cl−||1:1 electrolyte|
|Violet||[CoCl2(NH3)4]+ Cl−||1:1 electrolyte|
Note that the last two compounds in Table 9.1 have identical empirical formula, CoCl3.4NH3, but distinct properties. Such compounds are termed isomers. Werner, in 1898, propounded his theory of coordination compounds. The main postulates are:
1. In coordination compounds metals show two types of linkages (valences) - primary and secondary.
2. The primary valences are normally ionisable and are satisfied by negative ions.
3. The secondary valences are non-ionisable. These are satisfied by neutral molecules or negative ions. The secondary valence is equal to the coordination number and is fixed for a metal.
4. The ions/groups bound by the secondary linkages to the metal have characteristic spatial arrangements corresponding to different coordination numbers.
In modern formulations, such spatial arrangements are called coordination polyhedra. The species within the square bracket are coordination entities or complexes and the ions outside the square bracket are called counter ions.
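The link between Werner's formulations and the AgCl data can be expressed as a simple count of the chlorides written outside the square bracket (the ionisable, primary-valence chlorides). The following Python sketch is purely illustrative and assumes the formulations listed in Table 9.1 above.

werner_formulations = {
    "CoCl3.6NH3 (yellow)": ("[Co(NH3)6]Cl3", 3),
    "CoCl3.5NH3 (purple)": ("[CoCl(NH3)5]Cl2", 2),
    "CoCl3.4NH3 (green)":  ("[CoCl2(NH3)4]Cl", 1),
    "CoCl3.4NH3 (violet)": ("[CoCl2(NH3)4]Cl", 1),
}

# Each chloride outside the bracket is ionisable and precipitates as 1 mol AgCl.
for empirical, (formulation, ionisable_cl) in werner_formulations.items():
    print(f"{empirical}: {formulation} -> {ionisable_cl} mol AgCl per mol complex")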
He further postulated that octahedral, tetrahedral and square planar geometrical shapes are more common in coordination compounds of transition metals. Thus, [Co(NH3)6]3+, [CoCl(NH3)5]2+ and [CoCl2(NH3)4]+ are octahedral entities, while [Ni(CO)4] and [PtCl4]2− are tetrahedral and square planar, respectively.
Example 9.1: On the basis of the following observations made with aqueous solutions, assign secondary valences to the metals in the following compounds:
|Formula||Moles of AgCl precipitated per mole of the compounds with excess AgNO3|
Solution: (i) Secondary valence 4 (ii) Secondary valence 6 (iii) Secondary valence 6 (iv) Secondary valence 6 (v) Secondary valence 4

Difference between a double salt and a complex
Both double salts as well as complexes are formed by the combination of two or more stable compounds in stoichiometric ratio. However, they differ in the fact that double salts such as carnallite, KCl.MgCl2.6H2O, Mohr's salt, FeSO4.(NH4)2SO4.6H2O, potash alum, KAl(SO4)2.12H2O, etc. dissociate into simple ions completely when dissolved in water, whereas complex ions such as [Fe(CN)6]4− of K4Fe(CN)6 do not dissociate into Fe2+ and CN− ions.
Werner was born on December 12, 1866, in Mülhouse, a small community in the French province of Alsace. His study of chemistry began in Karlsruhe (Germany) and continued in Zurich (Switzerland), where in his doctoral thesis in 1890, he explained the difference in properties of certain nitrogen-containing organic substances on the basis of isomerism. He extended van't Hoff's theory of the tetrahedral carbon atom and modified it for nitrogen. Werner showed optical and electrical differences between complex compounds based on physical measurements. In fact, Werner was the first to discover optical activity in certain coordination compounds. At the age of 29 he became a full professor at the Technische Hochschule in Zurich in 1895. Alfred Werner was a chemist and educationist. His accomplishments included the development of the theory of coordination compounds. This theory, in which Werner proposed revolutionary ideas about how atoms and molecules are linked together, was formulated in a span of only three years, from 1890 to 1893. The remainder of his career was spent gathering the experimental support required to validate his new ideas. Werner became the first Swiss chemist to win the Nobel Prize in 1913 for his work on the linkage of atoms and the coordination theory.

9.2 Definitions of Some Important Terms Pertaining to Coordination Compounds
(a) Coordination entity
A coordination entity constitutes a central metal atom or ion bonded to a fixed number of ions or molecules. For example, [CoCl3(NH3)3] is a coordination entity in which the cobalt ion is surrounded by three ammonia molecules and three chloride ions. Other examples are [Ni(CO)4], [PtCl2(NH3)2], [Fe(CN)6]4−, [Co(NH3)6]3+.
(b) Central atom/ion
In a coordination entity, the atom/ion to which a fixed number of ions/groups are bound in a definite geometrical arrangement around it is called the central atom or ion. For example, the central atoms/ions in the coordination entities [NiCl2(H2O)4], [CoCl(NH3)5]2+ and [Fe(CN)6]3– are Ni2+, Co3+ and Fe3+, respectively. These central atoms/ions are also referred to as Lewis acids.
(c) Ligands
The ions or molecules bound to the central atom/ion in the coordination entity are called ligands.
These may be simple ions such as Cl−, small molecules such as H2O or NH3, larger molecules such as H2NCH2CH2NH2 or N(CH2CH2NH2)3, or even macromolecules, such as proteins. When a ligand is bound to a metal ion through a single donor atom, as with Cl−, H2O or NH3, the ligand is said to be unidentate. When a ligand can bind through two donor atoms, as in H2NCH2CH2NH2 (ethane-1,2-diamine) or C2O42− (oxalate), the ligand is said to be didentate, and when several donor atoms are present in a single ligand, as in N(CH2CH2NH2)3, the ligand is said to be polydentate. Ethylenediaminetetraacetate ion (EDTA) is an important hexadentate ligand. It can bind through two nitrogen and four oxygen atoms to a central metal ion. When a di- or polydentate ligand uses its two or more donor atoms to bind a single metal ion, it is said to be a chelate ligand. The number of such ligating groups is called the denticity of the ligand. Such complexes, called chelate complexes, tend to be more stable than similar complexes containing unidentate ligands (for reasons see Section 9.8). A ligand which can ligate through two different atoms is called an ambidentate ligand. Examples of such ligands are the NO2− and SCN− ions. The NO2− ion can coordinate either through nitrogen or through oxygen to a central metal atom/ion. Similarly, the SCN− ion can coordinate through the sulphur or nitrogen atom.
(d) Coordination number
The coordination number (CN) of a metal ion in a complex can be defined as the number of ligand donor atoms to which the metal is directly bonded. For example, in the complex ions [PtCl6]2– and [Ni(NH3)4]2+, the coordination numbers of Pt and Ni are 6 and 4, respectively. Similarly, in the complex ions [Fe(C2O4)3]3– and [Co(en)3]3+, the coordination number of both Fe and Co is 6, because C2O42– and en (ethane-1,2-diamine) are didentate ligands. It is important to note here that the coordination number of the central atom/ion is determined only by the number of sigma bonds formed by the ligand with the central atom/ion. Pi bonds, if formed between the ligand and the central atom/ion, are not counted for this purpose.
(e) Coordination sphere
The central atom/ion and the ligands attached to it are enclosed in square brackets and are collectively termed the coordination sphere. The ionisable groups are written outside the bracket and are called counter ions. For example, in the complex K4[Fe(CN)6], the coordination sphere is [Fe(CN)6]4– and the counter ion is K+.
(f) Coordination polyhedron
The spatial arrangement of the ligand atoms which are directly attached to the central atom/ion defines a coordination polyhedron about the central atom. The most common coordination polyhedra are octahedral, square planar and tetrahedral. For example, [Co(NH3)6]3+ is octahedral, [Ni(CO)4] is tetrahedral and [PtCl4]2− is square planar. Fig. 9.1 shows the shapes of different coordination polyhedra.
(g) Oxidation number of central atom
The oxidation number of the central atom in a complex is defined as the charge it would carry if all the ligands are removed along with the electron pairs that are shared with the central atom. The oxidation number is represented by a Roman numeral in parentheses following the name of the coordination entity. For example, the oxidation number of copper in [Cu(CN)4]3– is +1 and it is written as Cu(I).
(h) Homoleptic and heteroleptic complexes
Complexes in which a metal is bound to only one kind of donor groups, e.g., [Co(NH3)6]3+, are known as homoleptic.
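The two bookkeeping rules in this section – coordination number from ligand denticities, and oxidation number from charge balance – can be captured in a few lines of Python. This is an illustrative sketch (the function names are mine), using examples quoted in the text.

def coordination_number(ligands):
    """ligands: list of (count, denticity) pairs; CN counts sigma-donor atoms only."""
    return sum(count * denticity for count, denticity in ligands)

print(coordination_number([(3, 2)]))   # 6 for [Co(en)3]3+ (three didentate en ligands)
print(coordination_number([(6, 1)]))   # 6 for [PtCl6]2-  (six unidentate Cl- ligands)

def oxidation_number(complex_charge, ligand_charges):
    """Oxidation number of the central atom = overall charge - sum of ligand charges."""
    return complex_charge - sum(ligand_charges)

print(oxidation_number(-3, [-1] * 4))  # +1 for Cu in [Cu(CN)4]3-
print(oxidation_number(-4, [-1] * 6))  # +2 for Fe in [Fe(CN)6]4-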
Complexes in which a metal is bound to more than one kind of donor groups, e.g., [Co(NH3)4Cl2]+, are known as heteroleptic.

9.3 Nomenclature of Coordination Compounds
Nomenclature is important in coordination chemistry because of the need to have an unambiguous method of describing formulas and writing systematic names, particularly when dealing with isomers. The formulas and names adopted for coordination entities are based on the recommendations of the International Union of Pure and Applied Chemistry (IUPAC).

9.3.1 Formulas of Mononuclear Coordination Entities
The formula of a compound provides information about its constitution in a concise and convenient manner. Mononuclear coordination entities contain a single central metal atom. The following rules are applied while writing the formulas:
(i) The central atom is listed first.
(ii) The ligands are then listed in alphabetical order. The placement of a ligand in the list does not depend on its charge.
(iii) Polydentate ligands are also listed alphabetically. In the case of an abbreviated ligand, the first letter of the abbreviation is used to determine the position of the ligand in the alphabetical order.
(iv) The formula for the entire coordination entity, whether charged or not, is enclosed in square brackets. When ligands are polyatomic, their formulas are enclosed in parentheses. Ligand abbreviations are also enclosed in parentheses.
(v) There should be no space between the ligands and the metal within a coordination sphere.
(vi) When the formula of a charged coordination entity is to be written without that of the counter ion, the charge is indicated outside the square brackets as a right superscript with the number before the sign. For example, [Co(CN)6]3−, [Cr(H2O)6]3+, etc.
(vii) The charge of the cation(s) is balanced by the charge of the anion(s).

9.3.2 Naming of Mononuclear Coordination Compounds
The names of coordination compounds are derived by following the principles of additive nomenclature. Thus, the groups that surround the central atom must be identified in the name. They are listed as prefixes to the name of the central atom along with any appropriate multipliers. The following rules are used when naming coordination compounds:
(i) The cation is named first in both positively and negatively charged coordination entities.
(ii) The ligands are named in alphabetical order before the name of the central atom/ion. (This procedure is reversed from writing the formula.)
(iii) Names of the anionic ligands end in -o; those of neutral and cationic ligands are the same, except aqua for H2O, ammine for NH3, carbonyl for CO and nitrosyl for NO. These are placed within enclosing marks ( ).
(iv) Prefixes mono, di, tri, etc., are used to indicate the number of the individual ligands in the coordination entity. When the names of the ligands include a numerical prefix, then the terms bis, tris, tetrakis are used, the ligand to which they refer being placed in parentheses. For example, [NiCl2(PPh3)2] is named as dichlorobis(triphenylphosphine)nickel(II).
(v) The oxidation state of the metal in the cation, anion or neutral coordination entity is indicated by a Roman numeral in parentheses.
(vi) If the complex ion is a cation, the metal is named the same as the element. For example, Co in a complex cation is called cobalt and Pt is called platinum. If the complex ion is an anion, the name of the metal ends with the suffix -ate. For example, Co in a complex anion, [Co(SCN)4]2−, is called cobaltate.
For some metals, the Latin names are used in the complex anions, e.g., ferrate for Fe.
(vii) The neutral complex molecule is named similar to that of the complex cation.
The following examples illustrate the nomenclature for coordination compounds.
1. [Cr(NH3)3(H2O)3]Cl3 is named as: triamminetriaquachromium(III) chloride
Explanation: The complex ion is inside the square bracket, and it is a cation. The ammine ligands are named before the aqua ligands according to alphabetical order. Since there are three chloride ions in the compound, the charge on the complex ion must be +3 (since the compound is electrically neutral). From the charge on the complex ion and the charge on the ligands, we can calculate the oxidation number of the metal. In this example, all the ligands are neutral molecules. Therefore, the oxidation number of chromium must be the same as the charge of the complex ion, +3.
2. [Co(H2NCH2CH2NH2)3]2(SO4)3 is named as: tris(ethane-1,2-diamine)cobalt(III) sulphate
Explanation: The sulphate is the counter anion in this molecule. Since it takes 3 sulphates to bond with two complex cations, the charge on each complex cation must be +3. Further, ethane-1,2-diamine is a neutral molecule, so the oxidation number of cobalt in the complex ion must be +3. Remember that you never have to indicate the number of cations and anions in the name of an ionic compound.
3. [Ag(NH3)2][Ag(CN)2] is named as: diamminesilver(I) dicyanoargentate(I)
Example 9.2: Write the formulas for the following coordination compounds: (i) Tetraammineaquachloridocobalt(III) chloride (ii) Potassium tetrahydroxozincate(II) (iii) Potassium trioxalatoaluminate(III)
Example 9.3: Write the IUPAC names of the following coordination compounds: (ii) Potassium trioxalatochromate(III) (iii) Dichloridobis(ethane-1,2-diamine)cobalt(III) chloride (iv) Pentaamminecarbonatocobalt(III) chloride (v) Mercury tetrathiocyanatocobaltate(III)
9.1 Write the formulas for the following coordination compounds: (i) Tetraamminediaquacobalt(III) chloride (ii) Potassium tetracyanonickelate(II) (iii) Tris(ethane-1,2-diamine)chromium(III) chloride (v) Dichloridobis(ethane-1,2-diamine)platinum(IV) nitrate (vi) Iron(III) hexacyanoferrate(II)
9.2 Write the IUPAC names of the following coordination compounds:

9.4 Isomerism in Coordination Compounds
Isomers are two or more compounds that have the same chemical formula but a different arrangement of atoms. Because of the different arrangement of atoms, they differ in one or more physical or chemical properties. Two principal types of isomerism are known among coordination compounds, each of which can be further subdivided:
(a) Stereoisomerism: (i) Geometrical isomerism (ii) Optical isomerism
(b) Structural isomerism: (i) Linkage isomerism (ii) Coordination isomerism (iii) Ionisation isomerism (iv) Solvate isomerism
Stereoisomers have the same chemical formula and chemical bonds but they have a different spatial arrangement. Structural isomers have different bonds. A detailed account of these isomers is given below.

9.4.1 Geometric Isomerism
This type of isomerism arises in heteroleptic complexes due to different possible geometric arrangements of the ligands. Important examples of this behaviour are found with coordination numbers 4 and 6. In a square planar complex of formula [MX2L2] (X and L are unidentate), the two ligands X may be arranged adjacent to each other in a cis isomer, or opposite to each other in a trans isomer, as depicted in Fig. 9.2.
Another square planar complex, of the type [MABXL] (where A, B, X, L are unidentate ligands), shows three isomers - two cis and one trans. You may attempt to draw these structures. Such isomerism is not possible for a tetrahedral geometry, but similar behaviour is possible in octahedral complexes of formula [MX2L4], in which the two ligands X may be oriented cis or trans to each other (Fig. 9.3). Another type of geometrical isomerism occurs in octahedral coordination entities of the type [Ma3b3], like [Co(NH3)3(NO2)3]. If three donor atoms of the same ligands occupy adjacent positions at the corners of an octahedral face, we have the facial (fac) isomer. When the positions are around the meridian of the octahedron, we get the meridional (mer) isomer (Fig. 9.5).
Why is geometrical isomerism not possible in tetrahedral complexes having two different types of unidentate ligands coordinated with the central metal ion? Tetrahedral complexes do not show geometrical isomerism because the relative positions of the unidentate ligands attached to the central metal atom are the same with respect to each other.

9.4.2 Optical Isomerism
Optical isomers are mirror images that cannot be superimposed on one another. These are called enantiomers. The molecules or ions that cannot be superimposed are called chiral. The two forms are called dextro (d) and laevo (l) depending upon the direction in which they rotate the plane of polarised light in a polarimeter (d rotates to the right, l to the left). Optical isomerism is common in octahedral complexes involving didentate ligands (Fig. 9.6). In a coordination entity of the type [PtCl2(en)2]2+, only the cis-isomer shows optical activity (Fig. 9.7).
Out of the following two coordination entities, which is chiral (optically active)? Solution: The two entities are (a) cis-[CrCl2(ox)2]3– and (b) trans-[CrCl2(ox)2]3–. Out of the two, (a) cis-[CrCl2(ox)2]3– is chiral (optically active).

9.4.3 Linkage Isomerism
Linkage isomerism arises in a coordination compound containing an ambidentate ligand. A simple example is provided by complexes containing the thiocyanate ligand, NCS–, which may bind through the nitrogen to give M–NCS or through sulphur to give M–SCN. Jørgensen discovered such behaviour in the complex [Co(NH3)5(NO2)]Cl2, which is obtained as the red form, in which the nitrite ligand is bound through oxygen (–ONO), and as the yellow form, in which the nitrite ligand is bound through nitrogen (–NO2).

9.4.4 Coordination Isomerism
This type of isomerism arises from the interchange of ligands between cationic and anionic entities of different metal ions present in a complex. An example is provided by [Co(NH3)6][Cr(CN)6], in which the NH3 ligands are bound to Co3+ and the CN– ligands to Cr3+. In its coordination isomer [Cr(NH3)6][Co(CN)6], the NH3 ligands are bound to Cr3+ and the CN– ligands to Co3+.

9.4.5 Ionisation Isomerism
This form of isomerism arises when the counter ion in a complex salt is itself a potential ligand and can displace a ligand which can then become the counter ion. An example is provided by the ionisation isomers [Co(NH3)5SO4]Br and [Co(NH3)5Br]SO4.

9.4.6 Solvate Isomerism
This form of isomerism is known as 'hydrate isomerism' in the case where water is involved as the solvent. It is similar to ionisation isomerism. Solvate isomers differ by whether or not a solvent molecule is directly bonded to the metal ion or merely present as a free solvent molecule in the crystal lattice. An example is provided by the aqua complex [Cr(H2O)6]Cl3 (violet) and its solvate isomer [Cr(H2O)5Cl]Cl2.H2O (grey-green).
9.3 Indicate the types of isomerism exhibited by the following complexes and draw the structures for these isomers:
9.4 Give evidence that [Co(NH3)5Cl]SO4 and [Co(NH3)5SO4]Cl are ionisation isomers.

9.5 Bonding in Coordination Compounds
Werner was the first to describe the bonding features in coordination compounds. But his theory could not answer basic questions like:
(i) Why do only certain elements possess the remarkable property of forming coordination compounds?
(ii) Why do the bonds in coordination compounds have directional properties?
(iii) Why do coordination compounds have characteristic magnetic and optical properties?
Many approaches have been put forth to explain the nature of bonding in coordination compounds, viz. Valence Bond Theory (VBT), Crystal Field Theory (CFT), Ligand Field Theory (LFT) and Molecular Orbital Theory (MOT). We shall focus our attention on an elementary treatment of the application of VBT and CFT to coordination compounds.

9.5.1 Valence Bond Theory
According to this theory, the metal atom or ion, under the influence of ligands, can use its (n-1)d, ns, np or ns, np, nd orbitals for hybridisation to yield a set of equivalent orbitals of definite geometry such as octahedral, tetrahedral, square planar and so on (Table 9.2). These hybridised orbitals are allowed to overlap with ligand orbitals that can donate electron pairs for bonding. This is illustrated by the following examples.
|Coordination number||Type of hybridisation||Distribution of hybrid orbitals in space|
|4||sp3||Tetrahedral|
|4||dsp2||Square planar|
|5||sp3d||Trigonal bipyramidal|
|6||d2sp3||Octahedral|
|6||sp3d2||Octahedral|
It is usually possible to predict the geometry of a complex from the knowledge of its magnetic behaviour on the basis of the valence bond theory. In the diamagnetic octahedral complex, [Co(NH3)6]3+, the cobalt ion is in the +3 oxidation state and has the electronic configuration 3d6. The hybridisation scheme is as shown in the diagram. Six pairs of electrons, one from each NH3 molecule, occupy the six hybrid orbitals. Thus, the complex has octahedral geometry and is diamagnetic because of the absence of unpaired electrons. In the formation of this complex, since the inner d orbitals (3d) are used in hybridisation, the complex [Co(NH3)6]3+ is called an inner orbital or low spin or spin paired complex. The paramagnetic octahedral complex [CoF6]3− uses outer orbitals (4d) in hybridisation (sp3d2). It is thus called an outer orbital or high spin or spin free complex. Thus:
In tetrahedral complexes, one s and three p orbitals are hybridised to form four equivalent orbitals oriented tetrahedrally. This is illustrated below for [NiCl4]2−. Here nickel is in the +2 oxidation state and the ion has the electronic configuration 3d8. The hybridisation scheme is as shown in the diagram. Each Cl− ion donates a pair of electrons. The compound is paramagnetic since it contains two unpaired electrons. Similarly, [Ni(CO)4] has tetrahedral geometry but is diamagnetic since nickel is in the zero oxidation state and contains no unpaired electrons.
In the square planar complexes, the hybridisation involved is dsp2. An example is [Ni(CN)4]2–. Here nickel is in the +2 oxidation state and has the electronic configuration 3d8. The hybridisation scheme is as shown in the diagram. Each of the hybridised orbitals receives a pair of electrons from a cyanide ion. The compound is diamagnetic, as evident from the absence of unpaired electrons. It is important to note that the hybrid orbitals do not actually exist. In fact, hybridisation is a mathematical manipulation of the wave equation for the atomic orbitals involved.
9.5.2 Magnetic Properties of Coordination Compounds
The magnetic moment of coordination compounds can be measured by magnetic susceptibility experiments. The results can be used to obtain information about the structures adopted by metal complexes. A critical study of the magnetic data of coordination compounds of metals of the first transition series reveals some complications. For metal ions with up to three electrons in the d orbitals, like Ti3+ (d1), V3+ (d2) and Cr3+ (d3), two vacant d orbitals are available for octahedral hybridisation with 4s and 4p orbitals. The magnetic behaviour of these free ions and their coordination entities is similar. When more than three 3d electrons are present, the required pair of 3d orbitals for octahedral hybridisation is not directly available (as a consequence of Hund's rule). Thus, for d4 (Cr2+, Mn3+), d5 (Mn2+, Fe3+) and d6 (Fe2+, Co3+) cases, a vacant pair of d orbitals results only by pairing of 3d electrons, which leaves two, one and zero unpaired electrons, respectively. The magnetic data agree with maximum spin pairing in many cases, especially with coordination compounds containing d6 ions. However, with species containing d4 and d5 ions there are complications. [Mn(CN)6]3– has a magnetic moment corresponding to two unpaired electrons while [MnCl6]3– has a paramagnetic moment corresponding to four unpaired electrons. [Fe(CN)6]3– has a magnetic moment corresponding to a single unpaired electron while [FeF6]3– has a paramagnetic moment corresponding to five unpaired electrons. [CoF6]3– is paramagnetic with four unpaired electrons while [Co(C2O4)3]3− is diamagnetic. This apparent anomaly is explained by valence bond theory in terms of the formation of inner orbital and outer orbital coordination entities. [Mn(CN)6]3–, [Fe(CN)6]3– and [Co(C2O4)3]3– are inner orbital complexes involving d2sp3 hybridisation; the former two complexes are paramagnetic and the latter diamagnetic. On the other hand, [MnCl6]3–, [FeF6]3– and [CoF6]3– are outer orbital complexes involving sp3d2 hybridisation and are paramagnetic, corresponding to four, five and four unpaired electrons.
The spin-only magnetic moment of [MnBr4]2– is 5.9 BM. Predict the geometry of the complex ion. Since the coordination number of the Mn2+ ion in the complex ion is 4, it will be either tetrahedral (sp3 hybridisation) or square planar (dsp2 hybridisation). But since the magnetic moment of the complex ion is 5.9 BM, it should be tetrahedral in shape rather than square planar, because of the presence of five unpaired electrons in the d orbitals.

9.5.3 Limitations of Valence Bond Theory
While the VB theory, to a large extent, explains the formation, structures and magnetic behaviour of coordination compounds, it suffers from the following shortcomings:
(i) It involves a number of assumptions.
(ii) It does not give a quantitative interpretation of magnetic data.
(iii) It does not explain the colour exhibited by coordination compounds.
(iv) It does not give a quantitative interpretation of the thermodynamic or kinetic stabilities of coordination compounds.
(v) It does not make exact predictions regarding the tetrahedral and square planar structures of 4-coordinate complexes.
(vi) It does not distinguish between weak and strong ligands.

9.5.4 Crystal Field Theory
The crystal field theory (CFT) is an electrostatic model which considers the metal-ligand bond to be ionic, arising purely from electrostatic interactions between the metal ion and the ligand. Ligands are treated as point charges in the case of anions or as dipoles in the case of neutral molecules.
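The [MnBr4]2– example relies on the standard spin-only formula μ = √(n(n+2)) BM, where n is the number of unpaired electrons; the formula itself is not derived in this text, so the short Python sketch below is offered only as an illustration.

import math

def spin_only_moment(n_unpaired):
    """Spin-only magnetic moment in Bohr magnetons for n unpaired electrons."""
    return math.sqrt(n_unpaired * (n_unpaired + 2))

for n in range(1, 6):
    print(n, round(spin_only_moment(n), 2))
# n = 5 gives about 5.92 BM, matching the 5.9 BM quoted for [MnBr4]2-,
# so the ion has five unpaired electrons and is tetrahedral (sp3), not square planar.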
The five d orbitals in an isolated gaseous metal atom/ion have the same energy, i.e., they are degenerate. This degeneracy is maintained if a spherically symmetrical field of negative charges surrounds the metal atom/ion. However, when this negative field is due to ligands (either anions or the negative ends of dipolar molecules like NH3 and H2O) in a complex, it becomes asymmetrical and the degeneracy of the d orbitals is lifted. This results in splitting of the d orbitals. The pattern of splitting depends upon the nature of the crystal field. Let us explain this splitting in different crystal fields.
(a) Crystal field splitting in octahedral coordination entities
In an octahedral coordination entity with six ligands surrounding the metal atom/ion, there will be repulsion between the electrons in the metal d orbitals and the electrons (or negative charges) of the ligands. Such repulsion is greater when the metal d orbital is directed towards the ligand than when it is directed away from the ligand. Thus, the dx2−y2 and dz2 orbitals, which point towards the axes along the direction of the ligands, will experience more repulsion and will be raised in energy, and the dxy, dyz and dxz orbitals, which are directed between the axes, will be lowered in energy relative to the average energy in the spherical crystal field. Thus, the degeneracy of the d orbitals has been removed due to ligand electron-metal electron repulsions in the octahedral complex to yield three orbitals of lower energy, the t2g set, and two orbitals of higher energy, the eg set. This splitting of the degenerate levels due to the presence of ligands in a definite geometry is termed crystal field splitting, and the energy separation is denoted Δo (the subscript o is for octahedral) (Fig. 9.8). Thus, the energy of the two eg orbitals will increase by (3/5)Δo and that of the three t2g orbitals will decrease by (2/5)Δo. The crystal field splitting, Δo, depends upon the field produced by the ligand and the charge on the metal ion. Some ligands are able to produce strong fields, in which case the splitting will be large, whereas others produce weak fields and consequently result in small splitting of the d orbitals. In general, ligands can be arranged in a series in the order of increasing field strength as given below:
I– < Br– < SCN– < Cl– < S2– < F– < OH– < C2O42– < H2O < NCS– < edta4– < NH3 < en < CN– < CO
Such a series is termed a spectrochemical series. It is an experimentally determined series based on the absorption of light by complexes with different ligands. Let us assign electrons to the d orbitals of the metal ion in octahedral coordination entities. In a d1 coordination entity, the single d electron obviously occupies one of the lower energy t2g orbitals. In d2 and d3 coordination entities, the d electrons occupy the t2g orbitals singly, in accordance with Hund's rule. For d4 ions, two possible patterns of electron distribution arise: (i) the fourth electron could either enter the t2g level and pair with an existing electron, or (ii) it could avoid paying the price of the pairing energy by occupying the eg level. Which of these possibilities occurs depends on the relative magnitude of the crystal field splitting, Δo, and the pairing energy, P (P represents the energy required for electron pairing in a single orbital). The two options are:
(i) If Δo < P, the fourth electron enters one of the eg orbitals, giving the configuration t2g3 eg1. Ligands for which Δo < P are known as weak field ligands and form high spin complexes.
(ii) If Δo > P, it becomes more energetically favourable for the fourth electron to occupy a t2g orbital, giving the configuration t2g4 eg0. Ligands which produce this effect are known as strong field ligands and form low spin complexes.
Calculations show that d4 to d7 coordination entities are more stable for strong field as compared to weak field cases.
(b) Crystal field splitting in tetrahedral coordination entities
In tetrahedral coordination entity formation, the d orbital splitting (Fig. 9.9) is inverted and is smaller as compared to the octahedral field splitting. For the same metal, the same ligands and the same metal-ligand distances, it can be shown that Δt = (4/9)Δo. Consequently, the orbital splitting energies are not sufficiently large to force pairing and, therefore, low spin configurations are rarely observed.

9.5.5 Colour in Coordination Compounds
In the previous Unit, we learnt that one of the most distinctive properties of transition metal complexes is their wide range of colours. This means that some of the visible spectrum is being removed from white light as it passes through the sample, so the light that emerges is no longer white. The colour of the complex is complementary to that which is absorbed. The complementary colour is the colour generated from the wavelength left over; if green light is absorbed by the complex, it appears red. Table 9.3 gives the relationship between the different wavelengths absorbed and the colours observed. The colour in coordination compounds can be readily explained in terms of the crystal field theory. Consider, for example, the complex [Ti(H2O)6]3+, which is violet in colour. This is an octahedral complex where the single electron (Ti3+ is a 3d1 system) in the metal d orbital is in the t2g level in the ground state of the complex. The next higher state available for the electron is the empty eg level. If light corresponding to the energy of the yellow-green region is absorbed by the complex, it would excite the electron from the t2g level to the eg level (t2g1 eg0 → t2g0 eg1). Consequently, the complex appears violet in colour (Fig. 9.10). The crystal field theory attributes the colour of coordination compounds to d-d transitions of the electron. It is important to note that in the absence of ligands, crystal field splitting does not occur and hence the substance is colourless. For example, removal of water from [Ti(H2O)6]Cl3 on heating renders it colourless. Similarly, anhydrous CuSO4 is white, but CuSO4.5H2O is blue in colour. The influence of the ligand on the colour of a complex may be illustrated by considering the [Ni(H2O)6]2+ complex, which forms when nickel(II) chloride is dissolved in water. If the didentate ligand ethane-1,2-diamine (en) is progressively added in the molar ratios en:Ni of 1:1, 2:1 and 3:1, the following series of reactions and their associated colour changes occur:
[Ni(H2O)6]2+ (aq, green) + en (aq) = [Ni(H2O)4(en)]2+ (aq, pale blue) + 2H2O
[Ni(H2O)4(en)]2+ (aq) + en (aq) = [Ni(H2O)2(en)2]2+ (aq) + 2H2O
[Ni(H2O)2(en)2]2+ (aq) + en (aq) = [Ni(en)3]2+ (aq) + 2H2O
This sequence is shown in Fig. 9.11. The colours produced by electronic transitions within the d orbitals of a transition metal ion occur frequently in everyday life. Ruby [Fig. 9.12(a)] is aluminium oxide (Al2O3) containing about 0.5-1% Cr3+ ions (d3), which are randomly distributed in positions normally occupied by Al3+.
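The weak-field/strong-field filling rules described above can be turned into a small electron-counting sketch. The Python below is a simplified model of my own (it only applies the Δo-versus-P rule for octahedral complexes) and reproduces the unpaired-electron counts quoted earlier for d5 and d6 ions.

def unpaired_d_electrons(n_d, strong_field):
    """Unpaired d electrons for an octahedral d^n ion.
    strong_field=True  -> low spin  (Delta_o > P): fill t2g before eg
    strong_field=False -> high spin (Delta_o < P): fill all five orbitals singly first
    """
    if strong_field:
        t2g = min(n_d, 6)                      # the t2g set holds at most 6 electrons
        eg = n_d - t2g
        unpaired_t2g = t2g if t2g <= 3 else 6 - t2g
        unpaired_eg = eg if eg <= 2 else 4 - eg
        return unpaired_t2g + unpaired_eg
    return n_d if n_d <= 5 else 10 - n_d

print(unpaired_d_electrons(6, strong_field=True))   # 0 -> e.g. [Co(NH3)6]3+, diamagnetic
print(unpaired_d_electrons(6, strong_field=False))  # 4 -> e.g. [CoF6]3-
print(unpaired_d_electrons(5, strong_field=True))   # 1 -> e.g. [Fe(CN)6]3-
print(unpaired_d_electrons(5, strong_field=False))  # 5 -> e.g. [FeF6]3-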
We may view these chromium(III) species as octahedral chromium(III) complexes incorporated into the alumina lattice; d-d transitions at these centres give rise to the colour. In emerald [Fig. 9.12(b)], Cr3+ ions occupy octahedral sites in the mineral beryl (Be3Al2Si6O18). The absorption bands seen in ruby shift to longer wavelength, namely yellow-red and blue, causing emerald to transmit light in the green region.

9.5.6 Limitations of Crystal Field Theory
The crystal field model is successful in explaining the formation, structures, colour and magnetic properties of coordination compounds to a large extent. However, from the assumption that the ligands are point charges, it follows that anionic ligands should exert the greatest splitting effect; the anionic ligands are actually found at the low end of the spectrochemical series. Further, it does not take into account the covalent character of bonding between the ligand and the central atom. These are some of the weaknesses of CFT, which are explained by ligand field theory (LFT) and molecular orbital theory, which are beyond the scope of the present study.
9.5 Explain on the basis of valence bond theory that the [Ni(CN)4]2− ion with square planar structure is diamagnetic and the [NiCl4]2− ion with tetrahedral geometry is paramagnetic.
9.6 [NiCl4]2− is paramagnetic while [Ni(CO)4] is diamagnetic though both are tetrahedral. Why?
9.7 [Fe(H2O)6]3+ is strongly paramagnetic whereas [Fe(CN)6]3− is weakly paramagnetic. Explain.
9.8 Explain why [Co(NH3)6]3+ is an inner orbital complex whereas [Ni(NH3)6]2+ is an outer orbital complex.
9.9 Predict the number of unpaired electrons in the square planar [Pt(CN)4]2− ion.
9.10 The hexaaqua manganese(II) ion contains five unpaired electrons, while the hexacyano ion contains only one unpaired electron. Explain using Crystal Field Theory.

9.6 Bonding in Metal Carbonyls
The homoleptic carbonyls (compounds containing carbonyl ligands only) are formed by most of the transition metals. These carbonyls have simple, well defined structures. Tetracarbonylnickel(0) is tetrahedral, pentacarbonyliron(0) is trigonal bipyramidal, while hexacarbonylchromium(0) is octahedral. Decacarbonyldimanganese(0) is made up of two square pyramidal Mn(CO)5 units joined by a Mn–Mn bond. Octacarbonyldicobalt(0) has a Co–Co bond bridged by two CO groups (Fig. 9.13). The metal-carbon bond in metal carbonyls possesses both σ and π character. The M–C σ bond is formed by the donation of a lone pair of electrons on the carbonyl carbon into a vacant orbital of the metal. The M–C π bond is formed by the donation of a pair of electrons from a filled d orbital of the metal into the vacant antibonding π* orbital of carbon monoxide. The metal-to-ligand bonding creates a synergic effect which strengthens the bond between CO and the metal (Fig. 9.14).

9.7 Stability of Coordination Compounds
The stability of a complex in solution refers to the degree of association between the two species involved in the state of equilibrium. The magnitude of the (stability or formation) equilibrium constant for the association quantitatively expresses the stability. Thus, if we have a reaction of the type:
M + 4L ⇌ ML4
then the larger the stability constant, the higher the proportion of ML4 that exists in solution. Free metal ions rarely exist in solution, so that M will usually be surrounded by solvent molecules which will compete with the ligand molecules, L, and be successively replaced by them.
For simplicity, we generally ignore these solvent molecules and write four stability constants as follows: M + L ⇌ ML, K1 = [ML]/[M][L]; ML + L ⇌ ML2, K2 = [ML2]/[ML][L]; ML2 + L ⇌ ML3, K3 = [ML3]/[ML2][L]; ML3 + L ⇌ ML4, K4 = [ML4]/[ML3][L], where K1, K2, etc., are referred to as stepwise stability constants. Alternatively, we can write the overall stability constant thus: M + 4L ⇌ ML4, β4 = [ML4]/[M][L]4 The stepwise and overall stability constants are therefore related as follows: β4 = K1 × K2 × K3 × K4 or, more generally, βn = K1 × K2 × K3 × … × Kn If we take as an example the steps involved in the formation of the cuprammonium ion, we have the following: Cu2+ + NH3 ⇌ Cu(NH3)2+, K1 = [Cu(NH3)2+]/[Cu2+][NH3]; Cu(NH3)2+ + NH3 ⇌ Cu(NH3)22+, K2 = [Cu(NH3)22+]/[Cu(NH3)2+][NH3]; etc., where K1, K2, etc. are the stepwise stability constants and β4 is the overall stability constant. Also β4 = [Cu(NH3)42+]/[Cu2+][NH3]4 The addition of the four ammine groups to copper shows a pattern found for most formation constants, in that the successive stability constants decrease. In this case, the four constants are: logK1 = 4.0, logK2 = 3.2, logK3 = 2.7, logK4 = 2.0 or log β4 = 11.9 The instability constant or the dissociation constant of coordination compounds is defined as the reciprocal of the formation constant. 9.11 Calculate the overall complex dissociation equilibrium constant for the Cu(NH3)42+ ion, given that β4 for this complex is 2.1 × 1013 . 9.8 Importance and Applications of Coordination Compounds The coordination compounds are of great importance. These compounds are widely present in the mineral, plant and animal worlds and are known to play many important functions in the area of analytical chemistry, metallurgy, biological systems, industry and medicine. These are described below: • Coordination compounds find use in many qualitative and quantitative chemical analyses. The familiar colour reactions given by metal ions with a number of ligands (especially chelating ligands), as a result of formation of coordination entities, form the basis for their detection and estimation by classical and instrumental methods of analysis. Examples of such reagents include EDTA, DMG (dimethylglyoxime), α–nitroso–β–naphthol, cupron, etc. • Hardness of water is estimated by simple titration with Na2EDTA. The Ca2+ and Mg2+ ions form stable complexes with EDTA. The selective estimation of these ions can be done due to the difference in the stability constants of the calcium and magnesium complexes. • Some important extraction processes of metals, like those of silver and gold, make use of complex formation. Gold, for example, combines with cyanide in the presence of oxygen and water to form the coordination entity [Au(CN)2]− in aqueous solution. Gold can be separated in metallic form from this solution by the addition of zinc (Unit 6). • Similarly, purification of metals can be achieved through formation and subsequent decomposition of their coordination compounds. For example, impure nickel is converted to [Ni(CO)4], which is decomposed to yield pure nickel. • Coordination compounds are of great importance in biological systems. The pigment responsible for photosynthesis, chlorophyll, is a coordination compound of magnesium. Haemoglobin, the red pigment of blood which acts as oxygen carrier is a coordination compound of iron. Vitamin B12, cyanocobalamine, the anti–pernicious anaemia factor, is a coordination compound of cobalt.
Among the other compounds of biological importance with coordinated metal ions are enzymes like carboxypeptidase A and carbonic anhydrase (catalysts of biological systems). • Coordination compounds are used as catalysts for many industrial processes. An example is the rhodium complex [(Ph3P)3RhCl], known as Wilkinson's catalyst, which is used for the hydrogenation of alkenes. • Articles can be electroplated with silver and gold much more smoothly and evenly from solutions of the complexes, [Ag(CN)2]– and [Au(CN)2]−, than from a solution of simple metal ions. • In black and white photography, the developed film is fixed by washing with hypo solution which dissolves the undecomposed AgBr to form a complex ion, [Ag(S2O3)2]3− . • There is growing interest in the use of chelate therapy in medicinal chemistry. An example is the treatment of problems caused by the presence of metals in toxic proportions in plant/animal systems. Thus, excess of copper and iron are removed by the chelating ligands D–penicillamine and desferrioxamine B via the formation of coordination compounds. EDTA is used in the treatment of lead poisoning. Some coordination compounds of platinum effectively inhibit the growth of tumours. Examples are: cis–platin and related compounds. The chemistry of coordination compounds is an important and challenging area of modern inorganic chemistry. During the last fifty years, advances in this area have led to the development of new concepts and models of bonding and molecular structure, novel breakthroughs in chemical industry and vital insights into the functioning of critical components of biological systems. The first systematic attempt at explaining the formation, reactions, structure and bonding of a coordination compound was made by A. Werner. His theory postulated the use of two types of linkages (primary and secondary) by a metal atom/ion in a coordination compound. In the modern language of chemistry these linkages are recognised as the ionisable (ionic) and non-ionisable (covalent) bonds, respectively. Using the property of isomerism, Werner predicted the geometrical shapes of a large number of coordination entities. The Valence Bond Theory (VBT) explains with reasonable success the formation, magnetic behaviour and geometrical shapes of coordination compounds. It, however, fails to provide a quantitative interpretation of magnetic behaviour and has nothing to say about the optical properties of these compounds. The Crystal Field Theory (CFT) of coordination compounds is based on the effect of different crystal fields (provided by the ligands taken as point charges) on the degeneracy of d orbital energies of the central metal atom/ion. The splitting of the d orbitals provides different electronic arrangements in strong and weak crystal fields. The treatment provides for quantitative estimations of orbital separation energies, magnetic moments and spectral and stability parameters. However, the assumption that ligands constitute point charges creates many theoretical difficulties. The metal–carbon bond in metal carbonyls possesses both σ and π character. The ligand-to-metal bond is a σ bond and the metal-to-ligand bond is a π bond. This unique synergic bonding provides stability to metal carbonyls. The stability of coordination compounds is measured in terms of the stepwise stability (or formation) constant (K) or the overall stability constant (β). The stabilisation of a coordination compound due to chelation is called the chelate effect.
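The relation between stepwise and overall stability constants summarised above lends itself to a quick numerical check. The short Python sketch below is illustrative only (the function names are our own, not from the text): it sums the stepwise log K values quoted earlier for the cuprammonium system (log K1 = 4.0, log K2 = 3.2, log K3 = 2.7, log K4 = 2.0) to recover log β4 = 11.9, and takes the reciprocal of β4 to give the instability (dissociation) constant asked for in Intext Question 9.11.

```python
# Illustrative sketch (not from the text): relating the stepwise constants K1..Kn,
# the overall stability constant beta_n, and the instability constant 1/beta_n.

def overall_log_beta(log_ks):
    """log(beta_n) = log K1 + log K2 + ... + log Kn, since beta_n = K1*K2*...*Kn."""
    return sum(log_ks)

def instability_constant(beta):
    """The instability (dissociation) constant is the reciprocal of beta."""
    return 1.0 / beta

# Stepwise constants for Cu2+ + 4 NH3 quoted in the text: log K = 4.0, 3.2, 2.7, 2.0
log_beta4 = overall_log_beta([4.0, 3.2, 2.7, 2.0])
print(round(log_beta4, 1))            # 11.9, matching log(beta4) given above

# Intext Question 9.11: beta4 = 2.1 x 10^13 for the [Cu(NH3)4]2+ ion
print(instability_constant(2.1e13))   # about 4.8 x 10^-14 (the answers quote 4.7 x 10^-14)
```

The decreasing log K values (4.0 > 3.2 > 2.7 > 2.0) reflect the pattern noted in the text that successive stability constants normally fall as ligands are added.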
The stability of coordination compounds is related to Gibbs energy, enthalpy and entropy terms. Coordination compounds are of great importance. These compounds provide critical insights into the functioning and structures of vital components of biological systems. Coordination compounds also find extensive applications in metallurgical processes, analytical and medicinal chemistry. 9.1 Explain the bonding in coordination compounds in terms of Werner’s postulates. 9.2 FeSO4 solution mixed with (NH4)2SO4 solution in 1:1 molar ratio gives the test of Fe2+ ion but CuSO4 solution mixed with aqueous ammonia in 1:4 molar ratio does not give the test of Cu2+ ion. Explain why. 9.3 Explain with two examples each of the following: coordination entity, ligand, coordination number, coordination polyhedron, homoleptic and heteroleptic. 9.4 What is meant by unidentate, didentate and ambidentate ligands? Give two examples for each. 9.5 Specify the oxidation numbers of the metals in the following coordination entities: 9.6 Using IUPAC norms write the formulas for the following: (ii) Potassium tetrachloridopalladate(II) (iv) Potassium tetracyanonickelate(II) (vi) Hexaamminecobalt(III) sulphate (vii) Potassium tri(oxalato)chromate(III) 9.7 Using IUPAC norms write the systematic names of the following: 9.8 List various types of isomerism possible for coordination compounds, giving an example of each. 9.9 How many geometrical isomers are possible in the following coordination entities? 9.10 Draw the structures of optical isomers of: 9.11 Draw all the isomers (geometrical and optical) of: 9.12 Write all the geometrical isomers of [Pt(NH3)(Br)(Cl)(py)] and how many of these will exhibit optical isomers? 9.13 Aqueous copper sulphate solution (blue in colour) gives: (i) a green precipitate with aqueous potassium fluoride and (ii) a bright green solution with aqueous potassium chloride. Explain these experimental results. 9.14 What is the coordination entity formed when excess of aqueous KCN is added to an aqueous solution of copper sulphate? Why is it that no precipitate of copper sulphide is obtained when H2S(g) is passed through this solution? 9.15 Discuss the nature of bonding in the following coordination entities on the basis of valence bond theory: 9.16 Draw a figure to show the splitting of d orbitals in an octahedral crystal field. 9.17 What is the spectrochemical series? Explain the difference between a weak field ligand and a strong field ligand. 9.18 What is crystal field splitting energy? How does the magnitude of Δo decide the actual configuration of d orbitals in a coordination entity? 9.19 [Cr(NH3)6]3+ is paramagnetic while [Ni(CN)4]2− is diamagnetic. Explain why. 9.20 A solution of [Ni(H2O)6]2+ is green but a solution of [Ni(CN)4]2− is colourless. Explain. 9.21 [Fe(CN)6]4− and [Fe(H2O)6]2+ are of different colours in dilute solutions. Why? 9.22 Discuss the nature of bonding in metal carbonyls. 9.23 Give the oxidation state, d orbital occupation and coordination number of the central metal ion in the following complexes: 9.24 Write down the IUPAC name for each of the following complexes and indicate the oxidation state, electronic configuration and coordination number. Also give stereochemistry and magnetic moment of the complex: 9.25 What is meant by stability of a coordination compound in solution? State the factors which govern stability of complexes. 9.26 What is meant by the chelate effect? Give an example.
9.27 Discuss briefly giving an example in each case the role of coordination compounds in: (i) biological systems (ii) medicinal chemistry and (iii) analytical chemistry (iv) extraction/metallurgy of metals. 9.28 How many ions are produced from the complex Co(NH3)6Cl2 in solution? 9.29 Amongst the following ions which one has the highest magnetic moment value? 9.30 The oxidation number of cobalt in K[Co(CO)4] is 9.31 Amongst the following, the most stable complex is 9.32 What will be the correct order for the wavelengths of absorption in the visible region for the following: [Ni(NO2)6]4− , [Ni(NH3)6]2+ , [Ni(H2O)6]2+ ? Answers to Some Intext Questions 9.1 (i) [Co(NH3)4(H2O)2]Cl3 9.2 (i)Hexaamminecobalt(III) chloride 9.3 (i) Both geometrical (cis-, trans-) and optical isomers for cis can exist. (ii) Two optical isomers can exist. (iii) There are 10 possible isomers. (Hint: There are geometrical, ionisation and linkage isomers possible). (iv) Geometrical (cis-, trans-) isomers can exist. 9.4 The ionisation isomers dissolve in water to yield different ions and thus react differently to various reagents: [Co(NH3)5Br]SO4 + Ba2+ → BaSO4(s) [Co(NH3)5SO4]Br + Ba2+ → No reaction [Co(NH3)5Br]SO4 + Ag+ → No reaction [Co(NH3)5SO4]Br + Ag+ → AgBr (s) 9.6 In Ni(CO)4, Ni is in zero oxidation state whereas in NiCl42− , it is in +2 oxidation state. In the presence of CO ligand, the unpaired d electrons of Ni pair up but Cl− being a weak ligand is unable to pair up the unpaired electrons. 9.7 In presence of CN−, (a strong ligand) the 3d electrons pair up leaving only one unpaired electron. The hybridisation is d2sp3 forming inner orbital complex. In the presence of H2O, (a weak ligand), 3d electrons do not pair up. The hybridisation is sp3d2 forming an outer orbital complex containing five unpaired electrons, it is strongly paramagnetic. 9.8 In the presence of NH3, the 3d electrons pair up leaving two d orbitals empty to be involved in d2sp3 hybridisation forming inner orbital complex in case of [Co(NH3)6]3 . In Ni(NH3)62+ , Ni is in +2 oxidation state and has d8 configuration, the hybridisation involved is sp3d2 forming outer orbital complex. 9.9 For square planar shape, the hybridisation is dsp2 . Hence the unpaired electrons in 5d orbital pair up to make one d orbital empty for dsp2 hybridisation. Thus there is no unpaired electron. 9.11 The overall dissociation constant is the reciprocal of overall stability constant i.e. 1/ β4 = 4.7 × 10−14 I. Multiple Choice Questions (Type-I) 1. Which of the following complexes formed by Cu2+ ions is most stable? 2. The colour of the coordination compounds depends on the crystal field splitting. What will be the correct order of absorption of wavelength of light in the visible region, for the complexes, [Co(NH3)6]3+ , [Co(CN)6]3– , [Co(H2O)6]3+ (i) [Co(CN)6]3– > [Co(NH3)6]3+ > [Co(H2O)6]3+ (ii) [Co(NH3)6]3+ > [Co(H2O)6]3+ > [Co(CN)6]3– (iii) [Co(H2O)6]3+ > [Co(NH3)6]3+ > [Co(CN)6]3– (iv) [Co(CN)6]3– > [Co(NH3)6]3+ > [Co(H2O)6]3+ 3. When 0.1 mol CoCl3(NH3)5 is treated with excess of AgNO3, 0.2 mol of AgCl are obtained. The conductivity of solution will correspond to (i) 1:3 electrolyte (ii) 1:2 electrolyte (iii) 1:1 electrolyte (iv) 3:1 electrolyte 4. When 1 mol CrCl3⋅6H2O is treated with excess of AgNO3, 3 mol of AgCl are obtained. The formula of the complex is : 5. The correct IUPAC name of [Pt(NH3)2Cl2] is (i) Diamminedichloridoplatinum (II) (ii) Diamminedichloridoplatinum (IV) (iii) Diamminedichloridoplatinum (0) (iv) Dichloridodiammineplatinum (IV) 6. 
The stabilisation of coordination compounds due to chelation is called the chelate effect. Which of the following is the most stable complex species? 7. Indicate the complex ion which shows geometrical isomerism. 8. The CFSE for octahedral [CoCl6]4– is 18,000 cm–1. The CFSE for tetrahedral [CoCl4]2– will be (i) 18,000 cm–1 (ii) 16,000 cm–1 (iii) 8,000 cm–1 (iv) 20,000 cm–1 9. Due to the presence of ambidentate ligands coordination compounds show isomerism. Palladium complexes of the type [Pd(C6H5)2(SCN)2] and [Pd(C6H5)2(NCS)2] are (i) linkage isomers (ii) coordination isomers (iii) ionisation isomers (iv) geometrical isomers 10. The compounds [Co(SO4)(NH3)5]Br and [Co(SO4)(NH3)5]Cl represent (i) linkage isomerism (ii) ionisation isomerism (iii) coordination isomerism (iv) no isomerism 11. A chelating agent has two or more than two donor atoms to bind to a single metal ion. Which of the following is not a chelating agent? 12. Which of the following species is not expected to be a ligand? 13. What kind of isomerism exists between [Cr(H2O)6]Cl3 (violet) and [Cr(H2O)5Cl]Cl2⋅H2O (greyish-green)? (i) linkage isomerism (ii) solvate isomerism (iii) ionisation isomerism (iv) coordination isomerism 14. IUPAC name of [Pt(NH3)2Cl(NO2)] is : (i) Platinum diaminechloronitrite (ii) Chloronitrito-N-ammineplatinum (II) (iii) Diamminechloridonitrito-N-platinum (II) (iv) Diamminechloronitrito-N-platinate (II) II. Multiple Choice Questions (Type-II) Note : In the following questions two or more options may be correct. 15. Atomic number of Mn, Fe and Co are 25, 26 and 27 respectively. Which of the following inner orbital octahedral complex ions are diamagnetic? 16. Atomic number of Mn, Fe, Co and Ni are 25, 26 27 and 28 respectively. Which of the following outer orbital octahedral complexes have same number of unpaired electrons? 17. Which of the following options are correct for [Fe(CN)6]3– (i) d2sp3 hybridisation (ii) sp3d2 hybridisation 18. An aqueous pink solution of cobalt(II) chloride changes to deep blue on addition of excess of HCl. This is because____________. (i) [Co(H2O)6]2+ is transformed into [CoCl6]4– (ii) [Co(H2O)6]2+ is transformed into [CoCl4]2– (iii) tetrahedral complexes have smaller crystal field splitting than octahedral complexes. (iv) tetrahedral complexes have larger crystal field splitting than octahedral complex. 19. Which of the following complexes are homoleptic? (ii) [Co(NH3)4 Cl2]+ 20. Which of the following complexes are heteroleptic? (ii) [Fe(NH3)4 Cl2]+ 21. Identify the optically active compounds from the following : (ii) trans– [Co(en)2 Cl2]+ (iii) cis– [Co(en)2 Cl2]+ (iv) [Cr (NH3)5Cl] 22. Identify the correct statements for the behaviour of ethane-1, 2-diamine as a ligand. (i) It is a neutral ligand. (ii) It is a didentate ligand. (iii) It is a chelating ligand. (iv) It is a unidentate ligand. 23. Which of the following complexes show linkage isomerism? (i) [Co(NH3)5 (NO2)]2+ III. Short Answer Type 24. Arrange the following complexes in the increasing order of conductivity of their solution: [Co(NH3)3Cl3], [Co(NH3)4Cl2] Cl, [Co(NH3)6]Cl3 , [Cr(NH3)5Cl]Cl2 25. A coordination compound CrCl3⋅4H2O precipitates silver chloride when treated with silver nitrate. The molar conductance of its solution corresponds to a total of two ions. Write structural formula of the compound and name it. 26. A complex of the type [M(AA)2X2]n+ is known to be optically active. What does this indicate about the structure of the complex? Give one example of such complex. 27. 
Magnetic moment of [MnCl4]2– is 5.92 BM. Explain giving reason. 28. On the basis of crystal field theory explain why Co(III) forms paramagnetic octahedral complex with weak field ligands whereas it forms diamagnetic octahedral complex with strong field ligands. 29. Why are low spin tetrahedral complexes not formed? 30. Give the electronic configuration of the following complexes on the basis of Crystal Field Splitting theory. [CoF6]3–, [Fe(CN)6]4– and [Cu(NH3)6]2+. 31. Explain why [Fe(H2O)6]3+ has magnetic moment value of 5.92 BM whereas [Fe(CN)6]3– has a value of only 1.74 BM. 32. Arrange following complex ions in increasing order of crystal field splitting energy (ΔO) : [Cr(Cl)6]3–, [Cr(CN)6]3–, [Cr(NH3)6]3+. 33. Why do compounds having similar geometry have different magnetic moment? 34. CuSO4.5H2O is blue in colour while CuSO4 is colourless. Why? 35. Name the type of isomerism when ambidentate ligands are attached to central metal ion. Give two examples of ambidentate ligands. IV. Matching Type Note : In the following questions match the items given in Columns I and II. 36. Match the complex ions given in Column I with the colours given in Column II and assign the correct code : |Column I (Complex ion)||Column II (Colour)| |D.||(Ni (H2O)4 (en)]2+ (aq)||4.||Yellowish orange| (i) A (1) B (2) C (4) D (5) (ii) A (4) B (3) C (2) D (1) (iii) A (3) B (2) C (4) D (1) (iv) A (4) B (1) C (2) D (3) 37. Match the coordination compounds given in Column I with the central metal atoms given in Column II and assign the correct code : |Column I (Coordination Compound)||Column II (Central metal atom)| (i) A (5) B (4) C (1) D (2) (ii) A (3) B (4) C (5) D (1) (iii) A (4) B (3) C (2) D (1) (iv) A (3) B (4) C (1) D (2) 38. Match the complex ions given in Column I with the hybridisation and number of unpaired electrons given in Column II and assign the correct code : |Column I (Complex ion)||Column II (Hybridisation, number of unpaired electrons)| (i) A (3) B (1) C (5) D (2) (ii) A (4) B (3) C (2) D (1) (iii) A (3) B (2) C (4) D (1) (iv) A (4) B (1) C (2) D (3) 39. Match the complex species given in Column I with the possible isomerism given in Column II and assign the correct code : |Column I (Complex species)||Column II (Isomerism)| (i) A (1) B (2) C (4) D (5) (ii) A (4) B (3) C (2) D (1) (iii) A (4) B (1) C (5) D (3) (iv) A (4) B (1) C (2) D (3) 40. Match the compounds given in Column I with the oxidation state of cobalt present in it (given in Column II) and assign the correct code. |Column I (Compound)||Column II (Oxidation state of Co)| (i) A (1) B (2) C (4) D (5) (ii) A (4) B (3) C (2) D (1) (iii) A (5) B (1) C (4) D (2) (iv) A (4) B (1) C (2) D (3) V. Assertion and Reason Type Note : In the following questions a statement of assertion followed by a statement of reason is given. Choose the correct answer out of the following choices. (i) Assertion and reason both are true, reason is correct explanation of assertion. (ii) Assertion and reason both are true but reason is not the correct explanation of assertion. (iii) Assertion is true, reason is false. (iv) Assertion is false, reason is true. 41. Assertion : Toxic metal ions are removed by the chelating ligands. Reason : Chelate complexes tend to be more stable. 42. Assertion : [Cr(H2O)6]Cl2 and [Fe(H2O)6]Cl2 are reducing in nature. Reason : Unpaired electrons are present in their d-orbitals. 43. Assertion : Linkage isomerism arises in coordination compounds containing ambidentate ligand. Reason : Ambidentate ligand has two different donor atoms. 44. 
Assertion : Complexes of MX6 and MX5L type (X and L are unidentate) do not show geometrical isomerism. Reason : Geometrical isomerism is not shown by complexes of coordination number 6. 45. Assertion : [Fe(CN)6]3– ion shows magnetic moment corresponding to two unpaired electrons. Reason : Because it has d2sp3 type hybridisation. VI. Long Answer Type 46. Using crystal field theory, draw energy level diagram, write electronic configuration of the central metal atom/ion and determine the magnetic moment value in the following : (i) [CoF6]3–, [Co(H2O)6]2+ , [Co(CN)6]3– (ii) [FeF6]3–, [Fe(H2O)6]2+, [Fe(CN)6]4– 47. Using valence bond theory, explain the following in relation to the complexes given below: [Mn(CN)6]3– , [Co(NH3)6]3+, [Cr(H2O)6]3+ , [FeCl6]4– (i) Type of hybridisation. (ii) Inner or outer orbital complex. (iii) Magnetic behaviour. (iv) Spin only magnetic moment value. 48. CoSO4Cl.5NH3 exists in two isomeric forms ‘A’ and ‘B’. Isomer ‘A’ reacts with AgNO3 to give white precipitate, but does not react with BaCl2. Isomer ‘B’ gives white precipitate with BaCl2 but does not react with AgNO3. Answer the following questions. (i) Identify ‘A’ and ‘B’ and write their structural formulas. (ii) Name the type of isomerism involved. (iii) Give the IUPAC name of ‘A’ and ‘B’. 49. What is the relationship between observed colour of the complex and the wavelength of light absorbed by the complex? 50. Why are different colours observed in octahedral and tetrahedral complexes for the same metal and same ligands? I. Multiple Choice Questions (Type-I) 1. (ii) 2. (iii) 3. (ii) 4. (iv) 5. (i) 6. (iii) 7. (i) 8. (iii) 9. (i) 10. (iv) 11. (i) 12. (ii) 13. (ii) 14. (iii) II. Multiple Choice Questions (Type-II) 15. (i), (iii) 16. (i), (iii) 17. (i), (iii) 18. (ii), (iii) 19. (i), (iii) 20. (ii), (iv) 21. (i), (iii) 22. (i), (ii), (iii) 23. (i), (iii) III. Short Answer Type 24. [Co(NH3)3Cl3] < [Co(NH3)4Cl2]Cl < [Cr(NH3)5Cl]Cl2 < [Co(NH3)6]Cl3 25. [Co(H2O)4Cl2]Cl (tetraaquadichloridocobalt(III) chloride) 26. An optically active complex of the type [M(AA)2X2]n+ indicates a cis-octahedral structure, e.g. cis-[Pt(en)2Cl2]2+ or cis-[Cr(en)2Cl2]+ 27. The magnetic moment of 5.92 BM corresponds to the presence of five unpaired electrons in the d-orbitals of the Mn2+ ion. As a result the hybridisation involved is sp3 rather than dsp2. Thus the tetrahedral structure of the [MnCl4]2– complex will show a 5.92 BM magnetic moment value. 28. With weak field ligands, Δo < P, the electronic configuration of Co(III) will be t2g4 eg2; it has 4 unpaired electrons and is paramagnetic. With strong field ligands, Δo > P, the electronic configuration will be t2g6 eg0. It has no unpaired electrons and is diamagnetic. 29. Because for tetrahedral complexes the crystal field splitting energy is lower than the pairing energy. 30. [CoF6]3–, Co3+(d6): t2g4 eg2; [Fe(CN)6]4–, Fe2+(d6): t2g6 eg0; [Cu(NH3)6]2+, Cu2+(d9): t2g6 eg3. 31. [Fe(CN)6]3– involves d2sp3 hybridisation with one unpaired electron and [Fe(H2O)6]3+ involves sp3d2 hybridisation with five unpaired electrons. This difference is due to the presence of the strong ligand CN– and the weak ligand H2O in these complexes. 32. Crystal field splitting energy increases in the order [Cr(Cl)6]3– < [Cr(NH3)6]3+ < [Cr(CN)6]3– 33. It is due to the presence of weak and strong ligands in complexes; if CFSE is high, the complex will show a low value of magnetic moment and vice versa, e.g. of [CoF6]3– and [Co(NH3)6]3+ , the former is paramagnetic and the latter is diamagnetic. 34.
In CuSO4.5H2O, water acts as a ligand and causes crystal field splitting. Hence a d-d transition is possible in CuSO4.5H2O and it shows colour. In anhydrous CuSO4, due to the absence of water (ligand), crystal field splitting is not possible and hence there is no colour. 35. Linkage isomerism IV. Matching Type 36. (ii) 37. (i) 38. (ii) 39. (iv) 40. (i) V. Assertion and Reason Type 41. (i) 42. (ii) 43. (i) 44. (iii) 45. (iv) 46. Number of unpaired electrons = 4; number of unpaired electrons = 5; number of unpaired electrons = 4. Fe2+ = 3d6: since CN– is a strong field ligand, all the electrons get paired; no unpaired electrons, so diamagnetic. 47. Mn3+ = 3d4; Co3+ = 3d6, (ii) inner orbital complex; Cr3+ = 3d3, (ii) inner orbital complex, (iv) 3.87 BM; Fe2+ = 3d6, (ii) outer orbital complex, (iv) 4.9 BM. 48. (i) A – [Co(NH3)5SO4]Cl B – [Co(NH3)5Cl]SO4 (ii) Ionisation isomerism (iii) (A) Pentaamminesulphatocobalt(III) chloride, (B) Pentaamminechloridocobalt(III) sulphate. 49. When white light falls on the complex, some part of it is absorbed. The higher the crystal field splitting, the lower will be the wavelength absorbed by the complex. The observed colour of the complex is the colour generated from the wavelength left over. 50. Δt = (4/9) Δo, so for the same metal and ligands the tetrahedral complex absorbs light of longer wavelength (lower energy) than the octahedral complex, and the two therefore show different colours.
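Several of the answers above quote spin-only magnetic moments (5.92 BM for five unpaired electrons, 4.9 BM for four, 3.87 BM for three, about 1.73 BM for one, quoted as 1.74 BM in Question 31) and use the tetrahedral splitting factor from answer 50 and MCQ 8. The short Python sketch below, written for this summary rather than taken from the text, reproduces those numbers from μ = √(n(n+2)) BM and Δt = (4/9)Δo.

```python
# Illustrative sketch (not part of the original text): spin-only magnetic moment
# and the octahedral/tetrahedral splitting ratio used in the answers above.
import math

def spin_only_moment(n_unpaired):
    """mu = sqrt(n(n+2)) in Bohr magnetons (BM) for n unpaired electrons."""
    return math.sqrt(n_unpaired * (n_unpaired + 2))

def tetrahedral_splitting(delta_o):
    """Delta_t = (4/9) * Delta_o for the same metal, ligands and distances."""
    return (4.0 / 9.0) * delta_o

for n in (1, 3, 4, 5):
    print(n, round(spin_only_moment(n), 2))   # approx. 1.73, 3.87, 4.9, 5.92 BM

# MCQ 8: the splitting for octahedral [CoCl6]4- is 18,000 cm-1,
# so the tetrahedral [CoCl4]2- value is (4/9) x 18,000 = 8,000 cm-1.
print(tetrahedral_splitting(18000))           # 8000.0 cm-1
```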
http://textbook.s-anand.net/ncert/class-xii/chemistry/9-coordination-compound
The Constitution For The United States: Its Sources and Its Application.
Section 1. The executive Power shall be vested in a President of the United States of America. He shall hold his Office during the Term of four Years, and, together with the Vice President, chosen for the same Term, 75 75 In Woodrow Wilson's "History of the American People" (Vol.3, p.71) it is pointed out that the laws of the new government were to be imperative instead of advisory; "It was provided with the Executive the Confederation had lacked; a person in whose authority should be concentrated the whole administrative force of its government." In Green's "History of the English People" it is stated that Cromwell's experience with the Long Parliament (1640-1660) confirmed his belief in the need of an executive power, entirely apart from the legislature, "as a condition of civil liberty." In the examination of Article 1, relating to the Legislative Department of the government, it has been seen that the President has a great power in that department as well as in his own, in approving or vetoing bills passed by the Senate and the House of Representatives. He has an influence in the Judicial Department, too, for he appoints 89 the judges; but, of course, only with the approval of the Senate. He is as much a creation of the Constitution as the Legislative Department (Congress) or the Judicial Department (the Supreme Court and inferior courts), and he is therefore as independent of both as they are of each other and of him. But for misconduct he may be impeached by the House and tried by the Senate, the Chief Justice presiding 17 at the trial. It was the intention of the Founders of the Republic that the Executive (President) should be a strong branch of the government. While the Colonies had had more than enough of a kingly executive wielding great and arbitrary power in a stubborn way, they had later learned from experience with governors of the States under the Articles of Confederation (1781-1789) that an executive with defined and limited powers is essential to good government. In those days the legislature was most feared as a possible usurper of power. The lawless record of the Long Parliament of England was only a century and a half away, while many acts of later Parliaments were believed to be transgressions of both constitutional and natural rights. James Otis and other colonial leaders declared that Parliament enacted laws against the Colonies "which neither God nor man ever empowered them to make." Hence the check of the President's veto, and the numerous definite limitations upon the power of Congress. When the work of framing Article II had been done some thought that a monarch had been set up in the President; but, of course, that was unreasonable, as the Constitution provides for his election by popular vote, as he cannot raise a dollar for an army or for any other purpose, as he cannot declare war, as he is subject to removal by impeachment, and as he can do but very little beyond executing the laws of the Legislative Department (Congress). But within his sphere he is powerful and independent.
"Abraham Lincoln," wrote James Bryce, "wielded more authority than any single Englishmen has done since Oliver Cromwell." But much of Lincoln's war power, and particularly that for the use of which he was most criticized, the suspension of the privilege of the writ of habeas corpus was given to him by Congress for the term of the war only. So in 1917 Congress gave to President Wilson extraordinary powers for prosecuting the war against Germany. In the Constitutional Convention many favored a plural Executive, consisting of two or more men. Jefferson, who was not in the Convention, favored a one man Executive, pointing out that "A Committee of the States" provided for in the Articles of Confederation to act during recess of Congress "quarreled very soon, split into two parties, abandoned their post, and left the Government without any visible head until the next meeting of Congress." In the "Federalist" a single executive was advocated by Hamilton because of "decision, activity, secrecy, and dispatch" and because plurality "tends to conceal faults and destroy responsibility." The length of the term and whether there should be more than one term were much debated. A resolution was passed by the Convention that the President be not eligible for reelection, Washington voting against it. Jefferson wrote strongly for one term, but he lived to change his mind and serve two terms. Later, he wrote that the example of four Presidents retiring at the end of eight years would have "the force of precedent and usage" against any man who might seek a third term. President Grant sought a third term in 1880, but he was defeated in the Republican nominating convention. Theodore Roosevelt, who served three years of the second term of McKinley and a four year term thereafter, sought a third in 1912. Failing to secure the nomination in the Republican convention, he ran on a third party ticket and lost. Franklin D. Roosevelt was the first president to be elected for a third term, when he ran for a third time in 1940. Although the Constitutional Convention passed a resolution for one term, the committee to which it was finally referred never reported it back. Terms were proposed ranging in length from during good behavior down to three years. The Convention fixed the term at seven years, but the report came back from the committee showing four years, not disclosing, however, the reason for the change. In the first Congress under the new order (1789) consideration was given to choosing titles for the President and Vice President. "His Excellency" and "His Highness" and other titles were suggested, but as the House of Representatives had already addressed him simply as The President, it was finally resolved to adhere to his constitutional title, "President of the United States of America." to be elected, as follows 76 76 Over and over the Constitutional Convention debated the question of how the President should be elected. It was proposed that he be chosen by Congress; by "electors chosen by the people in election districts"; by the governors of the States; by the Senate; and by the votes of all the people. The suggestion that the people could choose the President was described as "vicious", while Mr. Wilson of Pennsylvania stood staunchly for the popular vote. 
James Madison said that "if it is a fundamental principle of free government that the legislative, executive and judiciary powers shall be separately exercised, it is equally so that they be independently exercised"; and he declared that there is even greater reason why the Executive should be independent of the Legislative branch than why the Judiciary should be. Although at first the Convention voted that Congress should elect the President, it was, after full discussion of a question "the most difficult of all which we have had to decide", concluded to choose by the electors mentioned in the next paragraph, probably following the provision of the Constitution of Maryland for the election of State senators. Each State shall appoint, in such Manner as the Legislature thereof may direct, a Number of Electors, equal to the whole Number of Senators and Representatives to which the State may be entitled in the Congress: 77 77 This is the "electoral vote" of a State. Those of all the States together make the vote of the so-called "electoral college." The vote of a State consists of one vote for each of the two senators and one vote for each representative. When the number of members in the National House of Representatives is changed by the growth of population, this necessarily increases the number of votes in the "electoral college." When Washington was first elected (1788) there was a total of sixty-nine electoral votes, that being the number of senators and representatives of the States participating, New York having failed to choose electors and Rhode Island and North Carolina not yet having ratified the Constitution. In 1996 there are 538 electoral votes in all the United States, the number of senators being 100, the House of Representatives membership having remained fixed since 1921 at 435, and the District of Columbia casting three electoral votes under the Twenty-third Amendment. It was the intention of the Constitutional Convention that the electors, chosen as each State might think the best way, should meet and vote their individual preferences, thus excluding the influence of Congress, and also the influence of the voters at large, who were thought incompetent to choose a President; and that is the way Washington was elected twice and Adams once. But during the administration of Adams, friends of Jefferson in Congress held a conference or caucus and announced him as their candidate. This became the settled method of announcement. Later the caucus was superseded by the party convention, which adopted a platform and nominated candidates, a method which still prevails. In the beginning some of the States chose their electors by their legislatures, some according to districts, and some otherwise. Now they are chosen by ballot of the whole people. On the same ballot are the names of the presidential and vice-presidential candidates of the party, for whom the electors are expected (though not obliged by the Constitution) to vote. but no Senator or Representative, or Person holding an Office of Trust or Profit under the United States, shall be appointed an Elector. [The Electors shall meet in their respective States, and vote by Ballot for two persons, of whom one at least shall not be an Inhabitant of the same State with themselves. And they shall make a List of all the Persons voted for, and of the Number of Votes for each; which List they shall sign and certify, and transmit sealed to the Seat of the Government of the United States, directed to the President of the Senate.
The President of the Senate shall, in the Presence of the Senate and House of Representatives, open all the Certificates, and the Votes shall then be counted. The Person having the greatest Number of Votes shall be the President, if such Number be a Majority of the whole Number of Electors appointed; and if there be more than one who have such Majority, and have an equal Number of Votes, then the House of Representatives shall immediately chuse by Ballot one of them for President; and if no Person have a Majority, then from the five highest on the List the said House shall in like Manner chuse the President. But in chusing the President, the Votes shall be taken by States, the Representation from each State having one Vote; A quorum for this Purpose shall consist of a Member or Members from two-thirds of the States, and a Majority of all the States shall be necessary to a Choice. In every Case, after the Choice of the President, the Person having the greatest Number of Votes of the Electors shall be the Vice President. But if there should remain two or more who have equal Votes, the Senate shall chuse from them by Ballot the Vice President.]" 78 78 This paragraph in brackets was superseded on September 25, 1804, when the Twelfth Amendment was promulgated. The paragraph is retained here for its historic value. The electors then voted for persons, not for a President and a Vice President. Of the persons voted for they could not designate the one they preferred for the chief office and the one for second place. The candidate receiving the highest number of votes became President. The next highest number made the Vice President regardless of political belief. Thus all the electors voted for George Washington. The next number in size voted for John Adams. That made Washington President and Adams Vice President. By that method John Adams of the Federalist (or National) party later (1797) became President, receiving seventy-one electoral votes, and Thomas Jefferson, an intense anti-Federalist, Vice President, sixty-eight votes being the next highest number. The anti-Federalists were, in addition to being opposed to a strong National (as distinguished from State) government, in favor of intimate relations with the new Republic of France, while the Federalists declared that all foreign alliances must be avoided. In his Farewell Address (September 17, 1796) Washington spoke repeatedly and powerfully against implicating ourselves in European affairs. Such conflict of opinion and the consequent want of harmony within the administration made an amendment to the Constitution necessary. In the presidential election of 1800 Thomas Jefferson and Aaron Burr received seventy- three electoral votes each. The election therefore went to the House of Representatives, in which, after thirty-five ballotings, Jefferson was chosen. That made Burr Vice President, for "in every Case, after the Choice of the President, the Person having the greatest Number of Votes of the Electors shall be Vice President." The changes made will be considered in the study of the Twelfth Amendment. 165 The Congress may determine the Time of chusing the Electors, and the Day on which they shall give their Votes; which Day shall be the same throughout the United States. 
79 79 As elections in different States were held at different times, Congress acted (1872) under this clause and directed that the electors be appointed in each State "on the Tuesday next after the first Monday in November in every fourth year"; and the electors are required to "meet and give their votes on the second Monday in January next following their appointment at such place in each State as the legislature of such State shall direct", usually the capital being by the State legislature designated as the place. No person except a natural born Citizen, or a Citizen of the United States, at the time of the Adoption of this Constitution, shall be eligible to the Office of President; neither shall any Person be eligible to that Office who shall not have attained to the Age of thirty-five Years, and been fourteen Years a Resident within the United States. 80 80 Many of foreign birth who had helped to create the United States would have been rendered ineligible had not the provision been inserted making eligible those of foreign birth who at the time of the adoption of the Constitution were citizens of the United States. The lapse of time long since removed that class and left the excepting clause the mere record of an interesting historic fact. Seven of the signers of the Constitution were foreign born: James Wilson, Robert Morris and Thomas Fitzsimons of Pennsylvania, Alexander Hamilton of New York, William Paterson of New Jersey, James McHenry of Maryland, and Pierce Butler of South Carolina. Some members of the Constitutional Convention argued for a financial qualification also. It was suggested that the President should be worth in property at least $100,000. The proposal was rejected. The first President was a man of large means. Most of the Presidents have been poor in property. It is an interesting fact that the one-House Congress sitting under the Articles of Confederation passed, while the Constitutional Convention was in session (July 13, 1787), "an ordinance for the government of the territory northwest of the river Ohio" (now Ohio, Indiana, Illinois, Michigan, and Wisconsin) in which it was provided that the governor to be appointed by Congress should, besides being a resident of the district, "have a freehold estate therein in 500 acres of land while in the exercise of his office." The judges of the court created were each required to own a like area. The belief then was common that ownership of property added to stability of character and citizenship. In Case of the Removal of the President from Office, or of his Death, Resignation, or Inability to discharge the Powers and Duties of the said Office, the same shall devolve on the Vice President, and the Congress may by Law provide for the Case of Removal, Death, Resignation or Inability, both of the President and Vice President, declaring what Officer shall then act as President, and such Officer shall act accordingly, until the Disability be removed, or a President shall be elected." 81 81 Congress has made no provision, evidently believing it unnecessary under the foregoing language, for the performance of the duties of the President in time of his inability alone. For nearly three months after being shot (July 2, 1881) President Garfield was unable to perform the duties of his place, but Vice President Arthur did not because of that "inability" assume "the powers or duties of the said office." After the President's death (September 19, 1881) Mr. Arthur succeeded to the post.
In 1919 - 1920 President Wilson's sickness caused such "inability" for several months that not even Cabinet officers or representatives of foreign nations were permitted to see him. The language of the Constitution clearly expresses the intent that in case of such inability, even when temporary, the Vice President shall discharge the duties of the office. The Supreme Court of New Hampshire held under a similar provision in the constitution of that State that the governor's office was "vacant," when his temporary inability from sickness and the needs of public service required the duties to be performed by a substitute, and that in such circumstances the President of the State Senate could be compelled by writ of mandamus from court to assume and discharge the duties. In 1947 Congress enacted that if for reason of death, resignation, removal, inability, or failure to qualify, there is neither President nor Vice President to discharge the office, the Speaker of the House shall resign and act as President; if there be no Speaker, the President pro tempore of the Senate shall resign and act: in either case to the end of the term. On failure of both President-elect and Vice President-elect to qualify, any officer named shall serve only until a President or a Vice President qualifies. Should there be no President pro tempore to act, a member of the Cabinet shall serve, beginning with the Secretary of State. The Constitution of the United States of Brazil (1890) is clearer than ours and provides that the Vice President shall take the place of the President "in case of temporary disability and succeed him in case of vacancy." The President shall, at stated Times, receive for his Services, a Compensation, which shall neither be encreased nor diminished during the Period for which he shall have been elected, 82 82 The first Congress, by an Act of September 24, 1789, fixed the salary of the President at $25,000 a year. The Act of March 3, 1873, doubled President Grant's salary the day before his second term began and increased those of the Vice President, the members of the Cabinet, the Justices of the Supreme Court, and the members of Congress themselves. It was made retroactive as to Congressmen. This was contrary to popular opinion and also to the practice of legislators in the States not to increase their compensation during the term for which they were elected. Owing to public disapproval, one of the first steps of the next Congress was to reduce (January 20, 1874) all of the advances of salaries except those of the President and the Justices of the Supreme Court, the Constitution forbidding 98 Congress to diminish those. In 1909 the salary of the President was advanced to $75,000, with an allowance from time to time for traveling expenses such as Congress may deem necessary and not exceeding $25,000 a year. President Washington declined a salary. See Note 33 for advances of Congressional salaries. and he shall not receive within that Period any other Emolument from the United States, or any of them. 83 83 Of the provisions of this paragraph Alexander Hamilton wrote in the "Federalist" (No. LXXIII): "They [Congress] can neither weaken his fortitude by operating upon his necessities, nor corrupt his integrity by appealing to his avarice.... Nor will he be at liberty to receive any other emolument than that which may have been determined by the first act. He can, of course, have no pecuniary inducement to renounce or desert the independence intended for him by the Constitution."
Before he enter on the Execution of his Office, he shall take the following Oath 84 or Affirmation: -- "I do solemnly swear (or affirm) that I will faithfully execute the Office of President of the United States, and will to the best of my Ability, preserve, protect and defend the Constitution of the United States." 84 The oath is usually administered at the Capitol by the Chief Justice of the United States "before" the President-elect takes office on Jan. 20. But it may be taken elsewhere and before any officer empowered by law to administer oaths. Prior to the ratification of Amendment XX, the president's term of office began on March 4th. President Grant's second term expired on Sunday, March 4, 1877 and Rutherford B. Hayes took the oath at the White House on Saturday and again at the Capitol on Monday. Upon the death of President Garfield (September 19, 1881) the oath was taken by Vice President Arthur in New York City and later he took it again in Washington. As an interesting note, John Gaillard, a Senator from South Carolina, is said to have served as President for one day when President James Monroe, whose second term began on Sunday, March 4, 1821, postponed taking the oath of office until Monday, March 5. Section 2. The President shall be Commander in Chief of the Army and Navy of the United States, and of the Militia of the several States, when called into the actual Service of the United States; 85 85 This is a constitutional right which Congress has no power to diminish. In the Convention it was proposed that he be not permitted to head an army in the field, but the proposal was rejected. In practice, however, no President has led an army or commanded a navy. The Secretary of War and the Secretary of the Navy carry out the wishes of the commander in chief. The experience of General Washington during the Revolution with the dilatory methods of Congress probably brought the Convention to the idea that there should be no divided authority when troops are "called into the actual service of the United States." Some of the early Constitutions of the States made the governors commanders; and the ordinance creating the Northwest Territory (1787) made the governor "commander in chief of the militia", with authority to "appoint and commission officers in the same below the rank of general officers." Formerly some of the States thought that they should determine whether the militia should be sent to the service of the Nation, but the Supreme Court of the United States held that "the authority to decide whether the exigency has arisen belongs exclusively to the President and his decision is conclusive upon all other persons." If many States were to come to many conclusions upon such a subject the Nation might in the meanwhile be destroyed. c68 In time of war much of the power exercised by the President is delegated to him by Congress for the time being. During the Civil War Congress so aided the President that it was described as "a giant committee of ways and means." In 1862 it authorized President Lincoln to take possession of railroads when necessary for public safety. In World War I Congress authorized the President to take over and operate the railroads as an instrumentality of war, which he did. It passed many acts giving him extraordinary powers, such as the Conservation of Food Act, the War Finance Corporation Act, the Trading with the Enemy Act, and many others. Such authority expires either by a time limit in the act itself or by subsequent repeal by Congress.
he may require the Opinion, in writing, of the principal Officer in each of the executive Departments, upon any subject relating to the Duties of their respective Offices, 86 86 The "principal officer" is a member of the President's Cabinet. At least twice the Constitutional Convention refused to hamper the President by an advisory council which might influence his conclusions. In Colonial times the royal governor had a council with a considerable power. But in the course of events there has grown up a cabinet somewhat resembling the council which the Convention rejected. However, it is not a Constitutional body, and the President is in no way bound by the opinion of his cabinet, nor is he obliged to consult it at all. Some Presidents, knowing that the majority of the members of the cabinet were not in sympathy with a particular policy, have gone forward without consulting them. Others have listened to suggestions and then acted at pleasure. Jefferson called for a vote in cabinet meetings, his vote counting one with the others. But he believed that he had the right to independent action. Lincoln wrote the Emancipation Proclamation without consulting his cabinet; but he read it during a meeting for suggestions and amendments. The first "principal officer" created under this clause was the Secretary of State, brought into being by an act of the first Congress, July 27, 1789. His department was then called the Department of Foreign Affairs. Next came the Secretary of War (August 7, 1789), the Secretary of the Treasury (September 2, 1789), the Attorney-General (September 24, 1789), the Postmaster General (May 8, 1794), the Secretary of the Navy (April 30, 1798), the Secretary of the Interior (March 3, 1849), the Secretary of Agriculture (May 15, 1862), the Secretary of Commerce (February 14,1903), and the Secretary of Labor (March 4, 1913). In Chile there is a Council of State resembling our President's cabinet, made up of three persons chosen by the Senate, three by the House of Deputies, and five by the President. Its duties are advisory, except in some cases in which the Constitution requires submission to the Council. Thus to a degree the President is restricted. and he shall have Power to grant Reprieves and Pardons for Offences against the United States, except in Cases of Impeachment. 87 87 With one exception the power to pardon is absolute. The judgment of the United States Senate in an impeachment trial 25 is beyond the reach of executive clemency. Otherwise an appointee of the President who might be convicted in an impeachment trial could be pardoned and reappointed to the office for which he had been adjudged unfit. Such was the method of the sovereign of England in protecting his favorites from punishment. In the Act of Settlement (1701) providing for a successor to Queen Anne, the Parliament declared that no pardon by the King could be used to exculpate one who had been impeached "by the Commons in Parliament." On Christmas day, 1868, President Johnson issued a general proclamation granting full pardon "unconditionally and without reservation" to those who had acted against the Union in the Civil War. The judiciary committee of the Senate questioned his power, but the Senate took no action. The Supreme Court has said that the President's pardoning power is beyond control or limitation by Congress. 
c80 He shall have Power, by and with the Advice and Consent of the Senate, to make Treaties, provided two-thirds of the Senators present concur; 88 88 A treaty is a written contract between two governments respecting matters of mutual welfare, such as peace, the acquisition of territory, the defining of boundaries, the needs of trade, the rights of citizenship, the ownership or inheritance of property, the benefits of copyrights and patents, or any other subject. During the time of the Continental Congress (1774-1781) many treaties were made by it on behalf of the States by name. The Congress was then the only governmental authority. While the Articles of Confederation were in effect (1781 - 1789) the one-House Congress, even after creating a Department of Foreign Affairs (1781), retained supervisory power over treaties and some other international matters; and it was by this method that the Treaty of Paris (1783), by which England recognized the independence of the United States, was negotiated. Twelve other treaties were entered into by Congress. But when the present Constitution was framed, creating a President and a Congress of two Houses, it was determined to let the President, the executive head of the Nation, negotiate treaties with other governments and to empower the Senate to ratify or reject them. In the Constitutional Convention a committee's report gave to the Senate the full power to make treaties. One delegate favored giving the power to the two Houses of Congress. Probably as a compromise the method stated in the Constitution was adopted. The subject received no more than ordinary consideration. It was pointed out in the "Federalist" by Alexander Hamilton that treaty making is neither legislative nor executive, but that it appeared that the executive is "the more fit agent in those transactions, while the vast importance of the trust and the operation of the treaties as laws plead strongly for the participation of the whole or a portion of the legislative body in the office of making them." The Senate must finally approve a treaty by a two-thirds majority before it can become effective. The reason for this given by Alexander Hamilton was that a man raised from humble station to the height and power of the Presidency might be unable to withstand the temptation of avarice or ambition by aiding a foreign power to the detriment of the United States. Congress may, however, by later legislation abrogate a treaty. A precedent for thus abrogating a treaty made by the President and approved by the Senate may be found as far back as July 7, 1798, when Congress passed "An Act to Declare the Treaties heretofore Concluded with France no longer Obligatory on the United States" because they "have been repeatedly violated on the part of the French government." As a law of Congress may thus supersede a treaty, so a treaty may supplant an act of Congress, the latest expression of the National will being controlling. While in this clause the Constitution names the President and the Senate as the makers of a treaty, other provisions sometimes require the concurrence of the House of Representatives; for as all money bills must originate in that House 37, it may refuse to provide the means for effectuating the treaty. Of course, many treaties need no such aid from the House; but the House may constitutionally render null a treaty in which it disbelieves and which cannot be effectual without the expenditure of money.
The Reverdy Johnson-Lord Clarendon Treaty of 1869, which attempted to settle all difference with England from 1853 down, was rejected by the Senate by a vote of 54 to 1, largely because it was felt that Johnson should have exacted an apology for acts done by England during the Civil War in claimed violation of neutrality. On February 16, 1893, just before the expiration of his term, President Harrison sent a treaty to the Senate for the annexation of Hawaii. When President Cleveland took office he withdrew the treaty, as he questioned the validity of the revolutionary provisional government which had been set up under the protection of marines from a man-of-war of the United States lying in the harbor of Honolulu. In Cleveland's administration (1897) the Senate declined to approve a treaty made with England because it proposed to submit American "interests in all cases to the decisions of an outside tribunal." The treaty was drawn after a very serious dispute with England regarding the boundary between British Guiana and Venezuela, our government interposing under the Monroe Doctrine for the protection of the last-named State. President Washington consulted with the Senate respecting treaties which he intended to negotiate. The practice has not been generally followed by his successors, though from time to time it has been adopted. In 1846, in the midst of a threatening controversy with great Britain respecting the northwest boundary of the United States from the Rocky Mountains to the Pacific Coast, which negotiations in 1818, in 1824, in 1826, and in 1844 had failed to settle, President Polk transmitted to the Senate a proposal "of Her Britannic Majesty for the adjustment of the Oregon question" and asked for its advice. Referring to Washington's practice as "rarely resorted to in later times", he said that it "was, in my judgment, eminently wise and may on occasion of great importance be properly revived." These were his reasons: "The Senate are a branch of the treaty-making power, and by consulting them in advance of his own action upon important measures of foreign policy which may ultimately come before them for their consideration, the President secures harmony of action between that body and himself. The Senate are, moreover, a branch of the war-making power, and it may be eminently proper for the Executive to take the opinion and advice of that body in advance upon any great question which may involve in its decision the issue of peace or war." President Polk concluded the message by saying that if the majority of the Senate necessary to ratify (two thirds) should "advise the acceptance of this proposition... I shall conform my action to their advice." But he said that should the Senate by a two-thirds vote decline to give advice or express an opinion, then he would "consider it my duty to reject the offer." On June 12, 1846, two days later, the Senate passed a resolution that "the President of the United States be, and he is hereby, advised to accept the proposal of the British Government... for a convention to settle boundaries." After the Spanish War President McKinley sent three senators to the peace conference at Paris. A resolution of disapproval was introduced in the Senate, but it was not passed. 
One objection was that such a course would tend to give the President an undue influence over the Senate, probably because senators serving with the President in the negotiation of a treaty might be less inclined to independent judgment when the treaty should come up in the Senate for ratification. At the close of the War of 1812 with England two members of Congress were appointed by President Madison to attend the peace conference at Ghent, the Speaker of the House, Henry Clay, and Senator James A. Bayard of Delaware. Believing that they could not serve in two capacities, they resigned from Congress. President Harding appointed two senators as delegates to the Washington Conference (November 12, 1921 - February 6, 1922), in which nine nations drafted treaties, some for the reduction of armaments and others respecting the general peace of the world. The Senate may (1) approve, (2) reject, (3) approve with amendments, (4) approve upon condition that specified changes will be made, and (5) approve with reservations or interpretations. In some instances it has failed to act at all. In 1795 the Senate approved the Jay Treaty with Great Britain "on condition" that certain changes be made to our commercial advantage; and the British Government accepted the conditions. The rejection of a treaty by the Senate "can be the subject of no complaint", said our State Department to Great Britain when the treaty of 1869 regarding the Alabama Claims was not approved, "and can give no occasion for dissatisfaction or criticism." In 1804 Secretary of State Madison had occasion to give Spain a like hint. "When peculiarities of this sort in the structure of a government are sufficiently known to other governments", said he, "they have no right to take exception at the inevitable effect of them." Many treaties have been approved by the Senate and many disapproved. Treaties suggesting any modification of or departure from our Constitutional system have been rejected. Thus in President Roosevelt's administration a number of arbitration treaties negotiated by Secretary of State Hay with various countries provided for referring to The Hague Tribunal questions of a Constitutional nature and also disputes respecting the interpretation of treaties themselves. As the reference to the Tribunal would be by the President, the Senate would be shorn, it believed, of part of its Constitutional duties in treaty-making matters. When the Senate amended the treaties so as to retain what it conceived to be its Constitutional jurisdiction of the subject, the President refused to go further. The Hague Tribunal arose out of conferences in 1899 and 1907 held at the capital of Holland upon the suggestion of Nicholas II of Russia, who recommended an "understanding not to increase for a fixed period the present effectives of the armed military and naval forces and at the same time not to increase the budgets pertaining thereto, and a preliminary examination of the means by which even a reduction may be effected in the future in the forces and budgets above mentioned." The first conference was attended by representatives of twenty-six nations. Forty-four nations were represented in the conference of 1907. Owing to the opposition of Germany, the subject of excessive armaments was abandoned. But many plans for the improvement of international practices were put in motion. The first question to be decided by The Hague Tribunal was submitted by the United States, relating to a fund owing to Californians by Mexico. 
Many questions of the kind formerly settled by war have been disposed of at The Hague. The most notable disagreement of this kind arose in 1919, when the treaty negotiated by President Wilson at Paris (June 28, 1919) closing the World War and constructing a League of Nations was laid before the Senate. It was believed by the Senate that the proposals to submit to an international tribunal certain questions would change our Constitutional form of government -- would require the United States to go to war without a declaration by Congress 55; would commit the Nation to the expenditure of money which Congress might not wish to appropriate 37; and would turn over to the balloting of nations the disposition of many of our most important Constitutional affairs. The Senate therefore proposed to ratify the treaty "with reservations and understandings." The Senate reserved to Congress the right to withdraw from the League and to be the sole judge as to whether its obligations had been fulfilled; declined to assume any obligation to preserve the territorial integrity or political independence of any other country, or to use the military or naval forces except as Congress might desire to do; declined to accept any mandate or guardianship over another nation except as Congress night determine; reserved to the Government of the United States exclusively the determination of domestic and political questions; declined to submit to arbitration or to the Council of the League of Nations the "long established policy commonly known as the Monroe Doctrine"; withheld its assent to the article of the treaty giving the Chinese province of Shantung to Japan; and declined to be limited in armament except as Congress might direct. Some other reservations were made. When the treaty with the reservations came to final vote in the Senate on March 19, 1920, it received forty-nine yeas and thirty-five nays, or seven votes fewer than the necessary two thirds to make a ratification. President Wilson declined to offer any concessions to the views of the Senate. and he shall nominate, and by and with the Advice and Consent of the Senate, shall appoint Ambassadors, other public Ministers and Consuls, Judges of the supreme Court, and all other Officers of the United States, whose Appointments are not herein otherwise provided for, and which shall be established by Law: but the Congress may by Law vest the appointment of such inferior Officers, as they think proper, in the President alone, in the Courts of Law, or in the Heads of Departments. 89 89 In the Constitutional Convention serious objection was taken to this provision, as the President might refuse his assent to necessary measures of Congress until appointments objectionable to the Senate had been confirmed. It was argued that this authority to appoint would invest him with power leading toward monarchy. Benjamin Franklin was of this belief. However, in practice the plan has worked very well. It is probably true that some Presidents have to some extent used their appointing power to influence Congress, refusing to fill offices within the control of members until a bill favored by him had been passed. On the other hand, it is believed that the Senate has sometimes used its power to approve appointments to influence the President to conform to its wishes. In a message dated March 1, 1886, President Cleveland declined to inform the Senate why he had removed a United States attorney from office without its consent, declaring that it. 
had no Constitutional authority in the matter; and he referred to "the threat proposed in the resolutions now before the Senate that no confirmation will be made unless the demands of that body be complied with" as insufficient to deter him from his duty to maintain the Chief Magistracy "unimpaired in all its dignity and vigor." For removing, in disregard of the Tenure of Office Act, Edwin M. Stanton, a hostile Secretary of War, President Johnson was impeached by the House, but the Senate failed to convict. The Tenure of Office Act was repealed on March 3, 1887, a year after the spirited message of President Cleveland just before mentioned, in which he spoke of the Act as passed by a Congress "overwhelmingly and bitterly opposed politically to the President" and "determined upon the subjugation of the Executive to legislative will." He considered the passage of the Act as an admission by Congress that it had no Constitutional basis for its claim. The first appointment to the cabinet to be denied confirmation by the Senate was that of Roger B. Taney (later Chief Justice of the United States) to the Secretaryship of the Treasury in 1834. He had helped Jackson undo the United States Bank. The President shall have Power to fill up all Vacancies that may happen during the Recess of the Senate, by granting Commissions which shall expire at the End of their next Session. 90 90 Like many another clause of the Constitution, this one was copied from a State. The Constitution of North Carolina had such a provision. When the Senate is not in session to confirm appointments, the President may nevertheless meet the needs of the public service. But should the Senate during its next session not confirm a recess appointment (as it is called) the appointment will expire with that session. This is to prevent the President from building up the executive power by putting in office men not deemed suitable by the Senate. Section 3. He shall from time to time give to the Congress Information of the State of the Union, 91 91 This mandate has been carried out by the annual and the special messages of the Presidents, the annual message at the opening of Congress in December and the special message when a matter of unusual importance comes up, such as a disagreement with a foreign government, or a disaster calling for the granting of relief, or the conservation of the forests and minerals, and the like. Washington and Adams delivered their messages orally. Jefferson, who was not a ready speaker, asked leave to submit his in writing, saying that Congress might then consider a message at its convenience. The written message remained the practice until 1913, when President Wilson revived the oral address to Congress. Because the President is required by the Constitution to give information to Congress from time to time, Congress from the beginning has claimed, conversely, the right to ask the President for information. Washington was called upon by the House of Representatives for papers regarding the defeat of General St. Clair's forces in 1791 by the Miami Indians. After a three-day consideration of the question by Washington and his cabinet, which was regarded as of the greatest importance as a precedent, it was decided that the House had a right to copies of the papers. In 1909 President Roosevelt refused to permit the Attorney-General to make answer to a resolution of the Senate asking why no legal proceedings had been begun against a corporation named for violation of the Sherman Anti-Trust law.
and recommend to their Consideration such Measures as he shall judge necessary and expedient; 92 92 In England the Parliament is supreme, and the King must sign any bill submitted to him, even his own death warrant, as one writer on English law expressed it. Therefore, English authorities have been astonished by the activity of our President in legislation, which often amounts (in the opinion of some) to domination. But it was the intention of the Fathers of the Republic that the President should be an active power. In addition to conferring upon him unqualified authority to sign or veto bills passed by Congress 38, they command him in this clause to recommend to the consideration of Congress such legislation as he should judge necessary and expedient. Through the reports of the members of his cabinet his information on the state of the country is complete, and he is therefore probably better equipped to make recommendations than any other man. At any rate, he is made by the Constitution an important part of the legislative mechanism of our government. he may, on extraordinary Occasions, convene both Houses, or either of them, 93 93 The Senate convenes in extra session immediately after the new President has taken the oath, to confirm his appointments, especially those of his cabinet officials. The House of Representatives never has been called in session alone. Both Houses have been called in special session, but not often. The first special session was called by President John Adams (1797) because of violations by France of the law of neutrality with respect to American commerce during a war with England. President Madison (1809) called a special session because of violations of neutrality by England, and later (1813) he called a special session regarding peace with England after the War of 1812. President Van Buren (1837) called a special session on account of financial troubles following the suspension, in Jackson's term, of the National Bank. Eighteen days after calling a special session (1841) for financial reasons, President Harrison died. A special session was called by President Pierce because of the failure of the previous session "to make provision for the support of the Army" and on account of many troubles with the Indians. The great special session was that called by President Lincoln for July 4, 1861, preparatory to conducting the Civil War. President Hayes (1877) called a special session because the previous one had failed to support the Army, and later (1879) he called another because the preceding Congress had failed to make an appropriation for the Legislative, the Judicial, and the Executive departments of the Government. President Cleveland called a special session (1893) on account of "the existence of an alarming and extraordinary business situation", which was caused by the act requiring the Government to purchase a fixed quantity of silver each year. President McKinley called a special session (1897) for the reason that "for more than three years" current expenditures had been greater than receipts, and he advocated a tariff law to raise the necessary revenue. and in Case of Disagreement between them, with Respect to the Time of Adjournment, he may adjourn them to such Time as he shall think proper; 94 94 It never has been necessary for the President to exercise this authority. The working of a written constitution furnishes many like illustrations of the potency of the mere existence of a clearly defined power.
Having in mind the very serious dissensions between the King of England and Parliament, and between the two Houses of Parliament themselves, respecting convening and adjourning, and the length of sessions, and the legal rights of one another, the framers of our Constitution provided that Congress shall assemble at least once a year 27; that neither House shall adjourn for more than three days without the consent of the other, nor to any other place than that in which the two Houses shall be sitting 31; and that, finally, if they cannot agree upon adjournment (but only when there is disagreement), the President may adjourn them. Charles I was determined that his ministers should not be responsible to Parliament. "Remember," he said, "that Parliaments are altogether in my power for their calling, sitting, and dissolution; and, therefore, as I find the fruits of them to be good or evil they are to continue or not to be." When in March, 1629, Charles sent orders for the dissolution of Parliament, the Speaker of the House of Commons was forcibly prevented from leaving the chair until the House had voted resolutions in condemnation of the King's illegal practices. "None have gone about to break Parliaments," declared John Eliot, in words which proved to be prophetic of the beheading of Charles, "but in the end Parliaments have broken them." he shall receive Ambassadors and other public Ministers; 95 95 This merely makes definite a matter of formality in international relations. Each government has some one to deal with the representatives of other nations, and the Constitution makes the President that one in this country. The Secretary of State acts for him in most affairs. He may refuse to receive a representative deemed objectionable. He may also dismiss an ambassador by giving him passports to leave the country, as has happened where the conduct of a representative has been openly offensive. President Cleveland (1888) gave the ambassador from England his passports because he wrote a letter during the presidential political campaign which was widely published and which made comments adverse to the Cleveland administration. The ambassador from Austria was so dismissed by President Wilson because of his interference in our affairs before we entered the World War. An objectionable minister who has not flagrantly offended may be quietly recalled by his government upon the request of the President. Almonte, the Mexican minister at Washington, demanded his passports and went home when (1845) Congress passed a resolution to accept the proposal of the Republic of Texas to come into the Union as a State. When the Department of State (first called Foreign Affairs) was established by Congress the law provided that the principal officer of the Department, now the Secretary of State, should carry on correspondence with other governments "in such manner that the President of the United States shall from time to time order or instruct." President Grant felt that his prerogative in this respect had been invaded by a joint resolution of Congress directing the Secretary of State "to acknowledge a dispatch of congratulation from the Argentine Republic and the high appreciation of Congress of the compliment thus conveyed." The President vetoed the resolution and said that the "adoption has inadvertently involved the exercise of a power which infringes upon the Constitutional rights of the executive." he shall take Care that the Laws be faithfully executed, and shall Commission all the Officers of the United States.
96 96 This Constitution and the laws of Congress made in pursuance of it, and the treaties, are declared to be 133 "the supreme law of the land,. . . anything in the constitution or laws of any State to the contrary notwithstanding." These National laws are over all. The courts in every State are "bound thereby." It is made the duty of the President to "take care" that these laws are observed and fully executed. Contrasting the Constitution with the Articles of Confederation in this respect, Woodrow Wilson's "History of the American People" (Vol. 3, p. 71) says: "It conferred upon the Federal Government powers which would make it at once strong and independent.... Its Laws were to be, not advisory, but imperative, and were to operate, not upon the States, but directly upon individuals, like the laws of any sovereign." Ruling that a United States marshal who had killed a man in the act of assaulting a Federal judge traveling in the performance of his duty could not be tried on a charge of murder under the laws of California, where the deed was done, the Supreme Court of the United States said (1890): "We hold it to be an incontrovertible principle that the Government of the United States may, by means of physical force, exercised through its official agents, execute on every foot of American soil the powers and functions that belong to it." When physical force is not necessary the United States executes the Constitution and its laws and treaties through its judicial tribunals and its marshals. Thus where the Supreme Court of a State undertook to release by habeas corpus a man in the custody of a United States officer on a charge of having violated an Act of Congress, its action was reversed (1858) by the Supreme Court of the United States, Chief Justice Taney saying: "For no one will suppose that a government which has now lasted nearly seventy years, enforcing its laws by its own tribunals and preserving the union of the States, could have lasted a single year or fulfilled the high trusts committed to it if offenses against its laws could not have been punished without the consent of the State in which the culprit was found . . . And the powers of the General Government, and of the States, although both exist and are exercised within the same territorial limits, are yet separate and distinct sovereignties, acting separately and independently of each other within their respective spheres. And the sphere of action appropriated to the United States is as far beyond the reach of the judicial process issued by a State judge or a State court as if the line of division was traced by landmarks and monuments visible to the eye." c75, c102 The duty of the President "to take care that the laws be faithfully executed" cannot be interfered with by the Judicial Department. In 1867 the Supreme Court of the United States held that it had no jurisdiction to entertain a bill for injunction presented by the State of Mississippi to prevent President Johnson and General Ord from executing two laws of Congress passed on March 2 and March 23 of that year over the President's veto and known as the Reconstruction Acts. 
The first of those acts recited that no legal government or adequate protection for life and property existed in Mississippi and some other southern States and that it was necessary that peace and good order be enforced until a loyal republican State government could be established, and it accordingly divided the States into five military districts and made it the duty of the President to assign an officer of the army to each district with a sufficient military force to maintain order and punish offenders. c59 The second act provided machinery for registering voters and forming new constitutions in the States. "But we are fully satisfied that this court has no jurisdiction of a bill to enjoin the President in the performance of his official duties," said Chief Justice Chase in denying the application. c87 In 1864 a citizen of Indiana was arrested by the military authorities, tried by a military court on the charge of disloyal acts, when the civil courts were "open and in the proper and unobstructed exercise of their judicial functions", and sentenced to be hanged. He was not a resident of a seceded State, nor a prisoner of war, nor a person in the military or naval service. The sentence had been under consideration by President Lincoln before his death, and it was finally approved by President Johnson as commander in chief 85 of the military forces. Holding that the prisoner should be discharged by writ of habeas corpus because the military tribunal had no legal existence, that "it is the birthright of every American citizen when charged with crime to be tried and punished according to law", and that "if in Indiana he conspired with bad men to assist the enemy he is punishable for it in the courts of Indiana", the Supreme Court of the United States made (1866) this comment upon the contention that the approval of the sentence by the President gave it legal value: "He is controlled by law and has his appropriate sphere of duty, which is to execute, not to make, the laws." c87 Section 4. The President, Vice President and all civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors. 96a 96a Treason and bribery were the worst offences in the public life of England at that time. By a later provision of the Constitution 113 the many and vague treasons in English law were reduced in this country to two definite faults: (1) waging war against the United States, or (2) adhering to its enemies. In 1787, while the Constitutional Convention was in session, Warren Hastings, the first Governor General of Bengal, was by the House of Commons impeached "of high crimes and misdemeanors." Hence, probably, the same words in our Constitution. As the charges against Hastings were of confiscation of property and oppressiveness in government, the English definition of the words may be inferred from the accusation. The managers of the impeachment of President Johnson contended that "an impeachable crime or misdemeanor... may consist of a violation of the Constitution, of law, of an official oath, or of duty, by an act committed or omitted, or, without violating a positive law, by abuse of discretionary powers from improper motives, or from any improper purpose."
http://www.barefootsworld.net/constit3.html
Politics of the United Kingdom The United Kingdom is a unitary democracy governed within the framework of a constitutional monarchy, in which the Monarch is the head of state and the Prime Minister of the United Kingdom is the head of government. Executive power is exercised by Her Majesty's Government, on behalf of and by the consent of the Monarch, as well as by the devolved Governments of Scotland and Wales, and the Northern Ireland Executive. Legislative power is vested in the two chambers of the Parliament of the United Kingdom, the House of Commons and the House of Lords, as well as in the Scottish Parliament and the Welsh and Northern Ireland assemblies. The judiciary is independent of the executive and the legislature. The highest national court is the Supreme Court of the United Kingdom. The UK political system is a multi-party system. Since the 1920s, the two largest political parties have been the Conservative Party and the Labour Party. Before the Labour Party rose in British politics, the Liberal Party was the other major political party along with the Conservatives. Though coalition and minority governments have been an occasional feature of parliamentary politics, the first-past-the-post electoral system used for general elections tends to maintain the dominance of these two parties, though each has in the past century relied upon a third party to deliver a working majority in Parliament. The current Conservative-Liberal Democrat coalition government, formed after the 2010 general election produced the first hung parliament since February 1974, is the first coalition government since 1945. With the partition of Ireland, Northern Ireland received home rule in 1920, though civil unrest meant direct rule was restored in 1972. Support for nationalist parties in Scotland and Wales led to proposals for devolution in the 1970s, though only in the 1990s did devolution actually happen. Today, Scotland, Wales and Northern Ireland each possess a legislature and executive, with devolution in Northern Ireland being conditional on participation in certain all-Ireland institutions. The United Kingdom remains responsible for non-devolved matters and, in the case of Northern Ireland, co-operates with the Republic of Ireland. It is a matter of dispute as to whether increased autonomy and devolution of executive and legislative powers have contributed to a reduction in support for independence. The principal pro-independence party, the Scottish National Party, won an overall majority of MSPs at the 2011 Scottish Parliament election and now forms the Scottish Government administration, with plans to hold a referendum on negotiating for independence. In Northern Ireland, the largest Pro-Belfast Agreement party, Sinn Féin, not only advocates Northern Ireland's unification with the Republic of Ireland, but also abstains from taking its elected seats in the Westminster Parliament, as this would entail taking an oath of allegiance to the British monarch. The constitution of the United Kingdom is uncodified, being made up of constitutional conventions, statutes and other elements such as EU law. This system of government, known as the Westminster system, has been adopted by other countries, especially those that were formerly parts of the British Empire. The United Kingdom is also responsible for several dependencies, which fall into two categories: the Crown dependencies, in the immediate vicinity of the UK, and British Overseas Territories, which originated as colonies of the British Empire.
The Crown The British Monarch, currently Her Majesty Queen Elizabeth II, is the Chief of State of the United Kingdom. Though she takes little direct part in government, the Crown remains the fount from which ultimate executive power over Government flows. These powers are known as the Royal Prerogative and cover a vast range of matters, from the issue or withdrawal of passports to the dismissal of the Prime Minister or even the Declaration of War. The powers are delegated from the Monarch personally, in the name of the Crown, and can be handed to various ministers, or other Officers of the Crown, and can purposely bypass the consent of Parliament. The head of Her Majesty's Government, the Prime Minister, also has weekly meetings with the sovereign, at which she may express her views, warn, or advise the Prime Minister on the Government's work. - The power to dismiss and appoint a Prime Minister - The power to dismiss and appoint other ministers - The power to summon, prorogue and dissolve Parliament - The power to grant or refuse Royal Assent to bills (making them valid and law) - The power to commission officers in the Armed Forces - The power to command the Armed Forces of the United Kingdom - The power to appoint Queen's Counsel - The power to issue and withdraw passports - The power to grant Prerogative of mercy (though Capital Punishment is abolished, this power is still used to remedy errors in sentence calculation) - The power to grant honours - The power to create corporations via Royal Charter - The power to ratify and make treaties - The power to declare War and Peace - The power to deploy the Armed Forces overseas - The power to recognize states - The power to accredit and receive diplomats Executive power in the United Kingdom is exercised by the Sovereign, Queen Elizabeth II, via Her Majesty's Government and the devolved national authorities - the Scottish Government, the Welsh Assembly Government and the Northern Ireland Executive. The United Kingdom Government The monarch appoints a Prime Minister as the head of Her Majesty's Government in the United Kingdom, guided by the strict convention that the Prime Minister should be the member of the House of Commons most likely to be able to form a Government with the support of that House. In practice, this means that the leader of the political party with an absolute majority of seats in the House of Commons is chosen to be the Prime Minister. If no party has an absolute majority, the leader of the largest party is given the first opportunity to form a coalition. The Prime Minister then selects the other Ministers who make up the Government and act as political heads of the various Government Departments. About twenty of the most senior government ministers make up the Cabinet and approximately 100 ministers in total comprise the government. In accordance with constitutional convention, all ministers within the government are either Members of Parliament or peers in the House of Lords. As in some other parliamentary systems of government (especially those based upon the Westminster System), the executive (called "the government") is drawn from and is answerable to Parliament - a successful vote of no confidence will force the government either to resign or to seek a parliamentary dissolution and a general election. In practice, members of parliament of all major parties are strictly controlled by whips who try to ensure they vote according to party policy.
If the government has a large majority, then they are very unlikely to lose enough votes to be unable to pass legislation. The Prime Minister and the Cabinet The Prime Minister is the most senior minister in the Cabinet. She/he is responsible for chairing Cabinet meetings, selecting Cabinet ministers (and all other positions in Her Majesty's government), and formulating government policy. The Prime Minister is the de facto leader of the UK government, since s/he exercises executive functions that are nominally vested in the sovereign (by way of the Royal Prerogatives). Historically, the British monarch was the sole source of executive powers in the government. However, following the rule of the Hanoverian monarchs, an arrangement of a "Prime Minister" chairing and leading the Cabinet began to emerge. Over time, this arrangement became the effective executive branch of government, as it assumed the day-to-day functioning of the British government away from the sovereign. Theoretically, the Prime Minister is primus inter pares (,i.e. Latin for "first among equals") among his/her Cabinet colleagues. While the Prime Minister is the senior Cabinet Minister, s/he is theoretically bound to make executive decisions in a collective fashion with the other Cabinet ministers. The Cabinet, along with the PM, consists of Secretaries of State from the various government departments, the Lord High Chancellor, the Lord Privy Seal, the President of the Board of Trade, the Chancellor of the Duchy of Lancaster and Ministers without portfolio. Cabinet meetings are typically held weekly, while Parliament is in session. Government departments and the Civil Service The Government of the United Kingdom contains a number of ministries known mainly, though not exclusively as departments, e.g., Ministry of Defence. These are politically led by a Government Minister who is often a Secretary of State and member of the Cabinet. He or she may also be supported by a number of junior Ministers. In practice, several government departments and Ministers have responsibilities that cover England alone, with devolved bodies having responsibility for Scotland, Wales and Northern Ireland, (for example - the Department of Health), or responsibilities that mainly focus on England (such as the Department for Education). Implementation of the Minister's decisions is carried out by a permanent politically neutral organisation known as the civil service. Its constitutional role is to support the Government of the day regardless of which political party is in power. Unlike some other democracies, senior civil servants remain in post upon a change of Government. Administrative management of the Department is led by a head civil servant known in most Departments as a Permanent Secretary. The majority of the civil service staff in fact work in executive agencies, which are separate operational organisations reporting to Departments of State. Devolved national administrations Scottish Government The Scottish Government is responsible for all issues that are not explicitly reserved to the United Kingdom Parliament at Westminster, by the Scotland Act; including NHS Scotland, education, justice, rural affairs, and transport. It manages an annual budget of more than £25 billion. The government is led by the First Minister, assisted by various Ministers with individual portfolios and remits. The Scottish Parliament nominates a Member to be appointed as First Minister by the Queen. 
The First Minister then appoints his Ministers (now known as Cabinet Secretaries) and junior Ministers, subject to approval by the Parliament. The First Minister, the Ministers (but not junior ministers), the Lord Advocate and Solicitor General are the Members of the 'Scottish Executive', as set out in the Scotland Act 1998. They are collectively known as "the Scottish Ministers". Welsh Government The Welsh Government and the National Assembly for Wales have more limited powers than those devolved to Scotland, although following the passing of the Government of Wales Act 2006 and the Welsh devolution referendum, 2011, the Assembly can now legislate in some areas through an Act of the National Assembly for Wales. Following the 2011 election, Welsh Labour held exactly half of the seats in the Assembly, falling just short of an overall majority. A Welsh Labour Government was subsequently formed headed by Carwyn Jones. Northern Ireland Executive The Northern Ireland Executive and Assembly have powers closer to those already devolved to Scotland. The Northern Ireland Executive is led by a diarchy, currently First Minister Peter Robinson (Democratic Unionist Party) and deputy First Minister Martin McGuinness (Sinn Féin). The UK Parliament is the supreme legislative body in the United Kingdom (i.e., there is parliamentary sovereignty), and Government is drawn from and answerable to it. Parliament is bicameral, consisting of the House of Commons and the House of Lords. There is also a devolved Scottish Parliament and devolved Assemblies in Wales and Northern Ireland, with varying degrees of legislative authority. UK Parliament House of Commons The Countries of the United Kingdom are divided into parliamentary constituencies of broadly equal population by the four Boundary Commissions. Each constituency elects a Member of Parliament (MP) to the House of Commons at General Elections and, if required, at by-elections. As of 2010 there are 650 constituencies (there were 646 before that year's general election. Of the 650 MPs, all but one - Lady Sylvia Hermon - belong to a political party. In modern times, all Prime Ministers and Leaders of the Opposition have been drawn from the Commons, not the Lords. Alec Douglas-Home resigned from his peerages days after becoming Prime Minister in 1963, and the last Prime Minister before him from the Lords left in 1902 (the Marquess of Salisbury). One party usually has a majority in Parliament, because of the use of the First Past the Post electoral system, which has been conducive in creating the current two party system. The monarch normally asks a person commissioned to form a government simply whether it can survive in the House of Commons, something which majority governments are expected to be able to do. In exceptional circumstances the monarch asks someone to 'form a government' with a parliamentary minority which in the event of no party having a majority requires the formation of a coalition government. This option is only ever taken at a time of national emergency, such as war-time. It was given in 1916 to Andrew Bonar Law, and when he declined, to David Lloyd George and in 1940 to Winston Churchill. A government is not formed by a vote of the House of Commons, it is a commission from the monarch. The House of Commons gets its first chance to indicate confidence in the new government when it votes on the Speech from the Throne (the legislative programme proposed by the new government). 
House of Lords The House of Lords was previously a largely hereditary aristocratic chamber, although including life peers, and Lords Spiritual. It is currently mid-way through extensive reforms, the most recent of these being enacted in the House of Lords Act 1999. The house consists of two very different types of member, the Lords Temporal and Lords Spiritual. Lords Temporal include appointed members (life peers with no hereditary right for their descendants to sit in the house) and ninety-two remaining hereditary peers, elected from among, and by, the holders of titles which previously gave a seat in the House of Lords. The Lords Spiritual represent the established Church of England and number twenty-six: the Five Ancient Sees (Canterbury, York, London, Winchester and Durham), and the 21 next-most senior bishops. The House of Lords currently acts to review legislation initiated by the House of Commons, with the power to propose amendments, and can exercise a suspensive veto. This allows it to delay legislation if it does not approve it for twelve months. However, the use of vetoes is limited by convention and by the operation of the Parliament Acts 1911 and 1949: the Lords may not veto the "money bills" or major manifesto promises (see Salisbury convention). Persistent use of the veto can also be overturned by the Commons, under a provision of the Parliament Act 1911. Often governments will accept changes in legislation in order to avoid both the time delay, and the negative publicity of being seen to clash with the Lords. However the Lords still retain a full veto in acts which would extend the life of Parliament beyond the 5 year term limit introduced by the Parliament Act 1911. The House of Lords was replaced as the final court of appeal on civil cases within the United Kingdom on 1 October 2009, by the Supreme Court of the United Kingdom. Devolved national legislatures Though the UK parliament remains the sovereign parliament, Scotland has a parliament and Wales and Northern Ireland have assemblies. De jure, each could have its powers broadened, narrowed or changed by an Act of the UK Parliament. However, Scotland has a tradition of popular sovereignty as opposed to parliamentary sovereignty and the fact that the Scottish parliament was established following a referendum would make it politically difficult to significantly alter its powers without popular consent. The UK is therefore a unitary state with a devolved system of government. This contrasts with a federal system, in which sub-parliaments or state parliaments and assemblies have a clearly defined constitutional right to exist and a right to exercise certain constitutionally guaranteed and defined functions and cannot be unilaterally abolished by Acts of the central parliament. England, therefore, is the only country in the UK not to have a devolved English parliament. However, senior politicians of all main parties have voiced concerns in regard to the West Lothian Question, which is raised where certain policies for England are set by MPs from all four constituent nations whereas similar policies for Scotland or Wales might be decided in the devolved assemblies by legislators from those countries alone. 
Alternative proposals for English regional government have stalled, following a poorly received referendum on devolved government for the North East of England, which had hitherto been considered the region most in favour of the idea (with the exception of Cornwall, where there is widespread support for a Cornish Assembly, including from all five Cornish MPs). England is therefore governed according to the balance of parties across the whole of the United Kingdom. The government has no plans to establish an English parliament or assembly although several pressure groups are calling for one. One of their main arguments is that MPs (and thus voters) from different parts of the UK have inconsistent powers. Currently an MP from Scotland can vote on legislation which affects only England but MPs from England (or indeed Scotland) cannot vote on matters devolved to the Scottish parliament. Indeed, the former Prime Minister Gordon Brown, who is an MP for a Scottish constituency, introduced some laws that only affect England and not his own constituency. This anomaly is known as the West Lothian question. The policy of the UK Government in England was to establish elected regional assemblies with no legislative powers. The London Assembly was the first of these, established in 2000, following a referendum in 1998, but further plans were abandoned following rejection of a proposal for an elected assembly in North East England in a referendum in 2004. Unelected regional assemblies remain in place in eight regions of England. Scottish Parliament The Scottish Parliament is the national, unicameral legislature of Scotland, located in the Holyrood area of the capital Edinburgh. The Parliament, informally referred to as "Holyrood" (cf. "Westminster"), is a democratically elected body comprising 129 members who are known as Members of the Scottish Parliament, or MSPs. Members are elected for four-year terms under the mixed member proportional representation system. As a result, 73 MSPs represent individual geographical constituencies elected by the plurality ("first past the post") system, with a further 56 returned from eight additional member regions, each electing seven MSPs. The current Scottish Parliament was established by the Scotland Act 1998 and its first meeting as a devolved legislature was on 12 May 1999. The parliament has the power to pass laws and has limited tax-varying capability. Another of its roles is to hold the Scottish Government to account. The "devolved matters" over which it has responsibility include education, health, agriculture, and justice. A degree of domestic authority, and all foreign policy, remains with the UK Parliament in Westminster. The public take part in Parliament in a way that is not the case at Westminster, through Cross-Party Groups on policy topics, which interested members of the public may join and whose meetings they attend alongside Members of the Scottish Parliament (MSPs). The resurgence in Celtic language and identity, as well as 'regional' politics and development, has contributed to forces pulling against the unity of the state. This was clearly demonstrated when - although some argue it was influenced by general public disillusionment with Labour - the Scottish National Party (SNP) became the largest party in the Scottish Parliament by one seat. Alex Salmond (leader of the SNP) has since made history by becoming the first First Minister of Scotland from a party other than Labour. The SNP initially governed as a minority administration at Holyrood, before winning an overall majority at the 2011 election.
Nevertheless, recent opinion polls have suggested that nationalism (i.e., a desire to break up the UK) is rising within Scotland and England. However, the polls have been known to be inaccurate in the past (for example, in the run up to the 1992 General Election). Moreover, polls carried out in the 1970s and the 1990s showed similar results, only to be debunked at elections. While support for breaking up the UK was strongest in Scotland, there was still a clear lead for unionism over nationalism. However, an opinion poll in May 2012 showed support for independence at only 31%, a record low, showing the chance of independence being very low. National Assembly for Wales The National Assembly for Wales is the devolved assembly with power to make legislation in Wales. The Assembly comprises 60 members, who are known as Assembly Members, or AMs (Welsh: Aelod y Cynulliad). Members are elected for four-year terms under an additional members system, where 40 AMs represent geographical constituencies elected by the plurality system, and 20 AMs from five electoral regions using the d'Hondt method of proportional representation. The Assembly was created by the Government of Wales Act 1998, which followed a referendum in 1997. On its creation, most of the powers of the Welsh Office and Secretary of State for Wales were transferred to it. The Assembly had no powers to initiate primary legislation until limited law-making powers were gained through the Government of Wales Act 2006. Its primary law-making powers were enhanced following a Yes vote in the referendum on 3 March 2011, making it possible for it to legislate without having to consult the UK parliament, nor the Secretary of State for Wales in the 20 areas that are devolved. Northern Ireland Assembly The government of Northern Ireland was established as a result of the 1998 Good Friday Agreement. This created the Northern Ireland Assembly. The Assembly is a unicameral body consisting of 108 members elected under the Single Transferable Vote form of proportional representation. The Assembly is based on the principle of power-sharing, in order to ensure that both communities in Northern Ireland, unionist and nationalist, participate in governing the region. It has power to legislate in a wide range of areas and to elect the Northern Ireland Executive (cabinet). It sits at Parliament Buildings at Stormont in Belfast. The Assembly has authority to legislate in a field of competences known as "transferred matters". These matters are not explicitly enumerated in the Northern Ireland Act 1998 but instead include any competence not explicitly retained by the Parliament at Westminster. Powers reserved by Westminster are divided into "excepted matters", which it retains indefinitely, and "reserved matters", which may be transferred to the competence of the Northern Ireland Assembly at a future date. Health, criminal law and education are "transferred" while royal relations are all "excepted". While the Assembly was in suspension, due to issues involving the main parties and the Provisional Irish Republican Army (IRA), its legislative powers were exercised by the UK government, which effectively had power to legislate by decree. Laws that would normally be within the competence of the Assembly were passed by the UK government in the form of Orders-in-Council rather than legislative acts. 
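The d'Hondt method mentioned above for the regional seats of the National Assembly for Wales is a simple highest-averages rule. The sketch below is illustrative only: the party names and vote totals are invented, and a real Additional Member System count also offsets each party's constituency wins when setting the divisors.

from collections import Counter

def dhondt(votes, seats):
    # Each party's vote total is divided by one more than the seats it has
    # already won; the next seat goes to the highest resulting quotient.
    won = Counter()
    for _ in range(seats):
        best = max(votes, key=lambda party: votes[party] / (won[party] + 1))
        won[best] += 1
    return won

# Hypothetical regional list votes, five top-up seats to fill.
print(dhondt({"Party A": 100000, "Party B": 80000, "Party C": 30000}, 5))
# Counter({'Party A': 3, 'Party B': 2})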
There has been a significant decrease in violence over the last twenty years, though the situation remains tense, with the more hard-line parties such as Sinn Féin and the Democratic Unionist Party now holding the most parliamentary seats (see Demographics and politics of Northern Ireland). The United Kingdom does not have a single legal system, because it was created by the political union of previously independent countries, with the terms of the Treaty of Union guaranteeing the continued existence of Scotland's separate legal system. Today the UK has three distinct systems of law: English law, Northern Ireland law and Scots law. Recent constitutional changes saw a new Supreme Court of the United Kingdom come into being in October 2009 that took on the appeal functions of the Appellate Committee of the House of Lords. The Judicial Committee of the Privy Council, comprising the same members as the Supreme Court, is the highest court of appeal for several independent Commonwealth countries, the UK overseas territories, and the British crown dependencies. England, Wales and Northern Ireland Both English law, which applies in England and Wales, and Northern Ireland law are based on common-law principles. The essence of common-law is that law is made by judges sitting in courts, applying their common sense and knowledge of legal precedent (stare decisis) to the facts before them. The Courts of England and Wales are headed by the Senior Courts of England and Wales, consisting of the Court of Appeal, the High Court of Justice (for civil cases) and the Crown Court (for criminal cases). The Supreme Court of the United Kingdom is the highest court in the land for both criminal and civil cases in England, Wales, and Northern Ireland and any decision it makes is binding on every other court in the hierarchy. Scotland Scots law, a hybrid system based on both common-law and civil-law principles, applies in Scotland. The chief courts are the Court of Session, for civil cases, and the High Court of Justiciary, for criminal cases. The Supreme Court of the United Kingdom serves as the highest court of appeal for civil cases under Scots law. Sheriff courts deal with most civil and criminal cases, including conducting criminal trials with a jury, known as sheriff solemn court, or with a sheriff and no jury, known as sheriff summary court. The Sheriff courts provide a local court service with 49 Sheriff courts organised across six Sheriffdoms. Electoral systems Various electoral systems are used in the UK: - The first-past-the-post system is used for general elections to the House of Commons, and also for some local government elections in England and Wales. - The plurality-at-large voting (the bloc vote) is also used for some local government elections in England and Wales. - The Additional Member System is used for elections to the Scottish Parliament, the National Assembly for Wales (Welsh Assembly) and London Assembly. The system is implemented differently in each of the three locations. - The single transferable vote system is used in Northern Ireland to elect the Northern Ireland Assembly, local councils, and Members of the European Parliament, and in Scotland to elect local councils. - The Alternative Vote system is used for by-elections in Scottish local councils (a counting sketch follows this list). - The party-list proportional representation system is used for European Parliament elections in England, Scotland and Wales. - The supplementary vote is used to elect directly-elected mayors in England, including the mayor of London.
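The Alternative Vote noted in the list above, and at issue in the referendum discussed next, is counted by instant runoff. The sketch below is illustrative only, with made-up ballots and without the detailed tie-breaking rules a real count would need.

from collections import Counter

def alternative_vote(ballots):
    # Repeatedly eliminate the weakest candidate and transfer those ballots
    # to their next surviving preference until someone has a majority.
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        candidates.remove(min(tally, key=tally.get))

# Hypothetical ballots: no candidate has a majority of first preferences,
# so C is eliminated and C's ballots transfer to B, who then wins.
ballots = [["A"], ["A"], ["A"], ["B"], ["B"], ["B"], ["C", "B"], ["C", "B"]]
print(alternative_vote(ballots))  # B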
The use of first-past-the-post to elect members of Parliament is unusual among European nations. The use of the system means that MPs are sometimes elected from individual constituencies with a plurality (receiving more votes than any other candidate, but not an absolute majority of 50 percent plus one vote), due to three or more candidates receiving a significant share of the vote. Elections and political parties in the United Kingdom are affected by Duverger's law, the political science principle which states that plurality voting systems, such as first-past-the-post, tend to lead to the development of two-party systems. The UK, like several other states, has sometimes been called a "two-and-a-half" party system, because parliamentary politics is dominated by the Labour Party and Conservative Party, with the Liberal Democrats holding a significant number of seats (but still substantially fewer than Labour and the Conservatives), and several small parties (some of them regional or nationalist) trailing far behind in number of seats. In the last few general elections, voter mandates for Westminster in the 40% range have been swung into 60% parliamentary majorities. No single party has won a majority of the popular vote since the Third National Government of Stanley Baldwin in 1935. On two occasions since World War II - 1951 and February 1974 - a party that came in second in the popular vote actually came out with the larger number of seats. Electoral reform for parliamentary elections has been proposed many times. The Jenkins Commission report in October 1998 suggested implementing the Alternative Vote Top-up (also called Alternative Vote Plus or AV+) in parliamentary elections. Under this proposal, most MPs would be directly elected from constituencies by the alternative vote, with a number of additional members elected from "top-up lists." However, no action was taken by the Labour government at the time. There are a number of groups in the UK campaigning for electoral reform, including the Electoral Reform Society, Make Votes Count Coalition and Fairshare. The 2010 general election resulted in a hung parliament (no single party being able to command a majority in the House of Commons). This was only the second general election since World War II to return a hung parliament, the first being the February 1974 election. The Conservatives gained the most seats (ending 13 years of Labour government) and the largest percentage of the popular vote, but fell 20 seats short of a majority. The Conservatives and Liberal Democrats entered into a new coalition government, headed by David Cameron. Under the terms of the coalition agreement the government committed itself to hold a referendum in May 2011 on whether to change parliamentary elections from first-past-the-post to AV. Electoral reform was a major priority for the Liberal Democrats, who favour proportional representation but were able to negotiate only a referendum on AV with the Conservatives. The coalition partners plan to campaign on opposite sides, with the Liberal Democrats supporting AV and the Conservatives opposing it. Political parties There are two main parties in the United Kingdom: the Conservative Party, and the Labour Party. There is also a significant third party, the Liberal Democrats. The modern Conservative Party was founded in 1834 and is an outgrowth of the Tory movement or party, which began in 1678. Today it is still colloquially referred to as the Tory Party and its members as Tories.
The Liberal Democrats were formed in 1988 by a merger of the Liberal Party and the Social Democratic Party (SDP), a Labour breakaway formed in 1981. The Liberals and SDP had contested elections together as the SDP–Liberal Alliance for seven years before. The modern Liberal Party had been founded in 1859 as an outgrowth of the Whig movement or party (which began at the same time as the Tory party and was its historical rival) as well as the Radical and Peelite tendencies. The Liberal Party was one of the two dominant parties (along with the Conservatives) from its founding until the 1920s, when it rapidly declined and was supplanted on the left by the Labour Party, which was founded in 1900 and formed its first government in 1924. Since that time, the Labour and Conservative parties have been dominant, with the Liberal Democrats also holding a significant number of seats and increasing their share of the vote in parliamentary general elections in the four elections since 1992. Minor parties also hold seats in parliament: - The Scottish National Party, founded in 1934, advocates for Scottish independence and has had continuous representation in Parliament since 1967. The SNP currently leads a majority government in the Scottish Parliament. - Plaid Cymru, the Welsh nationalist party, has had continuous representation in Parliament since 1974. Plaid has the third-largest number of seats in the National Assembly for Wales, after Welsh Labour and the Welsh Conservative & Unionist Party, and participated with the former in the coalition agreement in the Assembly before the 2011 election. - In Northern Ireland, all 18 MPs are from parties that only contest elections in Northern Ireland (except for Sinn Féin, which contests elections in both Northern Ireland and the Republic of Ireland). The unionist Democratic Unionist Party (DUP), the republican Sinn Féin, the nationalist Social Democratic and Labour Party (SDLP), and the nonsectarian Alliance Party of Northern Ireland all gained seats in Parliament in the 2010 election, the Alliance Party for the first time. Sinn Féin has a policy of abstentionism and so its MPs refuse to take their seats in Parliament. The DUP, Sinn Féin, the Ulster Unionist Party (UUP), and the SDLP are considered the four major parties in Northern Ireland, holding the most seats in the Northern Ireland Assembly. In the most recent general election, in 2010, the result was a hung parliament, and after several days of negotiations the Labour Party left government, with the Conservatives and the Liberal Democrats now operating a coalition government. Conservatives (Tories) The Conservative Party won the largest number of seats at the 2010 general election, returning 307 MPs, though not enough to make an overall majority. As a result of negotiations following the election, they entered a formal coalition with the Liberal Democrats to form a majority government. The Conservative party can trace its origin back to 1662, with the Court Party and the Country Party being formed in the aftermath of the English Civil War. The Court Party soon became known as the Tories, a name that has stuck despite the official name being 'Conservative'. The term "Tory" originates from the Exclusion Bill crisis of 1678-1681 - the Whigs were those who supported the exclusion of the Roman Catholic Duke of York from the thrones of England, Ireland and Scotland, and the Tories were those who opposed it.
Both names were originally insults: a "whiggamore" was a horse drover (see Whiggamore Raid), and a "tory" (Tóraidhe) was an Irish term for an outlaw, later applied to Irish Confederates and Irish Royalists during the Wars of the Three Kingdoms. Generally, the Tories were associated with the lesser gentry and the Church of England, while the Whigs were more associated with trade, money, larger land holders (or "land magnates"), expansion and tolerance of Catholicism. The Rochdale Radicals were a group of more extreme reformists who were also heavily involved in the cooperative movement. They sought to bring about a more equal society, and are considered by modern standards to be left-wing. After becoming associated with repression of popular discontent in the years after 1815, the Tories underwent a fundamental transformation under the influence of Robert Peel, himself an industrialist rather than a landowner, who in his 1834 "Tamworth Manifesto" outlined a new "Conservative" philosophy of reforming ills while conserving the good. Though Peel's supporters subsequently split from their colleagues over the issue of free trade in 1846, ultimately joining the Whigs and the Radicals to form what would become the Liberal Party, Peel's version of the party's underlying outlook was retained by the remaining Tories, who adopted his label of Conservative as the official name of their party. The crushing defeat of the 1997 election saw the Conservative Party lose over half of the seats it had won in 1992 and forced the party to re-align itself with public perceptions. In 2008, the Conservative Party formed a pact with the Ulster Unionist Party to select joint candidates for European and House of Commons elections; this angered the DUP because, by splitting the unionist vote, republican parties could be elected in some areas. After thirteen years as the official opposition, the party returned to power as part of a coalition with the Liberal Democrats in 2010. Historically, the party has been the mainland party most preoccupied with British unionism, as attested by the party's full name, the Conservative & Unionist Party. This resulted in the merger between the Conservatives and Joseph Chamberlain's Liberal Unionist Party, composed of former Liberals who opposed Irish home rule. The unionist tendency is still in evidence today, manifesting sometimes as scepticism about or opposition to devolution, firm support for the continued existence of the United Kingdom in the face of separatist nationalism, and a historic link with the cultural unionism of Northern Ireland.
Labour Party
The Labour Party won the second largest number of seats in the House of Commons at the 2010 general election, with 258 MPs. The history of the Labour Party goes back to 1900, when a Labour Representation Committee was established; it changed its name to "The Labour Party" in 1906. After the First World War, the rise of Labour led to the demise of the Liberal Party as the main reformist force in British politics. The existence of the Labour Party on the left of British politics led to a slow waning of energy from the Liberal Party, which has consequently assumed third place in national politics. After performing poorly in the elections of 1922, 1923 and 1924, the Liberal Party was superseded by the Labour Party as the party of the left. Following two brief spells in minority governments in 1924 and 1929-1931, the Labour Party had its first true victory after World War II in the 1945 "khaki election".
Throughout the rest of the twentieth century, Labour governments alternated with Conservative governments. The Labour Party suffered the "wilderness years" of 1951-1964 (three straight general election defeats) and 1979-1997 (four straight general election defeats). During this second period, Margaret Thatcher, who became leader of the Conservative Party in 1975, made a fundamental change to Conservative policies, turning the Conservative Party into an economically neoliberal party. In the general election of 1979 she defeated James Callaghan's troubled Labour government after the winter of discontent. For most of the 1980s and the 1990s, Conservative governments under Thatcher and her successor John Major pursued policies of privatization, anti-trade-unionism and, for a time, monetarism, now known collectively as Thatcherism. The Labour Party elected the left-winger Michael Foot as its leader after the 1979 election defeat, and he responded to dissatisfaction within the Labour Party by pursuing a number of radical policies developed by its grass-roots members. In 1981 several right-wing Labour MPs formed a breakaway group called the Social Democratic Party (SDP), a move which split Labour and is widely believed to have made Labour unelectable for a decade. The SDP formed an alliance with the Liberal Party which contested the 1983 and 1987 general elections as a centrist alternative to Labour and the Conservatives. After some initial success, the SDP did not prosper (partly due to its unfavourable distribution of votes under the FPTP electoral system), and was accused by some of splitting the anti-Conservative vote. The SDP eventually merged with the Liberal Party to form the Liberal Democrats in 1988. Support for the new party has increased since then, and the Liberal Democrats (often referred to as the Lib Dems) gained an increased number of seats in the House of Commons in 1997 and 2001. The Labour Party was badly defeated in the Conservative landslide of the 1983 general election, and Michael Foot was replaced shortly thereafter by Neil Kinnock as leader. Kinnock expelled the far-left Militant tendency group (now called the Socialist Party of England and Wales) and moderated many of the party's policies. He was in turn replaced by John Smith after Labour defeats in the 1987 and 1992 general elections. Tony Blair became leader of the Labour Party after John Smith's sudden death from a heart attack in 1994. He continued to move the Labour Party towards the "centre" by loosening links with the unions and embracing many of Margaret Thatcher's liberal economic policies. This, coupled with the professionalising of the party machine's approach to the media, helped Labour win a historic landslide in the 1997 general election, after 18 years of Conservative government. Some observers say the Labour Party had by then morphed from a democratic socialist party into a social democratic party, a process which delivered three general election victories but alienated some of its core base, leading to the formation of the Socialist Labour Party.
Liberal Democrats
The Liberal Democrats won the third largest number of seats at the 2010 general election, returning 57 MPs. The Conservative Party failed to win an overall majority, and the Liberal Democrats entered government for the first time as part of a coalition.
The Liberal Democrats were formed in 1988 by a merger of the Liberal Party with the Social Democratic Party, but can trace their origin back to the Whigs and the Rochdale Radicals, who evolved into the Liberal Party. The term "Liberal Party" was first used officially in 1868, though it had been in use colloquially for decades beforehand. The Liberal Party formed a government in 1868 and then alternated with the Conservative Party as the party of government throughout the late 19th century and early 20th century. The Liberal Democrats place heavy emphasis on constitutional and political reform, including changing the voting system for general elections (the UK Alternative Vote referendum, 2011), abolishing the House of Lords and replacing it with a 300-member elected Senate, introducing fixed five-year Parliaments, and introducing a National Register of Lobbyists. Some members have been described as obsessed with House of Lords reform, including the party's leader, Nick Clegg.
Scottish and Welsh Nationalists
Members of the Scottish National Party and Plaid Cymru work together as a single parliamentary group following a formal pact signed in 1986. This group currently has 9 MPs. The Scottish National Party has enjoyed parliamentary representation continuously since 1967 and had 6 MPs elected at the 2010 election. Following the 2007 Scottish Parliament elections, the SNP emerged as the largest party with 47 MSPs and formed a minority government with Alex Salmond as First Minister. After the 2011 Scottish election, the SNP won enough seats to form a majority government. Plaid Cymru has enjoyed parliamentary representation continuously since 1974 and had 3 MPs elected at the 2010 election. Following the 2007 Welsh Assembly elections, they joined Labour as the junior partner in a coalition government, but fell to become the third largest party in the Assembly after the 2011 Assembly elections and went into opposition.
Northern Ireland parties
The Democratic Unionist Party had 8 MPs elected at the 2010 election. Founded in 1971 by Ian Paisley, it has grown to become the larger of the two main unionist political parties in Northern Ireland. Other Northern Ireland parties represented at Westminster include the Social Democratic and Labour Party (3 MPs), the Alliance Party of Northern Ireland (1 MP) and Sinn Féin (5 MPs). Sinn Féin MPs refuse to take their seats in what they regard as a "foreign" parliament.
Other parliamentary parties
The Green Party of England and Wales gained its second MP, Caroline Lucas, in the 2010 general election (the first was Cynog Dafis, elected for Ceredigion in 1992 on a joint Plaid Cymru/Green Party ticket). It also has seats in the European Parliament, two seats on the London Assembly and around 120 local councillors. There are usually a small number of Independent politicians in Parliament with no party allegiance. In modern times, this has usually occurred when a sitting member leaves their party, and some such MPs have been re-elected as independents. The only current Independent MP is Lady Hermon, previously of the Ulster Unionist Party. However, since 1950 only two new members have been elected as independents without having ever stood for a major party:
- Martin Bell represented the Tatton constituency in Cheshire between 1997 and 2001.
He was elected following a "sleaze" scandal involving the sitting Conservative MP, Neil Hamilton. Bell, a BBC journalist, stood as an anticorruption independent candidate, and the Labour and Liberal Democrat parties withdrew their candidates from the election.
- Dr Richard Taylor MP was elected for the Wyre Forest constituency in 2001 on a platform opposing the closure of Kidderminster hospital. He later established Health Concern, the party under which he ran in 2005.
Non-Parliamentary political parties
Other UK political parties exist, but generally do not succeed in returning MPs to Parliament. The United Kingdom Independence Party (UKIP) has 13 seats in the European Parliament, as well as seats in the House of Lords and a number of local councillors, and has become an emerging alternative party for some voters. Campaigning mainly on issues such as immigration and EU withdrawal, UKIP has posed a recent challenge to the Conservative-Labour duopoly after securing more votes than either party at the Eastleigh by-election. On 22 April 2008 it welcomed the defection of Bob Spink, MP for Castle Point, to date its only MP. However, Bob Spink later claimed never to have joined UKIP and does not sit as a UKIP MP. Two UKIP members were elected to the London Assembly in 2000, but they quit the party in February 2005 to join Veritas, which they quit in September 2005 to sit as One London members. They were not re-elected in 2008. Other parties include: the Socialist Labour Party, the Free England Party, the Communist Party of Britain, the Socialist Party (England and Wales), the Socialist Workers Party, the Scottish Socialist Party, the Liberal Party, Mebyon Kernow (a Cornish nationalist party) in Cornwall, Veritas, the Communist Left Alliance (in Fife) and the Pirate Party UK. Several local parties contest only within a specific area: a single county, borough or district. Examples include the Better Bedford Independent Party, which was one of the dominant parties in Bedford Borough Council and was led by Bedford's former Mayor, Frank Branston. The most notable local party is Health Concern, which controlled a single seat in the UK Parliament from 2001 to 2010. The Jury Team, launched in March 2009 and described as a "non-party party", is an umbrella organisation seeking to increase the number of independent members of both the domestic and European Parliaments in Great Britain. The Official Monster Raving Loony Party was founded in 1983. The OMRLP are distinguished by having a deliberately bizarre manifesto, which contains things that seem to be impossible or too absurd to implement - usually to highlight what they see as real-life absurdities. In spite of (or perhaps because of) a reputation more satirical than serious, they have routinely been successful in local elections.
Current political landscape
Since winning the largest number of seats and votes in the 2010 general election, the Conservatives under David Cameron have fallen behind the Labour Party, now led by Ed Miliband. Their coalition partners have also experienced a decline in support in opinion polls. At the same time, support for the UK Independence Party has shown a considerable advance, with some polls now placing it in third place ahead of the Lib Dems. UKIP's growing strength was illustrated by the result of the Eastleigh by-election, in which the party advanced by 24% to take second place from the Conservatives, less than 5% behind the Lib Dems, who retained the seat.
In Scotland, the Scottish National Party made some strong advances, winning the Scottish parliamentary election in May 2007 and gaining support in most national opinion polls since then. In July 2008, the SNP achieved a remarkable by-election victory in Glasgow East, winning the third-safest Labour seat in Scotland with a swing of 22.54%. However, in October of the same year, despite confident public predictions by the SNP's leader Alex Salmond that they would win another by-election in Glenrothes, the seat was comfortably won by Labour with a majority of 6,737 and an increased share of the vote. Given that the SNP had won the equivalent Scottish Parliament seat of Central Fife in 2007, this was viewed as a significant step back for the SNP. More recently, the SNP significantly out-polled the Labour Party in the 2009 European election. In the 2010 general election, however, the SNP fell significantly short of expectations, winning only the six seats it had won in the previous general election of 2005. By contrast, the party managed to win an overall majority of seats in the 2011 Scottish parliamentary election, retaining control of the Scottish Government in the process.
Local government
The UK is divided into a variety of different types of local authorities, with different functions and responsibilities. England has a mix of two-tier and single-tier councils in different parts of the country. In Greater London, a unique two-tier system exists, with power shared between the London borough councils and the Greater London Authority, which is headed by an elected mayor.
European Union
The United Kingdom first joined the European Economic Community in January 1973, and has remained a member of the European Union (EU) that it evolved into; UK citizens, and other EU citizens resident in the UK, elect 78 members to represent them in the European Parliament in Brussels and Strasbourg. The UK's membership of the Union has been objected to over questions of sovereignty, and in recent years there have been divisions in both major parties over whether the UK should form greater ties within the EU or reduce the EU's supranational powers. Opponents of greater European integration are known as "Eurosceptics", while supporters are known as "Europhiles". Division over Europe is prevalent in both major parties, although the Conservative Party is seen as the most divided over the issue, both whilst in government up to 1997 and after 2010, and between those dates as the opposition. The Labour Party is also divided, with conflicting views over UK adoption of the euro whilst in government (1997-2010), although the party is largely in favour of further integration where it is in the country's interest. UK nationalists have long campaigned against European integration. The strong showing of the eurosceptic United Kingdom Independence Party (UKIP) in the 2004 European Parliament elections has shifted the debate over UK relations with the EU. In March 2008, Parliament decided not to hold a referendum on the ratification of the Treaty of Lisbon, signed in December 2007. This was despite the Labour government promising in 2004 to hold a referendum on the previously proposed Constitution for Europe.
International organization participation
- African Development Bank
- Asian Development Bank
- Australia Group
- Bank for International Settlements
- Commonwealth of Nations
- Caribbean Development Bank (non-regional)
- Council of Europe
- Euro-Atlantic Partnership Council
- European Bank for Reconstruction and Development
- European Investment Bank
- European Space Agency
- European Union
- Food and Agriculture Organization
- G5, G6, G7, G8
- Inter-American Development Bank
- International Atomic Energy Agency
- International Bank for Reconstruction and Development
- International Civil Aviation Organization
- International Chamber of Commerce
- International Confederation of Free Trade Unions
- International Criminal Court
- International Criminal Police Organization (Interpol)
- International Development Association
- International Energy Agency
- International Federation of Red Cross and Red Crescent Societies
- International Finance Corporation
- International Fund for Agricultural Development
- International Hydrographic Organization
- International Labour Organization
- International Maritime Organization
- International Monetary Fund
- International Olympic Committee (IOC)
- International Organization for Migration (IOM) (observer)
- International Organization for Standardization (ISO)
- International Red Cross and Red Crescent Movement
- International Telecommunications Satellite Organization (Intelsat)
- International Telecommunication Union (ITU)
- International Whaling Commission
- Non-Aligned Movement (NAM) (guest)
- North Atlantic Treaty Organization (NATO)
- Nuclear Energy Agency (NEA)
- Nuclear Suppliers Group (NSG)
- Organisation for Economic Co-operation and Development
- Organisation for the Prohibition of Chemical Weapons
- Organization for Security and Co-operation in Europe (OSCE)
- Organization of American States (OAS) (observer)
- Permanent Court of Arbitration
- Secretariat of the Pacific Community (SPC)
- United Nations
- United Nations Conference on Trade and Development (UNCTAD)
- United Nations Economic Commission for Africa (associate)
- United Nations Economic Commission for Europe
- United Nations Economic Commission for Latin America and the Caribbean
- United Nations Economic and Social Commission for Asia and the Pacific
- United Nations High Commissioner for Refugees (UNHCR)
- United Nations Industrial Development Organization (UNIDO)
- United Nations Interim Administration Mission in Kosovo (UNMIK)
- United Nations Iraq-Kuwait Observation Mission (UNIKOM)
- United Nations Mission in Bosnia and Herzegovina (UNMIBH)
- United Nations Mission in Sierra Leone (UNAMSIL)
- United Nations Observer Mission in Georgia (UNOMIG)
- United Nations Peacekeeping Force in Cyprus (UNFICYP)
- United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA)
- United Nations Security Council (permanent member)
- Universal Postal Union (UPU)
- Western European Union
- World Confederation of Labour
- World Customs Organization
- World Health Organization
- World Intellectual Property Organization
- World Meteorological Organization
- World Trade Organization
- Zangger Committee
See also
- British political scandals
- British Polling Council
- List of British political defections
- Pressure groups in the United Kingdom
- Referendums in the United Kingdom
Although the Goddard Space Flight Center received its official designation on the first of May 1959, Goddard's roots actually date back well before that. In a sense, they date back almost as far as civilization itself - for people have been gazing into the night sky and wondering about its secrets for thousands of years. In the fourth century B.C., Aristotle created a model of the universe that astronomers relied on for more than a millennium. His assumption that the universe revolved around the Earth proved to be incorrect, but his effort was no different from that of modern scientists trying to solve the riddles of black holes or dark matter.1 The roots of Goddard's work in rocket development and atmospheric research date back several centuries, as well. The first reported use of rocket technology was in the year 1232, when the Chin Tartars developed a "fire arrow" to fend off a Mongol assault on the city of Kai-feng-fu. In 1749, the Scotsman Alexander Wilson was sending thermometers aloft on kites to measure upper-air temperatures. One hundred and fifty years later, meteorologists were beginning to accurately map the properties of the atmosphere using kites and balloons.2 Robert H. Goddard, for whom the Goddard Space Flight Center is named, received his first patents for a multi-stage rocket and liquid rocket propellants in 1914, and his famous paper on "A Method of Reaching Extreme Altitudes" was published in 1919. But it would not be until the close of World War II that all these long-standing interests and efforts would come together to create the foundation for modern space science and, eventually, the Goddard Space Flight Center.3 A certain amount of rocket research was being conducted in the United States even during the war. But the Germans had made far greater advancements in rocket technology. Before the end of the war, German scientists had developed a large, operational ballistic rocket weapon known as the "V-2". When the war came to a close, the U.S. military brought a number of these rockets back to the United States to learn more about their handling and operation. The Army planned to fire the V-2s at the White Sands Proving Ground in New Mexico. The Army's interest was in furthering the design of ballistic missiles. But the military recognized the research opportunity the rocket firings presented and offered to let interested groups instrument the rockets for high-altitude scientific research.4 The V-2 program helped spark the development of other rockets, and research with "sounding rockets," as these small upper-atmosphere rockets were called, expanded greatly over the next few years. The results from these rocket firings also began to gain the attention of the international scientific community. In 1951, the International Council of Scientific Unions had suggested organizing a third "International Polar Year" in 1957. The first two such events had been held in 1882 and 1932 and focused on accurately locating meridians (longitudinal lines) of the Earth. A third event was proposed after an interval of only 25 years because so many rapid advances had been made in technology and instrumentation since the beginning of WWII. Scientists in the 1950s could look at many more aspects of the Earth and the atmosphere than their predecessors could even a decade earlier. In 1952, the proposed event was approved by the Council and renamed the International Geophysical Year (IGY) to reflect this expanded focus on studying the whole Earth and its immediate surroundings.5 The U.S.
scientists quickly agreed to incorporate rocket soundings as part of their contribution to the IGY. But a loftier goal soon emerged. In October 1954, the International Council's IGY committee issued a formal challenge to participating countries to attempt to launch a satellite as part of the IGY. In July 1955, President Dwight D. Eisenhower picked up the gauntlet. The United States, he announced, would launch "small, unmanned Earth-circling satellites as part of the U.S. participation in the IGY."6 In September 1956, the Soviet Union announced that it, too, would launch a satellite the following year. The race was on.
Sputnik, Vanguard, and the Birth of NASA
The U.S. satellite project was to be a joint effort of the National Academy of Sciences (NAS), the National Science Foundation (NSF) and the Department of Defense (DOD). The NAS was in charge of selecting the experiments for the satellite, the NSF would provide funding, and the Defense Department would provide the launch vehicle. Sparked by the V-2 launch program, the Naval Research Laboratory (NRL) had already begun work on a rocket called the Viking, and the NRL proposed to mate the Viking with a smaller "Aerobee," a rocket that had evolved from a vehicle the Jet Propulsion Laboratory (JPL) had first tested in 1945 and that was used extensively for sounding rocket research. The Viking would be the first stage, the Aerobee would be the second stage, and another small rocket would serve as the third stage. The proposal was approved and dubbed "Project Vanguard."7 Yet despite these efforts, the Americans would not be the first into space. On 4 October 1957, the Russians launched Sputnik I - and changed the world forever. The launch of Sputnik was disappointing to U.S. scientists, who had hoped to reach space first. But following good scientific etiquette, they swallowed their pride and gave credit to the Soviets for their impressive accomplishment. The rest of the U.S., however, had a very different reaction. Coming as it did at the height of the Cold War, the launch of Sputnik sent an astounding wave of shock and fear across the country. The Russians appeared to have proven themselves technologically more advanced. Aside from a loss of prestige and the possible economic consequences of falling behind the Russians in technological ability, the launch raised questions of national security, as well. If the Soviets could conquer space, what new threats could they pose? The situation was not helped by a second successful Sputnik launch a month later or the embarrassing, catastrophic failure of a Vanguard rocket two seconds after launch in early December. Space suddenly became a national priority. Congress began ramping up to deal with the "crisis." President Eisenhower created the post of Science Adviser to the President and asked his Science Advisory Committee to develop a national policy on space. That policy would lead to the National Aeronautics and Space Act of 1958, which created the National Aeronautics and Space Administration.8 In the months following the launch of Sputnik, numerous proposals were put forth about how the development of space capability should be organized. But in the end, President Eisenhower decided that the best way to pursue a civilian space program with speed and efficiency was to put its leadership under a strengthened and redesignated National Advisory Committee for Aeronautics (NACA).
Proposed legislation for the creation of this new agency was sent to Congress on 2 April 1958 and signed into law on 29 July 1958.9 The Space Act outlined a tremendously ambitious list of objectives for the new agency. While the administrative and political debate over a new space agency was being conducted, work continued on the IGY satellite project. The Vanguard rocket project had been approved not because the Viking and Aerobee were the only rocket programs underway, but because the military did not want to divert any of its intercontinental ballistic missile (ICBM) efforts to the civilian IGY project. But the launch of Sputnik and the subsequent Vanguard failure changed that situation. Getting a satellite into orbit was now a top national priority. In November 1957, the Army Ballistic Missile Agency was given permission to attempt the launch of a satellite using a proven Jupiter C missile from the Redstone Arsenal in Huntsville, Alabama. The United States finally achieved successful space flight on 31 January 1958, when a Jupiter C rocket launched a small cylinder named Explorer I into orbit.11 In retrospect, it's interesting to speculate how history might have been different had the Army's Jupiter missile been chosen as the satellite launch vehicle from the outset, rather than the Vanguard. The United States might well have beaten the Soviets into space. But without the public fear and outcry at losing our technological edge, there might well not have been the public support for the creation of NASA and its extensive space program.12 Meanwhile, the Vanguard program still continued, although it was struggling. A third rocket broke apart in flight just five days after the successful Explorer I launch. Finally, on 17 March 1958, a Vanguard rocket successfully launched Vanguard I - a six-inch sphere weighing only four pounds - into orbit. The Explorer I and Vanguard I satellites proved we could reach space. The next task was to create an organization that could manage our effort to explore it - an effort that would become one of the most enormous and expensive endeavors of the 20th century.13
Origins of the Goddard Space Flight Center
As planning began for the new space agency in the summer of 1958, it quickly became clear that a research center devoted to the space effort would have to be added to the existing NACA aeronautical research centers. The space program was going to involve big contracts and complicated projects, and the founding fathers of NASA wanted to make sure there was enough in-house expertise to manage the projects and contracts effectively. Even before the Space Act was signed into law, Hugh Dryden, who became the Deputy Administrator of NASA, began looking for a location for the new space center. Dryden approached a friend of his in the Department of Agriculture about obtaining a tract of government land near the Beltsville Agricultural Research Center in Maryland. Dr. John W. Townsend, who became the first head of the space science division at Goddard and, later, one of the Center's directors, was involved in the negotiations for the property. The process, as he recalls, was rather short. He (the Department of Agriculture representative) said, "Are you all good guys?" I said "Yes." He said, "Will you keep down the development?" I said, "Yes." He spreads out a map and says, "How much do you want?" And that was that. We had our place.14 On 1 August 1958, Maryland's Senator J.
Glenn Beall announced that the new "Outer Space Agency" would establish its laboratory and plant in Greenbelt, Maryland. But while the new research center was, in fact, built in Greenbelt, Senator Beall's press release shows how naive even decision-makers were about how huge the space effort would become. Beall confidently asserted that the research center would employ 650 people, and that "all research work in connection with outer space programs will be conducted at the Greenbelt installation."15 The initial cadre of personnel for the new space center - and NASA itself, for that matter - was assembled through a blanket transfer authority granted to NASA to ensure the agency had the resources it needed to do its job. One of the first steps was the transfer of the entire Project Vanguard mission and staff from the NRL to the new space agency, a move that was actually included in the executive order that officially opened the doors of NASA on 1 October 1958.16 The 157 people in the Vanguard project became one of the first groups incorporated into what was then being called the "Beltsville Space Center." In December 1958, 47 additional scientists from the NRL's sounding rocket branch also transferred to NASA, including branch head John Townsend. Fifteen additional scientists, including Dr. Robert Jastrow, also transferred to the new space center from the NRL's theoretical division. The Space Task Group at the Langley Research Center, responsible for the manned space flight effort that would become Project Mercury, was initially put under administrative control of the Beltsville center, as well, although the group's 250 employees remained at Langley. A propulsion-oriented space task group from the Lewis Research Center was also put under the control of the new space center. The space center's initial cadre was completed in April 1959 with the transfer of a group working on the TIROS meteorological satellite at the Army Signal Corps Research and Development Laboratory in Ft. Monmouth, New Jersey.17 The Beltsville Space Center was officially designated as a NASA research center on 15 January 1959, after the initial personnel transfers had been completed. On 1 May 1959, the Beltsville facility was renamed the Goddard Space Flight Center, in honor of Dr. Robert H. Goddard.18 Although Goddard existed administratively by May of 1959, it still did not exist in any physical sense. Construction finally began on the first building at the Beltsville Space Center in April 1959,19 but it would be some time before the facilities there were ready to be occupied. In the meantime, the Center's employees were scattered around the country. The Lewis and Langley task groups were still at those research centers. The NRL scientists were working out of temporary quarters in two abandoned warehouses next to the Naval Lab facilities. Additional administrative personnel were housed in space at the Naval Receiving Station and at NASA's temporary headquarters in the old Cosmos Club Building, also known as the Dolley Madison House, on H Street in Washington, D.C. Robert Jastrow's theoretical division was housed above the Mazor Furniture Store in Silver Spring, Maryland.20 The different groups may have been one organization on paper but, in reality, operations were fairly segmented. The Center did not even have an official director and would not have one until September 1959. Until then, working relationships and facilities were both somewhat improvised.
Not surprisingly, the working conditions in those early days were also less than ideal. Offices were cramped cubicles and desks were sometimes made of packing crates. Laboratory facilities were equally rough. One of the early engineers remembers using chunks of dry ice in makeshift "cold boxes" to cool circuitry panels and components. The boxes were effective, but researchers had to make sure they didn't breathe too deeply or keep their heads in the boxes too long, because the process also formed toxic carbonic acid fumes. But there was a kind of raw enthusiasm for the work - a pioneering challenge with few rules and seemingly limitless potential - that more than made up for the rudimentary facilities. It helped that many of the scientists also came from a background in sounding rockets. Sounding rocket research, especially in the early days, was a field that demanded a lot of flexibility and ingenuity. Because their work had begun long before the post-Sputnik flood of funding, these scientists were accustomed to very basic, low-budget operations. Comfort may not have been at a premium in Goddard's early days, but scientists who had braved the frigid North Atlantic to fire rockoons (rockets carried to high altitude by helium balloons before being fired) had certainly seen a lot worse.21 As 1959 progressed, Goddard continued to grow. By June, the new research center had 391 employees in the Washington area and, by the end of 1959, its personnel numbered 579.22 As the personnel grew, so did the physical facilities at the Greenbelt, Maryland, site. By September 1959, the first building was ready to be occupied. The plan for Goddard's physical facilities was to create a campus-like atmosphere that would accommodate the many different jobs the Center was to perform. The buildings were numbered in order of construction, and there was a general plan to put laboratories and computer facilities on one side, utility buildings in the center of the campus, and offices on the other side. Most of the buildings were one, two, or three-story structures that blended inconspicuously into the landscape. The one exception was Building 8, which was built to house the manned space flight program personnel. Robert Gilruth, who was in charge of the program, supposedly wanted a tall structure, so the building was designed with six stories. The original plan to incorporate the manned space flight program at Goddard also resulted in the construction of a special bay tall enough to house Mercury capsules as part of the test and evaluation facility in Building 5. By 1961, however, this aspect of NASA's program had been moved to the new space center in Houston, Texas. So Building 8 was used to house administration offices, instead.23 Even as formal facilities developed, it still took something of a pioneer's spirit to work at Goddard during the early days. The Center was built in a swampy, wooded area, and wood planks often had to be stretched across large sections of mud between parking areas and offices. And on more than one occasion, displaced local snakes found their way into employees' cars, leading to distinctive screams coming from the parking lot at the end of the day.24 Improvisation and flexibility were critical skills to have in the scientific and engineering work that was done, as well. Space was a new endeavor, and there were few guidelines as to how to proceed - either in terms of what should be done or how that goal should be accomplished. 
At the very beginning, there was no established procedure to decide which experiments should be pursued, and there was a shortage of space scientists who were interested or ready to work with satellites. As a result, the first scientists recruited or transferred to Goddard had a lot of freedom to make their own decisions about what ought to be done. In 1959, NASA Headquarters announced that it would select the satellite experiments, but a shortage of qualified scientists at that level resulted in Goddard scientists initially taking part in the evaluation process. Experiments from outside scientists were incorporated into virtually all the satellite projects, but there were soon more scientists and proposals than there were flight opportunities. The outside scientific community began to complain that Goddard scientists had an unfair advantage. It took a while to sort out, but by 1961 NASA had developed a procedure that is still the foundation of how experiments are selected today. Headquarters issues Announcements of Flight Opportunities (AOs), and scientists from around the country can submit proposals for experiments for the upcoming project. The proposals are evaluated by sub-committees organized by NASA Headquarters. The committees are made up of scientists from both NASA and the outside scientific community, but members do not evaluate proposals that might compete with their own work. These groups also conduct long-range mission planning, along with the National Academy of Sciences' Space Science Board.25 The final selection of experiments for satellite missions is made by a steering committee of NASA scientists. Because of the possible conflict of interest, the selection board took care to ensure fairness in selecting space science research.26 Yet in the early days of Goddard, uncertainty about how to choose which experiments to pursue was only part of the challenge. The work itself required a flexible, pragmatic approach. Nobody had built satellites before, so there was no established support industry. Scientists drew upon their sounding rocket experience and learned as they went. Often, they learned lessons the hard way. Early summaries of satellite launches and results are peppered with notes such as "two experiment booms failed to deploy properly, however...," "Satellite's tracking beacon failed...," and, all too often, "liftoff appeared normal, but orbit was not achieved."27 Launch vehicles were clearly the weakest link in the early days, causing much frustration for space scientists. In 1959, only four of NASA's ten scientific satellite launches succeeded.28 In this environment of experimentation with regard to equipment as well as cosmic phenomena, Goddard scientists and engineers were constantly inventing new instruments, systems, and components, and they often had to fly something to see if it would really work. This talent for innovation became one of the strengths of Goddard, leading to the development of everything from an artificial sun used to help test satellites, to modular and serviceable spacecraft, to solid-state recorder and microchip technology for space applications. This entrepreneurial environment also spawned a distinct style and culture that would come to characterize Goddard's operations throughout its developmental years. It was a very pragmatic approach that stressed direct, solution-focused communication with the line personnel doing the work and avoided formal paperwork unless absolutely necessary.
One early radio astronomy satellite, for example, required a complex system to keep it pointed in the right direction and an antenna array that was taller than the Empire State Building. After heated debate as to how the satellite should be built, the project manager approved one engineer's design and asked him to document it for him. On launch day, when asked for the still-missing documentation, the engineer ripped off a corner of a piece of notebook paper, scribbled his recommendation, and handed it to the project manager. As one of the early scientists said, the Center's philosophy was "Don't talk about it, don't write about it - do it!"29
Dedicating the new space center
This innovative and pragmatic approach to operations permeated the entire staff of the young space center, a trait that proved very useful in everything from spacecraft design to Goddard's formal dedication ceremonies. Construction of the facilities at Goddard progressed through 1959 and 1960. By the spring of 1961, NASA decided the work was far enough along to organize formal dedication ceremonies. But while there were several buildings that were finished and occupied, the Center was still lacking a few elements necessary for a dedication. A week before the ceremonies, the Secret Service came out to survey the site, because it was thought President Kennedy might attend. They told Goddard's director of administration, Mike Vaccaro, that he had to have a fence surrounding the Center. It rained for a solid week before the dedication, but Vaccaro managed to find a contractor who worked a crew 24 hours a day in the rain and mud to cut down trees and put in a chain link fence. After all that, the President did not attend the ceremonies. But someone then decided that a dedication couldn't take place without a flagpole to mark the Center's entrance. Vaccaro had three days to find a flagpole - a seemingly impossible deadline to meet while still complying with government procurement regulations. One of his staff said there was a school being closed down that had a flagpole outside it, so Vaccaro spoke to the school board and then created a specification that described that flagpole so precisely that the school's was the only bid that fit the bill. He then sent some of his staff over to dig up the flagpole and move it over to the Center's entrance gate - where it still stands today. There was also the problem of a bust statue. The dedication ceremony was supposed to include the unveiling of a bronze bust of Robert H. Goddard. But the sculptor commissioned to create the bust got behind schedule, and all he had done by the dedication date was a clay model. Vaccaro sent one of his employees to bring the clay sculpture to the Center for the ceremonies anyway. To make things worse, the taxi bringing the bust back to the Center stopped short at one point, causing the bust to fall to the floor of the cab. The bust survived pretty much intact, but its nose broke off. Undaunted, Vaccaro and his employees pieced the nose back together and simply spray-painted the clay bust bronze, finishing with so little time to spare that the paint was still wet when the bust was finally unveiled.30 But the ceremonies went beautifully, the Goddard Space Flight Center was given its formal send-off, and the Center could settle back down to the work of getting satellites into orbit.
The Early Years
In the view of those who were present at the time, the 1960s were a kind of golden age for Goddard.
There was an entrepreneurial enthusiasm among its employees, and NASA was too new and still too small to have much in the way of bureaucracy, paperwork, or red tape. The scientists were being given the opportunity to be the first into a new territory. Sounding rockets and satellites weren't just making little refinements of already known phenomena and theories - they were exploring the space around Earth for the first time. Practically everything the scientists did was something that had never been done before, and they were discovering significant new surprises and phenomena on almost every flight. Because of the impetus behind the Mercury, Gemini and Apollo space programs, space scientists also suddenly found themselves with a level of funding they had never had available before. Although there were many frustrations associated with learning how to operate in space and develop reliable technology that could survive its rigors, support for that effort was almost limitless. The Apollo program was "the rising tide that lifted all boats," as one Goddard manager put it. There was also a sense of mission, importance and purpose that has been difficult to duplicate since. We were going to space and we were going to be first to the Moon, and our national security, prestige, and pride were seen as dependent on how well we did the job.31
The Goddard Institute for Space Studies
In this kind of environment, both the space program and Goddard grew quickly. Even before Goddard completed its formal dedication ceremonies, plans were laid for the establishment of a separate Goddard Institute for Space Studies in New York City. Two of the big concerns in the early days of the space program were attracting top scientists to work with the new agency and ensuring there would be space-skilled researchers coming out of the universities. Early in NASA's development, the agency set aside money for both research and facilities grants to universities to help create strong space science departments.32 But one of Goddard's early managers thought the link should be personal as well as financial. Dr. Robert Jastrow had transferred to Goddard to head up the theoretical division in the fall of 1958. He argued that if Goddard wanted to attract the top theoretical physicists from academia to work with the space program, it had to have a location more convenient to leading universities. By late 1960, he had convinced managers at Goddard and Headquarters to allow him to set up a separate Goddard institute in New York. The Goddard Institute for Space Studies (GISS) provided a gathering point for theoretical physicists and space scientists in the area. But the institute offered them another carrot, as well - some of the most powerful computers in existence at the time. The computers were a tremendous asset in crunching the impossibly big numbers involved in problems of theoretical physics and orbital projections. Over the years, the Goddard Institute organized conferences and symposia and offered research fellowships to graduate students in the area. It also kept its place at the forefront of computer technology. In 1975, the first fourth-generation computer to be put into use anywhere in the United States was installed at the Goddard Institute in New York.33 Goddard's international ties and projects were expanding quickly, as well. In part the growth was natural, because Goddard and the space program itself grew out of an international scientific effort - the International Geophysical Year.
Scientists also tended to see their community as global rather than national, which made international projects much easier to organize. Furthermore, the need for a world-wide network of ground stations to track the IGY satellites forced the early space scientists and engineers to develop working relationships with international partners even before NASA existed. These efforts were enhanced both by the Space Act that created NASA, which specified international cooperation as a priority for the new agency, and by the simple fact that there was significant interest among other countries in doing space research. Early NASA managers quickly set down a very simple policy about international projects that still guides the international efforts NASA undertakes. There were only two main rules. The first was that there would be no exchange of funds between NASA and international partners. Each side would contribute part of the project. The second was that the results would be made available to the whole international community. The result was a number of highly successful international satellites created by joint teams who worked together extremely well - sometimes so well that it seemed that they all came from a single country.34 In April 1962, NASA launched Ariel I - a joint effort between Goddard and the United Kingdom and the first international satellite. Researchers in the U.K. developed the instruments for the satellite, and Goddard managed development of the satellite and the overall project. Ariel was followed five months later by Alouette I, a cooperative venture between NASA and Canada. Although Alouette was the second international satellite, it was the first satellite in NASA's international space research program that was developed entirely by another country.35 These early satellites were followed by others. Over the years, Goddard's international ties grew stronger through additional cooperative scientific satellite projects and the development of ground station networks. Today, international cooperation is a critical component of both NASA's scientific satellite and human space flight programs. The work Goddard conducted throughout the 1960s was focused on basics: conquering the technical challenges of even getting into space, figuring out how to get satellites to work reliably once they got there, and starting to take basic measurements of what existed beyond the Earth's atmosphere. The first few satellites focused on taking in situ measurements of forces and particles that existed in the immediate vicinity of Earth, but the research quickly expanded to astronomy, weather satellites, and communication satellites. Indeed, one of the initial groups that was transferred to form Goddard was a group from the Army Signal Corps that was already working on development of a weather satellite called the Television Infrared Observation Satellite (TIROS). The first TIROS satellite was launched in April 1960. Four months later, the first communications satellite was launched into a successful orbit. The original charter for NASA limited its research to passive communications satellites, leaving active communications technology to the Department of Defense. So the first communications satellite was an inflatable mylar sphere called "Echo," which simply bounced communications signals back to the ground.
The limitation against active communications satellite research was soon lifted, however, and civilian prototypes of communications satellites with active transmitters were in orbit by early 1963.36 As the 1960s progressed, the size of satellites grew along with the funding for the space program. The early satellites were simple vehicles with one or two main experiments. Although small satellites continued to be built and launched, the mid-1960s saw the evolution of a new Observatory class of satellites, as well - spacecraft weighing as much as one thousand pounds, with multiple instruments and experiments. In part, the bigger satellites reflected advances in launch vehicles that allowed bigger payloads to get into orbit. But they also paralleled the rapidly expanding sights, funding, and goals of the space program. The research conducted with satellites also expanded during the 1960s. Astronomy satellites were a little more complex to design, because they had to have the ability to remain pointed at one spot for a length of time. Astronomers also were not as motivated as their space physics colleagues to undertake the challenge of space-based research, because many astronomy experiments could be conducted from ground observatories. Nonetheless, space offered the opportunity to look at objects in regions of the electromagnetic spectrum obscured by the Earth's atmosphere. The ability to launch larger satellites brought that opportunity within reach as it opened the door to space-based astronomy telescopes. Goddard launched its first Orbiting Astronomical Observatory (OAO) in 1966. That satellite failed, but another OAO launched two years later was very successful. These OAO satellites laid the groundwork for Goddard's many astronomical satellites that followed, including the Hubble Space Telescope. Goddard scientists also were involved in instrumenting some of the planetary probes that were already being developed in the 1960s, such as the Pioneer probes into interplanetary space and the Ranger probes to the Moon. The other main effort underway at Goddard in the 1960s involved the development of tracking and communication facilities and capabilities for both the scientific satellites and the manned space flight program. Goddard became the hub of the massive, international tracking and communications wheel that involved aircraft, supertankers converted into mobile communications units, and a wide diversity of ground stations. This system provided NASA with a kind of "Internet" that stretched not only around the world, but into space, as well. Every communication to or from any spacecraft came through this network. A duplicate mission control center was also built at Goddard in case the computers in the main control room at the Johnson Space Center in Houston, Texas, failed for any reason. Whether it was in tracking, data, satellite engineering, or space science research, the 1960s were a heady time to work for NASA. The nation was behind the effort, funding was flowing from Congress faster than scientists and engineers could spend it, and there was an intoxicating feeling of exploration. Almost everything Goddard was doing had never been done before. Space was the new frontier, and the people at Goddard knew they were pioneers in the endeavor of the century. This is not to say that there were no difficulties, frustrations, problems, or disappointments in the 1960s. Tensions between the Center and NASA Headquarters increased as NASA projects got bigger.
Goddard's first director, Harry Goett, came to Goddard from the former NACA Ames Research Center. He was a fierce defender of his people and believed vehemently in the independence of field centers. Unfortunately, Goddard was not only almost in Headquarters' back yard, it was also under a much more intense spotlight because of its focus on space. The issues between Goddard and NASA Headquarters were not unique to Goddard, or even to NASA. Tension exists almost inherently between the Headquarters and field installations of any institution or corporation. While both components are necessary to solve the myriad of big-picture and hands-on problems the organization faces, their different tasks and perspectives often put Headquarters and field personnel in conflict with each other. In order to run interference for field offices and conduct long-range planning, funding, or legislative battles, Headquarters personnel need information and a certain.... ....amount of control over what happens elsewhere in the organization. Yet to field personnel who are shielded from these large-scale threats and pressures, this oversight and control is often seen as unwelcome interference. In the case of NASA, Headquarters had constant pressure from Congress to know what was going on, and it had a justifiable concern about managing budgets and projects that were truly astronomical. To allow senior management to keep tabs... ....on different projects and to maintain a constant information flow from the Centers to Headquarters, NASA designated program managers at Headquarters who would oversee the agency's various long-term, continuing endeavors, such as astronomy. Those program managers would oversee the shorter-term individual projects, such as a single astronomy satellite, that were being managed by Goddard or the other NASA field centers.37 These program managers were something of a sore spot for Goett and the Goddard managers, who felt they knew well enough how to manage their work and, like typical field office managers, sometimes saw this oversight as unwelcome interference. Managers at other NASA Centers shared this opinion, but the tension was probably higher at Goddard because it was so close to Headquarters. Program managers wanted to sit in on meetings, and Goett wanted his project managers and scientists left alone. Tensions over authority and management escalated between Goett and Headquarters until Goett was finally replaced in 1965.38 The increasing attention paid to the space program had other consequences, as well. If it created more support and funding for the work, it also put projects in the eye of a public that didn't necessarily understand that failure was an integral part of the scientific process. The public reaction to early launch failures, especially the embarrassing Vanguard explosion in December 1957, made it very clear to the NASA engineers and scientists that failure, in any guise, was unacceptable. This situation intensified after the Apollo I fire in 1967 that cost the lives of three astronauts. With each failure, oversight and review processes got more detailed and complex, and the pressure to succeed intensified. As a result, Goddard's engineers quickly developed a policy of intricate oversight of contractors and detailed testing of components and satellites. Private industry has become more adept at building satellites, and NASA is now reviewing this policy with the view that it may increase costs unnecessarily and duplicate manpower and effort. 
In the future, satellites may be built more independently by private companies under performance-based contracts with NASA. But in the early days, close working relationships with contractors and detailed oversight of satellite building were two of the critical elements that led to Goddard's success.

The Post-Apollo Era

The ending of the Apollo program brought a new era to NASA, and to Goddard, as well. The drive to the Moon had unified NASA and garnered tremendous support for space efforts from Congress and the country in general. But once that goal was achieved, NASA's role, mission and funding became a little less clear. In some ways, Goddard's focus on scientific missions and a diversity of projects helped protect it from some of the cutbacks that accompanied the end of the Apollo program in 1972. But there were still two Reductions in Force (RIFs)39 at Goddard after the final Apollo 17 mission that hurt the high morale and enthusiasm that had characterized the Center throughout its first decade.

Yet despite the cutbacks, the work at Goddard was still expanding into new areas. Even as the Apollo program wound down, NASA was developing a new launch vehicle that would become known as the Space Shuttle. The primary advantage of the Shuttle was seen as its reusable nature. But an engineer at Goddard named Frank Cepollina saw another distinct opportunity with the Shuttle. With its large cargo bay and regular missions into low Earth orbit, he believed the Shuttle could be used as a floating workshop to retrieve and service satellites in orbit. Goddard had already pioneered the concept of modular spacecraft design with its Orbiting Geophysical Observatory (OGO) satellites in the 1960s. But in 1974, Cepollina took that concept one step further by proposing a Multi-mission Modular Spacecraft (MMS) with easily replaceable, standardized modules that would support a wide variety of different instruments. The modular approach would not only reduce manufacturing costs, it would also make it possible to repair the satellite on station, because repairing it would be a fairly straightforward matter of removing and replacing various modules.

The first modular satellite was called the "Solar Max" spacecraft. It was designed to look at solar phenomena during a period of peak solar activity and was launched in 1980. About a year after launch it developed problems and, in 1984, it became the first satellite to be repaired in space by Shuttle astronauts. The servicing allowed the satellite to gather additional valuable scientific data. But perhaps the biggest benefit of the Solar Max repair mission was the experience it gave NASA in servicing satellites. That experience would prove invaluable a few years later when flaws discovered in the Hubble Space Telescope forced NASA to undertake a massive and difficult repair effort to save the expensive and high-visibility Hubble mission.40

Goddard made significant strides in space science in the years following Apollo, developing projects that would begin to explore new wavelengths and farther distances in the galaxy and the universe. The International Ultraviolet Explorer (IUE), launched in 1978, has proven to be one of the most successful and productive satellites ever put into orbit. It continued operating for almost 19 years - 14 years beyond its expected life span - and generated more data and scientific papers than any other satellite to date. Goddard's astronomy work also expanded into the high-energy astronomy field in the 1970s.
The first Small Astronomy Satellite, which mapped X-ray sources across the sky, was launched in 1970. A gamma-ray satellite followed in 1972. Goddard also had instruments on the High Energy Astronomical Observatory (HEAO) satellites, which were managed by the Marshall Space Flight Center.41 The HEAO satellites also marked the start of a competition between Marshall and Goddard that would intensify with the development of the Hubble Space Telescope. When the HEAO satellites were being planned in the late 1960s and early 1970s, Goddard had a lot of different projects underway. Senior managers at the Marshall Space Flight Center, however, were eagerly looking for new work projects to keep the center busy and alive. Marshall's main project had been the development of the Saturn rocket for the Apollo program and, with the close of the Apollo era, questions began to come up about whether Marshall was even needed anymore. When the HEAO project came up, the response of Goddard's senior management was that the Center was too busy to take on the project unless the Center was allowed to hire more civil servants to do the work. Marshall, on the other hand, enthusiastically promised to make the project a high priority and assured Headquarters that it already had the staff on board to manage it. In truth, Marshall had a little bit of experience with building structures for astronomy, having developed the Apollo Telescope Mount for Skylab, and the Center had shown an interest in doing high-energy research. When it got the HEAO project, however, Marshall still had an extremely limited space science capability. From a strictly scientific standpoint, Goddard would have been the logical center to run the project. But the combination of the available work force at Marshall and the enthusiasm and support that Center showed for the project led NASA Headquarters to choose Marshall over Goddard to manage the HEAO satellites. The loss of HEAO to Marshall was a bitter pill for some of Goddard's scientists to swallow. Goddard had all but owned the scientific satellite effort at NASA for more than a decade and felt a great deal of pride and investment in the expertise it had developed in the field. It was an adjustment to have to start sharing that pie. What made the HEAO loss particularly bitter in retrospect, however, was that it gave Marshall experience in telescope development - experience that factored heavily in Headquarters' decision to award the development of the Hubble Space Telescope to Marshall, as well. There were other reasons for giving the Hubble telescope to Marshall - including concern among some in the external scientific community that Goddard scientists still had too much of an inside edge on satellite research projects. Goddard was going to manage development of Hubble's scientific instruments and operation of the telescope once it was in orbit. If Goddard managed the development of the telescope as well, its scientists would know more about all aspects of this extremely powerful new tool than any of the external scientists. By giving the telescope project to Marshall to develop, that perceived edge was softened a bit. Indeed, Hubble was perceived to be such a tremendously powerful tool for research that the outside community did not even want to rely on NASA Headquarters to decide which astronomers should be given time on the telescope. 
At the insistence of the general astronomical community, an independent Space Telescope Science Institute was set up to evaluate and select proposals from astronomers wanting to conduct research with the Hubble. The important point, however, was that the telescope project was approved. It would become the largest astronomical telescope ever put into space - a lens into mysteries and wonders of the universe no one on Earth had ever been able to see before.42

The field of space-based Earth science, which in a sense had begun with the first TIROS launch in 1960, also continued to evolve in the post-Apollo era. The first of a second generation of weather satellites was launched in 1970 and, in 1972, the first Earth Resources Technology Satellite (ERTS) was put into orbit. By looking at the reflected radiation of the Earth's land masses with high resolution in different wavelengths, the ERTS instruments could provide information about the composition, use and health of the land and vegetation in different areas. The ERTS satellite became the basis of the Landsat satellites that still provide remote images of Earth today. Other satellites developed in the 1970s began to look more closely at the Earth's atmosphere and oceans, as well. The Nimbus-7 satellite, for example, carried new instruments that, among other things, could measure the levels of ozone in the atmosphere and phytoplankton in the ocean.

As instruments and satellites that could explore the Earth's resources and processes evolved, however, Earth scientists found themselves caught in the middle of an often politically charged tug-of-war between science and application. Launching satellites to look at phenomena or gather astronomical or physics data in space typically has been viewed as a strictly scientific endeavor whose value lies in the more esoteric goal of expanding knowledge. Satellites that have looked back on Earth, however, have always been more closely linked with practical applications of their data - a fact that has both advantages and disadvantages for the scientists involved. When Goddard began, all of the scientific satellites were organized under the "Space Sciences and Applications" directorate. Although the Center was working on developing weather and communications satellites, the technology and high-resolution instruments needed for more specific resource management tasks did not yet exist. In addition, it was the height of the space race, and science and space exploration for its own sake had a broad base of support in Congress and in society at large. In the post-Apollo era, however, NASA found itself needing to justify its expenditures, which led to a greater emphasis on proving the practical benefits of space. At NASA Headquarters, a separate "applications" office was created to focus on satellite projects that had, or could have, commercial applications. In an effort to give more attention to "applications" research (communications, meteorology, oceanography and remote imaging of land masses) as well as scientific studies, Goddard's senior management decided to split out "applications" functions into a new directorate at the Center, as well.

In many ways, the distinction between science and application is a fine one. Often, the data collected is the same - the difference lies only in how it is analyzed or used. A satellite that maps snow cover over time, for example, can be used to better understand whether snow cover is changing as a result of global climate system changes.
But that same information is also extremely useful in predicting snow melt runoff, which is closely linked with water resource management. A satellite that looks at the upper atmosphere will collect data that can help scientists understand the dynamics of chemical processes in that region. That same information, however, can also be used to determine how much damage pollutants are causing or whether we are, in fact, depleting our ozone layer. For this reason, Earth scientists can be more affected by shifting national priorities than their space science counterparts.43

The problem is the inseparable policy implications of information pertaining to our own planet. If we discover that the atmosphere of Mars is changing, nobody feels any great need to do anything about it. If we discover that pollutants in the air are destroying our own atmosphere, however, it creates a great deal of pressure to do something to remedy the situation. Scientists can argue that information is neutral - that it can show less damage than environmentalists claim as well as more severe dangers than we anticipated. But the fact remains that, either way, the data from Earth science research can have political implications that impact the support those efforts receive. The applicability of data on the ozone layer, atmospheric pollution and environmental damage may have prompted additional funding support at times when environmental issues were a priority. But the political and social implications of this data also may have made Earth science programs more susceptible to attack and funding cuts when less sympathetic forces were in power.44 Yet despite whatever policy issues complicate Earth science research, advances in technology throughout the 1970s certainly made it possible to learn more about the Earth and get a better perspective on the interactions between ocean, land mass and atmospheric processes than we ever had before.

The Space Shuttle Era

As NASA moved into the 1980s, the focus that drove many of the agency's other efforts was the introduction of the Space Shuttle. In addition to the sheer dollars and manpower it took to develop the new spacecraft, the Shuttle created new support issues and had a significant impact on how scientific satellites were designed and built. In the Apollo era, the spacecraft travelled away from the Earth, so a ground network of tracking stations could keep the astronauts in sight and in touch with mission controllers at almost all times. The Shuttle, however, was designed to stay in near-Earth orbit. This meant that the craft would be in range of any given ground station for only a short period of time. This was the case with most scientific satellites, but real-time communication was not as critical when there were no human lives at stake. Satellites simply used tape recorders to record their data and transmitted it down in batches when they passed over various ground stations. Shuttle astronauts, on the other hand, needed to be in continual communication with mission control. Goddard had gained a lot of experience in communication satellites in the early days of the Center and had done some research with geosynchronous communication satellite technology in the 1970s that offered a possible solution to the problem. A network of three geosynchronous satellites, parked in high orbits 22,300 miles above the Earth, could keep any lower Earth-orbiting satellite - including the Space Shuttle - in sight at all times.
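The "22,300 miles" quoted for these relay orbits is simply the altitude at which a satellite's orbital period matches the Earth's rotation, so it appears to hang over one spot on the ground. As a quick, hedged check (standard physical constants, not figures taken from this history), Kepler's third law reproduces the number:

import math

# Minimal sketch: the geosynchronous orbit radius follows from requiring the
# period T to equal one sidereal day, via Kepler's third law T^2 = 4*pi^2*r^3/mu.
mu      = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2 (standard value)
T       = 86164.1          # sidereal day, seconds (standard value)
R_earth = 6378137.0        # Earth's equatorial radius, metres (standard value)

r = (mu * T**2 / (4 * math.pi**2)) ** (1 / 3)   # orbit radius from Earth's centre
altitude_km = (r - R_earth) / 1000.0
print(f"geosynchronous altitude ~ {altitude_km:,.0f} km "
      f"(~{altitude_km * 0.621371:,.0f} statute miles)")

The script prints roughly 35,786 km, or about 22,240 statute miles, which is the altitude usually rounded to "22,300 miles" in descriptions of TDRSS-class relay satellites.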
In addition to its benefits to the Shuttle program, the system could save NASA money over time by eliminating the need for the worldwide network of ground stations that tracked scientific satellites. The biggest problem with such a system was its development costs. NASA budgets were tight in the late 1970s and did not have room for a big budget item like the proposed Tracking and Data Relay Satellite System (TDRSS). So the agency worked out an arrangement to lease time on the satellites from a contractor who agreed to build the spacecraft at its own cost. Unfortunately, the agreement offered NASA little control or leverage with the contractor, and the project ran into massive cost and schedule overruns. It was a learning experience for NASA, and not one managers recall fondly. Finally, Goddard renegotiated the contract and took control of the TDRSS project. The first TDRSS satellite was launched from the Space Shuttle in April 1983. The second TDRSS was lost with the Shuttle "Challenger" in 1986, but the system finally became operational in 1989.

The TDRSS project also required the building of a new ground station to communicate with the satellites and process their data. The location best suited for maximum coverage of the satellites was at the White Sands Missile Range in New Mexico. So in 1978, Goddard began building the TDRSS White Sands Ground Terminal (WSGT). The first station became operational in 1983, and a complete back-up facility, called the Second TDRSS Ground Terminal (STGT), became operational in 1994. The second station was built because the White Sands complex is the sole ground link for the TDRSS, and the possibility of losing contact with the Shuttle was unacceptable. The second site ensures that there will always be a working communications and data link for the TDRSS satellites.45

The edict that TDRSS would also become the system for all scientific satellite tracking and data transmission did not please everyone, because it meant every satellite had to be designed with the somewhat cumbersome TDRSS antennas. But the Shuttle's impact on space science missions went far beyond tracking systems or antenna design. Part of the justification for the Shuttle was that it could replace the expendable launch vehicles (rockets) used by NASA and the military to get satellites into orbit. As a result, the stockpile of smaller launch rockets was not replenished, and satellites had to be designed to fit in the Shuttle bay instead.

There were some distinct advantages to using the Space Shuttle as a satellite launch vehicle. Limitations on size and weight - critical factors with the smaller launch vehicles - became much less stringent, opening the door for much bigger satellites. Goddard's Compton Gamma Ray Observatory, for example, weighed more than 17 tons. The Space Shuttle also opened up the possibility of having astronauts service satellites in space.46 On the other hand, using the Shuttle as the sole launch vehicle complicated the design of satellites, because they now had to undergo significantly more stringent safety checks to make sure their systems posed no threat to the astronauts who would travel into space with the cargo. But the biggest disadvantage of relying exclusively on the Shuttle hit home with savage impact in January 1986, when the Shuttle "Challenger" exploded right after lift-off.
The Shuttle fleet was grounded for almost three years and, because the Shuttle was supposed to eliminate the need for them, there were few remaining expendable launch rockets. Even if there had been a large number of rockets available, few of the satellites that had been designed for the spacious... ....cargo bay of the Shuttle would fit the smaller weight and size limitations of other launch vehicles. Most satellites simply had to wait for the Shuttle fleet to start flying again. The 1980s brought some administrative changes to Goddard, as well. NASA's Wallops Island, Virginia flight facility had been created as an "Auxiliary Flight Research Station" associated with the NACA's Langley Aeronautical Laboratory in 1945.47 Its remote location on the Atlantic coast of Virginia made it a perfect site for testing aircraft models and launching small rockets. As the space program evolved, Wallops became one of the mainstays of NASA's sounding rocket program and operated numerous aircraft for scientific research purposes, as well. It also launched some of the National Science Foundation's smaller research balloons and provided tracking and other launch support services for NASA and the Department of Defense. Yet although its work expanded over the years, Wallops' small size, lower-budget projects, and remote location allowed it to retain the pragmatic, informal, entrepreneurial style that had characterized Goddard and much of NASA itself in the early days of the space program. People who worked at Wallops typically came from the local area, and there was a sense of family, loyalty, and fierce independence that characterized the facility. As one of NASA smaller research stations, however, Wallops was in a less protected political position than some of its larger and higher profile counterparts. In the early 1980s, a proposal emerged to close the Wallops Station as a way of reducing NASA's operating costs. In an effort to save the facility, NASA managers decided instead to incorporate Wallops into the Goddard Space Flight Center. Goddard was a logical choice because Wallops was already closely linked with Goddard on many of its projects. The aircraft at Wallops were sometimes used to help develop instruments that later went on Goddard satellites. Goddard also had a sounding rocket division that relied on Wallops for launch, range, tracking and data support. As time went on, Wallops had begun to develop some of the smaller, simpler sounding rocket payloads, as well. By the late 1970s, NASA headquarters was even considering transferring Goddard's entire sounding rocket program to Wallops. In 1982, Wallops Island Station became the Wallops Island Flight Facility, managed under the "Suborbital Projects and Operations" directorate at Goddard.48 At the same time, the remaining sounding rocket projects at Goddard-Greenbelt were transferred down to Wallops. The personnel at Goddard who had been working on sounding rockets had to refocus their talents. So they turned their entrepreneurial efforts to the next generation of small-budget, hands-on projects - special payloads for the Space Shuttle.49 As the 1980s progressed, Goddard began putting together a variety of small payloads to take up spare room in the Shuttle cargo bay. They ranged from $10,000 "Get Away Special" (GAS) experiments that even schoolchildren could develop to multi-million dollar Spartan satellites that the Shuttle astronauts release overboard at the start of a mission and pick up again before returning to Earth. 
The Post-Challenger Era: A New Dawn

All of NASA was rocked on the morning of 28 January 1986, when the Shuttle "Challenger" exploded 73 seconds after launch. While many insiders at NASA were dismayed at what appeared to have been a preventable tragedy, they were not, as a whole, surprised that the Shuttle had had an accident. These were people who had witnessed numerous rockets with cherished experiments explode or fail during the launch process. They had lived through the Orbiting Solar Observatory accident, the Apollo 1 fire, and the Apollo 13 crisis. They knew how volatile rocket technology was and how much of a research effort the Shuttle was, regardless of how much it was touted as a routine transportation system for space. These were veteran explorers who knew that for all the excitement and wonder space offered, it was a dangerous and unforgiving realm. Even twenty-five years after first reaching orbit, we were still beginners, getting into space by virtue of brute force. There was nothing routine about it.

It was an understanding of just how risky the Shuttle technology was that drove a number of people within NASA to argue against eliminating the other, expendable launch vehicles. The Air Force was also concerned about relying on the Shuttle for all its launch needs. The Shuttle accident, however, settled the case. A new policy supporting a "mixed fleet" of launch vehicles was created, and expendable launch vehicles went back into production.50

Unfortunately, a dearth of launch vehicles was not the only impact the Challenger accident had on NASA or Goddard. The tragedy shattered NASA's public image, leading to intense public scrutiny of its operations and a general loss of confidence in its ability to conduct missions safely and successfully. Some within NASA wondered if the agency would even survive. To make things worse, the Challenger accident was followed four months later by the loss of a Delta rocket carrying a new weather satellite into orbit, and a year later by the loss of an Atlas-Centaur rocket carrying a Department of Defense satellite. While these were not NASA projects, the agency received the criticism and acquired the public image of a Federal entity that could not execute its tasks. Launches all but came to a halt for almost two years, and even the scientific satellite projects found themselves burdened with more safety checks and oversight processes. The Shuttle resumed launches in 1988, but NASA took another hit in 1990 when it launched the much-touted Hubble Space Telescope, only to discover that the telescope had a serious flaw in its main mirror. As the last decade of the century began, NASA needed some big successes to regain the nation's confidence in the agency's competence and value. Goddard would help provide those victories.

One of Goddard's biggest strengths was always its expertise in spacecraft construction. Most of the incredibly successful Explorer class of satellites, for example, were built in-house at Goddard. But the size and complexity of space science projects at Goddard - and even the Center's Explorer satellites - had grown dramatically over the years. From the early Explorer spacecraft, which could be designed, built and launched in one to three years, development and launch cycles had grown until they stretched 10 years or more. Aside from the cost of these large projects, they entailed much more risk for the scientists involved.
If a satellite took 15 years from inception to launch, its scientists had to devote a major portion of their careers to the... ....project. If it failed, the cost to their careers would be enormous. In part, the growth in size and complexity of satellites was one born of necessity. To get sharp images of distant stars, the Hubble Space Telescope had to be big enough to collect large amounts of light. In the more cost-conscious era following Apollo, where new satellite starts began to dwindle every year, the pressure also increased to put as many things as possible on every new satellite that was approved. But in 1989, Tom Huber, Goddard's director of engineering, began advocating for Goddard to begin building a new line of smaller satellites. In a sense, these "Small Explorers," or SMEX satellites, would be a return to Goddard's roots in innovative, small and quickly produced spacecraft. But because technology had progressed, they could incorporate options such as fiber optic technology, standard interfaces, solid state recorders, more advanced computers that fit more power and memory into less space, and miniature gyros and star trackers. Some of these innovations, such as the solid state recorders and advanced microchip technology for space applications, had even been developed in-house at Goddard. As a result, these small satellites could be even more capable than some of the larger projects Goddard had built in the past. The goal of the SMEX satellites was to cost less than $30 million and take less than three years to develop. The program has proved highly successful, launching five satellites since 1992, and is continuing to develop advanced technology to enable the design of even more capable, inexpensive spacecraft.51 In late 1989, Goddard launched the Cosmic Background Explorer (COBE) satellite aboard a Delta launch rocket. Originally scheduled for launch aboard the Space Shuttle, the COBE satellite, which was built in-house at Goddard, had been totally redesigned in less than 36 months after the Challenger accident to fit the nose cone of a Delta rocket. Using... ....complex instruments, COBE went in search of evidence to test the "Big Bang" theory of how the universe began - and found it. Famed cosmologist Stephen Hawking called the NASA-University COBE team's discovery "the discovery of the century, if not of all time."52 The COBE satellite had perhaps solved one of the most fascinating mysteries in existence - the origins of the universe in which we live. It had taken 15 years to develop, but the COBE satellite offered the public proof that NASA could take on a difficult mission, complete it successfully, and produce something of value in the process. Goddard reached out into another difficult region of the universe when it launched the Compton Gamma Ray Observatory in 1990. The Compton was the second of NASA's planned "Four Great Observatories" that would explore the universe in various regions of the electromagnetic spectrum. The Hubble Space Telescope was to cover the visible and ultraviolet regions, the Compton was to explore the gamma ray region, and two additional observatories were to investigate phenomena in X-ray and infrared wavelengths. At over 17 tons, the Compton was the largest satellite ever launched into orbit, and its task was to explore some of the highest energy and perplexing phenomena in the cosmos. 
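To return briefly to the COBE result described above: the satellite's headline measurement was that the cosmic microwave background matches an almost perfect blackbody spectrum at roughly 2.7 kelvin. The snippet below is only an illustrative sketch of that textbook Planck spectrum, not COBE's actual analysis; the constants are standard values, and 2.725 K is the commonly cited background temperature.

import math

# Illustrative sketch: Planck blackbody spectral radiance at the approximate
# cosmic microwave background temperature. Not COBE's analysis pipeline.
h   = 6.62607015e-34   # Planck constant, J*s
c   = 2.99792458e8     # speed of light, m/s
k_B = 1.380649e-23     # Boltzmann constant, J/K
T   = 2.725            # assumed CMB temperature, kelvin

def planck_radiance(nu_hz, temp_k):
    """Spectral radiance B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    x = h * nu_hz / (k_B * temp_k)
    return (2 * h * nu_hz**3 / c**2) / math.expm1(x)

nu_peak = 58.79e9 * T   # Wien's law in frequency form: peak near 58.79 GHz per kelvin
print(f"spectrum peaks near {nu_peak / 1e9:.0f} GHz, "
      f"B_nu ~ {planck_radiance(nu_peak, T):.2e} W m^-2 Hz^-1 sr^-1")

A measured spectrum that hugs this curve across frequencies is the signature of a hot, dense early universe, which is why the COBE measurement carried so much weight.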
Three years after the Compton launch, Goddard found itself taking on an even more difficult challenge when the Center undertook the first Hubble servicing mission - better known as the Hubble repair mission. The odds of successfully developing and implementing a fix for the telescope, which had a flaw not in one instrument but in its central mirror, were estimated at no better than 50%. But because of Goddard's earlier successful pioneering efforts with serviceable satellites, the Hubble had been designed to be serviced in space. This capability, and Goddard's previous experience repairing the Solar Max satellite, provided the critical components that made the Hubble repair possible. Fired with the same enthusiasm and sense of crisis that had fueled the Apollo program, the Goddard team assigned to manage the project, working with a hand-picked Shuttle crew from Houston's Johnson Space Center, succeeded beyond expectation. The success of such a difficult mission earned the team a Collier Trophy - the nation's highest award for the greatest aeronautical achievement in any given year.53

Even as Goddard launched the Compton Observatory and the Hubble Space Telescope to explore new regions of the universe, NASA announced the start of a massive new initiative to explore the planet we call home. Dubbed "Mission to Planet Earth" when it was introduced in 1990, the effort was expected to spend thirty billion dollars over at least 15 years in order to take a long-term, systems-oriented look at the health of the planet. In some ways, the program was a natural outgrowth of increasing environmental concerns over the years and the improved ability of satellites to analyze the atmosphere and oceans of our planet. But it received a big boost when a hole in the ozone layer was discovered in 1985. That discovery, as one researcher put it, "dramatized that the planet was at risk, and the potential relevance of NASA satellite technology to understanding that risk." In the wake of the Challenger disaster, Mission to Planet Earth was also seen as one of the top "leadership initiatives" that could help NASA recover from the tragedy and regain the support of the American public.54

Although numerous NASA centers would participate in the MTPE effort, the program office was located at Goddard. It was a natural choice, because Goddard was already the main Earth science center in the agency. Earth Science was broken out of the Space and Earth Sciences directorate, and its research began to take on a new sense of relevance in the public eye. As with earlier Earth science efforts, however, the political and social implications of this data also have made the program more susceptible to shifting national priorities than its space science counterparts. In the past eight years, the program has been scaled back repeatedly. Its budget is now down to seven billion dollars, and the name of the program has been changed to Earth Science Enterprise.55 There are numerous reasons for the cutbacks, but it can be argued that we find money for the items that are high national priorities. And one factor in the changing fortunes of the Mission to Planet Earth program is undeniably the shifting agendas that affect NASA funding.
Nevertheless, the more moderate Earth Science Enterprise program will still give scientists their first real opportunity to study the planet's various oceanographic and atmospheric processes as an integrated system instead of individual components - a critical step toward understanding exactly how our planet operates and how our actions impact its health.

In short, Goddard's work in the early 1990s helped bring NASA out of the dark post-Challenger era and helped create a new energy, enthusiasm and curiosity about both planet Earth and other bodies in the universe. We now had the technology to reach back to the very beginning of time and the outer reaches of the universe. The Hubble servicing mission made possible the beautiful images of far-away galaxies, stars, nebulae and planets that now flow into publications on a regular basis. These images have not only provided valuable clues to scientific questions about the cosmos, they have also fired the imaginations of both children and adults, generating a new enthusiasm for space exploration and finding out more about the galaxy and universe we call home. At the same time, we had the technology to begin to piece together answers about where El Niño weather patterns came from, how our oceans and atmosphere work together to create and control our climate, and how endangered our environment really is. These advances provided critical support for NASA at a time when many things about the agency, and the Goddard Space Flight Center, were changing.

Better, Faster, Cheaper

As we head into the twenty-first century, the world is changing at a rapid pace. The electronic superhighways of computers and communications are making the world a smaller place and the marketplace a more global one. Concerns about the United States' competitiveness are growing as international competition increases. The crisis-driven days of the space race are also over, and cost is now a serious concern when Congress looks at whether or not additional space projects should be funded. This need to be more cost-efficient is driving changes both within Goddard itself and in its relationships with outside industry.

Goddard recently underwent a major administrative reorganization in the hopes of making better use of its engineers' time. Instead of being scattered around the Center, its almost 2,000 engineers are being organized almost entirely into either a new Applied Engineering and Technology (AET) directorate or a new Systems, Technology, and Advanced Concepts (STAAC) directorate. In essence, AET will provide the hands-on engineering support for whatever projects are underway at the Center, and STAAC will work on advanced concepts and systems engineering for future projects. Again, this change in matrix structure within Goddard is not a new concept. The Center has gone back and forth a couple of times between putting engineers with scientists on project teams and trying to follow a stricter discipline-oriented organization. The advantages of a project-based organization are that the engineers get to focus on one job at a time and build synergistic relationships with the scientists with whom they are working. These relationships often lead to innovative ideas or concepts that the individual engineers or scientists might not have come to on their own.
The disadvantage of this structure, which is a greater concern in times of tighter budgets, is that even if those engineers have excess time during lulls in the project, it can't easily be taken advantage of by anyone else in the Center. Their talent is tied up in one place, which can also lead to territorial "fiefdoms" instead of a more ideal Center-wide cooperation.56 At the present time, the changes are administrative only. The engineers are still being co-located with their scientist colleagues. How or if that changes in the future remains to be seen, as does the success of the reorganization in general. After all, the impact of any administrative change is determined more... ....by how it is implemented than how it looks on paper, and the success of that can only be determined once the change has been made.57 Another issue facing Goddard is the recurring question of who should be building the spacecraft. One of the strengths of Goddard has always been its in-house ability to design and build both spacecraft and instruments. The Center's founders created this in-house capability for two reasons. First, there was little in the way of a commercial spacecraft industry at the time Goddard was started. Second, although most of the satellites actually would be built by contractors, the founders of NASA believed that the agency had to have hands-on knowledge of building spacecraft in order to manage those contracts effectively. Over the years, the commercial spacecraft industry has grown and matured tremendously, leading to periodic discussions as to whether NASA should leave the spacecraft building jobs entirely to the private sector. After all, there is general agreement that the government, in the form of NASA, should not do what industry is capable of doing. In truth, however, the issue isn't quite that simple. In the late 1970s, Goddard's senior management all but stopped in-house satellite building at the Center, focusing the engineers' efforts on instrument building, instead. The rationale was that industry was capable of building satellites and NASA should be working on developing advanced technology sensors and instruments. Yet even aside from the argument that keeping in-house competence was necessary to effectively manage contracts with industry, there were flaws to this rationale. For one thing, building satellites in-house had a significant indirect effect on the employees at Goddard. The ability to help design spacecraft helped attract bright young engineers to the Center, which is always an important concern in a field where industry jobs generally pay better than NASA positions. Furthermore, knowing that some of the spacecraft sitting on top of launch vehicles had been built in-house gave Goddard employees a sense of pride and involvement in the space program that instrument building alone could not create. Taking away that element caused a huge drop in the Center's morale. Indeed, when Tom Young became the Center's director in 1980, one of his first moves was to restore the building of in-house satellites in the hopes of rebuilding morale.58 The commercial space industry has matured even further in the past 20 years, and the question about whether Goddard still should be building in-house satellites has been raised again in recent years. In the end, the answer is probably "Yes". The question lies more in the type and number of satellite projects the Center should undertake. 
The goal is for Goddard to pursue one or two in-house projects that involve advanced spacecraft technology and to contract out projects that involve more proven spacecraft concepts. At the same time, Goddard is taking advantage of the expertise now present in the commercial satellite industry by introducing a new "Rapid Spacecraft Procurement Initiative," with the goal of reducing the development time and cost of new spacecraft. By "pre-qualifying" certain standard spacecraft designs from various commercial satellite contractors, Goddard hopes to make it possible for some experiments to be integrated into a spacecraft and launched within as short a time frame as a year. Not every experiment can be fit into a standard spacecraft design, but there are certainly some which could benefit from this quick-turnaround system.The contracts developed by Goddard for this initiative are now being used by not only other NASA Centers, but by the Air Force, as well.59 A more complex issue is how involved NASA should be in even managing the spacecraft built by industry. Historically, Goddard has employed a very thorough and detailed oversight policy with the contracts it manages. One of the reasons the Center developed this careful, conservative policy was to avoid failure in the high-profile, high-dollar realm of NASA. As a result, the concern of NASA engineers tends to be to make sure the job is done right, regardless of the cost. While industry engineers have the same interest in excellence and success, they sometimes have greater pressure to watch the bottom line. Goddard managers quote numerous examples of times contractors only agreed to conduct additional pre-launch tests after Goddard engineers managing the contract insisted on it. They also recall various instances where Goddard finally sent its own engineers to a contractor's factory to personally supervise projects that were in trouble. Industry, on the other hand, can argue that Goddard's way of building satellites is not necessarily the only right way and this double-oversight slows down innovation and greatly increases the cost of building satellites. And in an era of decreasing federal budgets, deciding how much oversight is good or enough becomes an especially sticky issue. Currently, the trend seems to be toward a more hands-off, performance-based contract relationship with industry. Industry simply delivers a successful satellite or doesn't get paid. Some argue that a potential disadvantage to this approach is that it could rob industry engineers of the advice and experience Goddard might be... ....able to offer. Goddard's scientists and engineers have a tremendous corporate memory and have learned many lessons the hard way. So sharing that expertise might prove more cost-effective in the long run than the bottom line salary and labor allocation figures of a more hands-off system might suggest. In the end, there is truth in what all parties say. It's hard to say what the "right" answer is because, for all our progress in the world of space, we are still feeling our way and learning from our mistakes as we keep reaching out to try new things. The exact nature and scope of NASA's mission has been the subject of frequent debate since the end of the Apollo program. But NASA certainly has an edict to do those things that for reasons of cost, risk, or lack of commercial market value, industry can or will not undertake. In the early 1960s, the unknowns and risk of... 
....failure were far too high and the potential profit far too uncertain for industry to fund the development of anything but communication satellites. Today, that situation is changing. In some cases, a commercial market for the data is developing. In others, the operations once considered too risky for anyone but NASA to perform are now considered routine enough to contract out to private companies, which are also much more capable than they once were. Some tracking and data functions that were a part of the Goddard Space Flight Center since its inception, for example, were recently moved down to the Marshall Space Flight Center, where they will be managed by a private company under contract to a supervising Space Operations Management Office at the Johnson Space Center.60 NASA is also starting to relinquish its hold on the launching of rockets itself. In years past, all launches were conducted at government facilities for reasons of both safety and international politics. But that is beginning to change. The state of Virginia is already in the process of building a commercial space port at the Wallops Island facility in partnership with private industry. The payloads and launch vehicles using the space port will be developed privately, and the consortium will contract with NASA to provide launch range, radar, telemetry, tracking and safety analysis services.61 NASA has also used a privately developed, airplane-launched rocket called the "Pegasus" to send a number of small satellites into space. It should be noted, however, that the Pegasus vehicle went through a series of developmental problems before it became a reliable system. The same is true of the SeaWIFS satellite, which is currently providing very useful data on ocean color but which was developed under a very different type of contract than most scientific satellites. The SeaWIFS spacecraft was developed independently by the Orbital Sciences Corporation and NASA paid only for the data it uses. While the satellite is now generating very good data, it ran into many developmental difficulties and delays that caused both NASA and the contractor a lot of aggravation. On the one hand, because NASA paid the majority of the money up front, there was less incentive for the contractor to keep on schedule. On the other hand, the up-front, fixed-price lease meant that the contractor absorbed the costs of the problems and delays when they occurred.62 Fixed-price contracts work well in many arenas. The complication with scientific satellites is that these spacecraft are not generally proven designs. It's difficult to foresee ahead of time what problems are going to arise in a research project that's breaking new ground. Indeed, there are a lot of uncertainties amidst the tremendous atmosphere of change facing Goddard, NASA and the world at large, and it remains to be seen how they will all work out. Most likely, it will take a number of missteps and failures before the right mix and/or approach is found. The process will also undoubtedly entail the same pendulum swings between different approaches that has characterized Goddard throughout its history. And since external circumstances and goals are constantly changing, there may never be one "correct" mix or answer found. In the end, our efforts in space are still an exploration into the unknown. On the cutting edge of technology and knowledge, change is the only constant - in theories of the universe as well as technology, priorities, and operating techniques. 
Once upon a time, Goddard's biggest challenge was overcoming the technical obstacles to operating in space. Today, Goddard's challenge is to find the flexibility to keep up with a rapidly changing world without losing the magic that has made the Center so successful over the past forty years. The new frontier for Goddard is now much broader than just space itself. The Center has to be open to reinventing itself, infusing new methods and a renewed sense of entrepreneurial innovation and teamwork into its operations while continuing to push boundaries in technology development, space and Earth exploration for the benefit of the human race. It has to be flexible enough to work as part of broader NASA, university and industry and international teams in a more global and cost-constrained space industry and world. It has to find a way to reach forward into new areas of research, commercial operations, and more efficient procedures without losing the balance between cost and results, science and engineering, basic research and applications, inside and outside efforts. And, most importantly, Goddard has to accomplish all of these things while preserving the most valuable strength it has - the people that make it all possible.
http://history.nasa.gov/SP-4312/ch2.htm
Liberalism (from the Latin liberalis, "of freedom") is the belief in the importance of liberty and equal rights. Liberals espouse a wide array of views depending on their understanding of these principles, but most liberals support such fundamental ideas as constitutions, liberal democracy, free and fair elections, human rights, free trade, secular society, and the market economy. These ideas are often accepted even among political groups that do not openly profess a liberal ideological orientation. Liberalism encompasses several intellectual trends and traditions, but the dominant variants are classical liberalism, which became popular in the 18th century, and social liberalism, which became popular in the 20th century. Liberalism first became a powerful force in the Age of Enlightenment, rejecting several foundational assumptions that dominated most earlier theories of government, such as hereditary status, established religion, absolute monarchy, and the Divine Right of Kings. Early liberal thinkers such as John Locke, who is often regarded as the founder of liberalism as a distinct philosophical tradition, employed the concept of natural rights and the social contract to argue that the rule of law should replace autocratic government, that rulers were subject to the consent of the governed, and that private individuals had a fundamental right to life, liberty, and property. The American Revolution and the French Revolution used liberal philosophy to justify the violent overthrow of autocratic rule, paving the way for the development of modern history. The 19th century saw liberal governments established in nations across Europe, Latin America, and North America. Liberal power increased even further in the 20th century, when liberal democracies triumphed in two world wars and survived major ideological challenges from fascism and communism. Today, liberals are organized politically on all major continents. They have played a decisive role in the growth of republics, the spread of civil rights and civil liberties, the establishment of the modern welfare state, the institution of religious toleration and religious freedom, and the development of globalization. They have also shown strong support for regional and international organizations, including the European Union and the United Nations, hoping to reduce conflict through diplomacy and multilateral negotiations. To highlight the importance of liberalism in modern life, political scientist Alan Wolfe claimed that "liberalism is the answer for which modernity is the question". |Part of the Politics series on| Words such as liberal, liberty, and libertarian all trace their history to the Latin liber, which means "free". One of the first recorded instances of the word liberal occurs in 1375, when it was used to describe the liberal arts. The word's early connection with the classical education of a medieval university soon gave way to a proliferation of different denotations and connotations. Liberal could refer to "free in bestowing" as early as 1387, "made without stint" in 1433, "freely permitted" in 1530, and "free from restraint"—often as a pejorative remark—in the 16th and the 17th centuries. In 16th century England, liberal could have positive or negative attributes in referring to someone's generosity or indiscretion. In Much Ado About Nothing, Shakespeare wrote of "a liberal villaine" who "hath...confest his vile encounters". 
With the rise of the Enlightenment, the word acquired decisively more positive undertones, being defined as "free from narrow prejudice" in 1781 and "free from bigotry" in 1823. In 1815, the first use of the word liberalism appeared in English. By the middle of the 19th century, liberal started being used as a fully politicized term for parties and movements all over the world. The Oxford Encyclopedic English Dictionary defines the word liberal as "giving freely, generous, not sparing; open-minded, not prejudiced ... for general broadening of the mind". It also defines the word as "regarding many traditional beliefs as dispensable, invalidated by modern thought, or liable to change". Identifying any definitive meaning for the word, however, has proven challenging to scholars and to the general public. The widespread use of the word liberal often inspires people to understand it based on a wide array of factors, including geographic location or political orientation. The American political scientist Louis Hartz echoed this frustration and confusion, writing that "Liberalism is an even vaguer term, clouded as it is by all sorts of modern social reform connotations, and even when one insists on using it in the Lockian sense...there are aspects of our original life in the Puritan colonies and the South which hardly fit its meaning". Hartz emphasized the European origin of the word, conceptualizing a liberal as someone who believes in liberty, equality, and capitalism—in opposition to the association that American conservatives have tried to establish between liberalism and centralized government. The history of liberalism spans the better part of the last four centuries, beginning in the English Civil War and continuing after the end of the Cold War. Liberalism started as a major doctrine and intellectual endeavor in response to the religious wars gripping Europe during the 16th and 17th centuries, although the historical context for the ascendancy of liberalism goes back to the Middle Ages. The first notable incarnation of liberal agitation came with the American Revolution, and liberalism fully exploded as a comprehensive movement against the old order during the French Revolution, which set the pace for the future development of human history. Classical liberals, who broadly emphasized the importance of free markets and civil liberties, dominated liberal history for a century after the French Revolution. The onset of the First World War and the Great Depression, however, accelerated the trends begun in late 19th century Britain towards a new liberalism that emphasized a greater role for the state in ameliorating devastating social conditions. By the beginning of the 21st century, liberal democracies and their fundamental characteristics—support for constitutions, civil rights and individual liberties, pluralistic society, and the welfare state—had prevailed in most regions around the world. European experiences during the Middle Ages were often characterized by fear, uncertainty, and warfare—the latter being especially endemic in medieval life. A symbiotic relationship emerged between the Catholic Church and regional rulers: the Church gave kings and queens authority to rule while the latter spread the message of the Christian faith and did the bidding of Christian social and military forces. The influence of the Church can be seen by the fact that the very term often referred to European society as a whole. 
In the 14th century, however, disputes over papal successions and the enormous casualty rates of the Black Death incensed people across the continent because they believed that the Church was ineffective. The emergence of the Renaissance in the 15th century also helped to weaken unquestioning submission to the Church by reinvigorating interest in science and in the classical world. In the 16th century, the Protestant Reformation developed from sentiments that viewed the Church as an oppressive ruling order too involved in the feudal and baronial structure of European society. The Church launched a Counter Reformation to contain these bubbling sentiments, but the effort unraveled in the Thirty Years War of the 17th century. In England, a massive civil war led to the execution of King Charles I in 1649. Parliament ultimately succeeded—with the Glorious Revolution of 1688—in establishing a limited and constitutional monarchy. The main facets of early liberal ideology emerged from these events, and historians Colton and Palmer characterize the period in the following light: "The unique thing about England was that Parliament, in defeating the king, arrived at a workable form of government. Government remained strong but came under parliamentary control. This determined the character of modern England and launched into the history of Europe and of the world the great movement of liberalism."

The early hero of that movement was the English philosopher John Locke. Locke debated recent political controversies with some of the most famous intellectuals of the day, but his greatest rival was Thomas Hobbes. Hobbes and Locke looked at the political world and disagreed on several substantial issues, although their arguments inspired later social contract theories outlining the relationship between people and their governments. Locke developed a radical political notion, arguing that government acquires consent from the governed. His celebrated Two Treatises (1690), the foundational text of liberal ideology, outlined his major ideas. Once humans moved out of their natural state and formed societies, Locke argued as follows: "Thus that which begins and actually constitutes any political society is nothing but the consent of any number of freemen capable of a majority to unite and incorporate into such a society. And this is that, and that only, which did or could give beginning to any lawful government in the world". The stringent insistence that lawful government did not have a supernatural basis was a sharp break with most previous traditions of governance.

The intellectual journey of liberalism continued beyond Locke with the Enlightenment, a period of profound intellectual vitality that questioned old traditions and influenced several monarchies throughout the 18th century. The ideas circulating in the Enlightenment had a powerful impact in North America and in France. The American colonies had been loyal British subjects for decades, but they declared independence in 1776 after harsh British taxation policies. Military engagements in the American Revolution began in 1775 and were largely complete by 1781. After the war, the colonies held a Constitutional Convention in 1787 to resolve the problems stemming from the Articles of Confederation. The resulting Constitution of the United States settled on a republic. The American Revolution was an important struggle in liberal history, and it was quickly followed by the most important: the French Revolution.
Three years into the French Revolution, German writer Johann von Goethe reportedly told the defeated Prussian soldiers after the Battle of Valmy that "from this place and from this time forth commences a new era in world history, and you can all say that you were present at its birth". Historians widely regard the Revolution as one of the most important events in human history, and the end of the early modern period is attributed to the onset of the Revolution in 1789. The Revolution is often seen as marking the "dawn of the modern era," and its convulsions are widely associated with "the triumph of liberalism". For liberals, the Revolution was their defining moment, and later liberals approved of the French Revolution almost entirely—"not only its results but the act itself," as two historians noted. The French Revolution began in May 1789 with the convocation of the Estates-General. The first year of the Revolution witnessed, among other major events, the Storming of the Bastille in July and the passage of the Declaration of the Rights of Man and of the Citizen in August. The next few years were dominated by tensions between various liberal assemblies and a conservative monarchy intent on thwarting major reforms. A republic was proclaimed in September 1792. External conflict and internal squabbling significantly radicalized the Revolution, culminating in the brutal Reign of Terror. After the fall of Robespierre and the Jacobins, the Directory assumed control of the French state in 1795 and held power until 1799, when it was replaced by the Consulate under Napoleon Bonaparte. Napoleon ruled as First Consul for about five years, centralizing power and streamlining the bureaucracy along the way. The Napoleonic Wars, pitting the heirs of a revolutionary state against the old monarchies of Europe, started in 1805 and lasted for a decade. Along with their boots and Charleville muskets, French soldiers brought to the rest of the European continent the liquidation of the feudal system, the liberalization of property laws, the end of seigneurial dues, the abolition of guilds, the legalization of divorce, the disintegration of Jewish ghettos, the collapse of the Inquisition, the permanent destruction of the Holy Roman Empire, the elimination of church courts and religious authority, the establishment of the metric system, and equality under the law for all men. Napoleon wrote that "the peoples of Germany, as of France, Italy and Spain, want equality and liberal ideas," with some historians suggesting that he may have been the first person ever to use the word liberal in a political sense. He also governed through a method that one historian described as "civilian dictatorship," which "drew its legitimacy from direct consultation with the people, in the form of a plebiscite". Napoleon did not always live up to the liberal ideals he espoused, however. His most lasting achievement, the Civil Code, served as "an object of emulation all over the globe," but it also perpetuated further discrimination against women under the banner of the "natural order". The First Empire eventually collapsed in 1815, but this period of chaos and revolution introduced the world to a new movement and ideology that would soon crisscross the globe. Liberals in the 19th century wanted to develop a world free from government intervention, or at least free from too much government intervention. They championed the ideal of negative liberty, which constitutes the absence of coercion and the absence of external constraints.
They believed governments were cumbersome burdens and they wanted governments to stay out of the lives of individuals. Liberals simultaneously pushed for the expansion of civil rights and for the expansion of free markets and free trade. The latter kind of economic thinking had been formalized by Adam Smith in his monumental Wealth of Nations (1776), which revolutionized the field of economics and established the "invisible hand" of the free market as a self-regulating mechanism that did not depend on external interference. Sheltered by liberalism, the laissez-faire economic world of the 19th century emerged with full tenacity, particularly in the United States and in the United Kingdom. Politically, liberals saw the 19th century as a gateway to achieving the promises of 1789. In Spain, the Liberales, the first group to use the liberal label in a political context, fought for the implementation of the 1812 Constitution for decades—compelling the king to restore the constitution in 1820 at the start of the Trienio Liberal and defeating the conservative Carlists in the 1830s. In France, the July Revolution of 1830, orchestrated by liberal politicians and journalists, removed the Bourbon monarchy and inspired similar uprisings elsewhere in Europe. Frustration with the pace of political progress, however, sparked even larger revolutions in 1848. Revolutions spread throughout the Austrian Empire, the German states, and the Italian states. Governments fell rapidly. Liberal nationalists demanded written constitutions, representative assemblies, greater suffrage rights, and freedom of the press. A second republic was proclaimed in France. Serfdom was abolished in Prussia, Galicia, Bohemia, and Hungary. Metternich shocked Europe when he resigned and fled to Britain in panic and disguise. Eventually, however, the success of the revolutionaries petered out. Without French help, the Italians were easily defeated by the Austrians. Austria also managed to contain the bubbling nationalist sentiments in Germany and Hungary, helped along by the failure of the Frankfurt Assembly to unify the German states into a single nation. Under abler leadership, however, the Italians and the Germans wound up realizing their dreams of independence. The Sardinian Prime Minister, Camillo di Cavour, was a shrewd liberal who understood that the only effective way for the Italians to gain independence was if the French were on their side. Napoleon III agreed to Cavour's request for assistance and France defeated Austria in the Franco-Austrian War of 1859, setting the stage for Italian independence. German unification transpired under the leadership of Otto von Bismarck, who decimated the enemies of Prussia in war after war, finally triumphing against France in 1871 and proclaiming the German Empire in the Hall of Mirrors at Versailles, ending another saga in the drive for national unification. The French proclaimed a third republic after their loss in the war, and the rest of French history transpired under republican eyes. Just a few decades after the French Revolution, liberalism went global. The liberal and conservative struggles in Spain also replicated themselves in Latin American countries like Mexico and Ecuador. From 1857 to 1861, Mexico was gripped in the bloody War of Reform, a massive internal and ideological confrontation between the liberals and the conservatives. The liberal triumph there paralleled the situation in Ecuador.
Similar to other nations throughout the region at the time, Ecuador was steeped in turmoil, with the people divided between rival liberal and conservative camps. From these conflicts, García Moreno established a conservative government that was eventually overthrown in the Liberal Revolution of 1895. The Radical Liberals who toppled the conservatives were led by Eloy Alfaro, a firebrand who implemented a variety of sociopolitical reforms, including the separation of church and state, the legalization of divorce, and the establishment of public schools. Although liberals were active throughout the world in the 19th century, it was in Britain that the future character of liberalism would take shape. The liberal sentiments unleashed after the revolutionary era of the previous century ultimately coalesced into the Liberal Party, formed in 1859 from various Radical and Whig elements. The Liberals produced one of the greatest British prime ministers—William Gladstone, who was also known as the Grand Old Man. Under Gladstone, the Liberals reformed education, disestablished the Church of Ireland, and introduced the secret ballot for local and parliamentary elections. Following Gladstone, and after a period of Conservative domination, the Liberals returned with full strength in the general election of 1906, aided by working class voters worried about food prices. After that historic victory, the Liberal Party shifted from its classical liberalism and laid the groundwork for the future British welfare state, establishing various forms of health insurance, unemployment insurance, and pensions for elderly workers. This new kind of liberalism would sweep over much of the world in the 20th century. The 20th century started perilously for liberalism. The First World War proved a major challenge for liberal democracies, although they ultimately defeated the dictatorial states of the Central Powers. The war precipitated the collapse of older forms of government, including empires and dynastic states. The number of republics in Europe reached 13 by the end of the war, as compared with only three at the start of the war in 1914. This phenomenon became readily apparent in Russia. Before the war, the Russian monarchy was reeling from losses to Japan and political struggles with the Kadets, a powerful liberal bloc in the Duma. Facing huge shortages in basic necessities along with widespread riots in early 1917, Czar Nicholas II abdicated in March, ending three centuries of Romanov rule and allowing liberals to declare a republic. Under the uncertain leadership of Alexander Kerensky, however, the Provisional Government mismanaged Russia's continuing involvement in the war, prompting angry reactions from the Petrograd workers, who drifted further and further to the left. The Bolsheviks, a communist group led by Vladimir Lenin, seized the political opportunity from this confusion and launched a second revolution in Russia during the same year. The communist victory presented a major challenge for liberalism because it precipitated a rise in totalitarian regimes, but the economic problems that rocked the Western world in the 1930s proved even more devastating. The Great Depression fundamentally changed the liberal world. There was an inkling of a new liberalism during the First World War, but modern liberalism fully hatched in the 1930s as a response to the Depression, which inspired John Maynard Keynes to revolutionize the field of economics.
Classical liberals, such as economist Ludwig von Mises, posited that completely free markets were the optimal economic units capable of effectively allocating resources—that over time, in other words, they would produce full employment and economic security. Keynes spearheaded a broad assault on classical economics and its followers, arguing that totally free markets were not ideal, and that hard economic times required intervention and investment from the state. Where the market failed to properly allocate resources, for example, the government was required to stimulate the economy until private funds could start flowing again—a "prime the pump" kind of strategy designed to boost industrial production. The social liberal program launched by President Roosevelt in the United States, the New Deal, proved very popular with the American public. In 1933, when FDR came into office, the unemployment rate stood at roughly 25 percent. The size of the economy, measured by the gross national product, had fallen to half the value it had in early 1929. The electoral victories of FDR and the Democrats precipitated a deluge of deficit spending and public works programs. By 1940, the level of unemployment had fallen by 10 points to around 15 percent. Additional state spending and the gigantic public works program sparked by the Second World War eventually pulled the United States out of the Great Depression. From 1940 to 1941, government spending increased by 59 percent, the gross domestic product skyrocketed 17 percent, and unemployment fell below 10 percent for the first time since 1929. By 1945, after vast government spending, public debt stood at a staggering 120 percent of GNP, but unemployment had been effectively eliminated. Most nations that emerged from the Great Depression did so with deficit spending and strong intervention from the state. The economic woes of the period prompted widespread unrest in the European political world, leading to the rise of fascism as an ideology and a movement that heavily criticized liberalism. Broadly speaking, fascist ideology emphasized elite rule and absolute leadership, a rejection of equality, the imposition of patriarchal society, a stern commitment to war as an instrument of natural behavior, and the elimination of supposedly inferior or subhuman groups from the structure of the nation. The fascist and nationalist grievances of the 1930s eventually culminated in the Second World War, the deadliest conflict in human history. The Allies prevailed in the war by 1945, and their victory set the stage for the Cold War between communist states and liberal democracies. The Cold War featured extensive ideological competition and several proxy wars. While communist states and liberal democracies competed against one another, an economic crisis in the 1970s inspired a temporary move away from Keynesian economics across many Western governments. This classical liberal renewal, known as neoliberalism, lasted through the 1980s and the 1990s, bringing about economic privatization of previously state-owned industries. However, recent economic troubles have prompted a resurgence in Keynesian economic thought. Meanwhile, nearing the end of the 20th century, communist states in Eastern Europe collapsed precipitously, leaving liberal democracies as the only major forms of government. At the beginning of the Second World War, the number of democracies around the world was about the same as it had been forty years before. After 1945, liberal democracies spread very quickly.
Even as late as 1974, roughly 75 percent of all nations were considered dictatorial, but now more than half of all countries are democracies. This last achievement spoke volumes about the influence of liberalism to the American intellectual Francis Fukuyama, who speculated on the "end of history" by claiming: "What we may be witnessing is not just the end of the Cold War, or the passing of a particular period of postwar history, but the end of history as such; that is, the end point of...ideological evolution and the universalization of...liberal democracy as the final form of human government." Liberalism—both as a political current and an intellectual tradition—is mostly a modern phenomenon that started in the 17th century, although some liberal philosophical ideas had precursors in classical antiquity. The Roman Emperor Marcus Aurelius praised "the idea of a polity administered with regard to equal rights and equal freedom of speech, and the idea of a kingly government which respects most of all the freedom of the governed". Scholars have also recognized a number of principles familiar to contemporary liberals in the works of several Sophists and in the Funeral Oration by Pericles. Liberal philosophy symbolizes an extensive intellectual tradition that has examined and popularized some of the most important and controversial principles of the modern world. Its immense scholarly and academic output has been characterized as containing "richness and diversity," but that diversity often has meant that liberalism comes in different formulations and presents a challenge to anyone looking for a clear definition. Though all liberal doctrines possess a common heritage, scholars frequently assume that those doctrines contain "separate and often contradictory streams of thought". The objectives of liberal theorists and philosophers have differed across various times, cultures, and continents. The diversity of liberalism can be gleaned from the numerous adjectives that liberal thinkers and movements have attached to the very term liberalism, including classical, egalitarian, economic, social, welfare-state, ethical, humanist, deontological, perfectionist, democratic, and institutional, to name a few. Despite these variations, liberal thought does exhibit a few definite and fundamental conceptions. At its very root, liberalism is a philosophy about the meaning of humanity and society. Political philosopher John Gray identified the common strands in liberal thought as being individualist, egalitarian, meliorist, and universalist. The individualist element avers the ethical primacy of the human being against the pressures of social collectivism, the egalitarian element assigns the same moral worth and status to all individuals, the meliorist element asserts that successive generations can improve their sociopolitical arrangements, and the universalist element affirms the moral unity of the human species and marginalizes local cultural differences. The meliorist element has been the subject of much controversy, defended by thinkers such as Immanuel Kant, who believed in human progress, while suffering from attacks by thinkers such as Rousseau, who believed that human attempts to improve themselves through social cooperation would fail. Describing the liberal temperament, Gray claimed that it "has been inspired by skepticism and by a fideistic certainty of divine revelation ... it has exalted the power of reason even as, in other contexts, it has sought to humble reason's claims".
The liberal philosophical tradition has searched for validation and justification through several intellectual projects. The moral and political suppositions of liberalism have been based on traditions such as natural rights and utilitarian theory, although sometimes liberals even requested support from scientific and religious circles. Through all these strands and traditions, scholars have identified the following major common facets of liberal thought: believing in equality and individual liberty, supporting private property and individual rights, supporting the idea of limited constitutional government, and recognizing the importance of related values such as pluralism, toleration, autonomy, and consent. Early liberals, including John Locke and Baruch Spinoza, attempted to determine the purpose of government in a liberal society. To these liberals, securing the most essential amenities of life—liberty and private property among them—required the formation of a "sovereign" authority with universal jurisdiction. In a natural state of affairs, liberals argued, humans were driven by the instincts of survival and self-preservation, and the only way to escape from such a dangerous existence was to form a common and supreme power capable of arbitrating between competing human desires. This power could be formed in the framework of a civil society that allows individuals to make a voluntary social contract with the sovereign authority, transferring their natural rights to that authority in return for the protection of life, liberty, and property. These early liberals often disagreed in their opinion of the most appropriate form of government, but they all shared the belief that liberty was natural and that its restriction needed strong justification. Liberals generally believed in limited government, although several liberal philosophers decried government outright, with Thomas Paine writing that "government even in its best state is a necessary evil". As part of the project to limit the powers of government, various liberal theorists—such as James Madison and the Baron de Montesquieu—conceived the notion of separation of powers, a system designed to equally distribute governmental authority among the executive, legislative, and judicial branches. Finally, governments had to realize, liberals maintained, that poor and improper governance gave the people authority to overthrow the ruling order through any and all possible means—even through outright violence and revolution, if needed. Contemporary liberals, heavily influenced by social liberalism, have continued to support limited constitutional government while also advocating for state services and provisions to ensure equal rights. Modern liberals claim that formal or official guarantees of individual rights are irrelevant when individuals lack the material means to benefit from those rights, urging a greater role for government in the administration of economic affairs. Beyond identifying a clear role for government in modern society, liberals also have obsessed over the meaning and nature of the most important principle in liberal philosophy: liberty. From the 17th century until the 19th century, liberals—from Adam Smith to John Stuart Mill—conceptualized liberty as the absence of interference from government and from other individuals, claiming that all people should have the freedom to develop their own unique abilities and capacities without being sabotaged by others. 
Mill's On Liberty (1859), one of the classic texts in liberal philosophy, proclaimed that "the only freedom which deserves the name, is that of pursuing our own good in our own way". Support for laissez-faire capitalism is often associated with this principle, with Friedrich Hayek arguing in The Road to Serfdom (1944) that reliance on free markets would preclude totalitarian control by the state. Beginning in the late 19th century, however, a new conception of liberty entered the liberal intellectual arena. This new kind of liberty became known as positive liberty to distinguish it from the prior negative version, and it was first developed by British philosopher Thomas Hill Green. Green rejected the idea that humans were driven solely by self-interest, emphasizing instead the complex circumstances that are involved in the evolution of our moral character. In a very profound step for the future of modern liberalism, he also tasked social and political institutions with the enhancement of individual freedom and identity. Foreshadowing the new liberty as the freedom to act rather than to avoid suffering from the acts of others, Green wrote the following: "If it were ever reasonable to wish that the usage of words had been other than it has been...one might be inclined to wish that the term 'freedom' had been confined to the...power to do what one wills." In contrast to previous liberal conceptions, which viewed society as populated by selfish individuals, Green viewed society as an organic whole in which all individuals have a duty to promote the common good. His ideas spread rapidly and were developed by other thinkers such as L. T. Hobhouse and John Hobson. In a few short years, this New Liberalism had become the essential social and political program of the Liberal Party in Britain, and it would encircle much of the world in the 20th century. In addition to examining negative and positive liberty, liberals have tried to understand the proper relationship between liberty and democracy. As they struggled to expand suffrage rights, liberals increasingly understood that people left out of the democratic decision-making process were liable to the tyranny of the majority, a concept explained in Mill's On Liberty and in Democracy in America (1835) by Alexis de Tocqueville. As a response, liberals began demanding proper safeguards to thwart majorities in their attempts at suppressing the rights of minorities. Besides liberty, liberals have developed several other principles important to the construction of their philosophical structure, such as equality, pluralism, and toleration. Highlighting the confusion over the first principle, Voltaire commented that "equality is at once the most natural and at times the most chimerical of things". All forms of liberalism assume, in some basic sense, that individuals are equal. In maintaining that people are naturally equal, liberals assume that they all possess the same right to liberty. In other words, no one is inherently entitled to enjoy the benefits of liberal society more than anyone else, and all people are equal subjects before the law. Beyond this basic conception, liberal theorists diverge on their understanding of equality. American philosopher John Rawls emphasized the need to ensure not only equality under the law, but also the equal distribution of material resources that individuals required to develop their aspirations in life. Libertarian thinker Robert Nozick disagreed with Rawls, championing the former version of Lockean equality instead.
To contribute to the development of liberty, liberals also have promoted concepts like pluralism and toleration. By pluralism, liberals refer to the proliferation of opinions and beliefs that characterize a stable social order. Unlike many of their competitors and predecessors, liberals do not seek conformity and homogeneity in the way that people think; in fact, their efforts have been geared towards establishing a governing framework that harmonizes and minimizes conflicting views, but still allows those views to exist and flourish. For liberal philosophy, pluralism leads easily to toleration. Since individuals will hold diverging viewpoints, liberals argue, they ought to uphold and respect the right of one another to disagree. From the liberal perspective, toleration was initially connected to religious toleration, with Spinoza condemning "the stupidity of religious persecution and ideological wars". Toleration also played a central role in the ideas of Kant and John Stuart Mill. Both thinkers believed that society will contain different conceptions of a good ethical life and that people should be allowed to make their own choices without interference from the state or other individuals. As one of the first modern ideologies, liberalism has had a profound impact on the ones that followed it. In particular, some scholars suggest that liberalism gave rise to feminism, although others maintain that liberal democracy is inadequate for the realization of feminist objectives. Liberal feminism, the dominant tradition in feminist history, hopes to eradicate all barriers to gender equality—claiming that the continued existence of such barriers eviscerates the individual rights and freedoms ostensibly guaranteed by a liberal social order. British philosopher Mary Wollstonecraft is widely regarded as the pioneer of liberal feminism, with A Vindication of the Rights of Woman (1792) expanding the boundaries of liberalism to include women in the political structure of liberal society. Less friendly to the goals of liberalism has been conservatism. Like liberalism, conservatism is complex and amorphous, laying claims to several intellectual traditions over the last three centuries. Edmund Burke, considered by some to be the first major proponent of modern conservative thought, offered a blistering critique of the French Revolution by assailing the liberal pretensions to the power of rationality and to the natural equality of all humans. However, a few variations of conservatism, like conservative liberalism, expound some of the same ideas and principles championed by classical liberalism, including "small government and thriving capitalism". Even more uncertain is the relationship between liberalism and socialism. Socialism began as a concrete ideology in the 19th century with the writings of Karl Marx, and it too—as with liberalism and conservatism—fractured into several major movements in the decades after its founding. The most prominent eventually became social democracy, which can be broadly defined as a project that aims to correct what it regards as the intrinsic defects of capitalism by reducing the inequalities that exist within an economic system. Several commentators have noted strong similarities between social liberalism and social democracy, with one political scientist even calling American liberalism "bootleg social democracy". 
Another movement associated with modern democracy, Christian democracy, hopes to spread Catholic social ideas and has gained a large following in some European nations. The early roots of Christian democracy developed as a reaction against the industrialization and urbanization associated with laissez-faire liberalism in the 19th century. Despite these complex relationships, some scholars have argued that liberalism actually "rejects ideological thinking" altogether, largely because such thinking could lead to unrealistic expectations for human society. Liberalism is frequently cited as the dominant ideology of modern times. Politically, liberals have organized extensively throughout the world. Liberal parties, think tanks, and other institutions are common in many nations, although they advocate for different causes based on their ideological orientation. Liberal parties can be center-left, centrist, or center-right depending on their location. They can further be divided based on their adherence to social liberalism or classical liberalism, although all liberal parties and individuals share basic similarities, including the support for civil rights and democratic institutions. On a global level, liberals are united in the Liberal International, which contains over 100 influential liberal parties and organizations from across the ideological spectrum. Some parties in the LI are among the most famous in the world, such as the Liberal Party of Canada, while others are among the smallest, such as the Liberal Party of Gibraltar. Regionally, liberals are organized through various institutions depending on the prevailing geopolitical context. In the European Parliament, for example, the Alliance of Liberals and Democrats for Europe is the predominant group that represents the interests of European liberals. In Europe, liberalism has a long tradition dating back to the 17th century. Scholars often split those traditions into English and French versions, with the former version of liberalism emphasizing the expansion of democratic values and constitutional reform and the latter rejecting authoritarian political and economic structures, as well as being involved with nation-building. The continental French version was deeply divided between moderates and progressives, with the moderates tending to elitism and the progressives supporting the universalization of fundamental institutions, such as universal suffrage, universal education, and the expansion of property rights. Over time, the moderates displaced the progressives as the main guardians of continental European liberalism. Moderates were identified with liberalism or liberal conservatism whereas progressives could fall under a number of left-wing camps, from liberalism and radicalism to republicanism and social democracy. A prominent example of these divisions is the German Free Democratic Party, which was historically divided between national liberal and social liberal factions. Modern European liberal parties exhibit diverse tendencies. Before the First World War, liberal parties dominated the European political scene, but they were gradually displaced by socialists and social democrats in the early 20th century. The fortunes of liberal parties since World War II have been mixed, with some gaining strength while others suffered from continuous declines. The fall of the Soviet Union and the breakup of Yugoslavia at the end of the 20th century, however, allowed the formation of many liberal parties throughout Eastern Europe.
These parties developed varying ideological characters. Some, such as the Slovenian Liberal Democrats or the Lithuanian Social Liberals, have been characterized as center-left. Others, such as the Romanian National Liberal Party, have been classified as center-right. Meanwhile, some liberal parties in Western Europe have undergone renewal and transformation, coming back to the political limelight after historic disappointments. Perhaps the most famous instance is the Liberal Democrats in Britain. The Liberal Democrats are the heirs of the once-mighty Liberal Party, which suffered a huge erosion of support to the Labour Party in the early 20th century. After nearly vanishing from the British political scene altogether, the Liberals eventually united with the Social Democratic Party, a Labour splinter group, in 1988—forming the current Liberal Democrats along the way. The Liberal Democrats earned significant popular support in the general election of 2005 and in local council elections, marking the first time in decades that a British party with a liberal ideology had achieved such electoral success. Both in Britain and elsewhere in Western Europe, liberal parties have often cooperated with socialist and social democratic parties, as evidenced by the Purple Coalition in the Netherlands during the late 1990s and into the 21st century. The Purple Coalition, one of the most consequential in Dutch history, brought together the progressive left-liberal D66, the market liberal and center-right VVD, and the socialist Labour Party—an unusual combination that ultimately legalized same-sex marriage, euthanasia, and prostitution while also instituting a non-enforcement policy on marijuana. In North America, unlike in Europe, the word liberalism almost exclusively refers to social liberalism in contemporary politics. The dominant Canadian and American parties, the Liberal Party and the Democratic Party, are frequently identified as being modern liberal or center-left organizations in the academic literature. In Canada, the long-dominant Liberal Party, affectionately known as the Grits, ruled the country for nearly 70 years during the 20th century. The party produced some of the most famous prime ministers in Canadian history, including Pierre Trudeau and Jean Chrétien, and has been primarily responsible for the development of the Canadian welfare state. The enormous success of the Liberals—virtually unmatched in any other liberal democracy—has prompted many political commentators over time to identify them as the nation's natural governing party. In the United States, modern liberalism traces its history to the popular presidency of Franklin Delano Roosevelt, who initiated the New Deal in response to the Great Depression and won an unprecedented four elections. The New Deal coalition established by FDR left a decisive legacy and impacted many future American presidents, including John F. Kennedy, a self-described liberal who defined a liberal as "someone who looks ahead and not behind, someone who welcomes new ideas without rigid reactions...someone who cares about the welfare of the people". In the late 20th century, a conservative backlash against the kind of liberalism championed by FDR and JFK developed in the Republican Party. This brand of conservatism primarily reacted against the civil unrest and the cultural changes that transpired during the 1960s. It launched into power presidents such as Ronald Reagan, George H. W. Bush, and George W. Bush.
Economic woes in the early 21st century, however, led to a resurgence of social liberalism with the election of Barack Obama in the 2008 presidential election. In Latin America, liberal agitation dates back to the 19th century, when liberal groups frequently fought against and violently overthrew conservative regimes in several countries across the region. Liberal revolutions in countries such as Mexico and Ecuador ushered in the modern world for much of Latin America. Latin American liberals generally emphasized free trade, private property, and anti-clericalism. Today, market liberals in Latin America are organized in the Red Liberal de América Latina, a network that brings together dozens of liberal parties and organizations. RELIAL features parties as geographically diverse as the Mexican Nueva Alianza and the Cuban Liberal Union, which aims to secure power in Cuba. Some major liberal parties in the region continue, however, to align themselves with social liberal ideas and policies—a notable case being the Colombian Liberal Party, which is a member of the Socialist International. Another famous example is the Paraguayan Authentic Radical Liberal Party, one of the most powerful parties in the country, which has also been classified as center-left. In Asia, liberalism is a much younger political current than in Europe or the Americas. Continentally, liberals are organized through the Council of Asian Liberals and Democrats, which includes powerful parties such as the Liberal Party in the Philippines, the Democratic Progressive Party in Taiwan, and the Democrat Party in Thailand. Two notable examples of liberal influence can be found in India and Australia, although several Asian nations have rejected important liberal principles. In Australia, liberalism is primarily championed by the center-right Liberal Party. The Liberals in Australia support free markets as well as social conservatism. In India, the most populous democracy in the world, the Indian National Congress has long dominated political affairs. The INC was founded in the late 19th century by liberal nationalists demanding the creation of a more liberal and autonomous India. Liberalism continued to be the main ideological current of the group through the early years of the 20th century, but socialism gradually overshadowed the thinking of the party in the next few decades. A famous struggle led by the INC eventually earned India's independence from Britain. In recent times, the party has adopted more of a liberal streak, championing open markets while simultaneously seeking social justice. In its 2009 Manifesto, the INC praised a "secular and liberal" Indian nationalism against the nativist, communal, and conservative ideological tendencies it claims are espoused by the right. In general, the major theme of Asian liberalism in the past few decades has been the rise of democratization as a method to facilitate the rapid economic modernization of the continent. Several Asian nations, however, notably China, are challenging Western liberalism with a combination of authoritarian government and capitalism, while in others, notably Myanmar, liberal democracy has been replaced by military dictatorship. Liberalism in Africa is comparatively weak. In recent times, however, liberal parties and institutions have made a major push for political power.
On a continental level, liberals are organized in the Africa Liberal Network, which contains influential parties such as the Popular Movement in Morocco, the Democratic Party in Senegal, and the Rally of the Republicans in Côte d'Ivoire. Among African nations, South Africa stands out for having a notable liberal tradition that other countries on the continent lack. In the middle of the 20th century, the Liberal Party and the Progressive Party were formed to oppose the apartheid policies of the government. The Liberals formed a multiracial party that originally drew considerable support from urban Africans and college-educated whites. It also gained supporters from the "westernized sectors of the peasantry", and its public meetings were heavily attended by black Africans. The party had 7,000 members at its height, although its appeal to the white population as a whole was too small to make any meaningful political changes. The Liberals were disbanded in 1968 after the government passed a law that prohibited parties from having multiracial membership. Today, liberalism in South Africa is represented by the Democratic Alliance, the official opposition party to the ruling African National Congress. The Democratic Alliance is the second largest party in the National Assembly and currently leads the provincial government of Western Cape. The fundamental elements of contemporary society have liberal roots. The early waves of liberalism expanded constitutional and parliamentary government, popularized economic individualism, and established a clear distinction between religious and political authority. One of the greatest liberal triumphs involved replacing the capricious nature of royalist and absolutist rule with a decision-making process encoded in written law. Liberals sought and established a constitutional order that prized important individual freedoms, such as the freedom of speech and of association, an independent judiciary and public trial by jury, and the abolition of aristocratic privileges. These sweeping changes in political authority marked the modern transition from absolutism to constitutional rule. The expansion and promotion of free markets was another major liberal achievement. Before they could establish markets, however, liberals had to destroy the old economic structures of the world. In that vein, liberals ended mercantilist policies, royal monopolies, and various other restraints on economic activities. They also sought to abolish internal barriers to trade—eliminating guilds, local tariffs, and prohibitions on the sale of land along the way. Beyond free markets and constitutional government, early liberals also laid the groundwork for the separation of church and state. As heirs of the Enlightenment, liberals believed that any given social and political order emanated from human interactions, not from divine will. Many liberals were openly hostile to religious belief itself, but most concentrated their opposition to the union of religious and political authority—arguing that faith could prosper on its own, without official sponsorship or administration from the state. Later waves of liberal thought and struggle were strongly influenced by the need to expand civil rights. In the 1960s and 1970s, the cause of Second Wave feminism in the United States was advanced in large part by liberal feminist organizations such as National Organization for Women. 
In addition to supporting gender equality, liberals also have advocated for racial equality in their drive to promote civil rights, and a global civil rights movement in the 20th century achieved several objectives towards both goals. Among the various regional and national movements, the civil rights movement in the United States during the 1960s strongly highlighted the liberal crusade for equal rights. Describing the political efforts of the period, some historians have asserted that "the voting rights campaign marked...the convergence of two political forces at their zenith: the black campaign for equality and the movement for liberal reform," further remarking about how "the struggle to assure blacks the ballot coincided with the liberal call for expanded federal action to protect the rights of all citizens". The Great Society project launched by President Lyndon B. Johnson oversaw the creation of Medicare and Medicaid, the establishment of Head Start and the Job Corps as part of the War on Poverty, and the passage of the landmark Civil Rights Act of 1964—an altogether rapid series of events that some historians have dubbed the Liberal Hour. Another major accomplishment of liberal agitation includes the rise of liberal internationalism, which has been credited with the establishment of global organizations such as the League of Nations and, after the Second World War, the United Nations. The idea of exporting liberalism worldwide and constructing a harmonious and liberal internationalist order has dominated the thinking of liberals since the 18th century. "Wherever liberalism has flourished domestically, it has been accompanied by visions of liberal internationalism," one historian wrote. But resistance to liberal internationalism was deep and bitter, with critics arguing that growing global interdependency would result in the loss of national sovereignty and that democracies represented a corrupt order incapable of either domestic or global governance. Other scholars have praised the influence of liberal internationalism, claiming that the rise of globalization "constitutes a triumph of the liberal vision that first appeared in the eighteenth century" while also writing that liberalism is "the only comprehensive and hopeful vision of world affairs". The gains of liberalism have been significant. In 1975, roughly 40 countries around the world were characterized as liberal democracies, but that number had increased to more than 80 as of 2008. Most of the world's richest and most powerful nations are liberal democracies with extensive social welfare programs.

Politics (from Greek πολιτικός, politikós: "citizen", "civilian") is a process by which groups of people make collective decisions. The term is generally applied to behavior within civil governments, but politics has been observed in other group interactions, including corporate, academic, and religious institutions. It consists of "social relations involving authority or power" and refers to the regulation of a political unit, and to the methods and tactics used to formulate and apply policy. The word "politics" comes from the Greek word "Πολιτικά" (politika), modeled on Aristotle's "affairs of state", the name of his book on governing and governments, which was rendered in English in the mid-15th century as the Latinized "Polettiques". Thus it became "politics" in Middle English c. 1520s (see the Concise Oxford Dictionary).
The singular "politic" was first attested in English in 1430 and comes from Middle French "politique", in turn from Latin "politicus", which is the Latinization of the Greek "πολιτικός" (politikos), meaning, amongst other things, "of, for, or relating to citizens", "civil", "civic", "belonging to the state", in turn from "πολίτης" (polites), "citizen" and that from "πόλις" (polis), "city". The origin and development of government institutions is the most visible subject for the study of Politics and its history. The Totem group was the real social unit of the aboriginal Australian. The Totem is not an Australian word but it is generally accepted to designate the name of an institution which is found everywhere among primitive people. The Totem group is primarily a group of people distinguished by the sign of a natural object, such as an animal or tree, who may not intermarry with one another — this is the first rule of primitive social organization; its origin is lost in antiquity ("Alcheringa") but its object is certainly to prevent the intermarriage of close relatives. Marriage takes place between men and women of different Totems; the husband belongs to all the women of his wife's totem and the wife belongs to all the men of the husband's totem at the same time that a communal marriage is established between the men and women of the two different Totems - the men and women being of the same generation. This presents a most valuable objective lesson in social history. There are no unmarried couples; marriage for them is part of the natural order into which they are born. The ceremonies were kept secret and were directed by a "Birraark" or sorcerer, usually an old man. The candidates were instructed in the history of their Totem and in the power of the Birraark. They were initiated into the mystery of the Totem, usually accompanied by an ordeal such as circumcision, and then they were tattooed with a seal of identity that marked them for a given Totem and a given generation in that Totem. In this way was constructed the simple system of relationship of the aboriginal Australian before British colonization. The mother took a predominant role, for descent was almost always reckoned through females. Parent, child, brother and sister were the only recognized relationships. Rudimentary as this system may appear to be, it is widespread throughout the Malay Archipelago and prevails widely among primitive peoples everywhere. The Totem served the purpose of forbidding intermarriage between close relatives and would deal destruction if this rule was not strictly enforced. These are the rudiments of two of the most important factors in human progress: Religion and Law. The rudimentary notion of Law is very specific about what is prohibited or Taboo. Primitive people do not recognize any duties towards strangers unless there is an abundant food supply in a given area. It is a sure sign of progress if the same area is able to maintain an ever larger number of people. According to legend and the Codex Chimalpopoca, Quetzalcoatl, while intoxicated with pulque, committed incest with his sister Quetzalpetlatl. Upon realizing the act, he declared: "... I've sinned. I'm not fit to rule." He burned his palace, buried his treasures and left forever the beloved city of Tollan, cradle of Toltec civilization.
All patriarchal societies are known by certain characteristic features. These features of the development of the patriarchal state of society are as common among the Jews as among the Arabs, among the Aryans as among the Dravidians and even among the Germanic and Celtic peoples. The patriarchal state of society consists of two stages, tribe and clan. The tribe is a large group of hundreds of members who descend from one common male ancestor, sometimes from a fictitious character satisfying the convention that descent from the male is the only basis of society. The clan, on the other hand, is a smaller group reaching back into the past for only four generations or so to a common well-known male ancestor. The clan always breaks down into smaller units when its limit is reached. According to the Scottish historian W. F. Skene in volume 3 of Celtic Scotland, the tribe or larger unit is the oldest. When the tribe breaks down, clans are formed. When the clan system breaks down, it leaves the households or families as independent units. Finally, with the withering away of patriarchal society, the family is dissolved and the individual comes into existence. The origin of the State is to be found in the development of the art of warfare. Historically speaking, there is not the slightest difficulty in proving that all political communities of the modern type owe their existence to successful warfare. As a result the new states are forced to organize on military principles. The life of the new community is military allegiance. The military by nature is competitive. Of the institutions by which the state is ruled, that of kingship stood foremost until the French Revolution put an end to the "divine right of kings". Nevertheless, kingship is perhaps the most successful institution of politics. However, the first kings were not institutions but individuals. The earliest kings were successful militarily. They were men not only of great military genius but also great administrators. Kingship becomes an institution through heredity. The king rules his kingdom with the aid of his Council; without it he could not hold his territories. The Council is the king's master mind. The Council is the germ of constitutional government. Long before the council became a bulwark of democracy, it rendered invaluable aid to the institution of kingship. The greatest of the king's subordinates, the earls in England and Scotland and the dukes and counts on the Continent, always sat by right on the Council. A conqueror wages war upon the vanquished for vengeance or for plunder but an established kingdom exacts tribute. One of the functions of the Council is to keep the coffers of the king full. Another is the satisfaction of military service and the establishment of lordships by the king to satisfy the task of collecting taxes and soldiers. No political institution is of greater importance than the institution of property. Property is the right vested in the individual or a group of people to enjoy the benefits of an object, be it material or intellectual. A right is a power enforced by public trust. Sometimes it happens that the exercise of a right is opposed to public trust. Nevertheless, a right is really the creation of public trust, past, present or future. The growth of knowledge is the key to the history of property as an institution. The more man becomes knowledgeable of an object, be it physical or intellectual, the more it is appropriated.
The appearance of the State brought about the final stage in the evolution of property from wildlife to husbandry. In the presence of the State, man can hold landed property. The State began granting lordships and ended up conferring property and with it came inheritance. With landed property came rent and in the exchange of goods, profit, so that in modern times, the "lord of the land" of long ago becomes the landlord. If it is wrongly assumed that the value of land is always the same, then there is of course no evolution of property whatever. However, the price of land goes up with every increase in population benefitting the landlord. The landlordism of large land owners has been the most rewarded of all political services. In industry, the position of the landlord is less important but in towns which have grown out of an industry, the fortunate landlord has reaped an enormous profit. Towards the latter part of the Middle Ages in Europe, both the State (which would use the instrument of confiscation for the first time to satisfy a debt) and the Church (which succeeded in acquiring immense quantities of land) were allied against the village community to displace the small landlord, and they were successful to the extent that today the village has become the ideal of the individualist, a place in which every man "does what he wills with his own." The State has been the most important factor in the evolution of the institution of property be it public or private. As a military institution, the State is concerned with the allegiance of its subjects as disloyalty is a risk to its national security. Thus arises the law of treason. Criminal acts in general, breaking the peace and treason make up the whole of criminal law enforced by the State as distinguished from the law enforced by private individuals. State justice has taken the place of clan, feudal, merchant and ecclesiastical justice due to its strength, skill and simplicity. One very striking piece of evidence of the superiority of the royal courts over the feudal and popular courts in the matter of official skill is the fact that, until comparatively late in history, the royal courts alone kept written records of their proceedings. The trial by jury was adopted by the Royal Courts, securing its popularity and making it a bulwark of liberty. By the time of the Protestant Reformation, with the separation of Church and State, in the most progressive countries, the State succeeded in dealing with the business of administering justice. The making of laws was unknown to primitive societies. That most persistent of all patriarchal societies, the Jewish, retains to a certain extent its tribal law in the Gentile cities of the West. This tribal law is the rudimentary idea of law as it presented itself to people in the patriarchal stage of society; it was custom or observance sanctioned by the approval and practice of ancestors. The intolerable state of affairs of the 10th century, in which every little town had its own laws and nations like France, Germany and Spain had no national law until the end of the 18th century, came to an end thanks to three great agencies that helped to create the modern system of law and legislation. Finally, there is the enactment of laws or legislation. When progress and development is rapid, the faster method of political representation is adopted. This method does not originate in primitive society but in the State's need for money and its use of an assembly to raise it.
From the town assembly, the national assembly and the progress of commerce sprang Parliaments all over Europe around the end of the 12th century, though they were not entirely representative or homogeneous, for they were composed largely of the nobility and the clergy. The clergy had amassed a fortune in land, about one-fifth of all Christendom, but at the time, in the 12th and 13th centuries, the Church was following a policy of isolation; they adopted the rule of celibacy and cut themselves off from domestic life; they refused to plead in a secular court; they refused to pay taxes to the State on the grounds that they had already paid it to the Pope. Since the main object of the king in holding a national assembly was to collect money, the Church could not be left out and so they came to Parliament. The Church did not like it but in most cases they had to come. The medieval Parliament was complete when it represented all the estates in the realm: nobles, clergy, peasants and craftsmen, but it was not a popular institution, mainly because it meant taxation. Only by the strongest pressure of the Crown were Parliaments maintained during the first century of their existence and the best proof of this assertion lies in the fact that in those countries where the Crown was weak, Parliament ceased to exist. The notion that parliaments were the result of a democratic movement cannot be supported by historical facts. Originally, the representative side of Parliament was solely concerned with money; representation in Parliament was a liability rather than a privilege. It is not uncommon that an institution created for one purpose begins to serve another. People who were asked to contribute with large sums of money began to petition. Soon, sessions of Parliament turned into bargaining tables, the king granting petitions in exchange for money. However, there were two kinds of petitions, one private and the other public, and it was from the latter that laws were adopted or legislation originated. The king as head of State could give orders to preserve territorial integrity, but it was not until these royal enactments were combined with public petition that successful legislation ever took place. Even to the present day, this has always been the basis of all successful legislation: public custom is adopted and enforced by the State. In the early days of political representation, the majority did not necessarily carry the day and there was very little need for contested elections, but by the beginning of the 15th century, a seat in Parliament was something to be cherished. Historically speaking, the dogma of the equality of man is the result of the adoption of the purely practical machinery of the majority, but the adoption of the majority principle is also responsible for another institution of modern times: the party system. The party system is an elaborate piece of machinery that pits at least two political candidates against each other for the vote of an electorate; its advantages are that it provides equal representation, interests a large number of people in politics, provides effective criticism of the government in power and affords an outlet for the ambition of a large number of wealthy and educated people, guaranteeing a consistent policy in government. These three institutions, political representation, majority rule and the party system, are the basic components of modern political machinery; they are applicable to both central and local governments and, by their adaptability, are becoming ends in themselves rather than machinery to achieve some purpose.
Administration is one of the most difficult aspects of government. In the enactment and enforcement of laws, the victory of the State is complete, but not so in regard to administration, the reason being that it is easy to see the advantage of the enactment and enforcement of laws, but not of the administration of domestic, religious and business affairs, which should be kept to a minimum by government.

Originally, the State was a military organization. For many years it was just a territory ruled by a king, who was surrounded by a small elite group of warriors and court officials, and it was basically rule by force over a larger mass of people. Slowly, however, the people gained political representation, for none can really be said to be a member of the State without the right to a voice in the direction of policy. One of the basic functions of the State in regard to administration is maintaining peace and internal order; it has no other excuse for interfering in the lives of its citizens. To maintain law and order the State develops means of communication. Historically, the "king's highway" was laid down and maintained for the convenience of the royal armies, not as an incentive to commerce. In almost all countries the State jealously maintains control of the means of communication, and special freedoms such as those delineated in the First Amendment to the United States Constitution are rather limited. The State's original function of maintaining law and order within its borders gave rise to police administration, which is a branch of the dispensation of justice; but on its preventive side, police jurisdiction has a special character of its own, which distinguishes it from ordinary judicial work. In the curfew, the State showed early in its history the importance it attached to preventing disorder.

In early days, next to maintaining law and order, the State was concerned with the raising of revenue. This led eventually to modern State socialism. It was then useful to the State to establish a standard of weights and measures so that value could be generally accepted, and finally the State acquired a monopoly of coinage. The regulation of labor by the State as one of its functions dates from the 14th century, when the Black Death killed around half of the European population. The invariable policy of the State has always been to break down all intermediate authorities and to deal directly with the individual. This remained the policy until Adam Smith's The Wealth of Nations was published, prompting a strong public reaction against State interference. By its own action, the State raised the issue of the poor, or State relief of the indigent. The State, of course, did not create poverty, but by destroying the chief agencies which dealt with it, such as the village, the church and the guilds, it practically assumed full responsibility for the poor without exercising any power over them. The Great Poor Law Report of 1834 showed that communism ran rampant in the rural areas of England. In newly developed countries such as the colonies of the British Empire, the State refused to take responsibility for the poor and the relief of poverty, in spite of the fact that the poor classes lean heavily towards State socialism. Recognizing the great power of the State, it is only natural that in times of great crisis, such as an overwhelming calamity, the people should invoke general State aid. Political representation has helped to shape State administration.
When the voice of the individual can be heard, the danger of arbitrary interference by the State is greatly reduced, and to that extent the increase of State activity is popular. There are no hard and fast rules to limit State administration, but it is a fallacy to believe that the State is the nation and that what the State does is necessarily for the good of the nation. In the first place, even in modern times, the State and the nation are never identical. Even where "universal suffrage" prevails, the fact remains that an extension of State administration means increased interference by some with others, limiting freedom of action. Even if it is admitted that State and nation are one and the same, it is sometimes difficult to admit that State administration is necessarily good. Finally, the modern indiscriminate advocacy of State administration conceals the fallacy that State officials must necessarily prove more effective in their action than private enterprise. Herein lies the basic difference between public and business administration: the first deals with the public weal, while the second deals basically in profit; but both require a great deal of education and ethical conduct to avoid the mishaps inherent in the relationship not only of business and labor but also of the State and the administration.

According to Aristotle, States are classified into monarchies, aristocracies, timocracies, democracies, oligarchies, and tyrannies. Owing to an increase in knowledge of the history of politics, this classification has been abandoned. Generally speaking, no single form of government can be considered the best; the best form is the one that is most appropriate under the circumstances. All States are varieties of a single type, the sovereign State. All the Great Powers of the modern world rule on the principle of sovereignty. Sovereign power may be vested in an individual, as in an autocratic government, or it may be vested in a group, as in a constitutional government. Constitutions are written documents that specify and limit the powers of the different branches of government. Although a constitution is a written document, there is also an unwritten constitution, which is continually being written by the legislative branch of government; this is just one of those cases in which the nature of the circumstances determines the form of government that is most appropriate. Nevertheless, the written constitution is essential. England set the fashion for written constitutions during the Civil War but abandoned them after the Restoration; they were taken up later by the American colonies after their independence, then by France after the Revolution, and then by the rest of Europe, including the European colonies.

There are two basic forms of government: a strong central government, as in France, and local government, such as the ancient divisions of England, which is comparatively weaker but less bureaucratic. These two forms helped to shape federal government, first in Switzerland, then in the United States (1787), in Canada (1867), in Germany (1871) and, in the 20th century, in Australia. The federal States introduced the new principle of agreement, or contract. Compared with a federation, a confederation's singular weakness is that it lacks judicial power. In the American Civil War, the contention of the Confederate States that a State could secede from the Union was untenable because of the power enjoyed by the Federal government in the executive, legislative and judicial branches.
According to Professor A. V. Dicey in An Introduction to the Study of the Law of the Constitution, the essential features of a federal constitution are: a) a written supreme constitution, in order to prevent disputes between the jurisdictions of the Federal and State authorities; b) a distribution of power between the Federal and State governments; and c) a Supreme Court vested with the power to interpret the Constitution and enforce the law of the land while remaining independent of both the executive and legislative branches.

A political party is a political organization that typically seeks to attain and maintain political power within government, usually by participating in electoral campaigns, educational outreach or protest actions. Parties often espouse an expressed ideology or vision, bolstered by a written platform with specific goals, forming a coalition among disparate interests. Political science, the study of politics, examines the acquisition and application of power. Related areas of study include political philosophy, which seeks a rationale for politics and an ethic of public behaviour; political economy, which attempts to develop understandings of the relationships between politics and the economy and the governance of the two; and public administration, which examines the practices of governance.

In recent history, political analysts and politicians have divided politics into left wing and right wing, often also using the idea of center politics as a middle path of policy between the right and left. This classification is comparatively recent (it was not used by Aristotle or Hobbes, for instance) and dates from the French Revolution era, when those members of the National Assembly who supported the republic, the common people and a secular society sat on the left, and supporters of the monarchy, aristocratic privilege and the Church sat on the right. The meanings behind the labels have become more complicated over the years. A particularly influential event was the publication of the Communist Manifesto by Karl Marx and Friedrich Engels in 1848. The Manifesto suggested a course of action for a proletarian revolution to overthrow bourgeois society and abolish private property, in the belief that this would lead to a classless and stateless society.

The meaning of left-wing and right-wing varies considerably between different countries and at different times, but generally speaking it can be said that the right wing often values tradition and social stratification, while the left wing often values reform and egalitarianism, with the center seeking a balance between the two, as in social democracy or regulated capitalism. According to Norberto Bobbio, one of the major exponents of this distinction, the Left believes in attempting to eradicate social inequality, while the Right regards most social inequality as the result of ineradicable natural inequalities and sees attempts to enforce social equality as utopian or authoritarian. Some ideologies, notably Christian Democracy, claim to combine left and right wing politics; according to Geoffrey K. Roberts and Patricia Hogwood, "In terms of ideology, Christian Democracy has incorporated many of the views held by liberals, conservatives and socialists within a wider framework of moral and Christian principles."
Movements which claim or formerly claimed to be above the left-right divide include Fascist Third-Position economic politics in Italy, Gaullism in France, Peronism in Argentina, and National Action politics in Mexico. Authoritarianism and libertarianism refer to the amount of individual freedom each person possesses in a society relative to the state. One author describes authoritarian political systems as those where "individual rights and goals are subjugated to group goals, expectations and conformities", while libertarians generally oppose the state and hold the individual and his property as sovereign. In their purest form, libertarians are anarchists, who argue for the total abolition of the state, while the purest authoritarians are totalitarians, who support state control over all aspects of society. For instance, classical liberalism (also known as laissez-faire liberalism or, in much of the world, simply liberalism) is a doctrine stressing individual freedom and limited government. This includes the importance of human rationality, individual property rights, free markets, natural rights, the protection of civil liberties, constitutional limitation of government, and individual freedom from restraint, as exemplified in the writings of John Locke, Adam Smith, David Hume, David Ricardo, Voltaire, Montesquieu and others. According to the libertarian Institute for Humane Studies, "the libertarian, or 'classical liberal,' perspective is that individual well-being, prosperity, and social harmony are fostered by 'as much liberty as possible' and 'as little government as necessary.'"

The 20th century witnessed the outcome of two world wars, and not only the rise and fall of the Third Reich but also the rise and fall of communism. The development of the atomic bomb gave the United States a more rapid end to its conflict with Japan in World War II; later, the hydrogen bomb became the ultimate weapon of mass destruction. The United Nations has served as a forum for peace in a world threatened by nuclear war. "The invention of nuclear and space weapons has made war unacceptable as an instrument for achieving political ends." Although an all-out nuclear holocaust is regarded as out of the question, "nuclear blackmail" raises questions not only of world peace but also of national sovereignty. On a Sunday in October 1962, during the Cuban missile crisis, the world stood at the brink of nuclear war as the United States and the U.S.S.R. pressed their policies of nuclear blackmail against each other. Former President Ronald Reagan was horrified by nuclear weapons and believed in the probable existence of life on other planets. For Reagan, the fantasy of an invasion from outer space that would force the nations of the world to unite against a common enemy was proof enough that mankind could unite in a common interest such as world peace. At their first meeting, in Geneva in 1985, President Reagan brought up the subject of an invasion from outer space with Mikhail Gorbachev. General Colin Powell was convinced that Reagan's peace proposal to Gorbachev was inspired by the 1951 science-fiction film The Day the Earth Stood Still. On September 21, 1987, Reagan told the General Assembly of the United Nations: "...I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world."
"Unlimited power is apt to corrupt the minds of those who possess it."

Political corruption is the use of legislated powers by government officials for illegitimate private gain. Misuse of government power for other purposes, such as repression of political opponents and general police brutality, is not considered political corruption, nor are illegal acts by private persons or corporations not directly involved with the government. An illegal act by an officeholder constitutes political corruption only if the act is directly related to their official duties. Forms of corruption vary, but include bribery, extortion, cronyism, nepotism, patronage, graft, and embezzlement. While corruption may facilitate criminal enterprises such as drug trafficking, money laundering, and human trafficking, it is not restricted to these activities. The activities that constitute illegal corruption differ depending on the country or jurisdiction; for instance, certain political funding practices that are legal in one place may be illegal in another. In some cases, government officials have broad or poorly defined powers, which make it difficult to distinguish between legal and illegal actions. Worldwide, bribery alone is estimated to involve over 1 trillion US dollars annually. A state of unrestrained political corruption is known as a kleptocracy, literally meaning "rule by thieves".

Liberalism seeks to defend and protect individuals' personal, civil, social, and economic rights and freedoms. Often it is characterized by a laissez-faire style of government. There are two different branches of liberalism: reform liberalism and classical liberalism. Classical liberalism arose in the mid-1700s as a reaction against absolute monarchy, religious persecution, and feudal economic and social constraints. Reform liberalism began in the late 1800s as a reaction against the effects of unconstrained capitalism and socialist ideas. Some key thinkers of this ideology include J.S. Mill, J. Dewey, J.M. Keynes, and J. Rawls. Liberalism in general holds that the rational self-interest of people will improve society. It also focuses on limited government, individual rights, free trade, and equality. While both branches of liberalism idealize these values, they have different views on how those values should be achieved.
Reform liberalism promotes equality of opportunity and the ability to enjoy rights (positive liberty) as a way to encourage democracy, holding that government policy should be used to create equality. Classical liberalism discourages government intervention and emphasizes free competition (negative liberty).

The basic liberal-libertarian debate is between John Rawls (the famous advocate of liberalism) and Robert Nozick (representing libertarianism). Rawls argues that what should motivate the design of an optimal political society is placing ourselves at a point of view outside that society. In other words, in order to find out what kind of government would be best, one must ask what one would desire if one did not know what role he or she was going to play in that particular society (Rawls calls this the Veil of Ignorance). From behind this Veil of Ignorance, one has no knowledge of any of the particular aspects of the social institutions or distributions in the state. Rawls thinks that, given this objective disposition, one would come up with a government whose principles were such that all people in the society would have equal opportunities for everything (and inequalities would be allowed, but only if they worked to the benefit of the worst off). In contrast, Nozick argues against Rawls that, where distributive justice is concerned, one must not take an ahistorical, patterned view of distribution, but rather look at the situation historically. That is, a distribution is just if it satisfies the principles of justice in acquisition and justice in transfer. For Nozick, one can come to own something in two ways: either the thing was previously unowned and the person acquired it while leaving enough, and as good, for others, or the thing was transferred to him or her by an act of legitimate, freely informed consent on the part of the previous owner. He also allows for a principle of rectification where an acquisition or transfer was unjust, and here there is room for some patterned redistribution. At base, however, Nozick's point is that when trying to decide whether an ownership is just, we must decide whether the individual acquired it legitimately, not whether it brings about the greatest overall happiness (read: utilitarianism) or benefits the worst off.
Thomas Nagel offers a third view that tries to reconcile these claims by pointing out that the underlying problem in this debate, and subsequently in all of political theory, is the delicate balancing of, and frequent asymmetry between, the concerns of personal politics (the individual's concerns) and collective politics (the concerns for others). This echoes Rousseau's theory of the General Will, in which he tries to account for individuals acting in one way to support their private, personal desires, and in another way according to the concerns of the whole. John Stuart Mill also deals with this dichotomy by claiming that societies need to promote the greatest overall happiness for the greatest number of people (his principle of utilitarianism) while still maintaining that the government enables its citizens to individually pursue the betterment of their own lives.

Some liberals believe that freedom is impossible without equality, and that governments should promote equality by providing education and health care supported by taxes. Other liberals believe that taxes should be minimal and that people should provide their own education and health care; these people are usually called libertarians today. Most liberal governments today do provide at least some education and health care, though not necessarily equally for all citizens. Other concepts are important to some liberals as well.

In the old days, kings or queens told people what to do (a form of government called a monarchy), and there was very little freedom. A few hundred years ago, philosophers such as Simón Bolívar, John Stuart Mill and Jeremy Bentham began to write about freedom. Earlier writers, such as Marcus Aurelius, had written about freedom, but this time the idea caught on. The United States of America was among the first countries to have a constitution founded on such liberal ideas, guaranteeing certain rights to all citizens, including freedom of speech, freedom of the press, freedom of religion, the right to assemble (get together in groups), the right to bear arms (weapons), and the right to ask the government to take action (the right of petition) or to remove from office rulers the people did not like. Another idea that became popular around this time was free trade; a leading philosopher who promoted it was Adam Smith.

Most of the wealthy countries in the world today are liberal democracies with more or less free trade. An exception to this rule is the oil-rich countries, not all of which are liberal or democratic. Most of the poor countries in the world are dictatorships, with heavy restrictions on trade. China is a poor country which is rapidly becoming rich and is trying the experiment of combining dictatorship with free trade; whether it is possible to have the advantages of free trade without other freedoms remains to be seen.

While all liberal governments support free elections, other ideas of liberal government vary a great deal from country to country. For information about liberalism in a particular country, look for an article called "Liberalism in..." followed by the name of the country. The government of the United States was created based on a belief in democracy and personal freedom. However, the word "liberalism" has taken on a different meaning in modern times. Liberals in the United States still believe in supporting democracy and freedom, but many liberals also support other ideas.
While not all liberals agree on everything, most liberals in the United States agree on a broadly shared set of positions. Liberals in the United States are also sometimes called "Progressives". The biggest liberal political party in the United States is the Democratic Party, while the Green Party is generally considered to be further to the left, or more liberal, than the Democrats. Liberals in Australia have very different ideas about government from liberals in the United States. Most liberals in Australia believe that government should not increase taxes, and would like a government that has lower taxes and less power over the economy. The main liberal political party in Australia is the Liberal Party of Australia. The Liberal Party also believes that government should support traditional values and morals, something which many conservatives believe as well.
http://www.thefullwiki.org/Liberalism
13
28
What are the Properties of Gold? What is the definition of gold? It is a soft, yellow, corrosion-resistant element, the most malleable and ductile metal, occurring in veins and alluvial deposits. A good thermal and electrical conductor, gold is generally alloyed to increase its strength. The chemical properties of a substance such as gold are the characteristics that distinguish it from any other substance. Most common substances, like gold, exist in the states of matter: solid, liquid, gas and plasma. Refer to the article on Gold for additional information and facts about this substance.

What are the Physical Properties of Gold? The physical properties of gold are the characteristics that can be observed without changing the substance into another substance. Physical properties are usually those that can be observed using our senses, such as color, luster, freezing point, boiling point, melting point, density, hardness and odor. The physical properties of gold are as follows:
- Luster: it has a shine or glow
- Malleability: it can be beaten into extremely thin sheets of gold leaf
- Ductility: capable of being shaped or bent
- Conductivity: a good electrical conductor
- Solubility (ability to be dissolved): insoluble in water and most acids; it dissolves in aqua regia and in mercury
- Hardness: a relatively soft metal, gold is usually hardened by alloying with copper, silver, or other metals
- Density: it is a dense metal
- Melting point: it melts at 1065°C

What are the Chemical Properties of Gold? They are the characteristics that determine how it will react with other substances or change from one substance to another. The better we know the nature of the substance, the better we are able to understand it. Chemical properties are only observable during a chemical reaction; reactions may be brought about by changes such as burning, rusting, heating, exploding or tarnishing. The chemical properties of gold are as follows:
- Chemical reactivity: gold is chemically inactive; it is extremely resistant to chemical action
- Compounds: ready reducibility from compounds to metal; auric chloride and chloro-auric acid are its most common compounds
- Reactivity with acids: aqua regia, a mixture of nitric and hydrochloric acids, has the ability to dissolve gold
- Isotopes: it has one stable isotope, gold-197
- Reactivity with non-metals: gold does not react with the non-metals, except for the halogens, with which it forms halides
- Metals with similar properties: silver and platinum

Gold properties provide facts and information about the physical and chemical properties of gold which are useful as homework help for chemistry students. Additional facts and information regarding the Periodic Table and the elements may be accessed via the Periodic Table Site Map.
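Since the article is aimed at chemistry homework, here is a minimal sketch of how the density property above can be used in practice to check whether a metal sample is consistent with gold. The reference density of roughly 19.3 g/cm3 is a standard handbook value assumed for this example; it does not appear in the article itself.

```python
# Illustrative sketch: identify a sample by comparing its measured density
# with the handbook density of gold (~19.3 g/cm^3, assumed value).

def is_plausibly_gold(mass_g: float, volume_cm3: float, tolerance: float = 0.05) -> bool:
    """Return True if the sample's density is within `tolerance` (fractional)
    of the reference density of gold."""
    GOLD_DENSITY_G_PER_CM3 = 19.3  # assumed handbook value
    density = mass_g / volume_cm3
    return abs(density - GOLD_DENSITY_G_PER_CM3) / GOLD_DENSITY_G_PER_CM3 <= tolerance

# A 96.5 g sample displacing 5.0 cm^3 of water has density 19.3 g/cm^3.
print(is_plausibly_gold(96.5, 5.0))  # True
# A 52.5 g sample of the same volume has density 10.5 g/cm^3 (closer to silver).
print(is_plausibly_gold(52.5, 5.0))  # False
```

Density alone is of course not conclusive (tungsten, for example, has a very similar density), which is why the chemical properties listed above, such as resistance to acids other than aqua regia, are also used in identification.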
http://www.elementalmatter.info/gold-properties.htm
13
293
Federal Reserve System

The Federal Reserve System (also known as the Federal Reserve, or informally "the Fed") is the central banking system of the United States. It was created by the Federal Reserve Act of December 23, 1913. All national banks were required to join the system, and other banks could join. Federal Reserve Notes were created as part of the legislation to provide an elastic supply of currency. Composed of a board of governors, twelve regional banks, and numerous private member banks, the Federal Reserve acts as the fiscal agent for the U.S. government but is maintained independently under rules designed to prevent political interference. Responsible for maintaining the stability of the nation's currency and money supply, it regulates reserve requirements and discount rates for member banks, and it conducts open market operations to adjust the money supply. The Federal Reserve has on occasion faced serious criticism, particularly accusations that it failed to prevent, and even contributed to, the Great Depression and other extreme instabilities in the business cycle in the twentieth century. Given the significant role of the United States in the world and, following the collapse of the gold standard, the position of the U.S. dollar as a reserve currency, the pressures on the Federal Reserve to control inflation and maintain economic stability are severe. Thus, the Federal Reserve cannot act for the benefit of its own nation alone, but is also responsible for serving the world community. Its principal components are:
- a presidentially appointed Board of Governors in Washington, D.C.;
- the Federal Open Market Committee;
- twelve regional Federal Reserve Banks located in major cities throughout the nation;
- numerous private member banks, which own varying amounts of stock in the regional Federal Reserve Banks.

The first institution with the responsibilities of a central bank in the U.S. was the First Bank of the United States, chartered in 1791 by Alexander Hamilton. As Secretary of the Treasury, Hamilton convinced Congress that the financial needs and credit of the new government required funding the national debt and creating a national bank. It was modeled after the Bank of England and differed in many ways from today's central banks. It was not solely responsible for the country's money supply; its share was only 20 percent, while private banks accounted for the rest. The tenets the bank was based on included:
- sound finance, with a balanced government budget, except during wartime emergency;
- sound banking, with reserves in gold and silver;
- being a lender of last resort;
- the use of its currency notes as instruments of national policy;
- regulating the national economy.

The establishment of the bank raised early questions of constitutionality in the new government. Hamilton argued that the Bank was an effective means to achieve the authorized powers of the government implied under the "necessary and proper" clause of the Constitution. The Bank was bitterly opposed by Thomas Jefferson and James Madison, who saw it as an engine for speculation, financial manipulation, and corruption. Secretary of State Jefferson argued that the Bank violated traditional property laws and that its relevance to constitutionally authorized powers was weak. However, their chief financial advisor, Albert Gallatin, recognized its value. Congress refused to extend the Bank's charter in 1811, and as a result Madison's government had great difficulty financing the War of 1812.
The Second Bank of the United States was chartered in 1816, five years after the expiration of the First Bank. It was founded during the administration of James Madison out of desperation to stabilize the United States dollar. Basically a copy of the First Bank, it had branches across the country and served as the repository for Federal funds until 1836. Andrew Jackson, who became president in 1828, denounced it as an engine of corruption that benefited his enemies and refused to recharter it after a famous dispute with the Bank's president, Nicholas Biddle. The Bank then became a private institution until it became defunct in 1841. From 1837 to 1862, in the "Free Banking Era" there was no formal central bank. From 1862 to 1913, a system of national banks was instituted by the National Banking Act of 1863. A series of bank panics, in 1873, 1893, and 1907, caused by market speculation and the actions of international banks, provided public support for the creation of a centralized banking system, which it was thought would provide greater stability. Following the Panic of 1907, Congress created the National Monetary Commission to draft a plan for reform of the banking system. Senate Republican leader and financial expert Nelson Aldrich was the head of the Commission. After going to Europe with a team of experts and being amazed at how much better were the European central banks, Aldrich in 1910 met with leading bankers, including Paul Warburg, Frank Vanderlip of the National City Bank, Henry Davison of J.P. Morgan Company, and Benjamin Strong, also of J.P. Morgan. In this meeting, the Aldrich Plan was drafted, which became the Federal Reserve Act of 1913. Aldrich realized correctly that a central bank had to be (contradictorily) decentralized somehow, or it would be vulnerable to local politicians and bankers as were the First and Second Banks of the United States. His solution was a regional system. President Woodrow Wilson added the provision that the new regional banks be controlled by a central board appointed by the president. William Jennings Bryan, by now Secretary of State, long-time enemy of Wall Street and still a power in the Democratic party, threatened to destroy the bill. Wilson masterfully came up with a compromise plan that pleased bankers and Bryan alike. Wilson started with the bankers' plan that had been designed for conservative Republicans by banker Paul Warburg. The agrarian wing of the party, led by William Jennings Bryan, wanted a government-owned central bank which could print paper money whenever Congress wanted; Wilson convinced them that because Federal Reserve notes were obligations of the government, the plan fit their demands. Southerners and westerners learned from Wilson that the system was decentralized into 12 districts and surely would weaken New York and strengthen the hinterlands. One key Congressman, Carter Glass, was given credit for the bill, and his home of Richmond, Virginia, was made a district headquarters. Powerful Senator James A. Reed of Missouri was given two district headquarters in St. Louis and Kansas City. Congress passed the Federal Reserve Act in late 1913. Wilson named Warburg and other prominent bankers directors of the new system, pleasing the bankers. The New York branch dominated, and thus the power of the banking system remained in Wall Street. The new system began operations in 1915 and played a major role in financing the Allied and American war efforts. 
The Federal Reserve's power developed slowly, in part because of an understanding at its creation that it was to function primarily as a reserve, a money-creator of last resort to prevent the downward spiral of withdrawal and withholding of funds which characterizes a monetary panic. At the outbreak of World War I, the Federal Reserve was better positioned than the Treasury to issue war bonds, and so it became the primary retailer for war bonds under the direction of the Treasury. After the war, Paul Warburg and Benjamin Strong, Governor of the Federal Reserve Bank of New York, convinced Congress to modify its powers, giving it the ability both to create money, as the 1913 Act intended, and to destroy money, as a central bank could. During the 1920s, the Federal Reserve experimented with a number of approaches, alternately creating and destroying money and, in the eyes of many scholars (notably Milton Friedman), helping to create the late-1920s stock market bubble. In 1928, Strong died. He left a tremendous vacuum in governance from which the bank did not recover in time to react to the 1929 collapse (as it did after 1987's Black Monday), and what most would consider today to be a restrictive policy was adopted, exacerbating the crash. After Franklin D. Roosevelt took office in 1933, the Fed became subordinated to the Executive Branch. In 1951, an accord was reached granting the Federal Reserve full independence over monetary matters.

Organization of the Federal Reserve System

The basic structure of the Federal Reserve System includes:
- the Board of Governors
- the Federal Open Market Committee (FOMC)
- the Federal Reserve Banks
- the member banks

Each privately owned Federal Reserve Bank and each member bank of the Federal Reserve System is subject to oversight by the Board of Governors. The seven members of the board are appointed by the President and confirmed by the Senate. Members are selected to terms of 14 years (unless removed by the President), with the ability to serve for no more than one full term. A governor may serve the remainder of another governor's term in addition to his or her own full term. The Federal Open Market Committee (FOMC), created under the Act, comprises the seven members of the Board of Governors and five representatives selected from the Federal Reserve Banks. The representative from the 2nd District, New York, is a permanent member, while the other banks rotate on two- and three-year intervals.

The Federal Reserve Banks and the member banks

The twelve regional Federal Reserve Banks, which were established by Congress as the operating arms of the nation's central banking system, are organized much like private corporations. The Reserve Banks issue shares of stock to "member banks." However, owning Reserve Bank stock is quite different from owning stock in a private company. The Reserve Banks are not operated for profit, and ownership of a certain amount of stock by a "member bank" is, by law, a condition of membership in the system. The stock may not be sold, traded or pledged as security for a loan; dividends are, by law, limited to 6 percent per year. The largest of the Reserve Banks, in terms of assets, is the Federal Reserve Bank of New York, which is responsible for the Second District, covering the state of New York, the New York City region, Puerto Rico, and the U.S. Virgin Islands. The dividends paid by the Federal Reserve Banks to member banks are considered partial compensation for the lack of interest paid on member banks' required reserves held at the Federal Reserve Banks.
By law, banks in the United States must maintain fractional reserves, most of which are kept on account at the Federal Reserve. The Federal Reserve does not pay interest on these funds. The Federal Reserve Districts are listed below along with their identifying letter and number; these are used on Federal Reserve Notes to identify the issuing bank for each note.
- Federal Reserve Bank of Boston A 1
- Federal Reserve Bank of New York B 2
- Federal Reserve Bank of Philadelphia C 3
- Federal Reserve Bank of Cleveland D 4
- Federal Reserve Bank of Richmond E 5
- Federal Reserve Bank of Atlanta F 6
- Federal Reserve Bank of Chicago G 7
- Federal Reserve Bank of St. Louis H 8
- Federal Reserve Bank of Minneapolis I 9
- Federal Reserve Bank of Kansas City J 10
- Federal Reserve Bank of Dallas K 11
- Federal Reserve Bank of San Francisco L 12

Legal status and position in government

The Board of Governors of the Federal Reserve System is an independent government agency. It is subject to laws like the Freedom of Information Act and the Privacy Act, which cover Federal agencies rather than private entities. Like some other independent agencies, its decisions do not have to be ratified by the President or anyone else in the executive or legislative branches of government. The Board of Governors does not receive funding from Congress, and the terms of the members of the Board span multiple presidential and congressional terms. Once a member of the Board of Governors is appointed by the President, he or she is relatively independent (although the law provides for the possibility of removal by the President "for cause" under 12 U.S.C. section 242). In Lewis v. United States, 680 F.2d 1239 (9th Cir. 1982), the United States Court of Appeals for the Ninth Circuit stated that "the Reserve Banks are not federal instrumentalities for purposes of the FTCA [Federal Tort Claims Act], but are independent, privately owned and locally controlled corporations." The opinion also stated that "the Reserve Banks have properly been held to be federal instrumentalities for some purposes."

Central bank independence from political control is a crucial concept in both economic theory and practice. The problem arises as central banks strive to maintain a credible commitment to price stability when the markets know that there is political pressure to keep interest rates low. Low interest rates tend to keep unemployment below trend, encourage economic growth, and allow for cheap credit and loans. Some models, however, suggest that such a policy is not sustainable without accelerating inflation in the long term. Thus, a central bank believed to be under political control cannot make a credible commitment to fight inflation, as the markets know that politicians will lobby to keep rates low. It is in this limited sense that the Federal Reserve System is independent. The members of the FOMC are not elected and do not answer to politicians in making their interest rate decisions. The Federal Reserve System is financially independent because it runs a surplus, due in part to its ownership of government bonds; in fact, it returns billions of dollars to the government each year. However, the Federal Reserve is still subject to oversight by Congress, which periodically reviews its activities and can alter its responsibilities by statute. In general, the Federal Reserve System must work within the framework of the overall objectives of economic and financial policy established by the government.
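As a small aside on the district list above, the letter printed on a Federal Reserve Note maps directly to the issuing Reserve Bank and its district number. The sketch below is purely illustrative and simply encodes that published mapping as a lookup table; the function name is an invention for this example.

```python
# Lookup built from the district list above: note letter -> (district number, city).
FED_DISTRICTS = {
    "A": (1, "Boston"),       "B": (2, "New York"),    "C": (3, "Philadelphia"),
    "D": (4, "Cleveland"),    "E": (5, "Richmond"),    "F": (6, "Atlanta"),
    "G": (7, "Chicago"),      "H": (8, "St. Louis"),   "I": (9, "Minneapolis"),
    "J": (10, "Kansas City"), "K": (11, "Dallas"),     "L": (12, "San Francisco"),
}

def issuing_bank(note_letter: str) -> str:
    """Return a description of the Reserve Bank that issued a note with this letter."""
    number, city = FED_DISTRICTS[note_letter.upper()]
    return f"Federal Reserve Bank of {city} (district {number})"

print(issuing_bank("L"))  # Federal Reserve Bank of San Francisco (district 12)
```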
Roles and responsibilities

The main tasks of the Federal Reserve System, according to the Board of Governors, are:
- conducting the nation's monetary policy by influencing the monetary and credit conditions in the economy in pursuit of maximum employment, stable prices, and moderate long-term interest rates;
- supervising and regulating banking institutions to ensure the safety and soundness of the nation's banking and financial system and to protect the credit rights of consumers;
- maintaining the stability of the financial system and containing systemic risk that may arise in financial markets;
- providing financial services to depository institutions, the U.S. government, and foreign official institutions, including playing a major role in operating the nation's payments system.

The Federal Reserve uses several mechanisms to implement monetary policy. These include direct control methods, such as regulating the amount of money that a member bank must keep in hand as reserves and changing the discount rate charged to banks that borrow from the Federal Reserve System. The Federal Reserve may also use indirect control methods through open market operations. In its role of setting reserve requirements for the country's banking system, the Federal Reserve regulates what is known as fractional-reserve banking. This is the common practice by banks of retaining only a fraction of their deposits to satisfy demands for withdrawals, lending the remainder at interest to obtain income that can be used to pay interest to depositors and provide profits for the banks' owners. Some people also use the term to refer to fiat money, which is money that is not backed by a tangible asset such as gold. Member banks lend out most of the money they receive as deposits. If the Federal Reserve System determines that member banks must keep in reserve a larger fraction of their deposits, then the amount that the member banks can lend drops, loans become harder to obtain, and interest rates rise.

The Federal Reserve System implements monetary policy largely by targeting the federal funds rate. This is the rate that member banks charge each other for overnight loans of federal funds. Member banks borrow from the Federal Reserve System to cover short-term needs. The Federal Reserve System directly sets the "discount rate," which is the interest rate that banks pay to borrow directly from it. This rate has an effect, though usually a rather small one, on how much money the member banks will lend. Both of these rates influence the Wall Street Journal prime rate, which is usually about three percentage points higher than the federal funds rate. The prime rate is the rate that most banks use to price loans for their best customers. Lower interest rates stimulate economic activity by lowering the cost of borrowing, making it easier for consumers and businesses to buy and build. Higher interest rates slow the economy by increasing the cost of borrowing.

Open market operations

The Federal Reserve System also controls the size of the money supply by conducting open market operations, in which the Federal Reserve engages in the lending or purchasing of specific types of securities with authorized participants, known as primary dealers. All open market operations in the United States are conducted by the Open Market Desk at the Federal Reserve Bank of New York. The Open Market Desk has two main tools for adjusting the money supply: repurchase agreements and outright transactions.
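To make the mechanics above concrete, here is a minimal, highly simplified sketch of two of the levers just described: how a reserve requirement caps the deposits a banking system can support, and how adding reserves (for example through the open market purchases discussed next) expands that cap via the textbook deposit multiplier. The 10 percent reserve ratio and the dollar figures are arbitrary illustrative assumptions, not actual Federal Reserve parameters; the prime-rate rule of thumb simply restates the article's "about three percentage points higher than the federal funds rate."

```python
# Simplified textbook sketch, not a model of actual Federal Reserve operations.
# All numeric inputs below are illustrative assumptions.

def deposit_multiplier(reserve_ratio: float) -> float:
    """Maximum deposits supported per dollar of reserves under full lending."""
    return 1.0 / reserve_ratio

def max_money_supply(reserves: float, reserve_ratio: float) -> float:
    """Upper bound on deposits the banking system can support."""
    return reserves * deposit_multiplier(reserve_ratio)

def prime_rate_estimate(fed_funds_rate: float, spread: float = 3.0) -> float:
    """Rule of thumb from the article: prime rate is roughly fed funds + 3 points."""
    return fed_funds_rate + spread

reserve_ratio = 0.10   # assumed 10% requirement
reserves = 100.0       # assumed $100 of system reserves (arbitrary units)

print(max_money_supply(reserves, reserve_ratio))        # 1000.0
# An open-market purchase adding 10 units of reserves raises the ceiling by 100:
print(max_money_supply(reserves + 10, reserve_ratio))   # 1100.0
# Raising the reserve ratio to 20% cuts the ceiling in half, illustrating why
# a higher requirement makes loans harder to obtain:
print(max_money_supply(reserves, 0.20))                 # 500.0

print(prime_rate_estimate(5.25))                        # 8.25
```

The multiplier is an upper bound that assumes banks lend to the full extent allowed and all loans return as deposits; in practice, excess reserves and cash holdings make the actual effect smaller.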
To smooth temporary or cyclical changes in the money supply, the desk engages in repurchase agreements with its primary dealers. These are essentially secured, short-term loans by the Federal Reserve. Since there is an increase in bank reserves during the term of the agreement, this temporarily increases the money supply. To temporarily contract the money supply, the Federal Reserve may borrow money from the reserve accounts of primary dealers in exchange for Treasury securities as collateral. The other main tool available to the Open Market Desk is the outright transaction, which involves the purchase (or sale) of Treasury securities on the open market. These transactions result in a permanent increase (or decrease) in the money supply. When the Federal Reserve System buys securities, it in effect puts more money into circulation and takes securities out of circulation. With more money around, interest rates tend to drop, and more money is borrowed and spent. When the Federal Reserve sells government securities, the reverse takes place.

A large and varied group of criticisms has been directed against the Federal Reserve System. Some of these criticisms relate to inflation and fractional-reserve banking generally, but the primary criticism of the Federal Reserve System is directed against its power to create money and then charge interest on that money. There have also been specific criticisms relating to the former chairmanship of Alan Greenspan, in particular that the Federal Reserve's credibility rests on a "cult of personality" around him and his successors. Critics also point to a number of more specific issues. The Federal Reserve came under serious criticism following the Great Depression. At one extreme are a few economists from the Austrian and Chicago schools of economics who want the Fed abolished. They criticize its expansionary monetary policy in the 1920s, which they argue allowed misallocations of capital resources and supported a massive stock price bubble. Milton Friedman of the Chicago School has argued that the Federal Reserve did not cause the Great Depression but made it worse by contracting the money supply at the very moment that markets needed liquidity. Friedman has argued that the Federal Reserve could, and should, be replaced by a computer system that sets rates calculated from standard economic metrics. Economists of the Austrian School have argued that the Federal Reserve's manipulation of the money supply to stop "gold flight" from England caused malinvestment, leading to the Great Depression. Another criticism of the Federal Reserve System is that it is shrouded in secrecy. Meetings are held behind closed doors, and the transcripts are released with a lag of five years. Even expert policy analysts are unsure as to the logic behind its decisions. It has also been known to be standoffish in its relations with the media in an effort to maintain its carefully crafted image, and it resents any public information that runs contrary to this notion. The style of communication used by its representatives is jargon-laden, fence-sitting, and opaque, and is often referred to as "Fed speak." Critics argue that such opacity leads to greater market volatility, as the markets must guess, often with only limited information, about how policy will change in the future.
Economists of the Austrian School, such as Ludwig von Mises, have contended that it was the Federal Reserve's artificial manipulation of the money supply that led to the boom-and-bust business cycle evidenced over the twentieth century. In general, laissez-faire advocates of free banking argue that there is no better judge of the proper interest rate and money supply than the market. Nobel economist Milton Friedman has said that he "prefer[s] to abolish the Federal Reserve System altogether." Some political parties, such as the Libertarian Party and the Constitution Party, hold the position that the Federal Reserve should be abolished on legal and economic grounds. They argue that the Federal Reserve proposal was unconstitutional from its inception, because the Federal Reserve System was to be a bank of issue, citing the Constitution, which expressly grants Congress "the power to coin money and regulate the value thereof." In popular culture, criticisms include novels and movies suggesting that the people's power over the U.S. Government has been usurped and is instead controlled by the interests of the Federal Reserve through its manipulation of monetary policy and its corporate banking allies. Others have suggested that the Federal Reserve System was planned in secret by several extremely rich and powerful people for the purpose of furthering their family wealth and political power.

Benefits and Future Points of Development

The goal in setting forth the Federal Reserve System was to diffuse power, to provide independent views from different parts of the country, and to build a central banking network that would instill confidence within communities across the United States. Paul Warburg, one of the founders of the Federal Reserve, wrote the following: "One of the striking points of strength of the Reserve System lies in its weakness. This paradox means that the strength of a system of regional banks consists in engendering in the minds of people a comfortable feeling of protection against the dangers of an autocratic central administration. In this respect the Reserve System is to be preferred to the threat which was provided by ...(my) United Reserve Bank proposal. There is no doubt that if enacted, it would have offered easier and more tempting targets for political attacks. This political superiority of the Reserve System cannot be too highly appraised, although it is, at the same time, the System's greatest weakness."

The Federal Reserve System needs to continually improve in the following areas: communication, the payment system, and monetary policy. The Federal Reserve System must be able to constantly develop communication, as globalization demands immediate worldwide connection. Monetary policies must be constantly updated to be more effective in maintaining the stability of the U.S. dollar, which as a reserve currency is crucial to the world economic community. Continued development in the payment system must take place to keep pace with advances in technology, such as electronic payments. The Federal Reserve, as the central bank of the United States, the most influential economic power in the world, carries immense responsibility not only for the United States but for the whole world. As such, it is essential that it continually improves its operations and maintains the trust of the public.

- ↑ Arthur Link. Wilson: The New Freedom (1956), 199-240.
- ↑ Who owns the Fed?
- ↑ Lewis v. United States. Retrieved May 28, 2008.
- ↑ "Fedspeak." Remarks by Governor Ben S. Bernanke at the Meetings of the American Economic Association, San Diego, California, January 3, 2004.
- ↑ Interview with Peter Jaworski. The Journal 37 (129) (March 15, 2002), Queen's University.
- ↑ National Platform of the Libertarian Party. Retrieved May 28, 2008.
- ↑ Thomas M. Hoenig, 2001. Leadership Progress and the Federal Reserve System. Retrieved May 28, 2008.
- Board of Governors of the Federal Reserve System. The Federal Reserve System: Purposes & Functions. Toronto: Books for Business, 2002. ISBN 0894991965
- Epstein, Lita, and Preston Martin. The Complete Idiot's Guide to the Federal Reserve. Alpha Books, 2003. ISBN 0028643232
- Greider, William. Secrets of the Temple. New York: Simon & Schuster, 1987. ISBN 0671675567. Nontechnical book explaining the structures, functions, and history of the Federal Reserve, focusing specifically on the tenure of Paul Volcker.
- Hafer, R. W. The Federal Reserve System: An Encyclopedia. Westport: Greenwood Press, 2005. 280 entries. ISBN 0313328390
- Meyer, Lawrence H. A Term at the Fed: An Insider's View. New York: HarperBusiness, 2004. ISBN 0060542705. Focuses on the period from 1996 to 2002, emphasizing Alan Greenspan's chairmanship during the Asian financial crisis, the stock market boom and the financial aftermath of the September 11, 2001 attacks.
- Woodward, Bob. Maestro: Greenspan's Fed and the American Boom. New York: Simon & Schuster/Touchstone, 2000, ed. 2001. ISBN 0743205626. Study of Greenspan in the 1990s.
- Broz, J. Lawrence. The International Origins of the Federal Reserve System. Ithaca, NY: Cornell University Press, 1997. ISBN 0801433320
- Carosso, Vincent P. "The Wall Street Trust from Pujo through Medina," Business History Review (1973) 47: 421-437.
- Chandler, Lester V. American Monetary Policy, 1928-41. New York: Harper and Row, 1971.
- Epstein, Gerald, and Thomas Ferguson. "Monetary Policy, Loan Liquidation and Industrial Conflict: Federal Reserve System Open Market Operations in 1932." Journal of Economic History 44 (December 1984): 957-984. In JSTOR.
- Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867-1960. Princeton Univ. Press, (1963) 1971. ISBN 0691003548
- Griffin, G. Edward. The Creature from Jekyll Island: A Second Look at the Federal Reserve (1994), reprint 2008. ASIN B00181HBR0, ISBN 0912986212. Says the Fed was created by a conspiracy of bankers; his other books charge that Franklin Roosevelt intentionally brought about Pearl Harbor.
- Kubik, Paul J. "Federal Reserve Policy during the Great Depression: The Impact of Interwar Attitudes regarding Consumption and Consumer Credit." Journal of Economic Issues 30 (3) (1996): 829+.
- Link, Arthur. Wilson: The New Freedom. Princeton Univ. Press, 1956. ISBN 069104578X, 199-240. Explains how Woodrow Wilson managed to pass the legislation.
- Livingston, James. Origins of the Federal Reserve System: Money, Class, and Corporate Capitalism, 1890-1913. Darby, PA: Diane Pub. Co., 1986. ISBN 0788191918. Marxist approach to 1913 policy.
- Mayhew, Anne. "Ideology and the Great Depression: Monetary History Rewritten." Journal of Economic Issues 17 (June 1983): 353-360.
- Meltzer, Allan H. A History of the Federal Reserve, Volume 1: 1913-1951. University of Chicago Press, 2004. ISBN 0226520005. The standard scholarly history.
- Roberts, Priscilla. "'Quis Custodiet Ipsos Custodes?' The Federal Reserve System's Founding Fathers and Allied Finances in the First World War," Business History Review (1998) 72: 585-603.
- Rothbard, Murray N. A History of Money and Banking in the United States: The Colonial Era to World War II. Ludwig von Mises Institute, 2002. ISBN 0945466331. Libertarian who wants no Fed.
- Rothbard, Murray N. The Case Against the Fed. Ludwig von Mises Institute, 1994. ISBN 094546617X. Libertarian who wants no Fed.
- Shull, Bernard. The Fourth Branch: The Federal Reserve's Unlikely Rise to Power and Influence. New York: Praeger Publishing, 2005. ISBN 1567206247
- Steindl, Frank G. Monetary Interpretations of the Great Depression. Ann Arbor: University of Michigan Press, 1995. ISBN 0472106007. Reviews and evaluates many economists' approaches to explanations.
- Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W.W. Norton, 1976. ISBN 0393092097
- West, Robert Craig. Banking Reform and the Federal Reserve, 1863-1923. (1977)
- Wicker, Elmus R. "A Reconsideration of Federal Reserve Policy during the 1920-1921 Depression," Journal of Economic History (1966) 26: 223-238. In JSTOR.
- Wicker, Elmus. Federal Reserve Monetary Policy, 1917-33. (1966)
- Wells, Donald R. The Federal Reserve System: A History. McFarland and Company, 2004. ISBN 078641880X
- Wicker, Elmus. The Great Debate on Banking Reform: Nelson Aldrich and the Origins of the Fed. Ohio State University Press, 2005.
- Wood, John H. A History of Central Banking in Great Britain and the United States. (2005)
- Wueschner, Silvano A. Charting Twentieth-Century Monetary Policy: Herbert Hoover and Benjamin Strong, 1917-1927. Greenwood Press, 1999.
- Money, Banking and the Federal Reserve
- Official Federal Reserve web site
- How "The Fed" Works @ HowStuffWorks.com
- The Money Masters
- A Brief History of Central Banking in the United States by Edward Flaherty.
http://www.newworldencyclopedia.org/p/index.php?title=Federal_Reserve_System&oldid=721241
13
24
Issues in the Education of Students who are Deaf or Hard of Hearing (2005) -- John L. Luckner, Ed.D.

Special education in the United States began with the opening of the American Asylum for the Education of the Deaf and Dumb (now the American School for the Deaf) on April 15, 1817 in Hartford, Connecticut. The first teacher of students who were deaf was a deaf man named Laurent Clerc. The school provided an education for students from Connecticut as well as some of the other New England states and also assisted in the establishment of other schools for students who were deaf throughout the country. The school provided training in English grammar, reading, writing, mathematics, religion, and rules of conduct (Moores, 2001). It was also one of the first schools to provide vocational education. All instruction was conducted in sign language. Prior to the opening of the school and ever since, there have been a variety of points of view about how individuals who are deaf should learn to communicate. Debate about whether to use natural sign language, speech, signs in English word order, created sign systems, or how to integrate speech, speech reading and auditory training with sign has been consistent and ongoing. Professionals in the field of deafness, family members and individuals with a hearing loss consistently have been reconciling the differences in each of these perspectives and determining how to proceed. A second long-standing and often debated matter is how people who are deaf should be perceived and treated. While hearing loss has always been part of the human condition, people who can hear have demonstrated divergent reactions to deafness. Many pursued a cure for deafness. Others believed that deaf individuals were inferior to their hearing peers and were in need of salvation. Some have felt pity and taken care of them. Some have viewed deafness from a perspective of social/cultural difference and treated individuals who were deaf as equals. The unique talents and contributions that many deaf people have made to society have inspired others. While the questions of how to promote the communication skills of individuals who are deaf and how deafness is viewed may be considered the most controversial subjects in the field, there are a variety of other issues that have consumed the attention of education professionals and families. From this brief introduction it is clear that the field of education of students who are deaf or hard of hearing has a long history filled with diverse viewpoints and many unanswered questions. The purpose of this paper is to identify and briefly describe topics central to this field. This information is provided as background knowledge and/or stimuli for discussion. As noted in the foreword, persons who desire to comment or identify additional issues are encouraged to go to http://vision.unco.edu/nclid/async-dh/board.html to add their ideas.

Heterogeneity of the Population – The population of individuals who are deaf or hard of hearing is very diverse. Hearing losses range from mild through profound. Many individuals are born with a hearing loss, yet a large percentage acquire their hearing loss between the ages of 0 and 3. Approximately 45 percent use speech and residual hearing as their primary mode of communication, 49 percent use speech and sign, and about 6 percent use sign only (Gallaudet Research Institute, 2001). Roughly 33 percent have a disability in addition to a hearing loss.
The majority of students attend regular schools, while about 20 percent attend special schools. The racial/ethnic backgrounds of individuals who are deaf or hard of hearing also vary in a manner similar to the racial/ethnic backgrounds of individuals who are hearing.

Emotional Perspectives – As far back as recorded history allows us to examine, the question of how best to educate individuals who are deaf or hard of hearing has been a controversial and emotionally laden topic. This is still true today. As noted in the introduction, disagreement about what mode of communication to use, where to educate a child who is deaf or hard of hearing, and what are the best methods to use to teach children with a hearing loss has been an ongoing source of controversy. While it can be wonderful to interact with passionate and dedicated professionals, families, and individuals who are deaf or hard of hearing, it can also be exhausting to debate divergent views. Understanding the differences of perspective and the heterogeneity of the population can assist us in working through the strong emotions that occasionally accompany an individual's message.

Early Identification and Newborn Hearing Screening – The technology to assist in the identification of a hearing loss in infants is improving rapidly. Universal newborn hearing screening allows families and professionals to identify infants with a hearing loss before these children leave the hospital. Currently, 36 states plus the District of Columbia have mandated a routine hearing screen for all infants before they are discharged from the hospital (Yoshinaga-Itano, 2000). A variety of studies have demonstrated the benefits of early identification and intervention on early language, academic, and social-emotional development (e.g., Calderon & Naidu, 2000; Moeller, 2000; Yoshinaga-Itano, Sedey, Coulter & Mehl, 1998).

Early Intervention – Children who are deaf or hard of hearing are at a high risk for delays in communication and language development, poor academic achievement, delays in critical thinking skills and problems with social and emotional development because of the central role that language plays in these essential areas. As a result, most professionals in the field feel strongly that early intervention enhances the development of children with a hearing loss (e.g., Arehart & Yoshinaga-Itano, 1999), based on the work of researchers who have demonstrated that early-identified children who are deaf or hard of hearing have significantly better language, speech, and social-emotional outcomes than children and families who do not receive the services (e.g., Calderon & Naidu, 2000; Moeller, 2000; Yoshinaga-Itano, Sedey, Coulter & Mehl, 1998).

Family Involvement – The inability of children who are deaf or hard of hearing to understand their parents' spoken communication hinders the parent-child relationship (Marschark, 1997). Researchers suggest that positive parent-child interaction is a very good predictor of linguistic development (Calderon & Naidu, 2000; Moeller, 2000; Pressman, Pipp-Siegel, Yoshinaga-Itano & Deas, 1999). Consequently, it is beneficial to help parents to develop skills that will form the foundation of good communication with their children.

Communication – Communication refers to the process of sharing ideas and information. It is a process that is essential, and many say innate, for all human beings (Owens, 2001).
One of the most difficult decisions that a family with a child who is deaf or hard of hearing makes is choosing a communication method. Yet researchers suggest that early communication development is positively related to language learning, and in turn to a variety of other important developmental areas (Calderon & Naidu, 2000). The question of which communication method to use began as an oral versus manual controversy. Yet over time this matter has evolved to include questions such as the use of invented sign systems, whether or not to simultaneously speak and sign, the use or lack of use of technology, and whether or not to allow students to view the lips of people speaking to them.

Critical Mass – Having a sufficient number of students who are deaf or hard of hearing, an adequate number of teachers and support personnel who have training and experience in working with students who are deaf or hard of hearing, and appropriate curricular resources focused on the needs of students who are deaf or hard of hearing are considered important for establishing effective educational programs for students with a hearing loss (Luetke-Stahlman & Luckner, 1991). With the reauthorization of the Individuals with Disabilities Education Act (IDEA), critical mass has been operationally redefined to mean that students who are deaf or hard of hearing should be educated in their local public schools with their hearing peers. This shift in perspective has not met with universal acceptance (Siegel, 2000).

Friendships – People of every age view friendships as a vital part of their lives. The concept of friendship means having someone to spend time with, to learn from, to teach, to nurture and to be nurtured by. While families provide much that friends cannot, companions of the same or similar age broaden the experiences of children and youth, helping them stretch and grow beyond the family. Communication problems and differences in modes of communication often adversely impact the ability of students who are deaf or hard of hearing to develop friendships (Luckner, Schauermann & Robb, 1994).

Literacy – There is no single definition of literacy. However, when most people talk about literacy they refer to the ability of an individual to read and write. Researchers in these areas have consistently demonstrated that many individuals who are deaf or hard of hearing are able to acquire the skills to access and use print. Conversely, many students who are deaf or hard of hearing have significant problems in this area (Traxler, 2000). These challenges impact students' ability to master content subject material, learn independently, and use technology.

Placement – The opening of the American School for the Deaf was followed by the inception of other residential schools for deaf students across the United States. Many of those schools were established in areas away from the major population centers. For more than a century residential schools and a few day schools were the only educational options available for students who were deaf. In 1975, Public Law 94-142, the Education of All Handicapped Children Act, was passed, and a variety of educational options for children with a hearing loss became available. The pros and cons of each option continue to be debated. Currently, the U.S. Department of Education defines six educational placements for students with disabilities. This range of options is needed because individual children require different levels of support based on their unique needs.
Currently, the majority of students who are deaf or hard of hearing receive all or part of their education in general education classrooms (Holden-Pitt & Diaz, 1998). Whether called mainstreaming or inclusion, integration of students who are deaf or hard of hearing has been a source of controversy (Nowell & Innes, 1997). While many people see the advantages of receiving an education in the general education classroom, many professionals, families and deaf adults are concerned that this type of placement cannot meet the educational, social, emotional, or cultural needs of all students who are deaf or hard of hearing (Snider, 1995).

Educational Outcomes – There are many successful individuals who are deaf or hard of hearing who are performing on or above grade level (Luckner & Muir, 2001). Yet the overall performance of students who are deaf or hard of hearing is typically far below this. Traxler (2000), in a summary of achievement data for the 9th edition of the Stanford Achievement Test for students who are deaf or hard of hearing, indicated that the median grade level for 18-year-old students in reading comprehension was just below the 4th grade. She also reported median grade level scores of 4th grade for vocabulary, 5th grade for problem solving and just below 6th grade in mathematics for 18-year-old students who are deaf or hard of hearing. Similar disappointing academic achievement results were noted by Schildroth and Hotto (1993), who indicated that students who are deaf or hard of hearing achieved an average grade level of 4.5 in reading by age 17, and by Allen (1986), who found that students who are deaf or hard of hearing had a median grade level range of 2.9 to 3.2 for reading comprehension and 7.0 to 7.5 for arithmetic computation in their last year of high school.

Career Outcomes – The impact of limited academic progress is most evident when the occupational outcomes for individuals who are deaf or hard of hearing are examined. Currently, large numbers of youth who are deaf or hard of hearing receive Social Security Disability Insurance (SSDI) (Danek & Busby, 1999) without being involved in any productive activity (Bullis, Bull, Johnson, & Peters, 1995; Bullis, Davis, Bull, & Johnson, 1997; Lam, 1994). While the exact figures vary from study to study, collectively researchers report that the manner in which students who are deaf or hard of hearing are prepared for the world of work is unsatisfactory. For example, in a national follow-up study, Macleod-Gallinger (1992) reported that 53 percent of the respondents were unemployed one year after graduation. However, the picture improved considerably over time, with almost 19 percent of respondents who were deaf or hard of hearing reporting that they were unemployed 10 years after graduation.

Curriculum Focus – Individuals' perspectives of hearing loss influence what they think should be taught to students who are deaf or hard of hearing. Currently, there is general consensus that, to the greatest extent possible, the curriculum for students who are deaf or hard of hearing should be the same as that of hearing students (Moores, 2001). Yet the question of what specialized skills should also be included in deaf students' plan of study needs to be answered.
Specific areas of study that are often included in students' programs of study include: receptive and expressive language development, speech development, auditory training, Deaf culture, emotional development, social skills training, sexuality education, independent learning skills, reading strategy instruction, self-advocacy training, daily living skills, career awareness, and infusion of multicultural issues. The ongoing question that needs to be asked is what should be eliminated from the curriculum to accommodate the addition of these specialized skills.

Cultural Identity – The Deaf community is a unique part of American society (Padden & Humphries, 1988). Like most communities, patterns of beliefs, values, behaviors, social customs, and knowledge that represent characteristics of the community are described to define the culture. Membership in the Deaf community, like most communities, varies from place to place. Factors that are often noted include: (1) being deaf, (2) using American Sign Language as a primary means of communicating, and (3) attending a residential school for the deaf (Lane, Hoffmeister, & Bahan, 1996). Many children who are deaf are exposed to two languages and two cultures. Frequently the first language and the culture are not the same as their parents'.

Cochlear Implants – A cochlear implant is an electronic device that is used to stimulate the auditory nerve fibers of an individual who is deaf. A great deal of controversy exists about cochlear implants. Some view this procedure as a wonderful technological advancement, while others see it as an invasive procedure designed to change a deaf person into a hearing person (Schirmer, 2001).

Adaptations – There is a national movement for higher educational standards and greater accountability for all students. Simultaneously, the number of students who are deaf or hard of hearing who receive significant proportions of their education in general education classrooms has increased (Holden-Pitt & Diaz, 1998). Specific adaptations need to be implemented in those general education settings so that students who are deaf or hard of hearing are able to learn, participate, and demonstrate what they are capable of doing. The Colorado Department of Education (1995) suggested that there are two types of adaptations. The first is called accommodations. Accommodations do not significantly change the instructional level, content, or performance criteria. Changes in process are made to provide a student with equal access to learning and results. The second type of adaptation is called a modification. Modifications substantially change what students are expected to learn and demonstrate. Modifications change the course objectives, assessment content, grading process, and possibly the type of diploma earned. Examples of adaptations that can be used with students who are deaf or hard of hearing can be found in Luckner and Denzin (1998).

Quantity and Quality of Personnel – A necessary prerequisite for the provision of quality educational services for students who are deaf or hard of hearing is to have an appropriate number of qualified teachers available to serve them. A variety of sources indicate that the current and projected demand for teachers of students who are deaf or hard of hearing exceeds the available supply. The need for properly trained and licensed teachers of students who are deaf or hard of hearing exists in all geographical regions of the United States (American Association for Employment in Education, 2002).
Administrative Support – Support from principals and special education administrators for teachers and families has strong direct and indirect effects on the quality of education that a child receives (Gersten, Keating, Yovanoff, & Harniss, 2001). Administrators set the tone of a school's culture, influence how services for students with a hearing loss are provided, mediate disputes, shape attitudes about family involvement in a child's education, and determine the types of professional development that are provided for teachers.

Well-Functioning Teams – The Individuals with Disabilities Education Act (IDEA, 1997) mandates team decision making for assessment, placement and transition planning processes. Yet researchers suggest that true collaboration and consultation among professionals and families is rare (Fuchs & Fuchs, 1996; Turnbull & Turnbull, 2001). For increased collaborative decision making and problem solving to occur, greater attention needs to be provided to help professionals and families learn to communicate and develop reliable partnerships.

Transition Services – The transition from school to postsecondary education or the world of work, as well as managing adult responsibilities and living independently, represents a major challenge for many individuals who are deaf or hard of hearing (Danek & Busby, 1999). Students are required to leave a relatively supportive educational system, which usually includes trained special education professionals and specialized services, for the dynamic and unsheltered world of adult living, which typically does not provide the same level of services or support. To better meet the challenges of everyday adult living, professionals and families, along with adult service providers, state agency representatives, community members, and faculty at postsecondary institutions, need to work together to develop, implement, monitor and evaluate transition plans that help individuals who are deaf or hard of hearing lead personally fulfilling lives (Luckner, 2002).

Technology – Technology changes daily. As such, so does the manner in which technology can enhance the lives of individuals who are hearing as well as individuals who are deaf or hard of hearing. The types of technology that are of specific interest to professionals, families, and individuals who are deaf or hard of hearing include hearing aids, telecommunication devices for the deaf (TDD), closed captioning, real-time captioning, soundfields, FM systems, speech and speech reading computer programs, computer assisted notetaking, cochlear implants, and the Internet. In May of 2000 a grant from the federal government was awarded to the Association of College Educators – Deaf/Hard-of-Hearing (ACE-D/HH) (Johnson & Dilka, 2000). One product of that grant has been the development of the Deaf Education Web site (www.deafed.net), which has more than 4,500 individuals (i.e., preservice and existing teachers, parents, adults who are deaf or hard of hearing, administrators, and university faculty) who have become registered users of the site and as such have access to a nation-wide database of instructional resources and collaborative opportunities.

Social Security Disincentive to Work – Many individuals who are deaf or hard of hearing make use of Supplemental Security Income (SSI) and Social Security Disability Insurance (SSDI) to help them get settled. These programs provide supplemental income as well as Medicaid and Medicare benefits.
Unfortunately, large numbers of individuals who are deaf or hard of hearing receive SSI or SSDI, do not work, and are uninvolved in any productive activity (Danek & Busby, 1999).

Discrimination – Negative attitudes and discrimination toward individuals with disabilities in general, and individuals who are deaf or hard of hearing in particular, are deeply rooted and difficult to change. The primary reasons for this include limited experience interacting with individuals who are deaf or hard of hearing and prejudices and fear on the part of the hearing population (Foster, 1987). Furthermore, there is ample evidence that many workers who are deaf or hard of hearing experience difficulties such as communication stress, social isolation, and unsupportive supervisors, which isolate them from resources within their work organizations that could accelerate their career advancement (Geyer & Schroedel, 1998).

Education of individuals who are deaf or hard of hearing has existed for centuries in the United States. Unfortunately, the ongoing debate about how as well as where to educate students with a hearing loss often has overshadowed the great work that has been accomplished by professionals, families, and individuals who are deaf or hard of hearing themselves. The purpose of this paper has been to highlight issues that are specific to the field of education of individuals who are deaf or hard of hearing. The goal is to help parents, professionals and individuals who are deaf or hard of hearing make choices that lead to fulfilling lives. While there have been a large number of successful individuals who are deaf or hard of hearing, there also have been far too many persons with a hearing loss who have not received a quality education, shared positive relationships, or had satisfying careers. Hopefully, the future will be filled with less controversy and increased successes. Hopefully, there will be a heightened understanding that all human beings, whether they have a hearing loss or not, are diverse, complex and have specific needs that must be met for optimum development to occur. Hopefully, each of us will work in partnership to make this a better world for people who are deaf, hard of hearing, or hearing to live and succeed.

American Association for Employment in Education (2002). 2002 job search handbook for educators. Columbus, OH: Author.
Arehart, K.H., & Yoshinaga-Itano, C. (1999). The role of educators of the deaf in the early identification of hearing loss. American Annals of the Deaf, 144, 19-23.
Calderon, R., & Naidu, S. (2000). Further support for the benefits of early identification and intervention for children with hearing loss. The Volta Review, 100(5), 53-84.
Danek, M.M., & Busby, H. (1999). Transition planning and programming: Empowerment through partnership. Washington, DC: Gallaudet University.
Foster, S. (1987). Employment experiences of deaf college graduates: An interview study. Journal of Rehabilitation of the Deaf, 21(1), 1-15.
Fuchs, D., & Fuchs, L.S. (1996). Consultation as a technology and the politics of school reform. Remedial and Special Education, 17(6), 386-392.
Geyer, P.D., & Schroedel, J.D. (1998). Conditions influencing the availability of accommodations for workers who are deaf or hard-of-hearing. Journal of Rehabilitation, 65(2), 42-50.
Holden-Pitt, L., & Diaz, J. (1998). Thirty years of the annual survey of deaf and hard-of-hearing children & youth: A glance over the decades. American Annals of the Deaf, 143, 72-76.
Johnson, H., & Dilka, K. (2000). Crossing the realities divide: Preservice teachers as change agents for the field of deaf education. PT3 Catalyst Grant, U.S. Department of Education OPE Grant, CFDA No. 84.342.
Lane, H., Hoffmeister, R., & Bahan, B. (1996). A journey into the deaf-world. San Diego, CA: Dawn Sign Press.
Luckner, J.L. (2002). Facilitating the transition of students who are deaf or hard of hearing. Austin, TX: Pro-Ed.
Luckner, J., & Denzin, P. (1998). In the mainstream: Adaptations for students who are deaf or hard of hearing. Perspectives in Education and Deafness, 17(1), 8-11.
Luckner, J.L., & Muir, S. (2001). Successful students who are deaf in general education settings. American Annals of the Deaf, 146(5), 450-461.
Luckner, J.L., Schauermann, D., & Allen, R. (1994). Learning to be a friend. Perspectives in Education and Deafness, 12(5), 2-7.
Luetke-Stahlman, B., & Luckner, J.L. (1991). Effectively educating students with hearing impairment. New York: Longman.
Lytle, R., & Rovins, M. (1997). Reforming deaf education: A paradigm shift from how to teach to what to teach. American Annals of the Deaf, 142(1), 7-15.
Macleod-Gallinger, J. (1992). Employment attainments of deaf adults one and ten years after graduation from high school. Journal of the American Deafness and Rehabilitation Association, 25(4), 1-10.
Marschark, M. (1997). Raising and educating a deaf child. New York: Oxford University Press.
Moeller, M.P. (2000). Early intervention and language development in children who are deaf and hard of hearing. Pediatrics, 106, E43.
Moores, D. (2001). Educating the deaf: Psychology, principles, and practices (5th edition). Boston: Houghton Mifflin Company.
Nowell, R., & Innes, J. (1997). Educating children who are deaf or hard of hearing: Inclusion (ERIC Document Reproduction Service No. ED 414 675).
Padden, C., & Humphries, T. (1988). Deaf in America: Voices from a culture. Cambridge: Harvard University Press.
Pressman, L., Pipp-Siegel, S., Yoshinaga-Itano, C., & Deas, A. (1999). The relation of sensitivity to child expressive language gain in deaf and hard-of-hearing children whose caregivers are hearing. Journal of Deaf Studies and Deaf Education, 4, 294-304.
Schildroth, A., & Hotto, S.A. (1993). Annual survey of hearing impaired children and youth. American Annals of the Deaf, 138(2), 163-171.
Schildroth, A., & Hotto, S.A. (1996). Changes in student and program characteristics, 1984-85 and 1994-95. American Annals of the Deaf, 141(2), 68-71.
Schirmer, B.R. (2001). Psychological, social, and educational dimensions of deafness. Boston: Allyn and Bacon.
Snider, B.D. (Ed.). (1995). Conference proceedings: Inclusion? Defining quality education for deaf and hard-of-hearing students. Washington, D.C.: College for Continuing Education, Gallaudet University.
Stredler-Brown, A., & Arehart, K.H. (2000). Universal newborn hearing screening: Impact on early intervention services. The Volta Review, 100(5), 85-117.
Traxler, C.B. (2000). The Stanford Achievement Test, 9th edition: National norming and performance standards for deaf and hard-of-hearing students. Journal of Deaf Studies and Deaf Education, 5(4), 337-348.
Yoshinaga-Itano, C., Sedey, A.L., Coulter, D.K., & Mehl, A.L. (1998). The language of early- and later-identified children with hearing loss. Pediatrics, 102, 1161-1171.
http://www.unco.edu/ncssd/resources/issues_dhh.shtml
Tuesday, May 31, 2011

Among the more important and practical aspects of basic thermodynamics, one finds heat conductivity. This is especially useful in the design and construction of buildings to ensure the optimum materials are used, say to make possible staying warm in harsh winters, or staying cool in incendiary summers (which we'll soon see with global warming). A very basic laboratory experiment for the investigation of heat conductivity is shown in Fig. 1. Also included is the corresponding diagrammatic layout showing the component parts, including: different thermometers (which will be at different temperatures t1, t2, t3 and t4), steam inlet and outlet pipes (left side), steam jacket and water jacket. In the experiment we pass steam through the steam jacket and adjust the flow of water through the water jacket to a small stream. After a while, a steady-state flow (indicated by a constant difference (t2 - t1)) will be achieved, whereupon the flow of water is adjusted to give a difference between thermometers t3 and t4 of about 10 F. One continues observations of the readings of all 4 thermometers until a steady-state condition is reached. Once this is established, we read and record t1, t2, t3 and t4, and catch all the warm water flowing out of the water jacket for a time interval T ~ 10 mins. (The longer the duration of a given trial, the more accurate the results.) Needless to say, the thermometers ought to be scrutinized carefully throughout, and if any marked fluctuations occur, a new trial should be started, because otherwise the experimental errors will be too large. Finally, one determines the mass of water collected, records the time interval T, and the readings of the four thermometers. The distance L between the thermometers t1 and t2 will also be measured, as well as the diameter d of the test rod.

During each test trial, the value of heat Q transferred to the water is determined, which will be estimated by:

Q = k A T (t2 - t1)/L

where k denotes the thermal conductivity of the material (which will be provided), A is the cross-sectional area, L the length, T the duration of the trial, and (t2 - t1) the temperature difference. If a known mass of water (m) passes through the jacket then the total heat received from the end of the test rod will be:

Q = mc(t4 - t3)

Of course, the experiment can also be performed with the objective of determining k, the thermal conductivity. If this is the case one will make use of the relationship:

k A T (t2 - t1)/L = mc(t4 - t3)

so that, on solving for k:

k = mc(t4 - t3) L / [A T (t2 - t1)]

In Fig. 2, a simple diagram is shown which describes the basic principle of heat conductivity. The temperature gradient is defined as (T2 - T1)/x, and the heat passing through per unit area per second is Q/(A t) = k(T2 - T1)/x. That is, the flux is the product of the thermal conductivity and the temperature gradient.

As a worked example, find the quantity of heat Q transferred through 2 square meters of a brick wall 12 cm thick in 1 hour, if the temperature on one side is 8 C and 28 C on the other (k = 0.13 W/mK). Then:

Q = k A t (T2 - T1)/x = (0.13 W/mC)(2 m^2)(3600 s)(20 C / 0.12 m) = 156,000 Joules

(A short Python sketch of this calculation, and of the conductivity relation used in problem 1, appears at the end of this post.)

1) A student performs the heat conductivity experiment as shown in Fig. 1, and determines the thermal conductivity of copper to be 390 W/mC. If he then measures the thermometer differences (t4 - t3) = 5 C and (t2 - t1) = 2 C, using 0.5 kg of warmed water, and his copper test rod for the experiment is 0.5 m long, what would its cross-sectional area A have to be? (Take the specific heat capacity of water = 4200 J/kg K.) Also, obtain the % of error in the student's result by looking up the actual thermal conductivity of copper.

2) A plate of copper 0.4 cm thick has a temperature difference of 60 C between its faces. Find: a) the temperature gradient, and b) the quantity of heat that flows through each square centimeter of one face each minute.

3) How many calories per minute will be conducted through a window glass 80 cm x 100 cm by 2 mm thick if the difference between the two sides is 20 C?

4) A group of 4 astronauts lands on Mars with solar radiation collection material of total area 2000 m^2. If the efficiency of the material is 30%, and the ambient nighttime temperature on Mars for their base location (Isidis Planitia) is -40 C (10 C daytime), will they have adequate collecting material if the solar constant on Mars is 620 W/m^2? (Assume insulating material with a thermal conductivity of 0.08 W/mC, and a need to keep the inside area of their domicile at least at 10 C, requiring solar radiant energy collected of at least 1,200 W per minute for an area of 10 m x 10 m.) Estimate the thickness of insulating material they're likely to need in order to make it work. Comment on whether this expedition is even feasible given the limits of their materials, and that no more than 100 m^3 of insulating material can be taken.
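To make the arithmetic easy to check, here is a minimal Python sketch (my own addition, not part of the original lab write-up) that evaluates the brick-wall example and rearranges k A T (t2 - t1)/L = mc(t4 - t3) to get the rod's cross-sectional area in problem 1. The 600-second trial duration used for problem 1 is an assumption taken from the ~10 minute timing suggested above, since the problem itself does not state T.

```python
# Sketch of the two heat-conduction formulas used in this post.

def heat_through_wall(k, area, thickness, dT, seconds):
    """Q = k * A * t * (T2 - T1) / x, in joules."""
    return k * area * seconds * dT / thickness

def rod_cross_section(k, m, c, dT_water, dT_rod, length, seconds):
    """Solve k*A*T*(t2 - t1)/L = m*c*(t4 - t3) for the rod area A, in m^2."""
    return m * c * dT_water * length / (k * seconds * dT_rod)

# Worked example: brick wall, k = 0.13 W/(m C), 2 m^2, 12 cm thick,
# 20 C temperature difference, 1 hour.
Q = heat_through_wall(k=0.13, area=2.0, thickness=0.12, dT=20.0, seconds=3600)
print(f"Brick wall: Q = {Q:.0f} J")   # prints 156000 J

# Problem 1: copper rod, k = 390 W/(m C), (t4 - t3) = 5 C, (t2 - t1) = 2 C,
# 0.5 kg of water, rod length 0.5 m. The trial duration is NOT given in the
# problem; 600 s is assumed here.
A = rod_cross_section(k=390, m=0.5, c=4200, dT_water=5, dT_rod=2,
                      length=0.5, seconds=600)
print(f"Copper rod: A = {A:.4f} m^2 (scales as 1/T for other durations)")
```

The brick-wall figure reproduces the 156,000 J worked above; the rod area depends directly on whatever trial duration is assumed.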
Two items appearing in the news in the past week disclose that the party of Lincoln - once proud and standing on principle and the nation's welfare - now stands for nothing but crass self-interest and ideology. The items and the GOP reaction show that if this vile party ever gains control of the levers of power again, we are all for the high jump, whether most Americans recognize that or not!

The first item concerns the vote to be taken this evening in the House of Representatives on whether or not to raise the debt ceiling. Never mind that in all the decades past this was an automatic, routine decision. No one, no party or person of power in his or her right mind wanted to see the U.S. effectively made to appear as if in default. Not even a remote "cosmetic" default, which is what the Reeps are claiming this vote amounts to, since Treasury Secretary Timothy Geithner has extended the actual deadline to Aug. 2 by being resourceful in terms of government spending. But rather than wait, the Goopers want to hold their vote two months in advance, for what....well, political posturing! Certainly not reality! Their claim is they are the party of "responsibility" because they have proposed massive cuts to Medicare (via substituting a voucher system for actual government payments) while the Dems have proposed nothing similar. But this is a false dichotomy set up only for the benefit of the weak-minded, the gullible, and those not paying attention. Unsaid, especially by the Repukes, is that the Republican budget plan actually INCREASES federal deficits by $5.4 trillion over ten years! It does this by not doing squat about military-defense cuts (current military spending is nearly 5% of GDP, prompting one former defense analyst (Chuck Spinney) to assert it amounts to a "war on Social Security and Medicare") even as it allows the wealthiest 1% to continue receiving their Bush tax cuts - equal to one new Lexus each year, on average! Nowhere in any of the Repuke "solutions" or manifestos is there any faint mention of the one thing that would solve the nation's deficit problems most expeditiously: increasing TAXES!
Indeed, the Repukes have all signed "loyalty oaths" (compliments of anti-tax zealot Grover Norquist) that absolutely resist any plan to raise revenue by taxes. If the country were analogous to an overspending family with the 'wife' (Repukes) mainly using up credit lines on credit cards, then the comparison would be the wife's refusal to work to earn more $$$ to pay for the credit, and instead vowing to cut the family's food, utility and medical budgets! How long do you think such a wife would last before being chucked out on her ass by a responsible hubby? Yet in our country, the Repukes are treated as if they're the next thing to financial wizards and sober stars! Of course, part of the blame for this atrocity must go to the Democrats in Congress and the Senate. A more pliable, spineless bunch of weenies and wusses I've not seen in a while. Not only have most now agreed to vote with the Repukes on negating an increase in the debt ceiling, they are actually talking of making or increasing future cuts instead of demanding the Reeps include higher taxes as part of deficit reduction! Indeed, the word from the Denver Post article this morning (p. 5A, 'Republicans and many Dems oppose Bill') is that: "After the vote fails, the focus will return to a bipartisan group of six congressional leaders who have been in private talks with Vice President Joe Biden to come up with a massive spending cut package and allow the debt ceiling to rise." Can't the Dem knuckleheads and poseurs understand that, as Norquist himself once put it, "Bipartisanship means Democratic date rape to us Republicans!" and that by playing on this losing wicket they are unwittingly ceding the field and advantage to the Repukes? As opposed to demanding from those R-shit heads that they include TAX HIKES in any deficit reduction package? I mean this is a no-brainer, or ought to be! There simply can't be a workable package that doesn't include tax increases! Allowing the Reeps to dictate puts all the Dem strategy into the proverbial crapper as well as ceding gravitas on the means for deficit reduction - which the idiotic corporate media will surely get wet dreams over. How many fucking times must I cite reams of evidence that shows simple spending cuts can't work? I have cited examples from sources (e.g. Financial Times) in so many blogs now, it makes my head spin to recall them! As one Brit economist put it: "Claiming you can solve a deficit problem by using only spending cuts is like saying you can cut off your foot, and you will run faster!" - yet that's the detached reality we are left with because the weak-kneed Dems won't come up with their own solutions. As I wrote earlier, two of the best solutions for making Medicare solvent are allowing it to bargain for the lowest prescription drug prices, like the VA does, and eliminating Medicare Advantage plans - which were set up in 2003 precisely to bleed the standard program into insolvency! Yet not one fucking Dem I've seen or heard has mentioned either of these, leaving me to believe they are kowtowing to some corporate interest or other, most likely Big PhRMA. As to the other reality detachment, that concerns the denial of climate change - global warming, now part of the mantras of all the GOP's illustrious contenders for the presidency. This, despite the fact that many of them actually had proposed changes to policy before getting into the race.
But now, as with Newt Gingrich, all that matters is that reality be sacrificed rather than upset the mindless Gooper-conservo masses, who only watch FAUX News and read comic books. Nevertheless, at least one Republican has the right take, former NY Rep. Sherwood Boehlert, who noted: "Never in my life have I been so disappointed in the pretenders to the throne from my party." Bingo! And this despite the fact the evidence is now overwhelming that the whole ice sheet of Greenland has been so affected by melting it's on the verge of collapse. (See: Greenland Poised on a Knife Edge, in The New Scientist, Vol. 209, No. 2794, Jan. 2011, p. 8.) The article notes that just the 'break off' at the margins of the sheet, which is ongoing, is adding 300 gigatons of melted ice to the oceans each year. If the whole sheet collapses, it will raise global sea levels by 7 m (about 23 feet). Are any of the GOP's idiot candidates paying attention? Including to the fact that the acidity of the oceans is already 30% higher than at the start of the Industrial Revolution? I doubt it! Sometimes, when I get this frustrated, I think that humans don't deserve to survive if they are so incapable of stewardship. At least so many. But then I do bear in mind that many are still fighting to make things right, and not just allow the knuckle-dragging buffoons and their lackeys to prevail. I just hope there are more of the former left than the latter, despite a recent poll showing only 49% of Americans now believe effects of global warming have begun, compared with 61% in 2008. Are the knuckle-draggers and dummies winning? Stay tuned!

Monday, May 30, 2011

Two years ago my dad, a World War II vet, passed away from pneumonia. Today, he's remembered not only for his war service (36 months in the Pacific Theater) but also for his steadfast raising of a large family. Dad's military combat was waged on two fronts: against the Japanese Empire in the Philippines and New Guinea on the one hand, and against malaria on the other - with no fewer than five hospitalizations. Even on being discharged from the Army in April of 1945, the after-effects of malaria remained and he'd often come down with severe chills. As we know, the malaria parasites are never finally eliminated but stay in the bloodstream over a lifetime. As I recall Dad's sacrifices today, I also recall two of the last contacts I had with him, one in April of 2009 (in which he sent his last email from his email machine) and then for Father's Day, on the phone in June, 2009. His email (of April 19) lamented that his youngest son had 'gone off the rails' into hardcore fundamentalist Christianity and that his I-Net church website was thrashing others in the family by use of a false self-righteousness. He expressly deplored the attacks he'd seen against my sister, as well as my mother, and to a lesser extent the de facto attacks by implication against him. He regarded ANY attacks against Catholicism as attacks against his own beliefs (especially as he'd converted to Catholicism from being a Southern Baptist). His final hope, at the end of the email, was that my hardcore fundie bro would give it a rest and realize that life is too short to "carry on" waging crusades even in the name of trying to "save" others. He himself seemed to finally realize and appreciate that salvation is a relative thing, and possibly for that reason, refused to condemn his fundie son to Catholic Hell for abandoning the religion in which he was raised.
In the end, each will believe as he or she sees fit, and all efforts to undermine, shame, or intimidate others into one's own fold are doomed to failure. All one really accomplishes is alienation, hatred and further isolation. His one wish was that if he did pass away, we'd come together as a family, not pull farther apart via false causes, agendas or beliefs. In my last contact with him on Father's Day, his voice was rasping and he appeared to sense the end might be near. We talked briefly about my latest book project, on the Kennedy assassination, and I mentioned that I had dedicated it to him. He expressed thanks and said he hoped he might get to see one draft. Alas, he passed before it could be sent to him. These days, especially around Memorial Day, and near his birthday (May 25), I often find myself going back to read his old emails from 2007-2009, which I have kept stored in my 'old' email folder. None of them are very long, except the one from April 19, 2009, when he expressed the hope that a son would soon find his way back to the light and family solidarity. The others were mainly recollections about past events, and current ones. In one, he inquired after my wife Janice's health and gave some advice on car repair after my wife was involved in a serious car collision in central Colorado (hit broadside by a reckless driver who went through a red light). In others he recounted assorted celebrations, including the most recent - for Xmas 2008 at the Port Charlotte Retirement Center, into which he and Mom moved three years earlier. Dad, who provided the center of gravity for the family (always sending out the greeting cards for each and every member), will be sorely missed. But always remembered, especially - on this day - for the extraordinary service he gave to his country.

Sunday, May 29, 2011

It was bad enough this week to see provisions of the "Patriot Act" which were due to expire extended again by a bunch of weak-kneed wussies in Congress, including the illustrious House "Tea Party" contingent - otherwise so noisy about their precious freedoms and defending them. Well, where the f*ck were these loudmouths when those extensions passed? Where were ALL of them, whom we ordinarily see yapping about time-honored patriots but who are so cowed by the anti-terror shtick (like so many in the 50s were by Joe McCarthy's anti-commie crusade) that they give it cover and even funding? Don't these dildo-brains understand the security state is already over-extended? Don't they grasp that fundamental REAL rights, like the 4th Amendment, are under assault by the Patriot Act? It makes me wonder if any of our politicos ever read Benjamin Franklin's quote that "Those who would sacrifice liberty for a temporary security deserve neither liberty nor security." And by the way, pardon me if I blow off anyone who claims "well, times are different now!" No, they are not! I lived through the fifties and early 60s when the most massive REAL threat to freedom was Soviet Russia, which had over 5,000 H-bomb armed ICBMs aimed at us. Never during that whole freaking time, even in the midst of the parlous Cuban Missile Crisis of 1962, were so many civil liberties just chucked like I've seen since 9/11. Anyway, anyone who still believes all these laws are merely innocuous inconveniences needs to familiarize himself or herself with the case of one Scott Crow of Austin, TX. Crow is an activist, specifically a self-described veteran organizer of anti-corporate demonstrations.
He recently found, on requesting FOIA files from the FBI, that they'd been monitoring his activities for the past three years, including "tracing license plates of cars parked out in front, recording the comings and goings of residents and guests, and in one case speculating about the presence of a strange object on the driveway" (Denver Post, 'Texan's FBI File Reveals Domestic Spying', p. 12A, 5/29). Well, the strange object turned out to be a quilt that was made for an after-school program, according to Crow (ibid.). Crow also found that more than 440 heavily redacted pages were in his FBI file, many under the rubric of Domestic Terrorism. Of course, this was exactly what many of us worried about when Congress - at least most of its members - passed the misnamed "Patriot Act" in 2001 without even reading its many provisions. We fretted that, given the open-ended definition of terrorism, just about any and everything would be included, and that might well extend to domestic protests, especially against corporations. Now we know how valid those concerns were! Crow himself commented on the extensive documents with mixed anger, astonishment and a degree of flattery that so much government energy could be expended on one little guy who lives in a ramshackle home with a wife, two goats, a dozen chickens and a turkey. According to Crow: "It's just a big farce that the government created such a paper tiger. Al Qaeda and real terrorists are hard to find, but we're easy to find. It's outrageous that they would spend so much money surveilling civil activists....and equating our actions with Al Qaeda." But recall this might not be that strange after all. It was Rachel Maddow, on her MSNBC show some weeks ago, who first brought it to national attention (after OBL's killing) that his intent was never to kill Americans so much as to make us spend ourselves into bankruptcy. Maddow's analysis included evidence of the country's transformation into a national security state, as she cited the vast, increased cost of intelligence, not only for the CIA, but more than a doubling in personnel for the DIA (from 7,500 to 16,500). Additionally, the National Security Agency (NSA) doubled its budget, and the number of security-based organizations went from 20 in 2001, to 37 in 2002, then adding 36 the next year, 26 the year after that, then 31 and 32 more, with 20 additional security organizations added in 2007, 2008, and 2009. Not said was that under the Patriot Act, ALL these agencies have compiled a single database that is cross-correlated, and they often work together, including the FBI. All of this mind-boggling security and military infrastructure was at the cost of pressing domestic needs, including repairing crumbling civil infrastructure (bridges, roads, water and sewer mains, etc. - estimated cost from the American Society of Civil Engineers: $1.7 trillion) as well as health care for nearly 50 million currently without it. Even now, certain harpies and miscreants aim their sights at Medicare for the elderly, one of the most miserly programs in terms of benefits compared with similar programs in other nations. And goons like Paul Ryan and his henchmen want to make it smaller yet! The point is, with such a vast and over-extended, over-active security state, which gets on average $28 billion a year, ways must always be found to justify the subsidies. The ways, evident from Scott Crow's files, include domestic spying. Where the f*ck are the ultra-Patriot Tea Baggers in all this?
Or...are they all A-ok with a massive spy state that also protects corporations from civil protests, portrayed as "domestic terrorism"? According to Mike German, a former FBI agent now at the ACLU: "You have a bunch of guys and women all over the country sent out to find terrorism. Fortunately, there isn't a lot of terrorism in many communities, so they end up pursuing people who are critical of the government." This is bollocks, because one of the most fought-for rights of Americans, since Thomas Paine's fiery 'Common Sense', is to be critical of the government. And the day the citizen becomes fearful of doing that is the day we have tyranny returned. For when citizens are fearful of government, that is tyranny; when government is fearful of the citizens, one has liberty. Strange that all the Tea-bagging Repukes didn't recall that when they passed the extension for the Patriot Act. Even The Financial Times has noted in an editorial ('Terror and the Law', May 16, p. 8) that it's time to put time limits in place. As they note: "Congress needs to put time limits on the post 9/11 powers. Failure to do so in that first sweeping authorization was a dereliction of duty....Emergency powers were justified after 9/11 but allowing them in perpetuity is wrong." We agree, and would also add that if there aren't enough genuine terrorists for the assorted agencies to go after and monitor, and they find they have to pursue innocent Americans exercising their free speech rights, then it's time to exact massive cuts in security funding for all agencies. Or at least cuts in direct proportion to the actual threat! We also expect the Teepees to get on board with this, or declare themselves ignominious HYPOCRITES!

It's absolutely ludicrous as well as hilarious to behold certain fundies going off on the Quelle or Q source tradition (e.g. as "Satanic"), while they accept a bible (KJV) that has absolutely NO objective validation as any unique, sacred source! One just has to scratch his head in wonderment and awe at the chutzpah it takes to readily ignore the immense deficits in an entire BOOK, while carping at a coherent assemblage of Yeshua's sayings that is claimed to form a commonality of source for at least two of the NT gospels (Matthew and Luke). As noted in my earlier blog (before the last), textual analysis by all reputable sources recognizes Q as a putative collection of Yeshua's sayings which doesn't exist independently (e.g. as a specific text) but rather can be parsed from the separate gospels, such as Matthew and Luke. Germane to this Q tradition is how - when one applies textual analysis to the books, gospels - one can unearth the process whereby the orthodox (Pauline) Church worked and reworked the sayings to fit them into one gospel milieu or another. None of this is mysterious, nor does the basis require any "objective proof" (a real howler, since the whole historic process of biblical book selection and compilation has been subjective!), since the disclosure of the Q (which as I indicated is more a TRADITION than explicit text) isn't based on a direct, isolated manuscript but rather is distilled from the comparison of numerous similar passages in differing NT sources. Thus, one can employ this template to derive a plausible timeline: for example, the Gospel of Mark appears to have committed the sayings to paper about 40 years after the inspiring events, then Matthew and Luke composed their versions some 15-20 years after Mark.
ALL SERIOUS biblical scholars accept this timeline; only hucksters of holiness, pretenders and pseudo-scholars do not! Again, I advise those who wish genuine scholarly insight, as opposed to pseudo-insight, to avail themselves of Yale University's excellent Introduction to the New Testament course by Prof. Dale B. Martin. Of particular import is his lecture 'The Historical Jesus', which clearly shows the editing in John to make it conform to orthodoxy. Now, the choice before people is whether to accept the basis and findings of a highly accomplished scholar in the field, teaching at one of the nation's premier Ivy League universities, or the word of someone belonging to an online bible school with an "I-net" church. I think the choice is a no-brainer, but then that is writing as one who's actually taken a real course in textual analysis (including languages used in translation) from a real university!

But let's get back to the blind spot inhering in these Q-tradition carpers, which I also exposed (to do with their King James Bible) in the blog before last. As I showed in that blog, their accepted KJV fails on all three critical tests for authenticity: a) no major re-translations or re-doings, b) no major omissions or deletions, c) consistency with what the earliest original (e.g. Greek) translations (say in the Greek Septuagint) allowed, with no major contradictions. As I showed, the current KJV failed criterion (a) by being a compromise translation to try to bridge the gap between the Puritans and the Church of England. Thus, two distinct translations, call them X and Y, were jury-rigged to give some mutant single translation, call it X^Y, belonging to neither. It's something like taking the head of a cow and grafting it onto a decapitated bull and saying you now have an authentic cow-bull. Actually it's more like cow bullshit! Making matters worse, the translators were told to preserve as much as possible of the Bishops' Bible of 1568 (then the official English Bible). The translators were also granted wide latitude in how they specifically formed different translations of the text, in many cases being allowed to use the Geneva Bible and some other versions "when they agree better with the text" in Greek or Hebrew. This "mixing and matching" process is believed by many experts (e.g. Geza Vermes) to have been responsible for many of the more blatant contradictions that have emerged, viz. in answer to the question posed 'Are unsaved sinners eternally tormented?':

(a) YES (Isa 33:14; Mt 13:40-42, 25:41,46; Mk 9:43-48; Jude 6-7; Re 14:10-11)
(b) NO (Eze 18:4; Mt 7:13, 10:28; Lu 13:3,5; John 3:15-16; Ac 3:23; 1Co 15:18; 2Th 2:10; Heb 10:39; 2Pe 3:7,9)

This is a huge divide, and a serious blotch on the integrity of the KJV. Indeed, if such a fundamental question as "eternal torment" can't be properly addressed, how many other shibboleths will one find? Meanwhile, the current KJV fails test (b) because we know from historical records (kept by the Anglican Church) that what eventually became the "King James Bible" by 1626-30 was in fact NOT the original, but rather 75% to 90% adopted from William Tyndale's English New Testament, published in 1526. This version was actually published in defiance of the English law of the time - so it is amazing so much of it was then incorporated into the original KJV! Tyndale's tack was to render Scripture in the common language of his time to make it accessible even to a humble plow boy.
But this meant ignoring the originally published KJV and resorting to his own translations, basing his ms. on Hebrew and Greek texts. In so doing he'd defied an English law from 1401 that forbade the publication of any English book without Church of England permission. Tyndale got the last laugh, because a year after he was strangled for "heresy" in the Netherlands, King Henry VIII granted a license to a complete "King James Bible" that was more than three-fourths Tyndale's translation from his English New Testament! Thus, the current incarnation of the KJV is not the original translation adopted by the commission of King James I. Thus, the KJV also fails criterion (b).

What about test (c)? This was broken as soon as Tyndale's version was 75% adopted and the correlated parts of the earlier (King James I) ordained sections removed. More to the point, the KJV rendering of Matt. 25:46, "And these shall go away into everlasting punishment: but the righteous into life eternal," disclosed a total inconsistency with the earliest original (e.g. Greek) translations! Here in Matt. 25:46, the Greek for everlasting punishment is "kolasin aionion." Kolasin is a noun in the accusative form, singular voice, feminine gender, and means "punishment, chastening, correction, to cut off as in pruning a tree to bear more fruit." Meanwhile, "aionion" is the adjective form of "aion," in the singular form, and means "pertaining to an eon or age, an indeterminate period of time." But it does not mean eternal! (Critical examination discloses the Bible speaks of five "aions", minimum, and perhaps many more. If there were "aions" in the past, it must logically mean that each one of them has ended!) Thus, a 'pick as you choose' process for the creation of the KJV, combined with inept and cavalier translations of the Greek, obviously allowed huge errors to creep in, and Matt. 25:46 is an enormous one, given it's the sole place that refers to "everlasting punishment". So if this translation is wrong because of a cavalier Greek translation (of kolasin aionion), then everything to do with it goes out the window. Thus, the KJV fails all THREE tests for validity as an authentic bible! We suggest that before certain "pastors" launch into more tirades against the Q tradition, they examine more completely their own bowdlerized and maimed bible, which is somewhat like a Frankenstein "dog", put together from the excavated carcasses of about ten different doggie cadavers! And not even today's resident geniuses who worship this Frankensteined monstrosity as the "final entity" can even recognize which end is the head and which is the heinie! But hey, maybe as a biblical yardstick its "head" doubles as a heinie! Cream of KJV Bible soup, anyone?

The matter of the ultimate origin of life, the theory of Abiogenesis (which is often erroneously conflated with the theory of evolution), has been problematical for years. What is sought is a basic explanation for how fundamentally non-living matter could acquire the properties and attributes of life, including being able to reproduce. In principle this isn't that remarkable a stretch, since we already know there exist living entities at the "margins" - the viruses - which display no attributes of life until they become attached to a host. Once in a host, they can appropriate its cell machinery to churn out billions of copies of themselves. Evolution in such organisms is also no biggie. For example, consider a point mutation in a Type A flu virus.
Here, a minuscule substitution of nucleotide bases yields a virus imperceptibly different in genetic structure from its predecessor. This is a case of microevolution brought about by mutation. A new 'flu vaccine must be prepared to contain it. The most that flu vaccines achieve is keeping the selection or s-value fairly constant for a majority of influenza viruses, while not entirely eliminating the associated gene frequencies. Hence, yearly vaccines only attempt to reduce the most virulent strains, such as 'Type A flu', to the most minimal equilibrium frequency. Total elimination is impossible because there are always new gene mutations of the virus to assume the place of any strains that have been eliminated. At the same time, the ongoing enterprise of preparing new flu vaccines is an indirect acknowledgement of microevolution in the flu virus. Amazingly, there are many tens of thousands of uneducated people who actually don't believe such examples qualify as bona fide evolution! It's as if these forlorn people can't process that the success of natural selection is inextricably bound to the fitness (w) and the selective value, s, e.g. via: w = 1 - s.

Meanwhile, we know there are pleuro-pneumonia-like organisms, or PPLOs for short. The PPLO is about as close as one can get to the theoretical limit of how small an organism can be. Some figures clarify this: it has about 12 million atoms and a molecular weight of 2.88 million Daltons. Compared to an amoeba, it weighs about one billion times less.

Now, in a find published in The New Scientist (Vol. 209, No. 2794, p. 11), two investigators, Kunihiko Kaneko and Atsushi Kamimura, have made a remarkable breakthrough in devising a testable model that is able to replicate the Abiogenesis process. The two basically solved the problem of how a lipid-coated protocell can divide into two (displaying reproduction) when the genetic material replicates. Recall in an earlier blog I showed the hypothetical protocell reaction wherein a self-sustaining coacervate droplet can use one or two basic reactions involving adenosine triphosphate (ATP) and adenosine diphosphate (ADP):

L*M + R + ADP + P -> R + L + M + ATP

ATP + X + Y + X*Y -> ADP + X*Y + X*Y + P

In the above, L*M is some large, indeterminate, energy-rich compound that could serve as 'food'. Whatever the specific form, it's conceived here to have two major parts capable of being broken to liberate energy. Compound R is perhaps a proteinoid or lipid-coated protocell, but in any case able to act on L*M to decompose it. The problem with this earlier hypothesis was that such lipid-coated protocells lack the machinery to allow for easy division. Kaneko and Kamimura solved this by taking their inspiration (for their model) from living things in which DNA and RNA code for proteins and the proteins catalyse replication of the genetic material. This goes back to biochemist Jacques Monod's concept that the organism is a self-constructing machine. Its macroscopic structure is not imposed upon it by outside forces; instead it shapes itself autonomously by dint of constructive internal (chemical) interactions. Thus in the Kaneko-Kamimura model one has a self-perpetuating system in which a cluster of two types of molecules catalyse replication for one another while also demonstrating rudimentary cell division. In the Kaneko and Kamimura model, as with DNA, the genetic material replicates much more slowly than the other cluster molecules but also takes longer to degrade, so it enables lots of the other molecule to accumulate.
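To make that last point concrete before turning to the division step, here is a minimal toy sketch in Python. This is my own simplification for illustration only, not the Kaneko-Kamimura model itself, and every rate constant in it is an assumed, made-up value; it merely shows how a slowly replicating, slowly degrading "genetic" molecule can sustain a rapidly replicating, rapidly degrading partner that accumulates in far greater numbers:

```python
# Toy sketch only (illustrative assumptions, NOT the Kaneko-Kamimura model):
# species G (the "genetic" molecule) replicates and degrades slowly; species M
# (the other cluster molecule) replicates and degrades quickly. Each species'
# replication requires the presence of the other (mutual catalysis).
G_REP, G_DECAY = 0.02, 0.002    # slow copying, slow breakdown (assumed rates)
M_REP, M_DECAY = 0.50, 0.10     # fast copying, fast breakdown (assumed rates)
CAPACITY = 1.0e6                # crude resource limit so numbers stay finite

def step(g, m):
    crowding = max(0.0, 1.0 - (g + m) / CAPACITY)   # shared resource limit
    dg = G_REP * g * crowding * min(1.0, m / 10.0) - G_DECAY * g  # G replication catalysed by M, minus decay
    dm = M_REP * m * crowding * min(1.0, g / 10.0) - M_DECAY * m  # M replication catalysed by G, minus decay
    return g + dg, m + dm

g, m = 5.0, 5.0
for _ in range(200):
    g, m = step(g, m)

# M ends up vastly outnumbering G, as in the qualitative argument above.
print(f"after 200 steps: G ~ {g:.0f}, M ~ {m:.0f}")
```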
Following replication of the heredity carrier, the copies drift apart while the molecules between them break down automatically, creating two separate entities (see image). This is an exciting breakthrough, but some further investigations are needed, specifically ways to circumvent the problem that (in real life) membrane lipids around an RNA molecule don't typically catalyse RNA replication. However, this isn't insurmountable, because all one need do (theoretically) is replace the lipids with hydrophobic peptides. We look forward to further work by Kaneko and Kamimura, as well as others in the microbiology field, working at the forefront of Abiogenesis research.

Saturday, May 28, 2011

In some religious blogs it's become fashionable to debate whether the Bible or a Church (as a generic "body of Christ") emerged first. On the side of the former are mainly fundamentalists who, while they may know how to cite chapter and verse, are oblivious to historical facts. They claim that while a formal Bible may not have existed ab initio, there were still coherent sayings of Yeshua that prefigured a later "correct, truthful" bible known as the King James. In fact, I will show this is all codswallop and uses specious arguments, including retroactive claims, to make a spurious case. In the meantime, those who argue that a single Church existed aren't aware that in fact dozens of differing Christian sects competed, with only one prevailing: an orthodox form pushed by Paul of Tarsus.

First, let's get to the claim of an accurate compilation of Yeshua's sayings into a claimed original "bible" text that would later evolve into the King James version. Of interest is the tradition designated as "Q" or "Quelle" (the German for "source"). Textual analysis recognizes Q as a collection of Yeshua's sayings which doesn't exist independently (e.g. as a specific text) but rather can be parsed from the separate gospels, such as Matthew and Luke. Germane to this Q tradition is how - when one applies textual analysis to the books, gospels - one can unearth the process whereby the orthodox (Pauline) Church worked and reworked the sayings to fit them into one gospel milieu or another. One can also derive a plausible timeline: for example, the Gospel of Mark appears to have committed the sayings to paper about 40 years after the inspiring events, then Matthew and Luke composed their versions some 15-20 years after Mark. Finally, as noted in an earlier blog on this, John was actually an original GNOSTIC gospel that was reworked to conform to the Catholic Orthodoxy and added some 50-75 years after Matthew and Luke. (Again, if one knows Greek, one can easily spot the multiple edits in John that transmute its content from a Gnostic view to an orthodox Catholic one.)

Here is where contradictory arguments emerge (conflating Church and bona fide Bible existence), because some fundies have insisted that "early church councils" adopted rigorous "principles" to determine whether a given New Testament book was truly inspired by the Holy Spirit. These are generally listed as criteria to meet for explicit questions:

1) Was the author an apostle, or did he have a close connection with an apostle?

2) Was the book accepted by the Body of Christ at large?

3) Did the book contain consistency of doctrine and orthodox teaching?

4) Did the book bear evidence of high moral and spiritual values that would reflect a work of the Holy Spirit?
This is a noble effort, but it actually blows up in the collective faces of the fundies (who later argue against an orthodox Catholic purist take) because at the end of the day they rely on the criteria of a religion-church that they really can't accept! This is a tricky point, so it needs to be explained in the historical context. The fundamental problem is that whenever one refers explicitly to "early church councils" in antiquity, there can be one and only one meaning: the councils of the Catholic Church, since those were the sole ones then existing. Thus, fundies who inadvertently (or desperately?) invoke "principles" or codes for NT acceptance demanded by "early church councils" are in fact conferring benediction on the CATHOLIC, PAULINE ORTHODOXY. Thus, they are unwittingly validating the Catholic process for separating wheat from chaff in terms of which books and texts were acceptable and which weren't!

A more honest and logical approach would be to simply argue that no official "Church" or religion existed then that was bequeathed special status by Christ, and that the acceptance of this or that text was under unique guidelines independent of "early church councils". Those guidelines would then be provided. Ideally, these criteria would be truly independent of those ordained by the RC Church's councils, which the fundies reject as a "harlot of Babylon". In any case, to be faithful to history, the new principles would have to also be disclosed in the book the fundies most revere: the King James version. But is this even feasible? For logical consistency and to be coherent with any proposed (later) doctrine, the claim would only stand if the final revered product (the current KJV) had not been severely compromised or altered such that it lost content or context. This would require: a) no major re-translations or re-doings, b) no major omissions or deletions, and c) a consistency with what the earliest original (e.g. Greek) translations (say in the Greek Septuagint) allowed, with no major contradictions. Now, let's examine each of these in turn.

As for (a), we do know from the extant historical records that the KJV originated when James VI of Scotland (who came to be King James I of England in 1603) convened a conclave of experts at Hampton Court near London in 1604, to arrive at a compromise translation to try to bridge the gap between the Puritans and the Church of England. Thus, already we see that a compromise was injected into the mix for the existing documents of the OT and NT. We also know the assigned objectives were: the translation of the Old Testament from Hebrew, and the New Testament from Greek, to be undertaken and respectively assembled by no fewer than 47 translators in 6 committees working in London, Oxford and Cambridge. The final results emerged seven years later, in 1611. Back to the project: the translators were all instructed not to translate "church" as "congregation", and to preserve as much as possible the Bishop's Bible of 1568 (then the official English Bible). The translators were also granted wide latitude in how they specifically formed different translations of the text, in many cases being allowed to use the Geneva Bible and some other versions "when they agree better with the text" in Greek or Hebrew. This "mixing and matching" process is believed by many experts (e.g. Geza Vermes) to have been responsible for many of the more blatant contradictions that have emerged, and which fundies are unable to explain away no matter how much they try.
For example, in answer to the question posed 'Are unsaved sinners eternally tormented?':

(a) YES (Isa 33:14; Mt 13:40-42, 25:41,46; Mk 9:43-48; Jude 6-7; Re 14:10-11)

(b) NO (Eze 18:4; Mt 7:13, 10:28; Lu 13:3,5; John 3:15-16; Ac 3:23; 1Co 15:18; 2Th 2:10; Heb 10:39; 2Pe 3:7,9)

This is a huge divide, and a serious blotch on the integrity of the KJV. Indeed, if such a fundamental question as "eternal torment" can't be properly addressed, how many other shibboleths will one find? Another example of the skewed process appears in the KJV rendering of Matt. 25:46: "And these shall go away into everlasting punishment: but the righteous into life eternal". Here, the Greek for "everlasting punishment" is "kolasin aionion." Kolasin is a noun in the accusative case, singular number, feminine gender and means "punishment, chastening, correction, to cut off as in pruning a tree to bear more fruit." Meanwhile, "Aionion" is the adjective form of "aion," in the singular, and means "pertaining to an eon or age, an indeterminate period of time." But it does not mean eternal! (Critical examination discloses the Bible speaks of five "aions", minimum, and perhaps many more. If there were "aions" in the past, it must logically mean that each one of them has ended!) Thus, a 'pick as you choose' process for the creation of the KJV obviously allowed huge errors to creep in, and Matt. 25:46 is an enormous one, given it's the sole place that refers to "everlasting punishment". So if this translation is wrong because of a cavalier Greek translation (of kolasin aionion) then everything to do with it goes out the window. Thus, the KJV fails test (a) for logical consistency.

What about (b), i.e. no major omissions or deletions? Again, we know from historical records (kept by the Anglican Church) that what eventually became the "King James Bible" by 1626-30 was in fact NOT the original, but rather 75% to 90% adopted from William Tyndale's English New Testament, published in 1526. This version was actually published in defiance of then English law - so it is amazing so much of it was then incorporated into the original KJV! Tyndale's tack was to render Scripture in the common language of his time to make it accessible even to a humble plow boy. But this meant ignoring the officially sanctioned versions and resorting to his own translations, basing his ms. on Hebrew and Greek texts. In so doing he'd defied an English law from 1401 that forbade the publication of any English bible without the Church's permission. But Tyndale got the last laugh, because a year after he was strangled for "heresy" in the Netherlands, King Henry VIII granted a license to a complete English Bible that was more than three-fourths Tyndale's translation from his English New Testament - the same Tyndale material that dominates the KJV text we have today! Thus, the current incarnation of the KJV is not the original translation adopted by the commission of King James I. Thus, the KJV also fails criterion (b).

Now what about (c), a consistency with the earliest original (e.g. Greek) texts? I already showed this was broken as soon as Tyndale's version was 75% adopted and the correlated parts of the earlier (King James I) ordained sections removed. More to the point, I gave the specific example of how the earliest Greek meaning of "kolasin aionion" (punishment for an age) was destroyed and altered to "everlasting punishment". Thus, the original bond was already destroyed - perhaps in the 'mix and match' translation process permitted by King James I in his commission of scholars!
Thus, the current KJV fails all three logical tests for authenticity and hence can't possibly be the basis for any erstwhile "biblical church" or any founding document, period.

Now, what of the claim for an "early Church" itself? Is there such a thing? The answer is 'No!'. While it is true that Christ said "Thou art Peter and upon this rock I will build my Church", that can be interpreted in more than one way. All the evidence, indeed, shows that the earliest conglomerate or "congregation" of followers wasn't a formal "church" by any standard but a polyglot group with shared beliefs and a shared outlook. I would argue that this group never evolved to become a formal Church, and that the latter didn't appear until the Edict of Milan was signed in 313 A.D. under the Emperor Constantine Augustus. The problem with the Edict of Milan is that the then Christians essentially made a pact with "the Devil", i.e. signed on to a deal with the then Emperor that would allocate state religion status to the Christians (no more persecutions!) but at the cost of sharing that stage with the Emperor's own Sol Invictus (Sun worship) religion. Thus, the choice of December 25 for the nativity, since at that time that date was nearest the Winter Solstice, or the 're-birth of the Sun' (when the Sun reaches its lowest declination and begins its apparent journey northward on the ecliptic, leading to longer days). Thus, the "church" codified in 313 A.D. was in fact an artifact of the original community called Christian, much like the current KJV is an artifact of the original King James version of the Bible. Bible or church-based Christianity? As I said, a false dichotomy. The best plan for people, if committed to a spiritual existence and an authentic relationship to whatever that means, is to toss out both church and bible and live without the graven images of either.

Friday, May 27, 2011

As another $690 BILLION defense spending bill wends its way through Congress, you can lay 100,000 to 1 Vegas odds that it will get passed with few problems or rider amendments. There are too many key Senators as well as Reps who now depend for their political livelihood on military spending. Translated: they depend on ordinary American taxpayers to keep pushing military pork to their communities while thousands of other communities endure continued infrastructure decay. This is a damned disgrace, and what's more, in a parlous financial environment in which we'll soon need to raise the national debt ceiling, it is unconscionable! We need to get our miserable asses out of Afghanistan, and we need to do it this summer, not by 2014! We don't have the freaking money - even borrowed from the Chinese - to continue with that bullshit.

Now, out of the mouths of 'babes' one finds similar sentiments expressed, as in a letter published in yesterday's Denver Post from high school student Abigail L. Cooke. Just when you thought 99% of young people only had their eyes and brains tethered to social media like Facebook, along comes a surprise, and it is a heartening one. Abigail wrote:

"Fifteen trillion dollars in debt and counting. Every year our government spends billions on defense, leaving insufficient funds for important things like education. This year, almost $30 million will be cut from my school district’s education budget. That means fewer teachers, and even less arts programs for students like myself. But why? Where will all of this money go?
Probably to help further fund our involvement in Afghanistan or Iraq — invasions costing more than $1 trillion when combined. Imagine if just one billion of those dollars were applied to our school districts here in Colorado. That would mean more after-school programs, more teachers, smaller classes, instruments, music and instructors for music programs that our nation’s students are desperate for. It’s time we start putting our money where we truly need it. It’s time for America to start raising scholars, not soldiers."

This is an excellent letter! It shows this teen's priorities are on much more solid ground than superficial personal concerns. Indeed, her vision and insight would put nearly all politicos to shame. The only small error she committed in her letter was underestimating the disgraceful costs of the occupations in Iraq and Afghanistan, which will come to more like $3.3 trillion when all is said and done, and all the returned vets' hospital treatments and therapies must be paid for by the taxpayers. But otherwise, she's nailed it. She's shown (and she knows) we have priorities all screwed up in this country. Our whole domestic tapestry is unravelling as we continue to stubbornly involve ourselves in nations which have no more respect for us - irrespective of the gazillion bytes of PR churned out each day.

Even current Secretary of Defense Robert Gates has admitted defense spending cuts must be made, though he seems to take issue with President Obama's conservative $400 billion proposed cuts over 12 years. In fact, this is ridiculous. The cuts ought to be more like $400 billion per YEAR! (Especially given the DoD budget has effectively doubled since 2000.) Just pulling our asses out of both Iraq and Afghanistan and leaving the idiotic nation-building behind would more than achieve that in the next three years. Cut most of the dumb, money-squandering armaments (including 'cloaked' helicopters which, while cool, aren't essential unless you're always into violating other nations' sovereignty via pre-meditated raids etc.) and you get even bigger savings.

I also totally disagree with Gates' assessment that "Americans would face tough choices" in a number of decisions, such as which weapons systems to eliminate, and the size of fighting units. Look, this is a no-brainer! We already are overstretched across the freaking globe in nearly 44 countries. DO we have to be cop of the world? I don't think so! Nor can we afford that role in our debt environment. As for fighting units, do we really need to continue to maintain bases in Japan, Germany and S. Korea? Last time I checked all those nations had formidable forces that could more than take care of themselves. Gates also whined in his last address: "A smaller military, no matter how superb, will be able to go fewer places and do fewer things." SO WHAT? As I said, we don't need to be scattered in 44 nations across the globe! (See Chapter Five: The State of the American Empire: How the U.S. Shapes the World). We need to be taking care of our own mammoth country and its PEOPLE, left ignored the past decade, and especially the crumbling infrastructure: roads, sewers, water mains, bridges... which is a much bigger threat to our security than some phantom bad guys some place. We have to get it into our fat heads we can't police and patrol the planet. This is foolishness.
Gates did also say that however vast the defense spending (the Pentagon approved the largest ever defense budget in February), it "was not the major cause of the nation's fiscal problems". However, he also added in the same breath that it was "nearly impossible to get accurate answers to where the money has been spent and how much". Good god, man! If you 'can't get accurate answers' there's no telling how much they're pissing away! And let's not forget that we still haven't turned up $1.1 TRILLION that the Pentagon "misplaced" around 1999-2000. This, of course, was well documented by former defense analyst Chuck Spinney in a memorable PBS interview with Bill Moyers in August, 2002. Spinney also pointed out that if money is given via legislation but never accounted for - say to the GAO - then the Pentagon itself becomes an unaccountable and unelected agent that undermines democracy. Spinney is also known for a September 2000 Defense Weekly commentary in which he called the move to increase the military budget from 2.9% to 4% of GDP "tantamount to a declaration of total war on Social Security and Medicare in the following decade." Well, he wasn't off on that one!

It is time our politicians and representatives get that into their heads, and begin now with massive cuts to tame the country's over-inflated military empire. Let's not forget it wasn't so much 'barbarians at the gate' that brought Rome to ruin, but military overstretch which all its taxes and property seizures could no longer pay for. They say those who forget the past are doomed to repeat it. Let's hope it's not too late to learn!

John F. Kennedy, had he lived, would likely have had ready a torrent of “I told you so’s”, in terms of the parlous and vexing state this nation finds itself in, from too many entanglements of only marginal value to our actual national security. He’d also have expressed anger that for all the warnings he’d delivered about “enforcing peace with American weapons of war” (his famous Pax Americana speech at American University, Washington, in June, 1963, see photo), nothing sank in, and President after President never heeded his advice, each preferring to remain hostage to the Military-Industrial complex. Though JFK wasn’t so prescient in a specific context, his American University speech did generically prefigure the horrific consequences if the U.S. insisted on being the policeman of the world, enforcing American terms of peace (via the noxious document NSC-68) with American weapons of war. This speech probably set the foundation for Kennedy’s later plan (under National Security Action Memorandum-263) to pull out of Vietnam (after the 1964 elections, when political blowback would be minimal). He could likely see that if the U.S. remained in Vietnam, the perils of a much wider war, along with consolidation of the military-industrial-oil complex, would be unavoidable. Alas, JFK was assassinated, and LBJ invoked NSAM-273 to repeal JFK's NSAM-263, and with the phony firing on the Maddox and Turner Joy by the N. Vietnamese, we were in for a penny, in for a pound: 58,000 killed and $269 billion in costs. One would have thought we'd have learned from 'Nam, but the phony Iraq intervention showed we forgot it all.
Thursday, May 26, 2011

As the discussion continues unabated about whether the Wednesday New York special election result was an indictment of Paul Ryan's Nazified plan to deny healthcare to seniors (I refuse to dignify it by calling it a "Medicare plan", far less a plan to "save Medicare"), it is well to go back into recent history and recall what life was like before Medicare came onstream in 1966. To put it succinctly: life for the aged in this country was generally nasty, brutish and short - with few options or appeals to assistance if one became seriously ill. Much of this is detailed in several chapters of the Oxford University Press monograph One Nation Uninsured, by Jill Quadagno, which gets to the bottom of why there is such massive political aversion in this country to any kind of genuine health care coverage that doesn't drag in the profit motive. Medicare is discussed because it putatively paved the way to at least get a 'leg in the door' on the all but closed-shop capitalist insurance front, stiffly guarded by the likes of the AMA.

By ca. 1960 there were some 19 million citizens over age 65, and some 185,000 physicians (p. 69). The options for medical care, however, were sparse, though social health insurance had been in the pipeline since the Truman administration. But by 1960 the most serious countermeasure to it was the Kerr-Mills plan - which basically confined assistance only to the "aged poor". By 1963 only 28 states had adopted any of it, and barely 148,000 seniors were covered. Again, this was out of a total population of 19 million seniors. Was Kerr-Mills a terrific deal? Hell no! In many states that established a Kerr-Mills format there still weren't the funds to finance it. This meant the older person and his immediate family had to be put on the hook for the money allocated to any care. Thus 12 states had 'family responsibility' provisions (p. 60-61) which "effectively imposed means tests on relatives of the aged, deterring many poor, elderly people from applying for support". The author cites as an example the state of Pennsylvania (p. 61), wherein "the elderly had to provide detailed information on their children's finances to qualify for Kerr-Mills". Meanwhile, in New York many seniors actually withdrew their applications on learning their children would be involved, and would have to cough up any extra money for care the state couldn't cover (ibid.). This also meant the adult children had to appear before state boards and answer direct questions concerning their assets and liabilities, how much mortgage they owed - if any - as well as bank account balances. Most seniors, naturally, weren't able to tolerate this level of humiliation and scrutiny of their offspring, not to mention being held liable. In other states under Kerr-Mills, the elder person had to sign away the deed to his home (if he owned one) to pay for all medical bills upon his or her decease. THIS is what life was like for the elderly in the days before Medicare.

What about any seniors not among the "aged poor"? Their only option was a miserly private insurance policy with huge costs and meager benefits. Policies typically covered "only a portion of hospital costs and no medical care" (p. 61).
This left the elderly with 67-75% of all the expenses to absorb, which puts the plan about on the same level as Paul Ryan's scheme (the GAO estimates seniors will have to pay at least 2/3 of all expenses), if the latter is ever introduced - say if the GOP gains all three branches of government next year, which would be a nightmare come true! As an example, "Continental Casualty and Mutual of Omaha provided only $10 a day for hospital charges and room and board, less than half the average cost". Of course NO prescription drugs were covered; all had to be paid for out of pocket. The average commercial plan - even given this miserliness - was quite expensive, ranging from $580 to $650 a year, when the median elder income was only $2,875. Thus, more than a fifth of income was gobbled up by first-tier medical costs, which could easily expand to 50-75% of income if a number of prescriptions were needed, and medical treatments, hospitalizations.

Quadagno also notes (p. 62) that "insurers typically skimmed off the younger, healthier elderly", thereby forcing premiums up for everyone else. Thus "many elderly were priced out of the market entirely". In many cases, the consequences were horrific, as illustrated by the case of an 80-year-old granny who allowed an umbilical hernia to go untreated, with the result that it ultimately protruded 18 inches out of her belly and eventually ruptured, with her bleeding to death on the kitchen floor while baking a cake (p. 63). Nor were such events exceptional. Again, there is no reason this couldn't happen under Ryan's plan, and there'd be every incentive for insurance companies to skim the healthier (and younger) seniors as in pre-Medicare days, and zero incentive not to. After all, with no government mandate for providing care, why should the profit-oriented insurance companies put themselves on a downward treadmill, or "losing wicket" as we call it in Barbados? They wouldn't if they had any grain of sense. Without a mandate or order from the government, you can also bet your sweet bippy they'd reject any elderly person with a pre-existing condition. This would be the proverbial no-brainer for them!

Thus, by the time JFK proposed a government health plan linked to Social Security, in 1960, America's seniors were more than ready. More than ready to stop being parasitized by commercial outfits, or humiliated by the likes of states under the odious Kerr-Mills plan. The main opponents were the AMA, which (p. 68) "ran newspaper ads and TV spots declaring Medicare was socialized medicine and a threat to freedom", and blowhards like Ronnie Reagan, who made idiotic recorded talks trying to scare people by asserting (ibid.): "One of the traditional methods of imposing statism on a people has been by way of medicine". Fortunately, most seniors who'd actually experienced the dregs of capitalist medical bestiality didn't buy this hog swill. They organized under groups like the National Council of Senior Citizens (see image) and turned the tables by imposing relentless pressure on representatives (the most intransigent of whom were Southern Democrats, whom LBJ finally had to confront and read the 'riot act' to). Eventually, the opposing voices were muted and Medicare was passed. Let's hope enough older voters become aware of this history before they remotely allow any plan like Paul Ryan's to take them back to the conditions of 1960!
Before moving on to more First Law considerations, heat capacity, and specific heat capacity, we look at the solution of the problem at the end of the previous blog (Basic Physics, Part 12):

(a) The external work done is W = P(V2 - V1) = 1.01 x 10^5 Pa (0.375 - 0.250) m^3, so W = 1.26 x 10^4 J, or about 12,600 Joules.

(b) delta U = n Cv,m (delta T) = 10(20.2 J/mol K)(T2 - T1), where n = 10 (taking the 20 g of gas to be hydrogen, molar mass 2 g/mol). For (T2 - T1), we first find T1 = 27 C = 273 + 27 = 300 K, and we then need to find the higher temperature T2. Since for an isobaric process V ~ T (P = const.), then V2/V1 = T2/T1, or T2 = (V2/V1) T1 = (300 K) x (0.375 m^3)/(0.250 m^3) = 450 K. Then (T2 - T1) = (450 K - 300 K) = 150 K, so: delta U = 10(20.2 J/mol K)(150 K) = 30,300 J.

(c) Heat applied = delta Q = n Cp,m (delta T) = 10(28.5 J/mol K)(150 K), so delta Q = 42,750 J.

Let's now go back to reiterate and summarize aspects of the First Law of Thermodynamics, by first noting the types of processes one can obtain under which conditions, given delta U = Q - W (another way to express the law, with delta U as subject):

(i) Adiabatic process (for Q = 0, i.e. delta U = -W)
(ii) Isobaric process (for P = constant)
(iii) Isovolumetric process (for V = constant)
(iv) Isothermal process (for T = constant)

Other important aspects to note in applying the 1st law:

(a) The conservation of energy statement of the 1st law is independent of path, i.e. (Q - W) is completely determined by the initial and final state, not intermediary states. Example: say a gas is going from initial state S(i) with P(i), V(i) to final state S(f) with P(f), V(f); then one finds that (Q - W) is the same for all paths connecting S(i) to S(f).

(b) Q is positive (Q > 0) when heat enters the system, and

(c) W is positive (W > 0) when work is done BY the system, and vice versa.

Now, on to heat capacity! This is a generic, as opposed to specific, quantity defined via the heat that must be transferred to produce a change in temperature, or: Q = C(T2 - T1). The specific heat capacity is c = C/m, where m is the mass. Then C = mc and Q = mc(T2 - T1). We also saw already: C' = C/n (= Cp,m, Cv,m), where C' is the molar heat capacity. So Q = nC'(T2 - T1).

The heat capacity has interesting applications apart from prosaic, terrestrial ones. For example, since space is a near-vacuum, m ~ 0 and c ~ 0, so little or no thermal capacity (C) exists. What this means is that energy from the Sun (via radiation) can be transferred through space without appreciably heating space. Space is 'cold' not because it absolutely 'lacks heat' but because its density (of particles, hence mass) is too low to have much quantity of heat, or 'thermal capacity'. What about in the vicinity of Earth? Similar arguments apply. The higher one is above the Earth, the lower the thermal capacity of the medium - so the lower the amount of heat that can be retained, or measured. The lower in altitude one goes, the greater the number of air particles, and the greater the retention of heat - especially if water vapor is also included (since water has a large specific heat capacity). What happens is that the radiant energy (mainly from the infrared spectral region) transfers kinetic energy to the molecules of the atmosphere, thereby raising their internal energy: U = 3kT/2 per molecule. This internal energy, together with the thermal capacity of the air (C), is what enables us to feel warmth. Conversely, the relative absence of such molecules at higher altitudes makes us feel colder.
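Before turning to how specific heat capacities are measured, here is a short Python check - an aside of mine, not part of the original solution - that reproduces the three answers worked above using only the data given in the problem:

```python
# Quick numerical check of the isobaric-expansion problem worked above.
P = 1.01e5              # constant pressure, in Pa
V1, V2 = 0.250, 0.375   # initial and final volumes, in m^3
T1 = 273 + 27           # initial temperature (27 C) in Kelvin
n = 10                  # moles of gas (20 g taken as hydrogen, 2 g/mol)
Cv_m = 20.2             # molar heat capacity at constant volume, J/(mol K)
Cp_m = 28.5             # molar heat capacity at constant pressure, J/(mol K)

# (a) external work at constant pressure: W = P (V2 - V1)
W = P * (V2 - V1)

# For an isobaric process V ~ T, so T2 = T1 * (V2 / V1)
T2 = T1 * (V2 / V1)
dT = T2 - T1

# (b) change in internal energy: delta U = n Cv,m delta T
dU = n * Cv_m * dT

# (c) heat supplied: delta Q = n Cp,m delta T
dQ = n * Cp_m * dT

print(f"W  = {W:.0f} J")    # ~12,625 J, i.e. about 1.26 x 10^4 J
print(f"dU = {dU:.0f} J")   # 30,300 J
print(f"dQ = {dQ:.0f} J")   # 42,750 J
# dU + W is close to dQ; the small gap comes from the rounded heat capacities.
print(f"dU + W = {dU + W:.0f} J (compare with dQ)")
```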
Specific heat capacity can be measured in simple lab experiments, using the apparatus set out in the graphic. The image shows: an outer calorimeter (left), inner calorimeter cup (next), thermometer (far right - inserted in the calorimeter cap), and a metal sample for which we may seek to find the specific heat capacity - call it c(x). The practical procedure is then straightforward. Assuming a mass of water (m_w) of, say, 200 g, and a mass of calorimeter (m_cal) which must take the inner cup + outer into account, then we can find what we need using a basic heat conservation equation:

heat lost by hot substance = heat gained by cold substance + heat gained by calorimeter

Then let the unknown metal (mass m_x) be heated to 100 C, then deposited in the 200 g of water at temperature 20 C in the calorimeter cup (which must be weighed, of course). Let the calorimeter be of mass 0.1 kg and made of copper, for which we know the specific heat capacity c_Cu = 400 J/kg K; then we have:

-m_x c(x)(T - T_x) = m_w c(w)(T - T_w) + m_cal c_Cu (T - T_w)

This assumes no net heat loss, and also that the initial temperatures of the calorimeter and the water are the same, e.g. T_w = 20 C. Obviously then, if the unknown metal x is heated to 100 C and dropped into the calorimeter, heat will be gained by both the water and the calorimeter, even as heat is lost from the specimen. Now, if c(w) is known to be 4,200 J/kg K, we ought to be able to work out the unknown specific heat capacity c(x) if, say, the mass m_x is known. Such calorimetric experiments are extremely important since they show several principles at once.

Example problem: For the experimental layout described, let the final temperature attained by the water + calorimeter be 25 C. Obtain the unknown specific heat capacity, c(x), if m_x = 0.2 kg.

Then we have: T = 25 C, T_w = 20 C and T_x = 100 C. We also have all other quantities, so we can obtain c(x). (Let us also bear in mind here that differences in Celsius degrees = differences in Kelvin degrees.) Then we may write:

c(x) = [m_w c(w)(T - T_w) + m_cal c_Cu (T - T_w)]/[m_x (T_x - T)]

Substituting the measured values of the data:

c(x) = [0.2 kg (4200 J/kg K)(5 K) + 0.1 kg (400 J/kg K)(5 K)]/[0.2 kg (75 K)]

c(x) = [4200 J + 200 J]/15 kg K = 4400 J/15 kg K = 293 J/kg K

(which is most likely an alloy of copper and silver; for comparison, c(Ag) = 234 J/kg K).

Practice problems:

(1) Let 5 million calories of solar energy be absorbed by 2 cubic meters of hydrogen gas 100 km above the Earth. If the particle density of the gas is 10,000 atoms per cubic meter, estimate the heat capacity of the gas volume. (Atomic mass of a hydrogen atom, 1 u ~ 1.6 x 10^-27 kg.)

(2) 10 lbs. of iron and 5 lbs. of aluminum - both at 200 F - are added to 10 lb. of water at 40 F contained in a vessel whose thermal capacity is 0.5 Btu/deg F. Calculate the final temperature if c(Al) = 0.21 cal/g C and c(Fe) = 0.11 cal/g C. (Note: specific heat capacities have the same numerical value in calories per gram per degree Celsius as in Btu per pound per degree Fahrenheit.)

(3) A calorimeter and its contents have a total thermal (heat) capacity of C = 200 cal/deg C. A body of mass 210 g at temperature 80 C is placed in the calorimeter, resulting in a temperature increase from 10 C to 20 C. Compute the specific heat capacity of the body.

Wednesday, May 25, 2011

Well, who'da thought? Democrat Kathy Hochul, running on a firm pro-Medicare stance, bested the (much better funded) Reep candidate Jane Corwin by 47-43% in New York's 26th congressional district special election.
The 'W' is being touted, as well it should be, as the first major successful test of the pro-Medicare stance vs. the horrific Medicare voucher plan espoused by Paul Ryan and his fellow numbskulls. What this win should also do is serve as a template by which to beat Repup heinie into oblivion next year, and perhaps take back the House, and amass an even wider majority in the Senate. The point? The Reeps overplayed their hand and now they need to be made to pay through their eyes, ears, nose, and any other place for their hubris!

Of course, it wasn't ten minutes after the declared win before the inevitable whining began. Paul Ryan himself accused Democrats of distorting his proposal, averring: “If you can scare seniors into thinking their current benefits are affected, that’s going to have an effect. That’s exactly what happened here.” But this is horse shit. The fact is that it's immaterial whether "current" benefits are affected or not. This was roundly exposed during the recent Easter recess, during which Ryan took his dog and pony show around Wisconsin trying to win support. It fell like a frickin' lead balloon! (Often amidst catcalls, howls of derision and screams of 'Liar!') The "plan" was eviscerated in confab after confab held by Ryan in various venues. People in the assorted audiences, many of whom had actually run the numbers, asked Ryan pointedly how they were going to be able to afford to pay nearly two thirds of their own expenses out of pocket when many of their relatives were already struggling (in existing Medicare) to pay roughly 20%. Ryan had no answers other than to try the ploy of asserting that current seniors (55 and over) wouldn't be affected by the changes, only those currently under 55 (who would turn 65 after 2021). But again, people were too smart for his palaver, and the attempt to split up elder interests didn't work. After some in the audience heckled him with cries of "Liar!" and "Bullshit!", others referenced how they didn't want their younger relatives (already having a hard time finding work for decent pay) to have this onerous monkey on their backs. From then on, you could see Ryan chastened and backing off.

The fact, still unable to be processed by the foolish, ideological Reeps, is that Ryan's Nazi-plan was dead in the water from the word 'Go!' - hell, even Newt Gingrich saw it! Anyone who's tried to obtain private health insurance when over the age of 60 knew the score. Private insurers simply weren't interested in insuring a group that was 5-6 times more likely to be ill or injured or need care than a 35-year-old. Well, they weren't interested unless you could cough up LOTS of moola, usually $850 or more a month, plus pay a high deductible of $5,000/year and sometimes more. This is what one would get under the Ryan Medicare "plan", and that was assuming no pre-existing conditions, unheard of for most over-60-year-olds. Thus, Kathy Hochul's constant refrain during her campaign - "Hey, they just hand you $8,000 to buy insurance, then send you on your way with a 'Good Luck!'" - is spot-on. THAT is exactly what would unfold in the Ryan scenario, and anyone who'd try to tell you differently is either a liar, an ignoramus or an idiot. Or maybe all three! As Ms. Hochul noted, the advantage of standard government Medicare is that it mandates insurers MUST accept a person over 65 who is qualified (i.e. has worked at least 40 quarters, or is disabled), pre-existing conditions or not.
It also mandates certain price structures for operations and treatments, and keeps patient costs lower than would otherwise be the case. Even so, they're not insignificant. Starting from July I will have to cough up around $3,300 a year, and that is assuming no major operations or interventions. So it's not a freebie. Yes, as I've written until blue in the face, there are ways to ensure Medicare's long-term solvency, and they don't require draconian spending cuts - especially coupled with preserving atrocious tax cuts for the wealthiest, as Paul Ryan wants to do. They require only moderate changes, like enabling the government to bargain for the lowest prescription prices, as the VA does, or, if that can't work, allowing the import of lower cost Canadian drugs. And if PhRMA squeals like stuck pigs, tell them to fuck off. Also, one can eliminate the 'Medicare Advantage' plans, which spend $12 billion more a year on average than standard Medicare. Further, the FICA limits can be increased to at least $250 grand, along with no more Bush tax cuts - for middle OR wealthy classes. All these in tandem can resolve the insolvency problem, but they require Dems especially to make the honest determination and run with it, as opposed to falling into the Reeps' spending cut trap. If they're dumb enough to do that, all bets are off! Hochul's win is devastating to the repups, but only if the Demos use it and don't find a way (next year) to yet again seize defeat from the jaws of victory. That means they not only must run on her same justified-fear (of the Ryan plan) template, but ALSO have the heart and courage to define, articulate and embrace Medicare changes that don't depend on massive spending cuts. I already listed them above; the question is whether enough Dems will have the cojones to embrace them.

In a blog just over a year ago, I cited an article in Skeptic Magazine (Vol. 15, No. 2, 2009) by James Allen Cheyne, who made reference to a compendium of research which has shown an inverse correlation between religious belief and intelligence as measured by IQ. Cheyne observed (ibid.): "Correlations between measures of intelligence and reported religious belief are remarkably consistent. Approximately 90% of all the studies ever conducted have reported that... as intelligence (as measured by IQ) goes up, religious belief goes down." At the time I noted it didn't appear so fantastic a claim, based on the statistics he cited, coupled with one's realization that anyone with just a moderate IQ (105-115) should be able to see that talking snakes (as in the "Garden of Eden"), guys living in whales' bellies, and a man who can walk on water... are all preposterous. No genuinely intelligent person could buy into any of these any more than a smart kid would buy into Santa Claus.

In more depth, Cheyne made reference to a particular type of thought he called ACH thinking - abstract, categorical and hypothetical - which appeared to be mostly missing in believers and which figures prominently on many IQ tests (such as the Raven's and Wechsler Similarities tests). Such tests featured many questions which constructed an abstract hypothetical from a particular category, then asked the person to predict the consequences, if any. For example, some ACH-type questions would be:

1) If Venus and Earth were to exchange orbits, what (if anything) would happen as a consequence to each planet to change it from its current conditions?

2) If a hollow equilateral pyramid were "opened" up and spread out in two dimensions, how would it appear?
3) We observe the red shift of galaxy clusters and interpret cosmic expansion. What would we conclude if all galaxy clusters showed a blue shift - but only up to 1 billion light years distant and no more?

4) If the gravity on Earth were suddenly decreased by half, theorize how this would affect energy costs in two named modes of transportation.

5) Imagine a sphere turned inside out; how would it look in three dimensions? In two?

None of the above are particularly "easy", but neither are they too difficult for a person aware of basic facts (e.g. that Venus is already closer to the Sun than Earth by about 1/3). They do, however, require the ability to abstract from those facts to the given hypothetical, infer the new situation, and assess it. This is the very ability that Cheyne shows is missing as one examines results for religious believers. At the time of the blog, the question of what underlies this IQ deficit in believers was mostly unanswered, but now there may be an empirical basis. (Particularly as Cheyne's largest IQ deficits were observed statistically in Christian Fundamentalists.)

In a new study, completed at Duke University Medical Center and funded by the National Institutes of Health and the Templeton Foundation, it was found that Protestants who did not have a "born again" experience had significantly more gray matter than either those who reported a life-changing religious experience or unaffiliated (but still religious) adults. The measurements focused on at least two MRI scans of the hippocampus region of 268 adults between 1994 and 2005. Those identified as Protestant who did not have a religious conversion or born-again experience — more common among their evangelical brethren — had a bigger hippocampus, as did atheists who had no religious orientation, period. Also interesting is that those who professed a Catholic affiliation also had smaller brains, based on hippocampus size. (A putative comparison of brain scans is shown in the accompanying graphic, but not exactly to scale, so the magnifications of the atheist brain scan and the mainline Protestant one (center) must be adjusted by longitudinal factors of about 1:11 and 1:14 smaller, respectively, compared to the fundy scan.)

Biologically, we know the hippocampus is an area buried deep in the brain that helps regulate emotion and memory. Atrophy or shrinkage in this region of the brain has long been linked to mental health problems such as depression, dementia and Alzheimer's disease. Damage, which may well be incepted by stress (say the stress of belonging to a minority group, as hypothesized by the researchers), may be one reason for the relative brain size deficit. But I believe a much more likely one (which will have to be tested and confirmed in the future with more detailed scans, say using PET - positron emission tomography - and SPECT - single photon emission computed tomography) is that long-term disuse of the memory centers (based in the hippocampus) leads inevitably to long-term decline (the average age of participants in the study was 58). In other words, "use it or lose it". If, then, the believer constantly disavows facts in critical thinking, and instead of marshalling those facts - say in original thought - has a tendency to rely on a single book or bible to "do his thinking for him", then his brain won't develop the flexibility or capacity of thought needed to adapt, and it will lose mass (cells) over time, i.e. shrink.
This was already theorized as long ago as 1991 by Robert Ornstein in his Evolution of Consciousness. The same can apply to Catholics, also found to have shrunken hippocampi, because they will reject their own critical thought and factual (memory) application in favor of what the Pope or Vatican says. They will also tend to uncritically accept "saints", miracles and other bilge and folderol as replacements for reality. In each of these instances there will also plausibly be recurring failures in taking specialized tests (or IQ tests) which contain a large number of abstract, categorical and hypothetical (i.e. ACH) questions. Obviously, more research and supporting tests need to be conducted, but it seems likely that at least the initial findings comport well with James Allen Cheyne's findings of lower IQs for believers, especially fundies. This ought to tell these folks that there is something deleterious to the brain in holding fast to 2,000-year-old sayings (most butchered and bowdlerized) from sheep-herding, semi-literate, and scientifically pre-literate nomads!

As I perused the Milwaukee Journal Sentinel Online several days ago, one story caught my eye: 'Retirees Underestimate Health Costs'. The piece mentioned how too many grossly underestimate the out-of-pocket costs that will face them, even with Medicare. I posted a comment observing that this shows the Republicans are out of it with their Medicare repair plans, since the out-of-pocket costs will be MUCH higher (as with Ryan's plan). Also, the Journal Sentinel piece gave the lie to the widely circulated Repuke myth that Medicare is basically a "freebie". No, it certainly is not! But on scanning many of the other comments I was astounded to behold one after the other bearing essentially the same refrain, which might be summarized as follows: Well, anyone coulda told the feds that none of these entitlements would be sustainable! People, seniors are just gonna have to SAVE more and work longer!

Oh yeah? Says WHO? Most of these folks, so righteous in their anti-entitlement mentality, have no clue at all, not one in a million, that an American jobless future is already upon us and will not be getting any easier, anytime soon. So once more, it's time to pull back the heavy lids of delusion and open some brains to the stiff sunlight of reality! (Hoping that Jack Nicholson's famous refrain from the movie 'A Few Good Men' - "You can't handle the TRUTH!" - won't apply to any of my readers.)

At least two recent extensive articles, one in TIME and another in The Economist, shed much needed light, along with a recently released working paper for the Council on Foreign Relations authored by Michael Spence and Sandile Hlatshwayo. The latter specifically warned that "growth and employment are set to diverge for decades in the U.S.". What does this mean exactly? It means that for the next several decades, and perhaps forever, economic growth as measured by GDP and employment will be decoupled. Whereas before - much before! - more workers meant more economic growth, that will no longer apply. Now, FEWER workers - or should I say FEWER AMERICAN workers - will translate into higher GDP. The jobless future is here, but actually it has been with us for some time! As far back as 1995, The Wall Street Journal noted the 'Million Missing Men' in an article by the same name, estimating that one million workers aged 55 and over were absent from the work force. They had evidently been downsized, then vanished.
However, not really! They simply maintained low profiles and, after searching for decently remunerative work, gave up and dropped out. Many lived off their wives' earnings, but many others lived off savings and investments of their own, or took odd jobs just to keep their heads above water as they reduced their consumption dramatically - and maybe lived with a relative or friend. In its own article ('Decline of the Working Man', p. 75, April 30th) The Economist observes: "Of all the rich, Group of Seven economies, America has the lowest share of 'prime age' males in work: just over 80% of those aged between 25 and 54 have a job, compared to 95% in 1995."

Not mentioned, but often noted in assorted AARP Bulletins, are the 45% of those over 55 who have no job. Not even part-time work. Even the 80% working figure given by The Economist is somewhat overblown because, going by the latest BLS stats and census data, barely half of those 80% have full-time jobs. The rest are underemployed in part-time jobs, often patching two or three pissant-pay jobs together to make ends meet. As authors William Wolman and Anne Colamosca (The Judas Economy: The Triumph of Capital and the Betrayal of Work) have noted, the effect of chronic underemployment, especially among those over 50, is just as pernicious as long-term unemployment. It means, for example, that a large swath of people will not be able to save enough to support any kind of retirement scenario and will depend almost exclusively on Social Security. It is these, of course, for whom Medicare will be most critical to survival, and for whom the injunction to "work more and save more" is more a cruel joke worthy of a clown or moron.

The Economist, as good a journal as it is, unfortunately gets the base causes for this situation totally wrong, which is mystifying. They insist, for example, that many Americans have "let their schooling slide", meaning that they often haven't revamped their technical skills or trained for new fields. But this is false. Many reports (including a special series in The Denver Post some four years ago) noted how people in Denver - let go after the tech bust - had retrained only to find their new jobs sent out to India, because of lower wages and few or no benefits there. The series also supported Jeremy Rifkin's thesis (in The End of Work) that high-tech and white-collar redundancy would follow that of lower-skilled workers. In the computer-tech domain this is exactly what has happened. While we do see the occasional piece bragging about Google hiring 12,000 new workers, say in California, the mainstream media leaves unreported how many hundreds of thousands of computer-tech jobs are dispatched to Bangalore or Delhi in a given year. The Post series noted that youngsters planning college aren't stupid either, and having seen swaths of good computer jobs dispatched offshore (including, perhaps, their parents' jobs), they aren't that convinced a computer science degree will get them very far anymore. Nor are they willing to gamble (leaving university with debts in the tens of thousands of dollars) that they'll nail a Google job by beating 100,000 to 1 odds. Hence, more and more are turning to medical tech and health services. But even those jobs will be terminated or never emerge if the Republicans manage to overturn Obama's Affordable Care Act!
It is precisely because up to 35 million more patients will be added by 2014 that those medical care jobs have a potential to materialize, but not if the legislation is torpedoed by Repuke bean counters who are penny-wise and pound-foolish!

Meanwhile, the TIME piece (May 20, p. 36) notes two elements that appear to have been only superficially covered by The Economist and which are playing new roles in engendering an American jobless future:

1) The re-definition of productivity: that is, productivity is now defined by "cutting jobs and finding ways of making the same products with fewer people". As pointed out by the author (F. Zakaria): "At many major companies profits have returned to 2007 levels but with many thousands fewer workers".

2) The force of globalization: making a single market for many goods and services which don't require American workers for production, OR American consumers to buy them. This single market amounts to more than 400 million workers having entered the global labor force, from China, India, South Africa, Indonesia and elsewhere. All now with money to spend, and all willing to work for one third or less of an American's pay, and for NO benefits!

Both of these are ominous forces, and any American who still insists on wearing rose-colored glasses has only himself to blame if blind-sided. It is clear that these coupled forces will continue with no imminent signs of abating, unless some hidden, unfactored counterforce causes one or both to halt. In the case of (2) the most likely source of a halt would be a massive energy crisis (maybe induced by Peak Oil arriving very quickly with its worst manifestations) or perhaps a pandemic like Bird Flu or another "1918 Spanish Flu" epidemic (since that virus was recently re-engineered using frozen tissue extracted from victims of the virus found encased in ice). But even here, the costs inflicted on the global markets would likely be every bit as parlous as those inflicted on Americans. The most probable result would be that everyone loses, including Americans.

The only remaining hope is to persuade American companies to begin to re-hire American workers for decent-paying jobs with benefits, not merely McJobs which (at an average remuneration of $18,000) simply won't allow people to save enough not to have to depend almost entirely on "entitlements". (Indeed, Walmart has consistently advised its workers who can't afford its health plans to try to sign on to Medicaid.) The question is how to entice them to create the jobs, and the only way I see is much higher corporate taxation (with zero loopholes) if they don't. Of course, another option is to resurrect the 700,000 public service jobs cut by Reagan during the hysteria over "big bad government". We can recall that a goodly number of these were air traffic controller jobs, a loss we're still paying the price for. And while pundits laugh and make jokes about sleeping controllers, no one wants to go near the real reason: too few experienced controllers on the job (numbers necessitated by America's antiquated airline route system, as noted in an earlier blog). Then there is the massive infrastructure repair needed, an effort that could easily employ a skilled army of public workers for YEARS, to build new bridges, water and sewer mains and highways. But in the deficit-obsession era, again no one wants to go anywhere near this! So, we just allow our infrastructure - the backbone of our nation - to degenerate into 3rd world status.
Apart from these possible major influences or checks on the current 'jobless productivity' dynamic, most Americans face an extremely impoverished and bleak future. Which makes it even more critical that the remaining social support systems, including Social Security and Medicare, be ferociously protected against any further weakening - whether by Nazified and brutish tea bagging Repukes, or by pussified, wussified Demos afraid of their own shadows and desperately needing backbone transplants, along with a pointed reminder of the PEOPLE their party used to represent before too much corporate campaign cash came flooding in!

Having examined how heat and mechanics are related, and specifically how Newton's laws give rise to the basic kinetic theory for ideal gases, we now enter the realm of thermodynamics proper. We begin by further inquiring into the link between kinetic theory and temperature, which we already saw:

PV = 2/3 N(½mv^2)

(where v^2 denotes the mean square molecular speed). This can be compared to what is called the "empirical equation of state for an ideal gas":

PV = NkT, where k is the Boltzmann constant.

Equating the two:

2/3 N(½mv^2) = NkT

And solving for T, the temperature:

T = (2/(3k)) (½mv^2)

This discloses the direct link between temperature and the microscopic behavior of a gas. Thus, temperature T is indeed a direct measure of the average molecular kinetic energy of a gas. We can also write this as:

3kT/2 = ½mv^2

Now, the total translational kinetic energy (for all N molecules of the gas) is just:

E = N(½mv^2) = N(3kT/2) = 3nRT/2

where we have replaced k using k = R/N_A, with R the molar gas constant and N_A the Avogadro number:

R = 8.3 J/mol-K

Now, with these preliminaries out of the way, we can explore further the First Law of Thermodynamics (introduced in the two earlier blogs), but now in terms of the experimental set-up shown. Here we have an apparatus consisting of a gas confined by a movable piston such that when heat (Q) is added using the Bunsen burner, external work W (= P(V2 - V1)) is done in expanding the contained gas and hence pushing the piston upward. From the First Law:

delta Q = delta U + delta W, or delta Q = delta U + P(V2 - V1) = delta U + P(delta V)

Note again that delta U includes translational, rotational and vibrational kinetic molecular energies. The pressure itself is P = F(a)/A, where the applied force F(a) > mg and A is the area of the piston.

Now consider the experiment in the context of keeping the pressure P constant; then we say the process is isobaric. We also allow that it is reversible: in other words, just as I can increase Q to do work and expand the gas, so also I can reduce Q (by lowering the heat of the burner) to contract the gas. Given n moles of an ideal gas (taking n = const.), we can write:

delta Q = n Cp,m (delta T)

where Cp,m = delta Q/[n (delta T)] is the molar heat capacity at constant pressure.

Then for an ideal gas, taking only V and T as changing:

PV = nRT  ->  P(delta V) = nR(delta T)

and delta Q = delta U + delta W becomes:

n Cp,m (delta T) = n Cv,m (delta T) + nR(delta T)

since delta U = n Cv,m (delta T), where Cv,m is the molar heat capacity at constant volume. Canceling out the delta T's:

Cp,m = Cv,m + R, or R = Cp,m - Cv,m

In other words, the molar gas constant R is the difference between the molar heat capacity at constant pressure and the molar heat capacity at constant volume.

Example: One mole of a gas has a volume of 0.0223 cubic meters at a pressure P = 1.01 x 10^5 N/m^2 at 0 degrees Celsius. 
If the molar heat capacity at constant pressure is 28.5 J/mol-K, find the molar heat capacity at constant volume, Cv,m.

We have PV = nRT, so with n = 1 mole:

R = PV/T = [(1.01 x 10^5 Pa)(0.0223 m^3)]/273 K

(Note that 0 C = 273 Kelvin (K), and for pressure 1 Pa (Pascal) = 1 N/m^2.)

Then R = 8.3 J/mol-K, and:

Cv,m = Cp,m - R = (28.5 - 8.3) J/mol-K = 20.2 J/mol-K

Problem for ambitious readers: 20 g of a gas initially at 27 C is heated at a constant pressure of 101 kPa (kiloPascals), so that its volume increases from 0.250 m^3 to 0.375 m^3. Find:

a) the external work done in the expansion,

b) the increase in the internal energy U, and

c) the quantity of heat (Q) supplied to achieve the expansion.
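For readers who want to check the arithmetic, here is a minimal Python sketch (not part of the original post) that reproduces the worked example above and computes part (a) of the reader problem from W = P(delta V). Parts (b) and (c) are left symbolic, since they require the particular gas's molar heat capacities, which the problem statement does not give; the variable names are my own.

```python
# Minimal sketch: check the worked example and part (a) of the reader problem.
# Assumptions: ideal-gas behavior; variable names are illustrative, not from the original post.

# --- Worked example: one mole at 0 C and atmospheric pressure ---
P = 1.01e5      # pressure, N/m^2 (Pa)
V = 0.0223      # molar volume, m^3
T = 273.0       # temperature, K (0 degrees Celsius)
n = 1.0         # moles

R = P * V / (n * T)          # molar gas constant from PV = nRT
Cp = 28.5                    # given molar heat capacity at constant pressure, J/(mol K)
Cv = Cp - R                  # Cp,m - Cv,m = R for an ideal gas

print(f"R  = {R:.1f} J/(mol K)")    # ~8.3
print(f"Cv = {Cv:.1f} J/(mol K)")   # ~20.2

# --- Reader problem, part (a): isobaric expansion ---
P2 = 101e3                   # constant pressure, Pa
V1, V2 = 0.250, 0.375        # initial and final volumes, m^3
W = P2 * (V2 - V1)           # external work done by the gas, W = P * delta V
print(f"W  = {W/1000:.2f} kJ")      # ~12.6 kJ

# Parts (b) and (c) would follow from delta U = n*Cv,m*delta T and
# delta Q = delta U + W once the gas's Cv,m (and its molar mass, to get n) is known.
```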
http://brane-space.blogspot.com/2011_05_01_archive.html
Teaching Geography: Workshop 8 Global Forces / Local Impact Readings for Workshop 8 The following material comes from Chapter 4 of Geography for Life. You may read it here or in its complete form in your text. For additional readings, go to Resources. The National Geography Standards for Workshop 8 The National Geography Standards highlighted in this workshop include Standards 3, 8, 11, 14, and 16. As you read, be thinking about how the standards apply in lessons you may have taught. Standard 3: How to Analyze the Spatial Organization of People, Places, and Environments on the Earth's Surface. Thinking in spatial terms is essential to knowing and applying geography. It enables students to take an active, questioning approach to the world around them, and to ask what, where, when, and why questions about people, places, and environments. Thinking spatially enables students to formulate answers to critical questions about past, present, and future patterns of spatial organization, to anticipate the results of events in different locations, and to predict what might happen given specific conditions. Spatial concepts and generalizations are powerful tools for explaining the world at all scales, local to global. They are the building blocks on which geographic understanding develops. Thinking in spatial terms means having the ability to describe and analyze the spatial organization of people, places, and environments on Earth's surface. It is an ability that is central to a person being geographically literate. Geographers refer to both the features of Earth's surface and activities that take place on Earth's surface as phenomena. The phenomena may be physical (topography, streams and rivers, climates, vegetation types, soils), human (towns and cities, population, highways, trade flows, the spread of a disease, national parks), or physical and human taken together (beach resorts in relation to climate, topography, or major population centers). The location and arrangement of both physical and human phenomena form regular and recurring patterns. The description of a pattern of spatial organization begins by breaking it into its simplest components: points, lines, areas, and volumes. These four elements describe the spatial properties of objects: a school can be thought of as a point connected by roads (which are lines) leading to nearby parks and neighborhoods (which are areas), whereas a lake in a park can be thought of as a volume. The next step in the descriptive process is to use such concepts as location, distance, direction, density, and arrangement (linear, grid-like, random) to capture the relationships between the elements of the pattern. Thus the U.S. interstate highway system can be described as lines connecting points over an area - the arrangement is partly grid-like (with north-south and east-west routes as in the central United States) and partly radial or star-shaped (as in the highways centered on Atlanta) - and the pattern of interstates is denser in the East than it is in the West. The analysis of a pattern of spatial organization proceeds with the use of such concepts as movement and flow, diffusion, cost of distance, hierarchy, linkage, and accessibility to explain the reasons for patterns and the functioning of the world. In the case of a physical pattern, such as a river system, there is a complex hierarchical arrangement linking small streams with small drainage basins and large rivers with drainage basins that are the sum total of all of the smaller drainage basins. 
There are proportional spatial relationships between stream and river length, width, volume, speed, and drainage basin area. The gradual changes that can occur in these properties of a river system are related to climate, topography, and geology. Central to geography is the belief that there is pattern, regularity, and reason to the locations of physical and human phenomena on Earth's surface and that there are spatial structure and spatial processes that give rise to them. Students must be encouraged to think about all aspects of the spatial organization of their world. Understanding the distribution and arrangement of the Earth's physical and human features depends on analyzing data gathered from observation and field study, working with maps and other geographic representations, and posing geographic questions and deriving geographic answers. Spatial relationships, spatial structure, and spatial processes are simple to understand, despite their apparent unfamiliarity. For example, the spatial organization of human settlement on Earth's surface is generally a pattern of a few large cities, which are widely spaced and many smaller towns, which are closer together. A comparative analysis of those cities and towns shows that cities offer a wide range of goods and services whereas small towns offer fewer goods and services. Taken together, the description and the analysis explain why consumers shop where they do, why they often buy different products at different locations, and also why changes occur in this spatial pattern. Understanding patterns of spatial organization enables the geographically informed person to answer three fundamental geographic questions: Why are these phenomena located in these places? How did they get there? Why is this pattern significant? Description and analysis of patterns of spatial organization must occur at scales ranging from local to global. Students confront a world that is increasingly interdependent. Widely separated places are interconnected as a consequence of improved transportation and communication networks. Human decisions at one location have physical impacts at another location. (For example, the decision to burn coal rather than oil in a power plant may result in acid rain damaging vegetation hundreds of miles away. Understanding such spatial linkages requires that students become familiar with a range of spatial concepts and models that can be used to describe and analyze patterns of spatial organization. This knowledge can be grounded in the students' own immediate experiences, and yet it will give the students the power to understand the arrangement of physical and human geographic phenomena anywhere on Earth. Standard 8: The Characteristics and Spatial Distribution of Ecosystems on Earth's Surface. Ecosystems are a key element in the viability of planet Earth as human home. Populations of different plants and animals that live and interact together are called a community. When such a community interacts with the other three components of the physical environment - atmosphere, hydrosphere, and lithosphere - the result is an ecosystem. The cycles of flows and interconnections - physical, chemical, and biological - between the parts of ecosystems form the mosaic of Earth's environments. The geographically informed person needs to understand the spatial distribution, origins, functioning, and maintenance of different ecosystems and to comprehend how humans have intentionally or inadvertently modified these ecosystems. 
Ecosystems form distinct regions on Earth's surface, which vary in size, shape, and complexity. They exist at a variety of scales, from small and very localized areas (e.g., a single stand of oak trees or a clump of xerophytic grasses) to larger areas with precise geographic boundaries (e.g., a pond, desert biome, island, or beach). Larger scale ecosystems can form continent-wide belts, such as the tundra, taiga, and steppe of northern Asia. The largest ecosystem is the planet itself. All elements of the environment, physical and human, are part of several different but nested ecosystems. Ecosystems, powered by solar energy, are dynamic and ever-changing. Changes in one ecosystem ripple through others with varying degrees of impact. As self-regulating open systems that maintain flows of energy and matter, they naturally move toward maturity, stability, and balance in the absence of major disturbances. In ecological terms, the physical environment can be seen as an interdependent web of production and consumption cycles. The atmosphere keeps plants and animals alive through solar energy, chemical exchanges (e.g., nitrogen-fixing and photosynthesis), and the provision of water. Through evapotranspiration the atmosphere and plants help to purify water. Plants provide the energy to keep animals alive either directly through consumption or indirectly through their death and decay into the soil, where the resultant chemicals are taken up by new plants. Soils keep plants and animals alive and work to cleanse water. The root systems of plants and the mechanical and chemical effects of water percolating through bedrock create new soil layers. Ecosystems therefore help to recycle chemicals needed by living things to survive, redistribute waste products, control many of the pests that cause disease in both humans and plants, and offer a huge pool of resources for humans and other living creatures. However, the stability and balance of ecosystems can be altered by large-scale natural events such as El Niño, volcanic eruptions, fire, or drought. But ecosystems are more drastically transformed by human activities. The web of ecological interdependency is fragile. Human intervention can shatter the balance of energy production and consumption. For example, the overgrazing of pasturelands, coupled with a period of drought, can lead to vegetation loss, the exposure of topsoil layers, and massive soil erosion (as occurred in the 1930s Dust Bowl); tropical forest clear-cutting can lead to soil erosion and ecological breakdown, as is currently occurring in Amazonia; the construction of oil pipelines in tundra environments can threaten the movements of the caribou herds on which indigenous Inuit populations depend. By knowing how ecosystems operate and change, students are able to understand the basic principles that should guide programs for environmental management. Students can understand the ways in which they are dependent on the living and nonliving systems of Earth for their survival. Knowing about ecosystems will enable them to learn how to make reasoned decisions, anticipate the consequences of their choices, and assume responsibility for the outcomes of their choices about the use of the physical environment. It is important that students become well-informed regarding ecosystem issues so they can evaluate conflicting points of view on the use of natural resources. 
The degree to which present and future generations understand their critical role in the natural functioning of ecosystems will determine in large measure the quality of human life on Earth. Standard 11: The Patterns and Networks of Economic Interdependence on Earth's Surface. Resources are unevenly scattered across the surface of Earth, and no country has all of the resources it needs to survive and grow. Thus each country must trade with others, and Earth is a world of increasing global economic interdependence. Accordingly, the geographically informed person understands the spatial organization of economic, transportation, and communication systems, which produce and exchange the great variety of commodities - raw materials, manufactured goods, capital, and services - which constitute the global economy. The spatial dimensions of economic activity and global interdependence are visible everywhere. Trucks haul frozen vegetables to markets hundreds of miles from growing areas and processing plants. Airplanes move large numbers of business passengers or vacationers. Highways, especially in developed countries, carry the cars of many commuters, tourists, and other travelers. The labels on products sold in American supermarkets typically identify the products as coming from other U.S. states and from other countries. The spatial dimensions of economic activity are more and more complex. For example, petroleum is shipped from Southwest Asia, Africa, and Latin America to major energy-importing regions such as the United States, Japan, and Western Europe. Raw materials and food from tropical areas are exchanged for the processed or fabricated products of the mid-latitude developed countries. Components for vehicles and electronics equipment are made in Japan and the United States, shipped to South Korea and Mexico for partial assembly, returned to Japan and the United States for final assembly intro finished products, then shipped all over the world. Economic activities depend upon capital, resources, power supplies, labor, information, and land. The spatial patterns of industrial labor systems have changed over time. In much of Western Europe, for example, small-scale and spatially dispersed cottage industry was displaced by large-scale and concentrated factory industry after 1760. This change caused rural emigration, the growth of cities, and changes in gender and age roles. The factory has now been replaced by the office as the principal workplace in developed countries. In turn, telecommunications are diminishing the need for a person's physical presence in an office. Economic, social, and therefore spatial relationships change continuously. The world economy has core areas where the availability of advanced technology and investment capital are central to economic development. In addition, it has semi-peripheries where lesser amounts of value are added to industry or agriculture, and peripheries where resource extraction or basic export agriculture are dominant. Local and world economies intermesh to create networks, movement patterns, transportation routes, market areas, and hinterlands. In the developed countries of the world's core areas, business leaders are concerned with such issues as accessibility, connectivity, location, networks, functional regions, and spatial efficiency - factors that play an essential role in economic development and also reflect the spatial and economic interdependence of places on Earth. 
In developing countries, such as Bangladesh and Guatemala, economic activities tend to be at a more basic level, with a substantial proportion of the population being engaged in the production of food and raw materials. Nonetheless, systems of interdependence have developed at the local, regional, and national levels. Subsistence farming often exists side by side with commercial agriculture. In China, for example, a government-regulated farming system provides for structured production and tight economic links of the rural population to nearby cities. In Latin America and Africa, rural people are leaving the land and migrating to large cities, in part to search for jobs and economic prosperity and in part as a response to overpopulation in marginal agricultural regions. Another important trend is industrialized countries continuing to export their labor-intensive processing and fabrication to developing countries. The recipient countries also profit from the arrangement financially but at a social price. The arrangement can put great strains on centuries-old societal structures in the recipient countries. As world population grows, as energy costs increase, as time becomes more valuable, and as resources become depleted or discovered, societies need economic systems that are more efficient and responsive. It is particularly important, therefore, for students to understand world patterns and networks of economic interdependence and to realize that traditional patterns of trade, human migration, and cultural and political alliances are being altered as a consequence of global interdependence. Standard 14: How Human Actions Modify the Physical Environment. Many of the important issues facing modern society are the consequences - intended and unintended, positive and negative - of human modifications of the physical environment. So it is that the daily news media chronicle such things as the building of dams and aqueducts to bring water to semiarid areas, the loss of wildlife habitat, the reforestation of denuded hills, the depletion of the ozone layer, the ecological effects of acid rain, the reduction of air pollution in certain urban areas, and the intensification of agricultural production through irrigation. Environmental modifications have economic, social, and political implications for most of the world's people. Therefore, the geographically informed person must understand the reasons for and consequences of human modifications of the environment in different parts of the world. Human adaptation to and modification of physical systems are influenced by the geographic context in which people live, their understanding of that context, and their technological ability and inclination to modify it to suit their changing need for things such as food, clothing, water, shelter, energy, and recreational facilities. In meeting their needs, they bring knowledge and technology to bear on physical systems. Consequently, humans have altered the balance of nature in ways that have brought economic prosperity to some areas and created environmental dilemmas and crises in others. Clearing land for settlement, mining, and agriculture provides homes and livelihoods for some but alters physical systems and transforms human populations, wildlife, and vegetation. The inevitable by-products - garbage, air and water pollution, hazardous waste, the overburden from strip mining - place enormous demands on the capacity of physical systems to absorb and accommodate them. 
The intended and unintended impacts on physical systems vary in scope and scale. They can be local and small-scale (e.g., the terracing of hillsides for rice growing in the Philippines and acid stream pollution from strip mining in eastern Pennsylvania), regional and medium scale (e.g., the creation of agricultural polderlands in the Netherlands and of an urban heat island with its microclimatic effects in Chicago), or global and large-scale (e.g., the clearing of the forests of North America for agriculture or the depletion of the ozone layer by chlorofluorocarbons). Students must understand both the potential of a physical environment to meet human needs and the limitations of that same environment. They must be aware of and understand the causes and implications of different kinds of pollution, resource depletion, and land degradation and the effects of agriculture and manufacturing on the environment. They must know the locations of regions vulnerable to desertification, deforestation, and salinization, and be aware of the spatial impacts of technological hazards such as photochemical smog and acid rain. Students must be aware that current distribution patterns for many plant and animal species area a result of relocation diffusion by humans. In addition, students must learn to pay careful attention to the relationships between population growth, urbanization, and the resultant stress on physical systems. The process of urbanization affects wildlife habitats, natural vegetation, and drainage patterns. Cities create their own microclimates and produce large amounts of solid waste, photochemical smog, and sewage. A growing world population stimulates increases in agriculture, urbanization, and industrialization. These processes expand demands on water resources, resulting in unintended environmental consequences that can alter water quality and quantity. Understanding global interdependence begins with an understanding of global dependence - the modification of Earth's surface to meet human needs. When successful the relationship between people and the physical environment is adaptive; when the modifications are excessive the relationship is maladaptive. Increasingly, students will be required to make decisions about relationships between human needs and the physical environment. They will need to be able to understand the opportunities and limitations presented by the geographical context and to set those contexts within the local to global continuum. Standard 16: The Changes that Occur in the Meaning, Use, Distribution, and Importance of Resources. A resource is any physical material that constitutes part of Earth and which people need and value. There are three basic resources - land, water, and air - that are essential to human survival. However, any other natural material also becomes a resource if and when it becomes available to humans. The geographically informed person must develop an understanding of this concept and of the changes in the spatial distribution, quantity, and quality of resources on Earth's surface. Those changes occur because a resource is a cultural concept, with the value attached to any given resource varying from culture to culture and period to period. Value can be expressed in economic or monetary terms, in legal terms (as in the Clean Air Act), in terms of risk assessment, or in terms of ethics (the responsibility to preserve our National Parks for future generations). 
The value of a resource depends on human needs and the technology available for its extraction and use. Rock oil seeping from rocks in northwestern Pennsylvania was of only minor value as a medicine until a technology was developed in the mid-nineteenth century that enabled it to be refined into a lamp illuminant. Some resources that were once valuable are no longer important. For example, it was the availability of pine tar and tall timber - strategic materials valued by the English navy - that in the seventeenth century helped spur settlement in northern New England, but that region now uses its vegetative cover (and natural beauty) as a different type of resource - for recreation and tourism. Resources, therefore, are the result of people seeing a need and perceiving an opportunity to meet that need. The quantity and quality of a resource is determined by whether it is a renewable, nonrenewable, or a flow resource. Renewable resources, such as plants and animals, can replenish themselves after they have been used if their physical environment has not been destroyed. If trees are harvested carefully, a new forest will grow to replace the one that was cut. If animals eat grass in a pasture to a certain level, grass will grow again and provide food for animals in the future, as long as the carrying capacity of the land if not exceeded by the pressure of too many animals. Nonrenewable resources, such as minerals and fossil fuels (coal, oil, and natural gas), can be extracted and used only once. Flow resources, such as water, wind, and sunlight, are neither renewable nor nonrenewable because they must be used as, when, and where they occur. The energy in a river can be used to generate electricity, which can be transmitted over great distances. However, that energy must be captured by turbines as the water flows past or it will be lost. The location of resources influences the distribution of people and their activities on the Earth. People live where they can earn a living. Human migration and settlement are linked to the availability of resources, ranging from fertile soils and supplies of freshwater to deposits of metals or pools of natural gas. The patterns of population distribution that result from the relationship between resources and employment change as needs and technologies change. In Colorado, for example, abandoned mining towns reflect the exhaustion of nonrenewable resources (silver and lead deposits), whereas ski resorts reflect the exploitation of renewable resources (snow and scenery). Technology changes the ways in which humans appraise resources, and it may modify economic systems and population distributions. Changes in technology bring into play new ranges of resources from Earth's stock. Since the industrial revolution, for example, technology has shifted from waterpower to coal-generated steam to petroleum-powered engines, and different resources and their source locations have become important. The population of the Ruhr Valley in Germany, for example, grew rapidly in response to the new importance of coal and minerals in industrial ventures. Similarly, each innovation in the manufacture of steel brought a new resource to prominence in the United States, and resulted in locational shifts in steel production and population growth. Demands for resources vary spatially. More resources are used by economically developed countries than by developing countries. For example, the United States uses petroleum at a rate that is five times the world average. 
As countries develop economically, their demand for resources increases faster than their population grows. The wealth that accompanies economic development enables people to consume more. The consumption of a resource does not necessarily occur where the resource is produced or where the largest reserves of the resource are located. Most of the petroleum produced in Southwest Asia, for example, is consumed in the United States, Europe, and Japan. Sometimes, users of resources feel insecure when they have to depend on other places to supply them with materials that are so important to their economy and standard of living. This feeling of insecurity can become especially strong if two interdependent countries do not have good political relations, share the same values, or understand each other. In some situations, conflict over resources breaks out into warfare. One factor in Japan's involvement in World War II, for example, was that Japan lacked petroleum resources of its own and coveted oil fields elsewhere in Asia, especially after the United States threatened to cut off its petroleum exports to Japan. Conflicts over resources are likely to increase as demand increases. Globally, the increase in demand tends to keep pace with the increase in population. More people on Earth means more need for fertilizers, building materials, food, energy, and everything else produced from resources. Accordingly, if the people of the world are to coexist, Earth's resources must be managed to guarantee adequate supplies for everyone. That means reserves of renewable resources need to be sustained at a productive level, new reserves of nonrenewable resources need to be found and exploited, new applications for flow resources need to be developed, and, wherever possible, cost-effective substitutes - especially for nonrenewable resources - need to be developed. It is essential that students have a solid grasp of the different kinds of resources, of the ways in which humans value and use (and compete over) resources, and of the distribution of resources across Earth's surface. The above material is from Geography for Life: The National Geography Standards, 1994. The Geography Education Standards Project. 1994 National Geographic Society, Washington, D.C. Reprinted with the permission of the National Geographic Society.
http://www.learner.org/workshops/geography/workshop8/wkp8stan.html
This lesson deals with credit and wraps up this unit on finance. - Assess both negative and positive incentives associated with credit-card use. - Identify profit as an economic incentive for banks to offer credit cards. As consumers, we have a responsibility to ourselves and others to handle our finances wisely. That means making good decisions about how to use our money. This lesson will deal with who is making a profit by offering credit cards, and with what the incentives are that influence people in the choices they make about using credit cards. The students will make a poster advertising a credit card. In their posters they will include at least one incentive to encourage people to pick their card. They will list reasons why their credit card is a good one for people to use, and they will also explain why it might be a bad idea for people to use their credit card. Card Track: This website lists the incentives provided to encourage people to use credit cards. Acceptable Uses of Credit: This is a site that lists credit-card tips. On this site, the students can learn about acceptable ways to use a credit card. Credit Card Minimum Payment Interest Calculator: The calculator on this site will show students how interest becomes a financial problem for people who only pay the minimum payment each month on their credit card account. Drag and Drop Activity: This is an interactive drag and drop activity on positive and negative incentives. Banking on Our Future: The main site this series uses. This site has many resources for the student and teacher alike. There is a math component in this site that is an added benefit. Note: The students will have to go through a free sign-up process in order to use the website. Here are the fields that students will be required to fill out: Name, Title, Gender, User Name, Password, Secret Question, How You Heard About the Site, and Why You Picked the Site. In "Banking: Part 1" the students learned about the characteristics of money. In "Banking: Part 2" they found out that budgeting is important and that budgeting means we have to make choices. In "Banking: Part 3" they discovered that interest is a benefit savers earn when they save money. In this lesson they will discuss how to be a wise consumer--especially when it comes to credit! Back to "Banking: Part 1": In the survey of what makes up our money supply, credit cards are not included. Why? They are not classified as money because they don't meet the criteria. People are actually paying for the privilege of using a financial institution's money when they use a credit card. So, how can we get kids to look at credit cards and think before using them? First, they need to understand them. This lesson involves the term "incentives." Incentives can be positive or negative. In this lesson, positive incentives can also be called benefits or rewards, and negative incentives can also be referred to as costs or penalties. People respond predictably to incentives. To get started, ask the students what positive incentives we have for obeying the speed limit. We are safer when we drive within the speed limit. We can make better decisions about pulling out or over if we have an idea of how fast people are going. We know how long it takes to stop at a certain speed, etc. What are the negative incentives that influence us in choices we make about how fast to drive? We can be ticketed if we speed. Our insurance rates might go up if we are ticketed. And we might get in a wreck. 
These incentives (keeping safe, avoiding tickets, saving money on insurance rates) help to explain why most people will drive pretty close to the speed limit--especially if they see a police officer! People respond to incentives, predictably. What does predictable mean, and how can society predict people's actions? Explain the rewards are positive incentives that make people better off. Penalties are negative incentives that make people worse off. Both positive and negative incentives affect people's choices and behavior. However, people's views of rewards and penalties differ because people have different values. This lesson deals with incentives related to using credit cards. It is a known fact that high school students are inundated with credit cards arriving in the mail. A lesser-known fact is that 8 percent of all bankruptcies are now being declared by people under 25 years old. The students probably know of many credit card names. Discuss what rewards people gain by using credit cards. Show them the Card Track website and discuss the many incentives provided to encourage people to use credit cards. Go to the Acceptable Uses of Credit website and read the "good" and "bad" reasons to use a credit card. You might want to define the word "emergency." A noteworthy phrase is: If you can eat it, drink it, or wear it it is not an emergency! Check out the minimum payment calculator and show the students how interest becomes a financial problem for people who only pay the minimum payment each month on their credit card account. Students will need to assume that only minimum monthly payments are made. Inform them that they will need to insert 2.25% as the Minimum Payment Percent. So, why do we have credit cards? I bet I don't have to tell you! They are easy to use, great to have in emergencies, and sometimes you can even get extras, called incentives, if you use them. Banks respond to incentives, too. Banks and other financial institutions offer credit cards because they have a great economic incentive to do so. The incentive is to earn a profit! The students might understand more about credit cards if they know that they are the ones who are going to be paying the banks the fees and service charges that enable banks to make a profit by offering credit-card services. Finally, have the students go to the site we have been using in this series Banking on Our Future and click on Credit. Follow Zing on his last stop. Positive incentives for students to use credit cards can be found everywhere. Negative incentives include high interest charges and a long-term financial burden for those who run up big balances and pay them off slowly. Positive incentives for banks include profit. Negative incentives include people not paying their bills. Assess what the students have learned by having them complete this Drag and Drop Activity. “This part of the unit discussing finance is wonderful.” “The lesson looks great.”
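Teachers who want a quick offline demonstration of the minimum-payment trap can use a short simulation like the one below. It is only an illustrative sketch, not the calculator from the lesson's website: the $1,000 starting balance, 18% APR, and $15 payment floor are assumed values chosen for the example, while the 2.25% minimum-payment rate is the figure given in the lesson above.

```python
# Illustrative sketch of the "minimum payment trap" discussed in the lesson.
# Assumed inputs: $1,000 balance, 18% APR, $15 minimum-payment floor.
# The 2.25% minimum-payment percentage is the figure given in the lesson.

balance = 1000.00        # starting balance, dollars (assumed)
apr = 0.18               # annual interest rate (assumed)
min_rate = 0.0225        # minimum payment = 2.25% of the balance (from the lesson)
payment_floor = 15.00    # small fixed minimum payment, dollars (assumed)

month = 0
total_interest = 0.0
while balance > 0:
    month += 1
    interest = balance * apr / 12          # one month of interest
    balance += interest
    total_interest += interest
    payment = max(balance * min_rate, payment_floor)
    payment = min(payment, balance)        # never pay more than what is owed
    balance -= payment

print(f"Months to pay off: {month} (about {month/12:.1f} years)")
print(f"Total interest paid: ${total_interest:.2f}")
```

Students can change the starting balance or the APR and rerun the loop to see how quickly the interest total grows when only the minimum is paid each month.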
http://www.econedlink.org/lessons/index.php?lid=591&type=educator
|Reflection is a process for looking back and integrating new knowledge. Reflections need to occur throughout the building blocks of constructivism and include teacher-led student-driven and teacher reflections. We need to encourage students to stop throughout the learning process and think about their learning. Teachers need to model the reflective process to encourage students to think openly, critically and creatively.| Techniques for Reflections Closing Circle – A quick way to circle around a classroom and ask each student to share one thing they now know about a topic or a connection that they made that will help them to remember or how this new knowledge can be applied in real life. Exit Cards – An easy 5 minute activity to check student knowledge before, during and after a lesson or complete unit of study. Students respond to 3 questions posed by the teacher. Teachers can quickly read the responses and plan necessary instruction. Learning Logs – Short, ungraded and unedited, reflective writing in learning logs is a venue to promote genuine consideration of learning activities. Reflective Journals – Journals can be used to allow students to reflect on their own learning. They can be open-ended or the teacher can provide guiding, reflective questions for the students to respond to. These provide insight on how the students are synthesizing their learning but it also helps the students to make connections and better understand how they learn. Rubrics – Students take time to self-evaluate and peer-evaluate using the rubric that was given or created at the beginning of the learning process. By doing this, students will understand what areas they were very strong in and what areas to improve for next time. Write a Letter – The students write a letter to themselves or to the subject they are studying. This makes the students think of connections in a very personal way. Students enjoy sharing these letters and learn from listening to other ideas. By using a variety of ways to show what they know, such as projects, metaphors or graphic organizers, students are allowed to come to closure on some idea, to develop it and to further their imagination to find understanding. Understanding is taking bits of knowledge in all different curriculum and life experiences and applying this new knowledge. When students apply new knowledge, connections are made and learning is meaningful and relevant. Application is a higher order thinking skill that is critical for true learning to occur. Possible Student Exhibits Analogies - Students compare a topic or unit of study to an inanimate object such as comparing something known to the unknown or some inanimate object to the topic. Blogs – Blogs, short for weblogs, are online journals or diaries that have become popular since the mid 1990′s. Bloggers post personal opinions, random thoughts, connections and real life stories to interact with others via the Web! Weblinks and photos can also be added to the blog. A learner may choose to have their own blog to record their learning on a specific topic. A group of learners could choose to share a blog and read, write, challenge, debate, validate and build shared knowledge as a group. Check out Blogger.com to set up your own personal or professional blog – develop your digital voice and model for your students. Collage – Students cut out or draw pictures to represent a specific topic. 
To evaluate the level of understanding, students write an explanation or discuss in small groups the significance of the pictures and why they are representative of the topic. This technique encourages students to make connections, to create a visual representation and to then explain or exhibit their understanding. Celebration of Learning – A demonstration where students have the opportunity to share their expertise in several subject areas with other students, teachers and parents. Graphic Organizers – Graphic organizers, also known as mind maps, are instructional tools used to illustrate prior knowledge. Portfolios - A portfolio is a representative collection of an individual student’s work. A student portfolio is generally composed of best work to date and a few “works in progress” that demonstrate the process. Students show their knowledge, skills and abilities in a variety of different ways that are not dependent upon traditional media such as exams and essays. Multiple Intelligences Portfolios are an effective way for students to understand not how smart they are but how they are smart. Project-Based Learning- Students create projects by investigating and making connections from the topic or unit of study to real life situations. Multimedia is one effective tool for students to design their projects. T-charts – A simple t is drawn and students jot down information relating to a topic in two different columns. Venn-Diagram – A graphic organizer that is made with 2 intersecting circles and is used to compare and contrast. Using this tool, students identify what is different about 2 topics and identify the overlap between the two topics in the shared shared area. The Power of Self-Esteem: Build It and They Will Flourish The term “self-esteem,” long the centerpiece of most discussions concerning the emotional well being of young adolescents, has taken a beating lately. Some people who question this emphasis on adolescent self-esteem suggest that it takes time and attention away from more important aspects of education. Others contend that many of the most difficult adolescents suffer from too much self-esteem and our insistence on building higher levels is detrimental to the student and to society. But many experts and middle school educators stand firm in their conviction that since self-worth is rigorously tested during the middle school years, attention to it can only help students become successful. Perhaps, they say, self-esteem simply has not been defined properly or the strategies used to build it have done more harm than good. For example, “Praising kids for a lack of effort is useless,” says Jane Bluestein, a former classroom teacher, school administrator, speaker, and the author of several books and articles on adolescence and self-esteem. “Calling a bad job on a paper a ‘great first draft’ doesn’t do anyone any good. I think we’ve learned that. If I’m feeling stupid and worthless and you tell me I’m smart, that makes you stupid in my eyes,” she says. “It doesn’t make me any better.” But Bluestein and others say that simply because the corrective methods are misguided doesn’t mean middle school educators should not pay close attention to their students’ self-esteem. Jan Burgess, a former principal at Lake Oswego Junior High School in Oregon, explains, “We’ve all seen kids whose parents believe self-esteem is absolutely the highest priority. But heaping praise without warrant is empty praise. Self-esteem is important, and it comes from aiming high and reaching the goal. 
That is much more meaningful.” On the other hand, James Bierma, a school counselor at Washington Technical Magnet in St. Paul, Minnesota, says he is wary of those who want to reduce praise for students. “I don’t see heaping praise on kids as a big problem. I work in an urban area where we have more than 85% of students in poverty. I wish our students received more praise,” he says. “You can go overboard, but that rarely happens in my dealings with families. Students respond well to praise from parents and school staff.” Robert Reasoner, a former school administrator and the developer of a model for measuring and building self-esteem that has been adopted by schools throughout the United States, says there has been a lot of confusion about the concept of self-esteem. “Some have referred to self-esteem as merely ‘feeling good’ or having positive feelings about oneself,” says Reasoner, who is president of the National Association of Self Esteem. “Others have gone so far as to equate it with egotism, arrogance, conceit, narcissism, a sense of superiority, and traits that lead to violence. Those things actually suggest that self-esteem is lacking.” He notes that self-value is difficult to study and address because it is both a psychological and sociological issue and affects students in many different ways. “Self-esteem is a fluid rather than static condition,” says Sylvia Starkey, a school psychologist and counselor for 16 years in the Lake Oswego School District. She notes that the way adolescents view themselves can depend on how they feel about their competence in a particular activity. It also is influenced by the child’s general temperament and even family birth order, all of which might make it harder to identify the causes of low self-esteem—or raise it. Reasoner says self-esteem can be defined as “the experience of being capable of meeting life’s challenges and being worthy of happiness.” He notes that the worthiness is the psychological aspect of self-esteem, while the competence, or meeting challenges, is the sociological aspect. He notes that when we heap praise on a student, a sense of personal worth may elevate, but competence may not—which can make someone egotistical. Self-esteem, he says, comes from accomplishing meaningful things, overcoming adversity, bouncing back from failure, assuming self-responsibility, and maintaining integrity. Self-Esteem at the Middle Level Middle school students are particularly vulnerable to blows to their self-esteem because they are moving to a more complex, more challenging school environment; they are adjusting to huge physical and emotional changes; and their feelings of self-worth are beginning to come from peers rather than adults, just at a time when peer support can be uncertain, Reasoner says. “Early on, it’s parents who affirm the young person’s worth, then it’s the teacher. In middle school, peer esteem is a powerful source of one’s sense of self,” according to Mary Pat McCartney, a counselor at Bristow Run Elementary School in Bristow, Virginia, and former elementary-level vice president of the American School Counselors Association. No matter how much students have been swamped with praise by well-meaning parents, she says, what their friends think of them is most important. Beth Graney, guidance director at Bull Run Middle School in Gainesville, Virginia, says adults gain their self-esteem through accomplishments and by setting themselves apart from others, while adolescents gain it from their group. 
“Peer relationships are so critical to kids feeling good about themselves,” she says. Opportunities to Succeed The solution, rather than praising without merit, seems to be providing students with an opportunity to succeed. “Self-esteem that comes from aiming high and reaching goals helps build resilience for students as well,” says Burgess. She says teachers can help kids target their learning and fashion goals that are obtainable, while giving them constructive feedback along the way. “Self-esteem rises and students feel in charge—and this can help parents understand how to heap praise when it is earned.” Bluestein says students often want an opportunity to feel valued and successful. As a group, they can perhaps make a simple decision in class (which of two topics they study first, for example) and individuals might gain from helping others, either collaboratively or as a mentor or tutor. She suggests having students work with others in a lower grade level. As a result, the self-esteem of the students being helped also improves. “Peer helpers, lunch buddies, peer mentors often help kids feel that someone is in their corner and can help them fit in with a larger group,” Graney says. She says parents should encourage their children to find an activity that they like where they can have some success and feel accepted. Bluestein recalls a program she began in which her “worst kids” who seemed to have lower levels of self-worth were asked to work with younger students. Their sense of themselves improved, she says, and eventually they were skipping recess or lunch periods to work with the younger students. Mary Elleen Eisensee, a middle school counselor for more than 30 years at Lake Oswego Junior High School, says if kids can be “guided to accept and support one another, the resulting atmosphere will be conducive for building self-confidence and esteem for everyone.” |Special Care for Special Students Michelle Borba, nationally known author and consultant on self-esteem and achievement in children, says there are five things middle school educators can do easily to improve the self-esteem of their students: Adult Affirmation Is Important Adults play a role, too, by helping students find areas where they can have success and making note of it when they do. They can also just notice students. “Legitimate affirmation makes a huge difference. But plain recognition is just as meaningful. Greeting a student by name even pays big dividends,” says Starkey. She says adult volunteer tutors and mentors help students with social and academic skills and encourage them. An assessment of factors that promote self-esteem in her school district showed such adult attention is very valuable. At Bierma’s school, counselors call parents on Fridays when students’ scores on achievement, attendance, academic, and behavior goals are announced. “It has helped students turn negative behaviors into positive ones.” McCartney says simply treating students respectfully and listening carefully affirms a student’s self-worth. She says teachers can also bolster self-esteem if they allow the students to accidentally “overhear key adults bragging about one of their accomplishments.” Reasoner points out that despite thinking to the contrary, strong self-esteem is critical in the middle school years. Students without it withdraw or develop unhealthy ways of gaining social acceptance, often by responding to peer pressure to engage in sex, drinking, drug abuse, or other harmful behaviors. 
“Many of these problems can simply be avoided if a child has healthy self-esteem,” Reasoner says.

Learning Disabilities: Signs, Symptoms and Strategies

A learning disability is a neurological disorder that affects one or more of the basic psychological processes involved in understanding or in using spoken or written language. The disability may manifest itself in an imperfect ability to listen, think, speak, read, write, spell or to do mathematical calculations. Every individual with a learning disability is unique and shows a different combination and degree of difficulties. A common characteristic among people with learning disabilities is uneven areas of ability, “a weakness within a sea of strengths.” For instance, a child with dyslexia who struggles with reading, writing and spelling may be very capable in math and science. Learning disabilities should not be confused with learning problems which are primarily the result of visual, hearing, or motor handicaps; of mental retardation; of emotional disturbance; or of environmental, cultural or economic disadvantages. Generally speaking, people with learning disabilities are of average or above average intelligence. There often appears to be a gap between the individual’s potential and actual achievement. This is why learning disabilities are referred to as “hidden disabilities:” the person looks perfectly “normal” and seems to be a very bright and intelligent person, yet may be unable to demonstrate the skill level expected from someone of a similar age. A learning disability cannot be cured or fixed; it is a lifelong challenge. However, with appropriate support and intervention, people with learning disabilities can achieve success in school, at work, in relationships, and in the community. In Federal law, under the Individuals with Disabilities Education Act (IDEA), the term is “specific learning disability,” one of 13 categories of disability under that law. “Learning Disabilities” is an “umbrella” term describing a number of other, more specific learning disabilities, such as dyslexia and dysgraphia. Find the signs and symptoms of each, plus strategies to help:
- Dyslexia: a language and reading disability
- Dyscalculia: problems with arithmetic and math concepts
- Dysgraphia: a writing disorder resulting in illegibility
- Dyspraxia (Sensory Integration Disorder): problems with motor coordination
- Central Auditory Processing Disorder: difficulty processing and remembering language-related tasks
- Non-Verbal Learning Disorders: trouble with nonverbal cues, e.g., body language; poor coordination; clumsy
- Visual Perceptual/Visual Motor Deficit: reverses letters; cannot copy accurately; eyes hurt and itch; loses place; struggles with cutting
- Language Disorders (Aphasia/Dysphasia): trouble understanding spoken language; poor reading comprehension

Symptoms of Learning Disabilities

The symptoms of learning disabilities are a diverse set of characteristics which affect development and achievement. Some of these symptoms can be found in all children at some time during their development. However, a person with learning disabilities has a cluster of these symptoms which do not disappear as s/he grows older. 
Most frequently displayed symptoms: - Short attention span - Poor memory - Difficulty following directions - Inability to discriminate between/among letters, numerals, or sounds - Poor reading and/or writing ability - Eye-hand coordination problems; poorly coordinated - Difficulties with sequencing - Disorganization and other sensory difficulties Other characteristics that may be present: - Performs differently from day to day - Responds inappropriately in many instances - Distractible, restless, impulsive - Says one thing, means another - Difficult to discipline - Doesn’t adjust well to change - Difficulty listening and remembering - Difficulty telling time and knowing right from left - Difficulty sounding out words - Reverses letters - Places letters in incorrect sequence - Difficulty understanding words or concepts - Delayed speech development; immature speech |C L A S S R O O M M A N A G E M E N T| Teachers, Start Your Engines: Weekly Tip - The 3 P’s of Classroom Management – 3 Part Series Part II: Procedures The first building block of good classroom management is positive environment, as we discussed last week. This week we’re going to take a look at the 2nd building block of good classroom management – procedures. For those of you who have been subscribing to this newsletter for a long time, you’ve heard my soap-box about procedures. I simply cannot say enough about this topic. In my mind having set procedures for your classroom means the difference between having an okay year and a great year. It definitely can mean the difference between having a bad year and a good year! Human beings are typically creatures of habit. Even those of us who pride ourselves on being spontaneous have habits. We drink our coffee the same way every morning. Some of us brush our teeth first thing while others wait until after eating breakfast. There are people who live their lives by a watch and others who don’t. Think through your day for just a moment. What activities and/or tasks do you do similarly every single day? Do you walk the dog? Feed the fish? Get dressed? These activities become habits. We tend to complete them the same way (or very close to the same way) every day. You could say that these are procedures for your life. A procedure, simply put, is : 1. An act or a manner of proceeding in any action or process; conduct. 2. A particular course or mode of action. 3. The sequence of actions or instructions to be followed in solving a problem or accomplishing a task. (Source – http://www.dictionary.com) When we create classroom procedures we are developing a course of action and/or a sequence of actions to accomplish a task. For example, an “opening class” procedure may consist of students checking their “mailbox” for returned papers, getting out their journal, sharpening their pencil, and beginning the focus assignment before the bell rings. A “closing” procedure may consist of students putting their journal back in their “mailbox”, turning in the class assignment, cleaning up the area around their desk, and sitting quietly until dismissed. Classroom procedures should be developed for the different activities accomplished daily in your classroom. How do you expect students to turn in homework and classroom assignments? How do you expect students to work together in groups? What are your expectations for students to label their papers for assignments? What do you expect students to do when participating in writing or reading activities, labs, or learning centers? 
How will students request to go to the restroom, see the nurse, or get materials for class? What about lining up and walking down hallways to Art or recess? All of these actions and activities require procedures.

Some procedures should be written down so that students can easily see and refer to what is expected of them. Other procedures will be communicated verbally by the teacher. However, it is vital that you take the time at the beginning of the school year to think about how you want your class to operate. It is this proactive reflection and determination that will make your life easier. Clear communication only happens when you are certain about what you expect. If you have only vague ideas of what you think you want, chaos can easily follow. Don’t forget that your students are human beings also. They are likely to develop their own ways of acting and their own “procedures” to follow if none are specifically given to them. The more prepared you are in the beginning, the less likely your students will be to come up with their own, more creative habits.

Take, for example, students entering and leaving your classroom. With clearly marked procedures in place, students know to enter the classroom, get necessary materials, and begin working before the bell rings. This does not mean that you will not have to redirect and remind students to get this done, but it does mean that each one already knows what he or she should do. When the bell rings, most of your students will be sitting at their desks, either working or preparing to work. Without a set procedure you’ll end up with students entering class at their leisure, chatting with friends, and hanging out around the room doing “whatever” until the bell rings. Then you have to take the time to herd them all back to their seats in order to get class started. This, as some of you know, can take a chunk out of class time.

Once you have developed your procedures, be sure to train students in following them. Go over them at the beginning of the year and practice. Stick to these procedures daily so that students can get into the routine and develop the habit. Before the bell rings, remind students of what they should be doing. If you see students not following your procedures/expectations, stop and practice again until they do it properly. Taking time at the beginning of the year to practice and get into the habit of following these procedures will save time at the end of the school year when everyone is feeling that spring fever. Do not think that you are wasting class time by practicing and revisiting these procedures. Instead, you are wisely using time to reinforce positive habits that will continue throughout the school year.

As we discussed last week, a positive environment is only the beginning of good classroom management. The next step is developing classroom procedures. These then reinforce that positive environment, because everyone knows what to do and what is expected. There are no hidden surprises and everyone is on the same page. The result is a teacher who feels less stressed and is less likely to show frustration in the classroom. Students respond to this positive atmosphere and tend to behave in a more positive manner. Next week we will discuss the last of the 3 P’s – Productive Students – and how this element increases the likelihood of having a well-disciplined class.

Many educators have become well-versed in modifying the regular classroom curriculum to meet the needs of students with disabilities.
Educators are not as experienced, however, in meeting the instructional needs of high-ability students. In a growing number of states, revisions in regulations pertaining to gifted and talented students are requiring that high-ability students, previously served in part-time pull-out programs, also receive appropriate instruction within the context of their regular classrooms. For example, in Kentucky, high-ability students can no longer be viewed as sufficiently served by a once-monthly or once-weekly program. These students have educational needs that must be met daily, just as students with disabilities have.

Many regular education teachers report that meeting the needs of high-ability students equals and often exceeds the challenge of integrating students with disabilities into their classrooms. High-ability students can be delightful, but they can also be demanding, impatient, perfectionistic, sarcastic, and disruptive. In addition, few regular education teachers have received sufficient training in issues related to gifted and talented education.

Before teachers can develop appropriate instructional strategies to meet the needs of high-ability students, they must recognize the value of such efforts. To many educators, services for gifted and talented students may seem elitist. However, public education is founded on the belief that all students (including those with high abilities) have the right to instruction appropriate to their needs. Gifted and talented students, like all students, should learn something new every day.

General Strategies for Modifying the Curriculum

The objectives for modifying standard curricula for high-ability students include:
- meeting the learning capacity of the students,
- meeting the students’ rapid rates of learning in all or some areas of study, and
- providing time and resources so that students can pursue areas of special interest.

In order to modify standard curricula for high-ability students, Lois Roets (1993) proposed three options:
- lesson modifications,
- assignment modifications, and
- scheduling modifications.

Lessons can be modified through acceleration or enrichment of content. Assignments can be modified by reducing regular classroom work or providing alternate assignments. Scheduling options include providing opportunities for high-ability students to work individually through independent study, to share learning in homogeneous groupings with peers of similar ability and interests, and to participate in heterogeneous groupings of mixed-ability students.

One way teachers can extend or enrich the content they present is by asking open-ended questions. Such questions stimulate higher order thinking skills and give students opportunities to consider and express personal opinions. Open-ended questions require thinking skills such as comparison, synthesis, insight, judgment, hypothesis, conjecture, and assimilation. Such questions can also increase student awareness of current events. Open-ended questions should be included in both class discussions and assignments. They can also be used to open or conclude a lesson.

Another strategy for lesson modification, developed by Susan Winebrenner (1992), is to use Bloom’s taxonomy of six levels of thinking to develop lesson content. Bloom’s model implies that the “lower” levels (knowledge, comprehension, and application) require more literal and less complex thinking than the “higher” levels (analysis, synthesis, and evaluation).
Teachers are encouraged to develop thematic units with activities for students at all ability levels. This strategy involves four steps. Teachers first choose a theme that can incorporate learning objectives from several different subject areas. Secondly, teachers identify 6 to 10 key concepts or instructional objectives. Third, they determine which learner outcomes or grade-level competencies will be targeted for the unit. Finally, they design instructional activities to cover each of the six levels of thinking. High-ability students are often expected to complete assignments that they find boring or irrelevant because they represent no new learning for them. Allowing them to reduce or skip standard assignments in order to acquire time to pursue alternate assignments or independent projects is called curriculum compacting. The curriculum for a gifted student should be compacted in those areas that represent his or her strengths. When students “buy time” for enrichment or alternate activities, they should use that time to capitalize on their strengths, rather than to improve skills in weaker subjects. For example, a student advanced in math should have a compacted curriculum in that area with opportunities given for enriched study in mathematics. The first step in compacting the curriculum is determining the need to do so. A student is a candidate for compacting if he or she regularly finishes assignments quickly and correctly, consistently scores high on tests related to the modified area, or demonstrates high ability through individualized assessment, but not daily classwork (i.e., he or she is gifted, but unmotivated for the standard curriculum). The second step in compacting the curriculum is to create a written plan outlining which, if any, regular assignments will be completed and what alternate activities will be accomplished. A time frame for the plan should also be determined. Modification plans can be limited to a few days (i.e., length of lesson or chapter) or extend over the course of an entire school year. Alternate assignments for high-ability students can either be projects related to the modified area of study that extend the curriculum, or they can be independent projects that are chosen based on students’ individual interests. Winebrenner (1992) described a strategy in which students use written independent study contracts to research topics of interest to become “resident experts.” The students and teacher decide upon a description and the criteria for evaluating each project. A deadline is determined, and by that date, each student must share his or her project with the entire class. Before choosing their projects, students are also given time to browse various areas of interest. After completing compacted work, students are allowed to look through research materials to explore various topics. A deadline for choosing a topic for independent projects is also given to the students to limit their browsing time. Cooperative learning through traditional heterogeneous groups is often counterproductive for high-ability students. When the learning task involves a great deal of drill and practice, these students often end up doing more teaching than learning. When placed in homogeneous cooperative learning groups, however, gifted students can derive significant learning benefits. This does not mean that high-ability students should never participate in heterogeneous cooperative learning groups. Rather, groupings should be chosen based on the task that is being assigned. 
When the task includes drill and practice, such as math computation or answering comprehension questions about a novel, gifted students should be grouped together and given a more complex task. When the task includes critical thinking, gifted students should be part of heterogeneous groups to stimulate discussions. Open-ended activities are excellent choices for heterogeneous groupings. Cluster grouping of high-ability students in the same classroom is another option for meeting the needs of gifted students in the regular classroom. The traditional method of assigning students to classes has often been to divide the high-ability students equally among the available classes so each teacher would have his or her “fair share.” Under this system, however, each teacher must develop strategies for modifying the curriculum to meet the needs of the advanced students. With cluster grouping, four to six high-ability students are placed in the same classroom. This system allows the students to learn with and from each other and reduces the need for multiple teachers to develop appropriate instructional modifications. The following case studies describe how the curriculum was modified for three academically able students. Mark entered first grade reading at a fourth-grade level. He had mastered math concepts that challenged his first-grade peers. He was placed in a second-grade class for math instruction and in a third-grade class for reading and spelling instruction. Despite these opportunities, Mark was always the first to finish assignments and spent the majority of his school day reading library books or playing computer games. His parents and teacher were concerned that he was not sufficiently challenged, but as a 6-year-old, he was too young to participate in the district’s pull-out gifted program. They were also concerned that he was having difficulty developing friendships in his classroom since he spent much of the day apart from his homeroom peers. A request for consultation was made to the school psychologist. With input from Mark’s parents and teachers, an independent study contract was developed for Mark to channel his high reading abilities toward study in a specific area. After browsing for a week, he chose dinosaurs as his project area. Mark then narrowed his focus to the Jurassic Period and decided to create a classroom reference book complete with pictures he drew. When he completed his daily work, Mark researched his topic area and worked on his project. When completed, Mark’s teacher asked him to share his project with his classmates. Because he had chosen a topic of high interest to his peers, Mark’s status as “resident expert” on dinosaurs made him attractive to his classmates. Mark’s teacher encouraged these budding friendships by asking the other students to bring dinosaur toys and books from home to share with the class during the following weeks. Katrina’s parents chose to move her from a private school to public school at the end of her third-grade year. Following the advice of the private school staff, Katrina’s parents enrolled her in a second year of third grade at the public school due to reported weaknesses in reading and written expression. After a few weeks of school, Katrina’s teacher approached the school psychologist with her concern that retention may not have been in Katrina’s best interest. The teacher reported that Katrina was performing on grade level in all areas and demonstrated high-ability math skills. 
When they met with the school, however, Katrina’s parents expressed the desire to keep her in the third grade. They felt that Katrina had suffered no harmful effects from the retention, since it involved a move to a new school with different peers. Further, Katrina’s parents reported that she felt very comfortable and successful in her classroom.

Although the committee decided to keep Katrina in the third grade, it developed a compacted curriculum for her in the area of math. A contract was written specifying modifications to the regular class math curriculum for Katrina. She was required to complete half of the assignments given to her peers, as long as she did so with 90% or higher accuracy. When finished with her modified assignment, Katrina then used the time earned through compacting for enriched study in mathematics. The committee was careful not to present Katrina with material she would study in later grades, so as to avoid repetition. Instead, an enriched program of study was developed that emphasized critical thinking and problem solving related to the addition and subtraction being taught in her classroom. Katrina’s contract included several choices of activities, any of which she could choose to do on a given day, such as creating story problems for the class to solve, drawing pictures or using manipulatives to demonstrate calculation problems, or activities involving measuring, classifying, estimating, and graphing. Katrina’s teacher would present a specific activity choice in these areas that extended and enriched the basic concepts being taught to the class as a whole. With these modifications, Katrina’s advanced skills in math were addressed. Her parents and teacher judged her school year a success, and Katrina made an easy transition to fourth grade, where she was able to work on grade-level material with an average level of accuracy in all areas.

Adam demonstrated a very high spoken vocabulary and advanced ideas when participating in class. He completed few of his assignments, though, and showed strong resistance to putting pencil to paper despite obvious high abilities. He was able to read orally at a level two years above his fourth-grade status and could perform multidigit calculation problems mentally. However, in the classroom, Adam demonstrated task avoidance and disruptive behaviors. His teacher and parents were frustrated by his lack of work output and his behavior problems, and they sought assistance from the school psychologist.

In interviewing Adam, the psychologist found that he did not see the need to put on paper answers he already knew. It seemed likely that Adam’s behavior problems were related to boredom and frustration. To test this theory, the psychologist recommended the use of Winebrenner’s (1992) “Most Difficult First” strategy. With this strategy, the teacher identifies the most difficult portion of an assignment, and the student is allowed to attempt that portion first. If he or she completes it with 100% accuracy, the student is excused from the remainder of the assignment and allowed to use the free time to pursue an alternate activity. Adam was resistant to this strategy at first, but he quickly saw its advantages and began completing those assignments that were modified using the strategy. With guidance from the school psychologist, Adam’s teacher then extended modifications to include pretesting and compacting opportunities across the curriculum.
Adam used his time earned from compacting to pursue independent projects and recreational reading, and his behavior problems decreased accordingly.

The focus of educational services for high-ability students is shifting to the regular classroom. While this expansion of services is a welcome recognition of the need to challenge high-ability students all day, every day, the initiative also brings with it a significant need to train regular education teachers. Support staff such as educators of gifted and talented students and school psychologists must learn to become effective consultants who can assist regular classroom teachers in applying instructional strategies appropriate for meeting the needs of high-ability students.

All Means All: Classrooms that Work for Advanced Learners

Meeting the needs of all learners means all of them, including those who learn rapidly or are inherently curious about the world, eating up everything we offer—books, history, geometry proofs, science experiments. Some of these students make themselves known immediately. Others, especially during their middle school years, prefer to hide their talents, their academic interest and enthusiasm, and their abilities. Regardless of whether they are students who need us to draw them out or students whose abilities are immediately apparent, we have a responsibility to help them reach their full potential.

Sometimes we are so overwhelmed by the needs of struggling learners that we believe we don’t have time for the gifted, talented, high achieving, and high potential students. But they are just as desperate as any other students for good teachers to help them progress. Middle school is a turning point for them, too.

Schools can be structured in many ways to meet the needs of these top students. Part of a continuum of services might include honors or accelerated classes, co-enrollment with the high school, pre-IB (International Baccalaureate) or pre-AP (Advanced Placement) programs that coordinate with high school offerings, multi-age classes, grade acceleration, magnet schools, or honors clusters or teams. But teachers in most middle schools meet these students in heterogeneous classes where there’s a wide range of abilities, interests, learning styles, and special education needs.

Cluster grouping is one approach that helps narrow the range. Effective cluster grouping places four to eight high achieving and gifted students in a heterogeneous class that does not include special needs students who require significant attention from the classroom teacher. This number of students ensures that they feel more comfortable doing advanced work and that the teacher is more willing to provide it, since there isn’t just one student who needs it.

Providing Challenge and Choice

Whether in a clustered classroom or a fully heterogeneous one, all teachers can use strategies to help differentiate instruction for gifted, high achieving, and high potential learners. When applied consistently, these strategies help all students make progress throughout the school year. The three components of curriculum that should be adjusted are content, process, and product. Content is the actual material being learned. Process is the way students engage with the material, such as whole class instruction, small group work, online instruction, and independent projects. Product is how students demonstrate what they have learned.
Each approach that follows incorporates one or more of these components and helps meet the two most basic needs of these students: challenge and choice.

Pre-Assessment: Who Knows What?

The cornerstone of any attempt to meet the needs of diverse learners is to find out what they are interested in, how they learn best, and what they already know. This is the purpose of pre-assessment. Administer an interest inventory or a learning styles inventory to all students at the beginning of the school year. Questions can include: What sports do you play? Do you prefer to work alone or with a group? What musical instruments do you play? What do you enjoy learning about? What do you do with your free time? If you had to put together your new desk, would you rather hear the instructions, read the instructions, or watch someone do it and then follow their model?

Identify, or collect from existing data, information about each student’s reading and writing levels in all content areas. If the responsibility for gathering this information is divided among grade level team members, students don’t end up completing four writing samples or filling out six interest inventories during the first two days of school. Teachers should also be aware of any student who has been identified as gifted in a specific academic area, in a cognitive ability, or in the visual or performing arts. Criteria for this designation vary by state and district; this is different from the consistent federal guidelines for identifying special education students.

This pre-assessment gives teachers a general overview of students’ academic and personal starting points. The next step is to be more teacher- and content-specific. At least two weeks before instruction on a specific unit begins, teachers should give students a pre-assessment covering the content of that unit. Teachers often misuse the K-W-L technique (What do you Know? What do you Want to know? What have you Learned?) for this purpose by doing it as an oral whole-class activity on the first day of a unit. While it is a great way to engage students’ interest in a topic, it is not an effective pre-assessment. The students who know the most stop talking after they offer two or three answers, even if they know more (it’s socially “uncool,” and teachers ask “can we hear from anyone else?”), while students who don’t know anything about the topic say “he took my answer” or remain silent. Teachers get a false “read” of the class’s knowledge base. In addition, doing this activity on the first day of an already-planned unit gives them no time to adjust for individual learners’ needs.

Instead, pre-assessments should be
- Focused on the key information, concepts, and skills of the unit, including the embedded state and local standards.
- Relatively short.
- Assessed only for instructional planning and grouping (not graded).
- Returned to students only at the end of the unit, when they can assess their own growth.

Other effective pre-assessments can be specially constructed pre-tests, post-tests, journals, incomplete graphic organizers, or open-ended questions. It is often useful to add “What else can you tell me about your experiences with this topic and what you know about it?” Once teachers have a good idea of the starting point for each student, they can select the appropriate materials, pacing, and instructional approaches. This is the foundation of middle school philosophy and differentiation of instruction: start with the student.
Tiered assignments, both in class and for homework, are a great way to differentiate instruction when all students need to work on the same content or material. These might include differentiated journal prompts, comprehension questions at different levels of Bloom’s Cognitive Taxonomy, or a range of sophistication in math problems. For example, when students are reading The Gettysburg Address, teachers can develop two sets of questions. One set is for struggling readers or more concrete thinkers with little background knowledge. These questions might emphasize the first three levels of Bloom’s taxonomy (remember, understand, apply) and some key vocabulary words. A second set is for advanced readers or more abstract thinkers. These questions might emphasize the higher levels of Bloom’s taxonomy (analyze, evaluate, create) and include a question about the oratorical devices that made this speech memorable. Both groups get the same number of questions. The whole-class discussion that follows can include all students, so everyone benefits from shared insights and knowledge and critical thinking is encouraged.

Menus of Activities

Another approach is to create a menu of choices for learning activities, ranging from reading the basal social studies text and creating an outline of its content to analyzing primary source material. Each activity in the menu is assigned a point value, and all students must complete the same number of points. Making a basic map may be worth 5 points. Making a map of contemporary Europe and contrasting it with what that same map looked like in 1900 would be worth 20 points. The key is that point values are determined by cognitive complexity, not just quantity or amount of time needed. Through thoughtful coaching by the teacher, all students can learn new material on the assigned topic. Struggling learners may be required to master certain skills needed for state assessments, while those who already have those skills may work toward above-grade-level proficiency. The emphasis is on meaningful work for all students connected to the unit’s essential questions.

Orbital and Independent Study

While differentiation is definitely not individualized education, opportunities for independent and orbital studies may be appropriate. For example, in a language arts unit on folk tales, fairy tales, and myths, a student with a lot of background might do an orbital study on Rafe Martin’s book Birdwing (Scholastic, 2007). This novel extends Grimm’s fairy tale of “The Six Swans,” in which six brothers are cursed and turned into swans. Their sister bravely breaks the spell, but one brother, Ardwin, is left with a birdwing. How he faces his difference, how the author has spun his ideas from the original tale, whether the moral is consistent with Grimm’s intention—all can be part of the student’s study. Significantly advanced students can explore Joseph Campbell’s Hero with a Thousand Faces. While orbital studies extend a given topic, independent study may replace a topic. If students have mastered the goals, objectives, and concepts of this unit and have no interest in the topic, they might substitute in-depth study of a topic of interest such as science or history.

21st Century Technology

We are lucky that today, computer and other media technologies provide unique opportunities for student learning. There are online courses at both the high school and college levels for advanced learners.
There are Web quests, podcasts, and video lectures from some of the greatest thinkers and teachers of our time. These experiences extend the boundaries of our classrooms and the learning of our students.

Differentiation is an approach to teaching and learning, not just a strategy. It has a profound impact on the classroom community. Students must understand that not everyone in the classroom does the same thing at the same time, but everyone gets what they need. It’s not OK to make the top students into junior teachers. Doing so has the opposite effect of what is often intended; rather than build compassion and caring, it creates arrogance and resentment. In addition, gifted students often don’t know how they know what they know or how to explain their leaping insights to more structured learners.

Differentiating instruction for advanced learners takes time and resources. Teachers should reach out and ask for help from gifted coordinators, gifted intervention specialists, online resources in the gifted community or in a content area, or from teachers in higher grade levels. When we apply these strategies in our classrooms, we are delighted to see students blossom beyond our wildest dreams. We see students reach up to accept challenges because they see others doing it. We exemplify the ideals of middle schools and ensure that all our students are learning every day.

Are Your Students Prepared for the Organizational Demands of Middle School?

Middle school moves at a fast pace. Students have many different teachers, each with his or her own homework, test schedules, and due dates. Add to the mix the after-school clubs and sports that students participate in, and it is a challenge to get organized. Good work management and organizational skills are essential for balancing the load and minimizing the stress. For some students, organizational skills come naturally, but for most, they must be learned. While there is little classroom time to assess and train students in work management skills, here are some ideas for how you can help your students be prepared.

Help students make the connection

Getting students to value good organizational skills is the first step. Teachers can help by connecting the benefits of good organizational skills to the things this age group values most—more independence, less stress, more free time, better grades, and more self-confidence.

Organized binders are key

A binder is like a compact file cabinet that a student carries around all day to file and retrieve papers, homework, and information. Students must be able to access materials quickly and keep papers neatly stored by subject. Be sure to give students time in class to file papers in the correct place in their binders—no shoving loose papers into backpacks!

Planners are essential

No matter how good a student’s memory is, he or she must have a central place for recording activities. A student’s planner should contain important dates and events such as bell schedule changes, holiday breaks, exams, homework assignments, and project due dates. It’s a good idea for students to also include personal items scheduled during school days, such as medical appointments, vacations, and after-school activities.

Have a study bud

Students should identify a classmate in each class who can be contacted in the event of a forgotten homework assignment or lost worksheet. The study bud can also help when a fellow student is absent and needs a handout or class notes. Study buds should exchange home contact information.
A homework space that rocks

Encourage students to locate, design, and stock a work space at home. This will help them do their best work in the least amount of time. The space should be quiet and free from distractions such as people talking, TV, and video games. They can deck it out with posters, pictures of friends, or team photos to make it a place they won’t mind hanging out. Make it a “Designer’s Challenge” classroom activity in which students design and photograph their work spaces and vote on the work space “most likely to succeed.”

Most students, particularly those fresh out of elementary school, have no idea that a typical middle school teacher works with 100 or more students each day. Unaware of the many demands on a teacher’s time, students continue to believe that, as in elementary school, their teachers will track them down to provide a missing assignment. Encourage students to take personal responsibility for following up. You can role-play various student dilemmas in a “What Would You Do?” classroom activity to help students learn to recognize and follow up on matters that affect their grades.

Without basic organizational skills, middle school students can become overwhelmed. In some cases it begins a downward spiral of underachievement that can last into the high school years and beyond. Take some time to help students recognize and appreciate the benefits of good basic organizational skills.

Today’s students have a different (not better or worse, just different) mindset from those born 10, 20, or 30+ years ago. To reach your students, you have to understand how they think. To help the faculty at Beloit College, one of the professors compiles an annual mindset list for the incoming freshmen. You might find this information interesting - and useful.

A note about the Beloit College Mindset List

To save readers the time and effort of writing to us about the Beloit College Mindset List, we offer four brief explanations. The Mindset List is not a chronological listing of things that happened in the year that the entering first-year students were born. Our effort is to identify the worldview of 18-year-olds in the fall of 2007. We take a risk in some cases of making generalizations, particularly given that our students at Beloit College, for instance, come from every state and scores of nations. The “Class of 2011” refers to students entering college this year. They are generally 18, which suggests they were born in 1989. The list identifies the experiences and event horizons of students as they commence higher education and is not meant to reflect on their preparatory education.

BELOIT COLLEGE’S MINDSET LIST® FOR THE CLASS OF 2011

Most of the students entering college this fall, members of the Class of 2011, were born in 1989. For them, Alvin Ailey, Andrei Sakharov, Huey Newton, Emperor Hirohito, Ted Bundy, Abbie Hoffman, and Don the Beachcomber have always been dead.
- What Berlin wall?
- Humvees, minus the artillery, have always been available to the public.
- Rush Limbaugh and the “Dittoheads” have always been lambasting liberals.
- They never “rolled down” a car window.
- Michael Moore has always been angry and funny.
- They may confuse the Keating Five with a rock group.
- They have grown up with bottled water.
- General Motors has always been working on an electric car.
- Nelson Mandela has always been free and a force in South Africa.
- Pete Rose has never played baseball.
- Rap music has always been mainstream.
- Religious leaders have always been telling politicians what to do, or else!
- “Off the hook” has never had anything to do with a telephone.
- Music has always been “unplugged.”
- Russia has always had a multi-party political system.
- Women have always been police chiefs in major cities.
- They were born the year Harvard Law Review Editor Barack Obama announced he might run for office some day.
- The NBA season has always gone on and on and on and on.
- Classmates could include Michelle Wie, Jordin Sparks, and Bart Simpson.
- Half of them may have been members of the Baby-sitters Club.
- Eastern Airlines has never “earned their wings” in their lifetime.
- No one has ever been able to sit down comfortably to a meal of “liver with some fava beans and a nice Chianti.”
- Wal-Mart has always been a larger retailer than Sears and has always employed more workers than GM.
- Being “lame” has to do with being dumb or inarticulate, not disabled.
- Wolf Blitzer has always been serving up the news on CNN.
- Katie Couric has always had screen cred.
- Al Gore has always been running for president or thinking about it.
- They never found a prize in a Coca-Cola “MagiCan.”
- They were too young to understand Judas Priest’s subliminal messages.
- When all else fails, the Prozac defense has always been a possibility.
- Multigrain chips have always provided healthful junk food.
- They grew up in Wayne’s World.
- U2 has always been more than a spy plane.
- They were introduced to Jack Nicholson as “The Joker.”
- Stadiums, rock tours and sporting events have always had corporate names.
- American rock groups have always appeared in Moscow.
- Commercial product placements have been the norm in films and on TV.
- On Parents’ Day on campus, their folks could be mixing it up with Lisa Bonet and Lenny Kravitz with daughter Zoë, or Kathie Lee and Frank Gifford with son Cody.
- Fox has always been a major network.
- They drove their parents crazy with the Beavis and Butt-Head laugh.
- The “Blue Man Group” has always been everywhere.
- Women’s studies majors have always been offered on campus.
- Being a latchkey kid has never been a big deal.
- Thanks to MySpace and Facebook, autobiography can happen in real time.
- They learned about JFK from Oliver Stone and Malcolm X from Spike Lee.
- Most phone calls have never been private.
- High definition television has always been available.
- Microbreweries have always been ubiquitous.
- Virtual reality has always been available when the real thing failed.
- Smoking has never been allowed in public spaces in France.
- China has always been more interested in making money than in reeducation.
- Time has always worked with Warner.
- Tiananmen Square is a 2008 Olympics venue, not the scene of a massacre.
- The purchase of ivory has always been banned.
- MTV has never featured music videos.
- The space program has never really caught their attention except in disasters.
- Jerry Springer has always been lowering the level of discourse on TV.
- They get much more information from Jon Stewart and Stephen Colbert than from the newspaper.
- They’re always texting 1 n other.
- They will encounter roughly equal numbers of female and male professors in the classroom.
- They never saw Johnny Carson live on television.
- They have no idea who Rusty Jones was or why he said “goodbye to rusty cars.”
- Avatars have nothing to do with Hindu deities.
- Chavez has nothing to do with iceberg lettuce and everything to do with oil.
- Illinois has been trying to ban smoking since the year they were born.
- The World Wide Web has been an online tool since they were born.
- Chronic fatigue syndrome has always been debilitating and controversial.
- Burma has always been Myanmar.
- Dilbert has always been ridiculing cubicle culture.
- Food packaging has always included nutritional labeling.

Here is the previous year’s list, for the Class of 2010:
1. The Soviet Union has never existed and therefore is about as scary as the student union.
2. They have known only two presidents.
3. For most of their lives, major U.S. airlines have been bankrupt.
4. Manuel Noriega has always been in jail in the U.S.
5. They have grown up getting lost in “big boxes”.
6. There has always been only one Germany.
7. They have never heard anyone actually “ring it up” on a cash register.
8. They are wireless, yet always connected.
9. A stained blue dress is as famous to their generation as a third-rate burglary was to their parents’.
10. Thanks to pervasive headphones in the back seat, parents have always been able to speak freely in the front.
11. A coffee has always taken longer to make than a milkshake.
12. Smoking has never been permitted on U.S. airlines.
13. Faux fur has always been a necessary element of style.
14. The Moral Majority has never needed an organization.
15. They have never had to distinguish between the St. Louis Cardinals baseball and football teams.
16. DNA fingerprinting has always been admissible evidence in court.
17. They grew up pushing their own miniature shopping carts in the supermarket.
18. They grew up with and have outgrown faxing as a means of communication.
19. “Google” has always been a verb.
20. Text messaging is their e-mail.
21. Milli Vanilli has never had anything to say.
22. Mr. Rogers, not Walter Cronkite, has always been the most trusted man in America.
23. Bar codes have always been on everything, from library cards and snail mail to retail items.
24. Madden has always been a game, not a Super Bowl-winning coach.
25. Phantom of the Opera has always been on Broadway.
26. “Boogers” candy has always been a favorite for grossing out parents.
27. There has never been a “skyhook” in the NBA.
28. Carbon copies are oddities found in their grandparents’ attics.
29. Computerized player pianos have always been tinkling in the lobby.
30. Non-denominational mega-churches have always been the fastest-growing religious organizations in the U.S.
31. They grew up in minivans.
32. Reality shows have always been on television.
33. They have no idea why we needed to ask “…can we all get along?”
34. They have always known that “In the criminal justice system the people have been represented by two separate yet equally important groups.”
35. Young women’s fashions have never been concerned with where the waist is.
36. They have rarely mailed anything using a stamp.
37. Brides have always worn white for a first, second, or third wedding.
38. Being techno-savvy has always been inversely proportional to age.
39. “So” as in “Sooooo New York,” has always been a drawn-out adjective modifying a proper noun, which in turn modifies something else.
40. Affluent troubled teens in Southern California have always been the subjects of television series.
41. They have always been able to watch wars and revolutions live on television.
42. Ken Burns has always been producing very long documentaries on PBS.
43. They are not aware that “flock of seagulls hair” has nothing to do with birds flying into it.
44. Retin-A has always made America look less wrinkled.
45. Green tea has always been marketed for health purposes.
46. Public school officials have always had the right to censor school newspapers.
47. Small white holiday lights have always been in style.
48. Most of them have never had the chance to eat bad airline food.
49. They have always been searching for “Waldo”.
50. The really rich have regularly expressed exuberance with outlandish birthday parties.
51. Michael Moore has always been showing up uninvited.
52. They never played the game of state license plates in the car.
53. They have always preferred going out in groups as opposed to dating.
54. There have always been live organ donors.
55. They have always had access to their own credit cards.
56. They have never put their money in a “Savings & Loan.”
57. Sara Lee has always made underwear.
58. Bad behavior has always been getting captured on amateur videos.
59. Disneyland has always been in Europe and Asia.
60. They never saw Bernard Shaw on CNN.
61. Beach volleyball has always been a recognized sport.
62. Acura, Lexus, and Infiniti have always been luxury cars of choice.
63. Television stations have never concluded the broadcast day with the national anthem.
64. LoJack transmitters have always been finding lost cars.
65. Diane Sawyer has always been live in Prime Time.
66. Dolphin-free canned tuna has always been on sale.
67. Disposable contact lenses have always been available.
68. “Outing” has always been a threat.
69. Oh, The Places You’ll Go by Dr. Seuss has always been the perfect graduation gift.
70. They have always “dissed” what they don’t like.
71. The U.S. has always been studying global warming to confirm its existence.
72. Richard M. Daley has always been the mayor of Chicago.
73. They grew up with virtual pets to feed, water, and play games with, lest they die.
74. Ringo Starr has always been clean and sober.
75. Professional athletes have always competed in the Olympics.

Lists for previous years can be found at http://www.beloit.edu/~pubaff/mindset/.
Between the World Wars World War I abruptly ended on the eleventh hour of the eleventh day of the eleventh month in 1918. At that time the United States Army had nearly four million men in uniform, half of them overseas. President Wilson negotiated the peace treaty in Paris with other world leaders against a backdrop of immense if short-lived American power. As in the aftermath of the nation's earlier wars, a massive demobilization began which soon reduced the Army to about 224,000 men, a force far smaller than that of the other major powers.1 Limited budgets as well as reduced manpower became the order of the day. Despite Wilson's efforts, the Senate rejected the Treaty of Versailles, and the nation hastened to return to its traditional isolation. The League of Nations, centerpiece of the president's peace plan, was formed without U.S. participation. Yet the years that followed, often viewed as an era of withdrawal for the United States and of stagnation for the Army, brought new developments to the field of military communications. Technical advances in several areas, especially voice radio and radar, had major consequences for the Signal Corps. When events abroad made it clear that the Wilsonian dream of a lasting peace was only that, such innovations helped to shape the nation's military response to the new and more terrible conflict that lay ahead. The silencing of the guns in November 1918 did not complete the U.S. Army's work in Europe. In spite of pressures for rapid demobilization, shipping shortages delayed the departure of most units from European shores until the spring and summer of 1919. Although Pershing embarked for home on 1 September 1919, American troops remained in France through the end of the year. For its part, the Signal Corps gradually turned over its communication lines, both those it had built and those it had leased, to the French. In addition, the Corps had to dispose of vast quantities of surplus war materiel and equipment.2 According to the terms of the Armistice, the Third Army (organized in November 1918) moved up to the Rhine River, and American soldiers continued to occupy a zone in the Rhineland until 1923.3 The 1st Field Signal Battalion comprised part of these forces and operated the German military and civilian telephone and telegraph lines, which had been turned over to the Americans. The unit returned home in October 1921.4 In addition to these activities, the Signal Corps provided communications for the Paris Peace Conference, which began in January 1919. Brig. Gen. Edgar Russel placed John J. Carty of AT&T, who had not yet doffed his uniform, in charge of setting up this system. The Signal Corps installed a telephone central switchboard at the conference site in the Crillon Hotel and provided communications for President Wilson at his residence. Several of the women operators from the front operated these lines. The Signal Corps could also connect the president with the American forces in Germany.5 Despite the importance of its work in Europe, the main story of the Signal Corps, as of the Army, was one of rapid demobilization. From a total at the Armistice of 2,712 officers and 53,277 enlisted men, the Corps had dropped by June 1919 to 1,216 officers and 10,372 men. 
A year later its strength stood at less than one-tenth its wartime total, with 241 officers and 4,662 enlisted men on the rolls.6 As the soldiers came home, the government lifted the economic restrictions imposed during the war, restored control over the civilian communications systems to the commercial companies, and dismantled most of the wartime boards and commissions.7 These changes were aspects of the return to normalcy, reflective of the nation's resurgent isolationism and its desire to escape from the international arena it had entered during the war. Meanwhile, Congress debated the future military policy of the United States. The War Department favored the maintenance of a large standing Army numbering some 500,000 officers and men, but its proposal failed to win the support of the war-weary public or their representatives in Congress. As part of the usual postwar review of lessons learned, Congress held lengthy hearings on Army reorganization, but twenty months passed before it enacted new defense legislation.8 On 4 June 1920 President Wilson signed into law the National Defense Act of 1920. Written as a series of amendments to the 1916 defense act, the new legislation enacted sweeping changes and remained in effect until 1950.9 It established the Army of the United States, comprised of three components: the Regular Army, the Organized Reserves, and the National Guard. It set the Regular Army's strength at approximately 300,000 (17,700 officers and 280,000 men), with 300 officers and 5,000 men allotted to the Signal Corps.10 The act also abolished the detail system for Signal Corps officers above the rank of captain. In the future, they would receive permanent commissions in the Corps. Congress also abandoned the system of territorial departments within the continental United States and replaced them with nine corps areas. These were intended to serve as tactical commands rather than simply as administrative headquarters. Each corps area would support one Regular Army division.11 Hawaii, the Philippines, and Panama continued to constitute separate departments. Other significant provisions included the creation of the Air Service as a new branch along with the Chemical Warfare Service and the Finance Department.12 Ironically, while the Signal Corps received recognition in the new defense act as a combat arm, changes in doctrine concurrently took away its tactical communications function.13 In April 1919 Pershing had convened a committee of high-ranking officials, called the Superior Board, to examine the organizational and tactical experiences of the war. Col. Parker Hitt, who had served as chief signal officer of the First Army, represented the Signal Corps' interests. Drawing upon the proceedings of boards previously held at the branch level, the board concentrated on the structure of the infantry division and recommended that the division be increased in size to achieve greater firepower even at the expense of mobility. Because Pershing disagreed with the panel's advice, favoring a smaller, more mobile organization, he withheld its report from the War Department for a year.14 For the Signal Corps, the Superior Board's recommendations resulted in a dramatic change in its role within the Army. In their postwar reviews, both the infantry and artillery boards had expressed a desire to retain their own communication troops. The Superior Board agreed and, with the approval of the secretary of war, this modification made its way into policy. 
Henceforth, the Signal Corps' responsibility for communications would extend only down to division level. Below that echelon the individual arms became responsible for their own internal communications as well as for connecting themselves with the command lines of communication established by the Signal Corps.15 Although the Signal Corps retained overall technical supervision, it no longer controlled communications from the front lines to Washington as it had done successfully during World War I. Understandably, Chief Signal Officer Squier protested the change, arguing that it would result in confusion:

This office is more than ever of the opinion that the present system of dividing signaling duties and signaling personnel, in units smaller than divisions, among the various branches of the service, is not wise and a return to the former system which provided Signal Corps personnel for practically all signaling duties is recommended.16

But his protest fell on deaf ears. The Army's revised Field Service Regulations, approved in 1923, reflected the doctrinal changes.17 In a further departure from the past, Congress had given the War Department discretion to determine the Army's force structure at all levels.18 Col. William Lassiter, head of the War Plans Division of the General Staff and a member of the Superior Board, presided over a panel to study the Army's organization. Unlike the Superior Board, this body, designated the Special Committee (but more commonly known as the Lassiter Committee), favored a reduction in the infantry division's size, while retaining its "square" configuration of two brigades and four infantry regiments. Much of the reduction resulted from proposed cuts in the number of support troops. Under its plan, divisional signal assets were reduced to a single company, reflecting their reduced mission under the postwar doctrine. Approved by the Army chief of staff, General Peyton C. March, and written into the tables of organization, the new policy placed the infantry division's signal company (comprising 6 officers and 150 men) in the category of special troops, along with military police, light tank, and ordnance maintenance companies. Yet few of these units were actually organized. For most of the interwar years the Army had just three active infantry divisions in the continental United States (the 1st, 2d, and 3d) and the 1st Cavalry Division. Thus the Signal Corps contained very few tactical units. Signal service companies, meanwhile, served in each of the nine corps areas as well as at Camp Vail, New Jersey, and in Alaska, Hawaii, the Canal Zone, and the Philippines.

A shrunken organization carried out a more limited mission in a nation that seemingly wanted to forget about military matters. Despite a booming national economy, the Army did not prosper during the "Roaring Twenties." Budget-minded Congresses never appropriated funds to bring it up to its authorized strength. In 1922 Congress limited the Regular Army to 12,000 commissioned officers and 125,000 enlisted men, only slightly more than had been in uniform when the United States entered World War I.21 Eventually Congress reduced enlisted strength to 118,000, where it remained until the late 1930s. Army appropriations, meanwhile, stabilized at around $300 million, about half the projected cost of the defense act if fully implemented.
The Army remained composed of skeleton organizations with most of its divisions little more than "paper tigers."22 Under these circumstances, the fate of the Signal Corps was not exceptional. But it did suffer to an unusual degree because its operations were far-flung and its need for costly materiel was great. The Corps' actual strength never reached the figures authorized in the defense act; in 1921 Congress cut its enlisted personnel to 3,000, and by 1926 this figure had dropped to less than 2,200. At the same time, officer strength remained well below 300.23 Moreover, the Signal Corps lost a significant percentage of its skilled enlisted personnel each year to private industry, which could offer them significantly higher salaries.24 The branch's annual appropriation plummeted from nearly $73 million for fiscal year 1919 to less than $2 million for fiscal year 1923, and by 1928 it had risen only slightly.25 The War Department's financial straits dictated that surplus war equipment be used up, even if obsolete, and only limited funds were available to purchase or develop new items.

SIGNAL STUDENTS TAKE A BREAK FROM THEIR CLASSES

Signal training suffered as well. During demobilization, most of the wartime camps had been shut down. The Signal School at Fort Leavenworth, which had been closed during the war, opened briefly to conduct courses for officers from September 1919 to June 1920 before shutting its doors permanently. But there was an important exception to the general picture of decline: Camp Vail, New Jersey, became the new location of the Signal School, officially opening in October 1919. The school offered training for both officers and enlisted men of the Signal Corps as well as those from other branches.26 In 1920 the school began instructing members of the Reserve Officers Training Corps, and the following year added courses for National Guard and Reserve officers. Students from foreign armies, such as Cuba, Peru, and Chile, also received training at Camp Vail. Here the Corps prepared its field manuals, regulations, and other technical publications as well as its correspondence courses and testing materials.27 The post also had the advantage of being close to New York City, where the students traveled to view the latest in commercial communication systems. They gained practical field experience by participating in the annual Army War College maneuvers. Signal officers could further enhance their education by attending communication engineering courses at such institutions as Yale University and the Massachusetts Institute of Technology.28 In 1925 Camp Vail became a permanent post known as Fort Monmouth.29 Here the 51st Signal Battalion (which had fought during World War I as the 55th Telegraph Battalion), the Signal Corps' only active battalion-size unit, made its home during the interwar years, along with the 15th Signal Service Company and the 1st Signal Company.30

Fort Monmouth also became the home of the Signal Corps' Pigeon Breeding and Training Center. Although the Army had sold most of its birds at the end of the war, the Signal Corps retained a few lofts along the Mexican border, in the Panama Canal Zone, and at several camps and flying stations. At Monmouth, the Corps' pigeon experts devoted much effort to training birds to fly at night. Some may also have wished that they could breed the pigeons with parrots so the birds could speak their messages.31 Each year the Corps entered its pigeons in exhibitions and races, winning numerous prizes.
In April 1922 the Signal Corps' pigeons participated in a contest that, however ludicrous to a later age, was taken seriously at the time. Responding to an argument raised by the San Francisco press, Maj. Henry H. Arnold of the Army Air Service challenged the pigeons to a race from Portland, Oregon, to San Francisco, to determine whether a pigeon or a plane could deliver a message faster. As the race began, the pigeons disappeared from view while Arnold struggled for forty-five minutes to start his airplane's cold engine. Then he had to make several stops for fuel. Meanwhile, in San Francisco, citizens received telegraphic bulletins of the race's progress, with the pigeons apparently holding their early lead. Bookies did a brisk business as bettors began backing the birds. When Arnold finally landed in San Francisco after a seven-and-a-half-hour journey, he expected to be the loser. But surprisingly, no pigeons had yet arrived, and none did so for two more days. Perhaps aviation was not just for the birds after all.32 Despite the outcome, the Signal Corps did not abandon its use of pigeons, and in 1927 was maintaining about one thousand birds in sixteen lofts in the United States, the Canal Zone, Hawaii, and the Philippines.33 Although the Signal Corps had lost much of its wartime mission, it still performed an important peacetime function by providing the Army's administrative communications. As it had for many years, the Corps continued to operate the telephone and telegraph systems at Army installations and to maintain coast artillery fire control systems. In addition, the Signal Corps received authorization in 1921 to set up a nationwide radio net. Stations were located at the headquarters of each corps area and department, as well as in certain major cities. Each corps area in turn established its own internal system connecting posts, camps, and stations. The 17th Service Company (redesignated in 1925 as the 17th Signal Service Company) operated the net's headquarters in Washington, D.C., which bore the call letters WVA (later changed, appropriately enough, to WAR).34 Stations at Fort Leavenworth, Kansas, and Fort Douglas, Utah, relayed messages to the West Coast. Due to atmospheric disturbances and other forms of interference, good service meant that a message filed in Washington reached the West Coast by the following day.35 Although established to serve as an emergency communications system in the event of the destruction or failure of the commercial wire network, on a day-to-day basis the radio net handled much of the War Department's message traffic formerly carried by commercial telegraph, saving the government a considerable expense. By 1925, 164 stations, including those on Army ships and in Alaska, came under the net's technical supervision, and the chief signal officer described it as "the largest and most comprehensive radio net of its kind in the world today."36 The success of the radio net led to the establishment of the War Department Message Center on 1 March 1923, through the merger of the War Department's telegraph office with the Signal Corps' own telegraph office and radio station. The chief signal officer became the director of the center, which coordinated departmental communications in Washington and dispatched them by the most appropriate means, whether telegraph, radio, or cable.
Although originally intended for War Department traffic only, the center eventually handled messages for over fifty federal agencies.37 In an attempt to supplement its limited regular force, the Signal Corps formed the Army Amateur Radio System in 1925, with the net control station located at Fort Monmouth. The system operated every Monday night except during the summer months, when static interfered too greatly. The volunteer operators constituted a sizable pool of skilled personnel upon whom the Army could call in case of emergency. Each corps area signal officer appointed an amateur operator, known as the radio aide, to represent the operators in his area.38 Among President Wilson's concerns during the 1919 peace negotiations in Paris had been the future of postwar communications. In the past British companies had controlled global communications through their ownership of most of the world's submarine cables. During the war the British government had exercised its jurisdiction by intercepting cable traffic. Wilson sought to prevent such a monopoly in the future, and debate at the conference revolved around how the captured German cables would be allocated.39 Radio did not appear as an issue on the agenda at Paris, even though it constituted a new force in international communications that would greatly change the balance of the equation. Indications of its potential importance had appeared during the war when the Navy used its station at New Brunswick, New Jersey, to broadcast news to Europe-in particular, the Fourteen Points enunciated by President Wilson. The Germans in turn had used radio to transmit to the United States their willingness to negotiate an armistice with the Allies. When Wilson crossed the Atlantic to attend the peace conference, he had maintained communication with Washington via radiotelephone. (Due to technological limitations, there would be no transatlantic voice telephone cables until after World War II.) Despite these early achievements, radio remained in its infancy. Lacking a nationwide radio broadcasting network, Wilson was compelled to fight for the peace treaty by embarking upon a strenuous barnstorming tour that destroyed his health.40 After the war the new medium soon fulfilled its promise. Radio technology rapidly moved away from the spark-gap method to the continuous waves generated by vacuum tubes, which were capable of carrying voice and music. Radio's ability to be broadcast made it more difficult for any one party or nation to control the dissemination of information. Instead of the point-to-point communications of the telegraph and telephone, radio could reach all who wanted to listen and who possessed a simple receiver. The era of mass communications had arrived. Within the United States, the Navy endorsed the retention of governmental control over radio as a means to prevent foreign domination of the airwaves. Congress did not act accordingly, however, and the government returned the stations to their owners.41 To counter foreign competition, particularly that of the British-controlled Marconi Company, a solution was soon found. In 1919 an all-American firm, the Radio Corporation of America (RCA), was formed through the merger of General Electric and the American Marconi company. 
By means of cross-licensing agreements with the industry's leaders (AT&T, Westinghouse, and the United Fruit Company), RCA obtained the use of their radio patents, thus securing a virtual monopoly over the latest technology.42 Under the leadership of its general manager, David Sarnoff, a former Marconi employee, RCA helped to create the nation's first broadcasting network, the National Broadcasting Company (NBC), in 1926.43 With the wartime restrictions lifted, an extraordinary radio boom swept over the United States. It began in November 1920 when the nation's first commercial radio station went on the air, KDKA in Pittsburgh, owned and operated by the Westinghouse Company.44 In 1922, when more than five hundred new stations went on the air, Chief Signal Officer Squier referred to the radio phenomenon as "the outstanding feature of the year in signal communications."45 The thousands of veterans who had received wireless training during the war plus legions of amateur "hams" with their homemade crystal sets fueled the movement. The spectacular growth of private and commercial radio users necessitated, however, more stringent regulation of licenses and frequencies. A power struggle ensued over who should control the medium, the federal government or private enterprise. Since the Commerce Department had been granted certain regulatory powers under the radio act of 1912, Secretary of Commerce Herbert Hoover attempted to bring order out of the chaos by convening a series of conferences among radio officials in Washington. Ultimately, in 1927, Congress enacted a new Radio Act that created an independent agency to oversee the broadcasting industry, the Federal Radio Commission, forerunner of the present Federal Communications Commission (FCC). Radio thus remained a commercially dominated medium, but subject to governmental regulation.46 The Signal Corps played a role in the industry's growth. The Fourth International Radio Conference was to have met in Washington in 1917, but the war forced its postponement. In 1921 Chief Signal Officer Squier headed an American delegation to Paris to help plan the rescheduled meeting. The rapid technological changes of the next several years, however, caused a further delay. When the conference finally convened in Washington in October 1927, a decade after its initial date, one of the chief items on its agenda was the international allocation of radio frequencies.47 Radio technology was beginning to link the entire world together, including remote and inaccessible regions such as Alaska. Radio had a considerable impact upon the Washington-Alaska Military Cable and Telegraph System, which continued to serve as an important component of the Signal Corps' chain of communications. By 1923 over 40 percent of the Alaskan stations employed radio.48 Meanwhile, the deteriorating condition of the underwater cable, nearly twenty years old, mandated its replacement as soon as possible. Despite the Army's restricted budget, the Signal Corps succeeded in securing an appropriation of $1.5 million for the project. First, the Corps acquired a new cable ship, the Dellwood, to replace the Burnside, which had been in service in Alaska since 1903. Under the supervision of Col. George S. Gibbs, who had helped string the original Alaskan telegraph line as a lieutenant, the Corps completed the laying of the new cable in 1924. With five times the capacity of the earlier cable, it more than met the system's existing and anticipated needs. 
On land, the total mileage of wire lines steadily dwindled as radio links expanded. Radio cost less to maintain both in monetary and in human terms. No longer would teams of men have to endure the hardships of repairing wires in the harsh climate. In 1928 the Signal Corps discontinued the last of its land lines, bringing a colorful era of WAMCATS history to an end.49 Weather reporting continued as an important Signal Corps function, even though the branch had lost most of its experienced observers upon demobilization. New personnel were trained at Fort Monmouth, and officers could receive meteorological instruction at the Massachusetts and California Institutes of Technology. By July 1920 the Corps had fifteen stations providing meteorological information to the Field and Coast Artillery, Ordnance, and Chemical Warfare branches as well as to the Air Service. As in the past, the Signal Corps' weather watchers made their observations three times daily.50 The Corps refrained from duplicating the work of the Weather Bureau, however, and passed its information along for incorporation into the bureau's forecasts. In 1921 the Corps began exchanging data between some of its stations by radios.51 The Air Service, soon to become the Army Air Corps, placed the heaviest demands upon the Signal Corps' meteorological services. In 1921 the Air Service established a model airway between Washington, D.C., and Dayton, Ohio. Although the Signal Corps provided weather information to the Army pilots, it did not initially have enough weather stations to provide the level of assistance needed. In the meantime, the Air Service depended upon the Weather Bureau, only to find that it too had difficulty meeting the airmen's requirements. Consequently, by 1925 the Signal Corps had expanded its meteorological services to include a weather detachment at each Air Service flying field.52 As planes became more sophisticated and powerful, Army pilots attempted more ambitious undertakings. In 1924 they made their first flight around the world, assisted by weather information from the Signal Corps. At its peak the Signal Corps maintained forty-one weather stations across the country.53 The Corps also retained its photographic mission, even though it had lost responsibility for aerial photography in 1918. The branch maintained two photographic laboratories in Washington, D.C.; one for motion pictures at Washington Barracks (now Fort Lesley J. McNair), and the other at 1800 Virginia Avenue, Northwest. Among its services, the Signal Corps sold photos to the public. Its collection of still photographs included its own pictures, as well as those taken by other branches. The Corps also operated a fifty-seat motion-picture theater where films could be viewed for official purposes or the public could view films for prospective purchase.54 In 1925 the Signal Corps acquired responsibility for the Army's pictorial publicity. In this capacity it supervised and coordinated the commercial and news photographers who covered Army activities.55 Following their successful use during World War I, the Army increasingly relied upon motion pictures for training purposes. With the advent of sound films in the late 1920s, film production entered a new era. In 1928 the War Department made the Signal Corps responsible for the production of new training films but neglected to allocate any funds. 
To obtain needed expertise, the Signal Corps called upon the commercial film industry for assistance, and in 1930 the Signal Corps sent its first officer to Hollywood for training sponsored by the Academy of Motion Picture Arts and Sciences.56 While photography played a relatively minor role in the Corps' overall operations, it nonetheless provided valuable documentation of the Army's activities during the interwar period. The Signal Corps underwent its first change of leadership in half a dozen years when General Squier retired on 31 December 1923. In retirement Squier continued to pursue his scientific interests. One of his better known inventions, particularly to those who frequently ride in elevators, was Muzak. Based on his patents for "wired wireless," a system for transmitting radio signals over wires, Squier founded Muzak's parent company, Wired Radio, Inc., in 1922. He did not coin the catchy name, however, until 1934, when he combined the word music with the name of another popular item, the Kodak camera. In that year the Muzak Corporation became an entity and sold its first recordings to customers in Cleveland.57 In addition to his commercial ventures, Squier received considerable professional recognition for his contributions to science, among them the Elliott Cresson Gold Medal and the Franklin Medal, both awarded by the Franklin Institute in Philadelphia. In 1919 he had become a member of the National Academy of Sciences, and he also received honors from the governments of Great Britain, France, and Italy.58 The new chief signal officer, Charles McKinley Saltzman, was a native of Iowa and an 1896 graduate of the U.S. Military Academy. As a cavalry officer, he had served in Cuba during the War with Spain. After transferring to the Signal Corps in 1901, Saltzman embarked upon a new career that included serving on the board that examined the Wrights' airplane during its trials at Fort Myer in 1908 and 1909. During World War I he remained in Washington as the executive officer for the Office of the Chief Signal Officer. Saltzman possessed considerable knowledge about radio and had attended the national and international radio conferences since 1912. With this background he seemed extremely well qualified for the job when, as the Signal Corps' senior colonel, he received the promotion to chief signal officer upon Squier's retirement. The four-year limitation placed on the tenure of branch chiefs in the 1920 defense act obliged General Saltzman to step down in January 1928.59 But retirement did not end his involvement with communications. In 1929 President Hoover appointed him to the Federal Radio Commission, and he served as its chairman from 1930 to 1932. He also played an important role in the formation of the Federal Communications Commission.60 Saltzman's successor, Brig. Gen. George S. Gibbs, also hailed from Iowa but had not attended West Point. He received both the bachelor's and master's degrees of science from the University of Iowa. During the War with Spain he enlisted in the 51st Iowa Volunteer Infantry and sailed for the Philippines. There he transferred to the Volunteer Signal Corps and distinguished himself during the Battle of Manila. In 1901 he obtained a commission in the Signal Corps of the Regular Army, and several highlights of his subsequent career have already been mentioned. Immediately prior to becoming head of the branch in 1928 he was serving as signal officer of the Second Corps Area. 
Under his leadership the Signal Corps entered the difficult decade of the 1930s.61 World War I had witnessed the growth and strengthening of ties between government and business, the beginnings of what President Dwight D. Eisenhower later called the military-industrial complex. But the drastic military cutbacks following victory endangered this relationship. While research became institutionalized in the commercial sector with the rise of the industrial labs, such as those of AT&T and General Electric, the Army lagged behind.62 The Signal Corps' research and development program survived the Armistice, but in reduced form. The scientists recruited for the war effort returned to their own laboratories, although some, like Robert A. Millikan, retained their reserve commissions. While the Signal Corps lacked the money to conduct large-scale research, it did continue what it considered to be the most important projects. However, as Chief Signal Officer Saltzman remarked in his 1924 annual report, "The rapid strides being made in commercial communication makes the military development of a few years ago obsolete and if the Signal Corps is to be found by the next emergency ready for production of modern communication equipment, a materially larger sum must be expended on development before the emergency arises.”63 Because radio had not yet proved itself on the battlefield, wire remained the dominant mode of communication. The 1923 version of the Field Service Regulations reiterated the traditional view: "Telegraph and telephone lines constitute the basic means of signal communication. Other means of communication supplement and extend the service of the telegraph and telephone lines."64 Hence the Signal Corps devoted considerable energy to improving such familiar equipment as field wire, wire carts, the field telephone, and the storage battery. Until 1921 the Signal Corps conducted nonradio research in its electrical engineering laboratory at 1710 Pennsylvania Avenue. In that year the laboratory moved to 1800 Virginia Avenue, Northwest. The Corps also continued to support a laboratory at the Bureau of Standards, where Lt. Col. Joseph O. Mauborgne was in charge from 1923 to 1927.65 One significant advance made in wire communications during the interwar period was the teletypewriter. Although printing telegraphs had been used during World War I, they had not achieved the sophistication of the teletypewriter, which was more rapid and accurate than Morse equipment yet relatively simple to operate. Like the Beardslee telegraph of the Civil War, the teletype did not require operators trained in Morse code. On the other hand, teletype machines were heavier, used more power, and were more expensive to maintain than Morse equipment. Teletypewriters came in two general versions: page-type, resembling an ordinary typewriter, and tape-type, which printed messages on paper tape similar to ticker tape that could be torn off and pasted on sheets. By the late 1930s the Signal Corps had converted most of its administrative telegraph system from Morse to teletype. Teletype's adaptation to tactical signaling awaited, however, the development of new equipment that was portable and rugged. 
After making a good showing during the Army's interwar maneuvers, such teletype machines were on their way to the field by the time the United States entered World War II.66 Although wire remained important, military and civilian scientists attained advances in radio technology that launched Army communications into the electronics age. The Signal Corps conducted radio research in its laboratories at Fort Monmouth. Here in 1924 the Signal Corps Board was organized to study questions of organization, equipment, and tactical and technical procedures. The commandant and assistant commandant of the school served as its top officers.67 A second consultative body, the Signal Corps Technical Committee, had the chief and assistant chief of the Research and Development Division as its chairman and vice chairman, respectively. Transmission by shortwaves, or higher frequency waves, enabled broadcasts to be made over greater distances using less power and at lower cost. Consequently, the Corps gradually converted most of its stations, especially those belonging to the War Department Radio Net, to shortwave operation. By 1929 direct radio communication with San Francisco had been achieved.68 Meanwhile, work continued on the loop radiotelegraph set, first devised during World War I, which became known as model SCR-77. Other ground radio sets included the SCR-131 and 132, the latter with both telegraph and telephone capabilities. Signal Corps engineers made other significant discoveries, among them a new tactical communications device, the walkie-talkie, or SCR-194 and 195. This AM (amplitude-modulated) radiotelephone transceiver (a combination transmitter and receiver) had a range of up to five miles. Weighing about twenty-five pounds, it could be used on the ground or in a vehicle or carried on a soldier's back. The Signal Corps field tested the first models in 1934, and improved versions passed the infantry and field artillery service tests in 1935 and 1936. Lack of funds prevented production until 1939, when the new devices were used successfully during the Plattsburg maneuvers. Walkie-talkies provided a portable means of battlefield communication that increased the ability of infantry to maneuver and enabled commanders to reach units that had outrun field telephone lines.69 As the Army slowly moved toward motorization and mechanization during the 1920s and 1930s, the Signal Corps also addressed the issue of mobile communications. Without radios, early tankers communicated by means of flags and hand signals. As in airplanes, a tank's internal combustion engine interfered with radio reception. The friction of a tank's treads could also generate bothersome static. With the development of FM radio by Edwin H. Armstrong, vehicular radio finally became feasible, but the Signal Corps was hesitant to adopt this revolutionary technology.70 FM eliminated noise and static interference and could transmit a wider range of sound than AM radios. When coupled with crystal control, permitting a radio to be tuned automatically and precisely with just the push of a button, rather than by the intricate twirling of dials, FM radios could easily be used in moving vehicles. Although demonstrations at Fort Knox, Kentucky, in 1939 did not conclusively prove FM's superiority over AM, the chiefs of infantry and field artillery recognized FM's potential and pushed for its adoption. The mechanized cavalry also called for the new type of sets. Nevertheless, the Signal Corps remained skeptical. 
The Corps' preference for wire over radio, the shortage of developmental funds, and the resistance to FM within the communications industry (where it would render existing AM equipment obsolete) delayed FM's widespread introduction into military communications. Meanwhile, with the Army far from being completely motorized, the Signal Corps continued working on a pack radio set for the Cavalry. Only in late 1940 did the Signal Corps begin to respond to the demands from the field for FM radios.71 When the War Department reduced the Signal Corps' communication duties in 1920, it gave the Air Service responsibility for installing, maintaining, and operating radio apparatus for its units and stations. The Signal Corps retained control, however, over aviation-related radio development. The rapid improvements being made in aircraft design necessitated equal progress in aerial radio. In its Aircraft Radio Laboratory at McCook Field, Ohio, the Signal Corps conducted both the development and testing of radios designed for the Air Corps.72 Expanding on its work during World War I, the Signal Corps made significant strides in airborne radio during the postwar period. Improvements took place in the models of the SCR-130 series. Sets were designed for each type of aircraft: observation, pursuit, and bombardment. The pursuit set (SCR-133) provided voice communication between planes at a distance of 5 miles; the observation and bombardment sets (SCRs 134 and 135) had ranges of 30 and 100 miles, respectively. The SCR-136 model provided communication between ground stations and aircraft at distances of 100 miles using radio and 30 miles using telephony. Many technical problems had to be solved in developing these radios, including the interference caused by the plane's ignition system. With the installation of proper shielding, this difficulty could be overcome.73 But despite advances in aerial radio, pilots in the 1930s still relied to some extent on hand signals to direct their squadrons.74 The Signal Corps also developed radios for navigational purposes, basing its technology on work done during the war in direction finding.75 One of the most important navigational aids was the radio beacon, which enabled a plane to follow a signal to its destination. When equipped with radio compasses, which they tuned to the beacons on the ground, pilots no longer had to rely on their senses alone; they could fly "blind," guided by their instruments. This system proved itself in June 1927 when it guided two Army pilots, 1st Lts. Lester J. Maitland and Albert F. Hegenberger, on the first nonstop flight from California to Hawaii. This milestone occurred just a few weeks before Charles Lindbergh made his historic flight across the Atlantic.76 Lieutenant Hegenberger later became head of the Air Corps' Navigational Instrument Section at Wright Field, which was located in the same building as the Signal Corps' Aircraft Radio Laboratory. (McCook Field was incorporated into Wright Field in 1927.) However, the Signal Corps did not always enjoy a cordial relationship with the Air Corps regarding radio development. In fact, Hegenberger, in an attempt to take over the Signal Corps' navigational projects, went so far as to lock the Signal Corps personnel out of his portion of the building they shared. 
When the Air Corps failed in its attempt to carry the mail in 1934, suffering twelve fatalities and sixty-six crashes in four months, some senior Air Corps officers tried to blame the high casualty rate on the Signal Corps for neglecting to develop the appropriate navigational aids. In fact, inexperienced pilots and inadequate training had accounted for many of the accidents. The chief signal officer at that time, Maj. Gen. James B. Allison, and Maj. Gen. Benjamin D. Foulois, chief of the Air Corps, finally agreed in 1935 to discontinue Hegenberger's laboratory.77 In August 1929 the Signal Corps consolidated its research facilities in Washington with those at Fort Monmouth, establishing the Signal Corps Laboratories at Fort Monmouth. In 1935 a modern, permanent laboratory opened there to replace the World War I-vintage buildings previously in use. The new structure was named, most fittingly, Squier Laboratory, in honor of the former chief signal officer and eminent scientist, who had passed away the previous year at the age of sixty-nine.78 Meanwhile, the Signal Corps' Aircraft Radio Laboratory remained at Wright Field because the equipment produced there required continuous flight testing.79 Probably the most significant research undertaken by the Signal Corps between the wars was that pertaining to radar, an offshoot of radio. The word radar is an acronym for radio detection and ranging.80 In brief, radar depends on the reflection of radio waves from solid objects. By sending out a focused radio pulse, which travels at a known rate (the speed of light), and timing the interval between the transmission of the wave and the reception of its reflection or echo, the distance, or range, to an object can be determined. The resultant signals are displayed visually on the screen of a cathode-ray oscilloscope. During the interwar years many other nations, including Germany, Great Britain, and Japan, conducted radar experiments, but secrecy increased along with heightening world tensions. In the United States credit for the initial development of radar belonged to the Navy, which conducted its seminal experimentation at the Naval Research Laboratory in Washington during the 1920s and 1930s. While the Signal Corps did not invent radar, its subsequent efforts played an important role in furthering its evolution.81 The origins of the Army's radar research dated back to World War I, when Maj. William R. Blair, who then headed the Signal Corps' Meteorological Section in the American Expeditionary Forces, conducted experiments in sound ranging for the purpose of locating approaching enemy aircraft by the noise of their engines. After the war Blair served as chief of the meteorological section in Washington and in 1926 became head of the Research and Engineering Division. In 1930 he was named director of the laboratories at Fort Monmouth. In February 1931 Blair began research on radio detection using both heat and high-frequency, or infrared, waves. Known as Project 88, this undertaking had been transferred to the Signal Corps from the Ordnance Department. When these methods proved disappointing, Blair began investigating the pulse-echo method of detection.82 Contrary to its usual procedure, the Signal Corps conducted all of its developmental work on radar in its own laboratories, rather than contracting components out to private industry. 
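The pulse-echo arithmetic described above can be made concrete with a short worked example. This is an illustration only, not a calculation drawn from the source; the symbols c (speed of light) and Δt (round-trip delay) are standard, and the 1.3-millisecond delay is chosen simply because it corresponds to roughly the 120-mile range later cited for the SCR-270:

\[
R = \frac{c\,\Delta t}{2}, \qquad c \approx 3.0\times10^{5}\ \text{km/s}
\]
\[
\Delta t = 1.3\ \text{ms} \;\Rightarrow\; R \approx \frac{(3.0\times10^{5}\ \text{km/s})(1.3\times10^{-3}\ \text{s})}{2} \approx 195\ \text{km} \approx 120\ \text{miles}.
\]

The factor of two reflects the round trip: the pulse must travel out to the target and back again before the echo can be timed.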
Chief Signal Officer Allison did not believe that commercial firms could yet "offer useful results in practical form."83 Although Allison requested additional money for radar research, the War Department provided none, and the Signal Corps obtained the necessary funds from cutbacks in other projects. In December 1936 Signal Corps engineers conducted the first field test of their radar equipment at the Newark, New Jersey, airport where it detected an airplane at a distance of seven miles. In May 1937 the Signal Corps demonstrated its still crude radar, the future SCR-268, for Secretary of War Harry H. Woodring; Brig. Gen. Henry H. Arnold, assistant chief of the Air Corps; and other government officials at Fort Monmouth.84 Impressed by its potential, Woodring later wrote to Allison: "It gave tangible evidence of the amazing scientific advances made by the Signal Corps in the development of technical equipment."85 Arnold, also responding favorably, urged the Signal Corps to develop a long-range version for use as an early warning device. With this high-level support, the Signal Corps received the funds it needed to continue its development program.86 The Corps' application of radar to coast defense was an extension of its longstanding work in the development of electrical systems for that purpose, which had begun in the 1890s. Because national policy remained one of isolationism, American military planners envisioned any future war as defensive. Consequently, the Army placed great reliance upon warning systems to protect against surprise attack by sea and especially by air. Hence the Signal Corps developed the SCR-268, a short-range radar set designed to control searchlights and antiaircraft guns, and subsequently designed for the Air Corps two sets for long-range aircraft detection: SCR-270, a mobile set with a range of 120 miles, and SCR-271, a fixed radar with similar capabilities.87 In an interesting historical parallel, the Signal Corps carried out its radar testing at the same locations-Sandy Hook and the Highlands at Navesink, New Jersey-where Assistant Surgeon Albert J. Myer had tested his wigwag signals with 2d Lt. Edward P Alexander prior to the Civil War. While Myer had favored these sites for their proximity to New York Harbor, the later generation of experimenters found them convenient to Fort Monmouth. Here and elsewhere the Signal Corps was bringing the Army into the electronics age.88 While the cost of technology steadily rose, the amount of money the nation was willing to spend on its Army tended to decline during the early 1930s, as the nation plunged into the Great Depression that followed the stock market crash of October 1929. Two veteran Signal Corps officers led the branch during this difficult period: General Gibbs and his successor, Maj. Gen. Irving J. Carr. Gibbs, who remained at the helm until 30 June 1931, counted among his major achievements the consolidation of the Corps' laboratories and a reorganization and restructuring of the Signal Office that endured until World War II.89 Upon retirement he became an executive with several communications firms, an indication of the increasingly close relationship between the military and industry, based in part on the growing similarity of military and civilian technology.90 General Carr, who received a degree in civil engineering from the Pennsylvania Military College in 1897, had served as an infantry lieutenant during the Philippine Insurrection. 
Graduating from the Army Signal School in 1908, he was detailed to the Signal Corps during World War I. Carr served in France successively as chief signal officer of the 2d Division, the IV Army Corps, and the Third Army. In addition to attending the General Staff School and the Army War College after the war, he served as signal officer of the Western Department and as chief of staff of the Hawaiian Division. At the time of his appointment as chief signal officer, Carr held the position of executive officer in the Office of the Assistant Secretary of War.91 General Carr faced a situation that had been transformed by the economic crisis. While Americans stood in breadlines, the Army, already experiencing hard times because of national pacifism and war-weariness, felt the added impact of the Great Depression. In the midst of this national tragedy, military preparedness took a backseat to social and economic concerns. Chief of Staff General Douglas MacArthur did nothing to improve the Army's image by dispersing with unnecessary brutality the so-called Bonus Army of World War I veterans who marched on Washington in the summer of 1932. This violent incident may also have contributed to President Herbert Hoover's defeat by Franklin D. Roosevelt in the presidential election that fall. Despite its lack of funds, the Army sought new roles to assist the nation through its time of economic distress. Its contribution to the organization of the Civilian Conservation Corps (CCC), established as part of President Roosevelt's New Deal in April 1933, proved popular but a drain on its limited resources. The CCC's activities included reforestation, soil conservation, fire prevention, and similar projects. The Army set up and ran the camps and supplied food, clothing, shelter, medical care, and recreation. For its part, the Signal Corps provided radio communication and linked radio stations at CCC district headquarters with the War Department Radio Net. Members of the Army Amateur Radio System participated in this effort. The Signal Corps also helped to advertise this least partisan of New Deal ventures, completing a three-reel historical film about the CCC in 1935.92 The Second International Polar Year was held from 1932 to 1934, fifty years after the original event. Financial support from the Rockefeller Foundation helped make this effort possible in the midst of the worldwide depression. While Arctic studies remained the focus, more countries participated and more branches of science were included than before. Although the Signal Corps did not play as prominent a role as in the 1880s, it nonetheless lent its expertise to the scientists involved in polar research. The Corps established communication facilities for the Army's station near the Arctic Circle and supplied equipment for studying problems of radio transmission.93 With General Carr's retirement, Maj. Gen. James B. Allison became chief signal officer on 1 January 1935. Allison had received extensive experience in signal training during the years 1917-1919 when he commanded Signal Corps training camps at Monterey, California; Fort Leavenworth; and Camp Meade, Maryland. From September 1925 to June 1926 he served as commandant of the Signal School. Prior to becoming chief, he had been signal officer of the Second Corps Area at Governors Island, New York. Allison was fortunate to assume his new duties during the same year that the Army acquired a new chief of staff, General Malin Craig, who recognized the value of communications. 
Craig, concerned about the threatening world situation in both the Far East and Europe, pressed for a limited rearmament. He also supported increases in the Signal Corps' budget that finally ended its years of impoverishment.94 The growing danger of war, the demands for improved technology, and even the Great Depression itself improved the Signal Corps' prospects. The turnover rate of its enlisted personnel dropped as joblessness increased in civilian life. When Congress enlarged the size of the Army in 1935, the Signal Corps received an additional 953 enlisted men, enabling the Corps to handle the growing demands on its services caused by the public works programs of the New Deal and the expanding activities of the Air Corps.95 The Corps also held onto one of its traditional activities, WAMCATS, in the face of renewed demands that the government sell the Alaska system because of its predominantly commercial nature. It was also argued that the release of the more than two hundred enlisted men assigned to duty in Alaska would help ease the Corps' overall personnel shortage. But Chief Signal Officer Gibbs had opposed the sale, and Congress did not act upon the War Department's enabling legislation. While the long-standing debate continued as to whether to transfer the system to another agency or turn it over to commercial interests, WAMCATS remained in the Signal Corps' hands.96 Under the Corps' stewardship the system continued to develop. By 1931 radio had overtaken the use of cables, but the underwater lines were kept in operable condition in case of emergencies. In 1933 the Army transferred the Dellwood, now left with little to do, to the U.S. Shipping Board, which in turn sold it to a commercial cannery.97 To reflect its new image, the WAMCATS underwent a name change in 1936, becoming the Alaska Communication System (ACS).98 WAMCATS continued to render important service to Alaskans, proving itself to be a "lifeline to the north." In 1934, when much of Nome went up in flames, the city's WAMCATS station stayed on the air to coordinate relief and rescue work. WAMCATS also played a key role in the drama surrounding the plane crash that killed humorist Will Rogers and aviator Wiley Post near Point Barrow in August 1935. Sgt. Stanley R. Morgan, the Signal Corps radio operator there, learned of the accident from a native runner. After summoning help, Morgan traveled to the crash site to do what he could. Unfortunately, both men had died instantly. Returning to his station, Morgan signaled news of the tragedy to the world.99 The Signal Corps' photographic mission continued to expand during the 1930s. Photographic training was briefly transferred to the Army War College, but soon returned to Fort Monmouth. In 1933 the Corps produced its first feature-length sound movie, depicting infantry maneuvers at Fort Benning, Georgia. The Corps also released several new training films, including such action-packed features as "Cavalry Crossing Unfordable Stream" and "Elementary Principles of the Recoil Mechanism." The shortage of funds, however, prevented the Signal Corps from making many films prior to World War II. The Corps did work diligently to index and reedit its World War I films, making master copies and providing better storage facilities for these priceless records.100 Despite many difficulties, the Signal Corps' operations increased overall during the 1930s. But it lost one function, military meteorology. 
As the decade progressed, the branch simply could not keep up with the demands made on its weather service by the Air Corps. Following the airmail fiasco, the Air Corps sought to upgrade operations at some stations to provide weather service around the clock and throughout the year. With its limited manpower and varied missions, the task was beyond the Signal Corps' capability. In his 1936 annual report Chief Signal Officer Allison recommended that "if the required additional personnel could not be given [to] the Signal Corps, all meteorological duties ... be transferred to the Air Corps which is the principal user of the meteorological service."101 The secretary of war agreed, and returned weather reporting and forecasting to the using arms effective 1 July 1937. As a result, many of the Signal Corps' meteorologists transferred to the Air Corps. Although the Signal Corps retained responsibility for the development, procurement, supply, and maintenance of meteorological equipment, the sun had set once more on its weather service.102 Upon General Allison's retirement at the end of September 1937, Col. Joseph O. Mauborgne was designated to become the new chief signal officer, effective on 1 October. Originally commissioned as a second lieutenant of infantry in 1903, he had served with the Signal Corps since 1916 and transferred to the branch in 1920. A well-known expert in radio and cryptanalysis, Mauborgne had been chief of the Corps' Research and Engineering Division during World War I. His postwar assignments included heading the Signal Corps Laboratory at the Bureau of Standards and commanding, for a second time, the Research and Engineering Division in the Signal Office. He also served as a technical adviser at several international communications conferences, including the radio conference held in Washington in 1927. After becoming a colonel in 1934, he was the director of the Aircraft Radio Laboratory from 1936 to 1937. In addition to his scientific expertise, Mauborgne possessed considerable artistic talent as a portrait painter, etcher, and maker of prize-winning violins.103 Among its many duties, the Signal Corps held responsibility for revising and compiling all codes and ciphers used by the War Department and the Army. Under General Mauborgne, himself a gifted cryptologist, activities in this area expanded. In 1929 General Gibbs had established the Signal Intelligence Service to control all Army cryptology. In addition to code and cipher work, the Signal Intelligence Service absorbed the covert intelligence-gathering activities formerly conducted by the so-called Black Chamber within the Military Intelligence Division of the War Department General Staff. William F. Friedman became the Signal Intelligence Service's first chief. After serving in the intelligence section of the General Staff, AEF, during World War I, Friedman had joined the Signal Corps in 1921 to develop new codes and ciphers. In 1922 he became chief cryptanalyst in the code and cipher compilation section of the Research and Development Division where he became known for his remarkable code-breaking abilities. In addition to cryptographic skills, Friedman shared Mauborgne's interest in the violin and formed a musical group that included the chief signal officer and several friends.104 In 1935 the Army reinstituted its program of large-scale maneuvers, which it had not held since before World War I.
The 51st Signal Battalion, the only unit of its type, provided the communications for these exercises. In 1937 the Army tested its new "triangular" (three-regiment) division at San Antonio, Texas. This streamlined unit, reduced from four regiments and without any brigade headquarters, had been favored by Pershing in 1919. Providing more mobility and flexibility than the square division of World War I, the triangular division would become the standard division of the next war. While the divisional signal company was somewhat larger (7 officers and 182 men) than that provided for in the 1920 tables of organization, the signal complement of the combat arms was cut in half.105 Thus, helped by a variety of factors, the Signal Corps weathered the years of political isolationism and economic depression. As a technical service, it benefited from the rapid development in communications technology pioneered by civilian industry and from the growing realization among military and civilian leaders alike that science would be a crucial factor in any future conflict. Unfortunately, that future was closer than many Americans liked to think. Throughout the 1930s the world situation had grown increasingly ominous. Adolf Hitler came to power in Germany in 1933 and, denouncing the Versailles treaty, undertook a program of rearmament. Italy's dictator, Benito Mussolini, began a course of aggression by attacking Ethiopia in 1935. In 1939 Hitler signed a treaty with the Soviet dictator, Joseph Stalin, and invaded Poland, precipitating a general war in Europe. Across the Pacific, Japan unleashed its power, seizing Manchuria in 1931 and invading China in 1937. Finally, the formation of the Rome-Berlin-Tokyo Axis in September 1940 appeared to unite three heavily armed and aggressive nations against the ill-armed democracies. After years of stagnation, the United States began a gradual military buildup in the late 1930s. President Roosevelt, who had once served as assistant secretary of the Navy, at first championed only a naval rebuilding program, but the Army eventually began to receive greater attention. In his annual message of January 1938, Roosevelt requested an Army budget of $17 million, a substantial sum but considerably less than the Navy's allotment of $28 million.106 Having learned some hard lessons from its unpreparedness for World War I, the War Department devoted considerable attention during the interwar period to planning for future wars. Responsibility for strategic planning rested with the War Plans Division of the General Staff, while the 1920 defense act assigned supervision of procurement and industrial mobilization planning to the assistant secretary of war.107 Despite the power wielded by the General Staff, considerable administrative control still existed at the branch level. For its part, the Signal Corps contained a procurement planning section which prepared estimates of requirements, conducted surveys of manufacturers, and identified scarce raw materials, such as the Brazilian quartz used in radios.108 The Army's Industrial Mobilization Plan of 1930 established procedures for harnessing the nation's economic might, while the Protective Mobilization Plan of 1937 set forth the steps for manpower mobilization, beginning with the induction of the National Guard. These plans failed, however, to envision a conflict on a scale larger than World War I.
For instance, estimates placed the Signal Corps' monthly requirement for batteries during wartime at five million; the actual number later proved to be more than four times that amount.109 With the outbreak of war in Europe, the United States undertook a limited preparedness effort with the emphasis on hemispheric defense. President Roosevelt declared a "limited national emergency" on 8 September 1939 and authorized an increase in the Regular Army's enlisted strength to 227,000.110 Public opinion, however, remained committed to staying out of war and protecting "America First." The blitzkrieg tactics of the Nazis in Poland suggested that this war would be a mobile one, unlike the stalemate of the Western Front during World War I. By 1939 the United States Army had undergone extensive motorization, although mechanization remained in its early stages. For the Signal Corps, motorization meant developing light automobiles equipped with radios as reconnaissance vehicles and adapting motor vehicles to lay wire.111 But little had been done to integrate communications into larger, combined arms mobile formations. During the spring of 1940 the Army held its first genuine corps and army training maneuvers. The exercises, conducted in May 1940 along the Texas and Louisiana border, "tested tactical communications more thoroughly than anything else had since World War I”112 Unfortunately, much Signal Corps equipment proved deficient. The W-110 field wire, for instance, worked poorly when wet and suffered considerable damage from motor vehicles. (Local cattle also liked to chew contentedly upon it.) Moreover, the SCR-197, designed to serve as a long-range mobile radio, could not function while in motion. Intended for operation from the back of a truck, the radio could only send or receive messages after the vehicle had stopped. First, however, the crew had to dismount to deploy the antenna and start the gasoline generator. The allocation of frequencies also became a problem with the proliferation of radios throughout the Army's new triangular divisions. In part, the frequency issue arose because the radios in use were obsolescent. They did not reflect the most recent innovations-crystal control and FM-that would both increase the range of available frequencies and enable operators to make precise adjustments to particular frequencies with just the push of a button. Until the Army adopted improved radios, it could not fight a modern war successfully. Moreover, in addition to highlighting the general inadequacy of tactical communications, the 1940 maneuvers demonstrated that the Signal Corps needed additional men and units to carry out its mission.113 Although technically a neutral nation, the United States gradually began to prepare for the possibility of entering the war and increased its support to the Allies. On 10 May 1940 Germany invaded France and the Low Countries. The subsequent defeat of the Allied armies, followed by the narrow escape of the British expeditionary force from Dunkirk and the fall of France in June 1940, brought Allied fortunes to the brink of disaster. At the end of August Congress authorized the president to induct the National Guard into service for a year and to call up the Organized Reserves. Furthermore, the Selective Service and Training Act, signed into law on 16 September 1940, initiated the first peacetime draft in the nation's history. 
While the United States was not yet ready to become a direct participant, the signing of the Lend-Lease Act in March 1941 officially made it the world's "arsenal of democracy."114 While the nation moved toward war, the Signal Corps underwent some changes of its own. The pressure of the impending conflict resulted in enormous demands for new communications equipment. The Air Corps, in particular, grew increasingly impatient with the slow pace of progress, especially in relation to radar. Under intense criticism from the airmen, Chief Signal Officer Mauborgne was suddenly relieved of his duties by Chief of Staff General George C. Marshall, Jr., in August 1941. Pending Mauborgne's official retirement the following month, Brig. Gen. Dawson Olmstead stepped in as acting chief.115 On 24 October 1941, Olmstead officially became chief signal officer with the rank of major general, the fifteenth individual to hold that post. A graduate of West Point, class of 1906, Olmstead had received his commission in the Cavalry. During 1908 and 1909 he had attended the Signal School at Fort Leavenworth. After World War I, during which he had served in the Inspector General's Office of the AEF, he held a number of Signal Corps-related assignments. These included signal officer of the Hawaiian Department from 1925 to 1927, officer in charge of the Alaska communication system from 1931 to 1933, and commandant of the Signal School at Fort Monmouth from 1938 to 1941.116 For the new chief signal officer, as for the nation, war was now close at hand. Despite outstanding work by the Signal Intelligence Service, now comprising almost three hundred soldiers and civilians, the exact point of danger eluded American leaders. In August 1940 William Friedman and his staff had broken PURPLE, the Japanese diplomatic code, and the intelligence received as a consequence became known as MAGIC.117 While MAGIC yielded critical information regarding Japanese diplomatic strategy, the intercepted messages did not explicitly reveal Japanese war plans.118 American officials knew that war was imminent, but considered a Japanese attack on Hawaii no more than a remote possibility. During 1940 President Roosevelt had transferred the Pacific Fleet from bases on the West Coast of the United States to Pearl Harbor on the Hawaiian island of Oahu, hoping that its presence might act as a deterrent upon Japanese ambitions. Yet the move also made the fleet more vulnerable. Despite Oahu's strategic importance, the air warning system on the island had not become fully operational by December 1941. The Signal Corps had provided SCR-270 and 271 radar sets earlier in the year, but the construction of fixed sites had been delayed, and radar protection was limited to six mobile stations operating on a part-time basis to test the equipment and train the crews. Though aware of the dangers of war, the Army and Navy commanders on Oahu, Lt. Gen. Walter C. Short and Admiral Husband E. Kimmel, did not anticipate that Pearl Harbor would be the target; a Japanese strike against American bases in the Philippines appeared more probable. In Hawaii, sabotage and subversive acts by Japanese inhabitants seemed to pose more immediate threats, and precautions were taken.
The Japanese-American population of Hawaii proved, however, to be overwhelmingly loyal to the United States.119 Because the Signal Corps' plans to modernize its strategic communications during the previous decade had been stymied, the Army had only a limited ability to communicate with the garrison in Hawaii. In 1930 the Corps had moved WAR's transmitter to Fort Myer, Virginia, and had constructed a building to house its new, high-frequency equipment. Four years later it added a new diamond antenna, which enabled faster transmission.120 But in 1939, when the Corps wished to further expand its facilities at Fort Myer to include a rhombic antenna for point-to-point communication with Seattle, it ran into difficulty. The post commander, Col. George S. Patton, Jr., objected to the Signal Corps' plans. The new antenna would encroach upon the turf he used as a polo field and the radio towers would obstruct the view. Patton held his ground and prevented the Signal Corps from installing the new equipment. At the same time, the Navy was about to abandon its Arlington radio station located adjacent to Fort Myer and offered it to the Army. Patton, wishing instead to use the Navy's buildings to house his enlisted personnel, opposed the station's transfer. As a result of the controversy, the Navy withdrew its offer and the Signal Corps lost the opportunity to improve its facilities.121 Though a seemingly minor bureaucratic battle, the situation had serious consequences two years later. Early in the afternoon of 6 December 1941, the Signal Intelligence Service began receiving a long dispatch in fourteen parts from Tokyo addressed to the Japanese embassy in Washington. The Japanese deliberately delayed sending the final portion of the message until the next day, in which they announced that the Japanese government would sever diplomatic relations with the United States effective at one o'clock that afternoon. At that hour, it would be early morning in Pearl Harbor. Upon receiving the decoded message on the morning of 7 December, Chief of Staff Marshall recognized its importance. Although he could have called Short directly, Marshall did not do so because the scrambler telephone was not considered secure. Instead, he decided to send a written message through the War Department Message Center. Unfortunately, the center's radio encountered heavy static and could not get through to Honolulu. Expanded facilities at Fort Myer could perhaps have eliminated this problem. The signal officer on duty, Lt. Col. Edward F French, therefore sent the message via commercial telegraph to San Francisco, where it was relayed by radio to the RCA office in Honolulu. That office had installed a teletype connection with Fort Shafter, but the teletypewriter was not yet functional. An RCA messenger was carrying the news to Fort Shafter by motorcycle when Japanese bombs began falling; a huge traffic jam developed because of the attack, and General Short did not receive the message until that afternoon. Earlier that day, as the sun rose over Opana on the northern tip of Oahu, two Signal Corpsmen, Pvts. George A. Elliott and Joseph L. Lockard, continued to operate their radar station, although their watch had ended at 0700. At 0702 a large echo appeared on their scope, indicating a sizable formation of incoming planes about 130 miles away. They telephoned their unusual sighting to the radar information center at Fort Shafter, but the young Air Corps lieutenant on duty told them to "Forget it." 
An attack was not expected, and the planes were assumed to be American bombers scheduled to arrive that morning from California. Nevertheless, Elliott and Lockard tracked the planes until they became lost on their scope. Just minutes before the attack began at 0755, the two men left their station for breakfast.122 Despite the breaking of PURPLE, the surprise at Pearl Harbor was "complete and shattering."123 The following day President Roosevelt went before Congress to ask for a declaration of war against Japan. In an eloquent speech, he called 7 December "a date which will live in infamy," and the House and Senate voted for war with only one dissenter.124 On 11 December, Germany and Italy declared war on the United States, and Congress replied in kind. Despite Woodrow Wilson's lofty intentions, World War I had not made the world safe for democracy; with Hitler's armies supreme in Europe and Japanese forces sweeping through the Far East, freedom appeared to be in greater peril than in 1917. In just twenty years the hopes for a lasting peace had vanished, and once again the United States prepared to throw its might on the side of the Allies. Angered by the bombing of Pearl Harbor, the American people entered World War II with a strong sense of mission and purpose. At the same time that Japanese war planes shattered the Pacific Fleet, they also destroyed the American sense of invulnerability: the nation's ocean bulwark had been breached. Nevertheless, displaying his characteristic optimism, President Roosevelt proclaimed on 9 December: "With confidence in our armed forces, with unbounded determination of our people, we will gain the inevitable triumph."125 In this triumph, the Signal Corps would play a pivotal role.
http://www.history.army.mil/books/30-17/S_6.htm
A brief history of the United States
[Photograph courtesy of the U.S. National Archives and Records Administration]
The first Europeans to reach North America were Icelandic Vikings, led by Leif Ericson, about the year 1000. Traces of their visit have been found in the Canadian province of Newfoundland, but the Vikings failed to establish a permanent settlement and soon lost contact with the new continent. Five centuries later, the demand for Asian spices, textiles, and dyes spurred European navigators to dream of shorter routes between East and West. Acting on behalf of the Spanish crown, in 1492 the Italian navigator Christopher Columbus sailed west from Europe and landed on one of the Bahama Islands in the Caribbean Sea. Within 40 years, Spanish adventurers had carved out a huge empire in Central and South America. The first successful English colony was founded at Jamestown, Virginia, in 1607. A few years later, English Puritans came to America to escape religious persecution for their opposition to the Church of England. In 1620, the Puritans founded Plymouth Colony in what later became Massachusetts. Plymouth was the second permanent British settlement in North America and the first in New England. In New England the Puritans hoped to build a "city upon a hill" -- an ideal community. Ever since, Americans have viewed their country as a great experiment, a worthy model for other nations to follow. The Puritans believed that government should enforce God's morality, and they strictly punished heretics, adulterers, drunks, and violators of the Sabbath. In spite of their own quest for religious freedom, the Puritans practiced a form of intolerant moralism. In 1636 an English clergyman named Roger Williams left Massachusetts and founded the colony of Rhode Island, based on the principles of religious freedom and separation of church and state, two ideals that were later adopted by framers of the U.S. Constitution. Colonists arrived from other European countries, but the English were far better established in America. By 1733 English settlers had founded 13 colonies along the Atlantic Coast, from New Hampshire in the North to Georgia in the South. Elsewhere in North America, the French controlled Canada and Louisiana, which included the vast Mississippi River watershed. France and England fought several wars during the 18th century, with North America being drawn into every one. The end of the Seven Years' War in 1763 left England in control of Canada and all of North America east of the Mississippi. Soon afterwards England and its colonies were in conflict. The mother country imposed new taxes, in part to defray the cost of fighting the Seven Years' War, and expected Americans to lodge British soldiers in their homes. The colonists resented the taxes and resisted the quartering of soldiers. Insisting that they could be taxed only by their own colonial assemblies, the colonists rallied behind the slogan "no taxation without representation." All the taxes, except one on tea, were removed, but in 1773 a group of patriots responded by staging the Boston Tea Party. Disguised as Indians, they boarded British merchant ships and dumped 342 crates of tea into Boston harbor. This provoked a crackdown by the British Parliament, including the closing of Boston harbor to shipping. Colonial leaders convened the First Continental Congress in 1774 to discuss the colonies' opposition to British rule.
War broke out on April 19, 1775, when British soldiers confronted colonial rebels in Lexington, Massachusetts. On July 4, 1776, the Continental Congress adopted a Declaration of Independence. At first the Revolutionary War went badly for the Americans. With few provisions and little training, American troops generally fought well, but were outnumbered and overpowered by the British. The turning point in the war came in 1777 when American soldiers defeated the British Army at Saratoga, New York. France had secretly been aiding the Americans, but was reluctant to ally itself openly until they had proved themselves in battle. Following the Americans' victory at Saratoga, France and America signed treaties of alliance, and France provided the Americans with troops and warships. The last major battle of the American Revolution took place at Yorktown, Virginia, in 1781. A combined force of American and French troops surrounded the British and forced their surrender. Fighting continued in some areas for two more years, and the war officially ended with the Treaty of Paris in 1783, by which England recognized American independence. The framing of the U.S. Constitution and the creation of the United States are covered in more detail in chapter 4. In essence, the Constitution alleviated Americans' fear of excessive central power by dividing government into three branches -- legislative (Congress), executive (the president and the federal agencies), and judicial (the federal courts) -- and by including 10 amendments known as the Bill of Rights to safeguard individual liberties. Continued uneasiness about the accumulation of power manifested itself in the differing political philosophies of two towering figures from the Revolutionary period. George Washington, the war's military hero and the first U.S. president, headed a party favoring a strong president and central government; Thomas Jefferson, the principal author of the Declaration of Independence, headed a party preferring to allot more power to the states, on the theory that they would be more accountable to the people. Jefferson became the third president in 1801. Although he had intended to limit the president's power, political realities dictated otherwise. Among other forceful actions, in 1803 he purchased the vast Louisiana Territory from France, almost doubling the size of the United States. The Louisiana Purchase added more than 2 million square kilometers of territory and extended the country's borders as far west as the Rocky Mountains in Colorado. In the first quarter of the 19th century, the frontier of settlement moved west to the Mississippi River and beyond. In 1828 Andrew Jackson became the first "outsider" elected president: a man from the frontier state of Tennessee, born into a poor family and outside the cultural traditions of the Atlantic seaboard. Although on the surface the Jacksonian Era was one of optimism and energy, the young nation was entangled in a contradiction. The ringing words of the Declaration of Independence, "all men are created equal," were meaningless for 1.5 million slaves. (For more on slavery and its aftermath, see chapters 1 and 4.) In 1820 southern and northern politicians debated the question of whether slavery would be legal in the western territories. Congress reached a compromise: Slavery was permitted in the new state of Missouri and the Arkansas Territory but barred everywhere west and north of Missouri. 
The outcome of the Mexican War of 1846-48 brought more territory into American hands -- and with it the issue of whether to extend slavery. Another compromise, in 1850, admitted California as a free state, with the citizens of Utah and New Mexico being allowed to decide whether they wanted slavery within their borders or not (they did not). But the issue continued to rankle. After Abraham Lincoln, a foe of slavery, was elected president in 1860, 11 states left the Union and proclaimed themselves an independent nation, the Confederate States of America: South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, Texas, Virginia, Arkansas, Tennessee, and North Carolina. The American Civil War had begun. The Confederate Army did well in the early part of the war, and some of its commanders, especially General Robert E. Lee, were brilliant tacticians. But the Union had superior manpower and resources to draw upon. In the summer of 1863 Lee took a gamble by marching his troops north into Pennsylvania. He met a Union army at Gettysburg, and the largest battle ever fought on American soil ensued. After three days of desperate fighting, the Confederates were defeated. At the same time, on the Mississippi River, Union General Ulysses S. Grant captured the city of Vicksburg, giving the North control of the entire Mississippi Valley and splitting the Confederacy in two. Two years later, after a long campaign involving forces commanded by Lee and Grant, the Confederates surrendered. The Civil War was the most traumatic episode in American history. But it resolved two matters that had vexed Americans since 1776. It put an end to slavery, and it decided that the country was not a collection of semi-independent states but an indivisible whole. Abraham Lincoln was assassinated in 1865, depriving America of a leader uniquely qualified by background and temperament to heal the wounds left by the Civil War. His successor, Andrew Johnson, was a southerner who had remained loyal to the Union during the war. Northern members of Johnson's own party (Republican) set in motion a process to remove him from office for allegedly acting too leniently toward former Confederates. Johnson's acquittal was an important victory for the principle of separation of powers: A president should not be removed from office because Congress disagrees with his policies, but only if he has committed, in the words of the Constitution, "treason, bribery, or other high crimes and misdemeanors." Within a few years after the end of the Civil War, the United States became a leading industrial power, and shrewd businessmen made great fortunes. The first transcontinental railroad was completed in 1869; by 1900 the United States had more rail mileage than all of Europe. The petroleum industry prospered, and John D. Rockefeller of the Standard Oil Company became one of the richest men in America. Andrew Carnegie, who started out as a poor Scottish immigrant, built a vast empire of steel mills. Textile mills multiplied in the South, and meat-packing plants sprang up in Chicago, Illinois. An electrical industry flourished as Americans made use of a series of inventions: the telephone, the light bulb, the phonograph, the alternating-current motor and transformer, motion pictures. In Chicago, architect Louis Sullivan used steel-frame construction to fashion America's distinctive contribution to the modern city: the skyscraper. But unrestrained economic growth brought dangers. 
To limit competition, railroads merged and set standardized shipping rates. Trusts -- huge combinations of corporations -- tried to establish monopoly control over some industries, notably oil. These giant enterprises could produce goods efficiently and sell them cheaply, but they could also fix prices and destroy competitors. To counteract them, the federal government took action. The Interstate Commerce Commission was created in 1887 to control railroad rates. The Sherman Antitrust Act of 1890 banned trusts, mergers, and business agreements "in restraint of trade." Industrialization brought with it the rise of organized labor. The American Federation of Labor, founded in 1886, was a coalition of trade unions for skilled laborers. The late 19th century was a period of heavy immigration, and many of the workers in the new industries were foreign-born. For American farmers, however, times were hard. Food prices were falling, and farmers had to bear the costs of high shipping rates, expensive mortgages, high taxes, and tariffs on consumer goods. With the exception of the purchase of Alaska from Russia in 1867, American territory had remained fixed since 1848. In the 1890s a new spirit of expansion took hold. The United States followed the lead of northern European nations in asserting a duty to "civilize" the peoples of Asia, Africa, and Latin America. After American newspapers published lurid accounts of atrocities in the Spanish colony of Cuba, the United States and Spain went to war in 1898. When the war was over, the United States had gained a number of possessions from Spain: Cuba, the Philippines, Puerto Rico, and Guam. In an unrelated action, the United States also acquired the Hawaiian Islands. Yet Americans, who had themselves thrown off the shackles of empire, were not comfortable with administering one. In 1902 American troops left Cuba, although the new republic was required to grant naval bases to the United States. The Philippines obtained limited self-government in 1907 and complete independence in 1946. Puerto Rico became a self-governing commonwealth within the United States, and Hawaii became a state in 1959 (as did Alaska). While Americans were venturing abroad, they were also taking a fresh look at social problems at home. Despite the signs of prosperity, up to half of all industrial workers still lived in poverty. New York, Boston, Chicago, and San Francisco could be proud of their museums, universities, and public libraries -- and ashamed of their slums. The prevailing economic dogma had been laissez faire: let the government interfere with commerce as little as possible. About 1900 the Progressive Movement arose to reform society and individuals through government action. The movement's supporters were primarily economists, sociologists, technicians, and civil servants who sought scientific, cost-effective solutions to political problems. Social workers went into the slums to establish settlement houses, which provided the poor with health services and recreation. Prohibitionists demanded an end to the sale of liquor, partly to prevent the suffering that alcoholic husbands inflicted on their wives and children. In the cities, reform politicians fought corruption, regulated public transportation, and built municipally owned utilities. States passed laws restricting child labor, limiting workdays, and providing compensation for injured workers. Some Americans favored more radical ideologies. The Socialist Party, led by Eugene V. 
Debs, advocated a peaceful, democratic transition to a state-run economy. But socialism never found a solid footing in the United States -- the party's best showing in a presidential race was 6 percent of the vote in 1912. When World War I erupted in Europe in 1914, President Woodrow Wilson urged a policy of strict American neutrality. Germany's declaration of unrestricted submarine warfare against all ships bound for Allied ports undermined that position. When Congress declared war on Germany in 1917, the American army was a force of only 200,000 soldiers. Millions of men had to be drafted, trained, and shipped across the submarine-infested Atlantic. A full year passed before the U.S. Army was ready to make a significant contribution to the war effort. By the fall of 1918, Germany's position had become hopeless. Its armies were retreating in the face of a relentless American buildup. In October Germany asked for peace, and an armistice was declared on November 11. In 1919 Wilson himself went to Versailles to help draft the peace treaty. Although he was cheered by crowds in the Allied capitals, at home his international outlook was less popular. His idea of a League of Nations was included in the Treaty of Versailles, but the U.S. Senate did not ratify the treaty, and the United States did not participate in the league. The majority of Americans did not mourn the defeated treaty. They turned inward, and the United States withdrew from European affairs. At the same time, Americans were becoming hostile to foreigners in their midst. In 1919 a series of terrorist bombings produced the "Red Scare." Under the authority of Attorney General A. Mitchell Palmer, political meetings were raided and several hundred foreign-born political radicals were deported, even though most of them were innocent of any crime. In 1921 two Italian-born anarchists, Nicola Sacco and Bartolomeo Vanzetti, were convicted of murder on the basis of shaky evidence. Intellectuals protested, but in 1927 the two men were electrocuted. Congress enacted immigration limits in 1921 and tightened them further in 1924 and 1929. These restrictions favored immigrants from Anglo-Saxon and Nordic countries. The 1920s were an extraordinary and confusing time, when hedonism coexisted with puritanical conservatism. It was the age of Prohibition: In 1920 a constitutional amendment outlawed the sale of alcoholic beverages. Yet drinkers cheerfully evaded the law in thousands of "speakeasies" (illegal bars), and gangsters made illicit fortunes in liquor. It was also the Roaring Twenties, the age of jazz and spectacular silent movies and such fads as flagpole-sitting and goldfish-swallowing. The Ku Klux Klan, a racist organization born in the South after the Civil War, attracted new followers and terrorized blacks, Catholics, Jews, and immigrants. At the same time, a Catholic, New York Governor Alfred E. Smith, was a Democratic candidate for president. For big business, the 1920s were golden years. The United States was now a consumer society, with booming markets for radios, home appliances, synthetic textiles, and plastics. One of the most admired men of the decade was Henry Ford, who had introduced the assembly line into automobile factories. Ford could pay high wages and still earn enormous profits by mass-producing the Model T, a car that millions of buyers could afford. For a moment, it seemed that Americans had the Midas touch. But the superficial prosperity masked deep problems. 
With profits soaring and interest rates low, plenty of money was available for investment. Much of it, however, went into reckless speculation in the stock market. Frantic bidding pushed prices far above stock shares' real value. Investors bought stocks "on margin," borrowing up to 90 percent of the purchase price. The bubble burst in 1929. The stock market crashed, triggering a worldwide depression. By 1932 thousands of American banks and over 100,000 businesses had failed. Industrial production was cut in half, wages had decreased 60 percent, and one out of every four workers was unemployed. That year Franklin D. Roosevelt was elected president on the platform of "a New Deal for the American people." Roosevelt's jaunty self-confidence galvanized the nation. "The only thing we have to fear is fear itself," he said at his inauguration. He followed up these words with decisive action. Within three months -- the historic "Hundred Days" -- Roosevelt had rushed through Congress a great number of laws to help the economy recover. Such new agencies as the Civilian Conservation Corps and the Works Progress Administration created millions of jobs by undertaking the construction of roads, bridges, airports, parks, and public buildings. Later the Social Security Act set up contributory old-age and survivors' pensions. Roosevelt's New Deal programs did not end the Depression. Although the economy improved, full recovery had to await the defense buildup preceding America's entry into World War II. Again neutrality was the initial American response to the outbreak of war in Europe in 1939. But the bombing of Pearl Harbor naval base in Hawaii by the Japanese in December 1941 brought the United States into the war, first against Japan and then against its allies, Germany and Italy. American, British, and Soviet war planners agreed to concentrate on defeating Germany first. British and American forces landed in North Africa in November 1942, proceeded to Sicily and the Italian mainland in 1943, and liberated Rome on June 4, 1944. Two days later -- D-Day -- Allied forces landed in Normandy. Paris was liberated on August 25, and by September American units had crossed the German border. The Germans finally surrendered on May 7, 1945. The war against Japan came to a swift end in August of 1945, when President Harry Truman ordered the use of atomic bombs against the cities of Hiroshima and Nagasaki. Nearly 200,000 civilians were killed. Although the matter can still provoke heated discussion, the argument in favor of dropping the bombs was that casualties on both sides would have been greater if the Allies had been forced to invade Japan. A new international organization, the United Nations, came into being after the war, and this time the United States joined. Soon tensions developed between the United States and its wartime ally the Soviet Union. Although Soviet leader Joseph Stalin had promised to support free elections in all the liberated nations of Europe, Soviet forces imposed Communist dictatorships in eastern Europe. Germany became a divided country, with a western zone under joint British, French, and American occupation and an eastern zone under Soviet occupation. In the spring of 1948 the Soviets sealed off West Berlin in an attempt to starve the isolated city into submission. The western powers responded with a massive airlift of food and fuel until the Soviets lifted the blockade in May 1949.
A month earlier the United States had allied with Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, and the United Kingdom to form the North Atlantic Treaty Organization (NATO). On June 25, 1950, armed with Soviet weapons and acting with Stalin's approval, North Korea's army invaded South Korea. Truman immediately secured a commitment from the United Nations to defend South Korea. The war lasted three years, and the final settlement left Korea divided. Soviet control of eastern Europe, the Korean War, and the Soviet development of atomic and hydrogen bombs instilled fear in Americans. Some believed that the nation's new vulnerability was the work of traitors from within. Republican Senator Joseph McCarthy asserted in the early 1950s that the State Department and the U.S. Army were riddled with Communists. McCarthy was eventually discredited. In the meantime, however, careers had been destroyed, and the American people had all but lost sight of a cardinal American virtue: toleration of political dissent. From 1945 until 1970 the United States enjoyed a long period of economic growth, interrupted only by mild and brief recessions. For the first time a majority of Americans enjoyed a comfortable standard of living. In 1960, 55 percent of all households owned washing machines, 77 percent owned cars, 90 percent had television sets, and nearly all had refrigerators. At the same time, the nation was moving slowly to establish racial justice. In 1960 John F. Kennedy was elected president. Young, energetic, and handsome, he promised to "get the country moving again" after the eight-year presidency of Dwight D. Eisenhower, the aging World War II general. In October 1962 Kennedy was faced with what turned out to be the most dangerous crisis of the Cold War. The Soviet Union had been caught installing nuclear missiles in Cuba, close enough to reach American cities in a matter of minutes. Kennedy imposed a naval blockade on the island. Soviet Premier Nikita Khrushchev ultimately agreed to remove the missiles, in return for an American promise not to invade Cuba. In April 1961 the Soviets capped a series of triumphs in space by sending the first man into orbit around the Earth. President Kennedy responded with a promise that Americans would walk on the moon before the decade was over. This promise was fulfilled in July of 1969, when astronaut Neil Armstrong stepped out of the Apollo 11 spacecraft and onto the moon's surface. Kennedy did not live to see this culmination. He had been assassinated in 1963. He was not a universally popular president, but his death was a terrible shock to the American people. His successor, Lyndon B. Johnson, managed to push through Congress a number of new laws establishing social programs. Johnson's "War on Poverty" included preschool education for poor children, vocational training for dropouts from school, and community service for slum youths. During his five years in office, Johnson became preoccupied with the Vietnam War. By 1968, 500,000 American troops were fighting in that small country, previously little known to most of them. Although politicians tended to view the war as part of a necessary effort to check communism on all fronts, a growing number of Americans saw no vital American interest in what happened to Vietnam. Demonstrations protesting American involvement broke out on college campuses, and there were violent clashes between students and police.
Antiwar sentiment spilled over into a wide range of protests against injustice and discrimination. Stung by his increasing unpopularity, Johnson decided not to run for a second full term. Richard Nixon was elected president in 1968. He pursued a policy of Vietnamization, gradually replacing American soldiers with Vietnamese. In 1973 he signed a peace treaty with North Vietnam and brought American soldiers home. Nixon achieved two other diplomatic breakthroughs: re-establishing U.S. relations with the People's Republic of China and negotiating the first Strategic Arms Limitation Treaty with the Soviet Union. In 1972 he easily won re-election. During that presidential campaign, however, five men had been arrested for breaking into Democratic Party headquarters at the Watergate office building in Washington, D.C. Journalists investigating the incident discovered that the burglars had been employed by Nixon's re-election committee. The White House made matters worse by trying to conceal its connection with the break-in. Eventually, tape recordings made by the president himself revealed that he had been involved in the cover-up. By the summer of 1974, it was clear that Congress was about to impeach and convict him. On August 9, Richard Nixon became the only U.S. president to resign from office. After World War II the presidency had alternated between Democrats and Republicans, but, for the most part, Democrats had held majorities in the Congress -- in both the House of Representatives and the Senate. A string of 26 consecutive years of Democratic control was broken in 1980, when the Republicans gained a majority in the Senate; at the same time, Republican Ronald Reagan was elected president. This change marked the onset of a volatility that has characterized American voting patterns ever since. Whatever their attitudes toward Reagan's policies, most Americans credited him with a capacity for instilling pride in their country and a sense of optimism about the future. If there was a central theme to his domestic policies, it was that the federal government had become too big and federal taxes too high. Despite a growing federal budget deficit, in 1983 the U.S. economy entered into one of the longest periods of sustained growth since World War II. The Reagan administration suffered a defeat in the 1986 elections, however, when Democrats regained control of the Senate. The most serious issue of the day was the revelation that the United States had secretly sold arms to Iran in an attempt to win freedom for American hostages held in Lebanon and to finance antigovernment forces in Nicaragua at a time when Congress had prohibited such aid. Despite these revelations, Reagan continued to enjoy strong popularity throughout his second term in office. His successor in 1988, Republican George Bush, benefited from Reagan's popularity and continued many of his policies. When Iraq invaded oil-rich Kuwait in 1990, Bush put together a multinational coalition that liberated Kuwait early in 1991. By 1992, however, the American electorate had become restless again. Voters elected Bill Clinton, a Democrat, president, only to turn around two years later and give Republicans their first majority in both the House and Senate in 40 years. 
Meanwhile, several perennial debates had broken out anew -- between advocates of a strong federal government and believers in decentralization of power, between advocates of prayer in public schools and defenders of separation of church and state, between those who emphasize swift and sure punishment of criminals and those who seek to address the underlying causes of crime. Complaints about the influence of money on political campaigns inspired a movement to limit the number of terms elected officials could serve. This and other discontents with the system led to the formation of the strongest Third-Party movement in generations, led by Texas businessman H. Ross Perot. Although the economy was strong in the mid-1990s, two phenomena were troubling many Americans. Corporations were resorting more and more to a process known as downsizing: trimming the work force to cut costs despite the hardships this inflicted on workers. And in many industries the gap between the annual compensations of corporate executives and common laborers had become enormous. Even the majority of Americans who enjoy material comfort worry about a perceived decline in the quality of life, in the strength of the family, in neighborliness and civility. Americans probably remain the most optimistic people in the world, but with the century drawing to a close, opinion polls showed that trait in shorter supply than usual.
http://www.4uth.gov.ua/usa/english/facts/factover/ch3.htm
American Revolution: Sybil Ludington's Ride, April 26, 1777
Time Allotted: 2-3 45-minute periods, estimated
State Content Standards: 5.6 Students understand the course and consequences of the American Revolution.
- Describe the contributions of France and other nations and of individuals to the outcome of the Revolution.
- Identify the different roles women played during the Revolution.
- Understand the personal impact and economic hardship of the war on families, problems of financing the war, wartime inflation, and laws against hoarding goods and materials and profiteering.
Downloaded and duplicated visuals and graphic organizers (bold lettered). The image may be projected from an acetate overhead, or from a PowerPoint slide show. The lesson concludes with an interactive computer challenge. Students should be familiar with the causes of the American Revolution and the first conflicts in and around Boston, Massachusetts, in 1775-76.
- In order to make the introductory image more meaningful, the teacher may want to start with reading the Events of Ludington's Ride.
- Show students the image entitled The Patriotic Race, by Charles C. Nahl. Have students report on the details. Make a list as they report. Have them speculate on what is beyond the edges of the scene; guesses should show a logical connection to the details of the drawing. Last, have students create positive and negative "spins" or interpretations on what they see pictured. They are, in the modern phrase, "spin doctors." Record the positive and negative spins in another list. (A spin doctor explains a situation or an event in a way that influences others' way of looking at the situation or event. It is the opposite of being neutral or objective.)
- After the preview warm-up, students can play a game called Spin Doctor. It utilizes the skill of putting a specific spin on an event. The game allows students to consider how individuals influenced the American Revolution, women's roles in the war, and how the war impacted the way people lived.
- To prepare for this game, students need some background preparation, for they will be creating "Patriot Spins" and "Loyalist Spins" for events/facts related to the Ride of Sybil Ludington in April 1777. One of the reasons the American Revolution was so difficult is that Americans were not united over the decision to fight Great Britain. Patriots were eager to separate from Great Britain and to become an independent nation. Loyalists did not think independence was reasonable or even possible, given Great Britain's military and economic strength. Download and duplicate both the Patriot Point of View and the Loyalist Point of View pages. Students may read the lists in small groups or as a whole class.
Playing the Game: Spin Doctor
The teacher or other appointed leader will read from a page entitled Events of Ludington's Ride during the course of the game. Divide the class into an even number of teams of three to five students each. Multiple teams maximize the amount of student participation. Make a score card on the blackboard or on an overhead. Decide which teams will create Patriot Spins, and which will create Loyalist Spins.
Spin doctors on both sides aim to promote the virtue of their political position and/or discredit the virtue of the opposing position. The Loyalists have the first turn. The teacher reads the first event on the list of events. Teams confer separately to create spins that are consistent with or sympathetic to a Loyalist point of view. They have 20 seconds. Each Loyalist team shares one of their spin-ideas. The Loyalist score is the number of unique spin-ideas. Thus if two teams think of the same spin for an event, it only counts as one point. There is an advantage in creating more than one spin. It may be necessary at the last second to choose an idea that no one has yet mentioned, in order to preserve the point requirement of "uniqueness". The teacher referees whether the spins make sense. Nonsense spins don't count for any points at all. Play goes to Patriots. The teacher reads the next event on the list. Patriot teams have 20 seconds to come up with appropriate spins. They share. The score is equal to the number of unique spins. And so forth for 10 rounds total, or 5 spins for each side. Sybil's image should be displayed during the game. After play is completed, ask students what they can see in the picture that is consistent with what they learned from the game. What details in the image are outside the limits of the game? (Details may be outside the limits of history, e.g. clothes, hair style, facial expression.)
Synthesis or Conclusion Activity: Have students complete a "Two Sides to Every Issue" network. On one side they record what Patriots might have thought about Sybil Ludington's 40-mile ride, at night, during a storm, to alert minute soldiers to get to their positions in the Connecticut militia outside Danbury, Connecticut. On the other side they record what Loyalists might have thought about the same events and details. One side will praise Sybil. One side will criticize her.
Optional essay prompt: Have students write a paragraph to this prompt: Why is it reasonable or unreasonable to think that Sybil Ludington was the only girl or the only teenager who participated in the American Revolution? A page for writing is attached, entitled Writing Prompt for Sybil Ludington's Ride.
Extension Activity: Interactive Computer Game
The computer provides a script format that guides the student through an argument between a Patriot and a Loyalist. Each return is a cue to the computer to skip a line, print the next speaker's name in bold letters followed by a colon. Tabs are set so that what the student writes is properly indented. Cartoon figures available at the bottom of the screen can be used to illustrate the argument, or better yet, students may add their own, original cartoon figures. The images function like rubber stamps. The student can use them selectively to illustrate lines of dialogue at each of the activity's 8 returns. The result would be an "illuminated" dialogue between a Patriot and a Loyalist airing their differences on the need for a revolution against Great Britain.
- Loyalty to the King is honorable, and has been honorable for centuries.
- Americans are bound to England by language, history, culture, values and religion. United with Great Britain, colonists will be a part of all that is good in British culture and history.
- American colonists still have family living in England. Loyalty to England is loyalty to family.
- The British Empire has the strongest navy in the world, and a professional army. Americans have no navy and a volunteer army.
- The British Parliament has tried to raise money from American colonists to pay for the army that protects the colonists from the Indians, the French, and the Spanish. The taxes are reasonable.
- The Parliament has the right to govern, including the right to impose taxes when money is needed for the common good. Protection in the unsettled areas would benefit everyone. It would be tax money well spent.
- There are ways in the British tradition of Parliament and King to correct errors over time. We have to be patient. The Parliamentary system works.
- Americans are very lucky to be able to own property. With hard work, they become prosperous. A rebellion would wipe out prosperity.
- Without the British military to help protect trade on the oceans there is piracy. Without the British military to protect settlement on the frontier, there is lawlessness. British power keeps us safer than we would be without it.
- The British Empire is the most civilized empire in the world. Why start all over again? Starting over is a huge risk, more likely to fail than to succeed.
- There is no freedom of speech or freedom of the press except for those who argue against rebellion. Those who oppose rebellion suffer from Patriot violence. Loyalist property is taken by Patriot mobs and armies.
- The colonies are losing their most prominent citizens; many of them would rather move to Canada or return to England than remain in a rebellious colony.
- War will destroy what we have taken 150 years to build: governments, courts, towns, businesses, farms....
- War brings out the worst on both sides. There are civilized ways to solve problems that do not use violence. We should negotiate and petition and debate. With patience we will get what we want.
These are the beliefs that would be part of a Patriot's thinking.
- The King is a tyrant who does not remember that Americans are citizens with rights.
- Each colony has a governor; some of them are appointed by the king, others are elected by the people. All the colonies have legislatures. Thus, Americans have participated in representative local government since the colonies were built in the 1600s.
- Americans are a new kind of people who have figured out how to survive. England can't govern Americans better from across the ocean than Americans can govern themselves.
- Laws passed by Parliament from 1763 to 1773 tried to impose taxes on American colonists as a way of propping up an expensive empire.
- Americans object to how tax laws are made without any American having a voice in the debate. Then come punishments when American resistance grows strong.
- The king has shut down legislatures, and he has made judges and governors answer to him rather than to the colonists.
- American citizens, like the citizens living in Britain, want the right of trial by peers. The king says citizens must be tried in England or in another colony.
- The king thinks Americans will agree to laws that benefit only British merchants, British manufactures, and the British military.
- The British army takes what it wants from private citizens or destroys property in order to make citizens suffer.
- The king has not responded to Americans' requests for fair treatment, except to create more tax laws, or to close more courts and town meetings.
- The only thing that has caused Parliament to drop tax laws is boycotts of British goods, but then Parliament always turns around and passes more laws that hurt Americans.
- The British efforts to punish colonists don't work.
For example, after the Boston Tea Party, King George III closed the port of Boston. Instead of dividing the colonies, punishment united the colonies against the king.
- Americans fought with the British in the French and Indian War. They know how British armies think and fight.
- A fight for freedom, rights, and property is a worthy fight.
Writing Prompt for Sybil Ludington's Ride
Write your topic sentence in the box. Use radiating lines to collect details that support the claim you make in the topic sentence. Write a paragraph at the bottom of the page. It is reasonable to think that Sybil Ludington was just one of many teenagers who participated in the American Revolution. There are primary sources like personal diaries and newspapers that describe how young people participated. Even without those documents, it is reasonable to expect that young people adopted the loyalties and concerns of their families. They would have helped their families in the hard work of settling in a new country and in the emergencies that occasionally occurred. Age would not have eliminated young people from the biggest emergency of them all, the struggles related to a war. And in an emergency, being female would probably not matter very much either.
Charles Christian Nahl (1818 – 1878)
Charles C. Nahl was born in Kassel, Germany, in 1818, into a family of accomplished artists. He studied art with his father and one of his cousins, and learned the medium of watercolor by age 12. He undertook further training at both the Kassel and Dresden Academies. The formal academies prioritized historical and religious scenes over other subjects and stressed the importance of draftsmanship and detail. In 1846, Nahl moved to Stuttgart and then Paris with his mother, younger siblings, and an artist friend, Frederick August Wenderoth. While in Paris, he continued his studies. Because of the political and social unrest in Europe at this time, the Nahl party left France for the United States. Nahl and his family remained in New York until 1851. Like so many others, they decided to seek their fortunes in California. They sailed from New York to Panama and then up the Pacific Coast to San Francisco. They reached San Francisco on May 23, 1851. The family set out immediately for Rough and Ready where they were tricked into purchasing a "salted mine." It was common for sellers to "salt" mines, or sprinkle them with gold from another mine, to give the impression that a mine was rich with gold. With this disaster and bad health, Nahl decided to resume his artistic career. He and his friend Wenderoth established a studio in Sacramento. They accepted commissions for portraits and commercial work, gaining a great following in their new community. After a devastating fire in Sacramento in November 1852, Nahl and Wenderoth moved to San Francisco where Nahl established a studio with his younger brother, Arthur. Within a short time, Nahl was the most sought-after illustrator working in the state. His drawings of 19th-century California, which were produced as detailed wood engravings, appeared in newspapers, periodicals, books, broadsides and letter sheets. They were seen by audiences on the Pacific Coast, as well as readers and viewers in the eastern states and Europe. Nahl's most popular work included illustrations for works written by Alonzo Delano.
He produced images for Pen-Knife Sketches: or Chips of the Old Block (1853), The Miner's Progress (1854), The Idle and Industrious Miner (1854) and Old Block's Sketch Book: Or Tales of California Life (1856). By combining humor and morality with excellent draftsmanship, Nahl produced the quintessential image of the California gold miner. Nahl was a careful observer of nature, and produced beautiful imagery of fruits, flowers and other vegetation, but he paid less attention to landscape. Unlike many of his contemporaries, Nahl was not drawn to such wonders as Yosemite. Nahl's images of mining life included accurate features of the Sierra foothills, but he usually emphasized figures over scenery. Nahl is most remembered for his panoramic historical themes and early California scenes such as Miners in the Sierra of 1851-52 (in collaboration with Wenderoth) and Saturday Night in the Mines (1856). His reputation rests on signature works such as Sunday Morning in the Mines, The Fandango and Sunday Morning in Monterey, all produced in the 1870s. Although Nahl continued to produce illustrations throughout his lifetime, his enduring success came from his large-scale paintings. His paintings attracted patrons including Judge E.B. Crocker, Leland Stanford and James Flood. By the time Nahl died in 1878, at the age of 59, his style of colorful, dramatic painting had passed from favor. It was not until the latter part of the 20th century that his work was evaluated in new contexts and his reputation was re-established as Artist of the Gold Rush.
About The Patriotic Race, 1870
Judge Crocker first commissioned Nahl to create a pair of paintings: The Love Chase, 1869, and The Patriotic Race, 1870. Both subjects provided the artist with an opportunity to depict running horses. In The Patriotic Race, notice how all four legs of the horse carrying our heroine are off the ground. This painting pre-dates the photographic experiments Eadweard Muybridge made of running horses for Leland Stanford. Stanford wanted to know if a horse's legs were really all off the ground at the same time at some point in a horse's gait. According to the narrative of The Patriotic Race, both armies were camped near the Potomac. One evening, British officers hosted a ball for the ladies in the nearby towns. The heroine was at the ball. While at the ball, the heroine overheard a conversation between two British officers about an important dispatch. The dispatch needed to be delivered to the commanding British officer. The heroine, who supported the American cause, diverted the messenger, took the dispatch bag and found the messenger's horse outside. She got on the horse and raced toward the American camp. A British soldier saw her and raced after her, and at sunrise the British officer finally caught up with the heroine. Out stepped an American soldier, alerting the British soldier, and allowing him to escape. Notice how Nahl unifies his painting: the light sweeps the viewer's eyes across the painting, exposing the full story; the rose color of the sky is carried across the painting by the heroine's rose-colored dress and connects with the British soldier's red coat. The Patriotic Race refers to the Revolutionary War period. So far the actual source for the story told in this painting has not been identified. It is similar to other actual stories of women during the Revolutionary War. Sybil Ludington, for example, has been called the Female Paul Revere.
When her father, a colonel in a local Connecticut militia, learned that the British planned to burn the town of Danbury and his men were scattered over a wide area near his home, it was sixteen-year-old Sybil who rode on horseback over 40 miles in the dark to spread the alert in 1777. Twenty-two-year-old Deborah Champion is another of these real-life heroines. In 1775 her father sent her to Boston on a mission to deliver papers to General Washington. In the company of her family's slave Aristarchus, she rode from her home in Connecticut and crossed enemy lines in Massachusetts on her way to General Washington. The Patriotic Race may well be based on a fictional story, but the elements of courageous young heroines delivering messages in defiance of the British Army are based on actual events. Moreland L. Stevens, Charles Christian Nahl: Artist of the Gold Rush. Sacramento, CA: E. B. Crocker Art Gallery, 1976.
http://www.crockerartmuseum.org/school-educator/striking-gold/item/american-revolution-sybil-ludington-s-ride-april-26-1777
Despite the shared civilisational model, social structure, close trading ties and numerous cultural norms, the mainland Southern colonies and the British Caribbean colonies were politically divided in the 1760s leading up to the American Revolution in 1776. Recently, we have seen how White demographic failure and a rapidly growing Black slave majority fostered a ‘garrison mentality’ amongst British Caribbean Whites, who looked to British troops for support in maintaining social control. Meanwhile, mainland British colonists, even South Carolinians (who had been colonised by Barbadians and were closest in racial demographics and social order to the Caribbean), came to resent the presence of British troops and saw them as a danger to their liberty. This was a significant divide between the Caribbean plantation societies and those on the mainland. Another major difference which separated the Caribbean from the mainland was their very different reactions to the Stamp Act of 1765. In an effort to raise funds to pay for the French and Indian War (as it was known in North America; more widely this war was known as the Seven Years’ War) and offset its large national debt, the British Government imposed a tax (signified by a stamp) on printed documents in its colonies in the New World. This tax sparked massive protests in the mainland colonies and led to violent acts against British colonial officials as well as riots in the streets. This was an important step in the build-up to military rebellion and the colonies’ Declaration of Independence a decade later. The tax was eliminated just a year after it was imposed, but by then a major rift in the Empire had been created. Professor Andrew Jackson O’Shaughnessy explains why this rift occurred and what its long-term implications were in chapter four of his book An Empire Divided: The American Revolution and the British Caribbean. The irony about the reactions to the Stamp Act is that this tax fell disproportionately on the Caribbean, and yet the colonists there (with only a few exceptions) accepted the tax with little protest. Meanwhile, mainland colonists reacted violently to a tax that was minimal and primarily intended to pay for the troops which provided their security (protecting them from the French and their Indian allies over the Appalachians) – not for the benefit of Caribbean defences. The mainland response can only be understood from an ideological perspective, since the reaction to the tax was out of proportion to the economic burden it actually imposed. O’Shaughnessy writes: The Stamp Act imposed the greatest tax burden on the British West Indies because it contained clauses that specifically discriminated against the islands. …The British government consequently allocated more stamps to the Caribbean colonies than to North America. The greatest single consignment of stamps to British America went to Jamaica. The government apportioned more stamps to the Leeward Islands than to any of the mainland colonies: it expected revenues from Antigua to be higher than North Carolina or Maryland. Charles Jenkinson, the treasury secretary, expected roughly equal amounts of stamp revenue from the populous North American colonies as from the Caribbean. British Caribbean Whites were not silent about the new tax. They had a strong argument against it: they were essentially being asked to pay for the defense of others. These West Indian colonists read the anti-British newspaper articles and pamphlets which mainland colonists wrote denouncing the new tax.
They were influenced by the same intellectual and ideological trends which influenced those on the mainland. For instance, O’Shaughnessy points out that most West Indians shared a ‘Country Whig’ ideology with most mainland colonists. They tended to support British radical John Wilkes, though not as strongly as did the North Americans (even South Carolina voted to send money to Wilkes, which none of the Caribbean colonies did). Though many islanders voiced opposition to the Stamp Act, only the colonies of the Leeward Islands (the northern islands of the Lesser Antilles), and more specifically Saint Kitts and Nevis, experienced strong, organised and violent opposition to the tax. This is surprising, in some respects, because, as O’Shaughnessy notes, the Leeward Islands were in a weaker position than Jamaica and Barbados. They had a dwindling White population that was in a weaker proportional position than the British Caribbean Whites in Jamaica and Barbados. Their economy was less diversified. And yet, it was there that the radical activism of the mainland colonies was paralleled. The riots in Saint Kitts, O’Shaughnessy notes, likely involved over half of the island’s free adult White population. The stamp tax was not collected there and when it was repealed, a year later, there was great celebration. The much more populated islands of Barbados and Jamaica saw no such riots or organised resistance.
WHY DIDN’T THE CARIBBEAN STRONGLY PROTEST THE STAMP TAX?
As mentioned above, only Saint Kitts and Nevis strongly protested the Stamp Act. Why? This is a question that O’Shaughnessy addresses on pages 96-100 of his book: [T]he reaction of the Leeward Islands to the Stamp Act was abnormal within the British Caribbean. Jamaica and Barbados typified the natural inclination of the West Indies toward conciliation with British… The vulnerability of the Leeward islands to economic sanctions by North American merchants explains their bold resistance to the Stamp Act. The Leewards, more than all the other islands, depended on the North American colonies for food. …The Leewards faced a stark alternative… resist the Stamp Act or suffer famine with the associated danger of a slave rebellion. …[T]he Leeward Islands yielded to the economic pressure exerted by the North Americans. Indeed, mainland colonists organised boycotts against their Caribbean kinsmen who complied with the tax. This caused a great deal of animosity between societies such as South Carolina and Barbados (the former having been colonised by the latter), which shared a similar culture, social order and economy. O’Shaughnessy writes: North American merchants boycotted all the British islands that complied with the Stamp Act. …Stamp papers from Barbados and Jamaica were publicly burnt in North America, and radicals proposed starving the “Creole Slaves” by a virtual embargo. …North American merchants either blacklisted those islands that complied with the act or simply stayed away for fear that ships using unstamped papers were liable to seizure.
http://southernnationalist.com/blog/2012/10/25/stamp-act-divides-southern-colonies-caribbean-in-1760s/
There were few common elements in the militia organization to be found among the southern states. Virginia and South Carolina along the sea coast were heavily populated whereas in most of North Carolina the government had the greatest difficulty finding enough men within a day's ride to make up a militia company. The greatest problem for all the southern colonies came in organizing the militia on the frontier. The principal, if not exclusive, reason why the southern colonies created a militia was to combat the native Americans, with whom clashes occurred almost constantly from the earliest days forward. The second reason the southern militias were formed was to contain the growing slave populations, which, in some areas, outnumbered the slave-owning population. Virginia, and occasionally the other southern colonies, used the militia to contain the growing number of indentured servants and convict laborers. While the northern and middle provinces had enlisted indentured servants, Amerindians and even black slaves in their militias, southern colonies were rarely prepared to admit any of these classes into their militias. These exclusions were generally enforced despite the fact that the pool of eligible white, free males was so greatly reduced that the southern militia system was unable to function well. The militia system in the southern states was able to provide protection because, for the most part, the aborigines were too weak and divided to offer much of a challenge, and because several civilized tribes sided with the colonists. In Virginia and a part of Maryland, the Algonquin tribes, especially the Powhatan Confederation, fed and sustained the English settlers more frequently than they fought with them. In Georgia there were essentially no problems with the Amerindians until the English stirred them up at the time of the American War for Independence. The southern tribes, such as the Cherokee, Catawba, Yamasee and Creek, were essentially agricultural peoples who were more settled than the northern tribes. The large body of Cherokee remained generally friendly until 1759. South Carolina's Catawba were removed far enough from the settlers on the coast that they did not believe that the whites were a threat until about the end of the eighteenth century. Because the Spanish settlers in Florida favored the local tribes, the great Creek nation, traditional enemies of the Florida tribes, sided with the English who hated the Spanish. The principal Indian problems came from the Yamasee, a displaced tribe from Florida who fought the Carolinians as early as 1715. Also, there was essentially no rivalry in conquest from any other European power the way the northern colonies suffered from the rivalry between the French and the English for supremacy in North America. Occasionally, Georgia experienced incursions from the Spanish; and in the Seven Years' War the French presented a few minor problems in Georgia, the Carolinas and Virginia. In that war Virginia, Maryland and, to a far lesser degree, the Carolinas did supply troops to fight against the French in western Pennsylvania. With a substantial portion of the southern population being slaves, the militias in the south took on a special duty that had no counterpart in the north: they ran slave patrols. Slaves generally could not carry or own firearms. On each plantation one slave could be licensed to carry a gun for the purpose of hunting down predators and otherwise protecting the master's property.
For cause, additional trusted slaves might be entrusted with arms, usually shotguns. The slave patrols were always staffed with white militiamen. Typical of laws enacted in response to real or imagined slave revolts was that resolved by the Norfolk, Virginia, city council on 7 July 1741.(1) Resolved by the Common Council that for the future the inhabitants of this Borough shall, to prevent any invasion or insurrection, be armed at the Church upon Sundays, or other Days of Worship or Divine Service, upon the penalty of five shillings . . . . Josiah Smith, Mayor. Were these duties not tied to the militia by law and custom, one might imagine that slave patrols were logically tied more to the posse comitatus. This ancient Anglo-Saxon legal term refers to the power or force of the county. In medieval times a sheriff could summon all able-bodied men, 15 years of age or older, to assist in the enforcement of the law, maintenance of the peace and the pursuit and apprehension of felons and runaway slaves and servants. In the United States a sheriff may summon a posse to search for a criminal or assist in making an arrest.(2) Southern militia constituted a standing posse, available at any time and often deployed on a regular schedule, whether there was suspicion of a crime or runaway slave or not. After regular forces and select militia units were created in the south, militia units often had no real function or duties save for slave patrols. During the Revolution the southern militia served primarily as guerrillas to harass the British army and as forces to counter the tory militias and auxiliaries; they occupied territory and prevented extensive British control of populations and land. Many southern political leaders, however, treated the militia as an alternative to, or substitute for, the regular standing army, rather than as an auxiliary.(3) I should like to express my appreciation to the Marguerite Eyer Wilbur Foundation and the Second Amendment Foundation for their support. Much credit is also due to two devoted assistants, Damon Dale Weyant and Kevin Ray Spiker, Jr. Professor W. Reynold McLeon offered valuable suggestions, as did my anonymous referees. My esteemed department chair, Allan Hammock, supplied support for copying. I thank Mrs Mildred Moyers and Mrs Christine Chang of the West Virginia University Library system, who were most courteous and helpful in locating and gathering up materials for me. The Virginia militia was of greatest significance in the seventeenth century, during which time the development passed through several stages. The first quarter of the seventeenth century was marked by improvisation and experimentation as the colonists attempted to develop a formula which would work in the colony's particular circumstances. In the second quarter of the century "this system was reorganized, refined, and repeatedly tested in combat." In the third quarter the colonial leaders excluded slaves and indentured servants, but concentrated on intensive training of specialized units, such as the frontier rangers. Virtually all adult, free, white males answered the muster call. Following Bacon's Rebellion, 1675-77, the base of recruitment was further reduced and a gentlemen's militia, similar to the militia found in Stuart England, emerged. The bulk of the population after 1677 constituted an under-utilized, rarely mustered reserve, similar to the medieval great fyrd. After 1680 few poor men served in any militia capacity, although some might enlist in a crown regiment for the pay.
The chronic economic crises had reduced much of the population to poverty, so most of the poor were delighted to discover that the militia law was not going to be universally enforced. To most of the poor, the reduction of military duty meant that they had more time to plow and harvest and could pocket the money they might otherwise have to lay out for militia arms, supplies, gunpowder and accoutrements. The government began to establish central armories and gunpowder magazines rather than depending on the populace to store individual supplies. Changes in the number and distribution of guns as Virginia approached the eighteenth century were functions of economic and social factors.(4) In 1606 the English King provided a charter to the Virginia Company of London. It required the civil authority to recruit and train a militia and otherwise prepare defenses to "encounter, repulse and resist" all the king's and the colony's enemies, suppress insurrection and treason and to enforce the law.(5) The Virginia Charter of 1612 required the government to provide the citizenry with "Armour, Weapons, Ordnance, Munition, Powder [and] Shot" for its defense.(6) The first settlers arrived on 24 May on the Sarah Constant, Goodspeed and Discovery, establishing Fort James, soon named Jamestown. The Company sent John Smith, a hardened military veteran, to establish a self-defense force. Upon his initial review of the men Smith observed that they were "for most part of such tender educations and small experience in martial accidents" as to be essentially useless. He immediately undertook to train them to "march, fight and skirmish" and to form an "order of battle" wherewith to provide some defense against the native aborigine. He exercised the company every Saturday night. Smith especially emphasized forming a proper battle order designed for the New World.(7) Smith departed Virginia in 1609, but there was no change in the exercise of the martial arts since the new leaders sent by the Virginia Company were also veterans of many European battles. If anything, the new military leaders intensified the militarization of the colony. Much of Smith's work had come unraveled because of famine, disease and deaths at the hands of the Indians. Understanding that development and maintenance of a militia was the primary necessity, the "excellent old soldiers" divided the colonists into "several Companies for war." They appointed an officer for each fifty men "to train them at convenient times and to teach them to use their arms and weapons."(8) The primary problems with the defense of the colony were not military. The colonists had settled on one of the most inhospitable and undesirable pieces of land available and diseases of all kinds reduced the numbers of colonists. Famine was also a constant threat. By 1610 the Virginia militia was sufficiently powerful to take the offensive against the natives. Beginning with small forays into Amerindian territory, the militia became emboldened by small victories in its first campaign. In 1614 the militia captured Powhatan's daughter Pocahontas, and this brought the first Indian campaign to a halt, with the militia having tasted victory for the first time. Initial successes and the removal of the immediate threat to the settlement brought a certain inertia and the militia ceased its frequent practices.
Peace also brought an end to the military dictatorship of the militia company and its professional officers.(9) In 1613 Sir Thomas Dale concluded a treaty with the Chickahominies under Powhatan, who were now closely allied because of the marriage of Pocahontas. Among other provisions, the tribe agreed that all members were now Englishmen, subject to the king, and that "every fighting man at gathering their Corn should bring two Bushels to the Store as Tribute, for which they should receive as many hatchets." They also agreed to supply 300 men to join the colonial militia to fight against the Spanish or any other enemy of the Crown.(10) On the whole it must be said that the Powhatan Confederation sustained the colonists more frequently than it made war upon them. The Powhatan Algonquins initially did not view the settlers as much of a threat. Reports came to Powhatan that the English had neither much corn nor many trees in England, and thus were extremely poor. For their part, the English saw the Chickahominies as a potential threat, although within a sixty-mile radius of Jamestown there were few villages of more than fifty inhabitants, and the entire Amerindian population was probably less than 5000, of which perhaps 1500 were warriors. Tribes allied with Powhatan could have raised fewer than 2500 warriors. The colonists could match these numbers, and they were armed with firearms and iron and steel weapons. Nonetheless, the governor published an edict that "no Indian should be taught to shoot with Guns, on Pain of Death to Teacher and Learner."(11) In 1618 the Virginia Company reorganized, with a wholly civilian rule replacing the military one. No civil officer held military rank or was selected because of his military expertise or service. Another part of that reorganization brought about a change in the mission of the militia. Henceforth the militia was to be a defensive force, prepared only to keep the peace. The civil officers issued a stern warning against stirring up the Amerindians or violating any part of the negotiated peace. The new officers discouraged private ownership of martial arms and neglected the militia, essentially unilaterally disarming the colony.(12) On 24 July 1621 Governor Francis Wyatt issued three important orders. First, he instructed masters and apprentices to remain loyal to their trades and not give them up to make quick and easy profits "planting tobacco or any such useless commodity." Second, he ordered that any servants condemned to punishment for "common offenses" be placed to work on public works projects for the benefit of the whole colony. Third, he ordered that guards be placed around public fields for the protection of farmers.(13) During the first fifteen years of Virginia's existence as many as 10,000 English settlers and their slaves had come to the colony, but in 1622 perhaps only about 2200 remained. Many died and others returned to England. The temporary peace with the Amerindians did not last. In 1622 the Amerindians, angered at the rapid expansion of the colony, made war against the whites along the James River. On 22 March 1622 an Amerindian attack left 347 colonists dead, although the colony was saved because Christian Indians warned some men of the impending attack. Governor Francis Wyatt led the survivors into palisaded towns where they took refuge against the marauders.
As hunger, thirst and disease again ravaged the colonists, Wyatt ordered that available military stores be brought forth from whatever storage areas had been created when the colony demilitarized. With almost no training, save for a few distant memories of the earlier discipline, the militia sallied forth. With more luck than good management, Wyatt managed to win more skirmishes than he lost. Firearms and steel-edged weapons proved decisive against the stone-age weapons of the aborigine. The Amerindians had planned little for a campaign and had laid up few supplies and were therefore as vulnerable in their own way as the undrilled colonial militia. In March 1624 Wyatt recommended that additional laws be enacted by the legislature to reduce the threat from the Amerindians. Article 23 required that all homes be palisaded, article 24 required the people to go about armed at all times, and article 28 set a night watch for every community. Article 32 provided for state support of families of men killed, and for men disabled, in action against the Amerindians.(14) The colonists appealed to England for assistance. On 17 July the colonists received a reply. "His majesty was so far sensible of the loss of his subjects and of the present estate of the Colony . . . he was graciously pleased to promise them assistance . . . . It [the petition] was answered [with] munition . . . whereby they might be enabled to take a just revenge of these treacherous Indians . . . . It pleased his Majesty to promise them some arms out of the Tower as was desired . . . ." The king sent 100 brigandines, also called plate coats; 40 jackets of mail; 400 jerkins or shirts of mail; 200 skull caps and an unspecified quantity of halberds and spears. This initial shipment was followed by a shipment of 20 barrels of gunpowder and 100 firearms of unspecified type.(15) Wyatt decided that he would not be caught unprepared again. He also knew that he could not count on support from the financially troubled Virginia Company, so he had no choice but to revitalize the militia and revamp the militia law. Virginia statutes of 1622 provided the death penalty for servants who ran away and traded or sold guns to the Amerindians;(16) and statutes of 1623 provided that "no man go or send abroad without a sufficient will [well] armed" and that "men go not to worke in the ground without their arms." Further, "the commander of every Plantation [is] to take care that there be sufficient of powder and ammunition within their Plantation under his command and their pieces [of war equipment] fixed and their armes compleat."(17) In 1624 the militia law provided that militiamen wounded or otherwise injured while in the public service would receive public support and the families of those killed while in service would be supported at the public expense. Survivors of the early years were exempted from further compulsory militia service. When a militiaman was impressed into duty his neighbors were required to spend one day a week assisting with his duties and chores at home.(18) Shortly after the enactment of the new militia law Wyatt received word that the Virginia Company had failed and that hereafter the colony would be under the Crown. The Stuart kings had provided no more assistance to the colony than had the Virginia Company. Defense remained a local obligation. All able-bodied males between 16 and 60 years of age, excepting only older veterans and certain newcomers, were enrolled in the militia.
Those not serving in the militia were taxed for its support and were required to offer assistance on the farms of those who were in actual militia service. Gentlemen were to be placed in proper ranks, so that there was no social levelling and they were not reduced to serving as common soldiers. Regular drill was mandated by law. The law created officers whose duty it was to "exercise and drill them, whereby they may be made more fit for service upon any occasion." The legislature also ordered that a regular system of defensive shelters be built and maintained.(19) In October 1629 the General Assembly enacted a new series of militia laws. Plantation overseers were to reorganize their militias in preparation for new wars against the native aborigine. Three expeditions, to begin in November 1629 and March and July 1630, were designed to "doe all manner of spoile and offence to the Indians that may possibly bee effected." So successful was the first expedition that the legislature ordered that the war be prosecuted without the possibility of surrender or peace.(20) At this time the Virginia militia could muster no fewer than 2,000 men. The second war with the Powhatan Indians continued until 1632, but the weight of numbers and superiority of equipment enabled the colonists to win. This time, following the Second Powhatan War, there was no disbanding of the militia. On 21 February 1631 Governor Harvey recommended that the legislature place a tax upon ships entering and leaving Virginia harbors. This tax, Harvey wrote, would give the colony "a continuall supply of ammunition." The House of Burgesses agreed and enacted this dedicated tax.(21) In 1632 the Virginia House of Burgesses ordered that every physically fit free white male bring his gun to church services so that, immediately following Sunday service, he might join his neighbors in exercising with it. No settler was even to speak to an Amerindian under penalty of the law. Militiamen were authorized to kill any Amerindian found "lurking" or thought to be stealing cattle. In 1633 the legislature set the new penalty for selling guns to the Amerindians as the loss of goods and chattels and imprisonment for life.(22) In 1634 the militia was reorganized following the lines of the eight existing counties. The governor appointed county lieutenants and other officers in each county. In 1639 the governor issued a call for select militiamen, fifteen from each county, to punish one or more bands of marauding natives. While each militiaman provided his own gun and edged weapon, his neighbors otherwise supplied him. His neighbors also looked after his farm and each provided one day's service to the militiaman.(23) Under the act of 6 January 1639, all able-bodied men were made liable for militia service and were to provide themselves with arms and ammunition "or be fined as the pleasure of the Governor and Council." Slaves were specifically exempted from the obligation, for the act contained the language, "all persons except negroes."(24) In 1633 the colony recognized the importance of musicians and appointed drummers, paying them 1000 pounds of tobacco and six barrels of corn per year.(25) In the earliest days Virginia struggled to provide enough food to ward off starvation. After the first few years the colony could afford to sustain a militia and, with basic food, clothing and shelter provided for, mandate attendance at militia musters without disrupting the colony.
Sundays became the regular militia training days, combining religious, military and social functions. Forty years later the militia trained only three times a year. In that time span much of the threat from the native aborigine had been contained. But another development, the trained specialist, had emerged, usually in the guise of frontiersmen, who knew how to fight the Indians on their own terms. These specialists served at times for pay and at other times as volunteer militia. There were also frontier forts to be garrisoned and this required a considerable number of men. Demands were so great that Virginia had to resort to paying some men. Since many volunteers had to be paid, there was a constant drain on the treasury. In the early eighteenth century the frontier was generally too impoverished to defend itself adequately, so the colony had to rely primarily on retaliation. The frontier, with its large plantations and farms, was but sparsely settled, and the loss of a few men from a particular area usually meant disaster. The families of the settlers could not defend themselves and would often abandon the land and return to the east. There were few fortified places or garrison houses on the frontier wherein settlers or their families could take refuge except for the scattered forts. Neither were there sufficient resources on the frontier to sustain the militia when units were deployed there. In 1642 the Lords of Trade sent a new set of orders to Governor Berkeley, including instructions concerning the militia. 11. To the end the country may be the better served against all Hostil Invasions it is requisite that all persons from the age of 16 to 60 be armed with arms, both offensive and defensive. And if any person be defective in this kind, wee strictly charge you to command them to provide themselves of sufficient arms within one year or sooner if possible it may be done, and if any shall fail to be armed at the end of the Term limited we will that you punish them severely. 12. And for that Arms without the Knowledge of the use of them are of no effect wee ordain that there be one Muster Master Generall, appointed by us for the Colony, who shall 4 times in the year and oftener (if cause be) not only view the arms, ammunition and furniture of every Person in the Colony, but also train and exercise the people, touching the use and order of arms and shall also certify the defects if any be either of appearance or otherwise to you the Governor and Councill. . . . And for his competent maintenance we will that you, the Governor and Councill, so order the business at a General Assembly that every Plantation be rated equally according to the number of persons, wherein you are to follow the course practised in the Realm of England. 13. That you cause likewise 10 Guarders to be maintained for the Port at Point Comfort. And that you take course that ye Capt of ye said Port have a competent allowance for his services there. Also that the said fort be well kept in Reparation and provided with ammunition. 14.
That new Comers be exempted the 1st yeare from going in p'son or contributing to the wars Save only in defence of the place where they shall inhabit and that only when the enemies shall assail them, but all others in the Colony shall go or be rated to the maintenance of the war proportionately to their abilitys, neither shall any man be priviledged for going to the warr that is above 16 years old and under 60, respect being had to the quality of the person, that officers be not forced to go as private soldiers or in places inferior to their Degrees, unless in case of supreme necessity.(26) Virginia pursued a conscious plan of confrontation with the Amerindians between 1622 and 1644, a policy aimed at extermination or at least complete pacification. Initially Virginia's political authorities considered all Amerindians to be enemies and hostiles to be eliminated, adopting for perhaps the first time the maxim that the only good Indian was a dead one. There was almost constant warfare, although the number of real battles was few. In such a war of attrition, the demands on the militia were great and men groaned under the constant militia musters. An essential ingredient of the policy was constant and unremitting harassment of the enemy. The legislature again in 1643 ordered that no quarter be given to warring Amerindian tribes. This law essentially allowed militia to attack villages at will. Home county courts of the militia paid the expenses of the roving bands of terrorists.(27) On 17 April 1643, the Northampton County Court ordered that "no person or persons whatsoever within the County of Northampton except those of the Commission, shall from henceforth travel from house to house within said county without a sufficient fixed gun with powder and shot." Penalty for non-compliance was 100 pounds of tobacco, with the possibility of imprisonment for repeated failures to carry a gun.(28) Following the enactment of this local legislation, the Virginia legislature enacted a similar law. That law required that "every family shall bring with them to Church on Sundays, one fixed and serviceable gun . . . under penalty of ten pounds of tobacco." White male servants who were required otherwise to bear arms were to receive guns from their masters. If they failed to carry their guns to church they were subject to the penalty of "twenty lashes, well laid on."(29) In 1644 the Powhatan Indians again attacked the outlying and isolated farms along the James River. The governor ordered the militia, some 300 strong, into the field, where, for six weeks, they pursued the Indians, burned the crops and sacked their towns. This marked the end of the threat from the exhausted and depleted Powhatan tribes. Following the Third Powhatan War, the governor divided Virginia into two basic military districts, one north, and one south, of the James River. Each district made its own military policies and created its own strategy.(30) In 1651 the militia was again reorganized along county lines, following the model created by Massachusetts in 1643.(31) In February 1645 the legislature authorized the association of its three principal counties to create the first regimental structure in the colony. The law also designated the militia as the official source for soldiers. For each fifteen militiamen the counties were to furnish one soldier.(32) The system of drafting one man among each 15 taxables proved to be quite unpopular, especially when the pool of 15 could not agree upon which man should serve. 
There was widespread resistance to the drafting of militia, forcing the legislature to pass an explanatory act in 1648. The colony augmented the draft with some vague promises of scalp bounties, plunder, profits from the sale of prisoners, and land grants for service, which proved indispensable to sustaining support and morale. These laws were repealed only after peace had been established. In 1645 the legislature pursued a war against the Mansimum and their allies by "cutting up their corn and doing or performing any act of hostility against them" to such a degree that their towns were destroyed and the Amerindians reduced to hiding in the woods and ambushing whatever whites they might fall upon.(33) Although a populous and prosperous state, Virginia could not sustain the costs of constant Indian wars that this policy promoted. The colony attempted to support those wounded in Indian wars, or their widows and offspring, or to at least remit taxes upon those injured or widowed.(34) By 1646 the colony adopted a kinder, gentler policy toward the Amerindians. The colony made peace with the Mansimum in October 1646 on its own terms. The Indians ceded all land between the falls of the James River and the York River, acknowledged the sovereignty of the English king, surrendered all firearms, and returned all runaway slaves, escaped prisoners, and indentured servants. Indians who returned to their former homes could be killed instantly. The legislature considered several interesting ideas about "civilizing" and pacifying their former enemies. First, they would offer the chiefs a cow for every eight wolves' heads turned in. When the men came to collect, the churchmen would attempt to convert them to Christianity. Amerindian children could be brought into settlers' homes provided they be instructed in Christianity. Indian traders would be controlled and licensed and would guide clergymen to the villages.
As we have seen, Virginia had long attempted to contain the Amerindians by limiting their access to firearms and gunpowder, and this ban was to be continued.(36) On 25 November 1652, the colonial legislature passed a new law which provided, Whereas divers of the Inhabitants of this [Northumberland] County doe employ Indians with guns & powder and shott, very frequently and usually to the great danger of a Massacre, the Court doth think fitt to declare and publish unto the whole county that if any person or persons who so ever shall with 10 days after the date hereof deliver either gun powder or shott to any Indian under what pretence so over shall be proceeded with all according to the Act of assembly in that case provided and after that manner of persons that have any guns out amongst the Indians after publication hereof shall get them in with all convenient speed and that no persons what so ever imploy any Indian at all nor supply them with powder and shott.(37) In 1658 the Virginia House of Burgesses created a rudimentary militia act which required that a provident supplie be made of gunn powder and shott to our owne people, and this strictly to bee lookt to by the officers of the militia, vizt., That every man able to beare armes have in his house a fitt gunn, two pounds of powder and eight pound of shott at least which are to be provided by every man for his family before the last of March next, and whosoever shall faile of makeing such provision to be fined fiftie pounds of tobacco to bee laied out by the county courts for a common stock of amunition for the county.(38) In the same year, the legislature attempted to guarantee the natives' title to their land in the Shenandoah Valley and beyond, but still allowed, even authorized and financed, exploration of the area. The law still permitted the killing of an aborigine if he was suspected of "mischief." The legislature also permitted them to own guns, although there was no clear avenue for their sale or barter, or of supplying gunpowder, lead and flints.(39) It was simply a matter of time until additional land, especially in the fertile and beautiful Shenandoah Valley, was traded for trinkets, guns and supplies. Where title was obtained, the organization of a militia among the settlers was inevitable. In 1660 John Powell carried a complaint to the legislature in which he alleged that Amerindians had encroached on his land, committing unspecified damages. The legislature authorized him to capture as many Amerindians as would satisfy his claim and sell them as slaves abroad. The local militia was authorized to assist him in rounding up the slaves. In other cases over the next few years, the legislature attempted to protect the Amerindians' land, even to the point of ordering the militia to burn houses built on illegally obtained land. The legislature voided some questionable land conveyances, protecting the natives in a way as if "the same had bin done to an Englishman."(40) With the increasing encroachment of settlers into the western areas of Virginia, tribes on the frontier came under increasing pressure. Additionally, the Iroquois made occasional raids as far south as Maryland and Virginia. A few tribes found support from some of their traditional enemies. It appears that the Amerindians were beginning to understand that the tribes must either stand united or be decimated piecemeal. 
As early as 1662 Virginia warned the western Amerindians that they must not encroach on settlements, raid villages or homesteads, or molest tributary Indians. The whites, fearing that an alliance was in the making, demanded that a number of children be surrendered from the Potomaks and allied northern tribes. If a white man was killed, the Amerindians in the closest village were to be held responsible.(41) In 1666 Thomas Ludwell wrote a travelogue of Virginia. He observed the militia and reported to the Lords of Trade, Every county within ye said Province hath a regiment of ffoot under ye command of a colonel and other inferior officers and many of them a troop of horse under ye command of a captain . . . . Great care is taken that ye respective officers doe train them and see their Armes [are] well fixed and truly, my Lords, I believe all to be in so good order as an Enemy would gain little advantage by attempting anything upon them.(42) By 1675 Virginia was fighting the last of its great colonial Indian wars. The natives were in submission and most were nominally allied with the colony, which, in reality, meant that they were dependent upon Virginia for daily support and protection. The Senecas of the Five Nations stirred up the Susquehannocks and Piscataways along the Potomac and a large combined force attacked the settlers in Maryland and northern Virginia. Six chiefs attempted to reestablish the peace, but were treacherously murdered. This outrage roused the Amerindians, who slew a hundred colonists in revenge. A second time the confederated tribes offered peace and a second time their offer was rejected. The colonists were bent on revenge for the merciless slayings and wanted to exterminate the Indians. Initially, Governor William Berkeley had sought to adopt a largely defensive posture which required a minimum number of troops. But the legislature supported the people who clamored for war and authorized the counties to call out their militias. It declared war and passed a number of laws designed to bring the militia units up to full strength. Taxes were increased to pay for the equipping and salaries of the militiamen.(43) Meanwhile, the colony was torn with contentions incident to the Restoration, and these troubles culminated in what is known as Bacon's Rebellion. The depredations and outrages perpetrated by the Amerindians, and the stiffening Amerindian resistance, afforded the rebels their excuse for arming. In 1676 Nathaniel Bacon, Jr., led a group of settlers who applied great pressure on Berkeley to double the number of militia called for duty in order to launch a great all-out offensive against the Amerindians, designed to end the menace forever. In 1676 Sir William Berkeley, Governor of the colony, called for a standing army of 500 levies drafted out of the militia units, and paid for by an increase in public taxation. The planters who dominated the legislature objected, saying that the colony could not sustain the additional taxes.(44) Bacon, an articulate planter, made a counter-proposal, calling for a force of 1000 volunteers, funded by the spoils of war. The assembly was dominated by Bacon's followers and it authorized the creation of the full force of one thousand militiamen by assigning quotas to each of the eighteen counties.
Berkeley correctly surmised that Bacon's mercenaries would plunder the wealthiest tribes, which were peaceful, and ignore the poorer ones that were warlike.(45) The uncivilized and more warlike aborigine had few desirable goods whereas the more peaceful "civilized" Indians had considerable goods. Still, since Bacon was able to dominate the legislature, which became known as the Bacon Assembly, his legislation passed. His militia act attempted to distinguish between friendly and hostile Amerindians. The act declared that any Amerindian found outside his village was to be considered an enemy. All Amerindians had to surrender their arms, even guns that had been heretofore legally owned. They must agree not to hide, shelter, conceal, or even trade with, any warriors from other tribes, and had to deliver up any strangers who came among them. If the visitors were too strong for the hosts to capture, they must assist the militia in taking them. Each town must provide an accounting of its warriors, women and children. All Amerindians taken in battle were to be enslaved, with proceeds of their sale to be accounted as booty of war.(46) Following the massacre of the relatively unarmed and peaceful Occaneechee in May 1676, and just before the anticipated slaughter of the like Pamunkeys, Berkeley ordered Bacon to disband and relinquish his command. Bacon marched against Berkeley, burned James Town, and assumed political control of the colony.(47) Commands were given by trumpet for the first time in the Virginia colony.(48) The Bacon Assembly suspended all trade with the Amerindians, but this caused too great a loss to the traders so trade was permitted with those adjudged to be friendly. Natives wishing to trade had to come unarmed. Two forty day trading sessions were established north and south of the James River, with the governor and council receiving a percentage of the profits. At any point, whites might demand that any native approaching must lay down his arms.(49) Upon hearing that a British army was on its way from the Chesapeake area to restore order, Bacon was unshaken. He would merely adopt tactics learned in fighting the Amerindians. "Are we not acquainted with the country, so that we can lay ambuscades?" Bacon asked, "Can we not hide behind trees to render their discipline of no avail? Are we not as good or better shots than they?"(50) Bacon's position became gospel to the colonists and is something that might have been uttered by any of a large number of militiamen during the American Revolution. In October 1676 Bacon died and Berkeley reestablished his authority. One thousand regular troops arrived, sent by the Stuarts from England, and a commission investigated Berkeley's alleged despotism. In May 1677 Berkeley returned to England, but died there on 9 July 1677, before the matter was settled. With Stuart troops firmly in charge the remainder of Bacon's militia disbanded and melted back into the frontier. Since both of the principals were now dead nothing more was done at court and, having received a pledge of renewed loyalty from the colonists, the troops were withdrawn.(51) William Sherwood's account was hardly flattering of Bacon and his men, viewing them as seditious rebels. 
"Ye Rabble giveing out they will have their owne Lawes, demanding ye Militia to be settled in them with such like rebellious practices."(52) Some settlers complained that they were forced to leave their farms and neglect their occupations and stand seacoast watch and garrison duty in outlying frontier forts, but received no compensation for serving their militia duty and that impressment into the militia was a cause of the rebellion. Others complained of the burden the law imposed by requiring them to buy arms to stand the hated militia duty. Having armed themselves, they found themselves disarmed by the same government which imposed the purchase upon them. "Wee have been compelled to buy ourselves Guns, Pistols and other Armes . . . [and] have now had them taken away from us, the which wee desire to be restored to us again."(53) The destruction of the Amerindians was essentially complete. The poor remnants that remained were of no great consequence, with most reduced to tue most wretched poverty. Tribal distinctions all but disappeared as the survivors struggled merely to exist. Berkeley in 1680 claimed that "the Indians our neighbours are absolutely subjected, so that there is no fear of them." Amerindian country was clear for western expansion.(54) Soon after Bacon's Rebellion, the North Carolina's government was threatened by a second popular uprising, known as Culpeper's Rebellion. As a protest against the arbitrary rule of Governor John Jenkins, Thomas Miller, unpopular leader of the proprietary faction, combined the functions of governor with the lucrative post of customs collector. On 3 December 1677 the anti-proprietary faction arrested and imprisoned Miller. Miller escaped and fled to England and put his case before the Privy Council. The governor considered calling out the militia to restore order and the home government considered dispatching troops from England. John Culpeper of Virginia defended the leaders of the anti-proprietary party. Meanwhile, the Earl of Shaftsbury, having decided that Miller had exceeded his authority, mediated the dispute, and the uneasy peace was made permanent. In 1679 the Assembly decided to construct four garrison-houses on the headwaters of the four great rivers, Potomac, Rappahannock, Mattapony and James, "and that every 40 tithables within this colony be assessed and be obliged to fitt out and sett forth one able and sufficient man and horse with furniture well and completely armed with a case of good pistols, carbine or short Gunn, and a sword." The settlers on the Rappahannock were to have "in readiness upon all occasions, at beate of drum, fifty able men well armed." Additionally, two hundred men were to be counted as reserves, to be called when needed. Major Lawrence Smith was to organize the militia and for this service was to receive 14,000 acres of land. William Bird was to have the same amount of land for organizing the militia near the fall on the James River.(55) In 1680 the assembly in Jamestown, Virginia, ordered that all persons of color be disarmed. Blacks were prohibited from carrying swords, clubs, guns or any other weapons for either offensive or defensive use. 
The assembly was likewise afraid of the black assemblies because "the frequent meetings of considerable numbers of negroe slaves, under pretence of feasts and burialls is judged [to be] of dangerous consequence."(56) In 1705 the law was mitigated by substituting the word slave for negro, and that "all and every such person or persons be exempted from serving either in horse or foot."(57) At this point Virginia reconsidered its militia policy. Few poor men could realistically afford to buy their firearms and other militia supplies so the colony undertook to finance many expenses for individual militiamen. The government could not afford to both maintain the militia and provide static fortifications. By recruiting only among gentlemen the colony was freed from having to make contributions to the support of the militiamen. No formal law or edict disarmed the poor. They were merely relegated to a position as inactive militia. Disarmament occurred by attrition. No one inspected arms or mustered the great militia and the poor neglected to maintain and update their arms. In April 1684 Charles II approved a major change in the colony's militia law. The law is significant in several ways. It decreed the right, as well as the obligation, of colonists to own their weapons; and it protected the arms owned by the subjects from government confiscation. For the encouragement of the inhabitants of this his majesties collony and dominion of Virginia, to provide themselves with arms and ammunition, for the defence of this his majesties country, and that they may appear well and compleatly furnished when commanded to musters and other the king's service which many persons have hitherto delayed to do; for that their arms have been imprest and taken from them. Be it (a) enacted by the governour, council and burgesses of this present general assembly, and the authority thereof, and it is hereby enacted, that all such swords, musketts, (b) pistolls, carbines, guns, and other armes and furniture, as the inhabitants of this country are already provided, or shall provide and furnish themselves with, for their necessary use and service, shall from henceforth be free and exempted from being imprest or taken from him or them, that already are provided or shall soe provide or furnish himselfe, neither shall the same be lyable to be taken by any distresse, seizure, attachment or execution. Any law, usage or custom to the contrary thereof notwithstanding. And be it further enacted, That between this and the five and twentieth day of March, which shall be in the yeare of our Lord one thousand six hundred eighty six, every trooper of the respective counties of this country, shall furnish and supply himself with a good able horse, saddle, and all arms and (c) furniture, fitt and compleat for a trooper, and that every foot soldier, shall furnish and supply himselfe, with a sword, musquet and other furniture fitt for a soldier, and that each trooper and foot souldier, be provided with two pounds of powder, and eight pounds of shott, and shall continually keep their armes well fixt, cleane and fitt for the king's service. 
And be it further enacted, That every trooper, failing to supply himselfe within the time aforesaid, with such arms and furniture, and not afterwards keeping the same well fixt, shall forfeite four hundred pounds of tobacco, to his majesty, for the use of the county in which the (a) delinquent shall live, towards the provideing of colours, drums and trumpetts therein, and every foot souldier soe failing to provide himselfe, within the time aforesaid, and not keeping the same well fixt, shall forfeit two hundred pounds of tobacco to his majesty, for the use aforesaid, and that all the militia officers of this country, take care to see the execution and due observation of this act, in their several and respective regiments, troops and companies. And be it further enacted by the authority aforesaid, That every collonell of a regiment within this country, shall once every yeare, upon the first Thursday in October, yearly, cause a generall muster, and exercise of the regiment under his command, or oftener if occasion shall require. And that every captain or commander of any troop of horse or foot company, within this country, shall once at the least in every three months, muster, traine and exercise, the troop or company under his command, to the end, they may be the better fitted and enabled, for his majesties and the countryes service when they shall be commanded thereunto.(58) Some thought that there were problems with the practice of the militia law, if not defects in the law itself. The governor was frequently remiss in appointing officers to take control over the colony's militia. On 4 July 1687 Lieutenant-colonel William Fitzhugh complained that in Stafford County, "I know not there being one Militia Officer in Commission in the whole County & consequently people best spared cannot be commanded into Service & appointed to guard the remotest, most suspected and dangerous places." He submitted a full list of men eligible for militia duty, but pointed out that a select militia would make more sense. At least on the frontier, where few musters could be readily scheduled, intensively training the few made more sense than half training the many. "A full number with a soldier like appearance," Fitzhugh wrote, "is far more suitable and commendable than a far greater number presenting themselves in the field with clubs and staves rather like a rabble rout than a well disciplined militia."(59) In this year the legislature appropriated tax money for the purchase of colors, drums and trumpets for the militia. It also agreed to purchase all musicians' instruments at public expense.(60) Those exempted from militia service in the 1690s included physicians, surgeons, readers, clerks, ferrymen and persons of color.(61) By effectively disarming the poorer classes the authorities had less cause to worry about a popular uprising.(62) In 1691 the legislature repealed all former prohibitions to, and restrictions on, the Indian trade. This act also had the effect of protecting all Amerindians from being newly enslaved after that date. Neither could they be enlisted in the militia against their will.(63) Ranging companies were commonplace in the middle colonies by the time of the American Revolution, but were uncommon in the seventeenth century.
Virginia had formed companies of rangers by 1690, for there is a notation in the British Public Records Office dated 23 April 1692 which refers to gunpowder and other supplies having been sent to the rangers of King & Queen County, Virginia.(64) By 1701 the militia of those two counties alone numbered 132 officers and non-commissioned officers; 152 horsemen; 222 dragoons; 415 foot soldiers. Among their arms were 575 swords, 141 pistols and 543 muskets.(65) A new military-Indian policy proved to be more reasonable. Virginia would ally with and materially support friendly, civilized tribes who would guarantee the provincial borders. The colony built a string of forts along the frontier and recruited mounted rangers to maintain order and peace. These militia-cavalry were the equivalent of the much vaunted New England minutemen. The system generally worked well. On 9 December 1698 the king appointed a new executive, Lieutenant-governor Nicholson. He proved to be highly unpopular by exercising powers heretofore reserved to, or traditionally exercised by, council or legislature. Two usurpations of power were related to the militia. First, Nicholson assumed appointment of superior militia officers. Second, he was charged with "advancing men of inferior stations to the chief commands of the militia" while "all colonels, lieutenant-colonels, majors and captains . . . are put in and turned out" arbitrarily."(66) So great were the protestations that the king removed Nicholson on 15 August 1705. One of the unique functions of the militia in the late seventeenth, and early eighteenth, centuries was the enforcement of religious participation. The militia was charged with forcing all persons, whether religious or not, to attend services at the Church of England. By the end of the seventeenth century Virginia's needs for militia were changing. The population of the colony increased, making training and recruitment easier and expediting the creation and maintenance of militia enrollment lists. Still, increasingly poorer emigrants swelled the ranks while failing because of poverty to arm themselves adequately. Her concern for Amerindian attacks was minimal since by 1700 the colony had subdued the stronger tribes. The Carolinas served as a successful barrier to the south and the Appalachian mountains, with a few frontier forts, guarded her western boundary. The French did not threaten Virginia's interests for another half century. What remained of the decimated Amerindian tribes received support from the colonial government. They frequently sold their services as scouts and even warriors. The colony had to provide only money, command and a few supplemental frontiersmen to serve as scouts. In the Tuscarora War of 1712 Virginia was able to rely on the Carolina militias and Governor Spotswood's diplomacy.(67) When Colonel John Barnwell took his troops into battle in the Tuscarora War he manipulated his mounted troops with trumpets and his foot soldiers with drums.(68) In 1710 the Assembly authorized the lieutenant-governor, as military commander of the colony, to form several bands of rangers. Each county lieutenant "shall choose out and list eleven able-bodied men, with horses and accouterments, arms and ammunition, resideing as near as conveniently may be to that frontier station." The lieutenant served simultaneously as county militia commander and commandant of the rangers.(69) With the coming of the war known as Queen Anne's War, 1702-1713, authorities thought that Virginia needed an adequate militia law. 
The militia law of 1705 was the first truly comprehensive enactment on the subject promulgated in the colony. The law created a general obligation to keep and bear arms in defense of country. There was a long list of exemptions to the requirement that men muster, including: millers with active mills; members of the House of Burgesses and the King's Council; slaves and imported servants; officers and men on active duty with the king's forces; the attorney general; justices of the peace; the clerks of parishes, council, counties and the general court; constables and sheriffs; ministers; schoolmasters; and overseers charged with the supervision of four or more slaves. Those exempted still had to supply their own arms and could be fined for failure to do so. Those exempted were charged with an obligation to "provide and keep at their respective places of abode a troopers horse, furniture, arms and ammunition, according to the directions of this act hereafter mentioned." They could be mustered in case of invasion or insurrection. "And in case of any rebellion or invasion[they] shall also be obliged to appear when thereunto required, and serve in such stations as are suitable for gentlemen, under the direction of the colonel or chief officer of the county where he or they shall reside, under the same penaltys as any other person or persons, who by this act are injoyned to be listed in the militia. . . ." Militiamen who failed to appear with the required arms, ammunition and accoutrements were fined 100 pounds of tobacco. The commander of each troop was required to appoint a clerk who was to record courts-martial and receive the company fines. The other major provisions of the law read as follows. For the settling, arming and training a militia for her majestie's service, to be ready on all occasions for the defence and preservation of this her colony and dominion, be it enacted, by the governor, council, and burgesses, of this present general assembly . . . to list all male persons whatsoever, from sixteen to sixty years of age within his respective county, to serve in horse or foot, as in his discretion he shall see cause and think reasonable . . . . The colonell or chief officer of the militia of every county be required, and every of them is hereby required, as soon as conveniently may be, after the publication of this act, to make or cause to be made, a new list of all the male persons in his respective county capable by this act to serve in the militia, and to order and dispose them into troops or companys . . . . each trouper or ffoot soldier may be thereby guided to provide and furnish himself with such arms and ammunition and within such time as this act hereafter directs. . . . 
That every ffoot soldier be provided with a firelock, muskett or fusee well fixed, a good sword and cartouch box, and six charges of powder, and appear constantly with the same at time and place appointed for muster and exercise, and that besides those each foot soldier have at his place of abode two pounds of powder and eight pounds of shott, and bring the same into the field with him when thereunto specially required, and that every soldier belonging to the horse be provided with a good serviceable horse, a good saddle, holsters, brest plate and crouper, a case of good pistolls well fixed, sword and double cartouch box, and twelve charges of powder, and constantly appear with the same when and where appointed to muster and exercise, and that besides those each soldier belonging to the horse have at his usuall place of abode a well fixed carabine, with belt and swivel, two pounds of powder and eight pounds of shott, and bring the same into the ffield with him, when thereunto specially required. . . . eighteen months time [is to] be given and allowed to each trouper and ffoot soldier . . . to furnish and provide himself with arms and ammunition . . . . for the encouragement of every soldier in horse or ffoot to provide and furnish himself according to this act and his security to keep his horse, arms and ammunition, when provided, . . . the musket or ffuzee, the sword, cartouch box and ammunition of every ffoot soldier, and the horse, saddle and furniture, the carbine, pistolls, sword, cartouch box and ammunition of every trooper provided and kept in pursuance of this act to appear and exercise withall be free and exempted at all times from being impressed upon any account whatsoever, and likewise from being seized or taken by any manner of distress, attachment, or writt of execution, and that every distress, seizure, attachment or execution made or served upon any of the premises, be unlawful and void . . . . the colonel or chief officer of the militia of every county once every year at least, [is to] cause a general muster and exercise of all the horse and ffoot in his county . . . [and] every captain both of horse and foot once in every three months, muster, train and exercise his troop or company, or oftener if occasion require. Provided, That no soldier in horse or foot, be fined above five times in one year for neglect in appearing. . . . all soldiers in horse and ffoot during the time they are in arms, shall observe and obediently perform the commands of their officer relating to their exercising according to the best of their skill, and that the chief officers upon the place shall and may imprison mutineers and such soldiers as do not their dutys as soldiers at the day of their musters and training, and shall and may inflict for punishment for every such offence, any mulct not exceeding fifty pounds of tobacco, or the penalty of imprisonment without bail or mainprise, not exceeding ten days.(70) The militia act did not yield the desired results. At the end of Queen Anne's War, Governor Alexander Spotswood thought "the Virginians to be capable of being made as good a militia as any in the World, yet I do take them to be at this time the worst in the King's Dominions."(71) In 1713 Governor Spotswood called out the militia against a weak Amerindian enemy, but it failed to respond. He attempted to recruit, first by a call for volunteers, and then by offering substantial pay incentives, an army of frontiersmen.
He found that those living inland shared little concern for the lives of the frontiersmen, and that in time of Amerindian threat the frontiersmen did not want to leave their homes, farms, crops and families. In a long letter to the Board of Trade he argued that the rich had gotten off too easily in the past and that the poor had unfairly borne the burden. After a year filled with great frustration, Spotswood declared that "no Man of an Estate is under any Obligation to Muster . . . [while] even the Servants or Overseers of the Rich are likewise exempted," and thus "the whole burthen lyes upon the poorest sort of people." He resolved to scrap the whole militia system. Disgusted, he proposed that the House of Burgesses rewrite the law, changing the general militia into a select one. What Spotswood proposed would constitute a radical change. A select force of skilled, trained and disciplined militiamen would be recruited, consisting of approximately one-third of the adult, free, male population. The remaining two-thirds would be taxed to support the select militia. The citizen-soldiers would be exempted from paying the militia tax. The militia would exercise ten times a year. He proposed extending the frontier mounted ranger principle to encompass the entire militia system. A select militia system would wholly replace the general militia and "Persons of Estates . . . would not come off so easily as they do now." (72) Disillusioned by the defeat of his plan in the legislature, Spotswood made peace with the Tuscarora, who soon moved on to become the sixth tribe associated with the League of the Iroquois. As in New England, militia training days, especially the annual regimental muster, had become important social events in Virginia. In 1737 the militia put on a public demonstration of its skills at a county fair, passing in review before those assembled and practicing the manual of arms and other drill exercises. The militia musicians played music for the entertainment of the spectators "and gave as great Satisfaction, in general, as could be possibly expected." Refreshments, games and general socializing followed the militia's performance. The most accomplished regimental trumpeter often displayed his skills in support of a horse race. Few events were more popular among the spectators than the culminating parade in which all militia units passed in formal review before the highest-ranking militia officers and various political authorities.(73) The Lords of Trade inquired of Governor Spotswood as to the number of inhabitants and the state of the militia in 1712. Spotswood responded on 26 July 1712. "The number of freemen fit to bear arms . . . [is] 12,051 and I believe there cannot be less than an equal number of Negroes and other Servants, if it were fit to arm them upon any occasion."(74) On 16 February 1716 Governor Spotswood reported to the Lords of Trade on the numbers enrolled in the Virginia militia. "Ye number of Militia of this Colony . . . consists of about 14,000 horse and foot . . . The list of tythables . . . last year amounted to 31,658 . . . all male persons, white and black, above ye age of 16." He also reported that there were 300 firelocks in the public stores.(75) On 7 February 1716 Spotswood proposed to the Commission of Trade and Plantations that Virginia form a "standing militia" of select membership. Membership would rotate on an annual basis, but those serving during a certain year would be in "permanent condition of muster."
He called for 3000 foot and 1500 horsemen "at a yearly cost of 600,000 pounds of tobacco."(76) He argued his case in a letter to the Board of Trade, What my Designs were, by the Scheme I laid before the Assembly regulating their Militia, will best appear from the Project it self, which, because it is not inserted in the Journals of the Assembly . . . I think it becomes me to employ my Thoughts in search of what may better conduce to the Welfare of the People committed to my charge, and do apprehend that I have the same Liberty of Recommending my notions to the Assembly, to be brought, (if they consent,) into a Bill, as they have of Proposing Their's to me to be pass'd, (if I assent,) into a Law; yet I offer'd no Scheme upon this Head 'till, after the House of Burgesses had Addressed,(77) expressing their Inclinations to have the Militia of this Colony under a better Regulation, and, at the same time, desiring me to propose a Method by which it might be rendered more usefull . . . my Project for the better Regulation of the Militia was no more than what is agreeable to the Constitution of Great Britain, I hope your Lordships will rather approve the same, and not judgde that I have endeavoured to destroy a profitable People by desiring them to imitate the Justice and Policy of their Mother Country, where no such unequal Burthen is laid upon the poor as that of defending the Estates of the Rich, while those contribute nothing themselves; For, according to the present constitution of the Militia here, no Man of an Estate is under any Obligation to Muster, and even ye Servants or Overseers of the Rich are likewise exempted; the whole Burthen lyes upon the poorest sort of people, who are to subsist by their Labour; these are Finable if they don't provide themselves with Arms, Ammunition and Accoutrements, and appear at Muster five times in a Year; but an officer may appear without Arms, who may absent himself from Duty as often as he pleases without being liable to any Fine at all; nay, and if it be his interest to ingratiate himself with the Men, he will not command them out, and then the Soldier, not being summoned to march, is not liable to be fined any more than the Officer. Besides, when the Poorer Inhabitants are diverted from their Labour to attend at Muster, it is to no manner of purpose, their being not one Officer in the Militia of this Government that has served in any Station in the Army, nor knows how to exercise his Men when he calls them together. This is the State of the Militia under the present Law, and therefore I could not imagine that my endeavouring a Reformation thereof would be imputed to me as a Crime; That 3,000 Foot and 1,500 Horse should be more a Standing Army or a greater means for me to govern Arbitrarily than 11,000 Foot and 4,000 Horse, of which the Militia now consists, is surprizing to every Body's understanding but the Querist's own. That these 15,000 men, mustering each five times in a year, should be less burthensome than 4,500 Men, mustering ten times in a year, is no less strange, unless the Querist has found out a new kind of Arithmetick, or that he looks upon the Labour of those People who are now obliged to Muster to be of no value. 
On the contrary, it is demonstrable by my Scheme that above two-thirds of the Inhabitants now listed in the Militia would have been eased from the trouble of Mustering, and consequently that the Man which stayed at home would not be charged with so much as half the pay of him that attended in the Field, which Exemption, costing less than Seven pounds of Tobacco per Muster, there is scarce one man serving in the Militia now who would not be content to pay more than Thrice as much for being to follow his own business instead of travelling 20 or 30 Miles to a Muster. And if, by one Man thus paying his poorer Neighbour for four or five days' Service in a Year, above 600,000 pounds of Tobacco, (as the Querist computes,) should be spent throughout the whole Colony; yet, far from granting that such a Charge must be to the entire Ruin of the Country, I apprehend yet it must be rather a benefit to the Publick by the Circulation of Money and Credit that would be increased thereby, and this circulation would be more just and beneficial, seeing ye Payments would generally happen to be made by the Richer to the Poorer sort. It is true, that by my Scheme Persons of Estates would not come off so easily as they do now, They must have contributed to the Arming as well as Paying the Men who were to be train'd up for the defence of their Estates; And I cannot but pitty the simplicity of the Vulgar here, who, at every offer of a Governor to make their Militia usefull, (tho' the Regulation be never so much in their favour,) are set on to cry out against him as if he was to introduce a Standing Army, Arbitrary Power, burthensome Taxes, &c. And as for their Abettors, who chose rather to risk their whole Country than to be brought to Club for its defence, I wish they or their Posterity may not have cause to Repent of their present Folly When an Enemy shall happen to be at their Doors. For, tho' I will allow the Virginians to be capable of being made as good a Militia as any in the World, Yet I do take them to be at this time the worst in the King's Dominions, and I do think it's not in the power of a Governor to make them Serviceable under the present constitution of the Law. It is, indeed, a Strange Inference. The Querist, upon the Proposal of Adjutants, that they were to huff and Bully the People, This, I am sure, was never intended as any part of their Office in my Scheme, nor am I apt to believe the House of Burgesses, to whom it was referred, would readily have given 'em such an authority. These Adjutants were proposed to be of the Inhabitants of the Country who were first to be exercised and instructed by me in Military Discipline, and afterwards to go into their respective Countys to teach the Officers and Soldiers. However, if, in the above mentioned Scheme there appeared any thing disagreeable to the Inclinations or Interest of the People, I was far from pressing them to it, Seeing it is evident from my Message to the House of Burgesses that I left it to them to adapt it to the Circumstances of the Country.(78) The Tuscarora War of 1711-12 in North Carolina, in which at least two hundred settlers were massacred, had been won only with the assistance of the militias of South Carolina and Virginia. As the remnants of the once mighty Tuscarora began to migrate northward, Virginia thought it wise policy to exclude these savage warriors from its lands. 
When two Germans, Lawson and de Graffenried, seeking land for a colony of their countrymen in western Virginia, were taken by Tuscarora in September 1711, the governor mustered the frontier ranging militia and dispatched it to the area of the New River. Alexander Spotswood attempted to forge a treaty with the Tuscarora, secured by Amerindian hostages, to guarantee the peace of his colony, but failed. Spotswood next tried to make a show of force by mustering six hundred of the best militia to be located, but the Tuscarora had seen militia in the Carolinas and were unimpressed. For his part, the governor genuinely sought an honest, just and equitable settlement and peace. But the legislature entered the picture, thinking Governor Spotswood's response to be quite inadequate. The legislators feared the Tuscarora, who were thought still to number as many as two thousand warriors, while the province's 12,051 militiamen were scattered all over the vast territory. So they created a special regiment of rangers, empowering it to kill hostiles on sight. The legislative definition of hostiles included any Amerindian fleeing from a white man or refusing to respond to an order to halt. Fleeing braves could be killed without any fear of prosecution. Indians who were found in the forest and who could not "give a good account of themselves" might be killed, enslaved or imprisoned. Enemy Indians who were captured were enslaved and sold to the benefit of the militiaman. The law excluded these rangers from accountability and punishment for killing any presumed hostile Amerindian. When a company commander certified that a militiaman had killed an Amerindian who had previously attacked or killed any white man or woman, the man received a bonus of £20. In addition, those who served as rangers would be exempted for one year thereafter from serving in the militia or being subject to parish or county levies. The legislature denied the Tuscarora the right to live, gather firewood, hunt, or be servants within the provincial boundaries. It budgeted £20,000 to fund the militia. The act was given an effective period of only one year, but was extended at least twice.(79) Spotswood thought the measures to be far too harsh. In reporting the overreaction of the legislature to the Board of Trade, he wrote, So violent an humour amongst them [the Assembly] for extirpating all the Indians without distinction of Friends or Enemys that even a project I laid before them for assisting the College to support the charge of those Hostages has been thrown aside without allowing it a debate in their House tho' it was proposed on such a foot as would not have cost the country one farthing.(80) The Tuscarora initially capitulated and accepted the legislature's conditions after learning of the extent of Virginia's response. They surrendered the hostages, children of their principal leaders, who were then to be converted to Christianity and educated. They released de Graffenried. The legislative enactment permitted only men of the Eastern Shore, Pamunkey and Chickahominy tribes to hunt in any area east of the Shenandoah Valley. These tribes became known as the Tributary Indians and the law afforded them certain protections and a few privileges. They alone could harvest seafish and shellfish, although they had to wait until the whites had taken all they wanted first. They were required to act as spies and report on any movements of foreign warriors on the frontier.
They were expected to join the militia in wars against the hostile tribes to the west. In 1712 the legislature expanded the list of tributary Indians to include the Nansemond, Nottoway, Maherin, Sapon, Stukanocks, Occoneechee, and Tottero tribes. These Amerindians could trade for arms, ammunition and lead.(81) However, the scope of the conflict widened. Southern tribes who were traditional enemies of the Tuscarora entered the conflict by offering their services to North Carolina. The Cherokee, Creek, Catawba and Yammassee tribes joined with South Carolina to eliminate the Tuscarora menace. The Iroquois Confederation, or at least the Seneca tribe, threatened to join with the Tuscarora, which would have drawn all the northern colonies into the conflict. Spotswood, if not his legislature, thought Virginia to be too divided to wage war effectively, and he wished merely to preserve the peace. But the South Carolina militia, much emboldened by the Amerindian support, fell upon the villages that were supposedly protected by treaty. The Tuscarora and their allies retaliated by massacring both settlers and the tributary Indians. The Nottoways bore the brunt of the attacks. The large combined force of Carolina militia, Virginia militia, and southern Indians engaged the Tuscarora at the Neuse River and soundly defeated them. Many captives were sold in the West Indies as slaves. The hostile remnant of the Tuscarora migrated to the north, eventually allying with the Iroquois as the sixth confederated tribe.(82) With the Tuscarora War finally over, Virginia again turned its eyes westward. The next arena of military action would be in the rich trading area west of the Allegheny mountain range. The Virginia merchants competed with the French for control of the great Mississippi Valley. During the fifteen-hundred-mile trips, the traders were at great risk from the warring, often intoxicated, Indians who were allied with the French.(83) By treaty signed at Albany, New York, the Iroquois were not to make war, travel, or trade south of the Potomac River or east of the Allegheny mountain range without a passport from the New York governor. Virginia's tributary Indians were to remain east of the Alleghenies and south of the Potomac River. By these means the colonists sought to establish peace, enlarge their domain, and increase their settlements.(84) Spotswood thought the frontier inhabitants to be composed of people "of the lowest sort." Most had been transported to the colony either as indentured or convict servants "and being out of their time they settle themselves where land is to be taken up and that will produce the necessarys of Life with little Labour. It is pretty well known what morals such people bring with them. . . ." They quickly learned that an enormous profit could be earned by selling liquor to the natives "and make no scruple of first making them drunk and then cheating them of their skins, and even beating them in the bargain." Spotswood thought them incapable of dealing honestly, serving in the militia faithfully, or supporting the government fully. Their misbehavior and cheating ways prompted Indian wars.(85) On 9 May 1723, the militia law was revised, requiring service of men between ages 21 and sixty.
Regarding persons of color, the law was changed back to its original language, denying to any "free negro, mulatto or indian whatsoever," the right to "keep or carry any gun, powder or shot, or any club, or other weapon whatsoever, offensive or defensive" under penalty of "whipping, not to exceed 39 lashes." However, "every free Negro, Mulatto or Indian . . . listed in the Militia may be permitted to keep one gun, powder and shot." Those not enlisted were given a few months in which to dispose of any arms they possessed. Slaves and free blacks could be required to serve as musicians. In time of invasion, rebellion or insurrection, persons of color "shall be obliged to attend and march with the militia, as to do the duty of pioneers, or such other servile labor as they shall be directed to perform."(86) In case of emergency free or enslaved blacks might be required to join the militia to do "the duty of pioneers, or other such servile labor as they shall be directed to perform."(87) Before 1713, Virginia demanded and received two hostages from each tributary Indian village. Governor Spotswood thought that this was the best way to keep these Amerindians peaceful, while giving some of the most talented of their numbers an English-style education. As early as 1713 there were seventeen of these students being educated by the College of William and Mary. Shortly thereafter, a special Indian school was erected at Christanna and some additional tributary Indians were brought from reservations to be educated there. A mathematics professor, Hugh Jones, left a memoir of his experience with them. The young Indians, procured from the tributary Indians . . . with much difficulty were formerly boarded and lodged in town, where the abundance of them used to die, either through sickness, change of provision, and way of life, . . . often for want of proper necessaries and due care of them. Those of them that have escaped well, and have been taught to read and write, have, for the most part, returned to their home. . . . A few of them have lived as servants with the English. . . . But it is a pity more care is not taken of them after they are dismissed from school. They have admirable capacities when their humors and tempers are perfectly understood.(88) Virginia, like most colonies, used the militia as a reservoir from which troops could be recruited into select ranging forces and such regular military units as were populated by Americans. These units were not under the standard militia limitation of being confined to deployment within the colony. Virginia sought to fill these units by advertising for recruits. An Act for raising Levies and Recruits to serve in the present expedition against the French on the Ohio. Whereas his Majesty has been pleased to send Instructions to his Lieutenant-Governor of this Colony, to raise and levy Soldiers for carrying on the present Expedition against the French on the Ohio; and this present General Assembly being desirous, upon all Occasions, to testify their Loyalty and Duty; and taking into their Consideration that there [are] in every County and Corporation within this Colony, able-bodied Persons, for to serve his Majesty . . . . The Justices of the Peace of every County and Corporation within this Colony . . . are appointed or impowered to solicit Men, to raise and levy such able bodied men . . . to serve his Majesty as Soldiers on the present Expedition . . . . Nothing in this Act contained shall extend to the taking or levying any Person to serve as a Soldier . . .
who is, or shall be, an indented or bought Servant, or any person under the Age of 21 years or above the Age of 65 years.(89) Between 1727 and 1749 Governor William Gooch reported that the Virginia militia consisted of 8800 foot soldiers divided into 176 companies and 5000 horsemen in 100 troops. The unenrolled militia consisted of all able-bodied freemen between ages 21 and 60. The enrolled militia, Gooch ordered, "will be constantly kept under regular discipline and the common men [i.e., unenrolled militia] will be improved in their manner, which want not a little pushing."(90) In 1726 King and Queen County reported that its militia included 221 horsemen and 607 foot-men.(91) In 1728 William Byrd wrote on the recurrent problems with the Amerindians. He noted that nearly all Amerindian tribes with which Virginians came into contact were now armed with firearms, having completely abandoned their traditional weapons. Byrd wondered why they had given up their bows, for a warrior could fire most of a quiver of arrows in the time it took to reload a gun. The Amerindians could make bows and arrows themselves and thus did not become dependent on whites for supplies. They were dependent upon traders and others to supply them with gunpowder, flints and lead balls. Time was on the side of the colonists because the Indians failed to maintain their arms and they could not themselves repair firearms or manufacture gunpowder. Control of the Indian trade was far more important than several companies of militia.(92) By act of 1738, the legislature mandated that the county militia officers "shall list all free male persons, above the age of one and twenty years" and train them as they saw fit. The men were to provide suitable arms at their own expense for service either as foot soldiers or cavalrymen. The law, reaffirmed by acts of 1755 and 1758, required free blacks, Indians and mulattoes to report at militia musters. Failure to appear invoked the fine of 100 pounds of tobacco. Blacks, whether enslaved or free, and Indians living within white settlements were still forbidden to own or carry firearms. They could serve as pioneers, sappers and miners, trumpeters and drummers.(93) Many blacks served as musicians in Virginia and other colonial militia units. England declared war on Spain on 19 October 1739 in what is commonly known as the War of Jenkins' Ear. Britain assigned a quota of men to be recruited in the thirteen colonies for service in the West Indies. Lord Cathcart commanded the British troops, while the troops of the thirteen colonies came under the command of Governor Alexander Spotswood of Virginia, who was to hold the rank of major-general, quartermaster-general, chief of colonial staff and second in command of the expedition. Colonel William Blakeney was to assist Spotswood in recruiting, drafting if necessary, troops from the colonial militias. Blakeney carried with him signed blank commissions for colonial officers, as well as arms and supplies. Included in Blakeney's instructions was a provision that, if Spotswood could not command the colonial troops, Virginia's Lieutenant-Governor William Gooch was to serve in his stead. Spotswood died of a chill on 7 June 1740, before Blakeney arrived with his commission. Thus, responsibility for filling both the Virginia quota and the entire colonial assessment of troops devolved on Gooch. The American recruits became popularly known as Gooch's American Foot.(94) The entire expedition soon devolved into a complicated mess.
Gooch's commission did not specify his rank, so he served as a junior colonel and was not included in the Council of War once the troops arrived in Jamaica. When the men and officers left on 25 September 1741, money was not available for transportation of all troops, so the cost was borne through private subscription and the generosity of private ship owners. When the colonial troops arrived they found that no one had made provision for their rations or pay, so officers pooled their funds and purchased rations at exorbitant prices from British merchants. Likewise, the colonial troops were not included in orders given to the medical staff and few, if any, physicians and surgeons had been recruited in the colonies. It was common practice for each regiment to guard its own medical facilities jealously and to refuse to treat the men of other regiments unless ordered to do so. Most colonials were impressed into sea service and were given the most degrading physical duties, such as manning bilge pumps. British naval officers moved colonial enlisted men around among the ships as they chose, often in open defiance of their officers, although this practice had long been prohibited to British soldiers. Two men were reportedly killed or maimed after being beaten or flogged according to British naval custom. Had Spotswood lived, he would have been a member of the Council of War and, as a major-general, would have been privy to the most intimate circles of command. As it was, Gooch was treated as a colonel of inferior standing, ignored and excluded from command decisions. He and other colonial officers wrote memorials to the senior British officers, but these had little effect. No records are available to account for colonial casualties, but all evidence points to their having been large. Disease took a heavy toll of lives. The American regiment was disbanded on 24 October 1742, on which date there were still 7 officers and 133 enlisted men hospitalized. The experiences of militiamen in the War of Jenkins' Ear were, to say the least, bitter. Doubtless, many colonial volunteers were of the lower class, freebooters, adventurers and just plain scoundrels, but many others were unemployed laborers and frontiersmen seeking cash to support their families or to buy a piece of property. They came back with stinging tales of army brutality and of the open disdain in which both British officers and soldiers held them. They were much disgusted with the lack of planning for their arrival, their mis-deployment once they arrived and the failure of the Council of War to integrate them into the army once their presence was made known. The British soldiers and officers, for their part, were unimpressed with the Americans, whom they saw only performing duties for which they were ill suited and for which they had not been recruited.(95) Yet another step had been taken down the road to independence. By 1742 the frontiersmen had pushed west of the mountains, into what is now the state of West Virginia. The first recorded clash between the Virginia provincial militia and Indians west of the Blue Ridge Mountains in Virginia occurred on December 18-19, 1742. Colonel James Patton, commander of the Augusta County regiment, reported to Virginia's Governor Gooch on the engagement, which occurred near Balcony Falls in present-day Rockbridge County, Virginia. Colonel Patton's first report was dated December 18. The second, dated December 23, contained a longer account but differed from the first in the number of men slain.
A parcel of Indians appear'd in an hostile manner amongst us Killing and carrying off Horses &c. Capt. John Buchanan and Capt. John McDowel came up with them this day, and sent a Man with a Signal of Peace to them, which Man they kill'd on the Spot and fir'd on our Men, which was return'd with Bravery, in about 45 Minutes the Indians fled, leaving eight or ten of their Men dead, and eleven of ours are dead, among whom is Capt. McDowel, we have also sundry wounded. Last night I had an Account of ye Behaviour of the Indians, and immediately travel'd towards them with a Party of Men, and came up within two or three hours after the Battle was over. I have summon'd all the Men in our County together in order to prevent their doing any further Damage, and to repel them force by force. We hear of many Indians on our Frontiers: the particulars of the Battle and Motions of the Enemy I have not time now to write. I am, Yr. Honor's most obedient Servt., James Patton P.S. There are some white men (whom we believe to be French) among the Indians. Our People are uneasy but full of Spirits, and hope yr Behaviour will shew it for the future, they not being any way daunted at what has happen'd. Augusta County Xber 22 1742 Honrd Sr.: Thirty six Indians appear'd in our County ye 5th Instant well equipp'd for War, pretending a Visit to the Catabaws, they had a Letter dated the 10th of Ober from James Silver near Harris's ferry in pensilvania directed to one Wm. Hogg a Justice o' Peace desiring him to give them a Pass to travel through Virginia to their Enemies, wch Letter they shew'd here, and it serv'd as a pass where Silver's hand was well known. Instead of going directly along the Road they visited most of our Plantations, killing our Stock, and taking Provisions by force. The 14th Instant they got into Burden's Land about 20 miles from my house, the 15th Capt. McDowel by an Express inform'd me of their insolent Behaviour as also of the uneasiness of the Neighbours, and desird my Directions, on wch I wrote to him and Capt. John Buchanan that the Law of Nature and Nations obliged us to repel an Enemy force by force, but that they were to supply those Indians wth Provisions wch they shd be paid for at the Governments Charge, at the same time to attend yr Motions until they got fairly out of our County. The 16th 17th and 18th Instant they kill'd several valuable Horses, besides carrying off many for their Luggage, which so exasperated our Men that they upbraided our two Captains with Cowardice. Never the less our Captains to prevent mischief sent two men with a White Flag the 10th Instant, desiring Peace and Friendship, to which they answer'd, "O Friends are you there, have we found you," and on that fir'd on our Flag, kill'd Capt. McDowel and six more of our Men, on which Capt. Buchanan gave the word of Command and bravely return'd ye Compliment, and stood his Ground with a very few hands (for our Men were not all come up) in 45 Minutes the Indians fled, leaving 8 of yr Men dead on the spot, amongst whom were two of their Captains. Our Capt. pursued them with only 8 Men several hundred yards, the Enemy getting into a Thicket, he return'd to the Field which he cou'd not by any means prevail on his Men to keep, and stand by him. The Night before the Engagemt I heard of the Indians Behaviour, and march'd up with 23 Men, and met our Capt. returning 14 Miles distance from where they had ingaged, to which place I went next Day and brought off our Dead being 8 in Number, Capt.
Buchanan having taken off ye Wounded the Day before. I have order'd out Patrawlers on all our Frontiers well equipped, and drafted out a certain Number of Young Men out of each Company to be in readiness to reinforce any Party or Place that first needs help, have ordered the Captains to guard their own precincts, have appointed places of Rendez-vous where each Neighbourhood may draw to an Occasion, and have call'd in the stragling Families that lived at a Distance.(96) Under an act passed in October 1748, slaves living on plantations located on the frontier, and threatened by Amerindian attack, could obtain licensed firearms. The slaves' owners had to sign applications allowing the slave to own guns, and they were made responsible for the slaves' use of the guns. While this act did not formally admit slaves to membership in the militias, it did have the effect of allowing them to act as a levée en masse in defense of their own lives and the property and safety of their owners. On 25 October 1743 France signed a treaty known as the Second Family Compact with Spain and on 15 March 1744 joined Spain's war against England. The French made an unsuccessful assault on Annapolis Royal [Port Royal], Nova Scotia, in 1744. On 16 June 1745 Sir Peter Warren captured Fort Louisbourg. The press in New England was highly critical of Virginia for failing to support the expedition. Virginia had contributed no money and only 150 volunteer militiamen to the expedition, although Virginia was the most populous province and the richest.(97) In the early 1750s there were many reports that the French were stirring up the Amerindians on the western frontier of the Carolinas and Virginia. Reportedly the French were building forts as bases of supply for the coming war. The Ohio Company assisted the province of Virginia in recruiting and equipping volunteers who would serve in the militia.(98) The newspapers continued to report the alleged movements and actions of the French throughout 1753 and 1754 with great anxiety. The French were alleged to have issued orders to kill or take prisoner all whites, especially traders, caught within the territory they claimed, including Ohio.(99) The press paid no attention to provincial boundaries in reporting "trouble on the frontier" and one article might contain unsubstantiated reports of Indian attacks from the Carolinas to New England.(100) Governor Robert Dinwiddie, a man with essentially no experience in military affairs, was so anxious to enter the war and chase the French from the Ohio territory that he moved without authorization from his superiors or the legislature. He was unable to convince the House of Burgesses that Virginia had any interest in the war. He attempted to use his executive powers to order out a draft of the militia, which was essentially a paper organization.(101) The end result was unsatisfactory to everyone. On 27 February 1752 the legislature passed a new militia act. Each county lieutenant was to enlist all able-bodied men between ages 18 and sixty, excepting indentured servants and slaves, Amerindians and free persons of color. Within two months of the passage of the law, the militia commanders were to muster and enumerate the men and report their names to the governor. Amerindians and free and enslaved black men could still be admitted to service as musicians, or be used in servile capacities as required in emergencies.
Strangely, there was no mention of any militia obligation for indentured servants.(102) The French and Indian War opened with an engagement between the Virginia militia commanded by George Washington and the French in what is now western Pennsylvania, territory then claimed by Virginia. In the absence of any militia force from Pennsylvania, Virginia Governor Dinwiddie ordered his colonial militia to build fortifications at the Forks of Ohio [present day Pittsburgh]. The French had already erected Fort Duquesne, and Washington's militia, which had constructed Fort Necessity, clashed with a force led by Coulon de Villiers at Great Meadows on 28 May 1754. Washington had about 150 militiamen and other recruits, which brought his force to about 300. The French had about 900 men. In July 1754 Washington was forced to capitulate after losing 30 killed and 70 wounded. He optimistically reported that he had inflicted 300 casualties on the French force.(103) The news of the beginning of hostilities was widely reported. On 14 February 1754 the Assembly appropriated £10,000 "for the encouragement and protection of western settlers." Five days later Governor Dinwiddie issued a proclamation granting land bounties, in addition to regular pay, to all militiamen who would volunteer "to expel the French and Indians and help to erect a fort at the Forks of the Monongahela." As it turned out, only about 90 men shared in grants that totaled 200,000 acres, most of it between the Kanawha and Great Sandy rivers.(104) George Washington was ordered to go from Williamsburg to Fort Cumberland a few days later. He assumed command of some Virginia men and a company each from South Carolina and New York and on 20 March was promoted to lieutenant-colonel. Colonel Joshua Fry recruited the first Virginia volunteer regiment at Alexandria, consisting of 75 men, of which Fry had personally enlisted 50. The volunteers now numbered about 300. As his men marched westward Fry died at Patterson's Creek, probably on 31 May. As we have seen, Colonel George Washington had assumed command of the Virginia volunteers upon the death of Colonel Fry. Washington's command was forced to seek terms from the French on 17 April 1754. On 3 July he returned to Mount Vernon and in October resigned his commission. Colonels William Byrd and Adam Stephen joined the Virginia volunteers as officers. Colonel James Innes assumed command at Fort Cumberland, Maryland. In October 1754 the Assembly again authorized recruitment of volunteers, and the drafting of the unemployed, to serve against the French in the West. Justices of the peace, county lieutenants, and other officers were "to raise and levy such able-bodied men as do not follow or exercises any lawful calling or employment, or have not some other lawful and sufficient support and maintenance, to serve his Majesty as soldiers." Any soldier maimed would be supported afterward at the public expense, and families of those killed would also receive public support.(105) By the first of September, Dinwiddie had received numerous petitions from the southwestern frontier reporting Amerindian incursions and massacres of isolated homesteads. He proposed building several forts on Holstein's and Green Brier's rivers. On 6 October 1754 Colonel Lewis led forty or fifty men on a punitive expedition into the Indian country. Lewis remained in West Augusta until February 1755. In 1755 Dinwiddie reported to the Lords of Trade the number of militia and inhabitants in Virginia.
There were 43,329 white heads of households and an estimated total white population of 173,316. He estimated that there were 60,078 black males of military age and a total population of 120,156 blacks in the province. That provided an estimated total population of 293,472 persons in Virginia. He numbered the militia at 36,000, with another 6000 potential militiamen exempted by various provisions of the militia law. Worse, Dinwiddie reported, "the Militia are not above one-half armed, and their Small Arms are of different Bores."(106) On 19 February 1755 General Edward Braddock arrived at Hampton, Virginia. The next day Braddock assumed command of all the king's troops in North America. Washington accepted reappointment to his old rank and joined Braddock. Braddock formed two companies of artificers, principally skilled carpenters, to accompany his expedition to cut a road and build fortifications. He next selected a company of light horsemen and four companies of rangers to join his two Irish regiments.(107) Dinwiddie called a council of governors, which met on 14 April at Alexandria, to discuss manning, equipping, supplying, and funding Braddock's expedition. Meanwhile, Braddock's army marched to Winchester and on to Cumberland, arriving there on 10 May. Like all southern colonies, Virginia constantly feared a slave revolt and took legislative action designed to minimize the possibility of such an armed insurrection among so numerous a population. Virginia Governor Robert Dinwiddie, upon hearing of slave problems near Fort Cumberland, remarked that "The villainy of the Negroes on an emergency of government is what I always feared."(108) However, General Edward Braddock notified Dinwiddie that he intended to utilize a number of free blacks and mulattoes, although he would not necessarily arm them.(109) On 27 June, Braddock's force was joined by Cherokee and Catawba warriors. On 9 July Braddock was surprised near Fort DuQuesne and his army was decimated. Before his defeat, Braddock had predicted that, if his army were to be destroyed, the savages would fall upon the frontier settlements with a vengeance. He also predicted that, as his army neared Fort DuQuesne, the Amerindians would circle around and attack along the frontier to the south. Dinwiddie agreed, and ordered his militia to increase the number serving watch duty. At least one-tenth of the militia was to be stationed at armed readiness at all times. Fast runners were to be stationed at all vital spots to carry messages to various ranging stations, the militia, and the governor. Despite the many precautions, massacres occurred along the Holstein River. Dinwiddie summoned Colonel Lewis, asking him to increase the number of rangers and lookouts on the frontier.(110) Dinwiddie's first recorded correspondence acknowledging Braddock's defeat was dated 16 July. Dinwiddie wrote to Colonel Patton in the Greenbrier area, asking that he strengthen the militia under his command and ordering him to do as much damage as possible to the marauding Amerindian forces. "I have ordered the whole militia of this dominion to be in arms," Dinwiddie wrote, "and your neighboring counties are directed to send men to your assistance." He dispatched Colonel Stewart and about fifty rangers to assist. In the New River area, between October 1754 and August 1755, 21 persons were killed, 7 wounded and 9 taken prisoner. Among those killed were Colonel Patton and his deputy, Lieutenant Wright.
The latter was killed just three days after Braddock's defeat, by Amerindians whose courage had been bolstered by news of that event. At about the same time the first reports of the terrible massacre were received along the New River. Reverend Hugh McAden, who kept a journal of his times, reported that settlers by the hundreds were fleeing the frontier. Many came first to Bedford, and then moved to North Carolina. John Madison, clerk of Augusta County, reported families fleeing from the Roanoke area.(111) On 25 July Dinwiddie wrote to Washington, informing him that he had ordered three companies of rangers to patrol the frontier. To Colonel John Buchanan he wrote a letter urging him to stand firm and reporting that his ranging company would soon be augmented by the addition of fifty rangers from Lunenburg County under Captain Nathaniel Terry and companies of forty or more rangers led by Captains Lewis, Patton, and Smith. These, Dinwiddie thought, "will be sufficient for the Protection of the Frontiers, without calling out the militia, which is not to be done till a great Extremity." Dinwiddie requested Samuel Overton to raise a company of volunteers in Hanover County and Captain John Phelps to do the same in Bedford County. All ranging companies were to "proceed with all expedition to annoy and destroy the enemy." As an incentive to enlist men and to have them fight, Dinwiddie placed a bounty of £5 on Amerindian scalps. The governor thought the incursions would end by Christmas and that peace would come to the frontier by spring.(112) Governor Dinwiddie expressed his hope that Colonel Dunbar would not take the remnants of Braddock's army into winter camp, leaving the frontier undefended. Dinwiddie decided to pursue a multi-faceted self-help plan for defense of the colony. He would equip and support the ranging companies, improve the militia, build a select militia, continue the bounty payments for Amerindian scalps, obtain adequate firearms for his troops, and enlist the aid of friendly natives. Most parts of the policy, with the notable exception of the creation of the select militia, had proven effective in years past. In 1755, in the wake of Braddock's defeat and the subsequent Amerindian attacks all along the frontier, Virginia's legislature passed an act placing a bounty on the scalps of the hostiles, in effect confirming the governor's earlier executive order.
Whereas, divers cruel and barbarous murders have been lately committed in the upper parts of this colony, by Indians supposed to be in the interest of the French, without any provocation from us, and contrary to the laws of nature and nations, and they still continue in skulking parties to perpetrate their barbarous and savage cruelties, in the most base and treacherous manner, surprising, torturing, killing and scalping, not only our men, who live dispersedly in the frontiers, but also their helpless wives and children, sparing neither age nor sex; for prevention of which shocking inhumanities, and for repelling such malicious and detestable enemies, be it enacted by the lieutenant-governor, council and burgesses of this present General Assembly, and it is hereby enacted by the authority of the same, that the sum of ten pounds shall be paid by the treasurer of this colony, out of the public money in his hands, to any person or persons, party or parties, either in the pay of this colony, or other the inhabitants thereof, for every male Indian enemy, above the age of twelve years, by him or them taken prisoner, killed or destroyed, within the limits of this colony, at any time within the space of two years after the end of this session of Assembly. [The act further provided that] the scalp of every Indian, so to be killed or destroyed, as aforesaid, shall be produced to the governor or commander-in-chief.(113) On 14 July 1755 Dinwiddie commissioned William Preston captain of a ranging company, to serve until 24 June 1756. Preston had to recruit his own men and was nominally under the command of Colonel Patton. By the middle of August he had recruited only thirty men, few of whom were from Virginia. On 14 August Dinwiddie promoted Washington to colonel of the Virginia Regiment and made him supreme commander of all provincial forces raised in defense of the frontier. Dinwiddie promised him sixteen regiments of his countrymen with command post to be established at Winchester and field offices at Alexandria and Fredericksburg. The office in Alexandria would be used primarily for recruitment. Meanwhile, Dinwiddie would obtain arms, ammunition, clothing, and other supplies. Upon his arrival at Winchester, Washington found the recruits to be in "terrible bad order." No man followed orders unless the officers threatened physical punishment. When he ordered them drilled, it became immediately obvious that they had not been exercised in recent times. The distressed refugees from the frontier cowered in fear of the drunken behavior of most of the recruits. Recruiting officers had obviously given thought only to the collection of bounties and not to the creation of a formidable fighting force. Provincial officers assigned to recruiting duty showed no interest in carrying out their assignment and returned after several weeks' work without signing a single man. Many recruits were persistent idlers, some criminals, others escaped bondsmen, and still others physically unsuited for service. Many men who had been drafted from militia units chafed at the thought of discipline and complained of their bad luck in having been selected. Few showed any aptitude for, or interest in, military life. Drill sergeants complained about the "insolence" of almost all recruits. Recruits ignored frontiersmen who attempted to explain some of the critical points of Indian fighting. Officers leading men on forced marches often encountered settlers fleeing from the frontier. 
These poor creatures detained the officers, telling them their tales of woe and beseeching them to return and liberate their homesteads.(114) George Washington had a prejudice of long standing against the militia. That bias showed throughout the Revolution, but its origins were in the Seven Years' War. Writing to his friend and rival Adam Stephen, later a general in the Revolutionary Army, on 18 November 1755, Washington observed that the "life of Military Discipline" required that "we enforce obedience and obedience will be expected of us." He wished that militiamen "be subject to death as in Military Law." He urged that bounties be placed on those who deserted from the militia, as was already the case for deserters from the army. But, he observed, "the Assembly will make no Alteration in the Militia Law."(115) In reality, Washington made no greater progress with the governor than he had with the legislature. Writing from Fort Cumberland on 13 July 1756, he complained to Captain Thomas Waggener that the "Governor has ordered the Militia to be discharged as soon as harvest."(116) On 4 August 1756 he expressed his disdain for the militia to Governor Dinwiddie. Reporting on his experience in western Virginia, he pointed out that when he was ambushed "near Fort Ashby" he received little militia support. He wrote of the "dastardly behavior of the Militia who ran off without half of them having discharged their pieces."(117) He characterized the militia to Dinwiddie as "obstinate, self-willed, perverse, of little or no service to the people and very burthensome to the Country."(118) Washington was much concerned about the sad condition of the Virginia militia well before Braddock's defeat. He first wrote to Dinwiddie on 21 August 1754, urging greater training of the colonial militia.(119) Following Braddock's defeat, George Washington, on 2 August 1755, asked help from Colin Campbell to put the militia "in proper order" to meet the expected onslaught on the frontier.(120) He began correspondence in earnest with Governor Dinwiddie asking his assistance on the same subject. On 8 October 1755, writing from Fredericksburg, Washington told the governor that "I must again take the liberty of mentioning to your honor the necessity of putting the militia under better regulation than they are at present." He urged that Virginia revise its militia law.(121) That letter was followed in rapid succession with another letter dated 11 October in which he threatened to resign his commission "unless the Assembly will enact a law to enforce military law in all its Parts."(122) He suggested to Dinwiddie that the militia law be so revised as to force deserters who were apprehended to be "immediately draughted as Soldiers into the Virginia Regiment."(123) Washington's views were shared by others, including Governor Dinwiddie, who thought that the militia lacked both organization and proper discipline. So great was the governor's distrust of the county militia that only under the most dire circumstances would he order it out, depending instead on ranging units. Dinwiddie asked the legislature to take the necessary and proper steps to place it in readiness. In the governor's mind, it was a simple problem requiring only an equally simple remedy. The militia "had not been properly disciplined, or under proper command" and those who neglected their duty were rarely, if ever, punished.
A new militia law, requiring service under a more severe set of penalties, and mandating periodic training sessions, would do much to remedy the problem. Had the settlers responded immediately by banding together, they would never have had to leave their homes and crops and would have repelled the invasions. A great body of trained militia could have saved them great losses and misfortune.(124) The legislature responded by passing a new militia law, mandating service of all able-bodied men between ages 16 and sixty. Exemptions to this act included most political officials, millers, farm and slave overseers, and those engaged in mining and refining lead, brass and iron. Men were required to provide at their own expense a "well fixed firelock" with a bayonet, cutting sword, and cartridge box. Those who could afford to provide the appropriate equipment could join the companies of horse. However, this service was necessarily restricted to the wealthy and their sons because of the rather considerable equipment required: a horse, good saddle, breast-plate, crupper, curb-bridle, carbine with boot, brace of pistols with holsters, double cartridge box, and a sword. The law restricted use of the militia to the province and no more than five miles beyond habitation on the frontier.(125) On 23 February 1756 Dinwiddie reported to the Lords of Trade on his progress with militia training. "On my arrival at my government [post] I found the militia in bad order." Although there was a reported enrollment of more than 36,000 men, far fewer men were armed and most were undisciplined and untrained in militia tactics. "The militia are not above half-armed, and their small arms [are] of different bores, making them very inconvenient in time of action." The exemptions to the Militia Act were many. There were far too many classes which "are exempted by Act of Assembly from appearing under arms." Those exempted included judges, justices of the peace, plantation overseers, millers and most politicians and public officers. Additionally, many tradesmen were exempted by virtue of their trades. Altogether, those exempted by law amounted, according to Dinwiddie's estimates, to an additional 6000 men who might have been serving in the militia. Dinwiddie then asked the legislature "to vote a general tax to purchase arms of one bore for the militia," but lamented that "I have not yet prevailed with them."(126) However, Dinwiddie, in an address to the legislature, referred most favorably to the militia. "Our militia, under God, is our chief dependence for the protection of our lives and fortunes."(127) The select militia were specially trained citizen-soldiers who had little frontier experience and whose service was to be primarily in urban areas. On 17 September 1755 Dinwiddie issued orders for the dress of the select militia. The officers of the regular militia were to be dressed in a "suit of regimentals of good blue cloath, coat to be faced and cuffed with scarlet and trimmed with silver; a scarlet waist-coat, with silver lace; blue breeches with silver-laced hat." The officers sent into the woods were also to have one set "of common soldiers' dress."(128) Governor Dinwiddie valued George Washington's advice and the militia colonel convinced his superior that the enlistment of friendly Amerindians was crucial to the defense of the frontier. Washington knew that the governor could exploit the ancient tribal antagonisms. There were many advantages to be gained at little cost or inconvenience.
Obviously, those natives who assisted the colonists would not be at war with them. Their contacts with other tribes would render many vital scouting and intelligence services. They were experienced trackers and woodsmen. Considerable numbers could be enlisted for trinkets worth only a few hundred pounds. Their presence might act as a shield against other, more hostile, tribes. Virginia Governor Dinwiddie joined the growing effort to take the offensive against the French. Responding in large measure to Washington's several letters,(129) he asked the House of Burgesses to appropriate money to support the British effort against the French at Crown Point, and to supply and arm the militia in the spring of 1756.(130) North Carolina Governor Arthur Dobbs offered aid and militia supplies to Virginia.(131) The press throughout the American colonies reported Governor Dinwiddie's several calls for increased military preparedness.(132) In Williamsburg the House of Burgesses appropriated money for defense and ordered the militia to be trained and equipped.(133) New militia districts were drawn and training was to be improved.(134) Dinwiddie decided to take the offensive in February 1756. Major Lewis was to assume command, assisted by two "old woodsmen," Captains Woodson and Smith. A supply of 150 small arms, along with gunpowder and lead, was accompanied by a much-needed surgeon, Lieutenant William Fleming. The Cherokees promised aid and Dinwiddie enthusiastically reported to Washington that he hoped to have about 350 men in Lewis' command. The individual companies marched through the Roanoke Valley and assembled at Dunkard's Bottom on the New River at a post optimistically called Fort Frederick. A local minister named Brown appeared to bless the troops, preach a military sermon, and invoke God's protection. Almost immediately word arrived that a Shawnee raiding party had caused mischief about a day's march to the west. Lewis had ordered a man "switched" for swearing, and the sight of such physical punishment disgusted the Cherokees, who deserted. Major Lewis and Captain Pearis followed them and persuaded them to return, but valuable time had been lost. Scouts picked up signs of the Shawnee war party along with their prisoners, but the trail was difficult and food soon ran short. Lewis ordered the men to go on half-rations. The New River at many points ran through steep mountain passes with no level land to be found on either shore. The party had to cross the river almost every mile. The men obtained canoes, but most capsized, damaging and destroying supplies. Eight of Smith's men deserted and a part of Preston's company was compelled to continue on their mission only under the threat of being shot. Unable to contain the spreading mutiny, Major Lewis delivered an impassioned speech urging the men to perform their duty. Only about thirty men and the officers agreed to continue, while the volunteers from the companies led by Smith, Dunlap, Preston and Montgomery deserted. The remaining party pursued the natives without being able to engage them. Casualties were caused either by natural disaster or by the ambush of deserters. Disgusted and frustrated, Lewis returned and delivered his report. On 24 April, Dinwiddie sent him to Cherokee country to construct a fort, which was completed at a cost of £2000. Captain Dunlap constructed another fort at the mouth of Craig's Creek.
Captain Preston continued to march his men through the woods along the Catawba and Buffalo creeks, after which he commanded a portion of the Augusta County militia that had been mustered to defend the frontier. Frontiersmen circulated a petition, asking that a chain of new forts be constructed along the entire frontier. Meanwhile, the House of Burgesses conducted an inquiry into the conduct of the officers assigned to the Shawnee expedition, finding them all innocent.(135) Dinwiddie proposed to the Lords of Trade that they authorize the construction of a string of forts along the Allegheny mountains, with emphasis on the mountain passes. The legislature took up the call, demanding that forts be erected from Great Capon in Hampshire County in the north and extending to the south fork of the Mayo River in Halifax County. Many frontiersmen, upon hearing of this policy consideration, supported it by sending memorials and petitions to both the chief executive and the House of Burgesses.(136) Washington entered the debate. His logic was impeccable. To have the desired effect, each fort would have to have a garrison of approximately eighty to one hundred men. At any time about forty to fifty men would have to be assigned to patrols. The chain of forts would have to be built at intervals not greater than one day's march. The state could not afford to maintain an adequate garrison at so many places. If fewer forts were built, the Amerindians would soon learn how to circumvent them. If fewer men were assigned, the natives would isolate and destroy the smaller garrisons. If the men remained in the forts, they would serve no good purpose. Dinwiddie appointed Washington to chair a conference on this matter, to be held on 10 July 1756 at Fort Cumberland. The conferees expended most of their energy arguing over the best locations for forts. In April 1756 the Virginia militia skirmished with a party of Amerindians led by French officers. Papers taken from a dead French officer revealed that his party, and possibly others, were to harass Virginia settlements and isolated farms along a broad line. They were to penetrate to within 50 miles of major towns and cities.(137) By May 1756 the Amerindian incursions on the frontier had cut communications among many of the frontier towns. Dinwiddie received reports that "the French and indians to the amount of some thousands have invaded our Back Settlements, committed the greatest Cruelties by murdering many of our Subjects without the least regard to age or sex and burnt a great many Houses." He found it difficult to draft men because few were willing to abandon their families to the savages. He requested cannon and small arms from the home government.(138) Dinwiddie sent Richard Pearis to the Cherokee nation on 21 April 1756, with gifts and a letter asking them to come to the aid of the province. An Indian trader claimed that the nation owed him 2586 pounds of deer hides for trade goods delivered and that they must hunt until the debt was paid. Pearis, on his own initiative, assigned the debt to Virginia and burned the books. He was then able to recruit 82 warriors to accompany him. The House of Burgesses awarded Pearis £100 for pay his expenses and to discharge the debt.(139) In late June, Major Lewis gathered several units of rangers and added the 82 Cherokees and set out on another expedition against the Shawnees. The greatest difficulty Lewis encountered was finding a sufficient number of arms to equip his men. 
The Shawnee spotted the movement of Lewis' troops and on 25 June fell upon the inhabitants in the Roanoke area, massacring many and destroying the only fort in the area. A survivor, John Smith, sent a memorial to the House of Burgesses in which he described the massacre and claimed that a party of eight hundred men could "easily" destroy the Shawnees and burn their principal towns.(140) On 5 May 1756 Dinwiddie issued instructions to the county lieutenants. They were to make two drafts among the militia, one being for the little army that was needed to fill the void left after the English had fled to the safety of the eastern seaboard. This group would serve garrison duty at the various forts and comprise an army to seek and destroy the enemy. The second draft was for a group of minutemen who would be available to respond to Amerindian incursions on the frontier.(141) On 24 May Dinwiddie wrote to Maryland Governor Sharpe that he was saddened by the failure of the Pennsylvania legislature to adopt a proper militia law and to offer sufficient support in arms, food and other materials of war. He was heartened by the emergence of a strong militia among the propertied class. "We have a volunteer Association of Gentlemen of this Province," he wrote, "to the number of 200." Dinwiddie was optimistic that "it will be of service in animating the lower Class of our people."(142) But there was little good news elsewhere. Washington had reported that on the "dastardly behavior" of the militia serving with him. Dinwiddie accepted Washington's report on 27 May and apologized for inability of the militia officers to control their men or instill in them the least sense of discipline. He ordered some militia home and suggested that measures be taken to create an orderly martial atmosphere.(143) Dinwiddie received a letter from Washington that he had received an order from William Shirley to send what remained of his meager supplies on the frontier, beginning with gunpowder stored at his most important frontier post, Fort Cumberland, to New York to be used in campaigns Shirley planned in the northeast. Dinwiddie wrote Major-General James Abercrombie, "I hope the order will be countermanded, as there are many forts on the frontiers depending on supplies."(144) On 22 July 1756 Dinwiddie expressed his disappointment in the provincial militia to ironmaster William Byrd, III. He lamented that "if the militia would only, [even] in small numbers, appear with proper spirit, the banditti of Indians would not face them."(145) In preparation for a new campaign in July 1756 the General Assembly passed a new militia act which differed but little from earlier laws. It required that all able-bodied white males, except indentured servants, between ages 18 and 60 be enrolled in the county militia wherein they resided. Residents of Hampshire County were also exempted from the provisions of the act, perhaps because they represented the county closest the scene of the action. Doubtless, these people were expected to act as levees en masse in defense of their homes. Free blacks, Amerindians and slaves could serve as musicians and manual laborers, but could not bear arms. Since indentured servants were not mentioned in any additional provisions of the law, it may be assumed that no service of any kind was expected of them.(146) This law was reenacted through July 1773. Dinwiddie decided to build three forts in Halifax County and one in Bedford County. 
He assigned various county militia units to guard duty, but there were problems almost immediately when the Augusta County militia proved to be ineffective and quite uncooperative. A settler named Stalnaker reported that the Shawnee were gathering a force to attack as far east as Winchester. Dinwiddie gave him £100 to build a fort at Draper's Meadows and told him to raise a company of volunteer militia to defend it. In August Dinwiddie again met with Washington who advised him to build three additional forts in the frontier counties of Augusta, Bedford, and Hampshire. Manning these new forts, along with the existing ones, would severely tax the militia. Washington raised another issue. What was to be done about the ranging companies, most of whose men had deserted? Washington was still unhappy about the high desertion rate during the first Shawnee expedition. The forts, Washington reminded Dinwiddie, were useless without militia to garrison them. The forts had to gather information on the enemy Indians and send out period patrols. Rangers were supposedly the most skilled and highly trained troops available for frontier patrol duty and gathering intelligence. Dinwiddie suggested that his militia commander-in-chief make an inspection tour of the frontier. After attending briefly to some personal business, Washington set off on his grand tour. Most militiamen had no idea how to build a fort and the officers had no plans for fortifications and rarely issued comprehendible orders during the construction phase. Washington was appalled that, following an Amerindian attack on the headwaters of Catawba Creek, the fort's commander, Colonel Nash, could not recruit a ranging company to track and pursue the Shawnees. A second call for militia yielded only a few officers and eight men from Bedford County. Washington moved on to another fort being constructed in Augusta County being built by Captain Hogg. Only eighteen men had shown up to assist, although supposedly another thirty from Lunenburg County were on their way. Still, Colonel John Buchanan assured Washington that, in an emergency, he could turn out 2000 militia on short notice. Washington concluded that about one man out of thirteen had performed his duty. He reported to Dinwiddie, "The militia are under such bad order and discipline, that they will go and come when and where they please, without regarding time, their officers, or the safety of the inhabitants."(147) The tour showed him clearly the terrible state of discipline among the militia, the poor condition of the forts, and the dispirited defense of the garrison troops. On 20 July 1756 the home government attempted to assess the true situation by requesting that the colonial governors respond to certain questions. One of the principal concerns in London was: what measures were the colonies taking to provide for their own defense. The result was the Blair Report on the military preparedness of the colonies. Dinwiddie submitted his report to the king, but it largely repeated findings of which the king was already aware and which we have already discussed.(148) In Virginia the militia consisted of about 36,000, but was only half-armed. The guns in quality and usefulness varied enormously and they certainly did not all fire the same ammunition, "which is inconvenient in time of action." Almost any citizen could escape the Virginia militia service by paying £10.(149) About 1760 there were theoretically 43,329 citizens liable for militia service, but there were over 8,000 exceptions. 
Blacks, free and enslaved, numbering 60,078, were entirely disarmed and thus were useless for militia duty.(150) At the end of the summer things were looking up. Dinwiddie managed to gather a reasonably effective militia force by early autumn 1756. He reported to Loudoun on 28 October 1756 that he now had 400 effective rangers guarding the frontier. Washington had sufficiently reenforced Fort Cumberland so that it appeared to be sufficiently strong to "protect it from falling into the Enemy's hands. He was making some headway in recruiting men for the Royal Americans. Still landholders especially resisted long-term enlistment, especially for service outside their home areas. For service in the Royal Americans Dinwiddie "applied for one-twentieth Part of our Militia, but to no effect. As they are mostly free-holders, they insist on their Privileges and can't be persuaded voluntarily to join in Arms for the Protection of their Lives, Liberties and Properties."(151) In the late autumn Major Lewis returned from Cherokee Fort, having completed his mission. On 15 November, Governor Dinwiddie called Lewis and Colonel Buchanan into conference because he was greatly concerned about the rising costs of maintaining the Augusta County militia. The six companies from Augusta cost more than all the other militia units in the field. The military advisers suggested reducing the active number of men to three companies of sixty men each and sending the rest home. On 23 November Lewis issued orders to Captain Preston to draft sixty men from the militia to relieve the Augusta militia at Miller's Fort and other frontier posts. Those militiamen who had been drafted complained bitterly about their misfortune, but remained on duty through January 1757. In mid-winter, Dinwiddie proposed launching a second expedition against the Shawnee. Captain Vause and Morris Griffith, who had been captured in the Roanoke Valley and escaped, proposed enlisting 250 to 300 volunteers, to be supplied with arms, ammunition and clothing, and to be given only ordinary militia pay, plunder and £10 bounty for scalps. Captain Stalnaker would act as guide. The three companies from Augusta, Dinwiddie thought, would be sufficient to guard the frontier. Vause and Griffith thought that they would have no trouble enlisting men if only because so many men were upset, and many had been personally touched, by the earlier Amerindian massacres. Because the frontiersmen had initiated the expedition, its supporters became known as the Associators. Meanwhile, Dinwiddie attended a strategy meeting in Philadelphia, where it was decided to enlarge the punitive force to six hundred men. Upon his return he discovered a number of letters and petitions from frontiersmen advising against the expedition. The principal complaints revolved around the election and appointment of officers, state of equipment, and availability of commissary. Colonel Clement Read, writing from Lunenburg County, offer his opinion. "I am sorry the Expedition so well intended against the Shawnee is likely to be defeated, and all our schemes for carrying it on rendered abortive by an ill-timed jealousy and malicious insinuations."(152) News reached Dinwiddie in April that atrocities and massacres had occurred in Halifax County and the inhabitants were blaming the presumably allied Cherokees. In May Captain Stalnaker reported the passage through Halifax of at least four hundred Amerindians, including Catawbas, Tuscaroras and Cherokees. 
Dinwiddie proposed the adoption of a three part plan. First, he called for the creation of three new ranging companies under Colonel John Buchanan and Captain Hogg. Second, he also ordered the drafting of one thousand militiamen into the First Virginia Regiment with Washington as the commander-in-chief. The number of men in the pay of the colony was now two thousand exclusive of rangers, constituting a considerable financial burden on the colony. Third, he ordered the creation of a series of block-houses and forts along the southern frontier.(153) The governor thought that conditions on the frontier had been pacified, but decided to maintain a presence. He sent a new draft of sixty militiamen to Miller's Fort to relieve Preston's first company. Under Preston's leadership, this band built new fortifications at Bull's Pasture, Fort George, and Fort Prince George. None of these outposts reported significant Amerindian activity and no new tales of massacres were heard. In June 1757 Dinwiddie received another bit of encouraging news. An Amerindian friend of the Virginians, known as Old Hop, dispatched 30 warriors to assist in repelling incursions of the French Indians near Winchester, and promised to send at least three more similar bands. It was a mixed blessing because Dinwiddie was asked to provide each warrior with a shirt, leggings, pants, a small arm, powder, lead and blankets and they demanded match coats which Dinwiddie could not supply. To keep their Amerindian allies loyal to the British side, the legislature appropriated £5000 to reestablish the Indian trade.(154) A few days later Dinwiddie reported to William Pitt that 220 Catawbas, Nottoways and Tuscaroras had joined his militia at Winchester and had just brought the first scalps and a few prisoners. Another party of 70 warriors, largely Cherokees, was working with the militia toward Fort Duquesne. He was optimistic that he would soon have as many as 1500 Amerindians fighting on the British side.(155) In 1757 the Virginia legislature again revised the colony's basic militia law because "the Militia of this Colony should be well regulated and disciplined." The act required that, henceforth, all officers, superior or inferior, should be residents of the county in which they command. It covered all able-bodied, free, white male inhabitants, ages 16 to 60, except newly imported servants and members of council, House of Burgesses and most colonial, county and local officials; professors and students of the College of William and Mary; overseers of four or more slaves or servants; millers and founders; persons employed in copper, tin or lead mines; and priests and ministers of the Gospel. The county and local officials and a few others exempted "shall provide Arms for the Use of the County, City or Borough, wherein they shall respectively reside." Councilors were to provide "for complete sets of Arms." The day following a general muster the county officers were to meet at the court house "and to inquire of the Age and Abilities of all Persons enlisted, and to exempt such as they shall adjudge incapable of Service." Free blacks, persons of mixed racial heritage and Amerindians who chose to enlist were to be "employed as Drummers, Trumpeters or Pioneers, or in other servile Labour." Within twelve months of receiving their appointments county lieutenants, colonels, lieutenant-colonels and majors had to provide themselves with suitable swords. 
Captains and lieutenants had to have firelocks and swords; and corporals and sergeants, swords and halberts. Every militiaman had to provide himself with a well fixed firelock, bayonet and double cartridge box; and keep in his home a pound of gunpowder and four pounds of musket balls fitted to his gun. Parents were required to provide arms for their sons; and masters arms for their servants. Those too poor to afford a musket were to certify the same to the county officers and then the county would provide a musket branded with the county markings. On the death or removal of a poor militiaman, or his attainment of age 60, the musket was to be surrendered to the county lieutenant. An officer could "order all Soldiers . . . to go armed to their respective Parish Churches." "For the better training and exercising the Militia," the county commanders were to "muster, train and exercise his Company . . . in the Months of March and April or September or October, yearly." Failure to appear at muster subjected a militiaman to discipline, usually a fine. The officers were to "cause such Offender to be tied Neck and Heels for any Time not exceeding five Minutes, or inflict such corporal Punishment as he shall think fit, not exceeding 20 Lashes." The law was quite specific as to the use of militia fines. The officers were to "dispose of such Fines for buying Drums and Trophies for the Use of the Colony and for supplying the Militia of said County with Arms." Officers were required to take the following oath: " I --- do swear that I will do equal Right and Justice to all Men, according to the Act of Assembly for the better regulating and discipling the Militia." Under the militia act, county lieutenants were required to appoint one inferior officer and as many men as he though were needed to serve as slave patrols. The law charged these patrols with visiting all "Negro Quarters and other places suspected of entertaining unlawful Assemblies of Salves or other disorderly Persons." Slaves absent from their own masters' plantations were "to receive any Number of Lashes, not exceeding 20 on his or her bare back, well laid on. Militiamen serving slave patrol received ten pounds of tobacco for each day's or night's service.(156) Militiamen in several cities were covered by separate acts. Citizens of Williamsburg and Norfolk were mustered and trained according to laws passed in 1736 and 1739. Exempted by these acts were sailors and masters of ships. These militias had the additional responsibility to stand seacoast watch. Cities had nightly slave patrols, which were assigned duty within the city limits and one-half mile beyond in all directions. The legislature also passed an act for "making Provision against Invasions and Insurrections" which gave the governor full authority over the militia in times of emergency. He "shall have full Power and Authority to levy, raise, arm and muster, such a Number of Forces out of the Militia of this Colony as shall be thought needful for repelling the Invasion or suppressing the Insurrection, or other Danger." Penalties for failure to muster were substantially increased, up to death or dismemberment.(157) The colonial regiment might be sent to the aid of royal forces or incorporated as part of such troops whereas the ranging units had been developed for the protection of the frontier and were not subject to royal draft. The legislature appropriated £1500 for the support of the troops provided the rangers remain always in the service of the colony. 
The royal authorities had no choice but to accept the legislature's terms for the crown needed men to join General Forbes' expedition against Fort DuQuesne. Major Lewis joined Washington at Winchester, bring a significant portion of the volunteer regiment with him. This left Colonels John Buchanan and William Byrd and Captains Preston, Dickinson, and Young to guard the frontier. These men built a new fort on the James River named after Francis Farquier who, in January 1758, succeeded Dinwiddie as the colony's governor. With the best military men serving with the First Virginia Regiment, poor leadership plagued the militia. In one major blunder, Captain Robert Wade led a party of militiamen up the New River where they encountered a band of friendly natives, fell upon them, and massacred many warriors. Colonel Byrd made a similar mistake in the late autumn.(158) Washington was still far from being pleased with progress the province was making in discipling and training the militia. He complained to Governor Francis Farquier of the sad state of the militia in early 1758. On 25 June 1758 Farquier replied to Washington's letter. "I am extremely sensible of all you say in your letter of the nineteenth, instant, relative to the bad condition of the militia and wish I knew how to redress it."(159) Farquier decided to appoint William Byrd to serve as colonel of the second Virginia regiment, although he was placed nominally under Washington's orders. Byrd sent an Indian trader named George Turner to the Cherokee camp to carry gifts to atone for Wade's and Byrd's earlier slaughter of their braves, and to recruit them into Virginia service and the assistance of Forbes expedition against Fort DuQuesne. The crown ordered, and the legislature concurred, that the volunteers in both Virginia regiments should remain in royal service until January. The volunteers complained that this extended their service several months beyond their contractual time, but appeals to patriotism, revenge, and additional pay won the cause. The Forbes expedition was a resounding success, highlighted by the capture of Fort DuQuesne on 26 November 1758. Forbes suffered few casualties beyond the needless loss of about four hundred men under Majors Grant and Lewis. Washington resigned his commission and was succeeded by William Byrd as provincial commander-in-chief. The French were now gone from the Ohio territory so Virginia turned its attention to the former French allies, the Shawnee and associated tribes, and against the troublesome part of the Cherokee nation. Based largely on captured French records spies, and officers, Forbes estimated the following numbers of hostile Amerindians: the Delawares between Ohio River and Lake Erie, 500 warriors; the Shawnee on the Scioto and Muskingum rivers, 500 braves; the Mingoes on the Scioto River, 60 warriors; and the Wyandots on Miami River, 300 men at arms. Additionally, the Cherokees in western North Carolina and eastern Tennessee had 1500 to 2000 warriors.(160) In January 1759 Governor Farquier convened a military council, including his council and Colonel Byrd, to plan the Cherokee expedition. He ordered Byrd to position his second regiment to its best advantage in anticipation of a move south. Forbes demanded that a portion of the regiment be stationed at Pittsburgh to guard against a return of the French. Since this fit well with the provincial desire to hold the western Amerindian tribes at bay, council agreed. 
Farquier ordered that the militia of the counties of Frederick, Hampshire and Augusta, and the rangers in Bedford and Halifax counties, to be placed in readiness to assist in maintaining the peace on the frontier. Two hundred artisans were to be recruited by offering an enlistment bounty of £5 and then deployed in the strengthening fortifications. The men of the proposed expedition remained in camp, adopting a defensive, rather than offensive, posture. Three hundred of the militia and frontier were enlisted "to secure and preserve the several forts and places . . . and protect the frontiers from the threatened invasion of the Cherokees and other Indians." By March 1760, additional rangers and militia were placed in readiness on the southern frontier. In May, an additional seven hundred men were recruited by offering a bounty of £10 and sent to the southwestern frontier and the relief of Fort Loudoun. Major Lewis assumed command of the new recruits. In the summer of 1761 Captain William Preston stationed rangers in several fortifications on New River to protect the inhabitants from the Amerindians. He thought the situation to be sufficiently dangerous to muster the militia, but Governor Farquier refused permission, telling him to solve the problems by peaceful means. Provincial expenses were high enough without having to pay more militiamen. Farquier wrote Preston, urging him to persuade the frontiersmen to remain on their plantations. Preston was able to settle his problems with the Cherokee by having a local surveyor, Thomas Lewis, draw a boundary between their land and that of the colony. Andrew Lewis, brother of the surveyor, met the Cherokees and made a treaty that obliged them to guard the southwestern frontier. So successful was the peace treaty that surveying continued along the Roanoke River in 1762 and 1763.(161) The Cherokee expedition was finally ready to move on the enemy in April 1761. Colonel Byrd ordered the various component companies to assemble at Captain James Campbell's plantation in Roanoke. By act of the legislature of 31 March 1761, Byrd was authorized to proceed with one thousand men. The money was not forthcoming and Byrd was unable to offer cash for the bounties or purchase supplies for his commissary under Thomas Walker. Byrd decided to recall all available men from Pittsburgh and to proceed with the five hundred men he could pay and supply. On 1 August the supplies had not arrived nor had more men been recruited. Byrd ordered the old Cherokee Fort to be refitted, strengthened and garrison by sixty militia recruits. The Cherokees retreated from their northern towns and Colonel Grant, commander of the advance forces, failed to engage them. Enlistment of the volunteers were expiring and the legislature authorized the extension of service through May 1762. Adam Stephen then assumed command with orders from council to proceed against the Cherokees. He moved his three hundred men to Great Island, built Fort Robinson thereon, and set up camp there for the winter under Captain John McNeil. The Amerindians were now some three hundred miles away from the inhabitants of the southern Virginia frontier. Declaring the frontier to be safe, and the Cherokees driven south, council disbanded the second regiment in February 1762, and then commended them on their service.(162) In 1761 all British subjects "living on the western waters" were ordered to vacate their homesteads since these lands were to be reserved to the Amerindians. 
A few cabins were burned, but English authority was never firmly established on the frontier and the area was far too vast to police effectively. The normally docile Shawnee especially resented the incursion on their lands and in 1761 effectively isolated the settlers in the Greenbrier area. In July 1763 massacres again occurred along the southwestern frontier. On 27 July, Colonel Preston reported, "Our situation at present is very different from what it was. . . . All the valleys of Roanoke River and along the waters of the Mississippi are depopulated." He sent the Bedford County militia out in pursuit of a Shawnee raiding party. His report continued. I have built a little fort in which are 87 persons, 20 of whom bear arms. We are in a pretty good posture of defence, and with the aid of God are determined to make a stand. In 5 or 6 other places in this part of the country they have fallen into the same method and with the same resolution. How long we may keep them is uncertain. No enemy have appeared here as yet. Their guns are frequently heard and their footing observed, which makes us believe they will pay us a visit. . . . We bear our misfortunes so far with fortitude and are in hopes of being relieved.(163) Governor Farquier sent Preston a letter in which he promised to move militia from other counties to assist in the relief of Roanoke. He promoted Andrew Lewis to the rank of major, to serve under Preston who was the county lieutenant. In October 1763 Captain William Christian led a party of Amherst County militia to the New River where they engaged a band of about twenty Amerindians. After an exchange of gunfire, and the massacre of a settler held captive, the savages fled. Otherwise, the expedition was essentially unremarkable. Lieutenant David Robinson, an officer in the Bedford County contingent of Captain Preston's rangers, led his men in yet another fruitless tour of the New River area in February 1764. William Thompson and a Captain Sayers followed Robinson and they, too, had no luck in engaging the natives. Still, isolated Shawnee raids decimated isolated settlements and slaughtered their inhabitants. One unfortunate incident followed the killing of members of a party of Shawnee who had murdered the Cloyd family. The militiamen recovered the family's "fortune" of £137/18/0, mostly in gold and silver coins, but fought over the distribution despite the fact that the militiamen were the Cloyds' neighbors. The dispute ended only when the county court decided to grant each man thirty shillings.(164) By 1764 they had pushed into the Shenandoah Valley as far as Staunton. The militia was ineffective in responding to these expeditions. In April, Dr. William Fleming, then living in Staunton, wrote Governor Farquier, telling him that the local militia was unequal to the task of defending the town. Farquier dispatched 450 militia under Colonel Andrew Lewis to defend the town, but they did not encounter any hostiles and, after three months of inactivity, were discharged. Lewis retained 130 rangers in service in the area until September.(165) General Bouquet decided he must carry the war against the Shawnee into the Ohio territory. Accompanying his army were two hundred Augusta County militiamen under Captain John McClenachan. On 9 November 1764 Bouquet concluded a peace treaty with the Shawnee at Muskingum. One part of the agreement required that prisoners held in Shawnee camps be returned. 
Throughout the winter and into the following spring, prisoners were delivered to Fort Pitt and other posts and placed under the care of the Virginia militia. Bouquet's peace lasted until 1774. Still, sporadic raids occurred against isolated settlements in the southwestern frontier. In May 1765 a party of Shawnee camping at John Anderson's house in the Greenbrier Valley was attacked by Augusta County militiamen in retribution for various earlier raids. Colonel Lewis and Dr William Fleming intervened on behalf of the Indians, saving at least some of their lives. Leaders of the "Augusta boys" offered a reward of £1000 for Lewis' scalp and £500 for Fleming. Cooler heads prevailed, the community came to its collective senses, and Lewis and Fleming emerged as heroes.(166) During Pontiac's uprising Virginia had kept over 1000 militiamen on duty on the frontier and reduced casualties significantly. Still, the natives could strike anywhere at almost anytime and no system of defense was foolproof. However, Virginia's losses were negligible compared to those of Pennsylvania. George Croghan, well known Indian trader and diplomat to the Pennsylvania and New York tribes, estimated that Pennsylvania lost over 2000 inhabitants during that short war, and Virginia nearly as many.(167) Governor Dinwiddie thought the militia should have repelled the Amerindian incursion. General Jeffery Amherst called on Virginia to furnish volunteers and militia to garrison Fort Pitt and to carry out the reduction of the Shawnee towns in Ohio. If Virginia would supply the frontier fighters Amherst would try to spare some regulars "to join the Virginians in offensive operations against the Shawanese Towns on the Banks of the Ohio."(168) In 1766 the Virginia legislature again revised the fundamental militia act. The act renewed the list of those exempted from militia, adding physicians and surgeons, Quakers and other religious dissenters, tobacco inspectors at public warehouses, acting judges and justices of the peace. The provisions for the purchase and maintenance of militia arms were reenacted, with the penalty increased to £5. The act brought Williamsburg and Norfolk under the obligation to muster and train in March or April, and to attend a regimental muster once a year, although other provisions of the particular acts of 1736 and 1739 for these boroughs remained in force. The authorities of James City and York were clearly and legally separated from Norfolk and Williamsburg.(169) At this time the mounted militia substituted trumpets for the traditional drums used by foot soldiers.(170) In January 1774 John Murray, Earl of Dunmore (1732-1809), royal governor of Virginia,(171) seized western Pennsylvania and set up a new government in and near Pittsburgh under James Connolly. Simultaneously, he encouraged more hunters, traders and settlers to enter that region of Virginia known as Kentucky. Certain disaffected persons, at home and in England, used the colonial independence as an opportunity to forment trouble with the native Americans as much to embarrass the Whigs as to advance their interests in western lands. Some believe that British Indian agents urged the Shawnee, peaceful since the treaty ten years earlier, to resist colonial encroachment on their lands by warring against the traders in their lands. Massacres of some traders precipitated a response by Virginia. Shawnee and Ottawa war leaders decided to end this encroachment upon their lands, leading to what is known as Dunmore's War. 
Two columns of Virginia militia and volunteers responded. Dunmore led an expedition down the Ohio River while Colonel Andrew Lewis led a second militia column down the Great Kanawha River. Dunmore's militiamen rode their horses into battle as mounted infantry, but having overloaded the poor animals and chosen old and otherwise useless horses, to avoid having good animals killed or wounded, the men were forced to rest the animals frequently. The three columns traveled at different speeds and during rest periods lost contact with one another. The Amerindians used that opportunity to divide and conquer and so launched the attack during a rest period. What should have been a resounding colonial victory turned into the indecisive Battle of Point Pleasant on 10 October 1774. The standard newspaper account exaggerated the size of the enemy force and underestimated the size of the militia force by claiming that 600 Virginia militia and volunteers had fought against 900 Amerindians at the mouth of Kanawha River and won a "resounding victory."(172) Most objective accounts conclude that neither side that gained an advantage. In reality, the colonial militia outnumbered the Amerindians 1000 men to about 300. The war ended with the Amerindians yielding hunting rights in Kentucky and guaranteeing free passage on the Ohio River in the Treaty of Camp Charlotte.(173) Still, like Dinwiddie's war in 1754, Dunmore's war was a failure. Like Dinwiddie, Dunmore had antagonized the House of Burgesses. Time was already past in 1754, let alone in 1774, when a governor could order a major deployment of the militia without first receiving legislative acquiescence. The legislature and some local officials, not the governor controlled the militia. The wars were both very unpopular and the general population was generally displeased with both the cost and the result. On 24 December 1774 Governor Dunmore wrote to Lord Dartmouth, "every county is now arming a company of men whom they call an independent company."(174) Most counties had already formed, or were in the process of forming, such independent companies. By the end of the year at least six companies were fully formed, armed and prepared for action.(175) Patrick Henry(176) assumed political leadership, realizing that a number of independent volunteer companies, formed in and by various counties, could not provide the force necessary for a sustained war. He saw these companies as barriers against the Amerindians and as a reservoir of trained or semi-trained manpower from which an regular force might draw. He had not yet considered the possibility of enlistment in a national regular army, but was bound to the concept of a statewide militia under state command. Henry's position at the end of 1774 may be summed up by the following resolution which he offered at the First Virginia Convention. Resolved, That a well regulated militia, Composed of gentlemen and yeomen, is the natural strength and only security of a free government; that such a militia in this colony would for ever render it unnecessary for the mother country to keep among us, for the purpose of our defence, any standing army of mercenary soldiers always subversive of the quiet, and dangerous to the liberties of time people, and would obviate the pretext of taxing us for their support. 
That the establishment of such a militia is, at this time, peculiarly necessary, by time state of our laws for the protection and defence of the country, some of which have already expired, and others will shortly be so; and that time known remissness of the government in calling us together in legislative capacity, renders it too insecure, in this time of danger and distress, to rely that opportunity will be given of renewing them, in general assembly, or making any provision to secure our inestimable rights and liberties, from those further violations with which they are threatened. Resolved, therefore, That this colony be immediately put into a state of defence, and that there be a committee to prepare a plan for embodying, arming, and discipling such a number of men, as may be sufficient for that purpose.(177) The Virginia militia filled a number of vital and important roles during the Revolution, supporting the patriot cause in both the north and south. In March 1775 the Virginia Convention met in Old St. John's Church on a hill above the falls of the James River in Richmond. The delegates were seeking privacy and distance from royalist Governor Dunmore. Patrick Henry immediately moved that the "Colony be immediately put in a state of defense," meaning that the militia be formed, disciplined and armed. Opposed by even some of the patriots, Henry then delivered his famous "Give me liberty or give me death" speech, which was more than sufficient to carry the motion.(178) Henry's speech at the Convention was based on the assumption that a simple militia would be insufficient because a prolonged war was inevitable and that a real, substantial force, based on, but separate from, the general militia, was to be absolutely necessary for the defense of Virginia. Henry argued that a mere show of force in the form of a general and broad muster of the militia would accomplish nothing because the British authorities would not be intimidated. His purpose was to convince the assembly that they should abandon all hopes of a peaceful reconciliation and prepare for a prolonged war.(179) After considerable debate Henry introduced a second resolution which called for placing the colony in a full state of military preparedness. The state was to call into service a body of men sufficient to defend it from both the English forces along the coast and the Amerindians whom the British might seduce into making raids along the frontier. The men were to be completely trained in military arts, fully armed and subjected to standard military discipline. This would become the select militia of yeomen and gentlemen of which Henry had spoken earlier in his first motion. Richard Henry Lee, who had spoken in favor of Henry's position, seconded the motion. Thomas Jefferson also rose in support of Henry's plan, as did the distinguished jurist, St. George Tucker and John Taylor of Caroline County. Thomas Nelson, one of the wealthiest men in Virginia, declared that, should the British land troops in his county, he would summon his militia to resist whether he had authorization from the Convention or not. Other militia officers rose to second Nelson's position. Washington, perhaps recalling his distaste for militia, said nothing.(180) The Williamsburg "gunpowder affair" became for Virginia what the British attempts at confiscation of the same commodity at Lexington and Concord was from the militia of Massachusetts. 
Patrick Henry had demanded that Dunmore release the colony's supply of gunpowder at the Williamsburg Magazine for militia use. Dunmore related the order of 19 October 1774 from Lord Dartmouth which forbade the export of gunpowder and arms to the American colonies. The royal governor interpreted the order as including the distribution of arms and powder already in the colonies, stored in the royal armories and magazines. Henry argued that the arms and gunpowder in question had been sent for militia use and the royal authorities had simply neglected to distribute these to the county militias. Dunmore sent 20 kegs of gunpowder from the public magazine on the night of 20 April 1775 and had it loaded aboard the schooner Magdalen. As word of this confiscation circulated many Virginians talked open rebellion. Council, on Henry's recommendation, addressed a communication to Dunmore, pointing out that the powder had been stored for the protection and security of the colony and that it must be restored to it. Dunmore claimed that the mere presence of the gunpowder among the militia constituted a call to arms and an open invitation to the more rebellious leaders of the militia to actually rebel. He would release the powder immediately upon hearing of any Amerindian incursion, but, for the time being, it would remain with Captain Collins aboard the Magdalen. Henry summoned the militia. A significant body of armed men gathered at Fredericksburg. Volunteers arrived from Hanover and New Castle. With the arrival of each new militia, the commanders sent messages to Williamsburg, bragging on their gathering strength. By 26 April, the governor saw that his position was untenable. Dunmore acquiesced to Henry's demand by pledging his honor to return the powder, but he considered this the first act of rebellion in his colony.(181) Honoring his pledge to return the confiscated gunpowder proved to be Dunmore's last act as the generally recognized royal political authority in Virginia. On 29 April the Virginia Gazette carried news of the events at Lexington and Concord. Patrick Henry used this news as an occasion to spur the patriots onto greater action. To him, after the "robbery" of the gunpowder, "the next step will be to disarm them, and they will then be ready to arms to defend themselves."(182) Even after the return of the powder, Dunmore had planned to remain in his mansion. He took the precautionary step of ordering that it be fortified, even to the point of bringing in artillery, but he was soon intimidated by the gathering militia from the countryside. On the morning of 8 June 1775, Dunmore abandoned Williamsburg, escaped to Yorktown and boarded the man of war Fowey.(183) There he issued his final report on the gunpowder affair. 
I have been informed, from undoubted authority, that a certain Patrick Henry, of the county of Hanover, and a number of his deluded followers, have taken up arms and styling themselves an Independent Company, have marched out of their County, encamped, and put themselves in a posture for war, and have written and dispatched letters to divers parts of the Country, exciting the people to join in these outrageous and rebellious practices, to the great terror of all His Majesty's faithful subjects, and in open defiance of law and government; and have committed other acts of violence, particularly in extorting from His Majesty's Receiver-General the sum of Three hundred and Thirty Pounds, under pretence of replacing the Powder I thought proper to order from the Magazine; whence it undeniably appears that there is no longer the least security for time life or property of any man: I have thought proper, with the advice of His Majesty's Council, and in His Majesty's name, to issue this my Proclamation . . . . Reaction in Virginia to reports of the events of Lexington and Concord were much the same as among the people of the other states. One American living on the Rappahannock River wrote to a London newspaper that "It would really surprise you to see the preparations [we are] making for our defence, all persons arming themselves, and independent companies, from 100 to 150 men in every county of Virginia, well equipped and daily endeavouring to instruct themselves in the art of war." He claimed that "in a few days an army of at least 7 or 8 thousand well disciplined men" who were "well armed" would "be together for the protection of this country." (184) Patrick Henry addressed the militia at New Castle, claiming that the British Ministry had created a plan "to reduce the colonies to subjugation, by robbing them of the means of defending their rights."(185) Another correspondent from Virginia reported to a London newspaper, We shall therefore in a few weeks have about 8000 volunteers (about 1500 of which are horse) all completely equipped at their own expence, and you may depend are as ready to face death in defence of their civil and religious liberty as any men under heaven. These volunteers are but a small part of our militia; we have in the whole about 100,000 men. The New England provinces have at this day 50,000 of as well trained soldiers as any in Europe, ready to take the field at a day's warning, it is as much as the more prudent and moderate among them can do, to prevent the more violent from crushing General Gage's little army. But I still hope there is justice and humanity, wisdom and sound policy, sufficient in the British nation to prevent the fatal consequences that must inevitably follow the attempting to force by violence the tyrannical acts of which we complain. It must involve you in utter ruin, and us in great calamities, which I pray heaven to avert, and that we may once more shake hands in cordial affection as we have hitherto done, and as brethren ought ever to do. . . . Messrs. Hancock and Adams passed through this city a few days ago . . . about 1000 of our inhabitants went out to meet them, under arms . . . . By last accounts from Boston, there were before the town 15,000 or 20,000 brave fellows to defend their country, in high spirits . . . . Should the King's troops attack, the inhabitants will be joined with 70,000 or 80,000 men at very short notice. . . 
.(186) In June 1775 Lord Dunmore abandoned his capitol, taking refuge aboard a British man o' war, and went through the pretense of asserting royal authority. The colonists thereafter were to charge that he conducted warfare by plundering isolated plantations, abusing women, abducting children, stealing slaves, and burning wharves. In October he was repulsed at Hampton and in December defeated at Norfolk. The royal government was dissolved. On New Year's Day 1776, Dunmore made his last raid and then sailed away to England.(187) A convention met at Richmond with the charge to reconstitute government. The interim government ordered the formation of two regiments of the Northern Continental Line under the command of George Washington and two bodies of militia: the regular militia and a body of special minutemen to be organized along the lines of minutemen in New England. By November 1775 Accomack County reported that "almost to a man" the whole body of freemen of that and surrounding counties were "ready to embody themselves as a militia."(188) The new Virginia militia act, passed in July 1775, came as a legal reaction to the spontaneous popular reaction to the massacre of the patriots in Massachusetts. The act created two classes of militia, the regular companies and special companies of minute-men. The militia law was enacted providing that all free males, between the ages of sixteen and fifty, with certain exceptions, should be enrolled. These militia were organized into companies of from thirty two to sixty eight men strong, and companies were organized into regiments. The Governor appointed the regimental officers. All the militia in a county were under an officer called the County Lieutenant, who held the rank of colonel, who, on taking the field, ranked all colonels commanding regiments.(189) In the winter of 1775-76 Virginia organized Minute Men. The State was divided into districts, each furnishing a battalion. Selected officers were appointed who secured their men from the State militia. The were required to have extra drills and were better clothed an armed than the militia. They were subject to call at any time.(190) The militia act of July 1775 created a specially trained select militia, the Minutemen. Regarding the minutemen, the Convention resolved, That the minute-men in each respective district, so soon as they are enlisted and approved . . . shall be embodied and formed into separate battalions, and shall be kept in training under their adjutant for 20 successive days, at such convenient place as shall be appointed by the committee of deputies in each district; and after performing such battalion duty, the several companies of each battalion shall, in their respective counties, be mustered, and to continue to exercise four successive days in each month, except December, January and February . . . care being taken that such appointments do not interfere with battalion duty. . . . and be it further ordained, that, in order to render them the more skillful and expert in military exercise and discipline, the several companies of minute-men shall twice in every year, after the exercise of 20 days, be again embodied and formed into distinct battalions within their districts, and shall at each meeting, continue in regular service and training for 12 successive days . . . . 
And as well for the case of the minute-men, as that they may be returned in regular rotation to the bodies of their respective militias, be it further ordained, that after serving 12 months, 16 minute-men shall be discharged from each company . . . and the like number the end of every year, beginning with those who stand first on the roll, and who first enlisted; and if those who stand first should choose to continue in the service, taking the next in succession being desirous of being discharged, and so from time to time proceeding in regular progression. . . . The minute-men shall not be under the command of the militia officers . . . (191) The minutemen were a select militia which was assigned defense of the state and especially the frontiers. The minutemen were separate in the chain of command from the great militia, and one set of officers had authority over the other organization only when they were expressly mustered in joint action. The minute-men in each respective district, so soon as they are enlisted and approved, as before directed, shall be embodied, and formed into separate battalions, and shall be kept in training under their adjutant for 20 successive days, as such convenient place as shall be appointed . . . and after performing such battalion duty, the several companies of each battalion, shall in their respective counties be mustered, and continue to exercise for successive days in each month, except in December, January and February. . . . in order to render them more skillful and expert in military exercise and discipline, the several companies of minute-men shall twice in each year, after the exercise of 20 days, be again embodied and formed again into distinct battalions within their districts, and shall in each meeting continue in regular service and training. . . but the minute-men shall be under the command of the militia officers, nor the militia under the command of minute officers, unless drawn out upon duty together.(192) The minutemen were to be rotated so that no individual was unduly burdened. As well for the case of the minute-men, as that they may be returned in regular rotation to the bodies of their respective militias, be it further ordained, after serving 12 months, 16 minute-men shall be discharged from each company . . . and the like number at the end of every year, beginning with those who stand first on the roll, and who were first enlisted; and if those who stand first should choose to continue in service, taking the next in succession desirous of being discharged, and so from time to time proceeding in regular progression.(193) Robert Carter Nicholas, one of Virginia's delegates to the Continental Congress, warned the state legislature of the limitations of the militia. "Neither militia nor Minute-men will do except for sudden and expeditious service."(194) One of the first actions assigned to the minutemen was the capture of Lord Dunmore, last royal governor of Virginia. Dunmore had recruited a band of loyalists and escaped servants and slaves and had erected fortifications on Gwynn's Island, Matthews County. Scotch merchant James Parker, writing from Norfolk, Virginia, to a friend in Edinburgh, Scotland, on 12 June 1775, observed, You will see the Governor [Lord Dunmore] and his family again. I do not think his lady will return to Williamsburg. Tis said he will, provided the shirtmen are sent away. These shirtmen of Virginia uniform are dressed with an Oznaburg shirt over their clothes, a belt round them with a Tommyhawk or Scalping Knife. 
They look like a band of assassins and it is my opinion, if they fight at all, it will be in that way.(195) Newly elected Governor Patrick Henry resolved to end this threat to the security of the state. Dunmore referred to the minutemen as "shirtmen" on account of their habit of wearing buckskin or homespun shirts instead of regular uniforms.(196) Dunmore was aware of the deadly accuracy of the rifle-equipped shirtmen, having seen them in action during Dunmore's War just two years earlier. Moreover, at the Battle of Great Bridge, on 9 December 1775, the shirtmen killed or mortally wounded 62 British troops with their deadly rifle fire, while losing no men of their own. The British commander, Captain Fordyce, fell early in the engagement, his body pierced by 14 rifle shots.(197) After warning his command that the shirtmen would surely scalp all survivors alive, as well as all dead loyalists, Dunmore fled, boarding a small man-of-war in the James River, leaving the New World forever behind. The minutemen found Gwynn's Island deserted.(198) An American correspondent wrote to a London newspaper in early spring 1776, reporting that "nothing has happened in Virginia since the entire destruction of Norfolk." However, he optimistically reported that the state "by the month of April will have 30,000 or 40,000 men to take the field." Many were common militia, but "amongst these are a great number of riflemen."(199) One historian claimed that, at the outbreak of the war, approximately 45,000 men were eligible for service in Virginia and that, during the entire war, that number was never less than 40,000. However, only about one-quarter of the number was ever engaged in any significant service. When the war began, large numbers of militiamen were still in Dunmore's service on the frontier. Later, others served in the expedition against the Cherokee nation in the west, and still others had been sent to the aid of North Carolina in its Cherokee War.(200) On 14 August the Virginia Convention received news that Dunmore was planning an attack upon Williamsburg, with the intention of capturing as many of the rebel leaders as possible. The Convention requested the Committee of Safety to enlist volunteers to protect the city, and to call out the militia. The legislature acted quickly, calling out 8180 militiamen to be equipped as minutemen. And "the balance of the militia were ordered to be armed, equipped and trained, so as to be ready for service." The legislature also adopted a manual of arms and militia training. It established an arsenal at Fredericksburg to manufacture muskets and other small arms. To pay for the various expenses of defense, the legislature issued £350,000 in paper money, along with an annual tax to redeem the issue.(201) In December 1775 the Virginia Convention authorized the formation of six additional regiments of the Continental Line, with each regiment consisting of ten companies of 68 men each. Drafts from the militia rolls were instituted. Having excluded blacks, whether free or slave, and indented servants from militia service, the Virginia Convention, in the summer of 1776, enlisted two hundred Amerindians in the state militia.(202) On 10 March 1776 Virginia dispatched two regiments of 650 men each to assist North Carolina, primarily against Tarleton's Loyalist forces. During the first three years of the war, England held no part of Virginia. The best the English could do was to attempt to wreak havoc and hope that they could lower provincial morale. 
The militia served three purposes in the early years. First, the general militia was regarded as a reservoir upon which the Continental Line could draw replacements. Second, along the seacoast the urban militia served to protect cities in a case of an invasion. The tidewater militia was especially trained for this service. Third, the militia from the Blue Ridge Mountains and westward fought in major engagements with the native aborigine. Many were enrolled in the frontier rangers. British agents and disgruntled adventurers had stirred up the natives who were still resentful over their defeat at Point Pleasant, supplied them with guns, and urged them to war by granting them gifts, money, and liquor. As John Page advised Jefferson, "have the militia completely armed and well trained as the time they can spare will admit of, and [then] . . . make draughts of it when men are wanted."(203) All militiamen were required to take the following oath. I, ------, do swear that I will be faithful and true to the colony and dominion of Virginia; and that I will serve the same to the utmost of my power, in defence of the just rights of America, against all enemies whatsoever. The Third Virginia Convention passed a new militia act. Because of the "present danger, it is adjudged necessary" that all free, able-bodied males between the ages of 18 and 50 be enrolled in the general militia. Companies of not less than 32, nor more than 68, members were to be formed in all counties of the state. The militia law required that "every militia man should furnish himself with a good rifle, or common firelock, tomahawk, bayonet or scalping knife, pouch or cartouch box, and three charges of powder and ball." Drills were to be held semi-weekly, along with two general county musters, to be held in April and October, with the minutemen providing training. The act provided for the exemption of two groups of religious dissenters, the Society of Friends and Mennonites. It also exempted bound apprentices, indented servants and several classes of professions. Clergy of the established church and those churches in communion with it were exempted. Those engaged in various trades adjudged to be vital to the war effort were also exempted.(204) Shortages of manpower required that the legislature remove certain exemptions. On 15 June 1776 the legislature passed an ordinance "to raise and embody a sufficient force for the defense and protection of the Colony" so overseers of plantations and millers on the eastern shore lost their immunity from militia duty. On 5 July 1776 the revocation of the exemption of millers was extended to the whole state.(205) On 24 June the Convention voted to "let the present Militia officers be chosen annually . . . by joint vote of both houses of the assembly." The governor was empowered to fill vacancies with the advice of his privy council.(206) On 29 June the Convention voted to allow the governor to "embody the Militia with the advice of Privy Council and when embodied shall alone have the direction of the Militia."(207) With Patrick Henry elected commander of the select Virginia militia, men began to appear in increasingly large numbers. Two regiments, destined to become continental regulars, soon formed. Henry described them appearing with various garb, from ancient militia uniforms to buckskins, to recently sewn uniforms, although most were dressed in "green hunting shirts." Many had the words, "Liberty or Death" inscribed somewhere on their clothing. 
Hats or caps were trimmed with buck-tails, and nearly all carried scalping knives or tomahawks. Most carried their own fowling pieces, which fired the widest possible assortment of round balls. Some carried flags or banners with the coiled rattlesnake motif and the words, "Don't tread on me."(208) The militiamen were organized in units of 76 men with four officers with halberds, one fifer, one drummer and one color bearer. The public treasury provided the fifes, drums, halberds and flags. By the time six regiments had been raised the legislature authorized creation of a post of drum major.(209) Philip Fithian described a militia muster in late 1775 or early 1776. The Drums beats & the Inhabitants of this Village muster each Morning at 5 o'clock . . . . Mars, the great God of Battle is now honoured in every part of this spacious Colony, but here every Presence is warlike, every sound is martial! Drums beating, Fifes & Bag-Pipes playing & only sonorous & heroic Tunes -- Every Man has a Hunting Shirt, which is the Uniform of each Company.(210) The select militia was given special training and organization. The state was divided into sixteen military districts and each district was to recruit 500 minutemen, to be divided into ten companies of 50 men each. Only "expert riflemen" need apply for membership in these select units, and the members were ordered to muster and train for 20 days in the month following organization, and then four days each month thereafter. Additionally, they would superintend training of the great militia at annual spring and fall musters, each of which was to last 12 days.(211) The Baptists approached the Virginia legislature, asking that their clergy be given the privilege of preaching among the troops. Many of its adherents had already enlisted in the patriot cause. The Church of England was the established denomination, but the legislature thought that since the Baptists had pledged loyalty to the patriot cause, the privileged status of one church should not present an obstacle, and thus granted permission. The privilege was then granted to all Protestant sects willing to support the cause. The Baptist pulpit, in repayment, became politicized in support of the cause of liberty. Colonel William Woodford, a Virginia militia commander, and a close friend of George Washington, had recently been commissioned and wrote to Washington for advice on selecting a manual for discipline if his troops. On November 10, 1775, Washington, writing from Cambridge, offered his opinion on military discipline. Washington provided him with a list of five military books for study: Sir Humphrey Bland's A Treatise of Military Discipline,(212) a book Washington noted as "standing foremost." Next he named An Essay on the Art of War which was the book written by Count Turpin de Crisse, and recommended to Washington by Forbes.(213) Third was Instructions for Officers.(214) The last two books were: The Partisan(215) and Young's Essays on the Command of Small Detachments.(216) One cannot but be struck with the excellence of this selection. They deal largely with infantry as he was writing to an infantry colonel. Two of the books, Bland's and Turpin's, were respectively the best military books of the period published in England and France. The Partisan covered the use and deployment of light troops and partisans, today's guerrillas, and was thus especially useful to militia commanders. 
Thomas Simes had published The Military Guide for Young Officers in Philadelphia in 1776 but this book was merely a reprint from an older English edition.(217) When Von Steuben arrived at Valley Forge he found only two military books were used, those of Bland and Simes.(218) These books constituted the substance of military knowledge upon which officers both of the regular army and the militia in all the states drew during the Revolution.(219) In the winter of 1775-76 Dunmore gathered a band of loyalists to supplement his army of two companies of the Fourteenth Regiment and moved through Norfolk and Princess Anne county. At the east branch of the Elizabeth River at Kemp's Landing Dunmore defeated the Princess Anne militia under Colonel Hutchings. Colonel Woodford gathered a few of the fledgling Continentals and a number of militia and pursued Dunmore's force. At Great Bridge on the Elizabeth River, on 9 December 1775, Woodford met and defeated Dunmore's 200 regulars and 300 loyalists and escaped black slaves, inflicting considerable losses on Dunmore while suffering only one man wounded. Woodford reported that "the deadly rifles of Captain Green's Culpeper [militia] men, every one of them a marksman, contributed greatly to this victory, as they had at Hampton." Dunmore retreated to the safety of his ships at Norfolk, leaving the slaves to make their own way out.(220) Patrick Henry ordered several companies of minute-men to encamp around Williamsburg to protect the city and its officials. The Committee of Safety ordered out several more companies of minute-men to guard other points, such as Burwell's Ferry, Jamestown, Hampton and York-town, where Dunmore might land his mixed force. The Virginia Convention met on 1 December 1775 at Richmond and soon adjourned to Williamsburg, where it remained in session until 20 January 1776. In cooperation with the Committee of Safety, it created seven additional regiments of regulars and called out a company of 500 riflemen. The latter were deployed in the counties of Accomack and Northampton to protect them from Dunmore's force. Colonel Woodford, as ranking military officer in the state,(221) pressed the Convention to supply better arms of standard military caliber. As many of the men, both regulars and militia, were armed with fowling pieces of various calibers, each man had to mold his own bullets. Moreover, fowlers were wholly unsuited to the use of bayonets. Woodford also complained of the poor quality of the arms received from the former colonial stores. Had "better arms been furnished in time for this detachment, they might have prevented much trouble and great expense to this Colony. Most of those arms I received the other day from Williamsburg are rather to be considered as lumber, than fit to be put in men's hands. . . ."(222) The Convention considered a number of principles upon which the new state should be based. The thirteenth point convened the militia. It declared that "a trained militia is the proper defense of a free state, that standing armies in times of peace are dangerous to liberty, and that the military must be in subordination to the civil power." The Convention made reference to the provisions in the English Bill of Rights that Protestants should be allowed to keep and bear arms and that there should be no standing army in peacetime without the consent of Parliament. 
The delegates agreed that these two sections were the natural conclusions of historical experience and of a true democratic tradition.(223) Amerindian problems beset the newly independent state almost immediately. Urged on by royal emissaries and white renegades the native aborigine carried out raids against isolated settlements along the Holston and Ohio rivers and in Kentucky. The Cherokees along the Holston were especially active so a large militia force was created made up largely of frontiersmen who were experienced in Indian fighting. The urban militia supplemented the backwoodsmen by occupying the few towns and forts in the path of the marauders. The militias from the counties of the Shenandoah Valley were able to sustain the Amerindian incursions from the north. The Virginia Bill of Rights of 1776 provided "that a well-regulated militia, composed of the body of the people, trained to arms, is the proper, natural and safe defence of a free State." It also rejected standing armies and ordered the subordination of the military to civil authority.(224) The Virginia Convention of 1776 put Thomas Jefferson to work on a draft of a new constitution for the newly independent state. His first draft of the fundamental document contained a provision for the militia and the right to bear arms based in classical political thought which tied human freedom to the right to keep and bear arms. The following shows Jefferson's original draft and changes made by deletion. No freeman shall ever be debarred the use of arms. No souldier shall be capable of continuing in there shall be no standing army but in the time of peace actual war(225) After the delegates considered and debated his initial draft, Jefferson made the following changes in his second draft. Deletions are shown. No freeman shall be debarred the use of arms within his own lands or tenements There shall be no standing army but in time of actual war(226) His third draft of this provision read exactly as the second had read.(227) The Constitution of 1776 also provided that the governor direct and command the militia and recommend commissions to the legislature. Militia officers commissioned previously were to be continued in grade provided only that they take the oath of loyalty.(228) In the summer of 1776 the citizens of Kentucky met at Harrodsburg and on 6 June 1776 appointed deputies to represent them at Williamsburg. They wished to secure Virginia citizenship for themselves and to associate their frontier militia with the state militia. The Harrordsburg gathering appointed Gabriel Jones and George Rogers Clark (1752-1818) to represent their interests and sent them on the 500 mile journey to Williamsburg.(229) By the time they reached Botetourt County, they learned that the Convention had adjourned. Jones joined Colonel Christian's expedition against the Cherokees while Clark continued on his journey. He met with Patrick Henry at his home and received a cordial reception. Henry recommended both the incorporation of the Kentucky militia and material support, especially with 500 pounds of gunpowder. On 23 August the Convention provided the gunpowder, sending it to Pittsburgh and then down the Ohio River.(230) This secured the loyalty of Kentucky to Virginia and drew its militia into the state's military organization. On 29 May 1776 the Virginia legislature decided to create three companies of Minute Men, to be stationed on the frontier. 
The main problem attending the deployment of these ranging units was securing rifles wherewith to arm them.(231) Rangers were to be skilled marksmen and thus be armed with rifled arms instead of muskets. Unlike muskets, rifles had not been standardized, but the legislature deemed uniformity of caliber highly desirable. They also required a greater effort and investment of time to manufacture. The law creating ranging units was strengthened to provide "the better defence of the frontiers of this Colony." Funds were appropriated for implementation of the Minute Men in June 1776.(232) On 20 June 1776 the legislature authorized the formation of a company of rangers in Fincastle, Botetourt and Augusta counties. Ranging companies were to be drawn from frontier companies because the men there were accustomed to the Amerindian way of fighting. Urban militia were essentially useless in the wilderness, just as the frontiersmen's special talents were wasted in urban settlements. Rangers were ordered to assist the militia in other counties as needed; and in return they could ask for assistance from other counties.(233) General Washington, writing from New York, supported the formation of ranging companies on the frontiers, believing this to be an effective use of frontiersmen. "With respect to [the use of] militia in the management of Indian affairs, I am fully persuaded that the inhabitants of the frontier counties in your colony are, from inclination as well as ability, particularly adapted to that kind of warfare."(234) In mid-June the Fifth Virginia Convention considered revisions of the militia law to make it better meet the needs of a wartime state.(235) The Convention and the Governor then turned their attention to arming the militia. By the time of the Revolution, arms were extremely scarce among the population. One of the primary problems confronting the militia was replenishing the supply of the once legally mandated, privately owned and supplied firearms. The state sent impressment gangs through the countryside to confiscate (although they eventually paid for) firearms wherewith to arm both the Virginia Continental Line and the militia, although the former certainly had priority in the allocation of arms. Impressment of arms from private citizens was a primary source of supply, and was an extremely unpopular device. Moreover, the scrounging officers brought back a mixed bag of old, obsolete, obsolescent, worn out and damaged arms more frequently than they brought back current and useful arms. The guns were of many calibers and fired a variety of projectiles. On 2 October 1776 Captain Nicholas Cabell (1750-1803) delivered to Captain Samuel Higgenbotham the product of a week of impressment. These arms, which were to be consigned for militia use, included 22 rifles of 14 different calibres and 8 shotguns, a hunting weapon usually not considered useful or suitable for military use.(236) Arms shortages continued to plague Virginia throughout the Revolution. So destitute was the militia of firearms that the Committee of Safety ordered muskets to be issued when available and, if none were available, "speers or cutlasses." Several companies were issued tomahawks.(237) On 14 June 1776 General Francis Johnson wrote from Long Island to General Anthony Wayne, "I shall not continue 6 months longer in the Service without Arms," warning him that, as things were, he would have to defend various fortifications "with our People armed with Spears, or be compelled to leave the Camp."
He also noted that "Howe and his Redcoats will pay us a Visit immediately . . . [and] we for our parts have nothing but damned Tomahawks."(238) Like other states, Virginia had a need for arms greater than it could fulfill through any sources of supply. The state authorities were willing to accept whatever arms they could procure. On 13 September 1777 Edmund Pendleton wrote to William Woodford, "the length or form of Rifles or other guns I am inclined to think will make no great difference so long as the old sort of experienced hands use them."(239) To secure the arms from pilferage, the state ordered that "all arms delivered out of Publick Stores or purchased by Officers for use on the Continent, [are to be] branded without loss of time."(240) On 20 February 1781, John Bannister complained to Jefferson that Congress was remiss in supplying the state. "I cannot help observing how unjust it is in Congress not to assist us with arms when we have to contend singly with the greatest part of the British army."(241) In the late summer of 1776 Governor Henry sent Colonel William Christian with a substantial company of militia to the relief of the frontier. He made his way through the southern Ohio territory, down the Tennessee River, into the lands of the Cherokees and Creeks. However, the enemy proved to be elusive because "the men retreat faster than I could follow." He reported to Henry that, "I know, Sir, that I could kill and take Hundreds of them, and starve hundreds by destroying their Corn, but it would be mostly the women and children." Unlike General John Sullivan later on, Christian refused to make war on the able-bodied men by starving the very old, the women and the children. "I shewed pity to the distressed and spared the supplicants, rather than that I should commit one act of Barbarity." Nonetheless, Christian captured 40 to 50 thousand bushels of corn and 10 to 15 thousand bushels of potatoes, along with assorted quantities of horses, fowl, cattle and hogs. The expedition also rescued a few white captives. Christian attempted to negotiate with the leaders, sachems and chiefs, but had little initial success. It is here that Christian first encountered a renegade chief he called Dragon Canoe, on whom more later. He warned the leaders with whom he did meet that he could easily command 2000 Virginia militia and that the Carolinas would supply another 400, all experienced Indian fighters. Eventually, some chiefs responded to his overtures of peace. Time also allowed for the gathering of intelligence and he learned that one Cameron, a British agent, had successfully seduced Dragon Canoe and a few others, and that Cameron had promised to produce large quantities of war materials at Mobile, to be given to such tribes as would ally with the English against the colonists.(242) Christian warned Henry that he apprehended far greater danger from the English at Mobile than at Fort Detroit, and strongly recommended that an expedition be undertaken against the southern renegade Indians.(243) A second militia detachment under General Rutherford attacked several Indian towns and killed a number of warriors, captured several Frenchmen and took prisoner several escaped slaves. The militia also captured a quantity of gunpowder and lead and provisions valued at £2500. These supplies had been destined for Mobile, to be used to attract Cherokees to the British cause.
South Carolina militia under Colonel Williamson, after suffering considerable losses during an ambush, regrouped and routed the Cherokees, supposed to have been under British and tory leadership. Williamson joined Rutherford "destroyed all the Towns, the Corn and everything that might be of service" to the Cherokees in several of their villages. Despite being opposed by a "considerable body" of hostiles, Rutherford lost only three men.(244) In 1776 Virginia had far fewer problems recruiting soldiers for the Continental Line than it had in supplying them with arms and accoutrements. Congress had ordered on 16 September 1776 that Virginia supply fifteen battalions of the Line. So successful was the state in filling its initial quota that John Wood, governor of Georgia, on 20 August 1776, asked for, and received, legislative permission to recruit in Virginia in order to fill his own state's quota. In a letter to Richard Henry Lee, Henry complained bitterly about this allowance. "I write to the General [Washington] that our enlistments go on badly. Indeed, they are almost stopped. The Georgia Service has hurt it much."(245) Discipline was harsh and, at times, even bizarre. In 1776 Captain John Pegg, a vestryman in his church and militia captain, was fined, broken in rank and held up to public contempt for "drinking and making use of in his family the detestable East Indian tea." Pegg responded that the inquiry into his habits, practiced within the privacy of his own home constituted "an impertinent interference in his family affairs" and that he would not be bound by such inquiries. The state responded by listing him as "an enemy to the cause" in the Virginia Gazette.(246) Washington on 4 October 1776 had observed that there is an enormous, material difference between voting to raise companies of soldiers and actually recruiting, equipping, arming and discipling them. Responding to Washington's request for reasonable terms of service, on 16 November 1776 the legislature set enlistment terms at three years and made provision for recruiting, even drafting if necessary, men from the reservoir of trained militiamen.(247) In December 1776 the Virginia legislature authorized the formation of three additional battalions of regulars to serve under the command of the Congress, but in the pay of the state. It also authorized the creation of additional minute-men and volunteer companies in the exclusive service of the state. By December 1776, the legislature had to ask assistance in recruiting from "justices, members of county committees, and the other good people of this Commonwealth" in recruiting men to serve at all levels, from regulars with three year enlistment obligations to militia to minute-men to volunteer companies.(248) The question of the legality and legitimacy of the deployment of militia outside the state had never been resolved, dating from colonial days. Rather than resolving this problem, on 26 December 1776, Governor Henry issued a special call for volunteers "willing to engage in the defence of this State, or march to the assistance of any other, should the exigency of things demand it."(249) He described the volunteers to General Washington. "The volunteers will consist chiefly from the upper parts of the country, who would make the best of soldiers, could they continue so long in the service as to be regularly disciplined. He thought they would be "as respectable as such a corps can be expected, without training." They will find their own arms, clothes, and . . . 
be commanded by captains . . . of their own choosing." They would differ from militia in that "they will be subject to the Continental Articles of War."(250) By February 1777 it was apparent that Henry's call interfered with the enlistment of troops for long service in the Continental Line, so Henry suspended his call for volunteers until the enlistment of regulars was completed.(251) In March 1777, Governor Henry reported that "the recruiting business of late goes on so badly that there remains but little prospect of filling six new battalions from this State, voted by the Assembly." He was disappointed at the failure of the militia to serve, as hoped, as a reservoir of trained manpower for the army. "I believe you can receive no assistance by drafts from the militia."(252) Nonetheless, the legislature authorized a draft from the militia to complete enlistments in the Line.(253) In March Henry was forced to send militia to the Virginia frontier. He ordered militia from Botetourt and Montgomery counties to march to the relief of the settlers in Kentucky, primarily to escort the more distant settlers to convenient places of safety while the Indian menace loomed. Although he understood that there was a vast territory to scour for settlers, Henry was forced to inform the lieutenant of Montgomery County that his many commitments outweighed his resources. "The great variety of War in which this State is engaged," Henry wrote, "makes it impossible to spare such a number of men for this Expedition as I could wish."(254) Henry was much concerned for the defense of the western frontier. In March 1777 he asked Governor Thomas Johnson if Maryland was able to support Virginia with militia to defend Fort Pitt and to join in an expedition down the Ohio River to contain the hostile Cherokees.(255) More bad news concerning the Amerindians trickled in from the western frontier. Cornstalk had approached the Virginia garrison at Point Pleasant on the Ohio River to report that Colonel Henry Hamilton, the notorious "hair buyer," had achieved remarkable success among the northerly tribes. Cornstalk did not want to become involved in the "white man's dispute," but he might have to "move with the stream." The commandant detained him along with his two companions. Cornstalk's son, worried about his father's failure to return, then came to the fort. Meanwhile, two men hunting for fresh meat not far from the fort were attacked and one was killed by Cornstalk's men. A relative of the dead man, one Captain Hall, advanced on Cornstalk and murdered him, his son and at least two other Shawnee. A vital portion of Cornstalk's message was also lost since, at the time of his murder, he was performing a valuable service to his friends, the Virginians, by drawing a map that showed the disposition and location of the various tribes between his own Shawnee villages and the Mississippi River.(256) The wanton murder of one of the most popular Amerindian leaders was the immediate cause of raids into the Greenbrier Valley. The militia and rangers contained the attacks, but the depredations continued throughout the war, tying up many militiamen who might have served the patriot cause better by deployment elsewhere. Garrison duty at the many forts maintained along the frontier during the entire war proved to be the most unpopular duty assigned to the militia. Many Virginians objected to the drafting of militia into the army.
The opposition was especially strong on the frontier where the loss of the male head of household might prove disastrous to the farms. Samuel McDowell of Rockbridge County, wrote to Governor Thomas Jefferson, complaining that the draft "must ruin a number of those whose lot is to march . . . their families and stocks must suffer, as they mostly have not any person behind them when they are gone from home to work their small farms." McDowell advised Jefferson that his friends and neighbors "would serve as militia but would not be drafted for 18 months as regulars." McDowell's neighbor George Moffet emphasized just how much they loathed the draft in his letter of 5 May 1781 to Jefferson. "Yet they would suffer death before they would be drafted 18 months from their families and made regular soldiers of."(257) Since Virginia was neither occupied nor greatly molested during the war, the state was able to function as a reservoir of troops for the Continental Line and as a base of supplies for the patriots. There is scant evidence of deployment of the militia in the north and only occasional use of it in the south during the first three years of the war. Thus, other than frontier duty, the militia was used almost exclusively as a source of semi-trained manpower for the army. In 1779 Clinton sent a fleet to harass the Virginia coast, ending the first phase of the revolution for the state. Urban militia were placed on coastal watch and a portion of them became minutemen, ready to act in defense of the seacoast. The basic militia law was re-enacted and slightly reconstituted by the General Assembly on 5 May 1777, as "An Act for Regulating and Discipling the Militia. All free white males between ages 16 and 50 were eligible for enlistment. Hired servants and apprentices, but not free black or slaves, were included. Excluded were the governor, members of the state council, members of Congress, judges, state officers, such as attorney general and clerks, ministers, postmasters, jail keepers, hospital personnel, millers, iron and lead workers and persons engaged in firearms production for the state. Enlisted officers and men serving in the Continental Line and state navy were also exempted from registration for the militia. Companies of not less than 32 nor more than 68 men were formed, with battalions being made of not less than 500, nor more than 1000 men. Each company had a captain, two lieutenants and an ensign; battalions had additionally a colonel, a lieutenant-colonel and a major. (258) With the continued scarcity of arms, Virginia could ill afford to lose arms through pilferage. On 8 June 1777 the legislature ordered that "all arms delivered out of the Public Stores, or purchased by officers for use on this Continent, to be branded without loss of time." The standard brand employed was "VA" or "Va Regt --."(259) By late winter 1777 Governor Henry had deployed 300 militia at Fort Pitt, primarily to guard against tory and Amerindian activity.(260) To stem the Amerindian menace, Henry conceived, and the legislature approved, an action against Pluggy's Town, an Indian village beyond the Ohio River. Henry dispatched scouts and emissaries to the Delaware and Shawnee, to ascertain if they had objections to Virginia sending militia across their lands. Having determined that these neutral tribes would not be drawn into combat were Virginia militia to enter their lands, on 12 March 1777, Henry began to lay specific plans for this militia action. 
On that date, Henry wrote to George Morgan, superintendent of Indian Affairs, and Colonel John Neville, commandant at Pittsburgh, laying out his scheme. Both men responded on 1 April, cautioning strongly against the action. They expressed the most grave concerns that a punitive action would be inconclusive and that it would most likely provoke a general, long, barbarous and expensive Indian war.(261) Despite the acute shortage of arms there was often considerable friction between artificers and military contractors and other military authorities. Despite the obvious and acute need for the arms, accoutrements, horseshoes and canteens to be made and repaired, local governmental authorities, facing increased quotas for replacements in the Continental Line, threatened to enlist the artificers in the militia. At Peytonsville, Spotsylvania County, William McCraw, commander of a small band of artificers, wrote the governor, reminding him that McCraw had promised, he assumed on the authority, and with the consent, of the governor, that his men would be exempt from other duties while performing their jobs at the forges. "Unless this be stopped, I can not furnish the canteens so much wanted by the Southern Army; the armourers will not be able to repair the damaged guns, nor can I have horseshoes made, now so much needed." The General Assembly therefore passed legislation which specifically exempted from the draft or militia or other military service any artificer assigned to military posts or privately employed by independent arms or military supply contractors.(262) As the war progressed, many Virginians expressed confidence in their state militia. Edmund Pendleton on 30 August 1777 wrote to Richard Henry Lee, "I think it no unimportant part of our late success that [the] Militia had a principal hand in it, for if they will stand six hours hard fighting with their officers and men falling by their sides, we can never be subdued, our resources in that way are infinite."(263) In August 1777, while Governor Henry was in Hanover preparing for his impending marriage, word was received that General Howe's army had appeared with the British Navy off the Virginia coast. Henry authorized General Thomas Nelson to muster and command 64 companies of militia for the defense of Williamsburg. Among those responding was a militia company of students at the College of William and Mary. Henry ordered Colonel Charles Harrison's regiment of artillery to remain at York-town on the pretext that "militia must in this case be chiefly depended on, and their skill in managing Cannon promises nothing effectual." He also ordered the militia to detain persons suspected of disloyalty on the pretext that they might aid the British.(264) As it was, the British fleet did not land until it reached the Head of Elk, and its mission on this occasion was to provide troops for the assault on Philadelphia, not for an attack on Virginia. To support Washington in this assault, Henry ordered one-third of the militia of the counties of Prince William, Loudoun, Fairfax, Culpeper, Fauquier, Berkeley, Shenandoah, and Frederick, to march toward Philadelphia.(265) Washington thanked Henry for dispatching militia, but noted again his disdain for the Virginia militia, offering a sharp contrast to the New York and New England militias. How different the case in the northern department! 
There the states of New York and New England, resolving to crush Burgoyne, continued pouring in their militia, till the surrender of that army, at which time not less than 14,000 militia . . . were actually in General Gates's camp, and those composed, for the most part, of the best yeomanry in the country, well armed, and, in many instances, supplied with provisions of their own carrying. Had the same spirit pervaded the people of this and the neighbouring States, we might, before this time, have had General Howe nearly in the situation of General Burgoyne. . . .(266) In May 1778, the legislature passed a series of acts designed to draft or recruit 2000 men to assist General Washington. Those enlisted, whether as volunteers or drafts from the militia, were to serve until 1 January 1779, or less than two years. Additionally, minute-men were to be recruited for the defense of the eastern shore from British raiders and on the west from Amerindian attacks.(267) By mid-summer 1778, enlistments of many Virginia Continentals were expiring. Their numbers had been diminished by desertion, casualties in battle and death and incapacity from smallpox, dysentery and other diseases. Word of plagues of smallpox and other contagion diminished whatever enthusiasm yet remained for the patriot cause. While the legislature authorized the payment of bounties and another draft from militia rolls, Henry found it nearly impossible to recruit even half of the assigned quota. The state currency had become so depreciated that neither bounty nor pay were meaningful. Looking forward, Henry could see that the enlistments of the first nine regiments of the Virginia Line were due to expire early in 1778. He wrote to Congress, expressing his deep concern, but without being able to offer any solution.(268) In May 1778 Governor Henry received a distressing report regarding the Northampton County and Norfolk city militias. Captain John Wilson, the militia commander, wrote, "I beg to observe that the militia of late, fail much in appearing at musters, submitting to the trifling fine of five shillings, which, they argue, they can afford to pay by earning more at home."(269) Immediately after reading this, Henry conveyed a message to Benjamin Harrison, Speaker of the House of Delegates, concerning the military. In a positive vein, he reported success in the campaign against the Cherokees. Regarding the militia, he had a mixed report. "Although the militia of this commonwealth are in general well affected, and no doubt can be entertained of the general good disposition of the people," he wrote, "I am sorry to say that several instances of refractory and disobedient conduct have, which, for the sake of example, called loudly for punishment." But, probably with Wilson's letter in mind, he also reported that "offenses against the Militia law are become common."(270) Having established relations with the settlers in Kentucky, Virginia felt somewhat obligated to undertake their protection. Henry also had men in that year engaged in other frontier areas of the West. The policy of appeasement and peace that Colonel Neville and George Morgan had recommended was evidently a failure. 
After a series of Amerindian outrages, the Supreme Executive Council ordered Colonel John Todd to enlist 250 militiamen to provide some relief.(271) Congress also thought to act on behalf of the western settlements and in the spring of 1777 ordered General Hand to enroll a large body of militia to move against the Amerindians in Ohio from a base at Pittsburgh. Hand called into service the militias of the Virginia counties of Frederick, Yohogania, Ohio, Hampshire, Monongalia, Botetourt, Augusta and Shenandoah. Henry was still uncertain if he could deploy the militia beyond the state's boundaries, so he decided to call for volunteers. Colonel Skillern raised five volunteer companies in the counties of Greenbrier, Augusta and Botetourt and marched to Point Pleasant, where a fort had been created, to join Hand. Captain Arbuckle commanded Fort Randolph at Point Pleasant and he had engaged several important Amerindian leaders in negotiations, among them Red Hawk and Cornstalk. The latter, desiring to honor the treaty he made after Dunmore's War, had attempted to dissuade his tribesmen from entertaining the British representatives. Cornstalk was unsuccessful in his attempts to maintain neutrality, so he came to Fort Randolph to inform the Americans of the British entreaties. Arbuckle detained all the Amerindians who came to the fort, holding them as hostages to prevent a large-scale Indian war. After a militiaman from Rockbridge County was killed, allegedly by one of Cornstalk's men, the militiamen of Captain Hall's company murdered the hostages, including Cornstalk, his son and Red Hawk. Hand arrived two days after the murder, having failed to recruit any militia volunteers in Pennsylvania.(272) Neither did Hand bring provisions, and there being none at the fort, the volunteers abandoned their mission and returned home. The murder of one of the great Shawnee leaders precipitated an Indian war as the whole Shawnee confederation sought to avenge Cornstalk's death. Concerned citizens of Greenbrier County sent an elaborate memorial to the state authorities, demanding help.(273) On 27 May 1778, Henry ordered a post to be set up at Kelly's in Greenbrier County, manned by militia from Botetourt County, to guarantee the communication and supply route between Williamsburg and Fort Randolph. He also dispatched militia from several counties to support Fort Randolph. And he offered a substantial reward for the capture and punishment of those responsible for the murder of Cornstalk and the others. Finally, he appointed Andrew Lewis and John Walker to serve as special ambassadors to the Delaware and Shawnee nations at a conference scheduled at Fort Pitt on 23 July 1778. The murderers, Captains Hall and Galbraith and others, were brought to trial in Rockbridge County, but were immediately acquitted as no man was willing to execute a white man for an Indian's murder.(274) Disgruntled, more than 200 of the Shawnees laid siege to Fort Randolph in May 1778. Failing to capture the fort, the marauding band wreaked havoc throughout Greenbrier County until repulsed by Colonel Samuel Lewis and Captain John Stuart and the militias of several counties. Congress replaced Hand with an experienced Indian fighter from Georgia, General McIntosh, who was given command of a joint force of militia, volunteers and the Thirteenth Virginia Continental Line. McIntosh was to carry the war to Detroit where Henry Hamilton, known as the "hair buyer" for his purchases of white scalps, was headquartered.
Congress ordered Governor Henry to provide 2000 men, whether militia or volunteers. Henry estimated the following items would be among the bare minimum supplies needed to carry out the orders of Congress: 30,000 pounds of lead; 1000 horse belts; 400 felling axes and 3000 hatchets; 100 kettles, tents, haversacks and suits of clothing; 500 horses; and a large supply of arms and gunpowder and money. Additionally, there would be the problems of "recruiting, arming, accoutring & discipling" of such a large body of militia. In a long letter to Congress, dated 8 July 1778, Henry begged off. There was no way, he said, that Virginia could afford or supply all that Congress demanded. Congress, he wrote, seemed to have no idea of "the exhausted state of this Country," but seemed to think the state's resources were unlimited. He supported the scheme, and the elimination of Hamilton's scalp purchasing was certainly a worthy objective. Congress reluctantly accepted Henry's explanation and simply ordered McIntosh to do what he could with what he had and to operate from Pittsburgh.(275) The expedition proved to be fruitless. In 1778 McIntosh set up a garrison of 150 militia at Fort Laurens on the Tuscarawas River in the Ohio territory, but abandoned it the next year. Where Hand and McIntosh had failed, George Rogers Clark was destined to succeed. He had journeyed to Williamsburg in the autumn of 1777, carrying a petition from Kentucky which asked for relief from the Amerindian raids. Having failed to find other ways to relieve the pressures on the frontier, the legislature offered some token support and £1200, not a great sum in the depreciated Virginia currency.(276) It commissioned Clark a lieutenant-colonel and charged him with capturing Fort Detroit. It ordered "that the Governor be empowered . . . to order such part of the militia of this Commonwealth as may be most convenient . . . to act with any troops on an expedition that may be taken against any of our western enemies."(277) Clark had convinced Council that Kaskaskia "was at the present held by a very weak garrison" and could be taken without great effort or cost. Moreover, "there are many pieces of cannon & military stores to a considerable amount." This proved to be an irresistible bait and Council ordered him "to procure the artillery and stores" to supply the army. Council suggested Clark raise "seven companies of 50 men each" who were "to receive the pay and allowance of the militia & to act under the laws and regulations of this State, now in force, as militia." Despite Hamilton's policy of buying scalps and the brutality of the attacks on the frontier, Council ordered him "to show humanity to such British Subjects, and other persons, as fall into your hands."(278) With the consent of Governor Henry, Clark offered 300 acres of land to any who would volunteer to serve on his mission. Henry had long harbored the dream of extending Virginia's boundaries west to the Mississippi River and Clark's mission on behalf of the state, if successful, would go a long way toward establishing that boundary.(279) Moreover, by claiming the Mississippi as the boundary, Henry was on safe legal grounds in deploying state militia on that frontier. Captain Leonard Helm of Fauquier County, and Captain Joseph Bowman of Frederick County, each offered to raise a militia company to support Clark. They planned to meet at Redstone Old Fort [Brownsville, Pennsylvania].
Clark encountered great difficulties because many potential recruits in Western Pennsylvania regarded Clark's expedition as a way to promote Virginia over Pennsylvania interests. Few were willing to support the defense of Kentucky. The county commissioners of Fauquier and Frederick questioned the legality of deploying their militiamen in the western territories. Eventually, in May 1778 Clark raised a small force and, with 175 volunteers and militia, moved down the Ohio almost to its juncture with the Mississippi River and then moved northwestward. On 4 July 1778 he captured Kaskaskia and, with the support of the French inhabitants, brought the surrounding area under control. On 17 December Hamilton, with a force of about 500, of which about one-half were Amerindians, took Vincennes, but on 6 February 1779, Clark recaptured it. By 25 February, after a super-human effort to cross flooded plains, he forced Hamilton's surrender and took the "hair buyer" prisoner. Patrick Henry, under whose orders Clark's militia had fought, reported with unconcealed delight to Richard Henry Lee. Governor Hamilton of Detroit is a prisoner with the judge of that country, several captains, lieutenants, and all the British who accompanied Hamilton in his conquest of the Wabash. Our brave Colonel Clark, sent out from our militia, with 100 Virginians, besieged the Governor in a strong fort with several hundreds, and with small arms alone fairly took the whole corps prisoners and sent them into our interior country. This is a most gallant action and I trust will secure our frontiers in great measure. The goods taken by Clark are said to be of immense amount, and I hope will influence the Indians to espouse our interests. . . .(280) By resolution of Congress of 25 July 1778, the planned combined national and state attack on Fort Detroit and other western British outposts was postponed. Instead, Congress adopted Governor Patrick Henry's suggested plan of attack on hostile Amerindian towns in the Ohio territory, especially several Shawnee towns along the Ohio River. Henry, after meeting with the Council of Safety, decided to deploy the frontier county militias of Washington, Montgomery, Botetourt, Augusta, Rockbridge, Rockingham, Greenbrier, Shenandoah, Frederick, Berkeley, Hampshire, Monongalia, Yohogania and Ohio. These counties, Henry argued, should supply all the men that General McIntosh could use, and all should be experienced Indian fighters.(281) Before McIntosh could move, Henry learned that, while the eastern seaboard action had come to a standstill, a mixed British force of regular troops, Amerindian allies and tories, was moving against the small forts in Kentucky. British Colonel Henry Hamilton had decided that this move might preclude an American move against Fort Detroit. Governor Henry ordered the colonel of the Washington County militia, Arthur Campbell, to choose 150 select frontier rangers from the counties noted above to move to the relief of the settlers in Kentucky.(282) Perhaps because the George Rogers Clark expedition pressed the British, the expected attack on Kentucky never materialized. Meanwhile, the British forces in the south were pressing hard in South Carolina. Governor Henry expressed great admiration for "the brilliant John Rutledge [who] was Governor of the State.
Clothed with dictatorial powers, he called out the reserve militia and threw himself into [the defense of] the City."(283) Henry decided to respond to Rutledge's call for aid by dispatching 1000 Virginia militia to the relief of the Carolinas. His primary problems were with the commissary, which could not round up enough "tents, kettles, blankets & Waggons" to supply this force.(284) The British captured Savannah in December 1778, crushing the 1000-man militia force under General Robert Howe (1732-1786). By the spring of 1779, the British had crushed General Benjamin Lincoln's force at Stono Ferry, and the southern campaign seemed to be going well for the enemy. Patrick Henry thought Lincoln had done well, although he lost 300 men while inflicting only 130 casualties on the enemy. The British successes in the south created another danger. The enemy immediately sent emissaries to the Cherokee and other potentially war-like tribes, promising them weapons and other aid should they join the British cause. Henry moved to break up the alliance before it became a real, effective coalition that could over-run the frontier. He had learned that the most war-like of all the southern tribes had gathered in an area from the mouth of the Chickamauga River south some fifty miles down the Tennessee River. Led by a chief named Dragging Canoe (or Dragon Canoe), whom we have met before, these were the outcasts from many tribes and villages. They had welcomed into their towns various tories, bandits, escaped criminals, murderers, cut-throats, fugitives from justice and escaped slaves, bringing their total number of armed men to perhaps 1000. Dragging Canoe and his band of loosely associated allies had refused overtures of peace from Virginia sent via Colonel Christian. On 13 March 1779, Henry informed Washington that he had drawn on the select militias of the same counties he had called into a state of readiness, to be commanded by Colonel Evan Shelby. Shelby had served as quartermaster for the Virginia militia, so he was able to command all the supplies and arms that had eluded the militia Henry had earlier planned to send to South Carolina. Henry reported to Washington that, "About 500 militia are ordered down the Tennessee River to chastise the settlements of the renegade Cherokees that infest our southwestern frontier and prevent our navigation on that river, from which we hope for great advantages." Soon after, North Carolina added 500 of its militia to Shelby's force. As it was, many of the North Carolina militia turned out to be displaced Virginians or men recruited into the North Carolina militia from Virginia.(285) Shelby's mission was an overwhelming success. His militia force, which actually consisted of 600 men, assembled near Rogersville, Tennessee, at the mouth of Big Creek. They enlisted the help of Colonel Montgomery's 150 men who had been on their way to aid George Rogers Clark. On 10 April 1779, the force began its journey by canoe, reaching Dragging Canoe's town by 13 April. Having captured an Indian, they forced him to guide them to the enemy's campsite. Shelby took the camp by surprise, killed 40 warriors, burned their supplies and captured British war materials valued at £20,000 sterling. The British dream of uniting the southern tribes with Colonel Hamilton's forces came to an abrupt end. In a single stroke, the power of the Chickamauga tribes was broken, and the Cherokees, seeing the power of Shelby's militia, soon withdrew from further negotiations with the English.
Henry's two major deployments of militia on the far frontiers, under Shelby in Tennessee, and under George Rogers Clark in the west, had saved the frontier and precluded the necessity of Washington's diverting regular troops from the eastern seaboard to fight on the frontier. Meanwhile, Virginia militia on the northwestern frontier came under pressure from British, tory and Amerindian troops. Ebenezer Zane (1747-1812) recruited his neighbors and formed a militia. His volunteers resisted attacks on Fort Henry at Wheeling, [West] Virginia, in 1777 and 1782.(286) No colony ever had sufficient regular forces to guard its seacoast from invasion. One primary responsibility of the militia remained standing coastal watch. In May 1779, as Shelby's army was mopping up in Tennessee, British troops landed in Portsmouth, disembarking from a reported 35 ships, including Raisonable, Rainbow and Otter. This expedition, which had sailed from New York on 5 May 1779, consisted of 2500 men under Major-general Edward Matthew, conveyed on ships commanded by Commodore Sir George Collier, acting on the home government's explicit orders to Sir Henry Clinton. This force was to destroy American ships, especially privateers, disrupt the economy and prevent supplies reaching the southern states during the campaign being waged from Savannah, Georgia. The hundred regulars stationed in Portsmouth offered little resistance. These troops, like others assigned to similar coastal watch duty, might have been better deployed in the field. Having occupied Portsmouth so easily, the British army followed up quickly, marching on Suffolk. There they captured 1200 barrels of pork and looted and burned the town. They also destroyed ordnance and gunpowder, tobacco and various naval materials of war. Governor Henry called out the militia, which assembled too late to save Suffolk, but with 2000 to 3000 militiamen under arms marching on Suffolk, the British withdrew. The British, before withdrawing completely, also burned and looted Portsmouth and Norfolk.(287) On the east coast of Virginia, the French came into contact with the Virginia militia for the first time in 1778. About this same time Virginia sent militia to assist South Carolina in its struggle against the British invasion. Whether prejudiced by Washington's views or on their own account, the French held a dim view of the Virginia militia. Of their value in the New Jersey campaign, Jean Baptiste Antoine de Verger, attached to the staff of General Rochambeau, thought them cowardly in battle unless they had a clear advantage in numbers and position. They preferred having a clear avenue of retreat even when they had the upper hand. A competent commander could inspire them to perform brave deeds, but only for a short while. As de Verger wrote, "the persuasive eloquence of their commander aroused in them an enthusiastic ardor of which immediate advantage must be taken or lost."(288) Jefferson did not share this skepticism of the militia. He was quite proud of his state's militia, and especially its prowess with the rifle. He wrote to the Marquis de Lafayette,(289) "the militia of Washington, Montgomery, Botetourt, Rockbridge, Augusta and Rockingham are our best Rifle counties."(290) Nonetheless, Jefferson was to hear more criticism of the citizen-soldiers in the months to come.
Baron von Steuben wrote Jefferson on 2 January 1780, that "in case of the calling out a Body of Militia it will be highly necessary to adopt some measures to prevent numerous abuses and terrible destruction of the Country."(291) In 1780 the militia was mustered in large numbers both to assist its sister colonies to the south to repel Cornwallis' invasion and to contain the Amerindian incursions along the frontier. The Virginia militia's contribution to the Whig victory at King's Mountain on the border of North Carolina and South Carolina was significant. On 18 August 1780 the notorious tory Banastre Tarleton had defeated an American force at Fishing Creek, South Carolina, opening the way for the invasion of North Carolina. Sorely in need of a victory, Colonels Isaac Shelby (1750-1826) and William Campbell (1745-1781) recruited a force of backwoodsmen, mostly expert riflemen from the Carolinas, Kentucky and Virginia, and on 7 October, trapped and decisively defeated Major Patrick Ferguson's force atop King's Mountain. Ferguson himself was killed and nearly his entire command was killed or captured. Ferguson had served as Corwallis' screening force on his left flank and this loss was a serious one, forcing the British commander to retreat and establish winter camp at Winnsborough. General Nathaniel Greene was more optimistic than General Washington about the effectiveness and use of the militia. Perhaps this was because he had little choice in the matter since virtually no trained troops were available to him. Virginians had been sent to serve in the Continental Line both north and south. General Mathew's Virginia regiment had been mauled at Germantown, Pennsylvania, and most survivors were taken captive. General Buford's Virginians had been massacred by Tarleton's tories. More Virginia soldiers were held captive as a result of General Benjamin Lincoln's surrender at Charleston, South Carolina. Still, Greene held out considerable hope of success with the men he had at his disposal. Writing to Jefferson on 20 November 1780 from Richmond, soon after his appointment as commander of the Southern Army, Greene complimented the militia's devotion to duty, provided only that they be used properly. It Affords me great Satisfaction to see the Enterprize and Spirit with which the Militia have turn'd out lately in all Quarters to Oppose the Enemy; and this Great Bulwark of Civil Liberty promises Security and Independence to this Country, if they are not depended upon as a principal but employed as an Auxiliary but if you depend upon them as a principal the very nature of the War must become so ruinous to the Country that tho numbers for a time may give security yet the difficulty of keeping this order of Men long in the field and the Accumulated expences attending it must soon put it out of your Power to make further Opposition and the Enemy will have only to delay their Operations for a few months to give Success to their measures. It must be the extreme of folly to hazard our liberties upon such a precarious tenure when we have it so much in our power to fix them upon a more solid basis.(292) Writing from Hillsborough, North Carolina, on 30 August 1780, Edward Stevens complained to Governor Thomas Jefferson of the behavior of the Virginia militia who had come to his aid. First, they were poorly armed because political authorities had not permitted them to "Carry a single Muskett out of the State" so they had to be rearmed from Philadelphia. 
That may not have been the fault of the men, but they had deserted in great numbers in the face of the enemy in the action against Lord Cornwallis' army near Camden on 16 August. Stevens thought that a large measure of the blame for the rout of General Gates' army was due to the cowardice of the militia.(293) Shortages of arms and other materials of war continued to plague the Virginia militia. On 21 October 1780 Thomas Nelson, writing to Governor Jefferson from Hall's Mills, lamented "the Enemy will undoubtedly secure all the passes, there be no possibility of preventing it with the Militia . . . who are not armed at all."(294) On 22 October 1780 Jefferson was forced to inform General Horatio Gates that he had mustered the militia south of the James River and the volunteers from these units were "in readiness" and would join him "as soon as Arms can be procured." Likewise, volunteers from other counties would follow within the next eight months if they could find arms wherewith to equip them.(295) As autumn approached, the governor received intelligence that tories from the Carolinas under Major Ferguson were planning to raid the Greenbrier Valley and wreak havoc in southwestern parts of the state. The lead mines in Wythe County supplied a significant part of the patriots' needs for bullets and thus provided an attractive target for marauders. There were many tories in southwestern Virginia who might become politically and militarily active given some encouragement, but the death of Ferguson at King's Mountain ended the preparations. Meanwhile, Washington had dispatched General Muhlenberg from Pennsylvania to assist in the defense of Portsmouth against a major British landing party. With the help of local militia, the Continentals defeated British General Leslie and liberated the town. Benedict Arnold, now a British officer, appeared with a superior force of regulars and drove Muhlenberg's militia from Richmond. As the militias from additional counties swelled Muhlenberg's army, Arnold fell back to Portsmouth, burning and looting all the way. Muhlenberg's militia stood in his path and Washington dispatched Lafayette with 1200 of the Continental Line to capture the traitor and defeat his army. The British landed Colonel Phillips and his regiment at Portsmouth. Phillips seized Petersburg, but died almost immediately of some fever and his men joined Arnold's command. Steuben's and Lafayette's timely arrival prevented a second capture of Richmond and Arnold beat a quick retreat to Portsmouth and the British fleet. George Washington at his headquarters near Passaic, on 18 October 1780, prepared a Circular which he sent to Jefferson and the other state governors, the Continental Congress and others. In obedience to the Orders of Congress, I have the honor to transmit Your Excellency the present state of the Troops of your line, by which you will perceive how few men you will have left after the first of January next. When I inform you also that the Troops of the other Lines will be in general as much reduced as Yours, you will be able to judge how exceedingly weak the Army will be at that period; and how essential it is the States should make vigorous exertions to replace the discharged men as early as possible. Congress's new plan for a military establishment will soon be sent to the states with requisitions for their respective quotas.
New levies should be for the war, as I am religiously persuaded that the duration of the war, and the greatest part of the Misfortunes, and perplexities we have hitherto experienced, are chiefly to be attributed to temporary inlistments. . . . A moderate, compact force, on a permanent establishment capable of acquiring the discipline essential to military operations, would have been able to make head against the Enemy, without comparison better than the throngs of Militia, which have been at certain periods not in the field, but on their way to, and from the field: for from that want of perseverance which characterises all Militia, and of that coercion which cannot be exercised upon them, it has always been found impracticable to detain the greatest part of them in service even for the term, for which they have been called out; and this has been commonly so short, that we have had a great proportion of the time, two sets of men to feed and pay, one coming to the Army, and the other going from it. Instances were cited of the disasters and near-disasters caused by the constant fluctuations in the number of troops in the field. Besides, It is impossible the people can endure the excessive burthen of bounties for annual Drafts and Substitutes, increasing at every new experiment: whatever it might cost them once for all to procure men for the War, would be a cheap bargain. Not without reason, the enemy themselves look forward to our eventually sinking under a system, which increases our expence beyond calculation, enfeebles all our measures, . . . and wearies and disgusts the people. This had doubtless had great influence in preventing their coming to terms. Through infatuation with an error which the experience of all mankind has exploded, and which our own experience has dearly taught us to reject . . . America has been almost amused out of her Liberties. Those who favor militia forces are those whose credulity swallows every vague story, in support of a vague hypothesis. I solemnly declare I never was witness to a single instance, that can countenance an opinion of Militia or raw Troops being fit for the real business of fighting, I have found them useful as light Parties to skirmish in the woods, but incapable of making or sustaining a serious attack . . . . The late battle of Camden is a melancholy comment upon this doctrine. The Militia fled at the first fire, and left the Continental Troops surrounded on every side, and over-powered by numbers to combat for safety instead of victory. The Enemy themselves have witnessed to their Valour. Let the states, then, in providing new levies abandon temporary expedients, and substitute something durable, systematic, and substantial. . . . The present crisis of our affairs appears to me so serious as to call upon me as a good Citizen, to offer my sentiments freely for the safety of the Republic. I hope the motive will excuse the liberty I have taken. Washington added a postscript to Jefferson because the Virginia militia was in large part responsible, in Washington's opinion as it had been in Stevens', for the disaster at the Battle of Camden. "The foregoing is circular to the several States. The circumstances of Your Line put it out of my power to transmit a Return."(296) In December 1780 Edmund Pendleton reported to James Madison that he was having great problems raising some of the militia units for duty.
The Caroline County militia, in particular, became war weary very quickly after they had been mustered to resist a British invasion at Portsmouth in October. Pendleton told Madison that many "will rather die than stir again." The militia had been placed under the command of Major Charles McGill, aide-de-camp to General Horatio Gates, and a brutal disciplinarian. The men had become "very sickly and many died below, on their way back" because McGill had marched them through avoidable water hazards, had not allowed them to dry out their clothes afterward, failed to feed and rest them properly and committed all sorts of other atrocities. Many had died of "laxes and Pleurisies."(297) Sixteen hundred Virginia militia did march to General Greene's assistance. Daniel Morgan led these men to victory at Cowpens on 17 January 1781. There he was assisted by Colonel William Washington (1752-1810) and his mounted militia in one of the rare engagements involving these forces. Morgan's especially skillful disposition of his one thousand militiamen at that battle carried the day against the hated tory, Colonel Banastre Tarleton, inflicting 329 casualties on the enemy and capturing about 600 of his force. All available militiamen were marched to augment American forces at the Battle of Guilford Court House on 15 March. While this battle left Cornwallis in command of the field, his losses in men and material were so great as to seriously impede his future actions. Roving bands of militia under Francis Marion (1732-1795), Thomas Sumter (1734-1832) and Henry Lee (1756-1818) proved to be effective in delaying and diverting Cornwallis' planned march. Virginia militia assisted in these actions and in capturing a number of small, rural British outposts. So effective were these forces that Cornwallis did not arrive in Virginia until mid-June, by which time the small forces of Steuben and Lafayette had been reinforced by Anthony Wayne. The British authorities had become convinced that the lower southern colonies could not be pacified as long as Virginia remained a training ground for patriot warriors. So Charles Cornwallis led his 1500 men into Virginia, starting out from Wilmington, North Carolina, on 25 April 1781. The North Carolina and Virginia frontier militias remained important factors by harassing the British supply and communication lines. By the time Lord Cornwallis reached Petersburg, Virginia, he had added 4000 men to his depleted command of 2000 men capable of performing their duty. General William Phillips and turncoat Benedict Arnold added their troops, bringing his total strength to about 7500 men. The British troops not only outnumbered the Continental Line under von Steuben and Lafayette, but they were better trained, disciplined and equipped than their provincial brethren. The British army pursued Lafayette's inferior force to the Rapidan River which the Americans crossed at Ely's Ford. Cornwallis sent a raiding party under Colonel Simcoe to harass the Whigs, and it succeeded in destroying American gunpowder and other supplies at the mouth of the Rivanna River. Another raiding party under Tarleton proved to be such a formidable force that on 4 June it almost captured the state legislature and Governor Thomas Jefferson. As it turned south toward Staunton, it was repulsed by local militia who turned out to control the mountain passes.
Lafayette recrossed the Rapidan River at Raccoon Ford and secured a strong position behind Meechums River where he was soon joined by General Wayne's army and other forces from the north. Cornwallis, under orders from Clinton, then turned toward the sea, leading to his eventual entrapment, defeat and, on 19 October 1781, the capitulation of his army. The militia played little, if any, role in the final reduction of Cornwallis' army. With Cornwallis' surrender, British plans for reestablishing colonial rule over America ended.(298) By war's end, Virginia had furnished more troops and militia to the patriot cause than any other colony, save Massachusetts. This was not surprising to General Washington, who, in a letter to Governor Henry, written early in the Revolution, had commended the martial spirit of the men of his home state. "I am satisfied that the military spirit runs so high in your colony, and the number of applicants will be so considerable, that a very proper selection may be made."(299) In 1776, in response to the call from Congress, Virginia furnished 6181 men; in 1777, Congress assigned a quota of 10,200, of which 5744 enrolled in the continental line and the state retained 5269 militia. In 1778 Congress assigned a quota of 7830, which Virginia filled as follows: 5230 continentals; 600 guards for prisoners at Saratoga; and 2000 state militia. In 1779 the state had a quota of 5742, of which 3973 were continentals; 600 served as guards for enemy prisoners; and 4000 served in the militia.(300) In order to secure the blessings of liberty which only a well-regulated militia could provide, the Virginia Constitution provided, That a well-regulated militia, composed of the body of the people, trained to arms, is the proper, natural and safe defense of a Free State; that standing armies, in time of peace, should be avoided, as dangerous to liberty; and that in all cases the military should be under strict subordination to, and governed by, the civil power.(301) Richard Henry Lee lauded the state for its passage of a new militia law in 1785. His comments are noteworthy for his statement on the proper place for the militia in a state. I am told our Assembly have passed a new Militia Law, of a mor [torn] nature than former - I have not seen it, but am of Opinion that [torn] the meetings for exercise are made more frequent, it will pro [duce] mischief rather than good, as I never discovered other fruits from those meetings, than calling the Industrious from their Labour to their great disgust and the Injury of the community, and affording the idle an opportunity of dissipation. I rather think that in time of peace, to keep them enrolled and oblige them to meet once a year to shew their Arms and Ammunition - to provide Magazines of those, and in case of a War to throw the Militia into an Arrangement like our minute Plan, for defence until a regular Army can be raised, is the most Eligible System, leaving the people at liberty to pursue their labour in peace, and acquire wealth, of great service in War.(302) The Virginia convention, called to consider the ratification of the proposed U. S. Constitution, considered the role of the militia in the new republic. Rather naturally, the debate quickly focused on the role the militia had performed, and how it had, or had not, fulfilled its obligations in the War for Independence. On 9 June 1788 Henry Lee rose to offer his opinion on the subject in response to comments made by Edmund Randolph.
Here, sir, I conceive that implication might operate against himself. He tells us that he is a staunch republican, and adores liberty. I believe him, and when I do I wonder that he should say that a kingly government is superior to that system which we admire. He tells you that it cherishes a standing army, and that militia alone ought to be depended upon for the defence of every free country. There is not a gentleman in this house -- there is no man without these walls -- not even the gentleman himself, who admires the militia more than I do. Without vanity I may say that I have had different experience of their service from that of the honorable gentleman. It was my fortune to be a soldier of my country. In the discharge of my duty I knew the worth of militia. I have seen them perform feats that would do honor to the first veterans, and submitting to what would daunt German soldiers. I saw what the honorable gentleman did not see: our men fighting with the troops of that king which he so much admires. I have seen proofs of the wisdom of that paper on your table. I have seen incontrovertible evidence that militia cannot always be relied on. I could enumerate many instances, but one will suffice. Let the gentleman recollect the action of Guilford. The American troops behaved there with gallant intrepidity. What did the militia do? The greatest numbers of them fled. The abandonment of the regulars occasioned the loss of the field. Had the line been supported that day, Cornwallis, instead of surrendering at York, would have laid down his arms at Guilford.(303) In replying to the argument of Patrick Henry, that the states would be left without arms, Lee said he could not understand the implication that, because Congress may arm the militia, the States could not do it. The States are, by no part of the plan before you, precluded from arming and disciplining the militia should Congress neglect it. He rebuked Henry for his seemingly exclusive attachment to Virginia, and uttered the following sentiment: In the course of Saturday, and in previous harangues, from the terms in which some of the Northern States were spoken of, one would have thought that the love of an American was in some degree criminal, as being incompatible with a proper degree of affection for a Virginian. The people of America, sir, are one people. I love the people of the North, not because they have adopted the Constitution, but because I fought with them as my countrymen, and because I consider them as such. Does it follow from hence that I have forgotten my attachment to my native State? In all local matters I shall be a Virginian. In those of a general nature I shall never forget that I am an American.(304) The Reverend Mr. Clay, priest in the established church, on 13 June, led the objections to granting power to the national government to call out the state militias under the Militia Clause. James Madison responded, using much the same argument he developed in his contributions to the Federalist Papers. Madison was followed by Mason, who denounced it as not sufficiently guarded, in an able harangue, which called forth an elaborate reply from Madison. Clay was not satisfied with the explanations of Madison. "Our militia," he said, "might be dragged from their homes and marched to the Mississippi." He feared that the execution of the laws by other than the civil authority would lead ultimately to the establishment of a purely military system.
Madison rejoined, and was followed by Henry, who exhorted the opponents of the new scheme to make a firm stand. "We have parted," he said, "with the purse, and now we are required to part with the sword." Henry spoke for an hour, and was followed by Nicholas and Madison in long and impassioned, but reasoned, speeches. Henry replied, and was followed by Madison and Randolph. George Mason rejoined at length, and was followed by Lee, who threw with great oratorical skill several pointed remarks at Henry. Clay rose, evidently motivated by great passion. He said that, since Randolph had insinuated that he was not under the influence of common sense in making his objection to the clause in debate, his error might result from his deficiency in that respect; but that gentleman was as much deficient in common decency as he was in common sense. He proceeded to state the grounds of his objection, and showed that in his estimation the remarks of the gentleman were far from satisfactory. Madison rejoined Clay, and passing to the arguments of Henry, spoke with great vigor, refuting them. Clay asked Madison to point out an instance in which opposition to the laws of the land did not come within the idea of an insurrection. Madison replied that a riot did not come within the legal definition of an insurrection. After a long and animated session the House adjourned.(305) The debate then turned in other directions and Virginia eventually ratified the new frame of government without demanding that changes be made to the militia system therein constructed. The Virginia revolutionary militia had one more duty to perform. On 17 July 1794, President George Washington mustered the Virginia Militia, calling it into federal service to suppress the Whiskey Rebels in western Virginia and Pennsylvania. The President of the United States, having required a second detachment of Militia from this Commonwealth, amounting to 3000 infantry and 300 cavalry, inclusive of commissioned officers, to be prepared for immediate service, the commander in chief accordingly directs the same to be forthwith appointed.(306) The Virginia militia was of great importance in the seventeenth century, so much so that one might well conclude that without it, the Amerindians might easily have destroyed the colony at almost any early stage of its development. As either the most populous or second most populous colony, everything that happened in Virginia was of consequence to the other colonies. Since its nearest rival, Massachusetts, was severely circumscribed in territory for growth, Virginia would continue to be a colonial leader. It was a truly southern colony which nonetheless had some very cosmopolitan characteristics. It boasted no cities to compete with New York, Philadelphia or Boston, yet it established the first major gun manufactory in the nation and produced many of its outstanding military and political leaders and political philosophers. As we have seen, the Virginia militia fell on hard times largely because it was heavily populated by poorer farmers and tradesmen spread out over vast areas. Once the frontier advanced to and beyond the Shenandoah Valley, the inhospitable terrain and sparse population made it very difficult for a militia to assemble or function. Still, it performed very well when pressed by the French and their Amerindian allies. It was the mainstay of Braddock's auxiliaries and may have saved what could be salvaged from his ill-fated expedition.
It also protected much of the frontier after the remnants of Braddock's army fled to the protection of the eastern seaboard. And it helped contain Pontiac's conspiracy. In the American War for Independence, it successfully kept the native aborigines and Tories at bay, and bore the main share of defense of the colony. That the British army did not choose to operate much in the state may be credited in large part to the Virginia militia. And it performed well as a reservoir to fill the Continental Line. In 1629 Sir Robert Heath was granted a patent to settle the area between 31 and 36 degrees north under the name of New Carolina. The following year Heath conveyed this land to Samuel Vassal and others who explored it and made an ineffectual attempt to settle the area. By 1632 Henry Lord Maltravers claimed the area as the Province of Carolana under an alleged grant from Heath and by the Harvey Patent issued by the governor of Virginia, John Harvey. The Harvey Patent established Maltravers' claim to the area south of the James River known as Norfolk County. No effective settlement was established. The Albemarle settlement was the first permanent caucasian settlement and was created about 1653 by Virginians moving through the Nansemond Valley and Dismal Swamp into the area of Albemarle Sound and Chowan River. Shortly after, a group of London merchants and disaffected New Englanders established a settlement at Cape Fear, but it was abandoned about 1663. In 1644, the Proprietors of the Carolinas ordered the Governor of North Carolina to "constitute Trayne bands and Companys with the Number of Soldiers [necessary] for the Safety, Strength and defence of the Counteys and Province." The Proprietors agreed to "fortifie and furnish . . . ordnance, powder, shott, armour, and all other weapons and Habillaments of war, both offensively and defensively."(307) Every newly arrived "freeman and freewoman . . . shall arrive in ye said countrie armed." The "master or Mistress of every able-bodied servant he or she hath brought or sent . . . each of them [is to be] armed with a good firelocke or Matchlocke."(308) To whom these orders applied is unclear, based upon the settlement timetable noted above. Perhaps the orders were only theoretical, promulgated in case an actual settlement was established. Sir John Colleton, a wealthy planter from Barbados, and William Berkeley, former governor of Virginia, in conjunction with a colonial promoter, Anthony Ashley Cooper(309), approached Charles II about developing the Carolina colony. On 3 April 1665, the king granted a new charter to eight proprietors, including the promoters named above, the Earl of Clarendon, the Duke of Albemarle, William Berkeley's brother John Lord Berkeley, the Earl of Craven and Sir George Carteret. Maltravers' heirs, the Duke of Norfolk and Samuel Vassal all filed counter-claims on 10 June, and on 6 August the Cape Fear Company added its name to those contesting title. On 22 August 1665 the Privy Council confirmed Charles II's more recent grant and declared all previous grants to be null and void. In October 1664 William Berkeley, authorized to name the first governor of the Province of Albemarle (as North Carolina was then called), nominated William Drummond. Berkeley headed a board to draw up the Concessions and Agreements of 1665, granting basic rights, including liberty of conscience and creating a representative assembly of freeholders.
The Charter of Carolina of 1663 required that the proprietors build whatever fortifications were necessary for the protection of the settlers and to furnish them with "ordnance, powder, shot, armory and other weapons, ammunition [and] habilements of war, both offensive and defensive, as shall be thought fit and convenient for the safety and welfare of said province." The proprietors were to create a militia and appoint civil and military officers. Because "in so remote a country and scituate among so many barbarous nations," to say nothing of pirates and Amerindians, the crown ordered the proprietors "to levy, muster and train all sorts of men . . . to make war and pursue the enemies."(310) The eight Lord Proprietors of Carolina in 1663 ordered that the governor "levy, muster and train all sorts of men" as a militia.(311) The second charter, issued just two years later, contained many of the same instructions.(312) The proprietors ordered that a militia be formed and allowed it to march out from the colony to assist other colonies in times of crisis. They ordered that there be a constable's court, consisting of one proprietor and six others, who assumed command of the militia. This body was to provide arms, ammunition and supplies and to build and garrison forts. Twelve assistants were to be lieutenant-generals of the militia. In war the constable was to act as field commander.(313) In 1667 the governor ordered the officers of the counties to train the colonists in the art of war.(314) The Fundamental Constitutions of North Carolina of 1669 required "all [male] inhabitants and freemen" between the ages of 17 and 60 to bear arms in service to the colony.(315) The governor was to "levy, muster and train up all sorts of men of what conditions soever." The language was much the same as the two earlier Carolina charters including a provision that the militia might be deployed "without the limits of the said province." With a properly ordered militia the colony might "take and vanquish" its enemies, even "to put them to death by law of war, and to save them at their pleasure."(316) Tradition has ascribed the Fundamental Constitutions to John Locke, the noted and influential political theorist, in collaboration with Anthony Ashley Cooper. It was an unusual blending of aristocratic conservatism with the liberalism of the Enlightenment. While permitting freedom of conscience and religion and creating a citizens' militia, it also established a hierarchical aristocratic rule with classes based on land ownership. For example, a lord of a manor must own no less than 3000 acres whereas landgraves owned no less than 12,000 acres, and freeholders were recognized only if they owned a minimum of 50 acres. The eight proprietors constituted the Palatine Court which had the power of disallowance of laws and appointment of governors. The provincial council seated all landed hereditary nobility and popularly elected members who must own at least 300 acres. In 1675 the total population of North Carolina was less than 5000; and it had increased to less than 6000 by 1700.(317) It was not only inconvenient and impractical to muster and train the militia in the first century, but even dangerous.(318) Thus, the militia could hardly have been a formidable force in the seventeenth century. By 1680 Moravian and other Calvinist religious dissenters had begun to move into the Carolinas. 
They were as opposed to military service as their Quaker brethren in Pennsylvania, and in 1681, decided they had sufficient strength and support to oppose reenactment of the North Carolina militia law. As a period history of the colony said, they "chose members [of the legislature] to oppose whatsoever the Governor requested, insomuch as they would not settle the Militia Act" even though "their own security in a natural way depended upon it."(319) Another contemporary history confirmed that the dissenters were "now so strong among the common people that they chose members to oppose . . . whatsoever the Governor proposed [especially] the Militia Law."(320) Many non-dissenters simply opposed the militia law because they did not wish to serve in the militia, or because they were naturally opposed to the governor, government and British rule. The result, of course, was to emasculate the militia and destroy most of the colony's ability to defend itself. By 1693 the legislature had become bicameral. The larger baronies initially recognized never were established and in fact no seignory of more than 12,000 acres was ever created. Eventually, the governor's council became the Grand Council. The proprietors revised the Fundamental Constitutions on 12 January 1682, but the revisions were voided the next year; they were revised again in 1698, but never accepted by the assembly. Although religious toleration had included non-establishment, the Church of England became the established church in 1670. No changes were made in the fundamental militia system, probably because the proprietors had no interest in bearing (or raising by taxation) the cost of a standing army. On 2 October 1701 Governor Nicholson of North Carolina reported to the Lords of Trade in London that the citizens under his charge "do not put themselves in a state of defence by having any regular Militia, arms or ammunition."(321) That neglect cost the colony dearly during the Tuscarora Indian War of 1711-12. The Tuscarora surprised the colonists in large part because they were able to create a war confederation with four neighboring tribes with whom they had never before cooperated, and they kept the negotiations completely secret from the whites. The unsuspecting colonists had not prepared and after the first attack Governor Edward Hyde could find only 160 militiamen ready to muster. The best that he could do was to order these men to herd the surviving settlers into fortified positions and protect them while begging for help from South Carolina.(322) Militia training paid occasional dividends. When installed as governor, Alexander Spotswood committed to a strong militia. In 1711 when the Tuscarora were menacing the frontier, Governor Spotswood decided to impress on them the power of the colonists using his best trained militia from three counties. As he reported to Lord Dartmouth, "I brought into discipline the body of Militia . . . upwards of 1600 men. So great an appearance of armed Men in such good order very much surprised them" and aided in avoiding a great Indian war.(323) Had Spotswood not paraded the militia before the Tuscarora, the damage might have been more intense. Indian trouble in the South was not ended. Irritated by dishonest and self-seeking traders and the establishment of a new colony of Swiss at New Bern, the Tuscaroras had risen against the northern Carolinians in September, 1711, and killed about 130 inhabitants.
On 22 September an Amerindian force estimated at 1200 Tuscarora warriors, with some additional support from other tribes, massacred settlers along the Chowan and Roanoke Rivers. Only the timely arrival of militia forces from South Carolina saved the colony from annihilation. A wealthy Irish planter, Colonel John Barnwell, with thirty-three militia and five hundred Yamasees and Catawbas, struck back at the Tuscaroras and defeated them. In response, the Tuscarora in January 1712 fortified their village very effectively. The stockade had a trench, portholes, a rough abatis and four round bastions. Barnwell learned that a runaway slave had taught the Tuscarora the art of fortification.(324) They kept a sullen peace for two years and then were fighting again. Colonel James Moore, Jr., of South Carolina moved against them in March 1713, with about 100 militia and eight hundred Cherokees, Catawbas and Creeks. He killed 800 warriors, while suffering only 58 killed and 84 wounded. This action so overwhelmed the Tuscaroras that they began moving up to New York in waves, seeking protection among their ancient brethren. The Oneidas adopted and domiciled them, but the Iroquois never quite granted them equal status. As members of the Iroquois Confederation they were never to become significant actors in the drama on the New York-Canada frontier. On 20 September 1712 Lord Carteret reported to the Lord Proprietors in London that "we obtained a law that every person between sixteen and sixty years of age able to carry armes" is to be enlisted in the militia.(325) With the assistance of the South Carolina militia on 28 January 1712 the colonial forces defeated the Tuscarora and killed about 300 of the Amerindians along the banks of the Neuse River. So destitute of muskets was North Carolina that it was forced in 1712 to borrow some from the South Carolina militia.(326) The legislature reported to Lord Carteret in London that as a result of that embarrassment, "we obtained a law that every person between 16 and 60 years of age able to carry arms" be armed at his own expense.(327) Hyde demanded that the militia be upgraded and better organized. The legislature considered Hyde's requests and then debated the militia and discussed its importance. On 15 October 1712 Alexander Spotswood reported to the Lords of Trade that the colonial legislature had agreed to maintain the militia, and that a militia was indispensable because it served three vitally important functions: first, it was the first line of defense against the Amerindians; second, acting as posse comitatus, it protected the colony against the ravages and outrages of pirates and smugglers; and, third, it guarded against slave insurrections.(328)(329) By 1713 the war was over and the once proud Tuscarora left their southern home forever, went north and joined the Iroquois Confederation, becoming the sixth nation in that political entity, thereafter known as the Six Nations. The lesson of the Tuscarora War was clear enough. A better armed and regulated militia was imperative to secure the peace. In 1715 the legislature enacted the militia law that remained in effect for the duration of the colonial period.
The governor was the principal officer of the militia and he was authorized to appoint other officers to order, drill, discipline and inspect the militia. All freemen between 16 and 60 years of age were enlisted and enrolled. Any captain who failed to maintain his militia list was subject to a fine of £5. Each citizen-soldier had to supply at his own expense a "good gun, well fixed," a sword, powder, bullets and accoutrements.(330) The act provided exemptions for the physically disabled, Church of England clergy and a host of local and colonial public officials. However, all men had to provide arms and ammunition and the exemption was voided in times of grave emergency.(331) Militiamen who were killed or wounded while doing militia duty were to be cared for at the public expense. A permanently disabled man, and the family of a dead militiaman, received a "Negro man-slave" as compensation to assist in various household and farming duties.(332) Within fifteen years the militia law was forgotten. The colony was at peace and no one cared much about enforcing an unnecessary, burdensome and unpopular law. [W]e learn from Experience that in a free Country it [the militia] is of little use. The people in the Plantations are so few in proportion to the lands they possess, that servants being scarce, and slaves so excessively dear, the men are generally under a necessity there to work hard themselves . . . so that they cannot spare a day's time without great loss to their interest. . . . [A] militia there would become . . . burthensome to the poor people . . . . Besides, it may be questioned how far it would consist with good Policy to accustom all the able Men in the Colonies to be well exercised in Arms.(333) The situation had changed little over the next decade. Governor George Burrington (served 1731-1734) had little use or respect for the militia and did nothing to train, equip or muster it. However, when Gabriel Johnson was appointed governor in 1734 he reassessed the militia and in 1735 introduced legislation to "put the militia on better footing."(334) In 1740 a new and only slightly modified version of the 1715 act was passed. A new piece of legislation, the Militia Act of 1746, placed servants as well as freemen on the militia rolls. Millers and ferrymen were added to the exemption list. There were to be at least two types of militia musters. The regiment was to muster annually and the companies were to muster four times a year. One drew militiamen to their local companies, while the second muster was general. The law allowed the militia to act in concert with the militias of Virginia and South Carolina, but no other province. The new law also made provision for mounted troops. The act also allowed the governor to call out the militia to march to the assistance of either Virginia or South Carolina, provided that these colonies should bear the costs of such assistance. Many militiamen resisted this provision, arguing that it was, or at least should be, unlawful to deploy the militia outside the colony.(335) In 1749 the militia law was again changed since the 1746 act was given only a three year life. Company musters were reduced in number to two per year. 
The death penalty was no longer permitted in court martial cases.(336) In 1754 the French and Indian War began and the colony returned to more frequent militia company musters.(337) On 17 July 1754 Governor Sharpe approved a loan of some militia muskets to the province of North Carolina.(338) The legislature established greater control over the militia budget and demanded a greater role in the appointment of militia officers. Militia lists from the French and Indian War show that several blacks and mulattoes were members of militia companies. Since only race and not status was noted it may be assumed that these were freemen and that slaves were not armed.(339) Meanwhile, the political situation had not improved. Unanswered raids by the Amerindians, adjunct to the French and Indian War, proved that the colony's militia was unprepared. On 15 March 1756 Arthur Dobbs, then Governor of North Carolina, reported that the militia law had failed in the colony in his charge. He reported that "not half of the Militia are armed as no supply of Arms can be got although they would willingly purchase them . . . ."(340) During the French and Indian War the North Carolina militia became a reservoir on which the British command drew for enlistments for the Canadian campaign. In November 1756 Loudoun reported to Cumberland that "I had great hopes of the North Carolina Regiments . . . [but] the Carolina Troops would not Submit to be turned over [to English command] without force; which I thought better avoided . . . [recently] I have got a good many of them enlisted in [the Royal] Americans."(341) In 1756 the British assigned a quota of 1000 men to be raised in North Carolina as part of a 30,000-man force the English hoped to raise in the colonies to join with the British troops in an invasion of Canada.(342) In 1759 the war with the Cherokee Indian nation spilled over into the colony. The militia, sensing danger at home, refused to march outside the colony's borders, arguing that the North Carolina militia was suitable only for home defense. Governor Arthur Dobbs reported that 420 of 500 militiamen sent against the Cherokee had deserted. Many militiamen and officers interpreted the law as being permissive, but not compelling. They chose not to leave the province.(343) The Militia Act of 1759(344) increased fines for desertion and insubordination and allowed the Governor, with the consent of the legislature, to send the militia to the aid of South Carolina and Virginia to fight against the Cherokees. In 1760 the legislature passed an act which provided that it would pay bounties on Indian scalps in order to encourage enlistment and participation in the militia.
And for the greater encouragement of persons as shall enlist voluntarily to serve in the said companies, and other inhabitants of this province who shall undertake any expedition against the Cherokees and other Indians in alliance with the French, be it enacted by the authority aforesaid, that each of the said Indians who shall be taken a captive, during the present war, by any person as aforesaid, shall and is hereby declared to be a slave, and the absolute right and property of who shall be the captor of such Indian, and shall and may be possessed, pass, go and remain to such captor, his executors, administrators and assigns, as a chattel personal; and if any person or persons, inhabitant or inhabitants of this province, not in actual pay, shall kill an enemy Indian or Indians, he or they shall have and receive ten pounds for each and every Indian he or they shall so kill; and any person or persons who shall be in the actual pay of this province shall have and receive five pounds for every enemy Indian or Indians he or they shall so kill, to be paid out of the treasury, any law, usage or custom to the contrary notwithstanding. Provided, always, that any person claiming the said reward, before he be allowed or paid the same, shall produce to the Assembly the scalp of every Indian so killed, and make oath or otherwise prove that he was the person who killed, or was present at the killing the Indian whose scalp shall be so produced, and that he hath not before had or received any allowance from the public for the same; and as a further encouragement, shall also have and keep to his or their own use or uses all plunder taken out of the possession of any enemy Indian or Indians, or within twenty miles of any of the Cherokee towns, or any Indian town at war with any of his majesty's subjects.(345) The experience of the province in the French and Indian War prompted yet another series of changes in the provincial militia law. This time most of the benefits were given to the citizenry. The legislature sought to entice, rather than to force, compliance with the law. No militiaman could be arrested on his way to muster. Militiamen paid no tolls on bridges, highways or ferries while on their way to muster. The number of musters was reduced from five to four annually, and later to one annual muster. Officers in the various units had to come from the same county as the enlisted militiamen.(346) The legislature, seeking novel ways to assist the militia and increase its enthusiasm if not efficiency, passed a law placing a bounty on Indian scalps and allowing for the enslavement of hostile Amerindians.(347) By 1762 the exemption list had grown. Presbyterian and Anglican clergy were wholly exempted from any service, although they might choose to serve as chaplains. By 1774 the law covered all Protestant ministers. Overseers of slaves were indispensable to the maintenance of order on the plantations and indeed were fined if they did attend militia muster. Schoolmasters who had ten or more students were charged to remain in their classrooms except in dire emergencies. Pilots and road supervisors and overseers were also exempted. By the time of the Revolution probably half of the able-bodied freemen in North Carolina were exempted.(348) On 3 November 1766 the provincial legislature passed a new militia act.
All freemen and servants between sixteen and sixty years of age were obligated to serve, with no exemptions noted, but no mention of any kind was made of slaves.(349) In 1768, with the French menace permanently removed, the policy of arming blacks was clarified, and slaves were specifically excluded. Overseers and/or owners of slaves were subject to fines if they allowed slaves to appear at militia musters.(350) North Carolina passed legislation that was designed to prevent slaves from using guns even to hunt unless accompanied by a caucasian.(351) North Carolina approached the American Revolution under this basic militia law. The only significant change was in the creation of ranging units. These select units were authorized to "range and reconnoiter the frontiers of this Province as volunteers" at no cost to the public. The rangers provided an outlet for militiamen who lived too far distant from urban areas to be able to muster with standard militia units. Most rangers were experienced woodsmen and Indian fighters. They were delighted with their orders to kill any Amerindian they encountered since most had experienced, or at least seen, some Indian atrocity committed against the settlers.(352) During the various Indian wars, ranging units frequently made a substantial profit in Indian scalps at rates as high as £30 per scalp. The rangers were authorized to take the scalps of any "enemy Indian," and it is obvious that a public official could not determine, in paying the bounty, which scalps were of hostiles and which were of friendly or allied Indians.(353) At the same time the legislature moved to relieve some burdens from the poor. Initially a fine was imposed on any militiaman who failed to provide appropriate weapons and accoutrements. A company's officers could now certify that a man was too poor to provide his own equipment and a gun would then be provided through a company's militia fines.(354) In 1771 the militia was tested against insurrectionary forces. The Regulators resisted British authority. By 1768 the Regulators had formed a militia under the leadership of Herman Husbands (1724-1795). They protested a failure of the legislature to grant full representation to the Piedmont, charging the more eastern section with "extortion and oppression." The Johnston Bill ("Bloody Act"), passed on 15 January 1771, was specifically designed to repress the Regulators. On 16 May 1771 some 1200 militia under Governor William Tryon (1729-1788) defeated them at the Battle of Alamance near Hillsboro. Although there were about 2000 Regulators present, many had no arms. Husbands fled, James Few was executed on the spot on 17 May, 12 others were condemned to death and six men were actually executed. Tryon forced some 6500 inhabitants of the Piedmont to sign an oath of loyalty to the crown.(355) Silas Deane, writing to James Hogg, on 2 November 1775, observed, "Precarious must be the possession of the finest country in the world if the inhabitants have not the means and skill of defending it. A Militia regulation must, therefore, in all prudent policy, be one of the first" preparations made by the colonists in North Carolina.(356) The North Carolina Constitution of 1776 provided "That the people have a right to bear arms for the defence of the State . . . ." It also denounced the practice of maintaining armies in time of peace and of allowing the military to subordinate the civil authority.
The provisional government enacted a temporary militia law, which was followed by a permanent law enacted by the state legislature.(357) Until 1868 each North Carolina county was divided into one or more militia districts, with each unit being commanded by a captain, who was usually a county official, such as deputy sheriff or justice of the peace. They were required to enroll all able-bodied males between 18 and 60, with attendance at quarterly musters being mandatory. Free blacks were also required to attend militia musters, although they were rarely accorded the right to keep and bear arms.(358) The Committee of Safety ordered that the local authorities confiscate the arms belonging to the tories and issue these to militia or members of the army.(359) The militia officers who were willing to swear allegiance to the new nation were retained in rank.(360) In April 1776 the North Carolina Provincial Congress set standards for muskets to be made for militia use. The Congress wished to purchase good and sufficient Muskets and Bayonets of the following description, to wit: Each Firelock to be made of three-fourths of an inch bore, and of good substance at the breech, the barrel to be three feet, eight inches in length, a good lock, the bayonet to be eighteen inches in the blade, with a steel ramrod, the upper end of the upper loop to be trumpet mouthed; and for that purpose they collect from the different parts of their respective districts all Gunsmiths, and other mechanicks, who have been accustomed to make, or assist in making Muskets. . . .(361) The Congress also resolved on 17 April that, No Recruiting Officer shall be allowed to inlist into the service any Servant whatsoever; except Apprentices bound under the laws of this Colony; nor any such Apprentices, unless the Consent of his Master be first had in writing; neither any man unless he be five feet four inches high, healthy, strong made and well-limbed, not deaf or subject to fits, or ulcers on their legs.(362) The legislature created an arms manufactory at Halifax known as the North Carolina Gun Works, under the superintendency of James Ransom. On 24 April 1776 the legislature ordered Ransom, Joseph John Williams and Christopher Dudley to bring all of the state's energies to bear in the manufacture of muskets in conformity with the direction of Congress and state law, that is, to be made with 44-inch barrels and 18-inch bayonets. They were to recruit "gunsmiths and other mechanicks who have been accustomed to make, or assist in making, muskets." An unknown, but presumably small, number of arms was produced at the manufactory before the legislature closed it in early 1778. North Carolina found, as did its sister colonies, that it was cheaper and more expeditious to contract with gunsmiths for arms that the state needed than to run its own manufactory. When the manufactory closed, and its tools and machinery were ordered sold at public vendue, there were 36 muskets nearing completion. These were issued to the Halifax militia.(363) Between 3 and 27 February 1776, in a campaign that ranged from Fayetteville to New Bern, the North Carolina militia of about 1000 men engaged English and Tory forces of 1500 to 3000 men.
The militia carried the field on the 27th at Moore's Creek Bridge near Wilmington, defeating the enemy and capturing military equipment, including medicine and surgeon's tools, sufficient to equip the militia for months to come.(364) After the completion of the campaign the militia swelled to 6000 men, and by year's end there were 9400 men enlisted in the North Carolina militia.(365) This victory caused General Henry Clinton to abandon his planned incursion into the Carolinas with a combined force of his own regulars supplemented with local Tories.(366) The spoils of war were nearly as valuable to the arms-hungry patriots as the victory itself. 1500 Rifles, all of them excellent pieces, 350 guns and shotbags, 150 swords and dirks, two medicine chests immediately from England, one valued at £300 sterling, 13 sets of wagons with complete sets of horses, a box of Johannes and English guineas, amounting to £15,000 sterling, and [the arms and accoutrements of] 850 common soldiers, were among the trophies of the field.(367) On 19 March 1778 North Carolina created a new constitution which made the governor the commander of all military forces. The legislature appointed officers above the rank of captain. The military power was subordinated to the state.(368) After Charleston, South Carolina, fell to British forces on 12 May 1780 Charles Cornwallis (1738-1805)(369) decided to move his force across the Carolinas, retaining the city, occupied by a largely tory militia force, as his base of supplies. The minutemen of North Carolina were soon to demonstrate the same prowess with their rifled arms that the British observed with other colonial militias and units of the Continental Line which had been recruited from among backwoods militias. Lord Cornwallis' greatest victory was the patriots' most humiliating defeat. It occurred on 16 August 1780 near Camden, South Carolina. Horatio Gates, who commanded at least 1400 regulars and 2752 militia, advanced against Cornwallis with 2239 veterans, including such tory units as Banastre Tarleton's Legion; the Volunteers of Ireland, consisting entirely of ethnic Irish deserters from the American army; and the Royal North Carolina Regiment. Gates had only 3052 men fit for duty and most militia had never faced (or used) a bayonet. Gates had no battle plan, issued no comprehensible orders and quickly joined the routed militia in wild retreat. For his part, Cornwallis proved to be a superior leader who took advantage of the weakness and inexperience of Gates' army. Johann DeKalb, commanding the Continental Line, fought bravely until mortally wounded and captured. The remaining militia fled into North Carolina, and Gates had no viable army.(370) With no apparent patriot army to slow his advance, Cornwallis sent his agents into North Carolina to prepare for its return to the fold, which had been his objective in moving north. But Cornwallis found few recruits for a tory supporting force. That he blamed on the tyranny of the whig government.
He hanged several men who had cross-enlisted as examples to turncoats, but this did nothing to increase his popular support.(371) Cornwallis did little to take advantage of the situation. He did not resume his march into North Carolina until 8 September, and he paused at Waxhaw for another two weeks. As Cornwallis moved toward Charlotte, militia rose to harass, if not to directly face, his army. Militia from Rowan and Mecklenburg counties moved out under the command of Colonel William L. Davidson and Major William Davie. Primarily, the militia reported on the movement of Cornwallis' troops and interrupted communications and captured stragglers and deserters. Gates drafted orders to avoid direct military confrontation, for his force was too small and too weak to accept full battle. Davie's militia, 100 strong, struck the left flank, slowing the enemy advance. On 20 September they captured a Tory outpost near Waxhaws. Davie's riflemen, acting as sharpshooters, so harassed Cornwallis' army that he was unable to occupy Charlotte until 25 September.(372) Few loyalists enlisted in his adjunct militia, and he found few willing to sell him badly needed food and supplies. He paused again to await a supply convoy from the south. Colonel John Cruger at Ninety-Six and Major Patrick Ferguson at Gilbert Town had the same experiences. Meanwhile, Cornwallis learned that patriot forces were on the verge of liberating Georgia, destroying one of his principal achievements. During September 1780, a formidable force of backwoods militia gathered in North Carolina to oppose Cornwallis's army of the south. Colonel Campbell (1745-1781) of Washington County, Virginia, brought 400 militiamen. Colonel Isaac Shelby (1750-1826) recruited 240 militia from Sullivan County, North Carolina. From Washington County, North Carolina, Colonel John Sevier brought the same number of militiamen. Burke and Rutherford counties, North Carolina, sent 160 militiamen under Colonel Charles McDowell. By the end of the month, Colonel Benjamin Cleveland and Major Joseph Winston brought 350 militia from Wilkes and Surry Counties, North Carolina. One author described this militia force vividly. "The little army was mostly well armed with the deadly Deckard rifle, in the use of which every man was an expert."(373) By early October, the band of militia companies was joined by 270 militia under Colonel Lacy and by another group of 160 volunteer backwoodsmen. On the eve of the major confrontation with Major Patrick Ferguson's British army, they numbered at least 1840 militia and volunteers. The men, in truly democratic fashion, selected William Campbell as their commander. This force initially had in mind harassing Cornwallis's British army rather than confronting its strong left wing. Cornwallis withdrew to Winnsboro between Ninety-Six and Camden. British intelligence, which at this point seemed to be good and reliable, reported a major gathering of American forces to the west. Ferguson dismissed them as mere untrained and undisciplined militia and looked forward to meeting and defeating them. Reportedly, Ferguson had released a captured American so that he could carry a message to the backwoodsmen. If they did not desist from their treason, he warned, "I will march my army over their mountains, hang their leaders and lay their country waste with fire and sword."(374) Whether the message emanated from Ferguson or not, it was accepted as true by Campbell's force.
The Americans hurried toward Ferguson at Gilbert Town, while Ferguson took up position on King's Mountain, waiting to slaughter the country bumpkins. The Battle of King's Mountain of 7 October 1780 pitted Tory and patriot militias against one another in a fight among relatives and neighbors. The Tory force of 1100, led by Major Patrick Ferguson, encountered a patriot force of frontier militia, then numbering about 910.(375) The long hunters, armed with at least 600 rifles, decimated the tories' lines with deadly and accurate rifle fire. Ferguson represented Cornwallis' left wing, and it was destroyed by the American militia. Campbell did not await the arrival of the remainder of his van. He encircled Ferguson's troops and his skilled riflemen rained deadly rifle fire upon the British lines. After Ferguson was mortally wounded, his army was thoroughly disheartened. The Americans lost 28 killed and 62 wounded while killing or capturing nearly the entire opposing force, 1105 in all. As the principal historian of that battle wrote, "The fatality of the sharpshooters at Kings Mountain almost surpassed belief. Rifleman took off rifleman with such exactness that they killed each other when taking sight, so instantaneous that their eyes remained, after they were dead, one shut and the other open, in the usual manner of marksmen when leveling at their object. . . . Two brothers, expert riflemen, were seen to present at each other, to fire and fall at the same instant . . . . At least four brothers, Preston Goforth on the Whig side, and John Goforth and two others in the Tory ranks, all participated in the battle and all were killed."(376)

This action may have turned the tide of the war in the south. It certainly purchased precious time for the American regular army to regroup and plan its campaign. Cornwallis, who had advanced beyond Charlotte on the road to Salisbury, decided after King's Mountain to retreat into South Carolina and set up for winter at Winnsboro. His army was racked by disease and fatigue and was in no condition to confront a major American force. Most of all, Cornwallis had become discouraged that so few tories had come to his aid and had come to doubt the truth of the fundamental assumption that American loyalists were waiting in large numbers for their liberation. He thought then to continue to march northward and receive any loyalist support that might come his way. Sir Henry Clinton had sent Major-General Alexander Leslie with 3000 men to Portsmouth, Virginia, with orders to move south and join with Cornwallis as he marched northward. Cornwallis asked Leslie to attempt to move south and create a diversion that might free his own army to move northward to join Leslie. Events in South Carolina changed Cornwallis' mind. The patriot militia rose everywhere, harassed his communication and supply lines, captured isolated patrols and quieted the loyalists. These disruptions, combined with the defeat of Ferguson's force at King's Mountain, compelled Clinton to order Leslie to embark on ships and move to Charleston, South Carolina, to reinforce Cornwallis.

On 14 October 1780, Congress appointed the very capable General Nathanael Greene (1742-1786)(377) to relieve Horatio Gates (1727-1806)(378) as American commander in the south. He headed a force of about 2000 men, over half of which were militia. Additionally, there were the various partisans, irregular volunteers and militia and guerrillas, operating largely outside his direct command.
They served to harass the enemy, slow his progress, disrupt his supply lines and deplete his ranks. They forced Cornwallis to divert many men to guard his supplies and lines of communication. In December 1780, General Greene, too weak to confront Cornwallis' army directly, moved from North Carolina to Cheraw, South Carolina. As Greene wrote to Thomas Jefferson, "Our force is so far inferior, that every exertion in the State of Virginia is necessary. I have taken the liberty to write to Mr. [Patrick] Henry to collect 1400 or 1500 militia to aid us."(379) Working hard to create a substantial force, Greene made an unorthodox command decision: he divided his already outnumbered force into two commands. One division, commanded by Brigadier-General Daniel Morgan (1736-1802)(380) and consisting of 600 regulars of the Continental Line and General William Davidson's North Carolina militia, moved against the left flank of the British army. Realizing the shock value of partisan warfare, Greene ordered Daniel Morgan and his 800 riflemen to move west and join with Henry Lee to harass the British as guerrilla forces. Retaining command of the second force, Greene moved against the right flank. Cornwallis responded by sending Tarleton's loyalists against Morgan.

On 17 January 1781 Morgan's militia confronted a loyalist force of about 1100 men ordered out by Cornwallis and commanded by Colonel Banastre Tarleton.(381) Morgan's force had grown to about 1000, with the addition of mountain militiamen, volunteers and frontier sharpshooters. Morgan positioned his men well and invited Tarleton to attack. Morgan defeated the Tory militia and regulars at the Battle of Cowpens, inflicting heavy casualties by using his skilled riflemen to great advantage.(382) Morgan successfully combined militia and regulars at Cowpens.(383) His great contribution lay in utilizing the militia properly, in open field combat against regulars. He positioned them so that they complemented, not substituted for, the Continental Line. Morgan placed a line of hand-selected men across the whole American front. The sharpshooting frontiersmen were ordered to advance 100 yards ahead of the main line. When the British force was about 50 yards away they were to fire and then retreat back to the main line. Approximately halfway between the skirmish line and the main American line Morgan placed 250 riflemen, mostly raw recruits from the Carolinas and Georgia. Morgan expected them to fire twice and then retire to the main line. A small but significant feature of Morgan's strategy was the order he gave to the main line, formed by the Second Maryland Continental Line. He warned them not to misinterpret the planned withdrawal of the forward lines as a retreat, which might cause general panic among the men.(384) Tarleton escaped, but his much diminished force was never again a major factor.(385) Morgan lost only 75 men, while Tarleton lost 329 men and 600 more were captured.

Angered by this loss to undisciplined and unwashed militiamen, Cornwallis himself set out after Greene and Morgan, who had combined forces after Cowpens. The patriots retreated across the Dan River into Virginia before Cornwallis could catch them and force another major battle. Cornwallis had hoped to force one all-out battle with Greene and to defeat him as he had defeated Gates at Camden. He failed to confront Greene before the Americans crossed the river, and because he had no boats and his supplies were running very low, Cornwallis had to abandon the chase. Cornwallis attempted one last ruse.
On 20 February 1781, he moved his army south, from just below the Virginia border, to Hillsborough, announcing that the mission had been successful and that North Carolina was officially liberated. Just three days after Cornwallis issued his proclamation, North Carolina and other militia destroyed Colonel John Pyle's 200 loyalists. Greene took advantage of the situation to replenish his army by adding more volunteers and militia, bringing his total strength to about 4400 men.

On 15 March 1781, Nathanael Greene, with his mixed force of militia and regulars, confronted Cornwallis at Guilford Court House. The militia and a Continental Line composed of fresh recruits broke, and Greene's army seemed to be on the verge of ruin. At that critical juncture, with the first two lines breached and with the fate of the Southern Department in jeopardy, the Maryland and detached Delaware regulars plugged the gap and held the line. Cornwallis ordered his artillery to fire indiscriminately on the mixed mass of troops, but still both forces stood their ground. Greene then withdrew the American army to fight again another day. Cornwallis lost one-fourth of his army in winning the day, but still had not defeated the southern rebel army. Following this battle, Greene's force was superior in numbers to that of Cornwallis. It was to be the last major confrontation between Cornwallis and Greene.(386) The militia could not, or at least would not, stand against artillery fire and bayonet charges of seasoned British regulars. Greene had to find another role for his militia.

Weakened by the loss of 100 killed and 400 wounded, Cornwallis retreated to Wilmington. He then decided to move into Virginia to join the British force of the Chesapeake commanded by General William Phillips. Greene gave battle at Hobkirk's Hill on 25 April, but lost; laid siege to Ninety-Six from 22 May to 19 June; and lost again on 8 September at Eutaw Springs. No British victory was decisive, for Greene knew when to withdraw, and these actions bled the dwindling British army. Throughout this final campaign in the Carolinas, Greene used his militia most effectively. Militia and regular troops commanded by Francis Marion (1732-1795),(387) Andrew Pickens (1739-1817) and Thomas Sumter (1734-1832) managed to capture a number of seemingly minor British outposts. Marion's ranging militia units tied up numerous British patrols with their elusive tactics, diverting British troops so that the patriots had time to regroup after the Battle of Camden. His militia also ambushed a train of British regulars and tories at Horse Creek and killed 22 British troops and captured several Tory militiamen. More important, his command rescued 150 regulars of the First Maryland Continental Line who had been captured at Guilford.(388) Again, it was the cumulative effect of massive militia action that served to wear down the British army.

At the outbreak of the war, Marion had initially served in the South Carolina Provincial Congress, but decided he could better serve the patriot cause by accepting militia command. Known widely as the "Swamp Fox," Marion and his volunteer irregulars almost single-handedly kept the patriot cause alive in the South in 1780-81. With many loyalists active in the area,(389) Marion roamed the coastal marshes, attacking isolated British and Tory commands and patrols and disrupting communications and supplies. In 1781 he commanded the militia at the Battle of Eutaw Springs.
After the war he returned to politics, serving in the state constitutional convention and in the state legislature.(390)

Cornwallis turned south after making one final call, on 18 March, for the loyalists to rise to his aid. As was to be expected, no tory militia came to his aid, so Cornwallis left North Carolina, having accomplished nothing. By the fall British control had dwindled to the immediate Charleston area. Fundamental British strategy underwent change as it became obvious that the countryside was far more hostile than hospitable to the interlopers. Cornwallis' proclamation of 18 March was the last attempt the British command made to rally the tories. As he left North Carolina, Cornwallis found South Carolinians no more helpful than their brethren to the north, and his army suffered as he received neither aid nor comfort in his retreat.(391)

Nineteenth-century historian Francis Vinton Greene(392) argued that General Nathanael Greene failed to crush the British forces under General Cornwallis because the militia would not fight in the campaign in Virginia and the Carolinas in 1780-81. He argued that had Nathanael Greene possessed several regiments of regular troops such as Colonel Henry Lee's Legion or the First Maryland Continentals, he would have crushed the British in one great all-out battle. A more recent author has argued that "under the leadership of Nathanael Greene and Daniel Morgan, the service of militia was essential to the success of the campaign against Cornwallis, a campaign which could easily have resulted in disaster but for the action of these irregular troops." The value of the militiaman must be understood in terms of his proper function, and not in cases where commanders insisted on setting him "to military tasks for which he was not trained, equipped or psychologically prepared." One clear case of misuse involved placing him before bayonet charges, which, he argued, made no more sense than Braddock's insistence that his army maintain proper firing positions twenty years earlier at the Battle of the Wilderness. The militia accomplished one main mission, and that was to divert the British from their bases in South Carolina, altering their course northward into North Carolina, where the militia were able to harass them almost at will. The militia cowed the loyalists who were undecided on what course to pursue. They struck at Cornwallis' foragers and scouts.(393)

Two recent historians have blamed much of the failure of Cornwallis's mission on his decision to abandon the Carolinas and move northward into Virginia. They argued that had he remained among the numerous tories in the Carolinas, he might have met far greater success.(394) This may be unfair to Cornwallis, for he certainly tried, but failed, to attract loyalist support during his occupation of the Carolinas. After resting at Wilmington for two weeks, Cornwallis suddenly made his last fateful decision to gather the remnants of his army about him, abandon the Carolinas completely and, without any orders or authority to do so, move boldly into Virginia. No loyal regime had been established, for which enormous credit must be given to the activities of the southern militia. Their constant harassment of the British and loyalist forces and their omnipresence in the hinterland precluded real recruitment and placement of British troops. The North Carolina militia performed its functions with great efficiency and success.
It was generally among the best in the nation and was especially effective in the early Amerindian wars and during the American Revolution. North Carolina's borders were among the most secure in the nation, and much of the credit for that can go to the militia. It served relatively well as a reservoir to supply troops to the Continental Line.

Settlement in South Carolina centered on Charles Town, which was founded in April 1670 by a party of English settlers under the leadership of Joseph West, who settled at Albemarle Point. Some 140 settlers arrived at Albemarle Point and there threw up an earthen and log structure to serve as a fort and mounted it with a dozen cannon imported from England. As soon as their supply ships departed to bring additional settlers and fresh supplies, the Spanish from St. Augustine appeared offshore. Simultaneously, some Amerindians, Spanish allies, appeared in the nearby woods, but the test-firing of the cannon frightened them away. The little fort managed to hold out against the Spanish ships until the settlers' own ships reappeared.(395) In 1680 the settlers moved to the junction of the Ashley and Cooper Rivers, leaving Old Charles Town behind. The colony soon gathered 5000 whites and a much larger number of Amerindians and slaves. Few settlers made inroads into the interior in the first decades of settlement for fear of the Spanish to the south and Amerindians to the west. Spanish priests were especially active among the Amerindians, and until 1670 the Spanish claimed territory as far north as Virginia. Consequently, social cohesiveness within the ruling strata was greater in this colony than in others because of common interest bonds formed between planters of large estates and merchants of the town. Between 1671 and 1674 several additional groups of colonists arrived, including those led by Sir John Yeamans from Barbados and another sizable band from New York. The first governor, William Sayle, died on 4 March 1671, and was succeeded by Joseph West. Among West's achievements was the summoning of the first session of the legislature on 25 August 1671. Yeamans claimed the governorship because he owned considerably more land than West and was, indeed, the only cacique.(396) Yeamans' commission arrived in April 1672, but West replaced him in 1674.

Soon after South Carolina's first settlers stepped ashore, they organized a militia for their defense. Their action was unavoidable: unfriendly Spanish outposts lay close to their settlement, and Indians surrounded it. At first, the militia simply protected the settlers from invasion and Indians, but in 1721, it was charged with the administration of the slave patrol as well. Between its inception and the beginning of Reconstruction in 1868, the militia changed little. Its numbers swelled, and its organization became more elaborate, but it remained what it had always been: an institution requiring the registration of all able-bodied male citizens; an institution that administered limited universal military training; and an institution that controlled insurrectionists, outlaws, and the slave and Indian populations. Its men were neither equipped nor trained to wage full-scale war, and the militia behaved poorly when it was misused in that way. The militiamen were paid only if the government called them for duty. The governors commonly mustered them to suppress insurrection, fight Amerindians and to defend against invasion. They could be called out only for fixed periods of time, and usually only for service within the colony.
During the colonial period the government paid volunteers -- men on temporary leave from the militia, men recruited from the other colonies, and transients -- to fight its wars. Just as Massachusetts bore the brunt of French attacks in the north, so South Carolina was the buffer against Amerindian, Spanish and, to a degree, French, ambitions from the south.(397) During the first phase of South Carolina history, that is, during the Proprietary period, and extending a few years into the Royal period, the South Carolina militia was the sole protection on the southern flank of the English North American colonies. The militia proved to be a most effective defense force. Its importance declined significantly with the establishment of Georgia early in the Royal period. By 1740 the British government had eased the burden on the South Carolina militia by placing a company of regular troops in Georgia to contain Spanish ambitions, buttressing them with some white Georgia militiamen. In 1763 the Peace of Paris yielded Florida into English hands. After that cession the South Carolina militia's role as guardian of the southern gate ended forever.(398)

The Charter of Carolina of 1663 required the proprietors to build whatever fortifications were necessary for the protection of the settlers and to furnish them with "ordnance, powder, shot, armory and other weapons, ammunition [and] habilements of war, both offensive and defensive, as shall be thought fit and convenient for the safety and welfare of said province." The proprietors were to create a militia and appoint civil and military officers. Because the colony was "in so remote a country and scituate among so many barbarous nations," to say nothing of pirates and Amerindians, the crown ordered the proprietors "to levy, muster and train all sorts of men, of whatever condition, or wheresoever born, to make war and pursue the enemies."(399) The second charter, issued just two years later, contained much the same instructions.(400)

The first law in South Carolina dealing with the subject of the militia was incorporated into the Fundamental Constitutions of the colony. That law placed control of the militia in the hands of a Constable's Court, which was composed of one of the proprietors, six councilors called marshals, and twelve lieutenant-generals. It was intended to direct all martial exercises. As it was, the laws drawn in England proved to be a practical impossibility after the first settlers arrived in South Carolina. The first actual control of the militia was vested in the Governor and Grand Council. The first order of this body was to enroll and enumerate all Caucasian inhabitants, free or servant, between the ages of sixteen and sixty years. The colony was divided and two militia companies were established.(401)

In 1671 South Carolina enacted a new militia law. It reaffirmed the enrollment of all able-bodied males between ages 16 and 60. All such persons, excepting only members of the Grand Council, were to exercise under arms on a regular basis. Those who failed to attend muster were to be punished at the discretion of the Grand Council, which usually translated to a fine of about five shillings. Poor men who could not pay the fine, usually newly arrived settlers and indented servants, could be subjected to physical punishment, which consisted of running the gauntlet or riding the wooden horse. Militiamen were required to furnish their own guns, although the government might provide arms to poor men. Masters supplied arms to their indented servants.
Arms varied considerably in quality, ranging from imported muskets to common fowling pieces. In addition to the basic arm, men were to provide a cover for the gunlock, a cartridge box with 12 rounds of ammunition, a powder horn and utility pouch, a priming wire, and a sword, bayonet or hatchet. Most men wore a belt over the left shoulder to support the cartridge box. The first law said nothing of blacks or slaves. A system of providing notification in the case of attack was established, following surprise attacks by Westoe and Kussoe Indians.(402)

The first of several wars with the native aborigines occurred in 1671, when authorities at Charles Town accused the local tribe, known as the Kusso, of conspiring with the Spanish. There is little documentary evidence extant that sheds light on this little war. The tribe was evidently quickly subdued since the war did not extend beyond 1671. Those natives who did not perish in the war were enslaved, marking the beginning of the experiment in using Amerindians as involuntary servants. Additional natives were added to the slave population in 1680 as a result of the abortive Westo revolt. Dr. Henry Woodward had been appointed agent to trade with the Westo tribe for hides, peltries and slaves, and the chiefs had objected to the inequitable terms of that trade and attacked the traders. Following a few engagements in April 1680, the South Carolina militia was successful in subduing all tribes along the Savannah River [now in Georgia]. Thereafter, political authorities put forth considerable effort to maintain friendly relations with the neighboring Amerindian tribes. European encroachment on tribal lands was a principal cause of inter-racial friction, so in June 1682 the Lords Proprietors issued an order to "forbid any person to take up land within two miles, on the same side of a river, of an Indian settlement." Those who did take up lands near Indian settlements were to help the Indians "fence their corn that no damage be done by the hogs and cattle of the English."(403) The Lords Proprietors considered the Anglo-Indian society as a whole, but by 1690 real political authority in the colony had passed from the old guard and into the hands of a group of merchants who were primarily interested in commercial profits to be earned in the fur trade. Thus, in regard to Amerindian relations, cordiality became of paramount importance, since occasions for friction greatly decreased the profitability of the Indian trade.

South Carolina vacillated on the organization of its militia. After creating two companies initially, it had formed six companies by 1672. In 1675 the number of companies was reduced to three. There were militia laws passed in 1675 and 1685, but copies of the texts are no longer available.(404) In 1710 there were two militia regiments of foot, divided into sixteen companies of about fifty men each, which enrolled a total of 950 whites. The governor's own guard enrolled forty select militiamen. There was an equal number of blacks, primarily slaves, since each company captain was to enlist and train one black man for each white militiaman. Few blacks were given firearms, but most were trained to use the lance or pike.(405) The scarcity of arms in the colony and the economy of the proprietors caused the colony to take radical steps in times of emergency.
The legislature allowed the impressment of arms, military supplies, gunpowder and other military necessities to meet the Spanish threat in 1685.(406) It also created a public magazine wherein to store the colony's supply of gunpowder.(407) The law required that all free inhabitants between ages 16 and 60 were to be enumerated and their names recorded on the militia enrollment lists. The governor served as commander-in-chief of all the provincial armed forces. He signed officers' commissions, issued warrants for failures to perform militia service, created courts martial, authorized the collection of fines, and impressed food and supplies in times of emergency. With the consent of the council he could proclaim martial law. He appointed regimental colonels and company captains and announced the dates of regimental musters. Company captains appointed sergeants who made arrests for violations of the militia law and inspected the men to make certain they had the proper arms and equipment.(408)

In 1677 Carolina's time of troubles occurred. A group of citizens claimed that Governor John Jenkins had become a dictator and acted against the best interests of the people and proprietors. Calling itself the proprietary faction, and headed by Thomas Miller, the new party combined the offices of governor and customs collector. The so-called anti-proprietary faction captured Miller, using militia loyal to the governor, and imprisoned him on a charge of high treason. Miller escaped to England and laid his case before the Privy Council. The Earl of Shaftesbury defended Miller and sought to mediate the matter. Miller and co-leader John Culpeper were acquitted, and the Privy Council ruled that both Miller and Jenkins had exceeded their respective legal authority.

In 1685 the Grand Council received several petitions from some newly arrived settlers in which they complained that they had been compelled to serve in the militia before their farms had been cleared and made ready for planting. Their land was "fallen" and hard to work and required their full attention until the first crops were harvested. The proprietors in England agreed, suggesting to the council that such settlers should be exempted from militia duty "for the first year or two."(409)

The Spanish attacked the southern border of the colony in 1686. Said to number about 153, the marauders consisted of persons of mixed racial heritage, Spanish regulars, and some allied Amerindians. They destroyed the Scottish settlement at Port Royal and plundered outlying plantations along the North Edisto River. The settlers appealed to the Grand Council which, in turn, appealed to the proprietors in England. The Council resolved to attack the Spanish in Florida, appropriating £500 for an expedition. The proprietors thought it unwise to provoke the Spanish and offered the opinion that perhaps the raiders were pirates operating illegally under Spanish colors. The proprietors reminded the council that the colony's charter did not permit it to attack enemies outside its borders except in hot pursuit of raiders. They suggested that Governor Joseph Morton inquire of the Spanish at Havana and St. Augustine whether they had authorized the raid. Were South Carolina to attack the Spanish in Florida, retaliation would certainly follow, and England was not in a position to enter into war at that time.
After the council decided to accept the will of the proprietors, the proprietors informed the colony that, had it gone ahead with the planned invasion of Florida, Governor Morton would have been held responsible and might have forfeited his life.(410)

Slave patrols were increased dramatically following several slave revolts. Militia slave patrols had been established, under law separate from the other militia acts, as early as 1690.(411) Each militia captain, under the act of 1690, was to create and, when needed, deploy a slave control and runaway slave hunting unit which would be ready to act promptly upon notification from proper authorities.(412)

The General Assembly passed the second important militia law in 1696. It provided for the creation of officers at all grades and for the enumeration of all male inhabitants between the established ages. Each man was required to provide his own firelock and this additional equipment: a gunlock cover, a cartridge box holding a minimum of 20 rounds of ammunition, a gun belt, a worm for removal of a ball, a wire for cleaning the touch-hole, and either a sword, bayonet or tomahawk. A freeman who could not furnish this equipment could be indentured for six months to another person who would buy his equipment. Freemen who owned indented servants had to buy the same equipment for their servants. The act also provided for the creation of cavalry, with the "gentlemen" who could provide their own horses, tack and appropriate equipment qualifying for such service.(413) Failure to attend muster could result in a fine of £0/2/6 for each unexcused absence. Failure to pay a fine could result in the seizure of one's property and/or confinement in a debtor's prison. Those who owned servants were responsible for the appearance of their charges under the same penalty. A man who moved had either to continue to attend muster with his usual unit or to obtain a certificate of removal, showing that he had signed up with the proper unit of his new area. Local companies were to drill once in a two-month period, "and no oftener." A general regimental muster was held annually, and failure to appear at that time would result in a fine of 20 shillings. Local fines were used to offset expenses of local companies, and fines paid for absences at a regimental muster went into the colonial treasury. The act also created the interesting principle that no militiaman could be arrested while going to, attending, or returning from a militia muster. The protection extended for a full day after a militiaman returned to his home. Civil officers who violated that principle could be fined £5, and any civil papers or warrants served in violation of this principle were nullified. Members of the Society of Friends were exempted only if they paid the militia fines for non-attendance.(414)

In 1690 the legislature also created a militia watch system on Sullivan's Island.(415) In 1698 the legislature created an Act for Settling a Watch in Charles Town and for Preventing of Fires. It required that town officials make a list of the names of all men over 16 and under 60 to use as a basis of militia, slave patrol and watch duty. The constables of the town were to summon six men at a time "well equipped with arms and ammunition as the Act of Militia directs, to keep watch" from 8 P.M. to 6 A.M. in the winter and 9 P.M. to 4 A.M. in the summer.
The patrols were also to detain and arrest slaves and free blacks "who pilfer, steal and play rogue."(416)

South Carolina's first elected Indian agent, Thomas Nairn, wrote a description of the colony in 1710 in which he provided an insight into the colonial militia. In England, Nairn wrote, tradesmen thought that militia service was beneath their status and that they should not be bothered with such mundane intrusions on their time. But in South Carolina every man from the governor down to the poorest indented servant thought it his duty to prepare himself as fully as possible for militia duty. British troops excelled at coordinated movements, but the militiamen were much better at making aimed shots, especially when equipped with rifles. He attributed this skill to their habit of hunting game in the forest. Even trusted slaves were commonly enrolled, and, despite provisions of the law to the contrary, occasionally trusted slaves were armed. In his work as Indian Commissioner Nairn observed British officers working with and equipping allied Amerindians. If the colony was invaded the British officers would "draw the warriors down to the Sea Coast upon the first news of an Alarm." The colony liked to use its Amerindian allies because they cost little. He described the natives under his care as "hardy, active, and good Marksmen, excellent at an Ambushcade."(417) During the entire colonial period South Carolina used friendly Amerindians as auxiliaries to the militia. They proved to be especially well adapted to tracking down runaway slaves and indentured servants, reporting enemy activity, and scouting for militia operating in the backwoods. The natives liked fancy clothing, so, as the bulk of their pay, they received scarlet and bright green waistcoats, ruffled shirts, and bright white breeches. The more costly and dangerous gifts were swords and cutlasses, guns, gunpowder, lead and bullets.(418)

The administrations of Governors Archdale and Blake were generally peaceful and prosperous. Their ambitious successor, James Moore, who came to office in 1700, adopted an aggressive policy toward both the Spaniards and the Indians. The rupture of relations between England and Spain on the continent led to a Carolina invasion of Florida. The invasion was a disaster. Nevertheless, Moore followed the ill-fated invasion with a somewhat more successful campaign against the Indians. Between 1712 and 1717 Moore undertook two major Indian campaigns, against the Tuscaroras and against the Yamassees. While the outcomes of the battles were usually favorable to the colonists, the continued presence of the Spanish on the southern border presented a constant danger. Four hundred blacks, mostly slaves, fought alongside whites in the Yamassee Indian War, so most slave owners supported the law which mustered and trained men "of what condition or wheresoever born."(419) The measure was later to prove unwise.

On 30 August 1720 the king sent instructions to Francis Nicholson, governor of South Carolina, regarding the militia. "You shall take care that all planters and Christian servants be well and fitly provided with arms," the monarch wrote, and "that they be listed under good officers." The militia was to be mustered and trained "whereby they may be in a better readiness for the defence of our said province."
He warned that the frequency and intensity of the militia training must not constitute "an unnecessary impediment to the affairs of the inhabitants."(420)

By 1703 the colony had enrolled 950 militiamen and a cavalry troop of 40 men. In that year the South Carolina legislature also enacted a comprehensive militia law because "the defense of any people, under God, consists in their knowledge of military discipline." There were very few changes made in the subsequent militia laws. All free, able-bodied white men between ages 16 and 60 were liable to militia service. This age requirement was not changed until 1782. Exemptions to service were made for members of the council, legislature, clerks thereof, various other colonial officers, sheriffs, justices of the peace, school-masters, coroners, river pilots and their assistants, transients and those who had not yet resided in the colony for two months. In case of an alarm even those otherwise exempted might be required to serve in the militia, in which case they also had to provide their own arms. The law allowed the formation of mounted units, with subsequent exemption from regular militia duty for those so serving. Cavalrymen had to supply their own horses, arms and equipment. This law was not specific as to the description of the horses, arms and equipment, although later laws gave more detailed descriptions of what was required.

The legislature had authorized the enlistment of slaves before 1703, for the militia law of that date assumed that slaves were to be enrolled as in the past. Beginning in 1703, it was lawful for the owner of slaves, when faced by actual invasion, "to arme and equip any slave or slaves, with such armes and ammunition as any other person" was issued. No corps was to have more slaves than one-third of its number. A slave who fought bravely was to be rewarded. A slave who killed or captured an enemy while in actual service was to be given his freedom, with the public treasury compensating the owner.

Whereas, it is necessary for the safety of this Colony in case of actual invasions, to have the assistance of our trusty slaves to assist us against our enemies, and it being reasonable that the said slaves should be rewarded for the good service that they may do us, Be it therefore enacted . . . That if any slave shall, in actual invasion, kill or take one or more of our enemies, and the same shall prove by any white person to be done by him, shall for his reward, at the charge of the publick, have and enjoy his freedom for such his taking or killing as afore said; and the master or owner of such slave shall be paid and satisfied by the publick, at such rates and prices as three freeholders of the neighborhood who well know the said slave, being nominated and appointed by the Right Honourable Governor, shall award, on their oaths; and if any of the said slaves happen to be killed in actual service of this Province by the Enemy, then the master or owner shall be paid and satisfied for him in such matter and forme as is before appointed to owners whose negroes are sett free.(421)

Each man had to provide his own arms, which in this act specifically meant "a good sufficient gun, well fixed, a good cover for their lock, with at least 20 cartridges of good powder and ball, one good belt or girdle, one ball of wax sticking at the end of the cartridge box, to defend the arms in the rain, one worm, one wire [priming wire], and 4 good spare flints, also a sword, bayonet or hatchet."
These specifications changed very little over the decades because the initial law well covered the equipment of the times and few improvements were made over the next eighty years. South Carolina was the only colony to require the lock cover, commonly called a cow's knee because that was the source of the material for the cover, and the ball of wax. The militia units had to muster and train once every two months, with regimental musters being held from time to time. Officers had to supply themselves with a half-pike "and have always, upon the right or left flank, when on duty or in service, a negro or Indian, or a white boy, not exceeding 16 years of age, who shall, for his master's service, carry such arms and accoutrements as other persons are appointed to appear with." Masters had to provide the same arms, ammunition and accoutrements for all servants who were eligible for militia duty, although these remained the property of the master. When a servant had completed his term of indenture he had to provide the same arms, accoutrements and ammunition within a space of one year. Those who had just moved into the colony also were granted twelve months to acquire the requisite arms and supplies. The grace period was granted on the assumption that newly freed servants and some immigrants would be too poor to provide their own arms immediately. Failure of masters and servants to provide the required equipment was penalized by a fine of ten shillings. Unlike some colonies, which allowed arming poor citizens from the public treasury, usually in exchange for performance of some civic duties, South Carolina merely set the requirement and assumed that even its poorest citizens would comply with the law in some way or another.

In times of emergency, the law allowed the impressment of vessels, wagons, provisions, supplies of war, ammunition, gunpowder and such other items as the militia might require. If ships of any description were required, their pilots and sailors could be impressed. When the militia was called into service those who sold liquor were especially enjoined against serving intoxicants to militiamen. Men might be drafted out of their militia units to serve on seawatch duty, although those assigned to this responsibility were paid.(422)

The enlistment of slaves in the militia was, to say the least, a very controversial matter. Nonetheless, the legislature thought that it was imperative to swell the ranks of militia available for emergencies. The legislature enacted the law of 23 December 1703, which applied only to the City of Charleston and provided that slaves who, in war, killed or captured an enemy were to be freed and their masters compensated from the public treasury.(423) In 1704 the legislature authorized the enlistment of "negroes, slaves, mulattoes and Indian slaves" into the militia. Militia officers were to ascertain which of the foregoing were trustworthy, and those were to be enumerated, trained, mustered, marched and disciplined along with free whites and indentured servants. Any master who thought that one or more of his slaves should be exempted could appear before the militia officers to explain his opinion. The master was required to arm his best slaves with lance, sword, gun or tomahawk at his own expense.
Failure to comply with the law could result in a fine of £5 for each offense.(424) Later, those slaves entrusted with arms were given their weapons at public expense, some of which came from militia fines.(425) The slave containment act was strengthened in 1704 when militia units were ordered to patrol the boundaries of their district on a regular basis, with special attention to be given to the apprehension of runaway slaves. Indeed, all slaves found to be away from their owners' plantations were subject to militia arrest. Each militia patrolman had to furnish his own horse and equipment. Officers of the slave patrol received £50 per annum and enlisted men were paid £25 a year.

Militia units were only very rarely deployed at full strength, except in cases of grave emergency, and in any event not as full units outside their own counties. Volunteers and the usual specially trained Rangers were drawn from all militia companies, using the militia as a reservoir for recruitment. During Queen Anne's War [or War of Spanish Succession, 1702-1713], South Carolina Governor James Moore decided to order the militia to leave the colony and attack the Spanish at St. Augustine, Florida. He first tried to recruit rangers, but they refused to leave the colony. He issued a call for volunteers from among the militia, and there was only minimal response. He then ordered the regular militia to march, but it too refused to go, arguing that it was not required to leave its county of origin except in case of grave emergency or when martial law was in force. Moore called the legislature into session, but it refused to concur in Moore's judgment and pass the enabling legislation he sought. Thereafter it was universally held that it was unlawful to march the militia out of the colony. Any militia so deployed had to be volunteers selected from among the reservoir the militia offered.(426)

During Queen Anne's War, faced with the threat of invasion and Indian war, South Carolina in 1703 authorized the arming of specially selected slaves and free blacks. They would be used only as a last resort and only if the regular militia proved to be insufficient to handle the emergency.(427) In 1704 the legislature ordered masters to draw up a list of "reliable" slaves and provide it to local militia officials who would then summon such slaves as might be needed. If a slave was used, wounded or killed, his master would be compensated out of the public treasury. A master who refused to allow his "trusty" slaves to muster, or to make certain that they did, could be fined £5.(428) In 1708 the legislature again considered emergency measures and allowed that trusted slaves in times of grave emergency might be armed from the public stores with a lance or hatchet, and, if absolutely necessary, with a gun. A slave who killed or captured an enemy soldier would be freed. A slave rendered incapable of work after being wounded in battle would be maintained at public expense.(429)

In the later years of the seventeenth century and the earliest years of the eighteenth century, South Carolina thought itself threatened by an incursion of wild beasts. Initially, bounties applied only to citizens, but that proved to be insufficient to contain the vicious beasts of prey. So the legislature authorized slaves, Amerindians and militiamen to kill any "wolfe, tyger or beare" which marauded around the settlements.
The legislature offered bounties of up to ten shillings for each large animal killed.(430)

As Queen Anne's War dragged on, the British home and colonial authorities decided to put some pressure on the Spanish enemy in Florida as they had on the French in Canada. In October 1702, Governor James Moore of Carolina, a planter and adventurer, gathered 500 militia and 300 Amerindian allies, mostly Yamasees, and sailed southward from Port Royal. Their goal was to take Fort San Marcos at St. Augustine before it could be strengthened with French forces. As an inducement to volunteer, the militiamen were promised plunder. The squadron turned in at St. Johns River, and the force captured several outposts on the approach to St. Augustine. It ransacked deserted towns, burning many houses, but the moated stone fort containing the garrison and 1400 inhabitants was more than the Carolinians bargained for. Moore sent to Jamaica for cannon, but they failed to arrive. Governor Zuñiga withstood a siege of seven weeks, and when two Spanish warships appeared on Christmas day, Moore decided to retreat to his relief ships at St. Johns River. The expedition cost £8500, for which Carolina issued paper currency. A year later, having lost the governorship, Colonel Moore proposed a second expedition, against the Apalachee settlements west of St. Augustine. The Assembly gave reluctant approval but specified that the force must pay its own way. Moore could enlist only fifty militia, but he raised about a thousand Indians and after a long march won a pitched battle. Although he did not attack Fort San Luis [Tallahassee], he broke up thirteen dependent missions, which were never restored, and carried off nearly a thousand mission Indians as slaves. Another 1300 were resettled along the Savannah River as a buffer. Moore lost only four whites and fifteen Indians, and the expedition more than paid for itself in booty and slaves. Florida's jealousy of the nearby French changed to alliance against a common enemy, the English. France in turn saw Florida as Louisiana's bastion. There would be a day of reckoning with Carolina.

In the summer of 1706 the war came to life again in the South. Iberville had left Mobile for the West Indies and had already captured the islands of Nevis and St. Kitts in April. Before he could extend his conquests, he died of fever. Spain and France were devising measures to revenge themselves for the attacks on St. Augustine and Apalachee. Five French privateers were engaged to carry Spanish troops from Havana and St. Augustine to attack Charleston, South Carolina. Anticipating such a raid, Charleston had called out militia and built stronger fortifications. Even so, the town might have been sacked by a more determined enemy under a better commander. The Spaniards were poorly led, their landing parties were repulsed, and two hundred and thirty of them were taken prisoner. Then Colonel William Rhett, with an improvised squadron, drove off the French ships. Only with eventual help from North Carolina and Virginia did the South Carolina militia under Governor Nathaniel Johnson repulse the Spanish filibustering expedition in 1706.(431)

Aroused and encouraged, the Carolinians decided on an offensive against the centers of Spanish and French power. The colony raised several hundred Talapoosas from Alabama to join with militia volunteers to attack Pensacola in the summer of 1707. The attackers killed eleven Spaniards and captured fifteen, but failed to take Fort San Carlos.
In November Pensacola was hit again and a siege begun. It did not prosper, and the invaders were ready to give up when Bienville brought relief from Mobile to the garrison and hastened their decision. South Carolina also had its martial eye on Mobile, but was unable to rouse the neighboring Indians or to unify its own leaders behind the enterprise. On both sides the southern offensive expired.

In 1707 the legislature renewed the militia act of 1703 with few changes to its substance. For those who were too poor to provide their own arms as the law had required, a new tack was taken. The officers could "put out" persons who failed to supply their own arms "as servants, not exceeding six months, unto some fitting person (himself not finding one to work with), for so long a time as they shall think he may [require to] earn one sufficient gun, ammunition and accoutrements, as directed." While a servant, such a poor person would use his master's equipment, and the law seems to have allowed the master to pay the servant by exchanging arms for services. In times of actual service, the militia law also allowed for corporal punishment, with forfeiture of life or limb only excepted, for disobedience to officers, failure to show for duty, cowardice before the enemy, rebellion or insurrection. General officers had the responsibility for discipline and for administering punishment. The right of appeal from company discipline to the regimental commander was guaranteed by the law.(432)

As in other colonies, especially in the early colonial years, the South Carolina militia was widely dispersed, following patterns of settlement. Only Charles Town [Charleston] could truly be said to have possessed an urban militia. This urban militia was small and, on several occasions, nearly collapsed before the scattered rural militia was able to muster. By 1712 South Carolina had created a substantial militia, consisting of all able-bodied men between the ages of sixteen and sixty years. The militia was to be used, on orders of the London Board of Trade and Lords Proprietors, to suppress piracy and smuggling, restrain the slaves and guard against slave revolts as well as to contain the Spanish.(433)

Governor James Moore was much disposed to allow slaves to be armed, thus augmenting the very meager white militia of the colony, for Moore believed that the French and Spanish and their Amerindian allies were a far greater threat to the colony than the slaves. In 1708 and again in 1719 the legislature ordered the principal militia officers to "form and compleat a list of such negroes, mulattoes, mustees and Indian slaves, as they, or any two of them, shall judge serviceable for the purpose. . . ." All three acts required that upon receiving an alarm the slave militiamen were to report immediately to the rendezvous, as the free militia did, there to be armed with "a good lance, hatchett or gun" from the public stores. Masters might supply the slaves with arms, and if such privately owned arms were lost, captured or damaged, the public was to replace the arm or bear the cost. Masters and overseers who failed to supply slaves in a timely manner were to be fined £20. Officers who refused to enlist any slaves were to be fined £5. The public treasury would pay a fair market value to the owners of slaves killed in militia service or wounded so that they could not again serve their masters.(434)

The neglect of the militia in neighboring North Carolina cost that colony dearly during the Tuscarora Indian War of 1711-12.
Only the timely arrival of militia forces from South Carolina and Virginia saved the colony from annihilation. South Carolina sent Captain John Barnwell with several militia companies and a large number of Amerindian allies from Cape Fear. Barnwell knew that he had to depend on the Indians to swell his numbers, and he knew well how to play on the ancient tribal animosities, but he was dismayed at the savage behavior of these allies. He complained to the legislature that he had to give "them ammunition & pay them . . . for every scalp, otherwise they will not kill many of the enemy."(435)

The colony provided protection against slave insurrections in three ways. First, it legislated limitations and restrictions which were especially designed to prevent slaves from congregating and thus planning and executing revolution. Second, by importing indentured servants it provided a higher proportion of white men to black slaves than would have been otherwise possible. Larger numbers of able-bodied militiamen translated to a trained and ready force sufficient to defeat slave conspiracies or seditions. To that end, in 1711 Governor Gibbes suggested the importation of whites at the public charge. Bills "for the better security of the Inhabitants of this Province against the insurrections and other wicked attempts of negroes and other Slaves"(436) as well as those "for the better securing this Province from Negro insurrections & encouraging of poor people by employing them in Plantations"(437) were regularly proposed by both the governor and the legislature. Third, the government attempted to limit the number of slaves imported into the province. In 1711 Governor Gibbes asked the House of Assembly to "consider the legal quantities of negroes that are daily brought into this Governt., and the small number of whites that comes amongst us, and how many are lately dead, or gone off. How insolent and mischievous the negroes are become, and to consider the Negro Act doth not reach up to some of the crimes they have lately been guilty of."(438) No person after the ratification of the 1712 act "Shall Settle or manage any Plantation, Cowpen or Stock that Shall be Six Miles distant from his usual Place of abode and where in Six Negroes or Slaves Shall be Imployed without One or more White Person Liveing and Resideing upon the Same Plantation, upon Penalty or Forfeiture of Forty Shillings for each Month so Offending."(439)

In 1712 South Carolina created a comprehensive code covering all aspects of slave life. One provision of this act was that masters were, every fortnight, to search all slave quarters, and all other dwellings on their premises occupied by persons of color, for weapons of all sorts, including guns, knives, swords and any other "mischievous" weapons.(440)

An act of 7 June 1712 was designed to increase the importation of indentured servants either directly or indirectly through the full support of the colonial government. The first article of the act empowered the "publick Receiver for the time being . . . dureing the Term of Four Years, after the Ratification of this Act, [to] pay out of the publick Treasury of this Province, the Sum of Fourteen Pounds Current Money to the Owners or Importers of each healthy Male British Servant, betwixt the Age of Twelve and Thirty Years, as soon as the Said Servant or Servants are assigned over into his Hands by him or them to whom they belong."
The second article authorized the Public Receiver to dispose of these servants to the inhabitants of the Province "as much to the publick Advantage as he can, either for Money paid in Hand, or for Bonds payable in Four Months," and drawing interest at ten per cent thereafter. Article four provided that "in Case it so happen that there remains on any Occasions some Servants, whom the Receiver can neither dispose of in any reasonable time, nor employ to the Benefit of the Publick, he shall with the Approbation of Mr. William Gibbon, Mr. Andrew Allen and Mr. Benjamin Godin, or any two of them, sett these Servants Free, taking their own Bonds, or as good Security as he can get, for the Payment of the Sum or Sums of Money, as the Publick has expended in their behalf." The sixth article prohibited the importation of any who "were ever in any Prison or Gaol, or publickly stigmatized for any Matter criminal by the Laws of Great Britain."(441)

The political authorities felt that those who had had experience fighting in the various European wars would make good militiamen for the American frontier. It did not make any difference which side they had fought on in Europe, for they expected that in America all Europeans would stand side by side against the Amerindians. The most exposed colonies therefore constituted the most suitable place to settle the disbanded soldiery of Europe.(442)

In May 1715, upon the recommendation of Governor Charles Craven and John Lord Carteret, the legislature passed an arms confiscation act.(443) It allowed the government to "impress and take up for the publick service" ships, arms, ammunition, gunpowder, military stores and any other item "they shall think to employ and make use of for the safety and preservation of this Province." The Indian War had severely taxed the resources of the province, and the government was desperate for arms, ships and supplies. Impressment seemed to be the only alternative to "standing naked against the Indian Enemy and their Confederates." The public treasury was required to make restitution, and officers were required to give receipts for the reasonable value of confiscated materials. The act also allowed the militia officers to "seize and take up such quantities of medicines, spices, sugars, linen and all other necessaries" required by both the poor and wounded militiamen. The governor planned to send a ship "northward" to trade peltries for arms and ammunition and, since it was expected that Craven would have to bargain for arms and supplies, he was authorized to seize furs with a value not to exceed £2500, giving receipt for such seizures.

The militia also rose to the challenge during the Yamassee Indian War in 1715, although these aborigines were ill-armed and poorly organized and in large part defeated themselves through ineptitude. There was essentially no defense against surprise attacks except constant vigilance, and the Yamassee and their allies worked surprise attacks very effectively. The colony found that its forts were too far apart to support one another, so it built additional forts to complete a strong chain across Yamassee territory.
The forts never were quite large enough to shelter all who sought refuge during Indian attacks.(444) Nonetheless Governor Robert Johnson declared that the militiamen had acted bravely and said that in terms of competence in the art of war they compared favorably with the very best professional soldiers from Europe.(445) More responsible citizens, realizing the true state of affairs, and seeing that the population would suffer significantly from major losses of tradesmen and farmers in militia service, appealed over Johnson's head, asking that London dispatch regular troops. After much correspondence the parties compromised and formed a primary defensive unit comprised of British troops and volunteer colonial rangers, all paid by the colony.(446) During the Yamassee War, Governor Charles Craven ordered "about two hundred stout negro men" to serve in the militia. Since there were less than 1500 able-bodied whites in the colony, Craven felt justified in enlisting the slaves.(447) In 1717 the South Carolina militia consisted of 700 white men able to bear arms.(448) The legislature decided in that year to renew and revise slightly the colony's basic militia law. No substantial changes in the law were noted.(449) The Assembly reaffirmed the use of slaves in the militia in 1719, requiring that slaves serve in the militia if ordered to do so. It diminished the reward for slaves who captured or killed enemies in action, offering only £10 instead of freedom. All slave owners had to submit a list of able-bodied slaves between ages 16 and 60, who might be drawn upon in case of serious emergency.(450) The act also provided for the deployment of great guns in Charleston, the care and maintenance of the cannon and for the training of an artillery company within the militia.(451) South Carolina organized its militia units on a territorial basis. These geographic areas were commonly called beats. The colony supported two types of companies: the ordinary line companies and the elite volunteer companies, known variously as minutemen and frontier rangers. Mounted troops and cavalry were also considered elite volunteers. The law required all non-exempt male citizens to serve in a line company but gave them the option of serving in a volunteer company instead. Since only males with means could afford to serve in the mounted volunteer companies, most males served in the line, regular, or ordinary companies. Within each beat, every resident, service-age male who was not in a volunteer company belonged to a line company. The line companies were infantry units, literally the people in arms. The law required free white males to provide their own muskets and accoutrements. Free black males and slaves acted as fatigue men (laborers) and musicians. Membership of each line company varied over the years, but the maximum complement was sixty-four men and officers and the minimum was thirty. Companies could maintain full membership only if the number of males living within a beat remained level. Since such stability of population was rare, beat boundaries were redrawn whenever the numbers in a beat exceeded, or fell below, the reasonable limits provided by law. The line companies were the focal point for registration and training. Each company elected its own beat captain, and inductees registered with him when they turned eighteen. The beat captains used their rolls to organize the slave patrol and to see to it that all members attended training musters. 
The militia held musters for one full day every two months; four of those musters were company musters, one was a battalion muster, and one a regimental muster. During these musters, the inductees drilled, marched, and practised musketry, although with mixed results. The line companies were the building blocks of the militia structure. Line companies combined to form battalions, battalions combined to form regiments, regiments combined to form brigades, and brigades combined to form divisions. Companies, battalions, and regiments assembled for musters. Units larger than regiments, that is, divisions and brigades, seldom assembled. Instead, division and brigade staff members adjusted beat boundaries and inspected training musters at periodic intervals. These superior officers were appointed by the governor with the nominal approval of council and the legislature. Before the American Revolution, the militia had organized its line companies loosely into a number of regiments. These regiments covered vaguely defined areas, and the governor acted as the nominal commander-in-chief. Two regiments had formed in 1721, in which year the militia took on the administration of the slave patrol. The Southern Regiment was made up of the line companies in Granville and Colleton counties. The North West Regiment was composed of the line companies in Berkeley and Craven counties. As the population of the colony increased, seven regiments had been formed by 1758, and twelve had been formed by the time of the American War for Independence. After the American Revolution, the state passed several pieces of legislation that organized militia companies into a more complex structure and placed regiments, brigades, and divisions within judicial districts. By 1787, twenty-three regiments had combined to form four line brigades.(452) After the suppression of the Jacobite Rebellion of 1715 the British decided to send some of the rebels to America. South Carolina had a policy of long standing that encouraged the importation of white convict and indentured servants to increase the number of whites in order to contain, indeed overwhelm, the black slaves. It is not surprising that the Lords Proprietors, Board of Trade and governor all wanted to receive as many of these condemned rebels as possible. On May 10, 1716, the Lords Proprietors advised Governor Craven, "We having received two Letters from Mr. Secretary Stanhope signifying his Majtys pleasure in relation to such of the Rebels who were taken at Preston and are to be transported to his plantations in America that as soon as any of the Rebels shall land in any port of our province of Carolina you shall appoint a sufficient Guard for securing them till they are dispos'd of according to the Terms of the Indentures they have enter'd into here and such of the Rebels who have not enter'd into Indentures here you are to offer to them that they enter into the like Indentures with the others, Vizt. to serve for the space of seven years and in case of their refusal to enter into such Indentures you are to give proper certificates to those that purchase them that it is his Majesty's pleasure that they shall continue servants to them & their assigns for the term of seven Years, which certificates you are to cause to be recorded for the satisfaction of those who purchase them, lest they should attempt to make their Escape not being bound. We do hereby strictly require & command you to Obey these orders in every particular. . .
."(453) The government itself purchased some of these rebels. On 1 August 1716, Deputy Governor Robert Daniell sent a message to the Commons House of Assembly, explaining that the danger from Indian attacks was so imminent that he had taken it upon himself to purchase "thirty of the Highland Scots rebels at thirty Pounds per head to be paid for in fifteen days." He added that he, "would have contracted for the whole number, but that I could not persuade the commissioners that they had powers enough."(454) On the fourth an "act to Impower the Commissioners appointed to Stamp Fifteen Thousand Pounds in Bills of Credit to Pay for Thirty Two White Servants Purchased by the Honourable the Governor" was ratified.(455) In 1718 the legislature authorized the use of militia against the enemies of the Cherokees because "the safety of this Province does, under God, depend on the friendship to this government, which is in daily danger of being lost to us by the war now carried on against them by divers nations of Indians supported by the French."(456) To recruit Amerindians to the assistance of the provincial militia the South Carolina legislature placed a bounty on enemy Amerindian scalps. The law provided that "every Indian who shall take or kill an Indian man of the enemy shall have a gun given him for the reward."(457) In 1719 more specific legislation in this area was directed at the Tuscarora tribe. "Any Tuscarora Indian who shall . . . take captive of any of our Indian enemies, shall have given up to him, in the room thereof, one Tuscarora Indian Slave."(458) Slaves received similar rewards under a laws of 1706 and 1719.(459) If any slave shall, in actual invasion, kill or take one or more of our enemies and the same shall prove by any white person to be done by him, [he] shall receive for his reward, at the charge of the public, have £10 paid him by the public receiver for such his taking or killing every one of our enemies, as aforesaid, besides what slaves or other plunder he shall take from the enemy.(460) In the same year the king sent a substantial quantity of arms for the militia, so the legislature passed a substantial act which provided for a public magazine, a public armourer, care and maintenance of the arms, and penalties for private conversion of such arms. The arms were to remain in the magazine and were to be issued only upon authorization by the governor and in case of emergency. In 1719 the colony had 6400 white inhabitants, suggesting a militia potential of at least 500 men; in 1720 the governor reported 9000 white inhabitants, probably expanding the militia to 1000. In 1721 the report showed the same number of white inhabitants and 12,000 blacks.(461) On 12 January 1719 Colonel Johnson, on behalf of the governor, reported Amerindian population at the same period to the Lords of Trade.(462) Charles Town Name Villages Men Total 90-S.W. Yamasses 10 413 1,215 130-S.W. Apalatchicolas 2 64 214 140-W. Apalachees 4 275 638 150-W. by N. Savanas 3 67 283 180-W.N.W. Euchees 2 130 400 250-W. by N. Creeks 10 731 2,406 440-W. Abikaws 15 502 1,773 430-S.W. by W. Albamas 4 214 770 390-W.S.W. Tallibooses 13 636 2,343 200-N.N.W. Catapaws 7 570 1,470 170-N. Sarows 1 140 510 100-N.E. Waccomassus 4 210 610 200-N.E. Cape Fears 5 76 206 70-N. Santees 2 43 125 100-N. Congarees 1 22 80-N.E. Wensawa 1 36 106 60-N.E. Seawees 1 57 W & Ch 57 Mixt. wth ye English: Itwans 1 80 240 Settlement. Corsaboys 5 95 295 450-N.W. upper settlement 19 900 390-N.W. middle settlement 30 2,5000 11,530 320-N.W. 
In 1720 the Assembly reported to the Board of Trade that it possessed 2000 "bold, active, good woodsmen" who were "excellent marksmen." The principal obstacle to the development of a good militia was the sparseness of the population outside the few urban areas such as Charleston. By 1721 the militia rolls showed over 2000 men in two infantry regiments and one troop of cavalry. This militia was spread out through the colony, with lines of communication as long as 150 miles.(463) Following a major slave revolt in Charleston, the legislature passed the Militia and Slave Patrol Act of 1721,(464) which expanded the militia patrol system.(465) The principal result of the act was the creation of additional patrols with more men. Following the Yamassee War, the colony successfully moved away from proprietary government. Sentiment for democratic government and for the choice of their own leaders propelled this largely peaceful revolution. In 1719 and 1720 the Speaker of the Assembly, along with eight other legislators, assumed political control of the colony. The assembly elected one of its own, James Moore, to serve as governor. Although the colony was in a deep economic depression following destruction of crops and frontier enterprises in the war, the leaders never allowed popular discontent to disrupt the main functions of government or the movement to become radicalized. Moore immediately sent the crown a letter explaining the colony's grievances. The colony had suffered from the Yamassee and Tuscarora wars and also from repeated threats from the Spanish and from the pirates to whom the Spanish had given protection. Sir Francis Nicholson arrived in May 1721 to become the new governor, captain-general and commander-in-chief. He carried several instructions relating to the military situation in the colony. The crown assured the colonists that it would supply ample arms, gunpowder and flints to enable the "Planters, Inhabitants, and Christian Servants" to defend themselves. The crown issued specific orders that the training of the militia was never to interfere with the ordinary business of the citizenry. Nicholson was charged with providing good officers for the militia. Under the cooperative leadership of Nicholson and the Assembly the colony once again prospered. One of the main objects of executive and legislative attention was the rebuilding of the militia to respond to the "constant alarms" from French, Spanish and Amerindian attack.(466) There were essentially four types of armed bands available to the colony. The great militia had provided most of the colony's defense during the early years. As we have seen, Amerindian allies provided a low cost alternative to the militia. Provincial regiments had aspects of both a standing army and a select militia. The King's Independent Companies were recruited and trained in England. In 1721 the legislature again reenacted the basic militia law, making few changes, as had been the case with the last two militia acts. Because the lack of coordination among militia units had been a problem, the law required the three militia units closest to one another to muster annually and practice as a single unit. Ministers were added to the exemption list. Retired militia officers were exempted from militia musters, but had to serve in case of an alarm. Company captains could choose their own sergeants, clerks and corporals.
The law now allowed seizure and forfeiture of personal property and goods to satisfy unpaid militia fines. The law also expanded the powers of impressment of goods and services in times of emergency or alarm. The 1721 militia act made more specific provision for armament of mounted troops, who were now required to supply a good horse, a brace of pistols, carbine, sword, and proper saddle and mountings. Mounted troops could no longer be impressed into infantry. With reference to acts of 1712 and other years, the militia was charged with containment of slaves and white indentured servants. Militiamen could search the dwellings of slaves and confiscate any offensive weapons located therein. The grace period for former servants and other poor men to be armed and accoutred was reduced from twelve to six months. Militiamen could be drafted into slave patrols as well as seacoast watch duty. Militiamen could hold, even imprison, slaves or indentured servants absent from their masters' plantations without a pass or sufficient cause. Penalties for neglect of duty and failure to appear at musters were increased.(467) After 1725 the professional military organizations of the provincial and independent companies assumed the primary defense of the colony and the militia was reduced to controlling the slaves and defending against surprise Indian attacks. The king's companies, being composed of professional soldiers from an urban environment, were essentially useless on the frontier. They especially resented being divided into smaller units, such as companies, for assignments. A judicious Indian policy then eliminated the necessity of using the South Carolina militia in Indian wars.(468) Thereafter it became increasingly difficult to convince militiamen to muster and, in turn, political leaders expressed less confidence in the usefulness and discipline of the militia. The county militia units were uniform only in their resistance to discipline and order. A few select militia units, notably the Charleston artillery, were well practiced. Neglect of militia discipline reduced many urban units to the position of being mere social clubs. In the year 1727 South Carolina had grave reason to prepare, arm and reorganize its militia. English attention was diverted to the War of the League of Hanover with Spain. Spanish attention was focused on the Carolinas. The colony expected a Spanish attack to be launched from the Spanish foothold at St. Augustine. The restless Amerindian tribes also expected Spanish help and the authorities in South Carolina worried that a Spanish attack would herald a general Indian uprising. Perhaps the Spanish would also be prepared to precipitate a slave revolt to assist in achieving their design. One interesting act passed in anticipation of invasion required that all slave holders retain at least one servant, often a purchased indentured immigrant, for every ten slaves. The price of indentured servants rose and male servants were included in the militia.(469) One important reason for importing indentured servants into the southern colonies was self-protection. The Spanish in Florida had long looked upon the constant southerly extension of English settlements in the late seventeenth century with as jealous an eye as they had viewed the French attempts of a century earlier.
While actual hostilities did not break out until the opening of the eighteenth century, the loss of runaway servants and Negroes, rivalry in the Indian trade, and the unsettled state of affairs in their respective home nations all contributed to the suspicion with which the Carolina and Florida settlements regarded one another. Moreover, danger was always to be apprehended from the Indians, whether incited by Spanish intrigue or going to war for their own reasons. The colony was so desirous of having settlers on the frontier that it even went into the business of recruiting and importing servants. When the colony imported servants it demanded immediate placement so that the services of the new militiamen could be utilized. Moreover, the government wished to be reimbursed for the expenses involved in importing them.(470) Importation of white servants and convicts remained an important concern in South Carolina. The servants were really more important as a defense against possible slave insurrections than as a defense against the Amerindians or Spanish. As the cultivation of rice increased, the demand for slaves grew. More and more, slaves furnished the vast majority of the colony's agricultural labor. The growth in the number of slaves created a new demand for servants. In 1726 a committee of the Assembly reported that it was their opinion that "it will greatly reduce the charge for manning the said Forts if five servants be purchased for each and in order to procure the same we propose that Captain Stewart or some other person be treated with to transport such a number which we believe maybe agreed for at £40 or £50 per head indented for four years."(471) In January 1741, Lieutenant Governor Bull suggested the plan of purchasing sufficient single men to man the forts.(472) South Carolina purchased and hired servants as they were needed, for, as privately owned servants, they were liable to service in the militia and patrols. At this same time the reorganized militia had to be used to maintain internal order. The South Carolina economy had become heavily dependent on the export of tar and pitch, vital commodities used by the British navy. Initially, the home government paid bounties, but in 1726, dissatisfied with what it considered inferior shipments of these pine tree extracts, it discontinued the subsidies. Extraction of tar and pitch was labor intensive, and required large stands of pine and some buildings and equipment. All of these were taxable items. As long as the economy was strong there had been few complaints about the taxes. The economy declined, and exports dropped to half their previous value, but taxes and expenses of keeping slaves and facilities did not decline. The arrest of plantation owner and tax protestor Thomas Smith, Jr., in April 1727 brought the situation to a head. A private militia, more like a mob, assembled and Smith was released. The legislature mobilized the militia. The tax protestors called a meeting at Smith's plantation. The legislature ordered the arrest of Smith's father on the charge of treason. With the leader gone, a strong and loyal militia in place, and the threat from the Spanish and their potential, if not actual, Amerindian allies still a reality, the revolt ended.(473) In April 1728 in a clash with the Yamassee, the South Carolina militia killed 32 of the enemy. The Yamassee retreated to Florida and took refuge in a Spanish castle.
The militia demanded the surrender of the Yamassee, but the Spanish retorted that these Amerindians were subjects of the king of Spain. The militia retreated, unable to take the fortress because they lacked siege equipment and cannon. They did take fifteen Yamassee prisoner. The force consisted of a hundred Amerindians and one hundred militiamen.(474) In the fall of 1739, the Negroes made an insurrection which began first at Stonoe (midway betwixt Charles Town and Port Royal) where they had forced a large Store, furnished them Selves with Arms and Ammunition, killed all the family on that Plantation and divers other White People, burning and destroying all that came their way. The militia engaged one armed band of liberated slaves which consisted of no less than 90 armed men. In this engagement the militia killed 10 and captured four. They offered a reward of £50 for each insurrectionary captured alive, and £25 for each killed. The South Carolinians were certain that the Spanish played a role in seducing the slaves into revolt, "promising Liberty and Protection to all Slaves that should desert thither from any Part of the English Colonies, but more especially from this." Previously, "a Number of Slaves did from Time to Time by Land and water desert to St. Augustine."(475) The governor reported, In September 1739, our Slaves made an Insurrection at Stono in the heart of our Settlements not twenty miles from Charles Town, in which they massacred twenty-three Whites after the most cruel and barbarous Manner to be conceived and having got Arms and Ammunition out of a Store they bent their Course to the southward burning all the Houses on the Road. But they marched so slow, in full Confidence of their own Strength from the first Success, that they gave Time to a Party of our Militia to come up with them. The Number was in a Manner equal on both Sides and an Engagement ensued such as may be supposed in such a Case wherein one fought for Liberty and Life, the other for their Country and every Thing that was dear to them. But by the Blessing of God, the Negroes were defeated, the greatest Part being killed on the Spot or taken, and those that then escaped were so closely pursued and hunted Day after Day that in the End all but two or three were [killed or] taken and executed. That the Negroes would not have made this Insurrection had they not depended on St. Augustine for a Place of Reception afterwards was very certain; and that the Spaniards had a Hand in prompting them to this particular Action there was but little Room to doubt, for in July preceding Don Piedro, Captain of the Horse at St. Augustine, came to Charles Town in a Launch with twenty or thirty Men . . . .(476) The Georgia militia restrained the slaves who attempted to cross that province to gain freedom in Spanish Florida. Caught in a pincer between South Carolina and Georgia militias, the slave revolt was crushed and the leaders executed and other slaves mutilated or deported.(477) Afraid of the consequences of another slave revolt, the slave owning militiamen thought of the containment of the blacks as their first obligation. No militiaman who owned slaves was willing to leave his plantation to go off hunting down Indians when his slaves might rise up and massacre his family. Since there were three able-bodied blacks for every able-bodied white man in the colony, it made a great deal of sense to use the militia to contain the slave menace. Slavery had become "a source of weakness in times of danger and . . . 
a constant source of care and anxiety."(478) After the slave revolts those blacks who were mustered were less well armed than had been the case heretofore and were deployed primarily to scout and forage. In 1733 a conspiracy had been formed between slaves and Amerindians who were already ravaging the frontier. It was betrayed accidentally by an Indian woman who bragged of the impending alliance and expected resulting insurrection. The Assembly investigated and interrogated the woman, who claimed that all the Indian nations were about to unite in one final, great, all-out battle to drive the whites from their shores, and that they would be aided by the slaves. The Assembly then considered what would happen if the French should aid the Amerindians while simultaneously infiltrating the slave population. It concluded that there were "many intestine dangers from the great number of Negroes" and that "insurrections against us have often been attempted and would at any time prove very fatal if the French should instigate them by artfully giving them an Expectation of Freedom." Finally, on 10 November 1739, the colonial legislature enacted a law which required, that every person owning or intitled to, any Slaves in the Province, for every 10 male slaves above the age of 12, shall be obliged to find or provide one able-bodied white male for the militia of this Province, and do all the duties required by the Militia Laws of this Province . . . that every owner of land and slaves . . . who shall be deficient herein, his sons and apprentices above the age of 60 years, to be accounted for and taken as so many of such white persons to serve in the Militia.(479) By 1730 there were "above 3000 white families" in South Carolina, suggesting a militia potential of 2500 or more men.(480) By 1736 the number of white men in South Carolina exceeded 15,000.(481) In 1731 the legislature attempted to reduce the Amerindian potential for war by limiting the amount of gunpowder available to them. No trader was to trust any Indian with more than one pound of gunpowder or four pounds of bullets at one time.(482) In 1730 less than half, perhaps 40%, of the slaves in the province had lived there for more than ten years, or had been born there. By 1740 the slave population of the colony was 39,000, of whom 20,000 had been imported over the past decade. In the five years preceding the Stoenoe insurrection more than 1000 slaves had been imported into St. Paul's Parish, nearly all from Angola or the Congo. There was a certain cohesiveness among this group, whose members had been living together only five years earlier. They generally spoke the same language, which was incomprehensible to whites and to most, if not all, slaves who had lived there for some time past. Those responsible for the maintenance of the slave system were concerned with new conspiracies and had given little or no thought heretofore to the possibility of past associations leading to insurrection. Contemporary accounts suggest that the uprising was primarily an Angolan event. In 1734 the legislature passed a new act for regulating militia slave patrols in South Carolina.(483) The county militia officers were to appoint one captain and four militiamen in each county to serve as slave patrols. "Every person so enlisted shall provide for himself and always keep, a good horse, one pistol, and a carbine or other gun, a cutlass [and] a cartridge box with at least 12 cartridges in it."
Patrols were to survey all estates and roads within their counties at least once a month for the purpose of arresting any slaves found beyond their masters' lands without a permit. Should a patrol locate a band of slaves too large to contain, it was to send word to the officer in charge of the county militia, who would then assemble whatever force was necessary to contain the slaves. "It shall be lawful for any one or more of the said patrol to pursue and take the said slave or slaves, but if they do resist with a stick or any other instrument or weapon, it shall be lawful for the patrol to beat, maim or even to kill such slave or slaves." Masters hiding runaway slaves could be taken also by the militia patrol, with a minimum penalty of £5. Penalty for refusal or failure to serve was £5. In 1734 several important acts passed the South Carolina legislature. First, the legislature reenacted the basic militia act, which continued to provide for the enrollment of all able-bodied, free, white males between ages 16 and 50.(484) Next, the legislature created legislation for setting patrols which would look for Indian activities and runaway slaves and indentured servants.(485) In early 1735 the legislature ratified legislation providing for better regulation of slaves, which included responsibility for the militia to assist in containing "evil designs" of slaves.(486) In 1737 the Assembly debated a law allowing slave patrols to "kill any resisting or saucy slave." In 1737 the legislature appropriated £35,000 for the defense of the colony, including the arming and equipping of the militia.(487) The legislature also authorized the creation of several forts as buffers against the Amerindians. The militia volunteers assigned to guard duty were to be paid out of public funds.(488) In 1739 the militia was organized into companies, regiments and battalions, with battalions being formed when any three or more companies could be formed within three miles of one another. In 1738 the act relating to slaves keeping guns was amended to bring it into conformity with the Negro Act.(489) With the Spanish threatening invasion and Indian problems on the frontier the militia was already overextended and could not provide adequate slave patrols. The colony was also beset by the ravages of an outbreak of smallpox. When the plotters realized that the militia could not meet all of these responsibilities, they decided that the time was ripe to move against the white masters. As early as 8 February 1739 the provincial secretary of Georgia heard from slaves who had escaped from South Carolina that "a Conspiracy was formed by Negroes in Carolina to rise and forcibly make their Way out of the Province." The Stoenoe insurrection occurred in September 1739 when slaves killed about 25 whites and destroyed considerable property. Before the revolt ended the slaves had killed about sixty persons of all races. The South Carolina militia engaged 90 slaves in a single body. They killed all but four. The militia commander then posted a reward of £50 for each insurrectionary taken alive and £25 for each taken dead. Those not taken were believed to be headed for Georgia, as the earlier report had suggested they would be. The militia commander blamed the Spanish for inciting the revolt by offering freedom to all slaves who sought asylum in Florida. On 13 September 1739 an eyewitness described the insurrection at Stoenoe.
"Negroes had made an insurrection which began at Stoenoe, midway betwixt Charles Town and Port Royal, where they had forced a large store, furnished themselves with Arms and Ammunition, killed all the family on that Plantation and divers other White People, burning and destroying all that came in their way."(490) The legislature of South Carolina posted a reward for those insurrectionist slaves who escaped to Georgia. Men were valued at £40, women at £25 and children under 12 brought £10, if brought in alive. If killed, adult scalps with two ears brought £20. One party of four slaves and a Catholic Irish servant killed a man as they headed for anticipated asylum in Spanish Florida and were pursued by militia acting as posse comitatus. Amerindians killed one slave and received the £20 reward, but the others reached St. Augustine where they were warmly received. Two runaway slaves were displayed publicly. One who apparently had no hand in the insurrection, but had merely used the confusion to try to escape, was publicly whipped. The other, branded an insurrectionist, was induced to make a confession of his errors and crimes before a large group of slaves. Contrition did him no good for he "was executed at the usual Place, and afterwards hung in Chains at Hangman's Point, opposite to this Town, in sight of all Negroes passing and repassing by Water." Slaves remained at large as late as November 1739, when rumors spread that the remaining insurgents were planning another revolt. The Assembly requested that the governor muster the militia. In December the militia captured several slaves. In March the Assembly arranged for the interrogation of several others, captured by militiamen acting as posse comitatus, who were suspected of plotting insurrection. In June 1740 the slave patrols in neighboring St. John's Parish, Berkeley County, arrested a large group, perhaps as many as 200 slaves, who were charged with conspiracy to foment insurrection. In Charleston in 1741 a slave insurrection was suspected in a series of arson fires and the militia was mustered. In 1742 the militia also investigated an alleged slave conspiracy being planned in St. Helena Parish.(491) The Stoenoe insurrection prompted the legislature on 11 April 1739 to rewrite the slave patrol code. All Caucasian males between ages 16 and 60, and all women who owned 10 or more slaves, were liable for containment of the slaves, who comprised a significant portion of the colony's population. Since the primary protection of the colony had, since 1725, been entrusted to a standing army and ranging companies, slave patrol had become the primary militia obligation. No less than one-fourth of the militia was to be retained in all situations to control the slaves. All citizens, whether slave holders or not, were subject to service in the militia slave patrol. County militia captains were required to establish regular patrol beats. Militiamen in actual slave patrol service were enlisted for two months at a time. The law permitted hiring substitutes provided that the individual paid the substitute 30 shillings per night and outfitted him. The militia officers chose the patrol officers.(492) This law remained in effect until 1819. Between 1737 and 1748 South Carolina, like its sister colonies, was embroiled, first, in the War of Jenkins' Ear (1739-1744), and, second, in King George's War (1744-1748). During the January 1739 term of the South Carolina House of Commons the legislators debated two major amendments to the basic militia law.
First, it decided that no man need carry arms to church on Sundays if he chose to go unarmed, and that slave owners who did not wish to carry arms likewise need not do so.(493) The legislature's explanatory act for "better regulating the militia of this Province" emphasized more integrated regional militia training. Training at the company level was set at six musters a year, "but not oftener." Greater provision was made for company and regimental implementation of discipline and for appeal from courts martial.(494) In 1740 the governor and legislature approved a new manual for military discipline "calculated and very proper for the use and perusal not only of the officers, but of all Gentlemen of the Militia of South Carolina . . . according to the improvement made for Northern Troops."(495) It was an abridgement of General Humphrey Bland's book first published in London in 1727.(496) The financial burden of war fell heavily on the colony, for it was forced to pay for militia and volunteers to guard the southern border and to contain the Amerindians on the frontier. The legislature summoned large numbers of militia. Legislative-executive cooperation was good, thanks largely to the able administration of governors William Bull (served 1737-44) and James Glen (served 1744-56). Still, it was the legislative committee system that had assumed control of the militia following the revolution against proprietary leadership in 1719 and 1720. It planned expeditions, approved appointment of officers, levied taxes and paid military expenses. It even decided to deploy the militia when the colony was engaged in military action in Florida.(497) The South Carolina Assembly intended to cooperate fully with Georgia's Governor James E. Oglethorpe, who commanded the joint Georgia-South Carolina expedition against the Spanish in Florida in 1739. The Assembly estimated that its share of Oglethorpe's planned expedition would be £100,000 South Carolina currency. Speaker Charles Pinckney argued that such a sum was beyond the ability of the colony to bear. Rice prices had fallen on the international market, the treasury had nowhere near that amount and increased taxation would fall heavily on everyone. With the war underway, Pinckney argued, the colony already was heavily committed to military expenses. It would have to increase the watch, especially in the area of Charleston, inspect various fortified sites and public and private arms, repair and garrison forts along the frontier, buy arms and supplies, set up magazines and repair the colony's arms, which were reportedly in bad shape. The real shock came when the Assembly received Oglethorpe's estimate of costs, which he placed at £209,492. The Assembly was unprepared to appropriate more than £120,000, with £40,000 being taken from the treasury and the remainder funded by a bond issue. The estimate included many categories of projected costs: pay to slave owners for the use of slave labor, gifts for various chiefs and supplies for 1000 Amerindians, munitions, militia pay, transportation from Charleston to St. Augustine, medical supplies and surgeons, provisions and food for the men.(498) Additionally, gubernatorial Indian policy had been founded upon good diplomacy, regulation of the Indian trade, sending agents among the various tribes and the offering of relatively expensive gifts to key Amerindian leaders.
With the coming of war, Governor Bull informed the legislature on 13 February 1740 that more gifts and more agents would be required among all the tribes. Continuation of the policies that had worked, Bull argued, would save the lives of numerous militiamen in a pointless Indian war.(499) The South Carolina Assembly ordered that masters prepare a list of trusted slaves who might be enlisted in the militia. In case of a general alarm these selected slaves would be provided with a gun, hatchet or sword, powder horn, shot pouch with bullets, 20 rounds of ammunition and six spare flints. Once again the legislature held out the promise of manumission for such slaves as might kill or capture an enemy. Slaves who fought well might be rewarded with gifts of clothing, such as "a livery coat and pair of breeches made of good red Negro cloth turned up with blue, and a pair of new shoes." They might also be rewarded with being granted annually for life a holiday on such a day as they had performed bravely.(500) While the legislature entered debate concerning the colony's participation in the expedition to destroy St. Augustine, Oglethorpe suggested that 1000 slaves be enlisted as volunteers in the militia. About two hundred were to be armed while the other 800 would act as porters and servants. Masters were to be paid £10 per slave per month of service, masters assuming all risks except death. If a slave were killed his master would be compensated for his actual value, not to exceed £250.(501) The idea went untested because in late 1739 a major slave insurrection occurred at Stoenoe, followed by a second insurrection nine months later in Charleston.(502) The legislature, having discovered that slaves had secreted a rather substantial supply of weapons, ordered that no slaves be armed for any reason whatsoever.(503) Oglethorpe protested that four months had passed without action and that the Spanish were undoubtedly preparing for a possible expedition against their Florida stronghold. Word then reached him that the Assembly of his ally had cut £100,000 from his request. Oglethorpe protested and Bull took a full month to deliver his message to the Assembly, which for its part responded by setting up yet another committee of inquiry. On 15 April the Assembly passed the military appropriation bill and appointed one of its own and a member of the military appropriations committee, Colonel Alexander Vander Dussen to command its militia. There was additional significance attached to the Assembly's passage of the military appropriations bill, for with it the lower house asserted its right to pass in final form all appropriations and denied the power of the upper house to make any changes to such money bills.(504) Oglethorpe's mixed force of British regulars, Amerindians and militia landed in Florida on 20 May 1740. It enjoyed a few initial successes, burning the town of St. Augustine. Fort San Marcos withstood the siege. South Carolina found Oglethorpe's leadership lacking in most areas of command. Specifically, he alienated both the South Carolina militia and the Amerindians. Whether Oglethorpe's fault or not, the men suffered grievously from heat, disease and excessive rainfall. Well protected inside the fort the Spanish waited for the appearance of a relief force. Dissensions remained and indeed grew in intensity as the siege showed no progress. On 19 July Isaac Mazyck, a leader of the Assembly from Charleston, delivered a preliminary military committee report to the legislature. 
The expedition, he said, was a "lost cause" and South Carolina should withdraw its militia as soon as possible.(505) By August Oglethorpe agreed and ordered the expedition to withdraw to the north. In South Carolina the Assembly was shocked and then reacted by seeking a scapegoat. The new speaker, William Bull, II,(506) son of the lieutenant-governor, appointed a committee to "inquire into the Causes of the Disappointment of success in the Expedition against St. Augustine." The upper house followed suit. The lower house also created a committee to seek assistance from the home government. Bull appointed the most important legislative leaders to serve on these committees, excusing them from all other duties until the work of the committees was finished.(507) A thorough report of more than 150 pages, the final document contained no fewer than 139 appendices with extracts from various journals, reports and letters from those who had served with the South Carolina contingent. Not surprisingly, by 1741 the house had issued a final report which was highly critical of Oglethorpe and the expensive expedition, which had failed to achieve any important part of its mission.(508) The second committee used the report to justify its requests for money and troops from England. Despite its size and documentation the report failed to take into account the long delays in mounting the expedition caused by legislative wrangling, the failure to surprise the Spanish in Florida and the inadequate supplies. Simply put, the legislature had not supplied materials for a full siege and the militia had not been trained or equipped for that type of warfare. The fundamental militia law which had been reenacted on 11 March 1736 was again extended on 22 January 1742.(509) On 7 July 1742 the legislature enacted a law designed to enroll in the militia frontiersmen, especially Indian traders, who were unenumerated on tax lists. The purpose of the law was to provide a first defense line of frontiersmen who were familiar with the terrain and with local Amerindian customs. As the legislature wrote, the law was designed to "secure the assistance of people who are unsettled that they may be encouraged . . . [to] enlist in the service of this Province before any draughts are made of the [urban] militia."(510) In 1742 the legislature ordered the recruitment of militia volunteers and militia rangers to "repel his Majesty's enemies and to contribute the utmost of our Power to the defence of the Colony of Georgia and this Province." Governor William Bull asked for and received legislative authorization to issue £63,000 in paper currency to pay for the expedition to defend Georgia.(511) In 1743 the South Carolina legislature passed an act which required citizens to go armed to church and other public places. Whereas, it is necessary to make some further provisions for securing the inhabitants of this province against the insurrections and other wicked attempts of negroes and other slaves within the same, we therefore humbly pray his most sacred majesty that it may be enacted, and be it enacted by the Hon.
William Bull, Esq., lieutenant-governor and commander-in-chief in and over his majesty's province of South Carolina, by and with the advice and consent of his majesty's honorable Council, and the Commons House of Assembly of this province, and by the authority of the same, that within three months from the time of passing this act every white male inhabitant of this province (except travelers and such persons as shall be above sixty years of age) who, by the laws of this province, is or shall be liable to bear arms in the militia of this province, either in times of alarm or at common musters, who shall, on any Sunday or Christmas day in the year, go and resort to any church or any other place of divine worship within this province, and shall not carry with him a gun or a pair of horse-pistols, in good order and fit for service, with at least six charges of gunpowder and ball, and shall not carry the same into the church or other place of divine worship as aforesaid, every such person shall forfeit and pay the sum of twenty shillings, current money, for every neglect of the same, the one-half thereof to the church-wardens of the respective parish in which the offense shall be committed, for the use of the poor of the said parish, and the other half to him or them who will inform for the same, to be recovered on oath before any of his majesty's justices of the peace within this province in the same way and manner that debts under twenty pounds are directed to be recovered by the act for the trial of small and mean causes.(512) As late as 1765, a grand jury at Charleston, South Carolina, presented "as a grievance the want of a law to oblige the inhabitants of Charleston to carry arms to church on Sundays, or other places of worship."(513) In 1744 Governor James Glen requested that the legislature provide new taxes to strengthen the militia and build and repair magazines and fortifications. He was greatly concerned that the war with Spain would invite privateers, pirates and Spanish forces from Florida to invade the Carolinas. He wished also to protect the lucrative trade that Spain coveted.(514) By this time full power over fiscal matters had passed to the Assembly, the beginning of a long process which, by 1760, had stripped the Council and the upper house of virtually all their powers over the purse. The legislature moved to consider Glen's requests at a snail's pace. As we have seen regarding Oglethorpe's expedition against St. Augustine, this was simply the price for the increasing democratization of the decision making process. The legislature referred gubernatorial requests to committees which held hearings, considered their constituents' viewpoints and wrote reports. Executive requests in the vital areas of Indian affairs, military appropriations and provincial defense were delayed by the workings of the emerging democratic process. Control of the militia through legislation and appropriations was among the most important applications of the popular legislative power.(515) Meanwhile, the home government demanded an accounting of the colony's business and an enumeration of its population. In 1745 the governor reported that the number of whites in South Carolina exceeded 10,000, with more than 40,000 blacks, primarily slaves. In 1749 the number of whites reported by the governor had grown to 25,000 while the black population had declined slightly to 39,000.
The militia could count at least 2000 men, with a few trusted armed slaves and others enlisted as porters and musicians.(516) In 1747 the legislature modified the basic militia law, noting that "the safety and defence of this Province, next to the blessings of Almighty God, and the defence of our most gracious Sovereign, depends on the knowledge and use of arms and good discipline." Where three or more militia companies co-existed within a distance of six or fewer miles, a regiment was to be formed and periodically jointly exercised. Ideally, each county would form a regiment comprised of its various militia companies. Each company was to muster at least six times a year. Other than a reduction in the minimum required number of cartridges from 20 to 12, that aspect of the law dealing with arms and accoutrements was unchanged. The law ratified what had long been considered, by custom and tradition, to be a primary power of the governor, that of appointing all commissioned and non-commissioned officers in the militia. No one could refuse to accept a gubernatorial militia appointment. The law continued to authorize a troop of cavalry, but limited its number to 200 men. The law authorized formation of an artillery militia, with these men being exempted from additional duties. In cases of insurrection the governor, lieutenant-governor or president of council was required to command the militia in person. All citizens between ages 16 and 60 were to be enrolled, excepting only strangers residing in South Carolina for less than three months and a small list of others. Those exempted from militia duty remained the same as in earlier laws, although the law reduced substantially the number of those exempted in various professions such as millers, ferry operators and sailors. The law also required those exempted to muster in case of emergency. The law now required masters to arm apprentices in the same way that it had required masters to arm indentured servants. Those apprentices who had served their terms were granted six months to supply their own arms. Citizens who had moved from their homes were to be carried on the muster rolls of their old homesteads, and expected to continue to serve in those militias, until they joined another militia unit at their new homes. When raising militias to repel invasions or suppress insurrections, the governor's power to call out companies and regiments was essentially unlimited. Still, the law charged the governor with retaining in each county and city sufficient militia to control slaves. Fines for failure to muster were increased, with £50 being the minimum penalty for those who refused to muster in time of alarm. Superior officers could levy fines up to £500, and impose corporal punishment less than loss of life or limb, for various offenses under the act. The act increased the size and frequency of slave patrols. Superior officers could muster militias through the regimental level provided they received reports of insurrection, invasion or Indian attack from reliable witnesses or informers. Masters were to provide a list of reliable slaves who might be marched with the militia. These slaves could be armed with "one sufficient gun, one hatchet, powder horn and shot pouch" and ammunition and accouterments, although they could not possess the arms until they were marched with the militia. If a slave served on militia duty the province paid his owner for his time.
If the slave was killed the state paid his master for his market value; and if he was disabled, the colony compensated his master for the loss of his services in proportion to his disability. Slaves who showed conspicuous acts of bravery under fire, or who killed or captured an enemy, were to be rewarded with clothing annually for life. If the slave was freed as reward for his bravery, his master received public compensation. Slaves serving in the militia who failed or neglected their duty were subject to corporal punishment. If a poor man or servant was injured while serving in the militia, he was to be paid an annual stipend according to his loss. If a poor man or servant was killed, his family was to receive public support at a rate of £12 a year. The province supported the dead man's children only through age 12 and the widow only so long as she remained single. An indentured servant who acted bravely in combat or who killed or captured an enemy was to be freed, with his master receiving public compensation for the loss of his services.(517) The act of 13 June 1747 was continued by an act of 1753 for two years, and revived and reenacted in 1759 for a period of five years. From 1749 through at least 1764 there were constant conflicts between the lower house of the legislature and the governors over military and other appropriations and taxation, with each of the several governors trying to reestablish executive prerogatives and the legislature resisting. Likewise, the upper house attempted to assert its authority, only to receive similar resistance. The lower house gradually gained control over political patronage, local administration, finance, and the militia. In times of war or threatened conflict the legislature would demur until the governors agreed to the erosion of their power as the price they must pay to accede to pressures from the home government to make war or prepare for war. For example, when Glen wanted to build and garrison Fort Loudoun in what is now Tennessee the lower house agreed only upon condition that the governor was willing to recognize the power of the assembly over the budget. The interests of at least the militia officers were well protected because many of the elected assemblymen during this period were simultaneously officers in the militia. George Gabriel Powell, for example, held an assembly seat for over 20 years while serving as a colonel in the militia. He frequently chaired committees on military appropriations and militia affairs.(518) The only militia-related area still directed by the executive was Indian affairs. Ably assisted by Edmund Aiken, Governor Glen and other governors conducted a model of colonial Indian policy. Glen enjoyed considerable legislative support, especially from the committee on Indian affairs in the lower house of the Assembly. By keeping the various tribes either pro-British or at least neutral the administrations reduced the burden upon the militia.(519) Glen's policies worked well, preventing any Indian war. When Glen's successors William Henry Lyttleton and Thomas Boone chose to ignore the Assembly, the legislature was vocal in its disagreement. These clashes with the Indian affairs committee over the direction of Indian policies resulted in the bloody war with the Cherokees. The Cherokee War was the greatest challenge mounted on South Carolina's soil since the Yamassee War of 1715-17.
The causes of the war were many, including failures of gubernatorial Indian policy, the duplicity of the Indian traders, aggressive expansion into Indian lands and the successful intrigues of the French. In early 1759 the Cherokees overran Fort Loudoun and burned many isolated farms and frontier settlements. By early 1760 they threatened Prince George and Ninety-six. The governor mustered the militia, but a smallpox epidemic struck hard at those gathered at Charleston. Low country planters withheld their militia after threats of a slave insurrection spread. Those militia initially deployed suffered several defeats, primarily from well executed ambuscades. Lyttleton, perhaps using his political influence to secure the post, received a promotion from the Board of Trade to become governor of Jamaica. William Bull II assumed the high executive post and immediately took certain bold steps. He asked for and received legislative support to increase the number, training and supplying of additional ranging companies. He recruited his rangers heavily along the frontier, offering various bonuses, an opportunity for revenge and appeals to patriotism. The men he chose, after proper training and outfitting, proved to be the correct force for the job. As all colonial politicians discovered, urban militia were essentially useless in the deep forests and were not even especially suited for garrison duty in isolated areas. Some British regulars assumed responsibility for garrison duty in some forts. The Amerindians of course had made no real provision for a war of some length by laying in food and supplies. The provincial rangers simply ground them down in a series of small clashes, none of which was especially noteworthy, and by destroying their homes and crops and dispersing their families. The Assembly had to raise money to pay the militia and to offer assistance to the frontiersmen who suffered from the ravages of Amerindian attacks. Only slowly did the Assembly realize the full extent of the depredations suffered on the frontier. The Assembly then set up a committee to investigate the causes of the war and the reasons why damages were so great. The committee report found fault with Bull's handling of the situation and criticized him for moving too slowly.(520) The Assembly had to deal with a second problem. Indian defeats were inevitably accompanied by cession of lands to the provincials and the Cherokee War was no exception. While new settlers arrived in large numbers to take up homesteads on the cession, other, undesirable elements followed. Some militiamen deserted as their companies returned home and went back to the abandoned homesteads to loot. Others turned to banditry and stole the supplies sent to the relief of the frontier families. These men were soon joined by escaped slaves, indentured and criminal servants, Indian traders and criminals. Since the new land had essentially no constables or sheriffs, the Assembly once again had to muster the militia to bring law and order to the frontier. The task of the militia was complicated by the emergence of kangaroo courts and vigilantes. A circuit-riding minister, Charles Woodmason, is credited with having drawn up a petition to the Assembly, signed by over 1000 backwoodsmen, asking for greater law enforcement and true justice. Rumor followed that the Regulators were planning to march on Charleston.
The legislature passed legislation designed to bring permanent peace, law and order to the frontier, culminating in the Circuit Court Act of 1768.(521) Between 1748 and 1764 the Assembly worked on legislation designed to prevent slave insurrections. Enforcement of these regulatory acts fell heavily on local governmental units led by committees dominated by planters. Committee members had to supply their own arms, were to go armed everywhere, and were authorized to arrest (or kill) any slave suspected of illegal or even "suspicious" activity.(522) In 1751 the Negro Act(523) made provision for the militia patrols to apprehend, confine and punish, maintain, or deport any slave involved in insurrection or "that may become lunatic."(524) The legislature ordered the militia to pursue, at the expense of the owners, any runaway slaves who were likely to foment insurrection. When slaves ran away and three or more of the runaways gathered, the slaves were considered to be forming a conspiracy. The law required the militia to collect them and return them to their owners, and if the slaves refused to submit, the law mandated that the militia kill them. Owners were required to pay into the militia fund £5 for each of their slaves captured or killed. Slaves were forbidden to "carry a gun or any other firearm, with ammunition, to hunt, or for any other purpose" upon penalty "of being whipt, not exceeding 20 stripes." No "free negro, mulatto or mestizo" was permitted to loan or give a firearm to any slave, upon penalty of fine, physical punishment or imprisonment. In 1756 the home government appointed a new governor, William Henry Lyttleton, who served until 1761. In his first year in office Lyttleton reported to his superiors that the militia of the province included 5000 to 6000 men, ages 16 to 60, enrolled according to the muster rolls.(525) In 1756 the British assigned a quota of 2000 men to be raised in South Carolina as part of a 30,000-man force the English hoped to raise in the colonies to join with the British troops in an invasion of Canada.(526) The quota was reduced in order to deploy the militia to defend the colony. Lord Loudoun complained to Cumberland that "the great Number of Troops that are employed in Nova Scotia and South Carolina . . . robs the main body" of his force mustered to invade Canada. In "South Carolina I think there is more Force there than [is] necessary." He asked that the quota be reinstated and that South Carolina be ordered to send the men to his army.(527) The governor sent a mission to Indian territory in the autumn of 1756 to discover how the Amerindians were receiving the firearms with which they conducted their raids on the outlying settlements. Daniel Pepper reported that a minor chief named the Gun Merchant had, in the past, procured arms from the French agents who were urging the tribes to rise up and drive out the English. Since the French had withdrawn, Gun Merchant was procuring arms from the various Indian traders working the tribes' territory. Pepper warned that since the French had sold rifled guns instead of trade muskets the Indians wanted no other arms, and that they had become exceedingly proficient in the use of rifles, regularly hitting targets at 200 yards.(528) Having decided to put an end to hostilities with unfriendly Amerindian tribes, and to give an incentive to its provincial militia, the South Carolina legislature decided to follow the pattern set by other provinces and place a bounty on Amerindian scalps.
By 1757, in response to the emergency of the French and Indian War, the militia had seven infantry regiments and three cavalry troops with over 6500 men.(529) In 1760 the legislature passed an act specifically authorizing the formation of an artillery militia in Charleston. Noting that the men had "taken great pains in learning the exercise of artillery," it thought this authorization was long overdue. Those serving in the company were exempted from other militia duties. They had the same power of impressment of supplies and portage as other militias. The company, like the mounted militia, was obviously a highly select band, composed of the sons of wealthy merchants, planters and tradesmen, and placement in it was difficult.(530) On 31 July 1760 the legislature appropriated £3500 to pay for Cherokee "and other hostile" Amerindian scalps.(531) In 1761 the colony received its new governor, Thomas Boone, who served until 1764. In 1770 Charleston had 5030 white and 5830 black inhabitants. The total number of white inhabitants in the colony was not provided, but there were 75,178 blacks, mostly slaves, in South Carolina just a few years before the beginning of the War for Independence.(532) On the eve of the American Revolution there were over 12,000 men in a dozen infantry units and a cavalry regiment.(533) News of the clash between the patriots and the British army at Lexington and Concord reached Charleston, South Carolina, within ten days, via courier dispatched by the Massachusetts Committee of Safety. A gentleman from Charleston wrote to a friend in London of the militia preparations in Charleston. "Our companies of artillery, grenadiers, light infantry, light horse, militia and watch are daily improving themselves in the military art. We were pretty expert before, but are now almost equal to any soldiers the King has." Men in the rural areas were ready also, and the colony planned to raise a "company of Slit-Shirts immediately."(534) In February 1776 the South Carolina Provincial Congress considered the military needs of the colony. It began with the premise that "it is absolutely necessary that a considerable body of Regular Forces be kept up for the service and defence of the Colony in this time of imminent danger." The Congress decided that the "Regiment of Rangers be continued" and that the number of men be increased. The rangers "shall be composed of expert riflemen who shall act on horseback or on foot, as the service may require." It also ordered that there be created another "Regiment of expert Riflemen, to take rank as the Fifth Regiment." All riflemen were to provide themselves at their own expense with "a good Rifle, Shotpouch and Powderhorn, together with a Tomahawk or Hatchet." The public would supply them with "a uniform Hunting-shirt and Hat or Cap and Blanket." All riflemen would be tested for their skills by the commanding officer.(535) The Congress also sought to contract for arms for the militia. The Commissioners for purchasing Rifles . . . are hereby authorized and empowered to agree with any person to make a Rifle of a new and different construction . . . . to contract for the making, or purchasing already made, any number . . .
of good Rifles with good bridle locks and proper furniture, not exceeding the price of £30 each; the barrels of the rifles to be made not to weigh less than 7 1/2 pounds or to be less than three feet, eight inches in length; and carrying balls of about half an ounce weight; and those new ones already made not to be less than three feet four inches long in the barrel. Also for the making or purchasing already made . . . good smoothbored Muskets, carrying an ounce ball, with good bridle locks and furniture, iron rods and bayonets . . . the Muskets to be made three feet six inches long in the barrel and bayonet seventeen inches long. . . .(536) In 1776 the British command laid its first plans to invade South Carolina and hold Charleston. Upon discovery of the plan, South Carolina mobilized its militia.(537) In the autumn of 1776 the South Carolina legislature sent Colonel Williamson into the backwoods to fight the Cherokee nation, which was then under British influence. In September the force under Colonel Williamson crossed the Catawba River in North Carolina in pursuit of the enemy. They sought to join with North Carolina militia under General Rutherford and Virginia militia under Colonel William Christian. Initially ambushed, Williamson fought back and turned the engagement into a victory over the hostiles, who then fled. After joining with Rutherford and Christian, the force laid waste to most of the Cherokees' principal towns and villages and took British supplies valued at £2500. They also recaptured several runaway slaves and several British agents.(538) In 1776 the state adopted a new constitution. That document first noted that Britain had forced a defensive war upon the colony in part because of its military policies. It empowered the legislature to create a militia and to commission all military officers.(539) Military occurrences in the colony were few until 1780, and thus the militia remained generally inactive. The militia served primarily as a reservoir of trained manpower to furnish troops for South Carolina's share of the Continental Line. The provincial militia act remained in force until 1778, when the legislature decided to rewrite the law to reflect the change from dependency to sovereign state. The law specifically disallowed private militias such as had been formed as vehicles to achieve independence, and ordered that any such private armed forces then existing be disbanded. Every able-bodied man between ages 16 and 60 was required to serve or pay a fine of £200. Those exempted from militia duty included all state executive, judicial and legislative personnel and their clerks; post-masters and post-riders; river and harbor pilots and their crews; one white man in each grist mill and ferry; and firemen in Charleston. Each man was obliged to provide "one good musket and bayonet, or a good substantial smooth bore gun and bayonet, a cross belt and cartouch box, capable of containing 36 rounds, . . . a cover for the lock of said musket or gun, or one good rifle-gun and tomahawk or cutlass, one ball of wax [and] one worm or picker." The militiaman had his choice of providing lead balls or buck-shot, as well as gunpowder and spare flints. The militia was to be divided into three brigades, each commanded by a brigadier-general; regiments of from 600 to 1200 men, each commanded by a full colonel; and companies of not more than 60 men, commanded by captains.
Each captain was to muster and train his company at least once a month, except in Charleston, where companies were to train every fortnight. Regiments were to train every six months. Courts martial were authorized at each organizational level, with the superior organizations having the power of appeal and the power to impose greater penalties. The act authorized the draft of men into the Continental Line from the militias. When a draft was made, the law required that a sufficient militia force be retained to quell insurrections or slave uprisings. Some militiamen were also to be drafted to maintain slave patrols and seacoast watches. The penalty for failure to serve on patrols and watches was a fine of £100. Superior militia officers could call an emergency alarm upon what they considered to be a reliable report. Masters had to provide arms and equipment for apprentices and indentured servants. When discharged from a master's service, a former apprentice or servant had to provide himself with his own arms and equipment within six months. Poor men and indentured servants who were maimed were to receive public support, as would the families of such men killed in service. Indentured servants who acted bravely in battle could be freed, with public compensation given to the master for loss of his services. Masters had to provide the company officers with lists of reliable slaves, ages 16 to 60, who might be impressed into service in an emergency. Each militia company could enlist slaves up to one-third of its number. Lists of other slaves who might be impressed to do manual labor, to be used as hatchet-men or pioneers, were also to be submitted. The government was obliged to pay the owners for slaves killed or maimed in battle.(540) Excepting a few Amerindian raids, there was little action in the South during the early years of the war, but things were to change following the catastrophic defeat of General John Burgoyne at Saratoga. On 29 December 1778 a force of 3500 regulars dispatched by Clinton, who had succeeded Sir William Howe as the British commander, landed near Savannah, Georgia. General Robert Howe, then commander of the Southern Department, had a mixed force of about 1000 militia and regulars and could not withstand the assault. Howe was shortly thereafter replaced by Major-General Benjamin Lincoln (1733-1810), a distinguished veteran of actions near New York City and at Saratoga. In February 1780 Clinton decided that he could now capture Charleston and that, if he were successful, loyalists would soon appear and swell the ranks of his force. That would bring the Carolinas and Georgia once again under royal control. He assailed Charleston with a force of 8000 troops between 11 February and 12 May 1780. Neither the Continental Line nor the militia available to the Southern Department could hold out. When General Benjamin Lincoln surrendered on 12 May 1780, he lost 860 men of the North Carolina Continental Line. About all that was left of the North Carolina regular forces were those men who had been on leave, ill or attached to other duties or companies. Clinton captured 5400 Americans, the heaviest patriot loss of the war.
It was a general of the regular army who had surrendered 5000 militiamen with his command.(541) On 5 June, Clinton left Cornwallis in charge and sailed back to New York, confident that South Carolina, and perhaps all the South, was about to fall to the crown.(542) Charleston fell on 12 May 1780.(543) Patrick Henry, who attempted to raise two to three thousand militia to march to the defense of Charleston, expressed great admiration for its governor. "The brilliant John Rutledge was Governor of the State. Clothed with dictatorial powers, he called out the reserve militia and threw himself into [the defense of] the city."(544) Disaster struck again on 16 August 1780 at the Battle of Camden, South Carolina. General Horatio Gates blundered into an engagement which neither he nor Lord Cornwallis wanted. Cornwallis commanded a force of 2400 regulars; in addition there were Banastre Tarleton's dragoons. Gates deployed his mixed force of regulars and militia badly. His line was a meager 200 yards from the British and within range of their musket fire. American troops broke when Tarleton's dragoons attacked the rear. A bayonet charge finished off the militia, most of whom were armed with Kentucky rifles, which did not mount bayonets. It made no sense for militia to stand against raw steel, and responsibility for the defeat at Camden rests more with Gates than with the militia. American losses included 800-900 killed and nearly 1000 captured.(545) Gates retreated to Hillsboro, North Carolina, 160 miles north. Revisionist critics of the militia have chosen to blame it rather than Gates' flawed leadership and poor skills as a field commander.(546) On 18 August Tarleton defeated an American militia force at Fishing Creek, South Carolina. General William Moultrie commented that the southern "militia are brave men and will fight if you let them come to action in their own way."(547) We have discussed at length the patriots' great victory at King's Mountain on 7 October 1780, along the border of North Carolina and South Carolina, in the preceding chapter. It was placed there because the North Carolina and other state and territorial militia had more of a role in the defeat of Cornwallis' left flank, and the death of Major Patrick Ferguson, than had the men from South Carolina. In January 1781 Cornwallis moved his force into the interior of North Carolina with the avowed purpose of destroying the small patriot army led by Nathanael Greene, commander of the Southern Department. Cornwallis moved to Hillsboro, where he thought he could recruit a considerable force of tories, but was disappointed.(548) Greene, meanwhile, avoided confrontation but gathered considerable strength along the way from militiamen. General Daniel Morgan advised Greene on how to deploy his militia supplements: "Put the militia in the center with some picked troops in their rear with orders to shoot down the first man that runs."(549) Finally, on 15 March 1781 the two forces met at the Battle of Guilford Court House. Cornwallis held the field and Greene withdrew, but Greene's army remained intact and his militiamen gained battlefield experience. Cornwallis' force was decimated. Greene wrote to General Sumter on 16 March 1781 that if the North Carolina militia had behaved bravely he could have completely defeated Cornwallis.
He rued the day that he had placed his dependence on the militia, whose primary contribution had been the consumption of resources at a rate three times that of the regular army and which was best known for ravaging the countryside.(550) Edward Stevens, an inspirational patriot leader of plebeian origins, writing to Virginia Governor Thomas Jefferson, agreed with Greene. "If the Salvation of the Country had depended on their staying Ten or Fifteen days, I dont believe they would have done it. Their greatest Study is to Rub through their Tower [tour] of Duty with whole Bones. . . . These men under me are so exceeding anxceous to get home it is all most impossible to Keep Them together."(551) Henry Lee raised this same point in defense of the federal Constitution in the Virginia Ratifying Convention in June 1788. Let the Gentlemen recollect the action of Guilford. The American regular troops behaved there with the most gallant intrepidity. What did the militia do? The greatest numbers of them fled. Their abandonment of the regulars occasioned the loss of the field. Had the line been supported that day, Cornwallis, instead of surrendering at York, would have laid down his arms at Guilford.(552) Cornwallis retreated to Wilmington, North Carolina. An advance force under Major James Craig took the town and disarmed the populace. On 7 April Cornwallis arrived with the tattered remnants of his army. The cowed townspeople were cooperative, but he found no large reserve of loyalists to join his force. Only about 200 Tory militiamen joined his cause. After resting and supplying his army with foodstuffs and transportation, Cornwallis moved north to join with General William Phillips in Virginia.(553) Despite the increasing danger from the British army in 1779 and 1780, the southern colonies resisted any idea of arming blacks, whether freemen or slaves. John Laurens of South Carolina, son of a member of Congress, and Alexander Hamilton proposed a plan to enlist 3000 blacks under white officers. Their plan was to liberate Georgia, which had effectively been under British control for some time. Laurens offered to lead one regiment. State authorities refused to enroll any blacks in the militia, save as unarmed laborers, out of fear of a slave revolt. In their view, once trained, blacks would constitute a greater long-range danger than the British army. Laurens argued that with so many planters absent from their plantations, the enlistment of "more aggressive blacks" would actually be of advantage. As early as March 1779 Laurens and Hamilton advanced their plan in Congress. Laurens' father opposed the idea in Congress. In mid-March Laurens tried to convince General Washington to bypass the states and directly authorize the enlistment of blacks in the Continental Army. Washington demurred, dismissing the idea as fantastic, injurious to his relations with the southern states, and beyond his authority. Laurens wrote to the President of Congress, John Jay, later Chief Justice of the United States. Congress accepted Laurens' plan, urging South Carolina to raise 3000 black men at arms. The South Carolina Council of Safety would not change its stand. Laurens, frustrated at the successive rebuffs from his state and his own father, joined a regiment and shortly afterward was killed in action.
With his death any further idea of a black militia or army unit died.(554) In 1782 South Carolina reconsidered its fundamental militia act, because "the laws now in force for the regulation of the militia of this State are found inadequate to the beneficial purposes intended thereby for the defense of the State in the present time." Nonetheless, the changes to earlier acts were few. The upper age limit for service was lowered from 60 to 50. Militia captains were required to submit lists of eligibles every second month. One-quarter of the militia was to serve on garrison or field duty at any given time, with the men to be rotated every month or two. Should a man fail to appear for his assigned duty, his time was doubled. Up to one-third of the militia could be sent to assist another state. In any event, no county could yield so many of its militia as to render slave patrol and containment ineffective. Any man adjudged guilty of sedition, rebellion or dereliction of duty was required to serve on active duty for twelve months. The list of those exempt remained unchanged, with the exception that teachers who were to be relieved of militia duty had to have enrolled under their care no fewer than fifteen students. Before the Revolution the South Carolina militia was perhaps the most efficient and most accomplished south of New England. It had to perform the same duties that were required of other militias, while also serving on slave patrols. While on slave patrol it could be said to have acted as a posse comitatus. It saved North Carolina in at least one Amerindian war. During the War for Independence it acted most efficiently when transformed into guerrilla bands and led by daring and innovative leaders such as Francis Marion. Without the southern militias, the American cause in the south might have been lost and Cornwallis' schemes accomplished. South Carolina had been the guardian of the southern gate of the British colonies against Amerindian, Spanish and, to a degree, French ambitions in the south until the establishment of Georgia early in the Royal period. Indeed, one of the principal reasons that the colony was established was to act as a buffer against the French in Louisiana and the Spanish in Florida. Led by James Edward Oglethorpe and Lord John Percival, first Earl of Egmont, a Board of Trustees received a charter in 1732 to govern the colony for 21 years. The first colonists arrived in 1733 and founded Savannah. Spain reacted with hostility, and war lasted from 1739 until 1744. By 1740 the British government took the pressure off the Georgia militia by placing a company of regular troops in Georgia to contain Spanish ambitions and buttressing them with some Georgia militiamen.(555) James Oglethorpe was the first southern authority to actively oppose the peculiar institution of slavery. So great were his opposition to slavery and his trust in the good character of the slave that in 1740, when the South Carolina legislature was debating an expedition to destroy St. Augustine, Oglethorpe suggested that 1000 slaves be enlisted. About 200 would be armed while the other 800 would act as porters and servants. Masters were to be paid £10 per slave per month of service, with masters assuming all risks except death. If a slave were killed, his master would be compensated for his actual value, not to exceed £250.(556) The Georgia Charter of 1732 provided for a militia.
The charter noted that because the "provinces in North America have been frequently ravaged by Indian enemies," the embodiment of a militia was a matter of absolute necessity. It related that "the neighboring savages" had recently "laid waste by fire and sword" the province of South Carolina and that substantial numbers of English settlers had been "miserably massacred," so the militia must be armed, trained and disciplined at as early a date as possible. The colony was to supply "armor, shot, powder, ordnance [and] munitions." The governor, with consent of council, could levy war with the militia against all enemies of the crown.(557) In 1739 the provincial legislature of Georgia passed legislation regarding the arming of blacks that was remarkably similar to the measure passed only slightly earlier in South Carolina. A slave could be armed only upon the recommendation of his master. One who acted bravely in battle could be given various material rewards and excused from menial labor on the anniversary of an act of heroism.(558) Almost immediately after the passage of the act a slave revolt occurred in St. Andrew's Parish and an overseer was killed. That ended the idea of arming slaves in Georgia.(559) The Georgia militia had a role in restraining the slaves who revolted at Stono, South Carolina, in 1739. The South Carolina militia crushed the slave revolt and executed the leaders. Some of the other slaves who took part in the revolt were mutilated or deported. Some of the slaves escaped and attempted to cross Georgia in hope of gaining freedom in Spanish Florida. They were caught in a pincer between the South Carolina and Georgia militias, acting as posse comitatus, and killed or captured.(560) The colony did not prosper under the Board of Trustees and Oglethorpe's administration. His attempt to outlaw both rum and slaves was generally unsuccessful. Oglethorpe did develop a satisfactory policy with the Amerindians, and no major Indian war occurred during the entire history of the colony. In 1760 the Crown sent Sir James Wright to assume the office of provincial governor. In 1763 the Peace of Paris yielded Florida into English hands. After that cession the role of the Georgia militia as guardian of the southern gate ended.(561) In 1738 Governor William Bull of South Carolina observed that the people of both his own colony and Georgia were "excellent marksmen" and "as brave as any People whatsoever." The problem was that, outside urban areas such as Savannah and Charleston, the people were settled far too sparsely to be of much use in the militia. Most frontiersmen were heavily engaged in agriculture, whether on their own or by supervising slaves, and had neither the time nor the ability to contain either the French or the Spanish forces. Indeed, they were barely able to resist the few Amerindian incursions on the frontier. Bull concluded that "Military Discipline is Inconsistent with a Domestick or Country Life."(562) In 1739 James Oglethorpe decided, with urging from both the home government and the legislature of South Carolina, to attack and reduce St. Augustine. Since St. Augustine was the center of Spanish power in the region, there was nothing new about this strategy. Twice before the South Carolina militia had attacked and damaged the fortress of San Marcos but had been unable to destroy it. Oglethorpe relied upon his own militia, British sea power, the element of surprise and a substantial number of volunteers and militia from South Carolina.
He was also able to recruit over a thousand Amerindian warriors as auxiliaries. The expedition failed. The South Carolina legislature issued a long and involved technical report. Three main conclusions pointed to failures in Oglethorpe's command: he misused the South Carolina volunteers; he treated the Amerindians badly; and he deployed his troops poorly. Whether it was his fault or not, he failed to achieve the surprise his mission required.(563) On 6 August 1754 the king sent instructions to John Reynolds, governor of Georgia, regarding the militia. "You shall take care that all planters and Christian servants be well and fitly provided with arms," the monarch wrote, and "that they be listed under good officers." The militia was to be mustered and trained "whereby they may be in a better readiness for the defence of our said province." He warned that the frequency and intensity of militia training must not constitute "an unnecessary impediment to the affairs of the inhabitants."(564) In 1770 Georgia passed an "act for the better security of the inhabitants by obliging all white male persons to carry fire arms to all places of public worship." In 1774 Georgia, in an attempt to escape the various Indian wars which had plagued its neighbors, passed legislation designed to protect the natives from massacre. Knowing that it was virtually impossible to distinguish between a hostile and a friendly Amerindian, and being well aware of the bounty paid for scalps in the Carolinas and Virginia, Georgia passed an act which provided for the purchase of scalps of hostiles only. Arms for the colony were largely imported, but a few gunsmiths appeared in the colony. Jeremiah Slitterman was among the earliest men to make muskets for the provincial militia. He also served as colonial armourer, with a verified term from 1766 to 1775.(565) Georgia was the last of the thirteen colonies to be established and was also the last to join the patriot cause. Agitation for independence did not sit well with many colonists, for several reasons. First, the colonists feared attack by the English from Florida and their Amerindian allies, the Creeks and Cherokees. Second, they expressed a measure of appreciation to the home government for the large amounts of money it had expended in setting up and maintaining the colony. The colony was unrepresented at both the Stamp Act Congress and the First Continental Congress. The loyalists were well represented in Georgia and had a most active militia system. Many loyalist militiamen volunteered to serve with the British army when it finally landed in Georgia.(566) News of the events at Lexington and Concord, Massachusetts, reached Savannah on 10 May, about three weeks after the actual events occurred. Citizens exhibited considerable excitement, and that night an unknown group, presumably of the local militia, forced their way into the public gunpowder magazine and removed its contents. Royal governor Wright offered a £50 reward for apprehension of the thieves. No one came forward to report the criminals. There is some belief that the gunpowder was distributed among the local committees of safety in Georgia and South Carolina. Throughout the summer, and against Wright's specific orders, the patriots continued to remove arms and supplies from the public domain. On 10 July militiamen from Georgia and South Carolina stopped a royal vessel carrying gunpowder for the Amerindian trade and removed the cargo of about six tons before allowing the ship to continue.
On 2 June, upon hearing that the colony's cannon were to fire a salute on the king's birthday, patriots spiked them and threw them down the embankment. Royalists recovered several and had them repaired in time to fire the salute on George III's birthday on 4 June. The patriots erected a liberty pole the next day, assembled the militia, and drank toasts to "no taxation without representation." Governor Wright reported these incidents to London, but had no power to do more. He asked to be relieved, reporting that sentiment was overwhelmingly for the cause of independence.(567) Whig militiamen gathered food, arms, 63 barrels of rice, £123 in specie, gunpowder and other supplies to send to the relief of Boston. It is unclear whether militia volunteers in any significant numbers marched to Massachusetts. On 14 July 1775 the provincial legislature began to consider the creation of a wartime militia. Many schemes were advanced to reorganize it. Georgia realized that it must contribute to the general war effort by drafting a number of men from the militias to form a regiment of the Continental Line. As one delegate observed on 14 July 1775, "The militia was thoroughly organized and drilled and active military operations prefatory to resistance to the continuance of British aggression were seen on every hand."(568) Initially, Georgia was reluctant to join the rebellion and sign a declaration of independence. The other colonies responded by ordering an embargo of all goods, but especially of arms and gunpowder, against Georgia. Once the legislature acted, the Continental Congress removed the embargo. Georgia applied to Congress for permission to export its indigo harvest and to import trade goods to pacify the Amerindians. Most of the 1775 legislative calendar was occupied with matters of governmental transition from the Crown to the Whigs. Wright, who had not received permission to withdraw and return to England, was powerless to stem the flow of power to the Whigs. Popular democracy took over, with three state congresses being elected in 1775 and a fourth in January 1776. The area of greatest governmental activity was the Committee of Safety. The provincial congresses had created and supervised the state committee of safety which, in turn, loosely supervised local committees. Most of the work of these committees was devoted to the reconstitution of the militia, appointment of officers, confirmation of commissions of existing officers, administration of loyalty oaths, contracting for arms and supplies, and securing of existing military supplies. It was ordered that muskets be purchased "as nearly [as possible] to the size recommended by the Continental Congress," and the Committee of Safety was authorized to place an initial order for 400 stands of arms with bayonets for the militia.(569) The militiamen insisted on electing their own officers, most of whom were refused confirmation by Governor Wright. Confirmation was then undertaken by the Committee of Safety. The Committee also had to negotiate an equitable settlement of a dispute between a company of rangers stationed on the frontier and backwoodsmen who, for unknown reasons, distrusted and had disarmed them. The committee required that the rangers take an oath of loyalty to the state and renounce their presumed loyalty to the governor. This done, the committee ordered that they be rearmed and returned to duty.
The Committee of Safety also recommended making certain changes in the basic militia act, but none of the first four congresses undertook to make the requested revisions. Legislative effort was directed at finding funding for the enormous expenses that the move to independence was requiring. The legislature also wished to direct agents to work at retaining the loyalty, or at least the neutrality, of the indigenous population. The British had agents hard at work among the Amerindians, and the legislature knew it had to act boldly to prevent a major Indian war. On 2 August a band of militiamen left Georgia and entered South Carolina and there took captive one Thomas Brown, reputed to be the natural son of Lord North, who it was thought had been sent to America to recruit a Tory militia. Taken to Augusta, Brown was tarred and feathered and forced to swear allegiance to the new nation. Released, he then attempted to recruit a Tory militia to avenge his maltreatment. The Sons of Liberty gathered a counter-force of perhaps 700 men. Brown had perhaps 150 men, and Governor Wright, unwilling to test the loyalty of what remained of his local troops, refused to act on Brown's behalf. Brown retired to South Carolina and eventually moved to St. Augustine, Florida.(570) The enrolled militia of Georgia in 1775 numbered 1000 men under the command of brigade generals Lachlan McIntosh and Samuel Elbert. This number remained constant despite the desertion of some men to the tories in 1776, 1777 and 1778. In the early years of the Revolution about 750 men had been drafted into, or had volunteered for service in, the Continental Line. In July 1778 the state could count 2000 men serving six-month enlistments in the Line. In that year the state also had 750 men enrolled as minutemen. By 1779 the British presence had reduced the number of men in the Line to about 750, while the state militia counted about the same number. In 1781 General Nathanael Greene enrolled from the militia a special brigade to serve with him, known as the Georgia Legion and commanded by General James Jackson.(571) In January 1776 South Carolina sent an urgent message, reporting that some British ships of war had arrived at Charleston to secure military supplies and were now headed for Savannah. The Committee of Safety ordered the militia to a state of readiness and called the militia units from other areas to assemble in Savannah. Fearing a British-inspired slave revolt, it ordered some militia to join with overseers to search the plantations near the seacoast, especially along the Savannah River, for weapons and ammunition. Militiamen were ordered to stand coastal watch for British activity. Four British men of war arrived by 18 January. Governor Wright attempted to persuade the Committee of Safety that all the British wanted was to purchase rice and other supplies. While such sales were in technical violation of the Continental Congress' embargo, selling the British what they wanted was far better than suffering occupation, Wright argued. The Whig leaders responded by arresting the royal council, Wright and others suspected of being Tories. After a few days, the Committee accepted their paroles that they would not communicate with the British ships' captains. Emboldened by the arrival of several more ships with 200 regular soldiers on board, Wright fled to their protection, made a final appeal to forget about independence, and embarked for England.
Hoping to escape blame for entering into armed conflict with the Whigs in the south, neither Wright nor the naval commander, Captain Barclay, was prepared to force the issue just yet. Barclay continued to attempt to purchase the supplies he needed. Meanwhile, on 12 January 1776 the provisional legislature enacted a militia law which made all able-bodied men in all parishes, towns and counties subject to enrollment in the militia. Colonel Drayton was empowered to issue orders for the precise terms of enlistment and training.(572) The legislature decided against enlisting indentured servants, but allowed apprentices to serve.(573) The militia continued to gather in Savannah, with perhaps as many as 700 men on hand. The legislature sent to South Carolina for assistance. Meanwhile, Governor Wright, now safely on board the man-of-war Scarborough, requested that Sir Henry Clinton dispatch 500 to 1000 British regulars to reestablish royal government in Georgia. If these troops arrived soon, Wright argued, the vast majority of the citizens of Georgia would resume their allegiance to the Crown. If the royal government abandoned Georgia, it would be very costly to return and reestablish governance, since the Whigs would be very active in firming up loyalties and suppressing Tories. Most citizens, he thought, had been panicked and intimidated by the few active Whigs in the colony. Wright thought that the patriot militia would retreat at the first show of force. Captain Barclay agreed with this assessment, but was unwilling to land the troops under his command without Clinton's specific orders, and his orders in hand required him to return to Boston. On 20 June the legislature, on recommendation of the militia officers, "ordered that every man liable to bear arms do Militia Duty in the Parish or District where he resides." There were no age limits noted in the decree, and exemptions were made only for those "who shall be enrolled in some Volunteer Company." The Georgia Provincial Congress appointed Colonel Lachlan McIntosh to command the state militia, assisted by Samuel Elbert and Joseph Habersham. By 28 April McIntosh had recruited 286 men, and within a few more weeks the active militia numbered at least 600. Estimates of 4000 men able to bear arms may have been optimistic, although technically the enrolled militia numbered that many. No more than one-half that number could be mustered at any given time if the colony was to survive economically.(574) The militia was organized into brigades under a brigade general and a major who served as brigade inspector, with a quartermaster and a captain who served the general; regiments commanded by colonels or lieutenant-colonels, with staffs of surgeons, quartermasters, paymasters and adjutants; and companies under captains, with a first and a second lieutenant, an ensign, four sergeants and 64 enlisted men. Additionally, there were drummers, fifers, color bearers and various other functionaries. One novel feature of the organization of the Georgia militia was its division into three parts in ordinary times and into two in times of emergency.
The practice had begun in July 1775, even before the militia had been fully organized, and was renewed on 8 January 1777.(575) One-third,(576) or under a state of emergency one-half, of the militia was actually on active duty at any one time, with the remainder being allowed to remain at home.(577) Active duty was for a "fortnight," after which the militia was rotated with those who had not served earlier.(578) In time of grave emergency the governor could order "that a draft be made of one-half [of the militia] and that they hold themselves in readiness to march at a moment's notice."(579) On 24 October 1781 the legislature resolved that "his honor the Governor be requested and empowered to order immediately the whole of the Militia of this State to join camp as soon as they can possibly be collected."(580) Because so many militiamen from the frontier owned their own riding horses, some of the militia were enlisted as mounted infantry.(581) Some militia were ordered to serve as scouts, primarily on the frontier, or against the British, as the situation required.(582) While the patriots were establishing their control over the state government, many Tories fled to British protection in Florida, from which they raided into Georgia. Just as the colonists in the north had delusions of grandeur, thinking of conquering Canada, so the patriots in the south thought of conquering Florida. The latest intelligence showed that in the autumn of 1775 only about 150 British regulars occupied St. Augustine. On 1 January 1776, the Continental Congress offered to underwrite the cost of capturing the British garrison. Through well-placed Tory spies, the British knew as much as the patriots about the planned expedition. There was much merit in the plan, for a successful invasion of Florida would sever the Amerindians from the British agents and end the cattle raids and pillaging of farms that tied down most militiamen. Lee estimated that he would need about 1000 men, of which Georgia was to supply 600 of its Continental Line and militia. It was September before the expedition got under way, by which time the British army had substantially strengthened its garrison in St. Augustine and also recruited many Amerindian warriors. Some of the troops reached St. John, where they laid waste the Tories' fields and farms. Few got farther south than Sunbury and none saw St. Augustine. Inclement weather, lack of transportation, and illness were the major impediments, although many militiamen were concerned about the increased pressure of Amerindian raids on their unprotected families on the frontier. The failure of the expedition did little to bolster the flagging spirits of the patriots. What it did do was to invite additional Tory raids on the outlying farms. By January 1777 intelligence reports indicated that the defenses at St. Augustine had been strengthened. British naval vessels controlled the port of Savannah. And on 18 February Captain Richard Winn surrendered his garrison of fifty men at Fort McIntosh on the Satilla River to British regulars and Tory militia. On 27 February 1776, the Continental Congress had created the Southern Military District, composed of Virginia, North Carolina, South Carolina, and Georgia, under the command of Major-general Charles Lee. South Carolina and Georgia came under the command of Lee's assistant, Brigadier-general John Armstrong. Lee ordered Armstrong to raise 2000 men, a wholly unrealistic number.
McIntosh's militia was inducted into national service and placed under Lee's command as a part of the Continental Line. When Lee arrived in Charleston, South Carolina, in the summer of 1776, McIntosh reported that raising six battalions in Georgia was quite impossible, although he did turn over command of four troops of cavalry. McIntosh pleaded Georgia's case to Lee. Warriors of the Creek Nation outnumbered the Georgia militia and were on very friendly terms with the British Indian agents. Raiders from Florida were already stealing cattle and other supplies and despoiling the backwoods. The British had a substantial military presence in St. Augustine, from which they supplied the natives and rewarded the raiders. As mounted and foot militia were drafted into national service and deployed where Lee thought best to use them, the frontiers, even Augusta and Savannah, lay open to attack. McIntosh hoped to use the mounted men to patrol the state's borders. Among his first priorities was cutting off contact between British agents and the Creeks. Lee decided to inspect conditions in Georgia personally. When Lee arrived in August, McIntosh was able to turn out 2500 militia in addition to his own command, now in Continental service. Lee suggested exchanging the Georgia Continental Line with men from another area, perhaps South Carolina, since most had Tory friends either locally or in Florida. Lee thought the militia to be unreliable for the same reason. In a letter to General Armstrong dated 27 August 1776, Lee was even more critical of the Georgia militia. The people here are if possible more harum scarum than their sister colony [i. e., South Carolina]. They will propose anything, and after they have proposed it, discover they are incapable of performing the least. They have proposed securing their Frontiers by constant patrols of horse Rangers, when the scheme is approved of they scratch their heads for some days, and at length inform you that there is a small difficulty in the way; that of the impossibility to procure a single horse -- . . . . Upon the whole I should not be surprised if they were to propose mounting a body of Mermaids on Alligators. . . .(583) As with most states, the lines of authority between state and national control over soldiers of the Continental Line were unclear and ill-defined. Most states absolutely denied any national control over their militias. Lee was concerned for security, especially about plans laid for punitive expeditions against Florida. As it was, his concerns were well-founded, although it is impossible to say whether the Line or the militia were the greater offenders. Probably, information leaked to Tories in Florida from the one merely buttressed information received from the other. Lee's recommendations for rotation of Georgia's troops in the Line with those from other states angered local authorities, who resented any intimation that there were secret Tories among their men. Congress decided to augment the local troops by dispatching a battalion of riflemen and another of mounted troops to Georgia. Upon receipt of that information, Lee decided to move troops from Virginia and North Carolina to Georgia, angering the authorities in North Carolina. When no resolution was forthcoming, North Carolina withdrew its troops from congressional command. The Continental Congress in November 1776 ordered the states to create magazines for gunpowder and storage facilities for other supplies the army would require, along with similar supplies for the state militias.
Georgia was able to supply its own needs, along with those of other states, for rice and salted meat. The Georgia Constitution of 1777 provided for a militia. Any county having 250 or more militiamen under arms was permitted to form one or more battalions. The governor acted as commander-in-chief of all militia and other armed forces of the state. As such, the governor could appoint superior militia officers.(584) There were a few religious dissenters in Georgia, mainly Mennonites, who had been welcomed and granted haven under Oglethorpe's governance, but Georgia made no provision for their exemption. Some religious dissenters decided to leave the province when they were not granted military exemptions. Other "persons in the backwoods settlements" decided that they could not withstand an attack from the natives if the latter were "seduced by British aims" and began to abandon their farms and homesteads. "The commanding officers of the Militia [are] to be directed to stop and secure the property of such persons as are about to depart the Province."(585) The legislature decided to create and maintain a show of force in Savannah, and so on 16 January it resolved to "order forthwith a draft of at least one-third of the militia within . . . [the] parishes and have them immediately marched to Savannah together with every other person who may choose to come down as a volunteer." Those mustering and undergoing training in Savannah were to be paid £0/1/6 per day.(586) On 8 June 1776 the legislature ordered the militia to "hire a number of negroes to finish in a more proper manner the intrenchments about [Fort] Sunbury."(587) The legislature guessed correctly that any British invasion of Georgia would originate in Florida and move against Sunbury. In June 1776 it began to draft militia to staff the fort, rotating the militia every fortnight. Rotation helped to prevent the boredom that accompanies garrison duty, and it allowed the militiamen to keep in touch with their families, businesses and farms. When "it appears that the frontiers of this State, from Information, is in danger of being distressed by the Indians," the legislature moved to create a band of specially trained militia, the frontier rangers, or ranging companies.(588) For frontier rangers, who were to respond to a call on a moment's notice, the division was by halves rather than thirds, largely because there were so few able-bodied men to defend the state.(589) On 29 May 1776 the legislature authorized the formation of "three companies of Minutemen as soon as they can be furnished with arms, to be stationed where they may protect the Inhabitants from Indians."(590) The Amerindians, having received presents and arms from the British in Florida, went on the warpath for the first time in Georgia. So severe were the depredations that on 24 September 1778 Colonel Williamson recruited 546 militiamen, virtually all the experienced frontiersmen in the state militia, to repel the Creeks and Cherokees.(591) The patriot militia of Georgia elected its own officers during the Revolution.(592) The pay of militiamen was wholly tied to the pay South Carolina granted its militiamen.
On 14 August 1779 the legislature ordered that pay "shall in every respect [be put] on the same footing that the South Carolina militia at present are."(593) The practice of using South Carolina's rate of pay for militia service antedated the Revolution, dating back to the time "when it was called out to suppress [slave] insurgents in South Carolina."(594) General Robert Howe, the new southern commander of the Continental Line, visited Savannah in March 1777, trying to recruit additional men. The Georgia light horse refused induction into federal service, leaving only the 400 men of the First Georgia Battalion in national service. Button Gwinnett called out the militia, hoping to assemble enough troops to mount another attack on British Florida and relieve pressures on the frontier. Howe, angered by his poor reception in Georgia, refused to detach any troops under his command in Charleston to assist. The British authorities, having received information about the planned expedition, roused the Creeks and some other Amerindians to ravage the frontier. By 1 May two groups embarked from Georgia, McIntosh's Continental Line making the voyage by water and Colonel John Baker's mounted militia making the trip overland. The militia arrived at St. John first and were immediately dispersed by the British regulars who were lying in ambush. McIntosh continued to experience difficulties in transit and abandoned the expedition on 26 May. The only tangible result was the confiscation of about a thousand head of cattle. Once again the Tories responded by raiding into Georgia in parties rarely exceeding 150 men. They sacked Augusta and came within five miles of Savannah. The militia seemed to be ineffective in dealing with the marauders. The legislature authorized the commissioning of bands of fifteen or more men to enter Florida and wreak what havoc they could. On 10 October 1777, Congress sent McIntosh, now a general, north to assume a new command and appointed Colonel Samuel Elbert to replace him in command of the Georgia Continental Line. Elbert inherited a command in which the troops had not received regular pay for some months and in which morale was low and desertions were high. The militia ignored the Line and refused induction into it. Early in 1778 there was again discussion about making the now annual expedition against St. Augustine. Elbert thought that he would need 1500 men to stand any chance of capturing St. Augustine, which meant he required both a substantial infusion of regular troops and a significant number of Georgia militiamen. Word reached Savannah that British Governor Tonyn had sent German immigrants into Georgia to recruit German-speaking settlers and that Loyalist Florida Rangers were again raiding cattle along the frontier. Intelligence reported that some 400 to 700 disaffected South Carolina Tories were migrating through Georgia on their way to the British settlement in Florida. Governor Houstoun sent the militia to intercept them, but no contact was ever made. By mid-April some 2000 troops were in readiness to invade Florida. Robert Howe commanded members of the Continental Line from South Carolina and Georgia; Colonel Andrew Williamson commanded the attached South Carolina militia; and Governor Houstoun took personal command of the Georgia militia, probably because most of those men had mustered in response to his direct appeal. The Whigs had a genuine opportunity to capture St.
Augustine, for they outnumbered the British forces by about two to one and were probably better equipped and in superior physical condition. The problems in this third expedition were at the command level. The headstrong Houstoun, barely 30 years of age and with no military experience, thought himself the senior officer and refused to accept orders from Howe. Following this example, Williamson announced that he would not accept orders from either Howe or Houstoun because his militia were independent of both national and Georgia state command. Although the Florida Rangers and their Tory and Amerindian allies retreated at the approach of the patriot force, Howe asked for and received permission to withdraw his men because of the problems of command. Congress decided that if another expedition were mounted against Florida, it would have to be undertaken with trained regulars. The Georgians, militia or soldiers of the Line, leaked too much information to the British. The militia had proven themselves unreliable and undisciplined in the three previous expeditions and would be left behind to defend the frontier from Tory and Amerindian attacks. Civilian Whig authorities had assumed throughout the early years that with some effort they could defeat the English and capture Florida. Military opinion on both sides generally agreed that neither side was strong enough to conquer the other. And if one side did win, it would not be able to hold on to the prize. Both sides had thus been reduced to raiding the cattle, food and other supplies of the other. The Whigs' punitive expeditions had done little more than cause the British to bribe the Amerindians to undertake massacres along the unprotected frontier. Because the militia was small, hesitant to leave its home areas undefended, ill organized and poorly led, it failed to perform its primary function of protecting the home folks. If anyone can be said to have come out ahead in this bloody game of attrition, it was probably Tonyn's Tories, Amerindians and Florida Rangers. The Rangers were loathed by the regulars because they were essentially the dregs of humanity who had been given a license to plunder, but they proved a more effective force than the patriot militia because of superior organization, better administration and superior weapons and supplies. Their destruction of homes, crops and supplies, combined with the stealing of livestock, caused a great deal of hardship among the patriots. The Georgia militiamen were poorly supplied, many being without blankets, canteens, knapsacks, shoes or firearms. They were paid in state currency which had little value and indeed was generally not accepted outside the colony. The well-supplied and equipped enemy received British currency, still accepted anywhere at face value. Most Georgians refused induction into the Continental Line, and some were reluctant to report to militia musters, fearing that they might be conscripted by recruiting agents for the Line. Poor leadership also contributed to poor morale, although the militiamen had to share a portion of the blame for that failure since they elected most of the officers. Georgia's Amerindian policy was generally a failure, although the Whigs did make attempts to pacify the natives by the usual methods of meeting with them, distributing gifts, pledging that their lands would be protected and guaranteeing their borders. The first Cherokee War of the Revolution began in the summer of 1776, although most of the fighting occurred in the Carolinas.
Eventually, the combined efforts of the militias of Georgia, North Carolina, South Carolina and Virginia defeated them. It was perhaps most important that the Creeks generally decided against allying with the Cherokees. After the Cherokees acknowledged defeat and signed the Treaty of DeWitt's Corner on 20 May 1777, there were few major problems with the natives. However, sporadic raids, largely incited by British agents, kept the frontier militia in a constant state of readiness. Georgia had been spared any direct military action in the Revolution until 1778, when the British moved against Savannah. When Sir Henry Clinton succeeded Sir William Howe as the British commander, he was determined to carry the war into the south. The British had planned to return ever since James Wright had been forced to flee the colony. Clinton's staff thought that it would require 5000 troops to capture Charleston, but only 2000 to take Savannah. Georgia thus became the logical place to begin the reduction of the southern colonies. On 27 November 1778, the British command sent Lieutenant-colonel Archibald Campbell of the 71st Scottish Regiment with 3000 British and Hessian regulars and four battalions of loyalists to accomplish the reduction of Georgia. General Robert Howe, then commander of the Southern Department, had a mixed force of about 1000 militia and regulars and could not withstand the assault. On 23 December Campbell arrived at Tybee Island near Savannah, landed unopposed, and routed Howe's defenders, capturing the city on 29 December. The remnants of the patriot army crossed into South Carolina. Meanwhile, General Augustine Prevost, marching northward from Florida, captured the remaining patriot militia and regulars at Fort Sunbury in early January 1779. Having eliminated both regular army units and patriot militia as a factor in Georgia, Campbell was uncertain what to do next. The home office had wished to test its theory that the tories of the southern states were just waiting for a chance to show their loyalty, and would do so in considerable numbers. So Campbell decided to spread his command and seek out loyalist supporters.(595) Howe was shortly thereafter replaced by Benjamin Lincoln (1733-1810). Howe delayed his departure to assist the Georgia militia, who were being pressed by British-induced Amerindian raids all along the frontier. The native American force was not large, but it was extremely mobile, massacring isolated settlements and striking from ambush. By the end of January Prevost's army had joined Campbell at Savannah, and Prevost assumed overall command of British forces in Georgia. The British, assisted by loyalists, occupied the most populous parts of the state within a few months. Heartily encouraged, Campbell made additional sorties into the back country of Georgia, but these proved to be as fruitless as the first action was successful. Under the protection of the British army, James Wright returned to occupy the governor's office. There were only a few military actions of note. William Moultrie, with Georgia and Carolina militia, successfully defended Port Royal, South Carolina, in early February. The patriot militia under Colonel Andrew Pickens won a small battle over loyalist militia at Kettle Creek, Georgia, on 14 February 1779. A certain Colonel Boyd had recruited about 700 loyalist militia and marched south to join Campbell's regulars in Georgia.
Colonel Andrew Pickens gathered some 400 militia and surprised the tories at Kettle Creek, killed Boyd and about forty of his men, wounded and captured another 150, and scattered the remainder. Pickens took his prisoners back to South Carolina, where five leaders were hanged as traitors, another 65 were condemned but pardoned, and the others were forced to take an oath of loyalty to the republic.(596) The patriots lost an engagement at Brier Creek on 3 March, where General John Ashe (1720-1781) led the patriot militia, which lost 350 men while inflicting only twenty casualties on the mixed British and tory force. Leaving Campbell in command at Savannah, Prevost moved northward into South Carolina. Meanwhile, Major-general Benjamin Lincoln rallied the patriot army and moved to Purysburg, about fifteen miles from Savannah. The swamps surrounding Lincoln's army inhibited Prevost's movements, and not wanting to become entrapped in such hostile territory, Prevost sent Major Gardiner to Port Royal Island. Lincoln sent General William Moultrie, who led the Georgia militia against Gardiner; Gardiner withdrew and returned to Savannah. Washington detached a corps of the Continental Line under General Benjamin Lincoln to support the militia in an assault on Augusta on 23 April. Prevost had moved his army northward along the coast toward Charleston, South Carolina, hoping that loyalists could retain control in Georgia. Upon learning of Lincoln's arrival, he moved south. Lincoln's army met Prevost at Stono Ferry on 19 June. Lincoln suffered 300 casualties against 130 inflicted on the British, thus allowing Prevost to retain Savannah. Emboldened by French support, the patriots made a desperate assault on Savannah in October 1779, but were repulsed. Still, the British controlled only the area immediately surrounding Savannah, and the tories had been disheartened. The British finally withdrew from Savannah in 1782 as a result of patriot pressure to the north. The Georgia loyalist militia could not withstand patriot pressure and quickly disbanded and fled. The Georgia militia probably fell farther short of the commonly accepted goals of colonial militia than did the militias of the other states. It was highly ineffective in stopping the constant raids from Florida, did not fill the ranks of the Continental Line, and did very little to contain the native Americans. On the latter front, it was fortunate that other militias were successful in breaking the power of the Cherokees.

NOTES

1. Edward W. James, ed., The Lower Norfolk County, Virginia, Antiquary. 5 vols. New York: Peter Smith, 1951, 1: 104. 2. Williams v. State, 490 S.W. 117 at 121. 3. Don Higginbotham, Daniel Morgan. Chapel Hill: University of North Carolina Press, 1961, 132-33; Hugh F. Rankin, Francis Marion. New York: Capricorn, 1973; John R. Alden, The South in the Revolution, 1763-1789. Baton Rouge: Louisiana State University Press, 1957, 267; Robert Pugh, "The Revolutionary Militia in the Southern Campaign, 1780-81," William and Mary Quarterly, 3d series, 19: 154-75. 4. William L. Shea. The Virginia Militia in the Seventeenth Century. Baton Rouge: Louisiana State University Press, 1983, 136-40. 5. Virginia Charter of 1606 in Benjamin P. Poore, ed. The Federal and State Constitutions, Colonial Charters and Other Organic Laws of the United States. 2 vols. Washington: U. S. Government Printing Office, 1877, 2: 1891. 6. Virginia Charter of 1612, in Ibid., 2: 1906. 7. Travels and Works of Captain John Smith. Edward Arber and A. G. Bradley, eds. 2 vols.
Edinburgh: Grant, 1910, 2: 433-34. 8. Records of the Virginia Company of London. S. K. Kingsbury, ed. 4 vols. Washington, D.C.: U.S. Government, 1906-35, 3: 21-22, 27, 220. 9. R. Hamor, A True Discourse on the Present State of Virginia . Richmond, Va.: Virginia State Library, 1957, 5-16; D. B. Rutman, "The Virginia Company and its Military Regime," in D. Rutman, ed. The Old Dominion. Charlottesville: University of Virginia Press, 1964, 1-20. 10. Quoted in Congressional Record, Executive Document 95, 48th Congress, Second Session. 11. Quoted in Congressional Record, Executive Document 95, 48th Congress, Second Session. 12. William Shea, "The First American Militia," Military Affairs, : 15-18; Records of the Virginia Company, 3: 164-73. 13. Statutes at Large, Being a Collection of All Laws of Virginia. W. W. Hening, ed. 13 vols. Richmond: State of Virginia, 1818-23, 1: 114. 14. Hening, Statutes at Large, 1: 121-29. 15. R. A. Brock, ed. Virginia Company of London, 1619-1624. 2 vols. Richmond: Virginia Historical Society, 1889, 2: 7, 9. 16. Act XXII of 25 September 1622, Hening, Statutes at Large, 4: 127-29. 17. Hening, Statutes at Large, 1: 122-23. 18. Records of the Virginia Company, 4: 580-84; Minutes of the Council and General Court of Colonial Virginia, 1622-1632. Richmond, Va.: State of Virginia, 1924, 18. 19. Journals of the House of Burgesses, 1619-1777. 30 volumes. Richmond: State of Virginia, 1905-15, 1: 52-53. 20. Hening, Statutes at Large, 1: 140-41, 153. 21. In Virginia Magazine of History and Biography, 2: 22-23. 22. Hening, Statutes at Large, 1: 167, 174, 176, 219. 23. Lower Norfolk Country Minute Book, 1637-1646. manuscript, Virginia State Library, 35, 39, 99. 24. Hening, Statutes at Large, 1: 224, 226. 25. Raoul F. Camus. Military Music of the American Revolution. Chapel Hill: University of North Carolina Press, 1976, 40. 26. "Instructions to Sir William Berkeley," 1642, in Virginia Magazine of History and Biography, 2: , 281-88. 27. Hening, Statutes at Large, 1: 219, 285. 28. S. M. Ames, ed. County Court Records of Accomack--Northampton Counties, Virginia, 1640--1645. Richmond: Virginia Historical Society, 1973, 268. 29. Hening, Statutes at Large, 1: 263. 30. See William L. Shea, "Virginia at War, 1644-46," Military Affairs, : 142-47. 31. General Court Session of 23 May 1677. 32. Hening, Statutes at Large, 1: 292-93. 33. Hening, Statutes at Large, 1: 293, 315-19. 34. See Northumberland County, Virginia, Order Book 2. Manuscript, Virginia Historical Society, 13. 35. Wesley Frank Craven, "Indian Policy in Early Virginia," William and Mary Quarterly, third series, 1: 73-76; Hening, Statutes at Large, 1: 140-41, 292-93, 323-26, 355. 36. Hening, Statutes at Large, 1: 393-96. 37. Northumberland County, Virginia, Order Book 2, 8. 38. Act XXIV of 1658-59, Hening, Statutes at Large, 1: 525. 39. Hening, Statutes at Large, 1: 515; 2: 34-39. 40. Hening, Statutes at Large, 2: 15. 41. Hening, Statutes at Large, 1: 185, 193. 42. Thomas Ludwell, "Description of Virginia," 17 September 1666, a report to the Lords of Trade, in Virginia Magazine of History and Biography, 5 : 54-59. 43. Hening, Statutes at Large, 2: 237, 336. 44. Thomas J. Wertenbaker. Virginia Under the Stuarts, 1607-1688. Princeton: Princeton University Press, 1914, 99-100. 45. Hening, Statutes at Large, 2: 326-36, 341-50; Wilcomb E. Washburn. The Governor and the Rebel: A History of Bacon's Rebellion in Virginia. Chapel Hill: University of North Carolina Press, 1957. 46. Hening, Statutes at Large, 2: 341. 47. 
Hening, Statutes at Large, 2: 326-36; 341-50. 48. Camus, Military Music of the American Revolution, 41. 49. Hening, Statutes at Large, 2: 336, 410, 439, 491-92. 50. Nathaniel Bacon, quoted in Thomas J. Wertenbaker. Torchbearer of the Revolution: The Story of Bacon's Rebellion and its Leader. Princeton: Princeton University Press, 1940, p. 135. 51. Bacon's rebellion was widely held a generation ago to have been a political event, an early revolution undertaken to ensure the rights of Englishmen, and so on. It was brought on by the despotic conduct of a tyrannical governor who had illegally and unjustly raised taxes without the consent of the governed. See, for example, Thornton Anderson, "Virginia: The Beginnings" in his Development of American Political Thought. New York: Appleton-Century-Crofts, 1961, 1-18. It is now viewed quite differently. Berkeley was the just defender of the peaceful Amerindians who wanted to prevent a mad, bloodthirsty and covetous bigot from exterminating a whole race. Berkeley wanted only to punish the wrong doers on the Indian side while protecting the vast majority who were peace loving brothers. See, for example, "Bacon's Rebellion," in Thomas C. Cochran and Wayne Andrews, Concise Dictionary of American History. New York: Scribner's, 1962, 79. 52. Quoted in Virginia Magazine of History and Biography, 1 [1893-94]: 2. 53. "Causes of Discontent in Virginia, Isle of Wright," numbers 7 and 8, 1676, Virginia Magazine of History and Biography, 2: 381-92. See also the statement on the same subject by the inhabitants of Surry County, in Ibid., 2: 170-73. 54. Hening, Statutes at Large, 2: 513. 55. Hening, Statutes at Large, 2: 233-45. 56. Hening, Statutes at Large, 2: 481. 57. Hening, Statutes at Large, 3: 335-36, 459. 58. An Act for the better supply of the country with armes and ammunition, Hening, Statutes at large, 3: 13-14; 36 Charles II act iv, April 1684. 59. Quoted in Virginia Magazine of History and Biography, 2: 263-64. 60. Camus, Military Music, 41. 61. Hening, Statutes at Large, 1: 526. See also "The Randolph Manuscript," Virginia Magazine of History and Biography, 20 : 117. 62. Shea, Virginia Militia, 122-35, 140. 63. Hening, Statutes at Large, 3: 69; Pallas v Hill , Hening and Mumford Reports, 2: 149. 64. Great Britain. Public Records Office Records: Colonial, 4: 1306. 65. Beverly Fleet, ed. Virginia Colonial Abstracts. Richmond, Va.: Fleet, n.d., 6: 14. 66. "Charges Against Governor Nicholson," Virginia Magazine of History and Biography, 3: 373-82. 67. John Shy. Toward Lexington. Princeton: Princeton University Press, 1965, 11; R. A. Brock, ed. Official Letters of Alexander Spotswood. 2 vols. Richmond: Virginia Historical Society, 1: 131-33, 194, 197, 204-07. 68. "Journal of John Barnwell," Virginia Magazine of History and Biography, 6 : 50. 69. Virginia State Papers, 1: 152. 70. Hening, Statutes at Large, 3: 335-42. 71. Alexander Spotswood, "Letter to the Lords, Commissioners of Trade," The Official Letters of Alexander Spotswood. R. A. Brock, ed. 3 vols. Richmond, Va.: State of Virginia, 1882-85, 2: 37, 194-212. 72. Letter to the Lords, Commissioners of Trade, Spotswood Letters, 2: 37, 194-212. 73. Virginia Gazette, 14 December 1737. 74. Spotswood Letters, 1: 163. 75. Spotswood Letters, 2: 140. 76. Spotswood Letters, 2: 209-10. 77. Journal of the House Burgesses, 1629-1677. 30 vols. Richmond: State of Virginia, 1905-15, August 9, 1715. 78. Spotswood Letters, 1: 210-13. 79. 
Spotswood, Letters, 1: 121, 130-35, 141-45, 166-67; Hening, Statutes at Large, 4: 10. 80. Spotswood, Letters, 1: 130. 81. Hening, Statutes at Large, 3: 343, 464-69; Spotswood, Letters, 1: 167. 82. Spotswood, Letters, 1: 169-72, 2: 19-25. 83. See Spotswood to Lords of Trade, especially letter of 9 May 1716, Spotswood, Letters, 2: 25, 121, 145. 84. Hening, Statutes at Large, 4: 103, 405, 461. 85. Spotswood, Letters, 2: 227. 86. Hening, Statutes at Large, 4: 118-19, 130-31. 87. Hening, Statutes at Large, 4: 119. 88. Hugh Jones. Present State of Virginia. London, 1724; see also College Catalogue of William and Mary, 1855, 5-10. 89. Virginia Gazette, 7 November 1754, supplement. 90. Original documents reported in Virginia Magazine of History and Biography 3 : 119. 91. Minute Book, King and Queen County, 5: 47. 92. William Byrd's History of the Dividing Line Betwixt Virginia and North Carolina. edited by William K. Boyd. Raleigh, N. C., 1929, 116. 93. Hening, Statutes at Large, 5: 16-17; 6: 533; 7: 95. 94. Pennsylvania Archives. J. H. Linn and W. H. Egle, eds. 119 vols in 9 series. Harrisburg: Commonwealth of Pennsylvania, 1852-1935. [Hereinafter, Pa. Arch, with series first, then vol. number, followed by page number). 1 Pa. Arch. 1: 581-83, 616-19; Archives of Maryland. W. H. Browne et al., eds. 72 vols. Annapolis: state of Maryland, 1883-. [Hereinafter Md. Arch.]. 28: 193-99, 224; Pennsylvania Colonial Records. 16 vols. Harrisburg: Commonwealth of Pennsylvania, 1852-60. [Hereinafter Pa. Col. Rec.], 4: 455-56. 95. William A. Foote, "The Pennsylvania Men of the American Regiment," Pennsylvania Magazine of History and Biography, 88 : 31-38; New York Weekly Journal, 17 January 1743. 96. Great Britain. Public Records Office, Colonial Office. Roll 5: 1325, 235, 237-39. 97. Boston News Letter, 18 December 1746; Maryland Gazette, 21 October 1746. 98. Boston News Leader, 9 August 1753 and 28 March 1754; Virginia Gazette, 23 February 1754; Pennsylvania Gazette, 12 March 1754; Maryland Gazette, 14 March 1754. 99. Boston News Letter, 21 and 28 March 1754. 100. See, for example, Pennsylvania Gazette, 10 May 1753. 101. Robert Dinwiddie served as governor of Virginia from 20 November 1751 until January 1758. His earliest appointment seems to have dated from 1727 when he was appointed collector of customs for Bermuda. He was promoted to surveyor of customs for all American colonies. Worn out by the performance of his duties in the Seven Years' War, he returned to England in 1758 and died in July 1770. Official Records of Governor Robert Dinwiddie. R. A. Brock, ed. 2 vols. Richmond: State of Virginia, 1883-84. 102. Hening, Statutes at Large, 6: 530-33. 103. Virginia Gazette, 19 July 1754. 104. "A Proclamation for Encouraging Men to Enlist in his Majesty's Service for the Defence and Security of this Colony." Hening, Statutes at Large, 7. 105. Hening, Statutes at Large, 6: 438. 106. Brock, Dinwiddie Papers, 1: 344. See also Dinwiddie to Colonel Jefferson, 5 May 1756, in Ibid., 1: 405. 107. Dinwiddie Papers, 1: 515. 108. Dinwiddie to Charles Carter, 18 July 1755 in Dinwiddie Papers, 2: 101. 109. Edward Braddock to Robert Napier, 17 March 1755, in Stanley Pargellis, editor. Military Affairs in North America, 1748-1765. Hampden, Ct.: Anchor, 1969, 78. 110. Dinwiddie Papers, 2: 67, 93. 111. Waddell, Annals of Augusta County, 112. 112. Dinwiddie Papers, 2: 100-200. 113. Hening, Statutes at Large, 6: 550-51. 114. Dinwiddie Papers, 2: 207-10. 115. 
The Writings of George Washington from the Original Manuscript Sources, 1745-1799. edited by Jacob E. Cooke and John C. Fitzpatrick. 39 volumes. Washington: Washington Bicentennial Commission, 1931-44, 1: 235. 116. Writings of Washington, 1: 399-400. 117. Writings of Washington, 1: 416. 118. Washington to Dinwiddie, 9 November 1756, Writings of Washington, 1: 493. 119. Writings of Washington, 1: 99. 120. Writings of Washington, 1: 158-59. 121. Writings of George Washington, 1: 188. 122. Writings of Washington, 1: 202. 123. Washington to Dinwiddie, 15 May 1756, Writings of Washington, 1: 371. 124. Dinwiddie Papers, 2: 197-200. 125. Hening, Statutes at Large, 6: 631-48. 126. Dinwiddie Papers, 2: 344-45. 127. Dinwiddie Papers, 1: 41. 128. Virginia Magazine of History and Biography, 1: 287. 129. Dinwiddie to Washington, 8 May 1756, in Dinwiddie Papers, 1: 406-08. 130. Pennsylvania Gazette, 15 April 1756. 131. Maryland Gazette, 30 January 1755. 132. Maryland Gazette, 12 September 1754. 133. Boston News Letter, 3 January 1754. 134. Boston News Letter, 6 September and 6 December 1753 and 3 January 1754. 135. Journals of the House of Burgesses, 1756-1758, 346, 356-61; Dinwiddie Papers, 2: 390; Preston Papers, 1QQ: 131-36. 136. Hening, Statutes at Large, 7: 17. 137. Boston News Letter, 13 May 1756. 138. Dinwiddie to Henry Fox, 10 May 1756, in Dinwiddie Papers, 1: 408-10. 139. Journal of the House of Burgesses, 1758-1761, 379-81. 140. Preston Papers, 1QQ: 131-33; Journal of the House of Burgesses, 1756-58, 499. 141. Dinwiddie to County Lieutenants, 5 May 1756, in Dinwiddie Papers, 1: 404. 142. Dinwiddie to Sharpe, 24 May 1756, in Dinwiddie Papers, 1: 426-28. 143. Dinwiddie to Washington, 27 May 1756, in Dinwiddie Papers, 1: 422-24. 144. Dinwiddie to Abercrombie, 28 May 1756, in Dinwiddie Papers, 1: 424-26. 145. Marion Tinling, ed. Correspondence of the Three William Byrds of Westover, Virginia. 2 vols. Richmond: Virginia Historical Society, 1977, 2: 616. 146. Hening, Statutes at Large, 7: 93-95. 147. Writings of Washington, 1: 354-59. 148. See Dinwiddie to Thomas Jefferson, 5 May 1756, in Dinwiddie Papers, 1: 405. 149. in Louis K. Koontz. The Virginia Frontier. Baltimore: Johns Hopkins University Press, 1925, 85, 176. 150. Dinwiddie Papers, 2: 476. 151. Dinwiddie to Loudoun, 28 October 1756, Dinwiddie Papers, 1: 532-34. 152. Dinwiddie Papers, 2: 581-92; Military Grants, French and Indian War, in Virginia Land Office; Preston Papers, 13, 18, 26. 153. Dinwiddie Papers, 2: 620-23. 154. Dinwiddie to James Atkin, 16 June 1757, Dinwiddie Papers, 1: 640. 155. Dinwiddie to Pitt, 18 June 1757, Dinwiddie Papers, 1: 641-42. 156. The Act of the Assembly, Now in Force in the Colony of Virginia. Williamsburg: Rind, Purdie and Dixon, 1769, 334-42. 157. An Act for Reducing the Several Acts for Making Provision against Invasions and Insurrections into One Act, Acts of the Assembly, 342-44. 158. Hening, Statutes at Large, 7: 172-73; "Memorial to the House of Burgesses, 3 April 1758," in Legislative Journal, 3: 1183. 159. George Reese, ed. Official Papers of Francis Fauquier, 1758--1768. 3 vols. Richmond: Virginia Historical Society, 1980, 2: 168. 160. Koontz, Virginia Frontier, 293. 161. Preston Papers, 54. 162. Journal of the House of Burgesses, 1758-1761, appendices; Hening, Statutes at Large, 7: 492-93. 163. Preston Papers, 55. 164. Draper mss, 2QQ44; Withers, Border Warfare, 99; Koontz, Virginia Frontier, 288; Waddell, Annals of Augusta County, 198-99. 165. Ibid., 263, 289. 166. Idib., 292-93. 167. Howard H. 
Peckham. Pontiac and the Indian Uprising. Princeton: Princeton University Press, 1947, 214-17. 168. Amherst to Lt.-gov. Fauquier, 29 August 1763, Collections of the Michigan Pioneer and Historical Society, 19 : 228-29. 169. Hening, Statutes at Large, 7: 93-106; 274-75; 8: 241-45, 503; The Acts of the Assembly nowe in Force in the Colony of Virginia. Williamsburg: Rind, Purdie and Dixon, 1769, 474-76. 170. Virginia Gazette, 13 November 1766. 171. Dunmore was the last royal governor of Virginia. He opened up the Ohio Territory by defeating the Shawnee in Dunmore's War of 1774. He fought constantly with the House of Burgesses and finally had to flee to a British man of war. He returned to England in July 1776 and later served as governor of the Bahamas, 1787 to 1796. 172. For example, in Rivington's New York Gazette, 17 November 1774. 173. Virgil A. Lewis. History of the Battle of Point Pleasant. Charleston, W. Va., 1909; E. O. Randall, "Lord Dunmore's War," Ohio Archaeological and Historical Publications, 11 : 167-97; R. G. Thwaites and Louise P. Kellogg, eds. Documentary History of Dunmore's War. Madison: Wisconsin State Historical Society, 1905. 174. William Wirt Henry, ed. Patrick Henry: Life, Correspondence and Speeches. 3 vols. New York: Scribner's, 1891, 1: 252. 175. American Archives. Peter Force, ed. 9 vols. in series 4 and 5. Washington: U. S. Government, 1837-53. 4 Amer. Arch. 2: 1211-15. 176. Robert Douthat Meade. Patrick Henry. 2 vols. Philadelphia: Lippincott, 1957-62; Moses Tyler C. Patrick Henry. New York: Houghton Mifflin, 1915; Richard R. Beeman. Patrick Henry. New York: McGraw-Hill, 1974; Norine Dickson Campbell. Patrick Henry, Patriot and Statesman. New York: Devin-Adair, 1969. 177. Henry, Patrick Henry, 1: 257-58. 178. R. A. Brock, "Eminent Virginians," in Historical and Geographical Encyclopedia. New York: Hardesty, 1884, 348. 179. Henry, Patrick Henry, 1: 258. 180. Edmund Randolph, ms. in Virginia Historical Society. 181. 4 Amer. Arch. 1: 881. 182. Henry, Patrick Henry, 1: 279. 183. Henry, Patrick Henry, 1: 156. 184. Letter from Virginia, 1 July 1775, Morning Chronicle and London Advertiser, 21 August 1775. 185. Henry, Patrick Henry, 1: 280. 186. Letter from Virginia, dated 16 April 1775, London Chronicle, 1 June 1775. 187. J. T. McAllister, The Virginia Militia in the Revolutionary War. Hot Springs, Va.: McAllister, n. d., 7. 188. Revolutionary Virginia: The Road to Independence. Brent Tarter, ed. 8 vols. Charlottesville: University of Virginia Press, 1983, 7: 515. 189. Virginia Magazine of History and Biography, 22 : 57. 190. 5 Amer. Arch. 3: 52. 191. Ordinances of the Convention, July 1775, 1: ch. 1: 33, 34, 35. 192. Ordinances of the Convention, July 1775, 33. 193. Ibid., 34. 194. Julian P. Boyd, ed. Papers of Thomas Jefferson. Princeton: Princeton University Press, 1950-, 1: 268. 195. "Letters from Virginia, 1774-1781," Virginia Magazine of History 3: 159. 196. Each man enlisted was to be equipped with a hunting shirt, a pair of leggings and a proper arm at the public expense. If the men provided their own weapon they were to receive an additional allowance of 20 shillings per years. Hening, Statutes at Large, 9: 18. 197. Lenora H. Sweeny. Amherst County, Virginia, in the Revolution. Lynchburg, Va.: Bell, 1951, 3. 198. Benson J. Lossing. Pictorial Field Book of the Revolution. 2 vols. New York: Harper, 1851-52, 2: 536. 199. Letter from Philadelphia, 12 March 1776, London Gazetteer and New Daily Advertiser, 16 May 1776. 200. McAllister, Virginia Militia, 5. 201. 
Henry, Patrick Henry, 1: 317. 202. Letter of Jasper Yates to James Wilson, 30 July 1776, "You may recollect that sometime ago the Convention of Virginia resolved that 200 Indians should be inlisted by John Gibson in the service of that Colony." in Pennsylvania Magazine of History and Biography, 39 : 359. 203. John Page to Jefferson, 24 November 1775, Jefferson Papers, 1: 266. 204. Hening, Statutes at Large, 9: 30ff. 205. Hening, Statutes at Large, 9: 28-29, 139-41; Revolutionary Virginia, 7: 505. 206. Revolutionary Virginia, 7: 597. 207. Revolutionary Virginia, 7: 633. 208. Virginia Gazette, 23 September 1775; Henry, Patrick Henry, 1: 319. 209. Hening, Statutes at Large, 9: 9-10; Virginia Gazette, 1 April 1775. 210. Robert G. Albion and Leonidas Dodson, eds. Journal and Letters of Philip Vickers Fithian, 1773-1774. Williamsburg: Colonial Williamsburg Foundation, 1943, 24. 211. Hening, Statutes at Large, 9: 267-68. 212. Humphrey Bland. A Plan of Military Discipline. Several editions were published. Washington specified the London edition of 1762. 213. Turpin de Crisse (1715-1792). An Essay on the Art of War. London, 1755. The count was among the most distinguished scholars of military history of his day. He had translated and interpreted many of the more important works of antiquity. He served in Belgium, Holland and France under Marshal Saxe. 214. Roger Stevenson. Military Instructions for Officers, "lately published in Philadelphia" [Philadelphia, 1775]. Washington had a copy of this work in his library at Mount Vernon Library. 215. M. de Jeney. The Partisan. London, 1760. 216. William Young. Essays on the Command of Small Detachments. 2 vols. London, 1771. 217. Thomas Simes. The Military Guide for Young Officers. Philadelphia, 1776. This work was published in two volumes, the second a military dictionary; the first a military scrap book containing many quotations from other works, such as Bland and Saxe. 218. Friedrich Kapp. The Life of Major-General Frederick William von Steuben. New York: Mason, 1859, 130. 219. John W. Wright. Some Notes on the Continental Army. Vails Gate, N. Y.: National Temple Hill Assn., 1963, 2-4. 220. Henry, Patrick Henry, 1: 327, 337. 221. Technically, Patrick Henry carried a commission which read, "colonel of the first regiment of regulars, and commander in chief of all forces raised for the protection of this colony" in fact Henry was too heavily involved in the politics of the Convention and political organization of the state to command any force. There was an apparent contradiction between Woodford's and Henry's commissions, but there was little opportunity for clash because each man was too busy at his own task. Henry, Patrick Henry, 1: 338-39. 222. William Woodford to Convention, 7 December 1775, in Henry, Patrick Henry, 1: 335-37. 223. Henry, Patrick Henry, 1: 429. 224. Virginia Bill of Rights, 1776, in Poore, Constitutions, 2: 1909. 225. The Papers of Thomas Jefferson. Julian P. Boyd and others, eds. 20 vols. Princeton: Princeton University Press, 1950-, 1: 344-45. 226. Jefferson Papers, 1: 353. 227. Jefferson Papers, 1: 363. 228. Virginia Constitution of 1776, in Poore, ed. Federal and State Constitutions, Colonial Charters, and Other Organic Laws of the States, Territories and Colonies, Now and Heretofore Forming the United States of America. Washington: U.S. Government Printing Office, 1909, 2: 1911. 229. 
Clark was born near Charlottesville, Virginia, on 19 November 1752, and in 1775 moved to the Kentucky territory, where he organized militiamen to defend their homes. After the war, Clark returned to Louisville, where he lived until his death on 13 February 1818. 230. Journal of the Virginia Council, 23 August 1776. 231. Revolutionary Virginia, 7: 306. 232. Revolutionary Virginia, 7: 552. 233. Edmund Pendleton to James Madison, 11 December 1780, in D. J. Mays, ed. The Letters and Papers of Edmund Pendleton, 1734-1802. 2 vols. Richmond: Virginia Historical Society, 1967, 1: 186. 234. Henry, Patrick Henry, 3: 13-15. 235. Revolutionary Virginia, 7: 548-49. 236. Alexander Brown. The Cabells and Their Kin. Richmond: privately printed, 1896, 124. 237. Revolutionary Virginia, 7: 625, 695. 238. "Two Letters of Colonel Francis Johnson," this one dated 14 June 1776, in Pennsylvania Magazine of History and Biography, 39 : 302. 239. D. J. Mays, ed. Letters and Papers of Edmund Pendleton, 1734-1803. 2 vols. Richmond: Virginia Historical Society, 1967, 2: 223. 240. Proceedings of the Virginia Historical Society, New Series, 11 : 346. 241. Jefferson Papers, 4: 664. 242. According to Sabine, American Loyalists, II, 146-47, William Panton of Georgia was the principal agent for British arms sent the Cherokee, telling them that "these guns were to kill Americans and that he would rather have them applied to that use than to the shooting of deer." 243. William Christian to Patrick Henry, 23 October 1776, in Henry, Patrick Henry, 3: 25-29. 244. Willie Jones, President of the North Carolina Council, Halifax, to Patrick Henry, 25 October 1776, Ibid., 3: 29-30. 245. Patrick Henry to Richard Henry Lee, 28 March 1777, in Henry, Patrick Henry, 1: 515. 246. Maud Carter Clement, History of Pittsylvania County, Virginia. Lynchburg, Virginia: Bell, 1929, 142. 247. Henry, Patrick Henry, 1: 483. 248. Resolution of Legislature, 21 December 1776, in Henry, Patrick Henry, 1: 502-04. 249. 5 Amer. Arch. 3: 1425. 250. Patrick Henry to George Washington, 29 March 1777, in Henry, Patrick Henry, 1: 516-17. 251. Virginia Gazette, 21 February 1777. 252. Patrick Henry to George Washington, 29 March 1777, in Henry, Patrick Henry, 1: 516-17. 253. Hening, Statutes at Large, 9: 275. 254. Patrick Henry to the Lieutenant of Montgomery County, 10 March 1777, in Henry, Patrick Henry, 3: 44. 255. Patrick Henry to Thomas Johnson, 31 March 1777, in Henry, Patrick Henry, 3: 51-53. 256. Stuart, "Memoir of the Indian Wars," Collection of the Virginia Historical and Philosophical Society, 1: 1. 257. McDowell to Jefferson, 20 April 1781; Moffet to Jefferson, 5 May 1781, Jefferson Papers, 5: 507, 603-04. 258. Hening, Statutes at Large, 9: 267-68. 259. Proceedings of the Virginia Historical Society. . New Series, 11: 346. 260. Patrick Henry to Richard Henry Lee, 20 March 1777, in Henry, Patrick Henry, 1: 514. 261. Henry, Patrick Henry, 1: 518-19. 262. Calendar of Virginia State Papers, 2: 301, 398 144, 173, 260, 234 & 232; M. C. Clement, History of Pittsylvania County, 170-87. 263. D. J., Mays, ed. Letters and Papers of Edmund Pendleton, 1734--1803. 2 vols. Richmond: Virginia Historical Society, 1967, 1: 221. 264. Executive Journal, 18 August 1777, Virginia Historical Society 61; Patrick Henry to George Washington, 29 October 1777, in Henry, Patrick Henry, 1: 541-42. 265. Hening, Statutes at Large, 9: 373. 266. George Washington to Patrick Henry, 13 November 1777, in Henry, Patrick Henry, 1: 542-44. 267. Hening, Statutes at Large, 9: 445. 268. 
Patrick Henry to Congress, 18 June 1777, in Henry, Patrick Henry, 3: 177. 269. John Wilson to Patrick Henry, 20 May 1778, in Henry, Patrick Henry, 3: 169-70. 270. Patrick Henry to Benjamin Harrison, 21 May 1778, in Henry, Patrick Henry, 3: 167-69. 271. Journal of the Executive Council, 28 June 1777, 30, Virginia Historical Society. 272. 1 Pa. Arch. 6: 18. 273. Journal of the Executive Council, 19 February 1778, Virginia Historical Society. 274. Executive Journal, 1778, 227, 273; Annals of Augusta County, 164, 18 April and 5 May 1778. 275. Patrick Henry to Congress, 8 July 1778; Congress to Henry, 6 August 1778, in Henry, Patrick Henry, 1: 578-79; 3: 189. 276. John Bakeless. Background to Glory. Philadelphia: Lippincott, 1957; James A. James The Life of George Rogers Clark. Chicago: University of Chicago Press, 1928. 277. Hening, Statutes at Large, 9: 374-75. 278. Executive Journal, 2 January 1778. 279. Patrick Henry to George Rogers Clark, 2 January 1778, in Henry, Patrick Henry, 1: 588. 280. Patrick Henry to Richard Henry Lee, 19 May 1779, in Henry, Patrick Henry, 2: 30-31. 281. Executive Journal, 303. 282. Executive Journal, 305. 283. Henry, Patrick Henry, 2: 7. 284. Henry to Henry Laurens, 28 November 1778, in Henry, Patrick Henry, 2: 21-23. 285. Patrick Henry to George Washington, 13 March 1779; Arthur Campbell to Patrick Henry, 15 March 1779, in Henry, Patrick Henry, 2: 23; 3: 231. Isaac Shelby was born in Washington County, Maryland, on 11 December 1750. He became a leader of patriot militia in the Carolinas. About 1783 he moved to Kentucky and became its first governor when it was admitted to statehood. In the War of 1812 he organized band of militia and volunteers some 4000 strong and defeated the British army at the Battle of the Thames on 15 October 1813. He died on 18 July 1826. Sylvia Wrobel and George Grider. Isaac Shelby: Kentucky's First Governor and Hero of Three Wars. Danville, Ky.: Cumberland Press, 1974. 286. John G. Patterson, "Ebenezer Zane, Frontiersman," West Virginia History, 12 . 287. Patrick Henry to Richard Henry Lee, 19 May 1779, in Henry, Patrick Henry, 2: 30-31; Sir George Collier to Sir Henry Clinton, 16 May 1779, in Henry Clinton. The American Rebellion. Sir Henry Clinton's Narratives of His Campaigns, 1775-1782. William B. Willcox, ed. New Haven, Ct.: Yale University Press, 1954, 406. 288. "Journal of Jean Baptiste Antoine de Verger," in Howard C. Rice, ed. The American Campaign of Rochambeau's Army. Princeton: Princeton University Press, 1957, 152. 289. In 1785 Patrick Henry, serving again as governor of Virginia, hired Lafayette to advise him on militia training and discipline. Lafayette wrote Henry on 7 June 1785, "I have been honored with your Excellency's commands . . . and find myself happy to be employed in the service of the Virginia Militia . . . . Indeed, Sir, the Virginia militia deserves to be well armed and properly attended." Henry, Patrick Henry, 3: 298-99. 290. Jefferson Papers, 6: 36. 291. Jefferson Papers, 4: 298-99. 292. Jefferson Papers, 4: 130-31. 293. Jefferson Papers, 3: 576-77. 294. Jefferson Papers, 4: 54. 295. Jefferson Papers, 4: 57. 296. Writings of Washington, 20: 45-46. 297. Edmund Pendleton to James Madison, 11 December 1780, in D. J. Mays, ed. The Letters and Papers of Edmund Pendleton, 1734-1802. 2 vols. Richmond: Virginia Historical Society, 1967, 1: 326. 298. Benjamin F. Stevens, ed. The Campaign in Virginia, 1781: An Exact Reprint of Six Rare Pamphlets on the Clinton-Cornwallis Controversy. 2 vols. 
London, 1888; Randolph G. Adams, " A View of Cornwallis's Surrender at Yorktown," American Historical Review, 37 : 25-49; William B. Willcox, "The British Road to Yorktown: A Study in Divided Command," American Historical Review, 52 : 1-35. 299. George Washington to Patrick Henry, 5 October 1776, in Henry, Patrick Henry, 3: 12-15. 300. American State Papers: Military Affairs. 7 vols. Washington: Gales and Seaton, 1832-61, 1: 14ff. 301. Sources of Our Liberties. R. Perry and J. Cooper, eds. Washington: American Bar Association, 1959, 312. 302. Richard Henry Lee in the Pendleton Papers, 473, dated 21 February 1785. 303. Quoted in Hugh B. Grigsby, History of the Virginia Convention of 1788. R. A. Brock, ed. 2 vols. Richmond: Virginia Historical Society, 1890, 1: 158-59. 304. Quoted in Grigsby, op. cit., 1: 161. 305. Grigsby, op. cit., 1: 258. 306. Calendar of Virginia State Papers, 7: 218. 307. Colonial Records of North Carolina. William L. Saunders, ed. 10 vols. Raleigh, State of North Carolina, 1886-1890, 1: 83-87. Hereinafter cited as N. C. Col. Rec. North Carolina State Records. ed. Walter Clark and William L. Saunders. (Raleigh: State of North Carolina, 1886-1905). Hereinafter cited as N. C. State Rec. 308. N. C. Col. Rec., 1: 87. 309. In 1672 Cooper was named Earl of Shaftsbury. 310. Poore, Constitutions, 2: 1388. 311. N. C. Col. Rec. 1: 31. 312. Ibid., 2: 1395-96. 313. N. C. Col. Rec., 1: 112; Poore, Constitutions, 2: 1401-02. 314. William S. Powell, ed. Ye Countie of Albemarle in Carolina. Raleigh: North Carolina Department of Archives and History, 1958, 23-24. 315. N. C. Col. Rec. 1: 239, 361, 389. 316. Fundamental Constitutions of North Carolina of 1669, in Poore, Constitutions, 2: 1396. 317. H. T. Lefler and A. R. Newsome. North Carolina. Chapel Hill: University of North Carolina Press, 1954, 600. 318. E. M. Wheeler, "Development and Organization of the North Carolina Militia," North Carolina Historical Review, 41 : 307-43. 319. John Archdale, "A New Description of that Fertile and Pleasant Province of Carolina," in A. S. Salley, Jr., ed. Narratives of Early Carolina, 1650-1708. New York: Scribner's, 1911, 277--313. 320. John Oldmixon, "History of the British Empire in America: Carolina" , in Ibid., 313-74. 321. N. C. Col. Rec., 1: 541. 322. Walter Clark, "Indian Massacre and the Tuscarora War," North Carolina Booklet, 2 : 9. 323. Spotswood Letters, 1: 123. 324. "Journal of John Barnwell," Virginia Magazine of History and Biography, 5 : 391-402; also in South Carolina Historical and Genealogical Magazine, 9 : 28-54. 325. N. C. Col. Rec., 1: 877. 326. Ibid., 1: 871-75. 327. Ibid., 1: 877. 328. Ibid., 1: 886. 329. Ibid., 1: 886. 330. N. C. State Rec., 13: 29-31. 331. Ibid., 13: 23-31. 332. Ibid., 13: 30. 333. "A Short Discourse on the Present State of the Colonies in America with Respect to the Interest of Great Britain," in N. C. Col. Rec., 2: 632-33. 334. Ibid., 4: 78. 335. N. C. State Rec., 13: 244-47. 336. N. C. State Rec., 13: 330. 337. N. C. State Rec., 15: 334-37. 338. Md. Arch. 50: 534. 339. N. C. State Rec., 22: 370-72. 340. Koontz, Virginia Frontier, 169. 341. Loudoun to Cumberland, November 1756, in Pargellis, Military Affairs, 267. 342. "Some Hints for the Operations in North America for 1757," in ibid., 314. 343. N. C. Col. Rec., 4: 220-21. 344. N. C. Col Rec., 4: 119. 345. Laws of the State of North Carolina. 2 vols. Raleigh: State of North Carolina, 1821, 1: 135. 346. N. C. State Rec., 13: 518-22. 347. Laws of North Carolina, 1: 135. 348. N. C. 
State Rec., 23: 787-88, 941. 349. North Carolina Statutes, 1715-1775, 434-35. 350. Luther L. Gobbel, "The Militia in North Carolina in Colonial and Revolutionary Times," Historical Papers of the Trinity College Historical Society, 12: 42. 351. Laws of North Carolina, 1: 125. 352. N. C. State Rec., 23: 601. 353. Wheeler, "Carolina Militia," 317-18; N. C. Col. Rec., 5: xli. 354. N. C. State Rec., 23: 597. 355. Wheeler, "Carolina Militia," 318. 356. N. C. Col. Rec., 10: 302; 4 Amer. Arch. 4: 556. 357. North Carolina Constitution of 1776, in Poore, Constitutions, 2: 1410. 358. In 1868 townships were created in the counties and these served, among their many functions, as permanent militia districts. Clarence W. Griffin, History of Old Tryon and Rutherford Counties. Asheville, NC: Miller, 1937, 139, 141-43. 359. 4 Amer. Arch. 5: 1330. 360. Poore, Constitutions, 2: 1409. 361. 4 Amer. Arch. 5: 1337-38. 362. 4 Amer. Arch. 5: 1326. 363. Robert Gardner. Small Arms Makers. New York: Crown, 1963, 141-41, 212. 364. Eric Robson, "The Expedition to the Southern Colonies, 1775-1776," English Historical Review, 116: 535-60. 365. N. C. State Rec., 10: xiii. 366. Hugh F. Rankin, "The Moore's Creek Bridge Campaign," North Carolina Historical Review, 30: 23-60. 367. N. C. State Rec., 10: xiii. 368. North Carolina Constitution of 1778, in Poore, Constitutions, 2: 1623-27. 369. Marquis Charles Cornwallis, eldest son of the First Earl Cornwallis, inherited his father's title in 1762. He was a graduate of Eton, an officer in the Seven Years War, and an active Whig in the House of Lords, where he opposed the Declaratory Act of 1766. He was second in command in America to Sir Henry Clinton and served with distinction. He subdued New Jersey in 1776 and defeated the patriots at Brandywine, occupying Philadelphia in 1777. He urged aggressive action in the southern states early in the war, but his plan received no support until 1780. After the Revolution, in 1786, he was transferred to India, where he laid the foundations for the British administrative system. He checked the uprising of Tippu Sultan, reformed the land and revenue systems and introduced a humane legal code and reformed court system. In 1792 he was made a marquess, returned to England in 1793, and was made a member of the cabinet in 1795. He worked to pass the Act of Union, unifying the Irish and English parliaments. After George III objected to emancipation of Roman Catholics, he resigned from the cabinet in protest. Appointed Governor-general of India in 1805, he died on 5 October of that year. Frank and Mary Wickwire. Cornwallis: The American Adventure. 2 vols. Boston: Houghton-Mifflin, 1970-80; Mary and F. B. Wickwire. Cornwallis and the War of Independence. London: Faber and Faber, 1971. 370. Ward, War of the Revolution, 2: 722-30. 371. Smith, Loyalists and Redcoats, 145-47. 372. N. C. Rec., 14: 614-15, 647, 655, 774, 786; 19: 958. 373. Henry, Patrick Henry, 2: 65. The reference to Deckard rifles is interesting. Jacob Dickert (1740-1822) was born in Germany, emigrated to America in 1748, and settled in Lancaster County, Pennsylvania, after living briefly in Berks County, Pa. He operated a large gunshop in Lancaster, where he was an important figure in the development of the uniquely American product, the Pennsylvania long rifle, also commonly called the "Kentucky rifle." Stacy B. C. Wood, Jr. and James B. Whisker. Arms Makers of Lancaster County, Pennsylvania. Bedford, PA: Old Bedford Village Press, 1991, 14-15.
We find another, later reference to Dickert's products by name in an advertisement of merchant Robert Barr for "Dechard rifle guns." Kentucky Gazette, 1 September 1787. 374. Quoted in Henry, Patrick Henry, 2: 64. 375. North Callahan. Royal Raiders: The Tories of the American Revolution. Indianapolis: Bobbs-Merrill, 1963, ch. 10. 376. Lyman C. Draper. King's Mountain. Cincinnati: Thompson, 1881, 314. 377. Nathaneal Greene was born in Rhode Island, served as a deputy in the Rhode Island Assembly (1770-72, 1775), and was appointed a brigadier-general in May 1775 to lead three Rhode Island regiments. After serving at the siege of Boston and as commander of the American occupation army, he was promoted on 9 August 1776 to major-general. He supported George Washington at Trenton in December 1776 and Germantown and spent the winter of 1777-78 at Valley Forge. He served as quartermaster-general and was present at the battles of Monmouth and Newport. In 1780 he chaired the court martial which condemned Major André in the Benedict Arnold plot. After relieving Horatio Gates, he led the southern army to a string of effective delaying actions and victories and many credit the ultimate defeat of Lord Cornwallis' army to his leadership. He died on 19 June 1786 near Savannah, Georgia. Papers of Greene. 378. Horatio Gates received much credit for the American victory over General John Burgoyne's army at Saratoga, although he spent most of his time at the critical juncture in the battle debating the merits of the American Revolution with a captured British officer while Benedict Arnold led the men to victory. He was born in England, served in the Seven Years War and retired on half-pay and in 1772 purchased an estate in Virginia. In 1775 Congress appointed him adjutant-general and in 1776 promoted him to major-general. In 1777 he was president of the board of war. The Conway Cabal, led in Congress by Thomas Conway, sought to replace George Washington with Gates, but failed. In 1780, following his disastrous loss to Lord Cornwallis at the Battle of Camden, Congress replaced him and Gates retired to his plantation. Activated in 1782 at Newburgh, New York, he retired again in 1783. He moved to Manhattan where he died on 10 April 1806. Max M. Mintz, Generals of Saratoga: John Burgoyne and Horatio Gates. New Haven: Yale University Press, 1990; Paul D. Nelson. Horatio Gates. Baton Rouge, La.: Louisiana State University Press, 1976. 379. Nathaneal Greene to Thomas Jefferson, 10 February 1781, Calendar of [Virginia] State Papers, 1: 504. 380. Don Higginbotham. Daniel Morgan: Revolutionary Rifleman. New York, 1961. Morgan was later part of Washington's force that put down the Whiskey Rebellion. He also served in the U. S. House of Representatives, 1797-99. 381. See Banastre Tarleton. A History of the Campaigns of 1780 and 1781 in the Southern Provinces of North America. London: Cadell, 1787. 382. Hugh F. Rankin, "Cowpens: Prelude to Yorktown," North Carolina Historical Review, 31 : 336-69. 383. Ward, War of the Revolution, 2: 755-62. 384. Robert C. Pugh, "The Revolutionary Militia in the Southern Campaign, 1780-81," William and Mary Quarterly, 3d series, 14 : 164-65; Hugh F. Rankin, "Cowpens: Prelude to Yorktown," North Carolina Historical Review, 31 : 336-69. 385. Tarleton did raid into Virginia and on 4 June 1781 nearly captured Thomas Jefferson, then governor of Virginia, and some members of the state legislature. 386. Ward, War of the Revolution, 2: 783-96. 387. Hugh F. Rankin. Francis Marion: The Swamp Fox. 
New York: Crowell, 1973. 388. William G. Simms. The Life of Francis Marion. New York: Appleton, 1845, 126ff. 389. Robert O. Demond. The Loyalists in North Carolina During the Revolution. Durham: North Carolina State University Press, 1940. 390. Rankin, Francis Marion. 391. Paul H. Smith. Loyalists and Redcoats. Chapel Hill: University of North Carolina Press, 1964, 152-53. 392. Francis Vinton Greene. General Greene. New York: Scribner's, 1914. 393. Robert C. Pugh, "The Revolutionary Militia in the Southern Campaign, 1780-81," William and Mary Quarterly, 3d series, 14 : 160. 394. George W. Kyte, "Strategic Blunder: Lord Cornwallis Abandons the Carolinas, 1781," The Historian, 22 : 129-44; William B. Willcox, "The British Road to Yorktown: A Study in Divided Command," American Historical Review, 52 : 1-35. See also Willcox's "British Strategy in America," Journal of Modern History, 19 : 97-121. 395. John Tate Lanning, ed. The St. Augustine Expedition of 1740. Columbia: State of South Carolina, 1954, 4; A. S. Salley, Jr., ed. Journal of the Grand Council of South Carolina, August 25, 1671, to June 24, 1680. Columbia: State of South Carolina, 1907, 21. 396. Cacique is Spanish for Amerindian chief and was a term applied to land barons in the Carolinas who owned 24,000 or more acres of land. Alongf with landgraves and lords of the manor, caciques constituted the medieval style landed seignory in these colonies. 397. Osgood, American Colonies, 2: 373. 398. David Cole, "A Brief Outline of the South Carolina Colonial Militia System," Proceedings, South Carolina Historical Association, 24 : 14-23. 399. Poore, Constitutions, 2: 1388. 400. Ibid., 2: 1395-96. 401. The Statutes at Large of South Carolina. Thomas Cooper and David McCord, eds. Columbia, S.C.: State of South Carolina, 1836-41, 1: 48-49. 402. Journal of the Grand Council of South Carolina, 1671-1680. A. S. Salley, ed. Columbia, S.C.: State of South Carolina, 1907, 10-11, 42. 403. Calendar of State Papers: Colonial America and West Indies. 11: 540. hereinafter cited as C.S.P. 404. Cole, "Brief Outline," 16. 405. Edward McCrady. The History of South Carolina under the Proprietary Government. New York, 1897, 477. 406. Act . . . for the Defence of the Government, No. 30 of 15 October 1686, Statutes at Large, 1: 15-18. 407. Act 33 of 22 January 1686; Act 52, 1690, Statutes at Large, 2: 20-21, 42-43. 408. Act 162 of 8 October 1698, Statutes at Large, 1: 7-12. 409. A. S. Salley, Jr., ed. Records in the British Public Record Office Relating to South Carolina, 1685 to 1690. Atlanta: State of South Carolina, 1929, 87. 410. Lanning, St. Augustine Expedition, 9; Statutes at Large, 2: 15. 411. The primary source of information on militia slave patrols comes from H. M. Henry, Police Control of the Slave in South Carolina. Lynchburg, Va.: Emory, 1914. Professor Henry gave the date of 1686 as the year of the first deployment of militia slave patrols, but this is strongly disputed in Cole, "Brief Outline," 21. 412. South Carolina Statutes at Large, 7: 346. 413. Act 49 of 1690. 414. Laws of Governor Archdale, 1-8, in Statutes at Large of South Carolina. 415. An Act for . . . Maintaining of a Watch on Sullivan's Island, No. 51 of 22 December 1690, Statutes at Large, 2: 40-42. 416. An Act for Settling a Watch in Charlestown and for Preventing Fires, 1698, in Kavenagh, Colonial America, 3: 2389-90. 417. Thomas Nairn. A Letter from South Carolina, Giving an Account of the Soil, Air, Products, Trade, Government, Laws, Religion, People, Military Strength . . . 
of that Province. London, 1718, 28-29. 418. Journals of the Commons House of Assembly of the Province of South Carolina. hereinafter J. C. H. A., 3: 35. 419. Statutes at Large of South Carolina, 1: 29. 420. Instructions to Francis Nicholson, Royal Governor of South Carolina, 30 August 1720, in Kavenagh, Colonial America, 3: 1975. 421. South Carolina Statutes at Large, 7: 33. 422. David J. McCord. The Statutes at Large of South Carolina. 10 vols. Columbia, S. C.: State of South Carolina, 1836-41, 8: 617-24. 423. Statutes at Large, 2: 33. 424. South Carolina Statutes at Large, 7: 347-49. 425. South Carolina Statutes at Large, 3: 108-11. 426. Colonial Records of South Carolina: Journal of the Common House of Assembly. edited by J. H. Easterby and others. Columbia, S.C.: State of South Carolina, 1951--. 11 vols to date. Volumes in this series are still coming out. I: 228, dated 20 August 1702. 427. South Carolina Statutes at Large, 7: 33. 428. South Carolina Statutes at Large, 7: 347-49. 429. South Carolina Statutes at Large, 7: 349-51. 430. An Act for the Encouragement and Killing and Destroying Beasts of Prey, No. 128 of 16 March 1696; No. 211 of 8 May 1703, South Carolina Statutes at Large, 2: 108-10, 215-16. 431. "The settlers who held Charleston against the allied forces of France and Spain were partners in the glory of Stanhope and Marlborough, heirs to the glory of Drake and Raleigh." John H. Doyle, English Colonies in America. 5 volumes. New York: Holt, 1822, 1: 369. 432. Statutes at Large, 8: 625-31. 433. Orders of Lords Proprietors to Governors of the Carolinas in N. C. Col. Rec., 1: 877, 886. 434. Acts 237 of 1704 and 418 and 419 of 1719, Statutes at Large, 2: 347-49; 3: 108-11. 435. Joseph P. Barnwell, "Second Tuscarora Expedition" in 10 South Carolina Historical and Genealogical Magazine : 33-48. 436. J. C. H. A., 11: 5-27. 437. J. C. H. A., 7: 456. 438. J. C. H. A., 3: 552. 439. Trott, Laws of South Carolina, 480. 440. Statutes at Large of South Carolina, 7: 353. 441. Trott, Laws of South Carolina, 217-18. 442. C. S. P., 15: 1407, 1412; 16: 5. 443. An Act to Impower . . . Council to Carry on and Prosecute the War Against our Indian Enemies and their Confederates, Act No. 351 of 10 May 1715, South Carolina Statutes at Large, I2: 624-26. 444. Verner W. Crane. The Southern Frontier, 1670-1732. University of North Carolina Press, 1929, 178. 445. British Public Records Office, Records Relating to South Carolina. London: H.M. Stationary Office, 1889--, 8: 67. 446. Cole, "Brief Outline," 19. 447. N. C. Col. Rec., 2: 178. 448. London Transcripts in Public Records of South Carolina, 7: 7. 449. Statutes at Large, 8: 631. 450. Cooper, South Carolina Statutes, 3: 108-10. 451. Act 408 of 12 February 1719, South Carolina Statutes at Large, 2: 100-02. 452. David A. Cole, "The Organization and administration of the South Carolina Militia, 1670-1783," Ph. D. dissertation, University of South Carolina, 1953; David Cole. "A Brief Outline of the South Carolina Militia System," Proceedings, South Carolina Historical Association, 24 : 14-23; Michael Stauffer. South Carolina's Antebellum Militia. Columbia, S. C.: South Carolina Department of Archives and History, 1991; Jean Martin Flynn. The Militia in Antebellum South Carolina Society. Spartanburg, S. C.: Reprint Company, 1991; Journal of the Grand Council of South Carolina, 1671-1680. edited by A. S. Salley. Columbia, S. C.: State of South Carolina, 1907; Benjamin Elliott. The Militia System of South Carolina. 
http://constitution.org/jw/acm_5-m.htm
The U.S. History of Capital Punishment
Capital punishment's history in the United States is essentially a debate between two ways of viewing the world: that state-sanctioned death is necessary for society, and that a civilized society should not treat death as a fair way to punish any crime or criminal. Throughout the history of capital punishment in the United States, reformists have spoken out against capital punishment, changing the methods used to execute convicted criminals, reducing the types of crimes that carry a death sentence—and, in many cases, eliminating them—and analyzing the forces that produce criminals in an effort to keep criminals from being created. As society continually struggles to balance the human desires for retribution and compassion, many different forces and opinions shape the continually evolving philosophy and practice of capital punishment.
Ancient Western Roots of Capital Punishment
The American system of capital punishment is based heavily on British law, which, in turn, grew out of the primitive Western basis of capital punishment: personal retribution. Ancient laws encouraged and authorized individuals to seek retribution by killing their offenders. They also began the tradition of defining and listing the crimes that deserved death as a punishment, setting a precedent for Western legal codes. For example, the Babylonian Code of Hammurabi, written around 1700 B.C., arbitrarily made selling beer and revealing the location of sacred burial places crimes punishable by death (Henderson 2000). Around the seventh century A.D., government leaders began understanding that crimes harmed society's collective interests and so became more involved in controlling and punishing crime. To protect society, these leaders passed laws that devised a list of different punishments to be used depending on the nature of the specific crime. Laws also focused more on keeping peace in society than on serving justice, with the Justinian Code of A.D. 529 standing as an example. In the ancient Greco-Roman state, the prime reason for execution was to punish those who attacked the religion of the state. The best-known examples of the use of capital punishment for this specific offense were Socrates' execution for heresy circa 399 B.C. and the circa A.D. 33 crucifixion of Jesus Christ, whose formal charge was sedition against the state (Henderson 2000). Throughout this era, punishment was violent and often a means of inflicting torture along with death.
Middle Ages and Renaissance
During the Middle Ages, it became very important to justify punishing convicted criminals by making sure they were guilty. The predominant methods of determining guilt or innocence at the time were trial by battle, the ordeal, and compurgation. Trial by battle pitted the offender and the victim, or a family member of the victim, in a fight against each other. Whoever won that fight was believed to be blessed by the gods; thus, if the accused won, it was because he or she was innocent. The ordeal subjected the accused to torture, and if the accused criminal survived the ordeal, he or she was innocent because, again, the gods would favor this innocent person and give him or her the strength to survive whatever the torturers inflicted. Compurgation gave an accused criminal the opportunity to gather compurgators, or relatives and neighbors, and swear his or her innocence to each of them individually.
The compurgators would then take an oath attesting that they believed the accused was telling the truth in claiming that he or she was innocent. This method of determining innocence or guilt was reserved for members of the higher classes of society (Banks 2005). These three methods of finding criminals guilty slowly lost popularity as the government realized they were ineffective. Between about the eighth and eleventh centuries A.D., the exacting of vengeance was formalized into civil law (born of the concept of imparting justice in the king's court) and criminal law (which came out of the ancient notion of vengeance). Trial by jury also became the accepted and effective way of establishing guilt and was widely used by the mid-thirteenth century. As civil and criminal law developed, torture was phased out, but very, very slowly. In fact, the centuries between 1400 and 1800 were marked by so many executions that the laws on capital punishment were later dubbed the "Bloody Code" (Levinson 2002). Torture was still legal into the eighteenth century. To maximize the psychological and physical effects of torture, many methods were invented. Some commonly used methods were chopping off the hands and feet, impaling the body on a large stake, stripping off the skin, boiling the body alive in oil, drawing and quartering, burning at the stake, and crucifying. Yet even as torture and execution remained common, there was growing thought and discussion about why they should be stopped. The eighteenth-century European Enlightenment focused on ideas that emphasized the value of humankind and the potential that every individual possessed. Reformists began thinking about how the government could serve the common good (with the common good now encompassing far more people than it ever had) while controlling and punishing criminals (Banks 2005). The Italian philosopher and politician Cesare Beccaria wrote On Crimes and Punishments in 1764, trying to answer these questions and creating a turning point in death-penalty reform. Beccaria argued for abandoning the system of maximum terror and replacing it with a system of punishments proportionate to the crime. The use of incarceration as punishment began to grow. Taking away liberty was itself a harsh penalty in the minds of humanists, who saw liberty as extremely precious, and prison could also provide a rehabilitative environment, which mattered at a time when many saw crime as the product of an offender's environment and therefore correctable (Banks 2005).
In the United States
Early American settlers' criminal codes were based on Britain's laws, and some were just as harsh. For example, in 1612, acts like stealing grapes, killing chickens, and trading with Indians were capital crimes. As the colonies grew more independent of each other and of Britain, they developed unique laws. Their insularity also made them slower to accept the ideals of the European Enlightenment. They retained the traditional belief that humankind was naturally depraved and not a product of environment, putting the responsibility for crime on criminals themselves (Introduction to the Death Penalty 2009). As American legal codes became more defined by the colonies, patterns of punishment surfaced. The early northern colonies were more lenient than England for crimes against property but much harsher in punishing crimes against morality.
The early southern colonies adopted English law without modifying it very much but also developed a subset of crimes that were punishable only if committed by blacks. Many saw this addition as an American "Bloody Code" (Banner 2002). The Bill of Rights, ratified in 1791, controlled the use of capital punishment by prohibiting "cruel and unusual punishment" in the Eighth Amendment. However, at the time of the Constitution, the phrase "cruel and unusual punishment" was a stock verbal formula, and its contemporary meaning is disputed today. It is possible that the phrase enforces proportionality, or reserving the harshest sentences for the worst crimes. The phrase could also have been used to list the methods of punishment that would be considered too harsh for capital crimes. At the time, the death penalty by hanging was not seen as cruel or unusual punishment (ibid.). In the first decade after 1776, some Americans began to espouse the ideas becoming popular in Europe, Beccaria's book having been published in New York City in 1773. Much of the dissatisfaction with the death penalty stemmed from the growing belief that every human had innate virtue and that a person's environment—not inherent evil—shaped actions and choices. These early abolitionists did make some progress in moving away from the use of the death penalty (it was partially abolished in some places), but it was still used as a standard punishment.
Pre- and Post-Civil War
In the first half of the nineteenth century, the American capital punishment debate boiled down to two ways of seeing the world and the war between those two ways. One side had sympathy for the criminal, which the opposition said made it impossible to see the larger picture. The other side saw the larger picture but not the individual human beings that made it up (ibid.). By this time, prisons were widely used to punish criminals. Prisons were seen more and more as able to provide tailored punishment to convicts, offering probation and other rehabilitative programs. A turning point in the nature of capital punishment came when executions began to be administered inside prisons, as many felt public executions actually encouraged violent crime. With the privatization of the administration of the death penalty, capital punishment lost even more of its symbolic meaning and its ritual significance (ibid.). Punishing crimes in a private space also led to more humane methods of inflicting punishment. Hanging was criticized as too brutal, the product of a more barbaric society, and a growing faith in science as the means of ameliorating aspects of the human condition led to the advent of the electric chair. The first chair was built in New York in 1888 and used in 1890 to execute William Kemmler. Primarily in western states, the gas chamber also began to be used (Introduction to the Death Penalty 2009). By 1846, Michigan had abolished the death penalty for every crime but treason. Soon after, Rhode Island and Wisconsin abolished the death penalty for all crimes.
Progressive Era through World War II
Throughout the Progressive Era around the turn of the twentieth century, capital punishment was on the decline. Crime continued to be seen as the result of a criminal's environment, and the science of the day claimed it was also the result of inborn genetic traits. As the criminal came to be seen more and more as a victim of outside forces, the death penalty seemed less and less just. But the shift away from capital punishment was undone when America entered World War I.
Because of panic generated by the Russian Revolution and class conflicts, many states that had abolished the death penalty reinstated it, and no more states abolished capital punishment until the 1950s. In fact, there was significant growth in the support for and use of capital punishment from 1920 to 1935. In the mid-1950s, interest in the debate over whether the death penalty should be used resurfaced. Caryl Chessman, a death-row inmate, wrote several books while in prison; his first, an autobiography entitled Cell 2455, Death Row, was very popular in the United States and around the world. Chessman's case brought the death penalty question back to the forefront of issues facing society. Simultaneously, support for capital punishment was waning as many nations around the world abolished the death penalty (Introduction to the Death Penalty 2009).
Civil Rights Era
The next significant movement toward abolition of the death penalty occurred during the 1960s civil rights movement. This movement helped the abolition debate, largely because abolitionists changed the way they approached the issue: activists went from trying to use the legislative process to fighting the practice in the judicial arena. In the mid-twentieth century, several Supreme Court cases transformed the legal community's understanding of the Eighth Amendment and "laid a foundation that lawyers would eventually use to challenge the constitutionality of the death penalty." In the 1958 Supreme Court case Trop v. Dulles, the court decided that the Eighth Amendment allowed for an evolution of standards for civilized conduct. Many abolitionists applied this decision (it was not a capital case) to the death penalty, arguing that it no longer fit with society's standard of decency. Lawyers with the NAACP Legal Defense Fund (LDF) began a legal campaign that led the Supreme Court, in the 1972 landmark case Furman v. Georgia, to declare the death penalty cruel and unusual punishment in violation of the Eighth Amendment (Introduction to the Death Penalty 2009). The progress abolitionists made against the death penalty did not last long—in 1976, Gregg v. Georgia determined that rather than ruling that capital punishment itself was unconstitutional, the court in 1972 had ruled that "the haphazard way in which it was administered was constitutionally impermissible." The Court opened the way for states to rewrite their capital statutes to eliminate the arbitrariness in capital sentencing. As states amended capital punishment laws, justices held that the amendments "provided sufficient safeguards to ensure that the death penalty was employed in a constitutionally acceptable manner." As a result, capital punishment was reinstated in the United States, and the nation's first execution in 10 years took place in January 1977, when Gary Gilmore was executed by firing squad in Utah. Charles Brooks later became the first person executed by lethal injection, in Texas on December 7, 1982 (Banner 2002).
The Continuing Debate
The main points of the worldwide debate currently surrounding the death penalty are not new but seem to accumulate and converge as societies progress. Does the death penalty protect society by ridding it of evil, and does it actually deter people from committing crimes? Does it exact retribution from criminals appropriately, in fair proportion to the crime committed? Is the punishment applied fairly in terms of the race and class of those it is used against?
Is capital punishment barbaric, or does it have a place in civilized society? Is the death penalty justified by the vast sums saved by not having to incarcerate such criminals for their lifetimes, or is its cost to society's humanity even dearer? All of these questions have been asked throughout history and continue to be weighed as society tries to determine whether capital punishment itself will ultimately live or die.
-- Posted September 19, 2009
Abbott, Geoffrey. 2005. Execution: The Guillotine, the Pendulum, the Thousand Cuts, the Spanish Donkey, and 66 Other Ways of Putting Someone to Death. New York, NY: St. Martin's.
Banks, Cyndi. 2005. Punishment in America. Contemporary World Issues. Santa Barbara, CA: ABC-CLIO.
Banner, Stuart. 2002. The Death Penalty: An American History. Cambridge, MA: Harvard UP.
Henderson, Harry. 2000. Capital Punishment. Rev. ed. New York, NY: Facts on File.
Hood, Roger. 2002. The Death Penalty: A Worldwide Perspective. 3rd ed. Oxford, UK: Oxford UP.
"Introduction to the Death Penalty." 2009. The Death Penalty Information Center. Accessed September 9, 2009.
Levinson, David. 2002. "Capital Crimes." Encyclopedia of Crime and Punishment. Vol. 4. Thousand Oaks, CA: Sage.
Wolf, Robert V. 1998. Capital Punishment. Philadelphia, PA: Chelsea House.
http://www.randomhistory.com/2009/09/19_capital-punishment.html
Brown's GCSE/IGCSE KS4 science-CHEMISTRY Revision Notes: Oil, useful products and environmental problems.
5. ALKENES - unsaturated hydrocarbons - their chemical reactions
The alkenes are a series of hydrocarbon molecules (made of carbon and hydrogen atoms). They are referred to as 'unsaturated' hydrocarbons because they have a carbon=carbon (C=C) double bond, and other atoms can add to them via simple addition reactions. The physical properties of alkenes and their chemical reactions with hydrogen (to form alkanes), with bromine (to form dibromoalkanes, used as a test for alkenes), by polymerisation (self-addition to form polymers like polyethene) and with oxygen (combustion, burning) are fully described with word and symbol equations.
The alkenes form a homologous series of unsaturated hydrocarbons containing a carbon=carbon double bond (>C=C<) as well as single bonds. They have the general formula CnH2n, where n = 2, 3, 4 etc., giving the formulae C2H4 (ethene), C3H6 (propene) and C4H8 (butene). As with naming all organic molecule series, eth- means 2 carbon atoms in the chain, prop- means 3 and but- means 4, etc. These are called unsaturated molecules because two atoms can join onto the double bond when it opens up. The first three in the series are colourless, smelly gases. They are extremely reactive and important compounds in the chemical industry and are converted into very useful compounds, e.g. plastics.
A cracking demonstration! You can demonstrate cracking in the laboratory by heating paraffin grease over an aluminium oxide catalyst at 400-700°C and collecting the smaller gaseous hydrocarbon molecules over water - they are easily shown to be flammable!
There are several ways of representing an alkene: (1) the molecular formula, a summary of the total number of atoms of each element in one molecule; (2) the shorthand structural formula, a condensed version of the full structural formula; and (3) the full structural or displayed formula, which shows how all the atoms are linked by covalent bonds (the dashes -), i.e. the C-C and C-H bonds. Note that carbon must form four bonds (C-C single bonds or a C=C double bond) and hydrogen forms one bond (C-H).
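As a quick worked illustration of these three ways of writing an alkene (a sketch using propene, one of the examples above): (1) the molecular formula is C3H6; (2) a shorthand structural formula is CH3-CH=CH2; (3) the displayed formula would show every atom and every covalent bond drawn out, i.e. three C-H bonds on the first carbon, one C-H bond on the middle carbon, two C-H bonds on the end carbon, one C-C single bond and one C=C double bond.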
It is the presence of the carbon=carbon double bond (C=C) which makes alkenes unsaturated hydrocarbon molecules; the C=C is referred to as the 'double bond'.
5c. More on ALKENES - unsaturated hydrocarbons - a quick summary
- Alkenes cannot be obtained directly from crude oil and must be made by cracking (see section 6 cracking notes).
- The unsaturated hydrocarbons form a homologous series called alkenes with the general formula CnH2n.
- Unsaturated means the molecule has a C=C double bond to which atoms or groups can add.
- Alkene examples: the names end in -ene, e.g. ethene C2H4, propene C3H6 and butene C4H8.
- The alkenes are more reactive than alkanes because of the presence of the carbon=carbon double bond. The alkenes readily undergo addition reactions in which one of the two bonds of the carbon=carbon double bond breaks, allowing each carbon atom to form a covalent bond with another atom such as hydrogen or bromine.
- An example of an addition reaction is the reaction with hydrogen under pressure and in the presence of a nickel catalyst to form an alkane.
- Alkenes also react by 'addition' with bromine and decolourise the orange bromine water because the organic product is colourless; this is a simple test to distinguish an alkene from an alkane.
- Vegetable oils contain unsaturated fats that can be hardened to form margarine by adding hydrogen on to some of the carbon=carbon double bonds using a nickel catalyst. The process is called hydrogenation.
- Alkenes can add to themselves by addition polymerisation to form 'plastic' or polymeric materials.
- Alkenes readily burn, just like alkanes, to give carbon dioxide and water,
- e.g. ethene + oxygen ==> carbon dioxide + water
- C2H4 + 3O2 ==> 2CO2 + 2H2O
- However, they are NOT used as fuels:
- they are far too valuable for use to make plastics, anti-freeze and numerous other useful compounds,
- and they burn with a smokier flame than alkanes due to less efficient and more polluting combustion.
- Alkenes are isomeric with cycloalkanes, e.g. the molecular formula C6H12 can represent hexene or cyclohexane,
- hexene CH3-CH2-CH2-CH2-CH=CH2,
- and note that ...
- hexene is an unsaturated hydrocarbon with a double bond,
- the isomeric cyclohexane does not have a double bond and is a saturated hydrocarbon,
- so a simple bromine test could distinguish the two similar colourless liquids,
- because only the hexene would decolourise the bromine water test reagent.
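As a worked illustration of the addition reactions summarised above (a sketch using ethene as the example alkene; other alkenes react in the same way), the word and symbol equations are:
- hydrogenation: ethene + hydrogen ==> ethane, C2H4 + H2 ==> C2H6 (nickel catalyst, under pressure)
- bromination (the test for unsaturation): ethene + bromine ==> 1,2-dibromoethane, C2H4 + Br2 ==> CH2BrCH2Br, during which the orange bromine water is decolourised
- addition polymerisation: many ethene molecules join together, n C2H4 ==> -(CH2-CH2)n-, i.e. polyethene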
http://www.docbrown.info/page04/OilProducts05.htm
Introducing Students to Inflation Indexes
For monetary policymakers, the rate of inflation, or the overall rate of the increase in prices, influences the actions they take to promote long-run price stability. Inflation concerns everyone because it reflects an overall decline in the purchasing power of their money. The first-quarter 2011 issue of the Atlanta Fed's EconSouth includes an article that highlights the important distinction between inflation and the cost of living—concepts that are often confused with one another. The bottom line is that inflation happens when a central bank issues more money than the public wants to hold. Central banks control the amount of money circulating in the economy and thus control inflation. On the other hand, central banks cannot control changes in the cost of living, or price increases of particular goods, such as gasoline, and services. Central banks do not produce commodities, so they do not control relative-price changes. Part of the Federal Reserve's "dual mandate" is a commitment to stable prices. Having an accurate gauge of inflation means that policymakers can make well-informed decisions.
Consumer and producer prices
Since early 2000, the Federal Reserve Board of Governors has expressed its inflation outlook in terms of the personal consumption expenditures (PCE) price index. One reason the Fed chose the PCE measure over the consumer price index (CPI) is that the PCE market basket is thought to weigh more accurately how much people pay for certain things, perhaps most notably medical care and housing (the PCE measure gives a greater weight to the former, a smaller weight to the latter). The Federal Open Market Committee (FOMC)—the Federal Reserve's monetary policymaking body—implements monetary policy with the goal of maintaining a 2 percent inflation rate over time as measured by the annual change in the PCE. Having some small level of inflation reduces the chance of deflation—or a fall in prices and wages—if economic conditions were to weaken. And although the FOMC uses core PCE for inflation monitoring, it is the overall PCE price measure that is the basis for its inflation target.
Headline versus core inflation
Measuring core inflation helps policymakers see through the short-term price changes of certain goods that may not be representative of the longer-term trend in the overall price level. Lots of things can happen to cause sudden and temporary price changes in certain items, and perhaps food and energy goods most of all. For example, a drought or storm can damage farmers' crops, resulting in a sharp but transitory increase in food prices. This does not mean that these price increases are not important to consumers. However, it does mean that these price increases are likely temporary, and the change may not reflect the trend in the economy's overall price level that is under the control of the Federal Reserve. Economists debate the best ways to compute core inflation. Although excluding the prices of food and energy goods from indexes may help with predicting longer-term price trends, economists also use more involved ways to calculate core inflation. The trimmed-mean PCE inflation rate, for one, involves leaving out a certain fraction of the most extreme price changes of components that make up the index. By leaving out the most dramatic price changes at both ends of the spectrum, economists hope to come as close as possible to core inflation, according to the Federal Reserve Bank of Cleveland.
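To give students a concrete feel for the trimming idea, here is a minimal sketch in Python. The basket below is hypothetical (the component names, weights, and price changes are invented purely for illustration), and the official trimmed-mean measures are built from the full set of index components with a more involved methodology; the sketch only shows the core idea of dropping a fixed share of the most extreme price changes and averaging the rest.

# Minimal sketch of a trimmed-mean inflation calculation.
# The component names, weights, and price changes are hypothetical,
# chosen only to illustrate the idea of trimming extreme price changes.

def trimmed_mean(components, lower_trim=0.2, upper_trim=0.2):
    """Weighted trimmed mean of annual price changes.

    components: list of (name, weight, pct_change); weights should sum to 1.
    lower_trim / upper_trim: fraction of total weight dropped from each tail.
    """
    ordered = sorted(components, key=lambda c: c[2])  # sort by price change
    kept, cumulative = [], 0.0
    for name, weight, change in ordered:
        start, end = cumulative, cumulative + weight
        cumulative = end
        # keep only the part of this component's weight that falls inside
        # the central (1 - lower_trim - upper_trim) band of the distribution
        lo, hi = max(start, lower_trim), min(end, 1.0 - upper_trim)
        if hi > lo:
            kept.append((hi - lo, change))
    total = sum(w for w, _ in kept)
    return sum(w * c for w, c in kept) / total

basket = [
    ("gasoline",      0.05, 18.0),   # volatile energy item
    ("food at home",  0.10,  6.5),
    ("rent",          0.25,  2.5),
    ("medical care",  0.20,  2.0),
    ("recreation",    0.15,  1.5),
    ("apparel",       0.10,  0.5),
    ("used vehicles", 0.15, -9.0),   # volatile outlier on the downside
]

headline = sum(w * c for _, w, c in basket)   # plain weighted average
trimmed = trimmed_mean(basket)                # drop 20% of weight from each tail
print(f"headline-style inflation: {headline:.2f}%")
print(f"trimmed-mean inflation:   {trimmed:.2f}%")

In this made-up basket, the large gasoline increase and the large used-vehicle decline are trimmed away, so the trimmed mean (about 1.9 percent) sits much closer to the underlying trend than the headline figure would suggest.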
So what does it all come down to? The important point to keep in mind is that the Federal Reserve considers a variety of indicators and many inflation measures so that it can track the trend in the PCE price index. And after students become familiar with the different indexes—including the CPI, the producer price index (PPI), and the PCE—and with the concepts of headline and core inflation, they can understand how the prices they pay, whether for an ice cream cone, a movie ticket, or gasoline, fit into the larger economic picture.
By Elizabeth Bruml, an economics major at Emory University in Atlanta, who contributed this article as part of her internship at the Federal Reserve Bank of Atlanta
October 31, 2012
http://www.frbatlanta.org/pubs/extracredit/12fall_inflation_indexes.cfm
Clinically, hearing loss is categorized into one of the following three categories:
- Conductive hearing loss
- Sensorineural hearing loss
- Mixed hearing loss
A fourth category, central deafness, is much more uncommon. Central deafness occurs when sound is successfully transmitted by the lower auditory pathways, but the brain cannot recognize the signal as sound. This type of deafness generally occurs in conjunction with other neurological conditions.
Conductive Hearing Loss
Conductive hearing loss occurs when something in the outer ear or middle ear blocks or impedes the passage of sound waves to the inner ear. You can simulate a conductive hearing loss by wearing earplugs. Conductive hearing loss can be caused by a number of things, including fluid in the middle ear, excessive cerumen (earwax) buildup in the ear canal, a perforated eardrum, damaged ossicles or tumors in the ear canal or middle ear. Conductive hearing losses are more likely to be temporary and can often be corrected medically or surgically. However, if medication or surgery does not work or is not an appropriate treatment, many people with conductive hearing loss can be fit with hearing aids. Conductive hearing losses can range up to a maximum of about 50-60 dB HL (a mild to moderate hearing loss). People with conductive hearing losses who use hearing aids generally do very well.
Sensorineural Hearing Loss
Sensorineural hearing loss (SNHL) occurs when there is a problem in the inner ear. "Sensorineural" is an umbrella term that refers to problems with either the cochlea (sensory hearing loss) or the auditory nerve (neural hearing loss). Most people with SNHL actually have just sensory hearing loss, which means the source of the hearing loss is in the cochlea. Some people who have SNHL also have balance problems, because the cochlea and the semicircular canals (for balance) are both part of the inner ear; the same underlying cause can therefore affect balance as well. SNHL can be caused by many different things, including certain kinds of medications, lack of oxygen at birth, excessive noise exposure, tumors, degenerative diseases, auto-immune diseases, genetic factors, viruses or bacteria. Sensorineural hearing losses are more likely to be permanent and often cannot be treated with surgery or medication. Hearing aids are typically recommended for people with SNHL, although the amount of benefit they provide varies widely. Unlike conductive losses, which essentially create a reduction in volume, individuals with SNHL sometimes also experience the added disadvantage of a distorted signal. Those individuals often comment, "I can hear you, but I can't understand what you're saying." Think of a radio that is perfectly tuned in to your favorite station but with the volume turned way down: that is roughly what a person with a conductive hearing loss hears, and putting on a hearing aid is like turning up the volume on the radio. Now suppose the radio is not quite tuned all the way in to the station AND the volume has been turned way down: that is what some people with SNHL hear, and for them, putting on a hearing aid is like turning up the volume on a radio that still isn't tuned all the way in. It is important to understand that not everybody with SNHL has trouble with distortion.
But for people with severe or profound hearing losses, hearing aids are often inadequate because of the perception of distortion. Those individuals are typically considered good candidates for a cochlear implant. Cochlear implants are generally appropriate when the hearing loss is extensive enough that hearing aids do not help and the loss cannot be treated with medication or other surgery. Mixed Hearing Loss Mixed hearing loss occurs when there is a conductive component on top of a sensorineural component. For example, someone who has a noise-induced hearing loss (sensorineural) who subsequently gets an ear infection with fluid in the middle ear (conductive) would have a mixed hearing loss.
http://www.boystownhospital.org/knowledgeCenter/articles/hearing/Pages/TypesofHearing.aspx
After the Conquest of NEW FRANCE, Great Britain wanted to redraw the boundaries of its new colony so as to make room in the fisheries and the fur trade for the rival merchants of Québec and Montréal. The QUEBEC ACT of 1774 was a formal recognition of the failure of the project, as the borders were adjusted in closer conformity to the needs of a transcontinental economy. In 1791 the fur trade still played a determining role for the merchants and seasonal workers drawn from the rural population. These and their dependants still felt that their territory included both the St Lawrence Valley and the huge western expanse from the Great Lakes to the Pacific. In the early 19th century, however, the economic bases for this perception grew blurry and, for most francophone Lower Canadians, took on the dimensions of the St Lawrence Lowlands from Montréal to the Gulf of St Lawrence. When in 1822 Louis-Joseph PAPINEAU attacked the proposed union of the 2 Canadas, he described Lower Canada as a distinct geographic, economic and cultural space, forever destined to serve the HABITANT as a Catholic and French nation. This Québecois vision found little support among the anglophone merchants, who continued to challenge the 1791 decision and who, from Montréal, largely controlled the economic development of Upper Canada. These businessmen, who owned the banks and means of transportation and who fervently advocated the building of canals on the St Lawrence River, were involved primarily in the grain trade to England and in transporting Upper Canadian forest products to the port of Québec; they occupied an economic space that overflowed the borders of the St Lawrence Valley. After the unsuccessful attempt to unite the 2 Canadas in 1822, they began clamouring for the annexation of Montréal to Upper Canada and continued until after the failure of the REBELLIONS OF 1837, when a single province was formed. An Economy in Crisis Around 1760 the colonial economy was still dominated by the FUR TRADE and a commercial AGRICULTURE based on wheat. The FISHERIES, the timber trade, shipbuilding and the FORGES SAINT-MAURICE were all secondary. The fur trade was still expanding northwards and towards the Pacific: towards the end of the century, 600 000 beaver skins and other furs worth over £400 000 were being exported annually to England. All this activity, transcontinental and international by its very nature, was largely concentrated in the hands of the bourgeoisie of the NORTH WEST COMPANY - the Montréal-based company that had triumphed over its American rivals and the HUDSON'S BAY COMPANY. However, after 1804, growing pressure from these rivals reduced profits to such an extent that in 1821 the NWC had to merge with the HBC. The wheat trade underwent equally important transformations. After about 1730 wheat farming, the basis for a subsistence agriculture, started to become a commercial activity, thanks to the development of an external market. This market was mainly the West Indies until 1760, and then it expanded until, by the beginning of the 19th century, it included southern Europe and Britain. Thereafter, production fell off so sharply that around 1832 Lower Canada had to import over 500 000 minots (about 19.5 million L) of wheat annually from Upper Canada. The deficit became chronic. Oats, potatoes and animal husbandry occasionally brought profits to some farmers, but most grew these crops for subsistence. 
The increasing difficulties in agriculture and in the fur trade adversely affected the population's standard of living. This was the context for the rapid growth of the TIMBER TRADE after 1806. Increased production and export of forest products occurred during Napoleon's Continental Blockade when England, to guarantee wood supplies for her warships, introduced preferential tariffs that were maintained at about the same level until 1840, despite successive price drops. Again there was abundant seasonal help in Lower Canada. The forest industry, with Québec City as its nerve centre, was especially active in the Ottawa Valley, the EASTERN TOWNSHIPS and the Québec and Trois-Rivières areas. Squared pine and oak, construction wood, staves, potash and shipbuilding were the industry's mainstays. Lower Canada's economy, transformed in the climate of crisis of declining fur and local wheat shipments, was increasingly Québec-centered and yet more dependent for its exports on surplus production in Upper Canada. This produced an urgent need for credit institutions and for massive investments in road and canal construction. From the early 18th century, French Canadian population had grown without significant help from immigration. With a birthrate of about 50 births per thousand population and mortality of about 25 per thousand, the population doubled every 25-28 years. Post-Conquest British immigration hardly affected this demographic trend except for a limited time during the Loyalist wave, whereas land was so abundant and people so scarce that French Canada's vigorous increase continued until the end of the century. It was in the seigneur's interests to grant lands upon request in order to have the largest possible number of rent payers, but early in the 19th century this policy, combined with the high birthrate, led to decreasing accessibility of good lands; as well the seigneurs, prompted by the rising value of their forest products, began to limit the peasants' access to real estate. As the scarcity of land, real or artificial, became more widespread, a rural proletariat began to develop, which by 1830 made up about one-third of the rural population. French Canadian immigrants to the US (see FRANCO-AMERICANS) came largely from this group and from the impoverished peasantry. After 1815, population pressure was intensified in the rural communities along the St Lawrence and Richelieu rivers by a massive wave of British immigrants looking for land and jobs. Peasants and the proletariat in rural Québec felt threatened by the strangers, who sought land in the Townships, where French Canadians had long thought their own excess population could settle. The rapidly rising urban anglophone population was even more alarming to them: in Québec City in 1831, Anglophones formed 45% of the population and topped 50% among the day labourers; in Montréal in 1842 the percentages were 61% and 63%, respectively. These factors helped sharpen the Francophones' feeling that their culture was in danger and helped strengthen the nationalist movement, tormented by class struggle (see FRENCH CANADIAN NATIONALISM). Class Struggles and Political Conflicts The society that had developed in New France was one in which the military, nobility and clergy were dominant and the bourgeoisie was dependent on them. After 1760 British military personnel, aristocrats and merchants replaced their francophone equivalents. 
But the development of class consciousness within the 2 bourgeoisies, the English and the French, helped set off a conflict between the middle class and the aristocrats over the introduction of parliamentary institutions. The outcome in 1791 showed both the progress of the middle class and the economic and social decline of the nobility. Towards the end of the century, the power of the nobility was entirely dependent on the privileges and protection guaranteed by the heads of the colonial state. Economic and demographic changes after 1800 produced a deterioration of social relationships, the emergence of new ideologies and a reorientation of the old ones. This was the context for a struggle among 3 classes for the leadership of society: the anglophone bourgeoisie, the French Canadian middle class and the clergy. The anglophone merchant bourgeoisie, the main beneficiary of the 1791 reform and of recent economic expansion, felt that its status and power were threatened by the widespread changes. The efforts of these merchants to have the St Lawrence River canalized and their desire to stimulate the construction of access roads into the Townships were parts of a larger program seeking to increase immigration, create banks, revise the state's fiscal policies and abolish or reform the SEIGNEURIAL SYSTEM and customary law. But these measures required political support from the francophone nationalists, who were on the rise and held a majority in the legislative assembly. Income from continued TIMBER DUTIES was uncertain since it depended, after the 1815 peace, on both the goodwill of this nationalist element and the failure of England to introduce free trade. Anglophone merchants dominated business circles in the cities (in 1831 they constituted 57% and 63% of the merchant class in Québec City and Montréal, respectively) and played a disproportionately large role in the countryside. Nevertheless, they felt vulnerable in a colony numerically dominated by Francophones. Not surprisingly, Anglophones tended to seek the political support of governors, colonial bureaucrats and even the government in London. Their attitude is explained by their inability to form a party capable of dominating the majority, slight though it was, in the Legislative Assembly. Their successive political defeats over 30 years forced them to defend the imperial connection and the constitutional status quo and to support conservative political ideas. After the turn of the century, this bourgeoisie began to clash with the French-Canadian middle class, in particular with the professionals who were then developing a national consciousness. These professionals, whose numbers were rapidly growing and who aspired to form a national elite, became sharply aware that major economic activities were increasingly controlled by Anglophones. Regarding this as the result of a serious injustice done their fellow Francophones, they tended to view the anglophone merchants and bureaucrats as the most dangerous enemies of the French Canadian nation. Their ideology, warmly welcomed among small-scale merchants in French Canada, became steadily more hostile to the activities on which anglophone power was based. The francophone petite bourgeoisie glorified agriculture, defended the COUTUME DE PARIS and the seigneurial system (which it wanted to see extended throughout the province) and opposed the BRITISH AMERICAN LAND CO, loudly insisting that Lower Canadian territory was the exclusive property of the French Canadian nation. 
To promote its interests, the French-Canadian bourgeoisie fashioned the PARTI CANADIEN (which in 1826 became the Parti PATRIOTE). Party leaders explained economic disparities by the British control of the political machine and the distribution of patronage. They therefore developed a theory that, though it provided for political evolution along traditional British lines, also justified rule by the majority party in the legislative assembly. Party leader Pierre BÉDARD was the main architect of this strategy, which was inspired by a desire to apply the principle of ministerial responsibility. (Its obvious consequence was to transfer the bases of power to the francophone majority and to reduce the governor's powers.) In 1810, in the context of revolutionary and imperial wars, perpetual tension with the US and current ideas about colonial autonomy, these reformist plans seemed so radical that the suspicious Governor CRAIG arrested the editors of Le Canadien, suppressed this nationalist party organ and dissolved the legislative assembly. After the WAR OF 1812, Papineau, the new leader of the decapitated party, realized that it was necessary to seek more limited results. He focused on the struggle over control of revenues and on complaints, with the immediate objective of sharing power with his party's opponents. Papineau hoped in this way to control the clergy and win over the Irish Catholics, thus warding off accusations of nationalist extremism; it is from this perspective that the leadership roles in the party of John Neilson and, later, E.B. O'Callaghan can be explained. Only after 1827 did the pressure of events and from the militants cause Papineau to become more radical, and the idea of an independent Lower Canada then began to take root. The desire initially to win power by ordinary political means was at the heart of this adjustment of political ideology. But the British model was replaced by the American model, which justified the elective principle for all posts that exercised power, from justices of the peace and militia officers to legislative councillors and even the governor. As the political struggle intensified, the Parti patriote gained strength in French-Canadian circles, stirred up by nationalism, but lost popularity among Anglophones, who tended to align themselves with the anglophone merchants. Though they agreed on the main objective - national independence - Patriote militants disagreed about the kind of society that should follow their victory: the majority, which backed Papineau, wanted to continue the social ancien régime, whereas a minority hoped to build a new society inspired by authentic liberalism. These opposing views were to play a major role in the failure of the rebellions. The clergy, a class solidly enthroned on a complex institutional network that generated great revenues, naturally became engaged in the struggle for power. Having seen the effects of the French Revolution and the intervention of the Protestant colonial state in Québec education at the turn of the century, Québecois clerics were already aware of the threat to their social influence. They became even more aware when conflict flared between the Parti canadien and the merchants' party, which was supported by the governor. The ecclesiastical leaders became convinced that a local group was using parliamentary institutions to achieve its revolutionary intentions. 
Consequently, during the crisis of 1810, Monseigneur PLESSIS asked his priests to support (with little success) the government's candidates. When the War of 1812 began, it is not surprising that the episcopacy strongly denounced the Americans and demanded, on pain of religious sanction, that the population actively defend its territory. After 1815, reassured by peace and the more conciliatory attitude of the Parti canadien leaders, who opposed the Sulpician fathers (still French in origin) and supported the clergy's efforts to create a diocese in Montréal, clerical leaders began fighting for the restoration and extension of the privileges of their class. They sought control of primary education, perceiving that school was one of the main instruments of socialization. With Papineau's support the clergy won a dramatic but brief victory over the Protestant and state threat when the Parish Schools Act was passed in 1824. The clergy gained new strength when Monseigneur J.J. LARTIGUE became bishop of Montréal and devoted himself to reorienting clerical ideology and strategy to fight the lay and Protestant threat. He was well suited to the role: he was one of the first priests to break with GALLICAN ideology and to be won over by ULTRAMONTANE and theocratic doctrine. He followed the new form of nationalism, now detached from its liberal roots and justifying the dominant role of the clergy in a Catholic society. He hoped to restore to the church full control over educational institutions and to bring the clergy closer to the people so as to deepen church influence. But after 1829 the Parti patriote decided to establish assembly schools (nurseries for future patriotes), sought to democratize the management of the parishes and adopted a liberal and republican rhetoric. A break between the clergy and the French Canadian middle class became inevitable. Rebellions of 1837-38 The 3-way power struggle became more violent in March 1837, when the British government, to break the political and financial deadlock, adopted the Russell Resolutions, which effectively rejected the Patriotes' demands. The Patriotes were not well enough organized to jump immediately into a revolutionary venture, so they developed a strategy that provided for the possibility that the state would refuse to yield to the pressure of a mass movement while it gave them time to prepare an armed insurrection to begin after winter set in. The great parish and county assemblies began in May 1837 and spread agitation from parish to parish. For the present, action was supposed to stay within legal limits, but these assemblies, pushed by radicals, soon went beyond. Government leaders saw the uproar as a huge blackmail operation, but the better-informed clergy immediately understood the Patriotes' real objectives. By July 1837 Bishop Lartigue had given precise instructions for his priests in case of armed uprising. Agitation increased until the end of October, when the Patriotes held the "Assembly of the Six Counties" in ST-CHARLES-SUR-RICHELIEU. It was marked by a declaration of rights and by the adoption of resolutions suggesting a desire to overthrow the government. Meanwhile, militant Patriotes had been extremely active in Montréal, where they set up the FILS DE LA LIBERTÉ, an association that publicly advocated revolution, held military drills and paraded through the streets amid great commotion. 
A November 6 battle between the Fils de la liberté and anglophone members of the DORIC CLUB led to government intervention, something long and anxiously sought by country dwellers who were being harassed by the Patriotes. A few days later the government issued warrants for the arrest of the Patriote leaders, who hastily left Montréal and took refuge in the countryside. Armed confrontation came well ahead of the Patriotes' intended timetable. Following an incident in LONGUEUIL on November 16, the governor sent troops into the Richelieu Valley. On 23 November 1837 the Patriotes, led by Wolfred Nelson, took ST-DENIS, but 2 days later were defeated at ST-CHARLES. Having scattered the last insurgent ranks S of Montréal, General COLBORNE attacked ST-EUSTACHE on December 14 and ended Patriote resistance. Papineau, supreme commander, had hidden in St-Hyacinthe before taking refuge in the US under an assumed name. Many refugees gathered in the US and, until Lord DURHAM tried to calm tempers, attempted to plan an invasion of Lower Canada. Their efforts were complicated by a rift within Patriote ranks between the radicals, such as Coté and Nelson, and the more conservative elements led by Papineau. When Durham left Canada in early November 1838, a second rebellion broke out, led by the radicals. Even though the revolutionary organization, through the efforts of the Société des frères chasseurs (HUNTERS' LODGES), spread throughout the territory, the Patriotes had no more luck than the year before. By about mid-Nov 1838 order had been re-established in the Richelieu Valley. In 1837 Durham had exiled a few of the most seriously compromised political prisoners, but was rebuked by London. In 1838, 850 suspects were arrested; 108 were brought before a court-martial and 99 were sentenced to death; only a dozen were hanged and 58 were deported to Australia. The main winners in the revolution were the clergy, with its special vision of a French, Catholic nation, and the anglophone bourgeoisie, with its plans for development through economic measures. In 1840 the ACT OF UNION was passed in Britain, providing for the 1841 unification of Upper and Lower Canada into the single PROVINCE OF CANADA. Author FERNAND OUELLET
From the Canadian Historical Association and Library and Archives Canada.
- Keys to History: Search this "Keys to History" website for fascinating online exhibits about notable people, places, and events in Canadian history. From Montréal's McCord Museum.
- This UNB website provides access to extensive references and resources about the United Empire Loyalists and their descendants.
- This overview of the political history of Lower Canada is part of the "Canadian Confederation" website at Library and Archives Canada. Also features historical maps.
- Canadian Geographic: Historical Maps. Take a walk through the history of Canada. Select a year to see the maps and the history related to that era. From the "Canadian Geographic" website.
- A historical overview of the political turmoil and military action that engulfed Lower and Upper Canada during the Rebellions of 1837-1838. Many illustrations and interesting historical minutiae.
- Manoir-Papineau National Historic Site of Canada: This Parks Canada site features a detailed profile of Patriote leader Louis-Joseph Papineau, and background notes about the seigneurial system of land tenure.
- A Collector's Passion - The Peter Winkworth Collection: View an extensive collection of distinctive paintings that document more than four centuries of Canadian history. Also features artists' biographies and notes about specific paintings. From the Peter Winkworth Collection of Canadiana at Library and Archives Canada.
- Fathers of Confederation: Biographies of the Fathers of Confederation are part of the "Canadian Confederation" website from Library and Archives Canada. Includes historical photographs and other archival resources.
- Sir Étienne-Paschal Taché: This biography of Sir Étienne-Paschal Taché is part of the "Canadian Confederation" website. Includes photographs and other archival resources. From Library and Archives Canada.
- The Research Program in Historical Demography: This University of Montréal website features data and statistics about Québec history and Québec French-Canadian genealogy before 1800.
- Images from the Turn of a Century: 1760-1840. This extensive online exhibition is devoted to 18th and 19th Century Québec art, literature, and music. From the Université du Québec.
- Invasion Repulsed, 1812: This capsule history of the War of 1812 documents the primary issues that determined the course of the war and some of its outcomes. From the website for The Canadian Atlas Online.
- The Rebellions of 1837-1838: Learn about the simmering political and social issues that set off the insurrections in Lower and Upper Canada from 1837 to 1838. Features biographies of leading figures, great illustrations, maps and snippets of some of the fiery oratory of the time. Part of the Histori.ca "Peace and Conflict" educational website.
- The Peoples of Canada: A Pre-Confederation History, Second Edition. See online excerpts from the reference book "The Peoples of Canada: A Pre-Confederation History, Second Edition". From Oxford University Press.
- The Constitutional Act: Read an online digitized copy of the landmark "Constitutional Act," a decree signed by King George III of England on June 10, 1791, that created the provinces of Upper and Lower Canada. On its pages are details pertaining to the establishment of effective government institutions, the responsibilities of the lieutenant governor, the role of the church, and more. From Canadiana Online.
- John Coape Sherbrooke: A biography of John Coape Sherbrooke, army officer and colonial administrator.
From the Dictionary of Canadian Biography Online.
- Sir James Henry Craig: A biography of Sir James Henry Craig, army officer and colonial administrator. From the Dictionary of Canadian Biography Online.
http://www.thecanadianencyclopedia.com/articles/lower-canada
13
14
LIMITATIONS OF THE OCTET RULE: While the octet rule is a useful model that allows you to picture the structure of molecules, it is important to realize that not all MOLECULES obey the octet rule. The concept serves only as a rule of thumb.
MOLECULES WITH MORE THAN AN OCTET: Compounds also exist in which the central atom has more than an octet of electrons. All of the compounds formed from the noble gas elements (argon on down) are examples. A very important concept to remember: ONLY carbon, nitrogen, oxygen, and fluorine MUST have an octet! The existence of compounds of noble gas elements was once thought to be impossible, because the noble gas atoms already have complete octets. One of the first noble gas compounds to be synthesized was xenon tetrafluoride, XeF4. The electron dot structure for XeF4 has twelve electrons in the valence orbitals of xenon.
- PREDICTING THE SHAPES OF MOLECULES: There is no direct relationship between the formula of a compound and the shape of its molecules. The shapes of these molecules can be predicted from their Lewis structures, however, with a model developed about 30 years ago, known as the valence-shell electron-pair repulsion (VSEPR) theory. The VSEPR theory assumes that each atom in a molecule will achieve a geometry that minimizes the repulsion between electrons in the valence shell of that atom. The five compounds shown below can be used to demonstrate how the VSEPR theory can be applied to simple molecules.
- LINEAR MOLECULES: There are only two places in the valence shell of the central atom in CO2 where electrons can be found. Repulsion between these pairs of electrons can be minimized by arranging them so that they point in opposite directions. Thus, the VSEPR theory predicts that CO2 should be a linear molecule, with a 180° angle between the two C - O double bonds.
- TRIGONAL PLANAR MOLECULES: There are three places on the central atom in boron trifluoride (BF3) where valence electrons can be found. Repulsion between these electrons can be minimized by arranging them toward the corners of an equilateral triangle. The VSEPR theory, therefore, predicts a trigonal planar geometry for the BF3 molecule, with an F - B - F bond angle of 120°. Also, it is important to note that boron is VERY HAPPY with only six electrons and not eight. This is a "tricky" element that is overlooked easily.
- TETRAHEDRAL MOLECULES: CO2 and BF3 are both two-dimensional molecules, in which the atoms lie in the same plane. If we place the same restriction on methane (CH4), we should get a square-planar geometry in which the H - C - H bond angle is 90°. If we let this system expand into three dimensions, we end up with a tetrahedral molecule in which the H - C - H bond angle is approximately 109°.
- TRIGONAL BIPYRAMID MOLECULES: Repulsion between the five pairs of valence electrons on the phosphorus atom in PF5 can be minimized by distributing these electrons toward the corners of a trigonal bipyramid. Three of the positions in a trigonal bipyramid are labeled equatorial because they lie along the equator of the molecule. The other two are axial because they lie along an axis perpendicular to the equatorial plane. The angle between the three equatorial positions is 120°, while the angle between an axial and an equatorial position is 90°.
- OCTAHEDRON MOLECULES: There are six places on the central atom in SF6 where valence electrons can be found. The repulsion between these electrons can be minimized by distributing them toward the corners of an octahedron.
The term octahedron literally means "eight sides," but it is the six corners, or vertices, that interest us. To imagine the geometry of an SF6 molecule, locate fluorine atoms on opposite sides of the sulfur atom along the X, Y, and Z axes of an XYZ coordinate system.
The valence electrons on the central atom in both NH3 and H2O should be distributed toward the corners of a tetrahedron, as shown in the figure below. Our goal, however, is not predicting the distribution of valence electrons. It is to use this distribution of electrons to predict the shape of the molecule. Until now, the two have been the same. Once we include nonbonding electrons, that is no longer true. The VSEPR theory predicts that the valence electrons on the central atoms in ammonia and water will point toward the corners of a tetrahedron. Because we cannot locate the nonbonding electrons with any precision, this prediction cannot be tested directly. But the results of the VSEPR theory can be used to predict the positions of the nuclei in these molecules, which can be tested experimentally. If we focus on the positions of the nuclei in ammonia, we predict that the NH3 molecule should have a shape best described as trigonal pyramidal, with the nitrogen at the top of the pyramid. Water, on the other hand, should have a shape that can be best described as bent, or angular. Both of these predictions have been shown to be correct, which reinforces our faith in the VSEPR theory. Predict the shape of the following molecules: draw the Lewis dot structure and then name the shape of each.
- INCORPORATING DOUBLE AND TRIPLE BONDS: Compounds that contain double and triple bonds raise an important point: The geometry around an atom is determined by the number of places in the valence shell of an atom where electrons can be found, not the number of pairs of valence electrons. Consider the Lewis structures of carbon dioxide (CO2) and the carbonate (CO32-) ion, for example. There are four pairs of bonding electrons on the carbon atom in CO2, but only two places where these electrons can be found. (There are electrons in the C=O double bond on the left and electrons in the double bond on the right.) The force of repulsion between these electrons is minimized when the two C=O double bonds are placed on opposite sides of the carbon atom. The VSEPR theory, therefore, predicts that CO2 will be a linear molecule, just like BeF2, with a bond angle of 180°. The Lewis structure of the carbonate ion also suggests a total of four pairs of valence electrons on the central atom. But these electrons are concentrated in three places: the two C - O single bonds and the C=O double bond. Repulsion between these electrons is minimized when the three oxygen atoms are arranged toward the corners of an equilateral triangle. The CO32- ion should therefore have a trigonal-planar geometry, just like BF3, with a 120° bond angle.
Bond polarities (polar bonds) arise from bonds between atoms of different electronegativity. When more complex molecules are considered, we must consider the possibility of molecular polarities that arise from the sums of all of the individual bond polarities. To do full justice to molecular polarity, one must consider the concept of vectors (mathematical quantities that have both direction and magnitude). Let's begin by thinking of a polar bond as a VECTOR pointed from the positively charged atom to the negatively charged atom.
The size of the vector is proportional to the difference in electronegativity of the two atoms. If the two atoms are identical, the magnitude of the vector is ZERO, and the molecule has a nonpolar bond. Let's consider molecules with three atoms. We can establish from the Lewis dot symbols and VSEPR that CO2 is a linear molecule. Each of the C - O bonds will have a vector arrow pointing from the carbon to the oxygen. The two vectors should be identical and pointed in exactly opposite directions. The sum of these two vectors must be zero because the vectors cancel one another out. Even though the C - O bonds must be polar, the CO2 MOLECULE is NONPOLAR.
HCN, hydrogen cyanide, is linear. Since carbon is more electronegative than hydrogen, one would expect a vector pointing from H to C. In addition, nitrogen is more electronegative than carbon, so one should expect a bond vector pointing from C to N. The H-C and C-N vectors add to give a total vector pointing from the H to the N. HCN is a POLAR MOLECULE, with the net vector running from the hydrogen to the nitrogen, making the hydrogen end somewhat positive and the nitrogen end somewhat negative.
In contrast, let's examine the case of SO2. We know from the Lewis dot symbol and from VSEPR that this molecule is "bent." Its overall electron-pair geometry would be considered trigonal planar if we counted the lone pair on the sulfur. Lone pair electrons are NOT considered when we examine polarity, since they have already been taken into account in the electronegativity. We would predict that there should be polarity vectors pointing from the sulfur to the two oxygens. Since the molecule is bent, the vectors will NOT cancel out. Instead they should be added together to give a combined vector that bisects the O-S-O angle and points from the S to a point in between the two oxygens.
GO TO VSEPR WORKSHEET:
http://www.avon-chemistry.com/vsepr_explain.html
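The electron-domain counting and the bond-vector picture described in the lesson above can be pulled together in a short, illustrative sketch. The Python snippet below is not part of the original lesson: the geometry lookup table, the Pauling electronegativity values, and the idealized bond angles (for example, roughly 119° for SO2) are assumptions supplied only for demonstration.

```python
import math

# Idealized VSEPR bookkeeping: electron domains = atoms bonded to the central
# atom + lone pairs on it.  The lookup table is an assumption made for this
# sketch and only covers the cases discussed in the text.
GEOMETRY = {
    (2, 0): "linear (180 degrees)",
    (3, 0): "trigonal planar (120 degrees)",
    (4, 0): "tetrahedral (about 109.5 degrees)",
    (4, 1): "trigonal pyramidal (like NH3)",
    (4, 2): "bent / angular (like H2O)",
    (5, 0): "trigonal bipyramidal (120 and 90 degrees)",
    (6, 0): "octahedral (90 degrees)",
}

def predict_shape(bonded_atoms, lone_pairs):
    """Predicted molecular shape for a central atom under idealized VSEPR."""
    domains = bonded_atoms + lone_pairs
    return GEOMETRY.get((domains, lone_pairs), "not covered in this sketch")

# Bond polarity as vectors: each bond gets a vector whose length is the
# electronegativity difference and whose direction points toward the more
# electronegative atom.  The values below are commonly quoted Pauling values.
EN = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44, "S": 2.58}

def net_bond_vector(bonds):
    """bonds: list of (delta_EN, angle_in_degrees). Returns |vector sum|."""
    x = sum(d * math.cos(math.radians(a)) for d, a in bonds)
    y = sum(d * math.sin(math.radians(a)) for d, a in bonds)
    return math.hypot(x, y)

print("CO2:", predict_shape(2, 0), "| H2O:", predict_shape(2, 2),
      "| PF5:", predict_shape(5, 0), "| SF6:", predict_shape(6, 0))

d_CO = EN["O"] - EN["C"]
d_SO = EN["O"] - EN["S"]
print("CO2 net vector:", round(net_bond_vector([(d_CO, 0), (d_CO, 180)]), 3))  # ~0, nonpolar
print("SO2 net vector:", round(net_bond_vector([(d_SO, 0), (d_SO, 119)]), 3))  # > 0, polar
```

Running the sketch prints an essentially zero net bond vector for CO2 and a nonzero one for SO2, mirroring the nonpolar versus polar conclusions reached in the text.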
13
33
In computer security, a vulnerability is a weakness which allows an attacker to reduce a system's information assurance. Vulnerability is the intersection of three elements: a system susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw. To exploit a vulnerability, an attacker must have at least one applicable tool or technique that can connect to a system weakness. In this frame, vulnerability is also known as the attack surface. Vulnerability management is the cyclical practice of identifying, classifying, remediating, and mitigating vulnerabilities. This practice generally refers to software vulnerabilities in computing systems.
A security risk may be classified as a vulnerability. Using vulnerability with the same meaning as risk can lead to confusion. Risk is tied to the potential of a significant loss, and there can be vulnerabilities without risk: for example, when the affected asset has no value. A vulnerability with one or more known instances of working and fully implemented attacks is classified as an exploitable vulnerability — a vulnerability for which an exploit exists. The window of vulnerability is the time from when the security hole was introduced or manifested in deployed software, to when access was removed, a security fix was available/deployed, or the attacker was disabled — see zero-day attack.
A security bug (security defect) is a narrower concept: there are vulnerabilities that are not related to software; hardware, site, and personnel vulnerabilities are examples of vulnerabilities that are not software security bugs. Constructs in programming languages that are difficult to use properly can be a large source of vulnerabilities.
Various sources define vulnerability as:
- A weakness of an asset or group of assets that can be exploited by one or more threats, where an asset is anything that has value to the organization, its business operations and their continuity, including information resources that support the organization's mission
- A flaw or weakness in a system's design, implementation, or operation and management that could be exploited to violate the system's security policy
- Vulnerability — Weakness in an IS, system security procedures, internal controls, or implementation that could be exploited
- A flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system's security policy
- The existence of a weakness, design, or implementation error that can lead to an unexpected, undesirable event [G.11] compromising the security of the computer system, network, application, or protocol involved (ITSEC)
- The probability that threat capability exceeds the ability to resist the threat
- The probability that an asset will be unable to resist the actions of a threat agent
According to FAIR, vulnerability is related to control strength, i.e. the strength of a control as compared to a standard measure of force, and threat capabilities, i.e. the probable level of force that a threat agent is capable of applying against an asset.
- A weakness in design, implementation, operation or internal control
Data and Computer Security: Dictionary of standards concepts and terms (authors Dennis Longley and Michael Shain, Stockton Press, ISBN 0-935859-17-9) defines vulnerability as:
- 1) In computer security, a weakness in automated systems security procedures, administrative controls, Internet controls, etc., that could be exploited by a threat to gain unauthorized access to information or to disrupt critical processing. 2) In computer security, a weakness in the physical layout, organization, procedures, personnel, management, administration, hardware or software that may be exploited to cause harm to the ADP system or activity. 3) In computer security, any weakness or flaw existing in a system. The attack or harmful event, or the opportunity available to a threat agent to mount that attack.
Matt Bishop and Dave Bailey give the following definition of computer vulnerability:
- A computer system is composed of states describing the current configuration of the entities that make up the computer system. The system computes through the application of state transitions that change the state of the system. All states reachable from a given initial state using a set of state transitions fall into the class of authorized or unauthorized, as defined by a security policy. In this paper, the definitions of these classes and transitions are considered axiomatic. A vulnerable state is an authorized state from which an unauthorized state can be reached using authorized state transitions. A compromised state is the state so reached. An attack is a sequence of authorized state transitions which end in a compromised state. By definition, an attack begins in a vulnerable state. A vulnerability is a characterization of a vulnerable state which distinguishes it from all non-vulnerable states. If generic, the vulnerability may characterize many vulnerable states; if specific, it may characterize only one...
The National Information Assurance Training and Education Center defines vulnerability as:
- 1. A weakness in automated system security procedures, administrative controls, internal controls, and so forth, that could be exploited by a threat to gain unauthorized access to information or disrupt critical processing. 2. A weakness in system security procedures, hardware design, internal controls, etc., which could be exploited to gain unauthorized access to classified or sensitive information. 3. A weakness in the physical layout, organization, procedures, personnel, management, administration, hardware, or software that may be exploited to cause harm to the ADP system or activity. The presence of a vulnerability does not in itself cause harm; a vulnerability is merely a condition or set of conditions that may allow the ADP system or activity to be harmed by an attack. 4. An assertion primarily concerning entities of the internal environment (assets); we say that an asset (or class of assets) is vulnerable (in some way, possibly involving an agent or collection of agents); we write: V(i,e), where e may be an empty set. 5. Susceptibility to various threats. 6. A set of properties of a specific internal entity that, in union with a set of properties of a specific external entity, implies a risk. 7. The characteristics of a system which cause it to suffer a definite degradation (incapability to perform the designated mission) as a result of having been subjected to a certain level of effects in an unnatural (manmade) hostile environment.
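The Bishop and Bailey state-machine definition quoted above can be made concrete with a toy sketch. The states, transitions, and policy in the following Python snippet are invented purely for illustration; the point is only that a "vulnerable" state is an authorized state from which some unauthorized state is reachable using authorized transitions.

```python
# Toy rendering of the Bishop and Bailey definition. All names are hypothetical.
AUTHORIZED_TRANSITIONS = {            # state -> states reachable in one authorized step
    "logged_out": {"logged_in"},
    "logged_in": {"reading_own_files", "debug_mode"},
    "reading_own_files": {"logged_in"},
    "debug_mode": {"reading_any_file"},   # the flawed step
    "reading_any_file": set(),
}
UNAUTHORIZED = {"reading_any_file"}       # states the (toy) security policy forbids

def reachable(start, transitions):
    """All states reachable from `start` using only authorized transitions."""
    seen, stack = set(), [start]
    while stack:
        state = stack.pop()
        if state not in seen:
            seen.add(state)
            stack.extend(transitions.get(state, ()))
    return seen

def vulnerable_states(transitions, unauthorized):
    """Authorized states from which an unauthorized state can be reached."""
    authorized = set(transitions) - unauthorized
    return {s for s in authorized if reachable(s, transitions) & unauthorized}

print(vulnerable_states(AUTHORIZED_TRANSITIONS, UNAUTHORIZED))
# Every authorized state is vulnerable here, because debug_mode leads to reading_any_file.
```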
The term "vulnerability" relates to some other basic security terms as shown in the following diagram: + - - - - - - - - - - - - + + - - - - + + - - - - - - - - - - -+ | An Attack: | |Counter- | | A System Resource: | | i.e., A Threat Action | | measure | | Target of the Attack | | +----------+ | | | | +-----------------+ | | | Attacker |<==================||<========= | | | | i.e., | Passive | | | | | Vulnerability | | | | A Threat |<=================>||<========> | | | | Agent | or Active | | | | +-------|||-------+ | | +----------+ Attack | | | | VVV | | | | | | Threat Consequences | + - - - - - - - - - - - - + + - - - - + + - - - - - - - - - - -+ A resource (either physical or logical) may have one or more vulnerabilities that can be exploited by a threat agent in a threat action. The result can potentially compromise the confidentiality, integrity or availability of resources (not necessarily the vulnerable one) belonging to an organization and/or others parties involved(customers, suppliers). The so-called CIA triad is the basis of Information Security. The attack can be active when it attempts to alter system resources or affect their operation: so it compromises integrity or availability. A "passive attack" attempts to learn or make use of information from the system but does not affect system resources: so it compromises Confidentiality. OWASP (see figure) depicts the same phenomenon in slightly different terms: a threat agent through an attack vector exploits a weakness (vulnerability) of the system and the related security controls causing a technical impact on an IT resource (asset) connected to a business impact. A set of policies concerned with information security management, the information security management system (ISMS), has been developed to manage, according to Risk management principles, the countermeasures in order to accomplish to a security strategy set up following rules and regulations applicable in a country. Countermeasures are also called Security controls; when applied to the transmission of information are named security services. Vulnerabilities are classified according to the asset class they are related to: - susceptibility to humidity - susceptibility to dust - susceptibility to soiling - susceptibility to unprotected storage - insufficient testing - lack of audit trail - unprotected communication lines - insecure network architecture - inadequate recruiting process - inadequate security awareness - area subject to flood - unreliable power source - lack of regular audits - lack of continuity plans - lack of security - Complexity: Large, complex systems increase the probability of flaws and unintended access points - Familiarity: Using common, well-known code, software, operating systems, and/or hardware increases the probability an attacker has or can find the knowledge and tools to exploit the flaw - Connectivity: More physical connections, privileges, ports, protocols, and services and time each of those are accessible increase vulnerability - Password management flaws: The computer user uses weak passwords that could be discovered by brute force. The computer user stores the password on the computer where a program can access it. Users re-use passwords between many programs and websites. - Fundamental operating system design flaws: The operating system designer chooses to enforce suboptimal policies on user/program management. 
For example, operating systems with policies such as default permit grant every program and every user full access to the entire computer. This operating system flaw allows viruses and malware to execute commands on behalf of the administrator.
- Internet website browsing: Some internet websites may contain harmful spyware or adware that can be installed automatically on the computer systems. After visiting those websites, the computer systems become infected and personal information will be collected and passed on to third party individuals.
- Software bugs: The programmer leaves an exploitable bug in a software program. The software bug may allow an attacker to misuse an application.
- Unchecked user input: The program assumes that all user input is safe. Programs that do not check user input can allow unintended direct execution of commands or SQL statements (known as buffer overflows, SQL injection or other non-validated inputs). A minimal code sketch of this flaw appears further below.
- Not learning from past mistakes: for example, most vulnerabilities discovered in IPv4 protocol software were rediscovered in the new IPv6 implementations.
Research has shown that the most vulnerable point in most information systems is the human user, operator, designer, or other human, so humans should be considered in their different roles as asset, threat, and information resource. Social engineering is an increasing security concern.
The impact of a security breach can be very high. The fact that IT managers, or upper management, can (easily) know that IT systems and applications have vulnerabilities and yet take no action to manage the IT risk is seen as misconduct in most legislations. Privacy law forces managers to act to reduce the impact or likelihood of that security risk. An information technology security audit is a way to let other, independent people certify that the IT environment is managed properly and to lessen the responsibilities, or at least to demonstrate good faith. A penetration test is a form of verification of the weaknesses and countermeasures adopted by an organization: a white hat hacker tries to attack an organization's information technology assets to find out how easy or difficult it is to compromise the IT security. The proper way to professionally manage IT risk is to adopt an information security management system, such as ISO/IEC 27002 or Risk IT, and follow it according to the security strategy set forth by upper management.
One of the key concepts of information security is the principle of defence in depth, i.e. setting up a multilayer defence system that can:
- prevent the exploit
- detect and intercept the attack
- find out the threat agents and prosecute them
Physical security is a set of measures to physically protect the information asset: if somebody can get physical access to the information asset, it is quite easy to make resources unavailable to their legitimate users.
Responsible disclosure (many now refer to it as 'coordinated disclosure' because the former term is seen as biased) of vulnerabilities is a topic of great debate. As reported by The Tech Herald in August 2010, "Google, Microsoft, TippingPoint, and Rapid7 have recently issued guidelines and statements addressing how they will deal with disclosure going forward." A responsible disclosure first alerts the affected vendors confidentially before alerting CERT two weeks later, which grants the vendors another 45-day grace period before publishing a security advisory.
Full disclosure is done when all the details of a vulnerability are publicized, perhaps with the intent to put pressure on the software or procedure authors to find a fix urgently. Well-respected authors have published books on vulnerabilities and how to exploit them: Hacking: The Art of Exploitation, Second Edition is a good example. Security researchers catering to the needs of the cyberwarfare or cybercrime industry have stated that this approach does not provide them with adequate income for their efforts. Instead, they offer their exploits privately to enable zero-day attacks. The never-ending effort to find new vulnerabilities and to fix them is called computer insecurity. The Mitre Corporation maintains a list of disclosed vulnerabilities in a system called Common Vulnerabilities and Exposures, where vulnerabilities are classified (scored) using the Common Vulnerability Scoring System (CVSS). OWASP collects a list of potential vulnerabilities in order to prevent system designers and programmers from inserting vulnerabilities into the software.
Vulnerability disclosure date
The time of disclosure of a vulnerability is defined differently in the security community and industry. It is most commonly referred to as "a kind of public disclosure of security information by a certain party". Usually, vulnerability information is discussed on a mailing list or published on a security web site and results in a security advisory afterward. The time of disclosure is the first date a security vulnerability is described on a channel where the disclosed information on the vulnerability fulfills the following requirements:
- The information is freely available to the public
- The vulnerability information is published by a trusted and independent channel/source
- The vulnerability has undergone analysis by experts such that risk rating information is included upon disclosure
Identifying and removing vulnerabilities
Many software tools exist that can aid in the discovery (and sometimes removal) of vulnerabilities in a computer system. Though these tools can provide an auditor with a good overview of possible vulnerabilities present, they cannot replace human judgment. Relying solely on scanners will yield false positives and a limited-scope view of the problems present in the system. Vulnerabilities have been found in every major operating system, including Windows, Mac OS, various forms of Unix and Linux, OpenVMS, and others. The only way to reduce the chance of a vulnerability being used against a system is through constant vigilance, including careful system maintenance (e.g. applying software patches), best practices in deployment (e.g. the use of firewalls and access controls) and auditing (both during development and throughout the deployment lifecycle).
Examples of vulnerabilities
Vulnerabilities are related to:
- the physical environment of the system
- the personnel
- administration procedures and security measures within the organization
- business operation and service delivery
- communication equipment and facilities
- and their combinations.
It is evident that a pure technical approach cannot even protect physical assets: you need administrative procedures that let maintenance personnel enter the facilities, and people with adequate knowledge of the procedures who are motivated to follow them with proper care. See social engineering (security).
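As a minimal illustration of the "unchecked user input" flaw listed earlier, the following Python sketch contrasts splicing user input directly into an SQL statement with using a parameterized query. The table, rows, and attacker string are hypothetical and exist only for this example; the point is that the parameterized form treats the input as data rather than as SQL.

```python
import sqlite3

# Hypothetical in-memory database for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t'), ('bob', 'hunter2')")

user_input = "x' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced into the SQL text, so the attacker's quotes
# change the meaning of the statement (SQL injection).
unsafe_sql = "SELECT name FROM users WHERE name = '" + user_input + "'"
print("unsafe:", conn.execute(unsafe_sql).fetchall())  # returns every row

# Safer: a parameterized query keeps the input purely as data.
safe_sql = "SELECT name FROM users WHERE name = ?"
print("safe:", conn.execute(safe_sql, (user_input,)).fetchall())  # returns []
```

The same principle applies at any interpreter boundary, such as command lines or generated HTML: escape or parameterize rather than concatenate untrusted strings.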
Four examples of vulnerability exploits:
- an attacker finds and uses an overflow weakness to install malware to export sensitive data;
- an attacker convinces a user to open an email message with attached malware;
- an insider copies a hardened, encrypted program onto a thumb drive and cracks it at home;
- a flood damages your computer systems installed on the ground floor.
Common types of software flaws that lead to vulnerabilities include:
- Memory safety violations, such as buffer overflows and dangling pointers
- Input validation errors, such as SQL injection and cross-site scripting
- Race conditions, such as time-of-check-to-time-of-use bugs
- Privilege-confusion bugs, such as cross-site request forgery
- Privilege escalation
- User interface failures
Sets of coding guidelines have been developed, and a large number of static code analysers have been used to verify that code follows the guidelines.
References
- "The Three Tenets of Cyber Security". U.S. Air Force Software Protection Initiative. Retrieved 2009-12-15.
- Foreman, P.: Vulnerability Management, page 1. Taylor & Francis Group, 2010. ISBN 978-1-4398-0150-5
- ISO/IEC, "Information technology -- Security techniques -- Information security risk management" ISO/IEC FIDIS 27005:2008
- British Standards Institution, Information technology -- Security techniques -- Management of information and communications technology security -- Part 1: Concepts and models for information and communications technology security management, BS ISO/IEC 13335-1-2004
- Internet Engineering Task Force, RFC 2828 Internet Security Glossary
- CNSS Instruction No. 4009, dated 26 April 2010
- FISMApedia, a wiki project devoted to FISMA
- FISMApedia Vulnerability term
- NIST SP 800-30, Risk Management Guide for Information Technology Systems
- Risk Management Glossary: Vulnerability
- Technical Standard Risk Taxonomy, ISBN 1-931624-77-1, Document Number C081, published by The Open Group, January 2009
- "An Introduction to Factor Analysis of Information Risk (FAIR)", Risk Management Insight LLC, November 2006
- Matt Bishop and Dave Bailey. A Critical Analysis of Vulnerability Taxonomies. Technical Report CSE-96-11, Department of Computer Science at the University of California at Davis, September 1996
- Schou, Corey (1996). Handbook of INFOSEC Terms, Version 2.0. CD-ROM (Idaho State University & Information Systems Security Organization)
- NIATEC Glossary
- Wright, Joe; Jim Harmening (2009). "15". In Vacca, John. Computer and Information Security Handbook. Morgan Kaufmann Publications. Elsevier Inc. p. 257. ISBN 978-0-12-374354-1
- ISACA, THE RISK IT FRAMEWORK (registration required)
- Kakareka, Almantas (2009). "23". In Vacca, John. Computer and Information Security Handbook. Morgan Kaufmann Publications. Elsevier Inc. p. 393. ISBN 978-0-12-374354-1
- Technical Report CSD-TR-97-026, Ivan Krsul, The COAST Laboratory, Department of Computer Sciences, Purdue University, April 15, 1997
- The Web Application Security Consortium Project, Web Application Security Statistics 2009
- Ross Anderson. Why Cryptosystems Fail. Technical report, University Computer Laboratory, Cambridge, January 1994
- Neil Schlager. When Technology Fails: Significant Technological Disasters, Accidents, and Failures of the Twentieth Century. Gale Research Inc., 1994
- Hacking: The Art of Exploitation, Second Edition
- Kiountouzis, E. A.; Kokolakis, S. A. Information systems security: facing the information society of the 21st century. London: Chapman & Hall, Ltd. ISBN 0-412-78120-4
- Bavisi, Sanjay (2009). "22". In Vacca, John. Computer and Information Security Handbook. Morgan Kaufmann Publications. Elsevier Inc. p. 375.
ISBN 978-0-12-374354-1
- The Tech Herald: The new era of vulnerability disclosure — a brief chat with HD Moore
- Blog post about DLL hijacking vulnerability disclosure
- OWASP vulnerability categorization
- Security advisories links from the Open Directory: http://www.dmoz.org/Computers/Security/Advisories_and_Patches/
http://en.m.wikipedia.org/wiki/Vulnerability_(computing)
13
19
ECOLOGY OF THE WOLF Populations of wolves that are unexploited by man are rare and generally so remote that a long-term study is prohibitively expensive or impractical. National parks provide unique environments in which to observe these important predators. The insular nature of Isle Royale lends special significance to wolf-population studies, for there is relatively little opportunity for transfer between the island and the mainland. While the wolf population has exhibited great year-to-year stability in total numbers, there have been important variations in its social organization. Since the population is rather small, significant changes in its structure can be linked circumstantially to single events such as the death of an alpha male (Jordan et al. 1967) or possible ingress of a new pack (Wolfe and Allen 1973). The basic, long-term pattern of a single, large pack and several smaller social units has recently changed. In the early 1970s the island supported two large packs that each utilized about half the area. This provided the potential for an increase in total population, and wolf numbers reached a midwinter high of 44 in 1976. The recent development of a second major pack on the island appears to have resulted from a significant increase in moose vulnerability and a higher beaver population, an important summer prey species. This addition has caused a higher level of predation on moose in winter, especially when deep-snow conditions increased the vulnerability of calves. Observations of wolves in winter were made either from light aircraft or from the ground using a telescope at long range. Summer observations were limited to a period of several days at one rendezvous site. Alpha wolves were the only animals consistently identified from year to year. In addition to providing the pack with leadership, they were most active in scent-marking their environment (during winter observations) and were the most active breeders. Alpha wolves restricted the courtship activities of subordinate members of the pack, and mate preferences were demonstrated, both of which contributed to a reduction in courtship behavior and presumably of mating among subordinate wolves. The recent establishment of a second wolf pack on Isle Royale was a significant departure from the pattern observed in the 1960s, when the population remained remarkably constant. The history of this wolf population has illustrated the effectiveness of natural mechanisms which adjusted wolf numbers to their food base. A Brief History Yearly variations in the Isle Royale wolf population are detailed by Mech (1966), Jordan et al. (1967), and Wolfe and Allen (1973). These provide the basis for the following review. During the initial 11 winters of the project (1959-69), the wolf population varied between 17 and 28 (Table 5). The highest population occurred while the large pack was still in operation in 1965; the lowest was in 1969 after 2 years of social instability. TABLE 5. Estimated number of wolves on Isle Royale in midwinter, 1959-76. From 1959 through 1966, the population contained only one large pack. For the first 3 years, this pack traveled over the entire island in midwinter; in subsequent years, its movement usually was restricted to the southwestern two-thirds of the island. The large pack numbered 15-17 wolves from 1959 through 1963. It reached an all-time high in 1964, when 22 wolves were seen. From 1964 to 1966, the pack remained large, at 15-20 wolves. 
In 1965, a pack of five appeared, believed to have separated from the large pack. This group may have persisted as a pack of four in 1966, although there was some speculation that it left the island in late winter 1965. In 1966, the large pack initially numbered 15, but three wolves dissociated from the pack shortly after the winter study began. The alpha male, recognizable from 1964 to 1966, developed a limp and was apparently killed by other wolves in March 1966. For the remainder of the 1966 study, the largest group numbered eight wolves. The strong leadership of the alpha male was thought to be instrumental in the maintenance of the large pack, and its fragmentation was linked circumstantially to his demise. When the 1967 winter study began, two packs (six and seven wolves) were found in the central and southwestern parts of the island. These two packs may have been remnants of the large pack since their travels overlapped considerably. Another pack of four occupied the northeastern end of the island. In February, a pack of seven, including four black wolves, was seen in Amygdaloid Channel, apparently having crossed the frozen channel from Ontario. There was evidence of violence among wolves; a wolf with a bloody head was seen running toward Canada when the "Black Pack" was first seen, and a few days later an injured wolf was observed near the lodge buildings at Rock Harbor. The packs of six and four were not relocated after the Black Pack was seen. Between 2 and 7 February 1968, two black wolves were observed in a pack of six at the west end of the island. Another pack, numbering seven (the Big Pack), was first seen on 12 February and included one black wolf. While these packs could have been the same, Wolfe and Allen (1973) considered them distinct and suggested that the pack with two black wolves left the island via an existing ice bridge. The single black wolf in the Big Pack was probably one of the four black wolves first seen in 1967. How this wolf became integrated into a resident pack is unknown; Wolfe and Allen (1973) speculated that it could have been associated previously in some way with wolves in this pack, thus implying an additional interchange of wolves between the island and the mainland. The Big Pack included three wolves that were recognizable from 1968 through 1970: the alpha male and female, and the black wolf, a male. The alpha pair was observed mating in 1968. Mutual courtship was observed in this pair in 1969 and 1970, indicating probable mating. In all 3 years, the black male was often seen in the company of the alpha pair and seemed to enjoy special status. Consequently, he was designated the second-ranked, or beta, male. A photograph of the alpha female, taken in 1968 by D. L. Allen, revealed an unusual conformation in her left front leg. By 1972, she had developed a severe limp in this leg and was not seen the following year. The Big Pack persisted into the present study period and became known as the West Pack after the establishment of a second pack in 1972.
Annual Fluctuations, 1971-74
During the present study, the Isle Royale wolf population continued to increase from a low of 17 in 1968 to a high of 31 in 1974. In summer 1971, a second pack became established. During winter studies from 1972 through 1974, each pack occupied approximately half of the island.
1971. The Big Pack (hereafter referred to as the West Pack) was recognized in 1971 by the presence of the black male and alpha female.
The black male was clearly the alpha male, replacing the large gray male that had been dominant from 1968 through 1970. Although ten members were seen twice, the pack usually numbered seven to nine wolves. On the basis of limited behavioral information, two pups were believed present in the pack. The pack ranged over the southwestern third of the island, venturing northeast as far as the middle of Siskiwit Lake. In addition to the main pack, three duos were observed: one traveled among the peninsulas of the northeast end of the island, a second duo ranged from Moskey Basin through Chippewa Harbor to Wood Lake, and a third inhabited the shoreline of Siskiwit Bay, traveling between Houghton Point and Malone Bay. Four single wolves also were recognized, with their respective activities centered at the southwest end, the northeast end, the north shore west of Todd Harbor, and Malone Bay. While the maximum number of wolves seen on a single day was 16, the presence of three duos and four singles was well established, and the population may be summarized as follows:
1972. Two packs (West and East) accounted for most of the island's wolves from January to March 1972. Each pack commonly numbered eight wolves in late January, but consistently numbered seven and ten, respectively, after mid-February. The ranges of these packs did not overlap; each occupied about half of the island. In addition, a trio of wolves operated in the Malone Bay-Siskiwit Bay area, with tracks suggesting that they ranged along the shore of the island as far as Chippewa Harbor. The East Pack had its origins within the wolf population present the previous winter, since there was no ice bridge to Canada in the interval between the winter studies of 1971 and 1972. Besides the alpha male and female there were six wolves that were uniform in size and body markings, virtually indistinguishable during observations or in photographs (Fig. 30). All six had the physical appearance of pups, presumably a litter from the alpha pair. This conclusion was supported by the fact that the alpha male was never observed chasing any of these six wolves away from the alpha female during the mating season. In 1972 the alpha wolves in the West Pack were the same individuals as in 1971: the black male and the small, gray female (Fig. 31). No other wolves in the pack were identifiable from 1971, and the presence of pups was not confirmed. In spite of a limp, the alpha female was able to retain her dominant status, though occasionally she had trouble keeping up with the other wolves in the pack. I saw this female for the last time in May 1972, when she walked, still limping, along the shore of an inland lake with the black male; her summer coat was quite reddish. In September 1972, the black male, a smaller, reddish wolf (probably the alpha female), and three gray wolves were seen lying on an open ridge (Coley Thede, pers. comm.). The black alpha male and female apparently died between September 1972 and January 1973. The black male was then at least 6.5 years old, since he was first observed in 1967. The alpha female was also at least 6.5 years old when she died, because she mated in 1968 and had to be at least 22 months old at that time. She was probably older, since it is rather unlikely that she could have reached the position of alpha female by her second year. On 24 February, a total of 20 wolves was observed (packs of seven and ten plus the trio).
It was obvious from tracks on fresh snow that at least two single wolves were also present: one in the vicinity of Chippewa Harbor and one on the north shore near Little Todd Harbor. On 3 March, a wolf was seen following and apparently trying to remain hidden from the pack. Accordingly, the population totaled:
1973. The East and West packs, numbering 8 and 13 wolves, were again well defined. Spatial arrangements between the packs were similar, although their travels overlapped along the north shore of the island, where they visited each other's kills. On 24 and 25 February, a total of 23 wolves was seen. The final population estimate was: The "Todd duo" was seen only three times. Judging from tracks, most of their activity was centered in the Todd Harbor area, although once they traveled from Little Todd Harbor to Lake Whittlesey. The loner, positively identified as a male, was seen only once, but tracks indicated that he ranged along the north shore from a point opposite Lake Desor to the northeast end of Amygdaloid Island, a distance of 40 km. The leadership of the West Pack had changed completely since the previous year: a new alpha pair had replaced the black male and his limping mate. From their appearance and, especially, their behavior, four wolves in this pack were classed as pups. One of these disappeared from the pack around 20 February and was not seen again. The East Pack numbered 12 or 13 for the entire winter study. Observations were hampered by the wolves' extreme avoidance of the study plane, probably resulting from disturbance by other aircraft earlier in the winter. The alpha pair had not changed from the previous year (Fig. 32). The number of pups was estimated from the increase in maximum pack size from 1972 to 1973, certainly a minimum figure since it assumes no mortality in the intervening year.
1974. Both main packs increased in size from 1973, and a duo and at least one single were present. The 31 wolves observed on 17 February provided the following minimum count: Again, each pack inhabited its respective end of the island, but movements of the East Pack into West Pack territory increased the amount of overlap. Two wolves again were active in the Todd Harbor area, quite possibly the duo of 1973. In late January, the West Pack numbered 11 or 12. The increase in pack size from 1973 indicated the presence of at least four pups. Early in February, the pack broke into several smaller groups, and the alpha male, recognizable from 1973, was the dominant wolf in one group of four. His mate, the alpha female, was also in this group, though there was some uncertainty that this was the same female as during the previous year. A single wolf was tolerated by this group near a kill, and it was probably one of the original pack members. Several days later, another group of four was seen leaving a kill in the interior of the island. Two other wolves, soon joined by a single, were observed in the Washington Harbor area, and this trio stayed together for the remainder of the study period. Since no other wolves were seen in the West Pack's range, it appeared that the West Pack had broken into units of 4, 3, 3, and 1. The 4 wolves, one group of 3, and a single wolf reunited in March, forming a pack of 8. On Isle Royale, wolf movements in winter vary from year to year, depending on snow conditions and the presence of shoreline ice. Extensive travel within a pack's range is necessary to locate vulnerable prey; such travel is lowest in years when vulnerable prey are abundant.
Natural topography determines the ease of travel in different areas of the island, with the principal avenues for wolf movements consisting of chains of lakes, shorelines, old beachlines of Lake Superior, and bedrock ridges. The shorelines of the island stand out as principal hunting areas for wolves. Moose often seek conifer cover in winter, and since most of the conifer cover on the island is located in predominantly spruce-fir forests near lake level (Linn 1957), moose densities in midwinter tend to be highest along lakeshores. This creates an optimum hunting arrangement for wolves. Of a total of 325 wolf-killed moose located in winters from 1959 through 1974, 45% were within 200 m of either Lake Superior or Siskiwit Lake, a large interior lake. The distribution of wolf-killed moose from 16 winter periods further suggests that some areas of the island produce more favorable hunting conditions than others. Kill density is obviously high in the area of North Gap (mouth of Washington Harbor), Malone Bay, Chippewa Harbor-Lake Mason, and Blake Point. All of these locations receive a high level of hunting effort in winter either because they are land masses lying between frozen lakes or bays, or because many travel routes intersect in those areas. Blake Point was hunted by a large pack in 1972 for the first time in several years, and perhaps a high proportion of vulnerable moose had been allowed to accumulate there. Other areas, notably the 1936 burn, have produced few kills in recent years, probably because of a gradual reduction in use of old burns by moose and unusually deep snow in several recent winters that restricted moose to more dense forest types. Travel routes of the East and West packs during winters 1972-74 are shown in Figs. 33-35, along with locations of old and fresh kills. With the exception of the West Pack in 1974 (which fragmented and was impossible to track adequately), the routes shown represent continuous movements during the period of study. Variations in extent of travel and actual routes used are explained below in relation to snow and ice conditions. EFFECT OF SHORELINE ICE Travel around the perimeter of the island was extensive in 1972 and 1974, but quite reduced in 1973 (Figs. 33-35). In both 1972 and 1974, shelf ice was continuous around the island for most of the study period, and shorelines were used commonly by wolves (Fig. 36). In contrast, little shelf ice formed in 1973, and wolves had to travel onshore. Similarly when there was no shelf ice in 1969, wolves made extensive use of the interior even though snow was exceptionally deep (Wolfe and Allen 1973). Occasionally, wolves venture onto ice that is very thin, especially if it is covered with snow. One morning in February 1974, the East Pack rested within 50 m of the edge of the shelf ice near Houghton Point. Tracks of one wolf led directly to the edge and then back to a resting place close to the other wolves. In the afternoon, the thin ice where the wolf had walked broke off and floated away. EFFECT OF SNOW CONDITIONS Relative to moose, wolves have a lighter foot loading (weight-load-on-track) and consequently receive greater support from snow of a given density. Weight load-on-track for five wolves in the Soviet Union ranged from 89 to 114 g/cm2 (Nasimovich, 1955). In contrast, a cow and calf necropsied on Isle Royale in 1973 had foot-loadings of 488 g/cm2 and 381 g/cm2, respectively. 
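Weight-load-on-track is simply the animal's weight divided by the total area of the tracks that support it. The short Python sketch below uses hypothetical round numbers, not the measured Isle Royale values, purely to show how a figure expressed in g/cm2 arises.

```python
def foot_loading(body_mass_kg, track_area_cm2_per_foot, feet_on_ground=4):
    """Weight-load-on-track in g/cm^2: body mass over total supporting track area."""
    return (body_mass_kg * 1000.0) / (track_area_cm2_per_foot * feet_on_ground)

# Hypothetical illustration: a 35 kg wolf with ~90 cm^2 per foot versus a
# 400 kg cow moose with ~210 cm^2 per hoof (values assumed for this example).
print(round(foot_loading(35, 90), 1))    # ~97 g/cm^2, within the range cited for wolves
print(round(foot_loading(400, 210), 1))  # ~476 g/cm^2, comparable to the cow figure
```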
Measurements by others range from 420 g/cm2 to over 1000 g/cm2, depending on the sex and age of the moose (Nasimovich 1955; Kelsall 1969; Kelsall and Telfer 1971). Moose rarely receive consistent support from crusts on the surface or within the snow profile (Kelsall and Prescott 1971), and wolves often have a considerable advantage when crusts are strong enough to support them. For example, in 1972, wolves on Isle Royale appeared to be supported by a crust located 20 cm below the surface of the snow, although moose calves broke through and moved with difficulty. Crusting conditions and frequent thaws (which increase snow density) during the entire 1973 winter study allowed wolves to travel with relative ease throughout the interior of the island (Fig. 34). Similar conditions prevailed during the first half of March 1974. At such times moose usually remained in areas of conifer cover, and their movements seemed greatly restricted. Since wolves have relatively short legs, they are greatly handicapped by deep, soft snow. Nasimovich (1955) found that wolves sank to their chests in snow of density 0.21 or less, which describes essentially all fresh-snow conditions. Thus, wolves generally travel in single file through snow, and have been observed moving into this formation in response to as little as 20-25 cm of snow along lake edges (Fig. 37). Nasimovich also found that wolves had difficulty chasing ungulate prey when snow depths exceeded 41 cm, and, with depths greater than 50-60 cm, pursuit through untracked snow was almost impossible. In 1971, 41 cm of fresh snowfall on a 51-cm base precluded extensive travel by wolves in the interior of the island. Frequent fresh snow in 1972 kept depths in open areas above 75 cm, and, in spite of a crust within the snow profile, movements of wolves usually were limited to shorelines. The distribution of wolf-killed moose illustrates one effect of deep snow. When snow depth exceeded 75 cm, there was a significant increase in the number of kills located within 0.8 km of a shoreline, although part of this increase is related to changes in moose distribution.
Distance Traveled by Wolf Packs
Since most pack movements involve hunting, the amount of travel should roughly reflect success and, indirectly, the relative abundance of vulnerable prey. Average distances traveled by Isle Royale packs between kills are quite variable in different years (Table 6), ranging from a low of 18.5 km/kill to a high of 54.1 km/kill. TABLE 6. Travel estimates for wolf packs, 1971-74. Highest travel per kill was shown by both East and West packs in 1973, a year when the average daily mileage was also highest for both groups. This suggests that moose vulnerability was lowest in 1973, a hypothesis supported by the fact that calves were killed least often in that year. Frequent snow crusts, which made travel for wolves relatively easy, contributed to the greater movements in 1973. Minimum movement between kills (18.5 km) was registered by the West Pack in 1971, when wolves had little trouble finding vulnerable prey, especially calves, along shorelines. Even shorter distances were reported by Kolenosky (1972), who found that a pack traveled only 14.7 km between kills of deer in Ontario during 1969, when deep snow rendered deer more vulnerable and probably reduced wolf movements. Fundamentally, predator-prey interaction involves energy transfer from one trophic level to another, as from herbivore to carnivore.
The complexity of this energy transfer is directly related to the number of species in a particular food web. On Isle Royale, the wolf, the major carnivore, is entirely dependent on moose and beaver, which are primary consumers of vegetation. This is a relatively simple system compared to the food web described by Cowan (1947) for the Rocky Mountain national parks of Canada, where wolves depended heavily on elk but also killed deer, moose, bighorn sheep, caribou, and, at certain seasons, snowshoe hare and beaver. Isle Royale wolves prey on moose at all times of the year, while beaver are available only during the ice-free season. While quite variable from 1971 to 1974, the entire food base of the Isle Royale wolves was probably higher in the early 1970s than during the previous decade, owing to increased vulnerability of moose (at least in winter) and an increased beaver population. This is probably why the island was partitioned into two pack territories after 1971.
The Wolf as a Predator of Big Game
Food habits of wolves have been studied intensively, mainly because of human concern for the prey species, domestic or otherwise. Wolves are well adapted both physically and behaviorally for predation on large mammals, and an absence of large ungulate prey may adversely affect resident wolf populations, especially pups (see Pup Production). Food habits of wolves seem to be most variable in tundra areas where wolves typically prey on a single ungulate species. Clark (1971) found wolves in central Baffin Island to be almost completely dependent on caribou, while Tener (1954) indicated snowshoe hares as the principal prey species for wolves on Ellesmere Island. Wolves denning in the northern Brooks Range in Alaska often ate small rodents, birds, fish, and insects, although Stephenson and Johnson (1972) believed that wolves nonetheless depended primarily on ungulates. Pimlott et al. (1969) pointed out that wolves have never been shown to thrive for a significant period on prey smaller than beaver, and most biologists agree that wolves are characteristically dependent on large mammals. Moose, white-tailed deer, and beaver are the principal prey species of wolves in mainland areas adjacent to Lake Superior (Thompson 1952; Stenlund 1955; Pimlott et al. 1969; Mech and Frenzel 1971). Prey size and numbers determine which species are most important for the wolf. Where deer are available they are highly preferred (Pimlott et al. 1969; Mech and Frenzel 1971). Since beaver are small and, presumably, easily killed by wolves, predation on them is determined largely by availability.
Nonwinter Food Resources
Analysis of wolf scats collected in 1973 showed that Isle Royale wolves preyed on beaver to a much greater extent than a decade earlier (Fig. 38). A sample of 554 wolf scats was collected in 1973 from homesites and game trails used by both packs (Table 7). Most of these were from 1973, although a small proportion of the scats collected at the East Pack den were probably from 1972. TABLE 7. Occurrence of prey remains in wolf scats, 1973. Beaver and moose calves together constituted 90.0% of the food items in nonwinter scats. Remains of beaver (hair and occasionally claws) were found in 75.8% of the total scat sample and made up 50.5% of 831 prey occurrences. Moose hair occurred in 69.3% of the scats and comprised 49.5% of the total food items. Of the identifiable moose remains (in scats deposited before the change in calf pelage in early August), 85.7% were from calves.
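Two different percentages are used in the scat analysis above: frequency of occurrence (the share of scats containing a prey item) and the percentage of total food items (the share of all prey occurrences). The short Python sketch below uses invented counts, not the 1973 Isle Royale data, simply to show how the two metrics differ for the same sample.

```python
def scat_metrics(scats):
    """scats: list of sets of prey items found in each scat.
    Returns frequency of occurrence and percent of total food items per prey type."""
    total_scats = len(scats)
    occurrences = {}
    for scat in scats:
        for item in scat:
            occurrences[item] = occurrences.get(item, 0) + 1
    total_items = sum(occurrences.values())
    return {
        item: {
            "freq_of_occurrence_pct": 100.0 * n / total_scats,
            "pct_of_food_items": 100.0 * n / total_items,
        }
        for item, n in occurrences.items()
    }

# Hypothetical collection of 5 scats (invented, not the actual sample of 554).
sample = [{"beaver", "moose"}, {"beaver"}, {"moose"}, {"beaver", "moose"}, {"beaver"}]
print(scat_metrics(sample))
# Beaver appears in 4 of 5 scats (80% frequency of occurrence) but makes up
# only 4 of 7 prey occurrences (about 57% of total food items).
```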
Hare and bird remains were identified from only one scat and are unimportant as prey. While the 1973 blueberry crop was the best that many long-time island residents could remember, fruit was not found in any of the scats. Vegetation (mostly grass) was found in 6.1% of the scats, and unidentified seeds in 2.2%. These nonanimal items were not tallied in Table 7. Murie (1944) suggested that grass may act as a scouring agent against intestinal parasites, a hypothesis supported by his discovery of roundworms among blades of grass in some scats. An 18-inch section of tapeworm (Taenia sp.) was found in a fresh Isle Royale wolf scat containing grass, and Kuyt (1972) reported a similar finding. INCREASED PREDATION ON BEAVER The incidence of beaver remains in fresh scats from 1958 to 1960 was 13.1% (Mech 1966) (Fig. 39). In the following 3-year period beaver occurrence was essentially the same, 15.6% (Shelton 1966). Although there were no systematic scat collections in subsequent years, field examination of scats found incidental to other work showed no obvious changes (Jordan et al. 1967; Wolfe and Allen 1973). However, the 1973 data clearly demonstrate a significant increase in predation on beaver (ts = 13.7, P < 0.001) since 1958-63 (Table 8). TABLE 8. Beaver occurrence in summer wolf scats, and beaver population trends. In the decade between the scat analyses, the beaver population doubled, with the estimated number of active colonies (determined from aerial counts) increasing from 140 in 1962 to 300 in 1973 (Shelton, unpubl. data). During the same period, wolf predation on beaver tripled, with the percentage of beaver in wolf scats increasing from 14.4% to 50.5%. Pimlott et al. (1969) found that the frequency of beaver in wolf scats in the Pakesley area of southern Ontario was 59.3%, compared to 7.1% in nearby Algonquin Park. A study by Hall (1971) showed that the beaver population in the Pakesley area was at least three times more dense than that of Algonquin Park. Clark (1971) pointed out that increased predation on beaver at higher densities could result from a shift in hunting effort to the more abundant beavers or simply from an increased frequency of encounters between wolf and beaver. Hall (1971) reported an increase in predation on beaver and a decrease in predation on deer in Pakesley during the 1960s, corresponding to changes in densities of these two prey species. He believed this indicated a shift in hunting effort. On Isle Royale, known wolf trails often parallel water courses and pond edges, but we do not know whether predation on beaver is limited to chance encounters or whether purposeful hunting is important. Field observations did not indicate depression of beaver numbers around wolf homesites, some of which were adjacent to active beaver ponds, suggesting that wolves did not spend a great deal of time stalking and hunting beaver. The relative levels of predation on moose and beaver as indicated by scats were consistent throughout spring-fall 1973 (Table 9). Shelton (1966) found a slight increase in beaver occurrence in wolf scats in the fall, when beaver are actively cutting winter stores of food and consequently are more vulnerable. Although scats from the last East Pack rendezvous did not indicate any increase, most of the scats from this area probably dated from August and early September, before intensive cutting begins. TABLE 9.
Incidence of beaver and moose remains in wolf scats from various homesites and associated trails, 1973.a While it is difficult to estimate the current importance of beaver to Isle Royale wolves in terms of biomass or numbers of prey, a comparison with other studies provides a rough assessment. The highest reported occurrence of beaver in wolf scats came from studies in the Pakesley area of Ontario. Beaver remains were found in 62% of the scats examined in 1960, and beaver comprised 59% of the total food items (Pimlott et al. 1969). By 1964, the frequency of occurrence of beaver in wolf scats in Pakesley had increased to 77%, and beaver were regarded as the primary summer prey of wolves (Hall 1971; Kolenosky and Johnston 1967). The incidence of beaver in scats from Isle Royale wolves (76%) and the percentage of beaver in total food remains (51%) are second only to the reported data for the Pakesley area. A high beaver population on the northeastern half of the island may have been an important factor allowing rapid growth of the East Pack (from 8 to 10 in 1972, to 16 in 1974). Over a 3-year period a minimum of 13 pups survived to midwinter in this pack. The general appearance in July of the 1973 pups and the rapid growth of at least two of them between observations in July and August suggest an abundant food supply. The dense beaver population probably has been an important factor ensuring high pup survival at a time when the production of moose calves, the other principal summer prey, was subnormal. Winter Predation Patterns Winter food habits determined from direct observations and aerial tracking showed that wolves on Isle Royale continue to subsist in winter almost entirely on moose. The snowshoe hare population was relatively low during this study, and while wolves occasionally flushed hares during observations, they never gave chase. No indications of wolf predation on hares in winter were found. Beaver were available only in rare instances when they ventured from beneath the ice to cut food. During mild weather between January and March 1973, we discovered two wolf-killed beaver. Likewise, during a thaw in March 1974, one or more beaver were killed on the Big Siskiwit River. Although wolves rarely find active beaver in winter, they show great interest in beaver lodges and dams encountered during their travels (Fig. 40). The East Pack even dismantled a lodge in February 1973, near Harvey Lake. The wolves had killed two moose within 100 m of the lodge, and their activities while in the area for several days centered on digging out the lodge. In forested regions such as Isle Royale, wolves depend heavily on their sense of smell for prey detection. Of 30 observations of wolves detecting moose from 1972 to 1974, it was possible to determine the method of prey detection 17 times. In 10 cases in which wolves caught the scent of a moose, they either approached directly upwind or turned toward their prey after crossing downwind from the moose. Mech (1966) reported that wolves seemed to sense prey 2.4 km away, underscoring their olfactory sensitivity. Wolves visually detected moose six times, and once they followed a fresh moose track to the animal. Mech (1966, 1970) provided an extensive discussion of the results of moose-wolf encounters observed in the first three winters of the project. The basic pattern he observed has not changed significantly. 
Moose that stand their ground when wolves approach are not killed; all observed encounters on Isle Royale that ended in a kill occurred after the victim initially ran from wolves. For unknown reasons, vulnerable moose do not stand and face wolves when first approached. While chasing a moose, wolves apparently respond to vulnerability cues that are not obvious to aerial observers; sometimes they quit immediately, at other times the chase might last for long distances (Fig. 41). The primary point of attack is the hindquarter region of the moose, where wolves can dash in and out and stand the best chance of avoiding the quick strikes of the hooves. When wolves inflict serious wounds, they are often content to wait until the moose weakens. In February 1972, however, the East Pack, after spending most of a night close to a wounded adult near Lake Richie, abandoned the animal around daybreak. The moose continued to stand in heavy cover that morning, but by afternoon was lying on its side. This was an 8-year-old cow in apparently good condition, with abundant marrow and visceral fat reserves and pregnant with one fetus. She had deep wounds around the anal opening and had apparently lost a considerable amount of blood. In the next 5 weeks, the East Pack never returned to the carcass, but by early May the wolves had consumed it entirely. Mech (1966) found that wolves have a low rate of hunting success, presumably because most of the moose they encounter are not vulnerable. Of 77 moose tested by wolves, 6 were killed. From 1972 to 1974, 38 moose were tested during observations, and only one was killed. A schematic representation of results of moose-wolf encounters in the two periods is presented in Fig. 42. Observations of hunts in the recent period were too few to determine changes in hunting success. EFFECT OF SNOW CONDITIONS Crusts within the snow profile or on its surface provide support for wolves but interfere with moose movements. Since crusts are frequent on Isle Royale, deep snow often results in increased hunting success for wolves. This was apparent in 1969 when more kills were found than in any previous winter (Appendix L). An increased kill rate on Isle Royale was also evident in the "deep snow" winters of 1971 and 1972. In all three of these winters, the degree of carcass utilization was noticeably less, indicating higher hunting success (Wolfe and Allen 1973; Peterson and Allen 1974) (Fig. 43). Increased calf vulnerability due to reduced mobility in deep snow is reflected in a high kill of moose calves when snow depths exceed 75 cm (Fig. 44). Most calves are killed near shorelines, which are traveled heavily by wolves when snow is deep and shelf ice present. Calves may be so restricted that they are left in shoreline areas by their mothers who have gone elsewhere to feed. In 1972, the West Pack encountered two adults and a calf on the south shore. Both adults ran along the shore, but the calf headed inland and was pulled down by wolves within 100 m. Either the calf's mother was behaving in a highly abnormal fashion or she was not present. In 1971, we saw two calves without a mother present; one of these was killed by a single wolf (Peterson and Allen 1974). WINTER FOOD AVAILABILITY Estimates of food consumption by wild wolves usually are derived by multiplying the average weight of prey by the number killed in a specific period (Mech 1970). 
When calculated in this manner, food availability rather than actual consumption is estimated, since utilization of carcasses varies considerably with the size of the pack, size of prey killed, and the ease with which additional prey may be taken (Fig. 45). In winters when moose are more vulnerable to wolves, the kill rate may go up, while the corresponding degree of carcass utilization declines. The calculated availability of food for Isle Royale wolves is most useful for comparisons of hunting success. Whole weights of several Isle Royale moose (Appendix E) provided the basis for estimates of the potential food contributed by each bull, cow, and calf (assumed average whole weights of 432, 364, and 159 kg, respectively). The primary inedible portions of a moose are the stomach contents and some of the hide and skeleton. The stomach of a 400-kg bull necropsied in February weighed 65 kg, or about 16% of its body weight. Inedible stomach and intestinal contents of adults were assumed to weigh 68 kg, and an additional 34 kg were subtracted for portions of hide and skeleton usually left uneaten. Stomach and intestinal contents of calves were assumed to weigh half those of an adult, or 34 kg. Although wolves sometimes eat the entire skeleton and hide of calves in winter, 11 kg were subtracted for parts usually left uneaten. Thus the potential food contributed by each bull, cow, and calf killed by wolves is a calculated 330, 261, and 114 kg, respectively. Adults of unknown sex were assumed to contribute 295 kg. Carcasses of moose collected for necropsy were consumed by the West Pack; weights of these carcasses were included in the calculations for the West Pack on the assumption that the wolves would otherwise have killed a moose themselves (Fig. 46). From 1971 through 1973, calculated availability of food for both East and West packs varied between 6.2 and 10.0 kg/wolf/day, while in 1974 the daily figures for the East and West packs dropped to 5.0 and 4.4 kg/wolf, respectively (Table 10). The drop in availability of food in 1974 stems largely from the increase in pack sizes from 1973 to 1974 and the fact that a high percentage of the kills were calves. Winter availability of food on an individual basis declined for each pack through the period of study, partially reflecting a decline in ease of prey capture from the winters of 1971 and 1972, when unusually deep snow contributed to a high kill rate. TABLE 10. Estimates of food availability for West and East packs, 1971-74. Food available to pack members from 1971 through 1973 on Isle Royale was greater than that indicated for the former large pack (Mech 1966). Using the moose weights given above, that pack had available 4.9, 3.8, and 5.1 kg/wolf/day in 1959, 1960, and 1961, respectively. Food available to Isle Royale wolves is well above the minimum amount required in winter. Mech (1970) estimated the daily food requirement for a wild wolf at about 1.7 kg on the basis that active domestic dogs need about this amount. Growing wolf pups and captive adults can be maintained on this amount of food (Kuyt 1972; Mech 1970). Food availability for an Ontario wolf pack was estimated at 3.7 kg/wolf/day during one winter season (Kolenosky 1972). A Minnesota wolf pack increased after a winter with 5.8 kg/wolf/day of available food, remained the same size at 3.6 kg/wolf/day, and decreased at 3.4 and 3.0 kg/wolf/day (Mech, in press).
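As a minimal sketch of this arithmetic, the edible weights given above can be applied to a winter kill tally and divided among pack members over the study period to give kg/wolf/day. The kill tally, pack size, and number of days in the sketch below are hypothetical, not the figures behind Table 10.

```python
# Minimal sketch of the food-availability arithmetic: edible weight per moose
# class, summed over a winter's kills and divided among pack members for the
# length of the study period. The kill tally, pack size, and study days are
# hypothetical placeholders.

EDIBLE_KG = {"bull": 330, "cow": 261, "calf": 114, "adult_unknown": 295}

kills = {"bull": 2, "cow": 3, "calf": 5, "adult_unknown": 1}  # hypothetical winter kill
pack_size = 10
study_days = 44

total_kg = sum(EDIBLE_KG[moose_class] * n for moose_class, n in kills.items())
kg_per_wolf_per_day = total_kg / (pack_size * study_days)
print(f"{total_kg} kg available, {kg_per_wolf_per_day:.1f} kg/wolf/day")
```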
The food economy of loners (single wolves) and small groups is difficult to study because the extent of their movements and feeding patterns is usually unknown. Although Jordan et al. (1967) described some loners as "gaunt" and implied that most led a rather tenuous existence, this may not be the case in winters of abundant prey. For example, in 1971 a loner subsisted for several weeks on three moose carcasses in the Malone Bay area and apparently moved very little. Likewise, the Todd duo killed two moose and fed on two old kills in a 15-day period in February 1974, rarely moving out of the Todd Harbor area. Long-term Changes in Food Resources The appearance of two socially stable wolf packs on Isle Royale was not observed prior to 1972; this appearance, presented earlier, probably resulted from an increase in the food base of the wolf population. The beaver population increased in the 1960s, as did wolf predation on beaver during the nonwinter months. Since production of moose calves was noticeably lower in recent years than in the early 1960s, beaver assumed a position of significance by supplying food during the critical pup-rearing season. While the moose population also appeared to increase during the 1960s, this would not in itself provide an immediate increase in prey for wolves. The food supply for wolves depends on the density of vulnerable moose rather than absolute moose densities. Thus, a moose population in the early stages of a natural decline may provide wolves with a maximum number of available prey. This was apparently the case on Isle Royale in the early 1970s. The establishment of the East Pack has probably brought about a greater utilization of prey within this territory, where previously only loners or packs of two or three wolves lived. For example, the number of moose killed on the northeast half of the island during the winter study period increased greatly from 1971 to 1972, after the appearance of the East Pack (Fig. 47). In its first three winters of operation, the pack killed nine moose on the Blake Point peninsula, about 8.5 km2 in area, during a total of 18 weeks of aerial tracking. Ground search turned up eight additional kills on this peninsula. Moose densities in midwinter in this area commonly exceed 4-6/km2. Like physical characteristics, an animal's behavior has been shaped by rigorous selection pressures, resulting in behavior patterns that are closely adapted to a particular function in the ecosystem. A comparison of the red fox and wolf provides a simple illustration. While both are canids, foxes exhibit much less diversity in behavioral expression and communication than do wolves (Fox 1970). The fox, a semi-solitary creature, preys extensively on game smaller than itself and, at certain seasons, depends heavily on plant fruits and carrion. Therefore, in terms of food acquisition, there would be no advantage for young foxes to remain with their parents in a social group. The behavioral repertoire of foxes is less diverse, yet sufficient for its more solitary way of life. Cooperation among wolves in a pack, however, is essential to their ecological role as a predator on large ungulates. Consequently a complex dominance hierarchy and elaborate array of behavioral expression have evolved among wolves, allowing them to live in close association as group-hunting carnivores. 
The organization of wolf populations into packs does not fully explain wolves' diverse means of expression and communication, since other group-hunting canids, notably the bush dog (Speothos venaticus) of South America and the African wild dog (Lycaon pictus), do not exhibit a similarly high level of behavioral expression (Fox 1971; Kruuk 1972). While little is known of the ecology of the bush dog, the wild dog of Africa exists year-round in a cohesive pack, with social bonds apparently maintained by highly ritualized food-begging behavior (Kühme 1965). Individuals within a wolf pack are frequently separated, however, especially in summer when most hunting is done individually or in small groups. This led Fox (1971) and Kruuk (1972) to suggest that the well-developed means of expression among wolves is important not only in coordinating group activities and maintaining order in the pack but also in reintegrating individuals into the group more effectively after separation. The territorial nature of wolf packs helps maintain pack integrity and seems to apportion space among resident packs according to the availability of food. Mechanisms of territory maintenance may include scent-marking, howling, and agonistic behavior during rare confrontations between packs. In spite of extensive field studies of the wolf, many generalizations concerning behavior within and between packs are poorly documented in the wild, primarily because observations are hampered by the wolf's environment and mobility. Lengthy ground observations have been possible only at den sites in tundra regions (Murie 1944; Haber 1968; Clark 1971); aerial observations, a primary research tool, are limited in scope. Insight into the ecological significance of wolf behavior patterns and a proper appreciation of their variability can be gained only by intensive study of many packs in different ecological settings. Social Hierarchy Within Packs The basic social structure of wolf packs is well understood from studies of captive wolves (Schenkel 1947, 1967; Rabb et al. 1967). Behavioral interaction within a pack occurs in a framework of dominance relationships or social hierarchy. A dominant (or alpha) male and female are the central members of a pack, and the other wolves constantly reaffirm their subordinate status through postures of submission directed toward the dominant individuals. Males and females have separate dominance rankings, and the subordinates have definite dominance relationships among themselves, although interaction is less frequent and relationships are less well defined. Aggression is channeled into ritualized behavior patterns within the dominance framework, reducing the amount of direct conflict within the pack and promoting social order and stability. Alpha wolves provide leadership during travels of the pack, initiate many pack activities, and sometimes exert considerable social control over the activities of subordinate wolves, notably their sexual behavior. Restriction of courtship behavior among subordinates, together with well-developed mate preferences among adults, is thought to reduce the potential number of breeding pairs in a pack, often resulting in the birth of only a single litter. The whole pack participates in gathering food and caring for the young, and this contributes both to the survival of the young and to cohesion within the pack. EXPRESSION OF DOMINANCE AND SUBORDINATION Facial expression, tail position, and posture combine to indicate subtleties of mood and desire.
These indicators provided the basis for determining the social position of certain wolves in the packs on Isle Royale, especially the alpha wolves (Fig. 48). Tail position is easily seen from the air and is thus an obvious indicator of wolf status. The importance of the tail in communication probably lies in the fact that the hindquarters and anogenital region have a considerable function in olfactory and visual expression (Kleiman 1967; Schenkel 1947). Presentation of the anal region by a raised tail indicates a position of dominance, while a lowered tail covering the anal region (during interaction with other wolves) is a component of submissive behavior. Postural changes reinforce these expressions: a dominant wolf stands erect with tail raised, while an extremely subordinate animal may pull its tail between its legs and lower its rear end to the ground (Figs. 49, 50). While the movements and positions of the ears, eyes, forehead, nose, and mouth of a wolf can be combined to produce subtle variations of expression (Schenkel 1947), most are not observed by humans except at close range. Ears of dominant wolves are held forward, while those of subordinate wolves are turned back or flattened against the head. Teeth are more exposed as the intensity of a threat increases. Wolves of high social standing often stare directly at another wolf as part of an expression of dominance or a mild threat, and subordinates respond by turning the head away and avoiding direct eye contact. Inferior wolves constantly show submissive behavior toward dominant wolves. Schenkel (1967:324) defined submission as "an impulse and effort of the inferior towards friendly and harmonic social integration." He described two basic types of submission in wolves, "active" and "passive." Active submission often is seen as an element in greeting behavior, and is the most obvious form of expression during the "group ceremony," described below. Passive submission is usually shown by an inferior wolf in response to a threat from a superior individual. An important element in passive submission is "inguinal presentation," in which the wolf lying on its side raises its hind leg, thus exposing its inguinal region to the dominant wolf. Passive submission, and inguinal presentation in particular, seem to inhibit aggression in dominant wolves and thus are considered appeasement or "cut-off" gestures (Fox 1971). Many times on Isle Royale, active agonistic behavior, or even mild threats, from a dominant wolf caused subordinate individuals to fall into passive submission. The dominant wolf usually would reduce the level of its threat and either investigate the prone wolf or simply stand over it for a minute or more. Slight movement by the inferior wolf usually brought a quick snap from the dominant. The subordinate wolf usually lay still, often with hind leg raised, until the dominant wolf walked off. Once, in the West Pack, a subordinate wolf maintained a position of inguinal presentation after the black alpha male walked away, and even rolled over and raised the other hind leg when the alpha male wandered behind him. Any other movement by the inferior male brought immediate punishment from the alpha male. Members of a pack often congregate in a "group ceremony," a greeting centered around the alpha animals. Subordinates crowd around the dominant wolves and show exuberant active submission and much body contact. Group ceremonies were observed 34 times among Isle Royale wolves from 1972 to 1974.
Most commonly, they occurred immediately after the pack arose from sleep, or when one or several members returned to the pack after a brief absence. Frequently active submission toward an alpha by one wolf brought the rest of the pack running over to join in the proceedings, and sometimes a group ceremony ensued when wolves clustered about an alpha inspecting an inferior wolf lying on the ground. Group ceremonies also were seen when a pack "regrouped" after an unsuccessful chase of a moose. Such ceremonies often terminated with threats directed toward an inferior wolf by an alpha, perhaps in response to overenthusiastic greeting behavior. Group ceremonies provide a means of reaffirming dominance relationships, probably reinforcing both the status of alpha wolves and existing social bonds. Additionally, they may provide reassurance for pack members at critical periods; for example, when the East Pack traveled outside of its normal territory in 1974, subordinate wolves constantly crowded about the alpha wolves in a group greeting. Alpha wolves sometimes retain their dominant position for several years and may be instrumental in maintaining a stable pack. Jordan et al. (1967) recognized the alpha male in the large pack on Isle Royale from 1964 to 1966 and found that pack formation in 1966 coincided with his death. There has been relatively little turnover in the alpha positions in the West and East packs (Fig. 51). The small, gray female with the deformed left front leg held the alpha position for at least 5 years (1968-72) in the Big Pack (West Pack). The black male was associated with this female during all 5 years, apparently first as a subordinate (beta) male with special privileges allowing him to travel and rest near the alpha pair (Wolfe and Allen 1973), and finally as alpha male in 1971 and 1972. This was the only case from Isle Royale in which the previous history of an alpha animal has been known. None of the recognizable alpha wolves on Isle Royale has been seen after a known change in its dominant status, but whether their deaths preceded or followed the change in leadership is unknown. Jordan et al. (1967) found circumstantial evidence that the alpha male in the large pack in 1966 had been killed by his associates. During the present study, three alpha wolves disappeared; all three were last seen in summer. While the alpha male in 1966 apparently was killed after he developed a limp, the alpha female in the West Pack in 1972 managed to maintain her dominant status in spite of a limp which occasionally prevented her from retaining her customary position at the front of the pack. Winter observations on Isle Royale indicated that alpha wolves usually led the pack during its travels. Of 61 cases in which it was possible to determine whether the alpha male or female led the pack, an alpha wolf was first in line 70% (n = 43) of the time. In 33 cases the alpha female was first, the alpha male led in 6 cases, and 4 times the two dominant wolves were side by side. In many instances the alpha male showed obvious sexual interest in the alpha female and consequently followed her. Alpha wolves, usually at the front of the pack, normally choose the direction of travel and specific travel routes. This clearly was the case during an observation of the East Pack in 1974. The alpha female led the pack through the narrows between Wood Lake and Siskiwit Lake, then lagged behind to sniff an old moose track. 
Other wolves then assumed the lead position until they reached the first peninsula, where they stopped and waited for the alpha female to move to the front. She immediately set the direction of travel, led the way briefly, then fell back into a position in the middle of the pack. The same procedure was followed at the next point of land. A clear example of decision-making on the part of an alpha animal was observed in 1974 when the East Pack encountered a scent post of the West Pack. After the pack had examined the scent mark, the alpha female reversed the direction of travel and led the pack back to more familiar range. In the absence of alpha leadership, subordinate wolves may be indecisive. Once in 1972 we observed six probable pups in the East Pack by themselves when the alpha pair had dropped back several miles. Twice a wolf stopped and watched its back-trail. When the six wolves emerged on the shore of an inland lake, they vacillated for 15 minutes, sniffing snowed-in tracks and making false starts, and finally all started off in the same direction. Alpha wolves appear to provide leadership at critical times such as hunting, encountering novel stimuli, and perhaps when contacting neighboring packs. The position of the alpha wolves was observed in only six encounters with moose, but an alpha wolf led the pack in four of these cases. Certainly the wolves at the front of a pack would be the first to detect and chase prey. In February 1973, when the West Pack was under observation at a moose carcass across the harbor from Windigo, the alpha male detected us. He trotted excitedly toward shore, then back to arouse other members of the pack. After a group ceremony centered around him, he led the pack into thick cover near shore. Later in the same winter we watched the East Pack file along the ice on the north shore of Rock Harbor. When they reached open water, they moved onshore and slowly worked their way along the slippery, ice-coated shoreline, finally congregating on a small point. Rounded chunks of broken, "pancake" ice ranging up to several feet in diameter had frozen loosely together adjacent to the point. One wolf reached out with a foot and pushed on an ice chunk, withdrawing its foot quickly. Next the alpha male, together with an unidentified wolf, walked out on a large chunk of ice and stood for a few seconds. Suddenly they bolted back to shore, apparently after the ice had shifted. The alpha male then led the pack away, continuing the course onshore. No observations were made of confrontations between different packs of wolves on Isle Royale. However, we might expect that the alpha animals would take a leading role in such a situation, much as an alpha wolf led the pack in the two observed chases of a fox. Although the alpha male and female did not lead the East Pack during their first foray out of their territory on 15 February 1974, they were obviously key figures. As the pack ventured across Siskiwit Bay, most of these wolves probably were encountering the area for the first time, since this pack had not been observed southwest of Malone Bay in the previous two winters. In addition to an unusual amount of scent-marking as they crossed to Houghton Point, the subordinate wolves were constantly clustered around the dominant pair in a sort of mobile group ceremony. Perhaps the intense, active submission directed toward the alpha pair resulted from uncertainty and excitement among subordinate wolves.
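The leadership tally reported earlier (61 classified observations of travel order) reduces to simple proportions. A minimal sketch of that bookkeeping is given below; the 33, 6, and 4 are the counts from the text, while the 18 cases in which some other wolf was first are implied by the totals (61 minus 43) rather than stated directly.

```python
# Tally of the 61 classified observations of which wolf led the traveling pack.
# Counts of 33, 6, and 4 come from the text; the 18 "another wolf first" cases
# are inferred from the stated totals.

lead_counts = {
    "alpha female first": 33,
    "alpha male first": 6,
    "alpha pair side by side": 4,
    "another wolf first": 18,
}

total = sum(lead_counts.values())                       # 61 cases
alpha_led = total - lead_counts["another wolf first"]   # 43 cases
print(f"alpha wolf at the front in {100 * alpha_led / total:.0f}% of {total} cases")
```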
Courtship and Breeding Largely because of complex social relationships, such as mate preferences and a dominance hierarchy, the breeding potential of a wolf pack rarely is realized. Studies of captive wolves have demonstrated that breeding within a pack usually is limited to a few animals (Rabb et al. 1967), and a similar situation has been observed in packs on Isle Royale. In studying wild wolves, it should be remembered that sexually immature pups may account for a sizable proportion of a pack, a partial explanation for limited breeding activity. Mate preferences are recognized clearly on Isle Royale, and incidence of courtship among Isle Royale wolves indicates that mating is most likely to occur between the dominant male and female in a pack (Fig. 52, Table 11). It is also obvious that alpha wolves interfere with courtship attempts of subordinates. These topics will be presented in depth later in this section. TABLE 11. Minimum number of breeding pairs present, 1971-74. The primary function of courtship behavior is to establish and maintain a pair bond. Unlike many vertebrates, male wolves play an integral role in feeding and raising the pups, and a close relationship between a male and female wolf remains important on a year-round basis. Although there are records of more than one litter born in a pack (Murie 1944; Haber 1968; Clark 1971), one litter per pack is usually the rule (Van Ballenberghe and Mech 1975) and probably a safe assumption when pack sizes are not large. Other adults in a pack help raise the pups, enhancing their chances of survival. On this basis we can predict that natural selection would favor offspring from dominant, breeding wolves that interfered with mating attempts of subordinates in a pack. In species with well-developed threat behavior, such expressions are well hidden during courtship, since they would be detrimental to the formation of a close relationship between male and female (Eibl-Eibesfeldt 1970). Consequently, courting wolves display much greeting behavior, play soliciting, and submissive postures, all of which tend to decrease "social distance" (Fox 1971) (Fig. 53). Courtship behavior was observed among Isle Royale wolves throughout the annual winter study periods, with the peak in sexual activity usually sometime in February. During 52 hours of aerial and ground observation in winters from 1972 through 1974, courtship behavior was recorded 71 times. One "instance" of courtship behavior consisted of a well-defined behavior or sequence of behavior, such as mounting, a mutual greeting between mates, etc. Behavior patterns which were considered as courtship in at least some contexts are described in Appendix F. Most of the courtship behavior recorded consisted of males mounting females or males examining the genital region of females (genital snuffling). Undoubtedly, these behaviors are somewhat overrepresented because they are so easily recognized. Greeting and play behavior were also commonly seen but were not recorded as courtship unless there were other indications of sexual interest. Subtleties of behavioral expression are not seen readily from aircraft. When observing wolves whose sex, age, and relationships are unknown or poorly understood, some ambiguous behavior is difficult to classify. For example, it was not uncommon to see one wolf approach another with tail flagged, posture erect, ears and eyes forward, and the second wolf walk off in a generally submissive posture, tail tucked between its legs. 
This usually indicated a dominance display, but similar behavior was seen when a male tried to court an uncooperative female. Such ambiguities were resolved by carefully watching for subsequent interaction between the same individuals and their relationships to other wolves. Fortunately, most displays of dominance, submission, and courtship involved recognizable alpha wolves and were interpreted with little difficulty. Mate preference can be a powerful limitation on the amount of breeding within a pack. Clearly, if there is no mutual courtship between a male and female wolf, a mating between the two is unlikely. The breeding potential of the Brookfield wolves was reduced considerably by such "one-sided" courtships (Rabb et al. 1967). Pair bonds between mates may be very stable from year to year, although wolves will mate with other individuals if their preferred mate is not available (Rabb et al. 1967). Wolfe and Allen (1973) indicated a stable pair bond between the alpha male and female in the Big Pack (West Pack) from 1968 through 1970. This male had disappeared by 1971, but the same female mated in 1971, and presumably in 1972, with the black male that assumed the alpha position. The East Pack provided another example of a female accepting a new mate after the probable death of the alpha male. In 1974 the new alpha male courted the incumbent alpha female, who accepted his approaches with friendly greetings, indicating probable receptiveness. In all the packs observed from 1971 through 1974, the alpha pair either mated or showed mutual courtship and was considered a bonded pair. Studies of the Brookfield wolves (Rabb et al. 1967) showed that both males and females sometimes courted members of the opposite sex that were unreceptive, and in these cases courtship actions were ignored or rebuffed with threats (Figs. 54, 55). My own observations at Brookfield indicated clear differences between the behavior of females that simply were not ready for copulation and those that were totally rejecting a male. A female that temporarily was rejecting a male responded to his advances with mild threats, or simply pulled away, and elements of greeting and play behavior were still seen between partners. This was typically the situation between alpha males and females on Isle Royale, and also between members of a subordinate pair in the West Pack in 1972 that eventually mated. However, a female that was unreceptive to a particular male responded to his courtship attempts with obvious threats and showed little friendly behavior, except perhaps in the context of a group greeting ceremony. A subordinate male in the West Pack in 1973 frequently showed interest in a female, but she always replied with aggressive snapping, never exhibiting any friendly behavior toward the male. A mating between these wolves seemed unlikely. Little is known of the development of mate preferences among wild wolves, but among Brookfield wolves there were strong indications that future mate preferences are crystallized during the juvenile period (prior to sexual maturity at 22 months). Also, it appears that a young wolf generally develops a preference for the alpha wolf of the opposite sex, or at least a dominant individual (Rabb et al. 1967; Woolpy 1968). It is significant that the alpha female in the West Pack in 1971 accepted the black male as her new mate, a wolf that had enjoyed a close relationship with the alpha pair for at least 3 years.
RESTRICTION OF SEXUAL BEHAVIOR AMONG SUBORDINATE WOLVES During three breeding seasons on Isle Royale, 69% of the observed courtship behavior (n = 71) occurred between alpha wolves. Studies of captive wolves have shown that dominant wolves restrict and, in some cases, eliminate courtship behavior and mating among subordinates (Schenkel 1947; Rabb et al. 1967; Woolpy 1968). Since the reduction of mating among subordinate adults could contribute to population regulation, it is important to try to determine the effectiveness of such restrictions among wild wolves. Rabb et al. (1967) noted an increase in agonistic behavior between dominants and subordinates during the breeding season of Brookfield wolves (Fig. 56). Observations in February on Isle Royale indicated frequent threats to subordinate wolves by the alpha male and female. In many cases a strong assertion of dominance seemed to be stimulated directly by courtship behavior among subordinates. This was further indicated by the lack of overt threats from the alpha male in the East Pack in 1972, when the pack was believed to consist primarily of an alpha pair and their offspring; the latter would have been sexually immature in their first winter. In 1973, however, when pups of the previous year could have been sexually mature, on two occasions the alpha male chased other wolves away from the alpha female. Both instances occurred on 16 February when frequent genital sniffing by the alpha male suggested that his mate was in heat. The West Pack provided the best opportunity to record interference of alpha wolves in the sexual behavior of subordinates. The black alpha male in this pack was very possessive of his mate, the alpha female of long standing (Fig. 57). One of my first observations of this pack was from the ground at Windigo on 29 January, 1972. As they rounded Beaver Island, there was much playful sparring as the pack moved along the ice, and at one point a subordinate wolf mounted the alpha female. She eventually squirmed away and snapped at the other wolf, and this brought the black male on a run. He knocked the subordinate over with a body slam, and then mounted the alpha female himself. This was typical of his behavior when other wolves approached his mate. The most interesting interaction in 1972 spanned several days, beginning 24 February. A subordinate pair managed to stay in the West Pack and mate in the presence of the alpha pair, in spite of repeated punishment from both the alpha male and female. Identification of the subordinate pair was not always positive: the male was a thin-tailed wolf that looked like one other wolf in the pack, and the female was one of three full-tailed wolves in the pack. This complicated the interpretation of observations made at different times, but since there was never any indication of sexual interest in more than two subordinate wolves, in the following account from my field notes, it will be assumed that the thin-tailed male and the full-tailed female were consistently the same individuals. Significant in these observations was the very aggressive attitude of the alpha female toward the full-tailed female when she was courting the thin-tailed male. Having been chased from the pack, however, the subordinate female managed to reinstate herself and mate successfully in spite of her "punishment." The black male usually did not interfere with the subordinate male's courtship activities and showed brief aggression only when the subordinate pair actually tied. 
In this case the discouraging influence of the alpha pair was not sufficient to prevent mating of subordinates, although the length of their copulatory tie was shorter than normal. A subordinate pair was present in the West Pack in 1973, and the alpha pair actively interfered with their courtship activities. On 6 February they were observed near a carcass at Windigo: While the efforts of the alpha pair to discourage courtship in this subordinate pair were persistent, of greater importance was the subordinate female's apparently irreversible lack of interest in the advances of the male. Before the West Pack fragmented in early February 1974, a presumably subordinate pair was observed mating while the pack rested nearby. The status of the alpha male from 1973 was not established before the mating took place, and immediately afterwards he behaved in a very subdued manner, walking at the rear of the pack with his tail down, while the mating pair led the way. This suggested a change in his status, yet he was still the alpha wolf in both the group of four in which he was later found and the pack of eight that reformed in early March. From the above accounts, it is obvious that alpha wolves usually interfere with attempts at courtship among subordinate wolves, although I did not record a case when they were actually able to prevent mating among subordinates. Such behavior on the part of the alpha wolves may, however, discourage pair-bond formation or initial sexual interest among subordinates. In the wild, of course, a subordinate pair could leave a pack and breed with no disturbance, but in such a case reintegration into the pack might be difficult. Our understanding of the effect of the dominance hierarchy on the formation of breeding pairs within a pack is still inadequate. In several packs, both captive and wild, the alpha male did not father the pups, or was relatively inactive sexually (Murie 1944; Rabb et al. 1967; Haber 1968). In the Brookfield pack, a male reduced his participation in courtship activities after assuming the alpha position. However, alpha wolves in both the East Pack and West Pack on Isle Royale have exhibited the most courtship behavior. Individual personalities and attributes and filial or allegiance bonds among wolves can greatly alter relationships within a pack (Rabb et al. 1967) and ultimately will limit the degree to which we can generalize about mate preference and the restriction of breeding among subordinate wolves. IMPLICATIONS OF SOCIALLY CONTROLLED MATING Behavioral limitations on mating, including mate preferences, may hold the productivity of wolves considerably below the theoretical maximum, and often only one litter of pups is born, even in large packs. The food-gathering abilities of the adults in the pack then contribute to the growth of a relatively small number of pups, enhancing their chances of survival. Since packs are basically family groups, there is obviously a high potential for inbreeding in stable packs. Woolpy (1968) studied the genetic implications of social organization in wolves. He contended that the notion that inbreeding results in deleterious effects probably is of little significance when genes are naturally "preselected" for combinations of adaptive value, as they are in wolves. As a demonstration of this principle he cited a study by Scott and Fuller (1965), who inbred beagles and basenjis with no deleterious effects, after preselecting them for fertility, behavior, and body conformation. 
We have already seen that the parents of wolf pups in the wild are likely to be dominant wolves, already preselected for traits of leadership and physical attributes (Fox and Andrews 1972). Of course, natural selection will rapidly eliminate inferior pups born in the wild. Woolpy (1968) further concluded that the organization of wolf populations into discrete packs, or subpopulations, was of considerable evolutionary significance. According to his hypothesis, over a period of several years of strong leadership in which most pups are born to a single pair, the expression of available genotypes (gene combinations) within a pack will be greatly reduced. Simultaneously, because of inbreeding, viable recessive gene combinations will appear more frequently. In the long run, this could result in greater variability between wolf packs. Thus, several genetic "lines" of wolves are maintained, with genetic variability partitioned "to give maximum exposure (to recessive gene combinations) at all times and to allow them to compete with each other and thus . . . provide the potential to move the population to new adaptive phenotypes" (Woolpy 1968:32). In a sense, due to wolves' social organization, evolution of the species would be accelerated, resulting in rapid adaptation to different environments. Such adaptability is evidenced by the original widespread distribution of wolves in North America and the description of 23 original subspecies on this continent (Goldman 1944). A major challenge facing us today is whether we can preserve a sufficiently large number of natural ecosystems to allow wolves and other species to achieve their own evolutionary potential. In spite of extensive field studies of wolves in various parts of North America, the precise nature of spatial relationships between adjacent packs was unknown until recently. It was unclear whether wolf packs occupied exclusive, nonoverlapping territories or whether neighboring packs utilized common hunting grounds, simply avoiding each other through direct and indirect communication. Recent studies of radio-marked wolves in many packs in Minnesota helped crystallize a concept of "land tenure" among wolf packs (Van Ballenberghe 1972; Mech 1972, 1973). Individuals within a pack utilize a common territory or "defended area" (Noble 1939), and the home range of individual wolves, defined as the area in which they travel during normal activities (Burt 1943), coincides with the territory of the pack to which they belong. The Minnesota studies revealed that packs usually occupy exclusive, nonoverlapping territories, with territory size and wolf density probably related to food supply. In northern Minnesota wolves began to travel outside their former territory in response to a shortage of their principal prey, white-tailed deer, indicating that territories may be enlarged in response to a decreased food supply (Mech, in press). Along the Minnesota shoreline of Lake Superior, where deer densities are very high, wolf densities reached 1/14 km2, with pack territories among the smallest reported for wolves (Van Ballenberghe 1972). Five resident packs totaling 40 wolves occupied an area the size of Isle Royale. The inherent flexibility of territory size, demonstrated by these Minnesota studies, allows for great adaptability to local conditions. We would expect that a pack's territory would be no larger than necessary to obtain sufficient prey. 
A shrinkage of territory in response to an expanded food base allows for the establishment of additional packs, as on Isle Royale. Significantly, during the winter prior to the appearance of the East Pack on Isle Royale, the West Pack utilized only half of the island. Simultaneously, the amount of food available to Isle Royale wolves was higher than at any other time during this study (Table 10). In 1972-74 Isle Royale had two primary packs, each occupying about half of the island. Additional duos and trios usually occupied areas along the boundary between the two large territories. Loners followed the large packs and scavenged their kills, or existed independently. As pack sizes increased from 1972 to 1974, the amount of spatial overlap between the two packs increased; there was a simultaneous decline in the calculated amount of prey available to wolves in both packs. Pack territories probably are no larger than the minimum size necessary to provide sufficient food and are sensitive to changes in the density of vulnerable prey. This flexibility in territory size is advantageous to wolves in maximizing their hunting efficiency. Restriction of pack activity to a certain area ensures an intimate knowledge of that area (prey locations and the easiest travel routes) and prevents wasteful overlap in the hunting efforts of two packs. A mechanism that spaces packs in relation to prey may be of greatest importance during the pup-rearing season, when pack activity is centered around the relatively immobile pups. Litters distributed so that there is plenty of food in the surrounding area would provide for rapid growth of pups. The advantages accruing from a system of exclusive territories should apply equally well to other species of group-hunting carnivores. Indeed, the spotted hyena (Crocuta crocuta) exhibits a similar pattern of social organization and spacing, at least at high population densities (Kruuk 1972). At lower predator densities the "need" for exclusive territories would be lessened; Kruuk (1972) found that in the Serengeti, where prey are highly mobile in response to environmental factors, the hyena population was relatively low and territories were not as clearly defined as in regions of higher hyena densities. Eaton (1974) suggested that naturally low densities of cheetahs might explain the lack of exclusive territories in this species. Territory size of the East Pack increased from 1972 to 1974, while there was no consistent trend in the range of the West Pack (Fig. 58; Tables 12, 13). Simultaneous with the increase in the territory of the East Pack, there was a consistent decline in the availability of food per wolf in both packs. TABLE 12. Numbers, movements, range, and prey availability for the East Pack, 1971-74. TABLE 13. Numbers, movements, range, and prey availability for the West Pack, 1972-74. The amount of overlap in pack territories increased from 1972 to 1974. In 1972, the first winter for the East Pack, approximately 9% of the island was not utilized by either main pack. The following year the two packs overlapped on 6% of the island during winter. In 1974, the amount of overlap increased to 16% of the island, primarily because of the movements of the East Pack into traditional West Pack territory. It seems reasonable that the increased amount of overlap resulted from the growth of the East Pack, with a concurrent increase in its food requirements.
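Overlap figures of this kind follow directly from mapped winter ranges: the area shared by the two territories, or used by neither pack, is divided by the area of the island. The sketch below illustrates the calculation with hypothetical rectangular outlines standing in for the actual territory and island boundaries; the shapely geometry library is assumed to be available.

```python
# Sketch of how percent overlap between two pack territories, and the share of
# the island used by neither pack, could be computed from mapped winter ranges.
# The rectangles below are hypothetical stand-ins, not the actual outlines.

from shapely.geometry import Polygon

island = Polygon([(0, 0), (70, 0), (70, 12), (0, 12)])
east_territory = Polygon([(32, 0), (70, 0), (70, 12), (32, 12)])
west_territory = Polygon([(0, 0), (40, 0), (40, 12), (0, 12)])

overlap = east_territory.intersection(west_territory)
unused = island.difference(east_territory.union(west_territory))

print(f"territory overlap: {100 * overlap.area / island.area:.0f}% of the island")
print(f"used by neither pack: {100 * unused.area / island.area:.0f}% of the island")
```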
EXPANSION OF EAST PACK TERRITORY When the East Pack moved into what had been regarded as West Pack territory in 1974 its behavior and movements were of great interest (Fig. 59). The following account was edited from field notes: Several important aspects of the above account should be emphasized: (1) the East Pack, although it traveled into "foreign" territory, moved into an area that had not been used recently by the West Pack, and it turned back when it encountered significant West Pack activity; (2) the East Pack outnumbered any of the fragments of the West Pack with which it had direct or indirect contact. The alpha pair and two other wolves of the West Pack avoided areas crossed by the East Pack, even areas within their own territory; (3) as the East Pack ventured into areas probably unfamiliar to it, there was an unusual amount of scent-marking, by both dominant and subordinate members of the pack. MAINTENANCE OF TERRITORY Wolves must rely heavily upon indirect means of communication to delineate territorial boundaries. Potentially, any behavior which advertises the presence of a pack and causes another to avoid intrusion is of territorial significance. Scent-marking, howling, direct aggression, and avoidance may all serve to maintain territory. Scent-marking. In addition to other functions discussed later, scent-marking serves to delineate territorial boundaries. The above account of the movement of the East Pack into West Pack territory is the only reported observation of a pack encountering another pack's scent-marks. One additional instance of avoidance behavior along a territorial boundary was deduced from tracks in February 1973 (Fig. 62). Peters and Mech (1975) detailed four cases, determined from tracks, in which a pack responded to foreign scent-marks by avoidance. In one instance a pack chasing a wounded deer ceased pursuit at its territorial boundary. Generally, the outermost kilometer of a pack's territory was scent-marked profusely compared to the center. They concluded that scent-marking was an extremely important part of territorial behavior that contributed to efficient spacing among wolves. Howling. Joslin (1967) suggested that howling may be of territorial significance, since it is an effective form of long-distance communication and may convey enough information to permit identification of howling wolves. Since the interface between packs on Isle Royale is so short and territories are long and narrow, howling may be a less effective method of communication between packs than it would be elsewhere. Responses of wolves to human imitations of wolf howls are variable. In one case in June 1973, Sheldon L. Smith (pers. comm.) was sitting on a ridge less than a mile from where human imitations of wolf howls were being broadcast. Although he did not hear the human howls, he heard at the same time seven or eight brief howls from several wolves that were passing him through adjacent thick vegetation. The wolves seemed to be responding to the human howls, but were traveling in the opposite direction. In another case, in September 1972, we elicited a howling response from several wolves in East Pack territory. We approached the group and howled again, and soon a single wolf approached us. Its body and tail markings suggested it was the alpha male of the East Pack. When the wolf saw us it turned and ran back to the rest of the group, and the pack disappeared. 
Only the one wolf, quite possibly the dominant male, left the others to confront what he might have thought were foreign wolves. Joslin (1967) was often approached by one or more wolves when he howled within 200 yards of homesites. He interpreted this as active resistance toward intruders. Direct aggression and avoidance. When adjacent packs make visual contact with each other, such as across the ice of a large lake, they must either confront or avoid each other. The response of any two packs probably would depend on their previous history of association and perhaps their numerical strength. The East Pack killed a strange wolf even though it was in unfamiliar surroundings. The outcome of such a confrontation might well have been different if the East Pack had met the entire West Pack, instead of not more than three. Lack of numerical strength may have been the reason for the previously discussed avoidance of the East Pack by four West Pack wolves in Siskiwit Bay, even though the West Pack animals were in their own territory. SMALL PACKS AND "LONERS" In all 3 years that both East and West packs have existed, an additional, small group of wolves has been present (Fig. 63). In 1972, a pack of two or three was seen in the Malone Bay area, ranging over to Houghton Point and possibly as far as Chippewa Harbor. In the next two winters, two wolves (the "Todd duo") traveled the north shore in the vicinity of Todd Harbor and were also seen near Intermediate Lake and Lake Whittlesey. The smaller of the pair was noticeably reddish on its lower flanks and belly. Their friendly greetings suggested a male and female pair. It is significant that in the years when two large packs "divided" the island approximately in half, the small pack usually inhabited an area either between the two packs or overlapped by each. This supports the hypothesis of Mech and Frenzel (1971:33), who believed that wolves in Minnesota were organized into breeding packs occupying exclusive territories, with "loners and other nonbreeding population units" inhabiting nonexclusive areas among the pack territories. These small groups probably survive only by their ability to avoid the large packs. The "loners" are more difficult to locate and observe than the larger groups. Their ecology and social status relative to other wolves in the population is little known. Jordan et al. (1967) described several stages of "dissociation" of single wolves from a pack. They believed that many loners were aged and socially subordinate wolves that were gradually excluded from the pack, although in individual cases it is usually not possible to determine sex, age, or previous social relationships. I have seen only one case in which a single wolf which was following a pack might have been a "dropout." After the West Pack declined from eight wolves to seven in mid-February 1972, a wolf was seen following it, often hesitating and apparently trying to remain hidden from the view of the pack. The pack had left the carcass of a moose near Windigo, traveled a short distance, and then lay down on the ice north of Beaver Island. The single wolf walked up on a 50-m rise on the north side of Beaver Island, then sat down at the edge of a cliff overlooking the pack and watched them intently, hidden from view by trees. 
The different reactions of a pair of wolves to single wolves were recorded in 1974 (field notes): Communication Among Wolves The highly social nature of wolves and the flexibility of their group structure and hunting habits probably account for the diversity in forms of vocal communication found in this species. Howling, the most widely known and most distinctive wolf vocalization, is of obvious significance in long-range communication. Individual wolves have distinctly different howls and seem to be capable of distinguishing differences in howls, so there is a high potential for exchange of information via howling (Theberge and Falls 1967). Other widely recognized sounds that are not often heard in the wild include the whimper, growl, and bark (Mech 1970). Howling. Howls can be heard for several miles under certain conditions, and Joslin (1967) reported that howling could advertise the presence of wolves over a 130-km² area. In addition to possible territorial significance, howling helps to assemble individuals in a pack after they have been separated. On Isle Royale in 1973, howling also was of obvious importance in coordinating moves of a large pack between summer homesites. Spontaneous howling of East Pack wolves was heard 62 times during approximately 383 hours spent near their rendezvous sites (homesites) in 1973. Most of the howling was heard at night (Fig. 64), when more adults were hunting and spatially separate. Such howling may help wolves coordinate hunting efforts. Pups and adults at or near a homesite often howled in response to howls of distant adults. Almost half (45%) of the howls heard near East Pack homesites included adults that howled some distance away. Increased howling at dawn and dusk may be associated with departures and arrivals of adults at the rendezvous areas. Carbyn (1974a) recorded dawn and dusk peaks in howling and general activity at wolf rendezvous sites in Jasper National Park in Alberta. Murie (1944) described how adults assembled at the den before departure for their nightly hunt. Howling at this time accompanied generally friendly behavior, with much greeting among the adults. Group howling and greeting ceremonies often occurred together among members of the captive pack at Brookfield Zoo (Fig. 65). Group howling also is common among coyotes and jackals (Canis aureus) (Kleiman 1967). A group howl was observed at a summer rendezvous of the East Pack in July 1973 (paraphrased from field notes): Although wolves are capable of fine auditory discrimination, they may howl in response to sounds which, to human ears, are quite distinct from actual wolf howls. At Brookfield Zoo, howling often occurs in response to sirens. Human "howling" is often an adequate substitute for prerecorded wolf howls when attempting to stimulate howling among wolves. The common loon (Gavia immer) has a call that closely resembles a wolf howl; twice in 1973 Isle Royale wolves at summer homesites began to howl immediately after hearing loons. Once, the pups were clearly the first to respond. On two occasions I heard loons calling shortly after wolves began to howl. Other vocalizations. Only limited information was gathered on Isle Royale on other forms of vocal communication, mainly because they are inaudible at long distances. An adult whimpered when it arrived at a summer rendezvous after the rest of the pack had left. Whimpering, interspersed with occasional high-pitched yipping, was frequently heard from pups as they mobbed adults arriving at rendezvous sites. 
Joslin (1966) believed that whimpering was a friendly greeting, sometimes conveying a submissive attitude. Whimpering was often part of low-intensity friendly greetings at Brookfield Zoo, especially between pairs during the mating season. Barking was heard only during group howls at rendezvous areas. Much of the pup vocalization during group howls consisted of high-pitched "yips," and adult barking sometimes accompanied these pup vocalizations, especially near the end of a howl, much as Joslin (1966) described. He considered barking to be either of a threatening or alarm nature. The "alarm bark" is short and often seemed to cut off a howling session. Joslin occasionally elicited a threatening bark by howling at close proximity to wolves at a rendezvous. In such cases, the barking was more continuous and interspersed with growling. Humans, with a poor sense of smell, are ill-equipped to appreciate the importance of olfactory communication. Scent-marking helps maintain territories, contributes to pair-bond formation, provides information on social and sexual status and individual identity, and helps orient wolves in their environment (cf. Peters and Mech 1975). In canids, elimination (urination, defecation) and rubbing of certain body areas may have scent-marking significance (Kleiman 1966). Scent-marking differs from simple elimination by its directional and repetitive nature; that is, the same object may repeatedly be scent-marked. Kleiman also suggested that this form of scent-marking developed from autonomic responses to strange or frightening situations. Initially, scent-marking could have reassured an animal entering a strange environment and may have since acquired additional signal value in territoriality and courtship (Fig. 66). Wolves have at least two specialized scent glands (Mech 1970; Fox 1971). The anal gland is located on each side of the anal sphincter; presumably scent deposition takes place with each passage of feces. A tail (precaudal) gland of unknown marking function occurs on the dorsal surface of the tail near the base, under a dark patch of hair ("dorsal spot"). Urine is also of considerable scent-marking importance among wolves. In a field study based on tracking wolves in snow, Peters and Mech (1975) distinguished four types of scent-marks: (1) raised leg urination (RLU); (2) squat urination (SQU); (3) defecation (scats); and (4) scratching. They found that the RLU was the most frequent and significant type of scent-mark. Scent-marking by Isle Royale wolves was observed only in winter, usually from the air. Scent-marking was often difficult to distinguish from normal elimination, which seemed to be most common when packs were resting near kills or just beginning to travel. In these cases I ignored defecation and urination unless clearly directed at an object. Frequency of scent-marking. Obvious differences in frequency of scent-marking occurred among Isle Royale wolves (Table 14). In all cases the packs were traveling. When the East Pack first entered "foreign" territory we observed 10 scent-marks in a half-hour of observations, compared to 2 in an equivalent length of time as the same pack reentered its own territory several days later. TABLE 14. Scent-marking frequency in traveling wolves. The highest level of scent-marking occurred when three wolves (McGinty duo + 1) left a kill in full view of four wolves of the West Pack (including the alpha pair) who had bedded down 1 km away after following the tracks of the three for many miles. 
The West Pack wolves were not watching the trio, one of which glanced in the direction of the sleeping wolves twice as we circled. This wolf made six of the seven scent-marks observed. In this case frequent urinations might have resulted from autonomic responses to fear or apprehension and might not have been actual scent-marking. Peters and Mech (1975) found that when packs traveled within a kilometer of the edge of their territories, the frequency of RLU's was twice as high as when packs traveled in the center of their territories. Thus, an accumulation of marks characterized territorial boundaries. The strongest stimulus to scent-mark was the mark of a neighboring pack. In one case when a pack discovered fresh tracks of a neighboring pack on its territorial boundary, these researchers found 30 RLU's, 10 scratches, 2 SQU's, and 1 scat. During normal travel in winter, wolf packs left a sign every 240 m on the average, including a RLU every 450 m. At their normal rate of travel of about 8 km/hr (Mech 1966), that implies an olfactory mark about every 2 minutes, with a RLU every 3 minutes. Isle Royale wolves demonstrated comparable marking frequency (Table 14). Indicator of sexual and social status. The alpha male and female in all packs observed from 1972 through 1974 accounted for most of the recorded instances of scent-marking (27 of 39 cases). In all cases there was active courtship between the alpha wolves; scent-marking clearly played a role in these activities at times. On four occasions the alpha female was seen urine-marking an object, and the alpha male, usually right behind her, sniffed the location and then urinated on it. Twice the alpha female marked a scent-post of the alpha male. In these cases, scent-marking should be considered part of the mechanism of pair-bond formation, as Schenkel (1947) suggested. Twice an alpha male mounted the alpha female immediately after inspecting her fresh urine-mark. The frequency of genital inspection of females during the mating season indicates the importance of olfactory cues in sexual behavior. Experimental work with domestic dogs reviewed by Johnson (1973) documented that urine from estrous females was more attractive to males than that of anestrous females and stimulated mounting among males. Peters and Mech (1975) found that the frequency of RLU and SQU increased before and during the wolf breeding season. During the breeding season they often found a SQU and RLU together in the snow, indicating a female-male combination as described above. We might surmise that scent-marking in the wild would be an important means of establishing initial contact between potential mates in widely dispersed populations. Peters and Mech (1975) pointed out that a lone wolf would be able to determine where potential mates lived and whether they were already paired off, since mating pairs often mark the same points. Also, territorial boundaries were marked with such clarity and frequency that a newly formed pair could easily tell whether they were in an occupied territory, along a territorial boundary, or in unoccupied space. Limited data suggests that scent-marking is related to social status. Alpha wolves often marked when exhibiting no sexual behavior. In some cases subordinate wolves did not mark when they might have been expected to do so. For example, in 1972 the alpha female in the East Pack squatted and urinated on a ridge of ice as the pack was traveling. The spot was inspected by a subordinate wolf, who continued on its way without marking. 
The alpha male, however, lifted his leg and urinated on the spot after sniffing it. In 1973, four subordinate wolves of the West Pack followed the alpha pair into the woods next to shore. The alpha male led the way, marking a tree on the shoreline. The alpha female followed suit, but the other four wolves sniffed the scent post and left without adding their own scent. Mech and Frenzel (1971) recorded an instance when a wolf believed to be the alpha male was more active than the others when the pack encountered scent posts. Peters and Mech (1975) reported that in two captive packs only high-ranking wolves raised their legs when urine-marking. Twice when they tracked wild pups, they found several SQU's, but no RLU's. While males tend to lift their legs more often than females when scent-marking (Kleiman 1966), several alpha females did this on Isle Royale. I did not observe subordinate wolves lift a hind leg while scent-marking, at least while alpha wolves were present. Orientation and information exchange. Scent-marking in canids may also serve for orientation and information exchange. Humans naturally regard visual signals as the most important means of orientation but, for wolves, a keen sense of smell would be more valuable. Scent-marks are frequent along pack travel-routes and are especially prominent around kills or other centers of activity. About half of the scent-marks recorded by Peters and Mech (1975) were at trail junctions. Peters (1973) found that fatty acid content of anal gland secretions differed between males and females and that all individuals differed slightly from one another. Peters and Mech (1975) showed that wolves tend to remark fresh scent-marks more often than old marks, and inferred that wolves also could discriminate between marks of different ages. Thus, by simply sniffing a scent-mark, a wolf can probably tell whether the marking wolf was a stranger, a male or female and its reproductive status, and how long ago the mark was made. Territorial marking. Young (1944) reported that wolves became greatly excited and scent-marked frequently when encountering the introduced scent of a strange wolf. Schenkel (1947) believed that scent-marking was of territorial significance; avoidance behavior in response to another pack's scent posts along a territorial boundary occasionally has been documented (see Maintenance of Territory). Rolling and scratching. Kleiman (1966) believes that rolling has some scent marking function (perhaps self-marking); Fox (1971) suggested that this may promote interaction with other pack members by encouraging social investigation of wolves carrying interesting odors. Wolves that are separated from the pack could transport odors on their fur and perhaps transmit information to other pack members. I have seen Isle Royale wolves roll at various places: a moose bed in the snow, the kill site of a fox, the dug-out remains of an adult moose killed 7 months earlier, and in snow next to a fresh moose kill. Also, when the East Pack reached Little Siskiwit Island on its trek into new territory, two wolves rolled on the ice after sniffing it; perhaps this was an old scent post. Wolves often scratch with their feet after urinating and defecating (Fig. 67). Usually, only high-ranking wolves exhibit such marking (Peters and Mech 1975). The function of this behavior is not clear, but Mech (1970) noted that it would increase the visual signal value of a scent-mark. 
Schenkel (1947) thought that it might be a behavioral rudiment which perhaps has lost its original function. This is reasonable in light of comparative work with other carnivores. Kruuk (1972) found that male hyenas scrape only as a sexual display in courtship, probably to distribute the scent of their interdigital glands. While interdigital glands have not been described in wolves and coyotes, Fox (1971) stated that such glands are found in red foxes and perhaps existed in primitive canids. The shy and elusive nature of wild wolves makes summer ecological studies difficult. Significant observations in the wild are possible at dens or other centers of activity on the tundra of Alaska and Canada, but in forested areas rarely more than a fleeting glimpse of wolves is possible from the ground (Fig. 68). Radio-tracking has provided detailed knowledge of wolf movements in summer in Ontario and Minnesota (Kolenosky and Johnston 1967; Van Ballenberghe 1972; Mech and Frenzel 1971). During this study, a den used by the East Pack was found in early July 1973. The wolves had abandoned it but were located at a rendezvous area 1 km away. Subsequent movements of the pack were followed to three additional rendezvous sites until pack movements became extensive in late September. One rendezvous site of the West Pack was found in September, about a month after it had been vacated. Direct observations of wolves were possible only at one rendezvous of the East Pack in 1973. Wolves occasionally dig out dens weeks in advance of the birth of pups, which probably takes place in late April on Isle Royale. While wolves usually dig underground dens in sandy soil, they have also used hollow logs, rock cavities, old fox dens, and beaver lodges (Mech 1970). Dens commonly are close to water, perhaps because nursing females have a high water requirement. The whelping den used by the East Pack in 1973 was an abandoned beaver lodge, whose entrance had been exposed when a dam broke. Another abandoned lodge 10 m away and a nearby hole in a sandy bank also appeared to have been used (Fig. 69). All holes were within 20 m of water. Many old scats under leaves and other debris indicated that this den had been used before, possibly during the first 3 years of the East Pack's existence. Tracks of a wolf were followed to this den in March 1974, and some scratching was found, but the site was not used in 1974. Both lodges had a central chamber, barely large enough for an adult wolf, and many interconnecting tunnels which only pups could use. The only obvious alteration by wolves was the enlargement of at least one entrance and tunnel to the central chamber. Scattered around the den area were bones from at least six beaver, one muskrat, and one adult and one calf moose. In 1975, the East Pack again denned in a beaver lodge. The West Pack denned in a hollow white pine (Pinus strobus) trunk that had fallen to the ground (Fig. 70). The log was 9 m long with the major opening 45 cm high and 55 cm wide. Pups frequently had used smaller openings created by decay of the wood. Three other possible whelping dens also were found: two abandoned beaver lodges and a hollow log. Wolves visited all six dens during both summer and winter. Because there are very few opportunities for wolves to dig dens on Isle Royale, they take full advantage of existing structures. In temperate regions pups are usually moved from the den site in late June or early July, after the pups have been weaned (Mech 1970). 
Thereafter, the activities of the pack center around "rendezvous sites" (Murie 1944:40) or "loafing areas" (Young 1944:103), where the pups remain while the adults make hunting forays. A succession of rendezvous sites is used by a pack until the pups are able to accompany the adults on all their travels. Rendezvous sites, like whelping dens, usually are near water and often are adjacent to bogs (Joslin 1967). In 1973, five rendezvous sites were found on Isle Royale, four of the East Pack and one of the West Pack (Fig. 71). All five were located by abandoned beaver ponds, with water still available nearby. Size varied from 0.4 ha to a drainage 1 km long. Most had a prominent open area where the vegetation had been matted, and holes often had been dug in nearby banks. A small den was found beneath the roots of a cedar tree at one area, and a beaver lodge had been excavated and used at another area. Both dens and rendezvous are frequently reused, with former rendezvous sites possibly serving as den sites at a later date, and vice versa. Over a 3-year period, a pack studied by Carbyn (1974a) used the same den and the same first rendezvous site each year. Rendezvous sites generally are used for shorter periods than den sites. Joslin (1967) found that packs moved on average every 17 days, possibly influenced by frequent human howling nearby. Baffin Island packs moved to different summer dens (analogous to rendezvous sites) about every 30 days (Clark 1971; Van Ballenberghe 1972). On Isle Royale, rendezvous areas were occupied from 11 to at least 48 days (Table 15). Wolves have been seen at rendezvous sites as late as October on Isle Royale and in northeastern Minnesota. TABLE 15. Successive rendezvous areas of the East Pack, 1973. OBSERVATIONS AT A MIDSUMMER RENDEZVOUS In July 1973, we observed the East Pack at its second rendezvous site from about 200 m. The pups usually were the only wolves in sight. Adults spent much of the day in the cooler forest surrounding a central open area. Most activity was observed before 11:00 a.m. or after 5:00 p.m. Seven pups, probably the total in the pack, were seen at this rendezvous, along with at least seven different adults. Most of the adults in the pack probably visited the rendezvous periodically. Adults were observed arriving at the rendezvous nine times, always before 10:00 a.m. or after 5:00 p.m. Only once did two wolves enter together, indicating that most of the hunting effort by adults in summer is done by individuals or small groups. Pups often sensed the imminent arrival of adults and ran out as a group to meet them. Such an arrival was an occasion of great excitement for the pups, and they greeted the adults by yipping and jumping at their heads. Excited licking of the mouth acts as a stimulus causing regurgitation of food for the pups. On three occasions the arriving adults regurgitated food for the pups immediately, while being greeted. The pups ate such regurgitated food within a minute. Adults rarely remained in open areas for any length of time. About 11 kg of food per day would be required to feed seven pups (based on Kuyt 1972); providing this amount is undoubtedly a demanding task. Pup activity alternated with long periods of sleep, but even then pups frequently looked up or stood and readjusted their position. Their ears were in constant motion because of insects, mostly mosquitoes. When resting, pups often sought each other's company, even flopping down directly on another sleeping pup. 
Rest was interrupted with jaw-wrestling, scruff-holding, and occasional nibbling of legs and tails of nearby pups. Many of the pup activities were group-oriented, such as play-fighting and competition for bones or sticks, appropriately termed "trophies" by Crisler (1958). Pups probably did most of the digging found at rendezvous sites. Many items were chewed extensively: moose bones, antlers (especially those in velvet), sticks, and at one site, an aluminum canteen. One evening, five pups gathered around a rotten birch log. They attacked the log in much the same way that they would later treat a moose carcass; each pup lay on its belly, chewing on its portion of the log and snapping at any encroaching sibling. They ripped enthusiastically at loose pieces of rotten wood and occasionally wandered off with a chunk for more peaceful chewing. Six of the pups were of uniform appearance and impossible to tell apart. The seventh, called "7-up," was much lighter in color than the others. Its activities often set it apart. When first distinguished, this pup was the scapegoat during vigorous play-fighting of four pups. With tail firmly planted between its legs, "7-up" continually was the object of chases and alternately was submissive and defensive. Another time, this pup was chewing on a calf-leg bone when two others walked up and stood over "7-up" with a dominant attitude; one finally grabbed the bone. After a spirited defense, "7-up" ended up on its back, entirely submissive. On at least two occasions, "7-up" was the only pup at the rendezvous; once it appeared that no other wolves were present. During this time a cow moose walked slowly into the open area while the pup was out of sight. She stopped and sniffed the ground thoroughly. Undoubtedly, the scent of wolf pervaded the area, and she seemed hesitant, her movements very slow. Every few steps she stopped and looked about, frequently sniffing the ground in matted places. Finally she walked down the drainage and disappeared. Almost immediately, "7-up," with nose to the ground, scampered into the opening and followed the moose briefly. It is quite possible that the pup had had the moose under surveillance but was reluctant to show itself when the moose was nearby. Crisler (1958) and Fentress (1967) reported that their captive wolves were initially afraid of large animals, even traditional prey. Considerable experience is probably necessary before pups become effective predators of ungulates as large as moose. Moose commonly exhibit no fear of wolves. They were seen several times browsing on the edge of a rendezvous. Once I watched a bull, apparently unconcerned, browsing within 100 m of some pups and adults that were howling just out of sight. MOVEMENTS BETWEEN RENDEZVOUS SITES Wolves move to different rendezvous sites for reasons that are seldom known. The accumulation of feces and debris eventually may render dens less desirable (Young 1944; Rutter and Pimlott 1968); perhaps the same applies to rendezvous sites. In some cases wolves might move the pups to a fresh kill. At two of the five rendezvous examined in 1973, a moose-kill was found in the center of the activity area (Fig. 72). At the rendezvous that the pack reused in 1974, a fresh kill was found in almost the same location as a kill that had been made the previous year. We watched the East Pack abandon its second rendezvous of 1973. 
Howling helped to coordinate the move to a new site, as shown by the following field notes: Earlier in the summer, five pups were observed en route from the first to the second rendezvous. In this case adults were howling periodically at both locations, and the five pups went to the next site by themselves. The following day, a sixth pup was still present at the first site. Two nights after initial occupancy of the second rendezvous, adults present at the first site were heard howling in response to pups and adults at the new site, indicating that several days may be necessary for complete relocation. Although pup mortality is widely regarded as an important factor controlling wolf populations, information on pup production and survival on Isle Royale is very limited (Table 16). Sometimes the minimum number of pups in packs can be estimated during winter, but this is not a valid year-to-year index. TABLE 16. Pup production on Isle Royale, 1970-73. Pup condition may provide some indication of the extent of mortality. A dead, emaciated pup was found in 1964 (Jordan et al. 1967), suggesting that inadequate food supply early in life might be a critical factor on Isle Royale. A decrease in food supply seems to be an important reason for poor pup condition and low survival in Minnesota (Mech 1973; Van Ballenberghe and Mech 1975; Seal et al. 1975). Kuyt's (1972) data suggested lower pup survival in areas where tundra wolves relied heavily on small mammals when caribou were absent. A visual comparison of pups on Isle Royale and in Minnesota suggests that the pups on the island were faring well. I first saw the East Pack pups in late July. Subsequently I saw four pups, weighing between 8 and 13 kg, live-trapped in northern Minnesota in late September. By comparison, the Isle Royale pups seen 2 months earlier weighed about 9-12 kg. This is within the range of weight of captive pups of the same age (Kuyt 1972), and is higher than weights of pups caught in Minnesota, where there was a food shortage (Van Ballenberghe and Mech 1975). Two pups of the East Pack were seen about a month later. Growth in the intervening period was obvious; weight was estimated at 16 kg. They appeared full-bodied, with well-developed coat and guard hairs. These two pups were larger and appeared heavier than the four pups caught in Minnesota a full month later. These observations of Isle Royale pups suggest that the midsummer food supply, at least in 1973, was sufficient for normal growth and development. However, there can be great differences in pup weights even within a single litter (Van Ballenberghe and Mech 1975). There was some evidence of retarded winter pelage development among some pups in the East Pack in February 1974 (Fig. 73). Nonetheless, winter observations of this pack since 1972 indicate rapid numerical growth, suggesting high pup survival from 1971 through 1973 (three successive litters). In winter, wolves encounter scavengers for which moose carcasses are a principal source of food. Besides the red fox, many birds also utilize wolf-killed moose, primarily the raven, gray jay, black-capped chickadee, and an occasional eagle. Only the fox and raven will be considered here. While wolves were seen chasing foxes six times in winter 1972-74, none was caught. Foxes can often run on light snow crusts where wolves break through, and they invariably outrun wolves when chased overland in snow. In the only chase seen on ice, the fox had such a long head start that it reached the shore with no trouble. 
In 1972, the East Pack was observed just leaving a fox it had killed on the open ice of Malone Bay. The area was matted with wolf tracks, and much hair had been pulled from the fox, though it was not eaten. The fox's ability to outrun wolves in most snow conditions may be an important reason for its continued coexistence with wolves on Isle Royale. Coyotes, however, disappeared from the island around 1957, less than a decade after the arrival of the wolf. Foxes have thrived recently on Isle Royale, and perhaps even increased after the disappearance of coyotes. While foxes have been observed on Isle Royale since the mid-1920s, long-time island residents report that foxes were uncommon, at least relative to coyotes, before wolves became established. Moreover, less competition for food resources exists between wolves and foxes than between wolves and coyotes. Johnson (1969) reported that snowshoe hares were the most important year-round food for Isle Royale foxes, and that at certain seasons they made extensive use of insects and fruit. Coyotes relied heavily on moose carcasses. Wolves apparently eliminated coyotes on Isle Royale (Mech 1966; Krefting 1969; Wolfe and Allen 1973), probably through direct killing and competition for food. Wolves occasionally were indifferent to the presence of foxes. In 1973, the West Pack bedded down on the ice after feeding on a moose carcass. Soon a fox approached, cautiously staying out of sight of the wolves when possible. At the carcass, the fox chased away several ravens and woke the wolves in the process, but they merely raised their heads for a brief look. During winter periods when foxes were unable to catch snowshoe hares because of deep snow, they relied heavily on carcasses of wolf-killed moose (Fig. 74). Foxes have difficulty penetrating the thick hide of a moose; they depend on wolves not only to kill the moose but also to open it up. In winters when utilization of kills by wolves is less than usual, moose carcasses may attract a large number of foxes, as many as 10 at one time in 1972 (Appendix G). Ravens on Isle Royale in winter are almost entirely dependent on food indirectly provided by wolves (Fig. 75). Ravens often accompany the large packs in their travels, sitting in trees when the wolves stop to rest. Fresh kills draw ravens from miles; 28 ravens were seen once on a moose carcass. Ravens also eat wolf scats, especially fresh ones with much incompletely digested meat. Similarly, they feed not only on fresh mountain ash fruit but also on fox scats that are loaded with fruit remains. Since ravens and wolves often feed on the same carcasses, there is much interaction. Ravens seem to tease resting wolves, swooping low over their heads, landing nearby and hopping close, further arousing the wolves (Murie 1944; Crisler 1958; Mech 1966). Wolves, in turn, leap at ravens in the air, stalk them on the ground, and scatter them from kills. In February 1974, Don Murray and I were circling a kill of the West Pack, with four wolves resting nearby. Suddenly a wolf made a couple of quick bounds; it had caught a raven, something Murray had not seen in 16 winters of flying on Isle Royale (Fig. 76). The wolf shook the raven vigorously in its mouth, then trotted by two other wolves, lay down on its belly and shook it again. Another wolf followed with great interest but was repulsed by a snap from the prize-holder. Finally, the wolf with the raven buried it in snow among some alders and trotted out to greet the other wolves. 
Next, it dug out the raven and paraded around with it in its mouth, always refusing to let the other wolves inspect it closely. After 15 minutes of this activity we left, but returned an hour later to find the wolves still playing with the raven's carcass. One wolf buried it below a shelf of ice next to shore, then stood above it while another wolf closed in on the buried trophy. When the wolf below came within 2 m, the one above leaped off the ledge and rolled the other over. A brief chase ensued, and then the whole pattern was repeated. The following day the wolves were gone, leaving the raven carcass in the snow. 
World War II Homefront The World War II Home Front means the non-military activities of a nation during wartime, including politics, society, culture and the economy. Life on the home front during World War II was a significant part of the war effort for all participants, and had a major impact on the outcome of the war. This article covers World War II outside the U.S. and Canada; see American Homefront, World War II. The major powers devoted 50–60% of their total GDP to war production at the peak in 1943. The Allies produced about three times as much in munitions as the Axis powers. The U.S. sent about $50 billion in military aid to the Allies through Lend-Lease. [Table: annual munitions production of the major powers, 1935-39 average and 1940-1944, with totals for 1939-44. Sources: Goldsmith data in Harrison (1988) p. 172; Jerome B. Cohen, Japan's Economy in War and Reconstruction (1949) p. 354.] In 1939-1940, eastern Poland, Estonia, Latvia, Lithuania and Bessarabia were invaded and annexed into the Soviet Union proper. The Soviets lowered the local standard of living and disrupted and destroyed the prevailing socioeconomic structure. Local currencies were still legal tender but so was the Russian ruble. The occupying Russian soldiers were paid in rubles, and the established exchange rate overvalued the ruble by as much as 2000 to 3000 per cent. This overvaluation made the average Russian soldier extremely rich. This huge influx of rubles started a wave of inflation that natives did not notice at first. Eventually, shortages developed as Soviet purchasing agents fanned out through the newly occupied nations, buying up wholesale goods in warehouses and the output of local factories. Goods produced locally were shipped to Russia instead of resupplying the local market. Russian propaganda stated the goal was to raise the ordinary working person's standard of living. Prices were frozen, and wages were raised by as much as ten times. Merchants and factory owners declared bankruptcy and went out of business. Shortages of food and other necessities brought growing inflation, a black market, and discontent among the population. These deliberate Soviet policies raised the cost of living but not the actual standard of living. Once annexation was complete, local stores and industries were nationalized, their former owners arrested, stripped of their possessions, including their accumulated rubles, and shipped to the gulags of Siberia. Workers still employed were then paid in rubles. see also Holocaust On September 1, 1939, Germany invaded Poland, conquering it in six weeks, as the Soviets invaded the eastern areas. During the German occupation there were two distinct uprisings in Warsaw, one by Jews in 1943, the other by Poles in 1944. Rutherford (2007) looks at the Wartheland region in a study of efforts to "Germanize" areas of western Poland. There were four major deportation operations between December 1939 and March 1941. Action taken against non-Jewish Poles was linked to the Nazis' later policy of Jewish annihilation. Jews in Warsaw Ghetto: 1943 The first uprising took place in an area of less than two square miles which the Germans had carved out of the city and called "Ghetto Warschau." Into this Ghetto, around which they built high walls, the Germans crowded 550,000 Polish Jews, many from the Polish provinces. At first, people were able to go in and out of the Ghetto, but soon the Ghetto's border became an "iron curtain." Unless on official business, Jews could not leave it, and non-Jews, including Germans, could not enter. 
Entry points were guarded by German soldiers. Because of extreme conditions and hunger, mortality in the Ghetto was high. Additionally, in 1942 the Germans moved 400,000 of the Ghetto's inhabitants to Treblinka, where they were gassed on arrival. When, on April 19, 1943, the Ghetto Uprising commenced, the population of the Ghetto had dwindled to 60,000 individuals. In the following three weeks virtually all died as the Germans fought to put down the uprising and systematically destroyed the buildings in the Ghetto. Warsaw Uprising of 1944 The uprising by Poles, ordered by the government in exile in London, began on August 1, 1944. The Polish underground "Home Army," seeing that the Soviets had reached the eastern bank of the Vistula, sought to liberate Warsaw. However, Stalin had his own group of Communist leaders for the new Poland and did not want the Home Army or its leaders (based in London) to control Warsaw. So he halted the Soviet offensive. The Germans suppressed the rebellion ruthlessly. During the ensuing 63 days some 250,000 Poles died before the Home Army surrendered to the Germans. After the Germans forced all the surviving population to leave the city, Hitler ordered that any buildings left standing be dynamited, and 98% of buildings in Warsaw were destroyed. In Britain, public opinion strongly supported the war, and the level of sacrifice was high. The war was a "people's war" that enlarged democratic aspirations and produced promises of a postwar welfare state. In mid-1940 the R.A.F. was called on to fight the Battle of Britain, but it had suffered serious losses. It lost 458 aircraft—more than current production—in France and was hard pressed. To speed output, the government decided to concentrate on only five models: Wellingtons, Whitley Vs, Blenheims, Hurricanes and Spitfires. They received extraordinary priority, which covered the supply of materials and equipment and even made it possible to divert from other types the necessary parts, equipment, materials and manufacturing resources. Labour was moved from other aircraft work to factories engaged on the specified types. Cost was not an object. The delivery of new fighters rose from 256 in April to 467 in September — more than enough to cover the losses — and Fighter Command emerged triumphantly from the Battle of Britain in October with more aircraft than it had possessed at the beginning. Most women who volunteered before the war went into civil defense or the Women's Land Army. The main civil defense services were Air Raid Precautions (ARP), the fire service and the Women's Voluntary Services (WVS). Some 144,000 served with the emergency casualty services. Initially, the women mainly carried out clerical work, but their roles expanded to meet demand, and female pump crews became commonplace. By September 1943 over 450,000 women were in service (9.4%). Several First World War services were revived in 1938-39: the Army's Auxiliary Territorial Service (ATS), the Women's Royal Naval Service (Wrens), and the Women's Auxiliary Air Force (Waafs). Commissions were for the first time given to women, and women were brought under regular military disciplinary law. The ATS was the largest. Its 200,000 women in 1943 were in eighty different military specialties ("trades"). The skilled trades included 3,000 clerical personnel, 9,000 technical, 3,000 communications, and 4,000 cooks; the nonskilled trades included 30,000 hospital orderlies and 15,000 drivers. 
Some 57,000 ATS women served in air defense and antiaircraft units based well behind the lines (so they could not be captured). They could load and aim the guns, but a man had to pull the final trigger. Conscription of women was introduced in 1941, applying to women who turned 21 that year. They had to join the armed forces or the land army or be assigned other war work. The services greatly expanded their nursing corps; the RAFNS, the Royal Air Force's nursing service, had 21,300 nurses. The WVS was the largest of these organizations, with over one million members. Typical WVS activities included organizing evacuations, shelters, clothing exchanges and mobile canteens. The Women's Land Army/Scottish Land Army was reformed in 1938 so that women could be trained in agricultural work, leaving male workers free to go to war. Most WLA members were young women from the towns and cities. Annice Gibbs, who worked for the WLA Timber Corps, remembers an encounter with Italian prisoners of war (POWs). "After our training, we soon got used to heavy work, such as lifting pit-props and cutting them into various lengths for the coal mines." With the onset of war, everything changed. If husbands joined the armed forces, or were sent away to do vital civilian work, mothers often ran the home alone - and had to get used to going out to work, as well. Young single women, often away from home for the first time, might be billeted miles from their families. Flexible working hours, nurseries and other arrangements soon became commonplace to accommodate the needs of working women with children. Before long, women made up one third of the total workforce in the metal and chemical industries, as well as in ship-building and vehicle manufacture. They worked on the railways, canals and on buses. Women built Waterloo Bridge in London. Food, clothing, petrol, leather and other such items were rationed. Access to luxuries was severely restricted, though there was also a small black market trading illegally in controlled items. Families with a bit of land grew victory gardens (small home vegetable gardens) to supply themselves with food. Farmers converted to high-value food products, especially grains, and reduced the output of meat. From very early in the war it was thought that the major cities of Britain, especially London, would come under air attack, which did happen. Some children were sent to Canada. Millions of children and some mothers were evacuated from London and other major cities when the war began, but they often filtered back. When the bombing began in September 1940 they evacuated again. The discovery of the poor health and hygiene of evacuees was a shock to Britons, and helped prepare the way for the Beveridge Plan. Children were evacuated only if their parents agreed, but in some cases they had no choice. The children were allowed to take only a few things with them, including a gas mask, books, money, clothes, a ration book and some small toys. Belfast during the war Belfast was a key industrial city during World War Two. Britain relied on the city to produce ships, tanks, Short Brothers aircraft, engineering works, arms, uniforms, parachutes and a host of other industrial goods to help the war effort. As a result, unemployment was dramatically reduced in Belfast, as there was more demand for industrial goods. However, its industrial importance also made Belfast a target for German bombing missions. Belfast was poorly defended during World War Two. 
There were only 24 anti-aircraft guns in the city, for example. The Northern Ireland government under Richard Dawson Bates (Minister for Home Affairs) had prepared poorly. They believed that Germany would not attack Belfast as it was too far away, and German bombers would have to fly over Britain to reach it. When Germany invaded France on 10 May 1940, this changed dramatically, as German bombers no longer had to fly over British soil to reach Belfast. The fire brigade was inadequate, there were no public air raid shelters because the Northern Ireland government was reluctant to spend money on them, and there were no searchlights in the city, which made shooting down enemy bombers all the more difficult. After seeing the Blitz in Britain the Northern Ireland government started building some air raid shelters. The Luftwaffe in early 1941 carried out some reconnaissance missions and photographed the city. During April 1941 Belfast was attacked. The docks and industrial areas were targeted, and many bombs were dropped on the working-class areas of East Belfast, where over a thousand people were killed and hundreds were seriously injured. The Northern Ireland government requested help from the south, which dispatched several fire brigades. Many Belfast people left the city, afraid of future attacks. The bombings revealed the terrible slum conditions to the middle- and upper-class people who entered the working-class areas to help the injured; such people would not normally have frequented these parts of Belfast. Having seen the conditions in which the poor of Belfast lived, they helped hasten the advent of the welfare state following the war. In May 1941, the Germans dropped bombs and incendiary devices on the docks and the Harland and Wolff shipyard, which as a result closed for six months. Those not involved in the rebuilding of the docks were put out of work during this time, which increased the troubles of Belfast's poor even further. Apart from the numbers dead, the Belfast blitz saw half of the city's houses destroyed, and approximately twenty million pounds' worth of damage was caused. The Northern Ireland government was criticized heavily for its lack of preparation. The criticism forced the resignation of Prime Minister J.M. Andrews. The bombing raids continued until the invasion of Russia. The American army also came during the war and set up bases around Northern Ireland, which boosted local economies and brought excitement to those at home. While the war brought great employment and economic prosperity to Belfast, it also brought great human suffering, destruction and death. see also Holocaust After rapid German advances in the early months of the war, which reached the approaches to Moscow, the bulk of Soviet industry and agriculture was either destroyed or in German hands. But in one of the greatest logistics feats of the war, thousands of factories were moved beyond the Ural Mountains along with well over a million workers. In general the tools, dies and machines were moved, along with the blueprints and skilled engineers. The whole of the remaining Soviet territory became dedicated to the war effort. Conditions were severe. In Leningrad, under German siege, over a million died of starvation and disease. Many factory workers were teenagers, women and old people. Despite harsh conditions, the war led to a spike in Soviet nationalism and unity. 
Soviet propaganda toned down the socialist and anti-religious rhetoric of the past as the people rallied around the belief that they were protecting their motherland against the hated German invaders. Ethnic minorities thought to be collaborators were forcibly removed into exile. Religion, which was previously shunned, became an acceptable part of society. see also Holocaust The German invasion of the Soviet Union in 1941 was welcomed by many Ukrainians at first; the OUN even attempted to establish a government under German auspices. Nazi ideologue Alfred Rosenberg (1893-1946) considered Ukraine a strategically important region that should be occupied through capturing the hearts and minds of the Ukrainians. According to Rosenberg, everything should have been done to make the Ukrainians view the Germans as liberators. Though Rosenberg pressed his views on several occasions, Adolf Hitler's anti-Slavic racial views prevailed and overrode strategic considerations, leading to a harsh occupation. Very soon the realization that Nazi policies were brutal toward all the Ukrainians, and not only the Jews and Communists, drove most Ukrainians into opposition to the Nazis. Germany forced many Ukrainians to work within the so-called Reichskommissariat Ukraine (RKU) on tasks such as agriculture, road and railway building, and the construction of fortifications. The German authorities soon faced a serious local labor shortage, especially among skilled workers, as a result of Soviet evacuations before the invasion, the ongoing murder of the Jewish population, and the brutal recruitment, arrest, and deportation of other groups, usually with the cooperation of the local civilian, military, and police authorities. The pool of labor was further reduced as the Germans lost territory in the later stages of the conflict. Nazi administrator Fritz Sauckel's labor recruitment measures strained relations with local officials responsible for selecting the deportees, leading to bribery and corruption. The Kiev area was the main focus for recruitment and deportation, while conditions in the Vinnitsa region of central Ukraine typified the interaction of the various factors. In Ukraine, Belarus, and western Russia the first stage of partisan development, from 1941 to the fall of 1942, was uncoordinated and resulted in a great many losses. The second stage, late 1942 to 1944, was better coordinated; partisan groups were better defined, and relatively large-scale operations were carried out, often in cooperation with the Red Army. Organized leadership and cadres were created, various forms of actions (diversions, sabotage, direct attacks, and so on) were developed, and the Germans carried out punitive activities against the partisans. In all, more than 1.3 million partisans took part in actions in the enemy's rear in 6,200 units, and more than 300,000 received decorations for their actions. The OUN created a nationalist partisan fighting force, the Ukrainian Insurgent Army (UPA); many Ukrainians also joined the Soviet partisans and fought in the Soviet Army against the Germans. After World War II, the OUN and the UPA continued a hopeless guerrilla struggle against Soviet rule until 1953. The devastation caused by the war included major destruction in over 700 cities and towns and 28,000 villages. China suffered the second-highest number of casualties of the entire war. Civilians in the occupied territories had to endure many large-scale slaughters. 
Tens of thousands died when Nationalist troops broke the levees of the Yellow River to stop the Japanese advance after the loss of the capital, Nanking. Millions more Chinese died because of famine during the war. Millions of Chinese moved to the western regions of China to avoid the Japanese invasion. Cities like Kunming ballooned with new arrivals. Entire factories and universities were often taken along for the journey. Japan captured major coastal cities like Shanghai early in the war, cutting the rest of China off from its chief source of finance and industry. Though China received massive military and economic aid from the United States, much of it flown "over the Hump" (over the Himalayan mountains from India), China did not have sufficient infrastructure to use the aid to properly arm or even feed its military forces. Much of the aid was also lost to corruption and extreme inefficiency. Communist forces led by Mao were generally more successful at getting support or killing opponents than the Nationalists were. They were based mainly in northern China, and built up their strength to battle with the Nationalists as soon as the Japanese were gone. In occupied territories under Japanese control, civilians were treated harshly. see also World War II, Holocaust Germany had not fully mobilized in 1939, nor even in 1941. Not until 1943, under Albert Speer, did Germany finally redirect its entire economy and manpower to war production. Although Germany had about twice the population of Britain (80 million versus 40 million), it had to use far more labour to provide food and energy. Britain imported food and employed only a million people (5% of the labour force) on farms, while Germany used 11 million (27%). For Germany to build its twelve synthetic oil plants with a capacity of 3.3 million tons a year required 2.4 million tons of structural steel and 7.5 million man-days of labour; Britain brought in all its oil from Iraq, Persia and North America. To overcome this problem Germany employed millions of forced laborers and POWs; by 1944 they had brought in more than five million civilian workers and nearly two million prisoners of war—a total of 7.13 million foreign workers. The workers were unwilling and inefficient, and many died in air raids. For the first part of the war, there were surprisingly few restrictions on civilian activities. Most goods were freely available in the early years of the war. Rationing in Germany was introduced in 1939, slightly earlier than in Britain, but at first it was not strict: Hitler was convinced that public support for the war would suffer if a severe rationing program were imposed. Nazi popularity was in fact partly due to the fact that Germany under the Nazis was relatively prosperous, and Hitler did not want to lose popularity or public confidence. Hitler felt that food and other shortages had been a major factor in destroying civilian morale during World War I, which led to the overthrow of the Kaiser in 1918. However, when the war began to go against the Germans in Russia and the Allied bombing effort began to affect domestic production, this changed and a very severe rationing program had to be introduced. The system gave extra rations to men involved in heavy industry and lower rations to Jews and Poles in the areas occupied by Germany, though not to the Rhineland Poles. The points system. Walter Felscher recalls: For every person, there were rationing cards for general foodstuffs, meats, fats (such as butter, margarine and oil) and tobacco products distributed every other month. 
The cards were printed on strong paper, containing numerous small "Marken" subdivisions printed with their value – for example, from "5 g Butter" to "100 g Butter". Every acquisition of rationed goods required an appropriate "Marken", and if a person wished to eat a certain soup at a restaurant, the waiter would take out a pair of scissors and cut off the Marken required for the items and amounts listed on the menu. In the evenings, shop-owners would spend an hour at least gluing the collected "Marken" onto large sheets of paper which they then had to hand in to the appropriate authorities. Later in the war, the amounts of rationed bread, meat and fat were also cut. Women were idealized by Nazi ideology, and work was not felt to be appropriate for them. Children were expected to go from house to house collecting materials for the production of war equipment. The Germans brought in millions of coerced workers from the countries they occupied, along with prisoners of war, under the program called Arbeitseinsatz ("labor deployment"). The American aerial bombing of a total of 65 Japanese cities took from 400,000 to 600,000 civilian lives. This total comprises over 100,000 in Tokyo alone, over 200,000 in Hiroshima and Nagasaki combined, and 80,000-150,000 civilian deaths in the battle of Okinawa. In addition, civilian deaths among settlers attempting to return to Japan from Manchuria in the winter of 1945 probably numbered around 100,000. Total Japanese military fatalities between 1937 and 1945 were 2.1 million; most came in the last year of the war and were caused by starvation or severe malnutrition in garrisons cut off from supplies. - WWII Homefront - Collection of color photographs of the homefront during World War II - 10 Eventful Years: 1937-1946 4 vol. Encyclopedia Britannica, 1947. Highly detailed encyclopedia of events in every country. - Beck, Earl R. The European Home Fronts, 1939-1945 Harlan Davidson, 1993, brief - Brandt, Karl. The reconstruction of world agriculture (1945) online edition - Costello, John. Love, Sex, and War: Changing Values, 1939-1945 1985. US title: Virtue under Fire: How World War II Changed Our Social and Sexual Attitudes - I.C.B. Dear and M.R.D. Foot, eds. The Oxford Companion to World War II (1995), detailed articles on every country - Harrison, Mark. "Resource Mobilization for World War II: The U.S.A., UK, USSR and Germany, 1938-1945". Economic History Review (1988): 171-92. - Higonnet, Margaret R., et al., eds. Behind the Lines: Gender and the Two World Wars (1987). excerpt and text search - Loyd, E. Lee, ed.; World War II in Europe, Africa, and the Americas, with General Sources: A Handbook of Literature and Research (1997). 525pp bibliographic guide - Loyd, E. Lee, ed.; World War II in Asia and the Pacific and the War's aftermath, with General Themes: A Handbook of Literature and Research (1998) - Marwick, Arthur. War and Social Change in the Twentieth Century: A Comparative Study of Britain, France, Germany, Russia, and the United States 1974. - Milward, Alan. War, Economy and Society 1977 covers homefront of major participants - Noakes, Jeremy ed., The Civilian in War: The Home Front in Europe, Japan and the U.S.A. in World War II (1992). - Schultz, Theodore, ed. Food for the world (1945) online edition - Wright, Gordon. The Ordeal of Total War (1968), covers all of Europe excerpt and text search Australia and New Zealand - S.J. Butlin and C.B. Schedvin, War Economy 1942–1945, Australian War Memorial, Canberra, 1997 - Darian-Smith, Kate. On the Home Front: Melbourne in Wartime, 1939-1945. (1990). - Saunders, Kay. 
War on the Homefront: State Intervention in Queensland, 1938-1948 (1993) - The Home Front Volume I by Nancy M. Taylor NZ official history (1986) - The Home Front Volume II by Nancy M. Taylor NZ official history (1986) - Political and External Affairs by Frederick Lloyd Whitfeld (1958) NZ official history - Brivati, Brian, and Harriet Jones, ed. What Difference Did the War Make? The Impact of the Second World War on British Institutions and Culture. Leicester UP; 1993. - Calder, Angus . The People's War: Britain 1939-45 (1969) - Corelli, Barnett. The Audit of War: The Illusion and Reality of Britain as a Great Nation. 1986. - Hancock, W. K. and Gowing, M.M. British War Economy (1949) official history - Hancock, W. K. Statistical Digest of the War, (1951) official history - Marwick, Arthur. The Home Front: The British and the Second World War. 1976. - Postan, Michael. British War Production, 1952. official history - Rose, Sonya O. Which People's War?: National Identity and Citizenship in Wartime Britain 1939-1945 (2003) - Titmuss, Richard M. Problems of Social Policy(1950) official history - Granatstein, J. L. Canada's War: The Politics of the Mackenzie King Government. (1975). - Granatstein, J. L., and Desmond Morton. A Nation Forged in Fire: Canadians and the Second World War, 1939-1945 1989. - Keshen, Jeffrey A. Saints, Sinners, and Soldiers: Canada's Second World War (2004) - Pierson, Ruth Roach. They're Still Women After All: The Second World War and Canadian Womanhood. 1986. - Barrett, David, and Larry Shyu. Chinese Collaboration with Japan, 1932-1945: The Limits of Accommodation (2001) excerpt and text search - Coble, Parks M. Chinese Capitalists in Japan's New Order: The Occupied Lower Yangzi, 1937-1945 (2003) excerpt and text search - Eastman Lloyd. Seeds of Destruction: Nationalist China in War and Revolution, 1937- 1945. Stanford University Press, 1984 - Fairbank, John, and Albert Feuerwerker, eds., Republican China 1912-1949 in The Cambridge History of China, vol. 13, part 2. Cambridge University Press, 1986. - Henriot, Christian, and Wen-hsin Yeh. In the Shadow of the Rising Sun: Shanghai under Japanese Occupation (2004) excerpt and text search - Hsiung, James C. and Steven I. Levine, eds. China's Bitter Victory: The War with Japan, 1937-1945 (1992) online from Questia; also excerpt and text search - Hsi-sheng, Ch'i. Nationalist China at War: Military Defeats and Political Collapse, 1937–1945 University of Michigan Press, 1982 - Hung, Chang-tai. War and Popular Culture: Resistance in Modern China, 1937-1945 (1994) excerpt and text search - Gildea, Robert. Marianne in Chains: Daily Life in the Heart of France During the German Occupation (2004) excerpt and text search - Jackson, Julian. France: The Dark Years, 1940-1944 (2003) excerpt and text search - Paxton, Robert O. Vichy France 2nd ed. (2001) excerpt and text search - Beck, Earl R. Under the Bombs: The German Home Front, 1942-1945 (1999) excerpt and text search - Burleigh, Michael. The Third Reich: A New History (2000) excerpt and text search - Hagemann, Karen and Stefanie Schüler-Springorum; Home/Front: The Military, War, and Gender in Twentieth-Century Germany Berg, 2002 - Hancock, Eleanor. "Employment in Wartime: the Experience of German Women During the Second World War." War & Society 1994 12(2): 43-68. Issn: 0729-2473 - Kaldor N. "The German War Economy". Review of Economic Studies 13 (1946): 33-52. in JSTOR - Klemperer, Victor. 
I Will Bear Witness 1942-1945: A Diary of the Nazi Years (2001), memoir by partly-Jewish professor - Koontz, Claudia. Mothers in the Fatherland: Women, the Family and Nazi Politics, 1987 - Milward, Alan. The German Economy at War 1965. - Overy, Richard. War and Economy in the Third Reich (1994). - Passmore, Kevin. Women, Gender and Fascism in Europe, 1919-45 (2003) excerpt and text search - Rempel, Gerhard. Hitler's Children: The Hitler Youth and the SS, (1989) online edition - Speer, Albert. Inside the Third Reich: Memoirs (1970), highly influential memoir. excerpt and text search - Stibbe, Matthew. Women in the Third Reich, 2003, 208 pages - Eby, Cecil D. Hungary at War: Civilians and Soldiers in World War II (2007) - Absalom, R, "Italy", in J. Noakes (ed.), The Civilian in War: The Home Front in Europe, Japan and the U.S.A. in World War II. Exeter: Exęter University Press. 1992. - Koon, Tracy. Believe, Obey, Fight: Political Socialization in Fascist Italy 1922-1943 (U North Carolina Press, 1985), - Morgan, D. Italian Fascism, 1919-1945 (1995) - Wilhelm, Maria de Blasio. The Other Italy: Italian Resistance in World War II. (1988). 272 pp. - Cohen, Jerome. Japan's Economy in War and Reconstruction. University of Minnesota Press, 1949. online version - Cook, Haruko Taya, and Theodore Cook. Japan at War: An Oral History 1992. - Dower, John. Japan in War and Peace 1993. - Duus, Peter, Ramon H. Myers, and Mark R. Peattie. The Japanese Wartime Empire, 1931-1945. (1996). 375p. excerpt and text search - Duus, Peter, ed. The Cambridge History of Japan: vol 6: The Twentieth Century (1989), 836pp excerpt and text search - Havens, Thomas R. Valley of Darkness: The Japanese People and World War II. 1978. - Havens, Thomas R. "Women and War in Japan, 1937-1945." American Historical Review 80 (1975): 913-934. online in JSTOR - Zhou, Wanyao. The Japanese wartime empire, 1931-1945 (1996) online at ACLS e-books - Agoncillo Teodoro A. The Fateful Years: Japan's Adventure in the Philippines, 1941-1945. Quezon City, PI: R.P. Garcia Publishing Co., 1965. 2 vols - Hartendorp A. V.H. The Japanese Occupation of the Philippines. Manila: Bookmark, 1967. 2 vols. - Lear, Elmer. The Japanese Occupation of the Philippines: Leyte, 1941-1945. Southeast Asia Program, Department of Far Eastern Studies, Cornell University, 1961. 246p. emphasis on social history - Steinberg, David J. Philippine Collaboration in World War II. University of Michigan Press, 1967. 235p. Poland and Ukraine - Berkhoff, Karel C. Harvest of Despair: Life and Death in Ukraine Under Nazi Rule. Harvard U. Press, 2004. 448 pp. - Dallin, Alexander. Odessa, 1941-1944: A Case Study of Soviet Territory under Foreign Rule. Portland: Int. Specialized Book Service, 1998. 296 pp. - Davies, Norman. Rising '44: The Battle for Warsaw (2004) - Gross, Jan T. Polish Society under German Occupation: The Generalgouvernement, 1939-1944. Princeton UP, 1979. - Gross, Jan T. Revolution from Abroad: The Soviet Conquest of Poland's Western Ukraine and Western Belorussia (1988). - Gutman, Israel. Resistance: The Warsaw Ghetto Uprising (1998) - Redlich, Shimon. Together and Apart in Brzezany: Poles, Jews, and Ukrainians, 1919-1945. Indiana U. Press, 2002. 202 pp. - Rutherford, Phillip T. Prelude to the Final Solution: The Nazi Program for Deporting Ethnic Poles, 1939-1941, (University Press of Kansas; 2007) 328pp. - Vallin, Jacques; Meslé, France; Adamets, Serguei; and Pyrozhkov, Serhii. "A New Estimate of Ukrainian Population Losses During the Crises of the 1930s and 1940s." 
Population Studies (2002) 56(3): 249-264. ISSN: 0032-4728. Fulltext in JSTOR. Reports that life expectancy at birth fell to a level as low as ten years for females and seven for males in 1933 and plateaued around 25 for females and 15 for males in the period 1941-44. - Barber, Bo, and Mark Harrison. The Soviet Home Front: A Social and Economic History of the USSR in World War II, Longman, 1991. - Braithwaite, Rodric. Moscow 1941: A City and Its People at War (2006) - Thurston, Robert W., and Bernd Bonwetsch, eds. The People's War: Responses to World War II in the Soviet Union (2000) - Andenaes, Johs, et al. Norway and the Second World War (ISBN 82-518-1777-3) Oslo: Johan Grundt Tanum Forlag, 1966. - Milward, Alan S. The Fascist Economy in Norway (1972) - Nissen, Henrik S. Scandinavia During the Second World War (1983) (ISBN 0-8166-1110-6) - Salmon, Patrick, ed. Britain and Norway in the Second World War London: HMSO, 1995. - ↑ Vladimir Petrov, Money and Conquest: Allied Occupation Currencies in World War II, (1967) pp. 173-175. - ↑ Gutman (1998) - ↑ Davies (2004) - ↑ Postan ch 4 - ↑ Postan, 148 - ↑ Harris, Carol. Women Under Fire in World War Two: Changing roles BBC - ↑ Titmuss (1950) - ↑ Chóngqìng. - ↑ Hancock and Gowing p. 102 - ↑ Walter Felscher (1997-01-27). Recycling and rationing in wartime Germany. Memories of the 1940's mailing list archive. Retrieved on 2006-09-28. - ↑ Cohen, Japan's Economy in War and Reconstruction (1949) pp. 368-9 - ↑ John Dower, "Lessons from Iwo Jima," Perspectives (Sept 2007) 45#6 pp. 54-56
Alabama seceded and joined the Confederate States of America from 1861 to 1865. The slaves were freed in 1865. All of the population suffered economic losses and hardships as a result of the American Civil War, the ensuing agricultural depression, and the financial Panic of 1873. After a period of Reconstruction, Alabama emerged as a poor, largely rural state, still tied to cotton. Whites used legal means, violence and harassment to re-establish political and social dominance over the recently emancipated African Americans. In 1901 the Democrats passed a constitution that effectively disfranchised most African Americans and many poor whites, who in 1900 comprised more than 45 percent of the state's population. By 1941 600,000 poor whites and 520,000 African Americans had been disfranchised. In addition, despite massive population changes in the state, the rural-dominated legislature refused to redistrict from 1901 to the 1960s. They thus ensured that a rural minority dominated for decades a state with increasing urban, industrial and contemporary interests. To escape the inequities of disenfranchisement, segregation and violence, and underfunded schools, tens of thousands of African Americans joined the Great Migration from 1910-1940 and moved to better opportunities in northern and midwestern industrial cities. So many left that the state's rate of population growth dropped nearly by half from 1910 to 1920, according to census figures. Politically, the state continued as one-party Democratic for years, and produced a number of national leaders. World War II brought prosperity. Cotton faded in importance as the state developed a manufacturing and service base. After 1980, the state became a Republican stronghold in presidential elections, and leaned Republican in statewide elections, while the Democratic Party still dominated local and legislative offices. It is possible that a member of Pánfilo de Narváez's expedition of 1528 entered what is now southern Alabama, but the first fully documented visit was that of Hernando de Soto, who made an arduous but fruitless journey along the Coosa, Alabama and Tombigbee rivers in 1539. The English also claimed the region north of the Gulf of Mexico. The territory of modern Alabama was included in the Province of Carolina, granted by Charles II of England to certain of his favorites by the charters of 1663 and 1665. English traders from Carolina were frequenting the valley of the Alabama river as early as 1687. The French also colonized the region. In 1702 a French settlement was founded on the Mobile River, including Fort Louis, which for the next nine years was the seat of government of Louisiana. In 1711, Fort Louis was abandoned to the floods of the river, and on higher ground was built Fort Conde, in the present city of Mobile. This was the first permanent European settlement in Alabama. The French and the English contested the region, each attempting to forge strong alliances with Indian tribes. To strengthen their position, defend their Indian allies, and draw other tribes to them, the French established the military posts of Fort Toulouse, near the junction of the Coosa and Tallapoosa rivers, and Fort Tombecbe on the Tombigbee River. The grant of Georgia to Oglethorpe and his associates in 1732 included a portion of what is now northern Alabama. In 1739, Oglethorpe himself visited the Creek Indians west of the Chattahoochee River and made a treaty with them. 
The 1763 Treaty of Paris, which ended the French and Indian War, terminated the French occupation of Alabama. Great Britain came into undisputed control of the region between the Chattahoochee and the Mississippi Rivers. The portion of Alabama below the 31st parallel then became a part of West Florida, and the portion north of this line a part of the "Illinois Country", set apart, by royal proclamation, for the use of the Indians. In 1767, the province of West Florida was extended northward to 32 degrees 28 minutes north latitude. A few years later, during the American Revolutionary War, this region fell into the hands of Spain. By the Treaty of Versailles , September 3, 1783, Great Britain ceded West Florida to Spain; but by the Treaty of Paris (1783), signed the same day, Britain ceded to the United States all of this province north of 31 degrees, thus laying the foundation for a long controversy. By the Treaty of Madrid, in 1795, Spain ceded to the United States the lands east of the Mississippi between 31 degrees and 32 degrees 28 minutes. Three years later, in 1798, Congress organized this district as the Mississippi Territory. A strip of land 12 or 14 miles wide near the present northern boundary of Alabama and Mississippi was claimed by South Carolina, but in 1787 that state ceded this claim to the federal government. Georgia likewise claimed all the lands between the 31st and 35th parallels from its present western boundary to the Mississippi river, and did not surrender its claim until 1802. Two years later, the boundaries of Mississippi Territory were extended so as to include all of the Georgia cession. In 1812, Congress added the Mobile District of West Florida to the Mississippi Territory, claiming that it was included in the Louisiana Purchase. The following year, General James Wilkinson occupied the Mobile District with a military force. The Spanish did not resist. Thus the whole area of the present state of Alabama was then under the jurisdiction of the United States, although Indians still owned most of the land by treaty and occupation. In 1817, the Mississippi Territory was divided; the western portion became the state of Mississippi, and the eastern portion became the Alabama Territory, with St. Stephens, on the Tombigbee River, as the temporary seat of government. Conflict between the Indians of Alabama and American settlers increased rapidly in the early 19th century. The great Shawnee chief Tecumseh visited the region in 1811, seeking to forge an Indian alliance of resistance from the Gulf of Mexico to the Great Lakes. With the outbreak of the War of 1812, Britain encouraged Tecumseh's resistance movement. Several tribes were divided in opinion. The Creek tribe fell to civil war. Violence between Creeks and Americans escalated, culminating in the Fort Mims massacre. Full-scale war between the United States and the "Red Stick" Creeks began, known as the Creek War. The Chickasaw, Choctaw, Cherokee, and other Creek factions remained neutral or allied to the United States, some serving with American troops. Volunteer militias from Georgia, South Carolina, and Tennessee marched into Alabama, fighting the Red Sticks. Later, federal troops became the main fighting force for the United States. General Andrew Jackson was the commander of the American forces during the Creek War and later against the British. His leadership and military success during the wars made him a national hero. The treaty of Fort Jackson (August 9, 1814), ended the Creek War. 
By the terms of the treaty the Creeks, Red Sticks and neutrals alike, ceded about one-half of the present state of Alabama. Later cessions by the Cherokee, Chickasaw, and Choctaw in 1816 left only about a quarter of Alabama to the Indians. One of the first problems of the new commonwealth was that of finance. Since the amount of money in circulation was not sufficient to meet the demands of the increasing population, a system of state banks was instituted. State bonds were issued and public lands were sold to secure capital, and the notes of the banks, loaned on security, became a medium of exchange. Prospects of an income from the banks led the legislature of 1836 to abolish all taxation for state purposes. This was hardly done, however, before the Panic of 1837 wiped out a large portion of the banks' assets. Next came revelations of grossly careless and even of corrupt management, and in 1843 the banks were placed in liquidation. After disposing of all their available assets, the state assumed the remaining liabilities, for which it had pledged its faith and credit. In 1830 the Indian Removal Act set in motion the process that resulted in the Indian removal of southeastern tribes, including the Creek, Cherokee, Choctaw, Chickasaw, and Seminole. In 1832, the national government provided for the removal of the Creeks via the Treaty of Cusseta. Before the actual removal occurred between 1834 and 1837, the state legislature formed the Indian lands into counties, and settlers flocked in. Until 1832, there was only one party in the state, the Democratic, but the question of nullification caused a division that year into the (Jackson) Democratic party and the State's Rights (Calhoun Democratic) party; about the same time an opposition party emerged, the Whig party. It drew support from plantation owners and townsmen, while the Democrats were strongest among poor farmers and Catholics in the Mobile area. For some time, the Whigs were almost as numerous as the Democrats, but they never secured control of the state government. The State's Rights faction were in a minority; nevertheless under their active and persistent leader, William L. Yancey (1814-1863), they prevailed upon the Democrats in 1848 to adopt their most radical views. During the agitation over the Wilmot Proviso which would bar slavery from territory acquired from Mexico, Yancey induced the Democratic State Convention of 1848 to adopt what is known as the "Alabama Platform." It declared that neither Congress nor the government of a territory had the right to interfere with slavery in a territory, that those who held opposite views were not Democrats, and that the Democrats of Alabama would not support a candidate for the presidency if he did not agree with them on these questions. This platform was endorsed by conventions in Florida and Virginia and by the legislatures of Georgia and Alabama. Old party lines were broken by the Compromise of 1850. The State's Rights faction, joined by many Democrats, founded the Southern Rights Party, which demanded the repeal of the Compromise, advocated resistance to future encroachments and prepared for secession, while the Whigs, joined by the remaining Democrats, formed the party known as the "Unionists," which unwillingly accepted the Compromise and denied the "constitutional" right of secession. The state's prosperity grew with the development of large cotton plantations in the Black Belt, whose owners' wealth depended on the labor of enslaved African Americans. 
In other parts of the state, the soil supported only subsistence farming. Most of the yeoman farmers owned few or no slaves. By 1860 the success of cotton production led to planters' holding 435,000 enslaved African Americans, 45% of the state's population. These early Alabama settlers were noted for their spirit of frontier democracy and egalitarianism, and their fierce defense of the republican values of civic virtue and opposition to corruption. J. Mills Thornton (1978) argues that the Whigs favored positive state action to benefit society as a whole, while the Democrats feared any increase of power in government or in such private institutions as state-chartered banks, railroads, and corporations. Fierce political battles raged in Alabama on issues ranging from banking to the removal of the Creek Indians. Thornton suggested there was one overarching issue in the state's politics: how to protect liberty and equality for white people. Fears that Northern agitators threatened their value system angered the voters and made them ready to secede when Abraham Lincoln was elected in 1860 (Thornton 1978). The "Unionists" were successful in the elections of 1851 and 1852. Passage of the Kansas-Nebraska Bill and uncertainty about agitation against slavery led the State Democratic convention of 1856 to revive the "Alabama Platform". When the Democratic National Convention at Charleston, South Carolina, failed to approve the "Alabama Platform" in 1860, the Alabama delegates, followed by those of the other cotton states, withdrew. Upon the election of Abraham Lincoln, Governor Andrew B. Moore, as previously instructed by the legislature, called a state convention. Many prominent men had opposed secession. In North Alabama, there was an attempt to organize a neutral state to be called Nickajack. With President Lincoln's call to arms, most opposition to secession ended. On January 11, 1861, the State of Alabama adopted an ordinance of secession from the Union (by a vote of 61-39). Until February 18, 1861, Alabama was informally called the Alabama Republic. It never changed its formal name, which has always been "State of Alabama." Governor Moore energetically supported the Confederate war effort. Even before hostilities began, he seized Federal facilities, sent agents to buy rifles in the Northeast, and scoured the state for weapons. Despite some resistance in the northern part of the state, Alabama joined the Confederate States of America. Congressman Williamson R. W. Cobb was a Unionist and pleaded for compromise. When he ran for the Confederate Congress in 1861, he was defeated. (In 1863, with war weariness growing in Alabama, he was elected on a wave of antiwar sentiment.) The new nation brushed Cobb aside, set up its temporary capital in Montgomery, and selected Jefferson Davis as president. In May, the Confederate government abandoned Montgomery before the sickly season began and relocated to Richmond, Virginia. Some idea of the severe internal logistics problems the Confederacy faced can be seen by tracing Davis's journey from Mississippi, the next state over. From his plantation on the river, he took a steamboat down the Mississippi to Vicksburg, boarded a train to Jackson, where he took another train north to Grand Junction, then a third train east to Chattanooga, Tennessee, and a fourth train to Atlanta, Georgia. Yet another train took Davis to the Alabama border, where a final train took him to Montgomery.
As the war proceeded, the Federals seized the Mississippi River, burned trestles and railroad bridges, and tore up track. The frail Confederate railroad system faltered and virtually collapsed for want of repairs and replacement parts. In the early part of the Civil War, Alabama was not the scene of military operations, yet the state contributed about 120,000 men to the Confederate service, practically all the white population capable of bearing arms. Most were recruited locally and served with men they knew, which built esprit and strengthened ties to home. Medical conditions were severe. About 15% of fatalities were from disease, more than the 10% from battle. Alabama had few well-equipped hospitals, but it had many women who volunteered to nurse the sick and wounded. Soldiers were poorly equipped, especially after 1863. Often they pillaged the dead for boots, belts, canteens, blankets, hats, shirts and pants. Uncounted thousands of slaves worked with Confederate troops; they took care of horses and equipment, cooked and did laundry, hauled supplies, and helped in field hospitals. Other slaves built defensive installations, especially those around Mobile. They graded roads, repaired railroads, drove supply wagons, and labored in iron mines, iron foundries and even in the munitions factories. The service of slaves was involuntary: their unpaid labor was impressed from their unpaid masters. About 10,000 slaves escaped and joined the Union army, along with 2,700 white men. Thirty-nine Alabamians attained flag rank, most notably Lieutenant General James Longstreet and Admiral Raphael Semmes. Josiah Gorgas who came to Alabama from Pennsylvania, was the chief of ordnance for the Confederacy. He located new munitions plants in Selma, which employed 10,000 workers until the Union raiders in 1865 burned the factories down. Selma Arsenal made most of the Confederacy's ammunition. The Selma Naval Ordnance Works made artillery, turning out a cannon every five days. The Confederate Naval Yard built ships and was noted for launching the CSS Tennessee in 1863 to defend Mobile Bay. Selma's Confederate Nitre Works procured niter, for gunpowder, from limestone caves. When supplies were low, it advertised for housewives to save the contents of their chamber pots--urine, a rich source of nitrogen. Alabama soldiers fought in hundreds of battles; the state's losses at Gettysburg were 1,750 dead plus even more captured or wounded; the famed "Alabama Brigade" took 781 casualties. In 1863, the Federal forces secured a foothold in northern Alabama in spite of the opposition of General Nathan B. Forrest. From 1861, the federal blockade shut Mobile, and, in 1864, the outer defenses of Mobile were taken by a Federal fleet; the city itself held out until April 1865. [Rogers, ch 12] In 1867, the congressional plan of Reconstruction was completed and Alabama was placed under military government. The freedmen were enrolled as voters and numerous white citizens were disenfranchised. The new Republican party, made up of freedmen, scalawags and carpetbaggers took control two years after the war ended. A constitutional convention, controlled by this element, met in November 1867, and framed a constitution which conferred universal manhood suffrage. Whites who had fought for the Confederacy were disenfranchised for a temporary period. The Reconstruction Acts of Congress required every new constitution to be ratified by a majority of the legal voters of the state. The whites of Alabama largely stayed away from the polls. 
After five days of voting, the constitution needed 13,550 to secure a majority. Congress then enacted that a majority of the votes cast should be sufficient. Thus the constitution went into effect, the state was readmitted to the Union in June 1868, and a new governor and legislature were elected. Many white citizens resisted postwar changes, believing the Reconstruction years were notable for legislative extravagance and corruption, and control by freedmen. However, whites had the most control, and it was a coalition that created the first system of public education, as well as charitable institutions to benefit all citizens. The state endorsed railway bonds at the rate of $12,000 and $16,000 a mile until the state debt had increased from eight million to seventeen million dollars, and similar corruption characterized local government. The native white people united, formed a Conservative party and elected a governor and a majority of the lower house of the legislature in 1870; but, as the new administration was largely a failure, in 1872, there was a reaction in favor of the Radicals, a local term applied to the Republican party. In 1874, however, the power of the Radicals was finally broken, as the Conservative Democrats elected all state officials. A commission appointed to examine the state debt found it to be $25,503,000; by compromise, it was reduced to $15,000,000. A new constitution was adopted in 1875, which omitted the guarantee of the previous constitution that no one should be denied suffrage on account of race, color or previous condition of servitude. Its provisions forbade the state to engage in internal improvements or to give its credit to any private enterprise, an anti-industrial stance that limited the state's progress for decades. The development of mining and manufacturing was accompanied by economic distress among the farming classes, which found expression in the Jeffersonian Democratic party, organized in 1892. The regular Democratic ticket was elected and the new party was then merged into the Populist party. In 1894, the Republicans united with the Populists, elected three congressional representatives, secured control of many of the counties, but failed to carry the state. They continued their opposition with less success in the next campaigns. Partisanship became intense, and Democratic charges of corruption of the black electorate were matched by Republican and Populist accusations of fraud and violence by Democrats. Despite opposition by Republicans and Populists, Democrats completed their dominance with a new constitution in 1901 that restricted suffrage and effectively disenfranchised African Americans. Its voter registration requirements also rapidly disfranchised tens of thousands of poor whites, an outcome the latter were not suspecting. From 1900 to 1903, the number of white registered voters fell by more than 40,000, from 232,821 to 191,492, despite a growth in population. By 1941 a total of more whites than blacks had been disfranchised: 600,000 whites to 520,000 blacks. This was due mostly to effects of the cumulative poll tax. The damage to the African-American community was more severe and pervasive, as nearly all its eligible citizens lost the ability to vote. In 1900 45% of Alabama's population were African American: 827,545 citizens. In 1900 fourteen Black Belt counties (which were primarily African American) had more than 79,000 voters on the rolls. By June 1,1903, the number of registered voters had dropped to 1,081. 
While Dallas and Lowndes counties were both 75% black, between them there were only 103 African-American voters registered. In 1900 Alabama had more than 181,000 African Americans eligible to vote. By 1903 only 2,980 had managed to "qualify" to register, although at least 74,000 black voters were literate. The shut out was longlasting. It meant the effects of segregation suffered by African Americans were severe. At the end of WWII, for instance, in the black Collegeville community of Birmingham, only eleven voters in a population of 8,000 African Americans were deemed "eligible" to register to vote. Birmingham was founded on June 1, 1871 by real estate promoters who sold lots near the planned crossing of the Alabama & Chattanooga and South & North railroads. The site of the railroad crossing was notable for the nearby deposits of iron ore, coal, and limestone-the three principal raw materials used in making steel. Its founders adopted the name of England's principal industrial city to advertise the new city as a center of iron and steel production. Despite outbreaks of cholera, the population of 'Pittsburgh of the South' grew from 38,000 to 132,000 from 1900 to 1910, attracting rural white and black migrants from all over the region. Birmingham experienced such rapid growth that it was nicknamed "The Magic City." By the 1920s, Birmingham was the 19th largest city in the U.S and held more than 30% of the population of the state. Heavy industry and mining were the basis of the economy. Chemical and structural constraints limited the quality of steel produced from Alabama’s iron and coal. These materials did, however, combine to make ideal foundry iron, and, because of low transportation and labor costs, Birmingham quickly became the largest and cheapest foundry iron producing area. By 1915 twenty-five percent of the nation’s foundry pig iron was produced in Birmingham. While African Americans suffered from segregation after disfranchisement, the state was diminished by its deliberate suppression of their talents. From 1910-1940, tens of thousands of talented African Americans migrated north from Alabama in the Great Migration to seek jobs, education for their children and better conditions in northern cities, such as Chicago, Detroit, Cleveland, Philadelphia and New York. There they built their own businesses, churches and community organizations, music and arts, and began to create a middle class. The rate of population growth in Alabama dropped from 20.8% in 1900 and 16.9% in 1910, to 9.8% in 1920, reflecting the impact of the outmigration. Disfranchisement was ended only in the mid-1960s by African Americans' leading the Civil Rights Movement and gaining Federal legislation to protect their voting and civil rights. A rapid pace of change across the country, especially in growing cities, combined with new waves of immigration and migration of rural whites and blacks to cities, all contributed to a volatile social environment and the rise of a second Ku Klux Klan (KKK) in the South and Midwest. In many areas it represented itself as another fraternal group to give aid to a community. Feldman (1999) has shown that the Second KKK was not a mere hate group; it showed a genuine desire for political and social reform. Alabama Klansmen were among the foremost advocates of better public schools, effective prohibition enforcement, expanded road construction, and other "progressive" measures to benefit poor whites. By 1925, the Klan was a powerful political force in the state, as figures like J. 
Thomas Heflin, David Bibb Graves, and Hugo Black manipulated the KKK membership against the power of the "Big Mule" industrialists and especially the Black Belt planters who had long dominated the state. In 1926, Bibb Graves, a former chapter head, won the governor's office with KKK members' support. He led one of the most progressive administrations in the state's history, pushing for increased education funding, better public health, new highway construction, and pro-labor legislation. At the same time, KKK vigilantes, thinking they enjoyed governmental protection, launched a wave of physical terror across Alabama in 1927, targeting both blacks and whites. The conservative elite counterattacked. The major newspapers kept up a steady, loud attack on the Klan as violent and un-American. Sheriffs cracked down on Klan violence. The counterattack worked. The state voted for Al Smith in 1928, and the Klan's official membership plunged to under six thousand by 1930. The rural white minority's hold on the legislature continued, suppressing attempts by more progressive elements to modernize the state. A study in 1960 concluded that because of rural domination, "A minority of about 25 per cent of the total state population is in majority control of the Alabama legislature." Legislators and others mounted challenges in the 1960s, but it took years and Federal court intervention to achieve redistricting that came close to establishing "one man, one vote" representation. In 1960, on the eve of important civil rights battles, 30% of Alabama's population was African American. More than 980,000 citizens lived without justice in a segregated state. As Birmingham was the center of industry and population in Alabama, civil rights leaders chose to mount a campaign there for desegregation in 1963. Schools, restaurants and department stores were segregated; no African Americans were hired to work in the stores where they shopped or in the city government supported in part by their taxes. There were no African American members of the police force. Despite segregation, African Americans had been advancing economically. In response, independent groups affiliated with the KKK bombed transitional residential neighborhoods to discourage blacks from moving into them. To help with the campaign and secure national attention, the Rev. Fred Shuttlesworth invited members of the Southern Christian Leadership Conference (SCLC) to Birmingham to help change its leadership's policies. Non-violent action had produced good results in some other cities. The Rev. Martin Luther King, Jr. and other leaders came to Birmingham to help. In the spring and summer of 1963, national attention became riveted on Birmingham. The media covered the series of peaceful marches that the Birmingham police, headed by Police Commissioner Bull Connor, attempted to divert and control. King intended to fill the jails with nonviolent protesters to make a moral argument to the United States. Dramatic images of Birmingham police using dogs and powerful streams of water against child protesters filled newspapers and television coverage, arousing national outrage. Finally, Birmingham leaders, King, and Shuttlesworth came to an agreement to end the marches and end segregation, but some of the progress was slow. The Kennedy Administration prepared civil rights legislation that finally became law when President Lyndon Johnson helped secure its passage and signed the Civil Rights Act in 1964.
The following year, passage of the Voting Rights Act helped secure suffrage for all citizens. Court challenges related to "one man, one vote" and the Voting Rights Act of 1965 finally provided the groundwork for Federal court action that created a statewide redistricting plan in 1972. Together with renewed voting rights, hundreds of thousands of Alabama citizens were able to participate in the political system for the first time.
Hearing loss is any degree of impairment of the ability to apprehend sound. Sound can be measured accurately. The decibel (dB) is a measure of loudness: a unit for expressing the relative intensity of sound on a scale from zero, for a nearly imperceptible sound, to 130, the level at which sound causes pain in the average person. A drop of more than 10 dB in the level of sound a person can hear is significant. Sound travels as waves through a medium like air or water. These waves are collected by the external ear and cause the tympanic membrane (eardrum) to vibrate. The chain of ossicles (tiny bones) connected to the eardrum—the malleus, incus, and stapes—carries the vibration to the oval window (an opening to the inner ear), increasing its amplitude 20 times on the way. There, the energy causes a standing wave in the watery liquid (endolymph) inside the organ of Corti. (A standing wave is one that does not move.) The frequency of the sound determines the configuration of the standing wave. Many thousands of tiny nerve fibers detect the highs and lows of the standing wave and transmit their findings to the brain, which interprets the signals as sound. To summarize, sound energy passes through the air of the external ear, the bones of the middle ear, and the liquid of the inner ear. It is then translated into nerve impulses, sent to the brain through nerves, and understood there as sound. It follows that there are five steps in the hearing process:
- air conduction through the external ear to the eardrum
- bone conduction through the middle ear to the inner ear
- water conduction to the organ of Corti
- nerve conduction into the brain
- interpretation by the brain
Hearing can be interrupted in a variety of ways at each of the five steps. The external ear canal can be blocked with ear wax, foreign objects, infection, and tumors. Overgrowth of the bone can also narrow the passageway, making blockage and infection more likely. This condition can occur when the ear canal has been flushed with cold water repeatedly for years, as is the case with surfers, for whom the condition called "surfer's ear" is named. The eardrum is so thin that a physician can see through it into the middle ear. It can be ruptured by sharp objects, pressure from an infection in the middle ear, or even a firm cuffing or slapping of the ear. The eardrum is also susceptible to pressure changes during scuba diving. Several conditions can diminish the mobility of the small bones (ossicles) in the middle ear. Otitis media, an infection in the middle ear, occurs when fluid cannot escape into the throat because the eustachian tube is blocked. The fluid (pus or mucus) that accumulates prevents the ossicles from moving as efficiently as they normally do, thus dampening the sound waves.
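As a brief quantitative aside (an editorial addition drawing on general acoustics rather than on the article itself), the decibel scale described above compares a measured sound pressure p with a reference pressure p0 of 20 micropascals:

\[
L_p = 20 \log_{10}\!\left(\frac{p}{p_0}\right)\ \text{dB}, \qquad p_0 = 20\ \mu\text{Pa}.
\]

On this scale, the roughly 20-fold amplitude gain supplied by the ossicles corresponds to about 20 log10(20) ≈ 26 dB, and the 10 dB drop described above as significant corresponds to the sound pressure falling to roughly one-third of its previous value.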
In a disease called otosclerosis, spongy tissue grows around the bones of the inner ear. This growth sometimes binds the stapes in the oval window, which interferes with its normal vibration and causes deafness.

DECIBEL RATINGS AND HAZARDOUS LEVELS OF NOISE
|Decibel Level|Example of Sounds|
|---|---|
|35|Noise may prevent the listener from falling asleep|
|40|Quiet office noise level|
|60|Average television volume, sewing machine, lively conversation|
|70|Busy traffic, noisy restaurant|
|80|Heavy city traffic, factory noise, alarm clock|
|90|Cocktail party, lawn mower|
|180|Rocket launching pad|
Above 110 decibels, hearing may become painful. Above 120 decibels is considered deafening. Above 135 decibels, hearing will become extremely painful and hearing loss may result if exposure is prolonged. Above 180 decibels, hearing loss is almost certain with any exposure.

All the conditions mentioned so far—those that occur in the external and middle ear—are causes of what is known as conductive hearing loss. The second category, sensory hearing loss, refers to damage to the organ of Corti and the acoustic nerve. Prolonged exposure to loud noise is the leading cause of sensory hearing loss. A million people have this condition, many identified during the military draft and rejected as being unfit for duty. The cause is often believed to be prolonged exposure to rock music. Occupational noise exposure is the other leading cause of noise-induced hearing loss (NIHL) and is ample reason for wearing ear protection on the job. More unusual, but often undetected, is low-frequency hearing loss. Scientists discovered in 2001 that people with a particular gene mutation gradually lose their ability to hear low-frequency sounds. Since people with this type of hearing loss can still distinguish speech, they often remain unaware of the low-frequency changes in their hearing. The scientists believe that the same gene mutations might make some people more susceptible to high-frequency hearing loss, but further study is needed. One-third of people older than 65 have presbycusis, which is sensory hearing loss due to aging. Both NIHL and presbycusis are primarily loss of the ability to hear high-frequency sounds. In speech, consonants generally have a higher frequency than vowels, yet in most languages it is the consonants that provide the clues needed for determining what a person is saying. So these people hear plenty of noise; they just cannot easily make out what it means. They have particular trouble differentiating speech from background noise. Brain infections such as meningitis, drugs such as the aminoglycoside antibiotics (streptomycin, gentamycin, kanamycin, tobramycin), and Meniere's disease can also cause permanent sensory hearing loss. Meniere's disease combines attacks of hearing loss with attacks of vertigo. The symptoms may occur together or separately. High doses of salicylates such as aspirin and quinine can cause a temporary high-frequency loss, and prolonged high doses can lead to permanent deafness. There is also a hereditary form of sensory deafness and a congenital form most often caused by rubella (German measles). Sudden hearing loss of at least 30 dB in less than three days is most commonly caused by cochleitis, a mysterious viral infection. The final category of hearing loss is neural hearing loss.
Permanent neural hearing loss most often results from damage to the acoustic nerve and the parts of the brain that process sound. Hearing can also be diminished by extra sounds generated by the ear, referred to as tinnitus, which can be ringing, blowing, clicking, or anything else that no one but the patient hears. Tinnitus may be caused by loud noises, medication, allergies, or medical conditions—the same kinds of disorders that can cause diminished hearing. Many common causes of hearing loss can be detected through an examination of the ears and nose combined with simple hearing tests performed in the physician's office. An audiogram (a test of hearing at a range of sound frequencies) often concludes the evaluation. These simple tests often produce a diagnosis. If the defect is in the brain or the acoustic nerve, further neurological testing and imaging will be required. The audiogram has many uses in diagnosing hearing deficits. The pattern of hearing loss across the audible frequencies gives clues to the cause. Several alterations in the testing procedure can give additional information. For example, speech is perceived differently than pure tones. Adequate perception of sound combined with inability to recognize words points to a brain problem rather than a sensory or conductive deficit. Loudness perception is distorted by disease in certain areas but not in others. Acoustic neuromas often distort the perception of loudness. Conductive hearing loss can be treated with alternative therapies that are specific to the particular condition. The following dietary changes may help improve certain hearing impairment conditions:
- Alleviate accumulated wax in the ear by taking oral supplements with essential fatty acids such as flax oil and omega-3 oil.
- Identify and avoid potentially allergenic foods. Children who are allergic to foods have an increased risk of getting chronic ear infections.
- Take nutritional supplements. B-complex vitamins and iron supplements may be helpful in preventing protein deficiency and anemia. These conditions depress immune function and increase the risk of chronic ear infections. Children suffering from frequent ear infections may need supplementation with strong antioxidants such as vitamins A and C, zinc, and bioflavonoids. High-potency multivitamin and mineral supplements should contain most of these helpful nutrients as well as other essential vitamins and minerals.
There are several effective herbal treatments for hearing impairments. They include:
- Ginkgo biloba. Ginkgo may be effective in patients with hearing loss who often complain of ringing in the ears.
- Natural antibiotics such as echinacea and goldenseal can help prevent or treat ear infections.
- Certain Chinese herbal combinations can help alleviate tinnitus, ear infections, and chronic sinus infections that can lead to hearing loss.
Homeopathic therapies may help patients who have sensory hearing loss. An experienced homeopathic physician will prescribe specific remedies based on knowledge of the underlying cause. Other therapies that may help improve hearing in some patients include Ayurvedic medicine, craniosacral therapy, and auditory integration training. Conductive hearing loss can almost always be restored to some degree, if not completely.
- Matter in the ear canal can easily be removed, with a dramatic improvement in hearing.
- Surfer's ear gradually regresses if the patient avoids cold water or uses a special ear plug.
In advanced cases, surgeons can grind away the excess bone.
- A middle-ear infection involving fluid is also simple to treat. If medications do not work, fluid may be surgically drained through the eardrum, which heals completely after treatment.
- Traumatically damaged eardrums can be repaired with a tiny skin graft.
- Otosclerosis may be surgically repaired through an operating microscope. In this intricate procedure, tiny artificial parts are substituted for the original ossicles.
Now available for complete conductive hearing loss are bone conduction hearing aids and even devices that can be surgically implanted in the cochlea. Sensory and neural hearing loss, on the other hand, cannot readily be cured. Fortunately such hearing loss is rarely complete, and hearing aids can fill the deficit. In-the-ear hearing aids can boost the volume of sound by up to 70 dB. (Normal speech is about 60 dB.) Federal law now requires that aids be dispensed only by prescription. Tinnitus can sometimes be relieved by adding white noise (such as the sound of wind or waves crashing on the shore) to the environment. Decreased hearing is such a common problem that there are legions of organizations to provide assistance. Special language training, both in lip reading and signing, is available in most regions of the United States, as well as special schools and camps for children.
- Meniere's disease: The combination of vertigo and decreased hearing caused by abnormalities in the inner ear.
Prompt treatment and attentive follow-up of middle ear infections in children will prevent this cause of conductive hearing loss. Sensory hearing loss as a complication of epidemic disease has been greatly reduced by control of infectious childhood diseases, such as measles. Laws that require protection from loud noise in the workplace have substantially reduced incidences of noise-induced hearing loss. Surfers, cold-water fishermen, and other people who are regularly exposed to frigid water should use the right kind of ear plugs.
Alberti, R. W. "Occupational Hearing Loss." In Disorders of the Nose, Throat, Ear, Head, and Neck, edited by John Jacob Ballenger. Philadelphia: Lea & Febiger, 1991.
Bennett, J. Claude, and Fred Plum, eds. Cecil Textbook of Medicine. Philadelphia: W. B. Saunders, 1996.
"Hearing and Ear Disorders." In Alternative Medicine: The Definitive Guide, compiled by The Burton Goldberg Group. Tiburon, Calif.: Future Medicine Publishing, 1999.
Tierney, Lawrence M., M.D., et al., eds. Current Medical Diagnosis and Treatment. Stamford, CT: Appleton & Lange, 1998.
Nadol, J. B. "Hearing Loss." New England Journal of Medicine 329 (1993): 1092–102.
"Scientists Identify Gene Linked to Low-Frequency Hearing Loss." Genomics and Genetics Weekly (December 14, 2001): 6.
Sodipo, Joseph O., and Phillip A. Okeowo. "Therapeutic Acupuncture for Sensory-Neural Deafness." Am J Chin Med 8, no. 4 (1980): 385–390.
Alexander Graham Bell Association for the Deaf. 3417 Volta Place NW, Washington, DC 20007-2778. (202) 337-5220. http://www.agbell.org.
National Association of the Deaf. 814 Thayer Ave., Silver Spring, MD 20910-4500. (301) 587-1788. http://www.nad.org.
National Institute on Deafness and Other Communication Disorders, National Institutes of Health. 31 Center Dr., Bethesda, MD 20892. (301) 496-7243. Fax: (301) 402-0018. http://www.nih.gov/nidcd.
Self Help for Hard of Hearing People, Inc. 7910 Woodmont Avenue, Suite 1200, Bethesda, MD 20814. (301) 657-2248. http://www.shhh.org.
DeafSource: An Internet Guide to Resources for Helping Professionals Working with Deaf and Hard of Hearing Individuals. http://home.earthlink.net/~drblood. Teresa G. Odle
Review Question (1)
1. What makes international trade different from domestic trade (trade within the regions of a nation state)?
2. How have the US patterns of trade changed in the past 50 years?
3. What are some of the important factors that have contributed to the growth of international trade?
4. What was the principal idea in the doctrine of mercantilism?
5. What was David Hume's specie-flow mechanism?
6. Country A uses 4 units of labor to produce one unit of food and 8 units of labor to produce one unit of cloth. Country B uses 3 units of labor to produce one unit of food and 5 units of labor to produce one unit of cloth. Assuming that labor is the only factor of production in both countries, can these two countries gain from trade? Use a diagram to clearly explain. Be as specific as you can and clearly explain the key points on your graph. (A worked numerical sketch appears at the end of this question set.)
7. What does the MRS (along an indifference curve) reflect or explain?
8. Show how a country's production and consumption are determined under autarky.
9. What did David Ricardo mean by "comparative advantage"?
10. What do we mean by "terms of trade"?
11. A small country produces computers and rugs. At the relative price of 10 rugs per 1 computer, this country produces 50 rugs and 10 computers under autarky.
   a) Draw a Neoclassical production possibilities curve and show the point of production (and consumption) for this country to reflect the information described above.
   b) Now suppose the country opens to international trade. As a small country, the international relative price of computers would be given to this country, and it would have no influence on it. Assuming the international relative price of computers is 5 rugs per 1 computer, show the country's exports and imports and its gain from trade. Hint: You must draw a production possibilities curve and a set of indifference curves that would be consistent with the information given in the description.
12. Use a set of diagrams to show the gains from trade in a two-country/two-good model in the context of a Neoclassical world. Be very clear and specific about your assumptions on factor abundance and production.
13. What is an offer curve? Explain clearly the information contained in an offer curve (and in a specific point on an offer curve).
14. Using a set of offer curves, demonstrate how equilibrium terms of trade are achieved in a two-country/two-good (say food and cloth) world. Then, try to explain how each of the following developments may affect the size of trade and the terms of trade.
   a. Economic growth in one of the two countries
   b. Economic growth in both countries
   c. Economic recession in one of the two countries
   d. A tariff imposed by one country on its imports
15. Explain the Heckscher-Ohlin theorem.
16. What is meant by the "Leontief Paradox"?
17. Explain some of the reasons why the H-O model does not explain the real trade patterns of the world.
18. Explain the factor price equalization theorem.
19. Describe the Stolper-Samuelson Theorem.
20. Who stands to gain the most from trade in a labor-abundant country? Explain why under certain conditions (the specific-factor model) some capital owners and labor groups might oppose free trade.
21. Explain the Leontief Paradox.
22. Outline and briefly discuss the explanations for the Leontief Paradox.
23. Explain the Stolper-Samuelson Theorem. Be careful to explain the underlying assumptions of the theorem.
24. Explain how trade could affect (relative) prices and thus factor prices.
25. How do changes in factor prices affect the production process? (Hint: Think of the K/L ratio, for example. An illustrative sketch follows this question.)
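The following sketch is an editorial illustration for questions 24-25, not part of the original question set; it assumes a Cobb-Douglas technology purely as an example, since any constant-returns technology with smooth factor substitution gives the same qualitative result.

\[
Q = K^{\alpha}L^{1-\alpha}
\;\;\Longrightarrow\;\;
\frac{MP_K}{MP_L} = \frac{\alpha L}{(1-\alpha)K} = \frac{r}{w}
\;\;\Longrightarrow\;\;
\frac{K}{L} = \frac{\alpha}{1-\alpha}\cdot\frac{w}{r}.
\]

Cost minimization equates the ratio of marginal products to the factor-price ratio, so a rise in the wage-rental ratio w/r raises the cost-minimizing capital-labor ratio: producers substitute toward the factor that has become relatively cheaper. This is the mechanism behind the hint to question 25.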
26. Explain some of the reasons why we do not observe factor price equalization.
27. Explain how factor mobility could affect adjustments in factor prices following the opening of trade. (Consider both the short run and the long run.)
28. Explain who will be the losers and who will be the gainers (from free trade) under the assumption of factor immobility. Think of two factors of production: capital and labor.
29. What do we mean by intra-industry trade? Explain some of the reasons for such trade.
30. China is considered a labor-abundant country. It is widely held that China’s attempt to join the WTO will eventually result in China having freer trade relations with the rest of the world. Using H-O analysis, who in China stands to gain more from free trade? Explain.
31. If we assume that the existence of different skill levels in the US labor market is an explanation for the Leontief Paradox, how would you expect free trade to affect income distribution in the US?
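The following worked sketch relates to question 6 of Review Question (1) above; it is an editorial addition, not part of the original question set, and simply spells out the opportunity-cost arithmetic implied by the unit labor requirements stated in the question.

\[
\text{Opportunity cost of one unit of cloth:}\qquad
\text{Country A: } \tfrac{8}{4} = 2 \text{ units of food},\qquad
\text{Country B: } \tfrac{5}{3} \approx 1.67 \text{ units of food}.
\]

Country B gives up less food per unit of cloth, so B has the comparative advantage in cloth and A has it in food, even though B has an absolute advantage in both goods (3 < 4 and 5 < 8). Both countries can therefore gain from trade at any terms of trade between 5/3 and 2 units of food per unit of cloth; the diagram asked for in the question would show each country consuming at a point outside its own production possibilities frontier at such a price ratio.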
Review Questions (1A)
1. Consider a small country (Agraria) with given resource endowments that produces agricultural (A) and manufactured (M) products. Suppose that agricultural products are land-intensive while manufacturing is labor-intensive. The country is (relatively) more richly endowed with land than with labor.
   a. Based on the above description, draw a hypothetical production-possibilities frontier (curve) for this economy. (Your PPF does not have to be identical to those of others.)
   b. Under autarky, the ratio between the price of agricultural products and the price of manufactured products (Pa/Pm) is 1/2. Assuming the country is operating under full employment and full production, show the country's production point under autarky on your diagram.
   c. Suppose this country opens to trade with the rest of the world. The world's price ratio, (Pa/Pm)w, is equal to one. Show the country's production point after trade.
   d. Now draw a community indifference map consistent with the country's consumption under autarky. Show the country's consumption point after trade.
   e. Show the country's levels of exports and imports on your diagram.
   f. Discuss the country's gain from trade.
2. Suppose Agraria's neighboring country (Laboria) is also a small country. Laboria, however, is a country with a relatively small landmass but a large population. Laboria also produces agricultural as well as manufactured products.
   a. Based on the above description, draw a hypothetical production-possibilities frontier (curve) for this country as well. (Your PPF for this country does not have to be identical to those of others.)
   b. Under autarky, the ratio between the price of agricultural products and the price of manufactured products (Pa/Pm) in Laboria is equal to 2. Assuming the country is operating under full employment and full production, show Laboria's production point under autarky on your diagram.
   c. Now suppose that rather than opening their borders to the rest of the world, the two countries enter into a bilateral trade agreement according to which they trade with each other freely while keeping their borders closed to the rest of the world. Using your diagrams, explain and demonstrate how the terms of trade (price ratio) between the two countries get established.
   d. Show how each country would gain from trading with the other.
   e. Determine the levels of exports and imports of each country. (Show them on your diagram.)
3. Can two countries with identical PPFs gain from trade with each other? Explain clearly.
4. Carefully and clearly explain under what conditions (or circumstances) a country could lose in social welfare as a result of (international) trade.
5. Draw Agraria's "offer curve".
   a. Explain what an offer curve shows.
   b. Assume Agraria, as a small country, faces an international price ratio (Pa/Pm) of one. Show Agraria's export and import levels on its offer curve.
Review Questions (2)
1. Explain internal and external economies of scale.
2. Explain how internal economies of scale affect patterns of trade.
3. Explain how external economies of scale affect patterns of trade.
4. What is a tariff?
5. One way of determining the overall level of tariffs for a country is to simply average the tariff rates applied to different (imported) goods: a simple unweighted average. What is the problem with this method?
6. One way of determining the overall level of tariffs for a country is to calculate the weighted average of all tariff rates based on the (relative) size of each import. What is the problem with this method?
7. Which group of countries has lowered its tariff rates more since the completion of the Uruguay Round? How have the lowered tariff rates affected the patterns of trade in the world?
8. Explain consumer and producer surplus.
9. Explain the welfare effects of tariffs in the case of a small country. Use a diagram to demonstrate. Who gains, who loses, and what is the net effect on society?
10. Why is it that a large country could gain from tariffs on certain imports? Explain.
11. In one of the tables in the textbook (Chapter Six), the net welfare effects of tariffs for certain U.S. imports are listed. Which imports rank highest? Can you explain why?
12. What do we mean by the "effective tariff rate"?
13. The tariff rate on women's footwear is 10 percent. Imported inputs constitute 75% of the value of women's footwear, and there is only a 20% tariff on these imports. Determine the effective tariff rate on women's footwear. (A worked sketch appears at the end of this question set.)
14. Explain (you do not have to draw graphs) the welfare implications of export taxes for a small country, a large country, and the world.
15. Use offer curves to explain the effect of import tariffs in the case of a large country.
16. Compare the welfare implications of tariffs and quotas.
17. Explain the welfare implications of an export subsidy from the perspective of a small importing country.
18. What are countervailing duties?
19. What is dumping? Explain each of the different types of dumping discussed in the textbook.
20. What are some of the problems associated with predatory dumping?
21. What are the conditions under which a firm can engage in persistent dumping? Explain.
22. Briefly explain some of the more commonly used non-tariff barriers and discuss their effectiveness.
23. Explain the two common methods of measuring non-tariff barriers.
24. Explain the "infant industry" argument for protection.
25. Assuming the infant status of an industry justifies some kind of protection for it, which approach would you recommend from among a tariff, a quota, and a subsidy? Clearly explain why. You might want to use graphs to make your point.
26. What is an optimal tariff? Explain its implications for the importing country and the world.
27. Demonstrate how international trade can affect a firm that has monopolistic power at home.
28. Demonstrate that when a firm has monopolistic power in the domestic market, even restricted (by tariffs) trade could be better for consumers than no trade.
29. Explain under what conditions externalities could justify trade restrictions.
30. One of the common arguments for protection has to do with the difficulties that industrial countries have competing with low-wage countries. Is that a valid argument? Explain.
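The following is a worked sketch of the effective-rate-of-protection calculation in question 13 of Review Questions (2); it is an editorial addition using the standard formula, where t_f is the nominal tariff on the final good, t_i the tariff on imported inputs, and a the share of imported inputs in the value of the final good.

\[
\mathrm{ERP} = \frac{t_f - a\,t_i}{1 - a}
= \frac{0.10 - 0.75 \times 0.20}{1 - 0.75}
= \frac{-0.05}{0.25} = -0.20 .
\]

With these numbers the effective rate of protection on women's footwear is minus 20 percent: because the imported inputs are taxed more heavily than the final good, the tariff structure actually penalizes domestic value added in footwear rather than protecting it.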
30. One of the common arguments for protection has to do with the difficulties that industrial countries have competing with low-wage countries. Is that a valid argument? Explain.
Review Questions (3)
1. Ad valorem tariffs.
2. What are the advantages of ad valorem tariffs over specific tariffs?
3. What is the net loss of economic welfare resulting from the imposition of an import tariff called?
4. Optimal tariffs: Explain under what conditions a country could gain in economic welfare by imposing tariffs on its imports.
5. What is an effective rate of protection? What does it depend on? How do we calculate it?
6. What is an import quota?
7. Compare tariffs and quotas with respect to their welfare implications.
8. Why do we say quotas are more restrictive than tariffs?
9. Explain why quotas and voluntary export restrictions might not have significant impacts on a country's balance-of-trade position.
10. Why do countries impose tariffs on their imports?
11. Demonstrate why economists prefer subsidies, as an import-restricting tool, to tariffs and quotas.
12. Demonstrate the welfare effects of an export subsidy on an exporting country and its trading partners (the world).
13. One way of determining the overall level of tariffs for a country is to calculate the weighted average of all tariff rates based on the (relative) size of each import. What is the problem with this method?
14. Explain and demonstrate why, when all countries subsidize a certain export product, not only will such subsidies become ineffective (in terms of support for producers), they will also result in a misallocation of economic resources. Besides, when an export subsidy becomes permanent in a number of countries, no single country would be willing to abandon it for fear of losing its export markets to other countries.
15. Explain and demonstrate the welfare impacts of an export tax, first in the case of a small country and then for a large country.
16. What are countervailing duties?
17. Discuss the welfare effects of a combination of an export subsidy and a countervailing duty on both the exporting and the importing country.
18. What is "dumping"?
19. Describe the different types of dumping and the conditions under which they might occur.
20. Name and briefly describe four other types of non-tariff barriers.
21. Explain and briefly discuss each of the following protectionist arguments:
a. Infant industry argument
b. Optimal tariff argument
22. Why is it argued that a monopoly under autarky could cause a misallocation of resources?
23. How does free trade affect the behavior of a monopoly?
24. If we are to provide protection for a monopolistic firm in a (small) country, is a tariff or a quota more harmful to consumers? Explain.
25. When production of a certain good in a country produces external benefits, some degree of protection for that good may be justified. Among a quota, a tariff, and a subsidy, which policy instrument would you recommend? Clearly explain. You may use graphs to demonstrate your points more clearly.
26. One of the arguments made in support of protection has to do with the observation that in many countries costs of production are lower than, say, in the US because of cheap labor, so it would not be fair to expect American firms to compete with producers in these low-wage countries. Do you agree with this argument? Explain.
27. What were the important provisions in the Reciprocal Trade Agreements Act of 1934?
28. What is "GATT"? Briefly explain.
29. What were the major accomplishments of the Uruguay Round?
30. What is meant by the "fast track" process?
31. Briefly explain each of the following "forms" of trade or economic integration:
· Preferential trading arrangement
· Free-trade area
· Customs union
· Common market
· Economic union
32. Explain and demonstrate how a free-trade agreement between two or more countries could result in trade diversion.
33. Is it possible for a free-trade agreement to cause only trade creation? Explain.
34. In which of the above categories does NAFTA belong? Explain.
35. What are Maquiladoras? Would you say they have generally benefited Mexico?
36. Based on what you have learned in the course, in your opinion, what have been the successes and the failures of NAFTA for its members?
37. What are the factors contributing to economic growth?
38. Demonstrate how economic growth could affect a country's welfare in autarky.
39. Demonstrate how economic growth could affect a small country's welfare, assuming that the country is open to trade.
40. Demonstrate how labor-endowment growth (assuming no population growth) could affect a small country's trade, income, and terms of trade.
41. Explain and demonstrate how labor-endowment growth could affect a large country's trade, income, and terms of trade.
42. Explain and demonstrate how capital-endowment growth could affect a large country's trade, income, and terms of trade.
43. Explain how technology-driven growth could cause a decline in economic welfare: immiserizing growth.
44. Using a diagram, demonstrate how a wage differential between two countries could result in movements of labor and eventually wage equalization.
45. Using a diagram, demonstrate how a rent (the rate of return on capital) differential between two countries could result in capital movements.
46. What could explain the fact that most foreign investments go to industrial countries that are relatively capital-rich?
47. Identify and briefly explain different types of capital movements.
48. What are some of the reasons or concerns behind the opposition to immigration policies?
Review Questions (4)
1. Silverland is a small country facing international terms of trade (Px/Pm) of 3/2. At these terms of trade the country imports 600 units.
a. Determine the country's exports.
b. Draw a hypothetical offer curve for Silverland consistent with the above information.
c. Using your diagram, determine Silverland's elasticity of import at the above terms of trade.
d. Suppose that, as a result of the discovery of new mineral reserves (silver), Silverland's economy has had significant growth. Explain the possible effects of this economic growth on the country's offer curve and its elasticity of import.
2. Consider a two-country world with Country A and Country B.
a. Draw a hypothetical offer curve for each country.
b. Determine and show the equilibrium terms of trade between these two countries.
c. Using your diagram, show and determine the elasticity of import for each country.
d. Suppose a significant amount of capital is transferred from Country A to Country B. Show and discuss the possible effects of this capital transfer on each country's offer curve, exports, imports, and elasticity of import.
3. At the international terms of trade of one car for six rugs, Mexico, as a small country, imports 20,000 cars per year.
a. In a two-dimensional space measuring rugs on the vertical axis and cars on the horizontal axis, draw the international terms-of-trade line.
b. Draw a hypothetical offer curve for Mexico showing her export and import quantities.
c. As a result of NAFTA, Mexico lowers its tariff on imported cars.
Show the effect on your diagram and explain it.
4. In a two-country world with countries A and B, country A imports 300 bushels of tomatoes from country B and exports 900 bushels of corn to that country.
a. Assuming an equilibrium terms of trade, draw the two countries' offer curves.
b. Now suppose country A imposes a tariff on imported tomatoes to protect its tomato farmers. Show and explain the effect of this tariff on the trade between the two countries.
c. How does this tariff affect the terms of trade for each country?
d. In retaliation, country B considers imposing an import tariff on corn. Discuss the welfare effects of a retaliatory tariff on the two countries.
5. The following diagram shows the offer curve for a small country importing food products and exporting manufactured goods.
a. Calculate the elasticity of demand for imports for this country at the (relative) price ratios Pm/Pf = 1/5 and Pm/Pf = 1/2.
b. Explain (and compare) the effect of a change in each of these relative prices on the exports and imports of the country.
c. Assuming that the international (relative) terms of trade (Pm/Pf) are 1/2, show and explain the impact of a significant inflow of capital on the country's trade patterns. How will the inflow of capital affect the country's elasticity of demand for imports?
Note: Try to be as precise as you can when using the diagram.
6. Outline the underlying assumptions of the Heckscher-Ohlin theorem.
7. One of the important assumptions of H-O analysis is that each good is (relatively) more intensive in a given factor (say, capital or labor) at all relative factor prices. Use a diagram to demonstrate this assumption.
8. Explain the Heckscher-Ohlin theorem.
9. Explain the factor price equalization theorem.
10. Describe the Stolper-Samuelson theorem.
11. Describe the effect of a factor-intensity reversal on the trade patterns between two countries.
12. Use a diagram to explain how the absence of perfect competition would prevent factor price equalization from being realized. (Hint: Monopoly and price discrimination.)
13. Who stands to gain the most from trade in a labor-abundant country? Explain why under certain conditions (the specific-factor model) some capital owners and labor groups might oppose free trade.
14. Explain the Leontief Paradox.
15. Outline and briefly discuss the explanations for the Leontief Paradox.
16. Explain the fundamentals of the "commodity approach" to testing the H-O theorem.
17. China is considered a labor-abundant country. It is widely held that China's attempt to join the WTO will eventually result in China having freer trade relations with the rest of the world. Using H-O analysis, who in China stands to gain more from free trade? Explain.
18. If we assume that the existence of different skill levels in the US labor market is an explanation for the Leontief Paradox, how would you expect free trade to affect income distribution in the US?
19. Explain and compare "specific" and "ad valorem" tariffs. Which one would you characterize as being more effective in providing consistent levels of protection for domestic industries? Explain.
20. Use a diagram to compare tariffs and quotas in terms of their welfare implications.
21. Compare the welfare implications of a tariff with those of an equally protective subsidy.
22. Critique the Generalized System of Preferences (GSP) from an efficiency standpoint.
23. What is the rationale for "offshore assembly provisions" (OAP)?
24. The nominal tariff rate for shoes (footwear) is 8.8 percent.
Most shoe manufacturers in the US import the partly assembled tops and soles of their shoes from abroad. Generally, the imported tops make up about 25 percent of the value of a pair of shoes and the soles about 30 percent. There is a 5 percent tariff on imported tops and a 6.5 percent tariff on imported soles. Determine the effective rate of protection for shoe manufacturers. (See the worked sketch following these question lists.)
25. Why do we say that weighted-average tariff rates could be biased downward?
26. Use a supply and demand diagram to demonstrate the welfare implications of a tariff (in terms of changes in consumer and producer surplus) for a small country.
27. Compare tariffs and quotas in terms of their welfare implications.
28. Explain why it is argued that, from a welfare standpoint, a subsidy is preferred to a tariff and a quota.
29. Consider a small country in which rice is a staple food. The country produces rice and, under autarky, the domestic (equilibrium) price of rice is $3 per bushel. Draw a hypothetical supply and demand diagram to show the market for rice in this country. Suppose the international price of rice is $5 per bushel.
a. Show the effect of free trade on the production and consumption of rice in this country.
b. Now suppose an export tax of $1 is imposed on rice. Demonstrate the effect of this tax on the production, domestic consumption, and export of rice.
30. Use a diagram to show the implications of a specific tariff in the case of a large country. Explain who bears the burden of a tariff. (a) Under what condition(s) is the burden of a tariff fully borne by the importing country? (b) Under what condition(s) is the burden of a tariff fully borne by the exporting country?
31. Use a diagram to show the effect of an ad valorem export tax in the case of a large country. Show who bears the burden of an export tax. What are the welfare implications of such a tax?
32. Use offer curve diagram(s) to demonstrate and compare the effects of import tariffs for small and large countries.
Review Questions (5)
1. Explain how changes in transportation costs could affect the patterns of international trade.
2. For what types of goods is trade more likely to be affected by changes in transportation costs?
3. We know that a concave (bowed-out) production possibilities curve indicates increasing opportunity cost (caused by diminishing returns). What might a convex (bowed-in) production possibilities curve indicate?
4. Explain how, under free trade, a large country might be able to drive small countries' producers out of the market even when the small countries have a comparative cost advantage over the large country.
5. Free trade agreements among small countries (before they open their borders to free trade completely) might help them take advantage of their production potential more fully. Explain.
6. Use an example to explain the "product cycle" theory of trade.
7. Use the scale-economies theory of trade to explain why small countries may be at a disadvantage when opening their borders to free trade.
8. In negotiating the terms of their membership in the WTO, while demanding free access to the markets of other member countries, developing countries often request that they be given a longer transition period for the removal of their trade barriers. Explain how they could use the economies-of-scale argument to justify their request.
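The two effective-rate-of-protection questions above (Review Questions 2, #13, and Review Questions 4, #24) can be worked through with the standard textbook formula ERP = (t_final - sum(a_i * t_i)) / (1 - sum(a_i)), where a_i is the value share of imported input i and t_i is the tariff on that input. The Python snippet below is only an illustrative sketch added here, not part of the course materials; the helper name effective_rate_of_protection is made up.

```python
# Minimal illustrative sketch (not from the course materials): the effective
# rate of protection (ERP) for a final good that uses imported inputs.
# Standard textbook formula:
#     ERP = (t_final - sum(a_i * t_i)) / (1 - sum(a_i))
# where a_i is the share of imported input i in the final good's value
# and t_i is the tariff rate on that input.

def effective_rate_of_protection(t_final, inputs):
    """inputs: iterable of (value_share, input_tariff) pairs."""
    weighted_input_tariffs = sum(a * t for a, t in inputs)
    total_input_share = sum(a for a, _ in inputs)
    return (t_final - weighted_input_tariffs) / (1 - total_input_share)

# Women's footwear (Review Questions 2, #13): 10% nominal tariff,
# imported inputs are 75% of value and face a 20% tariff.
erp_footwear = effective_rate_of_protection(0.10, [(0.75, 0.20)])
print(f"Women's footwear ERP: {erp_footwear:.0%}")   # -20%: negative effective protection

# Shoe manufacturers (Review Questions 4, #24): 8.8% nominal tariff;
# tops are 25% of value (5% tariff), soles are 30% of value (6.5% tariff).
erp_shoes = effective_rate_of_protection(0.088, [(0.25, 0.05), (0.30, 0.065)])
print(f"Shoe manufacturers ERP: {erp_shoes:.1%}")    # about 12.4%
```

The footwear case comes out negative because the inputs are taxed more heavily than the final good, so domestic value added receives less protection than the 10 percent nominal rate suggests; the shoe case shows effective protection above the nominal 8.8 percent.

For the small-country tariff welfare questions (for example, Review Questions 2, #9, and Review Questions 4, #26), the following sketch tallies the usual surplus areas under assumed linear demand and supply curves. All the numbers are hypothetical; the point is only that the consumer loss exceeds the producer gain plus tariff revenue, leaving a deadweight loss.

```python
# Minimal illustrative sketch (hypothetical numbers): welfare effects of a
# specific tariff in a small country that takes the world price as given.
# Assumed linear schedules:  Qd = a - b*P  (demand),  Qs = c + d*P  (supply).

def tariff_welfare(a, b, c, d, world_price, tariff):
    def qd(p): return a - b * p
    def qs(p): return c + d * p

    p0, p1 = world_price, world_price + tariff  # domestic price rises by the full tariff

    d_cs = -0.5 * (qd(p0) + qd(p1)) * tariff    # consumer surplus falls (trapezoid under demand)
    d_ps = 0.5 * (qs(p0) + qs(p1)) * tariff     # producer surplus rises (trapezoid above supply)
    revenue = tariff * (qd(p1) - qs(p1))        # government collects the tariff on remaining imports
    deadweight = -(d_cs + d_ps + revenue)       # consumer loss not captured by anyone
    return d_cs, d_ps, revenue, deadweight

# Hypothetical market: Qd = 100 - 2P, Qs = 10 + 2P, world price 10, tariff 5.
d_cs, d_ps, rev, dwl = tariff_welfare(100, 2, 10, 2, 10, 5)
print(f"Change in consumer surplus: {d_cs}")   # -375.0
print(f"Change in producer surplus: {d_ps}")   # 175.0 (gain)
print(f"Tariff revenue:             {rev}")    # 150
print(f"Deadweight loss:            {dwl}")    # 50.0
```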
http://www.oswego.edu/~atri/lec343.html
13
17
Lesson 5
A. Scarcity and Pricing
- Goods and services are considered scarce when they're not free: either they need to be bought or replaced with another resource of equal value.
- Not all things come with a price tag. Resources such as water and air are considered to be free.
- Almost all resources are scarce resources, and almost all goods and services are scarce goods and services.
- Such scarcity therefore brings about prices for goods and services. Price measures the value of a given commodity.
- Prices are determined by the price mechanism, which has two components: supply and demand.
B. Demand
- Demand is the behavior of buyers towards certain goods and services.
- Such behavior is determined by the price and quantity demanded of the product.
- There is an inverse relationship between price and quantity demanded: when price increases, demand decreases, and when price decreases, demand increases. (see chart)
- The chart shows the behavior of buyers at different price levels.
- If the price decreases, many people will be able to afford the good, thus raising demand; but if the price increases, few people will be able to afford it, and so the demand drops.
C. Supply
- On the other hand, supply is the behavior of the suppliers of resources and commodities.
- It is determined by the price and quantity supplied of the product.
- The relationship between price and supply is direct: when prices go up, so will the supply, and vice versa. (see chart)
- The chart shows the behavior of suppliers at different price levels.
- At higher prices, suppliers are willing to produce more goods in search of higher profit.
D. Price Determination
- Price is controlled by these two market forces, so prices, demand, and quantities vary from time to time, but there is one position at which the price satisfies both supply and demand. This is called the price equilibrium.
- In a state of excess demand, there is not enough supply to satisfy the needs of the people. In order to increase the supply and meet the demand, suppliers will need more profit, so prices go up to induce more production of the product.
- In contrast, in a state of excess supply, there is a bulging surplus of supply compared to the little demand. In such a situation, sellers are pressured to capture the small market in order to maximize their profits. They'll be forced to bring down the price in order to attract more buyers.
E. Factors Affecting Shifts in Price
- (Demand) There is an unstable flux of salaries in society: the people will have more or less money to buy goods and services.
- (Demand) A change in the price of a substitute commodity occurs. A substitute commodity is a product that satisfies similar needs as the original commodity (e.g., beef for pork).
- (Demand) A rise or fall in the price of complementary commodities happens. Complementary commodities are products used jointly with the original commodity (e.g., original commodity: cars; complementary commodity: gasoline).
- (Demand) There is an increase/decrease in population.
- (Demand) A redistribution of income to or away from certain people who favor a kind of commodity happens. For instance, people from Town X like pizza, so their income will determine the demand for pizza.
- (Supply) There is a change in the desire of producers to supply more or less.
- (Supply) Prices of substitute commodities change.
- (Supply) Prices of resources (land, labor, and capital) change.
- (Supply) Improvements or deterioration in technology occur. If technology improves the production of Brand X, then more of Brand X will be made.
F. Application of Price Determination
- In the theory of price determination, it is assumed that the market is in perfect competition. Under such a situation, buyers and sellers aren't in a position to dictate the prices and demand of products.
- Unfortunately, in reality, perfect competition is very rare. Big corporations, legislators, and trade unions control most markets.
- Although quite limited in nature, the theory can still give a pretty good picture of the developmental situation. Its limitations should be kept in check.
- (see also Causes: Supply and Demand)
- Economics lesson plans were based on and adapted from Fr. Roberto Yap's economics notes found in the Tulong Dunong Sourcebook and Michael P. Todaro's Economics for a Developing World.
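As a rough illustration of the price-determination discussion above, here is a minimal Python sketch using hypothetical linear demand and supply schedules. The coefficients are invented for illustration and do not come from the lesson; the sketch only shows that the equilibrium price is the one at which quantity demanded equals quantity supplied, with excess demand pushing the price up and excess supply pushing it down.

```python
# Minimal illustrative sketch (coefficients invented, not from the lesson):
# linear demand and supply schedules and the equilibrium price.
#     demand:  Qd = 120 - 4*P   (inverse relationship between price and quantity demanded)
#     supply:  Qs = 20 + 6*P    (direct relationship between price and quantity supplied)

def quantity_demanded(price):
    return 120 - 4 * price

def quantity_supplied(price):
    return 20 + 6 * price

# Equilibrium: Qd = Qs  ->  120 - 4P = 20 + 6P  ->  P* = 10, Q* = 80
equilibrium_price = (120 - 20) / (4 + 6)
equilibrium_quantity = quantity_demanded(equilibrium_price)
print(f"Equilibrium price: {equilibrium_price}, equilibrium quantity: {equilibrium_quantity}")

# Away from equilibrium, the sign of (Qd - Qs) signals which way price moves.
for price in (6, 10, 14):
    gap = quantity_demanded(price) - quantity_supplied(price)
    if gap > 0:
        tendency = "excess demand -> price tends to rise"
    elif gap < 0:
        tendency = "excess supply -> price tends to fall"
    else:
        tendency = "equilibrium -> no pressure on price"
    print(f"P = {price}: Qd - Qs = {gap}  ({tendency})")
```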
http://library.thinkquest.org/25009/teach/teach.lesson5.html
13
15
In June, the World Health Organization (WHO) declared swine flu--officially known as the H1N1 virus--the first influenza pandemic since 1968. The following month, the WHO told countries to stop reporting individual swine flu infections because the number of victims had rapidly exceeded 1 million people and the virus had spread to almost every nation in the world. The flu continues to spread. A WHO scientist estimates that H1N1 could infect 2 billion people in two years. Since emerging in April, it has become one of the fastest-spreading contagious diseases on record. H1N1 will return to the U.S. this fall with the flu season. This year's flu season may be more severe than normal, but the U.S. has the capacity to respond to the extra strains. Federal, state, and local governments should continue to improve their pandemic response and risk communication programs. They still need to do much to improve cross-state planning, continuity of operations, situational awareness and information sharing, and community resilience. However, an effective public response will likely be the most important factor in mitigating the effects of the flu season. The public should follow the guidelines of a responsible national vaccination strategy and adopt behaviors, such as washing hands properly, to limit the spread of the disease and minimize its impact.
What Is Swine Flu?
Swine flu, identified as the H1N1 strain, contains a unique genetic makeup that distinguishes it from other influenza viruses. H1N1 includes gene segments from North American swine, bird, and human flu strains and from Eurasian swine flu--a unique combination that had not been previously reported. New influenza viruses are often created through "molecular reassortment," in which two distinct virus strains invade the same cell and, in the process of using the cell to replicate themselves, mingle their genes, creating a hybrid strain. The Centers for Disease Control and Prevention (CDC) in the U.S. Department of Health and Human Services (HHS) has concluded that many H1N1 symptoms are similar to seasonal flu symptoms: fever, cough, sore throat, runny or stuffy nose, body aches, headache, chills, and fatigue. The CDC anticipates complications similar to those of seasonal flu. Indeed, the majority of reported cases exhibited symptoms found in influenza-like illness, such as fever and cough. However, some patients reported vomiting and diarrhea, which are unusual for the seasonal flu. H1N1 transmission modes also match those for seasonal influenza. The CDC has concluded that H1N1 most likely spreads from person to person by "large particle respiratory droplet transmission" (for example, via coughs or sneezes in close range of an uninfected person). Additionally, transmission can occur through contact with a contaminated surface. The virus can live on surfaces and infect individuals for up to eight hours after being deposited. Therefore, the CDC has warned that "all respiratory secretions and bodily fluids" should be considered potentially infectious. These materials can contain live viruses, which can infect the human body, usually entering through the nose or throat. As with other influenza viruses, infected individuals can begin infecting others before beginning to show symptoms and can still be infectious up to a week after onset of the illness. Like other forms of "common" influenza, H1N1 has proved resistant to amantadine and rimantadine, older antiviral drugs.
Antiviral drugs stop flu from developing by inhibiting the virus from entering cells, thus preventing it from replicating. However, some flu viruses mutate and develop a resistance to antiviral drugs. In 2006, the CDC recommended against using amantadine and rimantadine for seasonal flu after a sample of cases in 26 states showed over a 92 percent resistance rate. The current strain of H1N1 has not yet become resistant to newer antivirals, such as Tamiflu (oseltamivir) and Relenza (zanamivir). Of course, this may change in the future because the virus continues to mutate. Indeed, a seasonal flu strain that appeared in the 2008-2009 flu season proved resistant to Tamiflu.
During the initial H1N1 outbreak, no vaccine was available. Vaccines differ from antivirals in that they can be a prophylactic, preventing an individual from contracting a disease in the first place by stimulating the body's immune system to produce antibodies that will kill the virus. Vaccines are developed from dead or inactivated virus, but the virus must first be identified before a vaccine can be developed. Furthermore, because flu viruses constantly mutate, the formulation of flu vaccines must be changed almost yearly to remain effective against currently circulating strains. The H1N1 strain had not been identified before the outbreak in April 2009, thus no vaccine was available. The medical response to H1N1 will probably appear nearly identical to the response to seasonal flu. Individuals will be treated with the same antivirals. Indeed, individuals with flu-like symptoms are unlikely to be tested for H1N1 because the medical protocols will be so similar. In addition, individuals will be encouraged to receive both the seasonal flu vaccine and the H1N1 vaccine when it becomes available.
The principal fear is that the current strain of H1N1 could mutate into a highly lethal strain that causes a pandemic. A pandemic is a disease outbreak that affects a wide geographical area and infects a high proportion of the human population. Dr. Peter Palese, the Chair of Microbiology at Mt. Sinai Hospital in New York City and an international expert on infectious influenza, has noted that H1N1 belongs to the same virus group as the 1918 Spanish flu, which killed millions worldwide. Moreover, the H1N1 strain is transmitted human to human, enabling it to spread easily. H1N1 has also displayed an "unusual robustness" by emerging outside the annual flu season, which occurs during the colder half of the year. Furthermore, the virus has become more virulent and/or deadly through "mutations and/or acquisition of gene derived from other human or influenza viruses." These factors raise serious concerns about the prospects of another deadly global pandemic. On the other hand, Dr. Palese notes that certain factors weigh against the likelihood of a plague on the scale of 1918. In "1976 there was an outbreak of an H1N1 swine virus in Fort Dix, N.J., which showed human-to-human transmission but did not go on to become a highly virulent strain." While the new strain of H1N1 is more complex, it still may not be more deadly than other seasonal influenzas. Furthermore, the virus lacks "an important molecular signature (the protein PB1-F2) which was present in the 1918 virus.... [H1N1] doesn't have what it takes to become a major killer." Research suggests that without the virulence marker the new strain will not be highly pathogenic. While H1N1 nightmare scenarios are not inevitable, the disease will certainly become more widespread.
H1N1 is more contagious than seasonal influenza. Common influenza has a "secondary attack rate" (the rate of infection following close contact with an infected person) ranging from 5 percent to 15 percent. The WHO has estimated that the new H1N1 strain's secondary attack rate is 22 percent to 33 percent. In fact, the disease has spread so widely and rapidly that the WHO has classified the current H1N1 strain as a global pandemic. In short, many more people could contract the flu during this flu season than normal. More people will miss more days of work and school. In addition to potentially being more contagious than seasonal flu, H1N1 could cause severe complications. Seasonal flu kills an average of about 36,000 people in the United States each year. Another 200,000 are hospitalized. As of August 21, the CDC reported 522 deaths from H1N1-related illness in the United States and 7,983 hospitalizations. A White House advisory panel concluded that a second wave of H1N1 cases during the upcoming flu season could cause 90,000 deaths and hospitalize 300,000. Thus, the 2009-2010 flu season could be two or three times more severe than normal. On the other hand, the CDC has concluded that this advisory estimate may be excessive. Indeed, Dr. Peter Gross, chief medical officer at Hackensack University Medical Center, has concluded that "the mortality is no worse than the seasonal flu and, if anything, might be slightly less." If there are more deaths this year than during a normal flu season, it could simply be the result of more people catching the flu rather than the flu being more deadly. Furthermore, younger people are unusually susceptible to H1N1. For seasonal flu, people 65 and older are usually considered part of the high-risk group and account for about 90 percent of flu-related deaths and 60 percent of flu-related hospitalizations. Yet H1N1 has affected younger populations at higher rates than is usual for seasonal flu. The CDC has concluded that more deaths have occurred among people under 25 years old. In contrast, an estimated one-third of older adults have some antibodies against H1N1. Beyond the older and younger groups, the groups most vulnerable to severe and life-threatening complications from H1N1 infection are the same groups most vulnerable to other types of flu. These include pregnant women and people with medical conditions such as asthma, diabetes, suppressed immune systems, heart or kidney disease, and neurocognitive or neuromuscular disorders. For these reasons, when the H1N1 flu vaccine becomes available, priority will probably be given to vaccinating younger individuals and others with particular medical conditions--the most vulnerable groups.
While the disease will undoubtedly spread widely, limiting transmission and infection to the maximum extent possible is the most vital component of the strategy to respond to H1N1. The fewer individuals who get sick, the lighter the burden placed on medical providers. The fewer high-risk individuals who get sick, the lower the likelihood of serious medical complications and death. Events surrounding the outbreak of H1N1 this spring hold lessons for the right actions to deal with future outbreaks.
Sickness and Response
Mexico was the epicenter of the spring swine flu outbreak, and the U.S. media chronicled its progress. Mexican Secretary of Health José Ángel Córdova initially told reporters that the virus "constitutes a respiratory epidemic that so far is controllable."
However, the actions taken by the Mexican public health department belied that optimistic tone and may have contributed to the subsequent global alarm about the influenza. Mexican officials effectively shut down all cultural life by closing museums and canceling soccer games and religious services. In addition, they requested that citizens avoid cinemas and other large public events and abstain from shaking hands and kissing one another on the cheek. Perhaps most significantly, officials closed down all of Mexico City's schools for the first time since the earthquake of 1985, leaving 7 million students idle. Citizens mostly complied with the government's requests and avoided public interaction, leading some observers to describe Mexico City as a "ghost town." Restaurants, schools, and other public venues did not reopen until early May. The sudden outbreak in Mexico, the unexpected deaths among young people with no previous medical complications, and the unsure flow of information from and response by Mexican officials soon garnered significant press attention in the United States and sparked widespread speculation. At least one Member of Congress publicly called for closing the border. Subsequent research has confirmed that attempting to control land borders cannot significantly slow the spread of the new strain of H1N1. A research team led by Dr. Kamran Khan at St. Michael's Hospital in Toronto has shown that the spread of swine flu around the globe perfectly matched air travel patterns. Between March and April, 2 million people flew out of Mexico. They traveled to 1,000 cities in 164 countries, and where they went, the flu went. Even if closing the land border with Mexico were possible, it would not have stopped the disease from spreading. Four of every five air travelers leaving Mexico landed in the United States. Even if the flu had not directly entered the United States by plane, it would have arrived soon thereafter. Indeed, it had gone global before Mexican officials recognized that they had a serious problem. An infected individual can infect others before he or she feels sick or develops a sniffle. Thus, infected individuals likely crossed U.S. borders by land and air before H1N1 was identified.
Closing the border would not have stopped the disease, but it would have created more suffering than the disease itself. For example, in 2003, China implemented a "panic" response to the outbreak of Severe Acute Respiratory Syndrome (SARS). By some estimates, China's overreaction cost the mainland economy 1 percent of its gross domestic product (GDP), some $50 billion. It cost Hong Kong 2.5 percent of its GDP. Mexico is America's third largest trading partner. In 2008, trade between the two nations totaled $367 billion. "Stopping that trade would be like firing a shotgun blast into the heart of Mexico's economy and the foot of our own," but it would do little to mitigate the spread of the disease.
Swine Flu in the Homeland
The first documented cases of swine flu in the United States involved seven people infected from late March to mid-April. Five were in Imperial and San Diego Counties in California. Two were in San Antonio, Texas. Unable to classify the virus, state laboratories sent the specimens to the CDC. Similar to the situation in Mexico, the CDC did not believe these patients had any contact with pigs. Noting that the cases involved a father and a daughter and two 16-year-old schoolmates, the CDC concluded that the virus was transmittable through human contact.
Eschewing the drastic tone adopted by Mexican officials, American officials initially minimized the flu's potential severity. On April 23, Dr. Anne Schuchat, director of respiratory diseases for the CDC, stated that all seven patients had recovered and that "so far this is not looking like very, very, severe influenza." Furthermore, although "we don't yet know the extent of the problem," "[w]e don't think this is a time for major concern." This assessment proved correct. However, U.S. authorities were not idle. Their response was guided in part by planning and coordination over the past few years in anticipation of a potential avian flu pandemic. The U.S. response also reflected caution in dealing with a new form of influenza and public unease inflamed by media reporting and speculation.
The first official U.S. response came on April 26, when HHS declared a public health emergency. This decision, which Secretary of Homeland Security Janet Napolitano said "sound[ed] more severe than really it is," was a required first step for the federal government to begin providing special assistance to state, local, and tribal governments. For example, the declaration allowed the CDC to release antiviral medication, personal protective equipment, and respiratory protection devices from its national stockpiles. The CDC began distributing to state and local emergency responders 12 million courses of antivirals (about 25 percent of the national stockpile), personal protective equipment, gloves, and masks. The DHS prioritized shipment to states with confirmed cases: Arizona, California, Indiana, New York, and Texas. By April 30, the antivirals and other materials had reached New York City, Indiana, Texas, Kansas, Ohio, Illinois, New Jersey, and the District of Columbia. By May 4, all states had received their shares of the stockpile. The government also pre-positioned antivirals for all sectors of the Border Patrol and Coast Guard and provided guidance to federal government employees on antiviral usage. To replenish the stockpile, HHS released funds to purchase 13 million more courses.
The HHS emergency declaration also gave the federal government the authority to control the movement of people and livestock across U.S. borders, establish quarantines, and close certain public transportation systems. Although the federal government prudently avoided excessive restrictions, Customs and Border Protection and the Transportation Security Administration isolated immigrants and travelers who were believed to be infected with the swine flu. The U.S. Department of Agriculture examined the food supply to confirm that it posed no threat of spreading swine flu. The CDC explicitly outlined its response strategy on May 12, a few weeks after the outbreak. Noting that the virus had spread to almost every state in the country, the CDC never sought to contain the virus's geographic distribution. Instead, it decided to concentrate on "reducing illness and death and mitigating the impact...as well as focusing our efforts on areas where they can have the most impact." This involved distributing antiviral drugs to those most vulnerable to H1N1, such as individuals with underlying medical conditions and those severely affected by the virus. Again, this proved to be a prudent and realistic response. The strategy matched the facts of how the disease spreads with the risks involved, and it exploited the national capabilities that had been established over the past several years to manage pandemics.
The U.S. government also made a significant effort to conduct "risk communications," attempting to implement response measures while dampening panic, despite the exaggerated commentaries and scare stories in the media and on the Internet. Federal health responders consciously sought to meet the recommendation of the national strategy that "trained" and "credible" government spokespersons transmit important information about the disease to the public. DHS officials "conduct[ed] daily conference calls with Homeland Security advisors, state and local elected officials, Fusion Centers, our private sector partners, and [others]."
The CDC also employed new methods to ensure transparency and disseminate public information after the flu outbreak. Almost daily, CDC staff held open telephone briefings. The CDC updated its Web site and increased staffing to manage its information line (1-800-CDC-INFO), reducing both waiting time and dropped calls. Each day the CDC received 4,000 calls, more than 2,000 e-mails, and 6 million to 8 million hits on its Web site. The agency also sought to exploit the latest communication technologies by creating a Twitter site and an RSS feed. In addition, all 50 states and the District of Columbia had their own pandemic flu plans in place, including plans to receive and distribute emergency vaccines, antidotes, and pharmaceuticals. A February 2009 report from the Government Accountability Office noted federal efforts to collaborate with state and local partners. Federal officials sponsored pandemic summits with all 50 states. The DHS established coordinating councils to share pandemic information across sectors and levels of government. HHS complemented these efforts by convening influenza pandemic workshops in five influenza pandemic regions. Similarly, the Federal Executive Boards, which operate under the White House's Office of Personnel Management, were tasked with organizing joint activities for federal, state, and local officials. Many boards arranged for influenza pandemic training and exercises for their members. Federal spokespersons also provided preparedness guidance to the private sector. The DHS communicated with sectors in private industry, providing daily updates and urging them to regularly evaluate their continuity-of-business plans. National responses to the initial appearance of H1N1 proved generally adequate. Government did not overreact. At the federal, state, and local levels, officials took prudent steps, using the programs and instruments established to deal with pandemics. Nevertheless, substantial doubt remains about whether the U.S. has adequate capacity and mechanisms to deal with a deadly global pandemic or widespread bioterrorism attack. A December 2008 report by the Trust for America's Health assessed the readiness of states in 10 key areas. Although a number of the findings were positive, the report noted significant gaps in effective response. For example, 26 states do not have laws limiting liability for businesses and non-profits that help during an emergency. An HHS assessment also found notable gaps. For example, most states have not considered the impact of a pandemic on workers, provided information to help them plan for such an event, or evaluated which state benefits could be used to help workers during a pandemic. Coordination of national efforts is still a work in progress. The national response to H1N1 identified additional shortfalls.
For example, despite an active communications strategy and tactics during the crisis, some inconsistent CDC guidance caused confusion. Some practitioners found CDC guidance difficult to translate into practical decisions. This was particularly evident in school closures. The CDC initially supported school closures, but on May 5, Acting Director Richard Besser announced that decisions to close schools would henceforth be "local decisions." On May 22, the CDC's online guidance explicitly stated that school closures were "less effective as a control measure." CDC instructions resulted in inconsistent local decisions, causing confusion and panic. For example, in Texas, officials closed the 80,000-student Fort Worth school district after several cases were confirmed in the area. Fearing that the situation was rapidly escalating, the mayor of neighboring Brownsville ordered its 52 schools to close. However, the school district refused to comply and opened schools as normal, a decision that led to much controversy.
Despite such controversies, shortfalls in national capacity, and gaps in integrated national planning and response, nationwide efforts proved adequate for the H1N1 response. While national capabilities may still fall short of what is necessary for a deadly global pandemic, they should prove sufficient to deal with the increased levels of flu activity expected this fall.
The Coming Concern
When H1N1 returns this fall, flu sickness will likely be much greater. More people than usual will die, and more severe illness could appear among groups (for example, children and young adults) that normally do not suffer severe complications from the flu. Yet the nation will not face a deadly global pandemic. An effective public response could significantly augment the national response and lessen the burdens on society as a whole.
Vaccine Strategy. By most estimates, H1N1 vaccines will not become generally available until October, which is after the beginning of the U.S. flu season. One H1N1 vaccine will require two doses given 12 weeks apart. That means full protection will not be available until after February, well after the flu season has peaked. Another vaccine in development requires only one dose and may provide a basic level of immunity within weeks. In either case, however, the H1N1 flu vaccine may not be available in sufficient quantity to affect the spread of the disease at all this flu season. If stocks are available in time to make a difference, public health officials at all levels of government need to educate Americans on the national vaccination strategy, and Americans will need to listen. The most critical element of the national strategy is not that every individual be vaccinated, but that a sufficient percentage of the population be vaccinated to prevent a recurring pandemic. In addition, as many individuals in high-risk categories as possible should be vaccinated. The national strategy also needs to adjust to the availability of the vaccine. Under an appropriate strategy:
- Individuals should receive seasonal flu vaccines. Even though the seasonal flu vaccine will not prevent H1N1 or even protect individuals against every strain of seasonal flu that might appear this fall, it will reduce the burden on medical providers and productivity losses due to illness.
- The individuals most likely to spread the disease should be vaccinated first.
A study by scientists Jan Medlock and Alison Galvani concludes that the vaccines should first be used to limit transmission within schools and to the parents of school children, who would otherwise spread the flu to everyone else. This strategy would focus on children (ages five to 19) and adults (ages 30 to 39). It would require an estimated 63 million doses.
- If more vaccine is available, the most vulnerable groups should be vaccinated next. Vulnerable groups should be vaccinated according to CDC guidance, including pregnant women, people who care for babies, children and young adults (ages six months to 24 years), people with chronic diseases that make them vulnerable to complications from flu illness, and health care workers.
- Other individuals should be vaccinated as flu vaccine becomes available. When sufficient vaccine becomes available, vaccinating 30 percent of the population is necessary to limit the threat of pandemic. Once a responsible level of national vaccination is reached, it would make more sense to ensure that other nations have adequate vaccine supplies rather than seeking to vaccinate the entire population.
Prophylactic Strategy. Without vaccines, the single greatest contribution the public can make is to limit opportunities for infection. Public officials have distributed ample guidelines on appropriate preventative measures. These include:
- Washing hands frequently and thoroughly with soap and water and avoiding touching the mouth, nose, and eyes with unwashed hands or after touching surfaces;
- Not sharing water bottles and drinking containers;
- Avoiding people who are sick and exposure to coughing and sneezing;
- Coughing or sneezing into one's sleeve;
- Staying at home if one feels sick; and
- Seeking medical attention when appropriate, such as for high fever, shortness of breath, chest pain, seizures, persistent vomiting, or inability to retain liquids.
Response Strategies. Individuals, families, businesses, and community groups can help to mitigate the effects of the flu season. Their plans should focus on contingencies if individuals need to stay home from school or work or if key personnel are not available for several days. The best and most effective responses will likely be developed and implemented locally. The greater the scope and severity of the pandemic, the more individuals in communities will need to rely on each other. Many of the resources needed to sustain their communities will also be available locally.
Many consider the efforts of Seattle and King County, Washington, a model for preparing for pandemic influenza. In response to the SARS outbreak in Asia, county leaders implemented several key actions. Such activities would be appropriate to address any flu outbreak. Specifically, Seattle and King County:
- Established Vulnerable Population Action Teams "to reach individuals who may not or cannot access information from traditional sources that serve the general public," which included using the Community Communication Network to reach vulnerable populations through familiar contacts.
- Conducted a two-day seminar for health care providers on business resiliency issues, such as regional hazards, essential services and critical functions, surge capacity, and evacuation.
- Created an e-mail alert system that allows individuals to sign up to receive e-mail alerts.
- Translated key documents, such as biohazard and disaster response fact sheets and preparedness checklists, into many languages, including Spanish, Chinese, Vietnamese, Korean, Russian, Somali, and Cambodian.
- Developed and distributed Speak First: Communicating Effectively in Times of Crisis and Uncertainty, an advanced training practice kit on public health risk communication, and Business Not as Usual: Preparing for Pandemic Flu, a video for businesses, government, and community-based organizations.
The Nation Responds
The U.S. has the capacity to weather the upcoming flu season. Fear and panic are the greatest enemies, but they can be defeated. Federal, state, and local governments need to continue to refine and improve the capacity and efficiency of their pandemic planning and response. Public response will likely be the most significant factor in deciding how the nation fares in the months ahead. The outcome will depend largely on Americans adhering to a responsible vaccination strategy, adopting appropriate behaviors to limit the spread of contagion, and preparing to keep their communities resilient during a flu pandemic.
James Jay Carafano, Ph.D., is Deputy Director of the Kathryn and Shelby Cullom Davis Institute for International Studies and Director of the Douglas and Sarah Allison Center for Foreign Policy Studies, a division of the Davis Institute, at The Heritage Foundation. Richard Weitz, Ph.D., is Senior Fellow and Director of the Center for Political-Military Analysis at the Hudson Institute.
http://www.heritage.org/research/reports/2009/09/swine-flu-what-every-american-should-know
13
24
The Civil History Of The Confederate States
The political history of the Confederate States of America somewhat distinctly begins in 1850 with "the Settlement" of sectional agitation by the Compromise measures of that year, enacted by the Congress of the United States, approved by the President, confirmed by decisions of the Supreme court, endorsed in resolutions, political platforms and general elections by the people. The "Settlement" thus solemnly ordained by and among the States composing the Union, became equal in moral and political force, to any part of the Constitution of the United States. Its general object was to carry out the preamble to the Constitution, viz.: "We, the people of the United States, in order to form a more perfect Union, establish justice, insure domestic tranquillity, provide for the common defense, promote the general welfare, and secure the blessings of liberty to ourselves and our posterity, do ordain and establish this Constitution for the United States of America." Its avowed special object was to settle forever all the disturbing, sectional agitations concerning slave labor, so as to leave that question where the Constitution had placed it, subject to the operation of humanity, moral law, economic law, natural law and the laws of the States. Its patriotic purpose was to eliminate sectionalism from the politics of the whole country. Grave questions of sectional nature had arisen during the colonial period on which the colonies North and the colonies South divided by their respective sections. The original division of the British territorial possessions on the American continent into the geographical designations, North and South, occurred historically in the grants made by King James, 1606; the first to the London Company of the territory south of the 38th degree north latitude, and the other to the Plymouth Colony of the territory north of the 41st degree north latitude. Both grants extended westward to an indefinite boundary. The Plymouth Settlement afterward subdued the Dutch possessions lying to the south, thus including that territory in the general term North. The settlements of Delaware and Maryland covered the areas lying north of Virginia and they were embraced in the section termed South. The general line of division, somewhat indistinct, lay between the 38th and the 39th degrees north latitude. The Mason and Dixon line--39° 43' 26"--was established by subsequent surveys and was designed to settle certain boundary disputes. In the eighteenth century the original partition of King James was changed by various grants and the English possessions were also extended far down the Atlantic coast by grants of the Carolinas and Georgia. The original "Old South" extended by all these grants along the Atlantic shore from the south line of Georgia to the north of Delaware, and westward from that wide ocean front certainly to the French possessions on the Mississippi river, including the territory of Virginia in the northwest and embracing a vast area of the best part of America; but by proper construction of these and other original charters which made the western limit "the South sea," meaning the Pacific ocean, the vast domain of the Old South embraced also all Texas and much of the territory acquired from Mexico.
The rivalry of the colonies included in these two sections in their struggle for population, commerce, wealth and general influence in American affairs, arose early and continued during the century preceding the American Revolution, each section becoming accustomed to a geographical and sectional grouping of colonies and each striving to advance its own local interests. Thus the colonies of both sections grew robustly as separate organizations into the idea of free statehood, but at the same time fostered the dangerous jealousies of sections. The sectional spirit grew alongside the development of colonial statehood. The colonies north became a group of Northern States, and the colonies south a group of Southern States. The conflict with Great Britain, which had been long impending, brought the sections together in a common cause as against the external enemy, but the achievement of the independence of the several colonies was accompanied by the quick return of the old antagonism which had previously divided them into geographic sections. The loose Union, which had been created pending the Revolutionary war, through Articles of Confederation, was found inefficient to control or even to direct the irrepressible conflict of opposing or emulating interests. Hence the Constitution of the United States was substituted for the Articles of Confederation and "a more perfect Union" was ordained, expressly to prevent or at least to modify sectional conflicts by the constitutional pledge to promote domestic tranquillity and provide for the general welfare. During all these struggles of the colonies among themselves, caused by commercial rivalries, the slavery of any part of the population was not the cause of dangerous disagreement anywhere. The British colonies were all slave-holding. Negroes were bought and sold in Boston and New York as well as in Richmond or Savannah. The Declaration of Independence, written by Jefferson, who was opposed to slavery, and concurred in by the committee of which Adams, Sherman, Livingston and Franklin, all Northern men, were members, made no declaration against slavery and no allusion to it, except to charge the King of Great Britain with the crime of exciting domestic insurrection. In framing the Constitution all sectional differences, including the subject of slavery, were compromised. "The compromises on the slavery question inserted in the Constitution were," as Mr. Blaine correctly remarks, "among the essential conditions upon which the Federal government was organized." (Twenty Years, vol. I, p. 1.) Sectional conflicts, subsequent to the formation of the Constitution from which the Union resulted, were also mainly caused by similar commercial rivalries and ambitions for political advantage. The maintenance of the political equilibrium between the North and the South occupied at all times the anxious thought of patriotic statesmen. In the contests which threatened this equality slavery was not the only nor at first the main disturbing cause. It was not the question in the war of 1812 upon which the States were divided into sections North and South, nor in the purchase of Louisiana Territory, as the debates show; the real ground of opposition being the fear that this vast territory would transfer political power southward, which was evidenced by Josiah Quincy's vehement declaration of his "alarm that six States might grow up beyond the Mississippi." Nor was the acquisition of Florida advocated or opposed because of slavery. 
The tariff issue, out of which the nullification idea arose, was decidedly on a question of just procedure in raising revenue, and not on slavery. The question was made suddenly and lamentably prominent in the application of Missouri to be admitted into the Union, but the agitation which then threatened the peace of the country was quelled by the agreement upon the dividing line of 36 deg. 30'. "The Missouri question marked a distinct era in the political thought of the country, and made a profound impression on the minds of patriotic men. Suddenly, without warning, the North and the South found themselves arrayed against each other in violent and absorbing conflict." The annexation of Texas was urged because it increased the area of the Union, and was opposed because its addition to the States gave preponderance to the South. Thus it is seen that the early sectional rivalries had no vital connection with slavery, and it will appear that its extinction will not of itself extinguish the fires that have so long burned between North and South. The great American conflict began through a geographical division of America made by the cupidity of an English king. It was continued for financial, economic, commercial and political reasons. A false idea of duality--a North and South--in the United States has been deeply rooted in the American mind. To understand the causes which produced the Confederate States of America, all the various incidents which successively agitated the fears of either section that the other would gain an advantage, must be held steadily in view. This sectional ambition to secure and maintain the preponderance of political power operated through various incidents in colonial times, then in those which attended the formation of the Constitution, also in subsequent incidents--such as the location of the capital; the appropriations of money for internal improvements; the war of 1812; the acquisition of Southern territory; the tariff issue and the distribution of government offices and patronage. One after another of these controversies subsiding, a period approached when slavery itself became the main incident of this long-continued sectional rivalry. Slavery, on coming so conspicuously into notice as to be the main ground of contention between North and South, was, therefore, regarded as the chief distraction to be removed by the settlement effected in 1850. Briefly stating the case in 1850, let it be considered that the old sectional differences on account of commercial rivalry and political supremacy had at length become hostilities, which for the first time seriously threatened the Union of the States. Let it also be understood that the agitation which immediately preceded the settlement of 1850 was caused directly by differences of views as to the proper disposition of the national institution of slavery. The statesmen of 1850 knew the following facts: The United States had indorsed the existence of slavery and authorized the importation of enslaved Africans: the colonies, separately acting previous to their Union, had established the institution in their labor systems. The European governments which held paramount authority over the colonies had originated it. The chiefs of African tribes enforced it by their wars, and profited by it in the sale of their captives to foreigners. The world at large practiced it in some form. 
Thus the African kings, the governments of Europe and America, the ship owners, slave traders, speculators and pioneers of the New World conspired to initiate a wrong, from which a retribution at length followed, in which the innocent slave with his last Southern owner suffered more than all the guilty parties who had profited by his bondage. The hardy and adventurous settlers in the American colonies permitted themselves after occasional protests, such as occurred in Virginia, and afterward in Georgia, to be seduced into the buying of negroes from the artful and avaricious slave traders of England, Holland and New England. The importations, however, were few, because the European possessions in the semi-tropics were the first takers of this species of property. Tidings of the evils of the system in its barbarous stages, and stories all too true of the horrors of the middle passage aboard the ships of the inhuman slave importers, made the colonies reluctant to engage in the traffic. Nevertheless, the colonies experimented with this form of labor, Massachusetts beginning in 1638 and South Carolina thirty-three years later, and the conclusion was reached before the close of the eighteenth century that the slavery system of labor could be made useful in some latitudes, but could not be made profitable in all sections of America. Therefore, Massachusetts, following a Canadian precedent, abolished slavery after being a slave-holding State over a hundred years, and soon after the American Revolution several States in the higher latitudes, all included in the old North Section of the original Plymouth colony, adopted the same policy. But it will be observed that generally this Northern abolition was to take effect after a lapse of time, and thus notice was given to the owners sufficient to enable them, if so disposed, to sell their property to purchasers living farther South who still found such labor remunerative. Some availed themselves of this privilege and converted their slaves into other more productive property. Some were conscientious or were attached to their negroes and, therefore, cheerfully gave them freedom. Others were philanthropic enough to keep the old and infirm, who on reaching liberation would be cared for by the State, only selling the young and the strong for a good price. Through these sales and the continuance of the slave trade, both foreign and domestic, the people in the Southern States were induced to invest their money largely in negroes, thus greatly increasing the population of that class within the boundaries of those States. In all these changes of the labor system--this abolishing of slavery in several States--the moral side of the question does not appear to have had the uppermost consideration. The moral question certainly did have influence in Massachusetts and to some extent in other States; but the main reason for this early emancipation was the commercial and social disadvantages of slavery. As the South is said to have been awakened to the immorality and the blighting hindrances of slavery to its prosperity, after emancipation was enforced by armies, so the North saw the same immorality and general hurt to the Union only after proofs of the unprofitableness of the institution to that section. Both sections abolished slavery under duress--one under duress of unprofitableness, the other vi et armis. 
The law of profit and loss controlled the origin and extension of slavery; sectional rivalries seized upon it as an incident in the strife for supremacy; political party purpose at length found it to be an available means to partisan successes, and the moral side of it shone forth upon the whole nation only after a bloody war between the old sections. Eyes were opened by the shock of battle. The moral sense stood godfather to "military necessity." At the organization of the United States, slave-holding was legal everywhere in the Union except in Massachusetts, because at that time it bore some profit everywhere. It is a suggestion of Mr. Greeley in his American Conflict "that the importation of Africans in slave ships profited New England;" the labor of the slaves thus imported at a profit was valuable to their owners who bought them from the slave traders, and also to the manufacturers and sellers of the products of their toil. Trading in human flesh was, therefore, insisted on as a proper business demanding constitutional enforcement for twenty years. The following States voted for the continuance of the slave trade for twenty years until 1808: New Hampshire, Massachusetts, Connecticut, Maryland, North Carolina, South Carolina and Georgia. The following voted nay: New Jersey, Pennsylvania, Delaware and Virginia. Slave labor, therefore, must be treated historically as an institution sustained by the Constitution of the United States; the domestic trade in slaves, as a business sanctioned by that august instrument; and the foreign slave trade--to which the chief ignominy of the institution attaches--as a traffic expressly protected against the wishes of the majority of the States holding slaves. Each State was left by the Constitution with full power to dispose of the institution as it might choose, and the territory acquired as common property was open to settlement by slave-holders with their property. The African bondsman was classed as property by United States law. He was property to be acquired, held, sold, delivered on bills of sale which evidenced title. He could be bequeathed, donated, sold as part of an estate, or for debt, like any other property. The Federal and State governments derived revenue from his labor. For over a century the Southern States were encouraged to invest in him and his race as property. Not one government, European, Asian or African, declared against the enslavement of the negro by the United States; and not one State among those which had fought together to gain a common independence of England refused to enter the Union on account of the constitutional recognition and encouragement of the institution. If there be any wrong in all their action, the South was not more responsible for it than their Northern associates in what has been called the great crime of the United States. The evils of slavery, its wrong of any character, moral or political, were the result of an international cooperative action, and of an agreement among the States of the Union, the original motive of which was the cupidity of powerful African tribes and Caucasian slave dealers with the subsequent motive of profit and loss to the buyer. Such being its historical origin, it will be seen that the subsequent effort to destroy it was not mainly moral but partisan, and that the blow which struck it down fell on the lawful holders of inherited property, and was struck by the people of the Old World and the New, whose ancestors first inflicted the great wrong against humanity. 
The labor of the negro being more profitable in the mild climate, and on the more fertile and cheaper land of the South, his transference from the bleaker clime and less generous as well as higher priced soil of New England became commercially inevitable. The negro became unsalable where he was at first enslaved. He brought a good price south of 36 deg. 30', and hence by the course of interstate commerce many thousands (not all, but thousands) of this class of national property changed owners as well as States, the original masters taking the purchase money to reinvest in land, merchandise, factories, stocks and bonds or other prudent ventures, while the new master invested in the coerced labor which cut down his forests and tilled his soil, holding the laborer "bound to service" under the laws of his State made pursuant to the Constitution of the United States. The same commercial considerations which induced the enslavement of the unfortunate African caused his sale and removal from those sections of the Union where his enslavement was found to be unprofitable and his presence at least a social inconvenience. Accordingly the steady deportation of the race southward began during the close of the eighteenth century and was accelerated through the early years of the nineteenth century. The slave market was opened in the city of Washington and other Southern cities. Traders bought in Northern markets and sold for profit in the Southern. The domestic slave trade was thus inaugurated to compete with the African slave trade then in full blast and which could not be suppressed by any Southern State until the year 1808. Now and then a Southern State endeavored to hinder the infamous traffic, but the ship owners and slave traders were shielded by the supreme law of the land. The United States government was meanwhile entitled to revenue at the rate of $10 for each imported African. All the powers of the Union were put in operation to induce the people of the Southern States to invest their capital in this species of property. From this review of the slavery evil, it appears that the States in the South cannot be charged with the responsibility of its introduction, nor for the continuance of the slave trade, nor for the extension of it by the increase of negro population in the South, nor for the agitations which on this account disturbed the harmony of the sections, nor for the bloody mode adopted for its extinction. Jefferson Davis said: "War was not necessary to the abolition of slavery. Years before the agitation began at the North and the menacing acts to the institution, there was a growing feeling all over the South for its abolition. But the abolitionists of the North, both by publications and speech, cemented the South and crushed the feeling in favor of emancipation. Slavery could have been blotted out without the sacrifice of brave men and without the strain which revolution always makes upon established forms of government. I see it stated that I uttered the sentiment, or indorsed it, that 'slavery is the corner stone of the Confederacy.' That is not my utterance." "It is not conceivable," said General Stephen D. Lee, in 1897, "that the statesmen of the Union were incompetent to dispose of slavery without war." 
It will become clear to any who will conservatively reflect on the conditions existing at the beginning of the present century, that if the opposition to slavery had been firmly based on the principle that it was a violation of the first law of human brotherhood, and also on its breach of the economic principle that enforced labor should not compete with the labor of the free citizen--if the appeal for its discontinuance had been made to the public conscience and the private sense of right, and the just claims of honest free labor, the institution would have passed away in less than a generation from the date of the Declaration of Independence. Had all the New England States, with all other Northern slave-holding States, in 1776 (following the course of Massachusetts) abolished slavery without the sale of a single slave; had the slave trade been discontinued as the Southern States (except Georgia and South Carolina) desired; had the views of Virginia, Kentucky and North Carolina been fostered and made effective by Northern hearty cooperation, it is entirely reasonable to believe that the freedom of all the slaves would have been rapidly secured. An emancipation measure was proposed in the Virginia Legislature as late as 1832 and discussed. The general course of the debate shows a readiness in that day to give freedom to negroes, and was of such strength that a motion to postpone with a view to ascertain the wishes of the people was carried by a vote of 65 to 58. In Delaware, Maryland and Kentucky legislation leading to emancipation had already been under consideration. North Carolina and Tennessee contained large populations of whites averse to slavery, and no doubt exists as to the action of those States at any time during the first years of the century. The Louisiana and Florida purchase and the Texas annexation having not yet taken place, and nearly the entire West and Southwest being a wild, the question of emancipation with moderate compensation would have easily prevailed through the South. The barrier in the beginning was the profitable sale of the slaves from Northern States, and from the slave trade carried on in the ships of foreign nations and New England, and the commercial advantage of the trade in the products of slave labor. The interests of all Southern States except South Carolina, Georgia, Alabama, Mississippi and Louisiana, only thirty years prior to the election of Lincoln, lay on the side of emancipation. The first named States were alone dependent for their development on the labor of the slave, and even in those States only their Southern areas demanded slave labor. The northern parts of these five States were even then better adapted to free white labor. In the light of the years which close this century, it is seen that no part of the South was dependent on slave labor, and that such supposed dependence was imaginary, not real. Therefore, it may be fairly inferred from the sentiment of the South in the beginning of this century, from the conditions of labor and commerce then existing, from the political considerations then at work, the South, in the first years of this century, would have begun the emancipation of its slaves upon a plan of compensation to the owner, justice to the negro and safety to society, had not the interests of other sections demanded the continuance of the domestic and foreign trade in man. 
The period of twenty years granted by the Constitution for the continuance of the slave trade was occupied actively in the importation of Africans throughout the Atlantic Southern States. During the same period the invention of the cotton gin increased vastly the commercial value of negro labor, not only to the producer, but most of all to the shipper and manufacturer of cotton. As a consequence, "the prosperity and commercial importance of a half dozen rising communities, the industrial and social order of a growing empire, the greatest manufacturing interest of manufacturing England, a vast capital, the daily bread of hundreds of thousands of free artisans, rested on American slavery." This new condition occurred at the period when the South was protesting against the African slave trade, and was exhibiting an increasing willingness to continue the emancipation movement, which had previously extended southward as far as Delaware, and had induced Virginia to include the anti-slavery clause in its great cession of Northwestern Territory. But the outlook of the cotton trade and the immense business arising from the increased production and manufacture of the staple were so beneficial to vast numbers in England and the United States, that the emancipation sentiment died down under the pressure of commercial considerations not only in the Cotton States, but also in the manufacturing and commercial centers of the world. (Greg's History, 351.) After the year 1808 (cessation of the legalized slave trade) the national increase of the enslaved race exceeded in percentage that of any free people on earth. Freed from care, fed, clothed and sheltered for the sake of their labor, protected from hurtful indulgence and worked with regularity--the physical conditions were all favorable to increase in numbers, stature, longevity and strength. It is clearly just to admit that such an improvement in the race imported from the African wilds undoubtedly proves the humanity with which these captured bondsmen were treated by the people of the United States. It was this commercial value of the slave to the Southern planters of cane, cotton, rice and tobacco, and to the Northern and European shippers, manufacturers, merchants and operatives--a value caused by the crude, elementary materials of wealth which negro labor produced--a value that grew in great proportions for commerce--a value that began to assume political importance because of the power that it gave the slave-holding States--it was this factor which on the one hand blinded many in all sections to those moral and economic fallacies on which African slavery really rested, and on the other hand finally excited political jealousy and sectional fears of the power which the Southern section might acquire in the control of the Union.
http://civilwarhome.com/civilhistory.htm
In health care, a clinical trial is a comparison test of a medication or other medical treatment (such as a medical device), versus a placebo (an inactive look-alike), other medications or devices, or the standard medical treatment for a patient's condition. Clinical trials vary greatly in size: from a single researcher in one hospital or clinic to an international multicenter study with several hundred participating researchers on several continents. The number of patients tested can range from as few as 30 to several thousand. While undergoing the trial, the agent being tested is called an investigational new drug. In a clinical trial, the investigator first identifies the medication or device to be tested. Then the investigator decides what to compare it with (one or more existing treatments or a placebo), and what kind of patients might benefit from the medication/device. If the investigator cannot obtain enough patients with this specific disease or condition at his or her own location, then he or she assembles investigators at other locations who can obtain the same kind of patients to receive the treatment. During the clinical trial, the investigators recruit patients with the predetermined characteristics, administer the treatment(s), and collect data on the patients' health for a defined time period. (These data include things like vital signs, amount of study drug in the blood, and whether the patient's health gets better or not.) The researchers send the data to the trial sponsor, who then analyzes the pooled data using statistical tests. Some examples of what a clinical trial may be designed to do: - assess the safety and effectiveness of a new medication or device on a specific kind of patient (e.g., patients who have been diagnosed with Alzheimer's disease for less than one year) - assess the safety and effectiveness of a different dose of a medication than is commonly used (e.g., 10 mg dose instead of 5 mg dose) - assess the safety and effectiveness of an already marketed medication or device on a new kind of patient (who is not yet approved by regulatory authorities to be given the medication or device) - assess whether the new medication or device is more effective for the patient's condition than the already used, standard medication or device ("the gold standard" or "standard therapy") - compare the effectiveness in patients with a specific disease of two or more already approved or common interventions for that disease (e.g., Device A vs. Device B, Therapy A vs. Therapy B) Note that while most clinical trials compare two medications or devices, some trials compare three or four medications, doses of medications, or devices against each other. Except for very small trials limited to a single location, the clinical trial design and objectives are written into a document called a clinical trial protocol. The protocol is the 'operating manual' for the clinical trial, and ensures that researchers in different locations all perform the trial in the same way on patients with the same characteristics. (This uniformity is designed to allow the data to be pooled.) A protocol is always used in multicenter trials. Synonyms for 'clinical trials' include clinical studies, research protocols and medical research. 
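The data-pooling and statistical-testing step described above can be illustrated with a small sketch. The example below is not from the original text: the binary outcome ("responded" / "did not respond"), the counts, and the choice of a two-proportion z-test are all illustrative assumptions, and only the Python standard library is used.

```python
# Minimal sketch: comparing pooled response counts from a hypothetical
# two-arm trial with a two-sided two-proportion z-test (stdlib only).
from math import sqrt
from statistics import NormalDist

# Hypothetical pooled counts across all sites: (responders, enrolled).
treatment = (78, 150)   # investigational drug
control = (60, 150)     # placebo or standard therapy

def two_proportion_z_test(a_success, a_n, b_success, b_n):
    """Return (z statistic, two-sided p-value) for the difference in proportions."""
    p_a, p_b = a_success / a_n, b_success / b_n
    pooled = (a_success + b_success) / (a_n + b_n)
    se = sqrt(pooled * (1 - pooled) * (1 / a_n + 1 / b_n))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_z_test(*treatment, *control)
print(f"response rates: {treatment[0]/treatment[1]:.0%} vs {control[0]/control[1]:.0%}")
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```

In a real trial the analysis follows the statistical plan written into the protocol and is usually far more elaborate, but the underlying question is the same: is the observed difference between arms larger than chance alone would plausibly produce?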
The most commonly performed clinical trials evaluate new drugs, medical devices (like a new catheter), biologics, psychological therapies, or other interventions. Clinical trials may be required before the national regulatory authority will approve marketing of the drug or device, or a new dose of the drug, for use on patients. Clinical trials were first introduced in Avicenna's The Canon of Medicine in the 1020s, in which he laid down rules for the experimental use and testing of drugs and wrote a precise guide for practical experimentation in the process of discovering and proving the effectiveness of medical drugs and substances. He laid out the following rules and principles for testing the effectiveness of new drugs and medications, which still form the basis of modern clinical trials: - "The drug must be free from any extraneous accidental quality." - "It must be used on a simple, not a composite, disease." - "The drug must be tested with two contrary types of diseases, because sometimes a drug cures one disease by its essential qualities and another by its accidental ones." - "The quality of the drug must correspond to the strength of the disease. For example, there are some drugs whose heat is less than the coldness of certain diseases, so that they would have no effect on them." - "The time of action must be observed, so that essence and accident are not confused." - "The effect of the drug must be seen to occur constantly or in many cases, for if this did not happen, it was an accidental effect." - "The experimentation must be done with the human body, for testing a drug on a lion or a horse might not prove anything about its effect on man." One of the most famous clinical trials was James Lind's demonstration in 1747 that citrus fruits cure scurvy. He compared the effects of various acidic substances, ranging from vinegar to cider, on groups of afflicted sailors, and found that the group who were given oranges and lemons had largely recovered from scurvy after 6 days. One way of classifying clinical trials is by the way the researchers behave. - In an observational study, the investigators observe the subjects and measure their outcomes. The researchers do not actively manage the experiment. This is also called a natural experiment. An example is the Nurses' Health Study. - In an interventional study, the investigators give the research subjects a particular medicine or other intervention. (Usually they compare the treated subjects to subjects who receive no treatment or standard treatment.) Then the researchers measure how the subjects' health changes. Another way of classifying trials is by their purpose. The U.S. National Institutes of Health (NIH) organizes trials into five different types: - Prevention trials: look for better ways to prevent disease in people who have never had the disease or to prevent a disease from returning. These approaches may include medicines, vitamins, vaccines, minerals, or lifestyle changes. - Screening trials: test the best way to detect certain diseases or health conditions. - Diagnostic trials: conducted to find better tests or procedures for diagnosing a particular disease or condition. - Treatment trials: test experimental treatments, new combinations of drugs, or new approaches to surgery or radiation therapy. - Quality of Life trials: explore ways to improve comfort and the quality of life for individuals with a chronic illness (a.k.a. Supportive Care trials). 
A fundamental distinction in evidence-based medicine is between observational studies and randomized controlled trials. Types of observational studies in epidemiology such as the cohort study and the case-control study provide less compelling evidence than the randomized controlled trial. In observational studies, the investigators only observe associations (correlations) between the treatments experienced by participants and their health status or diseases. A randomized controlled trial is the study design that can provide the most compelling evidence that the study treatment causes the expected effect on human health. - Randomized: Each study subject is randomly assigned to receive either the study treatment or a placebo. - Blind: The subjects involved in the study do not know which study treatment they receive. If the study is double-blind, the researchers also do not know which treatment is being given to any given subject. This 'blinding' is to prevent biases, since if a physician knew which patient was getting the study treatment and which patient was getting the placebo, he/she might be tempted to give the (presumably helpful) study drug to a patient who could more easily benefit from it. In addition, a physician might give extra care to only the patients who receive the placebos to compensate for their ineffectiveness. A form of double-blind study called a "double-dummy" design allows additional insurance against bias or placebo effect. In this kind of study, all patients are given both placebo and active doses in alternating periods of time during the study. - Placebo-controlled: The use of a placebo (fake treatment) allows the researchers to isolate the effect of the study treatment. Of note, during the last ten years or so it has become a common practice to conduct "active comparator" studies (also known as "active control" trials). In other words, when a treatment exists that is clearly better than doing nothing for the subject (i.e. giving them the placebo), the alternate treatment would be a standard-of-care therapy. The study would compare the 'test' treatment to standard-of-care therapy. Although the term "clinical trials" is most commonly associated with the large, randomized studies typical of Phase III, many clinical trials are small. They may be "sponsored" by single physicians or a small group of physicians, and are designed to test simple questions. In the field of rare diseases sometimes the number of patients might be the limiting factor for a clinical trial. Other clinical trials require large numbers of participants (who may be followed over long periods of time), and the trial sponsor is a private company, a government health agency, or an academic research body such as a university. In designing a clinical trial, a sponsor must decide on the target number of patients who will participate. The sponsor's goal usually is to obtain a statistically significant result showing a significant difference in outcome (e.g., number of deaths after 28 days in the study) between the groups of patients who receive the study treatments. The number of patients required to give a statistically significant result depends on the question the trial wants to answer. (For example, to show the effectiveness of a new drug in a non-curable disease as metastatic kidney cancer requires many fewer patients than in a highly curable disease as seminoma if the drug is compared to a placebo). 
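The sample-size point in the parenthetical example above, and the notion of "power" introduced in the next paragraph, can be made concrete with the standard two-proportion sample-size formula. This is only a sketch under assumed numbers: the response rates below are invented for illustration and are not drawn from the text.

```python
# Minimal sketch: patients needed per arm to detect a difference between two
# response rates at 5% two-sided significance and 80% power (stdlib only).
from math import ceil
from statistics import NormalDist

def patients_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate enrollment per arm for a two-proportion comparison."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # about 0.84 for 80% power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_control - p_treatment) ** 2)

# A large expected difference needs relatively few patients per arm...
print(patients_per_arm(0.20, 0.50))   # roughly 36
# ...while a modest difference needs an order of magnitude more.
print(patients_per_arm(0.20, 0.27))   # roughly 572
```

The same function also shows why larger trials have greater power, as the next paragraph notes: holding the expected difference fixed, asking for higher power (say 0.90 instead of 0.80) pushes the required enrollment up.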
The number of patients enrolled in a study has a large bearing on the ability of the study to reliably detect the size of the effect of the study intervention. This is described as the "power" of the trial. The larger the sample size or number of participants in the trial, the greater the statistical power. However, in designing a clinical trial, this consideration must be balanced with the fact that more patients make for a more expensive trial. Clinical trials involving new drugs are commonly classified into four phases. Each phase of the drug approval process is treated as a separate clinical trial. The drug-development process will normally proceed through all four phases over many years. If the drug successfully passes through Phases I, II, and III, it will usually be approved by the national regulatory authority for use in the general population. Phase IV trials are 'post-approval' studies. Before pharmaceutical companies start clinical trials on a drug, they conduct extensive pre-clinical studies. Pre-clinical studies involve in vitro (i.e., test tube or laboratory) studies and trials on animal populations (in vivo). Wide-ranging dosages of the study drug are given to the animal subjects or to an in-vitro substrate in order to obtain preliminary efficacy, toxicity and pharmacokinetic information and to assist pharmaceutical companies in deciding whether it is worthwhile to go ahead with further testing. Phase 0 is a recent designation for exploratory, first-in-human trials conducted in accordance with the U.S. Food and Drug Administration’s (FDA) 2006 Guidance on Exploratory Investigational New Drug (IND) Studies. Phase 0 trials are also known as human microdosing studies and are designed to speed up the development of promising drugs or imaging agents by establishing very early on whether the drug or agent behaves in human subjects as was anticipated from preclinical studies. Distinctive features of Phase 0 trials include the administration of single subtherapeutic doses of the study drug to a small number of subjects (10 to 15) to gather preliminary data on the agent's pharmacokinetics (how the body processes the drug) and pharmacodynamics (how the drug works in the body). A Phase 0 study gives no data on safety or efficacy, being by definition a dose too low to cause any therapeutic effect. Drug development companies carry out Phase 0 studies to rank drug candidates in order to decide which has the best pharmacokinetic (PK) parameters in humans to take forward into further development. They enable go/no-go decisions to be based on relevant human models instead of relying on animal data, which can be poorly predictive and can vary between species. Phase I trials are the first stage of testing in human subjects. Normally, a small (20-80) group of healthy volunteers will be selected. This phase includes trials designed to assess the safety (pharmacovigilance), tolerability, pharmacokinetics, and pharmacodynamics of a drug. These trials are often conducted in an inpatient clinic, where the subject can be observed by full-time staff. The subject who receives the drug is usually observed until several half-lives of the drug have passed. Phase I trials also normally include dose-ranging, also called dose escalation, studies so that the appropriate dose for therapeutic use can be found. The tested range of doses will usually be a fraction of the dose that causes harm in animal testing. Phase I trials most often include healthy volunteers. 
However, there are some circumstances when real patients are used, such as patients who have end-stage disease and lack other treatment options. This exception to the rule most often occurs in oncology (cancer) and HIV drug trials. Volunteers are paid an inconvenience fee for their time spent in the volunteer centre. Pay ranges from a small amount of money for a short period of residence, to a larger amount of up to approximately £4000 depending on length of participation. There are different kinds of Phase I trials: - Single Ascending Dose studies are those in which small groups of patients are given a single dose of the drug while they are observed and tested for a period of time. If they do not exhibit any adverse side effects, and the pharmacokinetic data is roughly in line with predicted safe values, the dose is escalated, and a new group of patients is then given a higher dose. This is continued until pre-calculated pharmacokinetic safety levels are reached, or intolerable side effects start showing up (at which point the drug is said to have reached the maximum tolerated dose, or MTD). - Multiple Ascending Dose studies are conducted to better understand the pharmacokinetics and pharmacodynamics of multiple doses of the drug. In these studies, a group of patients receives multiple low doses of the drug, whilst samples (of blood and other fluids) are collected at various time points and analyzed to understand how the drug is processed within the body. The dose is subsequently escalated for further groups, up to a predetermined level. - Food effect studies are short trials designed to investigate any differences in absorption of the drug by the body caused by eating before the drug is given. These studies are usually run as crossover studies, with volunteers being given two identical doses of the drug on different occasions; one while fasted, and one after being fed. Once the initial safety of the study drug has been confirmed in Phase I trials, Phase II trials are performed on larger groups (20-300) and are designed to assess how well the drug works, as well as to continue Phase I safety assessments in a larger group of volunteers and patients. When the development process for a new drug fails, this usually occurs during Phase II trials when the drug is discovered not to work as planned, or to have toxic effects. Phase II studies are sometimes divided into Phase IIA and Phase IIB. Phase IIA is specifically designed to assess dosing requirements (how much drug should be given), whereas Phase IIB is specifically designed to study efficacy (how well the drug works at the prescribed dose(s)). Some trials combine Phase I and Phase II, and test both efficacy and toxicity. Some Phase II trials are designed as case series, demonstrating a drug's safety and activity in a selected group of patients. Other Phase II trials are designed as randomized clinical trials, where some patients receive the drug/device and others receive placebo/standard treatment. Randomized Phase II trials have far fewer patients than randomized Phase III trials. Phase III studies are randomized controlled multicenter trials on large patient groups (300–3,000 or more depending upon the disease/medical condition studied) and are aimed at being the definitive assessment of how effective the drug is, in comparison with current 'gold standard' treatment. 
Because of their size and comparatively long duration, Phase III trials are the most expensive, time-consuming and difficult trials to design and run, especially in therapies for chronic medical conditions. It is common practice that certain Phase III trials will continue while the regulatory submission is pending at the appropriate regulatory agency. This allows patients to continue to receive possibly lifesaving drugs until the drug can be obtained by purchase. Other reasons for performing trials at this stage include attempts by the sponsor at "label expansion" (to show the drug works for additional types of patients/diseases beyond the original use for which the drug was approved for marketing), to obtain additional safety data, or to support marketing claims for the drug. Studies in this phase are by some companies categorised as "Phase IIIB studies." While not required in all cases, it is typically expected that there be at least two successful Phase III trials, demonstrating a drug's safety and efficacy, in order to obtain approval from the appropriate regulatory agencies (FDA (USA), TGA (Australia), EMEA (European Union), etc.). Once a drug has proved satisfactory after Phase III trials, the trial results are usually combined into a large document containing a comprehensive description of the methods and results of human and animal studies, manufacturing procedures, formulation details, and shelf life. This collection of information makes up the "regulatory submission" that is provided for review to the appropriate regulatory authorities in different countries. They will review the submission, and, it is hoped, give the sponsor approval to market the drug. Most drugs undergoing Phase III clinical trials can be marketed under FDA norms with proper recommendations and guidelines, but in case of any adverse effects being reported anywhere, the drugs need to be recalled immediately from the market. While most pharmaceutical companies refrain from this practice, it is not abnormal to see many drugs undergoing Phase III clinical trials in the market. Phase IV trial is also known as Post Marketing Surveillance Trial. Phase IV trials involve the safety surveillance (pharmacovigilance) and ongoing technical support of a drug after it receives permission to be sold. Phase IV studies may be required by regulatory authorities or may be undertaken by the sponsoring company for competitive (finding a new market for the drug) or other reasons (for example, the drug may not have been tested for interactions with other drugs, or on certain population groups such as pregnant women, who are unlikely to subject themselves to trials). The safety surveillance is designed to detect any rare or long-term adverse effects over a much larger patient population and longer time period than was possible during the Phase I-III clinical trials. Harmful effects discovered by Phase IV trials may result in a drug being no longer sold, or restricted to certain uses: recent examples involve cerivastatin (brand names Baycol and Lipobay), troglitazone (Rezulin) and rofecoxib (Vioxx). Clinical trials are only a small part of the research that goes into developing a new treatment. Potential drugs, for example, first have to be discovered, purified, characterized, and tested in labs (in cell and animal studies) before ever undergoing clinical trials. In all, about 1,000 potential drugs are tested before just one reaches the point of being tested in a clinical trial. 
For example, a new cancer drug has, on average, at least 6 years of research behind it before it even makes it to clinical trials. But the major holdup in making new cancer drugs available is the time it takes to complete clinical trials themselves. On average, about 8 years pass from the time a cancer drug enters clinical trials until it receives approval from regulatory agencies for sale to the public. Drugs for other diseases have similar timelines. Some reasons a clinical trial might last several years: - For chronic conditions like cancer, it takes months, if not years, to see if a cancer treatment has an effect on a patient. - For drugs that are not expected to have a strong effect (meaning a large number of patients must be recruited to observe any effect), recruiting enough patients to test the drug's effectiveness (i.e., getting statistical power) can take several years. - Only certain people who have the target disease condition are eligible to take part in each clinical trial. Researchers who treat these particular patients must participate in the trial. Then they must identify the desirable patients and obtain consent from them or their families to take part in the trial. The biggest barrier to completing studies is the shortage of people who take part. All drug and many device trials target a subset of the population, meaning not everyone can participate. Some drug trials require patients to have unusual combinations of disease characteristics. It is a challenge to find the appropriate patients and obtain their consent, especially when they may receive no direct benefit (because they are not paid, the study drug is not yet proven to work, or the patient may receive a placebo). In the case of cancer patients, fewer than 5% of adults with cancer will participate in drug trials. According to the Pharmaceutical Research and Manufacturers of America (PhRMA), about 400 cancer medicines were being tested in clinical trials in 2005. Not all of these will prove to be useful, but those that are may be delayed in getting approved because the number of participants is so low. Clinical trials that do not involve a new drug usually have a much shorter duration. (Exceptions are epidemiological studies like the Nurses' Health Study.) Clinical trials designed by a local investigator and (in the U.S.) federally funded clinical trials are almost always administered by the researcher who designed the study and applied for the grant. Small-scale device studies may be administered by the sponsoring company. Phase III and Phase IV clinical trials of new drugs are usually administered by a contract research organization (CRO) hired by the sponsoring company. (The sponsor provides the drug and medical oversight.) A CRO is a company that is contracted to perform all the administrative work on a clinical trial. It recruits participating researchers, trains them, provides them with supplies, coordinates study administration and data collection, sets up meetings, monitors the sites for compliance with the clinical protocol, and ensures that the sponsor receives 'clean' data from every site. Recently, site management organizations have also been hired to coordinate with the CRO to ensure rapid IRB/IEC approval and faster site initiation and patient recruitment. At a participating site, one or more research assistants (often nurses) do most of the work in conducting the clinical trial. 
The research assistant's job can include some or all of the following: providing the local Institutional Review Board (IRB) with the documentation necessary to obtain its permission to conduct the study, assisting with study start-up, identifying eligible patients, obtaining consent from them or their families, administering study treatment(s), collecting data, maintaining data files, and communicating with the IRB, as well as the sponsor (if any) and CRO (if any). Clinical trials are closely supervised by appropriate regulatory authorities. All studies that involve a medical or therapeutic intervention on patients must be approved by a supervising ethics committee before permission is granted to run the trial. The local ethics committee has discretion on how it will supervise noninterventional studies (observational studies or those using already collected data). In the U.S., this body is called the Institutional Review Board (IRB). Most IRBs are located at the local investigator's hospital or institution, but some sponsors allow the use of a central (independent/for profit) IRB for investigators who work at smaller institutions. To be ethical, researchers must obtain the full and informed consent of participating human subjects. (One of the IRB's main functions is ensuring that potential patients are adequately informed about the clinical trial.) If the patient is unable to consent for him/herself, researchers can seek consent from the patient's legally authorized representative. In California, the state has prioritized the individuals who can serve as the legally authorized representative. In some U.S. locations, the local IRB must certify researchers and their staff before they can conduct clinical trials. They must understand the federal patient privacy (HIPAA) law and good clinical practice. International Conference of Harmonisation Guidelines for Good Clinical Practice (ICH GCP) is a set of standards used internationally for the conduct of clinical trials. The guidelines aim to ensure that the "rights, safety and well being of trial subjects are protected". Responsibility for the safety of the subjects in a clinical trial is shared between the sponsor, the local site investigators (if different from the sponsor), the various IRBs that supervise the study, and (in some cases, if the study involves a marketable drug or device) the regulatory agency for the country where the drug or device will be sold. - For safety reasons, many clinical trials of drugs are designed to exclude women of childbearing age, pregnant women, and/or women who become pregnant during the study. In some cases the male partners of these women are also excluded or required to take birth control measures. - Throughout the clinical trial, the sponsor is responsible for accurately informing the local site investigators of the true historical safety record of the drug, device or other medical treatments to be tested, and of any potential interactions of the study treatment(s) with already approved medical treatments. This allows the local investigators to make an informed judgment on whether to participate in the study or not. - The sponsor is responsible for monitoring the results of the study as they come in from the various sites, as the trial proceeds. In larger clinical trials, a sponsor will use the services of a Data Monitoring Committee (DMC, known in the U.S. as a Data Safety Monitoring Board). This is an independent group of clinicians and statisticians. 
The DMC meets periodically to review the unblinded data that the sponsor has received so far. The DMC has the power to recommend termination of the study based on their review, for example if the study treatment is causing more deaths than the standard treatment, or seems to be causing unexpected and study-related serious adverse events. - The sponsor is responsible for collecting adverse event reports from all site investigators in the study, and for informing all the investigators of the sponsor's judgment as to whether these adverse events were related or not related to the study treatment. This is an area where sponsors can slant their judgment to favor the study treatment. - The sponsor and the local site investigators are jointly responsible for writing a site-specific informed consent that accurately informs the potential subjects of the true risks and potential benefits of participating in the study, while at the same time presenting the material as briefly as possible and in ordinary language. FDA regulations and ICH guidelines both require that “the information that is given to the subject or the representative shall be in language understandable to the subject or the representative." If the participant's native language is not English, the sponsor must translate the informed consent into the language of the participant. Local site investigators - A physician's first duty is to his/her patients, and if a physician investigator believes that the study treatment may be harming subjects in the study, the investigator can stop participating at any time. On the other hand, investigators often have a financial interest in recruiting subjects, and can act unethically in order to obtain and maintain their participation. - The local investigators are responsible for conducting the study according to the study protocol, and supervising the study staff throughout the duration of the study. - The local investigator or his/her study staff are responsible for ensuring that potential subjects in the study understand the risks and potential benefits of participating in the study; in other words, that they (or their legally authorized representatives) give truly informed consent. - The local investigators are responsible for reviewing all adverse event reports sent by the sponsor. (These adverse event reports contain the opinion of both the investigator at the site where the adverse event occurred, and the sponsor, regarding the relationship of the adverse event to the study treatments). The local investigators are responsible for making an independent judgment of these reports, and promptly informing the local IRB of all serious and study-treatment-related adverse events. - When a local investigator is the sponsor, there may not be formal adverse event reports, but study staff at all locations are responsible for informing the coordinating investigator of anything unexpected. - The local investigator is responsible for being truthful to the local IRB in all communications relating to the study. Approval by an IRB, or ethics board, is necessary before all but the most informal medical research can begin. - In commercial clinical trials, the study protocol is not approved by an IRB before the sponsor recruits sites to conduct the trial. However, the study protocol and procedures have been tailored to fit generic IRB submission requirements. 
In this case, and where there is no independent sponsor, each local site investigator submits the study protocol, the consent(s), the data collection forms, and supporting documentation to the local IRB. Universities and most hospitals have in-house IRBs. Other researchers (such as in walk-in clinics) use independent IRBs. - The IRB scrutinizes the study for both medical safety and protection of the patients involved in the study, before it allows the researcher to begin the study. It may require changes in study procedures or in the explanations given to the patient. A required yearly "continuing review" report from the investigator updates the IRB on the progress of the study and any new safety information related to the study. - If a clinical trial concerns a new regulated drug or medical device (or an existing drug for a new purpose), the appropriate regulatory agency for each country where the sponsor wishes to sell the drug or device is supposed to review all study data before allowing the drug/device to proceed to the next phase, or to be marketed. However, if the sponsor withholds negative data, or misrepresents data it has acquired from clinical trials, the regulatory agency may make the wrong decision. - In the U.S., the FDA can audit the files of local site investigators after they have finished participating in a study, to see if they were correctly following study procedures. This audit may be random, or for cause (because the investigator is suspected of fraudulent data). Avoiding an audit is an incentive for investigators to follow study procedures. Different countries have different regulatory requirements and enforcement abilities. "An estimated 40 percent of all clinical trials now take place in Asia, Eastern Europe, central and south America. “There is no compulsory registration system for clinical trials in these countries and many do not follow European directives in their operations”, says Dr. Jacob Sijtsma of the Netherlands-based WEMOS, an advocacy health organisation tracking clinical trials in developing countries." In March 2006 the drug TGN1412 caused catastrophic systemic failure in the individuals receiving the drug during its first human clinical trials (Phase I) in Great Britain. Following this, an Expert Group on Phase One Clinical Trials published a report. The cost of a study depends on many factors, especially the number of sites that are conducting the study, the number of patients required, and whether the study treatment is already approved for medical use. Clinical trials follow a standardized process. The costs to a pharmaceutical company of administering a Phase III or IV clinical trial may include, among others: - manufacturing the drug(s)/device(s) tested - staff salaries for the designers and administrators of the trial - payments to the contract research organization, the site management organization (if used) and any outside consultants - payments to local researchers (and their staffs) for their time and effort in recruiting patients and collecting data for the sponsor - study materials and shipping - communication with the local researchers, including onsite monitoring by the CRO before and (in some cases) multiple times during the study - one or more investigator training meetings - costs incurred by the local researchers such as pharmacy fees, IRB fees and postage. 
- any payments to patients enrolled in the trial (all payments are strictly overseen by the IRBs to ensure that patients do not feel coerced to take part in the trial by overly attractive payments) These costs are incurred over several years. In the U.S. there is a 50% tax credit for sponsors of certain clinical trials. National health agencies such as the U.S. National Institutes of Health offer grants to investigators who design clinical trials that attempt to answer research questions that interest the agency. In these cases, the investigator who writes the grant and administers the study acts as the sponsor, and coordinates data collection from any other sites. These other sites may or may not be paid for participating in the study, depending on the amount of the grant and the amount of effort expected from them. Many clinical trials do not involve any money. However, when the sponsor is a private company or a national health agency, investigators are almost always paid to participate. These amounts can be small, just covering a partial salary for research assistants and the cost of any supplies (usually the case with national health agency studies), or be substantial and include 'overhead' that allows the investigator to pay the research staff during times in between clinical trials. In Phase I drug trials, participants are paid because they give up their time (sometimes away from their homes) and are exposed to unknown risks, without the expectation of any benefit. In most other trials, however, patients are not paid, in order to ensure that their motivation for participating is the hope of getting better or contributing to medical knowledge, without their judgment being skewed by financial considerations. However, they are often given small reimbursements for study-related expenses like travel. Phase 0 and Phase I drug trials seek healthy volunteers. Most other clinical trials seek patients who have a specific disease or medical condition. Depending on the kind of participants required, sponsors use various recruitment strategies, including patient databases, newspaper and radio advertisements, flyers, posters in places the patients might go (such as doctor's offices), and personal conversations with the investigator. Various resources are available for individuals who want to participate in a clinical trial. A patient may ask their physician about clinical trials available for their condition or contact other clinics directly. The US government, World Health Organization and commercial organizations provide online clinical trial resources. - Academic clinical trials - CIOMS Guidelines - Clinical trial management - Clinical data acquisition - Clinical Data Interchange Standards Consortium - Clinical site - Community-based clinical trial - Contract Research Organization - Data Monitoring Committees - Drug development - Drug recall - European Medicines Agency - FDA Special Protocol Assessment - Health care - Health care politics - Investigational Device Exemption - Interactive voice response - Medical ethics - Nursing ethics - Orphan drug - Philosophy of Healthcare - Randomized controlled trial - Remote Data Entry - World Medical Association - ↑ 1.0 1.1 The regulatory authority in the USA is the Food and Drug Administration; in Canada, Health Canada; in the EU, the European Medicines Agency; in Japan, the Ministry of Health, Labour and Welfare; the Health Sciences Authority (HSA) in Singapore. - ↑ Toby E. Huff (2003), The Rise of Early Modern Science: Islam, China, and the West, p. 
218. Cambridge University Press, ISBN 0521529948. - ↑ David W. Tschanz, MSPH, PhD (August 2003). "Arab Roots of European Medicine", Heart Views 4 (2). - ↑ D. Craig Brater and Walter J. Daly (2000), "Clinical pharmacology in the Middle Ages: Principles that presage the 21st century", Clinical Pharmacology & Therapeutics 67 (5), p. 447-450. - ↑ James Lind: A Treatise of the Scurvy (1754) (2001). Retrieved on 2007-09-09. - ↑ The power of a trial is not a single, unique value; it estimates the ability of a trial to detect a difference of a particular size (or larger) between the treated (tested drug/device) and control (placebo or standard treatment) groups. For example, a trial of a lipid-lowering drug versus placebo with 100 patients in each group might have a power of .90 to detect a difference between patients receiving study drug and patients receiving placebo of 10 mg/dL or more, but only have a power of .70 to detect a difference of 5 mg/dL. - ↑ Guidance for Industry, Investigators, and Reviewers Exploratory IND Studies. Food and Drug Administration (January 2006). Retrieved on 2007-05-01. - ↑ Guidance for Institutional Review Boards and Clinical Investigators. FDA (1999-03-16). Retrieved on 2007-03-27. - ↑ Periapproval Services (Phase IIIb and IV programs). Covance Inc. (2005). Retrieved on 2007-03-27. - ↑ Arcangelo, Virginia Poole; Andrew M. Peterson (2005). Pharmacotherapeutics for Advanced Practice: A Practical Approach. Lippincott Williams & Wilkins. ISBN 0781757843. - ↑ Web Site Editor (2007). "Clinical Trials - What You Need to Know". American Cancer Society. - ↑ http://www.gts-translation.com/medicaltranslationpaper.pdf Back Translation for Quality Control of Informed Consent Forms - ↑ Expert Group on Phase One Clinical Trials (Chairman: Professor Gordon W. Duff) (2006-12-07). Expert Group on Phase One Clinical Trials: Final report. The Stationery Office. Retrieved on 2007-05-24. - ↑ Tax Credit for Testing Expenses for Drugs for Rare Diseases or Conditions. FDA (2001-04-17). Retrieved on 2007-03-27. - Rang HP, Dale MM, Ritter JM, Moore PK (2003). Pharmacology, 5th ed. Edinburgh: Churchill Livingstone. ISBN 0-443-07145-4 - Finn R (1999). Cancer Clinical Trials: Experimental Treatments and How They Can Help You. Sebastopol: O'Reilly & Associates. ISBN 1-56592-566-1 - Chow S-C and Liu JP (2004). Design and Analysis of Clinical Trials: Concepts and Methodologies. ISBN 0-471-24985-8 - Pocock SJ (2004). Clinical Trials: A Practical Approach. John Wiley & Sons. ISBN 0-471-90155-5
http://www.wikidoc.org/index.php/Clinical_trial
Lepidoptera - Order of insects that includes butterflies and moths. They can be distinguished from all other insects by their two pairs of scale-covered wings, which are often brightly coloured, particularly in many butterflies. Lepidopterans undergo complete metamorphosis: eggs are laid, from which larvae hatch, and a pupal stage follows, during which the final adult form takes shape. Butterflies are slender-bodied (mostly) diurnal insects. - Quick facts - Status of knowledge - Richness and diversity in Canada - Species spotlight - Maritime Ringlet - Species spotlight - Monarch - Results of general status assessment - Comparison with previous Wild Species reports - Threats to Canadian butterflies - Further information - Butterflies represent one small branch of the lepidopterans, representing about 10% of known species (the others are moth species). Globally, there are about 18 000 butterfly species. Canada is home to 302 resident species of butterflies, although only five are endemic. - When excluding species ranked as Extinct, Extirpated, Undetermined, Not Assessed, Exotic or Accidental, the majority (82%) of butterflies in Canada have Canada General Status Ranks (Canada ranks) of Secure, while 9% have Canada ranks of Sensitive, 7% have Canada ranks of May Be At Risk and 2% have Canada ranks of At Risk. - One butterfly species, the Frosted Elfin (Callophrys irus), is extirpated from Canada. - Monarchs (Danaus plexippus) migrate thousands of kilometres to avoid the Canadian winter. - The Cabbage White (Pieris rapae) and European Skipper (Thymelicus lineola) are the two exotic butterflies in Canada. With their conspicuous daytime activity, bright colours, and jaunty flight patterns, butterflies tend to invoke the interest and sympathy of the general public. As a result, butterflies have become “flagship” invertebrates. That the Niagara Parks Butterfly Conservatory in Ontario attracted 850 000 visitors during its first full year of operation is one indicator of just how popular these insects have become. Although butterflies number only about 10% of the order Lepidoptera – with moths comprising the other 90% – butterflies tend to be more eye-catching than moths, which are generally active during the night and are usually somewhat drab in colour. However, all butterflies begin life in a relatively understated form, as a tiny cryptic egg. A key to the survival of each generation lies in a female butterfly’s careful timing and choice of location for laying her eggs. Not only must she set the eggs on the right “host plant,” but she must also secure them to the right part of the plant, since not all plant parts will be equally edible to the caterpillar when it hatches from the egg. Upon hatching, the plant-chomping butterfly caterpillar grows by way of periodic moulting or shedding its skin. The last larval moulting results in the formation of a pupal case or chrysalis, rather than a larger caterpillar. This marks the start of a remarkable change, for, after a period of time, the pupal case splits open, and a fully formed adult winged butterfly emerges. By undergoing total metamorphosis, butterfly larvae and adults are able to live radically different lifestyles in completely different environments – the former as a slow-crawling homebody with an insatiable appetite for vegetation, the latter as a flighty, wide-ranging sipper of nectar. 
Methodically munching through life, the larva exists in a tiny leafy world that contrasts greatly with that of the adult, whose range may extend from several hectares to several hundred square kilometres. Indeed, Monarchs (Danaus plexippus) are known to undertake migratory flights of thousands of kilometres (adults tagged in Canada in the autumn have subsequently been recaptured in the winter forests of central Mexico). Most butterflies are relatively short-lived; the entire cycle from egg to adult may take only a month or two, and adults may live only a week. Many species produce only one generation per year and fly only a few months out of the year. Throughout most of Canada, where temperatures drop below freezing during part of the winter, at least one stage in a butterfly species' life cycle must enter a dormant state termed "diapause" in order to resist freezing. Most species that spend the winter months in Canada do so as caterpillars. Others pass the winter as eggs (e.g., hairstreaks) or pupae (elfins and other Callophrys), while a few species, mainly tortoiseshells (Nymphalis) and anglewings (Polygonia), spend the winter as adults, hibernating in holes in trees, crevices in rock, or other shelters, like buildings. Science now recognizes about 18 000 butterfly species worldwide, and this great variety is thought to relate to the broad diversity of plant species, since larvae typically use only a relatively narrow range of food plants. The North American butterflies of the genus Euphilotes, for instance, feed only on members of the knotweed family (Polygonaceae); the larvae eat the flowers and fruits, and the adults sip the nectar.
Status of knowledge
Butterflies are a relatively well studied insect group in Canada, thanks in large part to the many professional and amateur specialists who have taken an interest in these unique insects. The considerable number of butterfly articles and books documenting Canadian species is complemented by numerous collections, including the Lyman Entomological Museum (Macdonald Campus of McGill University in Montreal), the Royal Saskatchewan Museum of Natural History in Regina and the lepidopteran section of the Canadian National Collection of Insects in Ottawa. A recent publication by Peter W. Hall (2009) on behalf of NatureServe Canada, Sentinels on the Wing: The Status and Conservation of Butterflies in Canada, provides a comprehensive overview of the status of butterflies in Canada. This publication took into account data and analyses from several organizations, including the general status results for butterflies developed by the National General Status Working Group and found in this report.
Richness and diversity in Canada
Within Canada, 302 butterfly species are described from coast to coast to coast, with the highest species richness found in the provinces from British Columbia through to Quebec. While many Canadian species are widespread, with the potential to be found in almost any province or territory (for example, Painted Lady - Vanessa cardui, Mourning Cloak - Nymphalis antiopa, Canadian Tiger Swallowtail - Papilio canadensis), a few species appear to be highly restricted in their distribution. For example, although further survey work may eventually reveal a more extensive distribution, Johansen's Sulphur (Colias johanseni) has only been found on a single hillside near Bernard Harbour in Nunavut and in a coastal area near Coppermine. There are two species of Exotic butterflies in Canada.
One of them, the European Skipper (Thymelicus lineola), arrived in Ontario in about 1910. Spreading south and west, this species has today become a major pest of Common Timothy (Phleum pratense). Still more "successful" is the now familiar Cabbage White (Pieris rapae), introduced at Quebec City in about 1860 and now found throughout most of North America.
Species spotlight - Maritime Ringlet
The Maritime Ringlet (Coenonympha nipisiquit) lives exclusively in salt marsh habitats in the Chaleur Bay region of Canada's east coast. It has been found at only six sites. Population size and densities of the Maritime Ringlet are at their highest in habitats where there are large numbers of the larval host plant (salt meadow grass) and the butterfly's nectar source (sea lavender). The average wingspan is 3.4 cm for males and 3.6 cm for females. An eyespot is present on about 33% of the males and is more common and better developed in females. Both males and females show ochre, grey and cream colour patterns. Males tend to darken as they age. Flooding due to high tides and storm tides threatens all life stages of the Maritime Ringlet. Ice pushed onto their marshland habitats during winter storms can crush the overwintering larvae. The development and draining of marshland habitat are other significant threats. Researchers suspect there are likely other threats that have yet to be identified, since there are numerous examples of ideal marshland habitats without populations of the butterfly. The Maritime Ringlet has a Canada General Status Rank (Canada Rank) of At Risk and was designated Endangered by the Committee on the Status of Endangered Wildlife in Canada (COSEWIC) in April 2009.
Species spotlight - Monarch
The Monarch (Danaus plexippus) is likely the most recognized of all North American butterflies. Its bright orange wings, which span 93 to 105 mm, display a thick black border with two rows of white spots. Additional markings include two highly visible black patches on the hind wings, which are found only on males. The Monarch is widely distributed across North America, from southern Canada southwards to Central America, and from the Pacific to the Atlantic coast. Within Canada, the Monarch has been recorded in all ten provinces and in the Northwest Territories. In general, two breeding populations of the Monarch are recognized, western and eastern, with the Rocky Mountains being the dividing line. Each of the two populations has a distinct migratory pattern: those east of the Rockies overwinter in central Mexico, while those west of the Rockies overwinter in California. Monarchs are wide-ranging and powerful fliers. In the fall, they migrate thousands of kilometres, travelling from Canada to Mexico and California. In Canada, the migrations are best observed in southern Ontario, particularly in Point Pelee National Park and Presqu'île Provincial Park. Monarchs conserve energy during migration by riding currents of rising warm air and will reach altitudes of over one kilometre in order to take advantage of the prevailing winds. Monarchs can thrive wherever milkweeds grow, as the larvae (caterpillars) feed exclusively on milkweed leaves. As long as there are healthy milkweed plants, Monarchs will put up with high levels of human disturbance and have been known to breed along busy highways and in city gardens.
Threats to Monarch populations include environmental conditions such as violent storms, loss of breeding habitat and contaminants such as herbicides (which kill both the milkweed needed by the caterpillars and the nectar-producing wildflowers needed by the adults). A major threat is the loss of overwintering habitat in Mexico and California. The Monarch has a Canada General Status Rank (Canada Rank) of Sensitive and was designated Special Concern by the Committee on the Status of Endangered Wildlife in Canada (COSEWIC) in November 2001.
Results of general status assessment
The report Wild Species 2010 marks the second national assessment for butterflies. Of the 302 species of butterflies present in Canada, the majority have Canada ranks of Secure (217 species, 72%, figure 20 and table 27). In addition, 24 species have Canada ranks of Sensitive (8%), 19 species have Canada ranks of May Be At Risk (6%) and four species are At Risk (2%). One butterfly species, the Frosted Elfin (Callophrys irus), is extirpated from Canada. Eleven species of butterfly have Canada ranks of Undetermined or Not Assessed (3%). Two species are ranked as Exotic (1%). Finally, a relatively high number of species (24) have Canada ranks of Accidental (8%), owing to the greater mobility of butterflies compared with some other taxonomic groups.
Comparison with previous Wild Species reports
In the report Wild Species 2000, all butterfly species received a Canada Rank of Not Assessed. In 2002, the National General Status Working Group produced updated general status ranks for all wild species of butterflies, including Canada ranks. The 2002 assessments were based on the taxonomy of Layberry et al. (1998). In this report, Wild Species 2010, the taxonomy is very similar to that used in 2002, except for a few changes to bring the species list into accordance with that proposed by Pelham (2008). In general, the 2010 assessment resulted in fewer species identified as May Be At Risk or Sensitive and an increase in the number of species ranked as Secure (table 27). A total of 32 species had a change in their Canada rank since the last assessment. Among these changes, three species had an increased level of risk, 13 species had a reduced level of risk, three species were changed from or to the ranks Undetermined or Accidental, 11 species were added and two species were deleted. In most cases, this is not an indication of a biological change, but of an increase in knowledge or of a taxonomic change (table 28). A total of nine species have been added since the assessment of 2000, as updated in 2002.
Table 27. Canada ranks of butterfly species across the Wild Species reports (columns in the original: Canada rank; years of the Wild Species reports; average change between reports; total change since first report). The full summary counts could not be recovered from the source text.

Table 28. Changes in the Canada ranks of butterfly species between the 2000 assessment (updated in 2002) and 2010.

| Scientific name | English name | 2000 Canada rank | 2010 Canada rank | Reason for change |
| --- | --- | --- | --- | --- |
| Anthocharis stella | Stella Orangetip | 4 | - | (T) Taxonomic change. |
| Apodemia mormo | Mormon Metalmark | 3 | 1 | (C) Change due to new COSEWIC assessment (Endangered, May 2003). |
| Asterocampa celtis | Hackberry Emperor | 2 | 3 | (I) Improved knowledge of the species. |
| Battus philenor | Pipevine Swallowtail | 5 | 8 | (I) Improved knowledge of the species. |
| Callophrys gryneus | Juniper Hairstreak | 2 | 4 | (T) Taxonomic change. |
| Callophrys gryneus gryneus | Olive Juniper Hairstreak | - | 2 | (T) Taxonomic change. |
| Callophrys gryneus siva | Siva Juniper Hairstreak | - | 4 | (T) Taxonomic change. |
| Celastrina lucia | Northern Spring Azure | - | 4 | (T) Taxonomic change; was previously included in Celastrina echo. |
| Celastrina serotina | Cherry Gall Azure | 5 | 4 | (I) Improved knowledge of the species. |
| Colias occidentalis | Western Sulphur | 3 | 4 | (T) Taxonomic change. |
| Erebia lafontainei | Reddish Alpine | 3 | 4 | (T) Taxonomic change. |
| Erebia mackinleyensis | Mt. McKinley Alpine | 3 | 4 | (T) Taxonomic change. |
| Erebia pawloskii | Yellow Dotted Alpine | 3 | 4 | (T) Taxonomic change. |
| Erebia youngi | Four-dotted Alpine | 3 | 4 | (T) Taxonomic change. |
| Erora laeta | Early Hairstreak | 2 | 3 | (I) Improved knowledge of the species. |
| Erynnis baptisiae | Wild Indigo Duskywing | 2 | 3 | (B) This species has been expanding its range. |
| Erynnis martialis | Mottled Duskywing | 5 | 4 | (I) Improved knowledge of the species. |
| Euphydryas anicia | Anicia Checkerspot | - | 4 | (T) Taxonomic change. |
| Euphydryas chalcedona | Variable Checkerspot | 4 | - | (T) Taxonomic change. |
| Euphydryas colon colon | Colon Checkerspot | - | 4 | (T) Taxonomic change. |
| Euphydryas colon paradoxa | Contrary Checkerspot | - | 4 | (T) Taxonomic change. |
| Euphyes dion | Dion Skipper | 3 | 4 | (I) Improved knowledge of the species. |
| Hesperia colorado | Western Branded Skipper | 2 | 4 | (T) Taxonomic change. |
| Lerema accius | Clouded Skipper | - | 8 | (I) New species in Canada. |
| Megathymus streckeri | Strecker's Giant Skipper | - | 5 | (I) New species in Canada. |
| Oeneis philipi | Philip's Arctic | 3 | 4 | (T) Taxonomic change. |
| Plebejus idas | Northern Blue | - | 4 | (T) Taxonomic change. |
| Poanes zabulon | Zabulon Skipper | - | 5 | (I) New species in Canada, but unsure if a population exists or whether it is purely a vagrant. |
| Polites sabuleti | Sandhill Skipper | 3 | 2 | (T) Taxonomic change. |
| Satyrium caryaevora | Hickory Hairstreak | 3 | 4 | (I) Improved knowledge of the species. |
| Satyrium semiluna | Sooty Hairstreak | 2 | 1 | (C) Change due to new COSEWIC assessment (Endangered, April 2006). |
| Speyeria egleis | Great Basin Fritillary | - | 6 | (T) Taxonomic change. |

Threats to Canadian butterflies
Most experts agree that the modification and elimination of suitable habitat pose the greatest threat to native butterflies across the country. Butterflies associated with highly jeopardized natural communities, like the pine oak barrens and tallgrass prairies of Ontario and the Garry oak woodlands and the Okanagan and Similkameen valleys of British Columbia, are particularly susceptible. Butterflies are flagship species and play an important role in ecosystems. However, there is a need to better understand the threats that these species are facing.
Since the 2002 update to the Wild Species 2000 ranks for butterflies, two species have been assigned a Canada Rank of At Risk (the result of COSEWIC assessments). The report Wild Species 2010 presents the results of the second assessment of Canada ranks for butterflies by the National General Status Working Group, and this process should help to gather more information on the ecology of these species.
- Canadian Biodiversity Information Facility. 2006. Butterflies of Canada. http://www.cbif.gc.ca/spp_pages/butterflies/index_e.php (Accessed February 11, 2010).
- Hall, P. W. 2009. Sentinels on the Wing: The Status and Conservation of Butterflies in Canada. NatureServe Canada, Ottawa, Ontario: 68 pp.
- Hinterland Who's Who. 2003. Insect fact sheets: Monarch. http://www.hww.ca/hww2.asp?id=34 (Accessed February 25, 2010).
- Opler, P. A., Lotts, K. and Naberhaus, T. 2010. Butterflies and moths of North America. http://www.butterfliesandmoths.org/ (Accessed February 26, 2010).
- COSEWIC. 2009. COSEWIC assessment and update status report on the Maritime Ringlet (Coenonympha nipisiquit) in Canada. Committee on the Status of Endangered Wildlife in Canada, Ottawa. http://www.sararegistry.gc.ca/virtual_sara/files/cosewic/sr_maritime_ringlet_0809_e.pdf (Accessed February 26, 2010).
- Environment Canada. 2010. Species at Risk Public Registry. Species Profile: Monarch. http://www.sararegistry.gc.ca/species/speciesDetails_e.cfm?sid=294#docs (Accessed February 25, 2010).
- Layberry, R. A., Hall, P. W. and Lafontaine, J. D. 1998. The Butterflies of Canada. University of Toronto Press.
- Parks Canada. 2009. Point Pelee National Park of Canada: Monarch migration. http://www.pc.gc.ca/eng/pn-np/on/pelee/natcul/natcul5.aspx (Accessed February 26, 2010).
- Pelham, J. P. 2008. A catalogue of the butterflies of the United States and Canada. The Lepidoptera Research Foundation Inc: 672 pp.
http://www.wildspecies.ca/wildspecies2010/results-insects-butterflies.cfm?lang=e
Indonesia did not yet exist during the Palaeocene period (70 million years BC), the Eocene period (30 million years BC), the Oligocene period (25 million years BC) or the Miocene period (12 million years BC). It is believed that Indonesia must have existed during the Pleistocene period (4 million years BC), when it was linked with the present Asian mainland. It was during this period that the hominids made their first appearance and Java Man inhabited the part of the world now called Indonesia. Java Man, named Pithecanthropus erectus by Eugène Dubois, who found the fossils on the island of Java, must have been the first inhabitant. When the sea level rose as the result of melting ice north of Europe and the American continent, many islands emerged, including the Indonesian archipelago. It was also during this period (3000-500 BC) that Indonesia was inhabited by Sub-Mongoloid migrants from Asia who later inter-married with the indigenous people. Later still (1000 BC) inter-marriage occurred with Indo-Aryan migrants from the South Asian sub-continent of India. The first Indian migrants came primarily from Gujarat in Southeast India during the first Christian era. The Caka period in Indonesia witnessed the introduction of the Sanskrit language and the Pallawa script by the Indian Prince Aji Caka (78 AD). The Devanagari script of the Sanskrit language was also used, as shown in ancient stone and copper inscriptions (prasasti) which have been unearthed. The language and script were adapted and called the Kawi language, which included words and phrases derived from Javanese. Early trade relations were established between South India and Indonesia. Sumatra was then named Swarna Dwipa, or "the island of gold," and Java was called Java Dwipa, or "the rice island." Relations between the Hindu kingdom of Crivijaya in Sumatra and Nalanda in South India were not confined to religious and cultural exchanges; they later developed diplomatic relations and even covered a wide range of trade. The influx of Indian settlers continued during the period from the first to the seventh century AD. Peacefully and gradually the Hindu religion spread throughout the archipelago. It was adopted by all layers of the population of Java, but was limited to the upper classes on the other islands.
THE PERIOD OF HINDU KINGDOMS
Many well-organized kingdoms with a high degree of civilization were ruled by indigenous kings who had adopted the Hindu or Buddhist religion. This explains why this period in history is called the Period of Hindu Kingdoms. It lasted from ancient times to the 16th Century AD. Because the culture and civilization which emanated from the Hindu and Buddhist religions were syncretized with local cultural elements, the period is also referred to as the Hindu-Indonesian period. Indian culture and customs were introduced, such as the system of government in a monarchy, the ancestry system, the organization of military troops, literature, music and dances, architecture, religious practices and rituals, and even the division of laborers into castes or varnas. The Hindu literary works known as the Vedas and the "Mahabharata" and "Ramayana" epics were also introduced through the wayang, or shadow-play performance, which is still very popular in many parts of present-day Indonesia. The first Indian Buddhists arrived in Indonesia between the 1st and 2nd Centuries AD. They brought with them Buddhism in its two sects, Hinayana and Mahayana. The latter became more advanced in the 8th Century AD. In 144 AD a Chinese Buddhist saint,
Fa Hsien, was caught in a storm and landed in Java-Dwipa, or Java Island, where he stayed for five months. The northern part of the island was then ruled by an Indonesian Hindu king named Kudungga. Kutai, on the island of Borneo, was successively ruled by the Hindu kings Devawarman, Aswawarman and Mulawarman. When the Greek explorer and geographer Ptolemy of Alexandria wrote on Indonesia, he named either the island of Java or Sumatra "abadiou". His chronicles described Java as a country with a good system of government and advanced agriculture, navigation and astronomy. There was even mention of the "batik" process of printing cloth, which the people already knew. They also made metalware, used the metric system and minted coins. Chinese chronicles of 132 AD described the existence of diplomatic relations between Java-Dwipa and China. Around 502 AD Chinese annals mentioned the existence of a Buddhist kingdom, Kanto Li, in South Sumatra, presumably in the neighborhood of present-day Palembang. It was ruled by King Gautama Subhadra, and later by his son Pyrawarman or Vinyawarman, who established diplomatic relations with China. Because of a spelling or pronunciation difficulty, what the Chinese called "Kanto Li" was probably Crivijaya, a mighty Buddhist kingdom. On his way to India, the Chinese Buddhist pilgrim I Tsing visited Crivijaya in 671 AD to study the Sanskrit language. He returned 18 years later, in 689 AD. Crivijaya was then the center of Buddhist learning and had many well-known philosophy scholars such as Sakyakirti and Dharmapala. The kingdom had diplomatic relations with the south Indian kingdom of Nalanda. The Crivijaya mission built a school on its premises where Indians could learn the art of molding bronze statues and broaden their knowledge of Buddhist philosophy. With the spread of Buddhism, Crivijaya's influence reached out to many other parts of the archipelago. Another known Buddhist kingdom was Cailendra in Central Java, ruled by the kings of the Cailendra Dynasty. During their rule (750-850 AD) the famous Buddhist temple, Borobudur, was built. In 772 AD other Buddhist temples were also built, including the Mendut, Kalasan and Pawon temples. All of these temples are now preserved as tourist attractions near the city of Yogyakarta. The Cailendra kingdom was also known for its commercial and naval power, and for its flourishing arts and culture. A guide to learning singing, known as the Chandra Cha-ana, was first written in 778 AD. The Prambanan temple, which was dedicated to Lord Civa, was started in 856 AD and completed in 900 AD by King Daksa. Earlier Civa temples were built in 675 AD on the Dieng mountain range, southwest of Medang Kamolan, the capital of the Mataram Kingdom. In West Java were the kingdoms of Galuh, Kanoman, Kuningan and Pajajaran. The latter was founded by King Purana, with Pakuan as its capital, and replaced the kingdom of Galuh. The kingdoms of Taruma Negara, Kawali and Parahyangan Sunda came later. At the end of the 10th Century (911-1007 AD) the powerful kingdom of Singasari emerged in East Java under King Dharmawangsa. He codified laws and translated into Javanese the "Mahabharata" epic and its basic philosophy, as expounded in the Bhisma Parva scripture. He also ordered the translation of the Hindu holy book, the Bhagavat Gita. Meanwhile, the island of Bali was also ruled by King Airlangga, known as a wise and strong ruler. He had waterworks built along the Brantas River that are still in use today.
Before his death in 971 AD he divided his kingdom into the kingdoms of Janggala and Daha, or Kediri, which were to be ruled by his two sons. King Jayabaya of Kediri (1135-1157) wrote a book in which he foretold the downfall of Indonesia. Subsequently, so he wrote, the country would be ruled by a white race, to be followed by a yellow race. His prediction turned out to describe Dutch colonial rule and the Japanese occupation of the country during World War II. However, Jayabaya also predicted that Indonesia would ultimately regain her independence. During the golden period of the Kediri Kingdom many other literary works were produced, including the Javanese version of the Mahabharata by Mpu (saint) Sedah and his brother Mpu Panuluh. This work was published in 1157. The kingdoms of East Java were later succeeded by the Majapahit Kingdom, first ruled by Prince Wijaya, who was also known as King Kartarajasa. Under King Hayam Wuruk the Majapahit Empire became the most powerful kingdom in the history of Indonesia. It had dependencies in territories beyond the borders of the present archipelago, such as Champa in North Vietnam, Kampuchea and the Philippines (1331-1364). King Hayam Wuruk, with his able premier Gajah Mada, succeeded in gradually uniting the whole archipelago under the name of Dwipantara. During this golden period of Majapahit many literary works were produced. Among them was "Negara Kertagama" by the famous author Prapancha (1335-1380). Parts of the book described the diplomatic and economic ties between Majapahit and numerous Southeast Asian countries, including Myanmar, Thailand, Tonkin, Annam, Kampuchea and even India and China. Other works in Kawi, the old Javanese language, were "Pararaton," "Arjuna Wiwaha," "Ramayana" and "Sarasa Muschaya." These works were later translated into modern European languages for educational purposes.
THE PERIOD OF ISLAMIC KINGDOMS
Moslem merchants from Gujarat and Persia began visiting Indonesia in the 13th Century and established trade links between this country and India and Persia. Along with trade, they propagated Islam among the Indonesian people, particularly along the coastal areas of Java, like Demak. At a later stage they even influenced and converted Hindu kings to Islam, the first being the Sultan of Demak. This Moslem Sultan later spread Islam westwards to Cirebon and Banten, and eastward along the northern coast of Java to the kingdom of Gresik. In the end, he brought about the downfall of the powerful kingdom of Majapahit (1293-1520). After the fall of Majapahit, Islam spread further east, where the sultanates of Bone and Goa in Sulawesi were established. Also under the influence of Islam were the sultanates of Ternate and Tidore in Maluku. From the north of Java, the religion spread to Banjarmasin in Borneo and further west to Sumatra, where Palembang, Minangkabau (West Sumatra), Pasai and Perlak were converted. Meanwhile, descendants of the Majapahit aristocracy, religious scholars and Hindu Ksatriyas retreated through the East Java peninsula of Blambangan to the islands of Bali and Lombok. In a later period, however, the eastern part of Lombok was converted to Islam, which entered the island from the southern Sulawesi city of Makassar, now named Ujungpandang. The capital of the West Java Kingdom of Pajajaran was Sunda Kelapa (1300 AD), located at the site of the present capital city of Indonesia, Jakarta. In 1527 Sunda Kelapa was conquered by Falatehan, an Islamic troop commander of the sultanate of Demak.
After his conquest the city was renamed Jaya Karta, meaning "the great city"; this was the origin of the present name, Jakarta. Falatehan also defeated the Portuguese, who had likewise tried to seize the city.
THE PORTUGUESE IN INDONESIA
In their search for spices, the Portuguese arrived in Indonesia in 1511, after their conquest of the Islamic kingdom of Malacca on the Malay Peninsula. They were followed by the Spaniards. Both began to propagate Christianity and were most successful in Minahasa and Maluku, also known as the Moluccas. The Sultan of Aceh in Sumatra, the Sultan of Demak in Java and the Sultan of Ternate in the Maluku islands joined forces in trying to ward off the Portuguese. At that time the power and sovereignty of the Ternate sultanate was recognized by more than 72 islands, including the island of Timor. In 1570, the Portuguese succeeded in killing the Sultan of Ternate, Khairun. However, his successor, Sultan Baabullah, besieged the Portuguese fortress at Ternate. Baabullah then allied himself with the Dutch to further confront the Portuguese and Spaniards. In 1651 the Dutch invaded Kupang in western Timor. Despite the Dutch presence in Timor, the formal and precise definition of the territories controlled by the two colonial powers did not take place until more than 200 years after the Dutch conquest of Kupang. It was only on 20 April 1859 that the Dutch concluded a treaty with Portugal dividing Timor between their respective spheres of control: the Dutch took the western part and Portugal the eastern part of the island. From that time Portugal retained full control over East Timor until it left the region in 1975.
THE BEGINNING OF DUTCH COLONIALISM
Meanwhile, the Dutch had started their quest for Indonesian spices to sell on the European market at a big profit. For the purpose of a more efficient and better organized merchant trade, they established the Dutch East India Company (VOC) in 1602. To protect the merchant fleet from frequent pirate attacks on the high seas, Dutch warships were ordered to accompany it. After the nationalization of the VOC in 1799, the Dutch Government had a firm grip on the vital territories of the country. People in those territories were forced to surrender their agricultural produce to the Dutch merchants. Meanwhile, the Hindu Kingdom of Mataram converted to Islam and was ruled by the Muslim Sultan Agung Hanyokrokusumo. He developed the political power of the state and was a keen patron of the arts and culture. In 1633 he introduced the Islamic Javanese calendar. Sultan Agung was a fierce enemy of the Dutch. In 1629 he sent his troops to attack Batavia, but they were repulsed by the troops of Governor General Jan Pieterszoon Coen. After the seizure of Ambon in the Moluccas in 1605 and Banda Island in 1623, the Dutch secured the trade monopoly of the spice islands. A policy of ruthless exploitation through "divide and rule" tactics was carried out. In this way indigenous inter-island trade, like that between Makassar, Aceh, Mataram and Banten, as well as overseas trade, was gradually paralyzed. Indonesia was reduced to an agricultural country supplying European markets. At the same time, the Dutch adopted a so-called open-door policy toward the Chinese so that they could serve as middlemen in their trade. Sultan Hasanuddin of Goa waged war against the Dutch in 1666, but was defeated, and Goa became a vassal state of the VOC under the Treaty of Bunggaya of 1667. Prince Trunojoyo of Madura also fought the Dutch. He was defeated and killed in 1680.
To reinforce their spice monopoly in the Moluccas, the Dutch undertook their notorious Hongi expeditions, in which they burned down the people's clove gardens in an effort to eliminate the overproduction that had brought down the prices of cloves on the European markets. In these outrageous expeditions countless atrocities were committed against people who defended their crops. In 1740 the Dutch suppressed a rebellion in Jakarta that was sparked by dissatisfied Chinese, who were later joined by Indonesians. Ten thousand Chinese were massacred. The Kingdom of Mataram began its downfall after it was divided by the VOC into the principalities of Yogyakarta and Surakarta. However, mismanagement and corruption forced the VOC into bankruptcy, and on December 31, 1799, all its territories in Indonesia were taken over by the Dutch Administration in Batavia.
BRITISH TEMPORARY RULE
In 1814 the British came to Indonesia and built Fort York in Bengkulu on the west coast of Sumatra. It was later renamed Fort Marlborough. During the Napoleonic wars in Europe, when Holland was occupied by France, Indonesia fell under the rule of the British East India Company (1811-1816). Sir Thomas Stamford Raffles was appointed Lieutenant Governor General of Java and dependencies, subordinate to the Governor General in Bengal. Raffles introduced partial self-government and abolished the slave trade. In those days slaves were captured and traded by foreigners. He also introduced the land-tenure system, replacing the hated Dutch forced-agricultural system, whereby crops were grown and surrendered to the Government. Borobudur and other temples were restored and research conducted. Raffles wrote his famous book, "The History of Java," in which he described Java's high civilization. During the British stay in Sumatra (1814-1825), William Marsden wrote a similar book on the history of Sumatra, which was published in 1889. After the fall of Napoleon and the end of the French occupation of Holland, the British and Dutch signed a convention in London on August 13, 1814, in which it was agreed that Dutch colonial possessions dating from 1803 onwards should be returned to the Dutch Administration in Batavia. Thus, the Indonesian archipelago was recovered from the British in 1815.
RETURN OF DUTCH RULE
Soon the Dutch intensified their colonial rule, but this only sparked widespread revolts for freedom. These revolts, however, were suppressed one after the other. To mention only a few: Thomas Matulessy, alias Pattimura, staged a revolt against the Dutch in the Moluccas (1816-1818). Prince Diponegoro of Mataram led the Java War from 1825 until 1830, again a fierce struggle for freedom. Tuanku Imam Bonjol led the Padri War in West Sumatra, while Teuku Umar headed the Aceh War in North Sumatra (1873-1903). King Sisingamangaraja of the Bataks revolted against the Dutch in 1907. An attempt by Dutch troops to occupy Bali in 1908 was repelled by King Udayana. Revolts also erupted in Goa, South Sulawesi, and in South Kalimantan. When all these regional wars of independence failed, Indonesian nationalists began thinking of a more organized struggle against Dutch colonialism. The move began with the founding of Boedi Oetomo, literally meaning "noble conduct," on May 20, 1908. This organization of Indonesian intellectuals was initially set up for educational purposes but later turned to politics.
It was inspired by Japan's victory over Russia in 1905, which also gave impetus to nationalist movements in many parts of Indonesia. The founder of Boedi Oetomo was Dr. Soetomo, who was at the time a student of STOVIA, an institution to train Indonesian medical officers. Dr. Soetomo was greatly influenced by Dr. Wahidin Soedirohoesodo and supported by Gunawan and Suradji. In 1912 Sarekat Dagang Islam, the Association of Moslem Merchants, was formed by Haji Samanhudi and others. Its objective was at first to stimulate and promote the interests of Indonesian business in the Dutch East Indies. However, in 1912 this organization of middle-class businessmen turned into a political party and was renamed Sarekat Islam under the leadership of H.O.S. Tjokroaminoto, Haji Agoes Salim and others. In 1912 a progressive Moslem organization, Muhammadiyah, was established by K.H. Akhmad Dahlan in Yogyakarta for social and economic purposes. In December of the same year Partai Indonesia was founded by Douwes Dekker, later named Setiabudi, together with Dr. Tjipto Mangunkusumo and Ki Hajar Dewantoro. The objective of the party was to strive for the complete independence of Indonesia. All three leaders of the party were exiled by the colonial government in 1913. In 1914 communism was introduced in the East Indies by three Dutch nationals: Sneevliet, Baars and Brandsteder. In May 1920 Sarekat Islam split into a right and a left wing; the latter was to become the Partai Komunis Indonesia (PKI, the Indonesian Communist Party) under the leadership of Semaun, Darsono, Alimin, Muso and others.
The Powerless People's Council
In 1916 Sarekat Islam held its first convention in Bandung and resolved to demand self-government for Indonesia in cooperation with the Dutch. When Sarekat Islam demanded a share in the legislative power in the colony, the Dutch responded by setting up the Volksraad in 1918, which was virtually a powerless people's council with an advisory status. Indonesian representatives on the council were indirectly elected through regional councils, but some of the other members were appointed by the colonial government. The Volksraad later developed into a semi-legislative assembly. Among the members of this body were prominent nationalist leaders like Dr. Tjipto Mangunkusumo, H.O.S. Tjokroaminoto, Abdul Muis, Dr. G.S.S.J. Ratulangi, M.H. Thamrin, Wiwoho, Sutardjo Kartohadikusumo, Dr. Radjiman, and Soekardjo. Under the pressure of the social unrest in the Netherlands at the end of World War I, the Dutch promised to grant self-government to Indonesians. This was known as the "November promise." It was a promise that was never kept. Besides the Volksraad, there was another body called the Raad van Indie, "the Council of the Indies," whose members were appointed by the Government. Achmad Djajadiningrat and Sujono were among the very few Indonesian members of this council. In 1923 deteriorating economic conditions and increasing labor strikes prompted the colonial government to put severe restrictions on Indonesian civil liberties and make amendments to the colonial laws and penal codes. Freedom of assembly, speech and expression in writing was restricted.
Further Growth of Indonesian Nationalism
Despite the political restrictions, on July 3, 1922 Ki Hajar Dewantoro founded Taman Siswa, an organization to promote national education. In 1924 the Indonesian Students Association, "Perhimpunan Mahasiswa Indonesia," was formed by Drs. Mohammad Hatta, Dr. Sukiman and others.
This organization became a driving force of the nationalist movement. The Indonesian Communist Party (PKI) staged revolts against the colonial government in November 1926 in West Java, and in January 1927 in West Sumatra. After their suppression the Government exiled many non-communist nationalist leaders to Tanah Merah, which the Dutch called "Boven Digul," in Irian Jaya. Dr. Tjipto Mangunkusumo was exiled to Bandaneira. In February 1927 Mohammad Hatta, Achmad Soebardjo and other members of Indonesia's nationalist movements attended the first international convention of the "League Against Imperialism and Colonial Oppression" in Brussels, together with Jawaharlal Nehru and many other prominent nationalist leaders from Asia and Africa. In July 1927, Soekarno, Sartono and others formed the Indonesian Nationalist Party (PNI), which adopted Bahasa Indonesia as its official language. This party adopted a militant policy of non-cooperation with the Government as the result of a fundamental conflict of interest between Indonesian nationalism and Dutch colonialism. In the same year, an all-Indonesia nationalist movement was organized by Indonesian youth to replace earlier organizations, which had been based on regionalism, such as "Young Java," "Young Sumatra" and "Young Ambon." On October 28, 1928, delegates to the Indonesian Youth Congress in Jakarta pledged allegiance to "one country, one nation and one language, Indonesia." Concerned about the growing national awareness of freedom, the colonial authorities arrested the PNI leader, Soekarno, in December 1929. This touched off widespread protests by Indonesians. In 1930 the world was in the grip of an economic and monetary crisis. The severe impact of the crisis was felt in the Indies, a raw-material-producing country. The colonial government responded with a strict balanced-budget policy that aggravated economic and social conditions. Two other leaders of the PNI, Gatot Mangkupradja and Maskun Supriadinata, were arrested and tried in court on charges of plotting against the Government. Soekarno was released in September 1931 but exiled again in August 1933. He remained in Dutch custody until the Japanese invasion in 1942. In January 1931, Dr. Soetomo founded Persatuan Bangsa Indonesia, the Indonesian Unity Party. Its objective was to improve the social status of the Indonesian people. In April of the same year, the PNI was disbanded. A new party was formed by Sartono, LL.M., and named Partai Indonesia, the Indonesian Party. Its basis was nationalism; its line was independence. Also in 1931, Sutan Syahrir formed Pendidikan Nasional Indonesia. Known as the new PNI, it envisaged national education. Mohammad Hatta joined it. In 1933 a mutiny broke out on the Dutch warship "De Zeven Provincien," for which Indonesian nationalists were held responsible. The following year Sutan Syahrir, Mohammad Hatta and other nationalist leaders were arrested and banished until 1942. In 1935, Soetomo merged Persatuan Bangsa Indonesia and Boedi Oetomo to form Partai Indonesia Raya (Parindra). Its fundamental goal was the independence of Great Indonesia. In July 1936, Sutardjo submitted to the "Volksraad" a petition calling for greater autonomy for Indonesia. This petition was flatly rejected by the Dutch-dominated Council. In 1937 Dr. A.K. Gani started the Indonesian People's Movement, Gerakan Rakyat Indonesia, which was based on the principles of nationalism, social independence and self-reliance.
In 1939 the All Indonesian Political Federation, GAPI, called for the establishment of a full-fledged Indonesian parliament. This demand was rejected by the Government in Holland in 1940. GAPI also demanded an Indonesian military service for the purpose of defending the country in times of war. Again, this was turned down, notwithstanding the impending outbreak of World War II. At the time, there were widespread movements for fundamental and progressive reforms in the colonies and dependencies.
THE JAPANESE OCCUPATION
After their attack on Pearl Harbor in Hawaii, the Japanese forces moved southwards to conquer several Southeast Asian countries. After Singapore had fallen, they invaded the Dutch East Indies, and the colonial army surrendered in March 1942. Soekarno and Hatta were released from their detention. The Japanese began their propaganda campaign for what they called "Greater East Asia Co-prosperity", but Indonesians soon realized that it was a camouflage for Japanese imperialism in place of Dutch colonialism. To further the cause of Indonesia's independence, Soekarno and Hatta appeared to cooperate with the Japanese authorities. In reality, however, Indonesian nationalist leaders went underground and masterminded insurrections in Blitar (East Java), Tasikmalaya and Indramayu (West Java), and in Sumatra. Under the pressure of the Pacific War, in which their supply lines were being interrupted, and of the increasing Indonesian insurrections, the Japanese ultimately gave in and allowed the red-and-white flag to fly as the Indonesian national flag. Recognition of "Indonesia Raya" as the national anthem and of Bahasa Indonesia as the national language followed. Hence, the youth's pledge of 1928 was fulfilled. After persistent demands, the Japanese finally agreed to place the civil administration of the country in Indonesian hands. This was a golden opportunity for nationalist leaders to prepare for the proclamation of independence.
THE BIRTH OF THE REPUBLIC
The Republic of Indonesia first saw the light of day on August 17, 1945, when its independence was proclaimed just days after the Japanese surrender to the Allies. Pancasila became the ideological and philosophical basis of the Republic, and on August 18, 1945 the Constitution was adopted as the basic law of the country. Following the provisions of the Constitution, the country is headed by a President who is also the Chief Executive. He is assisted by a Vice-President and a cabinet of ministers. The sovereignty of the people rests with the People's Consultative Assembly (MPR); hence, the President is accountable to the MPR. The legislative power is vested in the House of Representatives (DPR). Other institutions of the state are the Supreme Court, the Supreme Advisory Council and the Supreme Audit Board. Soekarno became the first President and Chief Executive, and Mohammad Hatta the first Vice-President of the Republic. On September 5, 1945 the first cabinet was formed.
The War of Independence
The infant republic was soon faced with military threats to its very existence. British troops landed in Indonesia as a contingent of the Allied Forces to disarm the Japanese. Dutch troops also seized this opportunity to land in the country, but for a different purpose, namely, to regain control of the former East Indies. At the beginning they were assisted by British troops under General Christison, a fact later admitted by Lord Louis Mountbatten, the Commander of the Allied Forces in Southeast Asia based in Myanmar. In fact,
the British troops were officially assigned only to the task of repatriating Allied prisoners of war and internees. On November 10, 1945, fierce fighting broke out between British troops and Indonesian freedom fighters, in which the British lost Brigadier Mallaby. As a result, the British turned to all-out combat from the sea, air and land. The newly-recruited army of the Republic soon realized the superiority of the British forces and withdrew from urban battles. They subsequently formed guerrilla units and fought together with armed groups of the people. Under the pretext of representing the Allied Forces, the Dutch sent in more troops to attack Indonesian strongholds. Between 1945 and 1949 they undertook two military actions.
Diplomacy and Fighting
Meanwhile, on November 11, 1945, Vice-President Hatta issued a manifesto that outlined the basic policy of the new Republic. It was a policy of good neighborliness and peace with the rest of the world. On November 14 of the same year, the newly-appointed Prime Minister, Sutan Syahrir, introduced a parliamentary system, with party representation, in the Republic. On December 22, Sutan Syahrir announced Indonesia's acceptance of the British proposal to disarm and confine to internment camps 25,000 Japanese troops throughout the country. This task was successfully carried out by the TNI, the Indonesian National Army. Repatriation of the Japanese troops began on April 28, 1946. Because fighting with the Dutch troops continued, the seat of the Republican Government was moved from Jakarta to Yogyakarta on January 4, 1946.
The Indonesian Question in the United Nations
The war in Indonesia posed a threat to international peace and security. In the spirit of article 24 of the United Nations Charter, the question of Indonesia was officially brought before the Security Council by Jacob Malik of the Soviet Union. Soon afterwards, on February 10, 1946, the first official meeting of Indonesian and Dutch representatives took place under the chairmanship of Sir Archibald Clark Kerr. But the freedom fight continued, and Dutch military aggression met with stiff resistance from Indonesian troops. The Indonesian Government conducted a diplomatic offensive against the Dutch. With the good offices of Lord Killearn of Great Britain, Indonesian and Dutch representatives met at Linggarjati in West Java. The negotiations resulted in the de facto recognition by the Dutch of Indonesia's sovereignty over Java, Sumatra and Madura. The Linggarjati Agreement was initialed in November 1946 and signed on March 25, 1947. But the agreement was a violation of Indonesia's independence proclamation of August 17, 1945, which implied sovereignty over the whole territory of the Republic. As such, it met with the widespread disapproval of the people. Hence, guerrilla fighting continued, bringing heavy pressure on the Dutch. In July 1947 the Dutch launched a military offensive to reinforce their urban bases and to intensify their attacks on guerrilla strongholds. The offensive was, however, put to an end by the signing of the Renville Agreement on January 17, 1948. The negotiation was initiated by India and Australia and took place under the auspices of the UN Security Council. It was during these critical moments that the Indonesian Communist Party (PKI) stabbed the newly-proclaimed Republic of Indonesia in the back by declaring the formation of the "Indonesian People's Republic" in Madiun, East Java. Muso led an attempt to overthrow the Government, but this was quickly stamped out and he was killed.
In violation of the Renville agreement, on December 19, 1948, the Dutch launched their second military aggression. They invaded the Republican capital of Yogyakarta, arrested President Soekarno, Vice-President Mohammad Hatta and other leaders, and detained them on the island of Bangka, off the east coast of Sumatra. A caretaker Government, with headquarters in Bukittinggi, West Sumatra, was set up under Syafruddin Prawiranegara. On the initiative of Pandit Jawaharlal Nehru of India, a meeting of 19 nations was convened in New Delhi that produced a resolution, for submission to the United Nations, pressing for the total Dutch surrender of sovereignty to the Republic of Indonesia by January 1, 1950. It also pressed for the release of all Indonesian detainees and the return of territories seized during the military actions. On January 28, 1949, the UN Security Council adopted a resolution calling for a ceasefire, the release of the Republican leaders and their return to Yogyakarta. The Dutch, however, were adamant and continued to occupy the city of Yogyakarta, ignoring the Republican Government and the National Army. They deliberately issued a false statement to the world that the Government and the army of the Republic of Indonesia no longer existed. To prove that the Dutch claim was a mere fabrication, Lieutenant Colonel Soeharto led an all-out attack on the Dutch troops in Yogyakarta on March 1, 1949, and occupied the city for several hours. This offensive is recorded in Indonesia's history as "the first of March all-out attack," which showed the world that the Republic and its military were not dead. Consequently, on May 7, 1949, an agreement was signed by Mohammad Roem of Indonesia and Van Rooyen of the Netherlands to end hostilities, restore the Republican Government in Yogyakarta, and hold further negotiations at a round table conference under the auspices of the United Nations.
World Recognition and Indonesia's Sovereignty
The Round Table Conference was opened in The Hague on August 23, 1949, under the auspices of the UN. It was concluded on November 2 with an agreement that Holland was to recognize the sovereignty of the Republic of Indonesia. On December 27, 1949 the Dutch East Indies ceased to exist. It now became the sovereign Federal Republic of Indonesia with a federal constitution. The constitution, inter alia, provided for a parliamentary system in which the cabinet was responsible to Parliament. The question of sovereignty over Irian Jaya, formerly West New Guinea, was suspended for further negotiations between Indonesia and the Netherlands. This issue remained a perpetual source of conflict between the two countries for more than 13 years. On September 28, 1950, Indonesia became a member of the United Nations.
The Unitary State of the Republic of Indonesia
On August 17, 1950 the Unitary State of the Republic of Indonesia, as originally proclaimed, was restored. However, the liberal democratic system of government was retained, whereby the cabinet would be accountable to the House of Representatives. This was a source of political instability, with frequent changes in government. In the absence of a stable government, it was utterly impossible for a newly-independent state to embark on any development program. With the return of the unitary state, the President once again assumed the duties of Chief Executive and Mandatary of the Provisional People's Consultative Assembly. He is assisted by a Vice-President and a cabinet of his own choosing.
The Executive is not responsible to the House of Representatives.
Challenges to the Unitary State
The philosophy behind the Unitary State was that a pluralistic country like Indonesia could only be independent and strong if it was firmly united and integrated. This was obviously the answer to the Dutch colonial practice of divide and rule. Hence the national motto, "Bhinneka Tunggal Ika," as referred to earlier. However, no sooner was the Unitary State re-established than it had to face numerous armed rebellions. The Darul Islam rebels under Kartosuwiryo terrorized the countryside of West Java in their move to establish an Islamic State. It took years to stamp them out. Then there was the terrorist APRA band of former Dutch army captain Turco Westerling, which claimed the lives of thousands of innocent people. Outside Java, demobilized ex-colonial army men who remained loyal to the Dutch crown staged a revolt and proclaimed what they called "the Republic of South Maluku". In South Sulawesi an ex-colonial army officer, Andi Aziz, also rebelled. In Kalimantan Ibnu Hadjar led another armed revolt. Sumatra, too, saw a number of separatist movements. And, to complete the list, the Indonesian Communist Party again staged an abortive coup under the name of the 30th September Movement, when they kidnapped and killed six of the country's top army generals in the early hours of October 1, 1965.
The Asian-African Conference
President Soekarno had to his credit the holding of the Asian-African Conference in Bandung, West Java, from April 18 to 24, 1955. The initiative was taken by Indonesia, India, Pakistan, Myanmar and Ceylon (Sri Lanka). The conference was attended by delegates from 24 Asian and African countries. The purpose of the meeting was to promote closer and amiable cooperation in the economic, cultural and political fields. The resolution adopted became known as the "Dasa Sila," or "The Ten Principles," of Bandung. It strove for world peace, respect for one another's sovereignty and territorial integrity, and non-interference in each other's internal affairs. The resolution also sought to uphold the human rights principles of the United Nations. The Asian-African Conference became the embryo of the Non-Aligned Movement. The seeds that sprouted in Bandung took firm root six years later, when 25 newly independent countries formally founded the Non-Aligned Movement at the Belgrade Summit of 1961. Since then the membership of the Movement has grown to its present strength of 112 member countries.
THE BEGINNING OF THE NEW ORDER
Over-confident of its strength, and spurred by the serious illness of President Soekarno, who was undergoing treatment by a Chinese medical team from Beijing, the Indonesian Communist Party (PKI) attempted another coup on September 30, 1965. The uprising, however, was quickly stamped out by the Armed Forces under Major General Soeharto, then Chief of the Army's Strategic Command. On the night of September 30, or more precisely in the early hours of October 1, 1965, armed PKI men and members of Cakrabirawa, the President's security guard, set out to kidnap, torture and kill six top Army generals. Their bodies were dumped in an abandoned well at Lubang Buaya, on the outskirts of Jakarta. The coup was staged in the wake of troop deployments to Kalimantan, at the height of Indonesia's confrontation with Malaysia. Moreover, at the time, many cabinet members were attending a celebration of the Chinese October Revolution in Beijing.
It was during this power vacuum that the communists struck again. Under instructions from General Soeharto, crack troops of the Army's Commando Regiment (RPKAD) freed the central radio station (RRI) and the telecommunications center from communist occupation. Students took to the streets in militant demonstrations to press a three-point demand, or "Tritura", that aimed to ban the PKI, replace Soekarno's cabinet ministers, and reduce the prices of basic necessities. They set up a "street parliament" to gather the demands of the people. Under these explosive conditions, President Soekarno eventually gave in and granted Soeharto full power to restore order and security in the country. The transfer of power was effected by a presidential order known as "the 11th of March order" of 1966. Soon afterwards, on March 12, 1966, General Soeharto banned the PKI. This decision was endorsed and sanctioned by virtue of Provisional People's Consultative Assembly Decree No. XXV/MPRS/1966. He also formed a new cabinet, but Soekarno remained as Chief Executive. This brought dualism into the cabinet, particularly when Soekarno did not show support for the cabinet's program to establish political and economic stability. Hence, a special session of the Provisional People's Consultative Assembly (MPRS) was convened from March 7-12, 1967. The Assembly resolved to relieve Soekarno of his presidential duties and appointed Soeharto as Acting President, pending the election of a new President by an elected People's Consultative Assembly.
The New Order Government
Ever since taking office in 1967, the New Order Government of President Soeharto was determined to return to constitutional life by upholding the 1945 Constitution in a strict and consistent manner and by respecting Pancasila as the state philosophy and ideology. To emerge from the political and economic legacy of Soekarno's Old Order, the new government set out to undertake the following:
1. To complete the restoration of order and security and to establish political stability.
2. To carry out economic rehabilitation.
3. To prepare a plan for national development and execute it with the emphasis on economic development.
4. To end confrontation and normalize diplomatic relations with Malaysia.
5. To rejoin the United Nations, which Indonesia had quit in January 1965.
6. To consistently pursue an independent and active foreign policy.
7. To resolve the West Irian question.
http://www.wirantaprawira.net/indon/history.html
Soviet Union in World War II
Joseph Stalin was the General Secretary of the Communist Party of the Soviet Union's Central Committee from 1922 until his death in 1953. In the years following Lenin's death in 1924, he rose to become the authoritarian leader of the Soviet Union. In August 1939, at Stalin's direction, the Soviet Union entered into a non-aggression pact with Nazi Germany, containing a secret protocol that divided the whole of eastern Europe into German and Soviet spheres of influence. Thereafter, Germany and the Soviet Union invaded their apportioned sections of Poland. The Soviet Union later invaded Estonia, Latvia, Lithuania and part of Romania, along with an attempted invasion of Finland. Stalin and Hitler later traded proposals for a Soviet entry into the Axis Pact. In June 1941, Germany began an invasion of the Soviet Union, before which Stalin had ignored reports of an impending German attack. Stalin was confident that the total Allied war machine would eventually stop Germany, and the Soviets stopped the Wehrmacht some 30 kilometers from Moscow. Over the next four years, the Soviet Union repulsed German offensives, such as at the Battle of Stalingrad and the Battle of Kursk, and pressed forward to victory in large Soviet offensives such as the Vistula-Oder Offensive. Stalin began to listen to his generals more after Kursk. Stalin met with Churchill and Roosevelt at the Tehran Conference and began to discuss a two-front war against Germany and the future of Europe after the war. Berlin finally fell in May 1945, but Stalin was never fully convinced that his nemesis Hitler had committed suicide. Fending off the German invasion and pressing to victory in the East required a tremendous sacrifice by the Soviet Union, which suffered the highest military casualties in the war, losing approximately 35 million men. Stalin became personally involved with questionable tactics employed during the war, including the Katyn massacre, Order No. 270, Order No. 227 and NKVD prisoner massacres. Controversy also surrounds rapes and looting in Soviet-held territory, along with large numbers of deaths of POWs held by the Soviets, and the Soviets' abusive treatment of their own soldiers who had been held in German POW camps.
Pact with Adolf Hitler
In August 1939, Stalin accepted Adolf Hitler's proposal to enter into a non-aggression pact with Nazi Germany, negotiated by the foreign ministers Vyacheslav Molotov for the Soviets and Joachim von Ribbentrop for the Germans. Officially a non-aggression treaty only, an appended secret protocol, also reached on August 23, 1939, divided the whole of eastern Europe into German and Soviet spheres of influence. The USSR was promised an eastern part of Poland, then primarily populated by Ukrainians and Belarusians, in case of its dissolution, and Germany recognized Latvia, Estonia and Finland as parts of the Soviet sphere of influence, with Lithuania added in a second secret protocol in September 1939. Another clause of the treaty provided that Bessarabia, then part of Romania, was to be joined to the Moldavian ASSR and become the Moldavian SSR under the control of Moscow. The Pact was reached two days after the breakdown of Soviet military talks with British and French representatives in August 1939 over a potential Franco-Anglo-Soviet alliance.
Political discussions had been suspended on August 2, when Molotov stated they could not be restarted until progress was made in the military talks late in August, after the talks had stalled over guarantees of the Baltic states; the military talks upon which Molotov insisted started on August 11. At the same time, Germany, with whom the Soviets had been holding secret discussions since July 29, argued that it could offer the Soviets better terms than Britain and France, with Ribbentrop insisting, "there was no problem between the Baltic and the Black Sea that could not be solved between the two of us." German officials stated that, unlike Britain, Germany could permit the Soviets to continue their developments unmolested, and that "there is one common element in the ideology of Germany, Italy and the Soviet Union: opposition to the capitalist democracies of the West." By that time, Molotov had obtained information regarding Anglo-German negotiations and a pessimistic report from the Soviet ambassador in France. After disagreement regarding Stalin's demand to move Red Army troops through Poland and Romania (which Poland and Romania opposed), on August 21 the Soviets proposed adjournment of the military talks, using the excuse that the absence of senior Soviet personnel at the talks interfered with the autumn manoeuvres of the Soviet forces, though the primary reason was the progress being made in the Soviet-German negotiations. That same day, Stalin received assurance that Germany would approve secret protocols to the proposed non-aggression pact that would grant the Soviets land in Poland, the Baltic states, Finland and Romania, after which Stalin telegraphed Hitler that night that the Soviets were willing to sign the pact and that he would receive Ribbentrop on August 23. Regarding the larger issue of collective security, some historians state that one reason Stalin decided to abandon the doctrine was the shaping of his views of France and Britain by their entry into the Munich Agreement and their subsequent failure to prevent the German occupation of Czechoslovakia. Stalin also viewed the Pact as a way of gaining time in an inevitable war with Hitler, in order to reinforce the Soviet military, and of shifting the Soviet borders westwards, which would be militarily beneficial in such a war. Stalin and Ribbentrop spent most of the night of the Pact's signing trading friendly stories about world affairs and cracking jokes (a rarity for Ribbentrop) about England's weakness, and the pair even joked about how the Anti-Comintern Pact principally scared "British shopkeepers." They further traded toasts, with Stalin proposing a toast to Hitler's health and Ribbentrop proposing a toast to Stalin.
Implementing the division of Eastern Europe and other invasions
On September 1, 1939, the German invasion of its agreed-upon portion of Poland started World War II. On September 17 the Red Army invaded eastern Poland and occupied the Polish territory assigned to it by the Molotov-Ribbentrop Pact, followed by co-ordination with German forces in Poland. Eleven days later, the secret protocol of the Molotov-Ribbentrop Pact was modified, allotting Germany a larger part of Poland while ceding most of Lithuania to the Soviet Union. The Soviet portions lay east of the so-called Curzon Line, an ethnographic frontier between Russia and Poland drawn up by a commission of the Paris Peace Conference in 1919. In early 1940, the Soviets executed over 25,000 Polish POWs and political prisoners in the Katyn Forest.
After unsuccessfully attempting to install a communist puppet government in Finland, the Soviet Union invaded Finland in November 1939. The Finnish defense defied Soviet expectations, and after stiff losses Stalin settled for an interim peace, granting the Soviet Union less than total domination by annexing only the eastern region of Karelia (10% of Finnish territory). Soviet official casualty counts in the war exceeded 200,000, while Soviet Premier Nikita Khrushchev later claimed the casualties may have been one million. After this campaign, Stalin took actions to bolster the Soviet military, modify training and improve propaganda efforts in the Soviet military. In mid-June 1940, when international attention was focused on the German invasion of France, Soviet NKVD troops raided border posts in Lithuania, Estonia and Latvia. Stalin claimed that the mutual assistance treaties had been violated, and gave six-hour ultimatums for new governments to be formed in each country, including lists of persons for cabinet posts provided by the Kremlin. Thereafter, state administrations were liquidated and replaced by Soviet cadres, followed by mass repression in which 34,250 Latvians, 75,000 Lithuanians and almost 60,000 Estonians were deported or killed. Elections for parliament and other offices were held with single candidates listed, the official results of which showed pro-Soviet candidates approved by 92.8 percent of the voters in Estonia, 97.6 percent of the voters in Latvia and 99.2 percent of the voters in Lithuania. The resulting people's assemblies immediately requested admission into the USSR, which was granted by the Soviet Union. In late June 1940, Stalin directed the Soviet annexation of Bessarabia and northern Bukovina, proclaiming this formerly Romanian territory part of the Moldavian Soviet Socialist Republic. But in annexing northern Bukovina, Stalin had gone beyond the agreed limits of the secret protocol. After the Tripartite Pact was signed by the Axis Powers Germany, Japan and Italy in October 1940, Stalin personally wrote to Ribbentrop about entering an agreement regarding a "permanent basis" for their "mutual interests." Stalin sent Molotov to Berlin to negotiate the terms for the Soviet Union to join the Axis and potentially enjoy the spoils of the pact. At Stalin's direction, Molotov insisted on Soviet interest in Turkey, Bulgaria, Romania, Hungary, Yugoslavia and Greece, though Stalin had earlier personally and unsuccessfully lobbied Turkish leaders not to sign a mutual assistance pact with Britain and France. Ribbentrop asked Molotov to sign another secret protocol with the statement: "The focal point of the territorial aspirations of the Soviet Union would presumably be centered south of the territory of the Soviet Union in the direction of the Indian Ocean." Molotov took the position that he could not take a "definite stand" on this without Stalin's agreement. Stalin did not agree with the suggested protocol, and negotiations broke down. In response to a later German proposal, Stalin stated that the Soviets would join the Axis if Germany refrained from acting in the Soviet sphere of influence. Shortly thereafter, Hitler issued a secret internal directive related to his plan to invade the Soviet Union. In an effort to demonstrate peaceful intentions toward Germany, on April 13, 1941, Stalin oversaw the signing of a neutrality pact with the Axis power Japan.
While Stalin had little faith in Japan's commitment to neutrality, he felt that the pact was important for its political symbolism, to reinforce a public affection for Germany. Stalin felt that there was a growing split in German circles about whether Germany should initiate a war with the Soviet Union.
Hitler breaks the pact
During the early morning of June 22, 1941, Hitler broke the pact by starting Operation Barbarossa, the German invasion of Soviet-held territories and the Soviet Union that began the war on the Eastern Front. Before the invasion, Stalin felt that Germany would not attack the Soviet Union until Germany had defeated Britain. At the same time, Soviet generals warned Stalin that Germany had concentrated forces on its borders. Two highly placed Soviet spies in Germany, "Starshina" and "Korsikanets", had sent dozens of reports to Moscow containing evidence of preparations for a German attack. Further warnings came from Richard Sorge, a Soviet spy in Tokyo working undercover as a German journalist. Seven days before the invasion, a Soviet spy in Berlin warned Stalin that the movement of German divisions to the borders was to wage war on the Soviet Union. Five days before the attack, Stalin received a report from a spy in the German Air Ministry that "all preparations by Germany for an armed attack on the Soviet Union have been completed, and the blow can be expected at any time." In the margin, Stalin wrote to the people's commissar for state security, "you can send your 'source' from the headquarters of German aviation to his mother. This is not a 'source' but a dezinformator." Although Stalin increased Soviet western border forces to 2.7 million men and ordered them to expect a possible German invasion, he did not order a full-scale mobilization of forces to prepare for an attack. Stalin felt that a mobilization might provoke Hitler to prematurely begin to wage war against the Soviet Union, which Stalin wanted to delay until 1942 in order to strengthen Soviet forces. Viktor Suvorov suggested that Stalin had made aggressive preparations beginning in the late 1930s and was preparing to invade Germany in the summer of 1941. He believes that Hitler forestalled Stalin and that the German invasion was in essence a pre-emptive strike, precisely as Hitler claimed. This theory was supported by Igor Bunich, Joachim Hoffmann, Mikhail Meltyukhov (see Stalin's Missed Chance) and Edvard Radzinsky (see Stalin: The First In-Depth Biography Based on Explosive New Documents from Russia's Secret Archives). Other historians, especially Gabriel Gorodetsky and David Glantz, reject this thesis. General Fedor von Bock's diary says that the Abwehr fully expected a Soviet attack against German forces in Poland no later than 1942. In the initial hours after the German attack began, Stalin hesitated, wanting to ensure that the German attack was sanctioned by Hitler rather than the unauthorized action of a rogue general. Accounts by Nikita Khrushchev and Anastas Mikoyan claim that, after the invasion, Stalin retreated to his dacha in despair for several days and did not participate in leadership decisions. But some documentary evidence of orders given by Stalin contradicts these accounts, leading historians such as Roberts to speculate that Khrushchev's account is inaccurate. In the first three weeks of the invasion, as the Soviet Union tried to defend against large German advances, it suffered 750,000 casualties and lost 10,000 tanks and 4,000 aircraft.
In July 1941, Stalin completely reorganized the Soviet military, placing himself directly in charge of several military organizations. This gave him complete control of his country's entire war effort; more control than any other leader in World War II. A pattern soon emerged in which Stalin embraced the Red Army's strategy of conducting multiple offensives, while the Germans overran each of the resulting small, newly gained grounds, dealing the Soviets severe casualties. The most notable example of this was the Battle of Kiev, where over 600,000 Soviet troops were quickly killed, captured or missing. By the end of 1941, the Soviet military had suffered 4.3 million casualties and the Germans had captured 3.0 million Soviet prisoners, 2.0 million of whom died in German captivity by February 1942. German forces had advanced c. 1,700 kilometers and maintained a linearly-measured front of 3,000 kilometers. The Red Army put up fierce resistance during the war's early stages. Even so, according to Glantz, it was plagued by an ineffective defense doctrine against well-trained and experienced German forces, despite possessing some modern Soviet equipment, such as the KV-1 and T-34 tanks.
Soviets stop the Germans
While the Germans made huge advances in 1941, killing millions of Soviet soldiers, at Stalin's direction the Red Army directed sizable resources to prevent the Germans from achieving one of their key strategic goals, the attempted capture of Leningrad. They held the city at the cost of more than a million Soviet soldiers in the region and more than a million civilians, many of whom died from starvation. While the Germans pressed forward, Stalin was confident of an eventual Allied victory over Germany. In September 1941, Stalin told British diplomats that he wanted two agreements: (1) a mutual assistance/aid pact and (2) a recognition that, after the war, the Soviet Union would gain the territories in countries that it had taken pursuant to its division of Eastern Europe with Hitler in the Molotov–Ribbentrop Pact. The British agreed to assistance but refused to agree to the territorial gains, which Stalin accepted months later as the military situation had deteriorated somewhat by mid-1942. In November 1941, Stalin rallied his generals in a speech given underground in Moscow, telling them that the German blitzkrieg would fail because of weaknesses in the German rear in Nazi-occupied Europe and the underestimation of the strength of the Red Army, and that the German war effort would crumble against the British-American-Soviet "war engine". On November 6, 1941, Stalin addressed the Soviet Union for the second time (the first was on July 2, 1941). Correctly calculating that Hitler would direct efforts to capture Moscow, Stalin concentrated his forces to defend the city, including numerous divisions transferred from Soviet eastern sectors after he determined that Japan would not attempt an attack in those areas. By December, Hitler's troops had advanced to within 30 km of the Kremlin in Moscow. On December 5, the Soviets launched a counteroffensive, pushing German troops back c. 80 km from Moscow in what was the first major defeat of the Wehrmacht in the war. In early 1942, the Soviets began a series of offensives labeled "Stalin's First Strategic Offensives", although there is no evidence that Stalin developed the offensives. The counteroffensive bogged down, in part due to mud from rain in the spring of 1942.
Stalin's attempt to retake Kharkov in the Ukraine ended in the disastrous encirclement of Soviet forces, with over 200,000 Soviet casualties suffered. Stalin attacked the competence of the generals involved. General Georgy Zhukov and others subsequently revealed that some of those generals had wished to remain in a defensive posture in the region, but Stalin and others had pushed for the offensive. Some historians have doubted Zhukov's account. At the same time, Hitler was worried about American support after their entry into the war following the attack on Pearl Harbor, and about a potential Anglo-American invasion on the Western Front in 1942 (which did not occur until the summer of 1944). He changed his primary goal from an immediate victory in the East to the longer-term goal of securing the southern Soviet Union to protect oil fields vital to the German war effort. While Red Army generals correctly judged the evidence that Hitler would shift his efforts south, Stalin thought it a flanking move in the German attempt to take Moscow. The German southern campaign began with a push to capture the Crimea, which ended in disaster for the Red Army. Stalin publicly criticized his generals' leadership. In their southern campaigns, the Germans took 625,000 Red Army prisoners in July and August 1942 alone. At the same time, in a meeting in Moscow, Churchill privately told Stalin that the British and Americans were not yet prepared to make an amphibious landing against a fortified Nazi-held French coast in 1942, and would direct their efforts to invading German-held North Africa. He pledged a campaign of massive strategic bombing, to include German civilian targets. Estimating that the Russians were "finished," the Germans began another southern operation in the fall of 1942, the Battle of Stalingrad. Hitler insisted upon splitting German southern forces in a simultaneous siege of Stalingrad and an offensive against Baku on the Caspian Sea. Stalin directed his generals to spare no effort to defend Stalingrad. Although the Soviets suffered in excess of 1.1 million casualties at Stalingrad, their victory over German forces, including the encirclement of 290,000 Axis troops, marked a turning point in the war. Within a year after Barbarossa, Stalin reopened the churches in the Soviet Union. He may have wanted to motivate the majority of the population who had Christian beliefs. By changing the official policy of the party and the state towards religion, he could engage the Church and its clergy in mobilizing the war effort. On September 4, 1943, Stalin invited the metropolitans Sergius, Alexy and Nikolay to the Kremlin. He proposed to reestablish the Moscow Patriarchate, which had been suspended since 1925, and elect the Patriarch. On September 8, 1943, Metropolitan Sergius was elected Patriarch. One account holds that Stalin's reversal followed a sign that he supposedly received from heaven. According to this account, Ilya, Metropolitan of the Lebanon Mountains, claimed to have received a sign from heaven that "The churches and monasteries must be reopened throughout the country. Priests must be brought back from imprisonment, Leningrad must not be surrendered, but the sacred icon of Our Lady of Kazan should be carried around the city boundary, taken on to Moscow, where a service should be held, and thence to Stalingrad (Tsaritsyn)." Shortly thereafter, Stalin's attitude changed. Radzinsky wrote: "Whatever the reason, after his mysterious retreat, he began making his peace with God.
Something happened which no historian has yet written about. On his orders many priests were brought back from the camps. In Leningrad, besieged by the Germans and gradually dying of hunger, the inhabitants were astounded, and uplifted, to see the wonder-working icon of Our Lady of Kazan brought out into the streets and borne in procession." Radzinsky asked, "Had he seen the light? Had fear made him run to his Father? Had the Marxist God-Man simply decided to exploit belief in God? Or was it all of these things at once?"
Soviet push to Germany
The Soviets repulsed the important German strategic southern campaign and, although 2.5 million Soviet casualties were suffered in that effort, it permitted the Soviets to take the offensive for most of the rest of the war on the Eastern Front. In 1943, Stalin acceded to his generals' call for the Soviet Union to take a defensive stance because of disappointing losses after Stalingrad, a lack of reserves for offensive measures and a prediction that the Germans would likely next attack a bulge in the Soviet front at Kursk, such that defensive preparations there would use resources more efficiently. The Germans did attempt an encirclement attack at Kursk, which was successfully repulsed by the Soviets after Hitler canceled the offensive, in part because of the Allied invasion of Sicily, though the Soviets suffered over 800,000 casualties. Kursk also marked the beginning of a period in which Stalin became more willing to listen to the advice of his generals. By the end of 1943, the Soviets occupied half of the territory taken by the Germans in 1941-1942. Soviet military industrial output had also increased substantially from late 1941 to early 1943 after Stalin had moved factories well to the east of the front, safe from German invasion and air attack. The strategy paid off, as such industrial increases were able to occur even while the Germans in late 1942 occupied over half of European Russia, including 40% (80 million) of its population and c. 2,500,000 square kilometers of Russian territory. The Soviets had also prepared for war for over a decade, including giving 14 million civilians some military training. Accordingly, while almost all of the original 5 million men of the Soviet army had been wiped out by the end of 1941, the Soviet military had swelled to 8 million members by the end of that year. Despite substantial losses in 1942 far in excess of German losses, Red Army size grew even further, to 11 million. While there is substantial debate over whether Stalin helped or hindered these industrial and manpower efforts, Stalin left most economic wartime management decisions in the hands of his economic experts. While some scholars claim that evidence suggests that Stalin considered, and even attempted, negotiating peace with Germany in 1941 and 1942, others find this evidence unconvincing and even fabricated. In November 1943, Stalin met with Churchill and Roosevelt in Tehran. Roosevelt told Stalin that he hoped that Britain and America opening a second front against Germany could initially draw 30-40 German divisions from the Eastern Front. Stalin and Roosevelt, in effect, ganged up on Churchill by emphasizing the importance of a cross-channel invasion of German-held northern France, while Churchill had always felt that Germany was more vulnerable in the "soft underbelly" of Italy (which the Allies had already invaded) and the Balkans.
The parties later agreed that Britain and America would launch a cross-channel invasion of France in May 1944, along with a separate invasion of southern France. Stalin insisted that, after the war, the Soviet Union should incorporate the portions of Poland it occupied pursuant to the Molotov-Ribbentrop Pact with Germany, which Churchill tabled. In 1944, the Soviet Union made significant advances across Eastern Europe toward Germany, including Operation Bagration, a massive offensive in Belorussia against the German Army Group Centre. Stalin, Roosevelt and Churchill closely coordinated, such that Bagration occurred at roughly the same time as the American and British initiation of the invasion of German-held Western Europe on France's northern coast. The operation resulted in the Soviets retaking Belorussia and the western Ukraine, along with the effective destruction of Army Group Centre and 300,000 German casualties, though at the cost of over 750,000 Soviet casualties. Successes at Operation Bagration and in the year that followed were, in large part, due to a weakened Wehrmacht that lacked the fuel and armament it needed to operate effectively, growing Soviet advantages in manpower and materials, and the attacks of the Allies on the Western Front. In his 1944 May Day speech, Stalin praised the Western allies for diverting German resources in the Italian Campaign, Tass published detailed lists of the large numbers of supplies coming from the Western allies, and Stalin made a speech in November 1944 stating that Allied efforts in the West had already quickly drawn 75 German divisions to defend that region, without which the Red Army could not yet have driven the Wehrmacht from Soviet territories. The weakened Wehrmacht also helped Soviet offensives because no effective German counter-offensive could be launched. Beginning in the summer of 1944, however, a reinforced German Army Group Centre did prevent the Soviets from advancing around Warsaw for nearly half a year. Some historians claim that the Soviets' failure to advance was a purposeful Soviet stall to allow the Wehrmacht to slaughter members of the Warsaw Uprising by the Polish Home Army in August 1944, which occurred as the Red Army approached, though others dispute the claim and cite sizable unsuccessful Red Army efforts to defeat the Wehrmacht in that region. Earlier in 1944, Stalin had insisted that the Soviets would annex the portions of Poland it had divided with Germany in the Molotov-Ribbentrop Pact, while the Polish government in exile, which the British insisted must be involved in postwar Poland, demanded that the Polish border be restored to its prewar location. The rift further highlighted Stalin's blatant hostility toward the anti-communist Polish government in exile and its Polish Home Army, which Stalin felt threatened his plans to create a post-war Poland friendly to the Soviet Union. Further exacerbating the rift was Stalin's refusal to resupply the Polish Home Army, and his refusal to allow American supply planes to use the necessary Soviet air bases to ferry supplies to the Polish Home Army, which Stalin referred to in a letter to Roosevelt and Churchill as "power-seeking criminals." Worried about the possible repercussions of those actions, Stalin later began a Soviet supply airdrop to the Polish rebels, though most of the supplies ended up in the hands of the Germans.
The uprising ended in disaster, with 20,000 Polish rebels and up to 200,000 civilians killed by Wehrmacht forces; Soviet forces entered the city in January 1945. Other important advances occurred in late 1944, such as the invasion of Romania in August, followed by Bulgaria: the Soviet Union declared war on Bulgaria in September 1944 and invaded the country, installing a communist government. Following the invasion of these Balkan countries, Stalin and Churchill met in the fall of 1944, where they agreed upon various percentages for "spheres of influence" in several Balkan states, though neither leader's diplomats knew what the term actually meant. The Red Army also expelled German forces from Lithuania and Estonia in late 1944 at the cost of 260,000 Soviet casualties. In late 1944, Soviet forces battled fiercely to capture Hungary in the Budapest Offensive, but could not take it, which became a topic so sensitive to Stalin that he refused to allow his commanders to speak of it. The Germans held out in the subsequent Battle of Budapest until February 1945, when the remaining Hungarians signed an armistice with the Soviet Union. Victory at Budapest permitted the Red Army to launch the Vienna Offensive in April 1945. To the northeast, the taking of Belorussia and the western Ukraine permitted the Soviets to launch the massive Vistula–Oder Offensive, where German intelligence had incorrectly guessed the Soviets would have a 3-to-1 numerical superiority that was actually 5-to-1 (over 2 million Red Army personnel attacking 450,000 German defenders); its successful culmination resulted in the Red Army advancing from the Vistula river in Poland to the German Oder river in eastern Germany. Stalin's shortcomings as a strategist are frequently noted regarding the massive Soviet loss of life and early Soviet defeats. An example is the summer offensive of 1942, which led to even more losses by the Red Army and the recapture of the initiative by the Germans. Stalin eventually recognized his lack of know-how and relied on his professional generals to conduct the war. Additionally, Stalin was well aware that other European armies had utterly disintegrated when faced with Nazi military efficacy, and he responded effectively by subjecting his army to galvanizing terror and nationalist appeals to patriotism. He also appealed to the Russian Orthodox Church and to Russian national imagery.
Final Victory
By April 1945, Germany faced its last days with 1.9 million German soldiers in the East fighting 6.4 million Red Army soldiers while 1 million German soldiers in the West battled 4 million Western Allied soldiers. While initial talk existed of a race to Berlin by the Allies, after Stalin successfully lobbied for eastern Germany to fall within the Soviet "sphere of influence" at Yalta, no plans were made by the Western Allies to seize the city by a ground operation. Stalin still remained suspicious that western Allied forces holding at the Elbe river might move on the capital and, even in the last days, that the Americans might employ their two airborne divisions to capture the city. Stalin directed the Red Army to move rapidly in a broad front into Germany because he did not believe the Western Allies would hand over territory they occupied, while he made the capture of Berlin the overriding objective.
After capturing East Prussia, three Red Army fronts converged on the heart of eastern Germany, with one of the last pitched battles of the war putting the Soviets at the virtual gates of Berlin. By April 24, Berlin was encircled by elements of two Soviet fronts, one of which had begun a massive shelling of the city on April 20 that would not end until the city's surrender. On April 30, Hitler and Eva Braun committed suicide, after which Soviet forces found their remains, which had been burned at Hitler's directive. German forces surrendered a few days later. Some historians argue that Stalin delayed the final push for Berlin by two months in order to capture other areas for political reasons, which they argue gave the Wehrmacht time to prepare and increased Soviet casualties (which exceeded 400,000), though this is contested by other historians. Despite the Soviets' possession of Hitler's remains, Stalin did not believe that his old nemesis was actually dead, a belief that persisted for years after the war. Stalin also later directed aides to spend years researching and writing a secret book about Hitler's life for his own private reading, one that reflected Stalin's prejudices, including an absence of criticism of Hitler for his treatment of Jews. Fending off the German invasion and pressing to victory over Nazi Germany in World War II required a tremendous sacrifice by the Soviet Union (more than that of any other country in human history). Soviet military casualties totaled approximately 35 million (official figures 28.2 million), with approximately 14.7 million killed, missing or captured (official figures 11.285 million). Although figures vary, the Soviet civilian death toll probably reached 20 million. Millions of Soviet soldiers and civilians disappeared into German detention camps and slave labor factories, while millions more suffered permanent physical and mental damage. Economic losses, including losses in resources and manufacturing capacity in western Russia and Ukraine, were also catastrophic. The war resulted in the destruction of approximately 70,000 Soviet cities, towns and villages. Destroyed in that process were 6 million houses, 98,000 farms, 32,000 factories, 82,000 schools, 43,000 libraries, 6,000 hospitals and thousands of kilometers of roads and railway track.
Questionable tactics
After taking around 300,000 Polish prisoners in 1939 and early 1940, NKVD officers conducted lengthy interrogations of the prisoners in camps that were, in effect, a selection process to determine who would be killed. On March 5, 1940, pursuant to a note to Stalin from Lavrenty Beria, the members of the Soviet Politburo (including Stalin) signed an order to execute 25,700 Polish POWs, labeled "nationalists and counterrevolutionaries", kept at camps and prisons in occupied western Ukraine and Belarus. This became known as the Katyn massacre. Major-General Vasili M. Blokhin, chief executioner for the NKVD, personally shot 6,000 of the captured Polish officers in 28 consecutive nights, which remains one of the most organized and protracted mass murders by a single individual on record. During his 29-year career Blokhin shot an estimated 50,000 people, making him arguably the most prolific official executioner in recorded world history. Stalin personally told a Polish general requesting information about the missing officers that all of the Poles had been freed, and that not all could be accounted for because the Soviets had "lost track" of them in Manchuria.
After Polish railroad workers found the mass grave, the Nazis used the massacre to attempt to drive a wedge between Stalin and the other Allies, including bringing in a European commission of investigators from twelve countries to examine the graves. In 1943, as the Soviets prepared to retake Poland, Nazi Propaganda Minister Joseph Goebbels correctly guessed that Stalin would attempt to claim falsely that the Germans had massacred the victims. As Goebbels predicted, the Soviets had a "commission" investigate the matter, falsely concluding that the Germans had killed the POWs. The Soviets did not admit responsibility until 1990. On August 16, 1941, in an attempt to revive a disorganized Soviet defense system, Stalin issued Order No. 270, demanding that any commanders or commissars "tearing away their insignia and deserting or surrendering" be considered malicious deserters. The order required superiors to shoot these deserters on the spot. Their family members were subjected to arrest. The second provision of the order directed all units fighting in encirclements to use every possibility to fight. The order also required division commanders to demote and, if necessary, even to shoot on the spot those commanders who failed to command the battle directly on the battlefield. Thereafter, Stalin also conducted a purge of several military commanders, who were shot for "cowardice" without a trial. Weeks after the German invasion began in June 1941, Stalin directed that the retreating Red Army deny resources to the enemy through a scorched earth policy of destroying the infrastructure and food supplies of areas before the Germans could seize them, and that partisans be set up in evacuated areas. This, along with abuse by German troops, caused starvation and suffering among the civilian population that was left behind. Stalin feared that Hitler would use disgruntled Soviet citizens to fight his regime, particularly people imprisoned in the Gulags. He thus ordered the NKVD to take care of the situation. They responded by murdering around one hundred thousand political prisoners throughout the western parts of the Soviet Union, with methods that included bayoneting people to death and tossing grenades into crowded cells. Many others were simply deported east. In July 1942, Stalin issued Order No. 227, directing that any commander or commissar of a regiment, battalion or army who allowed retreat without permission from his superiors was subject to a military tribunal. The order called for soldiers found guilty of disciplinary breaches to be forced into "penal battalions", which were sent to the most dangerous sections of the front lines. From 1942 to 1945, 427,910 soldiers were assigned to penal battalions. The order also directed "blocking detachments" to shoot fleeing panicked troops at the rear. In the first two months following the order, over 1,000 troops were shot by blocking units and blocking units sent over 130,000 troops to penal battalions. Despite having some effect initially, this measure proved to have a deteriorating effect on the troops' morale, so by October 1942 the idea of regular blocking units was quietly dropped. By 20 November 1944 the blocking units were officially disbanded. After the capture of Berlin, Soviet troops reportedly raped German women and girls, with total victim estimates ranging from tens of thousands to two million. During and after the occupation of Budapest, Hungary, an estimated 50,000 women and girls were raped.
Regarding rapes that occurred in Yugoslavia, Stalin responded to a Yugoslav partisan leader's complaints by saying, "Can't he understand it if a soldier who has crossed thousands of kilometers through blood and fire and death has fun with a woman or takes some trifle?" In former Axis countries, such as Germany, Romania and Hungary, Red Army officers generally viewed cities, villages and farms as being open to pillaging and looting. For example, Red Army soldiers and NKVD members frequently looted transport trains in 1944 and 1945 in Poland, and Soviet soldiers set fire to the city centre of Demmin while preventing the inhabitants from extinguishing the blaze, which, along with multiple rapes, played a part in causing over 900 citizens of the city to commit suicide. In the Soviet occupation zone of Germany, when members of the SED reported to Stalin that looting and rapes by Soviet soldiers could have negative consequences for the future of socialism in post-war East Germany, Stalin reacted angrily: "I shall not tolerate anybody dragging the honour of the Red Army through the mud." Accordingly, all evidence of looting, rapes and destruction by the Red Army was deleted from archives in the Soviet occupation zone. Stalin's personal military leadership was emphasized as part of the "cult of personality" after the publication of "Stalin's ten victories", extracted from his speech "The 27th anniversary of the Great October socialist revolution" (Russian: «27-я годовщина Великой Октябрьской социалистической революции»), delivered on 6 November 1944 at the meeting of the Moscow Soviet deputies. According to recent figures, of an estimated four million POWs taken by the Russians, including Germans, Japanese, Hungarians, Romanians and others, some 580,000 never returned, presumably victims of privation or the Gulags, compared with 3.5 million Soviet POWs who died in German camps out of the 5.6 million taken. Soviet POWs and forced laborers who survived German captivity were sent to special "transit" or "filtration" camps meant to determine which of them were potential traitors. Of the approximately 4 million to be repatriated, 2,660,013 were civilians and 1,539,475 were former POWs. Of the total, 2,427,906 were sent home and 801,152 were reconscripted into the armed forces. 608,095 were enrolled in the work battalions of the defense ministry. 272,867 were transferred to the authority of the NKVD for punishment, which meant a transfer to the Gulag system. 89,468 remained in the transit camps as reception personnel until the repatriation process was finally wound up in the early 1950s. During the rapid German advances in the early months of the war, which nearly reached the cities of Moscow and Leningrad, the bulk of Soviet industry that could not be evacuated was either destroyed or lost to German occupation. Agricultural production was interrupted, with grain harvests left standing in the fields; this would later cause hunger reminiscent of the early 1930s. In one of the greatest feats of war logistics, factories were evacuated on an enormous scale, with 1,523 factories dismantled and shipped eastwards along four principal routes to the Caucasus, Central Asian, Ural and Siberian regions. In general, the tools, dies and production technology were moved, along with the blueprints and their management, engineering staffs and skilled labour. The whole of the Soviet Union became dedicated to the war effort.
The population of the Soviet Union was probably better prepared than that of any other nation involved in the fighting of World War II to endure the material hardships of the war, primarily because the Soviets were so used to shortages and to coping with economic crisis, especially in wartime; World War I had brought similar restrictions on food. Still, conditions were severe. World War II was especially devastating to citizens of the USSR because it was fought on Soviet territory and caused massive destruction. In Leningrad, under German siege, over a million people died of starvation and disease. Many factory workers were teenagers, women and old people. The government implemented rationing in 1941 and first applied it to bread, flour, cereal, pasta, butter, margarine, vegetable oil, meat, fish, sugar and confectionery all across the country. The rations remained largely stable in other places during the war. Additional rations were often so expensive that they could not add substantially to a citizen's food supply unless that person was especially well paid. Peasants received no rations and had to make do with the local resources they farmed themselves. Most rural peasants struggled and lived in unbearable poverty, but others sold any surplus they had at a high price, and a few became rouble millionaires until a currency reform two years after the end of the war wiped out their wealth. Despite harsh conditions, the war led to a spike in Soviet nationalism and unity. Soviet propaganda toned down the extreme Communist rhetoric of the past as the people now rallied around the belief that they were protecting their Motherland against the evils of the German invaders. Ethnic minorities thought to be collaborators were forced into exile. Religion, which had previously been shunned, became a part of the Communist Party's propaganda campaign in Soviet society in order to mobilize the religious elements. The social composition of Soviet society changed drastically during the war. There was a burst of marriages in June and July 1941 between people about to be separated by the war, and in the next few years the marriage rate dropped off steeply, with the birth rate following shortly thereafter to only about half of what it would have been in peacetime. For this reason, mothers received substantial honors and monetary benefits during the war if they had a great enough number of children; a mother could earn around 1,300 rubles for having her fourth child and up to 5,000 rubles for her tenth.
Survival in Leningrad
The city of Leningrad endured more suffering and hardship than any other city in the Soviet Union during the war, as it was under siege for 900 days, from September 1941 to January 1944. Hunger, malnutrition, disease, starvation and even cannibalism became common during the siege of Leningrad; civilians lost weight, grew weaker, and became more vulnerable to diseases. Citizens of Leningrad managed to survive through a number of methods with varying degrees of success. Since only four hundred thousand Russians were evacuated before the siege began, this left two and a half million in Leningrad, including four hundred thousand children. More managed to escape the city; this was most successful when Lake Ladoga froze over and people could walk over the ice road, or "road of life", to safety. Most survival strategies during the siege, though, involved staying within the city and facing the problems through resourcefulness or luck.
One way to do this was by securing factory employment because many factories became autonomous and possessed more of the tools of survival during the winter, such as food and heat. Workers got larger rations than regular civilians and factories were likely to have electricity if they produced crucial goods. Factories also served as mutual-support centers and had clinics and other services like cleaning crews and teams of women who would sew and repair clothes. Factory employees were still driven to desperation on occasion and people resorted to eating glue or horses in factories where food was scarce, but factory employment was the most consistently successful method of survival, and at some food production plants not a single person died. Survival opportunities open to the larger Soviet community included bartering and farming on private land. Black markets thrived as private barter and trade became more common, especially between soldiers and civilians. Soldiers, who had more food to spare, were eager to trade with Soviet citizens that had extra warm clothes to trade. Planting vegetable gardens in the spring became popular, primarily because citizens got to keep everything grown on their own plots. The campaign also had a potent psychological effect and boosted morale, a survival component almost as crucial as bread. Many of the most desperate Soviet citizens turned to crime as a way to support themselves in trying times. Most common was the theft of food and of ration cards, which could prove fatal for a malnourished person if their card was stolen more than a day or two before a new card was issued. For these reasons, the stealing of food was severely punished and a person could be shot for as little as stealing a loaf of bread. More serious crimes such as murder and cannibalism also occurred, and special police squads were set up to combat these crimes, though by the end of the siege, roughly 1,500 had been arrested for cannibalism. - Stalin as War Leader History Today - Roberts 1992, pp. 57–78 - Encyclopædia Britannica, German-Soviet Nonaggression Pact, 2008 - Text of the Nazi-Soviet Non-Aggression Pact, executed August 23, 1939 - Christie, Kenneth, Historical Injustice and Democratic Transition in Eastern Asia and Northern Europe: Ghosts at the Table of Democracy, RoutledgeCurzon, 2002, ISBN 0-7007-1599-1 - Roberts 2006, pp. 30–32 - Lionel Kochan. The Struggle For Germany. 1914-1945. New York, 1963 - Shirer, William L. (1990), The Rise and Fall of the Third Reich: A History of Nazi Germany, Simon and Schuster, p. 504, ISBN 0-671-72868-7 - Watson 2000, p. 709 - Michael Jabara Carley (1993). End of the 'Low, Dishonest Decade': Failure of the Anglo-Franco-Soviet Alliance in 1939. Europe-Asia Studies 45 (2), 303-341. - Watson 2000, p. 715 - Watson 2000, p. 713 - Fest, Joachim C., Hitler, Houghton Mifflin Harcourt, 2002, ISBN 0-15-602754-2, page 588 - Ulam, Adam Bruno,Stalin: The Man and His Era, Beacon Press, 1989, ISBN 0-8070-7005-X, page 509-10 - Shirer, William L., The Rise and Fall of the Third Reich: A History of Nazi Germany, Simon and Schuster, 1990 ISBN 0-671-72868-7, page 503 - Fest, Joachim C., Hitler, Harcourt Brace Publishing, 2002 ISBN 0-15-602754-2, page 589-90 - Vehviläinen, Olli, Finland in the Second World War: Between Germany and Russia, Macmillan, 2002, ISBN 0-333-80149-0, page 30 - Bertriko, Jean-Jacques Subrenat, A. and David Cousins, Estonia: Identity and Independence, Rodopi, 2004, ISBN 90-420-0890-3 page 131 - Murphy 2006, p. 
78.02.02 Booker T. Washington and W. E. B. Dubois: The Problem of Negro Leadership This unit was designed for high school students, grades ten through twelve. The unit contains excellent background information on Booker T. Washington and W. E. B. Dubois. Lesson plans and activities could be adapted for fifth graders with additional resources from the library media specialist.
78.02.05 Migration North to the Promised Land This unit, written for seventh grade Social Studies students, discusses the great migration of blacks from the rural South to northern industrial cities. In addition, the unit addresses issues concerning post-Civil War and post-Reconstruction conditions in the South. The unit can easily be adapted for second to fifth grade students. Lessons contain activities centered on human and urban geography, as well as history, political science and economics.
78.02.08 The Social Contributions of the Harlem Renaissance This unit examines the social contributions of Harlem intellectuals during the decade from 1918 to 1929. Intended for middle-school students, it could easily be adapted for middle and upper elementary students, 3-5. The unit gives a beautiful account of the West Indian influence on the Harlem Renaissance. It contains excellent reference material.
78.02.09 Two Controversial Cases in New Haven History: The Amistad Affair (1839) and the Black Panther Trials (1970) This study is designed to make a descriptive comparison of two dramatic revolts for freedom in New Haven. A wealth of information is given in the unit describing the history of the Amistad affair and the Black Panther trials. This unit could be adapted for lower and upper elementary grade levels, 1-5.
80.06.09 Slavery in Connecticut 1640-1848 This unit traces the history of slavery in Connecticut. It contains information that could be adapted to the upper elementary grades, 4-5.
81.01.08 Yet Do I Marvel: A Comparative Study of Black American Literature Unit presents black history through the study of African American authors, focusing on the Harlem Renaissance of the 1920's and the black Revolutionary period of the 1960's. It contains excellent background information. Lesson plans and work sheets could be modified for middle and upper elementary students, 3-5.
85.05.01 Black Emancipators of the Nineteenth Century This unit focuses on emancipators who spoke about abolishing slavery in the United States. Information is given on individuals who staged slave rebellions and who escaped slavery. The emphasis of this unit is to provide students with a more realistic view of slavery in the context of American history. Recommended as a resource for teachers in grades 3, 4 and 5.
85.05.03 Lincoln, the Great Emancipator? Discussing the enslavement of African Americans and how slavery relates to the Lincoln presidency, this unit is a resource for fourth and fifth grade teachers who may desire additional information and perspective.
86.04.09 The Poetry of 20th Century Black America This unit studies African American poetry and its related history to help stimulate and prepare students to write their own poetry. Suggests possible lessons and contains historical information. It is adaptable to grades 3-5.
87.01.05 Slavery: The American Way The unit is intended to give the student a good understanding of the historical events that took place during the development of slavery on the American continent. This unit is easily adaptable for middle and upper elementary students, 3-5. Students will analyze the main reasons that led to the need for slave labor, locate areas on a map, and use reading and writing skills throughout the unit.
87.03.02 Non-Violent Protest Through the Ages The major concepts of this unit include Dr. Martin Luther King's views on non-violence. The unit emphasizes how civil disobedience has long been exercised in protest of unjust laws. This unit could be adapted for middle and upper elementary students and used in conjunction with Project Charlie to help students understand that problems can be solved through non-violence.
87.03.03 Portraits: The Black Experience in American Culture This unit exposes students to black writers, of both fiction and non-fiction, who have written of the black experience in American culture. Students view specific works that examine the black struggle to survive in American culture as well as the search for self-identity. Teachers of any grade level could draw on the valuable background information in the unit. The activities are adaptable for upper/middle elementary students, 3-5.
87.03.07 The Roots of the Afro-American Culture - The Artist Approach The focal point of this unit is how white Europeans tried to force their values on African Americans. The unit provides a type of reference that represents, in large measure, the totality of the past and present life and culture of African Americans in art. The unit contains great background information on African American art for teachers of all grade levels. Lessons are adaptable for all grade levels.
88.02.05 The Insights of American Blacks During the 19th and 20th Centuries in New Haven, Connecticut This unit highlights the contributions of African American individuals and organizations within the New Haven community during the 1880s and early 1900s. Resources include written and oral history accounts and a family tree handout. Recommended for teachers in grades 3, 4, and 5.
89.01.05 The Impact of the Music of the Harlem Renaissance on Society The main focus of this unit is on the people, places, and music of the Harlem Renaissance from 1918 to 1933. The unit contains excellent background information for teachers on the musical heritage of African Americans. Lessons can be adapted for upper elementary students, grade 5.
89.01.13 The Church Community: The Oldest Black Church, Past and Present Although designed for students in grades 5-8, this unit could be adapted to include the lower and middle elementary grades. The main focus of the unit is the church community, specifically that of the African Methodist Episcopal Zion Church founded by James Varick in 1796. The unit also includes African Americans such as Frederick Douglass, Sojourner Truth, and Harriet Tubman.
89.05.08 The Roots of the Modern Day African Americans and the Suggested Motivation For A Bright Future: Actual Experiences of Booker T. Washington, Frederick Douglass and Joseph Sengbe (Cinque) The unit seeks to educate youths about African American roots. It contains interesting historical background information that could be adapted for all elementary grade levels, K-5.
90.02.08 The Amistad Affair: Problem Solving Applied Through Theater This unit is adaptable for middle and upper elementary grade students, 3-5. The unit centers on the Amistad Affair and is divided into eight weekly segments. Each class contains theater games and activities designed to enhance a particular facet of problem solving.
90.03.09 Famous Afro-Americans Historical Sites Recognized by the National Park System Adaptable for middle and upper elementary students, this unit identifies National Historical sites that have been named in honor of famous African Americans. The unit contains a wealth of information about the location, history, and reason why each site was designated as a National Historical site.
90.04.05 How the African American Storyteller Impacts the Black Family and Society Although this unit was developed for sixth graders during Black History Month, the lesson plans are easily adaptable for all grade levels. The main emphasis is on black storytellers as they emerged from slavery to the present. Some of the storytellers included are Maya Angelou, James Baldwin, Winnie Mandela, and Bill Cosby.
90.04.09 The Art and Culture of the Afro-American This ten-week unit is designed to help students learn about the art and culture of the African American. Students build a stronger identity, which can help reduce poor self-esteem, cynicism and apathy. Activities and strategies are adaptable for upper elementary students, grade 5.
90.05.09 American Families: Portraits of African American Families Students in middle and upper elementary grades would profit by this unit. If selections are too difficult, the teacher could read parts aloud in class. The unit begins with a study of the African American family from a historical perspective. It uses the oral tradition as well as various family-related poems.
90.05.10 The Family That Endured: An Historical View of African American Families As Seen through American Literature and Art This unit centers on the historical development of the African American family by fostering the kind of understandings that will allow students to see themselves, their family, and their ancestors as part of an institution whose role is one to be admired. Adaptable for middle and upper elementary grades, the unit involves the use of literature, paintings, photographs, and artifacts to reinforce the concept of the African American family as a positive force.
91.01.07 African American Literature: A Contrast Between North and South Although the literature recommended for 11th grade English students is too difficult for elementary grade children, students in middle and upper grades would find the background history of this unit interesting. The unit covers the black experience in the United States, from the period after the Civil War through the Great Migration to the north, culminating with the Harlem Renaissance of the 1920's.
91.03.01 Langston Hughes: Voice Among Voices Many of the readings are too advanced for elementary grade students; however, they can be read in part or in whole to the students. The background information about Langston Hughes would be of interest to all elementary students.
91.03.02 Prince Hall and His Organization of Black Free Masons in the United States This unit was written for middle and upper elementary grade students. It centers on the story of Prince Hall, organizer of Negro Masonry in the United States. He was also an abolitionist and a spokesman against all of the conditions that made the circumstances of Negroes intolerable and nonproductive.
91.03.04 Use of John Johnson's Life Story in Conjunction With Other Black Entrepreneurs as Role Models for Potential Black Businessmen The unit centers on the lives of Mary McLeod Bethune, Booker T. Washington, and Jake Simmons, Jr. It compares the role models that each person followed in their search for success. Although written for high school students, middle and upper elementary students would profit by this unit.
91.03.08 Dark Voices From Unmarked Graves Although the narratives chosen are too advanced for elementary students, they could be told or read to middle and upper grades. The unit focuses on samplings of oral and written testimony about slavery from slaves as they experienced it and from former slaves as they remembered it. The narratives are analyzed and reenacted in the classroom.
91.03.09 Building Dreams - Who Is There to Help You? This unit centers on two texts, Roll of Thunder, Hear My Cry and Maggie's American Dream. Although the unit is geared for fifth grade students, it could be adapted for lower and middle elementary students, grades 1-3, where the teacher reads or tells the stories to the students. The lesson plans contain many questions for analysis of the stories and role-playing activities.
91.03.10 Amazing Grace The unit presents specific events in the lives of Maya Angelou, the famous author, and James Comer, the psychiatrist. The term "Jim Crow" is discussed so that students will have background knowledge for understanding the materials presented. The unit is adaptable for middle and upper elementary students, 3-5.
92.03.02 Tales from the City The Harlem Renaissance is the focus of this unit. The information available can be adapted to grades 4 and 5. Literature of the Harlem Renaissance is the main topic of discussion.
92.03.04 Cathedrals, Pyramids and Mosques African history is discussed in this unit. The lessons and activities are best used in the upper grades, but some can be adapted to grades 4 and 5. This unit contains good background information for teachers.
92.03.07 Recognizing Voice and Finding your Own Voice in Writing about the City This unit is best described as a resource for teachers in grades 4 and 5. The Harlem Renaissance is discussed. Literature from the period of the Harlem Renaissance is included.
92.04.02 Colonial Living: A Look at the Arts, Crafts, History and Literature of Early Americans This unit covers many commonly overlooked aspects of colonial life as well as everyday colonial activities. It is written for students in the upper grades but can be adapted to grades 4 and 5. The attitude of African Americans toward slavery in the United States is also addressed.
95.04.02 Literature and Art through Our Eyes: The African American Children This unit uses prose, poetry, and art to examine the issues of self-awareness, family, community, and friends primarily as they relate to African American children. Integrated approach. The unit is suitable for grades 2-5.
96.01.01 An Analysis of Jim Crow Laws and Their Effect on Race Relations Designed for first grade, this unit could easily be adapted to higher elementary grades. Focuses on the effects of discriminatory laws. Literature based, emphasizing decision-making and self-awareness. Lessons include a mock segregation role-play, a poetry lesson, and a discussion of Dr. King's role as a hero. Relates well to social development curriculum.
96.01.02 Langston Hughes: Artist and Historian Unit uses the poetry of Langston Hughes to study the African American experience in the United States from the 1920's to the 1960's. Also uses works of art and photography to understand discrimination against African Americans, Japanese, and the poor. Though designed for grade 6, the material could be adapted for use in upper elementary grades.
96.01.04 Justice Demands an End to Segregation, But it Does Not End Designed for grades 7-8, this unit emphasizes the course of civil rights and civil liberties for African Americans in the United States from 1954 to 1964. Contains historical information that might be of value to elementary teachers. Some activities might be adapted for grades 4-5.
96.01.07 A New Generation of Fighters Though designed to inspire middle school students to stand up against racism, this unit contains some material and activities that could be adapted to elementary use. Contains a lengthy account of Robert Coles, who worked with Ruby Bridges.
96.03.02 Using Film and Literature to Examine Uncle Remus: A Comparison and Analysis of the Film Song of the South Designed for a second grade, this unit centers on the Disney film Song of the South, using the Uncle Remus figure to develop a more accurate picture of slavery and of the importance of storytelling during this period. Elements could be applied to any elementary grade. Interdisciplinary approach.
96.03.05 Recognizing Stereotypical Images of African Americans in Television and Film For a fifth grade, which is probably its lowest grade limit, this unit uses television and film to help students first understand stereotyping and then recognize its presence in film and on TV as a force that can have damaging effects. Activities and discussions are thought-provoking. The unit is part of a school team.
97.02.05 Examining African American Culture through the Use of Children's Literature This unit examines African American culture by using children's literature. It emphasizes building self-esteem and positive relationships. Interdisciplinary. The unit is suitable for grades 2-5. Relates well to social development goals.
97.02.08 Celebrate a People Unit includes a picture book resource listing and ways to incorporate Afrocentric literature into the curriculum. Interdisciplinary. Aimed at grades K-2.
97.03.01 Struggle of Black Women Some parts of this unit could be used with upper elementary students to help build self-esteem and to develop an understanding of the obstacles faced by black women. Relates well to social development curriculum.
97.05.03 How to Blues Students' understanding of African American culture is broadened through a study of the history, philosophy, and performers of the blues. Suggests related musical activities in which students might participate. This unit is suitable for grades 4-5.
97.05.04 Finding the Rhythm of Blues in Children's Poetry, Art, and Music This unit, a language-based, integrated approach, focuses on slavery in the United States along with the blues ideology. Focus on music. It is suitable for grades 1-5.
97.05.07 Sing Two Stanzas and Rebel in the Morning: The Role of Black Religious Music in the Struggle for Freedom This unit examines the role of black religious music in the African American struggle for freedom and civil rights. Uses an integrated approach. Focus on music. It is suitable for grades 4-5.
97.05.08 Building Character: Remaining Resilient, Resourceful, and Responsible in the Face of Adversity In this unit, students are exposed to the blues culture as a means of understanding and appreciating the African American struggle. Focus on music. The unit is suitable for grades 4-5.
98.01.03 Slavery of Africans in the Americas: Resistance to Enslavement Although written for grades 6-8, this unit can easily be adapted for grades 1-5. The unit uses film and other media to show the various ways African and African American slaves resisted their enslavement in the Americas. There is special emphasis on slave songs and the maroon societies formed by slaves.
98.01.05 A Film and Literature Study of the African American Migration Written for grade 2, the unit is easily adaptable for grades 1-6. The unit uses films such as The Promised Land, Goin' to Chicago, and The Killing Floor to convey the messages of the migration movement. Students demonstrate their understanding of the migration movement through written works, discussions, and illustrations. The unit includes a great page of possible activities to accompany the study.
98.01.09 Discrimination and the Struggle for Equality: African Americans in Professional Baseball: A Reflection of the Civil Rights Movement This unit, developed to build an understanding of Black baseball and the Negro Leagues as they existed in the United States during the days of segregated professional baseball, could be adapted for all elementary grades, K-5. The study examines African American history from slavery through the Civil Rights Movement. The unit includes topics for discussion and awareness such as family events, showmanship, the Black press, and segregation. Lesson plans are detailed, loaded with activities, and integrate curriculum areas.
98.02.03 African Myths and What They Teach Written for grade three, this unit can easily be adapted for grades 1-5. The unit presents some of the many myths that deal with nature, human behavior, and creation. The stories presented are used to discuss with children ideas about friendship, manners, and scientific truths, which they can apply to their own lives. The unit contains a list of integrated activities that can be used with the myths.
98.02.04 Three African Trickster Myths/Tales: Primary Style Using an interdisciplinary approach, this unit can be used in grades K-5. The unit presents many teacher-ready work sheets that can be used in grades 1-4. Myths included in the unit are "Anansi's Rescue from the River," "Ijapa and Yanrinbo Swear an Oath," and Zomo the Rabbit.
81.02.05 China: Portrait of Change Presented to high school students, this unit could be adapted for any elementary classroom curriculum regarding the history of China. This is a very resourceful unit on the geography, history, and culture of China. However, a word of caution: there have been many changes in China since 1981.
82.02.08 His Story/Her Story/Your Story This unit utilizes autobiographies and biographies in an effort to inform students of African American history. Key topics are social conditions, personal experiences, and the economics of African American communities. Recommended for grades 4 and 5.
82.05.03 Multicultural Education: A Calendar of Ethnic Festivals and Celebrations Students participating in this unit's lessons will have hands-on experiences in a variety of ethnic festivals and celebrations through music, dance, and folk craft instruction. Multi-ethnic foods are also prepared through this unit. A very adaptable unit for grades Kindergarten through 5, it can also encourage parent and community involvement.
82.06.10 Family Life in America: Past, Present and Future The history of the American family is the focus of this unit. Its discussion of how the definition of the American family has been altered as the nation grew is a resource for teachers studying family life and American history. This unit can be adapted to grades 4 and 5.
88.02.01 The Cajuns: Natives with a Difference! This unit, written specifically for students studying the French language, examines the history and culture of the Acadians and Cajuns who settled in Louisiana. The students explore how the French culture of the Acadians and the Cajuns has influenced the culture of the United States. Activities span literature, writing and art. Recommended as a resource for grades 4 and 5.
92.02.04 Adventure in the Caribbean: Effects of the Discovery of Haiti, Martinique and Guadeloupe This unit is a resource for lower grade teachers. The history of Haiti, Martinique and Guadeloupe is discussed. Teachers in grades 4 and 5 will appreciate the lessons and activities. This is great for diversity projects.
92.02.05 Rediscovering the Aztec Indians The Aztec Indians are the focus of this unit. Teachers in grades K-5 can easily use the information relating to history; however, the lessons and activities are better suited for grades 4 and 5. This unit can also be applied to the arts.
98.01.08 Teaching Ethnicity and Race through Films Upper elementary grades, 4-5, could benefit from this unit, written to teach ethnicity and race through films. The unit presents movie reviews and discussion questions for five films about ethnicity: Far and Away, Avalon, A Bronx Tale, The Long Walk Home, and Mi Familia. The unit concludes with a brief discussion of inaccuracies and misimpressions in Hollywood film.
98.02.05 Universal Myths and Symbols: Animal Creatures and Creation Using an interdisciplinary approach, this unit was written for second grade children but can be adapted for grades K-5. The unit brings myth and its language to today's generation by exploring the immense wealth of mythological creation stories. The lessons focus on the role that animals play in the stories and take a close look at the Phoenix as representative of mythological creatures.
98.05.01 Who's Who in America? Multicultural Achievers A to Z: Past and Present This unit is interdisciplinary in approach and includes areas such as reading, science, art, writing, and some physical activities. Although written for grade K, this unit can easily be adapted to include all elementary grades, K-5. The main objective of this unit is to help children celebrate the achievements of individuals of different ethnic groups by focusing on contributions made in the fields of music, sports, science, etc. The unit provides children the opportunity to read about the dreams, aspirations, and goals of people who were once children like themselves.
80.06.08 Puerto Rican Cultural Differences in Politics Though aimed at older students, this unit contains background information that could be adapted to upper elementary classrooms, grades 4-5.
81.01.05 The Hispanic View of the Urban Setting Presented to advanced students in the Bilingual Program, the unit encompasses selected readings, class discussion, trips, films and lectures. Although written for a high school Bilingual class, the unit could be adapted for middle and upper elementary grade students, 3-5. Gives great information on the Puerto Rican migration to the city.
84.03.02 Hispanic Immigrants: Trials and Tribulations Although written for high school Spanish classes, parts could be extracted in connection with any unit about Hispanics and their neighborhoods. One interesting notation: Little Italy of New York City is mentioned in the unit. Today Little Italy is surrounded and largely taken over by Chinatown; it remains a tourist attraction.
84.03.03 Pre-Columbian Mythology The unit on Pre-Columbian mythology describes the culture of the Mesoamerican Indians through art and myths. Two legends are adapted into skits. There is great background information about the history of this era for any grade level teacher. Parts could be extracted for any grade level, K-5.
84.03.07 Latin American Women The unit discusses differences between males and females in Latin culture, with emphasis on women writers. Good resource material for those studying Latino culture in the middle and upper elementary grades, 3-5.
84.03.08 The Art of the Puerto Rican People Through the use of art, the history of the Puerto Rican people from Pre-Columbian times to today is emphasized in this unit. History and art lessons can easily be adapted for all elementary grades. Some of the materials may be difficult to acquire (e.g., slide sets and paintings).
86.02.01 Spain in Puerto Rico: The Early Settlements Unit contains a considerable amount of information on the history, geography, and major cities of Puerto Rico, which both bilingual and regular education elementary teachers could use at almost any level.
86.02.03 1986 Capsule: Hispanic Influence in the New World This unit contains a considerable amount of information on Hispanic history and influence in the New World. Elementary teachers at any grade level could adapt some of this material to supplement Hispanic and/or diversity studies.
87.01.01 An Analysis of "The Highroad of Saint James" Although written and based on literature for eighth grade students, the unit could be adapted for upper elementary students. "The Highroad of Saint James" is an allegorical pilgrimage set against the historical background of sixteenth century Flanders, France, Spain, and Cuba. Throughout the story, Juan, a common name chosen to represent the common man, gives us a picture of the history and culture of Cuba.
87.01.02 Studio Art Lessons Based on Latin American Art and Crafts A beautiful unit based on the arts and crafts of Latin American countries that could be adapted for middle and upper elementary students, 3-5. Many examples are given through pictures and narrative, along with detailed instructions for making crafts in the classroom.
87.01.04 Puerto Rico…Its Land, History, Culture, and Literature Presents an overview of the geography, history, and culture of Puerto Rico and then focuses on its literature. Could easily be adapted for middle and upper elementary students. Gives a wealth of information on the culture of Puerto Rico.
87.01.06 Improving Thinking Skills of Spanish Learning Disabled Students Through the Analysis of Latin-American Short Stories The unit helps students understand the need for comparison in any life situation. There are many excellent work sheets and activities listed that can be adapted for middle and upper elementary grade students, 3-5. Parts could be adapted for lower elementary students, 1-2. The unit covers two short stories, one from Central America and one from Puerto Rico. The main emphasis is on Christmas in Costa Rica and Puerto Rico.
89.03.02 The Heritage of Puerto Rico and Cuba The unit includes a comparative study of Puerto Rico and Cuba and their relation to the rest of the Caribbean. It gives an overview of the contributions of Latin American culture in the United States, the histories of Puerto Rico and Cuba, and writers and their times. Can be adapted for upper elementary grade students, grade 5.
90.01.07 In Search of the "Yo Latino - Americano" Although written primarily for Spanish classes, the unit contains a wealth of information for any teacher searching for material about the Spanish-speaking world. Hands-on activities can be used with upper elementary students.
91.02.06 The Heritage and Culture of Puerto Ricans This beautiful unit can easily be adapted to include lower elementary grade children. The unit is intended to provide students with opportunities to learn more about Hispanic people through a study of the heritage and culture of Puerto Ricans. Lesson plans include hands-on activities and work sheets.
92.02.03 The Culture of Conquest in the Modern World This unit contains information on Latin American culture. It can be easily integrated with most diversity curricula. The lessons, resources and activities are recommended for grades K-5.
92.02.06 Dividing the Spoils: Portugal and Spain in South America Covering basic Latin American history, this unit can be used as a resource for all grade levels. The lessons and various other aspects of the unit are best suited for grades 4 and 5.
97.01.01 Chicano and Puerto Rican Literature Designed for middle school Spanish classes, this unit places its emphasis on Latina writers, particularly Puerto Rican and Mexican American women. Includes information on Chicano literature and authors. Portions of the suggested literature, the background information, and possible lessons could be modified for some upper elementary classrooms.
97.01.02 Reflections in a Latin American Mirror Unit introduces students to Cuba, the Dominican Republic, Haiti, Puerto Rico, Guatemala, Mexico, and Chile through poetry, folklore, and contemporary fiction. Interdisciplinary approach. This unit is suitable for any elementary group.
97.01.04 A Close Look at Mexico Designed for grades 2-4, this unit uses hands-on activities to help children understand the culture of Mexico. Interdisciplinary approach. Has something for any elementary level.
97.01.06 Twentieth Century Latin American Writing: Books, Stories, Folktales, Poetry, and More This unit uses stories, folktales, poetry, rhymes, and songs to teach students about Hispanic culture. Integrated approach. Aimed at grades 2-4.
97.01.10 Short Novels, Stories, and Poetry of the Latin Americas This unit suggests short novels, stories, and poetry from contemporary Latin American authors to be used with elementary students. Interdisciplinary approach. Suitable for grades K-5.
97.02.06 Understanding Hispanic/Latino Culture and History through the Use of Children's Literature This unit uses children's literature to develop a better understanding and appreciation of Hispanic/Latino culture and history. Interdisciplinary approach. Recommended for grades 3-5.
98.01.07 Heroes and Villains of the Rain Forest: Latin American History through Film The unit was written for grades 7-12, but can be adapted for upper elementary grades 4-5. Ten historical films set in rain forests serve as guides to Latin American history. Topics included in the unit are Discovery and Conquest, Political Divisions, In the Name of God, The Fate of the Indigenous Peoples, The Haves and the Have Nots, and The Burden of Eternal Vigilance. Lesson plans give many suggestions for student analysis.
99.02.07 Broken Shields/Enduring Culture Although written for sixth grade students, the unit can be adapted to include grades 3-5. The unit is divided into two sections. Part one is Picturing the World. Students work with compasses and then label the walls of the classroom with the four directions of the compass written in a number of languages. In addition, students work with maps, marking their own trip to Mesoamerica with Maya and Aztec 'glyphs.' They make screenfold books in which, like the Maya and the Aztec, they can record important information. Part two is Living in the World. Students construct a small Maya village, exploring its ecology, and begin research projects on Aztec as well as Maya topics, culminating in a fiesta for families or other classes in which they present their work.
88.02.03 Immigration into an Urban Industrialized Northeast: 1879-1914 This unit is a comparative study of the conditions that lured Italians, Slavs and African Americans to the industrialized urban centers of the United States. Examining the manner in which these people were received by the population already present, this unit supplies significant background information. Although the overall subject matter is meant for older students, this unit can be used as a resource for grades 4 and 5.
88.02.06 The American Experience Through the study of immigration in America, students will become aware of the difficulties faced by immigrants in the past and present. This unit relates to the cultural heritages of most New Haven students and contains significant background information for teachers. Recommended as a resource for grades 4 and 5.
90.05.07 Irish Immigrant Families in Mid-Late 19th Century America The strategies in this unit include reading first-hand accounts of immigrants' lives through diaries, letters, ballads, and songs. Although written for high school students, some of the literature could be read to middle and upper elementary students (grades 3-5) and used for student discussion and writing. Students imagine themselves as Irish immigrants coming to America and describe their experiences in a personal journal.
96.04.06 Coming to America: Opportunities, Risks, Consequences Using an interdisciplinary approach, this unit explores the opportunities and challenges encountered by immigrants. Contains pupil-engaging activities. Recommended for grades 3-5.
96.04.07 Crossing the Border, A Study of Immigration through Literature Unit attempts to help upper elementary students understand the history, challenges, and contributions of immigrants to the United States. This unit contains Cloze reading and language arts connections. Interdisciplinary approach.
96.04.09 Moving Communities and Immigration into the Bilingual Classroom In order to develop an understanding of the immigrant experience, this unit examines the factors that accompany immigration. Families share in some activities. It is suitable for most elementary situations. Interdisciplinary approach.
96.04.11 Footsteps to Liberty: A Journal Journey Through a variety of activities that should easily involve pupils, this unit follows the steps taken during immigration. Interdisciplinary approach. It is suitable for grades 1-5.
99.03.02 Those Who Built New Haven This unit allows students to obtain detailed knowledge of their city's history through the story of those who immigrated and worked here. Students explore the struggles and triumphs of some of the diverse groups who have contributed to New Haven over the past three hundred and fifty years. The unit focuses upon the unique nature of the immigration experience for individuals and ethnic groups within New Haven. The unit contains excellent work sheets and reference materials for teachers and students along with background information from John Davenport to Frank Pepe. The unit also contains an interesting field trip to Judge's Cave on West Rock. The unit is recommended for grades 4-12.
99.03.07 The Non-Immigrant Immigrants: Puerto Ricans The purpose of this unit is to present some of the ingredients that have gone into making the rich culture of Puerto Rico, the forces that have caused Puerto Ricans both to leave the island and to return to it, and the resiliency that Puerto Ricans must have to preserve their identity as a people. Although the unit is recommended for grades 11 and 12, it contains a wealth of information that any classroom teacher would benefit from when using a unit on Latino culture.
99.03.08 St. Patrick – Symbol of Irishness This unit introduces students to the story of St. Patrick and the history of the Irish in America by exploring the history of New Haven's St. Patrick's Day parade. The unit is recommended for grades 5-8 and gives excellent background information for teachers and students.
84.02.08 The Athenian Court and the American Court System This unit gives an overview of the Athenian and the U.S. court systems. The Athenian courts discussed in this unit existed in the second half of the fifth century and the first half of the fourth century B.C. Parts could be extracted for upper elementary students, grade 5.
87.03.08 The United States Constitution and Selected Amendments The unit provides the origin and history of the Declaration of Independence, the Constitution, the Bill of Rights, and selected additional amendments. Students are involved in the process of thinking, writing, and rethinking. The unit contains great background information for teachers of all grade levels. Lessons are adaptable for middle/upper elementary students, grades 3-5.
89.01.09 "Lynch Law" - An American Community Enigma This unit contains a lot of useful background information concerning lynching in American history. It does not contain lesson plans adaptable for the elementary grades, but teachers could use the material in writing their own.
95.03.03 Understanding Criminal Justice Activities in this unit help students learn about crime, our justice system, and Constitutional rights. Designed for grades 4-6, but elements can be used at all levels.
95.03.06 Living to Avoid the Criminal Justice System Using an integrated approach, this unit focuses on conflict resolution, interpersonal relationships, and parental involvement. It attempts to develop the skills needed to avoid later problems with the law. Relates well to social development curriculum. Suitable for grades K-5.
95.03.08 You and the Law - Beating the Odds Though generally aimed at older students, this unit contains some activities applicable to elementary students. Relates well to social development curriculum.
96.01.06 Multicultural Issues and the Law: Gender and Race Based Schooling Designed for middle and high school students, this unit discusses the pros and cons of schooling based on gender and/or race segregation and its relationship to the law. Contains information that might be adapted to a fourth or fifth grade classroom.
96.01.08 The Impact of Culture on United States Law This unit examines the clashes that sometimes occur between religion and the law. Some of the material covered might be used in upper elementary grades to develop an increased understanding of diversity, but most of the unit does not apply to the elementary grades.
96.01.12 Affirmative Action Debate This unit discusses the history and the pros and cons of affirmative action. Activities and general content are suitable for upper elementary grades 3-5. Relates well to social development curriculum.
96.01.14 Why Do We Have to Suffer from the Rights of Others? This unit discusses positive values and conflict resolution as solutions to some of society's problems. Unit would fit equally well in the area of social development studies. Designed for grades K-8.
98.04.01 Democracy in Action Written for grades K-4, this unit can be adapted to include grade 5. Each week students will be introduced to a new American political thinker such as Cesar Chavez, Jane Addams, Sojourner Truth and Harriet Beecher Stowe. Students will solve problems at class meetings and engage in dramatics as their favorite political thinker. The students will be introduced to the art of murals and marionettes. Included in the unit is a section for making a marionette.
78.03.01 The American Economy This unit could be adapted for middle and upper elementary students. The unit seeks to have students gain a basic understanding of how our economic system works. Good background information and activities (e.g., visit a loan officer at a local bank and apply for a loan).
80.02.03 Comic Books: Superheroes/Heroines, Domestic Scenes, and Animal Images Though designed for middle school students, this unit examines the history of the comic book. The unit offers opportunities to integrate with art. Could be interdisciplinary in approach for upper elementary grades.
81.01.02 Madras (India) and Boston - A Comparative Study and Analysis This is an interesting unit that gives a brief history and background of each great city - Madras and Boston. Students develop parallel comparisons and differences through suggested readings. Designed for eighth grade students. Could be adapted for middle and upper elementary grade students, 3-5.
81.01.06 Past and Present New York through a Comparative Study of Photography and Poetry Although written for high school students, ideas could be extracted, modified and adapted for any elementary class. Good background information on photography and the history of New York City.
81.02.02 Pirates, Pieces of Eight, and Pacific Nights The unit is based upon Robert Louis Stevenson's writings: Treasure Island, Kidnapped, A Child's Garden of Verses, and The Strange Case of Dr. Jekyll and Mr. Hyde. Although the literature is geared for eighth grade students, the topic of pirates is intriguing to elementary school students. Perhaps portions could be read to middle and upper elementary school students, 3-5.
82.05.05 Countries of South America In this unit, students investigate the major countries of South America. The investigation includes population, natural resources and economics. The unit's strong emphasis on geography lends itself to grades 2-5 as a resource.
84.02.07 The Grouch by Menander - An Example of Greek New Comedy Students and teachers alike can enjoy the slapstick humor of Greek New Comedy, originally presented 2,000 years ago, as if it were written yesterday. The unit contains background material on Menander's life and on Greek Old, Middle, and New Comedy. Adaptable for upper elementary students, grade 5.
85.04.04 The Geophysics and Cultural Aspects of the Greater Antilles This unit examines the influence of geography upon a region's culture. The emphasis of this unit is the West Indies and the surrounding islands. Students will gain knowledge in the use of topographical maps and connect the information to the ethnic groups that migrated to the West Indies. Recommended as a resource for teachers in grades 4 and 5.
85.06.02 Mexican Culture Taught through the Aztec Calendar This unit uses a study of the Aztec calendar to add a different perspective to the standard Spanish language class curriculum. Six calendar designs and a detailed chart of the eighteen months of the calendar are included. Information regarding ceremonies that coincide with the various months of the Aztec year is also provided. Recommended for grades 3, 4 and 5.
85.07.04 Dinosaurs: Here Yesterday, Gone Today This unit focuses on the history of dinosaurs. Adaptable to all grade levels, it outlines and details the history of dinosaurs on the earth. Kindergarten through grade 5 teachers will find this unit useful in their study of dinosaurs and the earth at the time they existed. This unit is a good resource for those teachers interested in visiting the Peabody Museum's collection of dinosaurs, or any other museum collection of dinosaurs.
90.01.05 Parallel Studies of the Afro-American and Puerto Rican Experience in America This unit examines the commonality of experiences for both the African American and Puerto Rican cultures. The emphasis is on New York because it became the focal point of settlement for both groups. It is easily adaptable for all elementary grade levels, K-5.
90.02.07 "Come - Alive" Social Studies; A Study of Cultures through Play - Writing This unit, written for upper elementary students, combines play writing and the dramatic arts with the study of the river cultures of ancient Egypt. The unit contains a study of Egyptian myth, economic life, politics, history, and arts. It culminates in a performance for a school audience.
90.02.11 Melting Pot Theater: Teaching for Cultural Understanding This is a great unit easily adaptable for all elementary grade level students of all abilities, K-5. The unit contains an abundance of activities and games to teach dance, drama, and music. Three areas are used to study culture in the unit: the U.S.A. (Puerto Rico), the U.S.S.R. (Russia), and Africa (Ghana).
91.01.03 "Carefully Taught": The Effect of Regions on Prejudice A beautiful unit that can easily be adapted to include all elementary grade students. This unit is an oral history project focused on the topic of regional expression of prejudice. Classes in the north and south exchange oral histories of older people. Results are tabulated in charts for comparison.
91.02.01 History through Fashion Although written for middle school foreign language students, middle and upper elementary students (grades 3-5) would profit by this unit. Fashion and art are used to make the study of France more interesting.
91.02.04 Family Life Among the Ashanti of West Africa Although written for high school students, middle and upper elementary students would enjoy the study of the Ashanti culture. The unit focuses on family customs among the Ashanti, a tribal group living in Ghana, near the West African coast. Interesting cultural beliefs and practices lend themselves to role-playing situations in the classroom.
91.02.07 The Inuit Family: A Study of its History, Beliefs, and Images This unit is easily adaptable for any elementary grade level student. The unit is designed to understand the Inuit (Eskimo), their belief system, and activities related to the Inuit Culture. Lesson plans contain many interesting hands-on activities. 90.03.01 The Wilderness Concept - our National Parks, History and Issues The unit centers on environmental issues, suitable for high school students. This is of interest to elementary teachers and students would part three, which deals with urban park areas accessible to our students. Here, an examination of issues and problems, which urban park areas face, is given. From this section, many ecological as well as management questions can be raised. 90.03.02 Outdoor Museums: History and Parks The underlying purpose of this unit is to provide students with an awareness of our National Parks, and how they exist and reflect American values regarding resources, unique landscapes and our history. Woven into the prose are interesting myths, including stories and lots of information about sleeping Giant State Park. The unit is adaptable for all grade levels, K-5. 90.03.04 The Ingredients Needed for a Musical National Park Adaptable for upper elementary students, (grade 5), the unit's major focus is the National Parks System and how it should go about selecting a musical unit. There is also a great deal of information on the correlation of music, geography, and social studies. 90.03.06 Introducing Children to National Parks The unit is adaptable for middle and upper elementary students, and introduces students to the United States National Park System. The unit shows historical and chronological views of the creation of National parks with the use of videos and National Park Handbooks. 90.03.07 Presidents in the Parks This unit contains descriptions of national parks, memorials, monuments, and sites dedicated to those Presidents who made significant contributions to United States of America. The unit contains a wealth of information describing the Presidents' accomplishments while they held office. The unit is adaptable for all grade levels, K-5. 90.03.10 Regionalism as Seen through the National Parks This unit was written to introduce 5th grade students to the National Park System while studying the regions of the United States. Specific parks are discussed and related to the history, geography, or culture of that region. Parts can be easily adaptable for all grade levels. The unit contains a nice list of National Parks and their addresses. 80.02.02 The Native American: Through the Eyes of His Mask with a Special Focus on the Indians of Connecticut This unit could easily fit with a study of Connecticut's history. It investigates Connecticut Indian artifacts, as well as the masks of the Northwest Coast and Iroquois Indians. Contains hands-on activities. There is emphasis on mask making. Best suited for grades 3-5. 89.05.05 A Different Approach for a Special Child This unit gives a beautiful overview of the early Indians of Puerto Rico and United States. Customs and life styles are contrasted and analyzed as students participate in discussions, reading, and writing. Lesson plans contain an abundance of work sheets and activities suitable for students in all elementary grade levels, K-5. 90.03.08 The Four Corners Region of the United States Cultures, Ruins and Landmarks Easily adaptable for all grade levels, this unit centers on the states of Utah, Colorado, New Mexico, and Arizona. 
Ways of the Indian cultures and their ruins are discussed, along with the history of each state. The following landmarks are also discussed: Chaco Canyon National Monument, Mesa Verde, Canyon de Chelly National Monument, and Monument Valley. 91.01.01 Between Aztlan and Quivira: Europeans and Indians in the Southwestern United States Although written primarily for middle school Spanish students, middle and upper elementary students would profit by this interesting study. The unit centers on the people who first inhabited this land, prehistoric Indians and their descendants, and on the endurance of Indian culture through the successive waves of invasion. Concluding activities involve journal entries of comparisons and reactions from the unit. 91.04.01 Finding New Voices: Native American Poets Although many parts of this unit are too difficult for elementary students, the medicine wheel would be of interest to elementary children. An object from the earth such as a stone, a flower, a feather, etc. is put in the center. Students are instructed to write a phrase or a sentence about their relationship to the object and read it in class. The unit contains a lot of information about Native Americans. 96.03.01 Willie Sunday: A Critical Analysis of Factual Information in Film With a classroom puppet leading the way, this unit uses two films, Pocahontas and Little Red Riding Hood, to help young children approach film and literature with a critical eye. A more accurate picture of Pocahontas, of Native Americans in general, and of the nature of wolves is achieved through a variety of pupil-involving activities stressing drama. Designed for first grade, but elements could apply to any elementary grade. 96.03.06 Pocahontas: From Fiction to Fact: Using Disney's Film to Teach the True Story Designed for a fifth grade classroom, this unit relates closely to the study of United States history with primary emphasis on Pocahontas and the Powhatan. Uses film and written material to develop goals. Information clearly presented. Portions could be adapted to both higher and lower grades. Relates well to the study of United States history. 98.03.08 The Environmental Adaptation of the Native American Indian Written for grades 4-12, this unit can be adapted for grades K-5. The unit teaches how the environment was central to the Native Americans for all their needs and inspiration. The unit gives background information for the students and then detailed lesson plans on creating artifacts such as masks, wampum beads, Kachina Dolls, dioramas, etc. 78.02.06 Italians and Blacks in New Haven: The Establishment of Two Ethnic Communities This six- to eight-week unit was designed to introduce high school students to the history of blacks and Italians in New Haven. Students learn how and why separate institutions were formed, and will understand the relationship of each group to the larger community. Parts could be adapted into the middle and upper elementary curriculum, grades 3-5. 79.03.01 The Development of Westville The underlying purpose of this unit is to provide students with an awareness of the Westville section of New Haven, Connecticut. Although current information is provided, the focus of the unit is on the historic aspects of Westville. A great teacher resource, this unit is adaptable to most grade levels, specifically grades 2-5. 79.03.04 New Haven’s Hill Neighborhood This is an activity-filled unit that involves students in the historical process at a personal level.
Students are encouraged to become more acquainted with various locations in the Hill section of New Haven, Connecticut. This unit allows for the development and practice of map-reading skills. This unit can be adapted for grades 4 and 5. 80.02.04 New Haven: Its Ships and Its Trades, 1800-1920 Though designed for high school, this unit contains some material that might be adapted to an elementary grade study of New Haven. 81.01.03 A Method of Teaching Inner-City Youths to Produce Urban Literature The unit attempts to help high school students develop an historical appreciation for the city and New Haven in particular through the use of poetry and literature. Beautiful background information for the teacher about U.S. cities in general and New Haven in particular. Could be adapted for middle and upper elementary students, 3-5. 81.01.04 The City and the Family Written for high school students about the history of the city and New Haven in particular. Could be adapted for all elementary grade levels. The unit has an interesting activity - preparing an oral biography of the student's family. 84.06.09 The Life and Times of the West River 1776 - 1896: A Study of Early Industry in Westville The unit presents an interesting history of the West River System of Westville. The overall purpose is to develop in the student the ability to make observations and draw conclusions from available evidence. The lesson plans at the end of the unit are designed to develop these skills and can be adapted to the middle and upper elementary grades, 3-5. 84.06.10 Gateway to New Haven: The New Haven Harbor The geographical location of the New Haven Harbor and how it has affected the lives of the people both past and present is the main emphasis of this unit. The unit contains good resource material about areas surrounding the New Haven Harbor. The unit is easily adaptable for upper elementary students, grade 5. 85.05.04 Against the Tide: Three Who Made It! By examining the successes of New Haven natives, this unit provides students with a picture of African American life in New Haven, Connecticut during the 1930s, 1940s and 1950s. Constance Baker Motley, Adam Clayton Powell and Raymond St. Jacques are highlighted in this unit. Recommended as a resource for teachers in grades 4 and 5. 89.01.01 Integrating the Hispanic Youth Population of the Hill into the New Haven Community This unit presents an interesting picture of the New Haven Hispanics and their contributions to the community, the Hill area of New Haven. Although the unit designates one area of New Haven, concepts and activities surrounding the community can be used and adapted to include other communities of New Haven. The unit could be adapted for middle and upper elementary students, 3-5. 89.01.02 American Studies, "The Hill Community" The Hill area of New Haven is the main focus of this unit, and how it was selected as a Model Cities area. Emphasis is placed upon what a community is, how a community functions, and what causes it to grow or remain stagnant. Although designed for middle and high school students, parts of this unit could be adapted for upper elementary students, grade 5. 89.01.03 The Community and You: Learning Your Way Around Fair Haven The unit emphasizes historical development and change in the Fair Haven community. Topics include: the role of oystering, the influx of immigrants, and industrial growth. The unit contains background information for the teacher and can easily be adapted for middle and upper elementary students, grades 3-5. 
89.01.07 A Conceptual Model for Teaching Community Development History and geography comprise the main components of this unit while examining local communities. The student is taken step by step through the major aspects of their community. An interesting suggestion is taking students on a bus tour of their local community. Can be adapted for all elementary grade levels, K-5. 89.01.10 Communities in Transition Newhallville is the central focus for the study of community in this unit. It combines personal research and interviews to gain an understanding of the historical development of the community. Information could be used in developing lessons and activities for upper elementary students, grade 5. 89.01.11 Urban New Haven in the Making The unit provides information about the historical background, physical structure, organizations, businesses, and individual contributions of the Dixwell community. The unit contains important background information for the teacher about the Dixwell community, and could be used as a model for gathering information for upper elementary students about their own community. 89.01.12 Cultural Communities of New Haven This unit is designed primarily for grades 2-3, but can be adapted for all elementary grade levels, K-5. The unit covers several ethnic groups who contributed to the growth of New Haven. Various ethnic groups are studied along with activities for each month of the school calendar. 89.01.14 Newhallville: A Neighborhood of Changing Prosperity This unit explores the history of New Haven with emphasis placed on its effect on the Newhallville neighborhood. The lessons center on the carriage industry and on the firearms industry to discuss the prosperity of the city and neighborhood. Could be adapted for upper elementary students, grades 4-5. 91.01.06 Minority Families Moved from the South to the North for Economic Growth This unit can be adapted to include upper elementary students (grade 5). The highlight of the unit is the Winchester Repeating Arms Factory, which traces back to the arms factory started in 1798 by Eli Whitney, who manufactured more effective firearms. The lesson plans contain many activities, using work sheets, writing, and oral reporting. 91.02.08 New Haven Families: Artifacts and Attitudes, 1770s to 1890s The unit, adaptable for any elementary grade level, gives students direct experience with their city's cultural past. The unit requires about 12 weeks and encourages frequent trips to study community resources listed in the text. 92.03.03 A New Look at Old New Haven Connecticut Colonial history is discussed in this unit. The concentration of information relates to New Haven history. This unit is easily adapted to the Social Studies curricula of grades 3, 4, and 5. 92.03.05 My City, My Home: Good Neighborhoods are Essential to a Better City This unit discusses the urban life of New Haven, Connecticut. It provides information pertinent to the study of communities, citizenship and urban improvement. It is recommended as a resource for grades K-3 and adaptable for use in grades 4-5. 92.03.08 Examining the African American Role in New Haven History: Pride in the Past—Hope for the Future This is an excellent unit written for students in grades 3-5. The unit discusses African American history as it applies to New Haven history. It can be used as a resource or as a complete unit in the upper elementary grades. 97.04.01 The City of New Haven In this unit, as students learn about New Haven and its history, they learn about their own family. Integrated approach.
Relates well to the social development curriculum. Recommended for grades 4-5. 80.06.01 The World War II Holocaust Though designed for older students, this unit on the Holocaust contains considerable information that could be adapted to help upper elementary students understand both the period and the nature of prejudice. Could be used when studying United States history. Recommended for grades 4-5. 81.03.05 Teenage Boys: Perspectives on the Adolescent Male's Development in an Urban Setting Although written for middle and high school teenagers, the unit gives excellent background information for upper elementary teachers (grade 5) about issues confronting adolescent boys in today's urban environment. The unit is divided into four sections: Identity and Self-Image; Moral Dilemmas; Adolescent Sexuality; and Teenage Fatherhood. 82.06.08 Impact of a Handicapped Child on the Family This unit focuses on disabled children and the effect they have on their families. Handicaps and disabilities are discussed at length. The life of a family with a disabled child is also discussed. This unit is a good resource for lessons on the handicapped and disabled and can be adapted to grades K-5. 84.05.03 Television and Teens This unit examines some of the viewing habits of Americans, and then suggests ways to help students cut back on their viewing time. In addition, the unit looks at ways in which students can utilize TV to their advantage. Teachers in middle and upper elementary grades, 3-5, can easily adapt the lessons and activities by using current TV shows, commercials, etc. 84.05.06 Identity: A Path to Self-Esteem This unit seeks to raise self-esteem in students. Many of the activities related to the lesson plans can easily be adapted into the elementary school’s social curriculum, "Project Charlie." 84.05.07 Anger, Aggression and Adolescents The goal of this unit is to help students be competent and assertive, not hostile and aggressive. Ideas are presented for helping students cope with their aggressive feelings with decision-making strategies. There are great ideas for middle/upper elementary students, 3-5. 84.05.08 A Middle School Orientation Program for Parents, Students, and Teachers This unit is specifically designed for the middle school student. However, activities listed for parents, such as helping students limit TV viewing or having dinner or breakfast together, can be adapted for any grade level, K-5. There are beautiful ideas for a parent orientation package at the beginning of the school term. 87.05.08 Culture in Relationship to the Mind Various themes discussed in this unit are: 1) Moral Values; 2) Family and School; 3) the Brain; 4) Intellect and Memory; 5) Giftedness; and 6) Technology and Art. Parts of this unit could be extracted for upper elementary classrooms, grade 5. The unit contains a great working vocabulary and many activities along with detailed lesson plans. 88.02.04 Ethnic Humor Ethnic humor is the focus of a study of ethnicity and its implications for how we view ourselves and how we view each other. This unit discusses topics such as diversity, stereotypes, and immigration trends. The activities included in this unit emphasize analysis, creative writing, critical thinking, and reading. Recommended as a resource for teachers in grades 4 and 5. 88.05.02 Talking with Kids about Sex and AIDS This unit discusses in detail the mental and physical health issues associated with adolescents suffering from AIDS and HIV.
Although this unit can be characterized as a health unit, it approaches the topic of AIDS and HIV from a social-awareness rather than a scientific perspective. Recommended for grades 3 through 5 for background information. 89.02.02 A Special Education Curriculum Unit Dealing with Death, Depression and Suicide Using Poetry This unit is written for children in grades 3-12 with the main emphasis being on depression and death. Children read poems to learn the signs of depression and suicide to help themselves identify feelings they may have. 89.05.09 Images of the American Family This unit covers all four marking periods of the school year with goals, objectives and activities to be covered during each one. The unit examines the American family during the 19th and 20th centuries. Activities include making a family tree, conducting interviews for oral reports, poetry writing, book reports, etc. Activities could easily be adapted for middle and upper elementary students. 90.02.03 We Are One This unit helps to build esteem by centering on problem solving for life skills, and building trust within individuals and in a group setting. The unit contains many activities that could be integrated into any elementary "Project Charlie" lesson. 90.04.01 Changing Images of the American Family in Literature and Media: 1945-1990 The unit makes use of television, literature, film and slides of paintings to reconstruct moments when the family has changed in our American culture. Not only does the unit show how families have changed, but also similarities. The unit is suitable for middle and upper elementary grades, 3-5. 90.04.02 Depicting Family Life: Changes and Modifications This unit is adaptable for upper elementary students. The years 1945 to 1990 are studied through political, scientific, and technological events. Novels are read reflecting the changes in family life as a result of these occurrences. The lesson plans contain an abundance of work sheets and activities related to the subject matter. 90.04.04 A Different Approach for a Special Child: Part Two Although written for the Spanish-speaking child, this unit can be adapted for any elementary grade level, K-5. The unit discusses why families move from one country to another looking for better living conditions, how they adapt parts of the new culture, and how families and countries go through changes over time. The lesson plans contain an abundance of activities centered on the subject matter. 90.04.08 The Changing Family: How Changes in the Family Reflect Social and Economic Changes in Society Students of this unit study the history of the American family from the time of the Native Americans to the present. They examine American society when its inhabitants were hunters and gatherers, then farmers, and lastly, wage earners in an industrial/urban society. The unit is adaptable for upper elementary students. 90.04.10 Highlights of Modern American Family Art and Literature This unit, adaptable for upper elementary students (grade 5), discusses Grassroots Artists and the Social Realists Movement. The unit teaches students that family structure can vary from region to region in different decades. 90.05.01 Cultural Diversity: the American Family - Past, Present, and Future This unit looks at the American histories of six different ethnic groups: Black, Hispanic, White, Japanese, Chinese, and Native Americans. The unit examines the changing American family through a series of short reading selections.
Activities suggested with lesson plans are easily adaptable for all elementary students. 92.01.05 Getting to Know Your Classmates with Special Needs This unit discusses handicapped children in the realm of family life and in the realm of the classroom. It can be a resource for the social development curriculum in grades 4 and 5. 96.03.03 Beauty is More Than Skin Deep: Examining the Positive and Negative Depiction of Physical Appearance in Children's Films This unit uses a variety of popular children's films to develop a more thoughtful approach toward defining beauty, as opposed to the stereotypes often presented in film. Uses multicultural literature to reinforce the basic theme. Elements could be used with most elementary students. Interdisciplinary approach. 97.02.12 American Children's Literature: A Bibliotherapeutic Approach This unit presents suggested readings and related activities for various grade levels, designed to help children deal with problems that occur in their lives. Interdisciplinary approach. Suitable for grades K-5. 97.04.02 Strategies for Teaching the Value of Diversity In this unit, students move from looking at themselves and their family to examining prejudice and discrimination. Integrated approach. Could be used to develop an understanding of diversity. Suitable for grades K-5. 97.04.05 Planning for Student Diversity When Teaching about Puberty This unit helps teachers take into account the diversity in pupil development and background when teaching about puberty. It is suitable for grades 4-5. 97.04.09 My Self - and No Other This unit allows students to examine themselves, discover their strong points, and recognize areas in need of improvement. Interdisciplinary approach. Suitable for grades K-5. 78.03.03 Prohibition as a Reform Designed for high school students, this unit could be adapted for middle and upper elementary students, grades 3-5. Contains excellent background information on the prohibition movement with special emphasis on colonial days. Beautiful lesson plans including games, movies, debates, drama, etc. 78.03.06 America's Wars, 1898-1945 Designed for 11th grade history students, parts could be adapted for upper elementary students, grade 5. The unit focuses on three wars between 1898 and 1945: the Spanish-American War, World War I, and World War II. Provides an interesting lesson on New Haven and the nation preparing for protection and production. 78.04.02 History of Connecticut This unit presents a beautiful picture of the history of Connecticut. Areas of study include the geology of Connecticut, the Indians of Connecticut and the people of Connecticut. A great resource for teachers, it can be easily adapted for all elementary grade levels, K-5. 78.04.03 The Architecture of New England and the Southern Colonies as it Reflects the Changes in Colonial Life Provides excellent resource material for the elementary teacher that can be used when planning a unit about the history of New England, particularly Connecticut. 79.03.03 Discover Eli Whitney The biography of Eli Whitney is the main thrust of this unit. The inventor and his inventions are connected to many local events, places and resources. The recommended grade levels for this unit are grades 4 and 5. 81.02.06 The Industrial Revolution Good overview of the Industrial Revolution. Material could be adapted for middle/upper elementary students, grades 3-5. A "study set" consisting of maps, drawings and other resources is available at the Institute office to supplement this unit.
81.02.07 World War II: A Comparative Study through Literature Written for 11th and 12th grade English students. Although too advanced for elementary students, the narrative presents excellent background information on World War II. 81.04.02 Reading and Writing About the Civil War Parts of the unit could easily be adapted for middle and upper elementary grade students (grades 3-5) with the assistance of the library media specialist for help with the research. One part suggests that students choose a person from the Civil War era and write a biography. Students write five questions they feel comfortable with regarding their Civil War person. Each student answers the questions and is interviewed by classmates. 82.02.03 My Place in Time The main thrust of this unit is the biography of Benjamin Franklin. The information contained in this unit offers the necessary background for an educated discussion of Benjamin Franklin in a grade 4 or a grade 5 classroom. Techniques for writing biographies are also mentioned. 82.03.01 When Military Necessity Overrides Constitutional Guarantees: The Treatment of Japanese-Americans during World War II This unit is designed to evaluate the history of Japan and the events that shaped Japanese-American relations during and after World War II. Specific information leading up to Pearl Harbor is included. The main thrusts of this unit are research skills and developing reports. Recommended for grades 4 and 5. 87.02.01 Willa Cather's My Antonia: "The Happiness and the Curse" Could be adapted for advanced upper elementary students (grade 5) in connection with a unit on immigration and the westward expansion. Emphasizes conditions of life on the rural Nebraska prairie land of late nineteenth century America. Contains excellent ideas for creative writing. 87.02.08 War Beyond Romance: The Red Badge of Courage and Other Considerations The unit seeks to investigate the nature of war, and man in war. It has a two-part structure. The first part involves research concerning the motivations behind warfare. The second part uses Stephen Crane's The Red Badge of Courage in a critical examination of its main ideas and characters. Could be adapted for advanced upper elementary students, grade 5. 87.03.06 The Humor of America This unit traces, chronologically, what Americans in particular have found humorous, so students may be aware of what was considered funny historically and how it relates to what we consider funny now. Although most of the unit is based upon literature that is too advanced, parts can be extracted (e.g., Davy Crockett tales) for all grade levels. 87.04.01 At Home: The Ties that Bind Has great material depicting life at home during World War II. Middle to upper elementary students (grades 3-5) would profit by the unit. Although the literature is too advanced for elementary students, portions could be read to the students, helping to give an overall picture of this time frame. 87.04.02 Reading Laura Ingalls Wilder: A Journey of Discovery This unit can be adapted for both lower and upper elementary children, grades K-5. The unit gives a clear picture of pioneer life on the prairie. Lesson plans include role-playing, writing, and discussion questions. 87.04.06 Three Literary Views of the American Frontier Children learn about early frontier living through three novels: Shane, Caddie Woodlawn, and The Trees. Parts can be adapted for middle and upper elementary students, grades 3-5.
Role-playing is used where students are divided into two groups, homesteaders and cattlemen. Each group works together to formulate a list of reasons why they have a right to the land. 87.06.05 A Stitch in Time Has great hands-on lessons for the introduction of weaving. Could be adapted for middle/upper elementary students, 3-5, along with units about Native Americans or early pioneers. Take-home lessons could involve parents for lower elementary students, 1-2. 89.05.02 Stepping Into a Colonial Family, a Primary Student's Perspective of Colonial Crafts, Customs and Traditions This unit, designed for students in grades 2-5, focuses on colonial education, the colonial homestead, colonial craftsmen and their crafts. The unit offers teachers a series of classroom activities that reflect the colonial life style. 90.05.04 American Life: A Comparison of Colonial Life to Today's Life Although designed for grade one, the abundance of beautiful activities can easily be adapted for all elementary grade levels. The unit focuses on two areas to use as comparisons - the Pilgrims’ life, and life in colonial Connecticut. The unit is basically a visual and hands-on unit. 90.07.03 Mankind's Fascination with Flight This interesting unit can be adapted for all elementary grade levels. The unit focuses on the Wright brothers, and their significant contributions to society. Objectives include the early history of flight; the science and inspiration of flight; and the achievements of the Wright brothers. 91.01.04 The Victorian Age: A People in Search of Themselves as Seen through Their Architecture Upper elementary students would profit by this unit and find it interesting. Of special interest to students would be the slide presentation of buildings showing Victorian architecture and the accompanying walking tour to see the actual buildings. Students can learn to develop an understanding of how architecture reflects the hopes and dreams of the people who lived in the area. 91.02.05 Buildings of America This unit is adaptable for middle and upper elementary grade students. Students study buildings from different regions and different historical periods. They learn how climate, natural resources and culture have affected the design of these buildings. 91.02.09 Changing Images of Childhood in America: Colonial, Federal and Modern England This unit uses art activities to study colonial New Haven and its surrounding areas. Children participate in a wide variety of craft projects related to colonial days. The unit would be of interest to all elementary students, K-5. 92.02.01 The Indians’ Discovery of Columbus This unit discusses Christopher Columbus and his relationship to Native Americans. It is a good resource for all grades, but is adaptable to grades K–5. 92.02.02 French Creole in Louisiana: An American Tale A source of information on the Creole culture in Louisiana, this unit offers information for teachers in grades K–5. The lessons and activities are good resources. 92.02.07 Windows of Time Since 1492 This unit provides information regarding the earliest historical events of the modern-day United States. The discussion of Christopher Columbus provides ample background information for all teachers. The unit is recommended for grades three through five. 92.04.01 Researching Columbus: Encounters and Exchanges Christopher Columbus is the central topic of this unit. Students in grades K–5 can use the information and lessons. The bibliography is useful.
96.03.07 The Eye Behind the Camera: The Voice Behind the Story; Images of Slavery - Fact, Fiction, and Myth Designed for grades 6-8, this unit examines Hollywood's treatment of the slavery era. Both stereotypical and more positive films are presented. Literature and storytelling also play an important role. Activities could be adapted to elementary grades 3-5, with perhaps some going lower. 96.03.04 Using Film as a Springboard to Explore the Truth about AIDS Designed for a third grade class, this interdisciplinary unit uses film and written material to present a more accurate picture of HIV and AIDS. Activities aim to dispel many of the stereotypes regarding this disease and its victims. It is easily suitable for elementary grades 3-5. 96.03.10 Representation in Art and Film: Identity and Stereotype Designed for a 7-12 grade art class, this unit uses a variety of media and approaches, including visual art, literature, video, writing, and discussion to help students develop a deepening awareness of identity and stereotype. Though it might be a stretch to adapt most material to an elementary class, the Aunt Jemima and "Crooklyn" sections offer possibilities. 96.03.11 Mosaic America on Film: Fact Versus Fiction Through the use of film, this unit examines how minorities, ethnic groups, and history are portrayed in film. Encourages use of computers. Designed for grades 7-12. A creative elementary teacher could adapt some material, especially that which refers to Pocahontas. 97.02.02 Literature of the Civil War This unit attempts to better understand the people and events of the Civil War through the use of children's literature. Integrated approach. It is suitable for grades 4-5. 97.02.03 World War II as Seen through Children's Literature This unit uses literature and film to help students understand World War II. Integrated approach. It is suitable for grade 5. 97.02.04 Using Children's Literature to Understand Working Women and Children during World War II This unit uses literature and film to help students understand World War II. There is emphasis on women and children. Interdisciplinary approach. Suitable for grades K-5. 97.03.04 Keeping the Home Fires: The Lives of Western Women This unit focuses on Western Expansion in the United States with an emphasis on the role played by women. Interdisciplinary approach. It is suitable for grades 3-5. 97.03.05 All American Girl This unit uses the American Girl series, Dear America, and American Diaries to help students understand the role of women during the Colonial, Revolutionary, and Civil War periods. It is suitable for grades 4-5. 97.03.08 Common Threads Weaved Together the Lives of Civil War Women Through an integrated approach, this unit illuminates the strength and dedication of women during the Civil War. It is suitable for grades 3-5. 97.03.09 American Girls through Times and Trial This unit uses books depicting three different women living in three different historical settings to allow students to learn both about the historical period and the role and struggles of women during that particular period. Integrated approach. The unit is suitable for grades 3-5. 98.01.02 Women Portrayed in Film This unit, written for students in grades 1-3, can easily be adapted for grades 5 and 6 as well. The unit is a study about women and their contribution to history, and about using film and books to help students learn productively. Three American women are used in the study: Harriet Tubman, Annie Oakley, and Wilma Rudolph.
The unit provides great background information on the three women and excellent activities. 98.01.04 Mr. Friday and Friends: A Prospectus of Early Pioneer Life through Film Mr. Friday is a puppet that assists in bringing information pertinent to stories that the children are viewing in class. He guides the children in their critical analysis of historical facts in film by asking relevant questions about the stories. Children in all grade levels K-5 would enjoy gathering historical information about early pioneer life. Films such as Daniel Boone, Davy Crockett, and Johnny Appleseed are used in the unit. 98.01.10 Teaching Music through its Relationship to History with the Use of Film, Video and the Specious Present This unit can be used in grades K-5. The unit uses the concept of the "specious present" to bind all disciplines together, helping students gain an understanding of different time periods and of how our personal views and opinions can either obscure or provide insight into understanding one's past. 98.04.04 The Great Depression and the New Deal There is excellent background information about the Great Depression and President Herbert Hoover. Although written for grade 8, it can be adapted for grade 5. Includes a field trip to the New York Stock Exchange and the United Nations. 98.04.05 Who Gets to Invent and How Do Inventors Change Our Lives Interdisciplinary in approach, this unit was written for grades 2-6. The unit emphasizes the positive and negative effects of innovations. The study encourages students to become problem-solvers and come up with solutions for everyday dilemmas.
http://www.yale.edu/ynhti/curriculum/referencelists/elementary/SocialStudies.html
13
14
Climate can be described as the sum of weather. While the weather is quite variable, the trend over a longer period, the climate, is more stable. However, the climate still changes over time scales of decades to millennia. Ice ages are the prototypical example of a long time scale change. Natural climate changes are due to both the internal dynamics of the climate system and changes in external climate forcings. Historical temperature records and proxy records of climate variables show fluctuations on all time scales. Some of these changes can be plausibly attributed to external forcing factors, such as the cool temperatures of the Maunder minimum of the 1600s, which may have been caused by a decrease in solar irradiation.1 The eruption of Mt. Pinatubo in 1991 caused a cooling of the Earth's surface due to the injection of light-reflecting aerosol particles into the stratosphere.2

Natural and human systems have adapted to the prevailing amount of sunshine, wind, and rain. While these systems can adapt to small changes in climate, adaptation is more difficult or even impossible if the change in climate is too rapid or too large. This is the driving concern over anthropogenic, or human-induced, climate change. If climate changes are too rapid, then many natural systems will not be able to adapt and will be damaged, and societies will need to incur the costs of adapting to a changed climate.

The physics of climate change

Weather and climate are driven by the absorption of solar radiation and the subsequent redistribution of that energy through radiative, advective, and hydrological processes. The Earth's surface temperature is primarily determined by the balance between the absorption and emission of radiation. A change in this radiative balance is termed a radiative forcing, which is measured in watts per square meter. Naturally occurring greenhouse gases, primarily water vapor and carbon dioxide, trap thermal radiation from the Earth's surface, and this effect keeps the surface warmer than it would be otherwise. Human activities are causing an enhancement of the natural greenhouse effect by substantially increasing the atmospheric concentrations of greenhouse gases. For example, the atmospheric concentration of carbon dioxide has already risen by about 30% from its pre-industrial level, and methane concentrations are more than double their pre-industrial value. Further substantial increases in carbon dioxide concentrations are inevitable, at least in the near term, as world-wide use of fossil fuels continues to increase.

Figure IV.1.1: Carbon dioxide concentrations from atmospheric observations (Mauna Loa) and ice-core records (Siple ice core).

The relationships between the atmospheric concentration of greenhouse gases and their radiative effects are well quantified. Forcing from the long-lived greenhouse gases (carbon dioxide, methane, and nitrous oxide) is presently about 2.5 watts per square meter (W/m2). Of this total, 1.6 W/m2 is from carbon dioxide alone. The total anthropogenic forcing is uncertain, particularly because the magnitude of the negative forcing associated with sulfate aerosols is unclear. While changes in solar irradiance may have affected global climate in the last century, a 0.15% change in irradiance, the order of the estimated changes, results in only a 0.36 W/m2 forcing. There are still significant uncertainties in moving from greenhouse gas emissions, particularly those of carbon dioxide, to atmospheric concentrations.
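As a rough check on the forcing figures quoted above, the minimal sketch below uses the commonly quoted simplified logarithmic expression for CO2 forcing, dF = alpha * ln(C/C0). The specific values assumed here - alpha of roughly 6.3 W/m2 (the coefficient used in the early IPCC assessments; later work revised it downward to about 5.35), a solar constant of about 1367 W/m2, and a planetary albedo of about 0.3 - are illustrative assumptions, not values taken from this chapter.

    import math

    def co2_forcing(c_ppm, c0_ppm=280.0, alpha=6.3):
        # Simplified logarithmic CO2 forcing, dF = alpha * ln(C/C0), in W/m2.
        # With alpha ~ 6.3 W/m2, a ~30% rise above the pre-industrial level
        # reproduces the ~1.6 W/m2 quoted above.
        return alpha * math.log(c_ppm / c0_ppm)

    def solar_forcing(fractional_change, s0=1367.0, albedo=0.3):
        # Forcing from a fractional change in solar irradiance, spread over the
        # sphere (divide by 4) and reduced by the reflected fraction (albedo).
        return fractional_change * s0 / 4.0 * (1.0 - albedo)

    print(round(co2_forcing(280.0 * 1.30), 2))  # ~1.65 W/m2 for a 30% CO2 rise
    print(round(solar_forcing(0.0015), 2))      # ~0.36 W/m2 for a 0.15% irradiance change

Under these assumptions, both quoted numbers (about 1.6 W/m2 from carbon dioxide and about 0.36 W/m2 from a 0.15% solar change) are recovered to within rounding.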
However, the largest difficulty is moving from changes in the concentration of greenhouse gases to changes in climate. The largest source of uncertainty lies in determining the magnitude of climate feedbacks. For example, an increase in trapped radiation and the associated warming is expected to increase the level of water vapor in the atmosphere, which would tend to further enhance the greenhouse effect (a positive feedback). An example of a negative feedback would be an increase in clouds that reflected more sunlight back into space. The actual feedback from changes in clouds is uncertain since they also act to trap outgoing infrared radiation. It is the balance between positive and negative feedbacks that will determine the net effect of increased greenhouse gases. While climate models agree that the net effect will be warming, the amount of warming (and other changes) given by various models is different. The current central warming estimate, developed by the Intergovernmental Panel on Climate Change (IPCC), is a global average temperature rise of two degrees centigrade by the year 2100.3

The primary tools for study of the climate system, particularly in the context of the anthropogenic greenhouse effect, are complex computer models known as General Circulation Models, or GCMs. Since these global models must operate on a relatively large spatial scale, small-scale phenomena such as the formation and properties of clouds, rainfall, and turbulent processes cannot be explicitly represented and must be parameterized. Improving parameterizations of smaller-scale phenomena is one of the primary goals of climate modelers. The accuracy of GCMs in simulating present climatic conditions has steadily improved, although there are still significant errors for some features, such as cloud cover. While this lends increasing confidence in the results, models can only be rigorously tested against recent climatic conditions. Their accuracy in simulating future climate can never be fully tested.

An impression of the range in estimates is given by combining the IPCC low emissions scenario with a low climate sensitivity and combining the high emissions scenario with a high climate sensitivity. The resulting range is one to three and a half degrees centigrade global average temperature increase by 2100.3 Note that the temperature change in a specific region will often be quite different from the global average. Estimating the impacts of climate change requires information on a regional level. The spatial resolution of general circulation models is too coarse to provide such information. While techniques have been developed to produce higher resolution climate change data, reliable regional projections of future climate change are still not possible. In addition, the spatial pattern of climate changes is different for different GCMs, although a number of broad similarities are present.

CO2 Emissions Factors
  Oil                   ≈ 20 Tg C/EJ
  Natural Gas           ≈ 15 Tg C/EJ
  Coal                  ≈ 26 Tg C/EJ
  Non-commercial fuel   ≈ 21 Tg C/EJ
  1 Gt Carbon           = 3.664 Gt CO2
Table 1: Carbon content of various fossil fuels in terms of carbon emitted per unit of energy.

The principal anthropogenic greenhouse gas is carbon dioxide, with a substantial contribution from methane. While chlorofluorocarbons (CFCs) are potent greenhouse gases, the stratospheric ozone depletion that they cause partially cancels out their direct radiative effect.
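The arithmetic implied by Table 1 is straightforward; the sketch below converts an assumed fuel mix into carbon and then CO2 emissions using the table's approximate factors. The energy quantities in the example are hypothetical, chosen only to illustrate the conversion.

    # Emission factors from Table 1, in Tg of carbon per EJ of fuel energy.
    EMISSION_FACTOR_TG_C_PER_EJ = {
        "oil": 20.0,
        "natural_gas": 15.0,
        "coal": 26.0,
        "non_commercial": 21.0,
    }
    GT_CO2_PER_GT_C = 3.664  # ratio of the molecular weight of CO2 to that of carbon

    def emissions_gt_c(energy_ej_by_fuel):
        # Total carbon emissions in Gt C (1 Gt = 1000 Tg) for a given energy mix.
        tg_c = sum(EMISSION_FACTOR_TG_C_PER_EJ[fuel] * ej
                   for fuel, ej in energy_ej_by_fuel.items())
        return tg_c / 1000.0

    mix = {"oil": 150.0, "natural_gas": 80.0, "coal": 90.0}  # EJ per year, hypothetical
    carbon = emissions_gt_c(mix)
    print(round(carbon, 2), "Gt C =", round(carbon * GT_CO2_PER_GT_C, 2), "Gt CO2")

The same factors make clear why fuel choice matters: each exajoule supplied by coal releases roughly three-quarters more carbon than the same energy supplied by natural gas.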
Sulfate aerosols, formed in the atmosphere from sulfur dioxide produced primarily by the use of coal, are a crucial contributor to climate change, although in the opposite direction, since they act to cool the Earth's surface.

Carbon dioxide represents about 60% of the positive anthropogenic radiative forcing. The largest source of carbon dioxide is the use of fossil fuels. The carbon content of different fossil fuels varies, with natural gas having the lowest carbon content and coal the highest (Table 1). Typical generation efficiencies also vary greatly. Combined-cycle natural gas generating plants are the preferred generating mode today due to their high efficiency, while traditional uses of non-commercial fuels such as wood are generally quite inefficient. Land-use changes, primarily tropical deforestation, contribute about a fifth of current carbon dioxide emissions; however, the importance of deforestation, relative to fossil fuel emissions, is expected to continue to diminish in the future.

Unlike other greenhouse gases, carbon dioxide is not destroyed in the atmosphere but instead cycles between the atmosphere, terrestrial biosphere, and oceans. Because of this complicated cycle, carbon dioxide does not have a single atmospheric lifetime. Only about half the carbon dioxide emitted today remains in the atmosphere, some portion of which will remain there for centuries. The rest is absorbed in either the ocean or the terrestrial biosphere. It is carbon dioxide sequestered by ancient forests that we are burning today as fossil fuels.

There are numerous sources of anthropogenic methane emissions, including fossil fuel use (natural gas) and production, ruminant animals, waste disposal, and rice agriculture. While methane is a potent greenhouse gas, with twenty-one times the radiative effect of carbon dioxide per molecule, it has a much shorter atmospheric lifetime. Methane is oxidized in the atmosphere in roughly a decade, while carbon dioxide is essentially indestructible and stays in the atmosphere until absorbed by the oceans or terrestrial biosphere. Therefore carbon dioxide is the greenhouse gas of primary concern, due to its long atmospheric lifetime and the large quantity that is released into the atmosphere.

Sulfate aerosols are light-colored particles, part of the haze seen in industrialized areas. They act in the opposite sense to greenhouse gases, reflecting light and tending to cool the Earth's surface. The best estimate of the cooling effect of sulfate aerosols is that they have offset a bit more than a third of the global-average warming due to anthropogenic greenhouse gases released to date. However, the radiative effects of aerosols are quite uncertain, and their cooling effect could be significantly different from the current "best guess" value. Sulfate aerosols are also expected to cause an indirect effect by acting as condensation nuclei and thus causing clouds to be denser and more reflective. The magnitude of the indirect effect is very uncertain. Even though these aerosols, along with those caused by biomass burning, tend to cool the atmosphere, they cannot exactly cancel the warming caused by greenhouse gases even if the magnitudes of the two effects were equal. While greenhouse gases such as carbon dioxide and methane are fairly evenly distributed in the atmosphere, aerosols are concentrated near their sources. Thus sulfate aerosol cooling effects are concentrated near heavily industrialized regions, particularly the eastern United States and western Europe.
While the climate effect of these compounds might be considered beneficial, when sulfur dioxide and sulfate aerosols are eventually removed from the atmosphere they acidify the soil, which damages natural and agricultural systems.

Energy use is the primary source of greenhouse gases. The main factors that drive energy use are economic growth and population growth. Contrary to most popular conceptions, it is economic growth, not population growth, that is the primary driver of greenhouse gas emissions, both historically and in model projections. Population growth is, however, still a significant contributor to increased future greenhouse gas emissions. Emissions of most greenhouse gases are expected to continue to increase in the future. Greenhouse gas emissions from developing and developed countries are currently comparable in magnitude. However, most of the growth in greenhouse gas emissions will occur in developing countries, where economic growth rates are much larger than those in industrialized regions. If developing countries follow the energy-intensive development path followed by the presently industrialized countries, then atmospheric concentrations of greenhouse gases will increase dramatically.

One of the largest uncertainties in future greenhouse emissions is the effect of technological change. If renewable energy sources become cost-effective, if there are major gains in the efficiency of energy utilization, or if there is a large increase in the use of nuclear energy (fission or fusion), then emissions of greenhouse gases may be substantially restrained. Central projections of greenhouse gas emissions result in a doubling of atmospheric concentrations of carbon dioxide before the end of the next century. This is likely to result in an "average rate of warming [which] would probably be greater than any seen in the last 10,000 years."3 However, if favorable technological developments are assumed to occur, then carbon dioxide emissions could stabilize or even fall by the end of the next century. An ongoing debate has been over the rate at which such developments would occur, either with or without policy intervention.

The importance of the climate change issue stems from the impact of changes in climate on human and natural systems. The two most well-known consequences of climate change are an increase in global-mean temperature and a rise in sea level. The primary components of sea-level rise are thermal expansion and the melting of small (continental) glaciers. However, other changes in climate could be as important as, or even more important than, changes in the mean climate state. These include changes in precipitation and climate variability, particularly changes in the intensity and/or severity of extreme events such as droughts, floods, or tropical storms. The extent to which any of these latter changes might occur is still quite uncertain.

The level of damages from climate change is also uncertain. Although changes in climate will be beneficial in some areas, net costs are expected from a change in climate due to increases in the concentrations of greenhouse gases. Coastal regions are heavily populated and are particularly sensitive to climate changes, especially sea-level rise. Agricultural activities are very sensitive to climate. However, damage estimates for this sector are uncertain since the extent to which rising levels of atmospheric carbon dioxide will enhance crop growth is not clear.
Other sectors that will be affected by climate change include forestry, air quality, water resources, human health, and energy use.

The anticipated rate of anthropogenic climate change is greater than the natural rate at which climate has changed in the past. This has led to considerable concern that the rate of anthropogenic climate change will be greater than the rate at which some natural systems are able to adapt. If the climate changes to a state that is outside the range of tolerance of an individual species, then that species must migrate to a suitable area. Plant species migrate very slowly, and the migration of many animal and plant species is severely limited by human development. Many ecosystems, such as wetlands, are particularly vulnerable to a change in climate or a rise in sea level.

Responding to climate change

There are two principal responses to climate change: mitigation and adaptation. The rate at which carbon dioxide, methane, and other greenhouse gases are released into the atmosphere can be decreased. This is termed mitigation and would reduce the magnitude of future climate change. Emission reductions can occur through reduced energy demand, use of more efficient energy production technologies, and/or use of energy sources that produce no net greenhouse gas emissions. Carbon-free energy sources include renewable energy, geothermal energy, and nuclear energy. Reductions in energy use can be obtained by direct policy measures, such as a carbon tax, and by improvements in the efficiency of energy-using and energy-producing equipment. Modern energy production technologies, such as combined-cycle power plants, are significantly more efficient than older power plants. However, in the long term, stabilization of carbon dioxide concentrations will require the development of non-fossil energy supplies, that is, renewable and/or nuclear energy.

The second choice is adaptation, that is, adjusting to the effects of future climate change. While richer countries can build sea walls or shift agricultural production, these actions will take away resources from other activities. Poorer countries are more vulnerable to climate change since they are generally more dependent on natural resources and they lack the economic resources with which to cope with damages.

Efforts to reduce anthropogenic effects on climate are strongly affected by the inertia present in climate and human systems. The effect of increasing concentrations of greenhouse gases is strongly moderated by the thermal inertia of the oceans. On the human side, the systems by which we generate and use energy, along with society in general, also change slowly.

Responding to climate change also requires an informed public. Studies of public "environmental values" have found widespread support for environmental protection and even a general willingness to forgo economic gains in favor of the environment.4 Public understanding of the climate change issue is, however, flawed. The connection between energy use and climate change is practically nonexistent in the public mind. In addition, a majority of people confuse climate change with pollution and ozone depletion, often expressing the view that climate change can be abated through pollution controls.

Figure IV.1.2: Global average surface temperature. The solid line is a running mean. The rise in temperature over the past century is clear; however, the cause of this rise is less certain.
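For readers who want to reproduce the kind of smoothing shown in Figure IV.1.2, a minimal sketch of a centered running mean is given below. The anomaly values are made-up placeholders, not the actual temperature record.

    def running_mean(values, window=5):
        # Centered running mean; the averaging window is shortened near the endpoints.
        half = window // 2
        smoothed = []
        for i in range(len(values)):
            lo, hi = max(0, i - half), min(len(values), i + half + 1)
            smoothed.append(sum(values[lo:hi]) / (hi - lo))
        return smoothed

    # Illustrative annual temperature anomalies in degrees centigrade (placeholder data).
    anomalies = [-0.30, -0.25, -0.10, -0.20, 0.00, -0.05, 0.10, 0.05, 0.20, 0.30]
    print([round(x, 2) for x in running_mean(anomalies)])

Smoothing of this kind suppresses year-to-year weather noise so that the longer-term climate trend is easier to see, which is exactly the distinction between weather and climate drawn at the start of this section.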
Much of the public debate over climate change has confused the issue of detection of climate change with the inevitability of climate change. The consensus of the scientific community is clear: increasing emissions of greenhouse gases will inevitably cause the levels of greenhouse gases in the Earth's atmosphere to rise, which will change the Earth's climate. While the inevitability of climate change is generally accepted, the magnitude and nature of these changes are still uncertain.

While anthropogenic climate change has not been unambiguously detected, the evidence for a human effect on climate is mounting. The surface temperature of the Earth has risen by about half a degree centigrade over the last century. This rate of change is similar in magnitude to natural climate changes but also well within the range of the possible effects of the historical rise in greenhouse gas concentrations.5 Unambiguously detecting climate change through the record of global mean temperature is not possible at this point since, while we may detect warming, we cannot uniquely attribute a general warming to anthropogenic influence. Fingerprint detection is a more promising technique. This scheme involves using GCMs to identify distinctive spatial patterns caused by anthropogenic influence. A number of studies using this technique have recently found evidence of human influence on climate. These studies, plus other changes in weather and temperature patterns, led Working Group I of the IPCC to conclude that, while there are still many uncertainties, "the balance of evidence suggests that there is a discernible human influence on global climate."3

The degree to which the climate will change in the future is still uncertain. However, climate change may lead to significant damage to both human and natural systems. Estimates of the cost of reducing greenhouse gas emissions are also uncertain, and a definitive cost-benefit calculation that compares climate change damages to mitigation costs is not possible at this time. Stripped of the baggage associated with political and economic interests, much of the debate over climate change boils down to differences in values. Technological change and a general increase in wealth through economic growth will leave the world better able to deal with this issue in the future. However, some, perhaps small, amount of damage will accrue in the interim. A risk-averse viewpoint argues for mitigation of greenhouse gas emissions as soon as possible to avoid the possibility of harm. An opposite view advocates waiting until we are more certain about climate change effects (and more able to effect changes). This part of the debate will be better informed, but not solved, by improved science.

Further information, references, and much quantitative material can be found in the IPCC reports. The most recent report is in three volumes. The first volume6 reports on climate science; the second on impacts, adaptations, and mitigation; and the third on economic and social dimensions. The policy-maker summaries are available on the internet.3 The 1990 IPCC report also contains much useful information and discussion, some not repeated in later reports.7

- J. Lean, A. Skumanich, and O. White, "Estimating the sun's radiative output during the Maunder minimum," GRL 19(15), 1591-1594 (1992).
- J. Hansen, A. Lacis, R. Ruedy, M. Sato, and H. Wilson, "How sensitive is the world's climate?" Natl. Geog. Res. Explor. 9, 143-158 (1993).
- The Intergovernmental Panel on Climate Change (IPCC), Climate Change 1995: IPCC Second Assessment Report, Working Group I 1995 Summary for Policymakers (1995). Available at: http://www.unep.ch/ipcc/ipcc95.html
- W. Kempton, J. S. Boster, and J. A. Hartley, Environmental Values in American Culture (Cambridge: MIT Press, 1995).
- Often one sees graphs of historical temperature change overlaid with greenhouse gas emissions (or concentrations). While illustrative, it is inappropriate to simply correlate the two time series as an attribution measure. The climate system is non-linear, one of the most important effects being ocean thermal dynamics. The appropriate procedure is to run the greenhouse gas forcing through a model that accounts for these effects, obtaining a temperature time series that accounts for thermal lag effects.
- Climate Change 1995: The Science of Climate Change, Contribution of Working Group I to the Second Assessment Report of the Intergovernmental Panel on Climate Change, J.T. Houghton et al., Eds. (Cambridge University Press, Cambridge, UK, 1995).
- Climate Change: The IPCC Scientific Assessment, J.T. Houghton et al., Eds. (Cambridge University Press, Cambridge, UK, 1990).

Carbon dioxide concentrations: Neftel, A., H. Friedli, E. Moor, H. Lotscher, H. Oeschger, U. Siegenthaler, and B. Stauffer. 1994. "Historical CO2 record from the Siple Station ice core." pp. 11-14. In Trends '93: A Compendium of Data on Global Change, edited by T. A. Boden, D. P. Kaiser, R. J. Sepanski, and F. W. Stoss. Oak Ridge, Tennessee: Carbon Dioxide Information Analysis Center. Keeling, C. D., and T. P. Whorf. 1991. "Atmospheric CO2 records from sites in the SIO air sampling network." pp. 16-26. In Trends '93: A Compendium of Data on Global Change, edited by T. A. Boden, D. P. Kaiser, R. J. Sepanski, and F. W. Stoss. Oak Ridge, Tennessee: Carbon Dioxide Information Analysis Center. Updated 1995 at CDIAC: http://cdiac.esd.ornl.gov/ftp/

Historical temperature change: Jones, P. D., T. M. L. Wigley, and K. R. Briffa. 1994. "Global and hemispheric temperature anomalies - land and sea instrumental records." pp. 603-608. In Trends '93: A Compendium of Data on Global Change, edited by T. A. Boden, D. P. Kaiser, R. J. Sepanski, and F. W. Stoss. Oak Ridge, Tennessee: Carbon Dioxide Information Analysis Center. (updated)
http://www.aps.org/policy/reports/popa-reports/energy/climate.cfm
13
44
Franklin Roosevelt has for years been given credit for shepherding the nation through the Great Depression. For decades, FDR's New Deal policies were believed by many economists to have prevented a total collapse of the United States economy until the markets and industry could recover as they geared up production to supply the US and its allies with war material to fight the Axis powers. More recently, a close examination of FDR's programs has revealed that the opposite may be true. In FDR's Folly, Jim Powell shows that many of Roosevelt's New Deal programs did far more to hurt the economy and delay recovery than they did to help. Higher taxes, strict regulation, and centralized economic planning all combined to keep unemployment high and the economy stagnant for the entire decade of the 1930s.

The recession that became the Great Depression had its roots in the Federal Reserve's monetary policy. In 1928 and 1929, the Fed increased interest rates and caused a severe monetary contraction. Powell estimates that the money supply actually decreased by one-third (chapter 2). Powell also notes that many states had banking laws that prohibited banks from having branches. This prevented diversification and made banks weaker. Approximately 10,000 US banks failed between 1929 and 1933 [http://bit.ly/9UToiQ]. In Canada, where there were no such restrictions on bank branches, there were no bank failures. Most of the failed banks were rural single-office banks (chapter 4).

Herbert Hoover, who was president when the stock market crashed in 1929, took aggressive steps to save the economy, many of which are similar to steps taken by Congress and President Obama over the last few years (chapter 3). Hoover encouraged industry to keep wages high in spite of falling sales and demand. He tried to put people back to work with public works projects and signed the Davis-Bacon Act, which required local governments to pay union wages, which helped keep labor costs artificially high. He also backed farm subsidies, which led to overproduction and low prices. Further, Hoover signed the Smoot-Hawley Tariff in 1930, which raised tariffs on imported goods. Many other countries retaliated by raising tariffs on American goods. The Revenue Act of 1932 also raised taxes. Other Hoover policies included restrictions on short sales of stocks and revisions to bankruptcy law that limited the rights of creditors. Hoover's responses, and Roosevelt's adoption of many of his policies, turned a recession into the Great Depression.

When FDR became president in 1933, he initiated a series of policies called the New Deal. Many of FDR's policies took Hoover's government actions and expanded them. One of FDR's first actions was to declare a series of bank holidays, in which banks were ordered to close. Powell argues that the bank holidays actually contributed to the bank runs. People knew that the banks were going to be closed. They also knew that, in the days before credit cards, they needed cash. Their response was to rush to the bank and withdraw money while it was open… and solvent.

Another early action of FDR was to sign the Glass-Steagall Banking Act of 1933. This law (repealed in 1999) created a wall between investment banks and commercial (lending) banks. It also established the FDIC to insure bank deposits. The separation of banks prevented diversification and required many of the strongest banks in the country to split into smaller, weaker parts.
Deposit insurance eased the minds of depositors, but Powell argues that it also made people more risk-tolerant. If people knew that their funds were insured by the government, they would pay less attention to what the banks were doing with their deposits. In turn, it encouraged banks to take more risks with their depositors' money because they knew that it was guaranteed by the government.
FDR also raised taxes dramatically. The Revenue Act of 1936 increased federal taxes on income, dividends and estates, while limiting deductions. The Undistributed Profits Tax of 1936 raised corporate tax rates and limited deductions for business losses (chapter 6). By the end of FDR's tenure, the top marginal rates for both personal and corporate taxes were in excess of 90% (chapter 18). These high tax rates discouraged corporate investment and further slowed economic growth. At the same time that federal tax rates were rising, local and state taxes were also increasing. Many states saw dramatic increases in their income taxes for individuals and businesses as well as higher sales taxes. Congress actually passed legislation in 1939 that would have reversed the trend toward higher taxes. The Revenue Act of 1939 would have lowered corporate taxes to a flat 18% and eliminated the Undistributed Profits Tax. However, FDR refused to sign the bill into law.
Another massive New Deal tax increase was passed into law as the Social Security Act of 1935 (chapter 13). As originally passed, Social Security established a payroll tax that would go into an Old Age Retirement Account. Benefits for retirees would begin after January 1, 1942 (although this was later changed to 1940). This was meant to allow funds to build up to pay out benefits, although the program quickly became pay-as-you-go after FDR and Congress depleted the trust fund in 1940. The passage of Social Security slowed the recovery for several reasons. First, it obviously reduced the purchasing power of employees, since the tax decreased their take-home pay at a time when wages were already depressed. Since employers were also taxed, it made hiring more expensive and discouraged businesses from adding employees. Finally, it removed money from circulation that could have been spent on goods and services, because the tax receipts went into a trust fund for several years before they were paid out to retirees. The ultimate legacy of Social Security is an unfunded entitlement that is expected to go bankrupt by 2037 [bit.ly/9Dq9jO].
http://www.examiner.com/article/fdr-s-folly-and-what-obama-should-learn-from-it-part-i?cid=rss
The Residence Act of 1790, officially titled An Act for establishing the temporary and permanent seat of the Government of the United States, is the United States federal law that settled the question of locating the capital of the United States, selecting a site along the Potomac River. The federal government was located in New York City at the time the bill was passed and had previously been located in Philadelphia, Annapolis and several other settlements. Congress passed the Residence Act as part of a compromise brokered between James Madison, Thomas Jefferson and Alexander Hamilton. Madison and Jefferson favored a southerly site for the capital on the Potomac River, but they lacked a majority to pass the measure through Congress. Meanwhile, Hamilton was pushing for Congress to pass the Assumption Bill, to allow the Federal government to assume debts accumulated by the states during the American Revolutionary War. With the compromise, Hamilton was able to muster support from the New York State delegates for the Potomac site, while four delegates (all from districts bordering the Potomac) switched from opposition to support for the Assumption Bill. The Residence Act gave authority to President George Washington to select an exact site for the capital, along the Potomac, and set a deadline of December 1800 for the capital to be ready. In the meantime, Philadelphia was chosen as a temporary capital. Washington had authority to appoint three commissioners and oversee the construction of Federal buildings in Washington, D.C., something to which he gave much personal attention. Thomas Jefferson was a key adviser to Washington, and helped organize a design competition to solicit designs for the United States Capitol and the President's house. The construction of the Capitol building was fraught with problems, including insufficient funds, and was only partially complete in November 1800 when Congress convened for the first time in the Capitol. During the American Revolutionary War, the Second Continental Congress convened in Philadelphia at the Pennsylvania State House. On account of British military actions, the Continental Congress was forced to relocate to Baltimore, Lancaster, Pennsylvania, and York, Pennsylvania for a period of time before returning to Philadelphia. Upon gaining independence, the Congress of the Confederation was formed, and convened in Philadelphia until June 1783, when a mob of angry soldiers converged upon Independence Hall, demanding payment for their service during the American Revolutionary War. Congress requested that John Dickinson, the governor of Pennsylvania, call up the militia to defend Congress from attacks by the protesters. In what became known as the Pennsylvania Mutiny of 1783, Dickinson sympathized with the protesters and refused to remove them from Philadelphia. As a result, Congress was forced to flee to Princeton, New Jersey on June 21, 1783, and met in Annapolis and Trenton, before ending up in New York City. The United States Congress was established upon ratification of the United States Constitution in 1789, and New York City initially remained home to Congress. Locating the capital The question of where to locate the capital was raised in 1783. 
Numerous locations were offered by the states to serve as the nation's capital, including: Kingston, New York; Nottingham Township in New Jersey; Annapolis; Williamsburg, Virginia; Wilmington, Delaware; Reading, Pennsylvania; Germantown, Pennsylvania; Lancaster, Pennsylvania; New York City; Philadelphia; and Princeton; among others. The Southern states refused to accept a capital located in the North, and vice versa. Another suggestion was for there to be two capitals. Congress approved a plan in 1783 for a capital on the Potomac, near Georgetown, in Maryland, and another capital on the Delaware River; this plan was rescinded the following year. The issue of locating the capital was put on hold for several years, until the Constitutional Convention was held in 1787, to draft the United States Constitution. The Constitution granted power to Congress over a federal district, with Article I, Section 8 of the Constitution stating: To exercise exclusive Legislation in all Cases whatsoever, over such District (not exceeding ten Miles square) as may, by Cession of particular States, and the Acceptance of Congress, become the Seat of the Government of the United States, and to exercise like Authority over all Places purchased by the Consent of the Legislature of the State in which the Same shall be, for the Erection of Forts, Magazines, Arsenals, dock-Yards, and other needful buildings.
The debate heated up in 1789 when Congress convened. Two sites were favored by members of Congress: one site on the Potomac River near Georgetown; and another site on the Susquehanna River near Wright's Ferry (now Columbia, Pennsylvania). The Susquehanna River site was approved by the House in September 1789, while the Senate bill specified a site on the Delaware River near Germantown, Pennsylvania. Congress did not reach an agreement at the time.
The issue of locating the capital resurfaced in the summer of 1790. At the same time, Secretary of the Treasury Alexander Hamilton was pushing for Congress to pass a financial plan. A key provision of Hamilton's plan involved the Federal government assuming states' debts incurred during the American Revolutionary War. Northern states had accumulated a huge amount of debt during the war, amounting to $21.5 million, and wanted the federal government to assume their burden. The Southern states, whose citizens would effectively be forced to pay a portion of this debt if the Federal Government assumed it, were disinclined to accept this proposal. Some states, including Virginia, had paid almost half of their debts, and felt that their taxpayers should not be assessed again to bail out the less provident, and further argued that the plan went beyond the scope of the new constitutional government. James Madison, then a representative from Virginia, led a group of legislators from the South in blocking the provision and preventing the plan from gaining approval.
When Jefferson ran into Hamilton at President Washington's residence in New York City in late June 1790, Jefferson offered to host a dinner to bring Madison and Hamilton together. Subsequently, a compromise was reached, in which the northern delegates would agree to the southerly Potomac River site, and in return, the federal government would assume debts accumulated by the states during the American Revolutionary War. Jefferson wrote a letter to James Monroe explaining the compromise.
The 1st United States Congress agreed to the compromise, which narrowly passed as the Residence Act. Jefferson was able to get the Virginia delegates to support the bill, with the debt provisions, while Hamilton convinced the New York delegates to agree to the Potomac site for the capital. The bill was approved by the Senate by a vote of 14 to 12 on July 1, 1790, and by the House of Representatives by a vote of 31 to 29 on July 9, 1790. Washington signed the Act into law one week later on July 16. The Assumption Bill narrowly passed the Senate on July 16, 1790, followed by passage in the House on July 26.
The Residence Act specified that the capital be located along the Potomac River between the Eastern Branch (the Anacostia River) and the Connogochegue (near Williamsport and Hagerstown, Maryland), and encompass an area of no more than "ten miles square" (i.e., 10 miles (16 km) on a side, for an area of 100 square miles (259 km²)). The Act limited the land that the commissioners could acquire for federal use to the Maryland side of the Potomac River. The Act gave President George Washington the authority to decide the exact location and hire a surveyor. The President was required to have suitable buildings ready for Congress and other government offices by the first Monday in December 1800, and the Act specified that the federal government would provide financing for all public buildings. The Act specified that the laws of the state from which the area was ceded would apply in the federal district, meaning that Maryland laws applied on the eastern side of the Potomac and Virginia laws on the western side of the district, until the government officially took up residence. Upon assuming control of the federal district in 1800, Congress would have full authority over local matters within the District of Columbia.
In order to garner enough votes to pass the Assumption Bill, Hamilton also needed votes from the Pennsylvania delegates. This led to the decision to designate Philadelphia as the temporary capital city of the United States federal government for a period of ten years, until the permanent capital was ready. Congress reconvened in Philadelphia on December 6, 1790, at Congress Hall. Some hoped that the plan to establish the capital on the Potomac would not materialize, and that the capital would remain permanently in Philadelphia. However, George Washington quickly got the ball rolling and, along with Jefferson, personally oversaw the process as plans were developed and implemented. While plans for the permanent capital were being developed, Pennsylvania delegates continued their efforts to undermine the plan, including allocating funds for federal buildings and a house for the President in Philadelphia.
Although the legislation did not specify an exact location, it was assumed that Georgetown would be the capital. Washington began scouting out the territory to the southeast of Georgetown, near the Anacostia River (Eastern Branch). Some of the property owners told the President that they were willing to sell land for the capital. Washington also looked at other sites along the Potomac. He decided that a few sites should be surveyed to provide specific details about the land and its ownership. Washington returned to Philadelphia in late November 1790 to meet with Thomas Jefferson to discuss the implementation of the Residence Act.
At this time, the decision had been reached to locate the capital at or adjacent to Georgetown, which was a short distance below the fall line and the farthest inland point for navigation. In January 1791, the President proceeded to appoint, in accordance with the Residence Act, a three-member commission, consisting of Daniel Carroll, Thomas Johnson and David Stuart, to oversee the surveying of the federal district, and appointed Andrew Ellicott as surveyor. Washington informed Congress of the site selection on January 24, and suggested that Congress amend the Act to allow the capital to encompass areas to the south of the Eastern Branch, including Alexandria, Virginia. Congress agreed to the President's suggested change. However, consistent with language in the original Act, the amendment specifically prohibited the "erection of the public buildings otherwise than on the Maryland side of the river Potomac". Pierre (Peter) Charles L'Enfant began working on a city plan for the capital in early spring 1791. A design competition was held to solicit designs for the United States Capitol and the White House. Architect James Hoban was selected to design the White House, while no satisfactory drawings were submitted for the Capitol. A late submission by William Thornton was selected for the Capitol, though his plans were amateur in many respects. Stephen Hallet was hired to oversee construction, which got underway in September 1793. Hallet proceeded to make alterations to the design, against the wishes of Washington and Jefferson, and was subsequently dismissed. George Hadfield was hired in October 1795 as superintendent of construction, but resigned three years later in May 1798, due to dissatisfaction with Thornton's plan and quality of work done thus far. The original intention of the Residence Act was to use proceeds from selling lots in Washington, D.C. to cover costs of constructing federal buildings in the capital. However, few were interested in purchasing lots. A shortage of funds further contributed to the delays and problems in building the Capitol and other federal buildings in Washington, D.C. The Senate wing was completed in 1800, while the House wing was completed in 1811. However, the House of Representatives moved into the House wing in 1807. Though the building was incomplete, the Capitol held its first session of United States Congress on November 17, 1800. The legislature was moved to Washington prematurely, at the urging of President John Adams in hopes of securing enough Southern votes to be re-elected for a second term as president. Residents of the Virginia portion of the District (Alexandria County) successfully petitioned Congress to retrocede their portion of the federal capital to Virginia. This happened on July 9, 1846. Alexandria County is now Arlington County and a portion of the City of Alexandria. - Ellis, Joseph J., (2000) Founding Brothers, Vintage Books, New York, NY, p. 73 - Reps 1965, pp. 240–242 - Crew 1892, p. 66 - See List of capitals in the United States for a complete accounting. - Allen 2001, p. 4 - Constitution of the United States, United States Senate, retrieved 2008-12-12 - Ellis 2002, pp. 48–52 - Residence Act, Library of Congress, retrieved 2008-12-12 - An ACT for establishing the Temporary and Permanent Seat of the Government of the United States, Library of Congress, retrieved 2008-12-12 - Elkins 1995, p. 160 - Miller 2003, p. 251 - The Senate Moves to Philadelphia, United States Senate, retrieved 2008-12-12 - Bowling 2000, pp. 
3–4 - Elkins 1995, p. 169 - Elkins 1995, p. 174 - Statutes At Large, 1st Congress, Session III, Chapter 18, pp. 214-215, March 3, 1791. - Hazelton 1903, p. 4 - Allen 2001, p. 8 - Allen 2001, pp. 13–15 - Allen 2001, p. 19 - Frary 1969, pp. 34–35 - Frary 1969, pp. 44–45 - Bowling 2005, p. 58 - Carter II, Edward C. (1971-1972), "Benjamin Henry Latrobe and the Growth and Development of Washington, 1798-1818", Records of the Columbia Historical Society: 139 - Allen, William C. (2001), History of the United States Capitol - A Chronicle of Design, Construction, and Politics, Government Printing Office, ISBN 0-16-050830-4 - Berg, Scott W. (2007), Grand Avenues: The Story of the French Visionary who Designed Washington, D.C., Pantheon Books, ISBN 0-375-42280-3 - Bowling, Kenneth R. (1988), Creating the Federal City, 1774-1800: Potomac Fever, American Institute of Architects Press, ISBN 1-55835-011-X - Bowling, Kenneth R. (2000), "The Federal Government and the Republican Court Move to Philadelphia, November 1790 - March 1791", Neither Separate Nor Equal: Congress in the 1790s, Ohio University Press, ISBN 0-8214-1327-9 - Bowling, Kenneth R. (2005), Establishing Congress: The Removal to Washington, D.C., and the Election of 1800, Ohio University Press, ISBN 0-8214-1619-7 - Crew, Harvey W.; William Bensing Webb, John Wooldridge (1892), Centennial History of the City of Washington, D. C., Dayton, Ohio: United Brethren Publishing House - Elkins, Stanley M.; Eric L. McKitrick (1995), The Age of Federalism: The Early American Republic, 1788-1800, Oxford University Press - Ellis, Joseph J. (2002), Founding Brothers: The Revolutionary Generation, Vintage, ISBN 0-375-70524-4 - Frary, Ihna Thayer (1969), They Built the Capitol, Ayer Publishing, ISBN 0-8369-5089-5 - Hazelton, George C. (1903), The National Capitol: Its Architecture, Art, and History, J.F. Taylor - Miller, John (2003), Alexander Hamilton and the Growth of the New Nation, Transaction Publishers, ISBN 0-7658-0551-0 - Reps, John William (1965), "Planning the National Capital", The Making of Urban America, Princeton University Press, ISBN 0-691-00618-0
http://en.wikipedia.org/wiki/Residence_Act
In the sixteenth century Spain, as we have seen, had thrust up into the North the two outposts of Florida and New Mexico. In time foreign intrusion made it necessary to occupy the intervening region called Texas, which embraced a goodly slice of what is now Louisiana. While Spain was busy farther south, other nations were encroaching on her northern claims. By 1670 England had planted strong centers of colonization all the way from Jamaica to New England, and had erected trading posts on Hudson Bay. French traders from Canada, meanwhile, had been pushing up the St. Lawrence to the Great Lakes and branching north and south through the wilderness. At the same time French and English buccaneers from the West Indies were marauding the Florida settlements and the coast towns of Mexico. English, French, and Spanish territorial claims and frontier settlements clashed. The lines of competition, imperial and commercial, were drawing tighter with every passing year. On the Atlantic coast the Anglo-Spanish frontiers clashed with resounding echo from the very moment of the founding of Charleston (1670), just across from the Spanish outpost Santa Elena, on Port Royal Sound. If Plymouth Rock and Hudson Bay were too remote to have a direct influence on Spanish claims, nevertheless their indirect influence — through the acceleration they gave to French activities — was to be potent. France's opportunity, indeed, seemed golden. And it was in the West. In Europe France was rapidly taking the position of supremacy which had been Spain's; and New France promised to become not only a valuable source of revenue through the fur trade — if the wide beaver lands "beyond" could be secured — but also the point of control over the Strait of Anian for which French explorers as well as Spanish sought. The French had heard also of a great river flowing through the continent; they hoped to discover that river and thus control the best trade route to China. When Joliet and Marquette descended the Mississippi to the Arkansas in 1673 and returned to publish their news in Quebec, some of their hearers at least believed that the river had been found. Chief of these was Robert Cavelier de la Salle, a recent arrival in Canada. La Salle hurried to France and laid before the King a plan to extend the fur trade to the Illinois country and explore the Mississippi, which rose in Asia, to its mouth. Four years later, having erected posts in Illinois, La Salle landed at the mouth of the Mississippi and claimed the territory along its course for France. The discovery that the river emptied into the Mexican Gulf put a new idea into La Salle's fertile brain. He made another journey to France and proposed to plant a colony at the mouth of the Mississippi, and thus to secure the river highway for France and establish a vantage point for the control of the Gulf and for descent upon the Spanish mines of northern Mexico. In the summer of 1684 he sailed from France with his colony; and toward the end of the year he landed on the Texas coast at Matagorda Bay. It was because of faulty maps, perhaps, that he had missed the mouth of the Mississippi. One of his four ships had been captured by Spaniards en route and another was wrecked on entering the bay. Beaujeu, the naval commander, who had quarrelled with La Salle from the first, turned his vessel about and returned to France, carrying away some of the soldiers and a large quantity of much needed supplies.
Tonty, La Salle's lieutenant in the Illinois country, who was to meet him at the mouth of the Mississippi with men and provisions, found no trace of him there and, after vain waiting, returned to the Illinois post. Indian attacks and an epidemic worked havoc among the settlers, and La Salle moved his colony to a better site on the Garcitas River near the head of Lavaca Bay.1 He set out from this point in search of the Mississippi, which he believed to be near, expecting to meet with Tonty. While he was exploring the eastern waters of Matagorda Bay, the last of his ships was wrecked. La Salle then started overland, northeastward. He reached the Nasoni towns north of the present Nacogdoches in northeastern Texas, some eighty miles from the Red River. Illness, and the desertion of some of his men, forced him to retrace his steps. He found his colony, a mere handful now, facing starvation. Though worn with hardships and fatigue, La Salle resolved on the effort to bring help from the Illinois posts. This would seem a hopeless undertaking; for he had not found the Mississippi, by which he had previously descended from the Illinois country, and he had no idea of the distances he must travel across an unknown wilderness. He set out nevertheless with a few companions, including his brother, the Abbé Jean Cavelier, and his nephew Moranget. He crossed the Colorado near the present Columbus and, keeping on northward, forded the Brazos just above Navasota. Here he was treacherously slain by some of his men,2 who had already killed Moranget. The survivors of La Salle's party continued northeastward. Some deserted in the Indian towns. The others, including La Salle's brother, crossed the Red River near Texarkana and the intervening country to the mouth of the Arkansas, ascended to Tonty's post on the Illinois, and returned to Canada. They did not inform Tonty of La Salle's death, nor of the perilous condition of the little colony on the Gulf. Except for two or three men and some children, who were taken by the Indians — nine persons in all — the whole colony perished. When the mishaps attending La Salle's venture are reviewed — including a former attempt to poison him, the capture of one of his ships by the Spaniards, the desertion of Beaujeu, his assassination and the suppression of the news of it from the faithful Tonty who might have succored the colony — it is difficult not to suspect that his efforts were beset with subtle treachery from the beginning. If the news of La Salle's expedition caused a sensation in Spain, it roused the greatest alarm along the whole northern Spanish frontier in the New World, from Chihuahua to Cuba. The West Indies were no longer solely Spanish. The progress of the century had brought English, French, and Dutch to the lesser islands neglected by Spain. English settlers now occupied the Bermudas and several other islands. English arms had taken Jamaica and, in the peace concluded in 1670, Spain had recognized England's right to it and to the others she had colonized. The French West India Company had founded colonies on Guadeloupe, Martinique, and in the Windward Islands. The Dutch had trading stations on St. Eustatius, Tobago, and Curaçao; and English, French, and Dutch held posts in Guiana.
Raids from these bases on Spanish ports and treasure fleets were all too frequent and too costly, even if no recent buccaneer had rivaled the exploit of Piet Heyn of the Dutch West India Company who, in 1628, had chased the Vera Cruz fleet into Matanzas river, Cuba, and captured its cargo worth $15,000,000. That sons of a France growing swiftly in power had pushed south from Canada through the hinterland and planted themselves on the Gulf where they could coöperate with the lively pirates of the French Indies was news to stir Mexico, Florida, and the Spanish West Indies to a ferment. The Spanish authorities hastily sent out expeditions east and west by sea and land to discover and demolish La Salle's colony. Mariners from Vera Cruz returned to that harbor to report two wrecked French ships in Matagorda Bay and no sign of a colony. It was concluded that La Salle's expedition had been destroyed and that the French menace was over, for the time being at least. The outposts in New León and Coahuila, just south of the Río Grande, had been no less roused than the harbor towns of Havana and Vera Cruz. To the Spanish frontiersmen, dreaming even yet of a rich kingdom "beyond," the thought of a French colony expanding to bar their way was intolerable. Their spirit was embodied in the figure of Alonso de León. A frontiersman by birth and training, famed for a score of daring exploits as a border fighter, Alonso de León was well fitted for the task to which the needs of the time summoned him. Under orders from Mexico, in 1686, León set off from Monterey on the first of his expeditions in search of La Salle's colony, following the south bank of Río Grande to the Gulf of Mexico. Next year he reconnoitered the north bank. But not till his third expedition did he come in direct touch with the French peril. He was now governor of Coahuila, at Monclova. This time he encountered a tribe of Indians north of the Río Grande who were being ruled with all a chief's pomp by a Frenchman called by the Spaniards Jarri. It appears that Jarri was not one of La Salle's settlers, but an independent adventurer who had wandered thus early into southwestern Texas from the Illinois country or from Canada. He was promptly stripped of his feathers, of course, and sent to Mexico to be questioned by the Viceroy. The officials were now thoroughly frightened. A new expedition was immediately sent out under León, who took with him Father Damián Massanet, a Franciscan friar, the Frenchman Jarri, one hundred soldiers, and seven hundred mules and horses. León could at least promise the Indians a show of Spanish pomp and power. In March, 1689, León crossed the Río Grande and, bearing eastward, crossed the Nueces, Frío, San Antonio, and Guadalupe rivers. Late in April he came upon the site of La Salle's settlement. There stood five huts about a small wooden fort built of ship planking, with the date "1684" carved over the door. The ground was scattered with weapons, casks, broken furniture, and corpses. Among some Indians a few leagues away León found two of the colonists, one of whom had had a hand in La Salle's murder. He learned also that Tonty had erected a fort on a river inland, the Arkansas, or perhaps the Illinois. On the Colorado River León and Massanet had a conference with the chief of the Nabedache tribe, who had come from the Neches River to meet them. The chief promised to welcome missionaries at his village. León returned to make a report in which piety and business sense are eloquently combined.
"Certainly it is a pity," he admonished, "that people so rational, who plant crops and know that there is a God, should have no one to teach them the Gospel, especially when the province of Texas is so large and fertile and has so fine a climate." p216 A large and fertile country already menaced by the French did indeed call for missions. León was dispatched a fifth time with one hundred and ten soldiers to escort Massanet and his chosen helpers to the Nabedache towns of the Asinai (Texas) Indians, near the Neches river in eastern Texas. On the way they paused long enough for Father Massanet to set fire to La Salle's fort. As the Spaniards were approaching their objective from the Southwest, Tonty on a second journey to seek La Salle — in Illinois he had heard sinister reports through the Indians — reached the red River and sent an Indian courier to the Nabedache chief to request permission to make a settlement in his town. On being told of León's proximity Tonty retreated. The fleur-de‑lis receded before the banner of Castile. The Spanish flag was raised at the Nabedache village in May, 1690, before the eyes of the wondering natives, formal possession was taken and the mission of San Francisco was founded. The expedition now turned homeward, leaving three Franciscan friars and three soldiers to hold Spain's first outpost in Texas. Another expedition, after Alonso de León's death in 1691, set out from Monclova under Domingo Terán, a former governor of Sinaloa and p217Sonora, accompanied by Massanet to found more missions, on the Red River as well as the Neches. Terán returned without having accomplished anything, largely because of violent quarrels with Massanet, who opposed the planting of presidios beside the missions. Massanet remained with two friars and nine soldiers — the peppery padre protesting against the presence even of the nine. He soon learned that soldiers were sometimes needed. The Indians, roused by their leaders, turned against the missionaries and ordered them to depart. There was no force to resist the command. On October 25, 1693, Massanet applied the torch to the first Spain mission in Texas, even as he had earlier fired La Salle's French fort, and fled. Four soldiers deserted to the Indians. One of them, José Urrutia, after leading a career as a great Indian chief, returned to civilization, and became commander at San Antonio, where his descendants still live and are prominent. The other five, with the three friars, after three months of weary and hungry marching, during forty days of which they were lost, at last entered Coahuila. For the time being Texas was now abandoned by both contestants. But the French traders were only looking for a better opportunity and a more p218advantageous spot to continue the conflict, which, on their side, was directed against England as well as Spain. They had learned that English fur traders from South Carolina had already penetrated to the Creeks and to other tribes east of the Mississippi and they feared that England would seize the most of the river. The Spaniards also were disturbed by the English. They had been driven, in 1686, out of Port Royal and northern Georgia. Now they were alarmed by English fur-trading expeditions into Alabama and by the discovery that the Indians of Mobile Bay had moved north to trade with the English of Carolina. Thus, while France prepared to carry out La Salle's plan to colonize the Gulf coast, Spain with jealous eye watched the movements of both England and France. It was a three-cornered struggle. 
In 1697 the King of France, Louis XIV, commissioned Pierre Le Moyne d'Iberville, fighting trader, hero of the fur raids on Hudson Bay, and the most dashing military figure in New France, to found, on the Mexican Gulf, a colony to be named Louisiana. To forestall the French an expedition was immediately dispatched from Vera Cruz to Pensacola Bay, where in November, 1698, the post of San Carlos was erected and garrisoned. The move was none too soon. In January (1699) Iberville's fleet stood off the harbor and demanded admittance. The commander of San Carlos refused courteously but firmly. Iberville rewarded him for his compliments with others from the same mint, withdrew, sailed westward, and built a fort at Biloxi. But there were to be no battles, at present, between Spaniards and French for Louisiana. The fate of that territory was settled in Europe. The Spanish King, Charles II, died. He left no son; and, forced by the danger that a dismembering war for the succession would follow on his death, he bequeathed the crown to his grandnephew, the Duke of Anjou, grandson of Louis XIV and French in blood, sympathies, and education. The new King, Philip V, harkened readily enough to his French grandfather's suggestion that, in order to protect Spain's Gulf possessions from England, France must be allowed to colonize Louisiana. The Spanish War Council objected, and Philip let the matter drop, but the French settlement was quietly moved from Biloxi to Mobile Bay, nearer to the Spanish border. When in 1702 the War Council heard of it and protested, they were rebuked by Philip. Thus Spain, dominated by a Bourbon King, was forced to permit the occupation of Louisiana by France. Iberville's brother, the Sieur de Bienville, a brilliant and vigorous commander, was appointed in 1701 Governor of Louisiana. Bienville concentrated his energies on alliances with the tribes east and west of the Mississippi to prevent them from trafficking with the English and to divert the southern fur trade to the French posts. Bienville was succeeded in 1713 by Cadillac, founder of Detroit, who served for three years, but Bienville continued to be the life of the colony. By 1716 the Mississippi, Mobile, and Red rivers had been explored by Bienville's men, sometimes led by himself. And French traders from Canada and the Illinois had explored the Missouri for several hundred miles and had built posts southward from the Illinois to the lower Ohio. In 1718 Bienville founded New Orleans. France's hold was thus fastened upon Louisiana, and Spain's colonies round the Gulf were split in two. During the sixteen years of Bienville's activity, disturbing rumors had reached the Spanish border. To New Mexico came reports of Frenchmen trading with the Pawnees and of French voyageurs on the rivers to the northeast. Though in various Spanish expeditions from Santa Fé against Comanches and Apaches no French were seen, yet the fear of their approach increased. Similar rumors were heard on the Río Grande border. One not slow to take advantage of this general alarm was Father Hidalgo, a Franciscan who had been with Massanet at his mission in Texas. The intervening years had been spent by Hidalgo chiefly in founding and conducting missions in Coahuila, a work which had led the way for the secular powers and thus pushed the frontier of mining and ranching to the south bank of the Río Grande.
With heart burning for the welfare of his former ungrateful charges, he had made many earnest appeals to be allowed to return to Texas, but the superior of his Order would not sanction his plea.3 Hidalgo, with genuine political shrewdness, then resolved to turn the French menace to good account. If he could prove that Spain's territory of Texas was in imminent danger, he knew that missions would be founded without delay. So he wrote a letter in 1711 to the French priests of Louisiana, begging them to "pacify the tribes hostile to the Asinai nation, who were nearer to their settlements, thereby to give the greatest honor and glory to God." Just why pacification of the Louisiana tribes bordering on the Texas Indians would honor Heaven more than missionary labors in other parts of Louisiana he did not make clear, but it is plain enough that the first result of the pacification would be the establishment of French posts near or among the Asinai. This might or might not honor Heaven, but it would undoubtedly interest Spain. Father Hidalgo sent an Indian servant with the letter to the Asinai country, where it was confided to a Louisiana Indian who happened to be there. Getting no reply, a year later he sent out another letter, addressed to the Governor of Louisiana. Neither missive appears to have reached its address; but in May, 1713, the first letter — after having been handed about among Indians for two years — came into Governor Cadillac's possession. It interested Cadillac very much, for he had recently been instructed by Antoine Crozat, to whom Louis XIV had granted a monopoly of all the Louisiana commerce, to attempt to open trade with Mexico despite the rigorous Spanish commercial regulations. Cadillac had already tried by way of Vera Cruz and failed. Better luck might follow an attempt to open an inland route to the Río Grande border, where Spanish smugglers could be trusted to do the rest, for the stupid commercial systems of European governments at the time made habitual smugglers of all frontier dwellers in America. At any rate Hidalgo's letter inspired the Governor to make the effort, just as Hidalgo had probably surmised it would. Cadillac chose his cleverest agent. He sent Louis Juchereau de St. Denis,a explorer, fur trader, and commander at Biloxi, with instructions to visit Hidalgo, who, so Cadillac inferred from the letter, was among the Asinai, and to build a post on the Red River within easy access of their territory. St. Denis established the post of Natchitoches, put in the winter trading, and by spring was seeking Hidalgo in Texas. There he learned that the friar was on the Coahuila border, so on June 1, 1714, with three French companions and twenty-five Indians he set out on foot for the Río Grande. Strangely enough, two of his companions were the Talon brothers, survivors of the ill-fated La Salle expedition who had been ransomed from the Indians by León and Terán. On the 18th of July St. Denis reached Hidalgo's mission of San Juan, forty miles below Eagle Pass. Hidalgo had gone to Querétaro, but the other missionaries and Captain Ramón at the post received St. Denis hospitably, and Ramón wrote to Hidalgo that, in view of the French danger, "it looks to me as though God would be pleased that your Reverence would succeed in your desires." This letter reveals Father Hidalgo's finesse. While Ramón entertained St. Denis and dispatched messengers to the authorities in Mexico City asking what he should do with him, St.
Denis improved his time by winning the heart of Ramón's granddaughter, Manuela Sánchez, who later went with him to Natchitoches and there reigned for years as the Grand Dame of the post, becoming godmother, as the baptismal records show, of most of the children of the place. A new French menace had arisen. The Viceroy of Mexico hastily decided to found new missions in Texas and to protect them by strong garrisons. St. Denis, having by his marriage and his cleverness ingratiated himself with the Spaniards, was engaged at five hundred dollars to guide the Texas expedition, which was commanded by Captain Domingo Ramón, his wife's cousin. It looks more like a family affair than an international row. Meanwhile Hidalgo had given the Viceroy a satisfactory explanation of his random missives and had received permission to go to Texas with the expedition. The colony crossed the Río Grande in April, 1716. It consisted of sixty-five persons, including soldiers, nine friars, and six women, a thousand head of cattle, sheep, and goats, and the equipment for missions, farms, and garrison. At the head of the missionaries went two of Spain's most distinguished men in America, Father Espinosa, the well-known historian, and Father Margil, whose great services in the American wilds will probably result in his canonization by the Papal Court.b The Asinais welcomed the Spaniards and helped them to erect four missions and a garrison near the Neches and Angelina rivers. Shortly afterward a mission was built at Los Adaes (now Robeline), Louisiana, within fifteen miles of St. Denis's post of Natchitoches. The success of the French traders with the powerful tribes, the coming of John Law's colonists to Louisiana, and the need of a halfway base, inspired the Spanish authorities to send out another colony, to occupy a site at the beautiful San Pedro Springs, on the San Antonio River, which lay on the direct route between the Neches River and the settlement at San Juan, near Eagle Pass. Early in 1718 the new colony, numbering some sixty whites, with friars and Indian neophytes, founded San Antonio a few months before New Orleans was born. And Father Olivares began the San Antonio, or Alamo, Mission, which was later to become famous as the shrine of Texas liberty. Spain had at last occupied eastern Texas, but her hold was not yet undisturbed. In the following year France and Spain went to war over European questions, and the conflict was echoed in the American wilderness, all the way from Pensacola to Platte River. Pensacola was captured by the French, recaptured by the Spaniards, and taken again by Bienville. The French at Natchitoches descended upon Texas and the garrison retreated to San Antonio without striking a blow. A plan for conquering Coahuila and New Mexico was drawn up on paper in Louisiana, perhaps by St. Denis. Eight hundred Frenchmen and a large body of Indian allies were to march overland from Natchitoches, while a flotilla sailed along the Texas coast and ascended the Río Grande. It was La Salle's old plan in a new guise. St. Denis was made "commander of the River of Canes" (the Colorado), and two expeditions were sent in 1720 and 1721 to take possession of Matagorda Bay. Both of them failed.
In New Mexico the Governor had heard, before the war broke out, that the French were settling on Platte River and, on his recommendation, the Viceroy ordered that alliances be made with the tribes to the northeast, a colony planted at El Cuartelejo in Colorado, and a presidio established on the North Platte — that is, at some point in the present Nebraska or Wyoming. In August, 1720, an expedition from New Mexico penetrated to the North Platte but, not finding any signs of a French colony, turned back. On the South Platte, in Colorado, it was almost totally annihilated by Indians armed with French weapons. Apparently tribes from as far north as Wisconsin took part in this fray, a fact which indicates the scope and power of the early French trader's influence. The end of the war in Europe caused the Viceroy to abandon his plans for colonizing to the north of New Mexico. The treaty of peace restored Pensacola to Spain. Meanwhile affairs had moved apace on the Texas border. The Marquis of Aguayo, then Governor of Coahuila, undertook the reconquest, mainly at his own expense. Before the end of 1720 he had raised eight companies of cavalry, comprising over five hundred men and five thousand horses. It was the largest military expedition to enter the northern interior since the days of De Soto. Leaving Monclova in November, Aguayo strengthened San Antonio, and sent a garrison to occupy Matagorda Bay. Peace had now been declared, and at the Neches River Aguayo was met by St. Denis, who, swimming his horse across the stream for a parley, informed Aguayo that the war was over and agreed to permit an unrestricted occupation of the abandoned posts. Proceeding east, Aguayo reëstablished the six abandoned missions and the presidio of Dolores, and added a presidio at Los Adaes, facing Natchitoches. The expedition had been a success, but the poor horses paid a terrible price for the bloodless victory. The return journey to San Antonio, through a storm of sleet, was so severe that of his five thousand beasts only fifty were left alive when he arrived in January, 1722. Aguayo had fixed the hold of Spain on Texas. It was he who clinched the nails driven by León, Massanet, Hidalgo, and Ramón. There were now in Texas ten missions, four presidios, and four centers of settlement — Los Adaes, Nacogdoches, San Antonio, and La Bahía (Matagorda Bay). A governor was appointed and the capital of the province fixed at Los Adaes, now Robeline, Louisiana. Originally the name Texas had applied only to the country east of the Trinity River, but now the western boundary was fixed at the Medina River. It was to be moved half a century later to the Nueces. After much petty quarrelling with the French of Louisiana, the little Arroyo Hondo was made the eastern boundary, and thus for a century old Texas included a large strip of the present State of Louisiana.4 For twenty years after the Aguayo expedition, the Frenchman St. Denis, or "Big Legs," as the natives fondly called him, ruled the border tribes with paternal sway from his post at Natchitoches on the Red River. The relations of French and Spaniards on this border were generally amicable. Intermarriages and a mutual love of gayety made friendship a pleasanter and more natural condition for the Latin neighbors than strife. Indeed, when in June, 1744, the long career of the redoubtable St.
Denis came to a close, prominent among those assembled at Natchitoches to assist in the funeral honors were Governor Boneo and Father Vallejo, from Los Adaes, across the international boundary line. And yet, when, a few days later, Boneo reported the event to his Viceroy in Mexico, he did so in terms which meant, "St. Denis is dead, thank God; now we can breathe more easily." Spain's hold upon Texas was secure against France, but many a battle was yet to be fought for the territory with the ferocious Apaches and Comanches, and the incursions of French traders into the Spanish settlements continued to be a source of friction. The jealous trade policy of Spain only increased the eagerness of these traders to enter New Mexico, where the Pueblo Indians and the colonists alike were promising customers, if Spanish officers could be bribed or outwitted. For a long time the way from Louisiana was blocked by Apaches and Comanches, who were at war with the Louisiana tribes, and the river highways were unsafe. Canadians, however, conspicuous among them being La Vérendrye and his sons, descended from the north through the Mandan towns on the Missouri, reaching the borders of Colorado, and two brothers named Mallet succeeded in piercing the Indian barrier, entered New Mexico, and returned safely to Louisiana. The town of Gracia Real below Albuquerque where they lodged was given the nickname of "Canada." Later on French traders in numbers invaded New Mexico, some of whom were seized and sent to Mexico or to Spain and thrown into prison. Spanish troops were sent to guard the approaches to Chihuahua below El Paso; fears were felt for even distant California; and to keep the New Orleans traders from the Texas coast tribes, a presidio and a mission were established on the Louisiana border at the mouth of the Trinity River, near Galveston Bay. But the scene soon shifted. The Seven Years' War removed France from the American continent, left Louisiana in the hands of Spain, and brought Spain and England face to face along the Mississippi.
1 Not on the Lavaca River as stated by Parkman and Winsor. The author in 1914 determined that the site of the colony was five miles above the mouth of the Garcitas River on the ranch of Mr. Claude Keeran, in Victoria County, Texas.
2 Historians have supposed that this dastardly act was committed near the Trinity or the Neches, but evidence now available makes it clear that the spot was between the Brazos and Navasota rivers and near the present city of Navasota.
3 A myth has found currency in recent years to the effect that, despite this opposition, Hidalgo returned to Texas, dwelt for a time among the Asinais and there wrote his appeal to the French priests. But his writings preserved in the College of Querétaro in Mexico and examined by the author disprove the story.
4 In 1819, long after French rivalry had passed, the Sabine River was made the boundary. It is an error to suppose that it was originally the boundary between New France and New Spain.
Thayer's Note: Strictly speaking, this is correct, since Spanish and French officials in the area did agree at one point that the border would be at the Arroyo Hondo; but as the balance of power shifted to the French, the latter encroached on the area between the Hondo and the Sabine, which became the de facto border.
When Louisiana became a Spanish possession, the question was moot of course; but when in 1803 the territory passed out of Spanish hands to France and a few weeks afterwards to the United States, the exact course of the border mattered once more, and was finally confirmed at the Sabine by the Adams-Onís Treaty of 1819. For further details, see the Official Mexican Report of 1828 on the Texas-Louisiana boundary, LHQ 1:21-43.
a A much longer and more circumstantial account can be read in Gayarré's History of Louisiana; although somewhat romanced, it also adds some important details, telling us for example something that our Hispanophile Bolton omits: the first reaction of the Spanish authorities was to arrest St. Denis and drag him off to Mexico City. The ultimate source for St. Denis's travails is the contemporary diary of Pennicaut; that section of the diary is also online on this site (Grace King, New Orleans: The Place and the People, p21 ff.).
b No change since Bolton wrote, and since 1836 before him. To date (2007), he is still only the Venerable, as indirectly stated in the last sentence of the article Antonio Margil in the 1917 Catholic Encyclopedia.
http://penelope.uchicago.edu/Thayer/E/Gazetteer/Places/America/United_States/_Topics/history/_Texts/BOLSPB/8*.html
Money Lessons: A Guide to Financial-Literacy Resources The Internet offers teaching tools on economics for any grade level. Helping your students get a handle on finance doesn't have to take up a big chunk of your school year, especially if you have the right lessons at your fingertips. Whether you teach fourth-grade social studies, seventh-grade math, or high school economics, chances are you can begin online to plan a money-management class. From downloadable lesson plans that take up one class period to online games that teach key concepts, Edutopia has found the Web resources that can get you started. Here they are, broken down by grade level. On the Federal Reserve Bank of New York's Education page, you'll find the Econ Explorers Journal, a workbook designed to help elementary school math students understand money. One activity has students visit their local bank to collect savings account and checking account deposit slips and a car-loan application. Then they create characters who deposit and withdraw money, pay bills, and take out a car loan, all the while drawing pictures to illustrate what happens at each step. The lesson teaches the basics of bank accounts, interest rates, and budgets. The National Council on Economic Education (NCEE) offers the EconEdLink Web site, which includes dozens of free, downloadable lesson plans for K-12 students. For elementary school students, check out A Perfect Pet, which teaches kids about how people have to make choices when they have limited resources. It uses a downloadable story about a trip to the pet store, as well as a puzzle and other activities, to reinforce the point. The Jump$tart Coalition for Personal Financial Literacy has links to hundreds of other Web sites that offer lesson plans geared toward every grade level, especially elementary school grades. Check out, for example, this lesson on borrowing and lending from Take Charge America. It's a natural for social studies teachers covering a unit on the Revolutionary War. It teaches lending using a book about Benjamin Franklin and facts about how the American colonies borrowed money from France for the war effort. Middle school math teachers can teach basic financial literacy using a downloadable, four-lesson math-curriculum supplement called Money Math. Students use math concepts to learn about budgets, expenses, interest, and taxes. For instance, a lesson called WallpaperWoes asks students to figure out the area of a room that needs to be wallpapered and calculate how much it will cost. For lesson plans that prepare students to be entrepreneurs, check out those from NCEE, such as All in Business. You'll find activities that teach business-plan essentials, including how to figure out a business's costs and benefits. It also offers links to other Web sites that can supplement the lesson, such as the Real Planet, which uses funky characters to teach young teenagers about entrepreneurship. Other lessons on the topic for elementary school, middle school, and high school students can be found at These Kids Mean Busines$. The Web site Rich Kid, Smart Kid has a number of financial lesson plans for all grade levels, including some interactive games. Teach your students how to budget with a lesson from the Federal Reserve Bank of San Francisco that also prepares them for the financial realities of different jobs. 
Students learn about budgeting, saving, and investing, and they can play a game to help illustrate how one's education, job, and spending habits make a difference to their financial security. To prepare your students for the barrage of credit card offers they'll encounter, go to Consumer Jungle. The site requires registration, but the materials on it are free. Choose the section on credit, and you'll get a complete unit, including an outline, the standards it meets, vocabulary, and lessons that range from how to choose a card to the meaning of credit scores. You can also download Microsoft PowerPoint presentations. Create Your Own Are you game to write your own financial-literacy lesson plans? A number of sites offer materials and programs to help. The American Institute of Certified Public Accountants offers a financial literacy section on their website with advice for young children and teens about money, a video on budgeting for older kids, and activities for elementary school students. To teach high school students about personal bankruptcy, the U.S. Courts (the official Web site of the federal judiciary) has a program that gives teachers the option of bringing their classes into the courtroom. Or you can show your students how to create their own businesses with help from Bplans.com. Geared toward professionals, this site has detailed instructions for writing a business plan and tips on how to find funding. Montgomery Blair High School business teacher Kevin Murley tries to give his students lessons on investing, credit, and budgeting, along with lessons on entrepreneurship. The Silver Spring, Maryland, educator is a believer that beyond reading and writing, "the things that make for a successful life are your health and your financial health." And he wants to instill in his students "a concept of how important money is."
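For teachers who want a concrete version of the arithmetic behind the "Wallpaper Woes" lesson mentioned earlier, here is a minimal sketch. The room dimensions, roll coverage, and price per roll are made-up classroom values, not figures taken from the Money Math materials.

```python
# Hypothetical "Wallpaper Woes"-style calculation: wall area, rolls needed, total cost.
# All numbers below are made-up example values, not from the Money Math curriculum.
import math

def wallpaper_cost(length_ft, width_ft, height_ft, openings_sqft,
                   roll_coverage_sqft=28.0, price_per_roll=21.50):
    """Return (rolls_needed, total_cost) for papering the four walls of a room."""
    perimeter = 2 * (length_ft + width_ft)
    wall_area = perimeter * height_ft - openings_sqft  # subtract doors and windows
    rolls = math.ceil(wall_area / roll_coverage_sqft)  # wallpaper is sold in whole rolls
    return rolls, rolls * price_per_roll

if __name__ == "__main__":
    rolls, cost = wallpaper_cost(length_ft=12, width_ft=10, height_ft=8,
                                 openings_sqft=38)  # one door plus one window
    print(f"Rolls needed: {rolls}, total cost: ${cost:.2f}")
```

With these example numbers the wall area works out to 314 square feet, which rounds up to 12 rolls and a total of $258.00; swapping in a student's own room measurements is the natural classroom extension.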
http://www.edutopia.org/financial-literacy-resources